add tusd support

jqzhang 2021-06-17 15:26:12 +08:00
parent b0f1c0d42a
commit 07cdd7327f
28 changed files with 2869 additions and 115 deletions

5
vendor/github.com/sjqzhang/tusd/.gitignore generated vendored Normal file

@ -0,0 +1,5 @@
tusd/data
cover.out
data/
node_modules/
.DS_Store

51
vendor/github.com/sjqzhang/tusd/.travis.yml generated vendored Normal file

@ -0,0 +1,51 @@
language: go
go:
- 1.5
- 1.6
- 1.7
- 1.8
- 1.9
- "1.10"
- 1.11
sudo: required
addons:
apt:
packages:
- docker-ce
cache:
apt: true
directories:
- $HOME/.gimme
- "$HOME/google-cloud-sdk/"
env:
global:
- GO15VENDOREXPERIMENT=1
install:
- true
script:
- ./.scripts/test_all.sh
before_deploy:
- if [[ "$TRAVIS_TAG" != "" ]]; then ./.scripts/build_all.sh; fi
- if [ ! -d "$HOME/google-cloud-sdk/bin" ]; then rm -rf $HOME/google-cloud-sdk; curl https://sdk.cloud.google.com | bash; fi
- source /home/travis/google-cloud-sdk/path.bash.inc
- gcloud --quiet version
- gcloud --quiet components update
- gcloud --quiet components update kubectl
- curl https://raw.githubusercontent.com/kubernetes/helm/9476fcc10aaaf4bbcbeb6b7b9e62e9f03d312697/scripts/get | bash
deploy:
- provider: releases
api_key:
secure: dV3wr9ebEps3YrzIoqmkYc7fw0IECz7QLPRENPSxTJyd5TTYXGsnTS26cMe2LdGwYrXw0njt2GGovMyBZFTtxyYI3mMO4AZRwvZfx/yGzPWJBbVi6NjZVRg/bpyK+mQJ5BUlkPAYJmRpdc6qD+nvCGakBOxoByC5XDK+yM+bKFs=
file_glob: true
file: tusd_*.*
skip_cleanup: true
on:
tags: true
go: 1.8
repo: tus/tusd
- provider: script
script: .scripts/deploy_gcloud.sh
on:
branch: master
go: 1.8
repo: tus/tusd

35
vendor/github.com/sjqzhang/tusd/Dockerfile generated vendored Normal file

@ -0,0 +1,35 @@
FROM golang:1.7-alpine AS builder
# Copy in the git repo from the build context
COPY . /go/src/github.com/tus/tusd/
# Create app directory
WORKDIR /go/src/github.com/tus/tusd
RUN apk add --no-cache \
git \
&& go get -d -v ./... \
&& version="$(git tag -l --points-at HEAD)" \
&& commit=$(git log --format="%H" -n 1) \
&& GOOS=linux GOARCH=amd64 go build \
-ldflags="-X github.com/tus/tusd/cmd/tusd/cli.VersionName=${version} -X github.com/tus/tusd/cmd/tusd/cli.GitCommit=${commit} -X 'github.com/tus/tusd/cmd/tusd/cli.BuildDate=$(date --utc)'" \
-o "/go/bin/tusd" ./cmd/tusd/main.go \
&& rm -r /go/src/* \
&& apk del git
# start a new stage that copies in the binary built in the previous stage
FROM alpine:3.8
COPY --from=builder /go/bin/tusd /usr/local/bin/tusd
RUN apk add --no-cache ca-certificates \
&& addgroup -g 1000 tusd \
&& adduser -u 1000 -G tusd -s /bin/sh -D tusd \
&& mkdir -p /srv/tusd-hooks \
&& mkdir -p /srv/tusd-data \
&& chown tusd:tusd /srv/tusd-data
WORKDIR /srv/tusd-data
EXPOSE 1080
ENTRYPOINT ["tusd"]
CMD ["--hooks-dir","/srv/tusd-hooks"]

19
vendor/github.com/sjqzhang/tusd/LICENSE.txt generated vendored Normal file

@ -0,0 +1,19 @@
Copyright (c) 2013-2017 Transloadit Ltd and Contributors
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

272
vendor/github.com/sjqzhang/tusd/README.md generated vendored Normal file

@ -0,0 +1,272 @@
# tusd
<img alt="Tus logo" src="https://github.com/tus/tus.io/blob/master/assets/img/tus1.png?raw=true" width="30%" align="right" />
> **tus** is a protocol based on HTTP for *resumable file uploads*. Resumable
> means that an upload can be interrupted at any moment and can be resumed without
> re-uploading the previous data again. An interruption may happen willingly, if
> the user wants to pause, or by accident in case of a network issue or server
> outage.
tusd is the official reference implementation of the [tus resumable upload
protocol](http://www.tus.io/protocols/resumable-upload.html). The protocol
specifies a flexible method to upload files to remote servers using HTTP.
The special feature is the ability to pause and resume uploads at any
moment, allowing you to continue seamlessly after e.g. network interruptions.
It is capable of accepting uploads with arbitrary sizes and storing them locally
on disk, on Google Cloud Storage or on AWS S3 (or any other S3-compatible
storage system). Due to its modularization and extensibility, support for
nearly any other cloud provider could easily be added to tusd.
**Protocol version:** 1.0.0
## Getting started
### Download pre-built binaries (recommended)
You can download ready-to-use packages including binaries for OS X, Linux and
Windows in various formats of the
[latest release](https://github.com/tus/tusd/releases/latest).
### Compile from source
The only requirement for building tusd is [Go](http://golang.org/doc/install) 1.5 or newer.
If you meet this requirement, you can clone the git repository, install the remaining
dependencies and build the binary:
```bash
git clone git@github.com:tus/tusd.git
cd tusd
go get -u github.com/aws/aws-sdk-go/...
go get -u github.com/prometheus/client_golang/prometheus
go build -o tusd cmd/tusd/main.go
```
## Running tusd
Starting the tusd upload server is as simple as invoking a single command. For example, the following
snippet demonstrates how to start a tusd process which accepts tus uploads at
`http://localhost:1080/files/` and stores them locally in the `./data` directory:
```
$ tusd -dir ./data
[tusd] Using './data' as directory storage.
[tusd] Using 0.00MB as maximum size.
[tusd] Using 0.0.0.0:1080 as address to listen.
[tusd] Using /files/ as the base path.
[tusd] Using /metrics as the metrics path.
```
Alternatively, if you want to store the uploads on an AWS S3 bucket, you only have to specify
the bucket and provide the corresponding access credentials and region information using
environment variables (if you want to use an S3-compatible store, you can use the `-s3-endpoint`
option):
```
$ export AWS_ACCESS_KEY_ID=xxxxx
$ export AWS_SECRET_ACCESS_KEY=xxxxx
$ export AWS_REGION=eu-west-1
$ tusd -s3-bucket my-test-bucket.com
[tusd] Using 's3://my-test-bucket.com' as S3 bucket for storage.
[tusd] Using 0.00MB as maximum size.
[tusd] Using 0.0.0.0:1080 as address to listen.
[tusd] Using /files/ as the base path.
[tusd] Using /metrics as the metrics path.
```
tusd is also able to read the credentials automatically from a shared credentials file (~/.aws/credentials) as described in https://github.com/aws/aws-sdk-go#configuring-credentials.
Furthermore, tusd has support for storing uploads on Google Cloud Storage. In order to
enable this feature, supply the path to your account file containing the necessary credentials:
```
$ export GCS_SERVICE_ACCOUNT_FILE=./account.json
$ tusd -gcs-bucket my-test-bucket.com
[tusd] Using 'gcs://my-test-bucket.com' as GCS bucket for storage.
[tusd] Using 0.00MB as maximum size.
[tusd] Using 0.0.0.0:1080 as address to listen.
[tusd] Using /files/ as the base path.
[tusd] Using /metrics as the metrics path.
```
Besides these simple examples, tusd can be easily configured using a variety of command line
options:
```
$ tusd -help
Usage of tusd:
-base-path string
Basepath of the HTTP server (default "/files/")
-behind-proxy
Respect X-Forwarded-* and similar headers which may be set by proxies
-dir string
Directory to store uploads in (default "./data")
-expose-metrics
Expose metrics about tusd usage (default true)
-gcs-bucket string
Use Google Cloud Storage with this bucket as storage backend (requires the GCS_SERVICE_ACCOUNT_FILE environment variable to be set)
-hooks-dir string
Directory to search for available hooks scripts
-hooks-http string
An HTTP endpoint to which hook events will be sent to
-hooks-http-backoff int
Number of seconds to wait before retrying each retry (default 1)
-hooks-http-retry int
Number of times to retry on a 500 or network timeout (default 3)
-host string
Host to bind HTTP server to (default "0.0.0.0")
-max-size int
Maximum size of a single upload in bytes
-metrics-path string
Path under which the metrics endpoint will be accessible (default "/metrics")
-port string
Port to bind HTTP server to (default "1080")
-s3-bucket string
Use AWS S3 with this bucket as storage backend (requires the AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_REGION environment variables to be set)
-s3-endpoint string
Endpoint to use S3 compatible implementations like minio (requires s3-bucket to be passed)
-store-size int
Size of space allowed for storage
-timeout int
Read timeout for connections in milliseconds. A zero value means that reads will not timeout (default 30000)
-version
Print tusd version information
```
## Monitoring tusd
tusd exposes metrics at the `/metrics` endpoint ([example](https://master.tus.io/metrics)) in the [Prometheus Text Format](https://prometheus.io/docs/instrumenting/exposition_formats/#text-based-format). This allows you to hook up Prometheus or any other compatible service to your tusd instance and let it monitor tusd. Alternatively, there are many [parsers and client libraries](https://prometheus.io/docs/instrumenting/clientlibs/) available for consuming the metrics format directly.
The endpoint contains details about Go's internals, general HTTP numbers and details about tus uploads and tus-specific errors. It can be completely disabled using the `-expose-metrics false` flag and its path can be changed using the `-metrics-path /my/numbers` flag.
## Using tusd manually
Besides running tusd using the provided binary, you can embed it into
your own Go program:
```go
package main
import (
"fmt"
"net/http"
"github.com/tus/tusd"
"github.com/tus/tusd/filestore"
)
func main() {
// Create a new FileStore instance which is responsible for
// storing the uploaded file on disk in the specified directory.
// This path _must_ exist before tusd will store uploads in it.
// If you want to save them on a different medium, for example
// a remote FTP server, you can implement your own storage backend
// by implementing the tusd.DataStore interface.
store := filestore.FileStore{
Path: "./uploads",
}
// A storage backend for tusd may consist of multiple different parts which
// handle upload creation, locking, termination and so on. The composer is a
// place where all those separated pieces are joined together. In this example
// we only use the file store but you may plug in multiple.
composer := tusd.NewStoreComposer()
store.UseIn(composer)
// Create a new HTTP handler for the tusd server by providing a configuration.
// The StoreComposer property must be set to allow the handler to function.
handler, err := tusd.NewHandler(tusd.Config{
BasePath: "/files/",
StoreComposer: composer,
})
if err != nil {
panic(fmt.Errorf("Unable to create handler: %s", err))
}
// Right now, nothing has happened since we need to start the HTTP server on
// our own. In the end, tusd will listen on and accept requests at
// http://localhost:8080/files
http.Handle("/files/", http.StripPrefix("/files/", handler))
err = http.ListenAndServe(":8080", nil)
if err != nil {
panic(fmt.Errorf("Unable to listen: %s", err))
}
}
```
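If you also want to react to finished uploads from your own code, the Config type in `config.go` describes notification channels such as `CompleteUploads`. As a minimal, hedged sketch (assuming the handler exposes the `CompleteUploads` channel as in upstream tusd), you could create the handler with notifications enabled and consume the channel in a goroutine:

```go
// Enable the notification channel when creating the handler.
handler, err := tusd.NewHandler(tusd.Config{
	BasePath:              "/files/",
	StoreComposer:         composer,
	NotifyCompleteUploads: true,
})
if err != nil {
	panic(fmt.Errorf("Unable to create handler: %s", err))
}

// Consume finished-upload events in a separate goroutine.
go func() {
	for info := range handler.CompleteUploads {
		fmt.Printf("Upload %s (%d bytes) finished\n", info.ID, info.Size)
	}
}()
```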
Please consult the [online documentation](https://godoc.org/github.com/tus/tusd)
for more details about tusd's APIs and its sub-packages.
## Implementing your own storage
The tusd server is built to be as flexible as possible and to allow the use
of different upload storage mechanisms. By default the tusd binary includes
[`filestore`](https://godoc.org/github.com/tus/tusd/filestore) which will save every upload
to a specific directory on disk.
If you have different requirements, you can build your own storage backend
which will save the files to S3, a remote FTP server or similar. Doing so
is as simple as implementing the [`tusd.DataStore`](https://godoc.org/github.com/tus/tusd/#DataStore)
interface and using the new struct in the [configuration object](https://godoc.org/github.com/tus/tusd/#Config).
Please consult the documentation for detailed information about the
required methods.
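To make the required surface area concrete, here is a rough, in-memory sketch of such a backend. It only implements the core `tusd.DataStore` methods (`NewUpload`, `WriteChunk`, `GetInfo`) described in `datastore.go`, omits locking and termination, and is meant purely as an illustration, not production code:

```go
package memstore

import (
	"bytes"
	"io"
	"os"
	"strconv"
	"sync"

	"github.com/sjqzhang/tusd"
)

// MemoryStore keeps every upload in RAM and is purely illustrative.
type MemoryStore struct {
	mu      sync.Mutex
	nextID  int
	uploads map[string]*upload
}

type upload struct {
	info tusd.FileInfo
	data bytes.Buffer
}

// New returns an empty in-memory store.
func New() *MemoryStore {
	return &MemoryStore{uploads: make(map[string]*upload)}
}

// NewUpload registers a new upload and returns its id.
func (s *MemoryStore) NewUpload(info tusd.FileInfo) (string, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.nextID++
	id := strconv.Itoa(s.nextID)
	info.ID = id
	s.uploads[id] = &upload{info: info}
	return id, nil
}

// WriteChunk appends the chunk to the upload's buffer and advances the offset.
func (s *MemoryStore) WriteChunk(id string, offset int64, src io.Reader) (int64, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	u, ok := s.uploads[id]
	if !ok {
		return 0, os.ErrNotExist
	}
	n, err := io.Copy(&u.data, src)
	u.info.Offset += n
	return n, err
}

// GetInfo reports the upload's current state.
func (s *MemoryStore) GetInfo(id string) (tusd.FileInfo, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	u, ok := s.uploads[id]
	if !ok {
		return tusd.FileInfo{}, os.ErrNotExist
	}
	return u.info, nil
}
```

A store like this would then be wired in via `composer.UseCore(memstore.New())` before the composer is handed to `tusd.Config`.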
## Packages
This repository does not only contain the HTTP server's code but also other
useful tools:
* [**s3store**](https://godoc.org/github.com/tus/tusd/s3store): A storage backend using AWS S3
* [**filestore**](https://godoc.org/github.com/tus/tusd/filestore): A storage backend using the local file system
* [**gcsstore**](https://godoc.org/github.com/tus/tusd/gcsstore): A storage backend using Google Cloud Storage
* [**memorylocker**](https://godoc.org/github.com/tus/tusd/memorylocker): An in-memory locker for handling concurrent uploads
* [**consullocker**](https://godoc.org/github.com/tus/tusd/consullocker): A locker using the distributed Consul service
* [**etcd3locker**](https://godoc.org/github.com/tus/tusd/etcd3locker): A locker using the distributed KV etcd3 store
* [**limitedstore**](https://godoc.org/github.com/tus/tusd/limitedstore): A storage wrapper limiting the total used space for uploads
## Running the testsuite
[![Build Status](https://travis-ci.org/tus/tusd.svg?branch=master)](https://travis-ci.org/tus/tusd)
[![Build status](https://ci.appveyor.com/api/projects/status/2y6fa4nyknoxmyc8/branch/master?svg=true)](https://ci.appveyor.com/project/Acconut/tusd/branch/master)
```bash
go test -v ./...
```
## FAQ
### How can I access tusd using HTTPS?
The tusd binary, once executed, listens on the provided port for non-encrypted HTTP requests only and *does not accept* HTTPS connections. This decision has been made to limit the functionality inside this repository which has to be developed, tested and maintained. If you want to send requests to tusd in a secure fashion - which we absolutely encourage - we recommend you utilize a reverse proxy in front of tusd which accepts incoming HTTPS connections and forwards them to tusd using plain HTTP. More information about this topic, including sample configurations for Nginx and Apache, can be found in [issue #86](https://github.com/tus/tusd/issues/86#issuecomment-269569077) and in the [Apache example configuration](/docs/apache2.conf).
### Can I run tusd behind a reverse proxy?
Yes, it is absolutely possible to do so. Firstly, you should execute the tusd binary using the `-behind-proxy` flag, instructing it to pay attention to special headers which are only relevant when used in conjunction with a proxy. Furthermore, there are additional details which should be kept in mind, depending on the software used:
- *Disable request buffering.* Nginx, for example, reads the entire incoming HTTP request, including its body, before sending it to the backend, by default. This behavior defeats the purpose of resumability where an upload is processed while it's being transferred. Therefore, such a feature should be disabled.
- *Adjust maximum request size.* Some proxies have default values for how big a request may be in order to protect your services. Be sure to check these settings to match the requirements of your application.
- *Forward hostname and scheme.* If the proxy rewrites the request URL, the tusd server does not know the original URL which was used to reach the proxy. This behavior can lead to situations where tusd returns a redirect to a URL which cannot be reached by the client. To avoid this confusion, you can explicitly tell tusd which hostname and scheme to use by supplying the `X-Forwarded-Host` and `X-Forwarded-Proto` headers.
Explicit examples for the above points can be found in the [Nginx configuration](/docs/nginx.conf) which is used to power the [master.tus.io](https://master.tus.io) instance.
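As a language-level illustration of the header forwarding described above (not taken from the tusd documentation), a minimal Go reverse proxy in front of tusd could look roughly like this; the port, hostname and TLS handling are placeholders:

```go
package main

import (
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// tusd is assumed to be running with the -behind-proxy flag on localhost:1080.
	target, _ := url.Parse("http://localhost:1080")
	proxy := httputil.NewSingleHostReverseProxy(target)

	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Tell tusd which public hostname and scheme the client used, so that
		// the upload URLs it returns are reachable by the client.
		r.Header.Set("X-Forwarded-Host", r.Host)
		r.Header.Set("X-Forwarded-Proto", "https")
		proxy.ServeHTTP(w, r)
	})

	// httputil.ReverseProxy streams request bodies, so uploads are not buffered.
	// A real deployment would terminate TLS here, e.g. via ListenAndServeTLS.
	http.ListenAndServe(":8443", handler)
}
```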
### Can I run custom verification/authentication checks before an upload begins?
Yes, this is made possible by the [hook system](/docs/hooks.md) inside the tusd binary. It enables custom routines to be executed when certain events occur, such as a new upload being created, which can be handled by the `pre-create` hook. Inside the corresponding hook file, you can run your own validations against the provided upload metadata to determine whether the action is actually allowed or should be rejected by tusd. Please have a look at the [corresponding documentation](docs/hooks.md#pre-create) for a more detailed explanation.
### Can I run tusd inside a VM/Vagrant/VirtualBox?
Yes, you can absolutely do so without any modifications. However, there is one known problem: If you are using tusd inside VirtualBox (the default provider for Vagrant) and are storing the files inside a shared/synced folder, you might get TemporaryErrors (Lockfile created, but doesn't exist) when trying to upload. This happens because shared folders do not support symbolic links which are necessary for tusd. Please use another non-shared folder for storing files (see https://github.com/tus/tusd/issues/201).
### I am getting TemporaryErrors (Lockfile created, but doesn't exist)! What can I do?
This error should only occur when you are using tusd inside VirtualBox. Please see the answer above for more details on when this can happen and how to avoid it.
## License
This project is licensed under the MIT license, see `LICENSE.txt`.

26
vendor/github.com/sjqzhang/tusd/appveyor.yml generated vendored Normal file

@ -0,0 +1,26 @@
clone_folder: c:\projects\go\src\github.com\tus\tusd
environment:
GOPATH: c:\projects\go
GO15VENDOREXPERIMENT: 1
install:
- git submodule update --init --recursive
build_script:
- set PATH=%GOPATH%\bin;%PATH%
- go env
- go version
- go get ./s3store
- go get ./consullocker
- go get ./prometheuscollector
- go get github.com/hashicorp/consul
test_script:
- go test .
- go test ./filestore
- go test ./limitedstore
- go test ./memorylocker
- go test ./s3store
- go vet ./prometheuscollector
- go test ./gcsstore

138
vendor/github.com/sjqzhang/tusd/composer.go generated vendored Normal file

@ -0,0 +1,138 @@
package tusd
// StoreComposer represents a composable data store. It consists of the core
// data store and optional extensions. Please consult the package's overview
// for a more detailed introduction to how to use this structure.
type StoreComposer struct {
Core DataStore
UsesTerminater bool
Terminater TerminaterDataStore
UsesFinisher bool
Finisher FinisherDataStore
UsesLocker bool
Locker LockerDataStore
UsesGetReader bool
GetReader GetReaderDataStore
UsesConcater bool
Concater ConcaterDataStore
UsesLengthDeferrer bool
LengthDeferrer LengthDeferrerDataStore
}
// NewStoreComposer creates a new and empty store composer.
func NewStoreComposer() *StoreComposer {
return &StoreComposer{}
}
// newStoreComposerFromDataStore creates a new store composer and attempts to
// extract the extensions for the provided store. This is intended to be used
// for transitioning from data stores to composers.
func newStoreComposerFromDataStore(store DataStore) *StoreComposer {
composer := NewStoreComposer()
composer.UseCore(store)
if mod, ok := store.(TerminaterDataStore); ok {
composer.UseTerminater(mod)
}
if mod, ok := store.(FinisherDataStore); ok {
composer.UseFinisher(mod)
}
if mod, ok := store.(LockerDataStore); ok {
composer.UseLocker(mod)
}
if mod, ok := store.(GetReaderDataStore); ok {
composer.UseGetReader(mod)
}
if mod, ok := store.(ConcaterDataStore); ok {
composer.UseConcater(mod)
}
if mod, ok := store.(LengthDeferrerDataStore); ok {
composer.UseLengthDeferrer(mod)
}
return composer
}
// Capabilities returns a string representing the provided extensions in a
// human-readable format meant for debugging.
func (store *StoreComposer) Capabilities() string {
str := "Core: "
if store.Core != nil {
str += "✓"
} else {
str += "✗"
}
str += ` Terminater: `
if store.UsesTerminater {
str += "✓"
} else {
str += "✗"
}
str += ` Finisher: `
if store.UsesFinisher {
str += "✓"
} else {
str += "✗"
}
str += ` Locker: `
if store.UsesLocker {
str += "✓"
} else {
str += "✗"
}
str += ` GetReader: `
if store.UsesGetReader {
str += "✓"
} else {
str += "✗"
}
str += ` Concater: `
if store.UsesConcater {
str += "✓"
} else {
str += "✗"
}
str += ` LengthDeferrer: `
if store.UsesLengthDeferrer {
str += "✓"
} else {
str += "✗"
}
return str
}
// UseCore will set the used core data store. If the argument is nil, the
// property will be unset.
func (store *StoreComposer) UseCore(core DataStore) {
store.Core = core
}
func (store *StoreComposer) UseTerminater(ext TerminaterDataStore) {
store.UsesTerminater = ext != nil
store.Terminater = ext
}
func (store *StoreComposer) UseFinisher(ext FinisherDataStore) {
store.UsesFinisher = ext != nil
store.Finisher = ext
}
func (store *StoreComposer) UseLocker(ext LockerDataStore) {
store.UsesLocker = ext != nil
store.Locker = ext
}
func (store *StoreComposer) UseGetReader(ext GetReaderDataStore) {
store.UsesGetReader = ext != nil
store.GetReader = ext
}
func (store *StoreComposer) UseConcater(ext ConcaterDataStore) {
store.UsesConcater = ext != nil
store.Concater = ext
}
func (store *StoreComposer) UseLengthDeferrer(ext LengthDeferrerDataStore) {
store.UsesLengthDeferrer = ext != nil
store.LengthDeferrer = ext
}

91
vendor/github.com/sjqzhang/tusd/composer.mgo generated vendored Normal file

@ -0,0 +1,91 @@
package tusd
#define USE_FUNC(TYPE) \
func (store *StoreComposer) Use ## TYPE(ext TYPE ## DataStore) { \
store.Uses ## TYPE = ext != nil; \
store.TYPE = ext; \
}
#define USE_FIELD(TYPE) Uses ## TYPE bool; \
TYPE TYPE ## DataStore
#define USE_FROM(TYPE) if mod, ok := store.(TYPE ## DataStore); ok { \
composer.Use ## TYPE (mod) \
}
#define USE_CAP(TYPE) str += ` TYPE: `; \
if store.Uses ## TYPE { \
str += "✓" \
} else { \
str += "✗" \
}
// StoreComposer represents a composable data store. It consists of the core
// data store and optional extensions. Please consult the package's overview
// for a more detailed introduction to how to use this structure.
type StoreComposer struct {
Core DataStore
USE_FIELD(Terminater)
USE_FIELD(Finisher)
USE_FIELD(Locker)
USE_FIELD(GetReader)
USE_FIELD(Concater)
USE_FIELD(LengthDeferrer)
}
// NewStoreComposer creates a new and empty store composer.
func NewStoreComposer() *StoreComposer {
return &StoreComposer{}
}
// newStoreComposerFromDataStore creates a new store composer and attempts to
// extract the extensions for the provided store. This is intended to be used
// for transitioning from data stores to composers.
func newStoreComposerFromDataStore(store DataStore) *StoreComposer {
composer := NewStoreComposer()
composer.UseCore(store)
USE_FROM(Terminater)
USE_FROM(Finisher)
USE_FROM(Locker)
USE_FROM(GetReader)
USE_FROM(Concater)
USE_FROM(LengthDeferrer)
return composer
}
// Capabilities returns a string representing the provided extensions in a
// human-readable format meant for debugging.
func (store *StoreComposer) Capabilities() string {
str := "Core: "
if store.Core != nil {
str += "✓"
} else {
str += "✗"
}
USE_CAP(Terminater)
USE_CAP(Finisher)
USE_CAP(Locker)
USE_CAP(GetReader)
USE_CAP(Concater)
USE_CAP(LengthDeferrer)
return str
}
// UseCore will set the used core data store. If the argument is nil, the
// property will be unset.
func (store *StoreComposer) UseCore(core DataStore) {
store.Core = core
}
USE_FUNC(Terminater)
USE_FUNC(Finisher)
USE_FUNC(Locker)
USE_FUNC(GetReader)
USE_FUNC(Concater)
USE_FUNC(LengthDeferrer)

83
vendor/github.com/sjqzhang/tusd/config.go generated vendored Normal file

@ -0,0 +1,83 @@
package tusd
import (
"errors"
"log"
"net/url"
"os"
)
// Config provides a way to configure the Handler depending on your needs.
type Config struct {
// DataStore implementation used to store and retrieve the single uploads.
// The usage of this field is deprecated and should be avoided in favor of
// StoreComposer.
DataStore DataStore
// StoreComposer points to the store composer from which the core data store
// and optional dependencies should be taken. May only be nil if DataStore is
// set.
StoreComposer *StoreComposer
// MaxSize defines how many bytes may be stored in one single upload. If its
// value is 0 or smaller, no limit will be enforced.
MaxSize int64
// BasePath defines the URL path used for handling uploads, e.g. "/files/".
// If no trailing slash is present, it will be added. You may specify an
// absolute URL containing a scheme, e.g. "http://tus.io"
BasePath string
isAbs bool
// NotifyCompleteUploads indicates whether sending notifications about
// completed uploads using the CompleteUploads channel should be enabled.
NotifyCompleteUploads bool
// NotifyTerminatedUploads indicates whether sending notifications about
// terminated uploads using the TerminatedUploads channel should be enabled.
NotifyTerminatedUploads bool
// NotifyUploadProgress indicates whether sending notifications about
// the upload progress using the UploadProgress channel should be enabled.
NotifyUploadProgress bool
// NotifyCreatedUploads indicates whether sending notifications about
// the upload having been created using the CreatedUploads channel should be enabled.
NotifyCreatedUploads bool
// Logger is the logger to use internally, mostly for printing requests.
Logger *log.Logger
// Respect the X-Forwarded-Host, X-Forwarded-Proto and Forwarded headers
// potentially set by proxies when generating an absolute URL in the
// response to POST requests.
RespectForwardedHeaders bool
}
func (config *Config) validate() error {
if config.Logger == nil {
config.Logger = log.New(os.Stdout, "[tusd] ", 0)
}
base := config.BasePath
uri, err := url.Parse(base)
if err != nil {
return err
}
// Ensure base path ends with slash to remove logic from absFileURL
if base != "" && string(base[len(base)-1]) != "/" {
base += "/"
}
// Ensure base path begins with slash if not absolute (starts with scheme)
if !uri.IsAbs() && len(base) > 0 && string(base[0]) != "/" {
base = "/" + base
}
config.BasePath = base
config.isAbs = uri.IsAbs()
if config.StoreComposer == nil {
config.StoreComposer = newStoreComposerFromDataStore(config.DataStore)
config.DataStore = nil
} else if config.DataStore != nil {
return errors.New("tusd: either StoreComposer or DataStore may be set in Config, but not both")
}
if config.StoreComposer.Core == nil {
return errors.New("tusd: StoreComposer in Config needs to contain a non-nil core")
}
return nil
}

124
vendor/github.com/sjqzhang/tusd/datastore.go generated vendored Normal file

@ -0,0 +1,124 @@
package tusd
import (
"io"
)
type MetaData map[string]string
type FileInfo struct {
ID string
// Total file size in bytes specified in the NewUpload call
Size int64
// Indicates whether the total file size is deferred until later
SizeIsDeferred bool
// Offset in bytes (zero-based)
Offset int64
MetaData MetaData
// Indicates that this is a partial upload which will later be used to form
// a final upload by concatenation. Partial uploads should not be processed
// when they are finished since they are only incomplete chunks of files.
IsPartial bool
// Indicates that this is a final upload
IsFinal bool
// If the upload is a final one (see IsFinal) this will be a non-empty
// ordered slice containing the ids of the uploads of which the final upload
// will consist after concatenation.
PartialUploads []string
}
type DataStore interface {
// Create a new upload using the size as the file's length. The method must
// return a unique id which is used to identify the upload. If no backend
// (e.g. Riak) specifies the id, you may want to use the uid package to
// generate one. The properties Size and MetaData will be filled.
NewUpload(info FileInfo) (id string, err error)
// Write the chunk read from src into the file specified by the id at the
// given offset. The handler will take care of validating the offset and
// limiting the size of the src to not overflow the file's size. It may
// return an os.ErrNotExist which will be interpreted as a 404 Not Found.
// It will also lock resources while they are written to ensure only one
// write happens at a time.
// The function call must return the number of bytes written.
WriteChunk(id string, offset int64, src io.Reader) (int64, error)
// Read the file information used to validate the offset and respond to HEAD
// requests. It may return an os.ErrNotExist which will be interpreted as a
// 404 Not Found.
GetInfo(id string) (FileInfo, error)
}
// TerminaterDataStore is the interface which must be implemented by DataStores
// if they want to receive DELETE requests using the Handler. If this interface
// is not implemented, no request handler for this method is attached.
type TerminaterDataStore interface {
// Terminate an upload so any further requests to the resource, both reading
// and writing, must return os.ErrNotExist or similar.
Terminate(id string) error
}
// FinisherDataStore is the interface which can be implemented by DataStores
// which need to do additional operations once an entire upload has been
// completed. These tasks may include but are not limited to freeing unused
// resources or notifying other services. For example, S3Store uses this
// interface for removing a temporary object.
type FinisherDataStore interface {
// FinishUpload executes additional operations for the finished upload which
// is specified by its ID.
FinishUpload(id string) error
}
// LockerDataStore is the interface required for custom lock persisting mechanisms.
// Common ways to store this information is in memory, on disk or using an
// external service, such as ZooKeeper.
// When multiple processes are attempting to access an upload, whether it be
// by reading or writing, a synchronization mechanism is required to prevent
// data corruption, especially to ensure correct offset values and the proper
// order of chunks inside a single upload.
type LockerDataStore interface {
// LockUpload attempts to obtain an exclusive lock for the upload specified
// by its id.
// If this operation fails because the resource is already locked, the
// tusd.ErrFileLocked must be returned. If no error is returned, the attempt
// is considered to be successful and the upload to be locked until UnlockUpload
// is invoked for the same upload.
LockUpload(id string) error
// UnlockUpload releases an existing lock for the given upload.
UnlockUpload(id string) error
}
// GetReaderDataStore is the interface which must be implemented if handler should
// expose and support the GET route. It will allow clients to download the
// content of an upload regardless of whether it's finished or not.
// Please, be aware that this feature is not part of the official tus
// specification. Instead it's a custom mechanism by tusd.
type GetReaderDataStore interface {
// GetReader returns a reader which allows iterating over the content of an
// upload specified by its ID. It should attempt to provide a reader even if
// the upload has not been finished yet but it's not required.
// If the returned reader also implements the io.Closer interface, the
// Close() method will be invoked once everything has been read.
// If the given upload could not be found, the error tusd.ErrNotFound should
// be returned.
GetReader(id string) (io.Reader, error)
}
// ConcaterDataStore is the interface required to be implemented if the
// Concatenation extension should be enabled. Only in this case, the handler
// will parse and respect the Upload-Concat header.
type ConcaterDataStore interface {
// ConcatUploads concatenates the content from the provided partial uploads
// and writes the result into the destination upload which is specified by its
// ID. The caller (usually the handler) must and will ensure that this
// destination upload has been created before with enough space to hold all
// partial uploads. The order, in which the partial uploads are supplied,
// must be respected during concatenation.
ConcatUploads(destination string, partialUploads []string) error
}
// LengthDeferrerDataStore is the interface that must be implemented if the
// creation-defer-length extension should be enabled. The extension enables a
// client to upload files when their total size is not yet known. Instead, the
// client must send the total size as soon as it becomes known.
type LengthDeferrerDataStore interface {
DeclareLength(id string, length int64) error
}
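// A minimal sketch, not part of the vendored source, showing one way the
// LockerDataStore contract described above could be satisfied with a purely
// in-memory lock (similar in spirit to tusd's memorylocker package):
package memorylock

import (
	"sync"

	"github.com/sjqzhang/tusd"
)

// Locker tracks which upload IDs are currently locked.
type Locker struct {
	mu    sync.Mutex
	locks map[string]struct{}
}

// New returns an empty in-memory locker.
func New() *Locker {
	return &Locker{locks: make(map[string]struct{})}
}

// LockUpload returns tusd.ErrFileLocked if the id is already locked.
func (l *Locker) LockUpload(id string) error {
	l.mu.Lock()
	defer l.mu.Unlock()
	if _, ok := l.locks[id]; ok {
		return tusd.ErrFileLocked
	}
	l.locks[id] = struct{}{}
	return nil
}

// UnlockUpload releases a previously acquired lock.
func (l *Locker) UnlockUpload(id string) error {
	l.mu.Lock()
	defer l.mu.Unlock()
	delete(l.locks, id)
	return nil
}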

69
vendor/github.com/sjqzhang/tusd/doc.go generated vendored Normal file

@ -0,0 +1,69 @@
/*
Package tusd provides ways to accept tus 1.0 calls using HTTP.
tus is a protocol based on HTTP for resumable file uploads. Resumable means that
an upload can be interrupted at any moment and can be resumed without
re-uploading the previous data again. An interruption may happen willingly, if
the user wants to pause, or by accident in case of an network issue or server
outage (http://tus.io).
The basics of tusd
tusd was designed in a way which allows flexible and customizable usage. We
wanted to avoid binding this package to a specific storage system, particularly
proprietary third-party software. Therefore tusd is an abstract layer whose
only job is to accept incoming HTTP requests, validate them according to the
specification and finally pass them to the data store.
The data store is another important component in tusd's architecture whose
purpose is to do the actual file handling. It has to write the incoming upload
to a persistent storage system and retrieve information about an upload's
current state. Therefore it is the only part of the system which communicates
directly with the underlying storage system, whether it be the local disk, a
remote FTP server or cloud providers such as AWS S3.
Using a store composer
The only hard requirements for a data store can be found in the DataStore
interface. It contains methods for creating uploads (NewUpload), writing to
them (WriteChunk) and retrieving their status (GetInfo). However, there
are many more features which are not mandatory but may still be used.
These are contained in their own interfaces which all share the *DataStore
suffix. For example, GetReaderDataStore which enables downloading uploads or
TerminaterDataStore which allows uploads to be terminated.
The store composer offers a way to combine the basic data store - the core -
implementation and these additional extensions:
composer := tusd.NewStoreComposer()
composer.UseCore(dataStore) // Implements DataStore
composer.UseTerminater(terminater) // Implements TerminaterDataStore
composer.UseLocker(locker) // Implements LockerDataStore
The corresponding methods for adding an extension to the composer are prefixed
with Use* followed by the name of the corresponding interface. However, most
data stores provide multiple extensions and adding all of them manually can be
tedious and error-prone. Therefore, all data stores distributed with tusd provide
a UseIn() method which does this job automatically. For example, this is the
S3 store in action (see S3Store.UseIn):
store := s3store.New()
locker := memorylocker.New()
composer := tusd.NewStoreComposer()
store.UseIn(composer)
locker.UseIn(composer)
Finally, once you are done with composing your data store, you can pass it
inside the Config struct in order to create a new tusd HTTP handler:
config := tusd.Config{
StoreComposer: composer,
BasePath: "/files/",
}
handler, err := tusd.NewHandler(config)
This handler can then be mounted to a specific path, e.g. /files:
http.Handle("/files/", http.StripPrefix("/files/", handler))
*/
package tusd

229
vendor/github.com/sjqzhang/tusd/filestore/filestore.go generated vendored Normal file

@ -0,0 +1,229 @@
// Package filestore provides a storage backend based on the local file system.
//
// FileStore is a storage backend used as a tusd.DataStore in tusd.NewHandler.
// It stores each upload in the specified directory using two different files: the
// `[id].info` file is used to store the file info in JSON format, and the
// `[id].bin` file contains the raw binary data uploaded.
// No cleanup is performed so you may want to run a cronjob to ensure your disk
// is not filled up with old and finished uploads.
//
// In addition, it provides an exclusive upload locking mechanism using lock files
// which are stored on disk. Each of them stores the PID of the process which
// acquired the lock. This allows locks to be automatically freed when a process
// is unable to release it on its own because the process is not alive anymore.
// For more information, consult the documentation for the tusd.LockerDataStore
// interface, which is implemented by FileStore.
package filestore
import (
"encoding/json"
"fmt"
"io"
"io/ioutil"
"os"
"path/filepath"
"github.com/sjqzhang/tusd"
"github.com/sjqzhang/tusd/uid"
"gopkg.in/Acconut/lockfile.v1"
)
var defaultFilePerm = os.FileMode(0664)
// See the tusd.DataStore interface for documentation about the different
// methods.
type FileStore struct {
// Relative or absolute path to store files in. FileStore does not check
// whether the path exists, use os.MkdirAll in this case on your own.
Path string
// modify by sjqzhang
GetReaderExt func(id string) (io.Reader, error)
}
// New creates a new file-based storage backend. The directory specified will
// be used as the only storage entry. This method does not check
// whether the path exists; use os.MkdirAll to ensure it does.
// In addition, a locking mechanism is provided.
func New(path string) FileStore {
store := FileStore{Path: path}
// modify by sjqzhang
store.GetReaderExt = func(id string) (io.Reader, error) {
return os.Open(store.binPath(id))
}
return store
}
// UseIn sets this store as the core data store in the passed composer and adds
// all possible extensions to it.
func (store FileStore) UseIn(composer *tusd.StoreComposer) {
composer.UseCore(store)
composer.UseGetReader(store)
composer.UseTerminater(store)
composer.UseLocker(store)
composer.UseConcater(store)
composer.UseLengthDeferrer(store)
}
func (store FileStore) NewUpload(info tusd.FileInfo) (id string, err error) {
id = uid.Uid()
info.ID = id
// Create .bin file with no content
file, err := os.OpenFile(store.binPath(id), os.O_CREATE|os.O_WRONLY, defaultFilePerm)
if err != nil {
if os.IsNotExist(err) {
err = fmt.Errorf("upload directory does not exist: %s", store.Path)
}
return "", err
}
defer file.Close()
// writeInfo creates the file by itself if necessary
err = store.writeInfo(id, info)
return
}
func (store FileStore) WriteChunk(id string, offset int64, src io.Reader) (int64, error) {
file, err := os.OpenFile(store.binPath(id), os.O_WRONLY|os.O_APPEND, defaultFilePerm)
if err != nil {
return 0, err
}
defer file.Close()
n, err := io.Copy(file, src)
return n, err
}
func (store FileStore) GetInfo(id string) (tusd.FileInfo, error) {
info := tusd.FileInfo{}
data, err := ioutil.ReadFile(store.infoPath(id))
if err != nil {
return info, err
}
if err := json.Unmarshal(data, &info); err != nil {
return info, err
}
stat, err := os.Stat(store.binPath(id))
if err != nil {
return info, err
}
info.Offset = stat.Size()
return info, nil
}
// modify by sjqzhang
func (store FileStore) GetReader(id string) (io.Reader, error) {
if store.GetReaderExt != nil {
return store.GetReaderExt(id)
}
return os.Open(store.binPath(id))
}
func (store FileStore) Terminate(id string) error {
if err := os.Remove(store.infoPath(id)); err != nil {
return err
}
if err := os.Remove(store.binPath(id)); err != nil {
return err
}
return nil
}
func (store FileStore) ConcatUploads(dest string, uploads []string) (err error) {
file, err := os.OpenFile(store.binPath(dest), os.O_WRONLY|os.O_APPEND, defaultFilePerm)
if err != nil {
return err
}
defer file.Close()
for _, id := range uploads {
src, err := store.GetReader(id)
if err != nil {
return err
}
if _, err := io.Copy(file, src); err != nil {
return err
}
}
return
}
func (store FileStore) DeclareLength(id string, length int64) error {
info, err := store.GetInfo(id)
if err != nil {
return err
}
info.Size = length
info.SizeIsDeferred = false
return store.writeInfo(id, info)
}
func (store FileStore) LockUpload(id string) error {
lock, err := store.newLock(id)
if err != nil {
return err
}
err = lock.TryLock()
if err == lockfile.ErrBusy {
return tusd.ErrFileLocked
}
return err
}
func (store FileStore) UnlockUpload(id string) error {
lock, err := store.newLock(id)
if err != nil {
return err
}
err = lock.Unlock()
// A "no such file or directory" will be returned if no lockfile was found.
// Since this means that the file has never been locked, we drop the error
// and continue as if nothing happened.
if os.IsNotExist(err) {
err = nil
}
return err
}
// newLock constructs a new Lockfile instance.
func (store FileStore) newLock(id string) (lockfile.Lockfile, error) {
path, err := filepath.Abs(filepath.Join(store.Path, id+".lock"))
if err != nil {
return lockfile.Lockfile(""), err
}
// We use Lockfile directly instead of lockfile.New to bypass the unnecessary
// check whether the provided path is absolute since we just resolved it
// on our own.
return lockfile.Lockfile(path), nil
}
// binPath returns the path to the .bin storing the binary data.
func (store FileStore) binPath(id string) string {
return filepath.Join(store.Path, id+".bin")
}
// infoPath returns the path to the .info file storing the file's info.
func (store FileStore) infoPath(id string) string {
return filepath.Join(store.Path, id+".info")
}
// writeInfo updates the entire information. Everything will be overwritten.
func (store FileStore) writeInfo(id string, info tusd.FileInfo) error {
data, err := json.Marshal(info)
if err != nil {
return err
}
return ioutil.WriteFile(store.infoPath(id), data, defaultFilePerm)
}

55
vendor/github.com/sjqzhang/tusd/handler.go generated vendored Normal file

@ -0,0 +1,55 @@
package tusd
import (
"net/http"
"github.com/bmizerany/pat"
)
// Handler is a ready to use handler with routing (using pat)
type Handler struct {
*UnroutedHandler
http.Handler
}
// NewHandler creates a routed tus protocol handler. This is the simplest
// way to use tusd but may not be as configurable as you require. If you are
// integrating this into an existing app you may like to use tusd.NewUnroutedHandler
// instead. Using tusd.NewUnroutedHandler allows the tus handlers to be combined into
// your existing router (aka mux) directly. It also allows the GET and DELETE
// endpoints to be customized. These are not part of the protocol so can be
// changed depending on your needs.
func NewHandler(config Config) (*Handler, error) {
if err := config.validate(); err != nil {
return nil, err
}
handler, err := NewUnroutedHandler(config)
if err != nil {
return nil, err
}
routedHandler := &Handler{
UnroutedHandler: handler,
}
mux := pat.New()
routedHandler.Handler = handler.Middleware(mux)
mux.Post("", http.HandlerFunc(handler.PostFile))
mux.Head(":id", http.HandlerFunc(handler.HeadFile))
mux.Add("PATCH", ":id", http.HandlerFunc(handler.PatchFile))
// Only attach the DELETE handler if the Terminate() method is provided
if config.StoreComposer.UsesTerminater {
mux.Del(":id", http.HandlerFunc(handler.DelFile))
}
// GET handler requires the GetReader() method
if config.StoreComposer.UsesGetReader {
mux.Get(":id", http.HandlerFunc(handler.GetFile))
}
return routedHandler, nil
}
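// A minimal sketch, not part of the vendored source, of the NewUnroutedHandler
// wiring that the comment on NewHandler above refers to; it only uses methods
// already referenced in this file (PostFile, HeadFile, PatchFile, Middleware):
package main

import (
	"net/http"

	"github.com/bmizerany/pat"
	"github.com/sjqzhang/tusd"
	"github.com/sjqzhang/tusd/filestore"
)

func main() {
	// The ./uploads directory must already exist (see the filestore docs).
	store := filestore.New("./uploads")
	composer := tusd.NewStoreComposer()
	store.UseIn(composer)

	unrouted, err := tusd.NewUnroutedHandler(tusd.Config{
		BasePath:      "/files/",
		StoreComposer: composer,
	})
	if err != nil {
		panic(err)
	}

	// Attach only the routes we need; Middleware adds the protocol-level
	// checks (e.g. the Tus-Resumable header) around our own mux.
	mux := pat.New()
	mux.Post("", http.HandlerFunc(unrouted.PostFile))
	mux.Head(":id", http.HandlerFunc(unrouted.HeadFile))
	mux.Add("PATCH", ":id", http.HandlerFunc(unrouted.PatchFile))

	http.Handle("/files/", http.StripPrefix("/files/", unrouted.Middleware(mux)))
	http.ListenAndServe(":8080", nil)
}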

27
vendor/github.com/sjqzhang/tusd/log.go generated vendored Normal file

@ -0,0 +1,27 @@
package tusd
import (
"log"
)
func (h *UnroutedHandler) log(eventName string, details ...string) {
LogEvent(h.logger, eventName, details...)
}
func LogEvent(logger *log.Logger, eventName string, details ...string) {
result := make([]byte, 0, 100)
result = append(result, `event="`...)
result = append(result, eventName...)
result = append(result, `" `...)
for i := 0; i < len(details); i += 2 {
result = append(result, details[i]...)
result = append(result, `="`...)
result = append(result, details[i+1]...)
result = append(result, `" `...)
}
result = append(result, "\n"...)
logger.Output(2, string(result))
}

137
vendor/github.com/sjqzhang/tusd/metrics.go generated vendored Normal file

@ -0,0 +1,137 @@
package tusd
import (
"errors"
"sync"
"sync/atomic"
)
// Metrics provides numbers about the usage of the tusd handler. Since these may
// be accessed from multiple goroutines, it is necessary to read and modify them
// atomically using the functions exposed in the sync/atomic package, such as
// atomic.LoadUint64. In addition the maps must not be modified to prevent data
// races.
type Metrics struct {
// RequestsTotal counts the number of incoming requests per method
RequestsTotal map[string]*uint64
// ErrorsTotal counts the number of returned errors by their message
ErrorsTotal *ErrorsTotalMap
BytesReceived *uint64
UploadsFinished *uint64
UploadsCreated *uint64
UploadsTerminated *uint64
}
// incRequestsTotal increases the counter for this request method atomically by
// one. The method must be one of GET, HEAD, POST, PATCH, DELETE.
func (m Metrics) incRequestsTotal(method string) {
if ptr, ok := m.RequestsTotal[method]; ok {
atomic.AddUint64(ptr, 1)
}
}
// incErrorsTotal increases the counter for this error atomically by one.
func (m Metrics) incErrorsTotal(err HTTPError) {
ptr := m.ErrorsTotal.retrievePointerFor(err)
atomic.AddUint64(ptr, 1)
}
// incBytesReceived increases the number of received bytes atomically by the
// specified number.
func (m Metrics) incBytesReceived(delta uint64) {
atomic.AddUint64(m.BytesReceived, delta)
}
// incUploadsFinished increases the counter for finished uploads atomically by one.
func (m Metrics) incUploadsFinished() {
atomic.AddUint64(m.UploadsFinished, 1)
}
// incUploadsCreated increases the counter for created uploads atomically by one.
func (m Metrics) incUploadsCreated() {
atomic.AddUint64(m.UploadsCreated, 1)
}
// incUploadsTerminated increases the counter for terminated uploads atomically by one.
func (m Metrics) incUploadsTerminated() {
atomic.AddUint64(m.UploadsTerminated, 1)
}
func newMetrics() Metrics {
return Metrics{
RequestsTotal: map[string]*uint64{
"GET": new(uint64),
"HEAD": new(uint64),
"POST": new(uint64),
"PATCH": new(uint64),
"DELETE": new(uint64),
"OPTIONS": new(uint64),
},
ErrorsTotal: newErrorsTotalMap(),
BytesReceived: new(uint64),
UploadsFinished: new(uint64),
UploadsCreated: new(uint64),
UploadsTerminated: new(uint64),
}
}
// ErrorsTotalMap stores the counters for the different HTTP errors.
type ErrorsTotalMap struct {
lock sync.RWMutex
counter map[simpleHTTPError]*uint64
}
type simpleHTTPError struct {
Message string
StatusCode int
}
func simplifyHTTPError(err HTTPError) simpleHTTPError {
return simpleHTTPError{
Message: err.Error(),
StatusCode: err.StatusCode(),
}
}
func newErrorsTotalMap() *ErrorsTotalMap {
m := make(map[simpleHTTPError]*uint64, 20)
return &ErrorsTotalMap{
counter: m,
}
}
// retrievePointerFor returns (after creating it if necessary) the pointer to
// the counter for the error.
func (e *ErrorsTotalMap) retrievePointerFor(err HTTPError) *uint64 {
serr := simplifyHTTPError(err)
e.lock.RLock()
ptr, ok := e.counter[serr]
e.lock.RUnlock()
if ok {
return ptr
}
// For pointer creation, a write-lock is required
e.lock.Lock()
// We ensure that the pointer wasn't created in the meantime
if ptr, ok = e.counter[serr]; !ok {
ptr = new(uint64)
e.counter[serr] = ptr
}
e.lock.Unlock()
return ptr
}
// Load retrieves the map of the counter pointers atomically
func (e *ErrorsTotalMap) Load() map[HTTPError]*uint64 {
m := make(map[HTTPError]*uint64, len(e.counter))
e.lock.RLock()
for err, ptr := range e.counter {
httpErr := NewHTTPError(errors.New(err.Message), err.StatusCode)
m[httpErr] = ptr
}
e.lock.RUnlock()
return m
}
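// A minimal sketch, not part of the vendored source, showing how a consumer
// holding a Metrics value (for example obtained from the tusd handler) could
// read the counters atomically, as the comment on Metrics above requires:
package main

import (
	"fmt"
	"sync/atomic"

	"github.com/sjqzhang/tusd"
)

// printMetrics dumps the request counters and upload totals of m.
func printMetrics(m tusd.Metrics) {
	for method, ptr := range m.RequestsTotal {
		// Each counter is a *uint64 and must be loaded atomically.
		fmt.Printf("requests %-7s %d\n", method, atomic.LoadUint64(ptr))
	}
	fmt.Printf("bytes received: %d\n", atomic.LoadUint64(m.BytesReceived))
	fmt.Printf("uploads created/finished/terminated: %d/%d/%d\n",
		atomic.LoadUint64(m.UploadsCreated),
		atomic.LoadUint64(m.UploadsFinished),
		atomic.LoadUint64(m.UploadsTerminated))
}

func main() {
	// Normally m would come from a running handler; fresh counters are used
	// here only so the sketch compiles and runs on its own.
	m := tusd.Metrics{
		RequestsTotal:     map[string]*uint64{"GET": new(uint64), "POST": new(uint64)},
		BytesReceived:     new(uint64),
		UploadsFinished:   new(uint64),
		UploadsCreated:    new(uint64),
		UploadsTerminated: new(uint64),
	}
	printMetrics(m)
}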

8
vendor/github.com/sjqzhang/tusd/tusd.code-workspace generated vendored Normal file

@ -0,0 +1,8 @@
{
"folders": [
{
"path": "."
}
],
"settings": {}
}

23
vendor/github.com/sjqzhang/tusd/uid/uid.go generated vendored Normal file

@ -0,0 +1,23 @@
package uid
import (
"crypto/rand"
"encoding/hex"
"io"
)
// Uid returns a unique id. These ids consist of 128 bits from a
// cryptographically strong pseudo-random generator and are like uuids, but
// without the dashes and significant bits.
//
// See: http://en.wikipedia.org/wiki/UUID#Random_UUID_probability_of_duplicates
func Uid() string {
id := make([]byte, 16)
_, err := io.ReadFull(rand.Reader, id)
if err != nil {
// This is probably an appropriate way to handle errors from our source
// for random bits.
panic(err)
}
return hex.EncodeToString(id)
}

1067
vendor/github.com/sjqzhang/tusd/unrouted_handler.go generated vendored Normal file

File diff suppressed because it is too large

27
vendor/gopkg.in/Acconut/lockfile.v1/.gitignore generated vendored Normal file

@ -0,0 +1,27 @@
# Compiled Object files, Static and Dynamic libs (Shared Objects)
*.o
*.a
*.so
# Folders
_obj
_test
# popular temporaries
.err
.out
.diff
# Architecture specific extensions/prefixes
*.[568vq]
[568vq].out
*.cgo1.go
*.cgo2.c
_cgo_defun.c
_cgo_gotypes.go
_cgo_export.*
_testmain.go
*.exe

3
vendor/gopkg.in/Acconut/lockfile.v1/.gitmodules generated vendored Normal file

@ -0,0 +1,3 @@
[submodule "git-hooks"]
path = git-hooks
url = https://github.com/nightlyone/git-hooks

11
vendor/gopkg.in/Acconut/lockfile.v1/.travis.yml generated vendored Normal file

@ -0,0 +1,11 @@
language: go
go:
- 1.4
- 1.5
- 1.6
- 1.7
- 1.8
script:
- go test .
- go vet .

19
vendor/gopkg.in/Acconut/lockfile.v1/LICENSE generated vendored Normal file

@ -0,0 +1,19 @@
Copyright (c) 2012 Ingo Oeser
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

50
vendor/gopkg.in/Acconut/lockfile.v1/README.md generated vendored Normal file

@ -0,0 +1,50 @@
lockfile
=========
Handle locking via pid files.
*Attention:* This is a fork of [Ingo Oeser's amazing work](https://github.com/nightlyone/lockfile)
whose behavior differs a bit. While the original package allows a process to
obtain the same lock twice, this fork forbids this behavior.
[![Build Status Unix][1]][2]
[![Build status Windows][3]][4]
[1]: https://secure.travis-ci.org/Acconut/lockfile.png
[2]: https://travis-ci.org/Acconut/lockfile
[3]: https://ci.appveyor.com/api/projects/status/bwy487h8cgue6up5?svg=true
[4]: https://ci.appveyor.com/project/Acconut/lockfile/branch/master
install
-------
Install [Go 1][5], either [from source][6] or [with a prepackaged binary][7].
For Windows support, Go 1.4 or newer is required.
Then run
go get gopkg.in/Acconut/lockfile.v1
[5]: http://golang.org
[6]: http://golang.org/doc/install/source
[7]: http://golang.org/doc/install
LICENSE
-------
BSD
documentation
-------------
[package documentation at godoc.org](http://godoc.org/gopkg.in/Acconut/lockfile.v1)
contributing
============
Contributions are welcome. Please open an issue or send me a pull request for a dedicated branch.
Make sure the git commit hooks show it works.
git commit hooks
-----------------------
enable commit hooks via
cd .git ; rm -rf hooks; ln -s ../git-hooks hooks ; cd ..

13
vendor/gopkg.in/Acconut/lockfile.v1/appveyor.yml generated vendored Normal file

@ -0,0 +1,13 @@
clone_folder: c:\gopath\src\github.com\Acconut\lockfile
environment:
GOPATH: c:\gopath
install:
- go version
- go env
- go get -v -t ./...
build_script:
- go test -v .
- go vet .

201
vendor/gopkg.in/Acconut/lockfile.v1/lockfile.go generated vendored Normal file

@ -0,0 +1,201 @@
// Package lockfile handles pid file based locking.
// While a sync.Mutex helps against concurrency issues within a single process,
// this package is designed to help against concurrency issues between cooperating processes
// or serializing multiple invocations of the same process.
package lockfile
import (
"errors"
"fmt"
"io"
"io/ioutil"
"os"
"path/filepath"
)
// Lockfile is a pid file which can be locked
type Lockfile string
// TemporaryError is a type of error where a retry after a random amount of sleep should help to mitigate it.
type TemporaryError string
func (t TemporaryError) Error() string { return string(t) }
// Temporary always returns true.
// It exists, so you can detect it via
// if te, ok := err.(interface{ Temporary() bool }); ok {
// fmt.Println("I am a temporary error situation, so wait and retry")
// }
func (t TemporaryError) Temporary() bool { return true }
// Various errors returned by this package
var (
ErrBusy = TemporaryError("Locked by other process") // If you get this, retry after a short sleep might help
ErrNotExist = TemporaryError("Lockfile created, but doesn't exist") // If you get this, retry after a short sleep might help
ErrNeedAbsPath = errors.New("Lockfiles must be given as absolute path names")
ErrInvalidPid = errors.New("Lockfile contains invalid pid for system")
ErrDeadOwner = errors.New("Lockfile contains pid of process not existent on this system anymore")
ErrRogueDeletion = errors.New("Lockfile owned by me has been removed unexpectedly")
)
// New describes a new filename located at the given absolute path.
func New(path string) (Lockfile, error) {
if !filepath.IsAbs(path) {
return Lockfile(""), ErrNeedAbsPath
}
return Lockfile(path), nil
}
// GetOwner returns who owns the lockfile.
func (l Lockfile) GetOwner() (*os.Process, error) {
name := string(l)
// Ok, see, if we have a stale lockfile here
content, err := ioutil.ReadFile(name)
if err != nil {
return nil, err
}
// try hard for pids. If no pid, the lockfile is junk anyway and we delete it.
pid, err := scanPidLine(content)
if err != nil {
return nil, err
}
running, err := isRunning(pid)
if err != nil {
return nil, err
}
if running {
proc, err := os.FindProcess(pid)
if err != nil {
return nil, err
}
return proc, nil
}
return nil, ErrDeadOwner
}
// TryLock tries to own the lock.
// It returns nil if successful, and an error describing the reason it didn't work out.
// Please note, that existing lockfiles containing pids of dead processes
// and lockfiles containing no pid at all are simply deleted.
func (l Lockfile) TryLock() error {
name := string(l)
// This has been checked by New already. If we trigger here,
// the caller didn't use New and re-implemented its functionality badly.
// So panic, so that this is found easily during testing.
if !filepath.IsAbs(name) {
panic(ErrNeedAbsPath)
}
tmplock, err := ioutil.TempFile(filepath.Dir(name), "")
if err != nil {
return err
}
cleanup := func() {
_ = tmplock.Close()
_ = os.Remove(tmplock.Name())
}
defer cleanup()
if err := writePidLine(tmplock, os.Getpid()); err != nil {
return err
}
// return value intentionally ignored, as ignoring it is part of the algorithm
_ = os.Link(tmplock.Name(), name)
fiTmp, err := os.Lstat(tmplock.Name())
if err != nil {
return err
}
fiLock, err := os.Lstat(name)
if err != nil {
// tell user that a retry would be a good idea
if os.IsNotExist(err) {
return ErrNotExist
}
return err
}
// Success
if os.SameFile(fiTmp, fiLock) {
return nil
}
_, err = l.GetOwner()
switch err {
default:
// Other errors -> defensively fail and let caller handle this
return err
case nil:
// This fork differs from the upstream repository in this line. We do
// not want a process to obtain a lock if this lock is already held by
// the same process. Therefore, we always return ErrBusy if the lockfile
// contains a non-dead PID.
return ErrBusy
case ErrDeadOwner, ErrInvalidPid:
// cases we can fix below
}
// clean stale/invalid lockfile
err = os.Remove(name)
if err != nil {
// If it doesn't exist, then it doesn't matter who removed it.
if !os.IsNotExist(err) {
return err
}
}
// now that the stale lockfile is gone, let's recurse
return l.TryLock()
}
// Unlock a lock again, if we owned it. Returns any error that happened during release of the lock.
func (l Lockfile) Unlock() error {
proc, err := l.GetOwner()
switch err {
case ErrInvalidPid, ErrDeadOwner:
return ErrRogueDeletion
case nil:
if proc.Pid == os.Getpid() {
// we really own it, so let's remove it.
return os.Remove(string(l))
}
// Not owned by me, so don't delete it.
return ErrRogueDeletion
default:
// This is an application error or system error.
// So give a better error for logging here.
if os.IsNotExist(err) {
return ErrRogueDeletion
}
// Other errors -> defensively fail and let caller handle this
return err
}
}
func writePidLine(w io.Writer, pid int) error {
_, err := io.WriteString(w, fmt.Sprintf("%d\n", pid))
return err
}
func scanPidLine(content []byte) (int, error) {
if len(content) == 0 {
return 0, ErrInvalidPid
}
var pid int
if _, err := fmt.Sscanln(string(content), &pid); err != nil {
return 0, ErrInvalidPid
}
if pid <= 0 {
return 0, ErrInvalidPid
}
return pid, nil
}
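// Illustrative note, not part of the vendored file: the two helpers above
// agree on an on-disk format of a single decimal PID followed by a newline.
// The inputs below are example values.
//
//    pid, err := scanPidLine([]byte("1234\n")) // pid == 1234, err == nil
//    _, err = scanPidLine([]byte(""))          // ErrInvalidPid: empty file
//    _, err = scanPidLine([]byte("-5\n"))      // ErrInvalidPid: non-positive pid
//    _, err = scanPidLine([]byte("junk\n"))    // ErrInvalidPid: not a number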

20
vendor/gopkg.in/Acconut/lockfile.v1/lockfile_unix.go generated vendored Normal file
View File

@ -0,0 +1,20 @@
// +build darwin dragonfly freebsd linux nacl netbsd openbsd solaris
package lockfile
import (
"os"
"syscall"
)
func isRunning(pid int) (bool, error) {
proc, err := os.FindProcess(pid)
if err != nil {
return false, err
}
if err := proc.Signal(syscall.Signal(0)); err != nil {
return false, nil
}
return true, nil
}
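// Illustrative note, not part of the vendored file: signal 0 triggers only the
// kernel's existence and permission checks without delivering anything, which
// makes it a cheap liveness probe. A standalone equivalent for an arbitrary
// example pid:
//
//    proc, _ := os.FindProcess(1234) // on Unix this always succeeds
//    alive := proc.Signal(syscall.Signal(0)) == nil
//
// Caveat: a live process owned by another user answers with EPERM, so this
// probe (and therefore isRunning above) reports it as not running.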

View File

@ -0,0 +1,30 @@
package lockfile
import (
"syscall"
)
// For some reason these consts don't exist in syscall.
const (
error_invalid_parameter = 87
code_still_active = 259
)
func isRunning(pid int) (bool, error) {
procHnd, err := syscall.OpenProcess(syscall.PROCESS_QUERY_INFORMATION, true, uint32(pid))
if err != nil {
if scerr, ok := err.(syscall.Errno); ok {
if uintptr(scerr) == error_invalid_parameter {
// An invalid-parameter error means there is no process with this pid.
return false, nil
}
}
// Any other failure: report it instead of calling GetExitCodeProcess on an invalid handle.
return false, err
}
var code uint32
err = syscall.GetExitCodeProcess(procHnd, &code)
if err != nil {
return false, err
}
return code == code_still_active, nil
}
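// Illustrative note, not part of the vendored file: GetExitCodeProcess reports
// the pseudo exit code STILL_ACTIVE (259) while a process is running, which is
// what code_still_active checks for. A process that really exits with code 259
// would be misreported as still running, which is why the Windows documentation
// advises against using 259 as an exit code.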

151
vendor/modules.txt vendored
View File

@ -1,189 +1,110 @@
# github.com/StackExchange/wmi v0.0.0-20210224194228-fe8f1750fd46
## explicit
github.com/StackExchange/wmi
# github.com/astaxie/beego v1.12.3
## explicit
github.com/astaxie/beego/httplib
github.com/astaxie/beego
github.com/astaxie/beego/config
github.com/astaxie/beego/context
github.com/astaxie/beego/context/param
github.com/astaxie/beego/grace
github.com/astaxie/beego/logs
github.com/astaxie/beego/session
github.com/astaxie/beego/toolbox
github.com/astaxie/beego/utils
github.com/astaxie/beego/orm
github.com/astaxie/beego/cache
github.com/astaxie/beego/cache/redis
# github.com/beorn7/perks v1.0.1
github.com/beorn7/perks/quantile
# github.com/bmizerany/pat v0.0.0-20170815010413-6226ea591a40
## explicit
github.com/bmizerany/pat
# github.com/cespare/xxhash/v2 v2.1.1
github.com/cespare/xxhash/v2
# github.com/cihub/seelog v0.0.0-20170130134532-f561c5e57575
## explicit
github.com/cihub/seelog/archive
github.com/cihub/seelog/archive/gzip
github.com/cihub/seelog/archive/tar
github.com/cihub/seelog/archive/zip
# github.com/deckarep/golang-set v1.7.1
## explicit
github.com/deckarep/golang-set
# github.com/esap/wechat v1.1.0
github.com/esap/wechat
github.com/esap/wechat/util
## explicit
# github.com/eventials/go-tus v0.0.0-20200718001131-45c7ec8f5d59
## explicit
github.com/eventials/go-tus
# github.com/garyburd/redigo v1.6.2
github.com/garyburd/redigo/internal
## explicit
# github.com/go-ole/go-ole v1.2.5
## explicit
github.com/go-ole/go-ole
github.com/go-ole/go-ole/oleutil
# github.com/go-sql-driver/mysql v1.5.0
github.com/go-sql-driver/mysql
# github.com/golang/protobuf v1.4.2
github.com/golang/protobuf/proto
github.com/golang/protobuf/ptypes
github.com/golang/protobuf/ptypes/timestamp
github.com/golang/protobuf/ptypes/any
github.com/golang/protobuf/ptypes/duration
# github.com/golang/snappy v0.0.0-20180518054509-2e65f85255db
## explicit
github.com/golang/snappy
# github.com/gomodule/redigo v2.0.0+incompatible
github.com/gomodule/redigo/redis
github.com/gomodule/redigo/internal
# github.com/google/gopacket v1.1.19
github.com/google/gopacket
github.com/google/gopacket/pcap
github.com/google/gopacket/layers
# github.com/hashicorp/golang-lru v0.5.4
github.com/hashicorp/golang-lru
github.com/hashicorp/golang-lru/simplelru
## explicit
# github.com/inconshreveable/mousetrap v1.0.0
github.com/inconshreveable/mousetrap
# github.com/json-iterator/go v1.1.11
## explicit
github.com/json-iterator/go
# github.com/lhtzbj12/sdrms v0.0.0-20190701115710-2b304003159e
github.com/lhtzbj12/sdrms/routers
github.com/lhtzbj12/sdrms/sysinit
github.com/lhtzbj12/sdrms/controllers
github.com/lhtzbj12/sdrms/models
github.com/lhtzbj12/sdrms/utils
github.com/lhtzbj12/sdrms/enums
# github.com/matttproud/golang_protobuf_extensions v1.0.1
github.com/matttproud/golang_protobuf_extensions/pbutil
## explicit
# github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd
github.com/modern-go/concurrent
# github.com/modern-go/reflect2 v1.0.1
github.com/modern-go/reflect2
# github.com/nfnt/resize v0.0.0-20180221191011-83c6a9932646
## explicit
github.com/nfnt/resize
# github.com/pkg/errors v0.9.1
github.com/pkg/errors
# github.com/prometheus/client_golang v1.7.0
github.com/prometheus/client_golang/prometheus/promhttp
github.com/prometheus/client_golang/prometheus
github.com/prometheus/client_golang/prometheus/internal
# github.com/prometheus/client_model v0.2.0
github.com/prometheus/client_model/go
# github.com/prometheus/common v0.10.0
github.com/prometheus/common/expfmt
github.com/prometheus/common/model
github.com/prometheus/common/internal/bitbucket.org/ww/goautoneg
# github.com/prometheus/procfs v0.1.3
github.com/prometheus/procfs
github.com/prometheus/procfs/internal/fs
github.com/prometheus/procfs/internal/util
# github.com/radovskyb/watcher v1.0.7
## explicit
github.com/radovskyb/watcher
# github.com/shiena/ansicolor v0.0.0-20151119151921-a422bbe96644
github.com/shiena/ansicolor
# github.com/shirou/gopsutil v3.21.5+incompatible
github.com/shirou/gopsutil/disk
github.com/shirou/gopsutil/mem
github.com/shirou/gopsutil/internal/common
## explicit
# github.com/shirou/gopsutil/v3 v3.21.4
## explicit
github.com/shirou/gopsutil/v3/disk
github.com/shirou/gopsutil/v3/mem
github.com/shirou/gopsutil/v3/internal/common
# github.com/sjqzhang/googleAuthenticator v0.0.0-20160926062737-f198f070e0b1
## explicit
github.com/sjqzhang/googleAuthenticator
# github.com/sjqzhang/goutil v0.0.0-20200618044433-2319148e0a46
## explicit
github.com/sjqzhang/goutil
# github.com/sjqzhang/seelog v0.0.0-20180104061743-556439109558
## explicit
github.com/sjqzhang/seelog
# github.com/sjqzhang/tusd v0.0.0-20190220031306-a6a9d78ef54a
## explicit
github.com/sjqzhang/tusd
github.com/sjqzhang/tusd/filestore
github.com/sjqzhang/tusd/uid
# github.com/spf13/cobra v1.1.3
## explicit
github.com/spf13/cobra
# github.com/spf13/pflag v1.0.5
github.com/spf13/pflag
# github.com/syndtr/goleveldb v1.0.0
## explicit
github.com/syndtr/goleveldb/leveldb
github.com/syndtr/goleveldb/leveldb/cache
github.com/syndtr/goleveldb/leveldb/comparer
github.com/syndtr/goleveldb/leveldb/errors
github.com/syndtr/goleveldb/leveldb/filter
github.com/syndtr/goleveldb/leveldb/iterator
github.com/syndtr/goleveldb/leveldb/journal
github.com/syndtr/goleveldb/leveldb/memdb
github.com/syndtr/goleveldb/leveldb/opt
github.com/syndtr/goleveldb/leveldb/storage
github.com/syndtr/goleveldb/leveldb/table
github.com/syndtr/goleveldb/leveldb/util
# go.uber.org/automaxprocs v1.4.0
## explicit
go.uber.org/automaxprocs
go.uber.org/automaxprocs/maxprocs
go.uber.org/automaxprocs/internal/runtime
go.uber.org/automaxprocs/internal/cgroups
# golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550
golang.org/x/crypto/acme/autocert
golang.org/x/crypto/acme
# golang.org/x/net v0.0.0-20190620200207-3b0461eec859
golang.org/x/net/idna
# golang.org/x/sys v0.0.0-20210217105451-b926d437f341
## explicit
golang.org/x/sys/internal/unsafeheader
golang.org/x/sys/unix
golang.org/x/sys/windows
# golang.org/x/text v0.3.2
golang.org/x/text/secure/bidirule
golang.org/x/text/unicode/bidi
golang.org/x/text/unicode/norm
golang.org/x/text/transform
# golang.org/x/time v0.0.0-20190308202827-9d24e82272b4
golang.org/x/time/rate
# google.golang.org/protobuf v1.23.0
google.golang.org/protobuf/encoding/prototext
google.golang.org/protobuf/encoding/protowire
google.golang.org/protobuf/proto
google.golang.org/protobuf/reflect/protoreflect
google.golang.org/protobuf/reflect/protoregistry
google.golang.org/protobuf/runtime/protoiface
google.golang.org/protobuf/runtime/protoimpl
google.golang.org/protobuf/types/known/timestamppb
google.golang.org/protobuf/internal/encoding/messageset
google.golang.org/protobuf/internal/encoding/text
google.golang.org/protobuf/internal/errors
google.golang.org/protobuf/internal/fieldnum
google.golang.org/protobuf/internal/flags
google.golang.org/protobuf/internal/mapsort
google.golang.org/protobuf/internal/pragma
google.golang.org/protobuf/internal/set
google.golang.org/protobuf/internal/strs
google.golang.org/protobuf/internal/fieldsort
google.golang.org/protobuf/internal/filedesc
google.golang.org/protobuf/internal/filetype
google.golang.org/protobuf/internal/impl
google.golang.org/protobuf/internal/version
google.golang.org/protobuf/types/known/anypb
google.golang.org/protobuf/types/known/durationpb
google.golang.org/protobuf/internal/detrand
google.golang.org/protobuf/internal/descfmt
google.golang.org/protobuf/internal/descopts
google.golang.org/protobuf/internal/encoding/defval
google.golang.org/protobuf/internal/encoding/tag
google.golang.org/protobuf/internal/genname
## explicit
# gopkg.in/Acconut/lockfile.v1 v1.1.0
## explicit
gopkg.in/Acconut/lockfile.v1
# gopkg.in/yaml.v2 v2.4.0
## explicit
gopkg.in/yaml.v2