├── README.md ├── dicomimport ├── README.md ├── bindata.go ├── data │ ├── CT.dcm │ └── MR.dcm ├── dicomimport.go └── instructions.md ├── distributed-functional-test ├── README.md └── server_test.go ├── exec-concurrent ├── README.md ├── api_functional_v4_test.go ├── exec-concurrent.go └── vendor │ ├── github.com │ └── minio │ │ └── minio-go │ │ ├── CONTRIBUTING.md │ │ ├── LICENSE │ │ ├── MAINTAINERS.md │ │ ├── README.md │ │ ├── api-datatypes.go │ │ ├── api-error-response.go │ │ ├── api-get-object-file.go │ │ ├── api-get-object.go │ │ ├── api-get-policy.go │ │ ├── api-list.go │ │ ├── api-notification.go │ │ ├── api-presigned.go │ │ ├── api-put-bucket.go │ │ ├── api-put-object-common.go │ │ ├── api-put-object-copy.go │ │ ├── api-put-object-file.go │ │ ├── api-put-object-multipart.go │ │ ├── api-put-object-progress.go │ │ ├── api-put-object-readat.go │ │ ├── api-put-object.go │ │ ├── api-remove.go │ │ ├── api-s3-datatypes.go │ │ ├── api-stat.go │ │ ├── api.go │ │ ├── appveyor.yml │ │ ├── bucket-cache.go │ │ ├── bucket-notification.go │ │ ├── constants.go │ │ ├── copy-conditions.go │ │ ├── hook-reader.go │ │ ├── minio.test │ │ ├── pkg │ │ └── policy │ │ │ ├── bucket-policy-condition.go │ │ │ └── bucket-policy.go │ │ ├── post-policy.go │ │ ├── retry-continous.go │ │ ├── retry.go │ │ ├── s3-endpoints.go │ │ ├── signature-type.go │ │ ├── tempfile.go │ │ └── utils.go │ └── vendor.json ├── js-upload-load ├── README.md ├── minio.json └── pound-it.js ├── mc-cat-serial ├── README.md └── mc-cat.go ├── minio-java-functional-test └── README.md ├── parallel-put-lock └── parallel-put.go ├── parallel-upload-download ├── README.md ├── parallel-get.go └── parallel-put.go ├── perftest.go ├── raid_ephemeral.sh └── upload-perftest ├── Dockerfile ├── README.md └── uploadsperftest.go /README.md: -------------------------------------------------------------------------------- 1 | # Performance Tests for Minio 2 | 3 | This repository shows the results of some performance tests that were executed on several different server configurations of the Minio Object Storage server. 4 | 5 | First we present the results as that is probably what most people are interested in. 
6 | 7 | ## Results 8 | 9 | | Objects/sec | 4 node | 8 node | 12 node | 16 node | 10 | | ----------- | ------:| ------:| -------:| -------:| 11 | | 1M | 220 | 353 | | | 12 | | 3M | 221 | 292 | | | 13 | | 6M | 224 | 294 | | | 14 | | 12M | 198 | 262 | | | 15 | 16 | ## Setup 17 | 18 | << describe test setup >> 19 | 20 | << describe setup.sh >> 21 | 22 | ## Installation 23 | 24 | << describe how to install a server >> 25 | 26 | Log into the machine. 27 | 28 | ### Prepare disk 29 | 30 | ``` 31 | wget https://raw.githubusercontent.com/minio/perftest/master/raid_ephemeral.sh 32 | chmod +x raid_ephemeral.sh 33 | sudo ./raid_ephemeral.sh 34 | sudo chmod 0777 /mnt 35 | mkdir /mnt/distr 36 | ``` 37 | 38 | ### Install Golang 39 | 40 | ``` 41 | sudo yum install git 42 | ``` 43 | 44 | ``` 45 | wget https://storage.googleapis.com/golang/go1.7.3.linux-amd64.tar.gz 46 | tar -C ${HOME} -xzf go1.7.3.linux-amd64.tar.gz 47 | echo export GOROOT=${HOME}/go >> ~/.bashrc 48 | echo export GOPATH=${HOME}/work >> ~/.bashrc 49 | echo export PATH=$PATH:${HOME}/go/bin:${HOME}/work/bin >> ~/.bashrc 50 | source ~/.bashrc 51 | ``` 52 | 53 | ``` 54 | go version 55 | ``` 56 | 57 | ### Install Minio 58 | ``` 59 | go get -u github.com/minio/minio 60 | ``` 61 | 62 | 63 | ## Start Minio Server 64 | 65 | ``` 66 | minio server 172.31.13.67:/mnt/distr 172.31.13.66:/mnt/distr 172.31.13.69:/mnt/distr 172.31.13.68:/mnt/distr 172.31.14.165:/mnt/distr 172.31.14.164:/mnt/distr 172.31.14.163:/mnt/distr 172.31.14.162:/mnt/distr 67 | ``` 68 | 69 | ## Running Performance Tests 70 | 71 | ``` 72 | time ./perftest -p "12" -d 2 -b "lifedrive-100m-usw2" -r "us-west-2" -e "https://s3-us-west-2.amazonaws.com" -a "ACCESS" -s "SECRET" 73 | ``` 74 | 75 | 76 | ## Code 77 | 78 | << describe perftest.go >> 79 | -------------------------------------------------------------------------------- /dicomimport/README.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | CT image obtained from http://deanvaughan.org/wordpress/2013/07/dicom-sample-images/ 4 | 5 | -------------------------------------------------------------------------------- /dicomimport/data/CT.dcm: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/minio/perftest/6179faf716e5b1962f7ddc144a84aac1830aa9be/dicomimport/data/CT.dcm -------------------------------------------------------------------------------- /dicomimport/data/MR.dcm: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/minio/perftest/6179faf716e5b1962f7ddc144a84aac1830aa9be/dicomimport/data/MR.dcm -------------------------------------------------------------------------------- /dicomimport/dicomimport.go: -------------------------------------------------------------------------------- 1 | /* 2 | * Minio Cloud Storage, (C) 2016 Minio, Inc. 3 | * 4 | * Licensed under the Apache License, Version 2.0 (the "License"); 5 | * you may not use this file except in compliance with the License. 6 | * You may obtain a copy of the License at 7 | * 8 | * http://www.apache.org/licenses/LICENSE-2.0 9 | * 10 | * Unless required by applicable law or agreed to in writing, software 11 | * distributed under the License is distributed on an "AS IS" BASIS, 12 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | * See the License for the specific language governing permissions and 14 | * limitations under the License.
15 | */ 16 | 17 | package main 18 | 19 | import ( 20 | "bytes" 21 | "flag" 22 | "fmt" 23 | "github.com/aws/aws-sdk-go/aws" 24 | "github.com/aws/aws-sdk-go/aws/credentials" 25 | "github.com/aws/aws-sdk-go/aws/session" 26 | "github.com/aws/aws-sdk-go/service/s3/s3manager" 27 | "github.com/minio/blake2b-simd" 28 | "io/ioutil" 29 | "os" 30 | "sync" 31 | "time" 32 | ) 33 | 34 | var ( 35 | workers = flag.Int("w", 10, "Number of workers to use. Defaults to 10.") 36 | modality = flag.String("m", "", "Modality to use (CT, MR, etc.)") 37 | runs = flag.Int("r", 10, "Number of runs to do. Defaults to 10.") 38 | ) 39 | 40 | // modifyUid modifies the last part of the SOP Instance UID 41 | func modifyUid(uid, modifier string) string { 42 | 43 | s := uid[:len(uid)-len(modifier)] + modifier 44 | //fmt.Println("modifier", s) 45 | 46 | return s 47 | } 48 | 49 | var offsets = map[string]int{ 50 | "CT": 0x1e0, 51 | "MR": 0x56, 52 | } 53 | 54 | // generateBlob generates an image of a certain modality and makes sure 55 | // that it has a unique SOP Instance UID 56 | func generateBlob(modifier, modality string) ([]byte, error) { 57 | 58 | data, err := Asset("data/" + modality + ".dcm") 59 | if err != nil { 60 | return nil, err 61 | } 62 | 63 | offset := offsets[modality] 64 | size := int(data[offset]) + int(data[offset+1])*0x100 65 | uid := string(data[offset+2 : offset+2+size]) 66 | uid = modifyUid(uid, modifier) 67 | 68 | // write modified uid back 69 | copy(data[offset+2:], []byte(uid)) 70 | 71 | return data, nil 72 | } 73 | 74 | // hashBlob returns the BLAKE2b-512 hash of the blob 75 | func hashBlob(data []byte) (string, error) { 76 | 77 | h := blake2b.New512() 78 | h.Reset() 79 | h.Write(data) 80 | sum := h.Sum(nil) 81 | 82 | return fmt.Sprintf("%x", sum), nil 83 | } 84 | 85 | func saveBlob(data []byte, hash string) { 86 | ioutil.WriteFile(hash, data, os.ModePerm) 87 | } 88 | 89 | // uploadBlob does an upload to the S3/Minio server 90 | func uploadBlob(data []byte, hash string) error { 91 | 92 | credsUp := credentials.NewStaticCredentials(os.Getenv("ACCESSKEY"), os.Getenv("SECRETKEY"), "") 93 | sessUp := session.New(aws.NewConfig().WithCredentials(credsUp).WithRegion("us-east-1").WithEndpoint(os.Getenv("ENDPOINT")).WithS3ForcePathStyle(true)) 94 | 95 | // split key at 2nd character to force creation of directory 96 | key := hash[0:2] + "/" + hash[2:] 97 | 98 | uploader := s3manager.NewUploader(sessUp) 99 | var err error 100 | _, err = uploader.Upload(&s3manager.UploadInput{ 101 | Body: bytes.NewReader(data), 102 | Bucket: aws.String("dicom"), 103 | Key: aws.String(key), 104 | }) 105 | 106 | return err 107 | } 108 | 109 | // Worker routine for uploading an image 110 | func putWorker(imageCh <-chan imageDescriptor, outCh chan<- int) { 111 | 112 | for i := range imageCh { 113 | 114 | data, err := generateBlob(i.instUID, i.modality) 115 | if err != nil { 116 | fmt.Println("Exiting out due to error from generateBlob:", err) 117 | return 118 | } 119 | hash, err := hashBlob(data) 120 | if err != nil { 121 | fmt.Println("Exiting out due to error from hashBlob:", err) 122 | return 123 | } 124 | //saveBlob(data, hash) 125 | err = uploadBlob(data, hash) 126 | if err != nil { 127 | fmt.Println("Exiting out due to error from uploadBlob:", err) 128 | return 129 | } 130 | 131 | outCh <- len(data) 132 | } 133 | } 134 | 135 | type imageDescriptor struct { 136 | instUID string 137 | modality string 138 | } 139 | 140 | func main() { 141 | flag.Parse() 142 | 143 | if os.Getenv("ACCESSKEY") == "" { 144 |
fmt.Println("Environment variable ACCESSKEY needs to be set") 145 | return 146 | } 147 | if os.Getenv("SECRETKEY") == "" { 148 | fmt.Println("Environment variable SECRETKEY needs to be set") 149 | return 150 | } 151 | if os.Getenv("ENDPOINT") == "" { 152 | fmt.Println("Environment variable ENDPOINT needs to be set") 153 | return 154 | } 155 | 156 | if *modality == "" { 157 | fmt.Println("Bad arguments") 158 | return 159 | } 160 | _, found := offsets[*modality] 161 | if !found { 162 | fmt.Println("Unknown modality:", *modality) 163 | return 164 | } 165 | 166 | var wg sync.WaitGroup 167 | imageCh := make(chan imageDescriptor) 168 | outCh := make(chan int) 169 | 170 | // Start worker go routines 171 | for i := 0; i < *workers; i++ { 172 | wg.Add(1) 173 | go func() { 174 | defer wg.Done() 175 | putWorker(imageCh, outCh) 176 | }() 177 | } 178 | 179 | pid := os.Getpid() 180 | 181 | start := time.Now() 182 | 183 | // Push onto input channel 184 | go func() { 185 | for i := 0; i < *runs; i++ { 186 | t := fmt.Sprintf("%v", time.Now().UnixNano()) 187 | modifier := fmt.Sprintf("%d.%v.%d", pid, t[len(t)-7:], i) 188 | imageCh <- imageDescriptor{instUID: modifier, modality: *modality} 189 | } 190 | 191 | // Close input channel 192 | close(imageCh) 193 | }() 194 | 195 | // Wait for workers to complete 196 | go func() { 197 | wg.Wait() 198 | close(outCh) // Close output channel 199 | }() 200 | 201 | // compute total size of bytes uploaded 202 | totalSize := 0 203 | for o := range outCh { 204 | totalSize += o 205 | } 206 | 207 | fmt.Println("Total size :", totalSize, "bytes") 208 | elapsed := time.Since(start) 209 | fmt.Println("Elapsed time :", elapsed) 210 | seconds := float64(elapsed) / float64(time.Second) 211 | fmt.Printf("Speed : %4.0f objs/sec\n", float64(*runs)/seconds) 212 | fmt.Printf("Bandwidth : %4.0f MBit/sec\n", 8*float64(totalSize)/seconds/1024/1024) 213 | 214 | //fmt.Println("Number of objects:", len(list)) 215 | } 216 | -------------------------------------------------------------------------------- /dicomimport/instructions.md: -------------------------------------------------------------------------------- 1 | 2 | # dicomimport 3 | 4 | ## Introduction 5 | 6 | This is a load testing tool for Minio (or any other S3 compatible server) with an ephasis on medical images in the DICOM format. It embeds images of various modalities that are modified on the fly to generate unique binary objects. Each image is then hashed before being uploaded to the server whereby the hash is used as a key name of the object. 7 | 8 | ## Downloading 9 | 10 | You can download the Windows executable for dicomimport from here: https://github.com/minio/perftest/releases/download/v0.1/dicomimport.exe 11 | 12 | ## Building from source 13 | 14 | Make sure you have git and golang installed, and then run as follows: 15 | 16 | ``` 17 | go get github.com/minio/perftest/dicomimport 18 | ``` 19 | 20 | ## Preparation 21 | 22 | Make sure a bucket called dicom is available. 23 | 24 | Using `mc` you can create it as follows: 25 | 26 | ``` 27 | mc mb myminio/dicom 28 | ``` 29 | 30 | ## Configuration 31 | 32 | In order to run `dicomimport` you first need to configure the access information to the Minio server. 
You will need to set the following three parameters: 33 | 34 | - access key 35 | - secret key 36 | - endpoint 37 | 38 | Here are the command line statements to define the environment variables for this: 39 | 40 | Windows: 41 | ``` 42 | set ACCESSKEY=5D94Q9WPYAV26D068GIO 43 | set SECRETKEY=GOgBwUsaKn3RmWwO25zq+ZyqLeuSK2aNGu7Z7GTA 44 | set ENDPOINT=http://172.31.17.143:9000 45 | ``` 46 | 47 | ## How to run 48 | 49 | You can run dicomimport as follows: 50 | 51 | ``` 52 | dicomimport -m "CT" -w 50 -r 1000 53 | ``` 54 | 55 | The meaning of the command line flags is as follows: 56 | 57 | - `-m`: modality (currently "CT" or "MR") 58 | - `-w`: number of worker threads in parallel 59 | - `-r`: total number of objects to upload 60 | 61 | ## Output 62 | 63 | Here is a typical output: 64 | 65 | ``` 66 | C:\Users\Administrator>dicomimport -m "CT" -w 100 -r 1000 67 | Total size : 525968000 bytes 68 | Elapsed time : 4.2669473s 69 | Speed : 234 objs/sec 70 | Bandwidth : 940 MBit/sec 71 | ``` 72 | 73 | The last two lines show the speed in objects per second as well as the bandwidth in MBit/sec. -------------------------------------------------------------------------------- /distributed-functional-test/README.md: -------------------------------------------------------------------------------- 1 | # Exhaustive functional test for Minio distributed. 2 | `server_test.go` from the server side suite test is made generic so that it can now be run against an external 3 | distributed server instance. 4 | 5 | Since the suite test is exhaustive and covers most of the functionality, it's a good validator for most of the server-side behavior. 6 | 7 | Facilities to run individual tests concurrently and under chaos testing will be added. 8 | 9 | # How to run. 10 | 11 | - Set ENDPOINT. 12 | 13 | ```sh 14 | $ export S3_ENDPOINT=http://xxx.xx.xxx.xxx: 15 | ``` 16 | 17 | - Set ACCESS_KEY. 18 | 19 | ```sh 20 | $ export ACCESS_KEY=xxxx 21 | ``` 22 | 23 | - Set SECRET_KEY. 24 | 25 | ```sh 26 | $ export SECRET_KEY=xxxx 27 | ``` 28 | - Run the test. 29 | 30 | ```sh 31 | $ go test -v 32 | ``` 33 | OR 34 | 35 | ```sh 36 | $ go test -run= 37 | ``` 38 | 39 | - Here is the list of supported tests.
40 | ``` 41 | TestBucket 42 | TestBucketMultipartList 43 | TestBucketPolicy 44 | TestBucketSQSNotification 45 | TestContentTypePersists 46 | TestCopyObject 47 | TestDeleteBucket 48 | TestDeleteBucketNotEmpty 49 | TestDeleteMultipleObjects 50 | TestDeleteObject 51 | TestEmptyObject 52 | TestGetObjectErrors 53 | TestGetObjectLarge10MiB 54 | TestGetObjectLarge11MiB 55 | TestGetObjectRangeErrors 56 | TestGetPartialObjectLarge10MiB 57 | TestGetPartialObjectLarge11MiB 58 | TestGetPartialObjectMisAligned 59 | TestHeadOnBucket 60 | TestHeadOnObjectLastModified 61 | TestHeader 62 | TestListBuckets 63 | TestListObjectsHandler 64 | TestListObjectsHandlerErrors 65 | TestListenBucketNotificationHandler 66 | TestMultipleObjects 67 | TestNonExistentBucket 68 | TestNotBeAbleToCreateObjectInNonexistentBucket 69 | TestNotImplemented 70 | TestObjectGet 71 | TestObjectGetAnonymous 72 | TestObjectMultipart 73 | TestObjectMultipartAbort 74 | TestObjectMultipartListError 75 | TestObjectValidMD5 76 | TestPartialContent 77 | TestPutBucket 78 | TestPutBucketErrors 79 | TestPutObject 80 | TestPutObjectLongName 81 | TestSHA256Mismatch 82 | TestValidateObjectMultipartUploadID 83 | TestValidateSignature 84 | ``` 85 | -------------------------------------------------------------------------------- /exec-concurrent/README.md: -------------------------------------------------------------------------------- 1 | # exec-concurrent tests. 2 | 3 | - Runs Minio-go functional tests concurrently against the targeted Minio instances, generating load. 4 | 5 | # Instructions to run. 6 | 7 | - Set the access key. 8 | 9 | ```sh 10 | export ACCESS_KEY=xxxxxx 11 | ``` 12 | 13 | - Set the secret key. 14 | 15 | ```sh 16 | export SECRET_KEY=xxxxxxx 17 | ``` 18 | 19 | - Set the Minio server endpoint. 20 | 21 | ```sh 22 | export S3_ADDRESS=xxx.xx.xxx.xxx: 23 | ``` 24 | 25 | - Set S3_SECURE to `true` for https connections. 26 | 27 | ```sh 28 | export S3_SECURE=true 29 | ``` 30 | OR 31 | 32 | Set S3_SECURE to `false` for http connections. 33 | 34 | ```sh 35 | export S3_SECURE=false 36 | ``` 37 | 38 | - Set the concurrency level of the load. 39 | 40 | ```sh 41 | export CONCURRENCY=100 42 | ``` 43 | 44 | - Build api_functional_v4_test.go. 45 | 46 | ```sh 47 | $ go test -c api_functional_v4_test.go 48 | ``` 49 | 50 | - Build exec-concurrent.go. 51 | 52 | ```sh 53 | $ go build exec-concurrent.go 54 | ``` 55 | 56 | - Run the test. 57 | 58 | ```sh 59 | $ ./exec-concurrent 60 | ``` 61 | 62 | - Here is the list of supported test names. 63 | 64 | ``` 65 | TestMakeBucketError 66 | TestMakeBucketRegions 67 | TestPutObjectReadAt 68 | TestListPartiallyUploaded 69 | TestGetOjectSeekEnd 70 | TestGetObjectClosedTwice 71 | TestRemovePartiallyUploaded 72 | TestResumablePutObject 73 | TestResumableFPutObject 74 | TestFPutObjectMultipart 75 | TestFPutObject 76 | TestGetObjectReadSeekFunctional 77 | TestGetObjectReadAtFunctional 78 | TestPresignedPostPolicy 79 | TestCopyObject 80 | TestFunctional 81 | ``` 82 | 83 | 84 | 85 | -------------------------------------------------------------------------------- /exec-concurrent/exec-concurrent.go: -------------------------------------------------------------------------------- 1 | /* 2 | * Minio Cloud Storage, (C) 2015, 2016 Minio, Inc. 3 | * 4 | * Licensed under the Apache License, Version 2.0 (the "License"); 5 | * you may not use this file except in compliance with the License.
6 | * You may obtain a copy of the License at 7 | * 8 | * http://www.apache.org/licenses/LICENSE-2.0 9 | * 10 | * Unless required by applicable law or agreed to in writing, software 11 | * distributed under the License is distributed on an "AS IS" BASIS, 12 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | * See the License for the specific language governing permissions and 14 | * limitations under the License. 15 | */ 16 | 17 | package main 18 | 19 | import ( 20 | "fmt" 21 | "os" 22 | "os/exec" 23 | "strconv" 24 | "sync" 25 | "syscall" 26 | ) 27 | 28 | // For all unixes we need to bump allowed number of open files to a 29 | // higher value than its usual default of '1024'. The reasoning is 30 | // that this value is too small. 31 | func setMaxOpenFiles() error { 32 | var rLimit syscall.Rlimit 33 | err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rLimit) 34 | if err != nil { 35 | return err 36 | } 37 | // Set the current limit to Max, it is usually around 4096. 38 | // To increase this limit further the user has to manually edit 39 | // `/etc/security/limits.conf` 40 | rLimit.Cur = rLimit.Max 41 | return syscall.Setrlimit(syscall.RLIMIT_NOFILE, &rLimit) 42 | } 43 | 44 | func init() { 45 | os.Setenv("ACCESS_KEY", os.Getenv("ACCESS_KEY")) 46 | os.Setenv("SECRET_KEY", os.Getenv("SECRET_KEY")) 47 | os.Setenv("ENDPOINT", os.Getenv("ENDPOINT")) 48 | os.Setenv("S3_SECURE", os.Getenv("S3_SECURE")) 49 | os.Setenv("CONCURRENCY", os.Getenv("CONCURRENCY")) 50 | } 51 | 52 | func main() { 53 | 54 | setErr := setMaxOpenFiles() 55 | if setErr != nil { 56 | fmt.Println("Error bumping up open file limits: ", setErr) 57 | return 58 | } 59 | 60 | concurrency, err := strconv.Atoi(os.Getenv("CONCURRENCY")) 61 | if err != nil { 62 | fmt.Println("Please set a valid integer for concurrency level. ex: `export CONCURRENCY=100`: ", err) 63 | return 64 | } 65 | var wg sync.WaitGroup 66 | f, _ := os.Create("output.log") 67 | defer f.Close() 68 | testCmd := "./minio.test -test.timeout 3600s" 69 | if len(os.Args) > 1 { 70 | testCmd = fmt.Sprintf("./minio.test -test.timeout 3600s -test.run %s", os.Args[1]) 71 | } 72 | for i := 0; i < concurrency; i++ { 73 | wg.Add(1) 74 | go func(routineId int) { 75 | defer wg.Done() 76 | 77 | out, err := exec.Command("sh", "-c", testCmd).Output() 78 | if err != nil { 79 | fmt.Println(err.Error()) 80 | } 81 | fmt.Fprintf(f, "\nGoroutine: %d\n", routineId+1) 82 | fmt.Fprint(f, string(out)) 83 | 84 | }(i) 85 | } 86 | wg.Wait() 87 | 88 | } 89 | -------------------------------------------------------------------------------- /exec-concurrent/vendor/github.com/minio/minio-go/CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | 2 | ### Developer Guidelines 3 | 4 | ``minio-go`` welcomes your contribution. To make the process as seamless as possible, we ask for the following: 5 | 6 | * Go ahead and fork the project and make your changes. We encourage pull requests to discuss code changes. 7 | - Fork it 8 | - Create your feature branch (git checkout -b my-new-feature) 9 | - Commit your changes (git commit -am 'Add some feature') 10 | - Push to the branch (git push origin my-new-feature) 11 | - Create new Pull Request 12 | 13 | * When you're ready to create a pull request, be sure to: 14 | - Have test cases for the new code. If you have questions about how to do it, please ask in your pull request. 15 | - Run `go fmt` 16 | - Squash your commits into a single commit. `git rebase -i`.
It's okay to force update your pull request. 17 | - Make sure `go test -race ./...` and `go build` complete. 18 | NOTE: go test runs functional tests and requires you to have an AWS S3 account. Set them as environment variables 19 | ``ACCESS_KEY`` and ``SECRET_KEY``. To run a shorter version of the tests please use ``go test -short -race ./...`` 20 | 21 | * Read [Effective Go](https://github.com/golang/go/wiki/CodeReviewComments) article from Golang project 22 | - `minio-go` project is strictly conformant with Golang style 23 | - if you happen to observe offending code, please feel free to send a pull request 24 | -------------------------------------------------------------------------------- /exec-concurrent/vendor/github.com/minio/minio-go/MAINTAINERS.md: -------------------------------------------------------------------------------- 1 | # For maintainers only 2 | 3 | ## Responsibilities 4 | 5 | Please go through this link [Maintainer Responsibility](https://gist.github.com/abperiasamy/f4d9b31d3186bbd26522) 6 | 7 | ### Making new releases 8 | 9 | Edit `libraryVersion` constant in `api.go`. 10 | 11 | ``` 12 | $ grep libraryVersion api.go 13 | libraryVersion = "0.3.0" 14 | ``` 15 | 16 | ``` 17 | $ git tag 0.3.0 18 | $ git push --tags 19 | ``` -------------------------------------------------------------------------------- /exec-concurrent/vendor/github.com/minio/minio-go/README.md: -------------------------------------------------------------------------------- 1 | # Minio Go Client SDK for Amazon S3 Compatible Cloud Storage [![Gitter](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/Minio/minio?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) 2 | The Minio Go Client SDK provides simple APIs to access any Amazon S3 compatible object storage. 3 | 4 | **Supported cloud storage providers:** 5 | 6 | - AWS Signature Version 4 7 | - Amazon S3 8 | - Minio 9 | 10 | 11 | - AWS Signature Version 2 12 | - Google Cloud Storage (Compatibility Mode) 13 | - Openstack Swift + Swift3 middleware 14 | - Ceph Object Gateway 15 | - Riak CS 16 | 17 | This quickstart guide will show you how to install the Minio client SDK, connect to Minio, and provide a walkthrough for a simple file uploader. For a complete list of APIs and examples, please take a look at the [Go Client API Reference](https://docs.minio.io/docs/golang-client-api-reference). 18 | 19 | This document assumes that you have a working [Go development environment](https://docs.minio.io/docs/how-to-install-golang). 20 | 21 | 22 | ## Download from Github 23 | 24 | ```sh 25 | 26 | go get -u github.com/minio/minio-go 27 | 28 | ``` 29 | ## Initialize Minio Client 30 | 31 | The Minio client requires the following four parameters to connect to an Amazon S3 compatible object storage. 32 | 33 | 34 | | Parameter | Description| 35 | | :--- | :--- | 36 | | endpoint | URL to object storage service. | 37 | | accessKeyID | Access key is the user ID that uniquely identifies your account. | 38 | | secretAccessKey | Secret key is the password to your account. | 39 | | secure | Set this value to 'true' to enable secure (HTTPS) access. | 40 | 41 | 42 | ```go 43 | 44 | package main 45 | 46 | import ( 47 | "github.com/minio/minio-go" 48 | "log" 49 | ) 50 | 51 | func main() { 52 | endpoint := "play.minio.io:9000" 53 | accessKeyID := "Q3AM3UQ867SPQQA43P2F" 54 | secretAccessKey := "zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG" 55 | useSSL := true 56 | 57 | // Initialize minio client object.
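	// Note (editorial annotation, not part of the vendored README): minio.New
	// picks a signature version for the endpoint automatically; the related
	// constructors minio.NewV2 and minio.NewV4 can be used to force AWS
	// Signature Version 2 or 4 explicitly.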
58 | minioClient, err := minio.New(endpoint, accessKeyID, secretAccessKey, useSSL) 59 | if err != nil { 60 | log.Fatalln(err) 61 | } 62 | 63 | log.Printf("%v\n", minioClient) // minioClient is now setup 64 | } 65 | 66 | ``` 67 | 68 | ## Quick Start Example - File Uploader 69 | 70 | This example program connects to an object storage server, creates a bucket and uploads a file to the bucket. 71 | 72 | 73 | 74 | 75 | We will use the Minio server running at [https://play.minio.io:9000](https://play.minio.io:9000) in this example. Feel free to use this service for testing and development. Access credentials shown in this example are open to the public. 76 | 77 | #### FileUploader.go 78 | 79 | ```go 80 | package main 81 | 82 | import ( 83 | "github.com/minio/minio-go" 84 | "log" 85 | ) 86 | 87 | func main() { 88 | endpoint := "play.minio.io:9000" 89 | accessKeyID := "Q3AM3UQ867SPQQA43P2F" 90 | secretAccessKey := "zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG" 91 | useSSL := true 92 | 93 | // Initialize minio client object. 94 | minioClient, err := minio.New(endpoint, accessKeyID, secretAccessKey, useSSL) 95 | if err != nil { 96 | log.Fatalln(err) 97 | } 98 | 99 | // Make a new bucket called mymusic. 100 | bucketName := "mymusic" 101 | location := "us-east-1" 102 | 103 | err = minioClient.MakeBucket(bucketName, location) 104 | if err != nil { 105 | // Check to see if we already own this bucket (which happens if you run this twice) 106 | exists, err := minioClient.BucketExists(bucketName) 107 | if err == nil && exists { 108 | log.Printf("We already own %s\n", bucketName) 109 | } else { 110 | log.Fatalln(err) 111 | } 112 | } 113 | log.Printf("Successfully created %s\n", bucketName) 114 | 115 | // Upload the zip file 116 | objectName := "golden-oldies.zip" 117 | filePath := "/tmp/golden-oldies.zip" 118 | contentType := "application/zip" 119 | 120 | // Upload the zip file with FPutObject 121 | n, err := minioClient.FPutObject(bucketName, objectName, filePath, contentType) 122 | if err != nil { 123 | log.Fatalln(err) 124 | } 125 | 126 | log.Printf("Successfully uploaded %s of size %d\n", objectName, n) 127 | } 128 | ``` 129 | 130 | #### Run FileUploader 131 | 132 | ```sh 133 | 134 | go run file-uploader.go 135 | 2016/08/13 17:03:28 Successfully created mymusic 136 | 2016/08/13 17:03:40 Successfully uploaded golden-oldies.zip of size 16253413 137 | 138 | mc ls play/mymusic/ 139 | [2016-05-27 16:02:16 PDT] 17MiB golden-oldies.zip 140 | 141 | ``` 142 | 143 | ## API Reference 144 | 145 | The full API Reference is available here.
146 | 147 | * [Complete API Reference](https://docs.minio.io/docs/golang-client-api-reference) 148 | 149 | ### API Reference : Bucket Operations 150 | 151 | * [`MakeBucket`](https://docs.minio.io/docs/golang-client-api-reference#MakeBucket) 152 | * [`ListBuckets`](https://docs.minio.io/docs/golang-client-api-reference#ListBuckets) 153 | * [`BucketExists`](https://docs.minio.io/docs/golang-client-api-reference#BucketExists) 154 | * [`RemoveBucket`](https://docs.minio.io/docs/golang-client-api-reference#RemoveBucket) 155 | * [`ListObjects`](https://docs.minio.io/docs/golang-client-api-reference#ListObjects) 156 | * [`ListObjectsV2`](https://docs.minio.io/docs/golang-client-api-reference#ListObjectsV2) 157 | * [`ListIncompleteUploads`](https://docs.minio.io/docs/golang-client-api-reference#ListIncompleteUploads) 158 | 159 | ### API Reference : Bucket policy Operations 160 | 161 | * [`SetBucketPolicy`](https://docs.minio.io/docs/golang-client-api-reference#SetBucketPolicy) 162 | * [`GetBucketPolicy`](https://docs.minio.io/docs/golang-client-api-reference#GetBucketPolicy) 163 | * [`ListBucketPolicies`](https://docs.minio.io/docs/golang-client-api-reference#ListBucketPolicies) 164 | 165 | ### API Reference : Bucket notification Operations 166 | 167 | * [`SetBucketNotification`](https://docs.minio.io/docs/golang-client-api-reference#SetBucketNotification) 168 | * [`GetBucketNotification`](https://docs.minio.io/docs/golang-client-api-reference#GetBucketNotification) 169 | * [`RemoveAllBucketNotification`](https://docs.minio.io/docs/golang-client-api-reference#RemoveAllBucketNotification) 170 | * [`ListenBucketNotification`](https://docs.minio.io/docs/golang-client-api-reference#ListenBucketNotification) (Minio Extension) 171 | 172 | ### API Reference : File Object Operations 173 | 174 | * [`FPutObject`](https://docs.minio.io/docs/golang-client-api-reference#FPutObject) 175 | * [`FGetObject`](https://docs.minio.io/docs/golang-client-api-reference#FGetObject) 176 | 177 | ### API Reference : Object Operations 178 | 179 | * [`GetObject`](https://docs.minio.io/docs/golang-client-api-reference#GetObject) 180 | * [`PutObject`](https://docs.minio.io/docs/golang-client-api-reference#PutObject) 181 | * [`StatObject`](https://docs.minio.io/docs/golang-client-api-reference#StatObject) 182 | * [`CopyObject`](https://docs.minio.io/docs/golang-client-api-reference#CopyObject) 183 | * [`RemoveObject`](https://docs.minio.io/docs/golang-client-api-reference#RemoveObject) 184 | * [`RemoveObjects`](https://docs.minio.io/docs/golang-client-api-reference#RemoveObjects) 185 | * [`RemoveIncompleteUpload`](https://docs.minio.io/docs/golang-client-api-reference#RemoveIncompleteUpload) 186 | 187 | ### API Reference : Presigned Operations 188 | 189 | * [`PresignedGetObject`](https://docs.minio.io/docs/golang-client-api-reference#PresignedGetObject) 190 | * [`PresignedPutObject`](https://docs.minio.io/docs/golang-client-api-reference#PresignedPutObject) 191 | * [`PresignedPostPolicy`](https://docs.minio.io/docs/golang-client-api-reference#PresignedPostPolicy) 192 | 193 | ### API Reference : Client custom settings 194 | * [`SetAppInfo`](http://docs.minio.io/docs/golang-client-api-reference#SetAppInfo) 195 | * [`SetCustomTransport`](http://docs.minio.io/docs/golang-client-api-reference#SetCustomTransport) 196 | * [`TraceOn`](http://docs.minio.io/docs/golang-client-api-reference#TraceOn) 197 | * [`TraceOff`](http://docs.minio.io/docs/golang-client-api-reference#TraceOff) 198 | 199 | 200 | ## Full Examples 201 | 202 | #### Full
Examples : Bucket Operations 203 | 204 | * [makebucket.go](https://github.com/minio/minio-go/blob/master/examples/s3/makebucket.go) 205 | * [listbuckets.go](https://github.com/minio/minio-go/blob/master/examples/s3/listbuckets.go) 206 | * [bucketexists.go](https://github.com/minio/minio-go/blob/master/examples/s3/bucketexists.go) 207 | * [removebucket.go](https://github.com/minio/minio-go/blob/master/examples/s3/removebucket.go) 208 | * [listobjects.go](https://github.com/minio/minio-go/blob/master/examples/s3/listobjects.go) 209 | * [listobjectsV2.go](https://github.com/minio/minio-go/blob/master/examples/s3/listobjectsV2.go) 210 | * [listincompleteuploads.go](https://github.com/minio/minio-go/blob/master/examples/s3/listincompleteuploads.go) 211 | 212 | #### Full Examples : Bucket policy Operations 213 | 214 | * [setbucketpolicy.go](https://github.com/minio/minio-go/blob/master/examples/s3/setbucketpolicy.go) 215 | * [getbucketpolicy.go](https://github.com/minio/minio-go/blob/master/examples/s3/getbucketpolicy.go) 216 | * [listbucketpolicies.go](https://github.com/minio/minio-go/blob/master/examples/s3/listbucketpolicies.go) 217 | 218 | #### Full Examples : Bucket notification Operations 219 | 220 | * [setbucketnotification.go](https://github.com/minio/minio-go/blob/master/examples/s3/setbucketnotification.go) 221 | * [getbucketnotification.go](https://github.com/minio/minio-go/blob/master/examples/s3/getbucketnotification.go) 222 | * [removeallbucketnotification.go](https://github.com/minio/minio-go/blob/master/examples/s3/removeallbucketnotification.go) 223 | * [listenbucketnotification.go](https://github.com/minio/minio-go/blob/master/examples/minio/listenbucketnotification.go) (Minio Extension) 224 | 225 | #### Full Examples : File Object Operations 226 | 227 | * [fputobject.go](https://github.com/minio/minio-go/blob/master/examples/s3/fputobject.go) 228 | * [fgetobject.go](https://github.com/minio/minio-go/blob/master/examples/s3/fgetobject.go) 229 | 230 | #### Full Examples : Object Operations 231 | 232 | * [putobject.go](https://github.com/minio/minio-go/blob/master/examples/s3/putobject.go) 233 | * [getobject.go](https://github.com/minio/minio-go/blob/master/examples/s3/getobject.go) 234 | * [statobject.go](https://github.com/minio/minio-go/blob/master/examples/s3/statobject.go) 235 | * [copyobject.go](https://github.com/minio/minio-go/blob/master/examples/s3/copyobject.go) 236 | * [removeobject.go](https://github.com/minio/minio-go/blob/master/examples/s3/removeobject.go) 237 | * [removeincompleteupload.go](https://github.com/minio/minio-go/blob/master/examples/s3/removeincompleteupload.go) 238 | * [removeobjects.go](https://github.com/minio/minio-go/blob/master/examples/s3/removeobjects.go) 239 | 240 | #### Full Examples : Presigned Operations 241 | * [presignedgetobject.go](https://github.com/minio/minio-go/blob/master/examples/s3/presignedgetobject.go) 242 | * [presignedputobject.go](https://github.com/minio/minio-go/blob/master/examples/s3/presignedputobject.go) 243 | * [presignedpostpolicy.go](https://github.com/minio/minio-go/blob/master/examples/s3/presignedpostpolicy.go) 244 | 245 | ## Explore Further 246 | * [Complete Documentation](https://docs.minio.io) 247 | * [Minio Go Client SDK API Reference](https://docs.minio.io/docs/golang-client-api-reference) 248 | * [Go Music Player App- Full Application Example ](https://docs.minio.io/docs/go-music-player-app) 249 | 250 | ## Contribute 251 | 252 | [Contributors 
Guide](https://github.com/minio/minio-go/blob/master/CONTRIBUTING.md) 253 | 254 | [![Build Status](https://travis-ci.org/minio/minio-go.svg)](https://travis-ci.org/minio/minio-go) 255 | [![Build status](https://ci.appveyor.com/api/projects/status/1d05e6nvxcelmrak?svg=true)](https://ci.appveyor.com/project/harshavardhana/minio-go) 256 | 257 | -------------------------------------------------------------------------------- /exec-concurrent/vendor/github.com/minio/minio-go/api-datatypes.go: -------------------------------------------------------------------------------- 1 | /* 2 | * Minio Go Library for Amazon S3 Compatible Cloud Storage (C) 2015 Minio, Inc. 3 | * 4 | * Licensed under the Apache License, Version 2.0 (the "License"); 5 | * you may not use this file except in compliance with the License. 6 | * You may obtain a copy of the License at 7 | * 8 | * http://www.apache.org/licenses/LICENSE-2.0 9 | * 10 | * Unless required by applicable law or agreed to in writing, software 11 | * distributed under the License is distributed on an "AS IS" BASIS, 12 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | * See the License for the specific language governing permissions and 14 | * limitations under the License. 15 | */ 16 | 17 | package minio 18 | 19 | import ( 20 | "net/http" 21 | "time" 22 | ) 23 | 24 | // BucketInfo container for bucket metadata. 25 | type BucketInfo struct { 26 | // The name of the bucket. 27 | Name string `json:"name"` 28 | // Date the bucket was created. 29 | CreationDate time.Time `json:"creationDate"` 30 | } 31 | 32 | // ObjectInfo container for object metadata. 33 | type ObjectInfo struct { 34 | // An ETag is optionally set to md5sum of an object. In case of multipart objects, 35 | // ETag is of the form MD5SUM-N where MD5SUM is md5sum of all individual md5sums of 36 | // each parts concatenated into one string. 37 | ETag string `json:"etag"` 38 | 39 | Key string `json:"name"` // Name of the object 40 | LastModified time.Time `json:"lastModified"` // Date and time the object was last modified. 41 | Size int64 `json:"size"` // Size in bytes of the object. 42 | ContentType string `json:"contentType"` // A standard MIME type describing the format of the object data. 43 | 44 | // Collection of additional metadata on the object. 45 | // eg: x-amz-meta-*, content-encoding etc. 46 | Metadata http.Header `json:"metadata"` 47 | 48 | // Owner name. 49 | Owner struct { 50 | DisplayName string `json:"name"` 51 | ID string `json:"id"` 52 | } `json:"owner"` 53 | 54 | // The class of storage used to store the object. 55 | StorageClass string `json:"storageClass"` 56 | 57 | // Error 58 | Err error `json:"-"` 59 | } 60 | 61 | // ObjectMultipartInfo container for multipart object metadata. 62 | type ObjectMultipartInfo struct { 63 | // Date and time at which the multipart upload was initiated. 64 | Initiated time.Time `type:"timestamp" timestampFormat:"iso8601"` 65 | 66 | Initiator initiator 67 | Owner owner 68 | 69 | // The type of storage to use for the object. Defaults to 'STANDARD'. 70 | StorageClass string 71 | 72 | // Key of the object for which the multipart upload was initiated. 73 | Key string 74 | 75 | // Size in bytes of the object. 76 | Size int64 77 | 78 | // Upload ID that identifies the multipart upload. 
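	// The server serializes this field as "UploadId" in its XML listing
	// response, hence the xml tag on the field below.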
79 | UploadID string `xml:"UploadId"` 80 | 81 | // Error 82 | Err error 83 | } 84 | -------------------------------------------------------------------------------- /exec-concurrent/vendor/github.com/minio/minio-go/api-error-response.go: -------------------------------------------------------------------------------- 1 | /* 2 | * Minio Go Library for Amazon S3 Compatible Cloud Storage (C) 2015 Minio, Inc. 3 | * 4 | * Licensed under the Apache License, Version 2.0 (the "License"); 5 | * you may not use this file except in compliance with the License. 6 | * You may obtain a copy of the License at 7 | * 8 | * http://www.apache.org/licenses/LICENSE-2.0 9 | * 10 | * Unless required by applicable law or agreed to in writing, software 11 | * distributed under the License is distributed on an "AS IS" BASIS, 12 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | * See the License for the specific language governing permissions and 14 | * limitations under the License. 15 | */ 16 | 17 | package minio 18 | 19 | import ( 20 | "encoding/xml" 21 | "fmt" 22 | "net/http" 23 | "strconv" 24 | ) 25 | 26 | /* **** SAMPLE ERROR RESPONSE **** 27 | 28 | 29 | AccessDenied 30 | Access Denied 31 | bucketName 32 | objectName 33 | F19772218238A85A 34 | GuWkjyviSiGHizehqpmsD1ndz5NClSP19DOT+s2mv7gXGQ8/X1lhbDGiIJEXpGFD 35 | 36 | */ 37 | 38 | // ErrorResponse - Is the typed error returned by all API operations. 39 | type ErrorResponse struct { 40 | XMLName xml.Name `xml:"Error" json:"-"` 41 | Code string 42 | Message string 43 | BucketName string 44 | Key string 45 | RequestID string `xml:"RequestId"` 46 | HostID string `xml:"HostId"` 47 | 48 | // Region where the bucket is located. This header is returned 49 | // only in HEAD bucket and ListObjects response. 50 | Region string 51 | } 52 | 53 | // ToErrorResponse - Returns parsed ErrorResponse struct from body and 54 | // http headers. 55 | // 56 | // For example: 57 | // 58 | // import s3 "github.com/minio/minio-go" 59 | // ... 60 | // ... 61 | // reader, stat, err := s3.GetObject(...) 62 | // if err != nil { 63 | // resp := s3.ToErrorResponse(err) 64 | // } 65 | // ... 66 | func ToErrorResponse(err error) ErrorResponse { 67 | switch err := err.(type) { 68 | case ErrorResponse: 69 | return err 70 | default: 71 | return ErrorResponse{} 72 | } 73 | } 74 | 75 | // Error - Returns HTTP error string 76 | func (e ErrorResponse) Error() string { 77 | return e.Message 78 | } 79 | 80 | // Common string for errors to report issue location in unexpected 81 | // cases. 82 | const ( 83 | reportIssue = "Please report this issue at https://github.com/minio/minio-go/issues." 84 | ) 85 | 86 | // httpRespToErrorResponse returns a new encoded ErrorResponse 87 | // structure as error. 88 | func httpRespToErrorResponse(resp *http.Response, bucketName, objectName string) error { 89 | if resp == nil { 90 | msg := "Response is empty. " + reportIssue 91 | return ErrInvalidArgument(msg) 92 | } 93 | var errResp ErrorResponse 94 | err := xmlDecoder(resp.Body, &errResp) 95 | // Xml decoding failed with no body, fall back to HTTP headers. 
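	// Each case below synthesizes an ErrorResponse from the HTTP status code
	// alone, recovering the request ID, host ID and bucket region from the
	// x-amz-request-id, x-amz-id-2 and x-amz-bucket-region response headers.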
96 | if err != nil { 97 | switch resp.StatusCode { 98 | case http.StatusNotFound: 99 | if objectName == "" { 100 | errResp = ErrorResponse{ 101 | Code: "NoSuchBucket", 102 | Message: "The specified bucket does not exist.", 103 | BucketName: bucketName, 104 | RequestID: resp.Header.Get("x-amz-request-id"), 105 | HostID: resp.Header.Get("x-amz-id-2"), 106 | Region: resp.Header.Get("x-amz-bucket-region"), 107 | } 108 | } else { 109 | errResp = ErrorResponse{ 110 | Code: "NoSuchKey", 111 | Message: "The specified key does not exist.", 112 | BucketName: bucketName, 113 | Key: objectName, 114 | RequestID: resp.Header.Get("x-amz-request-id"), 115 | HostID: resp.Header.Get("x-amz-id-2"), 116 | Region: resp.Header.Get("x-amz-bucket-region"), 117 | } 118 | } 119 | case http.StatusForbidden: 120 | errResp = ErrorResponse{ 121 | Code: "AccessDenied", 122 | Message: "Access Denied.", 123 | BucketName: bucketName, 124 | Key: objectName, 125 | RequestID: resp.Header.Get("x-amz-request-id"), 126 | HostID: resp.Header.Get("x-amz-id-2"), 127 | Region: resp.Header.Get("x-amz-bucket-region"), 128 | } 129 | case http.StatusConflict: 130 | errResp = ErrorResponse{ 131 | Code: "Conflict", 132 | Message: "Bucket not empty.", 133 | BucketName: bucketName, 134 | RequestID: resp.Header.Get("x-amz-request-id"), 135 | HostID: resp.Header.Get("x-amz-id-2"), 136 | Region: resp.Header.Get("x-amz-bucket-region"), 137 | } 138 | default: 139 | errResp = ErrorResponse{ 140 | Code: resp.Status, 141 | Message: resp.Status, 142 | BucketName: bucketName, 143 | RequestID: resp.Header.Get("x-amz-request-id"), 144 | HostID: resp.Header.Get("x-amz-id-2"), 145 | Region: resp.Header.Get("x-amz-bucket-region"), 146 | } 147 | } 148 | } 149 | return errResp 150 | } 151 | 152 | // ErrEntityTooLarge - Input size is larger than supported maximum. 153 | func ErrEntityTooLarge(totalSize, maxObjectSize int64, bucketName, objectName string) error { 154 | msg := fmt.Sprintf("Your proposed upload size ‘%d’ exceeds the maximum allowed object size ‘%d’ for single PUT operation.", totalSize, maxObjectSize) 155 | return ErrorResponse{ 156 | Code: "EntityTooLarge", 157 | Message: msg, 158 | BucketName: bucketName, 159 | Key: objectName, 160 | } 161 | } 162 | 163 | // ErrEntityTooSmall - Input size is smaller than supported minimum. 164 | func ErrEntityTooSmall(totalSize int64, bucketName, objectName string) error { 165 | msg := fmt.Sprintf("Your proposed upload size ‘%d’ is below the minimum allowed object size '0B' for single PUT operation.", totalSize) 166 | return ErrorResponse{ 167 | Code: "EntityTooLarge", 168 | Message: msg, 169 | BucketName: bucketName, 170 | Key: objectName, 171 | } 172 | } 173 | 174 | // ErrUnexpectedEOF - Unexpected end of file reached. 175 | func ErrUnexpectedEOF(totalRead, totalSize int64, bucketName, objectName string) error { 176 | msg := fmt.Sprintf("Data read ‘%s’ is not equal to the size ‘%s’ of the input Reader.", 177 | strconv.FormatInt(totalRead, 10), strconv.FormatInt(totalSize, 10)) 178 | return ErrorResponse{ 179 | Code: "UnexpectedEOF", 180 | Message: msg, 181 | BucketName: bucketName, 182 | Key: objectName, 183 | } 184 | } 185 | 186 | // ErrInvalidBucketName - Invalid bucket name response. 187 | func ErrInvalidBucketName(message string) error { 188 | return ErrorResponse{ 189 | Code: "InvalidBucketName", 190 | Message: message, 191 | RequestID: "minio", 192 | } 193 | } 194 | 195 | // ErrInvalidObjectName - Invalid object name response. 
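// Reported with the S3 error code "NoSuchKey" and a synthetic request ID of
// "minio", since the error is generated client-side rather than by the server.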
196 | func ErrInvalidObjectName(message string) error { 197 | return ErrorResponse{ 198 | Code: "NoSuchKey", 199 | Message: message, 200 | RequestID: "minio", 201 | } 202 | } 203 | 204 | // ErrInvalidObjectPrefix - Invalid object prefix response is 205 | // similar to object name response. 206 | var ErrInvalidObjectPrefix = ErrInvalidObjectName 207 | 208 | // ErrInvalidArgument - Invalid argument response. 209 | func ErrInvalidArgument(message string) error { 210 | return ErrorResponse{ 211 | Code: "InvalidArgument", 212 | Message: message, 213 | RequestID: "minio", 214 | } 215 | } 216 | 217 | // ErrNoSuchBucketPolicy - No Such Bucket Policy response 218 | // The specified bucket does not have a bucket policy. 219 | func ErrNoSuchBucketPolicy(message string) error { 220 | return ErrorResponse{ 221 | Code: "NoSuchBucketPolicy", 222 | Message: message, 223 | RequestID: "minio", 224 | } 225 | } 226 | 227 | // ErrAPINotSupported - API not supported response 228 | // The specified API call is not supported 229 | func ErrAPINotSupported(message string) error { 230 | return ErrorResponse{ 231 | Code: "APINotSupported", 232 | Message: message, 233 | RequestID: "minio", 234 | } 235 | } 236 | -------------------------------------------------------------------------------- /exec-concurrent/vendor/github.com/minio/minio-go/api-get-object-file.go: -------------------------------------------------------------------------------- 1 | /* 2 | * Minio Go Library for Amazon S3 Compatible Cloud Storage (C) 2015 Minio, Inc. 3 | * 4 | * Licensed under the Apache License, Version 2.0 (the "License"); 5 | * you may not use this file except in compliance with the License. 6 | * You may obtain a copy of the License at 7 | * 8 | * http://www.apache.org/licenses/LICENSE-2.0 9 | * 10 | * Unless required by applicable law or agreed to in writing, software 11 | * distributed under the License is distributed on an "AS IS" BASIS, 12 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | * See the License for the specific language governing permissions and 14 | * limitations under the License. 15 | */ 16 | 17 | package minio 18 | 19 | import ( 20 | "io" 21 | "os" 22 | "path/filepath" 23 | ) 24 | 25 | // FGetObject - download contents of an object to a local file. 26 | func (c Client) FGetObject(bucketName, objectName, filePath string) error { 27 | // Input validation. 28 | if err := isValidBucketName(bucketName); err != nil { 29 | return err 30 | } 31 | if err := isValidObjectName(objectName); err != nil { 32 | return err 33 | } 34 | 35 | // Verify if destination already exists. 36 | st, err := os.Stat(filePath) 37 | if err == nil { 38 | // If the destination exists and is a directory. 39 | if st.IsDir() { 40 | return ErrInvalidArgument("fileName is a directory.") 41 | } 42 | } 43 | 44 | // Proceed if file does not exist. return for all other errors. 45 | if err != nil { 46 | if !os.IsNotExist(err) { 47 | return err 48 | } 49 | } 50 | 51 | // Extract top level directory. 52 | objectDir, _ := filepath.Split(filePath) 53 | if objectDir != "" { 54 | // Create any missing top level directories. 55 | if err := os.MkdirAll(objectDir, 0700); err != nil { 56 | return err 57 | } 58 | } 59 | 60 | // Gather md5sum. 61 | objectStat, err := c.StatObject(bucketName, objectName) 62 | if err != nil { 63 | return err 64 | } 65 | 66 | // Write to a temporary file "fileName.part.minio" before saving. 67 | filePartPath := filePath + objectStat.ETag + ".part.minio" 68 | 69 | // If exists, open in append mode. 
If not create it as a part file. 70 | filePart, err := os.OpenFile(filePartPath, os.O_CREATE|os.O_APPEND|os.O_WRONLY, 0600) 71 | if err != nil { 72 | return err 73 | } 74 | 75 | // Issue Stat to get the current offset. 76 | st, err = filePart.Stat() 77 | if err != nil { 78 | return err 79 | } 80 | 81 | // Seek to current position for incoming reader. 82 | objectReader, objectStat, err := c.getObject(bucketName, objectName, st.Size(), 0) 83 | if err != nil { 84 | return err 85 | } 86 | 87 | // Write to the part file. 88 | if _, err = io.CopyN(filePart, objectReader, objectStat.Size); err != nil { 89 | return err 90 | } 91 | 92 | // Close the file before rename, this is specifically needed for Windows users. 93 | if err = filePart.Close(); err != nil { 94 | return err 95 | } 96 | 97 | // Safely completed. Now commit by renaming to actual filename. 98 | if err = os.Rename(filePartPath, filePath); err != nil { 99 | return err 100 | } 101 | 102 | // Return. 103 | return nil 104 | } 105 | -------------------------------------------------------------------------------- /exec-concurrent/vendor/github.com/minio/minio-go/api-get-policy.go: -------------------------------------------------------------------------------- 1 | /* 2 | * Minio Go Library for Amazon S3 Compatible Cloud Storage (C) 2015, 2016 Minio, Inc. 3 | * 4 | * Licensed under the Apache License, Version 2.0 (the "License"); 5 | * you may not use this file except in compliance with the License. 6 | * You may obtain a copy of the License at 7 | * 8 | * http://www.apache.org/licenses/LICENSE-2.0 9 | * 10 | * Unless required by applicable law or agreed to in writing, software 11 | * distributed under the License is distributed on an "AS IS" BASIS, 12 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | * See the License for the specific language governing permissions and 14 | * limitations under the License. 15 | */ 16 | 17 | package minio 18 | 19 | import ( 20 | "encoding/json" 21 | "io/ioutil" 22 | "net/http" 23 | "net/url" 24 | 25 | "github.com/minio/minio-go/pkg/policy" 26 | ) 27 | 28 | // GetBucketPolicy - get bucket policy at a given path. 29 | func (c Client) GetBucketPolicy(bucketName, objectPrefix string) (bucketPolicy policy.BucketPolicy, err error) { 30 | // Input validation. 31 | if err := isValidBucketName(bucketName); err != nil { 32 | return policy.BucketPolicyNone, err 33 | } 34 | if err := isValidObjectPrefix(objectPrefix); err != nil { 35 | return policy.BucketPolicyNone, err 36 | } 37 | policyInfo, err := c.getBucketPolicy(bucketName, objectPrefix) 38 | if err != nil { 39 | return policy.BucketPolicyNone, err 40 | } 41 | return policy.GetPolicy(policyInfo.Statements, bucketName, objectPrefix), nil 42 | } 43 | 44 | // ListBucketPolicies - list all policies for a given prefix and all its children. 45 | func (c Client) ListBucketPolicies(bucketName, objectPrefix string) (bucketPolicies map[string]policy.BucketPolicy, err error) { 46 | // Input validation. 47 | if err := isValidBucketName(bucketName); err != nil { 48 | return map[string]policy.BucketPolicy{}, err 49 | } 50 | if err := isValidObjectPrefix(objectPrefix); err != nil { 51 | return map[string]policy.BucketPolicy{}, err 52 | } 53 | policyInfo, err := c.getBucketPolicy(bucketName, objectPrefix) 54 | if err != nil { 55 | return map[string]policy.BucketPolicy{}, err 56 | } 57 | return policy.GetPolicies(policyInfo.Statements, bucketName), nil 58 | } 59 | 60 | // Request server for current bucket policy. 
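// Issues a GET on the bucket with the "policy" query parameter and decodes the
// returned JSON policy document. A NoSuchBucketPolicy error from the server is
// translated into an empty policy rather than a failure.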
61 | func (c Client) getBucketPolicy(bucketName string, objectPrefix string) (policy.BucketAccessPolicy, error) { 62 | // Get resources properly escaped and lined up before 63 | // using them in http request. 64 | urlValues := make(url.Values) 65 | urlValues.Set("policy", "") 66 | 67 | // Execute GET on bucket to list objects. 68 | resp, err := c.executeMethod("GET", requestMetadata{ 69 | bucketName: bucketName, 70 | queryValues: urlValues, 71 | }) 72 | 73 | defer closeResponse(resp) 74 | if err != nil { 75 | return policy.BucketAccessPolicy{}, err 76 | } 77 | 78 | if resp != nil { 79 | if resp.StatusCode != http.StatusOK { 80 | errResponse := httpRespToErrorResponse(resp, bucketName, "") 81 | if ToErrorResponse(errResponse).Code == "NoSuchBucketPolicy" { 82 | return policy.BucketAccessPolicy{Version: "2012-10-17"}, nil 83 | } 84 | return policy.BucketAccessPolicy{}, errResponse 85 | } 86 | } 87 | bucketPolicyBuf, err := ioutil.ReadAll(resp.Body) 88 | if err != nil { 89 | return policy.BucketAccessPolicy{}, err 90 | } 91 | 92 | policy := policy.BucketAccessPolicy{} 93 | err = json.Unmarshal(bucketPolicyBuf, &policy) 94 | return policy, err 95 | } 96 | -------------------------------------------------------------------------------- /exec-concurrent/vendor/github.com/minio/minio-go/api-notification.go: -------------------------------------------------------------------------------- 1 | /* 2 | * Minio Go Library for Amazon S3 Compatible Cloud Storage (C) 2016 Minio, Inc. 3 | * 4 | * Licensed under the Apache License, Version 2.0 (the "License"); 5 | * you may not use this file except in compliance with the License. 6 | * You may obtain a copy of the License at 7 | * 8 | * http://www.apache.org/licenses/LICENSE-2.0 9 | * 10 | * Unless required by applicable law or agreed to in writing, software 11 | * distributed under the License is distributed on an "AS IS" BASIS, 12 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | * See the License for the specific language governing permissions and 14 | * limitations under the License. 15 | */ 16 | 17 | package minio 18 | 19 | import ( 20 | "bufio" 21 | "encoding/json" 22 | "io" 23 | "net/http" 24 | "net/url" 25 | "time" 26 | 27 | "github.com/minio/minio-go/pkg/s3utils" 28 | ) 29 | 30 | // GetBucketNotification - get bucket notification at a given path. 31 | func (c Client) GetBucketNotification(bucketName string) (bucketNotification BucketNotification, err error) { 32 | // Input validation. 33 | if err := isValidBucketName(bucketName); err != nil { 34 | return BucketNotification{}, err 35 | } 36 | notification, err := c.getBucketNotification(bucketName) 37 | if err != nil { 38 | return BucketNotification{}, err 39 | } 40 | return notification, nil 41 | } 42 | 43 | // Request server for notification rules. 44 | func (c Client) getBucketNotification(bucketName string) (BucketNotification, error) { 45 | urlValues := make(url.Values) 46 | urlValues.Set("notification", "") 47 | 48 | // Execute GET on bucket to list objects. 49 | resp, err := c.executeMethod("GET", requestMetadata{ 50 | bucketName: bucketName, 51 | queryValues: urlValues, 52 | }) 53 | 54 | defer closeResponse(resp) 55 | if err != nil { 56 | return BucketNotification{}, err 57 | } 58 | return processBucketNotificationResponse(bucketName, resp) 59 | 60 | } 61 | 62 | // processes the GetNotification http response from the server. 
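// A non-200 status is converted into an ErrorResponse; otherwise the XML body
// is decoded into a BucketNotification value.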
63 | func processBucketNotificationResponse(bucketName string, resp *http.Response) (BucketNotification, error) { 64 | if resp.StatusCode != http.StatusOK { 65 | errResponse := httpRespToErrorResponse(resp, bucketName, "") 66 | return BucketNotification{}, errResponse 67 | } 68 | var bucketNotification BucketNotification 69 | err := xmlDecoder(resp.Body, &bucketNotification) 70 | if err != nil { 71 | return BucketNotification{}, err 72 | } 73 | return bucketNotification, nil 74 | } 75 | 76 | // Identity represents the user ID, this is a compliance field. 77 | type identity struct { 78 | PrincipalID string `json:"principalId"` 79 | } 80 | 81 | // Notification event bucket metadata. 82 | type bucketMeta struct { 83 | Name string `json:"name"` 84 | OwnerIdentity identity `json:"ownerIdentity"` 85 | ARN string `json:"arn"` 86 | } 87 | 88 | // Notification event object metadata. 89 | type objectMeta struct { 90 | Key string `json:"key"` 91 | Size int64 `json:"size,omitempty"` 92 | ETag string `json:"eTag,omitempty"` 93 | VersionID string `json:"versionId,omitempty"` 94 | Sequencer string `json:"sequencer"` 95 | } 96 | 97 | // Notification event server specific metadata. 98 | type eventMeta struct { 99 | SchemaVersion string `json:"s3SchemaVersion"` 100 | ConfigurationID string `json:"configurationId"` 101 | Bucket bucketMeta `json:"bucket"` 102 | Object objectMeta `json:"object"` 103 | } 104 | 105 | // NotificationEvent represents an Amazon S3 bucket notification event. 106 | type NotificationEvent struct { 107 | EventVersion string `json:"eventVersion"` 108 | EventSource string `json:"eventSource"` 109 | AwsRegion string `json:"awsRegion"` 110 | EventTime string `json:"eventTime"` 111 | EventName string `json:"eventName"` 112 | UserIdentity identity `json:"userIdentity"` 113 | RequestParameters map[string]string `json:"requestParameters"` 114 | ResponseElements map[string]string `json:"responseElements"` 115 | S3 eventMeta `json:"s3"` 116 | } 117 | 118 | // NotificationInfo - represents the collection of notification events, additionally 119 | // also reports errors if any while listening on bucket notifications. 120 | type NotificationInfo struct { 121 | Records []NotificationEvent 122 | Err error 123 | } 124 | 125 | // ListenBucketNotification - listen on bucket notifications. 126 | func (c Client) ListenBucketNotification(bucketName, prefix, suffix string, events []string, doneCh <-chan struct{}) <-chan NotificationInfo { 127 | notificationInfoCh := make(chan NotificationInfo, 1) 128 | // Start a routine that issues the listen request and reads the response line by line. 129 | go func(notificationInfoCh chan<- NotificationInfo) { 130 | defer close(notificationInfoCh) 131 | 132 | // Validate the bucket name. 133 | if err := isValidBucketName(bucketName); err != nil { 134 | notificationInfoCh <- NotificationInfo{ 135 | Err: err, 136 | } 137 | return 138 | } 139 | 140 | // Check ARN partition to verify if listening bucket is supported 141 | if s3utils.IsAmazonEndpoint(c.endpointURL) || s3utils.IsGoogleEndpoint(c.endpointURL) { 142 | notificationInfoCh <- NotificationInfo{ 143 | Err: ErrAPINotSupported("Listening bucket notification is specific only to `minio` partitions"), 144 | } 145 | return 146 | } 147 | 148 | // Continuously run and listen on bucket notification. 149 | // Create a done channel to control the retry go routine. 150 | retryDoneCh := make(chan struct{}, 1) 151 | 152 | // Indicate to our routine to exit cleanly upon return.
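		// Closing retryDoneCh stops the jittered retry timer below, which
		// otherwise keeps re-issuing the listen request with delays of
		// between 1 and 30 seconds.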
153 | 		defer close(retryDoneCh)
154 | 
155 | 		// Wait on the jitter retry loop.
156 | 		for range c.newRetryTimerContinous(time.Second, time.Second*30, MaxJitter, retryDoneCh) {
157 | 			urlValues := make(url.Values)
158 | 			urlValues.Set("prefix", prefix)
159 | 			urlValues.Set("suffix", suffix)
160 | 			urlValues["events"] = events
161 | 
162 | 			// Execute GET on bucket to listen for its notifications.
163 | 			resp, err := c.executeMethod("GET", requestMetadata{
164 | 				bucketName:  bucketName,
165 | 				queryValues: urlValues,
166 | 			})
167 | 			if err != nil {
168 | 				continue
169 | 			}
170 | 
171 | 			// Validate http response, upon error return quickly.
172 | 			if resp.StatusCode != http.StatusOK {
173 | 				errResponse := httpRespToErrorResponse(resp, bucketName, "")
174 | 				notificationInfoCh <- NotificationInfo{
175 | 					Err: errResponse,
176 | 				}
177 | 				return
178 | 			}
179 | 
180 | 			// Initialize a new bufio scanner, to read line by line.
181 | 			bio := bufio.NewScanner(resp.Body)
182 | 
183 | 			// Close the response body.
184 | 			defer resp.Body.Close()
185 | 
186 | 			// Unmarshal each line; lines that fail to decode are skipped.
187 | 			for bio.Scan() {
188 | 				var notificationInfo NotificationInfo
189 | 				if err = json.Unmarshal(bio.Bytes(), &notificationInfo); err != nil {
190 | 					continue
191 | 				}
192 | 				// Send notifications on channel only if there are events received.
193 | 				if len(notificationInfo.Records) > 0 {
194 | 					select {
195 | 					case notificationInfoCh <- notificationInfo:
196 | 					case <-doneCh:
197 | 						return
198 | 					}
199 | 				}
200 | 			}
201 | 			// Look for any underlying errors.
202 | 			if err = bio.Err(); err != nil {
203 | 				// For an unexpected connection drop from server, we close the body
204 | 				// and re-connect.
205 | 				if err == io.ErrUnexpectedEOF {
206 | 					resp.Body.Close()
207 | 				}
208 | 			}
209 | 		}
210 | 	}(notificationInfoCh)
211 | 
212 | 	// Returns the notification info channel, for caller to start reading from.
213 | 	return notificationInfoCh
214 | }
215 | 
--------------------------------------------------------------------------------
/exec-concurrent/vendor/github.com/minio/minio-go/api-presigned.go:
--------------------------------------------------------------------------------
1 | /*
2 |  * Minio Go Library for Amazon S3 Compatible Cloud Storage (C) 2015, 2016 Minio, Inc.
3 |  *
4 |  * Licensed under the Apache License, Version 2.0 (the "License");
5 |  * you may not use this file except in compliance with the License.
6 |  * You may obtain a copy of the License at
7 |  *
8 |  *     http://www.apache.org/licenses/LICENSE-2.0
9 |  *
10 |  * Unless required by applicable law or agreed to in writing, software
11 |  * distributed under the License is distributed on an "AS IS" BASIS,
12 |  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 |  * See the License for the specific language governing permissions and
14 |  * limitations under the License.
15 |  */
16 | 
17 | package minio
18 | 
19 | import (
20 | 	"errors"
21 | 	"net/url"
22 | 	"time"
23 | 
24 | 	"github.com/minio/minio-go/pkg/s3signer"
25 | 	"github.com/minio/minio-go/pkg/s3utils"
26 | )
27 | 
28 | // supportedGetReqParams - supported request parameters for GET presigned request.
29 | var supportedGetReqParams = map[string]struct{}{
30 | 	"response-expires":             {},
31 | 	"response-content-type":        {},
32 | 	"response-cache-control":       {},
33 | 	"response-content-language":    {},
34 | 	"response-content-encoding":    {},
35 | 	"response-content-disposition": {},
36 | }
37 | 
38 | // presignURL - Returns a presigned URL for an input 'method'.
39 | // Expires maximum is 7 days, i.e. 604800 seconds, and minimum is 1 second.
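   | //
   | // A hypothetical usage sketch (bucket and object names are for illustration):
   | //
   | //	u, err := c.presignURL("GET", "mybucket", "photo.jpg", time.Hour, nil)
   | //	if err == nil {
   | //		fmt.Println(u.String()) // shareable URL, valid for one hour
   | //	}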
40 | func (c Client) presignURL(method string, bucketName string, objectName string, expires time.Duration, reqParams url.Values) (u *url.URL, err error) {
41 | 	// Input validation.
42 | 	if method == "" {
43 | 		return nil, ErrInvalidArgument("method cannot be empty.")
44 | 	}
45 | 	if err := isValidBucketName(bucketName); err != nil {
46 | 		return nil, err
47 | 	}
48 | 	if err := isValidObjectName(objectName); err != nil {
49 | 		return nil, err
50 | 	}
51 | 	if err := isValidExpiry(expires); err != nil {
52 | 		return nil, err
53 | 	}
54 | 
55 | 	// Convert expires into seconds.
56 | 	expireSeconds := int64(expires / time.Second)
57 | 	reqMetadata := requestMetadata{
58 | 		presignURL: true,
59 | 		bucketName: bucketName,
60 | 		objectName: objectName,
61 | 		expires:    expireSeconds,
62 | 	}
63 | 
64 | 	// For "GET" we are handling additional request parameters to
65 | 	// override its response headers.
66 | 	if method == "GET" {
67 | 		// Verify if input map has unsupported params, if yes exit.
68 | 		for k := range reqParams {
69 | 			if _, ok := supportedGetReqParams[k]; !ok {
70 | 				return nil, ErrInvalidArgument(k + " unsupported request parameter for presigned GET.")
71 | 			}
72 | 		}
73 | 		// Save the request parameters to be used in presigning for GET request.
74 | 		reqMetadata.queryValues = reqParams
75 | 	}
76 | 
77 | 	// Instantiate a new request.
78 | 	// Since expires is set newRequest will presign the request.
79 | 	req, err := c.newRequest(method, reqMetadata)
80 | 	if err != nil {
81 | 		return nil, err
82 | 	}
83 | 	return req.URL, nil
84 | }
85 | 
86 | // PresignedGetObject - Returns a presigned URL to access an object
87 | // without credentials. Expires maximum is 7 days, i.e. 604800 seconds,
88 | // and minimum is 1 second. Additionally you can override a set of response
89 | // headers using the query parameters.
90 | func (c Client) PresignedGetObject(bucketName string, objectName string, expires time.Duration, reqParams url.Values) (u *url.URL, err error) {
91 | 	return c.presignURL("GET", bucketName, objectName, expires, reqParams)
92 | }
93 | 
94 | // PresignedPutObject - Returns a presigned URL to upload an object without credentials.
95 | // Expires maximum is 7 days, i.e. 604800 seconds, and minimum is 1 second.
96 | func (c Client) PresignedPutObject(bucketName string, objectName string, expires time.Duration) (u *url.URL, err error) {
97 | 	return c.presignURL("PUT", bucketName, objectName, expires, nil)
98 | }
99 | 
100 | // PresignedPostPolicy - Returns a POST URL and form data that can be used to upload an object.
101 | func (c Client) PresignedPostPolicy(p *PostPolicy) (u *url.URL, formData map[string]string, err error) {
102 | 	// Validate input arguments.
103 | 	if p.expiration.IsZero() {
104 | 		return nil, nil, errors.New("Expiration time must be specified")
105 | 	}
106 | 	if _, ok := p.formData["key"]; !ok {
107 | 		return nil, nil, errors.New("object key must be specified")
108 | 	}
109 | 	if _, ok := p.formData["bucket"]; !ok {
110 | 		return nil, nil, errors.New("bucket name must be specified")
111 | 	}
112 | 
113 | 	bucketName := p.formData["bucket"]
114 | 	// Fetch the bucket location.
115 | 	location, err := c.getBucketLocation(bucketName)
116 | 	if err != nil {
117 | 		return nil, nil, err
118 | 	}
119 | 
120 | 	u, err = c.makeTargetURL(bucketName, "", location, nil)
121 | 	if err != nil {
122 | 		return nil, nil, err
123 | 	}
124 | 
125 | 	// Keep time.
126 | 	t := time.Now().UTC()
127 | 	// For signature version '2' handle here.
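   | 	// A V2 POST policy needs only the base64-encoded policy document and the
   | 	// access key; the V4 path below additionally pins date, algorithm and
   | 	// credential conditions into the policy before signing.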
128 | if c.signature.isV2() { 129 | policyBase64 := p.base64() 130 | p.formData["policy"] = policyBase64 131 | // For Google endpoint set this value to be 'GoogleAccessId'. 132 | if s3utils.IsGoogleEndpoint(c.endpointURL) { 133 | p.formData["GoogleAccessId"] = c.accessKeyID 134 | } else { 135 | // For all other endpoints set this value to be 'AWSAccessKeyId'. 136 | p.formData["AWSAccessKeyId"] = c.accessKeyID 137 | } 138 | // Sign the policy. 139 | p.formData["signature"] = s3signer.PostPresignSignatureV2(policyBase64, c.secretAccessKey) 140 | return u, p.formData, nil 141 | } 142 | 143 | // Add date policy. 144 | if err = p.addNewPolicy(policyCondition{ 145 | matchType: "eq", 146 | condition: "$x-amz-date", 147 | value: t.Format(iso8601DateFormat), 148 | }); err != nil { 149 | return nil, nil, err 150 | } 151 | 152 | // Add algorithm policy. 153 | if err = p.addNewPolicy(policyCondition{ 154 | matchType: "eq", 155 | condition: "$x-amz-algorithm", 156 | value: signV4Algorithm, 157 | }); err != nil { 158 | return nil, nil, err 159 | } 160 | 161 | // Add a credential policy. 162 | credential := s3signer.GetCredential(c.accessKeyID, location, t) 163 | if err = p.addNewPolicy(policyCondition{ 164 | matchType: "eq", 165 | condition: "$x-amz-credential", 166 | value: credential, 167 | }); err != nil { 168 | return nil, nil, err 169 | } 170 | 171 | // Get base64 encoded policy. 172 | policyBase64 := p.base64() 173 | // Fill in the form data. 174 | p.formData["policy"] = policyBase64 175 | p.formData["x-amz-algorithm"] = signV4Algorithm 176 | p.formData["x-amz-credential"] = credential 177 | p.formData["x-amz-date"] = t.Format(iso8601DateFormat) 178 | p.formData["x-amz-signature"] = s3signer.PostPresignSignatureV4(policyBase64, t, c.secretAccessKey, location) 179 | return u, p.formData, nil 180 | } 181 | -------------------------------------------------------------------------------- /exec-concurrent/vendor/github.com/minio/minio-go/api-put-bucket.go: -------------------------------------------------------------------------------- 1 | /* 2 | * Minio Go Library for Amazon S3 Compatible Cloud Storage (C) 2015, 2016 Minio, Inc. 3 | * 4 | * Licensed under the Apache License, Version 2.0 (the "License"); 5 | * you may not use this file except in compliance with the License. 6 | * You may obtain a copy of the License at 7 | * 8 | * http://www.apache.org/licenses/LICENSE-2.0 9 | * 10 | * Unless required by applicable law or agreed to in writing, software 11 | * distributed under the License is distributed on an "AS IS" BASIS, 12 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | * See the License for the specific language governing permissions and 14 | * limitations under the License. 15 | */ 16 | 17 | package minio 18 | 19 | import ( 20 | "bytes" 21 | "encoding/base64" 22 | "encoding/hex" 23 | "encoding/json" 24 | "encoding/xml" 25 | "fmt" 26 | "io/ioutil" 27 | "net/http" 28 | "net/url" 29 | "path" 30 | 31 | "github.com/minio/minio-go/pkg/policy" 32 | "github.com/minio/minio-go/pkg/s3signer" 33 | ) 34 | 35 | /// Bucket operations 36 | 37 | // MakeBucket creates a new bucket with bucketName. 38 | // 39 | // Location is an optional argument, by default all buckets are 40 | // created in US Standard Region. 
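   | // Passing an empty string for location is treated as 'us-east-1' (see below).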
41 | //
42 | // For more supported Amazon S3 regions, see http://docs.aws.amazon.com/general/latest/gr/rande.html
43 | // For more supported Google Cloud Storage regions, see https://cloud.google.com/storage/docs/bucket-locations
44 | func (c Client) MakeBucket(bucketName string, location string) error {
45 | 	// Validate the input arguments.
46 | 	if err := isValidBucketName(bucketName); err != nil {
47 | 		return err
48 | 	}
49 | 
50 | 	// If location is empty, treat it as the default region 'us-east-1'.
51 | 	if location == "" {
52 | 		location = "us-east-1"
53 | 	}
54 | 
55 | 	// Instantiate the request.
56 | 	req, err := c.makeBucketRequest(bucketName, location)
57 | 	if err != nil {
58 | 		return err
59 | 	}
60 | 
61 | 	// Execute the request.
62 | 	resp, err := c.do(req)
63 | 	defer closeResponse(resp)
64 | 	if err != nil {
65 | 		return err
66 | 	}
67 | 
68 | 	if resp != nil {
69 | 		if resp.StatusCode != http.StatusOK {
70 | 			return httpRespToErrorResponse(resp, bucketName, "")
71 | 		}
72 | 	}
73 | 
74 | 	// Save the location into cache on a successful makeBucket response.
75 | 	c.bucketLocCache.Set(bucketName, location)
76 | 
77 | 	// Return.
78 | 	return nil
79 | }
80 | 
81 | // makeBucketRequest constructs request for makeBucket.
82 | func (c Client) makeBucketRequest(bucketName string, location string) (*http.Request, error) {
83 | 	// Validate input arguments.
84 | 	if err := isValidBucketName(bucketName); err != nil {
85 | 		return nil, err
86 | 	}
87 | 
88 | 	// In case of Amazon S3, a make-bucket request issued on an already
89 | 	// existing bucket would fail with an 'AuthorizationMalformed' error
90 | 	// if virtual style is used. So we default to 'path style' as that
91 | 	// is the preferred method here. The final location of the
92 | 	// 'bucket' is provided through XML LocationConstraint data with
93 | 	// the request.
94 | 	targetURL := c.endpointURL
95 | 	targetURL.Path = path.Join(bucketName, "") + "/"
96 | 
97 | 	// get a new HTTP request for the method.
98 | 	req, err := http.NewRequest("PUT", targetURL.String(), nil)
99 | 	if err != nil {
100 | 		return nil, err
101 | 	}
102 | 
103 | 	// set UserAgent for the request.
104 | 	c.setUserAgent(req)
105 | 
106 | 	// set sha256 sum for signature calculation only with signature version '4'.
107 | 	if c.signature.isV4() {
108 | 		req.Header.Set("X-Amz-Content-Sha256", hex.EncodeToString(sum256([]byte{})))
109 | 	}
110 | 
111 | 	// If location is not 'us-east-1' create bucket location config.
112 | 	if location != "us-east-1" && location != "" {
113 | 		createBucketConfig := createBucketConfiguration{}
114 | 		createBucketConfig.Location = location
115 | 		var createBucketConfigBytes []byte
116 | 		createBucketConfigBytes, err = xml.Marshal(createBucketConfig)
117 | 		if err != nil {
118 | 			return nil, err
119 | 		}
120 | 		createBucketConfigBuffer := bytes.NewBuffer(createBucketConfigBytes)
121 | 		req.Body = ioutil.NopCloser(createBucketConfigBuffer)
122 | 		req.ContentLength = int64(len(createBucketConfigBytes))
123 | 		// Set content-md5.
124 | 		req.Header.Set("Content-Md5", base64.StdEncoding.EncodeToString(sumMD5(createBucketConfigBytes)))
125 | 		if c.signature.isV4() {
126 | 			// Set sha256.
127 | 			req.Header.Set("X-Amz-Content-Sha256", hex.EncodeToString(sum256(createBucketConfigBytes)))
128 | 		}
129 | 	}
130 | 
131 | 	// Sign the request.
132 | 	if c.signature.isV4() {
133 | 		// Signature calculated for MakeBucket request should be for 'us-east-1',
134 | 		// regardless of the bucket's location constraint.
135 | 		req = s3signer.SignV4(*req, c.accessKeyID, c.secretAccessKey, "us-east-1")
136 | 	} else if c.signature.isV2() {
137 | 		req = s3signer.SignV2(*req, c.accessKeyID, c.secretAccessKey)
138 | 	}
139 | 
140 | 	// Return signed request.
141 | 	return req, nil
142 | }
143 | 
144 | // SetBucketPolicy sets the access permissions on an existing bucket.
145 | //
146 | // For example:
147 | //
148 | //  none - owner gets full access [default].
149 | //  readonly - anonymous get access for everyone at a given object prefix.
150 | //  readwrite - anonymous list/put/delete access to a given object prefix.
151 | //  writeonly - anonymous put/delete access to a given object prefix.
152 | func (c Client) SetBucketPolicy(bucketName string, objectPrefix string, bucketPolicy policy.BucketPolicy) error {
153 | 	// Input validation.
154 | 	if err := isValidBucketName(bucketName); err != nil {
155 | 		return err
156 | 	}
157 | 	if err := isValidObjectPrefix(objectPrefix); err != nil {
158 | 		return err
159 | 	}
160 | 	if !bucketPolicy.IsValidBucketPolicy() {
161 | 		return ErrInvalidArgument(fmt.Sprintf("Invalid bucket policy provided. %s", bucketPolicy))
162 | 	}
163 | 	policyInfo, err := c.getBucketPolicy(bucketName, objectPrefix)
164 | 	if err != nil {
165 | 		return err
166 | 	}
167 | 
168 | 	if bucketPolicy == policy.BucketPolicyNone && policyInfo.Statements == nil {
169 | 		// As the request is for removing policy and the bucket
170 | 		// has empty policy statements, just return success.
171 | 		return nil
172 | 	}
173 | 
174 | 	policyInfo.Statements = policy.SetPolicy(policyInfo.Statements, bucketPolicy, bucketName, objectPrefix)
175 | 
176 | 	// Save the updated policies.
177 | 	return c.putBucketPolicy(bucketName, policyInfo)
178 | }
179 | 
180 | // Saves a new bucket policy.
181 | func (c Client) putBucketPolicy(bucketName string, policyInfo policy.BucketAccessPolicy) error {
182 | 	// Input validation.
183 | 	if err := isValidBucketName(bucketName); err != nil {
184 | 		return err
185 | 	}
186 | 
187 | 	// If there are no policy statements, we should remove the entire policy.
188 | 	if len(policyInfo.Statements) == 0 {
189 | 		return c.removeBucketPolicy(bucketName)
190 | 	}
191 | 
192 | 	// Get resources properly escaped and lined up before
193 | 	// using them in http request.
194 | 	urlValues := make(url.Values)
195 | 	urlValues.Set("policy", "")
196 | 
197 | 	policyBytes, err := json.Marshal(&policyInfo)
198 | 	if err != nil {
199 | 		return err
200 | 	}
201 | 
202 | 	policyBuffer := bytes.NewReader(policyBytes)
203 | 	reqMetadata := requestMetadata{
204 | 		bucketName:         bucketName,
205 | 		queryValues:        urlValues,
206 | 		contentBody:        policyBuffer,
207 | 		contentLength:      int64(len(policyBytes)),
208 | 		contentMD5Bytes:    sumMD5(policyBytes),
209 | 		contentSHA256Bytes: sum256(policyBytes),
210 | 	}
211 | 
212 | 	// Execute PUT to upload a new bucket policy.
213 | 	resp, err := c.executeMethod("PUT", reqMetadata)
214 | 	defer closeResponse(resp)
215 | 	if err != nil {
216 | 		return err
217 | 	}
218 | 	if resp != nil {
219 | 		if resp.StatusCode != http.StatusNoContent {
220 | 			return httpRespToErrorResponse(resp, bucketName, "")
221 | 		}
222 | 	}
223 | 	return nil
224 | }
225 | 
226 | // Removes all policies on a bucket.
227 | func (c Client) removeBucketPolicy(bucketName string) error {
228 | 	// Input validation.
229 | 	if err := isValidBucketName(bucketName); err != nil {
230 | 		return err
231 | 	}
232 | 	// Get resources properly escaped and lined up before
233 | 	// using them in http request.
234 | 	urlValues := make(url.Values)
235 | 	urlValues.Set("policy", "")
236 | 
237 | 	// Execute DELETE on bucket to remove its policy.
238 | 	resp, err := c.executeMethod("DELETE", requestMetadata{
239 | 		bucketName:  bucketName,
240 | 		queryValues: urlValues,
241 | 	})
242 | 	defer closeResponse(resp)
243 | 	if err != nil {
244 | 		return err
245 | 	}
246 | 	return nil
247 | }
248 | 
249 | // SetBucketNotification saves a new bucket notification.
250 | func (c Client) SetBucketNotification(bucketName string, bucketNotification BucketNotification) error {
251 | 	// Input validation.
252 | 	if err := isValidBucketName(bucketName); err != nil {
253 | 		return err
254 | 	}
255 | 
256 | 	// Get resources properly escaped and lined up before
257 | 	// using them in http request.
258 | 	urlValues := make(url.Values)
259 | 	urlValues.Set("notification", "")
260 | 
261 | 	notifBytes, err := xml.Marshal(bucketNotification)
262 | 	if err != nil {
263 | 		return err
264 | 	}
265 | 
266 | 	notifBuffer := bytes.NewReader(notifBytes)
267 | 	reqMetadata := requestMetadata{
268 | 		bucketName:         bucketName,
269 | 		queryValues:        urlValues,
270 | 		contentBody:        notifBuffer,
271 | 		contentLength:      int64(len(notifBytes)),
272 | 		contentMD5Bytes:    sumMD5(notifBytes),
273 | 		contentSHA256Bytes: sum256(notifBytes),
274 | 	}
275 | 
276 | 	// Execute PUT to upload a new bucket notification.
277 | 	resp, err := c.executeMethod("PUT", reqMetadata)
278 | 	defer closeResponse(resp)
279 | 	if err != nil {
280 | 		return err
281 | 	}
282 | 	if resp != nil {
283 | 		if resp.StatusCode != http.StatusOK {
284 | 			return httpRespToErrorResponse(resp, bucketName, "")
285 | 		}
286 | 	}
287 | 	return nil
288 | }
289 | 
290 | // RemoveAllBucketNotification - removes the bucket notification, clearing all previously specified configuration.
291 | func (c Client) RemoveAllBucketNotification(bucketName string) error {
292 | 	return c.SetBucketNotification(bucketName, BucketNotification{})
293 | }
294 | 
--------------------------------------------------------------------------------
/exec-concurrent/vendor/github.com/minio/minio-go/api-put-object-common.go:
--------------------------------------------------------------------------------
1 | /*
2 |  * Minio Go Library for Amazon S3 Compatible Cloud Storage (C) 2015 Minio, Inc.
3 |  *
4 |  * Licensed under the Apache License, Version 2.0 (the "License");
5 |  * you may not use this file except in compliance with the License.
6 |  * You may obtain a copy of the License at
7 |  *
8 |  *     http://www.apache.org/licenses/LICENSE-2.0
9 |  *
10 |  * Unless required by applicable law or agreed to in writing, software
11 |  * distributed under the License is distributed on an "AS IS" BASIS,
12 |  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 |  * See the License for the specific language governing permissions and
14 |  * limitations under the License.
15 |  */
16 | 
17 | package minio
18 | 
19 | import (
20 | 	"fmt"
21 | 	"hash"
22 | 	"io"
23 | 	"io/ioutil"
24 | 	"math"
25 | 	"os"
26 | )
27 | 
28 | // Verify if reader is *os.File
29 | func isFile(reader io.Reader) (ok bool) {
30 | 	_, ok = reader.(*os.File)
31 | 	return
32 | }
33 | 
34 | // Verify if reader is *minio.Object
35 | func isObject(reader io.Reader) (ok bool) {
36 | 	_, ok = reader.(*Object)
37 | 	return
38 | }
39 | 
40 | // Verify if reader is a generic ReaderAt
41 | func isReadAt(reader io.Reader) (ok bool) {
42 | 	_, ok = reader.(io.ReaderAt)
43 | 	return
44 | }
45 | 
46 | // shouldUploadPart - verify if part should be uploaded.
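   | // A part is re-uploaded when it is missing from the previous session, or when
   | // its size or MD5-based ETag no longer matches what the server reported.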
47 | func shouldUploadPart(objPart objectPart, uploadReq uploadPartReq) bool {
48 | 	// If part not found should upload the part.
49 | 	if uploadReq.Part == nil {
50 | 		return true
51 | 	}
52 | 	// if size mismatches should upload the part.
53 | 	if objPart.Size != uploadReq.Part.Size {
54 | 		return true
55 | 	}
56 | 	// if md5sum mismatches should upload the part.
57 | 	if objPart.ETag != uploadReq.Part.ETag {
58 | 		return true
59 | 	}
60 | 	return false
61 | }
62 | 
63 | // optimalPartInfo - calculate the optimal part info for a given
64 | // object size.
65 | //
66 | // NOTE: Assumption here is that for any object to be uploaded to any S3 compatible
67 | // object storage it will have the following parameters as constants.
68 | //
69 | //  maxPartsCount - 10000
70 | //  minPartSize - 64MiB
71 | //  maxMultipartPutObjectSize - 5TiB
72 | //
73 | func optimalPartInfo(objectSize int64) (totalPartsCount int, partSize int64, lastPartSize int64, err error) {
74 | 	// object size is '-1' set it to 5TiB.
75 | 	if objectSize == -1 {
76 | 		objectSize = maxMultipartPutObjectSize
77 | 	}
78 | 	// object size is larger than supported maximum.
79 | 	if objectSize > maxMultipartPutObjectSize {
80 | 		err = ErrEntityTooLarge(objectSize, maxMultipartPutObjectSize, "", "")
81 | 		return
82 | 	}
83 | 	// Use floats for part size for all calculations to avoid
84 | 	// overflows during float64 to int64 conversions.
85 | 	partSizeFlt := math.Ceil(float64(objectSize) / maxPartsCount)
86 | 	partSizeFlt = math.Ceil(partSizeFlt/minPartSize) * minPartSize
87 | 	// Total parts count.
88 | 	totalPartsCount = int(math.Ceil(float64(objectSize) / partSizeFlt))
89 | 	// Part size.
90 | 	partSize = int64(partSizeFlt)
91 | 	// Last part size.
92 | 	lastPartSize = objectSize - int64(totalPartsCount-1)*partSize
93 | 	return totalPartsCount, partSize, lastPartSize, nil
94 | }
95 | 
96 | // hashCopyBuffer is identical to hashCopyN except that it doesn't take
97 | // any size argument but takes a buffer argument and reader should be
98 | // of io.ReaderAt interface.
99 | //
100 | // Stages reads from offsets into the buffer, if buffer is nil it is
101 | // initialized to optimalReadBufferSize.
102 | func hashCopyBuffer(hashAlgorithms map[string]hash.Hash, hashSums map[string][]byte, writer io.Writer, reader io.ReaderAt, buf []byte) (size int64, err error) {
103 | 	hashWriter := writer
104 | 	for _, v := range hashAlgorithms {
105 | 		hashWriter = io.MultiWriter(hashWriter, v)
106 | 	}
107 | 
108 | 	// Buffer is nil, initialize.
109 | 	if buf == nil {
110 | 		buf = make([]byte, optimalReadBufferSize)
111 | 	}
112 | 
113 | 	// Offset to start reading from.
114 | 	var readAtOffset int64
115 | 
116 | 	// Following block reads data at an offset from the input
117 | 	// reader and copies the data into the destination writer.
118 | 	for {
119 | 		readAtSize, rerr := reader.ReadAt(buf, readAtOffset)
120 | 		if rerr != nil {
121 | 			if rerr != io.EOF {
122 | 				return 0, rerr
123 | 			}
124 | 		}
125 | 		writeSize, werr := hashWriter.Write(buf[:readAtSize])
126 | 		if werr != nil {
127 | 			return 0, werr
128 | 		}
129 | 		if readAtSize != writeSize {
130 | 			return 0, fmt.Errorf("Read size was not completely written to writer. wanted %d, got %d - %s", readAtSize, writeSize, reportIssue)
131 | 		}
132 | 		readAtOffset += int64(writeSize)
133 | 		size += int64(writeSize)
134 | 		if rerr == io.EOF {
135 | 			break
136 | 		}
137 | 	}
138 | 
139 | 	for k, v := range hashAlgorithms {
140 | 		hashSums[k] = v.Sum(nil)
141 | 	}
142 | 	return size, err
143 | }
144 | 
145 | // hashCopyN - Calculates chosen hashes up to partSize amount of bytes.
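   | //
   | // A hypothetical usage sketch (identifiers are for illustration only):
   | //
   | //	sums := make(map[string][]byte)
   | //	algos := map[string]hash.Hash{"md5": md5.New()}
   | //	n, err := hashCopyN(algos, sums, tmpFile, reader, partSize)
   | //	// sums["md5"] now holds the MD5 of the n bytes copied into tmpFile.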
146 | func hashCopyN(hashAlgorithms map[string]hash.Hash, hashSums map[string][]byte, writer io.Writer, reader io.Reader, partSize int64) (size int64, err error) {
147 | 	hashWriter := writer
148 | 	for _, v := range hashAlgorithms {
149 | 		hashWriter = io.MultiWriter(hashWriter, v)
150 | 	}
151 | 
152 | 	// Copy partSize bytes from reader into the hash writer.
153 | 	size, err = io.CopyN(hashWriter, reader, partSize)
154 | 	if err != nil {
155 | 		// If not EOF return error right here.
156 | 		if err != io.EOF {
157 | 			return 0, err
158 | 		}
159 | 	}
160 | 
161 | 	for k, v := range hashAlgorithms {
162 | 		hashSums[k] = v.Sum(nil)
163 | 	}
164 | 	return size, err
165 | }
166 | 
167 | // newUploadID - initiate a new multipart upload request and
168 | // fetch a new upload id.
169 | func (c Client) newUploadID(bucketName, objectName string, metaData map[string][]string) (uploadID string, err error) {
170 | 	// Input validation.
171 | 	if err := isValidBucketName(bucketName); err != nil {
172 | 		return "", err
173 | 	}
174 | 	if err := isValidObjectName(objectName); err != nil {
175 | 		return "", err
176 | 	}
177 | 
178 | 	// Initiate multipart upload for an object.
179 | 	initMultipartUploadResult, err := c.initiateMultipartUpload(bucketName, objectName, metaData)
180 | 	if err != nil {
181 | 		return "", err
182 | 	}
183 | 	return initMultipartUploadResult.UploadID, nil
184 | }
185 | 
186 | // getMpartUploadSession returns the upload id and the uploaded parts to continue a previous upload session,
187 | // or initiates a new multipart session if no current one is found.
188 | func (c Client) getMpartUploadSession(bucketName, objectName string, metaData map[string][]string) (string, map[int]objectPart, error) {
189 | 	// A map of all uploaded parts.
190 | 	var partsInfo map[int]objectPart
191 | 	var err error
192 | 
193 | 	uploadID, err := c.findUploadID(bucketName, objectName)
194 | 	if err != nil {
195 | 		return "", nil, err
196 | 	}
197 | 
198 | 	if uploadID == "" {
199 | 		// Initiates a new multipart request
200 | 		uploadID, err = c.newUploadID(bucketName, objectName, metaData)
201 | 		if err != nil {
202 | 			return "", nil, err
203 | 		}
204 | 	} else {
205 | 		// Fetch previously uploaded parts and maximum part size.
206 | 		partsInfo, err = c.listObjectParts(bucketName, objectName, uploadID)
207 | 		if err != nil {
208 | 			// When the server returns NoSuchUpload even though it previously acknowledged the existence of the upload id,
209 | 			// initiate a new multipart upload
210 | 			if respErr, ok := err.(ErrorResponse); ok && respErr.Code == "NoSuchUpload" {
211 | 				uploadID, err = c.newUploadID(bucketName, objectName, metaData)
212 | 				if err != nil {
213 | 					return "", nil, err
214 | 				}
215 | 			} else {
216 | 				return "", nil, err
217 | 			}
218 | 		}
219 | 	}
220 | 
221 | 	// Allocate partsInfo if not done yet
222 | 	if partsInfo == nil {
223 | 		partsInfo = make(map[int]objectPart)
224 | 	}
225 | 
226 | 	return uploadID, partsInfo, nil
227 | }
228 | 
229 | // computeHash - Calculates hashes for an input read Seeker.
230 | func computeHash(hashAlgorithms map[string]hash.Hash, hashSums map[string][]byte, reader io.ReadSeeker) (size int64, err error) {
231 | 	hashWriter := ioutil.Discard
232 | 	for _, v := range hashAlgorithms {
233 | 		hashWriter = io.MultiWriter(hashWriter, v)
234 | 	}
235 | 
236 | 	// If no buffer is provided, no need to allocate, just use io.Copy.
237 | 	size, err = io.Copy(hashWriter, reader)
238 | 	if err != nil {
239 | 		return 0, err
240 | 	}
241 | 
242 | 	// Seek back reader to the beginning location.
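   | 	// (Seek(0, 0) rewinds to the start of the stream so the caller can
   | 	// re-read the exact bytes that were just hashed for the actual upload.)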
243 | if _, err := reader.Seek(0, 0); err != nil { 244 | return 0, err 245 | } 246 | 247 | for k, v := range hashAlgorithms { 248 | hashSums[k] = v.Sum(nil) 249 | } 250 | return size, nil 251 | } 252 | -------------------------------------------------------------------------------- /exec-concurrent/vendor/github.com/minio/minio-go/api-put-object-copy.go: -------------------------------------------------------------------------------- 1 | /* 2 | * Minio Go Library for Amazon S3 Compatible Cloud Storage (C) 2016 Minio, Inc. 3 | * 4 | * Licensed under the Apache License, Version 2.0 (the "License"); 5 | * you may not use this file except in compliance with the License. 6 | * You may obtain a copy of the License at 7 | * 8 | * http://www.apache.org/licenses/LICENSE-2.0 9 | * 10 | * Unless required by applicable law or agreed to in writing, software 11 | * distributed under the License is distributed on an "AS IS" BASIS, 12 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | * See the License for the specific language governing permissions and 14 | * limitations under the License. 15 | */ 16 | 17 | package minio 18 | 19 | import ( 20 | "net/http" 21 | 22 | "github.com/minio/minio-go/pkg/s3utils" 23 | ) 24 | 25 | // CopyObject - copy a source object into a new object with the provided name in the provided bucket 26 | func (c Client) CopyObject(bucketName string, objectName string, objectSource string, cpCond CopyConditions) error { 27 | // Input validation. 28 | if err := isValidBucketName(bucketName); err != nil { 29 | return err 30 | } 31 | if err := isValidObjectName(objectName); err != nil { 32 | return err 33 | } 34 | if objectSource == "" { 35 | return ErrInvalidArgument("Object source cannot be empty.") 36 | } 37 | 38 | // customHeaders apply headers. 39 | customHeaders := make(http.Header) 40 | for _, cond := range cpCond.conditions { 41 | customHeaders.Set(cond.key, cond.value) 42 | } 43 | 44 | // Set copy source. 45 | customHeaders.Set("x-amz-copy-source", s3utils.EncodePath(objectSource)) 46 | 47 | // Execute PUT on objectName. 48 | resp, err := c.executeMethod("PUT", requestMetadata{ 49 | bucketName: bucketName, 50 | objectName: objectName, 51 | customHeader: customHeaders, 52 | }) 53 | defer closeResponse(resp) 54 | if err != nil { 55 | return err 56 | } 57 | if resp != nil { 58 | if resp.StatusCode != http.StatusOK { 59 | return httpRespToErrorResponse(resp, bucketName, objectName) 60 | } 61 | } 62 | 63 | // Decode copy response on success. 64 | cpObjRes := copyObjectResult{} 65 | err = xmlDecoder(resp.Body, &cpObjRes) 66 | if err != nil { 67 | return err 68 | } 69 | 70 | // Return nil on success. 71 | return nil 72 | } 73 | -------------------------------------------------------------------------------- /exec-concurrent/vendor/github.com/minio/minio-go/api-put-object-file.go: -------------------------------------------------------------------------------- 1 | /* 2 | * Minio Go Library for Amazon S3 Compatible Cloud Storage (C) 2015, 2016 Minio, Inc. 3 | * 4 | * Licensed under the Apache License, Version 2.0 (the "License"); 5 | * you may not use this file except in compliance with the License. 6 | * You may obtain a copy of the License at 7 | * 8 | * http://www.apache.org/licenses/LICENSE-2.0 9 | * 10 | * Unless required by applicable law or agreed to in writing, software 11 | * distributed under the License is distributed on an "AS IS" BASIS, 12 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
13 |  * See the License for the specific language governing permissions and
14 |  * limitations under the License.
15 |  */
16 | 
17 | package minio
18 | 
19 | import (
20 | 	"crypto/md5"
21 | 	"crypto/sha256"
22 | 	"encoding/hex"
23 | 	"fmt"
24 | 	"hash"
25 | 	"io"
26 | 	"io/ioutil"
27 | 	"mime"
28 | 	"os"
29 | 	"path/filepath"
30 | 	"sort"
31 | 
32 | 	"github.com/minio/minio-go/pkg/s3utils"
33 | )
34 | 
35 | // FPutObject - Create an object in a bucket, with contents from file at filePath.
36 | func (c Client) FPutObject(bucketName, objectName, filePath, contentType string) (n int64, err error) {
37 | 	// Input validation.
38 | 	if err := isValidBucketName(bucketName); err != nil {
39 | 		return 0, err
40 | 	}
41 | 	if err := isValidObjectName(objectName); err != nil {
42 | 		return 0, err
43 | 	}
44 | 
45 | 	// Open the referenced file.
46 | 	fileReader, err := os.Open(filePath)
47 | 	// If any error, fail quickly here.
48 | 	if err != nil {
49 | 		return 0, err
50 | 	}
51 | 	defer fileReader.Close()
52 | 
53 | 	// Save the file stat.
54 | 	fileStat, err := fileReader.Stat()
55 | 	if err != nil {
56 | 		return 0, err
57 | 	}
58 | 
59 | 	// Save the file size.
60 | 	fileSize := fileStat.Size()
61 | 
62 | 	// Check for largest object size allowed.
63 | 	if fileSize > int64(maxMultipartPutObjectSize) {
64 | 		return 0, ErrEntityTooLarge(fileSize, maxMultipartPutObjectSize, bucketName, objectName)
65 | 	}
66 | 
67 | 	objMetadata := make(map[string][]string)
68 | 
69 | 	// Set contentType based on filepath extension if not given, or fall back to
70 | 	// "application/octet-stream" if the extension has no associated type.
71 | 	if contentType == "" {
72 | 		if contentType = mime.TypeByExtension(filepath.Ext(filePath)); contentType == "" {
73 | 			contentType = "application/octet-stream"
74 | 		}
75 | 	}
76 | 
77 | 	objMetadata["Content-Type"] = []string{contentType}
78 | 
79 | 	// NOTE: Google Cloud Storage multipart Put is not compatible with Amazon S3 APIs.
80 | 	// Current implementation will only upload a maximum of 5GiB to Google Cloud Storage servers.
81 | 	if s3utils.IsGoogleEndpoint(c.endpointURL) {
82 | 		if fileSize > int64(maxSinglePutObjectSize) {
83 | 			return 0, ErrorResponse{
84 | 				Code:       "NotImplemented",
85 | 				Message:    fmt.Sprintf("Invalid Content-Length %d for file uploads to Google Cloud Storage.", fileSize),
86 | 				Key:        objectName,
87 | 				BucketName: bucketName,
88 | 			}
89 | 		}
90 | 		// Do not compute MD5 for Google Cloud Storage. Uploads up to 5GiB in size.
91 | 		return c.putObjectNoChecksum(bucketName, objectName, fileReader, fileSize, objMetadata, nil)
92 | 	}
93 | 
94 | 	// NOTE: S3 doesn't allow anonymous multipart requests.
95 | 	if s3utils.IsAmazonEndpoint(c.endpointURL) && c.anonymous {
96 | 		if fileSize > int64(maxSinglePutObjectSize) {
97 | 			return 0, ErrorResponse{
98 | 				Code:       "NotImplemented",
99 | 				Message:    fmt.Sprintf("For anonymous requests Content-Length cannot be %d.", fileSize),
100 | 				Key:        objectName,
101 | 				BucketName: bucketName,
102 | 			}
103 | 		}
104 | 		// Do not compute MD5 for anonymous requests to Amazon
105 | 		// S3. Uploads up to 5GiB in size.
106 | 		return c.putObjectNoChecksum(bucketName, objectName, fileReader, fileSize, objMetadata, nil)
107 | 	}
108 | 
109 | 	// Small object upload is initiated for input data sizes smaller than minPartSize.
110 | 	if fileSize < minPartSize && fileSize >= 0 {
111 | 		return c.putObjectSingle(bucketName, objectName, fileReader, fileSize, objMetadata, nil)
112 | 	}
113 | 	// Upload all large objects as multipart.
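   | 	// If the backend reports multipart as "NotImplemented", the error handling
   | 	// below retries the upload as a single PUT operation (up to 5GiB).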
114 | 	n, err = c.putObjectMultipartFromFile(bucketName, objectName, fileReader, fileSize, objMetadata, nil)
115 | 	if err != nil {
116 | 		errResp := ToErrorResponse(err)
117 | 		// Verify if multipart functionality is not available, if not
118 | 		// fall back to single PutObject operation.
119 | 		if errResp.Code == "NotImplemented" {
120 | 			// If size of file is greater than '5GiB' fail.
121 | 			if fileSize > maxSinglePutObjectSize {
122 | 				return 0, ErrEntityTooLarge(fileSize, maxSinglePutObjectSize, bucketName, objectName)
123 | 			}
124 | 			// Fall back to uploading as single PutObject operation.
125 | 			return c.putObjectSingle(bucketName, objectName, fileReader, fileSize, objMetadata, nil)
126 | 		}
127 | 		return n, err
128 | 	}
129 | 	return n, nil
130 | }
131 | 
132 | // putObjectMultipartFromFile - Creates object from contents of *os.File
133 | //
134 | // NOTE: This function is meant to be used for readers backed by a local
135 | // file, as in *os.File. It resumes by skipping parts which were already
136 | // uploaded, verifying them against the MD5SUM of each individual part.
137 | // It also effectively utilizes the file system's capability of reading
138 | // from specific sections, so it does not have to create temporary files.
139 | func (c Client) putObjectMultipartFromFile(bucketName, objectName string, fileReader io.ReaderAt, fileSize int64, metaData map[string][]string, progress io.Reader) (int64, error) {
140 | 	// Input validation.
141 | 	if err := isValidBucketName(bucketName); err != nil {
142 | 		return 0, err
143 | 	}
144 | 	if err := isValidObjectName(objectName); err != nil {
145 | 		return 0, err
146 | 	}
147 | 
148 | 	// Get the upload id of a previously partially uploaded object or initiate a new multipart upload
149 | 	uploadID, partsInfo, err := c.getMpartUploadSession(bucketName, objectName, metaData)
150 | 	if err != nil {
151 | 		return 0, err
152 | 	}
153 | 
154 | 	// Total data read and written to server. Should be equal to 'fileSize' at the end of the call.
155 | 	var totalUploadedSize int64
156 | 
157 | 	// Complete multipart upload.
158 | 	var complMultipartUpload completeMultipartUpload
159 | 
160 | 	// Calculate the optimal parts info for a given size.
161 | 	totalPartsCount, partSize, lastPartSize, err := optimalPartInfo(fileSize)
162 | 	if err != nil {
163 | 		return 0, err
164 | 	}
165 | 
166 | 	// Create a channel to communicate a part was uploaded.
167 | 	// Buffer this to 10000, the maximum number of parts allowed by S3.
168 | 	uploadedPartsCh := make(chan uploadedPartRes, 10000)
169 | 
170 | 	// Create a channel to communicate which part to upload.
171 | 	// Buffer this to 10000, the maximum number of parts allowed by S3.
172 | 	uploadPartsCh := make(chan uploadPartReq, 10000)
173 | 
174 | 	// Just for readability.
175 | 	lastPartNumber := totalPartsCount
176 | 
177 | 	// Send each part through uploadPartsCh to be uploaded.
178 | 	for p := 1; p <= totalPartsCount; p++ {
179 | 		part, ok := partsInfo[p]
180 | 		if ok {
181 | 			uploadPartsCh <- uploadPartReq{PartNum: p, Part: &part}
182 | 		} else {
183 | 			uploadPartsCh <- uploadPartReq{PartNum: p, Part: nil}
184 | 		}
185 | 	}
186 | 	close(uploadPartsCh)
187 | 
188 | 	// Use three 'workers' to upload parts in parallel.
189 | 	for w := 1; w <= 3; w++ {
190 | 		go func() {
191 | 			// Deal with each part as it comes through the channel.
192 | 			for uploadReq := range uploadPartsCh {
193 | 				// Add hash algorithms that need to be calculated by computeHash()
194 | 				// In case of a non-v4 signature or https connection, sha256 is not needed.
196 | hashAlgos := make(map[string]hash.Hash) 197 | hashSums := make(map[string][]byte) 198 | hashAlgos["md5"] = md5.New() 199 | if c.signature.isV4() && !c.secure { 200 | hashAlgos["sha256"] = sha256.New() 201 | } 202 | 203 | // If partNumber was not uploaded we calculate the missing 204 | // part offset and size. For all other part numbers we 205 | // calculate offset based on multiples of partSize. 206 | readOffset := int64(uploadReq.PartNum-1) * partSize 207 | missingPartSize := partSize 208 | 209 | // As a special case if partNumber is lastPartNumber, we 210 | // calculate the offset based on the last part size. 211 | if uploadReq.PartNum == lastPartNumber { 212 | readOffset = (fileSize - lastPartSize) 213 | missingPartSize = lastPartSize 214 | } 215 | 216 | // Get a section reader on a particular offset. 217 | sectionReader := io.NewSectionReader(fileReader, readOffset, missingPartSize) 218 | var prtSize int64 219 | var err error 220 | 221 | prtSize, err = computeHash(hashAlgos, hashSums, sectionReader) 222 | if err != nil { 223 | uploadedPartsCh <- uploadedPartRes{ 224 | Error: err, 225 | } 226 | // Exit the goroutine. 227 | return 228 | } 229 | 230 | // Create the part to be uploaded. 231 | verifyObjPart := objectPart{ 232 | ETag: hex.EncodeToString(hashSums["md5"]), 233 | PartNumber: uploadReq.PartNum, 234 | Size: partSize, 235 | } 236 | 237 | // If this is the last part do not give it the full part size. 238 | if uploadReq.PartNum == lastPartNumber { 239 | verifyObjPart.Size = lastPartSize 240 | } 241 | 242 | // Verify if part should be uploaded. 243 | if shouldUploadPart(verifyObjPart, uploadReq) { 244 | // Proceed to upload the part. 245 | var objPart objectPart 246 | objPart, err = c.uploadPart(bucketName, objectName, uploadID, sectionReader, uploadReq.PartNum, hashSums["md5"], hashSums["sha256"], prtSize) 247 | if err != nil { 248 | uploadedPartsCh <- uploadedPartRes{ 249 | Error: err, 250 | } 251 | // Exit the goroutine. 252 | return 253 | } 254 | // Save successfully uploaded part metadata. 255 | uploadReq.Part = &objPart 256 | } 257 | // Return through the channel the part size. 258 | uploadedPartsCh <- uploadedPartRes{ 259 | Size: verifyObjPart.Size, 260 | PartNum: uploadReq.PartNum, 261 | Part: uploadReq.Part, 262 | Error: nil, 263 | } 264 | } 265 | }() 266 | } 267 | 268 | // Retrieve each uploaded part once it is done. 269 | for u := 1; u <= totalPartsCount; u++ { 270 | uploadRes := <-uploadedPartsCh 271 | if uploadRes.Error != nil { 272 | return totalUploadedSize, uploadRes.Error 273 | } 274 | // Retrieve each uploaded part and store it to be completed. 275 | part := uploadRes.Part 276 | if part == nil { 277 | return totalUploadedSize, ErrInvalidArgument(fmt.Sprintf("Missing part number %d", uploadRes.PartNum)) 278 | } 279 | // Update the total uploaded size. 280 | totalUploadedSize += uploadRes.Size 281 | // Update the progress bar if there is one. 282 | if progress != nil { 283 | if _, err = io.CopyN(ioutil.Discard, progress, uploadRes.Size); err != nil { 284 | return totalUploadedSize, err 285 | } 286 | } 287 | // Store the part to be completed. 288 | complMultipartUpload.Parts = append(complMultipartUpload.Parts, completePart{ 289 | ETag: part.ETag, 290 | PartNumber: part.PartNumber, 291 | }) 292 | } 293 | 294 | // Verify if we uploaded all data. 295 | if totalUploadedSize != fileSize { 296 | return totalUploadedSize, ErrUnexpectedEOF(totalUploadedSize, fileSize, bucketName, objectName) 297 | } 298 | 299 | // Sort all completed parts. 
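   | 	// (completeMultipartUpload requires parts in ascending part-number order;
   | 	// completedParts implements sort.Interface for exactly that purpose.)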
300 | sort.Sort(completedParts(complMultipartUpload.Parts)) 301 | _, err = c.completeMultipartUpload(bucketName, objectName, uploadID, complMultipartUpload) 302 | if err != nil { 303 | return totalUploadedSize, err 304 | } 305 | 306 | // Return final size. 307 | return totalUploadedSize, nil 308 | } 309 | -------------------------------------------------------------------------------- /exec-concurrent/vendor/github.com/minio/minio-go/api-put-object-progress.go: -------------------------------------------------------------------------------- 1 | /* 2 | * Minio Go Library for Amazon S3 Compatible Cloud Storage (C) 2015 Minio, Inc. 3 | * 4 | * Licensed under the Apache License, Version 2.0 (the "License"); 5 | * you may not use this file except in compliance with the License. 6 | * You may obtain a copy of the License at 7 | * 8 | * http://www.apache.org/licenses/LICENSE-2.0 9 | * 10 | * Unless required by applicable law or agreed to in writing, software 11 | * distributed under the License is distributed on an "AS IS" BASIS, 12 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | * See the License for the specific language governing permissions and 14 | * limitations under the License. 15 | */ 16 | 17 | package minio 18 | 19 | import ( 20 | "io" 21 | "strings" 22 | 23 | "github.com/minio/minio-go/pkg/s3utils" 24 | ) 25 | 26 | // PutObjectWithProgress - with progress. 27 | func (c Client) PutObjectWithProgress(bucketName, objectName string, reader io.Reader, contentType string, progress io.Reader) (n int64, err error) { 28 | metaData := make(map[string][]string) 29 | metaData["Content-Type"] = []string{contentType} 30 | return c.PutObjectWithMetadata(bucketName, objectName, reader, metaData, progress) 31 | } 32 | 33 | // PutObjectWithMetadata - with metadata. 34 | func (c Client) PutObjectWithMetadata(bucketName, objectName string, reader io.Reader, metaData map[string][]string, progress io.Reader) (n int64, err error) { 35 | // Input validation. 36 | if err := isValidBucketName(bucketName); err != nil { 37 | return 0, err 38 | } 39 | if err := isValidObjectName(objectName); err != nil { 40 | return 0, err 41 | } 42 | if reader == nil { 43 | return 0, ErrInvalidArgument("Input reader is invalid, cannot be nil.") 44 | } 45 | 46 | // Size of the object. 47 | var size int64 48 | 49 | // Get reader size. 50 | size, err = getReaderSize(reader) 51 | if err != nil { 52 | return 0, err 53 | } 54 | 55 | // Check for largest object size allowed. 56 | if size > int64(maxMultipartPutObjectSize) { 57 | return 0, ErrEntityTooLarge(size, maxMultipartPutObjectSize, bucketName, objectName) 58 | } 59 | 60 | // NOTE: Google Cloud Storage does not implement Amazon S3 Compatible multipart PUT. 61 | // So we fall back to single PUT operation with the maximum limit of 5GiB. 62 | if s3utils.IsGoogleEndpoint(c.endpointURL) { 63 | if size <= -1 { 64 | return 0, ErrorResponse{ 65 | Code: "NotImplemented", 66 | Message: "Content-Length cannot be negative for file uploads to Google Cloud Storage.", 67 | Key: objectName, 68 | BucketName: bucketName, 69 | } 70 | } 71 | if size > maxSinglePutObjectSize { 72 | return 0, ErrEntityTooLarge(size, maxSinglePutObjectSize, bucketName, objectName) 73 | } 74 | // Do not compute MD5 for Google Cloud Storage. Uploads up to 5GiB in size. 75 | return c.putObjectNoChecksum(bucketName, objectName, reader, size, metaData, progress) 76 | } 77 | 78 | // NOTE: S3 doesn't allow anonymous multipart requests. 
79 | 	if s3utils.IsAmazonEndpoint(c.endpointURL) && c.anonymous {
80 | 		if size <= -1 {
81 | 			return 0, ErrorResponse{
82 | 				Code:       "NotImplemented",
83 | 				Message:    "Content-Length cannot be negative for anonymous requests.",
84 | 				Key:        objectName,
85 | 				BucketName: bucketName,
86 | 			}
87 | 		}
88 | 		if size > maxSinglePutObjectSize {
89 | 			return 0, ErrEntityTooLarge(size, maxSinglePutObjectSize, bucketName, objectName)
90 | 		}
91 | 		// Do not compute MD5 for anonymous requests to Amazon
92 | 		// S3. Uploads up to 5GiB in size.
93 | 		return c.putObjectNoChecksum(bucketName, objectName, reader, size, metaData, progress)
94 | 	}
95 | 
96 | 	// Put small objects with a single PUT operation.
97 | 	if size < minPartSize && size >= 0 {
98 | 		return c.putObjectSingle(bucketName, objectName, reader, size, metaData, progress)
99 | 	}
100 | 	// For all sizes greater than 5MiB do multipart.
101 | 	n, err = c.putObjectMultipart(bucketName, objectName, reader, size, metaData, progress)
102 | 	if err != nil {
103 | 		errResp := ToErrorResponse(err)
104 | 		// Verify if multipart functionality is not available, if not
105 | 		// fall back to single PutObject operation.
106 | 		if errResp.Code == "AccessDenied" && strings.Contains(errResp.Message, "Access Denied") {
107 | 			// Verify if size of reader is greater than '5GiB'.
108 | 			if size > maxSinglePutObjectSize {
109 | 				return 0, ErrEntityTooLarge(size, maxSinglePutObjectSize, bucketName, objectName)
110 | 			}
111 | 			// Fall back to uploading as single PutObject operation.
112 | 			return c.putObjectSingle(bucketName, objectName, reader, size, metaData, progress)
113 | 		}
114 | 		return n, err
115 | 	}
116 | 	return n, nil
117 | }
118 | 
--------------------------------------------------------------------------------
/exec-concurrent/vendor/github.com/minio/minio-go/api-put-object-readat.go:
--------------------------------------------------------------------------------
1 | /*
2 |  * Minio Go Library for Amazon S3 Compatible Cloud Storage (C) 2015, 2016 Minio, Inc.
3 |  *
4 |  * Licensed under the Apache License, Version 2.0 (the "License");
5 |  * you may not use this file except in compliance with the License.
6 |  * You may obtain a copy of the License at
7 |  *
8 |  *     http://www.apache.org/licenses/LICENSE-2.0
9 |  *
10 |  * Unless required by applicable law or agreed to in writing, software
11 |  * distributed under the License is distributed on an "AS IS" BASIS,
12 |  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 |  * See the License for the specific language governing permissions and
14 |  * limitations under the License.
15 |  */
16 | 
17 | package minio
18 | 
19 | import (
20 | 	"bytes"
21 | 	"crypto/md5"
22 | 	"crypto/sha256"
23 | 	"fmt"
24 | 	"hash"
25 | 	"io"
26 | 	"io/ioutil"
27 | 	"sort"
28 | )
29 | 
30 | // uploadedPartRes - the response received from a part upload.
31 | type uploadedPartRes struct {
32 | 	Error   error // Any error encountered while uploading the part.
33 | 	PartNum int   // Number of the part uploaded.
34 | 	Size    int64 // Size of the part uploaded.
35 | 	Part    *objectPart
36 | }
37 | 
38 | type uploadPartReq struct {
39 | 	PartNum int         // Number of the part to upload.
40 | 	Part    *objectPart // Metadata of the part, if already uploaded.
41 | }
42 | 
43 | // shouldUploadPartReadAt - verify if part should be uploaded.
44 | func shouldUploadPartReadAt(objPart objectPart, uploadReq uploadPartReq) bool {
45 | 	// If part not found part should be uploaded.
46 | 	if uploadReq.Part == nil {
47 | 		return true
48 | 	}
49 | 	// if size mismatches part should be uploaded.
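   | 	// (Unlike shouldUploadPart above, the ETag is not compared here.)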
50 | 	if uploadReq.Part.Size != objPart.Size {
51 | 		return true
52 | 	}
53 | 	return false
54 | }
55 | 
56 | // putObjectMultipartFromReadAt - Uploads files bigger than 5MiB. Supports reader
57 | // of type which implements io.ReaderAt interface (ReadAt method).
58 | //
59 | // NOTE: This function is meant to be used for all readers which
60 | // implement io.ReaderAt, which allows resuming multipart uploads
61 | // by reading at an offset, avoiding re-reading the data which was
62 | // already uploaded. Internally this function stages each part in a
63 | // temporary buffer before it is uploaded; these buffers are released
64 | // once all the contents have been uploaded successfully.
65 | func (c Client) putObjectMultipartFromReadAt(bucketName, objectName string, reader io.ReaderAt, size int64, metaData map[string][]string, progress io.Reader) (n int64, err error) {
66 | 	// Input validation.
67 | 	if err := isValidBucketName(bucketName); err != nil {
68 | 		return 0, err
69 | 	}
70 | 	if err := isValidObjectName(objectName); err != nil {
71 | 		return 0, err
72 | 	}
73 | 
74 | 	// Get the upload id of a previously partially uploaded object or initiate a new multipart upload
75 | 	uploadID, partsInfo, err := c.getMpartUploadSession(bucketName, objectName, metaData)
76 | 	if err != nil {
77 | 		return 0, err
78 | 	}
79 | 
80 | 	// Total data read and written to server. Should be equal to 'size' at the end of the call.
81 | 	var totalUploadedSize int64
82 | 
83 | 	// Complete multipart upload.
84 | 	var complMultipartUpload completeMultipartUpload
85 | 
86 | 	// Calculate the optimal parts info for a given size.
87 | 	totalPartsCount, partSize, lastPartSize, err := optimalPartInfo(size)
88 | 	if err != nil {
89 | 		return 0, err
90 | 	}
91 | 
92 | 	// Used for readability, lastPartNumber is always totalPartsCount.
93 | 	lastPartNumber := totalPartsCount
94 | 
95 | 	// Declare a channel that sends the next part number to be uploaded.
96 | 	// Buffered to 10000 because that's the maximum number of parts allowed
97 | 	// by S3.
98 | 	uploadPartsCh := make(chan uploadPartReq, 10000)
99 | 
100 | 	// Declare a channel that sends back the response of a part upload.
101 | 	// Buffered to 10000 because that's the maximum number of parts allowed
102 | 	// by S3.
103 | 	uploadedPartsCh := make(chan uploadedPartRes, 10000)
104 | 
105 | 	// Send each part number to the channel to be processed.
106 | 	for p := 1; p <= totalPartsCount; p++ {
107 | 		part, ok := partsInfo[p]
108 | 		if ok {
109 | 			uploadPartsCh <- uploadPartReq{PartNum: p, Part: &part}
110 | 		} else {
111 | 			uploadPartsCh <- uploadPartReq{PartNum: p, Part: nil}
112 | 		}
113 | 	}
114 | 	close(uploadPartsCh)
115 | 
116 | 	// Receive each part number from the channel allowing three parallel uploads.
117 | 	for w := 1; w <= 3; w++ {
118 | 		go func() {
119 | 			// Read defaults to reading at 5MiB buffer.
120 | 			readAtBuffer := make([]byte, optimalReadBufferSize)
121 | 
122 | 			// Each worker will draw from the part channel and upload in parallel.
123 | 			for uploadReq := range uploadPartsCh {
124 | 				// Declare a new tmpBuffer.
125 | 				tmpBuffer := new(bytes.Buffer)
126 | 
127 | 				// If partNumber was not uploaded we calculate the missing
128 | 				// part offset and size. For all other part numbers we
129 | 				// calculate offset based on multiples of partSize.
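   | 				// For example, with a 64MiB partSize, part 3 reads
   | 				// from offset (3-1)*64MiB = 128MiB.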
131 | 				readOffset := int64(uploadReq.PartNum-1) * partSize
132 | 				missingPartSize := partSize
133 | 
134 | 				// As a special case if partNumber is lastPartNumber, we
135 | 				// calculate the offset based on the last part size.
136 | 				if uploadReq.PartNum == lastPartNumber {
137 | 					readOffset = (size - lastPartSize)
138 | 					missingPartSize = lastPartSize
139 | 				}
140 | 
141 | 				// Get a section reader on a particular offset.
142 | 				sectionReader := io.NewSectionReader(reader, readOffset, missingPartSize)
143 | 
144 | 				// Choose the needed hash algorithms to be calculated by hashCopyBuffer.
145 | 				// Sha256 is avoided in non-v4 signature requests or HTTPS connections
146 | 				hashSums := make(map[string][]byte)
147 | 				hashAlgos := make(map[string]hash.Hash)
148 | 				hashAlgos["md5"] = md5.New()
149 | 				if c.signature.isV4() && !c.secure {
150 | 					hashAlgos["sha256"] = sha256.New()
151 | 				}
152 | 
153 | 				var prtSize int64
154 | 				var err error
155 | 				prtSize, err = hashCopyBuffer(hashAlgos, hashSums, tmpBuffer, sectionReader, readAtBuffer)
156 | 				if err != nil {
157 | 					// Send the error back through the channel.
158 | 					uploadedPartsCh <- uploadedPartRes{
159 | 						Size:  0,
160 | 						Error: err,
161 | 					}
162 | 					// Exit the goroutine.
163 | 					return
164 | 				}
165 | 
166 | 				// Build the part metadata used to verify whether it was already uploaded.
167 | 				verifyObjPart := objectPart{
168 | 					PartNumber: uploadReq.PartNum,
169 | 					Size:       partSize,
170 | 				}
171 | 				// Special case if we see a last part number, save last part
172 | 				// size as the proper part size.
173 | 				if uploadReq.PartNum == lastPartNumber {
174 | 					verifyObjPart.Size = lastPartSize
175 | 				}
176 | 
177 | 				// Only upload the necessary parts. Otherwise return size through channel
178 | 				// to update any progress bar.
179 | 				if shouldUploadPartReadAt(verifyObjPart, uploadReq) {
180 | 					// Proceed to upload the part.
181 | 					var objPart objectPart
182 | 					objPart, err = c.uploadPart(bucketName, objectName, uploadID, tmpBuffer, uploadReq.PartNum, hashSums["md5"], hashSums["sha256"], prtSize)
183 | 					if err != nil {
184 | 						uploadedPartsCh <- uploadedPartRes{
185 | 							Size:  0,
186 | 							Error: err,
187 | 						}
188 | 						// Exit the goroutine.
189 | 						return
190 | 					}
191 | 					// Save successfully uploaded part metadata.
192 | 					uploadReq.Part = &objPart
193 | 				}
194 | 				// Send successful part info through the channel.
195 | 				uploadedPartsCh <- uploadedPartRes{
196 | 					Size:    verifyObjPart.Size,
197 | 					PartNum: uploadReq.PartNum,
198 | 					Part:    uploadReq.Part,
199 | 					Error:   nil,
200 | 				}
201 | 			}
202 | 		}()
203 | 	}
204 | 
205 | 	// Gather the responses as they occur and update any
206 | 	// progress bar.
207 | 	for u := 1; u <= totalPartsCount; u++ {
208 | 		uploadRes := <-uploadedPartsCh
209 | 		if uploadRes.Error != nil {
210 | 			return totalUploadedSize, uploadRes.Error
211 | 		}
212 | 		// Retrieve each uploaded part and store it to be completed.
213 | 		// part, ok := partsInfo[uploadRes.PartNum]
214 | 		part := uploadRes.Part
215 | 		if part == nil {
216 | 			return 0, ErrInvalidArgument(fmt.Sprintf("Missing part number %d", uploadRes.PartNum))
217 | 		}
218 | 		// Update the totalUploadedSize.
219 | 		totalUploadedSize += uploadRes.Size
220 | 		// Update the progress bar if there is one.
221 | 		if progress != nil {
222 | 			if _, err = io.CopyN(ioutil.Discard, progress, uploadRes.Size); err != nil {
223 | 				return totalUploadedSize, err
224 | 			}
225 | 		}
226 | 		// Store the parts to be completed in order.
227 | 		complMultipartUpload.Parts = append(complMultipartUpload.Parts, completePart{
228 | 			ETag:       part.ETag,
229 | 			PartNumber: part.PartNumber,
230 | 		})
231 | 	}
232 | 
233 | 	// Verify if we uploaded all the data.
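   | 	// (A mismatch means the reader ended early; it is reported as
   | 	// ErrUnexpectedEOF below.)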
234 | 	if totalUploadedSize != size {
235 | 		return totalUploadedSize, ErrUnexpectedEOF(totalUploadedSize, size, bucketName, objectName)
236 | 	}
237 | 
238 | 	// Sort all completed parts.
239 | 	sort.Sort(completedParts(complMultipartUpload.Parts))
240 | 	_, err = c.completeMultipartUpload(bucketName, objectName, uploadID, complMultipartUpload)
241 | 	if err != nil {
242 | 		return totalUploadedSize, err
243 | 	}
244 | 
245 | 	// Return final size.
246 | 	return totalUploadedSize, nil
247 | }
248 | 
--------------------------------------------------------------------------------
/exec-concurrent/vendor/github.com/minio/minio-go/api-put-object.go:
--------------------------------------------------------------------------------
1 | /*
2 |  * Minio Go Library for Amazon S3 Compatible Cloud Storage (C) 2015, 2016 Minio, Inc.
3 |  *
4 |  * Licensed under the Apache License, Version 2.0 (the "License");
5 |  * you may not use this file except in compliance with the License.
6 |  * You may obtain a copy of the License at
7 |  *
8 |  *     http://www.apache.org/licenses/LICENSE-2.0
9 |  *
10 |  * Unless required by applicable law or agreed to in writing, software
11 |  * distributed under the License is distributed on an "AS IS" BASIS,
12 |  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 |  * See the License for the specific language governing permissions and
14 |  * limitations under the License.
15 |  */
16 | 
17 | package minio
18 | 
19 | import (
20 | 	"bytes"
21 | 	"crypto/md5"
22 | 	"crypto/sha256"
23 | 	"hash"
24 | 	"io"
25 | 	"io/ioutil"
26 | 	"net/http"
27 | 	"os"
28 | 	"reflect"
29 | 	"runtime"
30 | 	"strings"
31 | )
32 | 
33 | // toInt - converts go value to its integer representation based
34 | // on the value kind if it is an integer.
35 | func toInt(value reflect.Value) (size int64) {
36 | 	size = -1
37 | 	if value.IsValid() {
38 | 		switch value.Kind() {
39 | 		case reflect.Int:
40 | 			fallthrough
41 | 		case reflect.Int8:
42 | 			fallthrough
43 | 		case reflect.Int16:
44 | 			fallthrough
45 | 		case reflect.Int32:
46 | 			fallthrough
47 | 		case reflect.Int64:
48 | 			size = value.Int()
49 | 		}
50 | 	}
51 | 	return size
52 | }
53 | 
54 | // getReaderSize - Determine the size of Reader if available.
55 | func getReaderSize(reader io.Reader) (size int64, err error) {
56 | 	size = -1
57 | 	if reader == nil {
58 | 		return -1, nil
59 | 	}
60 | 	// Verify if there is a method by name 'Size'.
61 | 	sizeFn := reflect.ValueOf(reader).MethodByName("Size")
62 | 	// Verify if there is a method by name 'Len'.
63 | 	lenFn := reflect.ValueOf(reader).MethodByName("Len")
64 | 	if sizeFn.IsValid() {
65 | 		if sizeFn.Kind() == reflect.Func {
66 | 			// Call the 'Size' function and save its return value.
67 | 			result := sizeFn.Call([]reflect.Value{})
68 | 			if len(result) == 1 {
69 | 				size = toInt(result[0])
70 | 			}
71 | 		}
72 | 	} else if lenFn.IsValid() {
73 | 		if lenFn.Kind() == reflect.Func {
74 | 			// Call the 'Len' function and save its return value.
75 | 			result := lenFn.Call([]reflect.Value{})
76 | 			if len(result) == 1 {
77 | 				size = toInt(result[0])
78 | 			}
79 | 		}
80 | 	} else {
81 | 		// Fallback to Stat() method, two possible Stat() structs exist.
82 | 		switch v := reader.(type) {
83 | 		case *os.File:
84 | 			var st os.FileInfo
85 | 			st, err = v.Stat()
86 | 			if err != nil {
87 | 				// Handle this case specially for "windows": for certain
88 | 				// files, such as 'Stdin', 'Stdout' and 'Stderr', it is
89 | 				// not allowed to fetch file information.
90 | if runtime.GOOS == "windows" { 91 | if strings.Contains(err.Error(), "GetFileInformationByHandle") { 92 | return -1, nil 93 | } 94 | } 95 | return 96 | } 97 | // Ignore if input is a directory, throw an error. 98 | if st.Mode().IsDir() { 99 | return -1, ErrInvalidArgument("Input file cannot be a directory.") 100 | } 101 | // Ignore 'Stdin', 'Stdout' and 'Stderr', since they 102 | // represent *os.File type but internally do not 103 | // implement Seekable calls. Ignore them and treat 104 | // them like a stream with unknown length. 105 | switch st.Name() { 106 | case "stdin", "stdout", "stderr": 107 | return 108 | // Ignore read/write stream of os.Pipe() which have unknown length too. 109 | case "|0", "|1": 110 | return 111 | } 112 | size = st.Size() 113 | case *Object: 114 | var st ObjectInfo 115 | st, err = v.Stat() 116 | if err != nil { 117 | return 118 | } 119 | size = st.Size 120 | } 121 | } 122 | // Returns the size here. 123 | return size, err 124 | } 125 | 126 | // completedParts is a collection of parts sortable by their part numbers. 127 | // used for sorting the uploaded parts before completing the multipart request. 128 | type completedParts []completePart 129 | 130 | func (a completedParts) Len() int { return len(a) } 131 | func (a completedParts) Swap(i, j int) { a[i], a[j] = a[j], a[i] } 132 | func (a completedParts) Less(i, j int) bool { return a[i].PartNumber < a[j].PartNumber } 133 | 134 | // PutObject creates an object in a bucket. 135 | // 136 | // You must have WRITE permissions on a bucket to create an object. 137 | // 138 | // - For size smaller than 5MiB PutObject automatically does a single atomic Put operation. 139 | // - For size larger than 5MiB PutObject automatically does a resumable multipart Put operation. 140 | // - For size input as -1 PutObject does a multipart Put operation until input stream reaches EOF. 141 | // Maximum object size that can be uploaded through this operation will be 5TiB. 142 | // 143 | // NOTE: Google Cloud Storage does not implement Amazon S3 Compatible multipart PUT. 144 | // So we fall back to single PUT operation with the maximum limit of 5GiB. 145 | // 146 | // NOTE: For anonymous requests Amazon S3 doesn't allow multipart upload. So we fall back to single PUT operation. 147 | func (c Client) PutObject(bucketName, objectName string, reader io.Reader, contentType string) (n int64, err error) { 148 | return c.PutObjectWithProgress(bucketName, objectName, reader, contentType, nil) 149 | } 150 | 151 | // putObjectNoChecksum special function used Google Cloud Storage. This special function 152 | // is used for Google Cloud Storage since Google's multipart API is not S3 compatible. 153 | func (c Client) putObjectNoChecksum(bucketName, objectName string, reader io.Reader, size int64, metaData map[string][]string, progress io.Reader) (n int64, err error) { 154 | // Input validation. 155 | if err := isValidBucketName(bucketName); err != nil { 156 | return 0, err 157 | } 158 | if err := isValidObjectName(objectName); err != nil { 159 | return 0, err 160 | } 161 | if size > maxSinglePutObjectSize { 162 | return 0, ErrEntityTooLarge(size, maxSinglePutObjectSize, bucketName, objectName) 163 | } 164 | 165 | // Update progress reader appropriately to the latest offset as we 166 | // read from the source. 167 | readSeeker := newHook(reader, progress) 168 | 169 | // This function does not calculate sha256 and md5sum for payload. 170 | // Execute put object. 
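	// (md5Sum and sha256Sum are passed as nil below, so the request
	// carries no payload checksums; acceptable here since this path
	// only serves Google Cloud Storage uploads.)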
171 | st, err := c.putObjectDo(bucketName, objectName, readSeeker, nil, nil, size, metaData) 172 | if err != nil { 173 | return 0, err 174 | } 175 | if st.Size != size { 176 | return 0, ErrUnexpectedEOF(st.Size, size, bucketName, objectName) 177 | } 178 | return size, nil 179 | } 180 | 181 | // putObjectSingle is a special function for uploading single put object request. 182 | // This special function is used as a fallback when multipart upload fails. 183 | func (c Client) putObjectSingle(bucketName, objectName string, reader io.Reader, size int64, metaData map[string][]string, progress io.Reader) (n int64, err error) { 184 | // Input validation. 185 | if err := isValidBucketName(bucketName); err != nil { 186 | return 0, err 187 | } 188 | if err := isValidObjectName(objectName); err != nil { 189 | return 0, err 190 | } 191 | if size > maxSinglePutObjectSize { 192 | return 0, ErrEntityTooLarge(size, maxSinglePutObjectSize, bucketName, objectName) 193 | } 194 | // If size is a stream, upload up to 5GiB. 195 | if size <= -1 { 196 | size = maxSinglePutObjectSize 197 | } 198 | 199 | // Add the appropriate hash algorithms that need to be calculated by hashCopyN 200 | // In case of non-v4 signature request or HTTPS connection, sha256 is not needed. 201 | hashAlgos := make(map[string]hash.Hash) 202 | hashSums := make(map[string][]byte) 203 | hashAlgos["md5"] = md5.New() 204 | if c.signature.isV4() && !c.secure { 205 | hashAlgos["sha256"] = sha256.New() 206 | } 207 | 208 | if size <= minPartSize { 209 | // Initialize a new temporary buffer. 210 | tmpBuffer := new(bytes.Buffer) 211 | size, err = hashCopyN(hashAlgos, hashSums, tmpBuffer, reader, size) 212 | reader = bytes.NewReader(tmpBuffer.Bytes()) 213 | tmpBuffer.Reset() 214 | } else { 215 | // Initialize a new temporary file. 216 | var tmpFile *tempFile 217 | tmpFile, err = newTempFile("single$-putobject-single") 218 | if err != nil { 219 | return 0, err 220 | } 221 | defer tmpFile.Close() 222 | size, err = hashCopyN(hashAlgos, hashSums, tmpFile, reader, size) 223 | if err != nil { 224 | return 0, err 225 | } 226 | // Seek back to beginning of the temporary file. 227 | if _, err = tmpFile.Seek(0, 0); err != nil { 228 | return 0, err 229 | } 230 | reader = tmpFile 231 | } 232 | // Return error if its not io.EOF. 233 | if err != nil { 234 | if err != io.EOF { 235 | return 0, err 236 | } 237 | } 238 | // Execute put object. 239 | st, err := c.putObjectDo(bucketName, objectName, reader, hashSums["md5"], hashSums["sha256"], size, metaData) 240 | if err != nil { 241 | return 0, err 242 | } 243 | if st.Size != size { 244 | return 0, ErrUnexpectedEOF(st.Size, size, bucketName, objectName) 245 | } 246 | // Progress the reader to the size if putObjectDo is successful. 247 | if progress != nil { 248 | if _, err = io.CopyN(ioutil.Discard, progress, size); err != nil { 249 | return size, err 250 | } 251 | } 252 | return size, nil 253 | } 254 | 255 | // putObjectDo - executes the put object http operation. 256 | // NOTE: You must have WRITE permissions on a bucket to add an object to it. 257 | func (c Client) putObjectDo(bucketName, objectName string, reader io.Reader, md5Sum []byte, sha256Sum []byte, size int64, metaData map[string][]string) (ObjectInfo, error) { 258 | // Input validation. 
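	// (The names are validated again here, even though the exported
	// callers already validate them, because putObjectDo is the shared
	// low-level entry point for every single-PUT code path.)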
259 | if err := isValidBucketName(bucketName); err != nil { 260 | return ObjectInfo{}, err 261 | } 262 | if err := isValidObjectName(objectName); err != nil { 263 | return ObjectInfo{}, err 264 | } 265 | 266 | if size <= -1 { 267 | return ObjectInfo{}, ErrEntityTooSmall(size, bucketName, objectName) 268 | } 269 | 270 | if size > maxSinglePutObjectSize { 271 | return ObjectInfo{}, ErrEntityTooLarge(size, maxSinglePutObjectSize, bucketName, objectName) 272 | } 273 | 274 | // Set headers. 275 | customHeader := make(http.Header) 276 | 277 | // Set metadata to headers 278 | for k, v := range metaData { 279 | if len(v) > 0 { 280 | customHeader.Set(k, v[0]) 281 | } 282 | } 283 | 284 | // If Content-Type is not provided, set the default application/octet-stream one 285 | if v, ok := metaData["Content-Type"]; !ok || len(v) == 0 { 286 | customHeader.Set("Content-Type", "application/octet-stream") 287 | } 288 | 289 | // Populate request metadata. 290 | reqMetadata := requestMetadata{ 291 | bucketName: bucketName, 292 | objectName: objectName, 293 | customHeader: customHeader, 294 | contentBody: reader, 295 | contentLength: size, 296 | contentMD5Bytes: md5Sum, 297 | contentSHA256Bytes: sha256Sum, 298 | } 299 | 300 | // Execute PUT an objectName. 301 | resp, err := c.executeMethod("PUT", reqMetadata) 302 | defer closeResponse(resp) 303 | if err != nil { 304 | return ObjectInfo{}, err 305 | } 306 | if resp != nil { 307 | if resp.StatusCode != http.StatusOK { 308 | return ObjectInfo{}, httpRespToErrorResponse(resp, bucketName, objectName) 309 | } 310 | } 311 | 312 | var objInfo ObjectInfo 313 | // Trim off the odd double quotes from ETag in the beginning and end. 314 | objInfo.ETag = strings.TrimPrefix(resp.Header.Get("ETag"), "\"") 315 | objInfo.ETag = strings.TrimSuffix(objInfo.ETag, "\"") 316 | // A success here means data was written to server successfully. 317 | objInfo.Size = size 318 | 319 | // Return here. 320 | return objInfo, nil 321 | } 322 | -------------------------------------------------------------------------------- /exec-concurrent/vendor/github.com/minio/minio-go/api-remove.go: -------------------------------------------------------------------------------- 1 | /* 2 | * Minio Go Library for Amazon S3 Compatible Cloud Storage (C) 2015, 2016 Minio, Inc. 3 | * 4 | * Licensed under the Apache License, Version 2.0 (the "License"); 5 | * you may not use this file except in compliance with the License. 6 | * You may obtain a copy of the License at 7 | * 8 | * http://www.apache.org/licenses/LICENSE-2.0 9 | * 10 | * Unless required by applicable law or agreed to in writing, software 11 | * distributed under the License is distributed on an "AS IS" BASIS, 12 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | * See the License for the specific language governing permissions and 14 | * limitations under the License. 15 | */ 16 | 17 | package minio 18 | 19 | import ( 20 | "bytes" 21 | "encoding/xml" 22 | "io" 23 | "net/http" 24 | "net/url" 25 | ) 26 | 27 | // RemoveBucket deletes the bucket name. 28 | // 29 | // All objects (including all object versions and delete markers). 30 | // in the bucket must be deleted before successfully attempting this request. 31 | func (c Client) RemoveBucket(bucketName string) error { 32 | // Input validation. 33 | if err := isValidBucketName(bucketName); err != nil { 34 | return err 35 | } 36 | // Execute DELETE on bucket. 
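	// (S3 DeleteBucket replies 204 No Content on success, hence the
	// StatusNoContent check below rather than StatusOK.)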
37 | resp, err := c.executeMethod("DELETE", requestMetadata{ 38 | bucketName: bucketName, 39 | }) 40 | defer closeResponse(resp) 41 | if err != nil { 42 | return err 43 | } 44 | if resp != nil { 45 | if resp.StatusCode != http.StatusNoContent { 46 | return httpRespToErrorResponse(resp, bucketName, "") 47 | } 48 | } 49 | 50 | // Remove the location from cache on a successful delete. 51 | c.bucketLocCache.Delete(bucketName) 52 | 53 | return nil 54 | } 55 | 56 | // RemoveObject remove an object from a bucket. 57 | func (c Client) RemoveObject(bucketName, objectName string) error { 58 | // Input validation. 59 | if err := isValidBucketName(bucketName); err != nil { 60 | return err 61 | } 62 | if err := isValidObjectName(objectName); err != nil { 63 | return err 64 | } 65 | // Execute DELETE on objectName. 66 | resp, err := c.executeMethod("DELETE", requestMetadata{ 67 | bucketName: bucketName, 68 | objectName: objectName, 69 | }) 70 | defer closeResponse(resp) 71 | if err != nil { 72 | return err 73 | } 74 | if resp != nil { 75 | // if some unexpected error happened and max retry is reached, we want to let client know 76 | if resp.StatusCode != http.StatusNoContent { 77 | return httpRespToErrorResponse(resp, bucketName, objectName) 78 | } 79 | } 80 | 81 | // DeleteObject always responds with http '204' even for 82 | // objects which do not exist. So no need to handle them 83 | // specifically. 84 | return nil 85 | } 86 | 87 | // RemoveObjectError - container of Multi Delete S3 API error 88 | type RemoveObjectError struct { 89 | ObjectName string 90 | Err error 91 | } 92 | 93 | // generateRemoveMultiObjects - generate the XML request for remove multi objects request 94 | func generateRemoveMultiObjectsRequest(objects []string) []byte { 95 | rmObjects := []deleteObject{} 96 | for _, obj := range objects { 97 | rmObjects = append(rmObjects, deleteObject{Key: obj}) 98 | } 99 | xmlBytes, _ := xml.Marshal(deleteMultiObjects{Objects: rmObjects, Quiet: true}) 100 | return xmlBytes 101 | } 102 | 103 | // processRemoveMultiObjectsResponse - parse the remove multi objects web service 104 | // and return the success/failure result status for each object 105 | func processRemoveMultiObjectsResponse(body io.Reader, objects []string, errorCh chan<- RemoveObjectError) { 106 | // Parse multi delete XML response 107 | rmResult := &deleteMultiObjectsResult{} 108 | err := xmlDecoder(body, rmResult) 109 | if err != nil { 110 | errorCh <- RemoveObjectError{ObjectName: "", Err: err} 111 | return 112 | } 113 | 114 | // Fill deletion that returned an error. 115 | for _, obj := range rmResult.UnDeletedObjects { 116 | errorCh <- RemoveObjectError{ 117 | ObjectName: obj.Key, 118 | Err: ErrorResponse{ 119 | Code: obj.Code, 120 | Message: obj.Message, 121 | }, 122 | } 123 | } 124 | } 125 | 126 | // RemoveObjects remove multiples objects from a bucket. 127 | // The list of objects to remove are received from objectsCh. 128 | // Remove failures are sent back via error channel. 129 | func (c Client) RemoveObjects(bucketName string, objectsCh <-chan string) <-chan RemoveObjectError { 130 | errorCh := make(chan RemoveObjectError, 1) 131 | 132 | // Validate if bucket name is valid. 133 | if err := isValidBucketName(bucketName); err != nil { 134 | defer close(errorCh) 135 | errorCh <- RemoveObjectError{ 136 | Err: err, 137 | } 138 | return errorCh 139 | } 140 | // Validate objects channel to be properly allocated. 
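	// (Ranging over a nil channel would block the batching goroutine
	// below forever, so a nil channel is rejected up front.)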
141 | 	if objectsCh == nil {
142 | 		defer close(errorCh)
143 | 		errorCh <- RemoveObjectError{
144 | 			Err: ErrInvalidArgument("Objects channel cannot be nil"),
145 | 		}
146 | 		return errorCh
147 | 	}
148 | 
149 | 	// Generate and call MultiDelete S3 requests based on entries received from objectsCh
150 | 	go func(errorCh chan<- RemoveObjectError) {
151 | 		maxEntries := 1000
152 | 		finish := false
153 | 		urlValues := make(url.Values)
154 | 		urlValues.Set("delete", "")
155 | 
156 | 		// Close error channel when Multi delete finishes.
157 | 		defer close(errorCh)
158 | 
159 | 		// Loop over entries by 1000 and call MultiDelete requests
160 | 		for {
161 | 			if finish {
162 | 				break
163 | 			}
164 | 			count := 0
165 | 			var batch []string
166 | 
167 | 			// Try to gather 1000 entries
168 | 			for object := range objectsCh {
169 | 				batch = append(batch, object)
170 | 				if count++; count >= maxEntries {
171 | 					break
172 | 				}
173 | 			}
174 | 			if count == 0 {
175 | 				// Multi Objects Delete API doesn't accept an empty object list, quit immediately
176 | 				break
177 | 			}
178 | 			if count < maxEntries {
179 | 				// We didn't have 1000 entries, so this is the last batch
180 | 				finish = true
181 | 			}
182 | 
183 | 			// Generate remove multi objects XML request
184 | 			removeBytes := generateRemoveMultiObjectsRequest(batch)
185 | 			// Execute POST on bucket to delete the gathered batch of objects.
186 | 			resp, err := c.executeMethod("POST", requestMetadata{
187 | 				bucketName:         bucketName,
188 | 				queryValues:        urlValues,
189 | 				contentBody:        bytes.NewReader(removeBytes),
190 | 				contentLength:      int64(len(removeBytes)),
191 | 				contentMD5Bytes:    sumMD5(removeBytes),
192 | 				contentSHA256Bytes: sum256(removeBytes),
193 | 			})
194 | 			if err != nil {
195 | 				for _, b := range batch {
196 | 					errorCh <- RemoveObjectError{ObjectName: b, Err: err}
197 | 				}
198 | 				continue
199 | 			}
200 | 
201 | 			// Process multi objects remove xml response
202 | 			processRemoveMultiObjectsResponse(resp.Body, batch, errorCh)
203 | 
204 | 			closeResponse(resp)
205 | 		}
206 | 	}(errorCh)
207 | 	return errorCh
208 | }
209 | 
210 | // RemoveIncompleteUpload aborts a partially uploaded object.
211 | // Requires explicit authentication, no anonymous requests are allowed for multipart API.
212 | func (c Client) RemoveIncompleteUpload(bucketName, objectName string) error {
213 | 	// Input validation.
214 | 	if err := isValidBucketName(bucketName); err != nil {
215 | 		return err
216 | 	}
217 | 	if err := isValidObjectName(objectName); err != nil {
218 | 		return err
219 | 	}
220 | 	// Find multipart upload id of the object to be aborted.
221 | 	uploadID, err := c.findUploadID(bucketName, objectName)
222 | 	if err != nil {
223 | 		return err
224 | 	}
225 | 	if uploadID != "" {
226 | 		// Upload id found, abort the incomplete multipart upload.
227 | 		err := c.abortMultipartUpload(bucketName, objectName, uploadID)
228 | 		if err != nil {
229 | 			return err
230 | 		}
231 | 	}
232 | 	return nil
233 | }
234 | 
235 | // abortMultipartUpload aborts a multipart upload for the given
236 | // uploadID, all previously uploaded parts are deleted.
237 | func (c Client) abortMultipartUpload(bucketName, objectName, uploadID string) error {
238 | 	// Input validation.
239 | 	if err := isValidBucketName(bucketName); err != nil {
240 | 		return err
241 | 	}
242 | 	if err := isValidObjectName(objectName); err != nil {
243 | 		return err
244 | 	}
245 | 
246 | 	// Initialize url queries.
247 | 	urlValues := make(url.Values)
248 | 	urlValues.Set("uploadId", uploadID)
249 | 
250 | 	// Execute DELETE on multipart upload.
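	// (Abort is a DELETE on the object key with the uploadId query
	// parameter set above; success is again 204 No Content.)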
251 | resp, err := c.executeMethod("DELETE", requestMetadata{ 252 | bucketName: bucketName, 253 | objectName: objectName, 254 | queryValues: urlValues, 255 | }) 256 | defer closeResponse(resp) 257 | if err != nil { 258 | return err 259 | } 260 | if resp != nil { 261 | if resp.StatusCode != http.StatusNoContent { 262 | // Abort has no response body, handle it for any errors. 263 | var errorResponse ErrorResponse 264 | switch resp.StatusCode { 265 | case http.StatusNotFound: 266 | // This is needed specifically for abort and it cannot 267 | // be converged into default case. 268 | errorResponse = ErrorResponse{ 269 | Code: "NoSuchUpload", 270 | Message: "The specified multipart upload does not exist.", 271 | BucketName: bucketName, 272 | Key: objectName, 273 | RequestID: resp.Header.Get("x-amz-request-id"), 274 | HostID: resp.Header.Get("x-amz-id-2"), 275 | Region: resp.Header.Get("x-amz-bucket-region"), 276 | } 277 | default: 278 | return httpRespToErrorResponse(resp, bucketName, objectName) 279 | } 280 | return errorResponse 281 | } 282 | } 283 | return nil 284 | } 285 | -------------------------------------------------------------------------------- /exec-concurrent/vendor/github.com/minio/minio-go/api-s3-datatypes.go: -------------------------------------------------------------------------------- 1 | /* 2 | * Minio Go Library for Amazon S3 Compatible Cloud Storage (C) 2015 Minio, Inc. 3 | * 4 | * Licensed under the Apache License, Version 2.0 (the "License"); 5 | * you may not use this file except in compliance with the License. 6 | * You may obtain a copy of the License at 7 | * 8 | * http://www.apache.org/licenses/LICENSE-2.0 9 | * 10 | * Unless required by applicable law or agreed to in writing, software 11 | * distributed under the License is distributed on an "AS IS" BASIS, 12 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | * See the License for the specific language governing permissions and 14 | * limitations under the License. 15 | */ 16 | 17 | package minio 18 | 19 | import ( 20 | "encoding/xml" 21 | "time" 22 | ) 23 | 24 | // listAllMyBucketsResult container for listBuckets response. 25 | type listAllMyBucketsResult struct { 26 | // Container for one or more buckets. 27 | Buckets struct { 28 | Bucket []BucketInfo 29 | } 30 | Owner owner 31 | } 32 | 33 | // owner container for bucket owner information. 34 | type owner struct { 35 | DisplayName string 36 | ID string 37 | } 38 | 39 | // commonPrefix container for prefix response. 40 | type commonPrefix struct { 41 | Prefix string 42 | } 43 | 44 | // listBucketResult container for listObjects V2 response. 45 | type listBucketV2Result struct { 46 | // A response can contain CommonPrefixes only if you have 47 | // specified a delimiter. 48 | CommonPrefixes []commonPrefix 49 | // Metadata about each object returned. 50 | Contents []ObjectInfo 51 | Delimiter string 52 | 53 | // Encoding type used to encode object keys in the response. 54 | EncodingType string 55 | 56 | // A flag that indicates whether or not ListObjects returned all of the results 57 | // that satisfied the search criteria. 
58 | IsTruncated bool 59 | MaxKeys int64 60 | Name string 61 | 62 | // Hold the token that will be sent in the next request to fetch the next group of keys 63 | NextContinuationToken string 64 | 65 | ContinuationToken string 66 | Prefix string 67 | 68 | // FetchOwner and StartAfter are currently not used 69 | FetchOwner string 70 | StartAfter string 71 | } 72 | 73 | // listBucketResult container for listObjects response. 74 | type listBucketResult struct { 75 | // A response can contain CommonPrefixes only if you have 76 | // specified a delimiter. 77 | CommonPrefixes []commonPrefix 78 | // Metadata about each object returned. 79 | Contents []ObjectInfo 80 | Delimiter string 81 | 82 | // Encoding type used to encode object keys in the response. 83 | EncodingType string 84 | 85 | // A flag that indicates whether or not ListObjects returned all of the results 86 | // that satisfied the search criteria. 87 | IsTruncated bool 88 | Marker string 89 | MaxKeys int64 90 | Name string 91 | 92 | // When response is truncated (the IsTruncated element value in 93 | // the response is true), you can use the key name in this field 94 | // as marker in the subsequent request to get next set of objects. 95 | // Object storage lists objects in alphabetical order Note: This 96 | // element is returned only if you have delimiter request 97 | // parameter specified. If response does not include the NextMaker 98 | // and it is truncated, you can use the value of the last Key in 99 | // the response as the marker in the subsequent request to get the 100 | // next set of object keys. 101 | NextMarker string 102 | Prefix string 103 | } 104 | 105 | // listMultipartUploadsResult container for ListMultipartUploads response 106 | type listMultipartUploadsResult struct { 107 | Bucket string 108 | KeyMarker string 109 | UploadIDMarker string `xml:"UploadIdMarker"` 110 | NextKeyMarker string 111 | NextUploadIDMarker string `xml:"NextUploadIdMarker"` 112 | EncodingType string 113 | MaxUploads int64 114 | IsTruncated bool 115 | Uploads []ObjectMultipartInfo `xml:"Upload"` 116 | Prefix string 117 | Delimiter string 118 | // A response can contain CommonPrefixes only if you specify a delimiter. 119 | CommonPrefixes []commonPrefix 120 | } 121 | 122 | // initiator container for who initiated multipart upload. 123 | type initiator struct { 124 | ID string 125 | DisplayName string 126 | } 127 | 128 | // copyObjectResult container for copy object response. 129 | type copyObjectResult struct { 130 | ETag string 131 | LastModified string // time string format "2006-01-02T15:04:05.000Z" 132 | } 133 | 134 | // objectPart container for particular part of an object. 135 | type objectPart struct { 136 | // Part number identifies the part. 137 | PartNumber int 138 | 139 | // Date and time the part was uploaded. 140 | LastModified time.Time 141 | 142 | // Entity tag returned when the part was uploaded, usually md5sum 143 | // of the part. 144 | ETag string 145 | 146 | // Size of the uploaded part data. 147 | Size int64 148 | } 149 | 150 | // listObjectPartsResult container for ListObjectParts response. 151 | type listObjectPartsResult struct { 152 | Bucket string 153 | Key string 154 | UploadID string `xml:"UploadId"` 155 | 156 | Initiator initiator 157 | Owner owner 158 | 159 | StorageClass string 160 | PartNumberMarker int 161 | NextPartNumberMarker int 162 | MaxParts int 163 | 164 | // Indicates whether the returned list of parts is truncated. 
165 | IsTruncated bool 166 | ObjectParts []objectPart `xml:"Part"` 167 | 168 | EncodingType string 169 | } 170 | 171 | // initiateMultipartUploadResult container for InitiateMultiPartUpload 172 | // response. 173 | type initiateMultipartUploadResult struct { 174 | Bucket string 175 | Key string 176 | UploadID string `xml:"UploadId"` 177 | } 178 | 179 | // completeMultipartUploadResult container for completed multipart 180 | // upload response. 181 | type completeMultipartUploadResult struct { 182 | Location string 183 | Bucket string 184 | Key string 185 | ETag string 186 | } 187 | 188 | // completePart sub container lists individual part numbers and their 189 | // md5sum, part of completeMultipartUpload. 190 | type completePart struct { 191 | XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ Part" json:"-"` 192 | 193 | // Part number identifies the part. 194 | PartNumber int 195 | ETag string 196 | } 197 | 198 | // completeMultipartUpload container for completing multipart upload. 199 | type completeMultipartUpload struct { 200 | XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ CompleteMultipartUpload" json:"-"` 201 | Parts []completePart `xml:"Part"` 202 | } 203 | 204 | // createBucketConfiguration container for bucket configuration. 205 | type createBucketConfiguration struct { 206 | XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ CreateBucketConfiguration" json:"-"` 207 | Location string `xml:"LocationConstraint"` 208 | } 209 | 210 | // deleteObject container for Delete element in MultiObjects Delete XML request 211 | type deleteObject struct { 212 | Key string 213 | VersionID string `xml:"VersionId,omitempty"` 214 | } 215 | 216 | // deletedObject container for Deleted element in MultiObjects Delete XML response 217 | type deletedObject struct { 218 | Key string 219 | VersionID string `xml:"VersionId,omitempty"` 220 | // These fields are ignored. 221 | DeleteMarker bool 222 | DeleteMarkerVersionID string 223 | } 224 | 225 | // nonDeletedObject container for Error element (failed deletion) in MultiObjects Delete XML response 226 | type nonDeletedObject struct { 227 | Key string 228 | Code string 229 | Message string 230 | } 231 | 232 | // deletedMultiObjects container for MultiObjects Delete XML request 233 | type deleteMultiObjects struct { 234 | XMLName xml.Name `xml:"Delete"` 235 | Quiet bool 236 | Objects []deleteObject `xml:"Object"` 237 | } 238 | 239 | // deletedMultiObjectsResult container for MultiObjects Delete XML response 240 | type deleteMultiObjectsResult struct { 241 | XMLName xml.Name `xml:"DeleteResult"` 242 | DeletedObjects []deletedObject `xml:"Deleted"` 243 | UnDeletedObjects []nonDeletedObject `xml:"Error"` 244 | } 245 | -------------------------------------------------------------------------------- /exec-concurrent/vendor/github.com/minio/minio-go/api-stat.go: -------------------------------------------------------------------------------- 1 | /* 2 | * Minio Go Library for Amazon S3 Compatible Cloud Storage (C) 2015, 2016 Minio, Inc. 3 | * 4 | * Licensed under the Apache License, Version 2.0 (the "License"); 5 | * you may not use this file except in compliance with the License. 6 | * You may obtain a copy of the License at 7 | * 8 | * http://www.apache.org/licenses/LICENSE-2.0 9 | * 10 | * Unless required by applicable law or agreed to in writing, software 11 | * distributed under the License is distributed on an "AS IS" BASIS, 12 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
13 | * See the License for the specific language governing permissions and 14 | * limitations under the License. 15 | */ 16 | 17 | package minio 18 | 19 | import ( 20 | "net/http" 21 | "strconv" 22 | "strings" 23 | "time" 24 | 25 | "github.com/minio/minio-go/pkg/s3utils" 26 | ) 27 | 28 | // BucketExists verify if bucket exists and you have permission to access it. 29 | func (c Client) BucketExists(bucketName string) (bool, error) { 30 | // Input validation. 31 | if err := isValidBucketName(bucketName); err != nil { 32 | return false, err 33 | } 34 | 35 | // Execute HEAD on bucketName. 36 | resp, err := c.executeMethod("HEAD", requestMetadata{ 37 | bucketName: bucketName, 38 | }) 39 | defer closeResponse(resp) 40 | if err != nil { 41 | if ToErrorResponse(err).Code == "NoSuchBucket" { 42 | return false, nil 43 | } 44 | return false, err 45 | } 46 | if resp != nil { 47 | if resp.StatusCode != http.StatusOK { 48 | return false, httpRespToErrorResponse(resp, bucketName, "") 49 | } 50 | } 51 | return true, nil 52 | } 53 | 54 | // List of header keys to be filtered, usually 55 | // from all S3 API http responses. 56 | var defaultFilterKeys = []string{ 57 | "Transfer-Encoding", 58 | "Accept-Ranges", 59 | "Date", 60 | "Server", 61 | "Vary", 62 | "x-amz-request-id", 63 | "x-amz-id-2", 64 | // Add new headers to be ignored. 65 | } 66 | 67 | // Extract only necessary metadata header key/values by 68 | // filtering them out with a list of custom header keys. 69 | func extractObjMetadata(header http.Header) http.Header { 70 | filterKeys := append([]string{ 71 | "ETag", 72 | "Content-Length", 73 | "Last-Modified", 74 | "Content-Type", 75 | }, defaultFilterKeys...) 76 | return filterHeader(header, filterKeys) 77 | } 78 | 79 | // StatObject verifies if object exists and you have permission to access. 80 | func (c Client) StatObject(bucketName, objectName string) (ObjectInfo, error) { 81 | // Input validation. 82 | if err := isValidBucketName(bucketName); err != nil { 83 | return ObjectInfo{}, err 84 | } 85 | if err := isValidObjectName(objectName); err != nil { 86 | return ObjectInfo{}, err 87 | } 88 | 89 | // Execute HEAD on objectName. 90 | resp, err := c.executeMethod("HEAD", requestMetadata{ 91 | bucketName: bucketName, 92 | objectName: objectName, 93 | }) 94 | defer closeResponse(resp) 95 | if err != nil { 96 | return ObjectInfo{}, err 97 | } 98 | if resp != nil { 99 | if resp.StatusCode != http.StatusOK { 100 | return ObjectInfo{}, httpRespToErrorResponse(resp, bucketName, objectName) 101 | } 102 | } 103 | 104 | // Trim off the odd double quotes from ETag in the beginning and end. 105 | md5sum := strings.TrimPrefix(resp.Header.Get("ETag"), "\"") 106 | md5sum = strings.TrimSuffix(md5sum, "\"") 107 | 108 | // Content-Length is not valid for Google Cloud Storage, do not verify. 109 | var size int64 = -1 110 | if !s3utils.IsGoogleEndpoint(c.endpointURL) { 111 | // Parse content length. 112 | size, err = strconv.ParseInt(resp.Header.Get("Content-Length"), 10, 64) 113 | if err != nil { 114 | return ObjectInfo{}, ErrorResponse{ 115 | Code: "InternalError", 116 | Message: "Content-Length is invalid. " + reportIssue, 117 | BucketName: bucketName, 118 | Key: objectName, 119 | RequestID: resp.Header.Get("x-amz-request-id"), 120 | HostID: resp.Header.Get("x-amz-id-2"), 121 | Region: resp.Header.Get("x-amz-bucket-region"), 122 | } 123 | } 124 | } 125 | // Parse Last-Modified has http time format. 
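	// (http.TimeFormat is "Mon, 02 Jan 2006 15:04:05 GMT", the RFC 1123
	// GMT layout S3 uses for the Last-Modified header.)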
126 | date, err := time.Parse(http.TimeFormat, resp.Header.Get("Last-Modified")) 127 | if err != nil { 128 | return ObjectInfo{}, ErrorResponse{ 129 | Code: "InternalError", 130 | Message: "Last-Modified time format is invalid. " + reportIssue, 131 | BucketName: bucketName, 132 | Key: objectName, 133 | RequestID: resp.Header.Get("x-amz-request-id"), 134 | HostID: resp.Header.Get("x-amz-id-2"), 135 | Region: resp.Header.Get("x-amz-bucket-region"), 136 | } 137 | } 138 | // Fetch content type if any present. 139 | contentType := strings.TrimSpace(resp.Header.Get("Content-Type")) 140 | if contentType == "" { 141 | contentType = "application/octet-stream" 142 | } 143 | 144 | // Extract only the relevant header keys describing the object. 145 | // following function filters out a list of standard set of keys 146 | // which are not part of object metadata. 147 | metadata := extractObjMetadata(resp.Header) 148 | 149 | // Save object metadata info. 150 | return ObjectInfo{ 151 | ETag: md5sum, 152 | Key: objectName, 153 | Size: size, 154 | LastModified: date, 155 | ContentType: contentType, 156 | Metadata: metadata, 157 | }, nil 158 | } 159 | -------------------------------------------------------------------------------- /exec-concurrent/vendor/github.com/minio/minio-go/appveyor.yml: -------------------------------------------------------------------------------- 1 | # version format 2 | version: "{build}" 3 | 4 | # Operating system (build VM template) 5 | os: Windows Server 2012 R2 6 | 7 | clone_folder: c:\gopath\src\github.com\minio\minio-go 8 | 9 | # environment variables 10 | environment: 11 | GOPATH: c:\gopath 12 | GO15VENDOREXPERIMENT: 1 13 | 14 | # scripts that run after cloning repository 15 | install: 16 | - set PATH=%GOPATH%\bin;c:\go\bin;%PATH% 17 | - go version 18 | - go env 19 | - go get -u github.com/golang/lint/golint 20 | - go get -u github.com/remyoudompheng/go-misc/deadcode 21 | - go get -u github.com/gordonklaus/ineffassign 22 | 23 | # to run your custom scripts instead of automatic MSBuild 24 | build_script: 25 | - go vet ./... 26 | - gofmt -s -l . 27 | - golint github.com/minio/minio-go... 28 | - deadcode 29 | - ineffassign . 30 | - go test -short -v 31 | - go test -short -race -v 32 | 33 | # to disable automatic tests 34 | test: off 35 | 36 | # to disable deployment 37 | deploy: off 38 | -------------------------------------------------------------------------------- /exec-concurrent/vendor/github.com/minio/minio-go/bucket-cache.go: -------------------------------------------------------------------------------- 1 | /* 2 | * Minio Go Library for Amazon S3 Compatible Cloud Storage (C) 2015 Minio, Inc. 3 | * 4 | * Licensed under the Apache License, Version 2.0 (the "License"); 5 | * you may not use this file except in compliance with the License. 6 | * You may obtain a copy of the License at 7 | * 8 | * http://www.apache.org/licenses/LICENSE-2.0 9 | * 10 | * Unless required by applicable law or agreed to in writing, software 11 | * distributed under the License is distributed on an "AS IS" BASIS, 12 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | * See the License for the specific language governing permissions and 14 | * limitations under the License. 
15 | */ 16 | 17 | package minio 18 | 19 | import ( 20 | "encoding/hex" 21 | "net/http" 22 | "net/url" 23 | "path" 24 | "strings" 25 | "sync" 26 | 27 | "github.com/minio/minio-go/pkg/s3signer" 28 | "github.com/minio/minio-go/pkg/s3utils" 29 | ) 30 | 31 | // bucketLocationCache - Provides simple mechanism to hold bucket 32 | // locations in memory. 33 | type bucketLocationCache struct { 34 | // mutex is used for handling the concurrent 35 | // read/write requests for cache. 36 | sync.RWMutex 37 | 38 | // items holds the cached bucket locations. 39 | items map[string]string 40 | } 41 | 42 | // newBucketLocationCache - Provides a new bucket location cache to be 43 | // used internally with the client object. 44 | func newBucketLocationCache() *bucketLocationCache { 45 | return &bucketLocationCache{ 46 | items: make(map[string]string), 47 | } 48 | } 49 | 50 | // Get - Returns a value of a given key if it exists. 51 | func (r *bucketLocationCache) Get(bucketName string) (location string, ok bool) { 52 | r.RLock() 53 | defer r.RUnlock() 54 | location, ok = r.items[bucketName] 55 | return 56 | } 57 | 58 | // Set - Will persist a value into cache. 59 | func (r *bucketLocationCache) Set(bucketName string, location string) { 60 | r.Lock() 61 | defer r.Unlock() 62 | r.items[bucketName] = location 63 | } 64 | 65 | // Delete - Deletes a bucket name from cache. 66 | func (r *bucketLocationCache) Delete(bucketName string) { 67 | r.Lock() 68 | defer r.Unlock() 69 | delete(r.items, bucketName) 70 | } 71 | 72 | // GetBucketLocation - get location for the bucket name from location cache, if not 73 | // fetch freshly by making a new request. 74 | func (c Client) GetBucketLocation(bucketName string) (string, error) { 75 | if err := isValidBucketName(bucketName); err != nil { 76 | return "", err 77 | } 78 | return c.getBucketLocation(bucketName) 79 | } 80 | 81 | // getBucketLocation - Get location for the bucketName from location map cache, if not 82 | // fetch freshly by making a new request. 83 | func (c Client) getBucketLocation(bucketName string) (string, error) { 84 | if err := isValidBucketName(bucketName); err != nil { 85 | return "", err 86 | } 87 | if location, ok := c.bucketLocCache.Get(bucketName); ok { 88 | return location, nil 89 | } 90 | 91 | if s3utils.IsAmazonChinaEndpoint(c.endpointURL) { 92 | // For china specifically we need to set everything to 93 | // cn-north-1 for now, there is no easier way until AWS S3 94 | // provides a cleaner compatible API across "us-east-1" and 95 | // China region. 96 | return "cn-north-1", nil 97 | } 98 | 99 | // Initialize a new request. 100 | req, err := c.getBucketLocationRequest(bucketName) 101 | if err != nil { 102 | return "", err 103 | } 104 | 105 | // Initiate the request. 106 | resp, err := c.do(req) 107 | defer closeResponse(resp) 108 | if err != nil { 109 | return "", err 110 | } 111 | location, err := processBucketLocationResponse(resp, bucketName) 112 | if err != nil { 113 | return "", err 114 | } 115 | c.bucketLocCache.Set(bucketName, location) 116 | return location, nil 117 | } 118 | 119 | // processes the getBucketLocation http response from the server. 120 | func processBucketLocationResponse(resp *http.Response, bucketName string) (bucketLocation string, err error) { 121 | if resp != nil { 122 | if resp.StatusCode != http.StatusOK { 123 | err = httpRespToErrorResponse(resp, bucketName, "") 124 | errResp := ToErrorResponse(err) 125 | // For access denied error, it could be an anonymous 126 | // request. 
Move forward and let the top level callers 127 | // succeed if possible based on their policy. 128 | if errResp.Code == "AccessDenied" && strings.Contains(errResp.Message, "Access Denied") { 129 | return "us-east-1", nil 130 | } 131 | return "", err 132 | } 133 | } 134 | 135 | // Extract location. 136 | var locationConstraint string 137 | err = xmlDecoder(resp.Body, &locationConstraint) 138 | if err != nil { 139 | return "", err 140 | } 141 | 142 | location := locationConstraint 143 | // Location is empty will be 'us-east-1'. 144 | if location == "" { 145 | location = "us-east-1" 146 | } 147 | 148 | // Location can be 'EU' convert it to meaningful 'eu-west-1'. 149 | if location == "EU" { 150 | location = "eu-west-1" 151 | } 152 | 153 | // Save the location into cache. 154 | 155 | // Return. 156 | return location, nil 157 | } 158 | 159 | // getBucketLocationRequest - Wrapper creates a new getBucketLocation request. 160 | func (c Client) getBucketLocationRequest(bucketName string) (*http.Request, error) { 161 | // Set location query. 162 | urlValues := make(url.Values) 163 | urlValues.Set("location", "") 164 | 165 | // Set get bucket location always as path style. 166 | targetURL := c.endpointURL 167 | 168 | // Requesting a bucket location from an accelerate endpoint returns a 400, 169 | // so default to us-east-1 for the lookup 170 | if s3utils.IsAmazonS3AccelerateEndpoint(c.endpointURL) { 171 | targetURL.Host = getS3Endpoint("us-east-1") 172 | } 173 | 174 | targetURL.Path = path.Join(bucketName, "") + "/" 175 | targetURL.RawQuery = urlValues.Encode() 176 | 177 | // Get a new HTTP request for the method. 178 | req, err := http.NewRequest("GET", targetURL.String(), nil) 179 | if err != nil { 180 | return nil, err 181 | } 182 | 183 | // Set UserAgent for the request. 184 | c.setUserAgent(req) 185 | 186 | // Set sha256 sum for signature calculation only with signature version '4'. 187 | if c.signature.isV4() { 188 | var contentSha256 string 189 | if c.secure { 190 | contentSha256 = unsignedPayload 191 | } else { 192 | contentSha256 = hex.EncodeToString(sum256([]byte{})) 193 | } 194 | req.Header.Set("X-Amz-Content-Sha256", contentSha256) 195 | } 196 | 197 | // Sign the request. 198 | if c.signature.isV4() { 199 | req = s3signer.SignV4(*req, c.accessKeyID, c.secretAccessKey, "us-east-1") 200 | } else if c.signature.isV2() { 201 | req = s3signer.SignV2(*req, c.accessKeyID, c.secretAccessKey) 202 | } 203 | return req, nil 204 | } 205 | -------------------------------------------------------------------------------- /exec-concurrent/vendor/github.com/minio/minio-go/bucket-notification.go: -------------------------------------------------------------------------------- 1 | /* 2 | * Minio Go Library for Amazon S3 Compatible Cloud Storage (C) 2016 Minio, Inc. 3 | * 4 | * Licensed under the Apache License, Version 2.0 (the "License"); 5 | * you may not use this file except in compliance with the License. 6 | * You may obtain a copy of the License at 7 | * 8 | * http://www.apache.org/licenses/LICENSE-2.0 9 | * 10 | * Unless required by applicable law or agreed to in writing, software 11 | * distributed under the License is distributed on an "AS IS" BASIS, 12 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | * See the License for the specific language governing permissions and 14 | * limitations under the License. 
15 |  */
16 | 
17 | package minio
18 | 
19 | import (
20 | 	"encoding/xml"
21 | 	"reflect"
22 | )
23 | 
24 | // NotificationEventType is an S3 notification event associated to the bucket notification configuration
25 | type NotificationEventType string
26 | 
27 | // The role of all event types is described in:
28 | // http://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html#notification-how-to-event-types-and-destinations
29 | const (
30 | 	ObjectCreatedAll NotificationEventType = "s3:ObjectCreated:*"
31 | 	ObjectCreatePut                        = "s3:ObjectCreated:Put"
32 | 	ObjectCreatedPost                      = "s3:ObjectCreated:Post"
33 | 	ObjectCreatedCopy                      = "s3:ObjectCreated:Copy"
34 | 	ObjectCreatedCompleteMultipartUpload   = "s3:ObjectCreated:CompleteMultipartUpload"
35 | 	ObjectRemovedAll                       = "s3:ObjectRemoved:*"
36 | 	ObjectRemovedDelete                    = "s3:ObjectRemoved:Delete"
37 | 	ObjectRemovedDeleteMarkerCreated       = "s3:ObjectRemoved:DeleteMarkerCreated"
38 | 	ObjectReducedRedundancyLostObject      = "s3:ReducedRedundancyLostObject"
39 | )
40 | 
41 | // FilterRule - child of S3Key, a tag in the notification xml which
42 | // carries suffix/prefix filters
43 | type FilterRule struct {
44 | 	Name  string `xml:"Name"`
45 | 	Value string `xml:"Value"`
46 | }
47 | 
48 | // S3Key - child of Filter, a tag in the notification xml which
49 | // carries suffix/prefix filters
50 | type S3Key struct {
51 | 	FilterRules []FilterRule `xml:"FilterRule,omitempty"`
52 | }
53 | 
54 | // Filter - a tag in the notification xml structure which carries
55 | // suffix/prefix filters
56 | type Filter struct {
57 | 	S3Key S3Key `xml:"S3Key,omitempty"`
58 | }
59 | 
60 | // Arn - holds ARN information that will be sent to the web service,
61 | // ARN description can be found in http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html
62 | type Arn struct {
63 | 	Partition string
64 | 	Service   string
65 | 	Region    string
66 | 	AccountID string
67 | 	Resource  string
68 | }
69 | 
70 | // NewArn creates a new ARN based on the given partition, service, region, account id and resource
71 | func NewArn(partition, service, region, accountID, resource string) Arn {
72 | 	return Arn{Partition: partition,
73 | 		Service:   service,
74 | 		Region:    region,
75 | 		AccountID: accountID,
76 | 		Resource:  resource}
77 | }
78 | 
79 | // String returns the string representation of the ARN
80 | func (arn Arn) String() string {
81 | 	return "arn:" + arn.Partition + ":" + arn.Service + ":" + arn.Region + ":" + arn.AccountID + ":" + arn.Resource
82 | }
83 | 
84 | // NotificationConfig - represents one single notification configuration
85 | // such as topic, queue or lambda configuration.
86 | type NotificationConfig struct {
87 | 	ID     string                  `xml:"Id,omitempty"`
88 | 	Arn    Arn                     `xml:"-"`
89 | 	Events []NotificationEventType `xml:"Event"`
90 | 	Filter *Filter                 `xml:"Filter,omitempty"`
91 | }
92 | 
93 | // NewNotificationConfig creates one notification config and sets the given ARN
94 | func NewNotificationConfig(arn Arn) NotificationConfig {
95 | 	return NotificationConfig{Arn: arn}
96 | }
97 | 
98 | // AddEvents adds one or more events to the current notification config
99 | func (t *NotificationConfig) AddEvents(events ...NotificationEventType) {
100 | 	t.Events = append(t.Events, events...)
101 | } 102 | 103 | // AddFilterSuffix sets the suffix configuration to the current notification config 104 | func (t *NotificationConfig) AddFilterSuffix(suffix string) { 105 | if t.Filter == nil { 106 | t.Filter = &Filter{} 107 | } 108 | newFilterRule := FilterRule{Name: "suffix", Value: suffix} 109 | // Replace any suffix rule if existing and add to the list otherwise 110 | for index := range t.Filter.S3Key.FilterRules { 111 | if t.Filter.S3Key.FilterRules[index].Name == "suffix" { 112 | t.Filter.S3Key.FilterRules[index] = newFilterRule 113 | return 114 | } 115 | } 116 | t.Filter.S3Key.FilterRules = append(t.Filter.S3Key.FilterRules, newFilterRule) 117 | } 118 | 119 | // AddFilterPrefix sets the prefix configuration to the current notification config 120 | func (t *NotificationConfig) AddFilterPrefix(prefix string) { 121 | if t.Filter == nil { 122 | t.Filter = &Filter{} 123 | } 124 | newFilterRule := FilterRule{Name: "prefix", Value: prefix} 125 | // Replace any prefix rule if existing and add to the list otherwise 126 | for index := range t.Filter.S3Key.FilterRules { 127 | if t.Filter.S3Key.FilterRules[index].Name == "prefix" { 128 | t.Filter.S3Key.FilterRules[index] = newFilterRule 129 | return 130 | } 131 | } 132 | t.Filter.S3Key.FilterRules = append(t.Filter.S3Key.FilterRules, newFilterRule) 133 | } 134 | 135 | // TopicConfig carries one single topic notification configuration 136 | type TopicConfig struct { 137 | NotificationConfig 138 | Topic string `xml:"Topic"` 139 | } 140 | 141 | // QueueConfig carries one single queue notification configuration 142 | type QueueConfig struct { 143 | NotificationConfig 144 | Queue string `xml:"Queue"` 145 | } 146 | 147 | // LambdaConfig carries one single cloudfunction notification configuration 148 | type LambdaConfig struct { 149 | NotificationConfig 150 | Lambda string `xml:"CloudFunction"` 151 | } 152 | 153 | // BucketNotification - the struct that represents the whole XML to be sent to the web service 154 | type BucketNotification struct { 155 | XMLName xml.Name `xml:"NotificationConfiguration"` 156 | LambdaConfigs []LambdaConfig `xml:"CloudFunctionConfiguration"` 157 | TopicConfigs []TopicConfig `xml:"TopicConfiguration"` 158 | QueueConfigs []QueueConfig `xml:"QueueConfiguration"` 159 | } 160 | 161 | // AddTopic adds a given topic config to the general bucket notification config 162 | func (b *BucketNotification) AddTopic(topicConfig NotificationConfig) { 163 | newTopicConfig := TopicConfig{NotificationConfig: topicConfig, Topic: topicConfig.Arn.String()} 164 | for _, n := range b.TopicConfigs { 165 | if reflect.DeepEqual(n, newTopicConfig) { 166 | // Avoid adding duplicated entry 167 | return 168 | } 169 | } 170 | b.TopicConfigs = append(b.TopicConfigs, newTopicConfig) 171 | } 172 | 173 | // AddQueue adds a given queue config to the general bucket notification config 174 | func (b *BucketNotification) AddQueue(queueConfig NotificationConfig) { 175 | newQueueConfig := QueueConfig{NotificationConfig: queueConfig, Queue: queueConfig.Arn.String()} 176 | for _, n := range b.QueueConfigs { 177 | if reflect.DeepEqual(n, newQueueConfig) { 178 | // Avoid adding duplicated entry 179 | return 180 | } 181 | } 182 | b.QueueConfigs = append(b.QueueConfigs, newQueueConfig) 183 | } 184 | 185 | // AddLambda adds a given lambda config to the general bucket notification config 186 | func (b *BucketNotification) AddLambda(lambdaConfig NotificationConfig) { 187 | newLambdaConfig := LambdaConfig{NotificationConfig: lambdaConfig, Lambda: 
lambdaConfig.Arn.String()}
188 | 	for _, n := range b.LambdaConfigs {
189 | 		if reflect.DeepEqual(n, newLambdaConfig) {
190 | 			// Avoid adding duplicated entry
191 | 			return
192 | 		}
193 | 	}
194 | 	b.LambdaConfigs = append(b.LambdaConfigs, newLambdaConfig)
195 | }
196 | 
197 | // RemoveTopicByArn removes all topic configurations that match the exact specified ARN
198 | func (b *BucketNotification) RemoveTopicByArn(arn Arn) {
199 | 	var topics []TopicConfig
200 | 	for _, topic := range b.TopicConfigs {
201 | 		if topic.Topic != arn.String() {
202 | 			topics = append(topics, topic)
203 | 		}
204 | 	}
205 | 	b.TopicConfigs = topics
206 | }
207 | 
208 | // RemoveQueueByArn removes all queue configurations that match the exact specified ARN
209 | func (b *BucketNotification) RemoveQueueByArn(arn Arn) {
210 | 	var queues []QueueConfig
211 | 	for _, queue := range b.QueueConfigs {
212 | 		if queue.Queue != arn.String() {
213 | 			queues = append(queues, queue)
214 | 		}
215 | 	}
216 | 	b.QueueConfigs = queues
217 | }
218 | 
219 | // RemoveLambdaByArn removes all lambda configurations that match the exact specified ARN
220 | func (b *BucketNotification) RemoveLambdaByArn(arn Arn) {
221 | 	var lambdas []LambdaConfig
222 | 	for _, lambda := range b.LambdaConfigs {
223 | 		if lambda.Lambda != arn.String() {
224 | 			lambdas = append(lambdas, lambda)
225 | 		}
226 | 	}
227 | 	b.LambdaConfigs = lambdas
228 | }
229 | 
-------------------------------------------------------------------------------- /exec-concurrent/vendor/github.com/minio/minio-go/constants.go: --------------------------------------------------------------------------------
 1 | /*
 2 |  * Minio Go Library for Amazon S3 Compatible Cloud Storage (C) 2015 Minio, Inc.
 3 |  *
 4 |  * Licensed under the Apache License, Version 2.0 (the "License");
 5 |  * you may not use this file except in compliance with the License.
 6 |  * You may obtain a copy of the License at
 7 |  *
 8 |  *     http://www.apache.org/licenses/LICENSE-2.0
 9 |  *
10 |  * Unless required by applicable law or agreed to in writing, software
11 |  * distributed under the License is distributed on an "AS IS" BASIS,
12 |  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 |  * See the License for the specific language governing permissions and
14 |  * limitations under the License.
15 |  */
16 | 
17 | package minio
18 | 
19 | /// Multipart upload defaults.
20 | 
21 | // minPartSize - minimum part size 64MiB per object after which
22 | // putObject behaves internally as multipart.
23 | const minPartSize = 1024 * 1024 * 64
24 | 
25 | // maxPartsCount - maximum number of parts for a single multipart session.
26 | const maxPartsCount = 10000
27 | 
28 | // maxPartSize - maximum part size 5GiB for a single multipart upload
29 | // operation.
30 | const maxPartSize = 1024 * 1024 * 1024 * 5
31 | 
32 | // maxSinglePutObjectSize - maximum size 5GiB of object per PUT
33 | // operation.
34 | const maxSinglePutObjectSize = 1024 * 1024 * 1024 * 5
35 | 
36 | // maxMultipartPutObjectSize - maximum size 5TiB of object for
37 | // Multipart operation.
38 | const maxMultipartPutObjectSize = 1024 * 1024 * 1024 * 1024 * 5
39 | 
40 | // optimalReadBufferSize - optimal buffer 5MiB used for reading
41 | // through Read operation.
42 | const optimalReadBufferSize = 1024 * 1024 * 5
43 | 
44 | // unsignedPayload - value to be set to X-Amz-Content-Sha256 header when
45 | // we don't want to sign the request payload
46 | const unsignedPayload = "UNSIGNED-PAYLOAD"
47 | 
48 | // Signature related constants.
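// (signV4Algorithm is the fixed algorithm token that prefixes every AWS
// Signature Version 4 string-to-sign; iso8601DateFormat is the compact
// timestamp layout used for the X-Amz-Date header.)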
49 | const ( 50 | signV4Algorithm = "AWS4-HMAC-SHA256" 51 | iso8601DateFormat = "20060102T150405Z" 52 | ) 53 | -------------------------------------------------------------------------------- /exec-concurrent/vendor/github.com/minio/minio-go/copy-conditions.go: -------------------------------------------------------------------------------- 1 | /* 2 | * Minio Go Library for Amazon S3 Compatible Cloud Storage (C) 2016 Minio, Inc. 3 | * 4 | * Licensed under the Apache License, Version 2.0 (the "License"); 5 | * you may not use this file except in compliance with the License. 6 | * You may obtain a copy of the License at 7 | * 8 | * http://www.apache.org/licenses/LICENSE-2.0 9 | * 10 | * Unless required by applicable law or agreed to in writing, software 11 | * distributed under the License is distributed on an "AS IS" BASIS, 12 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | * See the License for the specific language governing permissions and 14 | * limitations under the License. 15 | */ 16 | 17 | package minio 18 | 19 | import ( 20 | "net/http" 21 | "time" 22 | ) 23 | 24 | // copyCondition explanation: 25 | // http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html 26 | // 27 | // Example: 28 | // 29 | // copyCondition { 30 | // key: "x-amz-copy-if-modified-since", 31 | // value: "Tue, 15 Nov 1994 12:45:26 GMT", 32 | // } 33 | // 34 | type copyCondition struct { 35 | key string 36 | value string 37 | } 38 | 39 | // CopyConditions - copy conditions. 40 | type CopyConditions struct { 41 | conditions []copyCondition 42 | } 43 | 44 | // NewCopyConditions - Instantiate new list of conditions. 45 | func NewCopyConditions() CopyConditions { 46 | return CopyConditions{ 47 | conditions: make([]copyCondition, 0), 48 | } 49 | } 50 | 51 | // SetMatchETag - set match etag. 52 | func (c *CopyConditions) SetMatchETag(etag string) error { 53 | if etag == "" { 54 | return ErrInvalidArgument("ETag cannot be empty.") 55 | } 56 | c.conditions = append(c.conditions, copyCondition{ 57 | key: "x-amz-copy-source-if-match", 58 | value: etag, 59 | }) 60 | return nil 61 | } 62 | 63 | // SetMatchETagExcept - set match etag except. 64 | func (c *CopyConditions) SetMatchETagExcept(etag string) error { 65 | if etag == "" { 66 | return ErrInvalidArgument("ETag cannot be empty.") 67 | } 68 | c.conditions = append(c.conditions, copyCondition{ 69 | key: "x-amz-copy-source-if-none-match", 70 | value: etag, 71 | }) 72 | return nil 73 | } 74 | 75 | // SetUnmodified - set unmodified time since. 76 | func (c *CopyConditions) SetUnmodified(modTime time.Time) error { 77 | if modTime.IsZero() { 78 | return ErrInvalidArgument("Modified since cannot be empty.") 79 | } 80 | c.conditions = append(c.conditions, copyCondition{ 81 | key: "x-amz-copy-source-if-unmodified-since", 82 | value: modTime.Format(http.TimeFormat), 83 | }) 84 | return nil 85 | } 86 | 87 | // SetModified - set modified time since. 
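//
// Example (illustrative; error handling elided):
//
//	conds := NewCopyConditions()
//	_ = conds.SetModified(time.Date(2016, 11, 1, 0, 0, 0, 0, time.UTC))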
88 | func (c *CopyConditions) SetModified(modTime time.Time) error {
89 | 	if modTime.IsZero() {
90 | 		return ErrInvalidArgument("Modified since cannot be empty.")
91 | 	}
92 | 	c.conditions = append(c.conditions, copyCondition{
93 | 		key:   "x-amz-copy-source-if-modified-since",
94 | 		value: modTime.Format(http.TimeFormat),
95 | 	})
96 | 	return nil
97 | }
-------------------------------------------------------------------------------- /exec-concurrent/vendor/github.com/minio/minio-go/hook-reader.go: --------------------------------------------------------------------------------
 1 | /*
 2 |  * Minio Go Library for Amazon S3 Compatible Cloud Storage (C) 2015, 2016 Minio, Inc.
 3 |  *
 4 |  * Licensed under the Apache License, Version 2.0 (the "License");
 5 |  * you may not use this file except in compliance with the License.
 6 |  * You may obtain a copy of the License at
 7 |  *
 8 |  *     http://www.apache.org/licenses/LICENSE-2.0
 9 |  *
10 |  * Unless required by applicable law or agreed to in writing, software
11 |  * distributed under the License is distributed on an "AS IS" BASIS,
12 |  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 |  * See the License for the specific language governing permissions and
14 |  * limitations under the License.
15 |  */
16 | 
17 | package minio
18 | 
19 | import "io"
20 | 
21 | // hookReader hooks an additional reader into the source stream. It is
22 | // useful for making progress bars. The second reader is notified about
23 | // the exact number of bytes read from the primary source on each Read
24 | // operation.
25 | type hookReader struct {
26 | 	source io.Reader
27 | 	hook   io.Reader
28 | }
29 | 
30 | // Seek implements io.Seeker. It seeks the source if the source
31 | // implements Seek, otherwise it seeks the hook if the hook does.
32 | func (hr *hookReader) Seek(offset int64, whence int) (n int64, err error) {
33 | 	// If the source has an embedded Seeker, use it.
34 | 	sourceSeeker, ok := hr.source.(io.Seeker)
35 | 	if ok {
36 | 		return sourceSeeker.Seek(offset, whence)
37 | 	}
38 | 	// Otherwise, if the hook has an embedded Seeker, use it.
39 | 	hookSeeker, ok := hr.hook.(io.Seeker)
40 | 	if ok {
41 | 		return hookSeeker.Seek(offset, whence)
42 | 	}
43 | 	return n, nil
44 | }
45 | 
46 | // Read implements io.Reader. It always reads from the source; the 'n'
47 | // bytes read are then replayed through the hook. Returns an error for
48 | // all non-io.EOF conditions.
49 | func (hr *hookReader) Read(b []byte) (n int, err error) {
50 | 	n, err = hr.source.Read(b)
51 | 	if err != nil && err != io.EOF {
52 | 		return n, err
53 | 	}
54 | 	// Progress the hook with the total read bytes from the source.
55 | 	if _, herr := hr.hook.Read(b[:n]); herr != nil {
56 | 		if herr != io.EOF {
57 | 			return n, herr
58 | 		}
59 | 	}
60 | 	return n, err
61 | }
62 | 
63 | // newHook returns an io.Reader that wraps the source in a hookReader,
64 | // reporting the data read from the source to the hook.
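//
// Example (illustrative; pb is an assumed third-party progress-bar
// package whose bar type implements io.Reader):
//
//	bar := pb.New64(objectSize)
//	reader := newHook(file, bar) // bytes read from file are mirrored to bar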
65 | func newHook(source, hook io.Reader) io.Reader { 66 | if hook == nil { 67 | return source 68 | } 69 | return &hookReader{source, hook} 70 | } 71 | -------------------------------------------------------------------------------- /exec-concurrent/vendor/github.com/minio/minio-go/minio.test: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/minio/perftest/6179faf716e5b1962f7ddc144a84aac1830aa9be/exec-concurrent/vendor/github.com/minio/minio-go/minio.test -------------------------------------------------------------------------------- /exec-concurrent/vendor/github.com/minio/minio-go/pkg/policy/bucket-policy-condition.go: -------------------------------------------------------------------------------- 1 | /* 2 | * Minio Go Library for Amazon S3 Compatible Cloud Storage (C) 2015 Minio, Inc. 3 | * 4 | * Licensed under the Apache License, Version 2.0 (the "License"); 5 | * you may not use this file except in compliance with the License. 6 | * You may obtain a copy of the License at 7 | * 8 | * http://www.apache.org/licenses/LICENSE-2.0 9 | * 10 | * Unless required by applicable law or agreed to in writing, software 11 | * distributed under the License is distributed on an "AS IS" BASIS, 12 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | * See the License for the specific language governing permissions and 14 | * limitations under the License. 15 | */ 16 | 17 | package policy 18 | 19 | import "github.com/minio/minio-go/pkg/set" 20 | 21 | // ConditionKeyMap - map of policy condition key and value. 22 | type ConditionKeyMap map[string]set.StringSet 23 | 24 | // Add - adds key and value. The value is appended If key already exists. 25 | func (ckm ConditionKeyMap) Add(key string, value set.StringSet) { 26 | if v, ok := ckm[key]; ok { 27 | ckm[key] = v.Union(value) 28 | } else { 29 | ckm[key] = set.CopyStringSet(value) 30 | } 31 | } 32 | 33 | // Remove - removes value of given key. If key has empty after removal, the key is also removed. 34 | func (ckm ConditionKeyMap) Remove(key string, value set.StringSet) { 35 | if v, ok := ckm[key]; ok { 36 | if value != nil { 37 | ckm[key] = v.Difference(value) 38 | } 39 | 40 | if ckm[key].IsEmpty() { 41 | delete(ckm, key) 42 | } 43 | } 44 | } 45 | 46 | // RemoveKey - removes key and its value. 47 | func (ckm ConditionKeyMap) RemoveKey(key string) { 48 | if _, ok := ckm[key]; ok { 49 | delete(ckm, key) 50 | } 51 | } 52 | 53 | // CopyConditionKeyMap - returns new copy of given ConditionKeyMap. 54 | func CopyConditionKeyMap(condKeyMap ConditionKeyMap) ConditionKeyMap { 55 | out := make(ConditionKeyMap) 56 | 57 | for k, v := range condKeyMap { 58 | out[k] = set.CopyStringSet(v) 59 | } 60 | 61 | return out 62 | } 63 | 64 | // mergeConditionKeyMap - returns a new ConditionKeyMap which contains merged key/value of given two ConditionKeyMap. 65 | func mergeConditionKeyMap(condKeyMap1 ConditionKeyMap, condKeyMap2 ConditionKeyMap) ConditionKeyMap { 66 | out := CopyConditionKeyMap(condKeyMap1) 67 | 68 | for k, v := range condKeyMap2 { 69 | if ev, ok := out[k]; ok { 70 | out[k] = ev.Union(v) 71 | } else { 72 | out[k] = set.CopyStringSet(v) 73 | } 74 | } 75 | 76 | return out 77 | } 78 | 79 | // ConditionMap - map of condition and conditional values. 80 | type ConditionMap map[string]ConditionKeyMap 81 | 82 | // Add - adds condition key and condition value. The value is appended if key already exists. 
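// An illustrative sketch (the condition and key names mirror AWS policy
// syntax; the values are hypothetical):
//
//	cond := make(ConditionMap)
//	cond.Add("StringEquals", ConditionKeyMap{"s3:prefix": set.CreateStringSet("uploads/")})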
83 | func (cond ConditionMap) Add(condKey string, condKeyMap ConditionKeyMap) { 84 | if v, ok := cond[condKey]; ok { 85 | cond[condKey] = mergeConditionKeyMap(v, condKeyMap) 86 | } else { 87 | cond[condKey] = CopyConditionKeyMap(condKeyMap) 88 | } 89 | } 90 | 91 | // Remove - removes condition key and its value. 92 | func (cond ConditionMap) Remove(condKey string) { 93 | if _, ok := cond[condKey]; ok { 94 | delete(cond, condKey) 95 | } 96 | } 97 | 98 | // mergeConditionMap - returns new ConditionMap which contains merged key/value of two ConditionMap. 99 | func mergeConditionMap(condMap1 ConditionMap, condMap2 ConditionMap) ConditionMap { 100 | out := make(ConditionMap) 101 | 102 | for k, v := range condMap1 { 103 | out[k] = CopyConditionKeyMap(v) 104 | } 105 | 106 | for k, v := range condMap2 { 107 | if ev, ok := out[k]; ok { 108 | out[k] = mergeConditionKeyMap(ev, v) 109 | } else { 110 | out[k] = CopyConditionKeyMap(v) 111 | } 112 | } 113 | 114 | return out 115 | } 116 | -------------------------------------------------------------------------------- /exec-concurrent/vendor/github.com/minio/minio-go/post-policy.go: -------------------------------------------------------------------------------- 1 | package minio 2 | 3 | import ( 4 | "encoding/base64" 5 | "fmt" 6 | "strings" 7 | "time" 8 | ) 9 | 10 | // expirationDateFormat date format for expiration key in json policy. 11 | const expirationDateFormat = "2006-01-02T15:04:05.999Z" 12 | 13 | // policyCondition explanation: 14 | // http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-HTTPPOSTConstructPolicy.html 15 | // 16 | // Example: 17 | // 18 | // policyCondition { 19 | // matchType: "$eq", 20 | // key: "$Content-Type", 21 | // value: "image/png", 22 | // } 23 | // 24 | type policyCondition struct { 25 | matchType string 26 | condition string 27 | value string 28 | } 29 | 30 | // PostPolicy - Provides strict static type conversion and validation 31 | // for Amazon S3's POST policy JSON string. 32 | type PostPolicy struct { 33 | // Expiration date and time of the POST policy. 34 | expiration time.Time 35 | // Collection of different policy conditions. 36 | conditions []policyCondition 37 | // ContentLengthRange minimum and maximum allowable size for the 38 | // uploaded content. 39 | contentLengthRange struct { 40 | min int64 41 | max int64 42 | } 43 | 44 | // Post form data. 45 | formData map[string]string 46 | } 47 | 48 | // NewPostPolicy - Instantiate new post policy. 49 | func NewPostPolicy() *PostPolicy { 50 | p := &PostPolicy{} 51 | p.conditions = make([]policyCondition, 0) 52 | p.formData = make(map[string]string) 53 | return p 54 | } 55 | 56 | // SetExpires - Sets expiration time for the new policy. 57 | func (p *PostPolicy) SetExpires(t time.Time) error { 58 | if t.IsZero() { 59 | return ErrInvalidArgument("No expiry time set.") 60 | } 61 | p.expiration = t 62 | return nil 63 | } 64 | 65 | // SetKey - Sets an object name for the policy based upload. 66 | func (p *PostPolicy) SetKey(key string) error { 67 | if strings.TrimSpace(key) == "" || key == "" { 68 | return ErrInvalidArgument("Object name is empty.") 69 | } 70 | policyCond := policyCondition{ 71 | matchType: "eq", 72 | condition: "$key", 73 | value: key, 74 | } 75 | if err := p.addNewPolicy(policyCond); err != nil { 76 | return err 77 | } 78 | p.formData["key"] = key 79 | return nil 80 | } 81 | 82 | // SetKeyStartsWith - Sets an object name that an policy based upload 83 | // can start with. 
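// An illustrative sketch of assembling a browser-upload policy with the
// setters below (bucket, prefix and expiry are hypothetical values):
//
//	p := minio.NewPostPolicy()
//	_ = p.SetBucket("user-uploads")
//	_ = p.SetKeyStartsWith("incoming/")
//	_ = p.SetExpires(time.Now().UTC().Add(24 * time.Hour))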
84 | func (p *PostPolicy) SetKeyStartsWith(keyStartsWith string) error { 85 | if strings.TrimSpace(keyStartsWith) == "" || keyStartsWith == "" { 86 | return ErrInvalidArgument("Object prefix is empty.") 87 | } 88 | policyCond := policyCondition{ 89 | matchType: "starts-with", 90 | condition: "$key", 91 | value: keyStartsWith, 92 | } 93 | if err := p.addNewPolicy(policyCond); err != nil { 94 | return err 95 | } 96 | p.formData["key"] = keyStartsWith 97 | return nil 98 | } 99 | 100 | // SetBucket - Sets bucket at which objects will be uploaded to. 101 | func (p *PostPolicy) SetBucket(bucketName string) error { 102 | if strings.TrimSpace(bucketName) == "" || bucketName == "" { 103 | return ErrInvalidArgument("Bucket name is empty.") 104 | } 105 | policyCond := policyCondition{ 106 | matchType: "eq", 107 | condition: "$bucket", 108 | value: bucketName, 109 | } 110 | if err := p.addNewPolicy(policyCond); err != nil { 111 | return err 112 | } 113 | p.formData["bucket"] = bucketName 114 | return nil 115 | } 116 | 117 | // SetContentType - Sets content-type of the object for this policy 118 | // based upload. 119 | func (p *PostPolicy) SetContentType(contentType string) error { 120 | if strings.TrimSpace(contentType) == "" || contentType == "" { 121 | return ErrInvalidArgument("No content type specified.") 122 | } 123 | policyCond := policyCondition{ 124 | matchType: "eq", 125 | condition: "$Content-Type", 126 | value: contentType, 127 | } 128 | if err := p.addNewPolicy(policyCond); err != nil { 129 | return err 130 | } 131 | p.formData["Content-Type"] = contentType 132 | return nil 133 | } 134 | 135 | // SetContentLengthRange - Set new min and max content length 136 | // condition for all incoming uploads. 137 | func (p *PostPolicy) SetContentLengthRange(min, max int64) error { 138 | if min > max { 139 | return ErrInvalidArgument("Minimum limit is larger than maximum limit.") 140 | } 141 | if min < 0 { 142 | return ErrInvalidArgument("Minimum limit cannot be negative.") 143 | } 144 | if max < 0 { 145 | return ErrInvalidArgument("Maximum limit cannot be negative.") 146 | } 147 | p.contentLengthRange.min = min 148 | p.contentLengthRange.max = max 149 | return nil 150 | } 151 | 152 | // SetSuccessStatusAction - Sets the status success code of the object for this policy 153 | // based upload. 154 | func (p *PostPolicy) SetSuccessStatusAction(status string) error { 155 | if strings.TrimSpace(status) == "" || status == "" { 156 | return ErrInvalidArgument("Status is empty") 157 | } 158 | policyCond := policyCondition{ 159 | matchType: "eq", 160 | condition: "$success_action_status", 161 | value: status, 162 | } 163 | if err := p.addNewPolicy(policyCond); err != nil { 164 | return err 165 | } 166 | p.formData["success_action_status"] = status 167 | return nil 168 | } 169 | 170 | // addNewPolicy - internal helper to validate adding new policies. 171 | func (p *PostPolicy) addNewPolicy(policyCond policyCondition) error { 172 | if policyCond.matchType == "" || policyCond.condition == "" || policyCond.value == "" { 173 | return ErrInvalidArgument("Policy fields are empty.") 174 | } 175 | p.conditions = append(p.conditions, policyCond) 176 | return nil 177 | } 178 | 179 | // Stringer interface for printing policy in json formatted string. 180 | func (p PostPolicy) String() string { 181 | return string(p.marshalJSON()) 182 | } 183 | 184 | // marshalJSON - Provides Marshalled JSON in bytes. 
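// For illustration, a policy carrying an expiration and a single bucket
// condition marshals to JSON of this shape (values hypothetical):
//
//	{"expiration":"2017-01-02T15:04:05.999Z","conditions":[["eq","$bucket","user-uploads"]]}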
185 | func (p PostPolicy) marshalJSON() []byte {
186 | 	expirationStr := `"expiration":"` + p.expiration.Format(expirationDateFormat) + `"`
187 | 	var conditionsStr string
188 | 	conditions := []string{}
189 | 	for _, po := range p.conditions {
190 | 		conditions = append(conditions, fmt.Sprintf("[\"%s\",\"%s\",\"%s\"]", po.matchType, po.condition, po.value))
191 | 	}
192 | 	if p.contentLengthRange.min != 0 || p.contentLengthRange.max != 0 {
193 | 		conditions = append(conditions, fmt.Sprintf("[\"content-length-range\", %d, %d]",
194 | 			p.contentLengthRange.min, p.contentLengthRange.max))
195 | 	}
196 | 	if len(conditions) > 0 {
197 | 		conditionsStr = `"conditions":[` + strings.Join(conditions, ",") + "]"
198 | 	}
199 | 	retStr := "{"
200 | 	retStr = retStr + expirationStr + ","
201 | 	retStr = retStr + conditionsStr
202 | 	retStr = retStr + "}"
203 | 	return []byte(retStr)
204 | }
205 | 
206 | // base64 - Produces base64 of PostPolicy's Marshalled json.
207 | func (p PostPolicy) base64() string {
208 | 	return base64.StdEncoding.EncodeToString(p.marshalJSON())
209 | }
-------------------------------------------------------------------------------- /exec-concurrent/vendor/github.com/minio/minio-go/retry-continous.go: --------------------------------------------------------------------------------
1 | package minio
2 | 
3 | import "time"
4 | 
5 | // newRetryTimerContinous creates a timer with exponentially increasing delays forever.
6 | func (c Client) newRetryTimerContinous(unit time.Duration, cap time.Duration, jitter float64, doneCh chan struct{}) <-chan int {
7 | 	attemptCh := make(chan int)
8 | 
9 | 	// normalize jitter to the range [0, 1.0]
10 | 	if jitter < NoJitter {
11 | 		jitter = NoJitter
12 | 	}
13 | 	if jitter > MaxJitter {
14 | 		jitter = MaxJitter
15 | 	}
16 | 
17 | 	// computes the exponential backoff duration according to
18 | 	// https://www.awsarchitectureblog.com/2015/03/backoff.html
19 | 	exponentialBackoffWait := func(attempt int) time.Duration {
20 | 		// 1<<uint(attempt) below could overflow, so limit the value of attempt
21 | 		maxAttempt := 30
22 | 		if attempt > maxAttempt {
23 | 			attempt = maxAttempt
24 | 		}
25 | 		//sleep = random_between(0, min(cap, base * 2 ** attempt))
26 | 		sleep := unit * time.Duration(1<<uint(attempt))
27 | 		if sleep > cap {
28 | 			sleep = cap
29 | 		}
30 | 		if jitter != NoJitter {
31 | 			sleep -= time.Duration(c.random.Float64() * float64(sleep) * jitter)
32 | 		}
33 | 		return sleep
34 | 	}
35 | 
36 | 	go func() {
37 | 		defer close(attemptCh)
38 | 		var nextBackoff int
39 | 		for {
40 | 			select {
41 | 			// Attempts starts.
42 | 			case attemptCh <- nextBackoff:
43 | 				nextBackoff++
44 | 			case <-doneCh:
45 | 				// Stop the routine.
46 | 				return
47 | 			}
48 | 			time.Sleep(exponentialBackoffWait(nextBackoff))
49 | 		}
50 | 	}()
51 | 	return attemptCh
52 | }
-------------------------------------------------------------------------------- /exec-concurrent/vendor/github.com/minio/minio-go/retry.go: --------------------------------------------------------------------------------
1 | /*
2 |  * Minio Go Library for Amazon S3 Compatible Cloud Storage (C) 2015, 2016 Minio, Inc.
3 |  *
4 |  * Licensed under the Apache License, Version 2.0 (the "License");
5 |  * you may not use this file except in compliance with the License.
6 |  * You may obtain a copy of the License at
7 |  *
8 |  *     http://www.apache.org/licenses/LICENSE-2.0
9 |  *
10 |  * Unless required by applicable law or agreed to in writing, software
11 |  * distributed under the License is distributed on an "AS IS" BASIS,
12 |  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 |  * See the License for the specific language governing permissions and
14 |  * limitations under the License.
15 |  */
16 | 
17 | package minio
18 | 
19 | import (
20 | 	"net"
21 | 	"net/http"
22 | 	"net/url"
23 | 	"strings"
24 | 	"time"
25 | )
26 | 
27 | // MaxRetry is the maximum number of retries before stopping.
28 | var MaxRetry = 5
29 | 
30 | // MaxJitter will randomize over the full exponential backoff time
31 | const MaxJitter = 1.0
32 | 
33 | // NoJitter disables the use of jitter for randomizing the exponential backoff time
34 | const NoJitter = 0.0
35 | 
36 | // newRetryTimer creates a timer with exponentially increasing delays
37 | // until the maximum retry attempts are reached.
38 | func (c Client) newRetryTimer(maxRetry int, unit time.Duration, cap time.Duration, jitter float64, doneCh chan struct{}) <-chan int {
39 | 	attemptCh := make(chan int)
40 | 
41 | 	// computes the exponential backoff duration according to
42 | 	// https://www.awsarchitectureblog.com/2015/03/backoff.html
43 | 	exponentialBackoffWait := func(attempt int) time.Duration {
44 | 		// normalize jitter to the range [0, 1.0]
45 | 		if jitter < NoJitter {
46 | 			jitter = NoJitter
47 | 		}
48 | 		if jitter > MaxJitter {
49 | 			jitter = MaxJitter
50 | 		}
51 | 
52 | 		//sleep = random_between(0, min(cap, base * 2 ** attempt))
53 | 		sleep := unit * time.Duration(1<<uint(attempt))
54 | 		if sleep > cap {
55 | 			sleep = cap
56 | 		}
57 | 		if jitter != NoJitter {
58 | 			sleep -= time.Duration(c.random.Float64() * float64(sleep) * jitter)
59 | 		}
60 | 		return sleep
61 | 	}
62 | 
63 | 	go func() {
64 | 		defer close(attemptCh)
65 | 		for i := 0; i < maxRetry; i++ {
66 | 			select {
67 | 			// Attempts start from 1.
68 | 			case attemptCh <- i + 1:
69 | 			case <-doneCh:
70 | 				// Stop the routine.
71 | 				return
72 | 			}
73 | 			time.Sleep(exponentialBackoffWait(i))
74 | 		}
75 | 	}()
76 | 	return attemptCh
77 | }
78 | 
79 | // isNetErrorRetryable - is network error retryable.
80 | func isNetErrorRetryable(err error) bool {
81 | 	switch err.(type) {
82 | 	case net.Error:
83 | 		switch err.(type) {
84 | 		case *net.DNSError, *net.OpError, net.UnknownNetworkError:
85 | 			return true
86 | 		case *url.Error:
87 | 			// For a URL error, where it replies back "connection closed"
88 | 			// retry again.
89 | 			if strings.Contains(err.Error(), "Connection closed by foreign host") {
90 | 				return true
91 | 			}
92 | 		default:
93 | 			if strings.Contains(err.Error(), "net/http: TLS handshake timeout") {
94 | 				// If error is - tlsHandshakeTimeoutError, retry.
95 | 				return true
96 | 			} else if strings.Contains(err.Error(), "i/o timeout") {
97 | 				// If error is - tcp timeoutError, retry.
98 | 				return true
99 | 			}
100 | 		}
101 | 	}
102 | 	return false
103 | }
104 | 
105 | // List of AWS S3 error codes which are retryable.
106 | var retryableS3Codes = map[string]struct{}{
107 | 	"RequestError":          {},
108 | 	"RequestTimeout":        {},
109 | 	"Throttling":            {},
110 | 	"ThrottlingException":   {},
111 | 	"RequestLimitExceeded":  {},
112 | 	"RequestThrottled":      {},
113 | 	"InternalError":         {},
114 | 	"ExpiredToken":          {},
115 | 	"ExpiredTokenException": {},
116 | 	// Add more AWS S3 codes here.
117 | }
118 | 
119 | // isS3CodeRetryable - is s3 error code retryable.
120 | func isS3CodeRetryable(s3Code string) (ok bool) {
121 | 	_, ok = retryableS3Codes[s3Code]
122 | 	return ok
123 | }
124 | 
125 | // List of HTTP status codes which are retryable.
126 | var retryableHTTPStatusCodes = map[int]struct{}{
127 | 	429: {}, // http.StatusTooManyRequests is not part of the Go 1.5 library, yet
128 | 	http.StatusInternalServerError: {},
129 | 	http.StatusBadGateway:          {},
130 | 	http.StatusServiceUnavailable:  {},
131 | 	// Add more HTTP status codes here.
132 | }
133 | 
134 | // isHTTPStatusRetryable - is HTTP error code retryable.
135 | func isHTTPStatusRetryable(httpStatusCode int) (ok bool) { 136 | _, ok = retryableHTTPStatusCodes[httpStatusCode] 137 | return ok 138 | } 139 | -------------------------------------------------------------------------------- /exec-concurrent/vendor/github.com/minio/minio-go/s3-endpoints.go: -------------------------------------------------------------------------------- 1 | /* 2 | * Minio Go Library for Amazon S3 Compatible Cloud Storage (C) 2015 Minio, Inc. 3 | * 4 | * Licensed under the Apache License, Version 2.0 (the "License"); 5 | * you may not use this file except in compliance with the License. 6 | * You may obtain a copy of the License at 7 | * 8 | * http://www.apache.org/licenses/LICENSE-2.0 9 | * 10 | * Unless required by applicable law or agreed to in writing, software 11 | * distributed under the License is distributed on an "AS IS" BASIS, 12 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | * See the License for the specific language governing permissions and 14 | * limitations under the License. 15 | */ 16 | 17 | package minio 18 | 19 | // awsS3EndpointMap Amazon S3 endpoint map. 20 | // "cn-north-1" adds support for AWS China. 21 | var awsS3EndpointMap = map[string]string{ 22 | "us-east-1": "s3.amazonaws.com", 23 | "us-east-2": "s3-us-east-2.amazonaws.com", 24 | "us-west-2": "s3-us-west-2.amazonaws.com", 25 | "us-west-1": "s3-us-west-1.amazonaws.com", 26 | "ca-central-1": "s3.ca-central-1.amazonaws.com", 27 | "eu-west-1": "s3-eu-west-1.amazonaws.com", 28 | "eu-west-2": "s3-eu-west-2.amazonaws.com", 29 | "eu-central-1": "s3-eu-central-1.amazonaws.com", 30 | "ap-south-1": "s3-ap-south-1.amazonaws.com", 31 | "ap-southeast-1": "s3-ap-southeast-1.amazonaws.com", 32 | "ap-southeast-2": "s3-ap-southeast-2.amazonaws.com", 33 | "ap-northeast-1": "s3-ap-northeast-1.amazonaws.com", 34 | "ap-northeast-2": "s3-ap-northeast-2.amazonaws.com", 35 | "sa-east-1": "s3-sa-east-1.amazonaws.com", 36 | "cn-north-1": "s3.cn-north-1.amazonaws.com.cn", 37 | } 38 | 39 | // getS3Endpoint get Amazon S3 endpoint based on the bucket location. 40 | func getS3Endpoint(bucketLocation string) (s3Endpoint string) { 41 | s3Endpoint, ok := awsS3EndpointMap[bucketLocation] 42 | if !ok { 43 | // Default to 's3.amazonaws.com' endpoint. 44 | s3Endpoint = "s3.amazonaws.com" 45 | } 46 | return s3Endpoint 47 | } 48 | -------------------------------------------------------------------------------- /exec-concurrent/vendor/github.com/minio/minio-go/signature-type.go: -------------------------------------------------------------------------------- 1 | /* 2 | * Minio Go Library for Amazon S3 Compatible Cloud Storage (C) 2015 Minio, Inc. 3 | * 4 | * Licensed under the Apache License, Version 2.0 (the "License"); 5 | * you may not use this file except in compliance with the License. 6 | * You may obtain a copy of the License at 7 | * 8 | * http://www.apache.org/licenses/LICENSE-2.0 9 | * 10 | * Unless required by applicable law or agreed to in writing, software 11 | * distributed under the License is distributed on an "AS IS" BASIS, 12 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | * See the License for the specific language governing permissions and 14 | * limitations under the License. 15 | */ 16 | 17 | package minio 18 | 19 | // SignatureType is type of Authorization requested for a given HTTP request. 20 | type SignatureType int 21 | 22 | // Different types of supported signatures - default is Latest i.e SignatureV4. 
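// For illustration: this package's New constructor defaults to Latest (V4),
// while NewV2 pins SignatureV2 for older S3-compatible services (the endpoint
// and credentials below are placeholders):
//
//	// client, err := minio.NewV2("legacy-gateway:9000", accessKey, secretKey, false)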
23 | const ( 24 | Latest SignatureType = iota 25 | SignatureV4 26 | SignatureV2 27 | ) 28 | 29 | // isV2 - is signature SignatureV2? 30 | func (s SignatureType) isV2() bool { 31 | return s == SignatureV2 32 | } 33 | 34 | // isV4 - is signature SignatureV4? 35 | func (s SignatureType) isV4() bool { 36 | return s == SignatureV4 || s == Latest 37 | } 38 | -------------------------------------------------------------------------------- /exec-concurrent/vendor/github.com/minio/minio-go/tempfile.go: -------------------------------------------------------------------------------- 1 | /* 2 | * Minio Go Library for Amazon S3 Compatible Cloud Storage (C) 2015 Minio, Inc. 3 | * 4 | * Licensed under the Apache License, Version 2.0 (the "License"); 5 | * you may not use this file except in compliance with the License. 6 | * You may obtain a copy of the License at 7 | * 8 | * http://www.apache.org/licenses/LICENSE-2.0 9 | * 10 | * Unless required by applicable law or agreed to in writing, software 11 | * distributed under the License is distributed on an "AS IS" BASIS, 12 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | * See the License for the specific language governing permissions and 14 | * limitations under the License. 15 | */ 16 | 17 | package minio 18 | 19 | import ( 20 | "io/ioutil" 21 | "os" 22 | "sync" 23 | ) 24 | 25 | // tempFile - temporary file container. 26 | type tempFile struct { 27 | *os.File 28 | mutex *sync.Mutex 29 | } 30 | 31 | // newTempFile returns a new temporary file, once closed it automatically deletes itself. 32 | func newTempFile(prefix string) (*tempFile, error) { 33 | // use platform specific temp directory. 34 | file, err := ioutil.TempFile(os.TempDir(), prefix) 35 | if err != nil { 36 | return nil, err 37 | } 38 | return &tempFile{ 39 | File: file, 40 | mutex: &sync.Mutex{}, 41 | }, nil 42 | } 43 | 44 | // Close - closer wrapper to close and remove temporary file. 45 | func (t *tempFile) Close() error { 46 | t.mutex.Lock() 47 | defer t.mutex.Unlock() 48 | if t.File != nil { 49 | // Close the file. 50 | if err := t.File.Close(); err != nil { 51 | return err 52 | } 53 | // Remove file. 54 | if err := os.Remove(t.File.Name()); err != nil { 55 | return err 56 | } 57 | t.File = nil 58 | } 59 | return nil 60 | } 61 | -------------------------------------------------------------------------------- /exec-concurrent/vendor/github.com/minio/minio-go/utils.go: -------------------------------------------------------------------------------- 1 | /* 2 | * Minio Go Library for Amazon S3 Compatible Cloud Storage (C) 2015 Minio, Inc. 3 | * 4 | * Licensed under the Apache License, Version 2.0 (the "License"); 5 | * you may not use this file except in compliance with the License. 6 | * You may obtain a copy of the License at 7 | * 8 | * http://www.apache.org/licenses/LICENSE-2.0 9 | * 10 | * Unless required by applicable law or agreed to in writing, software 11 | * distributed under the License is distributed on an "AS IS" BASIS, 12 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | * See the License for the specific language governing permissions and 14 | * limitations under the License. 
15 | */ 16 | 17 | package minio 18 | 19 | import ( 20 | "crypto/md5" 21 | "crypto/sha256" 22 | "encoding/xml" 23 | "io" 24 | "io/ioutil" 25 | "net" 26 | "net/http" 27 | "net/url" 28 | "regexp" 29 | "strings" 30 | "time" 31 | "unicode/utf8" 32 | 33 | "github.com/minio/minio-go/pkg/s3utils" 34 | ) 35 | 36 | // xmlDecoder provide decoded value in xml. 37 | func xmlDecoder(body io.Reader, v interface{}) error { 38 | d := xml.NewDecoder(body) 39 | return d.Decode(v) 40 | } 41 | 42 | // sum256 calculate sha256 sum for an input byte array. 43 | func sum256(data []byte) []byte { 44 | hash := sha256.New() 45 | hash.Write(data) 46 | return hash.Sum(nil) 47 | } 48 | 49 | // sumMD5 calculate sumMD5 sum for an input byte array. 50 | func sumMD5(data []byte) []byte { 51 | hash := md5.New() 52 | hash.Write(data) 53 | return hash.Sum(nil) 54 | } 55 | 56 | // getEndpointURL - construct a new endpoint. 57 | func getEndpointURL(endpoint string, secure bool) (*url.URL, error) { 58 | if strings.Contains(endpoint, ":") { 59 | host, _, err := net.SplitHostPort(endpoint) 60 | if err != nil { 61 | return nil, err 62 | } 63 | if !s3utils.IsValidIP(host) && !s3utils.IsValidDomain(host) { 64 | msg := "Endpoint: " + endpoint + " does not follow ip address or domain name standards." 65 | return nil, ErrInvalidArgument(msg) 66 | } 67 | } else { 68 | if !s3utils.IsValidIP(endpoint) && !s3utils.IsValidDomain(endpoint) { 69 | msg := "Endpoint: " + endpoint + " does not follow ip address or domain name standards." 70 | return nil, ErrInvalidArgument(msg) 71 | } 72 | } 73 | // If secure is false, use 'http' scheme. 74 | scheme := "https" 75 | if !secure { 76 | scheme = "http" 77 | } 78 | 79 | // Construct a secured endpoint URL. 80 | endpointURLStr := scheme + "://" + endpoint 81 | endpointURL, err := url.Parse(endpointURLStr) 82 | if err != nil { 83 | return nil, err 84 | } 85 | 86 | // Validate incoming endpoint URL. 87 | if err := isValidEndpointURL(*endpointURL); err != nil { 88 | return nil, err 89 | } 90 | return endpointURL, nil 91 | } 92 | 93 | // closeResponse close non nil response with any response Body. 94 | // convenient wrapper to drain any remaining data on response body. 95 | // 96 | // Subsequently this allows golang http RoundTripper 97 | // to re-use the same connection for future requests. 98 | func closeResponse(resp *http.Response) { 99 | // Callers should close resp.Body when done reading from it. 100 | // If resp.Body is not closed, the Client's underlying RoundTripper 101 | // (typically Transport) may not be able to re-use a persistent TCP 102 | // connection to the server for a subsequent "keep-alive" request. 103 | if resp != nil && resp.Body != nil { 104 | // Drain any remaining Body and then close the connection. 105 | // Without this closing connection would disallow re-using 106 | // the same connection for future uses. 107 | // - http://stackoverflow.com/a/17961593/4465767 108 | io.Copy(ioutil.Discard, resp.Body) 109 | resp.Body.Close() 110 | } 111 | } 112 | 113 | // Sentinel URL is the default url value which is invalid. 114 | var sentinelURL = url.URL{} 115 | 116 | // Verify if input endpoint URL is valid. 
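// For illustration (hypothetical inputs): an endpoint URL for
// "play.minio.io:9000" would pass the checks below, while one carrying a path
// such as "s3.amazonaws.com/mybucket" would be rejected.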
117 | func isValidEndpointURL(endpointURL url.URL) error { 118 | if endpointURL == sentinelURL { 119 | return ErrInvalidArgument("Endpoint url cannot be empty.") 120 | } 121 | if endpointURL.Path != "/" && endpointURL.Path != "" { 122 | return ErrInvalidArgument("Endpoint url cannot have fully qualified paths.") 123 | } 124 | if strings.Contains(endpointURL.Host, ".amazonaws.com") { 125 | if !s3utils.IsAmazonEndpoint(endpointURL) { 126 | return ErrInvalidArgument("Amazon S3 endpoint should be 's3.amazonaws.com'.") 127 | } 128 | } 129 | if strings.Contains(endpointURL.Host, ".googleapis.com") { 130 | if !s3utils.IsGoogleEndpoint(endpointURL) { 131 | return ErrInvalidArgument("Google Cloud Storage endpoint should be 'storage.googleapis.com'.") 132 | } 133 | } 134 | return nil 135 | } 136 | 137 | // Verify if input expires value is valid. 138 | func isValidExpiry(expires time.Duration) error { 139 | expireSeconds := int64(expires / time.Second) 140 | if expireSeconds < 1 { 141 | return ErrInvalidArgument("Expires cannot be lesser than 1 second.") 142 | } 143 | if expireSeconds > 604800 { 144 | return ErrInvalidArgument("Expires cannot be greater than 7 days.") 145 | } 146 | return nil 147 | } 148 | 149 | // We support '.' with bucket names but we fallback to using path 150 | // style requests instead for such buckets. 151 | var validBucketName = regexp.MustCompile(`^[a-z0-9][a-z0-9\.\-]{1,61}[a-z0-9]$`) 152 | 153 | // Invalid bucket name with double dot. 154 | var invalidDotBucketName = regexp.MustCompile(`\.\.`) 155 | 156 | // isValidBucketName - verify bucket name in accordance with 157 | // - http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html 158 | func isValidBucketName(bucketName string) error { 159 | if strings.TrimSpace(bucketName) == "" { 160 | return ErrInvalidBucketName("Bucket name cannot be empty.") 161 | } 162 | if len(bucketName) < 3 { 163 | return ErrInvalidBucketName("Bucket name cannot be smaller than 3 characters.") 164 | } 165 | if len(bucketName) > 63 { 166 | return ErrInvalidBucketName("Bucket name cannot be greater than 63 characters.") 167 | } 168 | if bucketName[0] == '.' || bucketName[len(bucketName)-1] == '.' { 169 | return ErrInvalidBucketName("Bucket name cannot start or end with a '.' dot.") 170 | } 171 | if invalidDotBucketName.MatchString(bucketName) { 172 | return ErrInvalidBucketName("Bucket name cannot have successive periods.") 173 | } 174 | if !validBucketName.MatchString(bucketName) { 175 | return ErrInvalidBucketName("Bucket name contains invalid characters.") 176 | } 177 | return nil 178 | } 179 | 180 | // isValidObjectName - verify object name in accordance with 181 | // - http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html 182 | func isValidObjectName(objectName string) error { 183 | if strings.TrimSpace(objectName) == "" { 184 | return ErrInvalidObjectName("Object name cannot be empty.") 185 | } 186 | if len(objectName) > 1024 { 187 | return ErrInvalidObjectName("Object name cannot be greater than 1024 characters.") 188 | } 189 | if !utf8.ValidString(objectName) { 190 | return ErrInvalidBucketName("Object name with non UTF-8 strings are not supported.") 191 | } 192 | return nil 193 | } 194 | 195 | // isValidObjectPrefix - verify if object prefix is valid. 
196 | func isValidObjectPrefix(objectPrefix string) error {
197 | 	if len(objectPrefix) > 1024 {
198 | 		return ErrInvalidObjectPrefix("Object prefix cannot be greater than 1024 characters.")
199 | 	}
200 | 	if !utf8.ValidString(objectPrefix) {
201 | 		return ErrInvalidObjectPrefix("Object prefix with non UTF-8 strings are not supported.")
202 | 	}
203 | 	return nil
204 | }
205 | 
206 | // make a copy of http.Header
207 | func cloneHeader(h http.Header) http.Header {
208 | 	h2 := make(http.Header, len(h))
209 | 	for k, vv := range h {
210 | 		vv2 := make([]string, len(vv))
211 | 		copy(vv2, vv)
212 | 		h2[k] = vv2
213 | 	}
214 | 	return h2
215 | }
216 | 
217 | // Filter relevant response headers from
218 | // the HEAD, GET http response. The function takes
219 | // a list of headers which are filtered out and
220 | // returned as a new http header.
221 | func filterHeader(header http.Header, filterKeys []string) (filteredHeader http.Header) {
222 | 	filteredHeader = cloneHeader(header)
223 | 	for _, key := range filterKeys {
224 | 		filteredHeader.Del(key)
225 | 	}
226 | 	return filteredHeader
227 | }
-------------------------------------------------------------------------------- /exec-concurrent/vendor/vendor.json: --------------------------------------------------------------------------------
1 | {
2 | 	"comment": "",
3 | 	"ignore": "test",
4 | 	"package": [
5 | 		{
6 | 			"checksumSHA1": "C/MND9GBgvU61fHeJBZVqoZ3UGM=",
7 | 			"path": "github.com/minio/minio-go",
8 | 			"revision": "52cc94e879db78c2e2c6e160869df943137ec4cd",
9 | 			"revisionTime": "2017-01-01T22:57:21Z"
10 | 		},
11 | 		{
12 | 			"checksumSHA1": "neH34/65OXeKHM/MlV8MbhcdFBc=",
13 | 			"path": "github.com/minio/minio-go/pkg/policy",
14 | 			"revision": "52cc94e879db78c2e2c6e160869df943137ec4cd",
15 | 			"revisionTime": "2017-01-01T22:57:21Z"
16 | 		}
17 | 	],
18 | 	"rootPath": "github.com/minio/perftest/exec-concurrent"
19 | }
-------------------------------------------------------------------------------- /js-upload-load/README.md: --------------------------------------------------------------------------------
1 | # JS Upload Load.
2 | 
3 | Uploads the given file 400 times, with at most 80 uploads in flight concurrently.
4 | 
5 | # Configure.
6 | 
7 | - Open minio.json.
8 | 
9 | - Fill in the secret key, the access key, the IPs of the nodes and the path of the file to upload.
10 | 
11 | - Here is a sample minio.json.
12 | 
13 | ```json
14 | {
15 |     "access_key": "Z7IXGOO6BZ0REAN1Q26I",
16 |     "public_ips": [
17 |         "localhost",
18 |         "192.168.1.10"
19 |     ],
20 |     "secret_key": "+m4G6buANjXWX8B/6/KUQRzbAi/l47aX7M+BG2+4",
21 |     "file":"/home/user/minio.json"
22 | }
23 | ```
24 | 
25 | # Run.
26 | ```sh
27 | $ npm install minio@3 async uuid
28 | 
29 | $ node pound-it.js
30 | ```
-------------------------------------------------------------------------------- /js-upload-load/minio.json: --------------------------------------------------------------------------------
1 | {
2 | 	"access_key": "Z7IXGOO6BZ0REAN1Q26I",
3 | 	"public_ips": [
4 | 		"localhost"
5 | 	],
6 | 	"secret_key": "+m4G6buANjXWX8B/6/KUQRzbAi/l47aX7M+BG2+4",
7 | 	"file":"minio.json"
8 | }
-------------------------------------------------------------------------------- /js-upload-load/pound-it.js: --------------------------------------------------------------------------------
1 | "use strict";
2 | 
3 | var Minio = require('minio');
4 | var settings = require('./minio.json');
5 | var uuid = require('uuid');
6 | var async = require('async');
7 | 
8 | var minioClients = [];
9 | for (var i = 0; i < settings.public_ips.length; i++) {
10 |     minioClients.push(new Minio.Client({
11 |         endPoint: settings.public_ips[i],
12 |         port: 9000,
13 |         secure: false,
14 |         accessKey: settings.access_key,
15 |         secretKey: settings.secret_key
16 |     }));
17 | }
18 | let file = settings.file;
19 | 
20 | minioClients[0].makeBucket('test', 'us-east-1', (err) => {
21 |     if (err) {
22 |         // Keep going: the bucket may already exist from a previous run.
23 |         console.log("error creating the bucket", err);
24 |     }
25 |     async.map(minioClients, (client, callback) => {
26 |         // 400 upload+download rounds per client, at most 80 in flight.
27 |         async.timesLimit(400, 80, (n, next) => {
28 |             var uuidStr = uuid.v4();
29 |             client.fPutObject('test', uuidStr, file, 'application/octet-stream', (err) => {
30 |                 if (err) {
31 |                     console.log(err);
32 |                     return next(err);
33 |                 }
34 |                 process.stdout.write('.');
35 |                 var size = 0;
36 |                 // Read the full object back to exercise the download path.
37 |                 client.getObject('test', uuidStr, function(e, dataStream) {
38 |                     if (e) {
39 |                         console.log(e);
40 |                         return next(e);
41 |                     }
42 |                     dataStream.on('data', function(chunk) {
43 |                         size += chunk.length;
44 |                     });
45 |                     dataStream.on('end', function() {
46 |                         console.log("End. Total size = " + size);
47 |                         // Signal completion only once the round trip finishes,
48 |                         // so the 80-way concurrency limit is actually enforced.
49 |                         next();
50 |                     });
51 |                     dataStream.on('error', function(e) {
52 |                         console.log(e);
53 |                         next(e);
54 |                     });
55 |                 });
56 |             });
57 |         }, (err) => {
58 |             if (err) {
59 |                 console.log(err);
60 |             }
61 |             return callback();
62 |         });
63 |     }, (err) => {
64 |         if (err) {
65 |             console.log(err);
66 |         }
67 |     });
68 | });
-------------------------------------------------------------------------------- /mc-cat-serial/README.md: --------------------------------------------------------------------------------
1 | # mc-cat serial test.
2 | 
3 | Uses `mc cat` to download the object and verify its integrity using its MD5 sum.
4 | 
5 | # Usage.
6 | - Install [mc](https://github.com/minio/mc).
7 | - Upload an object.
8 | - Set the object path.
9 | 
10 | ```sh
11 | # export OBJECT=<alias>/<bucket>/<object>
12 | export OBJECT=myminio/bucket/file
13 | ```
14 | - Set the expected MD5 of the object.
15 | 
16 | ```sh
17 | export MD5=xxxxxxxxxxxxxxxxxxxxx
18 | ```
19 | 
20 | - Set the number of times the test has to be run.
21 | 
22 | ```sh
23 | export COUNT=100
24 | ```
25 | 
26 | - Run the program.
27 | ```sh
28 | go run mc-cat.go
29 | ```
30 | - Check output.log for the result.
31 | 
-------------------------------------------------------------------------------- /mc-cat-serial/mc-cat.go: --------------------------------------------------------------------------------
1 | /*
2 |  * Minio Cloud Storage, (C) 2015, 2016 Minio, Inc.
3 |  *
4 |  * Licensed under the Apache License, Version 2.0 (the "License");
5 |  * you may not use this file except in compliance with the License.
6 |  * You may obtain a copy of the License at
7 |  *
8 |  *     http://www.apache.org/licenses/LICENSE-2.0
9 |  *
10 |  * Unless required by applicable law or agreed to in writing, software
11 |  * distributed under the License is distributed on an "AS IS" BASIS,
12 |  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 |  * See the License for the specific language governing permissions and
14 |  * limitations under the License.
15 |  */
16 | 
17 | package main
18 | 
19 | import (
20 | 	"fmt"
21 | 	"log"
22 | 	"os"
23 | 	"os/exec"
24 | 	"strconv"
25 | 	"strings"
26 | )
27 | 
28 | func execCommand(command string) ([]byte, error) {
29 | 	return exec.Command("sh", "-c", command).Output()
30 | }
31 | 
32 | func main() {
33 | 	f, err := os.Create("output.log")
34 | 	if err != nil {
35 | 		log.Fatal(err)
36 | 	}
37 | 	defer f.Close()
38 | 
39 | 	obj := os.Getenv("OBJECT")
40 | 	if obj == "" {
41 | 		log.Fatal("Set the object to be downloaded, `export OBJECT=<alias>/<bucket>/<object>`.")
42 | 	}
43 | 
44 | 	expectedMD5 := os.Getenv("MD5")
45 | 	if expectedMD5 == "" {
46 | 		log.Fatal("Set the expected MD5, `export MD5=xxxxx`.")
47 | 	}
48 | 
49 | 	countStr := os.Getenv("COUNT")
50 | 	if countStr == "" {
51 | 		log.Fatal("COUNT not set, `export COUNT=100`.")
52 | 	}
53 | 
54 | 	count, err := strconv.Atoi(countStr)
55 | 	if err != nil {
56 | 		log.Fatal(err)
57 | 	}
58 | 
59 | 	fmt.Fprintf(f, "Expected MD5 sum: %s", expectedMD5)
60 | 
61 | 	for i := 0; i < count; i++ {
62 | 		log.Println("\nLoop ", i+1, ": copy operation and MD5 sum verification\n")
63 | 		cpBackCmd := "mc cat " + obj + " | md5sum"
64 | 		out, err := execCommand(cpBackCmd)
65 | 		if err != nil {
66 | 			fmt.Println(err.Error())
67 | 			return
68 | 		}
69 | 		if strings.Contains(string(out), expectedMD5) {
70 | 			fmt.Fprintf(f, "\nSuccess Match: Loop: %d\n", i+1)
71 | 			fmt.Fprintf(f, "%s", out)
72 | 		} else {
73 | 			fmt.Fprintf(f, "\nFailed Match: loop: %d\n", i+1)
74 | 			fmt.Fprintf(f, "%s", out)
75 | 		}
76 | 	}
77 | }
-------------------------------------------------------------------------------- /minio-java-functional-test/README.md: --------------------------------------------------------------------------------
1 | 
2 | * Clone the minio-java source
3 | ```bash
4 | git clone https://github.com/minio/minio-java.git
5 | ```
6 | 
7 | * Go into the minio-java source directory
8 | ```bash
9 | cd minio-java
10 | ```
11 | 
12 | * Run the functional test using gradle
13 | ```bash
14 | ./gradlew -Pendpoint=<endpoint> -PaccessKey=<access-key> -PsecretKey=<secret-key> runFunctionalTest
15 | ```
-------------------------------------------------------------------------------- /parallel-put-lock/parallel-put.go: --------------------------------------------------------------------------------
1 | // Start the minio servers in another terminal.
2 | // --------------------
3 | // #!/bin/bash
4 | //
5 | // for i in $(seq 1 6); do
6 | //   minio server --address localhost:900${i} http://localhost:9001/tmp/disk1 http://localhost:9002/tmp/disk2 \
7 | //     http://localhost:9003/tmp/disk3 http://localhost:9004/tmp/disk4 http://localhost:9005/tmp/disk5 \
8 | //     http://localhost:9006/tmp/disk6 &
9 | // done
10 | // ---------------------
11 | //
12 | // This starts a 6-disk distributed XL setup locally.
13 | //
14 | // On another terminal compile the code.
15 | //
16 | //   go build parallel-put.go
17 | //
18 | // Grab the accessKey and secretKey printed by the minio servers and set them as the
19 | // MINIO_ACCESS_KEY and MINIO_SECRET_KEY environment variables, as shown below.
20 | //
21 | // Proceed to run the test on all the 6 nodes.
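// For example (placeholder values shown, not real credentials):
//
//   export MINIO_ACCESS_KEY=<access-key-from-server-startup-log>
//   export MINIO_SECRET_KEY=<secret-key-from-server-startup-log>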
22 | //
23 | //   ./parallel-put localhost:900{1..6}
24 | //
25 | package main
26 | 
27 | import (
28 | 	"fmt"
29 | 	"log"
30 | 	"os"
31 | 	"strings"
32 | 	"sync"
33 | 
34 | 	minio "github.com/minio/minio-go"
35 | )
36 | 
37 | func getMinioClients(minioNodes []string) ([]*minio.Client, error) {
38 | 	accessKey := os.Getenv("MINIO_ACCESS_KEY")
39 | 	secretKey := os.Getenv("MINIO_SECRET_KEY")
40 | 
41 | 	clnts := make([]*minio.Client, len(minioNodes))
42 | 	for i, minioNode := range minioNodes {
43 | 		client, err := minio.New(minioNode, accessKey, secretKey, false)
44 | 		if err != nil {
45 | 			return nil, err
46 | 		}
47 | 		clnts[i] = client
48 | 	}
49 | 	return clnts, nil
50 | }
51 | 
52 | func main() {
53 | 	clnts, err := getMinioClients(os.Args[1:])
54 | 	if err != nil {
55 | 		log.Fatalln(err)
56 | 	}
57 | 
58 | 	// Data to be uploaded.
59 | 	data := make([]string, len(clnts))
60 | 	for i := range data {
61 | 		data[i] = strings.Repeat(fmt.Sprintf("Hello, World - %d", i), 10)
62 | 	}
63 | 
64 | 	// Continuously write the same object through all nodes to exercise locking.
65 | 	j := 0
66 | 	for {
67 | 		j++
68 | 		fmt.Println("Running: ", j)
69 | 		wg := &sync.WaitGroup{}
70 | 		for i, d := range data {
71 | 			wg.Add(1)
72 | 			go func(i int, d string) {
73 | 				defer wg.Done()
74 | 				_, perr := clnts[i].PutObject("test", "testobject", strings.NewReader(d), "")
75 | 				if perr != nil {
76 | 					log.Println(perr)
77 | 				}
78 | 			}(i, d)
79 | 		}
80 | 		wg.Wait()
81 | 	}
82 | }
-------------------------------------------------------------------------------- /parallel-upload-download/README.md: --------------------------------------------------------------------------------
1 | # parallel
2 | 
3 | ## Read and Write
4 | 
5 | Uploads the requested number of objects to the server concurrently.
6 | 
7 | ```
8 | wget https://raw.githubusercontent.com/minio/perftest/master/parallel-upload-download/parallel-put.go
9 | go build parallel-put.go
10 | ```
11 | 
12 | Now that you have built the code, proceed to run.
13 | 
14 | ```
15 | ACCESSKEY=minio SECRETKEY=minio123 ENDPOINT=http://147.75.193.69:9001 CONCURRENCY=500 BUCKET=parallel-put ./parallel-put
16 | Elapsed time : 40.209136441s
17 | Speed : 29 objs/sec
18 | Bandwidth : 294 MBytes/sec
19 | ```
20 | 
21 | By default all uploaded objects are 10 MiB in size. To change the size to, say, 1 MiB, use the `-size` flag, which is specified in bytes.
22 | 
23 | Once you have gathered the results for the upload operation, proceed to download the same uploaded objects.
24 | 
25 | ```
26 | wget https://raw.githubusercontent.com/minio/perftest/master/parallel-upload-download/parallel-get.go
27 | go build parallel-get.go
28 | ```
29 | 
30 | Now that you have built the code, proceed to run.
31 | ```
32 | ACCESSKEY=minio SECRETKEY=minio123 ENDPOINT=http://147.75.193.69:9001 CONCURRENCY=1000 BUCKET=parallel-put ./parallel-get
33 | Elapsed time : 6.443437387s
34 | Speed : 155 objs/sec
35 | Bandwidth : 1552 MBytes/sec
36 | ```
-------------------------------------------------------------------------------- /parallel-upload-download/parallel-get.go: --------------------------------------------------------------------------------
1 | /*
2 |  * Minio Cloud Storage (C) 2017 Minio, Inc.
3 |  *
4 |  * Licensed under the Apache License, Version 2.0 (the "License");
5 |  * you may not use this file except in compliance with the License.
6 |  * You may obtain a copy of the License at
7 |  *
8 |  *     http://www.apache.org/licenses/LICENSE-2.0
9 |  *
10 |  * Unless required by applicable law or agreed to in writing, software
11 |  * distributed under the License is distributed on an "AS IS" BASIS,
12 |  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 |  * See the License for the specific language governing permissions and
14 |  * limitations under the License.
15 |  */
16 | 
17 | package main
18 | 
19 | import (
20 | 	"fmt"
21 | 	"io"
22 | 	"log"
23 | 	"os"
24 | 	"strconv"
25 | 	"sync"
26 | 	"time"
27 | 
28 | 	"github.com/aws/aws-sdk-go/aws"
29 | 	"github.com/aws/aws-sdk-go/aws/credentials"
30 | 	"github.com/aws/aws-sdk-go/aws/session"
31 | 	"github.com/aws/aws-sdk-go/service/s3"
32 | 	"github.com/aws/aws-sdk-go/service/s3/s3manager"
33 | )
34 | 
35 | type devNull int
36 | 
37 | func (devNull) WriteAt(p []byte, off int64) (int, error) {
38 | 	return len(p), nil
39 | }
40 | 
41 | // Discard is an io.WriterAt on which
42 | // all WriteAt calls succeed without
43 | // doing anything.
44 | var Discard io.WriterAt = devNull(0)
45 | 
46 | // Downloads all object names in parallel.
47 | func parallelDownloads(objectNames []string) {
48 | 	var wg sync.WaitGroup
49 | 	for _, objectName := range objectNames {
50 | 		wg.Add(1)
51 | 		go func(objectName string) {
52 | 			defer wg.Done()
53 | 			if err := downloadBlob(objectName); err != nil {
54 | 				panic(err)
55 | 			}
56 | 		}(objectName)
57 | 	}
58 | 	wg.Wait()
59 | }
60 | 
61 | // downloadBlob does a download from the S3/Minio server.
62 | func downloadBlob(objectName string) error {
63 | 	credsUp := credentials.NewStaticCredentials(os.Getenv("ACCESSKEY"), os.Getenv("SECRETKEY"), "")
64 | 	sessUp := session.New(aws.NewConfig().
65 | 		WithCredentials(credsUp).
66 | 		WithRegion("us-east-1").
67 | 		WithEndpoint(os.Getenv("ENDPOINT")).
68 | 		WithS3ForcePathStyle(true))
69 | 
70 | 	downloader := s3manager.NewDownloader(sessUp, func(u *s3manager.Downloader) {
71 | 		u.PartSize = 64 * 1024 * 1024 // 64MB per part
72 | 	})
73 | 
74 | 	var err error
75 | 	_, err = downloader.Download(Discard, &s3.GetObjectInput{
76 | 		Bucket: aws.String(os.Getenv("BUCKET")),
77 | 		Key:    aws.String(objectName),
78 | 	})
79 | 
80 | 	return err
81 | }
82 | 
83 | func main() {
84 | 	concurrency := os.Getenv("CONCURRENCY")
85 | 	conc, err := strconv.Atoi(concurrency)
86 | 	if err != nil {
87 | 		log.Fatalln(err)
88 | 	}
89 | 
90 | 	var objectNames []string
91 | 	for i := 0; i < conc; i++ {
92 | 		objectNames = append(objectNames, fmt.Sprintf("object%d", i+1))
93 | 	}
94 | 
95 | 	start := time.Now().UTC()
96 | 	parallelDownloads(objectNames)
97 | 	totalSize := conc * 10485760 // assumes the default 10 MiB objects uploaded by parallel-put
98 | 	elapsed := time.Since(start)
99 | 	fmt.Println("Elapsed time :", elapsed)
100 | 	seconds := float64(elapsed) / float64(time.Second)
101 | 	fmt.Printf("Speed : %4.0f objs/sec\n", float64(conc)/seconds)
102 | 	fmt.Printf("Bandwidth : %4.0f MBytes/sec\n", float64(totalSize)/seconds/1024/1024)
103 | }
-------------------------------------------------------------------------------- /parallel-upload-download/parallel-put.go: --------------------------------------------------------------------------------
1 | /*
2 |  * Minio Cloud Storage (C) 2017 Minio, Inc.
3 |  *
4 |  * Licensed under the Apache License, Version 2.0 (the "License");
5 |  * you may not use this file except in compliance with the License.
6 |  * You may obtain a copy of the License at
7 |  *
8 |  *     http://www.apache.org/licenses/LICENSE-2.0
9 |  *
10 |  * Unless required by applicable law or agreed to in writing, software
11 |  * distributed under the License is distributed on an "AS IS" BASIS,
12 |  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 |  * See the License for the specific language governing permissions and
14 |  * limitations under the License.
15 |  */
16 | 
17 | package main
18 | 
19 | import (
20 | 	"bytes"
21 | 	"flag"
22 | 	"fmt"
23 | 	"log"
24 | 	"os"
25 | 	"strconv"
26 | 	"sync"
27 | 	"time"
28 | 
29 | 	"github.com/aws/aws-sdk-go/aws"
30 | 	"github.com/aws/aws-sdk-go/aws/credentials"
31 | 	"github.com/aws/aws-sdk-go/aws/session"
32 | 	"github.com/aws/aws-sdk-go/service/s3/s3manager"
33 | )
34 | 
35 | // Change this value to test with a different default object size.
36 | const defaultObjectSize = 10 * 1024 * 1024
37 | 
38 | // Uploads all the input objects in parallel; upon any error this function panics.
39 | func parallelUploads(objectNames []string, data []byte) {
40 | 	var wg sync.WaitGroup
41 | 	for _, objectName := range objectNames {
42 | 		wg.Add(1)
43 | 		go func(objectName string) {
44 | 			defer wg.Done()
45 | 			if err := uploadBlob(data, objectName); err != nil {
46 | 				panic(err)
47 | 			}
48 | 		}(objectName)
49 | 	}
50 | 	wg.Wait()
51 | }
52 | 
53 | // uploadBlob does an upload to the S3/Minio server.
54 | func uploadBlob(data []byte, objectName string) error {
55 | 	credsUp := credentials.NewStaticCredentials(os.Getenv("ACCESSKEY"), os.Getenv("SECRETKEY"), "")
56 | 	sessUp := session.New(aws.NewConfig().
57 | 		WithCredentials(credsUp).
58 | 		WithRegion("us-east-1").
59 | 		WithEndpoint(os.Getenv("ENDPOINT")).
60 | 		WithS3ForcePathStyle(true))
61 | 
62 | 	uploader := s3manager.NewUploader(sessUp, func(u *s3manager.Uploader) {
63 | 		u.PartSize = 64 * 1024 * 1024 // 64MB per part
64 | 	})
65 | 	var err error
66 | 	_, err = uploader.Upload(&s3manager.UploadInput{
67 | 		Body:   bytes.NewReader(data),
68 | 		Bucket: aws.String(os.Getenv("BUCKET")),
69 | 		Key:    aws.String(objectName),
70 | 	})
71 | 
72 | 	return err
73 | }
74 | 
75 | var (
76 | 	objectSize = flag.Int("size", defaultObjectSize, "Size of the object to upload.")
77 | )
78 | 
79 | func main() {
80 | 	flag.Parse()
81 | 
82 | 	concurrency := os.Getenv("CONCURRENCY")
83 | 	conc, err := strconv.Atoi(concurrency)
84 | 	if err != nil {
85 | 		log.Fatalln(err)
86 | 	}
87 | 
88 | 	var objectNames []string
89 | 	for i := 0; i < conc; i++ {
90 | 		objectNames = append(objectNames, fmt.Sprintf("object%d", i+1))
91 | 	}
92 | 
93 | 	var data = bytes.Repeat([]byte("a"), *objectSize)
94 | 
95 | 	start := time.Now().UTC()
96 | 	parallelUploads(objectNames, data)
97 | 
98 | 	totalSize := conc * *objectSize
99 | 	elapsed := time.Since(start)
100 | 	fmt.Println("Elapsed time :", elapsed)
101 | 	seconds := float64(elapsed) / float64(time.Second)
102 | 	fmt.Printf("Speed : %4.0f objs/sec\n", float64(conc)/seconds)
103 | 	fmt.Printf("Bandwidth : %4.0f MBytes/sec\n", float64(totalSize)/seconds/1024/1024)
104 | }
-------------------------------------------------------------------------------- /perftest.go: --------------------------------------------------------------------------------
1 | package main
2 | 
3 | import (
4 | 	"flag"
5 | 	"fmt"
6 | 	"github.com/aws/aws-sdk-go/aws"
7 | 	"github.com/aws/aws-sdk-go/aws/credentials"
8 | 	"github.com/aws/aws-sdk-go/aws/session"
9 | 	"github.com/aws/aws-sdk-go/service/s3"
10 | 	"github.com/aws/aws-sdk-go/service/s3/s3manager"
11 | 	"log"
12 | 	"os"
13 | 	"runtime"
14 | 	"sync"
15 | 	"time"
16 | )
17 | 
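// perftest fans the bucket listing out over 16 goroutines, one per leading
// hex digit appended to the -p prefix, and copies every listed key. For a
// prefix of "obj-" the workers list (illustrative):
//
//	obj-0..., obj-1..., ..., obj-9..., obj-a..., ..., obj-f...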
| 18 | var ( 19 | prefixFlag = flag.String("p", "", "prefix for search") 20 | digitsFlag = flag.Int("d", 0, "digits for search") 21 | bucketFlag = flag.String("b", "", "bucket for search") 22 | regionFlag = flag.String("r", "", "region for search") 23 | endpointFlag = flag.String("e", "", "endpoint for bucket") 24 | accessFlag = flag.String("a", "", "access key") 25 | secretFlag = flag.String("s", "", "secret key") 26 | cpu = flag.Int("cpus", runtime.NumCPU(), "Number of CPUs to use. Defaults to number of processors.") 27 | ) 28 | 29 | func listS3(wg *sync.WaitGroup, index int, results chan<- []string) { 30 | 31 | var creds *credentials.Credentials 32 | if *accessFlag != "" && *secretFlag != "" { 33 | creds = credentials.NewStaticCredentials(*accessFlag, *secretFlag, "") 34 | } else { 35 | creds = credentials.AnonymousCredentials 36 | } 37 | sess := session.New(aws.NewConfig().WithCredentials(creds).WithRegion(*regionFlag).WithEndpoint(*endpointFlag).WithS3ForcePathStyle(true)) 38 | 39 | prefix := fmt.Sprintf("%s%x", *prefixFlag, index) 40 | prefixMax := "" 41 | if *digitsFlag != 0 && *digitsFlag <= 0xf { 42 | prefixMax = fmt.Sprintf("%s%x%x", *prefixFlag, index, *digitsFlag) 43 | } 44 | 45 | svc := s3.New(sess) 46 | inputparams := &s3.ListObjectsInput{ 47 | Bucket: aws.String(*bucketFlag), 48 | Prefix: aws.String(prefix), 49 | } 50 | 51 | result := make([]string, 0, 1000) 52 | 53 | svc.ListObjectsPages(inputparams, func(page *s3.ListObjectsOutput, lastPage bool) bool { 54 | 55 | prefixMaxReached := false 56 | for _, value := range page.Contents { 57 | if prefixMax != "" && (*value.Key)[:len(prefixMax)] == prefixMax { 58 | prefixMaxReached = true 59 | break 60 | } 61 | copyObject(*value.Key) 62 | result = append(result, *value.Key) 63 | } 64 | 65 | if prefixMaxReached || lastPage { 66 | results <- result 67 | wg.Done() 68 | return false 69 | } else { 70 | return true 71 | } 72 | }) 73 | } 74 | 75 | func listPrefixes() (map[string]bool, error) { 76 | 77 | var wg sync.WaitGroup 78 | var results = make(chan []string) 79 | 80 | for i := 0x0; i <= 0xf; i++ { 81 | wg.Add(1) 82 | 83 | go func(index int) { 84 | listS3(&wg, index, results) 85 | }(i) 86 | } 87 | 88 | go func() { 89 | wg.Wait() 90 | close(results) 91 | }() 92 | 93 | prefixHash := make(map[string]bool) 94 | for result := range results { 95 | for _, r := range result { 96 | prefixHash[r] = true 97 | } 98 | } 99 | 100 | return prefixHash, nil 101 | } 102 | 103 | func copyObject(k string) { 104 | 105 | // Following credentials have restricted access to just GetObject for lifedrive-100m-usw2 106 | accessKey100mRestrictedPolicy := "AKIAJDXB2JULIRQQVHZQ" 107 | secretKey100mRestrictedPolicy := "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" 108 | 109 | credsDown := credentials.NewStaticCredentials(accessKey100mRestrictedPolicy, secretKey100mRestrictedPolicy, "") 110 | sessDown := session.New(aws.NewConfig().WithCredentials(credsDown).WithRegion("us-west-2").WithEndpoint("https://s3-us-west-2.amazonaws.com").WithS3ForcePathStyle(true)) 111 | 112 | credsUp := credentials.NewStaticCredentials("9OE9RNWW2PMU5X5A3WHH", "XVjlSlQ/JeLOA7k4Y2zwgNOhnTflirIm++bqgZHb", "") 113 | sessUp := session.New(aws.NewConfig().WithCredentials(credsUp).WithRegion("us-east-1").WithEndpoint("http://127.0.0.1:9000").WithS3ForcePathStyle(true)) 114 | 115 | { 116 | file, err := os.Create(k) 117 | if err != nil { 118 | log.Fatal("Failed to create file", err) 119 | } 120 | defer file.Close() 121 | 122 | downloader := s3manager.NewDownloader(sessDown) 123 | numBytes, err := 
downloader.Download(file, 124 | &s3.GetObjectInput{ 125 | Bucket: aws.String("lifedrive-100m-usw2"), 126 | Key: aws.String(k), 127 | }) 128 | if err != nil { 129 | fmt.Println("Failed to download file", err, numBytes) 130 | return 131 | } 132 | } 133 | 134 | for attempt := 0; ; attempt++ { 135 | 136 | fileUp, err := os.Open(k) 137 | if err != nil { 138 | log.Fatal("Failed to open file", err) 139 | } 140 | defer fileUp.Close() 141 | 142 | uploader := s3manager.NewUploader(sessUp) 143 | _, err = uploader.Upload(&s3manager.UploadInput{ 144 | Body: fileUp, 145 | Bucket: aws.String("bucket100m"), 146 | Key: aws.String(k[0:2] + "/" + k[2:]), 147 | }) 148 | if err != nil { 149 | if attempt < 3 { 150 | time.Sleep(500 * time.Millisecond) 151 | continue 152 | } else { 153 | // abort after three failed attempts 154 | log.Fatalln("Failed to upload", err) 155 | } 156 | } 157 | 158 | fmt.Println("Up:", k[:10]) 159 | break 160 | } 161 | 162 | os.Remove(k) 163 | } 164 | 165 | func main() { 166 | flag.Parse() 167 | runtime.GOMAXPROCS(*cpu) 168 | if *prefixFlag == "" || *bucketFlag == "" || *regionFlag == "" || *endpointFlag == "" { 169 | fmt.Println("Bad arguments") 170 | return 171 | } 172 | 173 | var list map[string]bool 174 | list, _ = listPrefixes() 175 | 176 | fmt.Println("Number of objects:", len(list)) 177 | } 178 | -------------------------------------------------------------------------------- /raid_ephemeral.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # 3 | # this script will attempt to detect any ephemeral drives on an EC2 node and create a RAID-0 stripe 4 | # mounted at /mnt. It should be run early on the first boot of the system. 5 | # 6 | # Beware, This script is NOT fully idempotent. 7 | # 8 | 9 | METADATA_URL_BASE="http://169.254.169.254/2012-01-12" 10 | 11 | yum -y -d0 install mdadm curl 12 | 13 | # Configure Raid - take into account xvdb or sdb 14 | root_drive=`df -h | grep -v grep | awk 'NR==2{print $1}'` 15 | 16 | if [ "$root_drive" == "/dev/xvda1" ]; then 17 | echo "Detected 'xvd' drive naming scheme (root: $root_drive)" 18 | DRIVE_SCHEME='xvd' 19 | else 20 | echo "Detected 'sd' drive naming scheme (root: $root_drive)" 21 | DRIVE_SCHEME='sd' 22 | fi 23 | 24 | # figure out how many ephemerals we have by querying the metadata API, and then: 25 | # - convert the drive name returned from the API to the hosts DRIVE_SCHEME, if necessary 26 | # - verify a matching device is available in /dev/ 27 | drives="" 28 | ephemeral_count=0 29 | ephemerals=$(curl --silent $METADATA_URL_BASE/meta-data/block-device-mapping/ | grep ephemeral) 30 | for e in $ephemerals; do 31 | echo "Probing $e .." 32 | device_name=$(curl --silent $METADATA_URL_BASE/meta-data/block-device-mapping/$e) 33 | # might have to convert 'sdb' -> 'xvdb' 34 | device_name=$(echo $device_name | sed "s/sd/$DRIVE_SCHEME/") 35 | device_path="/dev/$device_name" 36 | 37 | # test that the device actually exists since you can request more ephemeral drives than are available 38 | # for an instance type and the meta-data API will happily tell you it exists when it really does not. 39 | if [ -b $device_path ]; then 40 | echo "Detected ephemeral disk: $device_path" 41 | drives="$drives $device_path" 42 | ephemeral_count=$((ephemeral_count + 1 )) 43 | else 44 | echo "Ephemeral disk $e, $device_path is not present. skipping" 45 | fi 46 | done 47 | 48 | if [ "$ephemeral_count" = 0 ]; then 49 | echo "No ephemeral disk detected. 
exiting" 50 | exit 0 51 | fi 52 | 53 | # ephemeral0 is typically mounted for us already. umount it here 54 | umount /mnt 55 | 56 | # overwrite first few blocks in case there is a filesystem, otherwise mdadm will prompt for input 57 | for drive in $drives; do 58 | dd if=/dev/zero of=$drive bs=4096 count=1024 59 | done 60 | 61 | partprobe 62 | mdadm --create --verbose /dev/md0 --level=0 -c256 --raid-devices=$ephemeral_count $drives 63 | echo DEVICE $drives | tee /etc/mdadm.conf 64 | mdadm --detail --scan | tee -a /etc/mdadm.conf 65 | blockdev --setra 65536 /dev/md0 66 | mkfs -t ext3 /dev/md0 67 | mount -t ext3 -o noatime /dev/md0 /mnt 68 | 69 | # Remove xvdb/sdb from fstab 70 | chmod 777 /etc/fstab 71 | sed -i "/${DRIVE_SCHEME}b/d" /etc/fstab 72 | 73 | # Make raid appear on reboot 74 | echo "/dev/md0 /mnt ext3 noatime 0 0" | tee -a /etc/fstab -------------------------------------------------------------------------------- /upload-perftest/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM golang:alpine 2 | 3 | RUN ["apk", "add", "--no-cache", "git"] 4 | 5 | RUN ["go", "get", "-u", "github.com/minio/minio-go"] 6 | RUN ["go", "get", "-u", "github.com/aws/aws-sdk-go"] 7 | 8 | COPY ./uploadsperftest.go /root/ 9 | 10 | WORKDIR /root 11 | 12 | ENTRYPOINT ["go", "run", "uploadsperftest.go"] 13 | -------------------------------------------------------------------------------- /upload-perftest/README.md: -------------------------------------------------------------------------------- 1 | # upload-perftest 2 | Upload performance testing program for Minio Object Storage server. 3 | 4 | The command line options are: 5 | 6 | ```shell 7 | $ ./upload-perftest --help 8 | Usage of ./upload-perftest: 9 | -bucket string 10 | Bucket to use for uploads test (default "bucket") 11 | -c int 12 | concurrency - number of parallel uploads (default 1) 13 | -h string 14 | service endpoint host (default "localhost:9000") 15 | -m int 16 | Maximum amount of disk usage in GBs (default 80) 17 | -s Set if endpoint requires https 18 | -seed int 19 | random seed (default 42) 20 | 21 | ``` 22 | 23 | Credentials are passed via the environment variables `ACCESS_KEY` and 24 | `SECRET_KEY`. 25 | 26 | After the options, a positional parameter for the size of objects to 27 | upload is required. This can be specified with units like `1MiB` or 28 | `1GB`. 29 | 30 | The program generates objects of the given size using a fast, 31 | in-memory, partially-random data generator for object content. 32 | 33 | The concurrency options sets the number of parallel uploader threads 34 | and simulates multiple uploaders opening separate connections to the 35 | Minio server endpoint. Each thread sequentially performs uploads of 36 | the given size. 37 | 38 | The program exits on any kind of upload error with non-zero exit 39 | status. On a successful run, the program exits when uploads have been 40 | continuosly performed for at least 15 minutes and at least 10 objects 41 | have been uploaded. 42 | 43 | Every 10 seconds, the program reports the number of objects uploaded, 44 | the average data bandwidth achieved since the start (total object 45 | bytes sent/duration of the test), the average number of objects 46 | uploaded per second since the start, and the total amount of object 47 | data uploaded. 48 | 49 | To not overflow disk capacity of the server, the `-m` options takes 50 | the number of GBs of maximum disk space to use in the test. 
193 | A sample run looks like the following:
194 | 
195 | ```shell
196 | $ ./upload-perftest -h moslb:80 -m 80 -c 32 10MiB
197 | Generating names for objects...
198 | done.
199 | At 10.01: Avg data b/w: 143.91 MiBps. Avg obj/s: 14.39. Data Written: 1440.00 MiB in 144 objects.
200 | At 20.01: Avg data b/w: 148.45 MiBps. Avg obj/s: 14.85. Data Written: 2970.00 MiB in 297 objects.
201 | At 30.01: Avg data b/w: 150.30 MiBps. Avg obj/s: 15.03. Data Written: 4510.00 MiB in 451 objects.
202 | At 40.01: Avg data b/w: 154.22 MiBps. Avg obj/s: 15.42. Data Written: 6170.00 MiB in 617 objects.
203 | At 50.01: Avg data b/w: 156.18 MiBps. Avg obj/s: 15.62. Data Written: 7810.00 MiB in 781 objects.
204 | ...
205 | 
206 | ```
--------------------------------------------------------------------------------