├── .github
│   ├── FUNDING.yml
│   ├── ISSUE_TEMPLATE.md
│   └── PULL_REQUEST_TEMPLATE.md
├── .gitignore
├── CONTRIBUTING.md
├── DEPLOY_README.md
├── Dockerfile
├── Gemfile
├── Gemfile.lock
├── Makefile
├── README.md
├── Rakefile
├── bin
│   └── fakes3
├── fakes3.gemspec
├── lib
│   ├── fakes3.rb
│   └── fakes3
│       ├── bucket.rb
│       ├── bucket_query.rb
│       ├── cli.rb
│       ├── errors.rb
│       ├── file_store.rb
│       ├── rate_limitable_file.rb
│       ├── s3_object.rb
│       ├── server.rb
│       ├── sorted_object_list.rb
│       ├── unsupported_operation.rb
│       ├── util.rb
│       ├── version.rb
│       ├── xml_adapter.rb
│       └── xml_parser.rb
├── static
│   ├── button.svg
│   └── logo.png
└── test
    ├── aws_sdk_commands_test.rb
    ├── aws_sdk_v2_commands_test.rb
    ├── boto_test.rb
    ├── botocmd.py
    ├── cli_test.rb
    ├── local_s3_cfg
    ├── minitest_helper.rb
    ├── post_test.rb
    ├── s3_commands_test.rb
    ├── s3cmd_test.rb
    └── test_helper.rb
/.github/FUNDING.yml:
--------------------------------------------------------------------------------
1 | custom: https://supso.org/projects/fake-s3
--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE.md:
--------------------------------------------------------------------------------
1 | Thanks for the issue! We do have a few small rules to follow:
2 |
3 | 1. Please add [Feature], [Bug], [Question], or [Other] to your title.
4 |
5 | 2. If it is a bug, please add steps to reproduce it. More information is almost always helpful.
6 |
7 | 3. If it is a feature, is there a related pull request? If so, please add a link to the pull request somewhere in the comments.
8 |
9 | Thanks!
10 |
--------------------------------------------------------------------------------
/.github/PULL_REQUEST_TEMPLATE.md:
--------------------------------------------------------------------------------
1 | Thanks for the pull request! We do have a few small rules to follow:
2 |
3 | 1. Ensure tests pass before creating the pull request.
4 |
5 | 2. Please use a coding style similar to the rest of the file(s) you change. This isn't a hard rule but if it's too different, it will stick out.
6 |
7 | 3. We have a contributor license agreement (CLA) based off of Google and Apache's CLA. If you would feel comfortable contributing to, say, Angular.js, you should feel comfortable with this CLA. Unless you've previously signed, please sign at: https://docs.google.com/forms/d/e/1FAIpQLSeKKSKNNz5ji1fd5bbu5RaGFbhD45zEaCnAjzBZPpzOaXQsvQ/viewform
8 |
9 | To read more about all three of the above, visit: https://github.com/jubos/fake-s3/blob/master/CONTRIBUTING.md
10 |
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | pkg/*
2 | *.gem
3 | .bundle
4 | tmp
5 | test_root
6 |
7 | # Don't check in RVM/rbenv files
8 | .ruby-version
9 |
10 | # RubyMine
11 | .idea
12 |
13 | # Don't check in the Gemfile.lock
14 | Gemfile.lock
15 |
16 | .DS_Store
17 | .rvmrc
18 | root/*
19 |
--------------------------------------------------------------------------------
/CONTRIBUTING.md:
--------------------------------------------------------------------------------
1 | ## Contributing to Fake S3
2 |
3 | Contributions in the form of pull requests, bug reports, documentation, or anything else are welcome! We do have a few small rules to follow:
4 |
5 | - Ensure tests pass before making a pull request. (You can read the Testing section below for how to set up your development environment and run tests)
6 |
7 | - Please use a coding style similar to the rest of the file(s) you change. This isn't a hard rule but if it's too different, it will stand out.
8 |
9 | - Unless your contributions fall under the Trivial Exemption Policy (below), contributors must sign our Contributor License Agreement (CLA) to be eligible to contribute. Read more in the section below.
10 |
11 |
12 | ## Testing
13 |
14 | There are some prerequisites to actually being able to run the unit/integration tests.
15 |
16 | On macOS, edit your /etc/hosts and add the following lines:
17 |
18 | 127.0.0.1 posttest.localhost
19 | 127.0.0.1 v2.bucket.localhost
20 |
21 | Then ensure that the following packages are installed (boto, s3cmd):
22 |
23 | > pip install boto
24 | > brew install s3cmd
25 |
26 |
27 | Start the test server using:
28 |
29 | rake test_server
30 |
31 | Finally, in another terminal window run:
32 |
33 | rake test
34 |
35 |
36 | ## Signing the Contributor License agreement
37 |
38 | We have a contributor license agreement (CLA) based off of Google and Apache's CLA. If you would feel comfortable contributing to, say, Angular.js, you should feel comfortable with this CLA.
39 |
40 | To sign the CLA:
41 |
42 | [Click here and fill out the form.](https://docs.google.com/forms/d/e/1FAIpQLSeKKSKNNz5ji1fd5bbu5RaGFbhD45zEaCnAjzBZPpzOaXQsvQ/viewform)
43 |
44 | If you're interested, [this blog post](https://julien.ponge.org/blog/in-defense-of-contributor-license-agreements/) discusses why to use a CLA, and even goes over the text of the CLA we based ours on.
45 |
46 |
47 | ## Trivial Exemption Policy
48 |
49 | The Trivial Exemption Policy exempts contributions that are not sufficiently original or creative to enjoy copyright protection; for such contributions, you do not need to sign the CLA. These are generally changes that don't involve much creativity.
50 |
51 | Contributions considered trivial are generally fewer than 10 lines of actual code. (A trivial change may exceed 10 lines when it consists largely of blank lines, changes in indentation, formatting, simple comments, logging messages, changes to metadata like Gemfiles or gitignore, reordering, splitting or combining files, renaming files, or other unoriginal changes.)
52 |
--------------------------------------------------------------------------------
/DEPLOY_README.md:
--------------------------------------------------------------------------------
1 | # Helpful things to remember when deploying to RubyGems
2 |
3 | Ensure the version number is updated (lib/fakes3/version.rb)
4 |
5 | Ensure the tests pass
6 | ```
7 | rake test_server    # then, in another terminal: rake test
8 | ```
9 |
10 | Build the Gem
11 | ```
12 | gem build fakes3.gemspec
13 | ```
14 |
15 | Push to RubyGems
16 | ```
17 | gem push fakes3-VERSION.gem
18 | ```
19 |
--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------
1 | FROM alpine:3.4
2 |
3 | RUN apk add --no-cache --update ruby ruby-dev ruby-bundler python py-pip git build-base libxml2-dev libxslt-dev
4 | RUN pip install boto s3cmd
5 |
6 | COPY fakes3.gemspec Gemfile Gemfile.lock /app/
7 | COPY lib/fakes3/version.rb /app/lib/fakes3/
8 |
9 | WORKDIR /app
10 |
11 | RUN bundle install
12 |
13 | COPY . /app/
14 |
--------------------------------------------------------------------------------
/Gemfile:
--------------------------------------------------------------------------------
1 | source 'https://rubygems.org'
2 | # Specify your gem's dependencies in fakes3.gemspec
3 | gemspec
--------------------------------------------------------------------------------
/Gemfile.lock:
--------------------------------------------------------------------------------
1 | PATH
2 | remote: .
3 | specs:
4 | fakes3 (2.0.0)
5 | builder
6 | thor
7 | xml-simple
8 |
9 | GEM
10 | remote: https://rubygems.org/
11 | specs:
12 | aws-s3 (0.6.3)
13 | builder
14 | mime-types
15 | xml-simple
16 | aws-sdk (2.10.73)
17 | aws-sdk-resources (= 2.10.73)
18 | aws-sdk-core (2.10.73)
19 | aws-sigv4 (~> 1.0)
20 | jmespath (~> 1.0)
21 | aws-sdk-resources (2.10.73)
22 | aws-sdk-core (= 2.10.73)
23 | aws-sdk-v1 (1.67.0)
24 | json (~> 1.4)
25 | nokogiri (~> 1)
26 | aws-sigv4 (1.0.2)
27 | builder (3.2.3)
28 | domain_name (0.5.20170404)
29 | unf (>= 0.0.5, < 1.0.0)
30 | http-cookie (1.0.3)
31 | domain_name (~> 0.5)
32 | jmespath (1.3.1)
33 | json (1.8.6)
34 | metaclass (0.0.4)
35 | mime-types (3.1)
36 | mime-types-data (~> 3.2015)
37 | mime-types-data (3.2016.0521)
38 | mini_portile2 (2.4.0)
39 | mocha (1.3.0)
40 | metaclass (~> 0.0.1)
41 | netrc (0.11.0)
42 | nokogiri (1.10.5)
43 | mini_portile2 (~> 2.4.0)
44 | power_assert (1.1.1)
45 | rake (12.2.1)
46 | rest-client (2.0.2)
47 | http-cookie (>= 1.0.2, < 2.0)
48 | mime-types (>= 1.16, < 4.0)
49 | netrc (~> 0.8)
50 | right_aws (3.1.0)
51 | right_http_connection (>= 1.2.5)
52 | right_http_connection (1.5.0)
53 | test-unit (3.2.6)
54 | power_assert
55 | thor (0.20.0)
56 | unf (0.1.4)
57 | unf_ext
58 | unf_ext (0.0.7.4)
59 | xml-simple (1.1.5)
60 |
61 | PLATFORMS
62 | ruby
63 |
64 | DEPENDENCIES
65 | aws-s3
66 | aws-sdk (~> 2)
67 | aws-sdk-v1
68 | bundler (>= 1.0.0)
69 | fakes3!
70 | mocha
71 | rake
72 | rest-client
73 | right_aws
74 | test-unit
75 |
76 | BUNDLED WITH
77 | 1.16.2
78 |
--------------------------------------------------------------------------------
/Makefile:
--------------------------------------------------------------------------------
1 | .PHONY: test
2 |
3 | build-test-container:
4 | docker build -t fake-s3 .
5 |
6 | test: build-test-container
7 | docker run --rm --add-host="posttest.localhost:127.0.0.1" -e "RUBYOPT=-W0" fake-s3 sh -c "rake test_server & rake test"
8 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | ![Fake S3](static/logo.png)
2 |
3 | ## Introduction
4 |
5 | Fake S3 is a lightweight server that responds to the same API as Amazon S3.
6 |
7 | It is extremely useful for testing S3 in a sandbox environment without actually making calls to Amazon, which not only requires a network connection, but also costs money with every use.
8 |
9 | The goal of Fake S3 is to minimize runtime dependencies and be more of a
10 | development tool to test S3 calls in your code rather than a production server looking to duplicate S3 functionality.
11 |
12 | Many commands are supported, including put, get, list, copy, and make bucket.
13 |
14 | ## Installation
15 |
16 | gem install fakes3
17 |
18 | ## Running
19 |
20 | To run the server, you must specify a root, a port, and your license key.
21 |
22 | fakes3 -r /mnt/fakes3_root -p 4567 --license YOUR_LICENSE_KEY
23 |
24 | ## Licensing
25 |
26 | As of the latest version, we are licensing with Super Source. To get a license, visit:
27 |
28 | https://supso.org/projects/fake-s3
29 |
30 | Depending on your company's size, the license may be free. It is also free for individuals.
31 |
32 | You pass the license key to Fake S3 with the command line option --license YOUR_LICENSE_KEY.
33 |
34 | ## Connecting to Fake S3
35 |
36 | Take a look at the test cases to see client example usage. For now, Fake S3 is
37 | mainly tested with s3cmd, the aws-s3 gem, and right_aws. There are plenty more
38 | libraries out there; please do mention whether other clients work or not.
39 |
40 | Here is a running list of [supported clients](https://github.com/jubos/fake-s3/wiki/Supported-Clients "Supported Clients")
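
Below is a minimal sketch of talking to Fake S3 from Ruby with the aws-sdk gem (v2, the version exercised by the tests). The endpoint, dummy credentials, and `force_path_style` setting are illustrative assumptions for a server started as shown above:

    require 'aws-sdk'

    s3 = Aws::S3::Client.new(
      endpoint:          'http://localhost:4567',  # wherever fakes3 is listening
      access_key_id:     'fake',                   # Fake S3 does not validate credentials
      secret_access_key: 'fake',
      region:            'us-east-1',
      force_path_style:  true                      # path-style URLs avoid bucket-subdomain DNS setup
    )

    s3.create_bucket(bucket: 'my-bucket')
    s3.put_object(bucket: 'my-bucket', key: 'hello.txt', body: 'Hello from Fake S3')
    puts s3.get_object(bucket: 'my-bucket', key: 'hello.txt').body.read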
41 |
42 | ## Contributing
43 |
44 | Contributions in the form of pull requests, bug reports, documentation, or anything else are welcome! Please read the CONTRIBUTING.md file for more info: [CONTRIBUTING.md](https://github.com/jubos/fake-s3/blob/master/CONTRIBUTING.md)
45 |
--------------------------------------------------------------------------------
/Rakefile:
--------------------------------------------------------------------------------
1 | require 'rubygems'
2 | require 'bundler'
3 | require 'rake/testtask'
4 | include Rake::DSL
5 | Bundler::GemHelper.install_tasks
6 |
7 | Rake::TestTask.new(:test) do |t|
8 | t.libs << "."
9 | t.test_files =
10 | FileList['test/*_test.rb'].exclude('test/s3_commands_test.rb')
11 |
12 | # A lot of the gems like right aws and amazon sdk have a bunch of warnings, so
13 | # this suppresses them for the test runs
14 | t.warning = false
15 | end
16 |
17 | desc "Run the test_server"
18 | task :test_server do |t|
19 | system("bundle exec bin/fakes3 --port 10453 --license test --root test_root --corspostputallowheaders 'Authorization, Content-Length, Cache-Control'")
20 | end
21 |
22 | task :default => :test
23 |
--------------------------------------------------------------------------------
/bin/fakes3:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env ruby
2 |
3 | $: << './lib'
4 |
5 | require 'fakes3/cli'
6 | FakeS3::CLI.start
7 |
--------------------------------------------------------------------------------
/fakes3.gemspec:
--------------------------------------------------------------------------------
1 | # -*- encoding: utf-8 -*-
2 | require File.join(File.dirname(__FILE__), 'lib', 'fakes3', 'version')
3 |
4 | Gem::Specification.new do |s|
5 | s.name = "fakes3"
6 | s.version = FakeS3::VERSION
7 | s.platform = Gem::Platform::RUBY
8 | s.authors = ["Curtis Spencer"]
9 | s.email = ["fakes3@supso.org"]
10 | s.homepage = "https://github.com/jubos/fake-s3"
11 | s.summary = %q{Fake S3 is a server that simulates Amazon S3 commands so you can test your S3 functionality in your projects}
12 | s.description = %q{Use Fake S3 to test basic Amazon S3 functionality without actually connecting to AWS}
13 | s.license = "Supported-Source"
14 | s.post_install_message = "Fake S3: if you don't already have a license for Fake S3, you can get one at https://supso.org/projects/fake-s3"
15 |
16 | s.add_development_dependency "bundler", ">= 1.0.0"
17 | s.add_development_dependency "aws-s3"
18 | s.add_development_dependency "right_aws"
19 | s.add_development_dependency "rest-client"
20 | s.add_development_dependency "rake"
21 | s.add_development_dependency "aws-sdk", "~> 2"
22 | s.add_development_dependency "aws-sdk-v1"
23 | s.add_development_dependency "test-unit"
24 | s.add_development_dependency "mocha"
25 | #s.add_development_dependency "ruby-debug"
26 | #s.add_development_dependency "debugger"
27 | s.add_dependency "thor"
28 | s.add_dependency "builder"
29 | s.add_dependency "xml-simple"
30 |
31 | s.files = `git ls-files`.split("\n")
32 | s.test_files = `git ls-files -- {test,spec,features}/*`.split("\n")
33 | s.executables = `git ls-files -- bin/*`.split("\n").map{ |f| File.basename(f) }
34 | s.require_paths = ["lib"]
35 | end
36 |
--------------------------------------------------------------------------------
/lib/fakes3.rb:
--------------------------------------------------------------------------------
1 | require 'fakes3/version'
2 | require 'fakes3/file_store'
3 | require 'fakes3/server'
4 |
--------------------------------------------------------------------------------
/lib/fakes3/bucket.rb:
--------------------------------------------------------------------------------
1 | require 'builder'
2 | require 'thread'
3 | require 'fakes3/s3_object'
4 | require 'fakes3/sorted_object_list'
5 |
6 | module FakeS3
7 | class Bucket
8 | attr_accessor :name,:creation_date,:objects
9 |
10 | def initialize(name,creation_date,objects)
11 | @name = name
12 | @creation_date = creation_date
13 | @objects = SortedObjectList.new
14 | objects.each do |obj|
15 | @objects.add(obj)
16 | end
17 | @mutex = Mutex.new
18 | end
19 |
20 | def find(object_name)
21 | @mutex.synchronize do
22 | @objects.find(object_name)
23 | end
24 | end
25 |
26 | def add(object)
27 |       # Unfortunately we have to synchronize here since our SortedObjectList
28 |       # is not thread-safe. We could probably get finer granularity if
29 |       # performance becomes important.
30 | @mutex.synchronize do
31 | @objects.add(object)
32 | end
33 | end
34 |
35 | def remove(object)
36 | @mutex.synchronize do
37 | @objects.remove(object)
38 | end
39 | end
40 |
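    # Runs the standard S3 list-objects query (:marker, :prefix, :max_keys,
    # :delimiter) against this bucket and wraps the result in a BucketQuery.
    # Illustrative sketch, not from the original source:
    #
    #   bq = bucket.query_for_range(:prefix => "logs/", :max_keys => 100)
    #   bq.matches        # => [S3Object, ...]
    #   bq.is_truncated?  # => true if more than 100 keys matched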
41 | def query_for_range(options)
42 | marker = options[:marker]
43 | prefix = options[:prefix]
44 | max_keys = options[:max_keys] || 1000
45 | delimiter = options[:delimiter]
46 |
47 | match_set = nil
48 | @mutex.synchronize do
49 | match_set = @objects.list(options)
50 | end
51 |
52 | bq = BucketQuery.new
53 | bq.bucket = self
54 | bq.marker = marker
55 | bq.prefix = prefix
56 | bq.max_keys = max_keys
57 | bq.delimiter = delimiter
58 | bq.matches = match_set.matches
59 | bq.is_truncated = match_set.is_truncated
60 | bq.common_prefixes = match_set.common_prefixes
61 | return bq
62 | end
63 |
64 | end
65 | end
66 |
--------------------------------------------------------------------------------
/lib/fakes3/bucket_query.rb:
--------------------------------------------------------------------------------
1 | module FakeS3
2 | class BucketQuery
3 | attr_accessor :prefix,:matches,:marker,:max_keys,
4 | :delimiter,:bucket,:is_truncated,:common_prefixes
5 |
6 | # Syntactic sugar
7 | def is_truncated?
8 | @is_truncated
9 | end
10 | end
11 | end
12 |
--------------------------------------------------------------------------------
/lib/fakes3/cli.rb:
--------------------------------------------------------------------------------
1 | require 'thor'
2 | require 'fakes3/server'
3 | require 'fakes3/version'
4 |
5 | module FakeS3
6 | class CLI < Thor
7 | default_task("server")
8 |
9 | desc "server", "Run a server on a particular hostname"
10 | method_option :root, :type => :string, :aliases => '-r', :required => true
11 | method_option :port, :type => :numeric, :aliases => '-p', :required => true
12 | method_option :address, :type => :string, :aliases => '-a', :required => false, :desc => "Bind to this address. Defaults to all IP addresses of the machine."
13 | method_option :hostname, :type => :string, :aliases => '-H', :desc => "The root name of the host. Defaults to s3.amazonaws.com."
14 | method_option :quiet, :type => :boolean, :aliases => '-q', :desc => "Quiet; do not write anything to standard output."
15 | method_option :limit, :aliases => '-l', :type => :string, :desc => 'Rate limit for serving (ie. 50K, 1.0M)'
16 | method_option :sslcert, :type => :string, :desc => 'Path to SSL certificate'
17 | method_option :sslkey, :type => :string, :desc => 'Path to SSL certificate key'
18 | method_option :corsorigin, :type => :string, :desc => 'Access-Control-Allow-Origin header return value'
19 | method_option :corsmethods, :type => :string, :desc => 'Access-Control-Allow-Methods header return value'
20 | method_option :corspreflightallowheaders, :type => :string, :desc => 'Access-Control-Allow-Headers header return value for preflight OPTIONS requests'
21 | method_option :corspostputallowheaders, :type => :string, :desc => 'Access-Control-Allow-Headers header return value for POST and PUT requests'
22 | method_option :corsexposeheaders, :type => :string, :desc => 'Access-Control-Expose-Headers header return value'
23 | method_option :license, :type => :string, :desc => 'Your license key, available at https://supso.org/projects/fake-s3'
24 |
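    # Example invocation (mirroring the README):
    #
    #   fakes3 server -r /mnt/fakes3_root -p 4567 --license YOUR_LICENSE_KEY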
25 | def server
26 | license_key = options[:license]
27 | if license_key.nil?
28 | license_message = """
29 | ======================
30 | As of version 1.3, Fake S3 requires a license key passed with --license YOUR_LICENSE_KEY.
31 | Please fix this before September 18, 2018.
32 | You can get a license at:
33 | https://supso.org/projects/fake-s3
34 | ======================
35 |
36 | """
37 | licensing_required = Time.now > Time.utc(2018, 9, 19)
38 | if licensing_required
39 | abort license_message
40 | else
41 | warn license_message
42 | end
43 | end
44 |
45 | store = nil
46 | if options[:root]
47 | root = File.expand_path(options[:root])
48 | # TODO Do some sanity checking here
49 | store = FileStore.new(root, !!options[:quiet])
50 | end
51 |
52 | if store.nil?
53 | abort "You must specify a root to use a file store (the current default)"
54 | end
55 |
56 | hostname = 's3.amazonaws.com'
57 | if options[:hostname]
58 | hostname = options[:hostname]
59 | # In case the user has put a port on the hostname
60 | if hostname =~ /:(\d+)/
61 | hostname = hostname.split(":")[0]
62 | end
63 | end
64 |
65 | if options[:limit]
66 | begin
67 | store.rate_limit = options[:limit]
68 | rescue
69 | abort $!.message
70 | end
71 | end
72 |
73 | cors_options = {}
74 | cors_options['allow_origin'] = options[:corsorigin] if options[:corsorigin]
75 | cors_options['allow_methods'] = options[:corsmethods] if options[:corsmethods]
76 | cors_options['preflight_allow_headers'] = options[:corspreflightallowheaders] if options[:corspreflightallowheaders]
77 | cors_options['post_put_allow_headers'] = options[:corspostputallowheaders] if options[:corspostputallowheaders]
78 | cors_options['expose_headers'] = options[:corsexposeheaders] if options[:corsexposeheaders]
79 |
80 | address = options[:address]
81 | ssl_cert_path = options[:sslcert]
82 | ssl_key_path = options[:sslkey]
83 |
84 | if (ssl_cert_path.nil? && !ssl_key_path.nil?) || (!ssl_cert_path.nil? && ssl_key_path.nil?)
85 | abort "If you specify an SSL certificate you must also specify an SSL certificate key"
86 | end
87 |
88 | puts "Loading Fake S3 with #{root} on port #{options[:port]} with hostname #{hostname}" unless options[:quiet]
89 | server = FakeS3::Server.new(address,options[:port],store,hostname,ssl_cert_path,ssl_key_path, quiet: !!options[:quiet], cors_options: cors_options)
90 | server.serve
91 | end
92 |
93 | desc "version", "Report the current fakes3 version"
94 | def version
95 | puts <<"EOF"
96 | ======================
97 | FakeS3 #{FakeS3::VERSION}
98 |
99 | Copyright 2012, Curtis Spencer (@jubos)
100 | EOF
101 | end
102 | end
103 | end
104 |
--------------------------------------------------------------------------------
/lib/fakes3/errors.rb:
--------------------------------------------------------------------------------
1 | module FakeS3
2 | class FakeS3Exception < RuntimeError
3 | attr_accessor :resource,:request_id
4 |
5 | def self.metaclass; class << self; self; end; end
6 |
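    # Declares "traits": for each name in arr, defines an instance accessor
    # plus a class-level setter that records a default value, and an
    # #initialize that copies those defaults onto new instances. Subclasses
    # declare defaults the way NoSuchBucket does below:
    #
    #   message "The specified bucket does not exist."
    #   http_status "404"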
7 | def self.traits(*arr)
8 | return @traits if arr.empty?
9 | attr_accessor *arr
10 |
11 | arr.each do |a|
12 | metaclass.instance_eval do
13 | define_method( a ) do |val|
14 | @traits ||= {}
15 | @traits[a] = val
16 | end
17 | end
18 | end
19 |
20 | class_eval do
21 | define_method( :initialize ) do
22 | self.class.traits.each do |k,v|
23 | instance_variable_set("@#{k}", v)
24 | end
25 | end
26 | end
27 | end
28 |
29 | traits :message,:http_status
30 |
31 | def code
32 | self.class.to_s
33 | end
34 | end
35 |
36 | class NoSuchBucket < FakeS3Exception
37 |     message "The specified bucket does not exist."
38 | http_status "404"
39 | end
40 |
41 | class BucketNotEmpty < FakeS3Exception
42 | message "The bucket you tried to delete is not empty."
43 | http_status "409"
44 | end
45 |
46 | end
47 |
--------------------------------------------------------------------------------
/lib/fakes3/file_store.rb:
--------------------------------------------------------------------------------
1 | require 'fileutils'
2 | require 'time'
3 | require 'fakes3/s3_object'
4 | require 'fakes3/bucket'
5 | require 'fakes3/rate_limitable_file'
6 | require 'digest/md5'
7 | require 'yaml'
8 |
9 | module FakeS3
10 | class FileStore
11 | FAKE_S3_METADATA_DIR = ".fakes3_metadataFFF"
12 |
13 |     # Some S3 clients with overly strict date parsing fail to parse ISO 8601
14 |     # dates without any sub-second precision (e.g. jets3t v0.7.2), and the
15 |     # examples given in the official AWS S3 documentation specify three (3)
16 |     # decimals for sub-second precision.
17 | SUBSECOND_PRECISION = 3
18 |
19 | def initialize(root, quiet_mode)
20 | @root = root
21 | @buckets = []
22 | @bucket_hash = {}
23 | @quiet_mode = quiet_mode
24 | Dir[File.join(root,"*")].each do |bucket|
25 | bucket_name = File.basename(bucket)
26 | bucket_obj = Bucket.new(bucket_name,Time.now,[])
27 | @buckets << bucket_obj
28 | @bucket_hash[bucket_name] = bucket_obj
29 | end
30 | end
31 |
32 | # Pass a rate limit in bytes per second
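    # Illustrative accepted values (not from the original source):
    #   store.rate_limit = "1000"   # 1,000 bytes/sec
    #   store.rate_limit = "50K"    # 50,000 bytes/sec
    #   store.rate_limit = "1.1M"   # 1,100,000 bytes/sec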
33 | def rate_limit=(rate_limit)
34 | if rate_limit.is_a?(String)
35 | if rate_limit =~ /^(\d+)$/
36 | RateLimitableFile.rate_limit = rate_limit.to_i
37 | elsif rate_limit =~ /^(.*)K$/
38 | RateLimitableFile.rate_limit = $1.to_f * 1000
39 | elsif rate_limit =~ /^(.*)M$/
40 | RateLimitableFile.rate_limit = $1.to_f * 1000000
41 | elsif rate_limit =~ /^(.*)G$/
42 | RateLimitableFile.rate_limit = $1.to_f * 1000000000
43 | else
44 | raise "Invalid Rate Limit Format: Valid values include (1000,10K,1.1M)"
45 | end
46 | else
47 | RateLimitableFile.rate_limit = nil
48 | end
49 | end
50 |
51 | def buckets
52 | @buckets
53 | end
54 |
55 | def get_bucket_folder(bucket)
56 | File.join(@root, bucket.name)
57 | end
58 |
59 | def get_bucket(bucket)
60 | @bucket_hash[bucket]
61 | end
62 |
63 | def create_bucket(bucket)
64 | FileUtils.mkdir_p(File.join(@root, bucket))
65 | bucket_obj = Bucket.new(bucket, Time.now, [])
66 | if !@bucket_hash[bucket]
67 | @buckets << bucket_obj
68 | @bucket_hash[bucket] = bucket_obj
69 | end
70 | bucket_obj
71 | end
72 |
73 | def delete_bucket(bucket_name)
74 | bucket = get_bucket(bucket_name)
75 | raise NoSuchBucket if !bucket
76 | raise BucketNotEmpty if bucket.objects.count > 0
77 | FileUtils.rm_r(get_bucket_folder(bucket))
78 | @bucket_hash.delete(bucket_name)
79 | end
80 |
81 | def get_object(bucket, object_name, request)
82 | begin
83 | real_obj = S3Object.new
84 | obj_root = File.join(@root,bucket,object_name,FAKE_S3_METADATA_DIR)
85 | metadata = File.open(File.join(obj_root, "metadata")) { |file| YAML::load(file) }
86 | real_obj.name = object_name
87 | real_obj.md5 = metadata[:md5]
88 | real_obj.content_type = request.query['response-content-type'] ||
89 | metadata.fetch(:content_type) { "application/octet-stream" }
90 | real_obj.content_disposition = request.query['response-content-disposition'] ||
91 | metadata[:content_disposition]
92 |       real_obj.content_encoding = metadata[:content_encoding]
93 | real_obj.io = RateLimitableFile.open(File.join(obj_root, "content"), 'rb')
94 | real_obj.size = metadata.fetch(:size) { 0 }
95 | real_obj.creation_date = File.ctime(obj_root).utc.iso8601(SUBSECOND_PRECISION)
96 | real_obj.modified_date = metadata.fetch(:modified_date) do
97 | File.mtime(File.join(obj_root, "content")).utc.iso8601(SUBSECOND_PRECISION)
98 | end
99 | real_obj.cache_control = metadata[:cache_control]
100 | real_obj.custom_metadata = metadata.fetch(:custom_metadata) { {} }
101 | return real_obj
102 | rescue
103 | unless @quiet_mode
104 | puts $!
105 | $!.backtrace.each { |line| puts line }
106 | end
107 | return nil
108 | end
109 | end
110 |
111 | def object_metadata(bucket, object)
112 | end
113 |
114 | def copy_object(src_bucket_name, src_name, dst_bucket_name, dst_name, request)
115 | src_root = File.join(@root,src_bucket_name,src_name,FAKE_S3_METADATA_DIR)
116 | src_metadata_filename = File.join(src_root, "metadata")
117 | src_metadata = YAML.load(File.open(src_metadata_filename, 'rb').read)
118 | src_content_filename = File.join(src_root, "content")
119 |
120 | dst_filename= File.join(@root,dst_bucket_name,dst_name)
121 | FileUtils.mkdir_p(dst_filename)
122 |
123 | metadata_dir = File.join(dst_filename,FAKE_S3_METADATA_DIR)
124 | FileUtils.mkdir_p(metadata_dir)
125 |
126 | content = File.join(metadata_dir, "content")
127 | metadata = File.join(metadata_dir, "metadata")
128 |
129 | if src_bucket_name != dst_bucket_name || src_name != dst_name
130 | File.open(content, 'wb') do |f|
131 | File.open(src_content_filename, 'rb') do |input|
132 | f << input.read
133 | end
134 | end
135 |
136 | File.open(metadata,'w') do |f|
137 | File.open(src_metadata_filename,'r') do |input|
138 | f << input.read
139 | end
140 | end
141 | end
142 |
143 | metadata_directive = request.header["x-amz-metadata-directive"].first
144 | if metadata_directive == "REPLACE"
145 | metadata_struct = create_metadata(content, request)
146 | File.open(metadata,'w') do |f|
147 | f << YAML::dump(metadata_struct)
148 | end
149 | end
150 |
151 | src_bucket = get_bucket(src_bucket_name) || create_bucket(src_bucket_name)
152 | dst_bucket = get_bucket(dst_bucket_name) || create_bucket(dst_bucket_name)
153 |
154 | obj = S3Object.new
155 | obj.name = dst_name
156 | obj.md5 = src_metadata[:md5]
157 | obj.content_type = src_metadata[:content_type]
158 | obj.content_disposition = src_metadata[:content_disposition]
159 | obj.content_encoding = src_metadata[:content_encoding] # if src_metadata[:content_encoding]
160 | obj.size = src_metadata[:size]
161 | obj.modified_date = src_metadata[:modified_date]
162 | obj.cache_control = src_metadata[:cache_control]
163 |
164 | src_bucket.find(src_name)
165 | dst_bucket.add(obj)
166 | return obj
167 | end
168 |
169 | def store_object(bucket, object_name, request)
170 | filedata = ""
171 |
172 | # TODO put a tmpfile here first and mv it over at the end
173 | content_type = request.content_type || ""
174 |
175 | match = content_type.match(/^multipart\/form-data; boundary=(.+)/)
176 | boundary = match[1] if match
177 | if boundary
178 | boundary = WEBrick::HTTPUtils::dequote(boundary)
179 | form_data = WEBrick::HTTPUtils::parse_form_data(request.body, boundary)
180 |
181 | if form_data['file'] == nil || form_data['file'] == ""
182 | raise WEBrick::HTTPStatus::BadRequest
183 | end
184 |
185 | filedata = form_data['file']
186 | else
187 | request.body { |chunk| filedata << chunk }
188 | end
189 |
190 | do_store_object(bucket, object_name, filedata, request)
191 | end
192 |
193 | def do_store_object(bucket, object_name, filedata, request)
194 | begin
195 | filename = File.join(@root, bucket.name, object_name)
196 | FileUtils.mkdir_p(filename)
197 |
198 | metadata_dir = File.join(filename, FAKE_S3_METADATA_DIR)
199 | FileUtils.mkdir_p(metadata_dir)
200 |
201 | content = File.join(filename, FAKE_S3_METADATA_DIR, "content")
202 | metadata = File.join(filename, FAKE_S3_METADATA_DIR, "metadata")
203 |
204 | File.open(content,'wb') { |f| f << filedata }
205 |
206 | metadata_struct = create_metadata(content, request)
207 | File.open(metadata,'w') do |f|
208 | f << YAML::dump(metadata_struct)
209 | end
210 |
211 | obj = S3Object.new
212 | obj.name = object_name
213 | obj.md5 = metadata_struct[:md5]
214 | obj.content_type = metadata_struct[:content_type]
215 | obj.content_disposition = metadata_struct[:content_disposition]
216 | obj.content_encoding = metadata_struct[:content_encoding] # if metadata_struct[:content_encoding]
217 | obj.size = metadata_struct[:size]
218 | obj.modified_date = metadata_struct[:modified_date]
219 | obj.cache_control = metadata_struct[:cache_control]
220 |
221 | bucket.add(obj)
222 | return obj
223 | rescue
224 | unless @quiet_mode
225 | puts $!
226 | $!.backtrace.each { |line| puts line }
227 | end
228 | return nil
229 | end
230 | end
231 |
232 | def combine_object_parts(bucket, upload_id, object_name, parts, request)
233 | upload_path = File.join(@root, bucket.name)
234 | base_path = File.join(upload_path, "#{upload_id}_#{object_name}")
235 |
236 | complete_file = ""
237 | chunk = ""
238 | part_paths = []
239 |
240 | parts.sort_by { |part| part[:number] }.each do |part|
241 | part_path = "#{base_path}_part#{part[:number]}"
242 | content_path = File.join(part_path, FAKE_S3_METADATA_DIR, 'content')
243 |
244 | File.open(content_path, 'rb') { |f| chunk = f.read }
245 | etag = Digest::MD5.hexdigest(chunk)
246 |
247 |         raise "invalid file chunk" unless part[:etag] == etag
248 | complete_file << chunk
249 | part_paths << part_path
250 | end
251 |
252 | object = do_store_object(bucket, object_name, complete_file, request)
253 |
254 | # clean up parts
255 | part_paths.each do |path|
256 | FileUtils.remove_dir(path)
257 | end
258 |
259 | object
260 | end
261 |
262 | def delete_object(bucket,object_name,request)
263 | begin
264 | filename = File.join(@root,bucket.name,object_name)
265 | FileUtils.rm_rf(filename)
266 | object = bucket.find(object_name)
267 | bucket.remove(object)
268 | rescue
269 | puts $!
270 | $!.backtrace.each { |line| puts line }
271 | return nil
272 | end
273 | end
274 |
275 | def delete_objects(bucket, objects, request)
276 | begin
277 | filenames = []
278 | objects.each do |object_name|
279 | filenames << File.join(@root,bucket.name,object_name)
280 | object = bucket.find(object_name)
281 | bucket.remove(object)
282 | end
283 |
284 | FileUtils.rm_rf(filenames)
285 | rescue
286 | puts $!
287 | $!.backtrace.each { |line| puts line }
288 | return nil
289 | end
290 | end
291 |
292 | # TODO: abstract getting meta data from request.
293 | def create_metadata(content, request)
294 | metadata = {}
295 | metadata[:md5] = Digest::MD5.file(content).hexdigest
296 | metadata[:content_type] = request.header["content-type"].first
297 | if request.header['content-disposition']
298 | metadata[:content_disposition] = request.header['content-disposition'].first
299 | end
300 |
301 | if request.header['cache-control']
302 | metadata[:cache_control] = request.header['cache-control'].first
303 | end
304 |
305 | content_encoding = request.header["content-encoding"].first
306 | metadata[:content_encoding] = content_encoding
307 | #if content_encoding
308 | # metadata[:content_encoding] = content_encoding
309 | #end
310 | metadata[:size] = File.size(content)
311 | metadata[:modified_date] = File.mtime(content).utc.iso8601(SUBSECOND_PRECISION)
312 | metadata[:amazon_metadata] = {}
313 | metadata[:custom_metadata] = {}
314 |
315 | # Add custom metadata from the request header
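      # e.g. "x-amz-meta-color: blue"        => metadata[:custom_metadata]["color"] = "blue"
      #      "x-amz-storage-class: STANDARD" => metadata[:amazon_metadata]["storage-class"] = "STANDARD"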
316 | request.header.each do |key, value|
317 | match = /^x-amz-([^-]+)-(.*)$/.match(key)
318 | next unless match
319 | if match[1].eql?('meta') && (match_key = match[2])
320 | metadata[:custom_metadata][match_key] = value.join(', ')
321 | next
322 | end
323 | metadata[:amazon_metadata][key.gsub(/^x-amz-/, '')] = value.join(', ')
324 | end
325 | return metadata
326 | end
327 | end
328 | end
329 |
--------------------------------------------------------------------------------
/lib/fakes3/rate_limitable_file.rb:
--------------------------------------------------------------------------------
1 | module FakeS3
2 | class RateLimitableFile < File
3 | @@rate_limit = nil
4 | # Specify a rate limit in bytes per second
5 | def self.rate_limit
6 | @@rate_limit
7 | end
8 |
9 | def self.rate_limit=(rate_limit)
10 | @@rate_limit = rate_limit
11 | end
12 |
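    # Throttles reads: sleeping for (bytes requested / bytes-per-second limit)
    # before each read approximates the configured bandwidth cap.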
13 | def read(args)
14 | if @@rate_limit
15 | time_to_sleep = args / @@rate_limit
16 | sleep(time_to_sleep)
17 | end
18 | return super(args)
19 | end
20 | end
21 | end
22 |
--------------------------------------------------------------------------------
/lib/fakes3/s3_object.rb:
--------------------------------------------------------------------------------
1 | module FakeS3
2 | class S3Object
3 | include Comparable
4 | attr_accessor :name,:size,:creation_date,:modified_date,:md5,:io,:content_type,:content_disposition,:content_encoding,:custom_metadata,:cache_control
5 |
6 | def hash
7 | @name.hash
8 | end
9 |
10 | def eql?(object)
11 | object.is_a?(self.class) ? (@name == object.name) : false
12 | end
13 |
14 | # Sort by the object's name
15 | def <=>(object)
16 | object.is_a?(self.class) ? (@name <=> object.name) : nil
17 | end
18 | end
19 | end
20 |
--------------------------------------------------------------------------------
/lib/fakes3/server.rb:
--------------------------------------------------------------------------------
1 | require 'time'
2 | require 'webrick'
3 | require 'webrick/https'
4 | require 'openssl'
5 | require 'securerandom'
6 | require 'cgi'
7 | require 'uri'
8 | require 'fakes3/util'
9 | require 'fakes3/file_store'
10 | require 'fakes3/xml_adapter'
11 | require 'fakes3/xml_parser'
12 | require 'fakes3/bucket_query'
13 | require 'fakes3/unsupported_operation'
14 | require 'fakes3/errors'
15 | require 'ipaddr'
16 |
17 | module FakeS3
18 | class Request
19 | CREATE_BUCKET = "CREATE_BUCKET"
20 | LIST_BUCKETS = "LIST_BUCKETS"
21 | LS_BUCKET = "LS_BUCKET"
22 | HEAD = "HEAD"
23 | STORE = "STORE"
24 | COPY = "COPY"
25 | GET = "GET"
26 | GET_ACL = "GET_ACL"
27 | SET_ACL = "SET_ACL"
28 | MOVE = "MOVE"
29 | DELETE_OBJECT = "DELETE_OBJECT"
30 | DELETE_BUCKET = "DELETE_BUCKET"
31 | DELETE_OBJECTS = "DELETE_OBJECTS"
32 |
33 | attr_accessor :bucket, :object, :type, :src_bucket,
34 | :src_object, :method, :webrick_request,
35 | :path, :is_path_style, :query, :http_verb
36 |
37 | def inspect
38 | puts "-----Inspect FakeS3 Request"
39 | puts "Type: #{@type}"
40 | puts "Is Path Style: #{@is_path_style}"
41 | puts "Request Method: #{@method}"
42 | puts "Bucket: #{@bucket}"
43 | puts "Object: #{@object}"
44 | puts "Src Bucket: #{@src_bucket}"
45 | puts "Src Object: #{@src_object}"
46 | puts "Query: #{@query}"
47 | puts "-----Done"
48 | end
49 | end
50 |
51 | class Servlet < WEBrick::HTTPServlet::AbstractServlet
52 | def initialize(server,store,hostname,cors_options)
53 | super(server)
54 | @store = store
55 | @hostname = hostname
56 | @port = server.config[:Port]
57 | @root_hostnames = [hostname,'localhost','s3.amazonaws.com','s3.localhost']
58 |
59 | # Here lies hard-coded defaults for CORS Configuration
60 | @cors_allow_origin = (cors_options['allow_origin'] or '*')
61 | @cors_allow_methods = (cors_options['allow_methods'] or 'PUT, POST, HEAD, GET, OPTIONS')
62 | @cors_preflight_allow_headers = (cors_options['preflight_allow_headers'] or 'Accept, Content-Type, Authorization, Content-Length, ETag, X-CSRF-Token, Content-Disposition')
63 | @cors_post_put_allow_headers = (cors_options['post_put_allow_headers'] or 'Authorization, Content-Length')
64 | @cors_expose_headers = (cors_options['expose_headers'] or 'ETag')
65 | end
66 |
67 | def validate_request(request)
68 | req = request.webrick_request
69 | return if req.nil?
70 | return if not req.header.has_key?('expect')
71 | req.continue if req.header['expect'].first=='100-continue'
72 | end
73 |
74 | def do_GET(request, response)
75 | s_req = normalize_request(request)
76 |
77 | case s_req.type
78 | when 'LIST_BUCKETS'
79 | response.status = 200
80 | response['Content-Type'] = 'application/xml'
81 | buckets = @store.buckets
82 | response.body = XmlAdapter.buckets(buckets)
83 | when 'LS_BUCKET'
84 | bucket_obj = @store.get_bucket(s_req.bucket)
85 | if bucket_obj
86 | response.status = 200
87 | response['Content-Type'] = "application/xml"
88 | query = {
89 | :marker => s_req.query["marker"] ? s_req.query["marker"].to_s : nil,
90 | :prefix => s_req.query["prefix"] ? s_req.query["prefix"].to_s : nil,
91 | :max_keys => s_req.query["max-keys"] ? s_req.query["max-keys"].to_i : nil,
92 | :delimiter => s_req.query["delimiter"] ? s_req.query["delimiter"].to_s : nil
93 | }
94 | bq = bucket_obj.query_for_range(query)
95 | response.body = XmlAdapter.bucket_query(bq)
96 | else
97 | response.status = 404
98 | response.body = XmlAdapter.error_no_such_bucket(s_req.bucket)
99 | response['Content-Type'] = "application/xml"
100 | end
101 | when 'GET_ACL'
102 | response.status = 200
103 | response.body = XmlAdapter.acl
104 | response['Content-Type'] = 'application/xml'
105 | when 'GET'
106 | real_obj = @store.get_object(s_req.bucket, s_req.object, request)
107 | if !real_obj
108 | response.status = 404
109 | response.body = XmlAdapter.error_no_such_key(s_req.object)
110 | response['Content-Type'] = "application/xml"
111 | response['Access-Control-Allow-Origin'] = @cors_allow_origin
112 | return
113 | end
114 |
115 | if_none_match = request["If-None-Match"]
116 | if if_none_match == "\"#{real_obj.md5}\"" or if_none_match == "*"
117 | response.status = 304
118 | return
119 | end
120 |
121 | if_modified_since = request["If-Modified-Since"]
122 | if if_modified_since
123 | time = Time.httpdate(if_modified_since)
124 | if time >= Time.iso8601(real_obj.modified_date)
125 | response.status = 304
126 | return
127 | end
128 | end
129 |
130 | response.status = 200
131 | response['Content-Type'] = real_obj.content_type
132 |
133 | if real_obj.content_encoding
134 | response.header['X-Content-Encoding'] = real_obj.content_encoding
135 | response.header['Content-Encoding'] = real_obj.content_encoding
136 | end
137 |
138 | response['Content-Disposition'] = real_obj.content_disposition ? real_obj.content_disposition : 'attachment'
139 |
140 | response['Last-Modified'] = Time.iso8601(real_obj.modified_date).httpdate
141 | response.header['ETag'] = "\"#{real_obj.md5}\""
142 | response['Accept-Ranges'] = "bytes"
143 | response['Last-Ranges'] = "bytes"
144 | response['Access-Control-Allow-Origin'] = @cors_allow_origin
145 |
146 | real_obj.custom_metadata.each do |header, value|
147 | response.header['x-amz-meta-' + header] = value
148 | end
149 |
150 | stat = File::Stat.new(real_obj.io.path)
151 | content_length = stat.size
152 |
153 | # Added Range Query support
154 | range = request.header["range"].first
155 | if range
156 | response.status = 206
157 | if range =~ /bytes=(\d*)-(\d*)/
158 | start = $1.to_i
159 | finish = $2.to_i
160 | finish_str = ""
161 | if finish == 0
162 | finish = content_length - 1
163 | finish_str = "#{finish}"
164 | else
165 | finish_str = finish.to_s
166 | end
167 |
168 | bytes_to_read = finish - start + 1
169 | response['Content-Range'] = "bytes #{start}-#{finish_str}/#{content_length}"
170 | real_obj.io.pos = start
171 | response.body = real_obj.io.read(bytes_to_read)
172 | return
173 | end
174 | end
175 | response['Content-Length'] = File::Stat.new(real_obj.io.path).size
176 | if s_req.http_verb == 'HEAD'
177 | response.body = ""
178 | real_obj.io.close
179 | else
180 | response.body = real_obj.io
181 | end
182 |
183 | if real_obj.cache_control
184 | response['Cache-Control'] = real_obj.cache_control
185 | end
186 | end
187 | end
188 |
189 | def do_PUT(request, response)
190 | s_req = normalize_request(request)
191 | query = CGI::parse(request.request_uri.query || "")
192 |
193 | return do_multipartPUT(request, response) if query['uploadId'].first
194 |
195 | response.status = 200
196 | response.body = ""
197 | response['Content-Type'] = "text/xml"
198 | response['Access-Control-Allow-Origin'] = @cors_allow_origin
199 |
200 | case s_req.type
201 | when Request::COPY
202 | object = @store.copy_object(s_req.src_bucket, s_req.src_object, s_req.bucket, s_req.object, request)
203 | response.body = XmlAdapter.copy_object_result(object)
204 | when Request::STORE
205 | bucket_obj = @store.get_bucket(s_req.bucket)
206 | if !bucket_obj
207 | # Lazily create a bucket. TODO fix this to return the proper error
208 | bucket_obj = @store.create_bucket(s_req.bucket)
209 | end
210 |
211 | real_obj = @store.store_object(bucket_obj, s_req.object, s_req.webrick_request)
212 | response.header['ETag'] = "\"#{real_obj.md5}\""
213 | when Request::CREATE_BUCKET
214 | @store.create_bucket(s_req.bucket)
215 | end
216 | end
217 |
218 | def do_multipartPUT(request, response)
219 | s_req = normalize_request(request)
220 | query = CGI::parse(request.request_uri.query)
221 |
222 | part_number = query['partNumber'].first
223 | upload_id = query['uploadId'].first
224 | part_name = "#{upload_id}_#{s_req.object}_part#{part_number}"
225 |
226 | # store the part
227 | if s_req.type == Request::COPY
228 | real_obj = @store.copy_object(
229 | s_req.src_bucket, s_req.src_object,
230 | s_req.bucket , part_name,
231 | request
232 | )
233 |
234 | response['Content-Type'] = "text/xml"
235 | response.body = XmlAdapter.copy_object_result real_obj
236 | else
237 | bucket_obj = @store.get_bucket(s_req.bucket)
238 | if !bucket_obj
239 | bucket_obj = @store.create_bucket(s_req.bucket)
240 | end
241 | real_obj = @store.store_object(
242 | bucket_obj, part_name,
243 | request
244 | )
245 |
246 | response.body = ""
247 | response.header['ETag'] = "\"#{real_obj.md5}\""
248 | end
249 |
250 | response['Access-Control-Allow-Origin'] = @cors_allow_origin
251 | response['Access-Control-Allow-Headers'] = @cors_post_put_allow_headers
252 | response['Access-Control-Expose-Headers'] = @cors_expose_headers
253 |
254 | response.status = 200
255 | end
256 |
257 | def do_POST(request,response)
258 | if request.query_string === 'delete'
259 | return do_DELETE(request, response)
260 | end
261 |
262 | s_req = normalize_request(request)
263 | key = request.query['key']
264 | query = CGI::parse(request.request_uri.query || "")
265 |
266 | if query.has_key?('uploads')
267 | upload_id = SecureRandom.hex
268 |
269 |         response.body = <<-eos.strip
270 |           <?xml version="1.0" encoding="UTF-8"?>
271 |           <InitiateMultipartUploadResult>
272 |             <Bucket>#{ s_req.bucket }</Bucket>
273 |             <Key>#{ key }</Key>
274 |             <UploadId>#{ upload_id }</UploadId>
275 |           </InitiateMultipartUploadResult>
276 |         eos
277 | elsif query.has_key?('uploadId')
278 | upload_id = query['uploadId'].first
279 | bucket_obj = @store.get_bucket(s_req.bucket)
280 | real_obj = @store.combine_object_parts(
281 | bucket_obj,
282 | upload_id,
283 | s_req.object,
284 | parse_complete_multipart_upload(request),
285 | request
286 | )
287 |
288 | response.body = XmlAdapter.complete_multipart_result real_obj
289 | elsif request.content_type =~ /^multipart\/form-data; boundary=(.+)/
290 | key = request.query['key']
291 |
292 | success_action_redirect = request.query['success_action_redirect']
293 | success_action_status = request.query['success_action_status']
294 |
295 | filename = 'default'
296 | filename = $1 if request.body =~ /filename="(.*)"/
297 | key = key.gsub('${filename}', filename)
298 |
299 | bucket_obj = @store.get_bucket(s_req.bucket) || @store.create_bucket(s_req.bucket)
300 | real_obj = @store.store_object(bucket_obj, key, s_req.webrick_request)
301 |
302 | response['Etag'] = "\"#{real_obj.md5}\""
303 |
304 | if success_action_redirect
305 | object_params = [ [ :bucket, s_req.bucket ], [ :key, key ] ]
306 | location_uri = URI.parse(success_action_redirect)
307 | original_location_params = URI.decode_www_form(String(location_uri.query))
308 | location_uri.query = URI.encode_www_form(original_location_params + object_params)
309 |
310 | response.status = 303
311 | response.body = ""
312 | response['Location'] = location_uri.to_s
313 | else
314 | response.status = success_action_status || 204
315 | if response.status == "201"
316 |             response.body = <<-eos.strip
317 |               <?xml version="1.0" encoding="UTF-8"?>
318 |               <PostResponse>
319 |                 <Location>http://#{s_req.bucket}.localhost:#{@port}/#{key}</Location>
320 |                 <Bucket>#{s_req.bucket}</Bucket>
321 |                 <Key>#{key}</Key>
322 |                 <ETag>#{response['Etag']}</ETag>
323 |               </PostResponse>
324 |             eos
325 | end
326 | end
327 | else
328 | raise WEBrick::HTTPStatus::BadRequest
329 | end
330 |
331 | response['Content-Type'] = 'text/xml'
332 | response['Access-Control-Allow-Origin'] = @cors_allow_origin
333 | response['Access-Control-Allow-Headers'] = @cors_post_put_allow_headers
334 | response['Access-Control-Expose-Headers'] = @cors_expose_headers
335 | end
336 |
337 | def do_DELETE(request, response)
338 | s_req = normalize_request(request)
339 |
340 | case s_req.type
341 | when Request::DELETE_OBJECTS
342 | bucket_obj = @store.get_bucket(s_req.bucket)
343 | keys = XmlParser.delete_objects(s_req.webrick_request)
344 | @store.delete_objects(bucket_obj,keys,s_req.webrick_request)
345 | when Request::DELETE_OBJECT
346 | bucket_obj = @store.get_bucket(s_req.bucket)
347 | @store.delete_object(bucket_obj,s_req.object,s_req.webrick_request)
348 | when Request::DELETE_BUCKET
349 | @store.delete_bucket(s_req.bucket)
350 | end
351 |
352 | response.status = 204
353 | response.body = ""
354 | end
355 |
356 | def do_OPTIONS(request, response)
357 | super
358 | response['Access-Control-Allow-Origin'] = @cors_allow_origin
359 | response['Access-Control-Allow-Methods'] = @cors_allow_methods
360 | response['Access-Control-Allow-Headers'] = @cors_preflight_allow_headers
361 | response['Access-Control-Expose-Headers'] = @cors_expose_headers
362 | end
363 |
364 | private
365 |
366 | def normalize_delete(webrick_req, s_req)
367 | path = webrick_req.path
368 | path_len = path.size
369 | query = webrick_req.query
370 | if path == "/" and s_req.is_path_style
371 | # Probably do a 404 here
372 | else
373 | if s_req.is_path_style
374 | elems = path[1,path_len].split("/")
375 | s_req.bucket = elems[0]
376 | else
377 | elems = path.split("/")
378 | end
379 |
380 | if elems.size == 0
381 | if s_req.is_path_style
382 | s_req.type = Request::DELETE_OBJECTS
383 | s_req.query = query
384 | s_req.webrick_request = webrick_req
385 | else
386 | s_req.type = Request::DELETE_BUCKET
387 | end
388 | elsif elems.size == 1
389 | s_req.type = webrick_req.query_string == 'delete' ? Request::DELETE_OBJECTS : Request::DELETE_BUCKET
390 | s_req.query = query
391 | s_req.webrick_request = webrick_req
392 | else
393 | s_req.type = Request::DELETE_OBJECT
394 | object = elems[1,elems.size].join('/')
395 | s_req.object = object
396 | end
397 | end
398 | end
399 |
400 | def normalize_get(webrick_req, s_req)
401 | path = webrick_req.path
402 | path_len = path.size
403 | query = webrick_req.query
404 | if path == "/" and s_req.is_path_style
405 | s_req.type = Request::LIST_BUCKETS
406 | else
407 | if s_req.is_path_style
408 | elems = path[1,path_len].split("/")
409 | s_req.bucket = elems[0]
410 | else
411 | elems = path.split("/")
412 | end
413 |
414 | if elems.size < 2
415 | s_req.type = Request::LS_BUCKET
416 | s_req.query = query
417 | else
418 | if query["acl"] == ""
419 | s_req.type = Request::GET_ACL
420 | else
421 | s_req.type = Request::GET
422 | end
423 | object = elems[1,elems.size].join('/')
424 | s_req.object = object
425 | end
426 | end
427 | end
428 |
429 | def normalize_put(webrick_req, s_req)
430 | path = webrick_req.path
431 | path_len = path.size
432 | if path == "/"
433 | if s_req.bucket
434 | s_req.type = Request::CREATE_BUCKET
435 | end
436 | else
437 | if s_req.is_path_style
438 | elems = path[1,path_len].split("/")
439 | s_req.bucket = elems[0]
440 | if elems.size == 1
441 | s_req.type = Request::CREATE_BUCKET
442 | else
443 | if webrick_req.request_line =~ /\?acl/
444 | s_req.type = Request::SET_ACL
445 | else
446 | s_req.type = Request::STORE
447 | end
448 | s_req.object = elems[1,elems.size].join('/')
449 | end
450 | else
451 | if webrick_req.request_line =~ /\?acl/
452 | s_req.type = Request::SET_ACL
453 | else
454 | s_req.type = Request::STORE
455 | end
456 | s_req.object = webrick_req.path[1..-1]
457 | end
458 | end
459 |
460 | # TODO: also parse the x-amz-copy-source-range:bytes=first-last header
461 | # for multipart copy
462 | copy_source = webrick_req.header["x-amz-copy-source"]
463 | if copy_source and copy_source.size == 1
464 | copy_source = URI.unescape copy_source.first
465 | src_elems = copy_source.split("/")
466 | root_offset = src_elems[0] == "" ? 1 : 0
467 | s_req.src_bucket = src_elems[root_offset]
468 | s_req.src_object = src_elems[1 + root_offset,src_elems.size].join("/")
469 | s_req.type = Request::COPY
470 | end
471 |
472 | s_req.webrick_request = webrick_req
473 | end
474 |
475 | def normalize_post(webrick_req,s_req)
476 | path = webrick_req.path
477 | path_len = path.size
478 |
479 | s_req.path = webrick_req.query['key']
480 | s_req.webrick_request = webrick_req
481 |
482 | if s_req.is_path_style
483 | elems = path[1, path_len].split("/")
484 | s_req.bucket = elems[0]
485 | s_req.object = elems[1..-1].join('/') if elems.size >= 2
486 | else
487 | s_req.object = path[1..-1]
488 | end
489 | end
490 |
491 | # This method takes a webrick request and generates a normalized FakeS3 request
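    # Virtual-host-style requests (e.g. "mybucket.s3.amazonaws.com/key") take
    # the bucket name from the Host header, while path-style requests
    # (e.g. "s3.amazonaws.com/mybucket/key") take it from the first path segment.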
492 | def normalize_request(webrick_req)
493 | host_header= webrick_req["Host"]
494 | host = host_header.split(':')[0]
495 |
496 | s_req = Request.new
497 | s_req.path = webrick_req.path
498 | s_req.is_path_style = true
499 |
500 | root_hostname = @root_hostnames.find { |hostname| host.end_with?(".#{hostname}") }
501 | if root_hostname
502 | s_req.bucket = host[0...-root_hostname.size - 1]
503 | s_req.is_path_style = false
504 | end
505 |
506 | s_req.http_verb = webrick_req.request_method
507 |
508 | case webrick_req.request_method
509 | when 'PUT'
510 | normalize_put(webrick_req,s_req)
511 | when 'GET','HEAD'
512 | normalize_get(webrick_req,s_req)
513 | when 'DELETE'
514 | normalize_delete(webrick_req,s_req)
515 | when 'POST'
516 | if webrick_req.query_string != 'delete'
517 | normalize_post(webrick_req,s_req)
518 | else
519 | normalize_delete(webrick_req,s_req)
520 | end
521 | else
522 | raise "Unknown Request"
523 | end
524 |
525 | validate_request(s_req)
526 |
527 | return s_req
528 | end
529 |
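    # The request body is a CompleteMultipartUpload document shaped like:
    #
    #   <CompleteMultipartUpload>
    #     <Part><PartNumber>1</PartNumber><ETag>"etag1"</ETag></Part>
    #     ...
    #   </CompleteMultipartUpload>
    #
    # and is reduced to [{ number: 1, etag: "etag1" }, ...].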
530 | def parse_complete_multipart_upload(request)
531 | parts_xml = ""
532 | request.body { |chunk| parts_xml << chunk }
533 |
534 | # TODO: improve parsing xml
535 |       parts_xml = parts_xml.scan(/<Part>.*?<\/Part>/m)
536 |
537 |       parts_xml.collect do |xml|
538 |         {
539 |           number: xml[/<PartNumber>(\d+)<\/PartNumber>/, 1].to_i,
540 |           etag:   FakeS3::Util.strip_before_and_after(xml[/<ETag>(.+)<\/ETag>/, 1], '"')
541 | }
542 | end
543 | end
544 |
545 | def dump_request(request)
546 | puts "----------Dump Request-------------"
547 | puts request.request_method
548 | puts request.path
549 | request.each do |k,v|
550 | puts "#{k}:#{v}"
551 | end
552 | puts "----------End Dump -------------"
553 | end
554 | end
555 |
556 |
557 | class Server
558 | def initialize(address, port, store, hostname, ssl_cert_path, ssl_key_path, extra_options={})
559 | @address = address
560 | @port = port
561 | @store = store
562 | @hostname = hostname
563 | @ssl_cert_path = ssl_cert_path
564 | @ssl_key_path = ssl_key_path
565 |       @cors_options = extra_options[:cors_options] || {}
566 | webrick_config = {
567 | :BindAddress => @address,
568 | :Port => @port
569 | }
570 | if !@ssl_cert_path.to_s.empty?
571 | webrick_config.merge!(
572 | {
573 | :SSLEnable => true,
574 | :SSLCertificate => OpenSSL::X509::Certificate.new(File.read(@ssl_cert_path)),
575 | :SSLPrivateKey => OpenSSL::PKey::RSA.new(File.read(@ssl_key_path))
576 | }
577 | )
578 | end
579 |
580 | if extra_options[:quiet]
581 | webrick_config.merge!(
582 | :Logger => WEBrick::Log.new("/dev/null"),
583 | :AccessLog => []
584 | )
585 | end
586 |
587 | @server = WEBrick::HTTPServer.new(webrick_config)
588 | end
589 |
590 | def serve
591 | @server.mount "/", Servlet, @store, @hostname, @cors_options
592 | shutdown = proc { @server.shutdown }
593 | trap "INT", &shutdown
594 | trap "TERM", &shutdown
595 | @server.start
596 | end
597 |
598 | def shutdown
599 | @server.shutdown
600 | end
601 | end
602 | end
603 |
--------------------------------------------------------------------------------
/lib/fakes3/sorted_object_list.rb:
--------------------------------------------------------------------------------
1 | require 'set'
2 | module FakeS3
3 | class S3MatchSet
4 | attr_accessor :matches,:is_truncated,:common_prefixes
5 | def initialize
6 | @matches = []
7 | @is_truncated = false
8 | @common_prefixes = []
9 | end
10 | end
11 |
12 | # This class has some of the semantics necessary for how buckets can return
13 | # their items
14 | #
15 |   # It is currently implemented naively as a sorted set plus a hash. If you
16 |   # try to put massive lists inside buckets and ls them, you will be sorely
17 |   # disappointed by its performance.
18 | class SortedObjectList
19 |
20 | def initialize
21 | @sorted_set = SortedSet.new
22 | @object_map = {}
23 | @mutex = Mutex.new
24 | end
25 |
26 | def count
27 | @sorted_set.count
28 | end
29 |
30 | def find(object_name)
31 | @object_map[object_name]
32 | end
33 |
34 | # Add an S3 object into the sorted list
35 | def add(s3_object)
36 | return if !s3_object
37 |
38 | @object_map[s3_object.name] = s3_object
39 | @sorted_set << s3_object
40 | end
41 |
42 | def remove(s3_object)
43 | return if !s3_object
44 |
45 | @object_map.delete(s3_object.name)
46 | @sorted_set.delete(s3_object)
47 | end
48 |
49 | # Return back a set of matches based on the passed in options
50 | #
51 | # options:
52 | #
53 |     # :marker : a string to start the lexicographical search (it is not
54 |     #           included in the result)
55 |     # :max_keys : a maximum number of results
56 |     # :prefix : a string to filter the results by
57 |     # :delimiter : a string used to roll up keys sharing a common prefix into CommonPrefixes
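    #
    # Illustrative example (not from the original source):
    #
    #   list = SortedObjectList.new
    #   %w[a/1 a/2 b/1].each { |n| o = S3Object.new; o.name = n; list.add(o) }
    #   ms = list.list(:prefix => "a/", :max_keys => 1)
    #   ms.matches.map(&:name)  # => ["a/1"]
    #   ms.is_truncated         # => true, since "a/2" also matched the prefix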
58 | def list(options)
59 | marker = options[:marker]
60 | prefix = options[:prefix]
61 | max_keys = options[:max_keys] || 1000
62 | delimiter = options[:delimiter]
63 |
64 | ms = S3MatchSet.new
65 |
66 | marker_found = true
67 | pseudo = nil
68 | if marker
69 | marker_found = false
70 | if !@object_map[marker]
71 | pseudo = S3Object.new
72 | pseudo.name = marker
73 | @sorted_set << pseudo
74 | end
75 | end
76 |
77 | if delimiter
78 | if prefix
79 | base_prefix = prefix
80 | else
81 | base_prefix = ""
82 | end
83 | prefix_offset = base_prefix.length
84 | end
85 |
86 | count = 0
87 | last_chunk = nil
88 | @sorted_set.each do |s3_object|
89 | if marker_found && (!prefix or s3_object.name.index(prefix) == 0)
90 | if delimiter
91 | name = s3_object.name
92 | remainder = name.slice(prefix_offset, name.length)
93 | chunks = remainder.split(delimiter, 2)
94 | if chunks.length > 1
95 | if (last_chunk != chunks[0])
96 | # "All of the keys rolled up in a common prefix count as
97 | # a single return when calculating the number of
98 | # returns. See MaxKeys."
99 | # (http://awsdocs.s3.amazonaws.com/S3/latest/s3-api.pdf)
100 | count += 1
101 | if count <= max_keys
102 | ms.common_prefixes << base_prefix + chunks[0] + delimiter
103 | last_chunk = chunks[0]
104 | else
105 |                 ms.is_truncated = true
106 | break
107 | end
108 | end
109 |
110 | # Continue to the next key, since this one has a
111 | # delimiter.
112 | next
113 | end
114 | end
115 |
116 | count += 1
117 | if count <= max_keys
118 | ms.matches << s3_object
119 | else
120 | ms.is_truncated = true
121 | break
122 | end
123 | end
124 |
125 | if marker and marker == s3_object.name
126 | marker_found = true
127 | end
128 | end
129 |
130 | if pseudo
131 | @sorted_set.delete(pseudo)
132 | end
133 |
134 | return ms
135 | end
136 | end
137 | end
138 |
--------------------------------------------------------------------------------
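A usage sketch for SortedObjectList#list above (assuming the gem's FakeS3::S3Object, which sorts by name): with a delimiter, keys sharing a prefix are rolled up into common_prefixes instead of being returned individually.

    require 'fakes3/s3_object'
    require 'fakes3/sorted_object_list'

    list = FakeS3::SortedObjectList.new
    %w[photos/cat.jpg photos/dog.jpg readme.txt].each do |key|
      obj = FakeS3::S3Object.new
      obj.name = key
      list.add(obj)
    end

    # Both photos/ keys collapse into a single common prefix.
    ms = list.list(:delimiter => "/")
    ms.common_prefixes      # => ["photos/"]
    ms.matches.map(&:name)  # => ["readme.txt"]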
/lib/fakes3/unsupported_operation.rb:
--------------------------------------------------------------------------------
1 | module FakeS3
2 | class UnsupportedOperation < RuntimeError
3 | end
4 | end
5 |
--------------------------------------------------------------------------------
/lib/fakes3/util.rb:
--------------------------------------------------------------------------------
1 | module FakeS3
2 | module Util
3 | def Util.strip_before_and_after(string, strip_this)
4 | regex_friendly_strip_this = Regexp.escape(strip_this)
5 | string.gsub(/\A[#{regex_friendly_strip_this}]+|[#{regex_friendly_strip_this}]+\z/, '')
6 | end
7 | end
8 | end
9 |
--------------------------------------------------------------------------------
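Util.strip_before_and_after above treats strip_this as a character set and trims any run of those characters from both ends of the string, for example:

    require 'fakes3/util'

    FakeS3::Util.strip_before_and_after("///bucket/key//", "/")  # => "bucket/key"
    FakeS3::Util.strip_before_and_after("bucket/key", "/")       # => "bucket/key"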
/lib/fakes3/version.rb:
--------------------------------------------------------------------------------
1 | module FakeS3
2 | VERSION = "2.0.0"
3 | end
4 |
--------------------------------------------------------------------------------
/lib/fakes3/xml_adapter.rb:
--------------------------------------------------------------------------------
1 | require 'builder'
2 | require 'time'
3 |
4 | module FakeS3
5 | class XmlAdapter
6 | def self.buckets(bucket_objects)
7 | output = ""
8 | xml = Builder::XmlMarkup.new(:target => output)
9 | xml.instruct! :xml, :version=>"1.0", :encoding=>"UTF-8"
10 | xml.ListAllMyBucketsResult(:xmlns => "http://s3.amazonaws.com/doc/2006-03-01/") { |lam|
11 | lam.Owner { |owner|
12 | owner.ID("123")
13 | owner.DisplayName("FakeS3")
14 | }
15 | lam.Buckets { |buckets|
16 | bucket_objects.each do |bucket|
17 | buckets.Bucket do |b|
18 | b.Name(bucket.name)
19 | b.CreationDate(bucket.creation_date.strftime("%Y-%m-%dT%H:%M:%S.000Z"))
20 | end
21 | end
22 | }
23 | }
24 | output
25 | end
26 |
27 | def self.error(error)
28 | output = ""
29 | xml = Builder::XmlMarkup.new(:target => output)
30 | xml.instruct! :xml, :version=>"1.0", :encoding=>"UTF-8"
31 | xml.Error { |err|
32 | err.Code(error.code)
33 | err.Message(error.message)
34 | err.Resource(error.resource)
35 | err.RequestId(1)
36 | }
37 | output
38 | end
39 |
40 |     # <?xml version="1.0" encoding="UTF-8"?>
41 |     # <Error>
42 |     #   <Code>NoSuchKey</Code>
43 |     #   <Message>The resource you requested does not exist</Message>
44 |     #   <Resource>/mybucket/myfoto.jpg</Resource>
45 |     #   <RequestId>4442587FB7D0A2F9</RequestId>
46 |     # </Error>
47 |     #
48 | def self.error_no_such_bucket(name)
49 | output = ""
50 | xml = Builder::XmlMarkup.new(:target => output)
51 | xml.instruct! :xml, :version=>"1.0", :encoding=>"UTF-8"
52 | xml.Error { |err|
53 | err.Code("NoSuchBucket")
54 | err.Message("The resource you requested does not exist")
55 | err.Resource(name)
56 | err.RequestId(1)
57 | }
58 | output
59 | end
60 |
61 | def self.error_bucket_not_empty(name)
62 | output = ""
63 | xml = Builder::XmlMarkup.new(:target => output)
64 | xml.instruct! :xml, :version=>"1.0", :encoding=>"UTF-8"
65 | xml.Error { |err|
66 | err.Code("BucketNotEmpty")
67 | err.Message("The bucket you tried to delete is not empty.")
68 | err.Resource(name)
69 | err.RequestId(1)
70 | }
71 | output
72 | end
73 |
74 | def self.error_no_such_key(name)
75 | output = ""
76 | xml = Builder::XmlMarkup.new(:target => output)
77 | xml.instruct! :xml, :version=>"1.0", :encoding=>"UTF-8"
78 | xml.Error { |err|
79 | err.Code("NoSuchKey")
80 | err.Message("The specified key does not exist")
81 | err.Key(name)
82 | err.RequestId(1)
83 | err.HostId(2)
84 | }
85 | output
86 | end
87 |
88 | def self.bucket(bucket)
89 | output = ""
90 | xml = Builder::XmlMarkup.new(:target => output)
91 | xml.instruct! :xml, :version=>"1.0", :encoding=>"UTF-8"
92 | xml.ListBucketResult(:xmlns => "http://s3.amazonaws.com/doc/2006-03-01/") { |lbr|
93 | lbr.Name(bucket.name)
94 | lbr.Prefix
95 | lbr.Marker
96 | lbr.MaxKeys("1000")
97 | lbr.IsTruncated("false")
98 | }
99 | output
100 | end
101 |
102 |     # A bucket query gives back the bucket along with contents
103 |     #   <Contents>
104 |     #     <Key>Nelson</Key>
105 |     #     <LastModified>2006-01-01T12:00:00.000Z</LastModified>
106 |     #     <ETag>"828ef3fdfa96f00ad9f27c383fc9ac7f"</ETag>
107 |     #     <Size>5</Size>
108 |     #     <StorageClass>STANDARD</StorageClass>
109 |     #     <Owner>
110 |     #       <ID>bcaf161ca5fb16fd081034f</ID>
111 |     #       <DisplayName>webfile</DisplayName>
112 |     #     </Owner>
113 |     #   </Contents>
114 |
115 | def self.append_objects_to_list_bucket_result(lbr,objects)
116 | return if objects.nil? or objects.size == 0
117 |
118 | # A nil entry would indicate a bookkeeping bug upstream; fail loudly
119 | # rather than dropping into the long-unmaintained ruby-debug debugger.
120 | if objects.index(nil)
121 | raise "nil object in ListBucketResult contents"
122 | end
123 |
124 | objects.each do |s3_object|
125 | lbr.Contents { |contents|
126 | contents.Key(s3_object.name)
127 | contents.LastModified(s3_object.modified_date)
128 | contents.ETag("\"#{s3_object.md5}\"")
129 | contents.Size(s3_object.size)
130 | contents.StorageClass("STANDARD")
131 |
132 | contents.Owner { |owner|
133 | owner.ID("abc")
134 | owner.DisplayName("You")
135 | }
136 | }
137 | end
138 | end
139 |
140 | def self.append_common_prefixes_to_list_bucket_result(lbr, prefixes)
141 | return if prefixes.nil? or prefixes.size == 0
142 |
143 | prefixes.each do |common_prefix|
144 | lbr.CommonPrefixes { |contents| contents.Prefix(common_prefix) }
145 | end
146 | end
147 |
148 | def self.bucket_query(bucket_query)
149 | output = ""
150 | bucket = bucket_query.bucket
151 | xml = Builder::XmlMarkup.new(:target => output)
152 | xml.instruct! :xml, :version=>"1.0", :encoding=>"UTF-8"
153 | xml.ListBucketResult(:xmlns => "http://s3.amazonaws.com/doc/2006-03-01/") { |lbr|
154 | lbr.Name(bucket.name)
155 | lbr.Prefix(bucket_query.prefix)
156 | lbr.Marker(bucket_query.marker)
157 | lbr.MaxKeys(bucket_query.max_keys)
158 | lbr.IsTruncated(bucket_query.is_truncated?)
159 | append_objects_to_list_bucket_result(lbr,bucket_query.matches)
160 | append_common_prefixes_to_list_bucket_result(lbr, bucket_query.common_prefixes)
161 | }
162 | output
163 | end
164 |
165 | # ACL xml
166 | def self.acl(object = nil)
167 | output = ""
168 | xml = Builder::XmlMarkup.new(:target => output)
169 | xml.instruct! :xml, :version=>"1.0", :encoding=>"UTF-8"
170 | xml.AccessControlPolicy(:xmlns => "http://s3.amazonaws.com/doc/2006-03-01/") { |acp|
171 | acp.Owner do |owner|
172 | owner.ID("abc")
173 | owner.DisplayName("You")
174 | end
175 | acp.AccessControlList do |acl|
176 | acl.Grant do |grant|
177 | grant.Grantee("xmlns:xsi" => "http://www.w3.org/2001/XMLSchema-instance", "xsi:type" => "CanonicalUser") do |grantee|
178 | grantee.ID("abc")
179 | grantee.DisplayName("You")
180 | end
181 | grant.Permission("FULL_CONTROL")
182 | end
183 | end
184 | }
185 | output
186 | end
187 |
188 |     # <CopyObjectResult>
189 |     #   <LastModified>2009-10-28T22:32:00</LastModified>
190 |     #   <ETag>"9b2cf535f27731c974343645a3985328"</ETag>
191 |     # </CopyObjectResult>
192 | def self.copy_object_result(object)
193 | output = ""
194 | xml = Builder::XmlMarkup.new(:target => output)
195 | xml.instruct! :xml, :version=>"1.0", :encoding=>"UTF-8"
196 | xml.CopyObjectResult { |result|
197 | result.LastModified(object.modified_date)
198 | result.ETag("\"#{object.md5}\"")
199 | }
200 | output
201 | end
202 |
203 |     # <CompleteMultipartUploadResult>
204 |     #   <Location>http://Example-Bucket.s3.amazonaws.com/Example-Object</Location>
205 |     #   <Bucket>Example-Bucket</Bucket>
206 |     #   <Key>Example-Object</Key>
207 |     #   <ETag>"3858f62230ac3c915f300c664312c11f-9"</ETag>
208 |     # </CompleteMultipartUploadResult>
209 | def self.complete_multipart_result(object)
210 | output = ""
211 | xml = Builder::XmlMarkup.new(:target => output)
212 | xml.instruct! :xml, :version=>"1.0", :encoding=>"UTF-8"
213 | xml.CompleteMultipartUploadResult { |result|
214 | result.Location("TODO: implement")
215 | result.Bucket("TODO: implement")
216 | result.Key(object.name)
217 | result.ETag("\"#{object.md5}\"")
218 | }
219 | output
220 | end
221 | end
222 | end
223 |
--------------------------------------------------------------------------------
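Every builder in xml_adapter.rb above follows the same shape: a Builder::XmlMarkup targeting a string, an XML declaration, then the S3 response document. For example (Builder emits the document on a single line; it is wrapped here for readability):

    require 'fakes3/xml_adapter'

    puts FakeS3::XmlAdapter.error_no_such_key("myfoto.jpg")
    # <?xml version="1.0" encoding="UTF-8"?>
    # <Error><Code>NoSuchKey</Code>
    #   <Message>The specified key does not exist</Message>
    #   <Key>myfoto.jpg</Key><RequestId>1</RequestId><HostId>2</HostId></Error>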
/lib/fakes3/xml_parser.rb:
--------------------------------------------------------------------------------
1 | require 'xmlsimple'
2 |
3 | module FakeS3
4 | class XmlParser
5 | def self.delete_objects(request)
6 | keys = []
7 |
8 | objects = XmlSimple.xml_in(request.body, {'NoAttr' => true})['Object']
9 | objects.each do |key|
10 | keys << key['Key'][0]
11 | end
12 |
13 | keys
14 | end
15 | end
16 | end
--------------------------------------------------------------------------------
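XmlParser.delete_objects above expects the XML body of a multi-object Delete request. A sketch using a minimal stand-in for the WEBrick request object (only #body is read):

    require 'fakes3/xml_parser'

    FakeRequest = Struct.new(:body)

    body = <<-XML
    <Delete>
      <Object><Key>photos/cat.jpg</Key></Object>
      <Object><Key>photos/dog.jpg</Key></Object>
    </Delete>
    XML

    FakeS3::XmlParser.delete_objects(FakeRequest.new(body))
    # => ["photos/cat.jpg", "photos/dog.jpg"]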
/static/button.svg:
--------------------------------------------------------------------------------
1 | (SVG image data omitted)
--------------------------------------------------------------------------------
/static/logo.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jubos/fake-s3/51fd4375f64efaf419174e66b32a72ffff84c027/static/logo.png
--------------------------------------------------------------------------------
/test/aws_sdk_commands_test.rb:
--------------------------------------------------------------------------------
1 | require 'test/test_helper'
2 | require 'aws-sdk-v1'
3 | require 'rest-client'
4 |
5 | class AwsSdkCommandsTest < Test::Unit::TestCase
6 | def setup
7 | @s3 = AWS::S3.new(:access_key_id => '123',
8 | :secret_access_key => 'abc',
9 | :s3_endpoint => 'localhost',
10 | :s3_port => 10453,
11 | :use_ssl => false)
12 | end
13 |
14 | def test_copy_to
15 | bucket = @s3.buckets["test_copy_to"]
16 | object = bucket.objects["key1"]
17 | object.write("asdf")
18 |
19 | assert object.exists?
20 | object.copy_to("key2")
21 |
22 | assert_equal 2, bucket.objects.count
23 | end
24 |
25 | def test_multipart_upload
26 | bucket = @s3.buckets["test_multipart_upload"]
27 | object = bucket.objects["key1"]
28 | object.write("thisisaverybigfile", :multipart_threshold => 5)
29 | assert object.exists?
30 | assert_equal "thisisaverybigfile", object.read
31 | end
32 |
33 | def test_metadata
34 | file_path = './test_root/test_metadata/metaobject'
35 | FileUtils.rm_rf file_path
36 |
37 | bucket = @s3.buckets["test_metadata"]
38 | object = bucket.objects["metaobject"]
39 | object.write(
40 | 'data',
41 | # this is sent as header x-amz-storage-class
42 | :storage_class => 'REDUCED_REDUNDANCY',
43 | # this is sent as header x-amz-meta-custom1
44 | :metadata => {
45 | "custom1" => "foobar"
46 | }
47 | )
48 | assert object.exists?
49 | metadata_file = YAML.load(IO.read("#{file_path}/.fakes3_metadataFFF/metadata"))
50 |
51 | assert metadata_file.has_key?(:custom_metadata), 'Metadata file does not contain a :custom_metadata key'
52 | assert metadata_file[:custom_metadata].has_key?('custom1'), ':custom_metadata does not contain field "custom1"'
53 | assert_equal 'foobar', metadata_file[:custom_metadata]['custom1'], '"custom1" does not equal expected value "foobar"'
54 |
55 | assert metadata_file.has_key?(:amazon_metadata), 'Metadata file does not contain an :amazon_metadata key'
56 | assert metadata_file[:amazon_metadata].has_key?('storage-class'), ':amazon_metadata does not contain field "storage-class"'
57 | assert_equal 'REDUCED_REDUNDANCY', metadata_file[:amazon_metadata]['storage-class'], '"storage-class" does not equal expected value "REDUCED_REDUNDANCY"'
58 | end
59 |
60 | def test_content_disposition
61 | bucket = @s3.buckets["test_bucket"]
62 | bucket.objects.create("test_object", "asdf", :content_disposition => "application/test")
63 | assert_equal "application/test", content_disposition("test_bucket", "test_object")
64 | end
65 |
66 | def test_content_disposition_copy
67 | bucket = @s3.buckets["test_bucket"]
68 | object = bucket.objects.create("test_object", "asdf", :content_disposition => "application/test")
69 | object.copy_to("test_copy_object")
70 | assert_equal "application/test", content_disposition("test_bucket", "test_copy_object")
71 | end
72 |
73 | def test_content_disposition_request_parameter
74 | bucket = @s3.buckets["test_bucket"]
75 | object = bucket.objects.create("test_object", "asdf")
76 | url = object.url_for(:read, :response_content_disposition => "application/test", :signature_version => :v4)
77 | assert_equal "application/test", response_header(url, :content_disposition)
78 | end
79 |
80 | def test_content_type_request_parameter
81 | bucket = @s3.buckets["test_bucket"]
82 | object = bucket.objects.create("test_object", "asdf")
83 | url = object.url_for(:read, :response_content_type => "application/test", :signature_version => :v4)
84 | assert_equal "application/test", response_header(url, :content_type)
85 | end
86 |
87 | # Unfortunately v1 of the AWS SDK doesn't support reading the content_disposition of an object
88 | def content_disposition(bucket_name, key)
89 | url = "http://localhost:#{@s3.client.port}/#{bucket_name}/#{key}"
90 | response_header(url, :content_disposition)
91 | end
92 |
93 | def response_header(url, header_name)
94 | RestClient.head(url.to_s) do |response|
95 | response.headers[header_name]
96 | end
97 | end
98 | end
99 |
--------------------------------------------------------------------------------
/test/aws_sdk_v2_commands_test.rb:
--------------------------------------------------------------------------------
1 | require 'test/test_helper'
2 | require 'aws-sdk'
3 |
4 | class AwsSdkV2CommandsTest < Test::Unit::TestCase
5 | def setup
6 | @creds = Aws::Credentials.new('123', 'abc')
7 | @s3 = Aws::S3::Client.new(credentials: @creds, region: 'us-east-1', endpoint: 'http://localhost:10453/')
8 | @resource = Aws::S3::Resource.new(client: @s3)
9 | @bucket = @resource.create_bucket(bucket: 'v2_bucket')
10 |
11 | # Delete all objects to avoid sharing state between tests
12 | @bucket.objects.each(&:delete)
13 | end
14 |
15 | def test_create_bucket
16 | bucket = @resource.create_bucket(bucket: 'v2_create_bucket')
17 | assert_not_nil bucket
18 |
19 | bucket_names = @resource.buckets.map(&:name)
20 | assert_not_nil bucket_names.index("v2_create_bucket")
21 | end
22 |
23 | def test_destroy_bucket
24 | @bucket.delete
25 |
26 |     begin
27 |       @s3.head_bucket(bucket: 'v2_bucket')
28 |       flunk("head_bucket should fail once the bucket is deleted")
29 |     rescue Aws::S3::Errors::ServiceError # expected: the bucket is gone
30 |     end
31 |   end
32 |
33 | def test_create_object
34 | object = @bucket.object('key')
35 | object.put(body: 'test')
36 |
37 | assert_equal 'test', object.get.body.string
38 | end
39 |
40 | def test_bucket_with_dots
41 | bucket = @resource.create_bucket(bucket: 'v2.bucket')
42 | object = bucket.object('key')
43 | object.put(body: 'test')
44 |
45 | assert_equal 'test', object.get.body.string
46 | end
47 |
48 | def test_delete_object
49 | object = @bucket.object('exists')
50 | object.put(body: 'test')
51 |
52 | assert_equal 'test', object.get.body.string
53 |
54 | object.delete
55 |
56 | assert_raise Aws::S3::Errors::NoSuchKey do
57 | object.get
58 | end
59 | end
60 |
61 | # TODO - get this test working
62 | #
63 | #def test_copy_object
64 | # object = @bucket.object("key_one")
65 | # object.put(body: 'asdf')
66 |
67 | # # TODO: explore why 'key1' won't work but 'key_one' will
68 | # object2 = @bucket.object('key_two')
69 | # object2.copy_from(copy_source: 'testing_copy/key_one')
70 |
71 | # assert_equal 2, @bucket.objects.count
72 | #end
73 | end
74 |
--------------------------------------------------------------------------------
/test/boto_test.rb:
--------------------------------------------------------------------------------
1 | require 'test/test_helper'
2 | require 'fileutils'
3 |
4 | class BotoTest < Test::Unit::TestCase
5 | def setup
6 | cmdpath = File.expand_path(File.join(File.dirname(__FILE__),'botocmd.py'))
7 | @botocmd = "python #{cmdpath} -t localhost -p 10453"
8 | end
9 |
10 | def teardown
11 | end
12 |
13 | def test_store
14 | File.open(__FILE__,'rb') do |input|
15 | File.open("/tmp/fakes3_upload",'wb') do |output|
16 | output << input.read
17 | end
18 | end
19 | output = `#{@botocmd} put /tmp/fakes3_upload s3://s3cmd_bucket/upload`
20 | assert_match(/stored/,output)
21 |
22 | FileUtils.rm("/tmp/fakes3_upload")
23 | end
24 |
25 | end
26 |
--------------------------------------------------------------------------------
/test/botocmd.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/python
2 |
3 | # -*- coding: utf-8 -*-
4 | # fakes3cmd.py -- an s3cmd-like script that accepts a custom host and portname
5 | from __future__ import print_function
6 | import re
7 | import os
8 | from optparse import OptionParser
9 |
10 | try:
11 | from boto.s3.connection import S3Connection, OrdinaryCallingFormat
12 | from boto.s3.key import Key
13 | except ImportError:
14 | raise Exception('You must install the boto package for python')
15 |
16 |
17 | class FakeS3Cmd(object):
18 | COMMANDS = ['mb', 'rb', 'put', ]
19 | def __init__(self, host, port):
20 | self.host = host
21 | self.port = port
22 | self.conn = None
23 | self._connect()
24 |
25 | def _connect(self):
26 | print('Connecting: %s:%s' % (self.host, self.port))
27 | self.conn = S3Connection(is_secure=False,
28 | calling_format=OrdinaryCallingFormat(),
29 | aws_access_key_id='',
30 | aws_secret_access_key='',
31 | port=self.port, host=self.host)
32 |
33 |
34 | @staticmethod
35 | def _parse_uri(path):
36 | match = re.match(r's3://([^/]+)(?:/(.*))?', path, re.I)
37 | ## (bucket, key)
38 | return match.groups()
39 |
40 | def mb(self, path, *args):
41 | if not self.conn:
42 | self._connect()
43 |
44 | bucket, _ = self._parse_uri(path)
45 | self.conn.create_bucket(bucket)
46 | print('made bucket: [%s]' % bucket)
47 |
48 | def rb(self, path, *args):
49 | if not self.conn:
50 | self._connect()
51 |
52 | bucket, _ = self._parse_uri(path)
53 | self.conn.delete_bucket(bucket)
54 | print('removed bucket: [%s]' % bucket)
55 |
56 | def put(self, *args):
57 | if not self.conn:
58 | self._connect()
59 |
60 | args = list(args)
61 | path = args.pop()
62 | bucket_name, prefix = self._parse_uri(path)
63 | bucket = self.conn.create_bucket(bucket_name)
64 | for src_file in args:
65 | key = Key(bucket)
66 |             key.key = os.path.join(prefix or '', os.path.basename(src_file))  # prefix is None for bare s3://bucket URIs
67 | key.set_contents_from_filename(src_file)
68 | print('stored: [%s]' % key.key)
69 |
70 |
71 | if __name__ == "__main__":
72 | # check for options. TODO: This requires a more verbose help message
73 | # to explain how the positional arguments work.
74 | parser = OptionParser()
75 | parser.add_option("-t", "--host", type="string", default='localhost')
76 | parser.add_option("-p", "--port", type='int', default=80)
77 | o, args = parser.parse_args()
78 |
79 | if len(args) < 2:
80 | raise ValueError('you must minimally supply a desired command and s3 uri')
81 |
82 | cmd = args.pop(0)
83 |
84 | if cmd not in FakeS3Cmd.COMMANDS:
85 | raise ValueError('%s is not a valid command' % cmd)
86 |
87 | fs3 = FakeS3Cmd(o.host, o.port)
88 | handler = getattr(fs3, cmd)
89 | handler(*args)
90 |
--------------------------------------------------------------------------------
/test/cli_test.rb:
--------------------------------------------------------------------------------
1 | require 'test/test_helper'
2 | require 'test/minitest_helper'
3 | require 'fakes3/cli'
4 |
5 |
6 | class CLITest < Test::Unit::TestCase
7 | def setup
8 | super
9 | FakeS3::Server.any_instance.stubs(:serve)
10 | end
11 |
12 | def test_quiet_mode
13 | script = FakeS3::CLI.new([], :root => '.', :port => 4567, :license => 'test', :quiet => true)
14 | assert_output('') do
15 | script.invoke(:server)
16 | end
17 | end
18 | end
19 |
--------------------------------------------------------------------------------
/test/local_s3_cfg:
--------------------------------------------------------------------------------
1 | [default]
2 | access_key = abc
3 | acl_public = False
4 | bucket_location = US
5 | cloudfront_host = cloudfront.amazonaws.com
6 | cloudfront_resource = /2008-06-30/distribution
7 | default_mime_type = binary/octet-stream
8 | delete_removed = False
9 | dry_run = False
10 | encoding = UTF-8
11 | encrypt = False
12 | force = False
13 | get_continue = False
14 | gpg_command = None
15 | gpg_decrypt = %(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
16 | gpg_encrypt = %(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
17 | gpg_passphrase =
18 | guess_mime_type = True
19 | host_base = localhost:10453
20 | host_bucket = %(bucket)s.localhost:10453
21 | human_readable_sizes = False
22 | list_md5 = False
23 | preserve_attrs = True
24 | progress_meter = True
25 | proxy_host =
26 | proxy_port = 0
27 | recursive = False
28 | recv_chunk = 4096
29 | secret_key = def
30 | send_chunk = 4096
31 | simpledb_host = sdb.amazonaws.com
32 | skip_existing = False
33 | use_https = False
34 | verbosity = WARNING
35 |
--------------------------------------------------------------------------------
/test/minitest_helper.rb:
--------------------------------------------------------------------------------
1 | # LICENSE:
2 | #
3 | # (The MIT License)
4 | #
5 | # Copyright © Ryan Davis, seattle.rb
6 | #
7 | # Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the 'Software'), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
8 | #
9 | # The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
10 | #
11 | # THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
12 |
13 | # The following is from minitest:
14 | # TODO - decide whether to switch to minitest or what to do about these:
15 |
16 | def capture_io
17 | require 'stringio'
18 |
19 | captured_stdout, captured_stderr = StringIO.new, StringIO.new
20 |
21 | orig_stdout, orig_stderr = $stdout, $stderr
22 | $stdout, $stderr = captured_stdout, captured_stderr
23 |
24 | begin
25 | yield
26 | ensure
27 | $stdout = orig_stdout
28 | $stderr = orig_stderr
29 | end
30 |
31 | return captured_stdout.string, captured_stderr.string
32 | end
33 |
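   | # A usage sketch:
   | #   out, err = capture_io { puts "hi"; warn "uh oh" }
   | #   out  # => "hi\n"
   | #   err  # => "uh oh\n"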
34 | def assert_output stdout = nil, stderr = nil
35 | out, err = capture_io do
36 | yield
37 | end
38 |
39 | err_msg = Regexp === stderr ? :assert_match : :assert_equal if stderr
40 | out_msg = Regexp === stdout ? :assert_match : :assert_equal if stdout
41 |
42 | y = send err_msg, stderr, err, "In stderr" if err_msg
43 | x = send out_msg, stdout, out, "In stdout" if out_msg
44 |
45 | (!stdout || x) && (!stderr || y)
46 | end
47 |
--------------------------------------------------------------------------------
/test/post_test.rb:
--------------------------------------------------------------------------------
1 | require 'test/test_helper'
2 | require 'rest-client'
3 |
4 | class PostTest < Test::Unit::TestCase
5 |   # Make sure you have a posttest.localhost entry in your /etc/hosts, e.g.:
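   |   # 127.0.0.1   posttest.localhost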
6 | def setup
7 | @url='http://posttest.localhost:10453/'
8 | end
9 |
10 | def teardown
11 | end
12 |
13 | def test_options
14 | RestClient.options(@url) do |response|
15 | assert_equal(response.code, 200)
16 | assert_equal(response.headers[:access_control_allow_origin],"*")
17 | assert_equal(response.headers[:access_control_allow_methods], "PUT, POST, HEAD, GET, OPTIONS")
18 | assert_equal(response.headers[:access_control_allow_headers], "Accept, Content-Type, Authorization, Content-Length, ETag, X-CSRF-Token, Content-Disposition")
19 | assert_equal(response.headers[:access_control_expose_headers], "ETag")
20 | end
21 | end
22 |
23 | def test_redirect
24 | res = RestClient.post(
25 | @url,
26 | 'key'=>'uploads/12345/${filename}',
27 | 'success_action_redirect'=>'http://somewhere.else.com/?foo=bar',
28 | 'file'=>File.new(__FILE__,"rb")
29 | ) { |response|
30 | assert_equal(response.code, 303)
31 | assert_equal(response.headers[:location], 'http://somewhere.else.com/?foo=bar&bucket=posttest&key=uploads%2F12345%2Fpost_test.rb')
32 | # Tests that CORS Headers can be set from command line
33 | assert_equal(response.headers[:access_control_allow_headers], 'Authorization, Content-Length, Cache-Control')
34 | }
35 | end
36 |
37 | def test_status_200
38 | res = RestClient.post(
39 | @url,
40 | 'key'=>'uploads/12345/${filename}',
41 | 'success_action_status'=>'200',
42 | 'file'=>File.new(__FILE__,"rb")
43 | ) { |response|
44 | assert_equal(response.code, 200)
45 | # Tests that CORS Headers can be set from command line
46 | assert_equal(response.headers[:access_control_allow_headers], 'Authorization, Content-Length, Cache-Control')
47 | }
48 | end
49 |
50 | def test_status_201
51 | res = RestClient.post(
52 | @url,
53 | 'key'=>'uploads/12345/${filename}',
54 | 'success_action_status'=>'201',
55 | 'file'=>File.new(__FILE__,"rb")
56 | ) { |response|
57 | assert_equal(response.code, 201)
58 | assert_match(%r{^\<\?xml.*uploads/12345/post_test\.rb}m, response.body)
59 | # Tests that CORS Headers can be set from command line
60 | assert_equal(response.headers[:access_control_allow_headers], 'Authorization, Content-Length, Cache-Control')
61 | }
62 | end
63 |
64 | end
65 |
--------------------------------------------------------------------------------
/test/s3_commands_test.rb:
--------------------------------------------------------------------------------
1 | require 'test/test_helper'
2 | require 'fileutils'
3 | #require 'fakes3/server'
4 | require 'aws/s3'
5 |
6 | class S3CommandsTest < Test::Unit::TestCase
7 | include AWS::S3
8 |
9 | def setup
10 | AWS::S3::Base.establish_connection!(:access_key_id => "123",
11 | :secret_access_key => "abc",
12 | :server => "localhost",
13 | :port => "10453" )
14 | end
15 |
16 | def teardown
17 | AWS::S3::Base.disconnect!
18 | end
19 |
20 | def test_create_bucket
21 | bucket = Bucket.create("ruby_aws_s3")
22 | assert_not_nil bucket
23 |
24 | bucket_names = []
25 | Service.buckets.each do |bucket|
26 | bucket_names << bucket.name
27 | end
28 | assert(bucket_names.index("ruby_aws_s3") >= 0)
29 | end
30 |
31 | def test_destroy_bucket
32 | Bucket.create("deletebucket")
33 | Bucket.delete("deletebucket")
34 |
35 |     assert_raise AWS::S3::NoSuchBucket do
36 |       Bucket.find("deletebucket")
37 |     end
38 |   end
41 |
42 | def test_store
43 | bucket = Bucket.create("ruby_aws_s3")
44 | S3Object.store("hello", "world", "ruby_aws_s3")
45 |
46 | output = ""
47 | obj = S3Object.stream("hello", "ruby_aws_s3") do |chunk|
48 | output << chunk
49 | end
50 | assert_equal "world", output
51 | end
52 |
53 | def test_large_store
54 | bucket = Bucket.create("ruby_aws_s3")
55 | buffer = ""
56 | 500000.times do
57 | buffer << "#{(rand * 100).to_i}"
58 | end
59 |
60 | buf_len = buffer.length
61 | S3Object.store("big",buffer,"ruby_aws_s3")
62 |
63 | output = ""
64 | S3Object.stream("big","ruby_aws_s3") do |chunk|
65 | output << chunk
66 | end
67 | assert_equal buf_len,output.size
68 | end
69 |
70 | def test_metadata_store
71 | assert_equal true, Bucket.create("ruby_aws_s3")
72 | bucket = Bucket.find("ruby_aws_s3")
73 |
74 | # Note well: we can't seem to access obj.metadata until we've stored
75 | # the object and found it again. Thus the store, find, store
76 | # runaround below.
77 | obj = bucket.new_object(:value => "foo")
78 | obj.key = "key_with_metadata"
79 | obj.store
80 | obj = S3Object.find("key_with_metadata", "ruby_aws_s3")
81 | obj.metadata[:param1] = "one"
82 | obj.metadata[:param2] = "two, three"
83 | obj.store
84 | obj = S3Object.find("key_with_metadata", "ruby_aws_s3")
85 |
86 | assert_equal "one", obj.metadata[:param1]
87 | assert_equal "two, three", obj.metadata[:param2]
88 | end
89 |
90 | def test_metadata_copy
91 | assert_equal true, Bucket.create("ruby_aws_s3")
92 | bucket = Bucket.find("ruby_aws_s3")
93 |
94 | # Note well: we can't seem to access obj.metadata until we've stored
95 | # the object and found it again. Thus the store, find, store
96 | # runaround below.
97 | obj = bucket.new_object(:value => "foo")
98 | obj.key = "key_with_metadata"
99 | obj.store
100 | obj = S3Object.find("key_with_metadata", "ruby_aws_s3")
101 | obj.metadata[:param1] = "one"
102 | obj.metadata[:param2] = "two, three"
103 | obj.store
104 |
105 | S3Object.copy("key_with_metadata", "key_with_metadata2", "ruby_aws_s3")
106 | obj = S3Object.find("key_with_metadata2", "ruby_aws_s3")
107 |
108 | assert_equal "one", obj.metadata[:param1]
109 | assert_equal "two, three", obj.metadata[:param2]
110 | end
111 |
112 | def test_multi_directory
113 | bucket = Bucket.create("ruby_aws_s3")
114 | S3Object.store("dir/myfile/123.txt","recursive","ruby_aws_s3")
115 |
116 | output = ""
117 | obj = S3Object.stream("dir/myfile/123.txt","ruby_aws_s3") do |chunk|
118 | output << chunk
119 | end
120 | assert_equal "recursive", output
121 | end
122 |
123 | def test_find_nil_bucket
124 |     assert_raise AWS::S3::NoSuchBucket do
125 |       Bucket.find("unknown")
126 |     end
127 |   end
131 |
132 | def test_find_object
133 | bucket = Bucket.create('find_bucket')
134 | obj_name = 'short'
135 | S3Object.store(obj_name,'short_text','find_bucket')
136 | short = S3Object.find(obj_name,"find_bucket")
137 | assert_not_nil(short)
138 | assert_equal(short.value,'short_text')
139 | end
140 |
141 | def test_find_non_existent_object
142 | bucket = Bucket.create('find_bucket')
143 | obj_name = 'doesnotexist'
144 | assert_raise AWS::S3::NoSuchKey do
145 | should_throw = S3Object.find(obj_name,"find_bucket")
146 | end
147 |
148 | # Try something higher in the alphabet
149 | assert_raise AWS::S3::NoSuchKey do
150 | should_throw = S3Object.find("zzz","find_bucket")
151 | end
152 | end
153 |
154 | def test_exists?
155 | bucket = Bucket.create('ruby_aws_s3')
156 | obj_name = 'dir/myfile/exists.txt'
157 | S3Object.store(obj_name,'exists','ruby_aws_s3')
158 | assert S3Object.exists?(obj_name, 'ruby_aws_s3')
159 | assert !S3Object.exists?('dir/myfile/doesnotexist.txt','ruby_aws_s3')
160 | end
161 |
162 | def test_delete
163 | bucket = Bucket.create("ruby_aws_s3")
164 | S3Object.store("something_to_delete","asdf","ruby_aws_s3")
165 | something = S3Object.find("something_to_delete","ruby_aws_s3")
166 | S3Object.delete("something_to_delete","ruby_aws_s3")
167 |
168 | assert_raise AWS::S3::NoSuchKey do
169 | should_throw = S3Object.find("something_to_delete","ruby_aws_s3")
170 | end
171 | end
172 |
173 | def test_rename
174 | bucket = Bucket.create("ruby_aws_s3")
175 | S3Object.store("something_to_rename","asdf","ruby_aws_s3")
176 | S3Object.rename("something_to_rename","renamed","ruby_aws_s3")
177 |
178 | renamed = S3Object.find("renamed","ruby_aws_s3")
179 | assert_not_nil(renamed)
180 | assert_equal(renamed.value,'asdf')
181 |
182 | assert_raise AWS::S3::NoSuchKey do
183 | should_throw = S3Object.find("something_to_rename","ruby_aws_s3")
184 | end
185 | end
186 |
187 | def test_larger_lists
188 | Bucket.create("ruby_aws_s3_many")
189 | (0..50).each do |i|
190 | ('a'..'z').each do |letter|
191 | name = "#{letter}#{i}"
192 | S3Object.store(name,"asdf","ruby_aws_s3_many")
193 | end
194 | end
195 |
196 | bucket = Bucket.find("ruby_aws_s3_many")
197 | assert_equal(bucket.size,1000)
198 | assert_equal(bucket.objects.first.key,"a0")
199 | end
200 |
201 |
202 | # Copying an object
203 | #S3Object.copy 'headshot.jpg', 'headshot2.jpg', 'photos'
204 |
205 | # Renaming an object
206 | #S3Object.rename 'headshot.jpg', 'portrait.jpg', 'photos'
207 |
208 | end
209 |
--------------------------------------------------------------------------------
/test/s3cmd_test.rb:
--------------------------------------------------------------------------------
1 | require 'test/test_helper'
2 | require 'fileutils'
3 |
4 | class S3CmdTest < Test::Unit::TestCase
5 | def setup
6 | config = File.expand_path(File.join(File.dirname(__FILE__),'local_s3_cfg'))
7 | raise "Please install s3cmd" if `which s3cmd`.empty?
8 | @s3cmd = "s3cmd --config #{config}"
9 | end
10 |
11 | def teardown
12 | end
13 |
14 | def test_create_bucket
15 | `#{@s3cmd} mb s3://s3cmd_bucket`
16 | output = `#{@s3cmd} ls`
17 | assert_match(/s3cmd_bucket/,output)
18 | end
19 |
20 | def test_store
21 | File.open(__FILE__,'rb') do |input|
22 | File.open("/tmp/fakes3_upload",'wb') do |output|
23 | output << input.read
24 | end
25 | end
26 | output = `#{@s3cmd} put /tmp/fakes3_upload s3://s3cmd_bucket/upload`
27 | assert_match(/upload/,output)
28 |
29 | FileUtils.rm("/tmp/fakes3_upload")
30 | end
31 |
32 | def test_acl
33 | File.open(__FILE__,'rb') do |input|
34 | File.open("/tmp/fakes3_acl_upload",'wb') do |output|
35 | output << input.read
36 | end
37 | end
38 | output = `#{@s3cmd} put /tmp/fakes3_acl_upload s3://s3cmd_bucket/acl_upload`
39 | assert_match(/upload/,output)
40 |
41 | output = `#{@s3cmd} --force setacl -P s3://s3cmd_bucket/acl_upload`
42 | end
43 |
44 | def test_large_store
45 | end
46 |
47 | def test_multi_directory
48 | end
49 |
50 | def test_intra_bucket_copy
51 | end
52 | end
53 |
--------------------------------------------------------------------------------
/test/test_helper.rb:
--------------------------------------------------------------------------------
1 | require 'test/unit'
2 | require 'test/unit/assertions'
3 | require 'mocha/test_unit'
4 | require 'rubygems'
5 | require 'bundler/setup'
6 | require 'fakes3'
7 |
--------------------------------------------------------------------------------