├── .gitattributes
├── .gitignore
├── CHANGELOG.md
├── Dockerfile
├── Instructions_pypi.md
├── LICENSE
├── README.assets
│   ├── HOME.md
│   ├── arch.drawio.png
│   └── arch.drawio.svg
├── README.md
├── cloudbuild.yaml
├── docs
│   ├── .nojekyll
│   ├── README.md
│   ├── _images
│   │   └── arch.drawio.png
│   ├── _navbar.md
│   ├── _sidebar.md
│   ├── index.html
│   ├── library.md
│   ├── tutorial.md
│   ├── types.md
│   ├── uplink_python.access.html
│   ├── uplink_python.download.html
│   ├── uplink_python.errors.html
│   ├── uplink_python.html
│   ├── uplink_python.module_classes.html
│   ├── uplink_python.module_def.html
│   ├── uplink_python.project.html
│   ├── uplink_python.uplink.html
│   └── uplink_python.upload.html
├── setup.py
├── test
│   ├── __init__.py
│   ├── test_cases.py
│   └── test_data
│       ├── __init__.py
│       ├── access_test.py
│       ├── bucket_list_test.py
│       ├── bucket_test.py
│       ├── helper.py
│       ├── object_list_test.py
│       ├── object_test.py
│       └── project_test.py
└── uplink_python
    ├── __init__.py
    ├── access.py
    ├── download.py
    ├── errors.py
    ├── hello_storj.py
    ├── module_classes.py
    ├── module_def.py
    ├── project.py
    ├── uplink.py
    └── upload.py
/.gitattributes:
--------------------------------------------------------------------------------
1 | # Documentation files and directories are excluded from language
2 | # statistics.
3 |
4 | docs/* linguist-documentation=true
5 |
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 |
2 | .vscode/settings.json
3 | uplink-c/
4 | libuplinkc.so
5 |
--------------------------------------------------------------------------------
/CHANGELOG.md:
--------------------------------------------------------------------------------
1 | # uplink-python binding changelog
2 |
3 | ## [1.2.2.0] - 08-02-2021
4 | ### Changelog:
5 | * Pinned to specific version of uplinkc - v1.2.2
6 | * Resolved satellite address issue in test cases.
7 |
8 | ## [1.2.0.0] - 14-12-2020
9 | ### Changelog:
10 | * Pinned to specific version of uplinkc - v1.2.0
11 | * Resolved access grant issue in test cases.
12 |
13 | ## [1.0.7] - 04-09-2020
14 | ### Changelog:
15 | * Added functions based on uplinkc - v1.1.0
16 | * Made namespace changes based on latest master branch.
17 | * Binding not pinned to any specific uplink-c tag, uses master branch.
18 |
19 | ## [1.0.6] - 25-08-2020
20 | ### Changelog:
21 | * Pinned to specific version of uplinkc - v1.0.5
22 |
23 | ## [1.0.4] - 06-08-2020
24 | ### Changelog:
25 | * Changed overall structure of uplink-python.
26 | * Added error classes for error handling.
27 | * Added instructions for running unit tests locally to the README.
28 | * Changed documentation.
29 |
30 | ## [1.0.3] - 19-07-2020
31 | ### Changelog:
32 | * Update module to latest storj/uplink-c v1.0.5
33 | * Added unittest
34 | * Changed documentation
35 | * import statements changed
36 |
37 | ## [1.0.2] - 02-06-2020
38 | ### Changelog:
39 | * Migrated package on pip (pypi) from storj-python to uplink-python
40 | * Linting done using pylint
41 | * Changed file names according to snake_case python rule.
42 | * Split LibUplinkPy class into two parts and two files: uplink and exchange.
43 |
44 |
45 | ## [1.0.1] - 06-05-2020
46 | ### Changelog:
47 | * Migrated package on pip (pypi) from storjPython to storj-python
48 | * Building libuplinkc.so is now handled by pip install.
49 |
50 |
51 | ## [1.0.1] - 19-04-2020
52 | ### Changelog:
53 | * Changes made according to latest storj/uplink-c RC v1.0.1
54 |
55 |
56 | ## [0.12.0] - 30-01-2020
57 | ### Changelog:
58 | * Changes made according to latest libuplinkc v0.31.6
59 | * Added support for macOS.
60 |
61 |
62 | ## [0.11.0] - 06-01-2020
63 | ### Changelog:
64 | * Changes made according to latest libuplinkc v0.28.4
65 | * Added get_file_size example function in helloStorj.py to check the size of an object on Storj before downloading.
66 | * Added restrict_scope function.
67 | * Added example for using scope key to access object on Storj in helloStorj.py
68 |
69 |
70 | ## [0.10.0] - 12-12-2019
71 | ### Changelog:
72 | * Changes made according to latest libuplinkc v0.27.1
73 | * Changed get_encryption_access return type from ctype pointer to string.
74 | * Changed open_bucket parameters to take serializedEncryptionAccess as string instead of ctype pointer.
75 | * Added functions related to access_scope.
76 | * Added example for functions new_scope, parse_scope, etc in helloStorj.py
77 |
78 |
79 | ## [0.9.0] - 24-09-2019
80 |
--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------
1 |
2 | FROM python:3.8
3 |
4 | RUN curl -O https://dl.google.com/go/go1.14.6.linux-amd64.tar.gz
5 | RUN sha256sum go1.14.6.linux-amd64.tar.gz
6 | RUN tar xvf go1.14.6.linux-amd64.tar.gz
7 | RUN chown -R root:root ./go
8 | RUN mv go /usr/local
9 | ENV PATH=$PATH:/usr/local/go/bin
10 |
11 | RUN pip --no-cache-dir install pylint
12 |
--------------------------------------------------------------------------------
/Instructions_pypi.md:
--------------------------------------------------------------------------------
1 | # uplink-python binding
2 |
3 | ## Instructions to push package on pypi
4 |
5 | Follow these instructions to make changes to the project and push the ```uplink-python``` package to PyPI. This makes the latest package installable with the ```pip install``` command.
6 |
7 | Assuming you have git cloned the project to your local directory and have already made the required changes to the project:
8 |
9 | * In Command Prompt, navigate to the ```your-local-directory/uplink-python``` folder; here ```uplink-python``` is the root directory of the cloned project.
10 |
11 | Your directory structure would be something like this:
12 |
13 | uplink-python
14 | └── CHANGELOG.md
15 | └── cloudbuild.yaml
16 | └── Dockerfile
17 | └── docs
18 | └── index.html
19 | └── Instructions_pypi.md
20 | └── LICENSE
21 | └── README.assets
22 | └── README.md
23 | └── setup.py
24 | └── tests
25 | └── test_cases.py
26 | └── uplink_python
27 | └── __init__.py
28 | └── uplink.py
29 |
30 | * Open ```setup.py``` using any text editor and increment the package ```version```:
31 |
32 | setuptools.setup(
33 | name="uplink-python",
34 | version="1.0.4",
35 |
36 | > Incrementing the package version is mandatory because PyPI does not allow re-uploading a package with the same version.
37 |
38 | * The next step is to generate distribution packages for the package. These are archives that are uploaded to the Package Index and can be installed by pip.\
39 | Make sure you have the latest versions of setuptools and wheel installed:
40 |
41 | $ python3 -m pip install --user --upgrade setuptools wheel
42 |
43 | * Now run this command from the same directory where ```setup.py``` is located:
44 |
45 | $ python3 setup.py sdist
46 |
47 | This command should output a lot of text and, once completed, should generate a source archive in the dist directory:
48 |
49 | dist/
50 | uplink-python-x.x.x.tar.gz
51 |
52 | * You will use twine to upload the distribution packages. You’ll need to install Twine if not already installed:
53 |
54 | $ python3 -m pip install --upgrade twine
55 |
56 | * Once installed, run Twine to upload all of the archives under dist:
57 | * On Test PyPI:
58 |
59 | Test PyPI is a separate instance of the package index intended for testing and experimentation.
60 |
61 | $ python3 -m twine upload --repository testpypi dist/*
62 | You will be prompted for a username and password; enter the username and password used on ```https://test.pypi.org/```
63 |
64 | * On PyPI:
65 |
66 | When you are ready to upload a real package to the Python Package Index, use the following command.
67 |
68 | $ python3 -m twine upload dist/*
69 | You will be prompted for a username and password; enter the username and password used on ```https://pypi.org/```
70 |
71 |
72 | > For more details and a complete tutorial on how to publish a package on PyPI, go to [Packaging Python Projects](https://packaging.python.org/tutorials/packaging-projects/)
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | Apache License
2 | Version 2.0, January 2004
3 | http://www.apache.org/licenses/
4 |
5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
6 |
7 | 1. Definitions.
8 |
9 | "License" shall mean the terms and conditions for use, reproduction,
10 | and distribution as defined by Sections 1 through 9 of this document.
11 |
12 | "Licensor" shall mean the copyright owner or entity authorized by
13 | the copyright owner that is granting the License.
14 |
15 | "Legal Entity" shall mean the union of the acting entity and all
16 | other entities that control, are controlled by, or are under common
17 | control with that entity. For the purposes of this definition,
18 | "control" means (i) the power, direct or indirect, to cause the
19 | direction or management of such entity, whether by contract or
20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the
21 | outstanding shares, or (iii) beneficial ownership of such entity.
22 |
23 | "You" (or "Your") shall mean an individual or Legal Entity
24 | exercising permissions granted by this License.
25 |
26 | "Source" form shall mean the preferred form for making modifications,
27 | including but not limited to software source code, documentation
28 | source, and configuration files.
29 |
30 | "Object" form shall mean any form resulting from mechanical
31 | transformation or translation of a Source form, including but
32 | not limited to compiled object code, generated documentation,
33 | and conversions to other media types.
34 |
35 | "Work" shall mean the work of authorship, whether in Source or
36 | Object form, made available under the License, as indicated by a
37 | copyright notice that is included in or attached to the work
38 | (an example is provided in the Appendix below).
39 |
40 | "Derivative Works" shall mean any work, whether in Source or Object
41 | form, that is based on (or derived from) the Work and for which the
42 | editorial revisions, annotations, elaborations, or other modifications
43 | represent, as a whole, an original work of authorship. For the purposes
44 | of this License, Derivative Works shall not include works that remain
45 | separable from, or merely link (or bind by name) to the interfaces of,
46 | the Work and Derivative Works thereof.
47 |
48 | "Contribution" shall mean any work of authorship, including
49 | the original version of the Work and any modifications or additions
50 | to that Work or Derivative Works thereof, that is intentionally
51 | submitted to Licensor for inclusion in the Work by the copyright owner
52 | or by an individual or Legal Entity authorized to submit on behalf of
53 | the copyright owner. For the purposes of this definition, "submitted"
54 | means any form of electronic, verbal, or written communication sent
55 | to the Licensor or its representatives, including but not limited to
56 | communication on electronic mailing lists, source code control systems,
57 | and issue tracking systems that are managed by, or on behalf of, the
58 | Licensor for the purpose of discussing and improving the Work, but
59 | excluding communication that is conspicuously marked or otherwise
60 | designated in writing by the copyright owner as "Not a Contribution."
61 |
62 | "Contributor" shall mean Licensor and any individual or Legal Entity
63 | on behalf of whom a Contribution has been received by Licensor and
64 | subsequently incorporated within the Work.
65 |
66 | 2. Grant of Copyright License. Subject to the terms and conditions of
67 | this License, each Contributor hereby grants to You a perpetual,
68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
69 | copyright license to reproduce, prepare Derivative Works of,
70 | publicly display, publicly perform, sublicense, and distribute the
71 | Work and such Derivative Works in Source or Object form.
72 |
73 | 3. Grant of Patent License. Subject to the terms and conditions of
74 | this License, each Contributor hereby grants to You a perpetual,
75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
76 | (except as stated in this section) patent license to make, have made,
77 | use, offer to sell, sell, import, and otherwise transfer the Work,
78 | where such license applies only to those patent claims licensable
79 | by such Contributor that are necessarily infringed by their
80 | Contribution(s) alone or by combination of their Contribution(s)
81 | with the Work to which such Contribution(s) was submitted. If You
82 | institute patent litigation against any entity (including a
83 | cross-claim or counterclaim in a lawsuit) alleging that the Work
84 | or a Contribution incorporated within the Work constitutes direct
85 | or contributory patent infringement, then any patent licenses
86 | granted to You under this License for that Work shall terminate
87 | as of the date such litigation is filed.
88 |
89 | 4. Redistribution. You may reproduce and distribute copies of the
90 | Work or Derivative Works thereof in any medium, with or without
91 | modifications, and in Source or Object form, provided that You
92 | meet the following conditions:
93 |
94 | (a) You must give any other recipients of the Work or
95 | Derivative Works a copy of this License; and
96 |
97 | (b) You must cause any modified files to carry prominent notices
98 | stating that You changed the files; and
99 |
100 | (c) You must retain, in the Source form of any Derivative Works
101 | that You distribute, all copyright, patent, trademark, and
102 | attribution notices from the Source form of the Work,
103 | excluding those notices that do not pertain to any part of
104 | the Derivative Works; and
105 |
106 | (d) If the Work includes a "NOTICE" text file as part of its
107 | distribution, then any Derivative Works that You distribute must
108 | include a readable copy of the attribution notices contained
109 | within such NOTICE file, excluding those notices that do not
110 | pertain to any part of the Derivative Works, in at least one
111 | of the following places: within a NOTICE text file distributed
112 | as part of the Derivative Works; within the Source form or
113 | documentation, if provided along with the Derivative Works; or,
114 | within a display generated by the Derivative Works, if and
115 | wherever such third-party notices normally appear. The contents
116 | of the NOTICE file are for informational purposes only and
117 | do not modify the License. You may add Your own attribution
118 | notices within Derivative Works that You distribute, alongside
119 | or as an addendum to the NOTICE text from the Work, provided
120 | that such additional attribution notices cannot be construed
121 | as modifying the License.
122 |
123 | You may add Your own copyright statement to Your modifications and
124 | may provide additional or different license terms and conditions
125 | for use, reproduction, or distribution of Your modifications, or
126 | for any such Derivative Works as a whole, provided Your use,
127 | reproduction, and distribution of the Work otherwise complies with
128 | the conditions stated in this License.
129 |
130 | 5. Submission of Contributions. Unless You explicitly state otherwise,
131 | any Contribution intentionally submitted for inclusion in the Work
132 | by You to the Licensor shall be under the terms and conditions of
133 | this License, without any additional terms or conditions.
134 | Notwithstanding the above, nothing herein shall supersede or modify
135 | the terms of any separate license agreement you may have executed
136 | with Licensor regarding such Contributions.
137 |
138 | 6. Trademarks. This License does not grant permission to use the trade
139 | names, trademarks, service marks, or product names of the Licensor,
140 | except as required for reasonable and customary use in describing the
141 | origin of the Work and reproducing the content of the NOTICE file.
142 |
143 | 7. Disclaimer of Warranty. Unless required by applicable law or
144 | agreed to in writing, Licensor provides the Work (and each
145 | Contributor provides its Contributions) on an "AS IS" BASIS,
146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
147 | implied, including, without limitation, any warranties or conditions
148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
149 | PARTICULAR PURPOSE. You are solely responsible for determining the
150 | appropriateness of using or redistributing the Work and assume any
151 | risks associated with Your exercise of permissions under this License.
152 |
153 | 8. Limitation of Liability. In no event and under no legal theory,
154 | whether in tort (including negligence), contract, or otherwise,
155 | unless required by applicable law (such as deliberate and grossly
156 | negligent acts) or agreed to in writing, shall any Contributor be
157 | liable to You for damages, including any direct, indirect, special,
158 | incidental, or consequential damages of any character arising as a
159 | result of this License or out of the use or inability to use the
160 | Work (including but not limited to damages for loss of goodwill,
161 | work stoppage, computer failure or malfunction, or any and all
162 | other commercial damages or losses), even if such Contributor
163 | has been advised of the possibility of such damages.
164 |
165 | 9. Accepting Warranty or Additional Liability. While redistributing
166 | the Work or Derivative Works thereof, You may choose to offer,
167 | and charge a fee for, acceptance of support, warranty, indemnity,
168 | or other liability obligations and/or rights consistent with this
169 | License. However, in accepting such obligations, You may act only
170 | on Your own behalf and on Your sole responsibility, not on behalf
171 | of any other Contributor, and only if You agree to indemnify,
172 | defend, and hold each Contributor harmless for any liability
173 | incurred by, or claims asserted against, such Contributor by reason
174 | of your accepting any such warranty or additional liability.
175 |
176 | END OF TERMS AND CONDITIONS
177 |
178 | APPENDIX: How to apply the Apache License to your work.
179 |
180 | To apply the Apache License to your work, attach the following
181 | boilerplate notice, with the fields enclosed by brackets "[]"
182 | replaced with your own identifying information. (Don't include
183 | the brackets!) The text should be enclosed in the appropriate
184 | comment syntax for the file format. We also recommend that a
185 | file or class name and description of purpose be included on the
186 | same "printed page" as the copyright notice for easier
187 | identification within third-party archives.
188 |
189 | Copyright [yyyy] [name of copyright owner]
190 |
191 | Licensed under the Apache License, Version 2.0 (the "License");
192 | you may not use this file except in compliance with the License.
193 | You may obtain a copy of the License at
194 |
195 | http://www.apache.org/licenses/LICENSE-2.0
196 |
197 | Unless required by applicable law or agreed to in writing, software
198 | distributed under the License is distributed on an "AS IS" BASIS,
199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
200 | See the License for the specific language governing permissions and
201 | limitations under the License.
202 |
--------------------------------------------------------------------------------
/README.assets/arch.drawio.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/storj-thirdparty/uplink-python/efba0bc50803599d4570376ad1fd737e0db65b46/README.assets/arch.drawio.png
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # uplink-python binding
2 |
3 | [Codacy Badge](https://app.codacy.com/gh/storj-thirdparty/uplink-python?utm_source=github.com&utm_medium=referral&utm_content=storj-thirdparty/uplink-python&utm_campaign=Badge_Grade_Dashboard)
4 |
5 | ### *Developed using v1.2.2 storj/uplink-c*
6 |
7 | ### [API documentation and tutorial](https://storj-thirdparty.github.io/uplink-python/#/)
8 |
9 | ## Initial Set-up (Important)
10 |
11 | **NOTE**: for Golang
12 |
13 | Make sure your `PATH` includes the `$GOPATH/bin` directory, so that your commands can be easily used [Refer: Install the Go Tools](https://golang.org/doc/install):
14 | ```
15 | export PATH=$PATH:$GOPATH/bin
16 | ```
17 |
18 | Depending on your operating system, you will need to install:
19 |
20 | **On Unix**
21 | * A proper C/C++ compiler toolchain, like [GCC](https://gcc.gnu.org/)
22 |
23 | **On macOS**
24 | * [Xcode](https://developer.apple.com/xcode/download/): You also need to install the Xcode Command Line Tools by running ```xcode-select --install```. Alternatively, if you already have the full Xcode installed, you can find them under the menu Xcode -> Open Developer Tool -> More Developer Tools.... This step will install clang, clang++, and make.
25 |
26 | **On Windows**
27 | * Install Visual C++ Build Environment: [Visual Studio Build Tools](https://visualstudio.microsoft.com/thank-you-downloading-visual-studio/?sku=BuildTools) (using "Visual C++ build tools" workload) or [Visual Studio 2017 Community](https://visualstudio.microsoft.com/pl/thank-you-downloading-visual-studio/?sku=Community) (using the "Desktop development with C++" workload)
28 | * Make sure you have access to the ```site-packages``` folder inside the directory where Python is installed. To check this, navigate to the directory where Python is installed; if you get a "Permission Denied" error, follow the instructions in the message box and allow access using the ```security tab```.
29 | * Install an appropriate GCC compiler (for example [TDM-GCC](https://sourceforge.net/projects/tdm-gcc/)). Make sure g++ is listed in PATH.
30 | ## Binding Set-up
31 |
32 |
33 | Please ensure you have Python 3.x and [pip](https://pypi.org/project/pip/) installed on your system. If you have Python version 3.4 or later, pip is included by default. uplink-python does not support Python 2.x.
34 | ```
35 | $ python get-pip.py
36 | ```
37 |
38 | ### Option 1
39 |
40 | Install the [uplink-python](https://pypi.org/project/uplink-python/) Python package with the ```--no-cache-dir``` flag if re-installing or upgrading from a previous version; otherwise, the flag can be omitted (using Terminal/PowerShell/CMD as ```Administrator```):
41 | ```
42 | $ pip install --no-cache-dir uplink-python
43 | ```
44 | >Note: If ```Administrator``` privilege is not granted to the terminal/cmd, the libuplinkc.so build process may fail.
45 |
46 | ### Option 2
47 |
48 | Follow these steps to set up the binding manually or if ```libuplinkc.so``` fails to build using Option 1.
49 |
50 | * Install [uplink-python](https://pypi.org/project/uplink-python/) python package (using Terminal/Powershell/CMD) if not already done in ```Option 1```
51 | ```
52 | $ pip install uplink-python
53 | ```
54 |
55 | * Clone the [storj-uplink-c](https://godoc.org/storj.io/storj/lib/uplink) package to any location of your choice. Using cmd/terminal, navigate to ```PREFERED_DIR_PATH``` and run:
56 | ```
57 | $ git clone -b v1.2.2 https://github.com/storj/uplink-c
58 | ```
59 |
60 | * After cloning the package, navigate to the ```PREFERED_DIR_PATH/uplink-c``` folder.
61 | ```
62 | $ cd uplink-c
63 | ```
64 |
65 | * Create the '.so' file in the ```PREFERED_DIR_PATH/uplink-c``` folder by using the following command:
66 | ```
67 | $ go build -o libuplinkc.so -buildmode=c-shared
68 | ```
69 |
70 | * Copy the created *libuplinkc.so* file into the folder where the Python package was installed (by default it is python3.X ```->``` site-packages ```->``` uplink_python)
71 |
72 | * Important notice: if you have 32-bit Python on a 64-bit machine, the *.so* file will not work correctly. There are two solutions:
73 | 1. Switch to 64-bit python
74 | 2. Set the appropriate Go environment variables GOOS and GOARCH ([details](https://golang.org/doc/install/source#environment)) and recompile uplink-c
75 |
76 | ## Project Set-up
77 |
78 | To include uplink in your project, import the library using the following statement:
79 | ```
80 | from uplink_python.uplink import Uplink
81 | ```
82 | Create an object of the ```Uplink``` class to access all the functions of the library. Please refer to the sample *hello_storj.py* file for an example.
83 | ```
84 | variable_name = Uplink()
85 | ```
86 |
87 | To use parameters such as ListBucketsOptions, ListObjectsOptions, Permission, etc., you first need to import them from module_classes, i.e. uplink_python.module_classes.
88 | ```
89 | from uplink_python.module_classes import DownloadOptions, Permission
90 | ```
91 |
92 | To use the user-defined Storj exceptions such as InternalError, BucketNotFoundError, etc., you first need to import them from errors, i.e. uplink_python.errors.
93 | ```
94 | from uplink_python.errors import InternalError, BucketNotFoundError
95 | ```
96 |
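Putting these pieces together, a minimal sketch of a first session might look like the following (the satellite address, API key, and passphrase values are placeholders; substitute your own):
```
from uplink_python.uplink import Uplink
from uplink_python.errors import StorjException

uplink = Uplink()
try:
    # derive an access grant and open the project
    access = uplink.request_access_with_passphrase("my-satellite-address", "my-api-key", "my-passphrase")
    project = access.open_project()

    # list the buckets visible to this access grant
    for bucket in project.list_buckets():
        print(bucket.name)

    project.close()
except StorjException as error:
    print("Storj operation failed:", error.details)
```
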
97 | ## Sample Hello Storj!
98 |
99 | File *hello_storj.py* can be found in the folder where the Python package was installed.
100 |
101 | The sample *hello_storj.py* code imports the *Uplink* binding class from the *uplink.py* file and uses it to do the following:
102 | * list all buckets in a Storj project
103 | * create a new bucket (if it does not exist) within the desired Storj project
104 | * write a file from local computer to the Storj bucket
105 | * read back the object from the Storj bucket to the local system for verification
106 | * list all objects in a bucket
107 | * delete bucket from a Storj project
108 | * create a shareable access grant with permissions and a shared prefix
109 | * list all buckets and objects accessible with the shared access grant
110 |
111 | ## Run Unit Test Cases on Local
112 |
113 | The directory with unit test cases, *test*, can be found in the uplink-python repository.
114 |
115 | To run the test cases on a local system, you need to perform the following steps:
116 | * Clone the repo so that you have the test folder on your local system.
117 |
118 | directory_on_local
119 | └── test
120 | └── __init__.py
121 | └── test_data
122 | └── test_cases.py
123 |
124 | * Add a file named ```secret.txt``` parallel to the test folder and put your ```API Key``` in it. The directory structure would now be something like this:
125 |
126 | directory_on_local
127 | └── secret.txt
128 | └── test
129 | └── __init__.py
130 | └── test_data
131 | └── test_cases.py
132 |
133 | * Navigate to the folder (here ```directory_on_local```) and use the following command to run all the tests.
134 |
135 |
136 | $ python3 -m unittest test/test_cases.py -v
137 |
138 |
139 | ## Documentation
140 | For more information on function definitions and diagrams, check out the [wiki](//github.com/storj-thirdparty/uplink-python/wiki/) or jump to:
141 | * [Uplink-Python Binding Functions](//github.com/storj-thirdparty/uplink-python/wiki/#binding-functions)
142 | * [Flow Diagram](//github.com/storj-thirdparty/uplink-python/wiki/#flow-diagram)
143 | * [libuplink Documentation](https://godoc.org/storj.io/uplink)
144 |
--------------------------------------------------------------------------------
/cloudbuild.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | steps:
3 | - name: gcr.io/cloud-builders/docker
4 | args: ['build', '-t', 'gcr.io/$PROJECT_ID/python3', '.']
5 | - name: 'gcr.io/${PROJECT_ID}/python3'
6 | entrypoint: 'python3'
7 | args: ["-m", "pylint", 'uplink_python/uplink.py']
8 | - name: gcr.io/cloud-builders/gcloud
9 | entrypoint: 'bash'
10 | args: ["-c","gcloud secrets versions access latest --secret=StorjAPIKey >>secret.txt" ]
11 | - name: 'gcr.io/${PROJECT_ID}/python3'
12 | entrypoint: 'bash'
13 | args: ["-c", "git clone -b v1.2.2 https://github.com/storj/uplink-c"]
14 | - name: 'gcr.io/${PROJECT_ID}/python3'
15 | entrypoint: 'bash'
16 | args: ["-c", "cd uplink-c && go build -o libuplinkc.so -buildmode=c-shared && cp *.so ../uplink_python/"]
17 | - name: 'gcr.io/${PROJECT_ID}/python3'
18 | entrypoint: 'python3'
19 | args: ['-m', 'unittest', 'test/test_cases.py', '-v']
21 | images: ['gcr.io/$PROJECT_ID/python3']
22 | tags: ['cloud-builders-community']
23 |
--------------------------------------------------------------------------------
/docs/.nojekyll:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/storj-thirdparty/uplink-python/efba0bc50803599d4570376ad1fd737e0db65b46/docs/.nojekyll
--------------------------------------------------------------------------------
/docs/README.md:
--------------------------------------------------------------------------------
1 | # uplink-python binding
2 | > Developed using v1.2.2 storj/uplink-c
3 |
4 | > The binding is pinned to the uplink-c v1.2.2 release.
5 |
6 | ## Initial Set-up (Important)
7 |
8 | > NOTE: For Golang
9 | >
10 | >Make sure your `PATH` includes the `$GOPATH/bin` directory, so that your commands can be easily used [Refer: Install the Go Tools](https://golang.org/doc/install):
11 | >```
12 | >export PATH=$PATH:$GOPATH/bin
13 | >```
14 | >
15 | >Depending on your operating system, you will need to install:
16 | >
17 | >**On Unix**
18 | >* A proper C/C++ compiler toolchain, like [GCC](https://gcc.gnu.org/)
19 | >
20 | >**On macOS**
21 | >* [Xcode](https://developer.apple.com/xcode/download/): You also need to install the Xcode Command Line Tools by running ```xcode-select --install```. Alternatively, if you already have the full Xcode installed, you can find them under the menu Xcode -> Open Developer Tool -> More Developer Tools.... This step will install clang, clang++, and make.
22 | >
23 | >**On Windows**
24 | >* Install Visual C++ Build Environment: [Visual Studio Build Tools](https://visualstudio.microsoft.com/thank-you-downloading-visual-studio/?sku=BuildTools) (using "Visual C++ build tools" workload) or [Visual Studio 2017 Community](https://visualstudio.microsoft.com/pl/thank-you-downloading-visual-studio/?sku=Community) (using the "Desktop development with C++" workload)
25 | >* Make sure you have access to the ```site-packages``` folder inside the directory where Python is installed. To check this, navigate to the directory where Python is installed; if you get a "Permission Denied" error, follow the instructions in the message box and allow access using the ```security tab```.
26 |
27 | ## Binding Set-up
28 |
29 |
30 | Please ensure you have Python 3.x and [pip](https://pypi.org/project/pip/) installed on your system. If you have Python version 3.4 or later, pip is included by default. uplink-python does not support Python 2.x.
31 | ```
32 | $ python get-pip.py
33 | ```
34 |
35 | ### Option 1
36 |
37 | Install the [uplink-python](https://pypi.org/project/uplink-python/) Python package with the ```--no-cache-dir``` flag if re-installing or upgrading from a previous version; otherwise, the flag can be omitted (using Terminal/PowerShell/CMD as ```Administrator```):
38 | ```
39 | $ pip install --no-cache-dir uplink-python
40 | ```
41 |
42 | ### Option 2
43 |
44 | Follow these steps to set up the binding manually or if ```libuplinkc.so``` fails to build using Option 1.
45 |
46 | * Install [uplink-python](https://pypi.org/project/uplink-python/) python package (using Terminal/Powershell/CMD) if not already done in ```Option 1```
47 | ```
48 | $ pip install uplink-python
49 | ```
50 |
51 | * Clone the [storj-uplink-c](https://godoc.org/storj.io/storj/lib/uplink) package to any location of your choice. Using cmd/terminal, navigate to ```PREFERED_DIR_PATH``` and run:
52 | ```
53 | $ git clone -b v1.2.2 https://github.com/storj/uplink-c
54 | ```
55 |
56 | * After cloning the package, navigate to the ```PREFERED_DIR_PATH/uplink-c``` folder.
57 | ```
58 | $ cd uplink-c
59 | ```
60 |
61 | * Create the '.so' file in the ```PREFERED_DIR_PATH/uplink-c``` folder by using the following command:
62 | ```
63 | $ go build -o libuplinkc.so -buildmode=c-shared
64 | ```
65 |
66 | * Copy the created *libuplinkc.so* file into the folder where the Python package was installed (by default it is python3.X ```->``` site-packages ```->``` uplink_python)
67 |
68 |
69 | ## Project Set-up
70 |
71 | To include uplink in your project, import the library using the following statement:
72 | ```
73 | from uplink_python.uplink import Uplink
74 | ```
75 | Create an object of the ```Uplink``` class to access all the functions of the library. Please refer to the sample *hello_storj.py* file for an example.
76 | ```
77 | variable_name = Uplink()
78 | ```
79 |
80 | To use parameters such as ListBucketsOptions, ListObjectsOptions, Permission, etc., you first need to import them from module_classes, i.e. uplink_python.module_classes.
81 | ```
82 | from uplink_python.module_classes import DownloadOptions, Permission
83 | ```
84 |
85 | To use the user-defined Storj exceptions such as InternalError, BucketNotFoundError, etc., you first need to import them from errors, i.e. uplink_python.errors.
86 | ```
87 | from uplink_python.errors import InternalError, BucketNotFoundError
88 | ```
89 |
90 | ## Sample Hello Storj
91 |
92 | File *hello_storj.py* can be found in the folder where the Python package was installed.
93 |
94 | The sample *hello_storj.py* code imports the *Uplink* binding class from the *uplink.py* file and uses it to do the following:
95 | * list all buckets in a Storj project
96 | * create a new bucket (if it does not exist) within the desired Storj project
97 | * write a file from local computer to the Storj bucket
98 | * read back the object from the Storj bucket to the local system for verification
99 | * list all objects in a bucket
100 | * delete bucket from a Storj project
101 | * create a shareable access grant with permissions and a shared prefix
102 | * list all buckets and objects accessible with the shared access grant
103 |
104 |
105 | ## Flow Diagram
106 |
107 | [Flow Diagram](/_images/arch.drawio.png ':include :type=iframe width=100% height=1000px')
--------------------------------------------------------------------------------
/docs/_images/arch.drawio.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/storj-thirdparty/uplink-python/efba0bc50803599d4570376ad1fd737e0db65b46/docs/_images/arch.drawio.png
--------------------------------------------------------------------------------
/docs/_navbar.md:
--------------------------------------------------------------------------------
1 | * Getting Started
2 | * [Initial Set-up](/?id=initial-set-up-important)
3 | * [Binding Set-up](/?id=binding-set-up)
4 | * [Project Set-up](/?id=project-set-up)
5 | * [Sample Hello Storj](/?id=sample-hello-storj)
6 | * [Flow Diagram](/?id=flow-diagram)
7 |
8 | * Documentation
9 | * [uplink-python Binding Functions](/library.md)
10 | * [uplink-python API Documentation](/uplink_python.html)
11 | * [Types, Errors and Constants](/types.md)
12 | * [Create your own project](/tutorial.md)
13 |
14 |
--------------------------------------------------------------------------------
/docs/_sidebar.md:
--------------------------------------------------------------------------------
1 | * Getting Started
2 | * [About](/)
3 |
4 | * Documentation
5 | * [uplink-python Binding Functions](/library.md)
6 | * [uplink-python API Documentation](/uplink_python.html)
7 | * [Types, Errors and Constants](/types.md)
8 | * [Create your own project](/tutorial.md)
9 |
--------------------------------------------------------------------------------
/docs/index.html:
--------------------------------------------------------------------------------
[HTML entry point for the docs site; the markup was stripped in this dump and only the page title, "Document", is recoverable.]
--------------------------------------------------------------------------------
/docs/tutorial.md:
--------------------------------------------------------------------------------
1 | # Tutorial
2 |
3 | > Welcome to the guide to creating a project by yourself that uses python binding. Let's start!
4 |
5 | ## Step 1: Storj Configurations
6 |
7 | First and foremost, you need to have all the keys and configuration required to connect to the Storj network. You can create a JSON file and parse it to store the values in a class/structure (a sketch of this approach follows the snippet below), or you can simply initialize the corresponding variables inside the main method, as done in the sample *hello_storj.py* file, in the following way:
8 |
9 | ```py
10 | # Storj configuration information
11 | MY_API_KEY = "change-me-to-the-api-key-created-in-satellite-gui"
12 | MY_SATELLITE = "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S@us-central-1.tardigrade.io:7777"
13 | MY_BUCKET = "my-first-bucket"
14 | MY_STORJ_UPLOAD_PATH = "(optional): path / (required): filename"
15 |
16 | # (path + filename) OR filename
17 | MY_ENCRYPTION_PASSPHRASE = "you'll never guess this"
18 | ```
19 |
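If you prefer the JSON-file approach mentioned above, a minimal sketch could look like this (the file name *storj_config.json* and its keys are hypothetical; use whatever layout suits your project):

```py
import json

# load the Storj configuration from a JSON file instead of hard-coding it
with open("storj_config.json", "r") as config_file:
    config = json.load(config_file)

MY_API_KEY = config["api_key"]
MY_SATELLITE = config["satellite"]
MY_BUCKET = config["bucket"]
MY_STORJ_UPLOAD_PATH = config["upload_path"]
MY_ENCRYPTION_PASSPHRASE = config["encryption_passphrase"]
```
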
20 | ## Step 2: File Path
21 |
22 | A file path could be the path of the file to be uploaded, the path where a file needs to be downloaded to, or both. You can create a JSON file and parse it, or simply initialize the file path variable(s) as done in the sample *hello_storj.py* file in the following way:
23 |
24 | ```py
25 | # Source and destination path and file name for testing
26 | SRC_FULL_FILENAME = "filename with extension of the source file on the local system"
27 | DESTINATION_FULL_FILENAME = "filename with extension to save on local system"
28 | ```
29 |
30 | ## Step 3: Create libuplink class object
31 |
32 | Next, you need to create an object of the Uplink class that will be used to call the libuplink functions.
33 |
34 | ```py
35 | from uplink_python.uplink import Uplink
36 |
37 | uplink = Uplink()
38 | ```
39 |
40 | ## Step 4: Create Access Handle
41 |
42 | Once you have initialized the Uplink class object, you need to create an access to the Storj network in the following way:
43 |
44 | ```py
45 | access = uplink.request_access_with_passphrase(MY_SATELLITE, MY_API_KEY, MY_ENCRYPTION_PASSPHRASE)
46 | ```
47 |
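Because *request_access_with_passphrase* is a CPU-heavy, setup-only step (it performs Argon2 key derivation), you may prefer to serialize the access grant once and restore it on later runs with *parse_access*. A minimal sketch (where and how you store the serialized string is up to you):

```py
# first run: serialize the access grant and keep the string somewhere safe
my_serialized_access = access.serialize()

# later runs: restore the access grant from the stored string
access = uplink.parse_access(my_serialized_access)
```
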
48 | ## Step 5: Open Project
49 |
50 | To perform the uplink operations, you need to open a project, which can be done using the following code fragment:
51 |
52 | ```py
53 | project = access.open_project()
54 | ```
55 |
56 | ## Step 6: Create/Ensure Bucket
57 |
58 | All uploading and downloading is performed inside a bucket on the Storj network. To ensure that the bucket you specified in your configuration above exists, or to create one if it does not, use the following code:
59 |
60 | ```py
61 | bucket = project.ensure_bucket(MY_BUCKET)
62 | ```
63 |
64 | ## Step 7: Upload/Download
65 |
66 | ### Upload Data
67 |
68 | Uploading a file consists of the following sub-steps:
69 |
70 | #### Create File Handle
71 |
72 | To stream the data to Storj, you need to create a file stream or handle, which can be done in the following way (```path``` here is ```os.path```, i.e. ```from os import path```):
73 |
74 | ```py
75 | file_handle = open(src_full_filename, 'r+b')
76 | data_len = path.getsize(src_full_filename)
77 | ```
78 |
79 | #### Create Upload Handle
80 |
81 | While the file handle acts as the source stream, the upload handle acts as the destination stream and can be created as follows:
82 |
83 | ```py
84 | upload = project.upload_object(MY_BUCKET, MY_STORJ_UPLOAD_PATH)
85 | ```
86 |
87 | You can pass an optional parameter *upload_options* (refer to the [libuplink documentation](https://godoc.org/storj.io/uplink)) or simply pass *None*.
88 |
89 | #### Stream data to Storj Network
90 |
91 | Once the source and destination streams are created, it's time to perform data streaming. This can be done in two ways:
92 |
93 | ##### 1. Upload data by providing data bytes and size:
94 |
95 | ```py
96 | uploaded_total = 0
97 | while uploaded_total < data_len:
98 | # set packet size to be used while uploading
99 | size_to_write = 256 if (data_len - uploaded_total > 256) else data_len - uploaded_total
100 |
101 | # exit while loop if nothing left to upload
102 | if size_to_write == 0:
103 | break
104 |
105 | # file reading process from the last read position
106 | data_to_write = file_handle.read(size_to_write)
107 |
108 | # call to write data to Storj bucket
109 | bytes_written = upload.write(data_to_write, size_to_write)
110 |
111 | # exit while loop if nothing left to upload / unable to upload
112 | if bytes_written == 0:
113 | break
114 |
115 | # update last read location
116 | uploaded_total += bytes_written
117 | ```
118 |
119 | The above code streams the data in chunks of 256 bytes. You can change this size as per your requirement and convenience.
120 |
121 | ##### 2. Upload data by providing file handle:
122 |
123 | ```py
124 | # file_handle should be a BinaryIO, i.e. the file should be opened using the 'r+b' flag.
125 | upload.write_file(file_handle)
126 | ```
127 | An optional parameter here is buffer_size (default = 0); if not passed, the method iterates through the file and uploads it in blocks with a buffer size appropriate for the system.
128 |
129 | #### Commit Upload
130 |
131 | Once the data has been successfully streamed, the upload needs to be committed using the following method:
132 |
133 | ```py
134 | upload.commit()
135 | ```
136 |
137 | ### Download Data
138 |
139 | Downloading a file consists of the following sub-steps:
140 |
141 | #### Open/Create Destination File
142 |
143 | First, we need to create a destination file to store the downloaded data. This will also act as the destination stream.
144 |
145 | ```py
146 | # open / create file with the given name to save the downloaded data
147 | file_handle = open(dest_full_pathname, 'w+b')
148 | ```
149 |
150 | #### Create Download Stream
151 |
152 | The source stream for downloading data from Storj to the required file can be created using the following code:
153 |
154 | ```py
155 | download = project.download_object(MY_BUCKET, MY_STORJ_UPLOAD_PATH)
156 | ```
157 |
158 | You can pass an optional parameter *download_options* (refer to the [libuplink documentation](https://godoc.org/storj.io/uplink)) or simply pass *None*.
159 |
160 | #### Stream data from Storj Network
161 |
162 | Once the source and destination streams are created, it's time to perform data streaming. This can be done in two ways:
163 |
164 | ##### 1. Downloading data by providing size_to_read:
165 |
166 | In this method, we provide the size of data to be read from the stream, maintain a loop (if required), and write the data to the file ourselves.
167 |
168 | Before streaming the data we need to know the amount of data we are downloading to avoid discrepancies. Fetching the data size can be done using the following code:
169 |
170 | ```py
171 | # find the size of the object to be downloaded
172 | file_size = download.file_size()
173 | ```
174 |
175 | Now it's time to do the required downloading, which can be done as follows:
176 |
177 | ```py
178 | # set packet size to be used while downloading
179 | size_to_read = 256
180 | # initialize local variables and start downloading packets of data
181 | downloaded_total = 0
182 | while True:
183 | # call to read data from Storj bucket
184 | data_read, bytes_read = download.read(size_to_read)
185 |
186 | # file writing process from the last written position if new data is downloaded
187 | if bytes_read != 0:
188 | file_handle.write(data_read)
189 | #
190 | # update last read location
191 | downloaded_total += bytes_read
192 | #
193 | # break if download complete
194 | if downloaded_total == file_size:
195 | break
196 | ```
197 |
198 | The above code streams the data in chunks of 256 bytes. You can change this size as per your requirement and convenience.
199 |
200 | ##### 2. Downloading data by providing file handle:
201 |
202 | ```py
203 | # file_handle should be a BinaryIO, i.e. the file should be opened using the 'w+b' flag.
204 | download.read_file(file_handle)
205 | ```
206 | An optional parameter here is buffer_size (default = 0); if not passed, the method iterates through the object and writes the data to the file in blocks with a buffer size appropriate for the system.
207 |
208 | #### Close Download Stream
209 |
210 | Once the download streaming is complete, it is important to close the download stream.
211 |
212 | ```py
213 | download.close()
214 | ```
215 |
216 | > NOTE: Perform error handling as per your implementation.
217 |
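For example, a minimal error-handling sketch around the download flow shown above, catching *StorjException* from *uplink_python.errors* (the same exception type the repository's tests catch):

```py
from uplink_python.errors import StorjException

try:
    # download the object and write it to the already-opened local file
    download = project.download_object(MY_BUCKET, MY_STORJ_UPLOAD_PATH)
    download.read_file(file_handle)
    download.close()
except StorjException as error:
    print("download failed:", error.details)
finally:
    file_handle.close()
```
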
218 | > NOTE: Alternatively, you can simply create method(s) that can be called from the main method and perform all the above steps internally, as done in the sample file.
219 |
220 | ## Step 8: Create Shareable Access Key (Optional)
221 |
222 | A shared access key with specific restrictions can be generated using the following code (```Permission``` and ```SharePrefix``` are imported from ```uplink_python.module_classes```):
223 |
224 | ```py
225 | # set permissions for the new access to be created
226 | permissions = Permission(allow_list=True, allow_delete=False)
227 |
228 | # set shared prefix as list of SharePrefix for the new access to be created
229 | shared_prefix = [SharePrefix(bucket=MY_BUCKET, prefix="")]
230 |
231 | # create new access
232 | new_access = access.share(permissions, shared_prefix)
233 |
234 | # generate serialized access to share
235 | serialized_access = new_access.serialize()
236 | ```
237 |
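On the receiving side, the serialized string can be turned back into an access grant and used like any other; a minimal sketch (how you transmit *serialized_access* to the recipient is up to you):

```py
# the recipient restores the restricted access grant from the serialized string
shared_access = uplink.parse_access(serialized_access)
shared_project = shared_access.open_project()

# operations are limited to the permissions and prefixes granted above
for bucket in shared_project.list_buckets():
    print(bucket.name)

shared_project.close()
```
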
238 | ## Step 9: Close Project
239 |
240 | Once you have performed all the required operations, closing the project is a must to avoid memory leaks.
241 |
242 | ```py
243 | project.close()
244 | ```
245 |
246 | > NOTE: For more binding functions, refer to the [uplink-python Binding Functions](/library.md); for implementation examples, refer to the *hello_storj.py* file.
247 |
--------------------------------------------------------------------------------
/docs/uplink_python.access.html:
--------------------------------------------------------------------------------
[Generated pydoc pages whose HTML markup was stripped in this dump; the recoverable docstring excerpts, which span several of the docs/uplink_python.*.html pages, are collected below.]

Python: module uplink_python.access

class Access(access, uplink)
    An Access Grant contains everything to access a project and specific buckets.
    It includes a potentially-restricted API Key, a potentially-restricted set of encryption
    information, and information about the Satellite responsible for the project's metadata.

    Attributes
    ----------
    access : int
        Access _handle returned from libuplinkc access_result.access
    uplink : Uplink
        uplink object used to get access

    Methods
    -------
    open_project(): Project
    config_open_project(): Project
    serialize(): String
    share(): Access

    share(permission, shared_prefix)
        function Share creates a new access grant with specific permissions.
        Access grants can only have their existing permissions restricted, and the resulting
        access grant will only allow for the intersection of all previous Share calls in the
        access grant construction chain.
        Prefixes, if provided, restrict the access grant (and internal encryption information)
        to only contain enough information to allow access to just those prefixes.
        Parameters: permission : Permission; shared_prefix : list of SharePrefix
        Returns: Access

uplink_python.download (class Download)

    read(size_to_read)
        function downloads up to size_to_read bytes from the object's data stream.
        It returns the data read (bytes) and the number of bytes read.
        Parameters: size_to_read : int
        Returns: bytes, int

    read_file(file_handle, buffer_size: int = 0)
        function downloads the complete object from its data stream and writes it to the file
        whose handle is passed as parameter. After the download is complete it closes the
        download stream.
        Note: file_handle should be a BinaryIO, i.e. the file should be opened using the 'w+b'
        flag, e.g. file_handle = open(DESTINATION_FULL_FILENAME, 'w+b'). Remember to close the
        object stream on Storj and also close the local file handle after this function exits.
        Parameters: file_handle : BinaryIO; buffer_size : int
        Returns: None

uplink_python.project (class Project)

    ensure_bucket(bucket_name)
        function ensures that a bucket exists or creates a new one.
        When the bucket already exists it returns a valid Bucket and no error.
        Parameters: bucket_name : str
        Returns: Bucket

    list_buckets(list_bucket_options)
        function returns a list of buckets with all their information.
        Parameters: list_bucket_options : ListBucketsOptions (optional)
        Returns: list of Bucket

    list_objects(bucket_name, list_object_options)
        function returns a list of objects with all their information.
        Parameters: bucket_name : str; list_object_options : ListObjectsOptions (optional)
        Returns: list of Object

uplink_python.uplink (class Uplink)

    RequestAccessWithPassphrase (variant with custom configuration)
        generates a new access grant using a passphrase and custom configuration.
        It must talk to the Satellite provided to get a project-based salt for deterministic
        key derivation.
        Note: this is a CPU-heavy function that uses a password-based key derivation
        function (Argon2). This should be a setup-only step. Most common interactions with
        the library should be using a serialized access grant through ParseAccess directly.
        Parameters: config : Config; satellite : str; api_key : str; passphrase : str
        Returns: Access

    parse_access(serialized_access)
        ParseAccess parses a serialized access grant string.
        This should be the main way to instantiate an access grant for opening a project.
        See the note on RequestAccessWithPassphrase.
        Parameters: serialized_access : str
        Returns: Access

    request_access_with_passphrase(satellite, api_key, passphrase)
        RequestAccessWithPassphrase generates a new access grant using a passphrase.
        It must talk to the Satellite provided to get a project-based salt for deterministic
        key derivation.
        Note: this is a CPU-heavy function that uses a password-based key derivation
        function (Argon2). This should be a setup-only step. Most common interactions with
        the library should be using a serialized access grant through ParseAccess directly.
        Parameters: satellite : str; api_key : str; passphrase : str
        Returns: Access

uplink_python.upload (class Upload)

    set custom metadata
        function to set custom meta information while uploading data
        Parameters: custom_metadata : CustomMetadata
        Returns: None

    write(data_to_write, size_to_write)
        function uploads the bytes data passed as parameter to the object's data stream.
        Parameters: data_to_write : bytes; size_to_write : int
        Returns: int

    write_file(file_handle, buffer_size: int = 0)
        function uploads the complete file whose handle is passed as parameter to the
        object's data stream and commits the object after upload is complete.
        Note: file_handle should be a BinaryIO, i.e. the file should be opened using the 'r+b'
        flag, e.g. file_handle = open(SRC_FULL_FILENAME, 'r+b'). Remember to commit the object
        on Storj and also close the local file handle after this function exits.
        Parameters: file_handle : BinaryIO; buffer_size : int
        Returns: None

Data
----
COPY_BUFSIZE = 65536
--------------------------------------------------------------------------------
/setup.py:
--------------------------------------------------------------------------------
1 | # pylint: disable=missing-docstring, broad-except
2 | import subprocess
3 | import os
4 | import platform
5 | import sysconfig
6 |
7 | import setuptools
8 | from setuptools.command.install import install
9 |
10 | with open("README.md", "r") as fh:
11 | long_description = fh.read()
12 |
13 | uplinkc_version = "v1.2.2"
14 |
15 | class Install(install):
16 |
17 | @staticmethod
18 | def find_module_path():
19 | new_path = os.path.join(sysconfig.get_paths()['purelib'], "uplink_python")
20 | try:
21 | os.makedirs(new_path, exist_ok=True)
22 | os.system("echo Directory uplink_python created successfully.")
23 | except OSError as error:
24 | os.system("echo Error in creating uplink_python directory. Error: " + str(error))
25 | return new_path
26 |
27 | def run(self):
28 |
29 | try:
30 | install_path = self.find_module_path()
31 | os.system("echo Package installation path: " + install_path)
32 | if platform.system() == "Windows":
33 | os.system("icacls " + install_path + " /grant Everyone:F /t")
34 | else:
35 | os.system("sudo chmod -R 777 " + install_path)
36 | os.system("echo Building libuplinkc.so")
37 | copy_command = "copy" if platform.system() == "Windows" else "cp"
38 |             command = "git clone -b " + uplinkc_version + " https://github.com/storj/uplink-c && cd uplink-c" \
39 |                       " && go build -o libuplinkc.so -buildmode=c-shared" \
40 |                       " && " + copy_command + " *.so " + install_path
41 | build_so = subprocess.Popen(command,
42 | stdout=subprocess.PIPE,
43 | stderr=subprocess.STDOUT, shell=True)
44 | output, errors = build_so.communicate()
45 | build_so.wait()
46 | if output is not None:
47 | os.system("echo " + output.decode('utf-8'))
48 | os.system("echo Building libuplinkc.so successful.")
49 | if errors is not None:
50 | os.system("echo " + errors.decode('utf-8'))
51 | os.system("echo Building libuplinkc.so failed.")
52 | if build_so.returncode != 0:
53 |                 raise SystemExit(1)
54 | except Exception as error:
55 | os.system("echo " + str(error))
56 | os.system("echo Building libuplinkc.so failed.")
57 |
58 | install.run(self)
59 |
60 |
61 | setuptools.setup(
62 | name="uplink-python",
63 | version="1.2.2.0",
64 | author="Utropicmedia",
65 | author_email="development@utropicmedia.com",
66 | license='Apache Software License',
67 | description="Python-native language binding for uplink to "
68 | "communicate with the Storj network.",
69 | long_description=long_description,
70 | long_description_content_type="text/markdown",
71 | url="https://github.com/storj-thirdparty/uplink-python",
72 |
73 | packages=['uplink_python'],
74 | install_requires=['wheel'],
75 | include_package_data=True,
76 | classifiers=[
77 | "Intended Audience :: Developers",
78 | "Programming Language :: Python :: 3",
79 | "License :: OSI Approved :: Apache Software License",
80 | "Operating System :: OS Independent",
81 | "Topic :: Software Development :: Build Tools",
82 | ],
83 | python_requires='>=3.4',
84 | cmdclass={
85 | 'install': Install,
86 | }
87 | )
88 |
--------------------------------------------------------------------------------
/test/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/storj-thirdparty/uplink-python/efba0bc50803599d4570376ad1fd737e0db65b46/test/__init__.py
--------------------------------------------------------------------------------
/test/test_cases.py:
--------------------------------------------------------------------------------
1 | # pylint: disable=missing-docstring
2 | import unittest
3 |
4 | from .test_data.access_test import AccessTest
5 | from .test_data.bucket_list_test import BucketListTest
6 | from .test_data.bucket_test import BucketTest
7 | from .test_data.helper import InitializationTest
8 | from .test_data.object_list_test import ObjectListTest
9 | from .test_data.object_test import ObjectTest
10 | from .test_data.project_test import ProjectTest
11 |
12 | if __name__ == '__main__':
13 | testList = [InitializationTest, AccessTest, ProjectTest, BucketTest, BucketListTest,
14 | ObjectTest, ObjectListTest]
15 | testLoad = unittest.TestLoader()
16 |
17 | TestList = []
18 | for testCase in testList:
19 | testSuite = testLoad.loadTestsFromTestCase(testCase)
20 | TestList.append(testSuite)
21 |
22 | newSuite = unittest.TestSuite(TestList)
23 | runner = unittest.TextTestRunner(verbosity=4)
24 | runner.run(newSuite)
25 |
--------------------------------------------------------------------------------
/test/test_data/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/storj-thirdparty/uplink-python/efba0bc50803599d4570376ad1fd737e0db65b46/test/test_data/__init__.py
--------------------------------------------------------------------------------
/test/test_data/access_test.py:
--------------------------------------------------------------------------------
1 | # pylint: disable=missing-docstring, protected-access
2 | import unittest
3 |
4 | from uplink_python.errors import StorjException, ERROR_INTERNAL
5 | from uplink_python.module_classes import Permission, SharePrefix
6 | from .helper import TestPy
7 |
8 |
9 | class AccessTest(unittest.TestCase):
10 | @classmethod
11 | def setUpClass(cls):
12 | cls.test_py = TestPy()
13 | cls.access = cls.test_py.get_access()
14 |
15 | def test1_access_parse(self):
16 | serialized_access = self.access.serialize()
17 | self.assertIsNotNone(serialized_access, "serialized_access failed")
18 | access = self.test_py.uplink.parse_access(serialized_access)
19 | self.assertIsNotNone(access, "parse_access failed")
20 |
21 | def test2_request_access(self):
22 | access = self.test_py.uplink.request_access_with_passphrase(self.test_py.satellite,
23 | self.test_py.api_key,
24 | self.test_py.encryption_phrase)
25 | self.assertIsNotNone(access, "parse_access failed")
26 |
27 | def test3_access_share_no_data(self):
28 | try:
29 | _ = self.access.share(None, None)
30 | except StorjException as error:
31 | self.assertEqual(error.code, ERROR_INTERNAL, error.details)
32 |
33 | def test4_access_share_no_prefix(self):
34 | # set permissions for the new access to be created
35 | permissions = Permission(allow_list=True, allow_delete=False)
36 | # create new access
37 | access = self.access.share(permissions, None)
38 | self.assertTrue(access.access.contents._handle != 0, "got empty access")
39 |
40 | def test5_access_share(self):
41 | # set permissions for the new access to be created
42 | permissions = Permission(allow_list=True, allow_delete=False)
43 | # set shared prefix as list of dictionaries for the new access to be created
44 | shared_prefix = [SharePrefix(bucket="alpha", prefix="")]
45 | # create new access
46 | access = self.access.share(permissions, shared_prefix)
47 | self.assertIsNotNone(access, "access_share failed")
48 |
49 |
50 | if __name__ == '__main__':
51 | unittest.main()
52 |
--------------------------------------------------------------------------------
/test/test_data/bucket_list_test.py:
--------------------------------------------------------------------------------
1 | # pylint: disable=missing-docstring
2 | import unittest
3 |
4 | from .helper import TestPy
5 |
6 |
7 | class BucketListTest(unittest.TestCase):
8 | @classmethod
9 | def setUpClass(cls):
10 | cls.test_py = TestPy()
11 | cls.access = cls.test_py.get_access()
12 | cls.project = cls.test_py.get_project()
13 | cls.bucket_names = ["alpha", "delta", "gamma", "iota", "kappa", "lambda"]
14 |
15 | def test1_ensure_buckets(self):
16 | # print("Bucket List: ", self.bucket_names)
17 | for name in self.bucket_names:
18 | bucket = self.project.ensure_bucket(name)
19 | self.assertIsNotNone(bucket, "ensure_buckets failed")
20 |
21 | def test2_list_buckets(self):
22 | # enlist all the buckets in given Storj project
23 | bucket_list = self.project.list_buckets()
24 | self.assertIsNotNone(bucket_list, "list_buckets failed")
25 |
26 | retrieved_bucket_names = list()
27 | for item in bucket_list:
28 | retrieved_bucket_names.append(item.name)
29 | # print("Retrieved Bucket List: ", retrieved_bucket_names)
30 | self.assertTrue(all(item in retrieved_bucket_names for item in self.bucket_names),
31 | "Not all buckets found in bucket list")
32 |
33 | def test3_delete_buckets(self):
34 | for name in self.bucket_names:
35 | object_list = self.project.list_objects(name)
36 | if object_list is not None:
37 | for item in object_list:
38 | self.project.delete_object(name, item.key)
39 | bucket = self.project.delete_bucket(name)
40 | self.assertIsNotNone(bucket, "delete_bucket failed")
41 |
42 | def test4_close_project(self):
43 | self.project.close()
44 |
45 |
46 | if __name__ == '__main__':
47 | unittest.main()
48 |
--------------------------------------------------------------------------------
/test/test_data/bucket_test.py:
--------------------------------------------------------------------------------
1 | # pylint: disable=missing-docstring
2 | import unittest
3 |
4 | from uplink_python.errors import StorjException, ERROR_BUCKET_ALREADY_EXISTS, ERROR_BUCKET_NOT_FOUND
5 | from .helper import TestPy
6 |
7 |
8 | class BucketTest(unittest.TestCase):
9 | @classmethod
10 | def setUpClass(cls):
11 | cls.test_py = TestPy()
12 | cls.access = cls.test_py.get_access()
13 | cls.project = cls.test_py.get_project()
14 |
15 | def test1_create_new_bucket(self):
16 | bucket = self.project.create_bucket("alpha")
17 | self.assertIsNotNone(bucket, "create_new_bucket failed")
18 |
19 | def test2_create_existing_bucket(self):
20 | try:
21 | _ = self.project.create_bucket("alpha")
22 | except StorjException as error:
23 | self.assertEqual(error.code, ERROR_BUCKET_ALREADY_EXISTS, error.details)
24 |
25 | def test3_ensure_new_bucket(self):
26 | bucket = self.project.ensure_bucket("beta")
27 | self.assertIsNotNone(bucket, "ensure_new_bucket failed")
28 |
29 | def test4_ensure_existing_bucket(self):
30 | bucket = self.project.ensure_bucket("alpha")
31 | self.assertIsNotNone(bucket, "ensure_existing_bucket failed")
32 |
33 | def test5_stat_existing_bucket(self):
34 | bucket = self.project.stat_bucket("alpha")
35 | self.assertIsNotNone(bucket, "stat_existing_bucket failed")
36 |
37 | def test6_stat_missing_bucket(self):
38 | try:
39 | _ = self.project.stat_bucket("missing")
40 | except StorjException as error:
41 | self.assertEqual(error.code, ERROR_BUCKET_NOT_FOUND, error.details)
42 |
43 | def test7_delete_existing_bucket(self):
44 | bucket = self.project.delete_bucket("alpha")
45 | self.assertIsNotNone(bucket, "delete_existing_bucket failed")
46 |
47 | def test8_delete_missing_bucket(self):
48 | try:
49 | _ = self.project.delete_bucket("missing")
50 | except StorjException as error:
51 | self.assertEqual(error.code, ERROR_BUCKET_NOT_FOUND, error.details)
52 |
53 | def test9_close_project(self):
54 | self.project.close()
55 |
56 |
57 | if __name__ == '__main__':
58 | unittest.main()
59 |
--------------------------------------------------------------------------------
/test/test_data/helper.py:
--------------------------------------------------------------------------------
1 | # pylint: disable=missing-docstring
2 | import unittest
3 |
4 | from uplink_python.uplink import Uplink
5 |
6 |
7 | class TestPy:
8 | """
9 | Python Storj class objects with all Storj functions' bindings
10 | """
11 |
12 | def __init__(self):
13 | super().__init__()
14 | # method to get satellite, api key and passphrase
15 | file_handle = open("secret.txt", 'r')
16 | self.api_key = file_handle.read()
17 | file_handle.close()
18 |
19 | self.satellite = "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S@us-central-1.tardigrade.io:7777"
20 | self.encryption_phrase = "test"
21 |
22 | self.uplink = Uplink()
23 | self.access = None
24 | self.project = None
25 |
26 | def get_access(self):
27 | self.access = self.uplink.request_access_with_passphrase(self.satellite,
28 | self.api_key,
29 | self.encryption_phrase)
30 | return self.access
31 |
32 | def get_project(self):
33 | self.project = self.access.open_project()
34 | return self.project
35 |
36 |
37 | class InitializationTest(unittest.TestCase):
38 | @classmethod
39 | def setUpClass(cls):
40 | cls.test_py = TestPy()
41 |
42 | def test1_initialize_uplink(self):
43 | self.assertIsNotNone(self.test_py, "TestPy initialization failed.")
44 |
45 | def test2_get_credentials(self):
46 | file_handle = open("secret.txt", 'r')
47 | self.assertIsNotNone(file_handle, "Credentials retrieval failed.")
48 | file_handle.close()
49 |
50 |
51 | if __name__ == '__main__':
52 | unittest.main()
53 |
--------------------------------------------------------------------------------
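TestPy above reads the API key verbatim from a secret.txt file in the working directory. A minimal sketch of preparing that file before running the tests; the key value is a placeholder:

    # write the API key created in the satellite GUI to secret.txt; TestPy reads it verbatim
    with open("secret.txt", "w") as file_handle:
        file_handle.write("change-me-to-the-api-key-created-in-satellite-gui")
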
/test/test_data/object_list_test.py:
--------------------------------------------------------------------------------
1 | # pylint: disable=missing-docstring
2 | import unittest
3 |
4 | from uplink_python.module_classes import ListObjectsOptions
5 | from .helper import TestPy
6 |
7 |
8 | class ObjectListTest(unittest.TestCase):
9 | @classmethod
10 | def setUpClass(cls):
11 | cls.test_py = TestPy()
12 | cls.access = cls.test_py.get_access()
13 | cls.project = cls.test_py.get_project()
14 | cls.object_names = ["alpha/one", "beta", "delta", "gamma", "iota", "kappa",
15 | "lambda", "alpha/two"]
16 |
17 | def test1_ensure_bucket(self):
18 | bucket = self.project.ensure_bucket("py-unit-test")
19 | self.assertIsNotNone(bucket, "ensure_bucket failed")
20 |
21 | def test2_upload_objects(self):
22 | for name in self.object_names:
23 | data_bytes = bytes("hello", 'utf-8')
24 | #
25 | upload = self.project.upload_object("py-unit-test", name)
26 | self.assertIsNotNone(upload, "upload_object failed")
27 | #
28 | _ = upload.write(data_bytes, len(data_bytes))
29 | #
30 | upload.commit()
31 |
32 | def test3_list_objects(self):
33 | # enlist all the objects in given bucket
34 | object_list = self.project.list_objects("py-unit-test")
35 | self.assertIsNotNone(object_list, "list_objects failed")
36 |
37 | expected_names = ["alpha/", "beta", "delta", "gamma", "iota", "kappa", "lambda"]
38 |
39 | retrieved_object_names = list()
40 | for item in object_list:
41 | retrieved_object_names.append(item.key)
42 | #
43 | self.assertTrue(all(item in retrieved_object_names for item in expected_names),
44 | "Not all objects found in object list")
45 |
46 | def test4_list_objects_with_prefix(self):
47 | # set list options before calling list objects (optional)
48 | list_option = ListObjectsOptions(prefix="alpha/")
49 |
50 | # enlist all the objects in given bucket
51 | object_list = self.project.list_objects("py-unit-test", list_option)
52 | self.assertIsNotNone(object_list, "list_objects failed")
53 |
54 | expected_names = ["alpha/one", "alpha/two"]
55 |
56 | retrieved_object_names = list()
57 | for item in object_list:
58 | retrieved_object_names.append(item.key)
59 | #
60 | self.assertTrue(all(item in retrieved_object_names for item in expected_names),
61 | "Not all objects found in object list")
62 |
63 | def test5_delete_objects(self):
64 | for name in self.object_names:
65 | object_ = self.project.delete_object("py-unit-test", name)
66 | self.assertIsNotNone(object_, "delete_object failed")
67 |
68 | def test6_delete_bucket(self):
69 | bucket = self.project.delete_bucket("py-unit-test")
70 | self.assertIsNotNone(bucket, "delete_bucket failed")
71 |
72 | def test7_close_project(self):
73 | self.project.close()
74 |
75 |
76 | if __name__ == '__main__':
77 | unittest.main()
78 |
--------------------------------------------------------------------------------
/test/test_data/object_test.py:
--------------------------------------------------------------------------------
1 | # pylint: disable=missing-docstring
2 | import random
3 | import string
4 | import unittest
5 |
6 | from uplink_python.errors import StorjException, ERROR_OBJECT_NOT_FOUND
7 | from .helper import TestPy
8 |
9 |
10 | class ObjectTest(unittest.TestCase):
11 | @classmethod
12 | def setUpClass(cls):
13 | cls.test_py = TestPy()
14 | cls.access = cls.test_py.get_access()
15 | cls.project = cls.test_py.get_project()
16 | cls.data_len = 5 * 1024
17 |
18 | def test1_ensure_bucket(self):
19 | bucket = self.project.ensure_bucket("alpha")
20 | self.assertIsNotNone(bucket, "ensure_bucket failed")
21 |
22 | def test2_basic_upload(self):
23 |
24 | data = ''.join(random.choices(string.ascii_uppercase +
25 | string.digits, k=self.data_len))
26 | data_bytes = bytes(data, 'utf-8')
27 | #
28 | # call to get uploader handle
29 | upload = self.project.upload_object("alpha", "data.txt")
30 | self.assertIsNotNone(upload, "upload_object failed")
31 |
32 | # initialize local variables and start uploading packets of data
33 | uploaded_total = 0
34 | while uploaded_total < self.data_len:
35 | # set packet size to be used while uploading
36 | size_to_write = 256 if (self.data_len - uploaded_total > 256)\
37 | else self.data_len - uploaded_total
38 | #
39 | # exit while loop if nothing left to upload
40 | if size_to_write == 0:
41 | break
42 | #
43 | # data bytes reading process from the last read position
44 | data_to_write = data_bytes[uploaded_total:uploaded_total + size_to_write]
45 | #
46 | # call to write data to Storj bucket
47 | bytes_written = upload.write(data_to_write, size_to_write)
48 | # self.assertTrue(error is None, error)
49 | #
50 | # exit while loop if nothing left to upload / unable to upload
51 | if bytes_written == 0:
52 | break
53 | # update last read location
54 | uploaded_total += bytes_written
55 |
56 | object_ = upload.info()
57 | self.assertIsNotNone(object_, "upload_info failed")
58 |
59 | # commit upload data to bucket
60 | upload.commit()
61 |
62 | def test3_basic_download(self):
63 | data_bytes = bytes()
64 | download = self.project.download_object("alpha", "data.txt")
65 | self.assertIsNotNone(download, "download_object failed")
66 | #
67 | # get size of file to be downloaded from storj
68 | file_size = download.file_size()
69 | #
70 | # set packet size to be used while downloading
71 | size_to_read = 256
72 | # initialize local variables and start downloading packets of data
73 | downloaded_total = 0
74 | while True:
75 | # call to read data from Storj bucket
76 | data_read, bytes_read = download.read(size_to_read)
77 | #
78 | # file writing process from the last written position if new data is downloaded
79 | if bytes_read != 0:
80 | data_bytes = data_bytes + data_read
81 | #
82 | # update last read location
83 | downloaded_total += bytes_read
84 | #
85 | # break if download complete
86 | if downloaded_total == file_size:
87 | break
88 |
89 | object_ = download.info()
90 | self.assertIsNotNone(object_, "download_info failed")
91 | #
92 | # close downloader and free downloader access
93 | download.close()
94 |
95 | def test4_stat_object(self):
96 | object_ = self.project.stat_object("alpha", "data.txt")
97 | self.assertIsNotNone(object_, "stat_object failed")
98 |
99 | def test5_delete_existing_object(self):
100 | object_ = self.project.delete_object("alpha", "data.txt")
101 | self.assertIsNotNone(object_, "delete_object failed")
102 |
103 | def test6_delete_bucket(self):
104 | bucket = self.project.delete_bucket("alpha")
105 | self.assertIsNotNone(bucket, "delete_bucket failed")
106 |
107 | def test7_close_project(self):
108 | self.project.close()
109 |
110 |
111 | if __name__ == '__main__':
112 | unittest.main()
113 |
--------------------------------------------------------------------------------
/test/test_data/project_test.py:
--------------------------------------------------------------------------------
1 | # pylint: disable=missing-docstring
2 | import unittest
3 |
4 | from .helper import TestPy
5 |
6 |
7 | class ProjectTest(unittest.TestCase):
8 | @classmethod
9 | def setUpClass(cls):
10 | cls.test_py = TestPy()
11 | cls.access = cls.test_py.get_access()
12 |
13 | def test1_open_project(self):
14 | project = self.access.open_project()
15 | self.assertIsNotNone(project, "open_project failed")
16 |
17 |
18 | if __name__ == '__main__':
19 | unittest.main()
20 |
--------------------------------------------------------------------------------
/uplink_python/__init__.py:
--------------------------------------------------------------------------------
1 | """Python Bindings for Storj (V3)"""
2 |
--------------------------------------------------------------------------------
/uplink_python/access.py:
--------------------------------------------------------------------------------
1 | """Module with Access class and access methods to get access grant to access project"""
2 | import ctypes
3 | import hashlib
4 |
5 | from uplink_python.module_classes import Permission, SharePrefix, Config
6 | from uplink_python.module_def import _ConfigStruct, _PermissionStruct, _SharePrefixStruct, \
7 | _AccessStruct, _ProjectResult, _StringResult, _AccessResult, _EncryptionKeyResult,\
8 | _EncryptionKeyStruct
9 | from uplink_python.project import Project
10 | from uplink_python.errors import _storj_exception
11 |
12 |
13 | class Access:
14 | """
15 | An Access Grant contains everything to access a project and specific buckets.
16 | It includes a potentially-restricted API Key, a potentially-restricted set of encryption
17 | information, and information about the Satellite responsible for the project's metadata.
18 |
19 | ...
20 |
21 | Attributes
22 | ----------
23 | access : int
24 | Access _handle returned from libuplinkc access_result.access
25 | uplink : Uplink
26 | uplink object used to get access
27 |
28 | Methods
29 | -------
30 | derive_encryption_key():
31 | EncryptionKey
32 | override_encryption_key():
33 | None
34 | open_project():
35 | Project
36 | config_open_project():
37 | Project
38 | serialize():
39 | String
40 | share():
41 | Access
42 | """
43 |
44 | def __init__(self, access, uplink):
45 | """Constructs all the necessary attributes for the Access object."""
46 |
47 | self.access = access
48 | self.uplink = uplink
49 |
50 | def derive_encryption_key(self, passphrase: str, salt: str):
51 | """
52 | function derives a salted encryption key for passphrase using the salt.
53 |
54 | This function is useful for deriving a salted encryption key for users when
55 | implementing multitenancy in a single app bucket.
56 |
57 | Returns
58 | -------
59 | EncryptionKey
60 | """
61 |
62 | #
63 | # declare types of arguments and response of the corresponding golang function
64 | self.uplink.m_libuplink.uplink_derive_encryption_key.argtypes = [ctypes.c_char_p,
65 | ctypes.c_void_p,
66 | ctypes.c_size_t]
67 | self.uplink.m_libuplink.uplink_derive_encryption_key.restype = _EncryptionKeyResult
68 | #
69 | # prepare the input for the function
70 | passphrase_ptr = ctypes.c_char_p(passphrase.encode('utf-8'))
71 | hash_value = hashlib.sha256()  # hash the salt with SHA-256 to get fixed-length salt bytes
72 | hash_value.update(salt.encode('utf-8'))
73 | salt_ptr = ctypes.cast(ctypes.c_char_p(hash_value.digest()), ctypes.c_void_p)
74 | length_ptr = ctypes.c_size_t(hash_value.digest_size)
75 |
76 | # salted encryption key by calling the exported golang function
77 | encryption_key_result = self.uplink.m_libuplink.uplink_derive_encryption_key(passphrase_ptr,
78 | salt_ptr,
79 | length_ptr)
80 | #
81 | # if error occurred
82 | if bool(encryption_key_result.error):
83 | raise _storj_exception(encryption_key_result.error.contents.code,
84 | encryption_key_result.error.contents.message.decode("utf-8"))
85 | return encryption_key_result.encryption_key
86 |
87 | def override_encryption_key(self, bucket_name: str, prefix: str, encryption_key):
88 | """
89 | function overrides the root encryption key for the prefix in bucket with encryptionKey.
90 |
91 | This function is useful for overriding the encryption key in user-specific
92 | access grants when implementing multitenancy in a single app bucket.
93 |
94 | Returns
95 | -------
96 | None
97 | """
98 |
99 | #
100 | # declare types of arguments and response of the corresponding golang function
101 | self.uplink.m_libuplink.uplink_access_override_encryption_key.argtypes =\
102 | [ctypes.POINTER(_AccessStruct), ctypes.c_char_p, ctypes.c_char_p,
103 | ctypes.POINTER(_EncryptionKeyStruct)]
104 | self.uplink.m_libuplink.uplink_access_override_encryption_key.restype =\
105 | _EncryptionKeyResult
106 | #
107 | # prepare the input for the function
108 | bucket_name_ptr = ctypes.c_char_p(bucket_name.encode('utf-8'))
109 | prefix_ptr = ctypes.c_char_p(prefix.encode('utf-8'))
110 |
111 | # override the bucket/prefix encryption key by calling the exported golang function
112 | error_result = self.uplink.m_libuplink.\
113 | uplink_access_override_encryption_key(self.access, bucket_name_ptr, prefix_ptr,
114 | encryption_key)
115 | #
116 | # if error occurred
117 | if bool(error_result):
118 | raise _storj_exception(error_result.contents.code,
119 | error_result.contents.message.decode("utf-8"))
120 |
121 | def open_project(self):
122 | """
123 | function opens Storj(V3) project using access grant.
124 |
125 | Returns
126 | -------
127 | Project
128 | """
129 |
130 | #
131 | # declare types of arguments and response of the corresponding golang function
132 | self.uplink.m_libuplink.uplink_open_project.argtypes = [ctypes.POINTER(_AccessStruct)]
133 | self.uplink.m_libuplink.uplink_open_project.restype = _ProjectResult
134 | #
135 | # open project by calling the exported golang function
136 | project_result = self.uplink.m_libuplink.uplink_open_project(self.access)
137 | #
138 | # if error occurred
139 | if bool(project_result.error):
140 | raise _storj_exception(project_result.error.contents.code,
141 | project_result.error.contents.message.decode("utf-8"))
142 | return Project(project_result.project, self.uplink)
143 |
144 | def config_open_project(self, config: Config):
145 | """
146 | function opens Storj(V3) project using access grant and custom configuration.
147 |
148 | Parameters
149 | ----------
150 | config : Config
151 |
152 | Returns
153 | -------
154 | Project
155 | """
156 |
157 | #
158 | # declare types of arguments and response of the corresponding golang function
159 | self.uplink.m_libuplink.uplink_config_open_project.argtypes =\
160 | [_ConfigStruct, ctypes.POINTER(_AccessStruct)]
161 | self.uplink.m_libuplink.uplink_config_open_project.restype = _ProjectResult
162 | #
163 | # prepare the input for the function
164 | if config is None:
165 | config_obj = _ConfigStruct()
166 | else:
167 | config_obj = config.get_structure()
168 | #
169 | # open project by calling the exported golang function
170 | project_result = self.uplink.m_libuplink.uplink_config_open_project(config_obj, self.access)
171 | #
172 | # if error occurred
173 | if bool(project_result.error):
174 | raise _storj_exception(project_result.error.contents.code,
175 | project_result.error.contents.message.decode("utf-8"))
176 | return Project(project_result.project, self.uplink)
177 |
178 | def serialize(self):
179 | """
180 | function serializes an access grant such that it can be used later
181 | with ParseAccess or other tools.
182 |
183 | Returns
184 | -------
185 | String
186 | """
187 |
188 | #
189 | # declare types of arguments and response of the corresponding golang function
190 | self.uplink.m_libuplink.uplink_access_serialize.argtypes = [ctypes.POINTER(_AccessStruct)]
191 | self.uplink.m_libuplink.uplink_access_serialize.restype = _StringResult
192 | #
193 | # get serialized access by calling the exported golang function
194 | string_result = self.uplink.m_libuplink.uplink_access_serialize(self.access)
195 | #
196 | # if error occurred
197 | if bool(string_result.error):
198 | raise _storj_exception(string_result.error.contents.code,
199 | string_result.error.contents.message.decode("utf-8"))
200 | return string_result.string.decode("utf-8")
201 |
202 | def share(self, permission: Permission = None, shared_prefix: [SharePrefix] = None):
203 | """
204 | function Share creates a new access grant with specific permissions.
205 |
206 | Access grants can only have their existing permissions restricted, and the resulting
207 | access grant will only allow for the intersection of all previous Share calls in the
208 | access grant construction chain.
209 |
210 | Prefixes, if provided, restrict the access grant (and internal encryption information)
211 | to only contain enough information to allow access to just those prefixes.
212 |
213 | Parameters
214 | ----------
215 | permission : Permission
216 | shared_prefix : list of SharePrefix
217 |
218 | Returns
219 | -------
220 | Access
221 | """
222 |
223 | #
224 | # declare types of arguments and response of the corresponding golang function
225 | self.uplink.m_libuplink.uplink_access_share.argtypes = [ctypes.POINTER(_AccessStruct),
226 | _PermissionStruct,
227 | ctypes.POINTER(_SharePrefixStruct),
228 | ctypes.c_size_t]
229 | self.uplink.m_libuplink.uplink_access_share.restype = _AccessResult
230 | #
231 | # prepare the input for the function
232 | # check and create valid _PermissionStruct parameter
233 | if permission is None:
234 | permission_obj = _PermissionStruct()
235 | else:
236 | permission_obj = permission.get_structure()
237 |
238 | # check and create valid Share Prefix parameter
239 | if shared_prefix is None:
240 | shared_prefix_obj = ctypes.POINTER(_SharePrefixStruct)()
241 | array_size = ctypes.c_size_t(0)
242 | else:
243 | num_of_structs = len(shared_prefix)
244 | li_array_size = (_SharePrefixStruct * num_of_structs)()
245 | array = ctypes.cast(li_array_size, ctypes.POINTER(_SharePrefixStruct))
246 | for i, val in enumerate(shared_prefix):
247 | array[i] = val.get_structure()
248 | shared_prefix_obj = array
249 | array_size = ctypes.c_size_t(num_of_structs)
250 | #
251 | # get shareable access by calling the exported golang function
252 | access_result = self.uplink.m_libuplink.uplink_access_share(self.access, permission_obj,
253 | shared_prefix_obj, array_size)
254 | #
255 | # if error occurred
256 | if bool(access_result.error):
257 | raise _storj_exception(access_result.error.contents.code,
258 | access_result.error.contents.message.decode("utf-8"))
259 | return Access(access_result.access, self.uplink)
260 |
--------------------------------------------------------------------------------
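A minimal usage sketch of the Access class above: request an access grant, restrict it with share(), and serialize the restricted grant for distribution. The satellite address, API key, passphrase, and bucket name are placeholders:

    from uplink_python.uplink import Uplink
    from uplink_python.module_classes import Permission, SharePrefix

    # placeholder credentials; use your own satellite address, API key, and passphrase
    uplink = Uplink()
    access = uplink.request_access_with_passphrase(
        "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S@us-central-1.tardigrade.io:7777",
        "change-me-to-the-api-key-created-in-satellite-gui",
        "you'll never guess this")

    # restrict the grant to listing and downloading one bucket, then serialize it for sharing
    restricted = access.share(Permission(allow_list=True, allow_download=True),
                              [SharePrefix(bucket="my-first-bucket", prefix="")])
    print(restricted.serialize())
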
/uplink_python/download.py:
--------------------------------------------------------------------------------
1 | """Module with Download class and download methods to work with object download"""
2 | # pylint: disable=too-many-arguments
3 | import ctypes
4 | import os
5 |
6 | from uplink_python.module_def import _DownloadStruct, _ReadResult, _ProjectStruct,\
7 | _ObjectResult, _Error
8 | from uplink_python.errors import _storj_exception
9 |
10 | _WINDOWS = os.name == 'nt'
11 | COPY_BUFSIZE = 1024 * 1024 if _WINDOWS else 64 * 1024
12 |
13 |
14 | class Download:
15 | """
16 | Download is a download from Storj Network.
17 |
18 | ...
19 |
20 | Attributes
21 | ----------
22 | download : int
23 | Download _handle returned from libuplinkc download_result.download
24 | uplink : Uplink
25 | uplink object used to get access
26 | project : Project
27 | project object used to create download
28 | bucket_name : Str
29 | bucket_name from which download is being processed
30 | storj_path : Str
31 | storj_path of the object being downloaded
32 |
33 | Methods
34 | -------
35 | read():
36 | Int
37 | read_file():
38 | None
39 | file_size():
40 | Int
41 | close():
42 | None
43 | info():
44 | Object
45 | """
46 |
47 | def __init__(self, download, uplink, project, bucket_name, storj_path):
48 | """Constructs all the necessary attributes for the Download object."""
49 |
50 | self.download = download
51 | self.project = project
52 | self.bucket_name = bucket_name
53 | self.storj_path = storj_path
54 | self.uplink = uplink
55 |
56 | def read(self, size_to_read: int):
57 | """
58 | function downloads up to size_to_read bytes from the object's data stream.
59 | It returns the data read as bytes and the number of bytes read.
60 |
61 | Parameters
62 | ----------
63 | size_to_read : int
64 |
65 | Returns
66 | -------
67 | bytes, int
68 | """
69 | #
70 | # declare types of arguments and response of the corresponding golang function
71 | self.uplink.m_libuplink.uplink_download_read.argtypes = [ctypes.POINTER(_DownloadStruct),
72 | ctypes.POINTER(ctypes.c_uint8),
73 | ctypes.c_size_t]
74 | self.uplink.m_libuplink.uplink_download_read.restype = _ReadResult
75 | #
76 | # prepare the inputs for the function
77 | data_size = ctypes.c_int32(size_to_read)
78 | data_to_write = [0]
79 | data_to_write = (ctypes.c_uint8 * data_size.value)(*data_to_write)
80 | data_to_write_ptr = ctypes.cast(data_to_write, ctypes.POINTER(ctypes.c_uint8))
81 | size_to_read = ctypes.c_size_t(size_to_read)
82 |
83 | # read data from Storj by calling the exported golang function
84 | read_result = self.uplink.m_libuplink.uplink_download_read(self.download, data_to_write_ptr,
85 | size_to_read)
86 | #
87 | # if error occurred
88 | if bool(read_result.error):
89 | raise _storj_exception(read_result.error.contents.code,
90 | read_result.error.contents.message.decode("utf-8"))
91 |
92 | data_read = bytes()
93 | if int(read_result.bytes_read) != 0:
94 | #
95 | # --------------------------------------------
96 | # data conversion to type python readable form
97 | # conversion of LP_c_ubyte to python readable data variable
98 | data_read = ctypes.string_at(data_to_write_ptr, int(read_result.bytes_read))
99 | return data_read, int(read_result.bytes_read)
100 |
101 | def read_file(self, file_handle, buffer_size: int = 0):
102 | """
103 | function downloads the complete object from its data stream and writes it to the file
104 | whose handle is passed as parameter.
105 |
106 | Note: File handle should be a BinaryIO, i.e. the file should be opened using the 'w+b' flag.
107 | e.g.: file_handle = open(DESTINATION_FULL_FILENAME, 'w+b')
108 | Remember to close the object stream on storj and also close the local file handle
109 | after this function exits.
110 |
111 | Parameters
112 | ----------
113 | file_handle : BinaryIO
114 | buffer_size : int
115 |
116 | Returns
117 | -------
118 | None
119 | """
120 | if not buffer_size:
121 | buffer_size = COPY_BUFSIZE
122 | file_size = self.file_size()
123 | if buffer_size > file_size:
124 | buffer_size = file_size
125 | while file_size:
126 | buf, bytes_read = self.read(buffer_size)
127 | if buf:
128 | file_handle.write(buf)
129 | file_size -= bytes_read
130 |
131 | def file_size(self):
132 | """
133 | function returns the size of object on Storj network for which download has been created.
134 |
135 | Returns
136 | -------
137 | int
138 | """
139 |
140 | # declare types of arguments and response of the corresponding golang function
141 | self.uplink.m_libuplink.uplink_stat_object.argtypes = [ctypes.POINTER(_ProjectStruct),
142 | ctypes.c_char_p, ctypes.c_char_p]
143 | self.uplink.m_libuplink.uplink_stat_object.restype = _ObjectResult
144 | #
145 | # get object information by calling the exported golang function
146 | object_result = self.uplink.m_libuplink.uplink_stat_object(self.project, self.bucket_name,
147 | self.storj_path)
148 | # if error occurred
149 | if bool(object_result.error):
150 | raise _storj_exception(object_result.error.contents.code,
151 | object_result.error.contents.message.decode("utf-8"))
152 | # find object size
153 | return int(object_result.object.contents.system.content_length)
154 |
155 | def close(self):
156 | """
157 | function closes the download.
158 |
159 | Returns
160 | -------
161 | None
162 | """
163 | #
164 | # declare types of arguments and response of the corresponding golang function
165 | self.uplink.m_libuplink.uplink_close_download.argtypes = [ctypes.POINTER(_DownloadStruct)]
166 | self.uplink.m_libuplink.uplink_close_download.restype = ctypes.POINTER(_Error)
167 | #
168 | # close downloader by calling the exported golang function
169 | error = self.uplink.m_libuplink.uplink_close_download(self.download)
170 | #
171 | # if error occurred
172 | if bool(error):
173 | raise _storj_exception(error.contents.code,
174 | error.contents.message.decode("utf-8"))
175 |
176 | def info(self):
177 | """
178 | function returns information about the downloaded object.
179 |
180 | Returns
181 | -------
182 | Object
183 | """
184 | #
185 | # declare types of arguments and response of the corresponding golang function
186 | self.uplink.m_libuplink.uplink_download_info.argtypes = [ctypes.POINTER(_DownloadStruct)]
187 | self.uplink.m_libuplink.uplink_download_info.restype = _ObjectResult
188 | #
189 | # get last download info by calling the exported golang function
190 | object_result = self.uplink.m_libuplink.uplink_download_info(self.download)
191 | #
192 | # if error occurred
193 | if bool(object_result.error):
194 | raise _storj_exception(object_result.error.contents.code,
195 | object_result.error.contents.message.decode("utf-8"))
196 | return self.uplink.object_from_result(object_result.object)
197 |
--------------------------------------------------------------------------------
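A minimal sketch of using Download.read_file above to save an object to a local file, assuming project is an open Project as in hello_storj.py; the bucket name, object key, and local filename are placeholders:

    # assumes `project` is an open Project; bucket, key, and local filename are placeholders
    file_handle = open("data-copy.txt", 'w+b')
    download = project.download_object("my-first-bucket", "data.txt")

    # stream the whole object into the local file, then release both handles
    download.read_file(file_handle)
    download.close()
    file_handle.close()
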
/uplink_python/errors.py:
--------------------------------------------------------------------------------
1 | """Python user-defined exceptions for uplink errors"""
2 |
3 | ERROR_INTERNAL = 0x02
4 | ERROR_CANCELED = 0x03
5 | ERROR_INVALID_HANDLE = 0x04
6 | ERROR_TOO_MANY_REQUESTS = 0x05
7 | ERROR_BANDWIDTH_LIMIT_EXCEEDED = 0x06
8 |
9 | ERROR_BUCKET_NAME_INVALID = 0x10
10 | ERROR_BUCKET_ALREADY_EXISTS = 0x11
11 | ERROR_BUCKET_NOT_EMPTY = 0x12
12 | ERROR_BUCKET_NOT_FOUND = 0x13
13 |
14 | ERROR_OBJECT_KEY_INVALID = 0x20
15 | ERROR_OBJECT_NOT_FOUND = 0x21
16 | ERROR_UPLOAD_DONE = 0x22
17 | ERROR_LIBUPLINK_SO_NOT_FOUND = 0x9999
18 | """_Error defines"""
19 |
20 |
21 | class StorjException(Exception):
22 | """Base class for other exceptions
23 |
24 | Attributes:
25 | code -- error code
26 | message -- error message
27 | details -- error message from uplink-c
28 | """
29 |
30 | def __init__(self, message="storj error", code=0, details=""):
31 | self.message = message
32 | self.code = code
33 | self.details = details
34 | super(StorjException, self).__init__()
35 |
36 | def __str__(self):
37 | return repr(self.message)
38 |
39 |
40 | class InternalError(StorjException):
41 | """Exception raised if internal error occurred.
42 |
43 | Attributes:
44 | details -- error message from uplink-c
45 | """
46 |
47 | def __init__(self, details):
48 | super().__init__("internal error", ERROR_INTERNAL, details)
49 |
50 |
51 | class CancelledError(StorjException):
52 | """Exception raised if operation cancelled.
53 |
54 | Attributes:
55 | details -- error message from uplink-c
56 | """
57 |
58 | def __init__(self, details):
59 | super().__init__("operation canceled", ERROR_CANCELED, details)
60 |
61 |
62 | class InvalidHandleError(StorjException):
63 | """Exception raised if handle is invalid.
64 |
65 | Attributes:
66 | details -- error message from uplink-c
67 | """
68 |
69 | def __init__(self, details):
70 | super().__init__("invalid handle", ERROR_INVALID_HANDLE, details)
71 |
72 |
73 | class TooManyRequestsError(StorjException):
74 | """Exception raised if too many requests performed.
75 |
76 | Attributes:
77 | details -- error message from uplink-c
78 | """
79 |
80 | def __init__(self, details):
81 | super().__init__("too many requests", ERROR_TOO_MANY_REQUESTS, details)
82 |
83 |
84 | class BandwidthLimitExceededError(StorjException):
85 | """Exception raised if allowed bandwidth limit exceeded.
86 |
87 | Attributes:
88 | details -- error message from uplink-c
89 | """
90 |
91 | def __init__(self, details):
92 | super().__init__("bandwidth limit exceeded", ERROR_BANDWIDTH_LIMIT_EXCEEDED, details)
93 |
94 |
95 | class BucketNameInvalidError(StorjException):
96 | """Exception raised if bucket name is invalid.
97 |
98 | Attributes:
99 | details -- error message from uplink-c
100 | """
101 |
102 | def __init__(self, details):
103 | super().__init__("invalid bucket name", ERROR_BUCKET_NAME_INVALID, details)
104 |
105 |
106 | class BucketAlreadyExistError(StorjException):
107 | """Exception raised if bucket already exists.
108 |
109 | Attributes:
110 | details -- error message from uplink-c
111 | """
112 |
113 | def __init__(self, details):
114 | super().__init__("bucket already exists", ERROR_BUCKET_ALREADY_EXISTS, details)
115 |
116 |
117 | class BucketNotEmptyError(StorjException):
118 | """Exception raised if bucket is not empty.
119 |
120 | Attributes:
121 | details -- error message from uplink-c
122 | """
123 |
124 | def __init__(self, details):
125 | super().__init__("bucket is not empty", ERROR_BUCKET_NOT_EMPTY, details)
126 |
127 |
128 | class BucketNotFoundError(StorjException):
129 | """Exception raised if bucket not found.
130 |
131 | Attributes:
132 | details -- error message from uplink-c
133 | """
134 |
135 | def __init__(self, details):
136 | super().__init__("bucket not found", ERROR_BUCKET_NOT_FOUND, details)
137 |
138 |
139 | class ObjectKeyInvalidError(StorjException):
140 | """Exception raised if object key is invalid.
141 |
142 | Attributes:
143 | details -- error message from uplink-c
144 | """
145 |
146 | def __init__(self, details):
147 | super().__init__("invalid object key", ERROR_OBJECT_KEY_INVALID, details)
148 |
149 |
150 | class ObjectNotFoundError(StorjException):
151 | """Exception raised if object not found.
152 |
153 | Attributes:
154 | details -- error message from uplink-c
155 | """
156 |
157 | def __init__(self, details):
158 | super().__init__("object not found", ERROR_OBJECT_NOT_FOUND, details)
159 |
160 |
161 | class UploadDoneError(StorjException):
162 | """Exception raised if upload is complete.
163 |
164 | Attributes:
165 | details -- error message from uplink-c
166 | """
167 |
168 | def __init__(self, details):
169 | super().__init__("upload completed", ERROR_UPLOAD_DONE, details)
170 |
171 |
172 | class LibUplinkSoError(StorjException):
173 | """Exception raised if libuplinkc.so is not found.
174 |
175 | Attributes:
176 | code -- error code
177 | details -- error message from uplink-c
178 | """
179 |
180 | def __init__(self):
181 | super().__init__("libuplinkc.so not found", ERROR_LIBUPLINK_SO_NOT_FOUND,
182 | "Please follow \"https://github.com/storj-thirdparty"
183 | "/uplink-python#option-2\" "
184 | "to build libuplinkc.so manually.")
185 |
186 |
187 | def _storj_exception(code, details):
188 | switcher = {
189 | ERROR_INTERNAL: InternalError,
190 | ERROR_CANCELED: CancelledError,
191 | ERROR_INVALID_HANDLE: InvalidHandleError,
192 | ERROR_TOO_MANY_REQUESTS: TooManyRequestsError,
193 | ERROR_BANDWIDTH_LIMIT_EXCEEDED: BandwidthLimitExceededError,
194 | ERROR_BUCKET_NAME_INVALID: BucketNameInvalidError,
195 | ERROR_BUCKET_ALREADY_EXISTS: BucketAlreadyExistError,
196 | ERROR_BUCKET_NOT_EMPTY: BucketNotEmptyError,
197 | ERROR_BUCKET_NOT_FOUND: BucketNotFoundError,
198 | ERROR_OBJECT_KEY_INVALID: ObjectKeyInvalidError,
199 | ERROR_OBJECT_NOT_FOUND: ObjectNotFoundError,
200 | ERROR_UPLOAD_DONE: UploadDoneError
201 | }
202 | return switcher.get(code, StorjException)(details=details)
203 |
--------------------------------------------------------------------------------
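Because _storj_exception maps uplink-c error codes to these subclasses, callers can catch either the broad StorjException or a specific error. A minimal sketch, assuming project is an open Project and "missing" is a placeholder bucket name:

    from uplink_python.errors import StorjException, BucketNotFoundError

    try:
        project.stat_bucket("missing")
    except BucketNotFoundError as error:
        print("bucket not found:", error.details)
    except StorjException as error:
        print("storj error", error.code, ":", error.message)
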
/uplink_python/hello_storj.py:
--------------------------------------------------------------------------------
1 | # pylint: disable=too-many-arguments
2 | """ example project for storj-python binding shows how to use binding for various tasks. """
3 |
4 | from datetime import datetime
5 |
6 | from uplink_python.errors import StorjException, BucketNotEmptyError, BucketNotFoundError
7 | from uplink_python.module_classes import ListObjectsOptions, Permission, SharePrefix
8 | from uplink_python.uplink import Uplink
9 |
10 | if __name__ == "__main__":
11 |
12 | # Storj configuration information
13 | MY_API_KEY = "change-me-to-the-api-key-created-in-satellite-gui"
14 | MY_SATELLITE = "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S@us-central-1.tardigrade.io:7777"
15 | MY_BUCKET = "my-first-bucket"
16 | MY_STORJ_UPLOAD_PATH = "(optional): path / (required): filename"
17 | # (path + filename) OR filename
18 | MY_ENCRYPTION_PASSPHRASE = "you'll never guess this"
19 |
20 | # Source and destination path and file name for testing
21 | SRC_FULL_FILENAME = "filename with extension of source file on local system"
22 | DESTINATION_FULL_FILENAME = "filename with extension to save on local system"
23 |
24 | # try-except block to catch any storj exception
25 | try:
26 | # create an object of Uplink class
27 | uplink = Uplink()
28 |
29 | # function calls
30 | # request access using passphrase
31 | print("\nRequesting Access using passphrase...")
32 | access = uplink.request_access_with_passphrase(MY_SATELLITE, MY_API_KEY,
33 | MY_ENCRYPTION_PASSPHRASE)
34 | print("Request Access: SUCCESS!")
35 | #
36 |
37 | # open Storj project
38 | print("\nOpening the Storj project, corresponding to the parsed Access...")
39 | project = access.open_project()
40 | print("Desired Storj project: OPENED!")
41 | #
42 |
43 | # enlist all the buckets in given Storj project
44 | print("\nListing bucket names and creation time...")
45 | bucket_list = project.list_buckets()
46 | for bucket in bucket_list:
47 | # as python class object
48 | print(bucket.name, " | ", datetime.fromtimestamp(bucket.created))
49 | # as python dictionary
50 | print(bucket.get_dict())
51 | print("Buckets listing: COMPLETE!")
52 | #
53 |
54 | # delete given bucket
55 | print("\nDeleting '" + MY_BUCKET + "' bucket...")
56 | try:
57 | bucket = project.delete_bucket(MY_BUCKET)
58 | # if delete bucket fails due to "not empty", delete all the objects and try again
59 | except BucketNotEmptyError as exception:
60 | print("Error while deleting bucket: ", exception.message)
61 | print("Deleting objects inside bucket and trying to delete bucket again...")
62 | # list objects in given bucket recursively using ListObjectsOptions
63 | print("Listing and deleting objects inside bucket...")
64 | objects_list = project.list_objects(MY_BUCKET, ListObjectsOptions(recursive=True))
65 | # iterate through all objects path
66 | for obj in objects_list:
67 | # delete selected object
68 | print("Deleting '" + obj.key + "'")
69 | _ = project.delete_object(MY_BUCKET, obj.key)
70 | print("Delete all objects inside the bucket : COMPLETE!")
71 |
72 | # try to delete given bucket
73 | print("Deleting '" + MY_BUCKET + "' bucket...")
74 | _ = project.delete_bucket(MY_BUCKET)
75 | print("Desired bucket: DELETED")
76 | except BucketNotFoundError as exception:
77 | print("Desired bucket delete error: ", exception.message)
78 | #
79 |
80 | # create bucket in given project
81 | print("\nCreating '" + MY_BUCKET + "' bucket...")
82 | _ = project.create_bucket(MY_BUCKET)
83 | print("Desired Bucket: CREATED!")
84 |
85 | # as an example of 'put', let's read and upload a local file
86 | # upload file/object
87 | print("\nUploading data...")
88 | # get handle of file to be uploaded
89 | file_handle = open(SRC_FULL_FILENAME, 'r+b')
90 | # get upload handle to specified bucket and upload file path
91 | upload = project.upload_object(MY_BUCKET, MY_STORJ_UPLOAD_PATH)
92 | #
93 | # upload file on storj
94 | upload.write_file(file_handle)
95 | #
96 | # commit the upload
97 | upload.commit()
98 | # close file handle
99 | file_handle.close()
100 | print("Upload: COMPLETE!")
101 | #
102 |
103 | # list objects in given bucket with above options or None
104 | print("\nListing object names...")
105 | objects_list = project.list_objects(MY_BUCKET, ListObjectsOptions(recursive=True,
106 | system=True))
107 | # print all objects path
108 | for obj in objects_list:
109 | print(obj.key, " | ", obj.is_prefix) # as python class object
110 | print(obj.get_dict()) # as python dictionary
111 | print("Objects listing: COMPLETE!")
112 | #
113 |
114 | # as an example of 'get', let's download an object and write it to a local file
115 | # download file/object
116 | print("\nDownloading data...")
117 | # get handle of file which data has to be downloaded
118 | file_handle = open(DESTINATION_FULL_FILENAME, 'w+b')
119 | # get download handle to specified bucket and object path to be downloaded
120 | download = project.download_object(MY_BUCKET, MY_STORJ_UPLOAD_PATH)
121 | #
122 | # download data from storj to file
123 | download.read_file(file_handle)
124 | #
125 | # close the download stream
126 | download.close()
127 | # close file handle
128 | file_handle.close()
129 | print("Download: COMPLETE!")
130 | #
131 |
132 | # as an example of how to create shareable Access for easy storj access without
133 | # API key and Encryption PassPhrase
134 | # create new Access with permissions
135 | print("\nCreating new Access...")
136 | # set permissions for the new access to be created
137 | permissions = Permission(allow_list=True, allow_delete=False)
138 | # set shared prefix as list of dictionaries for the new access to be created
139 | shared_prefix = [SharePrefix(bucket=MY_BUCKET, prefix="")]
140 | # create new access
141 | new_access = access.share(permissions, shared_prefix)
142 | print("New Access: CREATED!")
143 | #
144 |
145 | # generate serialized access to share
146 | print("\nGenerating serialized Access...")
147 | serialized_access = access.serialize()
148 | print("Serialized shareable Access: ", serialized_access)
149 | #
150 |
151 | #
152 | # close given project using handle
153 | print("\nClosing Storj project...")
154 | project.close()
155 | print("Project CLOSED!")
156 | #
157 |
158 | #
159 | # as an example of how to retrieve information from shareable Access for storj access
160 | # retrieving Access from serialized Access
161 | print("\nParsing serialized Access...")
162 | shared_access = uplink.parse_access(serialized_access)
163 | print("Parsing Access: COMPLETE")
164 | #
165 |
166 | # open Storj project
167 | print("\nOpening the Storj project, corresponding to the shared Access...")
168 | shared_project = shared_access.open_project()
169 | print("Desired Storj project: OPENED!")
170 | #
171 |
172 | # enlist all the buckets in given Storj project
173 | print("\nListing bucket names and creation time...")
174 | bucket_list = shared_project.list_buckets()
175 | for bucket in bucket_list:
176 | # as python class object
177 | print(bucket.name, " | ", datetime.fromtimestamp(bucket.created))
178 | # as python dictionary
179 | print(bucket.get_dict())
180 | print("Buckets listing: COMPLETE!")
181 | #
182 |
183 | # list objects in given bucket with above options or None
184 | print("\nListing object names...")
185 | objects_list = shared_project.list_objects(MY_BUCKET, ListObjectsOptions(recursive=True,
186 | system=True))
187 | # print all objects path
188 | for obj in objects_list:
189 | print(obj.key, " | ", obj.is_prefix) # as python class object
190 | print(obj.get_dict()) # as python dictionary
191 | print("Objects listing: COMPLETE!")
192 | #
193 |
194 | # try to delete given bucket
195 | print("\nTrying to delete '" + MY_STORJ_UPLOAD_PATH + "'")
196 | try:
197 | _ = shared_project.delete_object(MY_BUCKET, MY_STORJ_UPLOAD_PATH)
198 | print("Desired object: DELETED")
199 | except StorjException as exception:
200 | print("Desired object: FAILED")
201 | print("Exception: ", exception.details)
202 | #
203 |
204 | #
205 | # close given project with shared Access
206 | print("\nClosing Storj project...")
207 | shared_project.close()
208 | print("Project CLOSED!")
209 | #
210 | except StorjException as exception:
211 | print("Exception Caught: ", exception.details)
212 |
--------------------------------------------------------------------------------
/uplink_python/module_classes.py:
--------------------------------------------------------------------------------
1 | """Classes for input and output interface of parameters and returns from uplink."""
2 | # pylint: disable=too-few-public-methods, too-many-arguments
3 | import ctypes
4 |
5 | from uplink_python.module_def import _ConfigStruct, _PermissionStruct, _SharePrefixStruct,\
6 | _BucketStruct, _DownloadOptionsStruct, _SystemMetadataStruct, _CustomMetadataStruct,\
7 | _UploadOptionsStruct, _ObjectStruct, _ListObjectsOptionsStruct, _ListBucketsOptionsStruct,\
8 | _CustomMetadataEntryStruct
9 |
10 |
11 | class Config:
12 | """
13 | Config defines configuration for using uplink library.
14 |
15 | ...
16 |
17 | Attributes
18 | ----------
19 | user_agent : str
20 | dial_timeout_milliseconds : int
21 | DialTimeout defines how long client should wait for establishing a connection to peers.
22 | temp_directory : str
23 | temp_directory specifies where to save data during downloads to use less memory.
24 |
25 | Methods
26 | -------
27 | get_structure():
28 | _ConfigStruct
29 | """
30 |
31 | def __init__(self, user_agent: str = "", dial_timeout_milliseconds: int = 0,
32 | temp_directory: str = ""):
33 | """Constructs all the necessary attributes for the Config object."""
34 |
35 | self.user_agent = user_agent
36 | self.dial_timeout_milliseconds = dial_timeout_milliseconds
37 | self.temp_directory = temp_directory
38 |
39 | def get_structure(self):
40 | """Converts python class object to ctypes structure _ConfigStruct"""
41 |
42 | return _ConfigStruct(ctypes.c_char_p(self.user_agent.encode('utf-8')),
43 | ctypes.c_int32(self.dial_timeout_milliseconds),
44 | ctypes.c_char_p(self.temp_directory.encode('utf-8')))
45 |
46 |
47 | class Permission:
48 | """
49 | Permission defines what actions can be used to share.
50 |
51 | ...
52 |
53 | Attributes
54 | ----------
55 | allow_download : bool
56 | allow_download gives permission to download the object's content. It
57 | allows getting object metadata, but it does not allow listing buckets.
58 | allow_upload : bool
59 | allow_upload gives permission to create buckets and upload new objects.
60 | It does not allow overwriting existing objects unless allow_delete is
61 | granted too.
62 | allow_list : bool
63 | allow_list gives permission to list buckets. It allows getting object
64 | metadata, but it does not allow downloading the object's content.
65 | allow_delete : bool
66 | allow_delete gives permission to delete buckets and objects. Unless
67 | either allow_download or allow_list is granted too, no object metadata and
68 | no error info will be returned for deleted objects.
69 | not_before : int
70 | NotBefore restricts when the resulting access grant is valid for.
71 | If set, the resulting access grant will not work if the Satellite
72 | believes the time is before NotBefore.
73 | If set, this value should always be before NotAfter.
74 | disabled when 0.
75 | not_after : int
76 | NotAfter restricts when the resulting access grant is valid till.
77 | If set, the resulting access grant will not work if the Satellite
78 | believes the time is after NotAfter.
79 | If set, this value should always be after NotBefore.
80 | disabled when 0.
81 |
82 | Methods
83 | -------
84 | get_structure():
85 | _PermissionStruct
86 | """
87 |
88 | def __init__(self, allow_download: bool = False, allow_upload: bool = False,
89 | allow_list: bool = False, allow_delete: bool = False,
90 | not_before: int = 0, not_after: int = 0):
91 | """Constructs all the necessary attributes for the Permission object."""
92 |
93 | self.allow_download = allow_download
94 | self.allow_upload = allow_upload
95 | self.allow_list = allow_list
96 | self.allow_delete = allow_delete
97 | self.not_before = not_before
98 | self.not_after = not_after
99 |
100 | def get_structure(self):
101 | """Converts python class object to ctypes structure _PermissionStruct"""
102 |
103 | return _PermissionStruct(ctypes.c_bool(self.allow_download),
104 | ctypes.c_bool(self.allow_upload),
105 | ctypes.c_bool(self.allow_list),
106 | ctypes.c_bool(self.allow_delete),
107 | ctypes.c_int64(self.not_before),
108 | ctypes.c_int64(self.not_after))
109 |
110 |
111 | class SharePrefix:
112 | """
113 | SharePrefix defines a prefix that will be shared.
114 |
115 | ...
116 |
117 | Attributes
118 | ----------
119 | bucket : str
120 | prefix : str
121 | Prefix is the prefix of the shared object keys.
122 |
123 | Note that within a bucket, the hierarchical key derivation scheme is
124 | delineated by forward slashes (/), so encryption information will be
125 | included in the resulting access grant to decrypt any key that shares
126 | the same prefix up until the last slash.
127 |
128 | Methods
129 | -------
130 | get_structure():
131 | _SharePrefixStruct
132 | """
133 |
134 | def __init__(self, bucket: str = "", prefix: str = ""):
135 | """Constructs all the necessary attributes for the SharePrefix object."""
136 |
137 | self.bucket = bucket
138 | self.prefix = prefix
139 |
140 | def get_structure(self):
141 | """Converts python class object to ctypes structure _SharePrefixStruct"""
142 |
143 | return _SharePrefixStruct(ctypes.c_char_p(self.bucket.encode('utf-8')),
144 | ctypes.c_char_p(self.prefix.encode('utf-8')))
145 |
146 |
147 | class Bucket:
148 | """
149 | Bucket contains information about the bucket.
150 |
151 | ...
152 |
153 | Attributes
154 | ----------
155 | name : str
156 | created : int
157 |
158 | Methods
159 | -------
160 | get_structure():
161 | _BucketStruct
162 | get_dict():
163 | converts python class object to python dictionary
164 | """
165 |
166 | def __init__(self, name: str = "", created: int = 0):
167 | """Constructs all the necessary attributes for the Bucket object."""
168 |
169 | self.name = name
170 | self.created = created
171 |
172 | def get_structure(self):
173 | """Converts python class object to ctypes structure _BucketStruct"""
174 |
175 | return _BucketStruct(ctypes.c_char_p(self.name.encode('utf-8')),
176 | ctypes.c_int64(self.created))
177 |
178 | def get_dict(self):
179 | """Converts python class object to python dictionary"""
180 |
181 | return {"name": self.name, "created": self.created}
182 |
183 |
184 | class SystemMetadata:
185 | """
186 | SystemMetadata contains information about the object that cannot be changed directly.
187 |
188 | ...
189 |
190 | Attributes
191 | ----------
192 | created : int
193 | expires : int
194 | content_length : int
195 |
196 | Methods
197 | -------
198 | get_structure():
199 | _SystemMetadataStruct
200 | get_dict():
201 | converts python class object to python dictionary
202 | """
203 |
204 | def __init__(self, created: int = 0, expires: int = 0, content_length: int = 0):
205 | """Constructs all the necessary attributes for the SystemMetadata object."""
206 |
207 | self.created = created
208 | self.expires = expires
209 | self.content_length = content_length
210 |
211 | def get_structure(self):
212 | """Converts python class object to ctypes structure _SystemMetadataStruct"""
213 |
214 | return _SystemMetadataStruct(ctypes.c_int64(self.created),
215 | ctypes.c_int64(self.expires),
216 | ctypes.c_int64(self.content_length))
217 |
218 | def get_dict(self):
219 | """Converts python class object to python dictionary"""
220 |
221 | return {"created": self.created, "expires": self.expires,
222 | "content_length": self.content_length}
223 |
224 |
225 | class CustomMetadataEntry:
226 | """
227 | CustomMetadata contains custom user metadata about the object.
228 |
229 | When choosing a custom key for your application, start it with a prefix "app:key";
230 | for example, an application named "Image Board" might use the key "image-board:title".
231 |
232 | ...
233 |
234 | Attributes
235 | ----------
236 | key : str
237 | key_length : int
238 | value : str
239 | value_length : int
240 |
241 | Methods
242 | -------
243 | get_structure():
244 | _CustomMetadataEntryStruct
245 | get_dict():
246 | converts python class object to python dictionary
247 | """
248 |
249 | def __init__(self, key: str = "", key_length: int = 0, value: str = "", value_length: int = 0):
250 | """Constructs all the necessary attributes for the CustomMetadataEntry object."""
251 |
252 | self.key = key
253 | self.key_length = key_length
254 | self.value = value
255 | self.value_length = value_length
256 |
257 | def get_structure(self):
258 | """Converts python class object to ctypes structure _CustomMetadataEntryStruct"""
259 |
260 | return _CustomMetadataEntryStruct(ctypes.c_char_p(self.key.encode('utf-8')),
261 | ctypes.c_size_t(self.key_length),
262 | ctypes.c_char_p(self.value.encode('utf-8')),
263 | ctypes.c_size_t(self.value_length))
264 |
265 | def get_dict(self):
266 | """Converts python class object to python dictionary"""
267 |
268 | return {"key": self.key, "key_length": self.key_length, "value": self.value,
269 | "value_length": self.value_length}
270 |
271 |
272 | class CustomMetadata:
273 | """
274 | CustomMetadata contains a list of CustomMetadataEntry about the object.
275 |
276 | ...
277 |
278 | Attributes
279 | ----------
280 | entries : list of CustomMetadataEntry
281 | count : int
282 |
283 | Methods
284 | -------
285 | get_structure():
286 | _CustomMetadataStruct
287 | get_dict():
288 | converts python class object to python dictionary
289 | """
290 |
291 | def __init__(self, entries: [CustomMetadataEntry] = None, count: int = 0):
292 | """Constructs all the necessary attributes for the CustomMetadata object."""
293 |
294 | self.entries = entries
295 | self.count = count
296 |
297 | def get_structure(self):
298 | """Converts python class object to ctypes structure _CustomMetadataStruct"""
299 |
300 | if self.entries is None or self.count == 0:
301 | self.count = 0
302 | entries = ctypes.POINTER(_CustomMetadataEntryStruct)()
303 | else:
304 | li_array_size = (_CustomMetadataEntryStruct * self.count)()
305 | entries = ctypes.cast(li_array_size, ctypes.POINTER(_CustomMetadataEntryStruct))
306 | for i, val in enumerate(self.entries):
307 | entries[i] = val.get_structure()
308 |
309 | return _CustomMetadataStruct(entries, ctypes.c_size_t(self.count))
310 |
311 | def get_dict(self):
312 | """Converts python class object to python dictionary"""
313 |
314 | entries = self.entries
315 | if entries is None or self.count == 0:
316 | self.count = 0
317 | entries = [CustomMetadataEntry()]
318 | return {"entries": [entry.get_dict() for entry in entries], "count": self.count}
319 |
320 |
321 | class Object:
322 | """
323 | Object contains information about an object.
324 |
325 | ...
326 |
327 | Attributes
328 | ----------
329 | key : str
330 | is_prefix : bool
331 | is_prefix indicates whether the Key is a prefix for other objects.
332 | system : SystemMetadata
333 | custom : CustomMetadata
334 |
335 | Methods
336 | -------
337 | get_structure():
338 | _ObjectStruct
339 | get_dict():
340 | converts python class object to python dictionary
341 | """
342 |
343 | def __init__(self, key: str = "", is_prefix: bool = False, system: SystemMetadata = None,
344 | custom: CustomMetadata = None):
345 | """Constructs all the necessary attributes for the Object object."""
346 |
347 | self.key = key
348 | self.is_prefix = is_prefix
349 | self.system = system
350 | self.custom = custom
351 |
352 | def get_structure(self):
353 | """Converts python class object to ctypes structure _ObjectStruct"""
354 |
355 | if self.system is None:
356 | system = _SystemMetadataStruct()
357 | else:
358 | system = self.system.get_structure()
359 |
360 | if self.custom is None:
361 | custom = _CustomMetadataStruct()
362 | else:
363 | custom = self.custom.get_structure()
364 |
365 | return _ObjectStruct(ctypes.c_char_p(self.key.encode('utf-8')),
366 | ctypes.c_bool(self.is_prefix), system, custom)
367 |
368 | def get_dict(self):
369 | """Converts python class object to python dictionary"""
370 |
371 | system = self.system
372 | custom = self.custom
373 | if system is None:
374 | system = SystemMetadata()
375 | if custom is None:
376 | custom = CustomMetadata()
377 | return {"key": self.key, "is_prefix": self.is_prefix, "system": system.get_dict(),
378 | "custom": custom.get_dict()}
379 |
380 |
381 | class ListObjectsOptions:
382 | """
383 | ListObjectsOptions defines object listing options.
384 |
385 | ...
386 |
387 | Attributes
388 | ----------
389 | prefix : str
390 | prefix allows filtering objects by a key prefix. If not empty,
391 | it must end with a slash.
392 | cursor : str
393 | cursor sets the starting position of the iterator.
394 | The first item listed will be the one after the cursor.
395 | recursive : bool
396 | recursive iterates the objects without collapsing prefixes.
397 | system : bool
398 | system includes SystemMetadata in the results.
399 | custom : bool
400 | custom includes CustomMetadata in the results.
401 |
402 | Methods
403 | -------
404 | get_structure():
405 | _ListObjectsOptionsStruct
406 | """
407 |
408 | def __init__(self, prefix: str = "", cursor: str = "", recursive: bool = False,
409 | system: bool = False, custom: bool = False):
410 | """Constructs all the necessary attributes for the ListObjectsOptions object."""
411 |
412 | self.prefix = prefix
413 | self.cursor = cursor
414 | self.recursive = recursive
415 | self.system = system
416 | self.custom = custom
417 |
418 | def get_structure(self):
419 | """Converts python class object to ctypes structure _ListObjectsOptionsStruct"""
420 |
421 | return _ListObjectsOptionsStruct(ctypes.c_char_p(self.prefix.encode('utf-8')),
422 | ctypes.c_char_p(self.cursor.encode('utf-8')),
423 | ctypes.c_bool(self.recursive),
424 | ctypes.c_bool(self.system),
425 | ctypes.c_bool(self.custom))
426 |
427 |
428 | class ListBucketsOptions:
429 | """
430 | ListBucketsOptions defines bucket listing options.
431 |
432 | ...
433 |
434 | Attributes
435 | ----------
436 | cursor : str
437 | Cursor sets the starting position of the iterator.
438 | The first item listed will be the one after the cursor.
439 |
440 | Methods
441 | -------
442 | get_structure():
443 | _ListBucketsOptionsStruct
444 | """
445 |
446 | def __init__(self, cursor: str = ""):
447 | """Constructs all the necessary attributes for the ListBucketsOptions object."""
448 |
449 | self.cursor = cursor
450 |
451 | def get_structure(self):
452 | """Converts python class object to ctypes structure _ListBucketsOptionsStruct"""
453 |
454 | return _ListBucketsOptionsStruct(ctypes.c_char_p(self.cursor.encode('utf-8')))
455 |
456 |
457 | class UploadOptions:
458 | """
459 | UploadOptions contains additional options for uploading.
460 |
461 | ...
462 |
463 | Attributes
464 | ----------
465 | expires : int
466 | When expires is 0 or negative, it means no expiration.
467 |
468 | Methods
469 | -------
470 | get_structure():
471 | _UploadOptionsStruct
472 | """
473 |
474 | def __init__(self, expires: int):
475 | """Constructs all the necessary attributes for the UploadOptions object."""
476 |
477 | self.expires = expires
478 |
479 | def get_structure(self):
480 | """Converts python class object to ctypes structure _UploadOptionsStruct"""
481 |
482 | return _UploadOptionsStruct(ctypes.c_int64(self.expires))
483 |
484 |
485 | class DownloadOptions:
486 | """
487 | DownloadOptions contains additional options for downloading.
488 |
489 | ...
490 |
491 | Attributes
492 | ----------
493 | offset : int
494 | length : int
495 | When length is negative, it will read until the end of the blob.
496 |
497 | Methods
498 | -------
499 | get_structure():
500 | _DownloadOptionsStruct
501 | """
502 |
503 | def __init__(self, offset: int, length: int):
504 | """Constructs all the necessary attributes for the DownloadOptions object."""
505 |
506 | self.offset = offset
507 | self.length = length
508 |
509 | def get_structure(self):
510 | """Converts python class object to ctypes structure _DownloadOptionsStruct"""
511 |
512 | return _DownloadOptionsStruct(ctypes.c_int64(self.offset),
513 | ctypes.c_int64(self.length))
514 |
--------------------------------------------------------------------------------
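
The classes above are thin wrappers: applications build them with plain Python values, `get_structure()` produces the ctypes struct handed to libuplinkc, and `get_dict()` produces a plain dictionary for application code. A minimal usage sketch (values are illustrative; it only assumes the package layout shown in this repository):

from uplink_python.module_classes import CustomMetadata, CustomMetadataEntry, ListObjectsOptions

# custom metadata keys use an "app:key" prefix, e.g. an "Image Board" app tagging a title
key, value = "image-board:title", "Sunset"
metadata = CustomMetadata(entries=[CustomMetadataEntry(key=key, key_length=len(key),
                                                       value=value, value_length=len(value))],
                          count=1)

# list everything under a prefix, including system and custom metadata in the results
options = ListObjectsOptions(prefix="photos/", recursive=True, system=True, custom=True)

print(metadata.get_dict())    # plain-dict view for application code
_ = options.get_structure()   # ctypes struct that project.py passes to libuplinkc
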
/uplink_python/module_def.py:
--------------------------------------------------------------------------------
1 | """C-type Classes for input and output interaction with libuplinkc."""
2 | # pylint: disable=too-few-public-methods
3 | import ctypes
4 |
5 |
6 | class _ConfigStruct(ctypes.Structure):
7 | """Config ctypes structure for internal processing."""
8 |
9 | _fields_ = [("user_agent", ctypes.c_char_p), ("dial_timeout_milliseconds", ctypes.c_int32),
10 | ("temp_directory", ctypes.c_char_p)]
11 |
12 |
13 | class _AccessStruct(ctypes.Structure):
14 | """Access ctypes structure for internal processing."""
15 |
16 | _fields_ = [("_handle", ctypes.c_size_t)]
17 |
18 |
19 | class _EncryptionKeyStruct(ctypes.Structure):
20 | """Project ctypes structure for internal processing."""
21 |
22 | _fields_ = [("_handle", ctypes.c_size_t)]
23 |
24 |
25 | class _PermissionStruct(ctypes.Structure):
26 | """Permission ctypes structure for internal processing."""
27 |
28 | _fields_ = [("allow_download", ctypes.c_bool), ("allow_upload", ctypes.c_bool),
29 | ("allow_list", ctypes.c_bool), ("allow_delete", ctypes.c_bool),
30 | ("not_before", ctypes.c_int64), ("not_after", ctypes.c_int64)]
31 |
32 |
33 | class _SharePrefixStruct(ctypes.Structure):
34 | """SharePrefix ctypes structure for internal processing."""
35 |
36 | _fields_ = [("bucket", ctypes.c_char_p), ("prefix", ctypes.c_char_p)]
37 |
38 |
39 | class _BucketStruct(ctypes.Structure):
40 | """Bucket ctypes structure for internal processing."""
41 |
42 | _fields_ = [("name", ctypes.c_char_p), ("created", ctypes.c_int64)]
43 |
44 |
45 | class _ProjectStruct(ctypes.Structure):
46 | """Project ctypes structure for internal processing."""
47 |
48 | _fields_ = [("_handle", ctypes.c_size_t)]
49 |
50 |
51 | class _SystemMetadataStruct(ctypes.Structure):
52 | """SystemMetadata ctypes structure for internal processing."""
53 |
54 | _fields_ = [("created", ctypes.c_int64), ("expires", ctypes.c_int64),
55 | ("content_length", ctypes.c_int64)]
56 |
57 |
58 | class _CustomMetadataEntryStruct(ctypes.Structure):
59 | """CustomMetadataEntry ctypes structure for internal processing."""
60 |
61 | _fields_ = [("key", ctypes.c_char_p), ("key_length", ctypes.c_size_t),
62 | ("value", ctypes.c_char_p), ("value_length", ctypes.c_size_t)]
63 |
64 |
65 | class _CustomMetadataStruct(ctypes.Structure):
66 | """CustomMetadata ctypes structure for internal processing."""
67 |
68 | _fields_ = [("entries", ctypes.POINTER(_CustomMetadataEntryStruct)), ("count", ctypes.c_size_t)]
69 |
70 |
71 | class _ObjectStruct(ctypes.Structure):
72 | """Object ctypes structure for internal processing."""
73 |
74 | _fields_ = [("key", ctypes.c_char_p), ("is_prefix", ctypes.c_bool),
75 | ("system", _SystemMetadataStruct), ("custom", _CustomMetadataStruct)]
76 |
77 |
78 | class _ListObjectsOptionsStruct(ctypes.Structure):
79 | """ListObjectsOptions ctypes structure for internal processing."""
80 |
81 | _fields_ = [("prefix", ctypes.c_char_p), ("cursor", ctypes.c_char_p),
82 | ("recursive", ctypes.c_bool), ("system", ctypes.c_bool), ("custom", ctypes.c_bool)]
83 |
84 |
85 | class _ListBucketsOptionsStruct(ctypes.Structure):
86 | """ListBucketsOptions ctypes structure for internal processing."""
87 |
88 | _fields_ = [("cursor", ctypes.c_char_p)]
89 |
90 |
91 | class _ObjectIterator(ctypes.Structure):
92 | """ObjectIterator ctypes structure"""
93 |
94 | _fields_ = [("_handle", ctypes.c_size_t)]
95 |
96 |
97 | class _BucketIterator(ctypes.Structure):
98 | """BucketIterator ctypes structure for internal processing"""
99 |
100 | _fields_ = [("_handle", ctypes.c_size_t)]
101 |
102 |
103 | class _UploadStruct(ctypes.Structure):
104 | """Upload ctypes structure for internal processing."""
105 |
106 | _fields_ = [("_handle", ctypes.c_size_t)]
107 |
108 |
109 | class _UploadOptionsStruct(ctypes.Structure):
110 | """UploadOptions ctypes structure for internal processing."""
111 |
112 | _fields_ = [("expires", ctypes.c_int64)]
113 |
114 |
115 | class _DownloadStruct(ctypes.Structure):
116 | """Download ctypes structure for internal processing."""
117 |
118 | _fields_ = [("_handle", ctypes.c_size_t)]
119 |
120 |
121 | class _DownloadOptionsStruct(ctypes.Structure):
122 | """DownloadOptions ctypes structure for internal processing."""
123 |
124 | _fields_ = [("offset", ctypes.c_int64), ("length", ctypes.c_int64)]
125 |
126 |
127 | class _Error(ctypes.Structure):
128 | """Error ctypes structure for internal processing."""
129 |
130 | _fields_ = [("code", ctypes.c_int32), ("message", ctypes.c_char_p)]
131 |
132 |
133 | class _ProjectResult(ctypes.Structure):
134 | """ProjectResult ctypes structure"""
135 |
136 | _fields_ = [("project", ctypes.POINTER(_ProjectStruct)), ("error", ctypes.POINTER(_Error))]
137 |
138 |
139 | class _BucketResult(ctypes.Structure):
140 | """BucketResult ctypes structure"""
141 |
142 | _fields_ = [("bucket", ctypes.POINTER(_BucketStruct)), ("error", ctypes.POINTER(_Error))]
143 |
144 |
145 | class _UploadResult(ctypes.Structure):
146 | """UploadResult ctypes structure"""
147 |
148 | _fields_ = [("upload", ctypes.POINTER(_UploadStruct)), ("error", ctypes.POINTER(_Error))]
149 |
150 |
151 | class _DownloadResult(ctypes.Structure):
152 | """DownloadResult ctypes structure"""
153 |
154 | _fields_ = [("download", ctypes.POINTER(_DownloadStruct)), ("error", ctypes.POINTER(_Error))]
155 |
156 |
157 | class _AccessResult(ctypes.Structure):
158 | """AccessResult ctypes structure"""
159 |
160 | _fields_ = [("access", ctypes.POINTER(_AccessStruct)), ("error", ctypes.POINTER(_Error))]
161 |
162 |
163 | class _StringResult(ctypes.Structure):
164 | """StringResult ctypes structure"""
165 |
166 | _fields_ = [("string", ctypes.c_char_p), ("error", ctypes.POINTER(_Error))]
167 |
168 |
169 | class _ObjectResult(ctypes.Structure):
170 | """ObjectResult ctypes structure"""
171 |
172 | _fields_ = [("object", ctypes.POINTER(_ObjectStruct)), ("error", ctypes.POINTER(_Error))]
173 |
174 |
175 | class _WriteResult(ctypes.Structure):
176 | """WriteResult ctypes structure"""
177 |
178 | _fields_ = [("bytes_written", ctypes.c_size_t), ("error", ctypes.POINTER(_Error))]
179 |
180 |
181 | class _ReadResult(ctypes.Structure):
182 | """ReadResult ctypes structure"""
183 |
184 | _fields_ = [("bytes_read", ctypes.c_size_t), ("error", ctypes.POINTER(_Error))]
185 |
186 |
187 | class _EncryptionKeyResult(ctypes.Structure):
188 | """EncryptionKeyResult ctypes structure"""
189 |
190 | _fields_ = [("encryption_key", ctypes.POINTER(_EncryptionKeyStruct)),
191 | ("error", ctypes.POINTER(_Error))]
192 |
--------------------------------------------------------------------------------
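
Every `_*Result` structure above pairs a payload pointer with a pointer to `_Error`; the higher-level modules check the error pointer first and only then dereference the payload. A minimal sketch of that pattern, assuming a `_BucketResult` obtained from a libuplinkc call (the real binding raises `_storj_exception` from errors.py; `RuntimeError` is used here only as a stand-in):

from uplink_python.module_def import _BucketResult

def _unpack_bucket_result(bucket_result: _BucketResult):
    """Sketch of the result-unpacking pattern used throughout the binding."""
    if bool(bucket_result.error):                # non-NULL error pointer means the call failed
        error = bucket_result.error.contents
        raise RuntimeError("libuplinkc error %d: %s"
                           % (error.code, error.message.decode("utf-8")))
    return bucket_result.bucket.contents         # dereference the payload pointer
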
/uplink_python/project.py:
--------------------------------------------------------------------------------
1 | """Module with Project class and project methods to work with buckets and objects"""
2 | import ctypes
3 |
4 | from uplink_python.module_classes import ListBucketsOptions, ListObjectsOptions,\
5 | UploadOptions, DownloadOptions
6 | from uplink_python.module_def import _BucketStruct, _ObjectStruct, _ListObjectsOptionsStruct,\
7 | _ObjectResult, _ListBucketsOptionsStruct, _UploadOptionsStruct, _DownloadOptionsStruct,\
8 | _ProjectStruct, _BucketResult, _BucketIterator, _ObjectIterator, _DownloadResult,\
9 | _UploadResult, _Error
10 | from uplink_python.upload import Upload
11 | from uplink_python.download import Download
12 | from uplink_python.errors import _storj_exception
13 |
14 |
15 | class Project:
16 | """
17 | Project provides access to managing buckets and objects.
18 |
19 | ...
20 |
21 | Attributes
22 | ----------
23 | project : int
24 | Project _handle returned from libuplinkc project_result.project
25 | uplink : Uplink
26 | uplink object used to get access
27 |
28 | Methods
29 | -------
30 | create_bucket():
31 | Bucket
32 | ensure_bucket():
33 | Bucket
34 | stat_bucket():
35 | Bucket
36 | list_buckets():
37 | list of Bucket
38 | delete_bucket():
39 | Bucket
40 | stat_object():
41 | Object
42 | list_objects():
43 | list of Object
44 | delete_object():
45 | Object
46 | close():
47 | None
48 | upload_object():
49 | Upload
50 | download_object():
51 | Download
52 | """
53 |
54 | def __init__(self, project, uplink):
55 | """Constructs all the necessary attributes for the Project object."""
56 |
57 | self.project = project
58 | self.uplink = uplink
59 |
60 | def create_bucket(self, bucket_name: str):
61 | """
62 | function creates a new bucket.
63 | When the bucket already exists, it throws a BucketAlreadyExistError exception.
64 |
65 | Parameters
66 | ----------
67 | bucket_name : str
68 |
69 | Returns
70 | -------
71 | Bucket
72 | """
73 |
74 | #
75 | # declare types of arguments and response of the corresponding golang function
76 | self.uplink.m_libuplink.uplink_create_bucket.argtypes = [ctypes.POINTER(_ProjectStruct),
77 | ctypes.c_char_p]
78 | self.uplink.m_libuplink.uplink_create_bucket.restype = _BucketResult
79 | #
80 | # prepare the input for the function
81 | bucket_name_ptr = ctypes.c_char_p(bucket_name.encode('utf-8'))
82 |
83 | # create bucket by calling the exported golang function
84 | bucket_result = self.uplink.m_libuplink.uplink_create_bucket(self.project, bucket_name_ptr)
85 | #
86 | # if error occurred
87 | if bool(bucket_result.error):
88 | raise _storj_exception(bucket_result.error.contents.code,
89 | bucket_result.error.contents.message.decode("utf-8"))
90 | return self.uplink.bucket_from_result(bucket_result.bucket)
91 |
92 | def ensure_bucket(self, bucket_name: str):
93 | """
94 | function ensures that a bucket exists or creates a new one.
95 |
96 | When the bucket already exists, it returns a valid Bucket and no error.
97 |
98 | Parameters
99 | ----------
100 | bucket_name : str
101 |
102 | Returns
103 | -------
104 | Bucket
105 | """
106 |
107 | #
108 | # declare types of arguments and response of the corresponding golang function
109 | self.uplink.m_libuplink.uplink_ensure_bucket.argtypes = [ctypes.POINTER(_ProjectStruct),
110 | ctypes.c_char_p]
111 | self.uplink.m_libuplink.uplink_ensure_bucket.restype = _BucketResult
112 | #
113 | # prepare the input for the function
114 | bucket_name_ptr = ctypes.c_char_p(bucket_name.encode('utf-8'))
115 |
116 | # open bucket if doesn't exist by calling the exported golang function
117 | bucket_result = self.uplink.m_libuplink.uplink_ensure_bucket(self.project, bucket_name_ptr)
118 | #
119 | # if error occurred
120 | if bool(bucket_result.error):
121 | raise _storj_exception(bucket_result.error.contents.code,
122 | bucket_result.error.contents.message.decode("utf-8"))
123 | return self.uplink.bucket_from_result(bucket_result.bucket)
124 |
125 | def stat_bucket(self, bucket_name: str):
126 | """
127 | function returns information about a bucket.
128 |
129 | Parameters
130 | ----------
131 | bucket_name : str
132 |
133 | Returns
134 | -------
135 | Bucket
136 | """
137 |
138 | #
139 | # declare types of arguments and response of the corresponding golang function
140 | self.uplink.m_libuplink.uplink_stat_bucket.argtypes = [ctypes.POINTER(_ProjectStruct),
141 | ctypes.c_char_p]
142 | self.uplink.m_libuplink.uplink_stat_bucket.restype = _BucketResult
143 | #
144 | # prepare the input for the function
145 | bucket_name_ptr = ctypes.c_char_p(bucket_name.encode('utf-8'))
146 |
147 | # get bucket information by calling the exported golang function
148 | bucket_result = self.uplink.m_libuplink.uplink_stat_bucket(self.project, bucket_name_ptr)
149 | #
150 | # if error occurred
151 | if bool(bucket_result.error):
152 | raise _storj_exception(bucket_result.error.contents.code,
153 | bucket_result.error.contents.message.decode("utf-8"))
154 | return self.uplink.bucket_from_result(bucket_result.bucket)
155 |
156 | def list_buckets(self, list_bucket_options: ListBucketsOptions = None):
157 | """
158 | function returns a list of buckets with all their information.
159 |
160 | Parameters
161 | ----------
162 | list_bucket_options : ListBucketsOptions (optional)
163 |
164 | Returns
165 | -------
166 | list of Bucket
167 | """
168 |
169 | #
170 | # declare types of arguments and response of the corresponding golang function
171 | self.uplink.m_libuplink.uplink_list_buckets.argtypes =\
172 | [ctypes.POINTER(_ProjectStruct), ctypes.POINTER(_ListBucketsOptionsStruct)]
173 | self.uplink.m_libuplink.uplink_list_buckets.restype =\
174 | ctypes.POINTER(_BucketIterator)
175 | #
176 | self.uplink.m_libuplink.uplink_bucket_iterator_item.argtypes =\
177 | [ctypes.POINTER(_BucketIterator)]
178 | self.uplink.m_libuplink.uplink_bucket_iterator_item.restype =\
179 | ctypes.POINTER(_BucketStruct)
180 | #
181 | self.uplink.m_libuplink.uplink_bucket_iterator_err.argtypes =\
182 | [ctypes.POINTER(_BucketIterator)]
183 | self.uplink.m_libuplink.uplink_bucket_iterator_err.restype =\
184 | ctypes.POINTER(_Error)
185 | #
186 | self.uplink.m_libuplink.uplink_bucket_iterator_next.argtypes =\
187 | [ctypes.POINTER(_BucketIterator)]
188 | self.uplink.m_libuplink.uplink_bucket_iterator_next.restype =\
189 | ctypes.c_bool
190 | #
191 | # prepare the input for the function
192 | if list_bucket_options is None:
193 | list_bucket_options_obj = ctypes.POINTER(_ListBucketsOptionsStruct)()
194 | else:
195 | list_bucket_options_obj = ctypes.byref(list_bucket_options.get_structure())
196 |
197 | # get bucket list by calling the exported golang function
198 | bucket_iterator = self.uplink.m_libuplink.uplink_list_buckets(self.project,
199 | list_bucket_options_obj)
200 |
201 | bucket_iterator_err = self.uplink.m_libuplink.uplink_bucket_iterator_err(bucket_iterator)
202 | if bool(bucket_iterator_err):
203 | raise _storj_exception(bucket_iterator_err.contents.code,
204 | bucket_iterator_err.contents.message.decode("utf-8"))
205 |
206 | bucket_list = list()
207 | while self.uplink.m_libuplink.uplink_bucket_iterator_next(bucket_iterator):
208 | bucket = self.uplink.m_libuplink.uplink_bucket_iterator_item(bucket_iterator)
209 | bucket_list.append(self.uplink.bucket_from_result(bucket))
210 |
211 | return bucket_list
212 |
213 | def delete_bucket(self, bucket_name: str):
214 | """
215 | function deletes a bucket.
216 |
217 | When the bucket is not empty, it throws a BucketNotEmptyError exception.
218 |
219 | Parameters
220 | ----------
221 | bucket_name : str
222 |
223 | Returns
224 | -------
225 | Bucket
226 | """
227 |
228 | #
229 | # declare types of arguments and response of the corresponding golang function
230 | self.uplink.m_libuplink.uplink_delete_bucket.argtypes = [ctypes.POINTER(_ProjectStruct),
231 | ctypes.c_char_p]
232 | self.uplink.m_libuplink.uplink_delete_bucket.restype = _BucketResult
233 | #
234 | # prepare the input for the function
235 | bucket_name_ptr = ctypes.c_char_p(bucket_name.encode('utf-8'))
236 |
237 | # delete bucket by calling the exported golang function
238 | bucket_result = self.uplink.m_libuplink.uplink_delete_bucket(self.project, bucket_name_ptr)
239 | #
240 | # if error occurred
241 | if bool(bucket_result.error):
242 | raise _storj_exception(bucket_result.error.contents.code,
243 | bucket_result.error.contents.message.decode("utf-8"))
244 | return self.uplink.bucket_from_result(bucket_result.bucket)
245 |
246 | def stat_object(self, bucket_name: str, storj_path: str):
247 | """
248 | function returns information about an object at the specific key.
249 |
250 | Parameters
251 | ----------
252 | bucket_name : str
253 | storj_path : str
254 |
255 | Returns
256 | -------
257 | Object
258 | """
259 |
260 | #
261 | # declare types of arguments and response of the corresponding golang function
262 | self.uplink.m_libuplink.uplink_stat_object.argtypes = [ctypes.POINTER(_ProjectStruct),
263 | ctypes.c_char_p, ctypes.c_char_p]
264 | self.uplink.m_libuplink.uplink_stat_object.restype = _ObjectResult
265 | #
266 | # prepare the input for the function
267 | bucket_name_ptr = ctypes.c_char_p(bucket_name.encode('utf-8'))
268 | storj_path_ptr = ctypes.c_char_p(storj_path.encode('utf-8'))
269 |
270 | # get object information by calling the exported golang function
271 | object_result = self.uplink.m_libuplink.uplink_stat_object(self.project, bucket_name_ptr,
272 | storj_path_ptr)
273 | #
274 | # if error occurred
275 | if bool(object_result.error):
276 | raise _storj_exception(object_result.error.contents.code,
277 | object_result.error.contents.message.decode("utf-8"))
278 | return self.uplink.object_from_result(object_result.object)
279 |
280 | def list_objects(self, bucket_name: str, list_object_options: ListObjectsOptions = None):
281 | """
282 | function returns a list of objects with all their information.
283 |
284 | Parameters
285 | ----------
286 | bucket_name : str
287 | list_object_options : ListObjectsOptions (optional)
288 |
289 | Returns
290 | -------
291 | list of Object
292 | """
293 |
294 | #
295 | # declare types of arguments and response of the corresponding golang function
296 | self.uplink.m_libuplink.uplink_list_objects.argtypes =\
297 | [ctypes.POINTER(_ProjectStruct), ctypes.c_char_p,
298 | ctypes.POINTER(_ListObjectsOptionsStruct)]
299 | self.uplink.m_libuplink.uplink_list_objects.restype =\
300 | ctypes.POINTER(_ObjectIterator)
301 | #
302 | self.uplink.m_libuplink.uplink_object_iterator_item.argtypes =\
303 | [ctypes.POINTER(_ObjectIterator)]
304 | self.uplink.m_libuplink.uplink_object_iterator_item.restype =\
305 | ctypes.POINTER(_ObjectStruct)
306 | #
307 | self.uplink.m_libuplink.uplink_object_iterator_err.argtypes =\
308 | [ctypes.POINTER(_ObjectIterator)]
309 | self.uplink.m_libuplink.uplink_object_iterator_err.restype =\
310 | ctypes.POINTER(_Error)
311 | #
312 | self.uplink.m_libuplink.uplink_object_iterator_next.argtypes =\
313 | [ctypes.POINTER(_ObjectIterator)]
314 | self.uplink.m_libuplink.uplink_object_iterator_next.restype =\
315 | ctypes.c_bool
316 | #
317 | # prepare the input for the function
318 | if list_object_options is None:
319 | list_object_options_obj = ctypes.POINTER(_ListObjectsOptionsStruct)()
320 | else:
321 | list_object_options_obj = ctypes.byref(list_object_options.get_structure())
322 | bucket_name_ptr = ctypes.c_char_p(bucket_name.encode('utf-8'))
323 |
324 | # get object list by calling the exported golang function
325 | object_iterator = self.uplink.m_libuplink.uplink_list_objects(self.project, bucket_name_ptr,
326 | list_object_options_obj)
327 |
328 | object_iterator_err = self.uplink.m_libuplink.uplink_object_iterator_err(object_iterator)
329 | if bool(object_iterator_err):
330 | raise _storj_exception(object_iterator_err.contents.code,
331 | object_iterator_err.contents.message.decode("utf-8"))
332 |
333 | object_list = list()
334 | while self.uplink.m_libuplink.uplink_object_iterator_next(object_iterator):
335 | object_ = self.uplink.m_libuplink.uplink_object_iterator_item(object_iterator)
336 | object_list.append(self.uplink.object_from_result(object_))
337 | return object_list
338 |
339 | def delete_object(self, bucket_name: str, storj_path: str):
340 | """
341 | function deletes the object at the specific key.
342 |
343 | Parameters
344 | ----------
345 | bucket_name : str
346 | storj_path : str
347 |
348 | Returns
349 | -------
350 | Object
351 | """
352 |
353 | #
354 | # declare types of arguments and response of the corresponding golang function
355 | self.uplink.m_libuplink.uplink_delete_object.argtypes = [ctypes.POINTER(_ProjectStruct),
356 | ctypes.c_char_p, ctypes.c_char_p]
357 | self.uplink.m_libuplink.uplink_delete_object.restype = _ObjectResult
358 | #
359 | # prepare the input for the function
360 | bucket_name_ptr = ctypes.c_char_p(bucket_name.encode('utf-8'))
361 | storj_path_ptr = ctypes.c_char_p(storj_path.encode('utf-8'))
362 |
363 | # delete object by calling the exported golang function
364 | object_result = self.uplink.m_libuplink.uplink_delete_object(self.project, bucket_name_ptr,
365 | storj_path_ptr)
366 | #
367 | # if error occurred
368 | if bool(object_result.error):
369 | raise _storj_exception(object_result.error.contents.code,
370 | object_result.error.contents.message.decode("utf-8"))
371 | return self.uplink.object_from_result(object_result.object)
372 |
373 | def close(self):
374 | """
375 | function closes the project and all associated resources.
376 |
377 | Returns
378 | -------
379 | None
380 | """
381 | #
382 | # declare types of arguments and response of the corresponding golang function
383 | self.uplink.m_libuplink.uplink_close_project.argtypes = [ctypes.POINTER(_ProjectStruct)]
384 | self.uplink.m_libuplink.uplink_close_project.restype = ctypes.POINTER(_Error)
385 | #
386 | # close Storj project by calling the exported golang function
387 | error = self.uplink.m_libuplink.uplink_close_project(self.project)
388 | #
389 | # if error occurred
390 | if bool(error):
391 | raise _storj_exception(error.contents.code,
392 | error.contents.message.decode("utf-8"))
393 |
394 | def upload_object(self, bucket_name: str, storj_path: str,
395 | upload_options: UploadOptions = None):
396 | """
397 | function starts an upload to the specified key.
398 |
399 | Parameters
400 | ----------
401 | bucket_name : str
402 | storj_path : str
403 | upload_options : UploadOptions (optional)
404 |
405 | Returns
406 | -------
407 | Upload
408 | """
409 | #
410 | # declare types of arguments and response of the corresponding golang function
411 | self.uplink.m_libuplink.uplink_upload_object.argtypes =\
412 | [ctypes.POINTER(_ProjectStruct), ctypes.c_char_p, ctypes.c_char_p,
413 | ctypes.POINTER(_UploadOptionsStruct)]
414 | self.uplink.m_libuplink.uplink_upload_object.restype = _UploadResult
415 | #
416 | # prepare the input for the function
417 | if upload_options is None:
418 | upload_options_obj = ctypes.POINTER(_UploadOptionsStruct)()
419 | else:
420 | upload_options_obj = ctypes.byref(upload_options.get_structure())
421 |
422 | bucket_name_ptr = ctypes.c_char_p(bucket_name.encode('utf-8'))
423 | storj_path_ptr = ctypes.c_char_p(storj_path.encode('utf-8'))
424 |
425 | # get uploader by calling the exported golang function
426 | upload_result = self.uplink.m_libuplink.uplink_upload_object(self.project, bucket_name_ptr,
427 | storj_path_ptr,
428 | upload_options_obj)
429 | #
430 | # if error occurred
431 | if bool(upload_result.error):
432 | raise _storj_exception(upload_result.error.contents.code,
433 | upload_result.error.contents.message.decode("utf-8"))
434 | return Upload(upload_result.upload, self.uplink)
435 |
436 | def download_object(self, bucket_name: str, storj_path: str,
437 | download_options: DownloadOptions = None):
438 | """
439 | function starts a download from the specified key.
440 |
441 | Parameters
442 | ----------
443 | bucket_name : str
444 | storj_path : str
445 | download_options : DownloadOptions (optional)
446 |
447 | Returns
448 | -------
449 | Download
450 | """
451 | #
452 | # declare types of arguments and response of the corresponding golang function
453 | self.uplink.m_libuplink.uplink_download_object.argtypes =\
454 | [ctypes.POINTER(_ProjectStruct), ctypes.c_char_p, ctypes.c_char_p,
455 | ctypes.POINTER(_DownloadOptionsStruct)]
456 | self.uplink.m_libuplink.uplink_download_object.restype = _DownloadResult
457 | #
458 | # prepare the input for the function
459 | if download_options is None:
460 | download_options_obj = ctypes.POINTER(_DownloadOptionsStruct)()
461 | else:
462 | download_options_obj = ctypes.byref(download_options.get_structure())
463 |
464 | bucket_name_ptr = ctypes.c_char_p(bucket_name.encode('utf-8'))
465 | storj_path_ptr = ctypes.c_char_p(storj_path.encode('utf-8'))
466 |
467 | # get downloader by calling the exported golang function
468 | download_result = self.uplink.m_libuplink.uplink_download_object(self.project,
469 | bucket_name_ptr,
470 | storj_path_ptr,
471 | download_options_obj)
472 | #
473 | # if error occurred
474 | if bool(download_result.error):
475 | raise _storj_exception(download_result.error.contents.code,
476 | download_result.error.contents.message.decode("utf-8"))
477 | return Download(download_result.download, self.uplink, self.project, bucket_name_ptr,
478 | storj_path_ptr)
479 |
--------------------------------------------------------------------------------
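
A short usage sketch for the Project methods above, assuming a `project` handle has already been obtained from an access grant (see uplink.py below and access.py, which is not reproduced here); the bucket name and prefix are illustrative:

from uplink_python.module_classes import ListObjectsOptions
from uplink_python.project import Project

def print_photo_sizes(project: Project):
    """Sketch: ensure a bucket exists, then print key and size of each object under a prefix."""
    project.ensure_bucket("my-bucket")                 # creates the bucket if it does not exist
    options = ListObjectsOptions(prefix="photos/", system=True)
    for object_ in project.list_objects("my-bucket", options):
        print(object_.key, object_.system.content_length)
    project.close()
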
/uplink_python/uplink.py:
--------------------------------------------------------------------------------
1 | # pylint: disable=too-few-public-methods, too-many-arguments
2 | """Python Binding's Uplink Module for Storj (V3)"""
3 |
4 | import ctypes
5 | import os
6 | import sysconfig
7 |
8 | from uplink_python.access import Access
9 | from uplink_python.errors import _storj_exception, LibUplinkSoError
10 | from uplink_python.module_def import _AccessResult, _ConfigStruct
11 | from uplink_python.module_classes import Config, Bucket, Object, SystemMetadata, \
12 | CustomMetadataEntry, CustomMetadata
13 |
14 |
15 | class Uplink:
16 | """
17 | Python Storj Uplink class to initialize and get an access grant to Storj (V3).
18 |
19 | ...
20 |
21 | Attributes
22 | ----------
23 | m_libuplink : CDLL
24 | Instance to the libuplinkc.so.
25 |
26 | Methods
27 | -------
28 | object_from_result(object_=object_result.object):
29 | Object
30 | bucket_from_result(bucket=bucket_result.bucket):
31 | Bucket
32 | """
33 |
34 | __instance = None
35 |
36 | def __init__(self):
37 | """Constructs all the necessary attributes for the Uplink object."""
38 | # private members of the Uplink class with reference objects
39 | # include the golang exported libuplink library functions
40 | if Uplink.__instance is None:
41 | so_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'libuplinkc.so')
42 | if os.path.exists(so_path):
43 | self.m_libuplink = ctypes.CDLL(so_path)
44 | else:
45 | new_path = os.path.join(sysconfig.get_paths()['purelib'], "uplink_python",
46 | 'libuplinkc.so')
47 | if os.path.exists(new_path):
48 | self.m_libuplink = ctypes.CDLL(new_path)
49 | else:
50 | raise LibUplinkSoError
51 | Uplink.__instance = self
52 | else:
53 | self.m_libuplink = Uplink.__instance.m_libuplink
54 |
55 | @classmethod
56 | def object_from_result(cls, object_):
57 | """Converts ctypes structure _ObjectStruct to python class object."""
58 |
59 | system = SystemMetadata(created=object_.contents.system.created,
60 | expires=object_.contents.system.expires,
61 | content_length=object_.contents.system.content_length)
62 |
63 | array_size = object_.contents.custom.count
64 | entries = list()
65 | for i in range(array_size):
66 | if bool(object_.contents.custom.entries[i]):
67 | entries_obj = object_.contents.custom.entries[i]
68 | entries.append(CustomMetadataEntry(key=entries_obj.key.decode("utf-8"),
69 | key_length=entries_obj.key_length,
70 | value=entries_obj.value.decode("utf-8"),
71 | value_length=entries_obj.value_length))
72 | else:
73 | entries.append(CustomMetadataEntry())
74 |
75 | return Object(key=object_.contents.key.decode("utf-8"),
76 | is_prefix=object_.contents.is_prefix,
77 | system=system,
78 | custom=CustomMetadata(entries=entries,
79 | count=object_.contents.custom.count))
80 |
81 | @classmethod
82 | def bucket_from_result(cls, bucket_):
83 | """Converts ctypes structure _BucketStruct to python class object."""
84 |
85 | return Bucket(name=bucket_.contents.name.decode("utf-8"),
86 | created=bucket_.contents.created)
87 |
88 | #
89 | def request_access_with_passphrase(self, satellite: str, api_key: str, passphrase: str):
90 | """
91 | RequestAccessWithPassphrase generates a new access grant using a passphrase.
92 | It must talk to the Satellite provided to get a project-based salt for deterministic
93 | key derivation.
94 |
95 | Note: this is a CPU-heavy function that uses a password-based key derivation
96 | function (Argon2). This should be a setup-only step.
97 | Most common interactions with the library should be using a serialized access grant
98 | through ParseAccess directly.
99 |
100 | Parameters
101 | ----------
102 | satellite : str
103 | api_key : str
104 | passphrase : str
105 |
106 | Returns
107 | -------
108 | Access
109 | """
110 | #
111 | # declare types of arguments and response of the corresponding golang function
112 | self.m_libuplink.uplink_request_access_with_passphrase.argtypes = [ctypes.c_char_p,
113 | ctypes.c_char_p,
114 | ctypes.c_char_p]
115 | self.m_libuplink.uplink_request_access_with_passphrase.restype = _AccessResult
116 | #
117 | # prepare the input for the function
118 | satellite_ptr = ctypes.c_char_p(satellite.encode('utf-8'))
119 | api_key_ptr = ctypes.c_char_p(api_key.encode('utf-8'))
120 | passphrase_ptr = ctypes.c_char_p(passphrase.encode('utf-8'))
121 |
122 | # get access to Storj by calling the exported golang function
123 | access_result = self.m_libuplink.uplink_request_access_with_passphrase(satellite_ptr,
124 | api_key_ptr,
125 | passphrase_ptr)
126 | #
127 | # if error occurred
128 | if bool(access_result.error):
129 | raise _storj_exception(access_result.error.contents.code,
130 | access_result.error.contents.message.decode("utf-8"))
131 | return Access(access_result.access, self)
132 |
133 | def config_request_access_with_passphrase(self, config: Config, satellite: str, api_key: str,
134 | passphrase: str):
135 | """
136 | RequestAccessWithPassphrase generates a new access grant using a passphrase and
137 | custom configuration.
138 | It must talk to the Satellite provided to get a project-based salt for deterministic
139 | key derivation.
140 |
141 | Note: this is a CPU-heavy function that uses a password-based key derivation
142 | function (Argon2). This should be a setup-only step.
143 | Most common interactions with the library should be using a serialized access grant
144 | through ParseAccess directly.
145 |
146 | Parameters
147 | ----------
148 | config: Config
149 | satellite : str
150 | api_key : str
151 | passphrase : str
152 |
153 | Returns
154 | -------
155 | Access
156 | """
157 |
158 | #
159 | # declare types of arguments and response of the corresponding golang function
160 | self.m_libuplink.uplink_config_request_access_with_passphrase.argtypes = [_ConfigStruct,
161 | ctypes.c_char_p,
162 | ctypes.c_char_p,
163 | ctypes.c_char_p]
164 | self.m_libuplink.uplink_config_request_access_with_passphrase.restype = _AccessResult
165 | #
166 | # prepare the input for the function
167 | if config is None:
168 | config_obj = _ConfigStruct()
169 | else:
170 | config_obj = config.get_structure()
171 | satellite_ptr = ctypes.c_char_p(satellite.encode('utf-8'))
172 | api_key_ptr = ctypes.c_char_p(api_key.encode('utf-8'))
173 | phrase_ptr = ctypes.c_char_p(passphrase.encode('utf-8'))
174 |
175 | # get access to Storj by calling the exported golang function
176 | access_result = self.m_libuplink.uplink_config_request_access_with_passphrase(config_obj,
177 | satellite_ptr,
178 | api_key_ptr,
179 | phrase_ptr)
180 | #
181 | # if error occurred
182 | if bool(access_result.error):
183 | raise _storj_exception(access_result.error.contents.code,
184 | access_result.error.contents.message.decode("utf-8"))
185 | return Access(access_result.access, self)
186 |
187 | def parse_access(self, serialized_access: str):
188 | """
189 | ParseAccess parses a serialized access grant string.
190 |
191 | This should be the main way to instantiate an access grant for opening a project.
192 | See the note on RequestAccessWithPassphrase.
193 |
194 | Parameters
195 | ----------
196 | serialized_access : str
197 |
198 | Returns
199 | -------
200 | Access
201 | """
202 |
203 | #
204 | # prepare the input for the function
205 | serialized_access_ptr = ctypes.c_char_p(serialized_access.encode('utf-8'))
206 | #
207 | # declare types of arguments and response of the corresponding golang function
208 | self.m_libuplink.uplink_parse_access.argtypes = [ctypes.c_char_p]
209 | self.m_libuplink.uplink_parse_access.restype = _AccessResult
210 | #
211 |
212 | # get parsed access by calling the exported golang function
213 | access_result = self.m_libuplink.uplink_parse_access(serialized_access_ptr)
214 | #
215 | # if error occurred
216 | if bool(access_result.error):
217 | raise _storj_exception(access_result.error.contents.code,
218 | access_result.error.contents.message.decode("utf-8"))
219 | return Access(access_result.access, self)
220 |
--------------------------------------------------------------------------------
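
A minimal sketch of the entry point above: construct `Uplink` (which loads libuplinkc.so once and reuses it), then either parse a serialized access grant or request one with a passphrase. The credentials below are placeholders, and opening a Project from the returned Access is handled in access.py, which is not shown in this section:

from uplink_python.uplink import Uplink

uplink = Uplink()   # loads libuplinkc.so on first construction; later instances reuse it

# preferred path: reuse a previously serialized access grant
access = uplink.parse_access("<serialized-access-grant>")

# setup-only path: derive a new grant with Argon2 (CPU heavy)
# access = uplink.request_access_with_passphrase("<satellite-address>", "<api-key>", "<passphrase>")

# a Project is then opened from the returned Access (see uplink_python/access.py)
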
/uplink_python/upload.py:
--------------------------------------------------------------------------------
1 | """Module with Upload class and upload methods to work with object upload"""
2 | # pylint: disable=line-too-long
3 | import ctypes
4 | import os
5 |
6 | from uplink_python.module_classes import CustomMetadata
7 | from uplink_python.module_def import _UploadStruct, _WriteResult, _Error, _CustomMetadataStruct, _ObjectResult
8 | from uplink_python.errors import _storj_exception
9 |
10 | _WINDOWS = os.name == 'nt'
11 | COPY_BUFSIZE = 1024 * 1024 if _WINDOWS else 64 * 1024
12 |
13 |
14 | class Upload:
15 | """
16 | Upload is an upload to Storj Network.
17 |
18 | ...
19 |
20 | Attributes
21 | ----------
22 | upload : int
23 | Upload _handle returned from libuplinkc upload_result.upload
24 | uplink : Uplink
25 | uplink object used to get access
26 |
27 | Methods
28 | -------
29 | write():
30 | Int
31 | write_file():
32 | None
33 | commit():
34 | None
35 | abort():
36 | None
37 | set_custom_metadata():
38 | None
39 | info():
40 | Object
41 | """
42 |
43 | def __init__(self, upload, uplink):
44 | """Constructs all the necessary attributes for the Upload object."""
45 |
46 | self.upload = upload
47 | self.uplink = uplink
48 |
49 | def write(self, data_to_write: bytes, size_to_write: int):
50 | """
51 | function uploads bytes data passed as parameter to the object's data stream.
52 |
53 | Parameters
54 | ----------
55 | data_to_write : bytes
56 | size_to_write : int
57 |
58 | Returns
59 | -------
60 | int
61 | """
62 |
63 | # declare types of arguments and response of the corresponding golang function
64 | self.uplink.m_libuplink.uplink_upload_write.argtypes = [ctypes.POINTER(_UploadStruct),
65 | ctypes.POINTER(ctypes.c_uint8),
66 | ctypes.c_size_t]
67 | self.uplink.m_libuplink.uplink_upload_write.restype = _WriteResult
68 | #
69 | # prepare the inputs for the function
70 | # --------------------------------------------
71 | # data conversion to type required by function
72 | # get size of data in c type int32 variable
73 | # conversion of read bytes data to c type ubyte Array
74 | data_to_write = (ctypes.c_uint8 * ctypes.c_int32(len(data_to_write)).value)(*data_to_write)
75 | # conversion of c type ubyte Array to LP_c_ubyte required by upload write function
76 | data_to_write_ptr = ctypes.cast(data_to_write, ctypes.POINTER(ctypes.c_uint8))
77 | # --------------------------------------------
78 | size_to_write_obj = ctypes.c_size_t(size_to_write)
79 |
80 | # upload data by calling the exported golang function
81 | write_result = self.uplink.m_libuplink.uplink_upload_write(self.upload, data_to_write_ptr,
82 | size_to_write_obj)
83 | #
84 | # if error occurred
85 | if bool(write_result.error):
86 | raise _storj_exception(write_result.error.contents.code,
87 | write_result.error.contents.message.decode("utf-8"))
88 | return int(write_result.bytes_written)
89 |
90 | def write_file(self, file_handle, buffer_size: int = 0):
91 | """
92 | function uploads a complete file, whose handle is passed as parameter, to the
93 | object's data stream.
94 |
95 | Note: File handle should be a BinaryIO, i.e. the file should be opened using the 'r+b' flag,
96 | e.g.: file_handle = open(SRC_FULL_FILENAME, 'r+b')
97 | Remember to commit the object on storj and also close the local file handle
98 | after this function exits.
99 |
100 | Parameters
101 | ----------
102 | file_handle : BinaryIO
103 | buffer_size : int
104 |
105 | Returns
106 | -------
107 | None
108 | """
109 |
110 | if not buffer_size:
111 | buffer_size = COPY_BUFSIZE
112 | while True:
113 | buf = file_handle.read(buffer_size)
114 | if not buf:
115 | break
116 | self.write(buf, len(buf))
117 |
118 | def commit(self):
119 | """
120 | function commits the uploaded data.
121 |
122 | Returns
123 | -------
124 | None
125 | """
126 |
127 | # declare types of arguments and response of the corresponding golang function
128 | self.uplink.m_libuplink.uplink_upload_commit.argtypes = [ctypes.POINTER(_UploadStruct)]
129 | self.uplink.m_libuplink.uplink_upload_commit.restype = ctypes.POINTER(_Error)
130 | #
131 |
132 | # upload commit by calling the exported golang function
133 | error = self.uplink.m_libuplink.uplink_upload_commit(self.upload)
134 | #
135 | # if error occurred
136 | if bool(error):
137 | raise _storj_exception(error.contents.code,
138 | error.contents.message.decode("utf-8"))
139 |
140 | def abort(self):
141 | """
142 | function aborts an ongoing upload.
143 |
144 | Returns
145 | -------
146 | None
147 | """
148 | #
149 | # declare types of arguments and response of the corresponding golang function
150 | self.uplink.m_libuplink.uplink_upload_abort.argtypes = [ctypes.POINTER(_UploadStruct)]
151 | self.uplink.m_libuplink.uplink_upload_abort.restype = ctypes.POINTER(_Error)
152 | #
153 |
154 | # abort ongoing upload by calling the exported golang function
155 | error = self.uplink.m_libuplink.uplink_upload_abort(self.upload)
156 | #
157 | # if error occurred
158 | if bool(error):
159 | raise _storj_exception(error.contents.code,
160 | error.contents.message.decode("utf-8"))
161 |
162 | def set_custom_metadata(self, custom_metadata: CustomMetadata = None):
163 | """
164 | function to set custom metadata while uploading data.
165 |
166 | Parameters
167 | ----------
168 | custom_metadata : CustomMetadata
169 |
170 | Returns
171 | -------
172 | None
173 | """
174 | #
175 | # declare types of arguments and response of the corresponding golang function
176 | self.uplink.m_libuplink.uplink_upload_set_custom_metadata.argtypes = [ctypes.POINTER(_UploadStruct),
177 | _CustomMetadataStruct]
178 | self.uplink.m_libuplink.uplink_upload_set_custom_metadata.restype = ctypes.POINTER(_Error)
179 | #
180 | # prepare the input for the function
181 | if custom_metadata is None:
182 | custom_metadata_obj = _CustomMetadataStruct()
183 | else:
184 | custom_metadata_obj = custom_metadata.get_structure()
185 | #
186 | # set custom metadata to upload by calling the exported golang function
187 | error = self.uplink.m_libuplink.uplink_upload_set_custom_metadata(self.upload, custom_metadata_obj)
188 | #
189 | # if error occurred
190 | if bool(error):
191 | raise _storj_exception(error.contents.code,
192 | error.contents.message.decode("utf-8"))
193 |
194 | def info(self):
195 | """
196 | function returns the last information about the uploaded object.
197 |
198 | Returns
199 | -------
200 | Object
201 | """
202 | #
203 | # declare types of arguments and response of the corresponding golang function
204 | self.uplink.m_libuplink.uplink_upload_info.argtypes = [ctypes.POINTER(_UploadStruct)]
205 | self.uplink.m_libuplink.uplink_upload_info.restype = _ObjectResult
206 | #
207 | # get last upload info by calling the exported golang function
208 | object_result = self.uplink.m_libuplink.uplink_upload_info(self.upload)
209 | #
210 | # if error occurred
211 | if bool(object_result.error):
212 | raise _storj_exception(object_result.error.contents.code,
213 | object_result.error.contents.message.decode("utf-8"))
214 | return self.uplink.object_from_result(object_result.object)
215 |
--------------------------------------------------------------------------------
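
Tying the Upload methods together, a short sketch that follows the note in write_file's docstring: open the local file in binary mode, stream it, optionally attach custom metadata, then commit. It assumes `project` is an open Project instance; the bucket, key, and path are illustrative:

from uplink_python.module_classes import CustomMetadata, CustomMetadataEntry
from uplink_python.project import Project

def upload_local_file(project: Project, local_path: str):
    """Sketch: stream a local file to Storj, tag it with custom metadata, then commit."""
    upload = project.upload_object("my-bucket", "photos/picture.jpg")
    with open(local_path, 'r+b') as file_handle:
        upload.write_file(file_handle)                 # streams in COPY_BUFSIZE-sized chunks
    key, value = "image-board:title", "picture"
    upload.set_custom_metadata(CustomMetadata(
        entries=[CustomMetadataEntry(key=key, key_length=len(key),
                                     value=value, value_length=len(value))],
        count=1))
    upload.commit()                                    # finalizes the object on the network
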