├── .gitignore
├── LICENSE
├── README.md
├── contrib
│   ├── fstab.snippet
│   ├── mount.yas3fs.amzn1
│   ├── mount.yas3fs.centos6
│   └── unmount-yas3fs.init.d
├── setup.py
└── yas3fs
    ├── RecoverYas3fsPlugin.py
    ├── YAS3FSPlugin.py
    ├── __init__.py
    ├── _version.py
    └── fuse.py

/.gitignore:
--------------------------------------------------------------------------------
1 | *~
2 | *.swp
3 | *.pyc
4 | *.pyo
5 | build/
6 | dist/
7 | *.egg-info/
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | The MIT License (MIT)
2 | 
3 | Copyright (c) 2014 Danilo Poccia, http://blog.danilopoccia.net
4 | 
5 | Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
6 | 
7 | The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
8 | 
9 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | ### Yet Another S3-backed File System: yas3fs
2 | 
3 | [![Join the chat at https://gitter.im/danilop/yas3fs](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/danilop/yas3fs?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
4 | 
5 | YAS3FS (Yet Another S3-backed File System) is a [Filesystem in Userspace (FUSE)](http://fuse.sourceforge.net)
6 | interface to [Amazon S3](http://aws.amazon.com/s3/).
7 | It was inspired by [s3fs](http://code.google.com/p/s3fs/) but rewritten from scratch to implement
8 | a distributed cache synchronized by [Amazon SNS](http://aws.amazon.com/sns/) notifications.
9 | A web console is provided to easily monitor the nodes of a cluster through the [YAS3FS Console](https://github.com/danilop/yas3fs-console) project.
10 | 
11 | **If you use YAS3FS please share your experience on the [wiki](https://github.com/danilop/yas3fs/wiki), thanks!**
12 | 
13 | * It allows you to mount an S3 bucket (or part of it, if you specify a path) as a local folder.
14 | * It works on Linux and Mac OS X.
15 | * For maximum speed, all data read from S3 is cached locally on the node, in memory or on disk, depending on the file size.
16 | * Parallel multi-part downloads are used if there are reads in the middle of the file (e.g. for streaming).
17 | * Parallel multi-part uploads are used for files larger than a specified size.
18 | * With buffering enabled (the default) files can be accessed during the download from S3 (e.g. for streaming).
19 | * It can be used on more than one node to create a "shared" file system (i.e. a yas3fs "cluster").
20 | * [SNS](http://aws.amazon.com/sns/) notifications are used to update other nodes in the cluster that something has changed on S3 and they need to invalidate their cache.
21 | * Notifications can be received through HTTP or [SQS](http://aws.amazon.com/sqs/) endpoints.
22 | * If the cache grows to its maximum size, the least recently accessed files are removed.
23 | * Signed URLs are provided through Extended file attributes (xattr).
24 | * AWS credentials can be passed using AWS\_ACCESS\_KEY\_ID and AWS\_SECRET\_ACCESS\_KEY environment variables.
25 | * In an [EC2](http://aws.amazon.com/ec2/) instance an [IAM](http://aws.amazon.com/iam/) role can be used to give access to S3/SNS/SQS resources.
26 | * It is written in Python (2.6) using [boto](https://github.com/boto/boto) and [fusepy](https://github.com/terencehonles/fusepy).
27 | 
28 | This is a personal project. No relation whatsoever exists between this project and my employer.
29 | 
30 | ### License
31 | 
32 | Copyright (c) 2012-2014 Danilo Poccia, http://danilop.net
33 | 
34 | This code is licensed under the MIT License (MIT). Please see the LICENSE file that accompanies this project for the terms of use.
35 | 
36 | ### Introduction
37 | 
38 | This is the logical architecture of yas3fs:
39 | 
40 | ![yas3fs Logical Architecture](http://danilopoccia.s3.amazonaws.com/YAS3FS/yas3fs.png)
41 | 
42 | I strongly suggest starting yas3fs for the first time with the `-df` (debug + foreground) options, to see if there are any errors.
43 | When everything works it can be interrupted (with `^C`) and restarted to run in the background
44 | (the default when the `-f` option is not used).
45 | 
46 | To mount an S3 bucket without using SNS (i.e. for a single node):
47 | 
48 | yas3fs s3://bucket/path /path/to/mount
49 | 
50 | To persist file system metadata such as attr/xattr, yas3fs uses S3 user metadata.
51 | To mount an S3 bucket without actually writing metadata in it,
52 | e.g. because it is a bucket you mainly use as a repository and not as a file system,
53 | you can use the `--no-metadata` option.
54 | 
55 | To mount an S3 bucket using SNS and listening to an SQS endpoint:
56 | 
57 | yas3fs s3://bucket/path /path/to/mount --topic TOPIC-ARN --new-queue
58 | 
59 | To mount an S3 bucket using SNS and listening to an HTTP endpoint (on EC2):
60 | 
61 | yas3fs s3://bucket/path /path/to/mount --topic TOPIC-ARN --use-ec2-hostname --port N
62 | 
63 | On EC2 the security group must allow inbound traffic from SNS on the selected port.
64 | 
65 | On EC2 the command line doesn't need any information on the actual server and can easily be used
66 | within an [Auto Scaling](http://aws.amazon.com/autoscaling/) group.
67 | 
68 | ### Quick Installation
69 | 
70 | #### WARNING: PIP installation is no longer supported. Use "git clone" instead.
71 | 
72 | Requires [Python](http://www.python.org/download/) 2.6 or higher.
73 | Installation with [pip](http://www.pip-installer.org/en/latest/) is no longer supported; clone the sources instead:
74 | 
75 | git clone https://github.com/danilop/yas3fs.git
76 | 
77 | then install from source as shown in the per-OS steps below.
78 | 
79 | If you want to do a quick test, here's the installation procedure depending on the OS flavor (Linux or Mac):
80 | 
81 | * Create an S3 bucket in the AWS region you prefer.
82 | * You don't need to create anything in the bucket as the initial path (if any) is created by the tool on the first mount.
83 | * If you want to use an existing S3 bucket you can use the `--no-metadata` option to avoid using user metadata to persist file system attr/xattr.
84 | * If you want to have more than one node in sync, create an SNS topic in the same region as the S3 bucket and write down the full topic ARN (you need it to run the tool if more than one client is connected to the same bucket/path); see the sketch after this list.
85 | * Create an IAM Role that gives access to the S3 and SNS/SQS resources you need or pass the AWS credentials to the tool using environment variables (see `-h`).
86 | 
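For the SNS topic, a minimal boto3 sketch that creates it and prints the ARN to pass to `--topic`; the region and topic name below are illustrative, not part of this project:

```python
# Sketch: create the SNS topic used to keep a yas3fs cluster in sync.
# Assumes boto3 is installed and AWS credentials are configured;
# the region and topic name are illustrative.
import boto3

sns = boto3.client('sns', region_name='eu-west-1')  # same region as the S3 bucket
topic = sns.create_topic(Name='yas3fs-mybucket')    # idempotent: returns the existing topic if already there
print(topic['TopicArn'])                            # pass this value to --topic
```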
87 | **On Amazon Linux**
88 | 
89 | sudo yum -y install fuse fuse-libs git
90 | git clone https://github.com/danilop/yas3fs.git # pip installation is no longer supported
91 | cd yas3fs && sudo python setup.py install
92 | sudo sed -i'' 's/^# *user_allow_other/user_allow_other/' /etc/fuse.conf # uncomment user_allow_other
93 | yas3fs -h # See the usage
94 | mkdir LOCAL-PATH
95 | # For single host mount
96 | yas3fs s3://BUCKET/PATH LOCAL-PATH
97 | # For multiple hosts mount
98 | yas3fs s3://BUCKET/PATH LOCAL-PATH --topic TOPIC-ARN --new-queue
99 | 
100 | **On Ubuntu Linux**
101 | 
102 | sudo apt-get update
103 | sudo apt-get -y install fuse git python-setuptools
104 | git clone https://github.com/danilop/yas3fs.git && cd yas3fs && sudo python setup.py install # pip installation is no longer supported
105 | sudo sed -i'' 's/^# *user_allow_other/user_allow_other/' /etc/fuse.conf # uncomment user_allow_other
106 | sudo chmod a+r /etc/fuse.conf # make it readable by anybody, it is not the default on Ubuntu
107 | yas3fs -h # See the usage
108 | mkdir LOCAL-PATH
109 | # For single host mount
110 | yas3fs s3://BUCKET/PATH LOCAL-PATH
111 | # For multiple hosts mount
112 | yas3fs s3://BUCKET/PATH LOCAL-PATH --topic TOPIC-ARN --new-queue
113 | 
114 | **On a Mac with OS X**
115 | 
116 | Install [FUSE for OS X](https://osxfuse.github.io).
117 | 
118 | git clone https://github.com/danilop/yas3fs.git && cd yas3fs && sudo python setup.py install # pip installation is no longer supported
119 | mkdir LOCAL-PATH
120 | # For single host mount
121 | yas3fs s3://BUCKET/PATH LOCAL-PATH
122 | # For multiple hosts mount
123 | yas3fs s3://BUCKET/PATH LOCAL-PATH --topic TOPIC-ARN --new-queue
124 | 
125 | **On CentOS 6**
126 | 
127 | sudo yum -y install fuse fuse-libs centos-release-scl
128 | sudo yum -y install python27
129 | # upgrade setuptools
130 | scl enable python27 -- pip install setuptools --upgrade
131 | # grab the latest sources
132 | git clone https://github.com/danilop/yas3fs.git
133 | cd yas3fs
134 | scl enable python27 -- python setup.py install
135 | scl enable python27 -- yas3fs -h # See the usage
136 | mkdir LOCAL-PATH
137 | # For single host mount
138 | scl enable python27 -- yas3fs s3://BUCKET/PATH LOCAL-PATH
139 | # For multiple hosts mount
140 | scl enable python27 -- yas3fs s3://BUCKET/PATH LOCAL-PATH --topic TOPIC-ARN --new-queue
141 | 
142 | **/etc/fstab support**
143 | 
144 | # Copy contrib/mount.yas3fs.centos6 to /usr/local/sbin (replace centos6 with amzn1 for an Amazon Linux installation)
145 | chmod +x /usr/local/sbin/mount.yas3fs.centos6
146 | cd /sbin; sudo ln -s /usr/local/sbin/mount.yas3fs.centos6 mount.yas3fs # mount(8) resolves the "yas3fs" fstab type via /sbin/mount.yas3fs
147 | # Add the contents of contrib/fstab.snippet to /etc/fstab and modify accordingly
148 | # Try to mount
149 | mount /mnt/mybucket
150 | 
151 | **Workaround to unmount yas3fs correctly during host shutdown or reboot**
152 | 
153 | sudo cp contrib/unmount-yas3fs.init.d /etc/init.d/unmount-yas3fs
154 | sudo chmod +x /etc/init.d/unmount-yas3fs
155 | sudo chkconfig --add unmount-yas3fs
156 | sudo chkconfig unmount-yas3fs on
157 | sudo /etc/init.d/unmount-yas3fs start
158 | 
159 | To listen to SNS HTTP notifications (I usually suggest using SQS instead) on a Mac
160 | you need to install the Python [M2Crypto](http://chandlerproject.org/Projects/MeTooCrypto) module;
161 | download the most suitable "egg" from
162 | the [M2Crypto project page](http://chandlerproject.org/Projects/MeTooCrypto).
163 | 
164 | sudo easy_install M2Crypto-*.egg
165 | 
166 | If something does not work as expected you can use the `-df` options to run in the foreground and in debug mode.
167 | 
168 | **Unmount**
169 | 
170 | To unmount the file system on Linux:
171 | 
172 | fusermount -u LOCAL-PATH
173 | or
174 | umount LOCAL-PATH
175 | 
176 | The latter works if the /etc/fstab support steps (see above) were completed.
177 | 
178 | To unmount the file system on a Mac you can use `umount`.
179 | 
180 | **rsync usage**
181 | 
182 | rsync must be used with its *--inplace* option to avoid S3 busy events
183 | 
184 | ### Full Usage
185 | 
186 | yas3fs -h
187 | 
188 | usage: yas3fs [-h] [--region REGION] [--topic ARN] [--new-queue]
189 | [--new-queue-with-hostname] [--queue NAME]
190 | [--queue-wait N] [--queue-polling N] [--nonempty]
191 | [--hostname HOSTNAME] [--use-ec2-hostname] [--port N]
192 | [--cache-entries N] [--cache-mem-size N] [--cache-disk-size N]
193 | [--cache-path PATH] [--recheck-s3] [--cache-on-disk N] [--cache-check N]
194 | [--s3-num N] [--download-num N] [--prefetch-num N] [--st-blksize N]
195 | [--buffer-size N] [--buffer-prefetch N] [--no-metadata]
196 | [--prefetch] [--mp-size N] [--mp-num N] [--mp-retries N]
197 | [--s3-retries N] [--s3-retries-sleep N]
198 | [--s3-use-sigv4] [--s3-endpoint URI]
199 | [--aws-managed-encryption]
200 | [--no-allow-other]
201 | [--download-retries-num N] [--download-retries-sleep N]
202 | [--read-retries-num N] [--read-retries-sleep N]
203 | [--id ID] [--mkdir] [--uid N] [--gid N] [--umask MASK]
204 | [--read-only] [--expiration N] [--requester-pays]
205 | [--with-plugin-file FILE] [--with-plugin-class CLASS]
206 | [-l FILE]
207 | [--log-mb-size N] [--log-backup-count N] [--log-backup-gzip]
208 | [-f] [-d] [-V]
209 | S3Path LocalPath
210 | 
211 | YAS3FS (Yet Another S3-backed File System) is a Filesystem in Userspace (FUSE)
212 | interface to Amazon S3. It allows to mount an S3 bucket (or a part of it, if
213 | you specify a path) as a local folder. It works on Linux and Mac OS X. For
214 | maximum speed all data read from S3 is cached locally on the node, in memory
215 | or on disk, depending of the file size. Parallel multi-part downloads are used
216 | if there are reads in the middle of the file (e.g. for streaming). Parallel
217 | multi-part uploads are used for files larger than a specified size. With
218 | buffering enabled (the default) files can be accessed during the download from
219 | S3 (e.g. for streaming). It can be used on more than one node to create a
220 | "shared" file system (i.e. a yas3fs "cluster"). SNS notifications are used to
221 | update other nodes in the cluster that something has changed on S3 and they
222 | need to invalidate their cache. Notifications can be delivered to HTTP or SQS
223 | endpoints. If the cache grows to its maximum size, the less recently accessed
224 | files are removed. Signed URLs are provided through Extended file attributes
225 | (xattr). AWS credentials can be passed using AWS_ACCESS_KEY_ID and
226 | AWS_SECRET_ACCESS_KEY environment variables. In an EC2 instance a IAM role can
227 | be used to give access to S3/SNS/SQS resources. AWS_DEFAULT_REGION environment
228 | variable can be used to set the default AWS region.
229 | 230 | positional arguments: 231 | S3Path the S3 path to mount in s3://BUCKET/PATH format, PATH 232 | can be empty, can contain subfolders and is created on 233 | first mount if not found in the BUCKET 234 | LocalPath the local mount point 235 | 236 | optional arguments: 237 | -h, --help show this help message and exit 238 | --region REGION AWS region to use for SNS and SQS (default is eu- 239 | west-1) 240 | --topic ARN SNS topic ARN 241 | --new-queue create a new SQS queue that is deleted on unmount to 242 | listen to SNS notifications, overrides --queue, queue 243 | name is BUCKET-PATH-ID with alphanumeric characters 244 | only 245 | --new-queue-with-hostname 246 | create a new SQS queue with hostname in queuename, 247 | overrides --queue, queue name is BUCKET-PATH-ID with 248 | alphanumeric characters only 249 | --queue NAME SQS queue name to listen to SNS notifications, a new 250 | queue is created if it doesn't exist 251 | --queue-wait N SQS queue wait time in seconds (using long polling, 0 252 | to disable, default is 20 seconds) 253 | --queue-polling N SQS queue polling interval in seconds (default is 0 254 | seconds) 255 | --hostname HOSTNAME public hostname to listen to SNS HTTP notifications 256 | --use-ec2-hostname get public hostname to listen to SNS HTTP notifications 257 | from EC2 instance metadata (overrides --hostname) 258 | --port N TCP port to listen to SNS HTTP notifications 259 | --cache-entries N max number of entries to cache (default is 100000 260 | entries) 261 | --cache-mem-size N max size of the memory cache in MB (default is 128 MB) 262 | --cache-disk-size N max size of the disk cache in MB (default is 1024 MB) 263 | --cache-path PATH local path to use for disk cache (default is 264 | /tmp/yas3fs-BUCKET-PATH-random) 265 | --recheck-s3 Cache ENOENT results in forced recheck of S3 for new file/directory 266 | --cache-on-disk N use disk (instead of memory) cache for files greater 267 | than the given size in bytes (default is 0 bytes) 268 | --cache-check N interval between cache size checks in seconds (default 269 | is 5 seconds) 270 | --s3-endpoint the S3 endpoint URI, only required if using --s3-use-sigv4 271 | --s3-num N number of parallel S3 calls (0 to disable writeback, 272 | default is 32) 273 | --s3-retries N number of retries for s3 write operations (default 3) 274 | --s3-retries-sleep N number of seconds between retries for s3 write operations (default 1) 275 | --s3-use-sigv4 use signature version 4 signing process, required to connect 276 | to some newer AWS regions. 
--s3-endpoint must also be set
277 | --download-num N number of parallel downloads (default is 4)
278 | --download-retries-num N max number of retries when downloading (default is 60)
279 | --download-retries-sleep N how long to sleep in seconds between download retries (default is 1)
280 | --read-retries-num N max number of retries when read() is invoked (default is 10)
281 | --read-retries-sleep N how long to sleep in seconds between read() retries (default is 1)
282 | --prefetch-num N number of parallel prefetching downloads (default is 2)
283 | --st-blksize N st_blksize to return to getattr() callers in bytes, optional
284 | --nonempty allows mounts over a non-empty file or directory
285 | --buffer-size N download buffer size in KB (0 to disable buffering,
286 | default is 10240 KB)
287 | --buffer-prefetch N number of buffers to prefetch (default is 0)
288 | --no-metadata don't write user metadata on S3 to persist file system
289 | attr/xattr
290 | --prefetch download file/directory content as soon as it is
291 | discovered (doesn't download file content if download
292 | buffers are used)
293 | --mp-size N size of parts to use for multipart upload in MB
294 | (default value is 100 MB, the minimum allowed by S3 is
295 | 5 MB)
296 | --mp-num N max number of parallel multipart uploads per file (0 to
297 | disable multipart upload, default is 4)
298 | --mp-retries N max number of retries in uploading a part (default is
299 | 3)
300 | --aws-managed-encryption Enable AWS managed encryption (sets header x-amz-server-side-encryption = AES256)
301 | --no-allow-other do not allow other users to access this bucket
302 | --id ID a unique ID identifying this node in a cluster (default
303 | is a UUID)
304 | --mkdir create mountpoint if not found (and create intermediate
305 | directories as required)
306 | --uid N default UID
307 | --gid N default GID
308 | --umask MASK default umask
309 | --read-only mount read only
310 | --expiration N default expiration for signed URL via xattrs (in
311 | seconds, default is 30 days)
312 | --requester-pays requester pays for S3 interactions, the bucket must
313 | have Requester Pays enabled
314 | --with-plugin-file FILE
315 | YAS3FSPlugin file
316 | --with-plugin-class CLASS
317 | YAS3FSPlugin class, if this is not set it will
318 | take the first child of YAS3FSPlugin from exception
319 | handler file
320 | -l FILE, --log FILE filename for logs
321 | --log-mb-size N max size of log file
322 | --log-backup-count N number of backups log files
323 | --log-backup-gzip flag to gzip backup files
324 | 
325 | -f, --foreground run in foreground
326 | -d, --debug show debug info
327 | -V, --version show program's version number and exit
328 | 
329 | ### Signed URLs
330 | 
331 | You can dynamically generate signed URLs for any file on yas3fs using Extended File attributes.
332 | 
333 | The default expiration is used (30 days or the value, in seconds, of the '--expiration' option).
334 | 
335 | You can specify per file expiration with the 'yas3fs.expiration' attribute (in seconds).
336 | 
337 | On a Mac you can use the 'xattr' command to list 'yas3fs.*' attributes:
338 | 
339 | $ xattr -l file
340 | yas3fs.bucket: S3 bucket
341 | yas3fs.key: S3 key
342 | yas3fs.URL: http://bucket.s3.amazonaws.com/key
343 | yas3fs.signedURL: https://bucket.s3.amazonaws.com/...
(for default expiration)
344 | yas3fs.expiration: 2592000 (default)
345 | 
346 | $ xattr -w yas3fs.expiration 3600 file # Sets signed URL expiration for the file to 1h
347 | $ xattr -l file
348 | yas3fs.bucket: S3 bucket
349 | yas3fs.key: S3 key
350 | yas3fs.URL: http://bucket.s3.amazonaws.com/key
351 | yas3fs.signedURL: https://bucket.s3.amazonaws.com/... (for 1h expiration)
352 | yas3fs.expiration: 3600
353 | 
354 | $ xattr -d yas3fs.expiration file # File specific expiration removed, the default is used again
355 | 
356 | Similarly, on Linux you can use the 'getfattr' and 'setfattr' commands:
357 | 
358 | $ getfattr -d -m yas3fs file
359 | # file: file
360 | user.yas3fs.URL="http://bucket.s3.amazonaws.com/key"
361 | user.yas3fs.bucket="S3 bucket"
362 | user.yas3fs.expiration="2592000 (default)"
363 | user.yas3fs.key="S3 key"
364 | user.yas3fs.signedURL="https://bucket.s3.amazonaws.com/..." (for default expiration)
365 | 
366 | $ setfattr -n user.yas3fs.expiration -v 3600 file
367 | $ getfattr -d -m yas3fs file
368 | # file: file
369 | user.yas3fs.URL="http://bucket.s3.amazonaws.com/key"
370 | user.yas3fs.bucket="S3 bucket"
371 | user.yas3fs.expiration="3600"
372 | user.yas3fs.key="S3 key"
373 | user.yas3fs.signedURL="https://bucket.s3.amazonaws.com/..." (for 1h expiration)
374 | 
375 | $ setfattr -x user.yas3fs.expiration file # File specific expiration removed, the default is used again
376 | 
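The same attributes can also be read programmatically; a minimal Python sketch (Linux only, `os.getxattr`/`os.setxattr` need Python 3.3+; the path is illustrative):

```python
# Sketch: read the yas3fs signed URL of a file on a mounted yas3fs file system (Linux).
# Assumes Python 3.3+; the mount point and file name below are illustrative.
import os

path = '/path/to/mount/file'
signed_url = os.getxattr(path, 'user.yas3fs.signedURL').decode('utf-8')
print(signed_url)

# Per-file expiration, equivalent to the setfattr example above (1 hour):
os.setxattr(path, 'user.yas3fs.expiration', b'3600')
```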
377 | ### Notification Syntax & Use
378 | 
379 | You can use the SNS topic for purposes other than keeping the caches of the nodes in sync.
380 | These are some sample use cases:
381 | 
382 | * You can listen to the SNS topic to be updated on changes on S3 (if done through yas3fs).
383 | * You can publish on the SNS topic to manage the overall "cluster" of yas3fs nodes.
384 | 
385 | The SNS notification syntax is based on [JSON (JavaScript Object Notation)](http://www.json.org):
386 | 
387 | [ "node_id", "action", ... ]
388 | 
389 | The following `action`(s) are currently implemented:
390 | 
391 | * `mkdir` (new directory): `[ "node_id", "mkdir", "path" ]`
392 | * `rmdir` (remove directory): `[ "node_id", "rmdir", "path" ]`
393 | * `mknod` (new empty file): `[ "node_id", "mknod", "path" ]`
394 | * `unlink` (remove file): `[ "node_id", "unlink", "path" ]`
395 | * `symlink` (new symbolic link): `[ "node_id", "symlink", "path" ]`
396 | * `rename` (rename file or directory): `[ "node_id", "rename", "old_path", "new_path" ]`
397 | * `upload` (new or updated file): `[ "node_id", "upload", "path", "new_md5" ]` (`path` and `new_md5` are optional)
398 | * `md` (updated metadata, e.g. attr/xattr): `[ "node_id", "md", "path", "metadata_name" ]`
399 | * `reset` (reset cache): `[ "node_id", "reset", "path" ]` (`path` is optional)
400 | * `cache` (change cache config): `[ "node_id", "cache", "entries" or "mem" or "disk", new_value ]`
401 | * `buffer` (change buffer config): `[ "node_id", "buffer", "size" or "prefetch", new_value ]`
402 | * `prefetch` (change prefetch config): `[ "node_id", "prefetch", "on" or "off" ]`
403 | * `url` (change S3 url): `[ "node_id", "url", "s3://BUCKET/PATH" ]`
404 | 
405 | Every node will listen to notifications coming from a `node_id` different from its own id.
406 | As an example, if you want to reset the cache of all the nodes in a yas3fs cluster,
407 | you can send the following notification to the SNS topic (assuming there is no node with id equal to `all`):
408 | 
409 | [ "all", "reset" ]
410 | 
411 | To send the notification you can use the SNS web console or any command line tool that supports SNS, such as the [AWS CLI](http://aws.amazon.com/cli/) (a Python sketch at the end of this section shows the equivalent with boto3).
412 | 
413 | In the same way, if you uploaded a new file (or updated an old one) directly on S3
414 | you can invalidate the caches of all the nodes in the yas3fs cluster for that `path` by sending this SNS notification:
415 | 
416 | [ "all", "upload", "path" ]
417 | 
418 | The `path` is the relative path of the file system (`/` corresponding to the mount point)
419 | and doesn't include any S3 path (i.e. prefix) as given in the `--url` option.
420 | 
421 | To change the size of the memory cache on all nodes, e.g. to bring it from the 128MB default to 10GB,
422 | you can publish (the size is in MB as in the corresponding command line option):
423 | 
424 | [ "all", "cache", "mem", 10240 ]
425 | 
426 | To change the size of the disk cache on all nodes, e.g. to bring it from the 1GB default to 1TB,
427 | you can publish (the size is in MB as in the corresponding command line option):
428 | 
429 | [ "all", "cache", "disk", 1048576 ]
430 | 
431 | To change the buffer size used to download the content (and make it available for reads) from the default of 10MB (optimized for full download speed) to 256KB (optimized for a streaming service) you can use (the size is in KB, as in the corresponding command line option):
432 | 
433 | [ "all", "buffer", "size", 256 ]
434 | 
435 | To change buffer prefetch from the default of 0 to 1 (optimized for sequential access) you can publish:
436 | 
437 | [ "all", "buffer", "prefetch", 1 ]
438 | 
439 | Similarly, to activate download prefetch of all files on all nodes you can use:
440 | 
441 | [ "all", "prefetch", "on" ]
442 | 
443 | To change the multipart upload size to 100MB:
444 | 
445 | [ "all", "multipart", "size", 102400 ]
446 | 
447 | To change the maximum number of parallel threads to use for multipart uploads to 16:
448 | 
449 | [ "all", "multipart", "num", 16 ]
450 | 
451 | To change the maximum number of retries for multipart uploads to 10:
452 | 
453 | [ "all", "multipart", "retries", 10 ]
454 | 
455 | You can even dynamically change the mounted S3 URL (i.e. the bucket and/or the path prefix):
456 | 
457 | [ "all", "url", "s3://BUCKET/PATH" ]
458 | 
459 | To check the status of all the yas3fs instances listening to a topic you can use:
460 | 
461 | [ "all", "ping" ]
462 | 
463 | All yas3fs instances will answer the previous message by publishing a message on the topic with this content:
464 | 
465 | [ "id", "status", hostname, number of entries in cache, cache memory size,
466 | cache disk size, download queue length, prefetch queue length, S3 queue length ]
467 | 
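Any of the JSON payloads above can be published programmatically as well; a minimal boto3 sketch, assuming credentials allowed to `sns:Publish` (the region and topic ARN are illustrative):

```python
# Sketch: publish a cache-reset notification to every node of a yas3fs cluster.
# Assumes boto3 with credentials that allow sns:Publish; the topic ARN is illustrative.
import json
import boto3

sns = boto3.client('sns', region_name='eu-west-1')
payload = ["all", "reset"]  # any of the payloads documented above works here
sns.publish(TopicArn='arn:aws:sns:eu-west-1:123456789012:yas3fs-mybucket',
            Message=json.dumps(payload))
```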
468 | ### Loading files into S3
469 | 
470 | Have to load a massive amount of files into an S3 bucket that you intend to front through yas3fs? Check out [s3-bucket-loader](https://github.com/bitsofinfo/s3-bucket-loader) for massively parallel imports to S3.
471 | 
472 | ### Testing
473 | 
474 | Use this tool to test a YAS3FS install: [yas3fs-test](https://github.com/ewah/yas3fs-test)
475 | 
476 | It will run through a slew of common commands on one or more nodes; adjust the settings.py file to match what you expect your production environment to look like.
477 | 
478 | It is INVALUABLE for making changes to the yas3fs code base.
479 | 
480 | More tests are always being added.
481 | 
482 | 
483 | You can use this tool to test a YAS3FS cluster: [yas3fs-cluster-tester](https://github.com/bitsofinfo/yas3fs-cluster-tester)
484 | 
485 | It is a test harness suite to induce file I/O and validate YAS3FS cluster activity across N peer-nodes.
486 | 
487 | This may be useful to anyone who wants to validate/test YAS3FS to see how it behaves under load and with N peers all managing files in the same S3 bucket. This has been used to test YAS3FS against a several-node "cluster" with each node generating hundreds of files.
488 | 
489 | ### IAM Policy Permissions
490 | ##### S3
491 | ```JSON
492 | {
493 | "Effect": "Allow",
494 | "Action": [
495 | "s3:GetBucketLocation",
496 | "s3:DeleteObject",
497 | "s3:GetObject",
498 | "s3:GetObjectVersion",
499 | "s3:ListBucket",
500 | "s3:PutObject"
501 | ],
502 | "Resource": [
503 | "arn:aws:s3:::bucketname",
504 | "arn:aws:s3:::bucketname/*"
505 | ]
506 | }
507 | ```
508 | ##### SNS
509 | ```JSON
510 | {
511 | "Effect": "Allow",
512 | "Action": [
513 | "sns:ConfirmSubscription",
514 | "sns:GetTopicAttributes",
515 | "sns:Publish",
516 | "sns:Subscribe",
517 | "sns:Unsubscribe"
518 | ],
519 | "Resource": [
520 | "arn:aws:sns:region:acct:topicname"
521 | ]
522 | }
523 | ```
524 | ##### SQS
525 | ```JSON
526 | {
527 | "Effect": "Allow",
528 | "Action": [
529 | "sqs:CreateQueue",
530 | "sqs:DeleteMessage",
531 | "sqs:GetQueueAttributes",
532 | "sqs:GetQueueUrl",
533 | "sqs:ReceiveMessage",
534 | "sqs:SetQueueAttributes",
535 | "sqs:SendMessage"
536 | ],
537 | "Resource": [
538 | "arn:aws:sqs:region:acct:queuename"
539 | ]
540 | }
541 | ```
542 | ##### IAM
543 | ```JSON
544 | {
545 | "Effect": "Allow",
546 | "Action": "iam:GetUser",
547 | "Resource": [
548 | "*"
549 | ]
550 | }
551 | ```
552 | Happy File Sharing!
553 | 
--------------------------------------------------------------------------------
/contrib/fstab.snippet:
--------------------------------------------------------------------------------
1 | yas3fs#mybucket /mnt/mybucket yas3fs _netdev,allow_other,default_permissions,topic=arn:aws:sns:us-east-1:0000000000:mybucket,yas3fslog,yas3fsdebug 0 0
2 | 
"x$topic" = "x" ]; then 19 | topicline="--topic ${topic}" 20 | if [ "x$queue" = "x" ]; then 21 | queueline="--new-queue-with-hostname" 22 | else 23 | queueline="--queue ${queue}" 24 | fi 25 | fi 26 | 27 | yas3fs --download-retries-num 10 --recheck-s3 ${logline} "s3://${s3bucket}" "${localpath}" ${topicline} ${queueline} 28 | -------------------------------------------------------------------------------- /contrib/mount.yas3fs.centos6: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | s3bucket=${1##*#} # extract the last part after '#' in 'yas3fs#bucket' string 4 | localpath=$2 5 | topic=$(echo $4 | /bin/grep -o 'topic=[^,]*' | /bin/cut -f2 -d=) 6 | queue=$(echo $4 | /bin/grep -o 'queue=[^,]*' | /bin/cut -f2 -d=) 7 | log=$(echo $4 | /bin/grep -o 'yas3fslog') 8 | debug=$(echo $4 | /bin/grep -o 'yas3fsdebug') 9 | logline="" 10 | 11 | if [ "x$log" = "xyas3fslog" ]; then 12 | logline="--log /tmp/.yas3fs-${s3bucket} --log-mb-size 10 --log-backup-count 10 --log-backup-gzip" 13 | [ "x$debug" = "xyas3fsdebug" ] && logline="-d $logline" 14 | fi 15 | 16 | topicline="" 17 | queueline="" 18 | if [ ! "x$topic" = "x" ]; then 19 | topicline="--topic ${topic}" 20 | if [ "x$queue" = "x" ]; then 21 | queueline="--new-queue-with-hostname" 22 | else 23 | queueline="--queue ${queue}" 24 | fi 25 | fi 26 | 27 | scl enable python27 -- yas3fs --download-retries-num 10 --recheck-s3 ${logline} "s3://${s3bucket}" "${localpath}" ${topicline} ${queueline} 28 | -------------------------------------------------------------------------------- /contrib/unmount-yas3fs.init.d: -------------------------------------------------------------------------------- 1 | #! /usr/bin/env bash 2 | # 3 | # chkconfig: - 99 87 4 | # 5 | # description: Correctly unmount yas3fs mounts before going reboot or halt (a workaround for https://github.com/libfuse/libfuse/issues/1) 6 | # 7 | 8 | # how many seconds to wait for yas3fs queues flushing 9 | TIMER=60 10 | 11 | lockfile=/var/lock/subsys/unmount-yas3fs 12 | 13 | start() { 14 | /bin/touch $lockfile 15 | } 16 | 17 | stop() { 18 | logger -t "unmount-yas3fs" "Unmounting yas3fs volumes..." 19 | echo "unmount-yas3fs: Unmounting yas3fs volumes..." 20 | awk '$1 ~ /^yas3fs$/ { print $2 }' \ 21 | /proc/mounts | sort -r | \ 22 | while read line; do 23 | fstab-decode /bin/umount -f $line 24 | done 25 | 26 | logger -t "unmount-yas3fs" "Waiting for yas3fs queues get flushed..." 27 | echo -n "unmount-yas3fs: Waiting for yas3fs queues get flushed" 28 | c=0 29 | while $(pgrep -x yas3fs &>/dev/null); do 30 | if [ "$c" -gt "$TIMER" ]; then 31 | logger -t "unmount-yas3fs" "Wasn't able to complete in $TIMER seconds, exiting forcefully..." 32 | echo 33 | echo "unmount-yas3fs: Wasn't able to complete in $TIMER seconds, exiting forcefully..." 34 | /bin/rm -f $lockfile 35 | exit 0 36 | fi 37 | echo -n "." 
38 |         sleep 1
39 |         c=$((c+1))
40 |     done
41 |     logger -t "unmount-yas3fs" "done"
42 |     echo -n "done"
43 |     echo
44 |     /bin/rm -f $lockfile
45 | }
46 | 
47 | case "$1" in
48 |     start) start;;
49 |     stop) stop;;
50 |     *)
51 |         echo $"Usage: $0 {start|stop}"
52 |         exit 1
53 | esac
54 | 
55 | exit 0
56 | 
--------------------------------------------------------------------------------
/setup.py:
--------------------------------------------------------------------------------
1 | from setuptools import setup, find_packages
2 | 
3 | import sys
4 | 
5 | exec(open('yas3fs/_version.py').read())
6 | 
7 | requires = ['setuptools>=2.2', 'boto>=2.25.0', 'boto3>=1.6.12']
8 | 
9 | # Versions of Python pre-2.7 require argparse separately. 2.7+ and 3+ all
10 | # include this as the replacement for optparse.
11 | if sys.version_info[:2] < (2, 7):
12 |     requires.append("argparse")
13 | 
14 | setup(
15 |     name='yas3fs',
16 |     version=__version__,
17 |     description='YAS3FS (Yet Another S3-backed File System) is a Filesystem in Userspace (FUSE) interface to Amazon S3.',
18 |     packages=find_packages(),
19 |     author='Danilo Poccia',
20 |     author_email='dpoccia@gmail.com',
21 |     url='https://github.com/danilop/yas3fs',
22 |     install_requires=requires,
23 |     entry_points = { 'console_scripts': ['yas3fs = yas3fs:main'] },
24 | )
--------------------------------------------------------------------------------
/yas3fs/RecoverYas3fsPlugin.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/python
2 | 
3 | from yas3fs.YAS3FSPlugin import YAS3FSPlugin
4 | import json
5 | import os
6 | import re
7 | import errno
8 | from stat import *
9 | 
10 | import datetime
11 | import time
12 | 
13 | '''
14 | Upon upload failure
15 | - a log entry is written w/ metadata
16 | - the cache file is mirrored into a recovery directory adjacent to the cache directory
17 | '''
18 | 
19 | class RecoverYas3fsPlugin(YAS3FSPlugin):
20 |     def epochseconds_to_iso8601(self, s = None):
21 |         t = None
22 |         if s is None:
23 |             dt = datetime.datetime.now()
24 |         else:
25 |             dt = datetime.datetime.utcfromtimestamp(s)
26 | 
27 |         # truncates microseconds
28 |         dt = dt.replace(microsecond=0)
29 | 
30 |         rt = dt.isoformat()
31 | 
32 |         return rt
33 | 
34 |     def stat_to_dict(self, stat):
35 |         fn_map = {
36 |             'st_mode': (ST_MODE, str),
37 |             'st_ino': (ST_INO, str),
38 |             'st_dev': (ST_DEV, str),
39 |             'st_nlink': (ST_NLINK, str),
40 |             'st_uid': (ST_UID, str),
41 |             'st_gid': (ST_GID, str),
42 |             'st_size': (ST_SIZE, str),
43 |             'st_atime': (ST_ATIME, self.epochseconds_to_iso8601),
44 |             'st_mtime': (ST_MTIME, self.epochseconds_to_iso8601),
45 |             'st_ctime': (ST_CTIME, self.epochseconds_to_iso8601)
46 |         }
47 |         d = {}
48 |         for k in fn_map:
49 |             d[k] = fn_map[k][1](stat[fn_map[k][0]])
50 |         return d
51 | 
52 |     # k,v tuple
53 |     def s3key_json_filter(self, x):
54 |         if x[0] in ('s3bucket'): # note: a plain string, so this is a substring test; it drops e.g. the non-serializable 'bucket' attribute
55 |             return False
56 |         return True
57 | 
58 |     def __init__(self, yas3fs, logger=None):
59 |         super(RecoverYas3fsPlugin, self).__init__(yas3fs, logger)
60 |         self.recovery_path = yas3fs.cache.cache_path + "/recovery"
61 |         self.cache = yas3fs.cache
62 | 
63 |         self.logger.info("PLUGIN Recovery Path '%s'"% self.recovery_path)
64 | 
65 |         #---------------------------------------------
66 |         # makes a recovery directory
67 |         try:
68 |             os.makedirs(self.recovery_path)
69 |             self.logger.debug("PLUGIN created recovery path '%s' done" % self.recovery_path)
70 |         except OSError as exc: # Python >2.5
71 |             if exc.errno == errno.EEXIST and os.path.isdir(self.recovery_path):
72 |                 self.logger.debug("PLUGIN 
create_dirs '%s' already there" % self.recovery_path)
73 |                 pass
74 |             else:
75 |                 raise
76 | 
77 |     def make_recovery_copy(self, cache_file):
78 |         path = re.sub(self.cache.cache_path, '', cache_file) # strip the cache root to get the file-system-relative path
79 |         path = re.sub('/files', '', path) # drop the '/files' prefix used inside the cache
80 |         recovery_file = self.recovery_path + path
81 | 
82 |         self.logger.info("PLUGIN copying file from '%s' to '%s'"%(cache_file, recovery_file))
83 | 
84 |         recovery_path = os.path.dirname(recovery_file)
85 |         try:
86 |             os.makedirs(recovery_path)
87 |             self.logger.debug("PLUGIN created recovery path '%s' done" % recovery_path)
88 |         except OSError as exc: # Python >2.5
89 |             if exc.errno == errno.EEXIST and os.path.isdir(recovery_path):
90 |                 self.logger.debug("PLUGIN create_dirs '%s' already there" % recovery_path)
91 |                 pass
92 |             else:
93 |                 raise
94 | 
95 | 
96 |         import shutil # deferred import, only needed on the recovery path
97 |         shutil.copyfile(cache_file, recovery_file)
98 | 
99 |         self.logger.info("PLUGIN copying file from '%s' to '%s' done"%(cache_file, recovery_file))
100 | 
101 |         return True
102 | 
103 | 
104 | 
105 |     def do_cmd_on_s3_now_w_retries(self, fn):
106 |         # self, key, pub, action, args, kargs, retries = 1
107 |         def wrapper(*args, **kargs):
108 |             try:
109 |                 return fn(*args, **kargs)
110 |             except Exception as e:
111 |                 self.logger.error("PLUGIN")
112 |                 selfless_args = None
113 |                 if args[1]:
114 |                     selfless_args = args[1:]
115 |                 self.logger.error("PLUGIN do_cmd_on_s3_now_w_retries FAILED" + " " + str(selfless_args))
116 | 
117 |                 s = args[0]
118 |                 key = args[1]
119 |                 pub = args[2]
120 |                 action = args[3]
121 |                 arg = args[4]
122 |                 kargs = args[5]
123 | 
124 | 
125 |                 ### trying to recover
126 |                 if pub[0] == 'upload':
127 |                     try:
128 |                         path = pub[1]
129 |                         cache_file = s.cache.get_cache_filename(path)
130 |                         cache_stat = os.stat(cache_file)
131 |                         etag = None
132 |                         etag_filename = s.cache.get_cache_etags_filename(path)
133 |                         if os.path.isfile(etag_filename):
134 |                             with open(etag_filename, mode='r') as etag_file:
135 |                                 etag = etag_file.read()
136 |                         # print etag_filename
137 |                         # print etag
138 | 
139 | 
140 |                         json_recover = {
141 |                             "action" : action,
142 |                             "action_time" : self.epochseconds_to_iso8601(),
143 |                             "pub_action" : pub[0],
144 |                             "file" : path,
145 |                             "cache_file" : cache_file,
146 |                             "cache_stat" : self.stat_to_dict(cache_stat),
147 |                             # "cache_file_size" : cache_stat.st_size,
148 |                             # "cache_file_ctime" : self.epochseconds_to_iso8601(cache_stat.st_ctime),
149 |                             # "cache_file_mtime" : self.epochseconds_to_iso8601(cache_stat.st_mtime),
150 |                             "etag_filename": etag_filename,
151 |                             "etag": etag,
152 |                             "exception": str(e),
153 |                             "s3key" : dict(filter(self.s3key_json_filter, iter(key.__dict__.items())))
154 |                         }
155 | 
156 |                         self.logger.error("RecoverYAS3FS PLUGIN UPLOAD FAILED " + json.dumps(json_recover))
157 | 
158 |                         self.make_recovery_copy(cache_file)
159 | 
160 |                     except Exception as e:
161 |                         self.logger.exception(e)
162 | 
163 |                 return args[2] #????
164 |         return wrapper
165 | 
--------------------------------------------------------------------------------
/yas3fs/YAS3FSPlugin.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/python
2 | 
3 | import imp
4 | import os
5 | import inspect
6 | import logging
7 | 
8 | class YAS3FSPlugin (object):
9 |     @staticmethod
10 |     def load_from_file(yas3fs, filepath, expected_class = None):
11 |         class_inst = None
12 | 
13 |         try:
14 |             mod_name,file_ext = os.path.splitext(os.path.split(filepath)[-1])
15 |             if file_ext.lower() == '.py':
16 |                 py_mod = imp.load_source(mod_name, filepath)
17 | 
18 |             elif file_ext.lower() == '.pyc':
19 |                 py_mod = imp.load_compiled(mod_name, filepath)
20 |             else:
21 |                 raise Exception("unsupported plugin file extension " + file_ext)
22 | 
23 |             if not py_mod:
24 |                 raise Exception("cannot load plugin module " + mod_name)
25 | 
26 |             for klass in inspect.getmembers(py_mod,inspect.isclass):
27 |                 if not issubclass(klass[1], YAS3FSPlugin):
28 |                     continue
29 | 
30 |                 if expected_class is None or expected_class == klass[0]:
31 |                     class_inst = klass[1](yas3fs)
32 |                     break
33 |         except Exception as e:
34 |             raise Exception("cannot load plugin file " + filepath + " " + str(e))
35 | 
36 |         if not class_inst:
37 |             raise Exception("cannot load plugin class " + expected_class)
38 | 
39 |         return class_inst
40 | 
41 |     @staticmethod
42 |     def load_from_class(yas3fs, expected_class):
43 |         try:
44 |             module_name = 'yas3fs.' + expected_class
45 |             # i = imp.find_module(module_name)
46 |             module = __import__(module_name)
47 |             klass = getattr(module.__dict__[expected_class], expected_class)
48 |             class_inst = klass(yas3fs)
49 |             return class_inst
50 |         except Exception as e:
51 |             print(str(e))
52 |             raise Exception("cannot load plugin class " + expected_class + " " + str(e))
53 | 
54 |     def __init__(self, yas3fs, logger=None):
55 |         self.logger = logger
56 |         if not self.logger:
57 |             self.logger = logging.getLogger('yas3fsPlugin')
58 | 
59 |     def do_cmd_on_s3_now_w_retries(self, fn):
60 |         # self, key, pub, action, args, kargs, retries = 1
61 |         def wrapper(*args, **kargs):
62 |             try:
63 |                 return fn(*args, **kargs)
64 |             except Exception as e:
65 |                 selfless_args = None
66 |                 if args[1]:
67 |                     selfless_args = args[1:]
68 |                 self.logger.info("PLUGIN do_cmd_on_s3_now_w_retries FAILED" + " " + str(selfless_args))
69 | 
70 |                 return args[2] #????
71 |         return wrapper
72 | 
--------------------------------------------------------------------------------
/yas3fs/_version.py:
--------------------------------------------------------------------------------
1 | __version__ = '2.4.6'
2 | 
--------------------------------------------------------------------------------
/yas3fs/fuse.py:
--------------------------------------------------------------------------------
1 | # Copyright (c) 2012 Terence Honles (maintainer)
2 | # Copyright (c) 2008 Giorgos Verigakis (author)
3 | #
4 | # Permission to use, copy, modify, and distribute this software for any
5 | # purpose with or without fee is hereby granted, provided that the above
6 | # copyright notice and this permission notice appear in all copies.
7 | #
8 | # THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
9 | # WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
10 | # MERCHANTABILITY AND FITNESS. 
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR 11 | # ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES 12 | # WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN 13 | # ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF 14 | # OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 15 | 16 | from __future__ import division 17 | 18 | from ctypes import * 19 | from ctypes.util import find_library 20 | from errno import * 21 | from os import strerror 22 | from platform import machine, system 23 | from signal import signal, SIGINT, SIG_DFL 24 | from stat import S_IFDIR 25 | from traceback import print_exc 26 | 27 | import os 28 | import logging 29 | 30 | try: 31 | from functools import partial 32 | except ImportError: 33 | # http://docs.python.org/library/functools.html#functools.partial 34 | def partial(func, *args, **keywords): 35 | def newfunc(*fargs, **fkeywords): 36 | newkeywords = keywords.copy() 37 | newkeywords.update(fkeywords) 38 | return func(*(args + fargs), **newkeywords) 39 | 40 | newfunc.func = func 41 | newfunc.args = args 42 | newfunc.keywords = keywords 43 | return newfunc 44 | 45 | try: 46 | basestring 47 | except NameError: 48 | basestring = str 49 | 50 | class c_timespec(Structure): 51 | _fields_ = [('tv_sec', c_long), ('tv_nsec', c_long)] 52 | 53 | class c_utimbuf(Structure): 54 | _fields_ = [('actime', c_timespec), ('modtime', c_timespec)] 55 | 56 | class c_stat(Structure): 57 | pass # Platform dependent 58 | 59 | _system = system() 60 | _machine = machine() 61 | 62 | _libfuse_path = os.environ.get('FUSE_LIBRARY_PATH') 63 | if not _libfuse_path: 64 | if _system == 'Darwin': 65 | _libiconv = CDLL(find_library('iconv'), RTLD_GLOBAL) # libfuse dependency 66 | _libfuse_path = (find_library('fuse4x') or find_library('osxfuse') or 67 | find_library('fuse')) 68 | else: 69 | _libfuse_path = find_library('fuse') 70 | 71 | if not _libfuse_path: 72 | raise EnvironmentError('Unable to find libfuse') 73 | else: 74 | _libfuse = CDLL(_libfuse_path) 75 | 76 | if _system == 'Darwin' and hasattr(_libfuse, 'macfuse_version'): 77 | _system = 'Darwin-MacFuse' 78 | 79 | 80 | if _system in ('Darwin', 'Darwin-MacFuse', 'FreeBSD'): 81 | ENOTSUP = 45 82 | c_dev_t = c_int32 83 | c_fsblkcnt_t = c_ulong 84 | c_fsfilcnt_t = c_ulong 85 | c_gid_t = c_uint32 86 | c_mode_t = c_uint16 87 | c_off_t = c_int64 88 | c_pid_t = c_int32 89 | c_uid_t = c_uint32 90 | setxattr_t = CFUNCTYPE(c_int, c_char_p, c_char_p, POINTER(c_byte), 91 | c_size_t, c_int, c_uint32) 92 | getxattr_t = CFUNCTYPE(c_int, c_char_p, c_char_p, POINTER(c_byte), 93 | c_size_t, c_uint32) 94 | if _system == 'Darwin': 95 | c_stat._fields_ = [ 96 | ('st_dev', c_dev_t), 97 | ('st_mode', c_mode_t), 98 | ('st_nlink', c_uint16), 99 | ('st_ino', c_uint64), 100 | ('st_uid', c_uid_t), 101 | ('st_gid', c_gid_t), 102 | ('st_rdev', c_dev_t), 103 | ('st_atimespec', c_timespec), 104 | ('st_mtimespec', c_timespec), 105 | ('st_ctimespec', c_timespec), 106 | ('st_birthtimespec', c_timespec), 107 | ('st_size', c_off_t), 108 | ('st_blocks', c_int64), 109 | ('st_blksize', c_int32), 110 | ('st_flags', c_int32), 111 | ('st_gen', c_int32), 112 | ('st_lspare', c_int32), 113 | ('st_qspare', c_int64)] 114 | else: 115 | c_stat._fields_ = [ 116 | ('st_dev', c_dev_t), 117 | ('st_ino', c_uint32), 118 | ('st_mode', c_mode_t), 119 | ('st_nlink', c_uint16), 120 | ('st_uid', c_uid_t), 121 | ('st_gid', c_gid_t), 122 | ('st_rdev', c_dev_t), 123 | ('st_atimespec', c_timespec), 124 | ('st_mtimespec', c_timespec), 125 
| ('st_ctimespec', c_timespec), 126 | ('st_size', c_off_t), 127 | ('st_blocks', c_int64), 128 | ('st_blksize', c_int32)] 129 | elif _system == 'Linux': 130 | ENOTSUP = 95 131 | c_dev_t = c_ulonglong 132 | c_fsblkcnt_t = c_ulonglong 133 | c_fsfilcnt_t = c_ulonglong 134 | c_gid_t = c_uint 135 | c_mode_t = c_uint 136 | c_off_t = c_longlong 137 | c_pid_t = c_int 138 | c_uid_t = c_uint 139 | setxattr_t = CFUNCTYPE(c_int, c_char_p, c_char_p, POINTER(c_byte), 140 | c_size_t, c_int) 141 | 142 | getxattr_t = CFUNCTYPE(c_int, c_char_p, c_char_p, POINTER(c_byte), 143 | c_size_t) 144 | 145 | if _machine == 'x86_64': 146 | c_stat._fields_ = [ 147 | ('st_dev', c_dev_t), 148 | ('st_ino', c_ulong), 149 | ('st_nlink', c_ulong), 150 | ('st_mode', c_mode_t), 151 | ('st_uid', c_uid_t), 152 | ('st_gid', c_gid_t), 153 | ('__pad0', c_int), 154 | ('st_rdev', c_dev_t), 155 | ('st_size', c_off_t), 156 | ('st_blksize', c_long), 157 | ('st_blocks', c_long), 158 | ('st_atimespec', c_timespec), 159 | ('st_mtimespec', c_timespec), 160 | ('st_ctimespec', c_timespec)] 161 | elif _machine == 'ppc': 162 | c_stat._fields_ = [ 163 | ('st_dev', c_dev_t), 164 | ('st_ino', c_ulonglong), 165 | ('st_mode', c_mode_t), 166 | ('st_nlink', c_uint), 167 | ('st_uid', c_uid_t), 168 | ('st_gid', c_gid_t), 169 | ('st_rdev', c_dev_t), 170 | ('__pad2', c_ushort), 171 | ('st_size', c_off_t), 172 | ('st_blksize', c_long), 173 | ('st_blocks', c_longlong), 174 | ('st_atimespec', c_timespec), 175 | ('st_mtimespec', c_timespec), 176 | ('st_ctimespec', c_timespec)] 177 | else: 178 | # i686, use as fallback for everything else 179 | c_stat._fields_ = [ 180 | ('st_dev', c_dev_t), 181 | ('__pad1', c_ushort), 182 | ('__st_ino', c_ulong), 183 | ('st_mode', c_mode_t), 184 | ('st_nlink', c_uint), 185 | ('st_uid', c_uid_t), 186 | ('st_gid', c_gid_t), 187 | ('st_rdev', c_dev_t), 188 | ('__pad2', c_ushort), 189 | ('st_size', c_off_t), 190 | ('st_blksize', c_long), 191 | ('st_blocks', c_longlong), 192 | ('st_atimespec', c_timespec), 193 | ('st_mtimespec', c_timespec), 194 | ('st_ctimespec', c_timespec), 195 | ('st_ino', c_ulonglong)] 196 | else: 197 | raise NotImplementedError('%s is not supported.' 
% _system) 198 | 199 | 200 | class c_statvfs(Structure): 201 | _fields_ = [ 202 | ('f_bsize', c_ulong), 203 | ('f_frsize', c_ulong), 204 | ('f_blocks', c_fsblkcnt_t), 205 | ('f_bfree', c_fsblkcnt_t), 206 | ('f_bavail', c_fsblkcnt_t), 207 | ('f_files', c_fsfilcnt_t), 208 | ('f_ffree', c_fsfilcnt_t), 209 | ('f_favail', c_fsfilcnt_t), 210 | ('f_fsid', c_ulong), 211 | #('unused', c_int), 212 | ('f_flag', c_ulong), 213 | ('f_namemax', c_ulong) 214 | ] 215 | 216 | if _system == 'FreeBSD': 217 | c_fsblkcnt_t = c_uint64 218 | c_fsfilcnt_t = c_uint64 219 | setxattr_t = CFUNCTYPE(c_int, c_char_p, c_char_p, POINTER(c_byte), 220 | c_size_t, c_int) 221 | 222 | getxattr_t = CFUNCTYPE(c_int, c_char_p, c_char_p, POINTER(c_byte), 223 | c_size_t) 224 | 225 | class c_statvfs(Structure): 226 | _fields_ = [ 227 | ('f_bavail', c_fsblkcnt_t), 228 | ('f_bfree', c_fsblkcnt_t), 229 | ('f_blocks', c_fsblkcnt_t), 230 | ('f_favail', c_fsfilcnt_t), 231 | ('f_ffree', c_fsfilcnt_t), 232 | ('f_files', c_fsfilcnt_t), 233 | ('f_bsize', c_ulong), 234 | ('f_flag', c_ulong), 235 | ('f_frsize', c_ulong)] 236 | 237 | class fuse_file_info(Structure): 238 | _fields_ = [ 239 | ('flags', c_int), 240 | ('fh_old', c_ulong), 241 | ('writepage', c_int), 242 | ('direct_io', c_uint, 1), 243 | ('keep_cache', c_uint, 1), 244 | ('flush', c_uint, 1), 245 | ('padding', c_uint, 29), 246 | ('fh', c_uint64), 247 | ('lock_owner', c_uint64)] 248 | 249 | class fuse_context(Structure): 250 | _fields_ = [ 251 | ('fuse', c_voidp), 252 | ('uid', c_uid_t), 253 | ('gid', c_gid_t), 254 | ('pid', c_pid_t), 255 | ('private_data', c_voidp)] 256 | 257 | _libfuse.fuse_get_context.restype = POINTER(fuse_context) 258 | 259 | 260 | class fuse_operations(Structure): 261 | _fields_ = [ 262 | ('getattr', CFUNCTYPE(c_int, c_char_p, POINTER(c_stat))), 263 | ('readlink', CFUNCTYPE(c_int, c_char_p, POINTER(c_byte), c_size_t)), 264 | ('getdir', c_voidp), # Deprecated, use readdir 265 | ('mknod', CFUNCTYPE(c_int, c_char_p, c_mode_t, c_dev_t)), 266 | ('mkdir', CFUNCTYPE(c_int, c_char_p, c_mode_t)), 267 | ('unlink', CFUNCTYPE(c_int, c_char_p)), 268 | ('rmdir', CFUNCTYPE(c_int, c_char_p)), 269 | ('symlink', CFUNCTYPE(c_int, c_char_p, c_char_p)), 270 | ('rename', CFUNCTYPE(c_int, c_char_p, c_char_p)), 271 | ('link', CFUNCTYPE(c_int, c_char_p, c_char_p)), 272 | ('chmod', CFUNCTYPE(c_int, c_char_p, c_mode_t)), 273 | ('chown', CFUNCTYPE(c_int, c_char_p, c_uid_t, c_gid_t)), 274 | ('truncate', CFUNCTYPE(c_int, c_char_p, c_off_t)), 275 | ('utime', c_voidp), # Deprecated, use utimens 276 | ('open', CFUNCTYPE(c_int, c_char_p, POINTER(fuse_file_info))), 277 | 278 | ('read', CFUNCTYPE(c_int, c_char_p, POINTER(c_byte), c_size_t, 279 | c_off_t, POINTER(fuse_file_info))), 280 | 281 | ('write', CFUNCTYPE(c_int, c_char_p, POINTER(c_byte), c_size_t, 282 | c_off_t, POINTER(fuse_file_info))), 283 | 284 | ('statfs', CFUNCTYPE(c_int, c_char_p, POINTER(c_statvfs))), 285 | ('flush', CFUNCTYPE(c_int, c_char_p, POINTER(fuse_file_info))), 286 | ('release', CFUNCTYPE(c_int, c_char_p, POINTER(fuse_file_info))), 287 | ('fsync', CFUNCTYPE(c_int, c_char_p, c_int, POINTER(fuse_file_info))), 288 | ('setxattr', setxattr_t), 289 | ('getxattr', getxattr_t), 290 | ('listxattr', CFUNCTYPE(c_int, c_char_p, POINTER(c_byte), c_size_t)), 291 | ('removexattr', CFUNCTYPE(c_int, c_char_p, c_char_p)), 292 | ('opendir', CFUNCTYPE(c_int, c_char_p, POINTER(fuse_file_info))), 293 | 294 | ('readdir', CFUNCTYPE(c_int, c_char_p, c_voidp, 295 | CFUNCTYPE(c_int, c_voidp, c_char_p, 296 | POINTER(c_stat), c_off_t), 297 | c_off_t, 
POINTER(fuse_file_info))), 298 | 299 | ('releasedir', CFUNCTYPE(c_int, c_char_p, POINTER(fuse_file_info))), 300 | 301 | ('fsyncdir', CFUNCTYPE(c_int, c_char_p, c_int, 302 | POINTER(fuse_file_info))), 303 | 304 | ('init', CFUNCTYPE(c_voidp, c_voidp)), 305 | ('destroy', CFUNCTYPE(c_voidp, c_voidp)), 306 | ('access', CFUNCTYPE(c_int, c_char_p, c_int)), 307 | 308 | ('create', CFUNCTYPE(c_int, c_char_p, c_mode_t, 309 | POINTER(fuse_file_info))), 310 | 311 | ('ftruncate', CFUNCTYPE(c_int, c_char_p, c_off_t, 312 | POINTER(fuse_file_info))), 313 | 314 | ('fgetattr', CFUNCTYPE(c_int, c_char_p, POINTER(c_stat), 315 | POINTER(fuse_file_info))), 316 | 317 | ('lock', CFUNCTYPE(c_int, c_char_p, POINTER(fuse_file_info), 318 | c_int, c_voidp)), 319 | 320 | ('utimens', CFUNCTYPE(c_int, c_char_p, POINTER(c_utimbuf))), 321 | ('bmap', CFUNCTYPE(c_int, c_char_p, c_size_t, POINTER(c_ulonglong))), 322 | ] 323 | 324 | 325 | def time_of_timespec(ts): 326 | return ts.tv_sec + ts.tv_nsec / 10 ** 9 327 | 328 | def set_st_attrs(st, attrs): 329 | for key, val in attrs.items(): 330 | if key in ('st_atime', 'st_mtime', 'st_ctime', 'st_birthtime'): 331 | timespec = getattr(st, key + 'spec') 332 | timespec.tv_sec = int(val) 333 | timespec.tv_nsec = int((val - timespec.tv_sec) * 10 ** 9) 334 | elif hasattr(st, key): 335 | setattr(st, key, val) 336 | 337 | 338 | def fuse_get_context(): 339 | 'Returns a (uid, gid, pid) tuple' 340 | 341 | ctxp = _libfuse.fuse_get_context() 342 | ctx = ctxp.contents 343 | return ctx.uid, ctx.gid, ctx.pid 344 | 345 | 346 | class FuseOSError(OSError): 347 | def __init__(self, errno): 348 | super(FuseOSError, self).__init__(errno, strerror(errno)) 349 | 350 | 351 | class FUSE(object): 352 | ''' 353 | This class is the lower level interface and should not be subclassed under 354 | normal use. Its methods are called by fuse. 355 | 356 | Assumes API version 2.6 or later. 357 | ''' 358 | 359 | OPTIONS = ( 360 | ('foreground', '-f'), 361 | ('debug', '-d'), 362 | ('nothreads', '-s'), 363 | ) 364 | 365 | def __init__(self, operations, mountpoint, raw_fi=False, encoding='utf-8', 366 | **kwargs): 367 | 368 | ''' 369 | Setting raw_fi to True will cause FUSE to pass the fuse_file_info 370 | class as is to Operations, instead of just the fh field. 371 | 372 | This gives you access to direct_io, keep_cache, etc. 
373 | ''' 374 | 375 | self.operations = operations 376 | self.raw_fi = raw_fi 377 | self.encoding = encoding 378 | 379 | args = ['fuse'] 380 | 381 | args.extend(flag for arg, flag in self.OPTIONS 382 | if kwargs.pop(arg, False)) 383 | 384 | kwargs.setdefault('fsname', operations.__class__.__name__) 385 | args.append('-o') 386 | args.append(','.join(self._normalize_fuse_options(**kwargs))) 387 | args.append(mountpoint) 388 | 389 | args = [arg.encode(encoding) for arg in args] 390 | argv = (c_char_p * len(args))(*args) 391 | 392 | fuse_ops = fuse_operations() 393 | for name, prototype in fuse_operations._fields_: 394 | if prototype != c_voidp and getattr(operations, name, None): 395 | op = partial(self._wrapper, getattr(self, name)) 396 | setattr(fuse_ops, name, prototype(op)) 397 | 398 | try: 399 | old_handler = signal(SIGINT, SIG_DFL) 400 | except ValueError: 401 | old_handler = SIG_DFL 402 | 403 | err = _libfuse.fuse_main_real(len(args), argv, pointer(fuse_ops), 404 | sizeof(fuse_ops), None) 405 | 406 | try: 407 | signal(SIGINT, old_handler) 408 | except ValueError: 409 | pass 410 | 411 | del self.operations # Invoke the destructor 412 | if err: 413 | raise RuntimeError(err) 414 | 415 | @staticmethod 416 | def _normalize_fuse_options(**kargs): 417 | for key, value in kargs.items(): 418 | if isinstance(value, bool): 419 | if value is True: yield key 420 | else: 421 | yield '%s=%s' % (key, value) 422 | 423 | @staticmethod 424 | def _wrapper(func, *args, **kwargs): 425 | 'Decorator for the methods that follow' 426 | 427 | try: 428 | return func(*args, **kwargs) or 0 429 | except OSError as e: 430 | return -(e.errno or EFAULT) 431 | except: 432 | print_exc() 433 | return -EFAULT 434 | 435 | def getattr(self, path, buf): 436 | return self.fgetattr(path, buf, None) 437 | 438 | def readlink(self, path, buf, bufsize): 439 | ret = self.operations('readlink', path.decode(self.encoding)) \ 440 | .encode(self.encoding) 441 | 442 | # copies a string into the given buffer 443 | # (null terminated and truncated if necessary) 444 | if not isinstance(ret, bytes): 445 | ret = ret.encode('utf-8') 446 | data = create_string_buffer(ret[:bufsize - 1]) 447 | memmove(buf, data, len(data)) 448 | return 0 449 | 450 | def mknod(self, path, mode, dev): 451 | return self.operations('mknod', path.decode(self.encoding), mode, dev) 452 | 453 | def mkdir(self, path, mode): 454 | return self.operations('mkdir', path.decode(self.encoding), mode) 455 | 456 | def unlink(self, path): 457 | return self.operations('unlink', path.decode(self.encoding)) 458 | 459 | def rmdir(self, path): 460 | return self.operations('rmdir', path.decode(self.encoding)) 461 | 462 | def symlink(self, source, target): 463 | 'creates a symlink `target -> source` (e.g. ln -s source target)' 464 | 465 | return self.operations('symlink', target.decode(self.encoding), 466 | source.decode(self.encoding)) 467 | 468 | def rename(self, old, new): 469 | return self.operations('rename', old.decode(self.encoding), 470 | new.decode(self.encoding)) 471 | 472 | def link(self, source, target): 473 | 'creates a hard link `target -> source` (e.g. 
ln source target)' 474 | 475 | return self.operations('link', target.decode(self.encoding), 476 | source.decode(self.encoding)) 477 | 478 | def chmod(self, path, mode): 479 | return self.operations('chmod', path.decode(self.encoding), mode) 480 | 481 | def chown(self, path, uid, gid): 482 | # Check if any of the arguments is a -1 that has overflowed 483 | if c_uid_t(uid + 1).value == 0: 484 | uid = -1 485 | if c_gid_t(gid + 1).value == 0: 486 | gid = -1 487 | 488 | return self.operations('chown', path.decode(self.encoding), uid, gid) 489 | 490 | def truncate(self, path, length): 491 | return self.operations('truncate', path.decode(self.encoding), length) 492 | 493 | def open(self, path, fip): 494 | fi = fip.contents 495 | if self.raw_fi: 496 | return self.operations('open', path.decode(self.encoding), fi) 497 | else: 498 | fi.fh = self.operations('open', path.decode(self.encoding), 499 | fi.flags) 500 | 501 | return 0 502 | 503 | def read(self, path, buf, size, offset, fip): 504 | if self.raw_fi: 505 | fh = fip.contents 506 | else: 507 | fh = fip.contents.fh 508 | 509 | ret = self.operations('read', path.decode(self.encoding), size, 510 | offset, fh) 511 | 512 | if not ret: return 0 513 | 514 | retsize = len(ret) 515 | assert retsize <= size, \ 516 | 'actual amount read %d greater than expected %d' % (retsize, size) 517 | 518 | if not isinstance(ret, bytes): 519 | ret = ret.encode('utf-8') 520 | data = create_string_buffer(ret, retsize) 521 | memmove(buf, ret, retsize) 522 | return retsize 523 | 524 | def write(self, path, buf, size, offset, fip): 525 | data = string_at(buf, size) 526 | 527 | if self.raw_fi: 528 | fh = fip.contents 529 | else: 530 | fh = fip.contents.fh 531 | 532 | return self.operations('write', path.decode(self.encoding), data, 533 | offset, fh) 534 | 535 | def statfs(self, path, buf): 536 | stv = buf.contents 537 | attrs = self.operations('statfs', path.decode(self.encoding)) 538 | for key, val in attrs.items(): 539 | if hasattr(stv, key): 540 | setattr(stv, key, val) 541 | 542 | return 0 543 | 544 | def flush(self, path, fip): 545 | if self.raw_fi: 546 | fh = fip.contents 547 | else: 548 | fh = fip.contents.fh 549 | 550 | return self.operations('flush', path.decode(self.encoding), fh) 551 | 552 | def release(self, path, fip): 553 | if self.raw_fi: 554 | fh = fip.contents 555 | else: 556 | fh = fip.contents.fh 557 | 558 | return self.operations('release', path.decode(self.encoding), fh) 559 | 560 | def fsync(self, path, datasync, fip): 561 | if self.raw_fi: 562 | fh = fip.contents 563 | else: 564 | fh = fip.contents.fh 565 | 566 | return self.operations('fsync', path.decode(self.encoding), datasync, 567 | fh) 568 | 569 | def setxattr(self, path, name, value, size, options, *args): 570 | return self.operations('setxattr', path.decode(self.encoding), 571 | name.decode(self.encoding), 572 | string_at(value, size), options, *args) 573 | 574 | def getxattr(self, path, name, value, size, *args): 575 | ret = self.operations('getxattr', path.decode(self.encoding), 576 | name.decode(self.encoding), *args) 577 | 578 | retsize = len(ret) 579 | # allow size queries 580 | if not value: return retsize 581 | 582 | # do not truncate 583 | if retsize > size: return -ERANGE 584 | if not isinstance(ret, bytes): 585 | ret = ret.encode('utf-8') 586 | buf = create_string_buffer(ret, retsize) # Does not add trailing 0 587 | memmove(value, buf, retsize) 588 | 589 | return retsize 590 | 591 | def listxattr(self, path, namebuf, size): 592 | attrs = self.operations('listxattr', 
593 |         ret = '\x00'.join(attrs) + '\x00'
594 |         # encode before measuring so the reported size is in bytes
595 |         if not isinstance(ret, bytes):
596 |             ret = ret.encode(self.encoding)
597 |         retsize = len(ret)
598 |         # allow size queries
599 |         if not namebuf: return retsize
600 | 
601 |         # do not truncate
602 |         if retsize > size: return -ERANGE
603 |         buf = create_string_buffer(ret, retsize)
604 |         memmove(namebuf, buf, retsize)
605 | 
606 |         return retsize
607 | 
608 |     def removexattr(self, path, name):
609 |         return self.operations('removexattr', path.decode(self.encoding),
610 |                                name.decode(self.encoding))
611 | 
612 |     def opendir(self, path, fip):
613 |         # Ignore raw_fi
614 |         fip.contents.fh = self.operations('opendir',
615 |                                           path.decode(self.encoding))
616 | 
617 |         return 0
618 | 
619 |     def readdir(self, path, buf, filler, offset, fip):
620 |         # Ignore raw_fi
621 |         for item in self.operations('readdir', path.decode(self.encoding),
622 |                                     fip.contents.fh):
623 | 
624 |             if isinstance(item, basestring):
625 |                 name, st, offset = item, None, 0
626 |             else:
627 |                 name, attrs, offset = item
628 |                 if attrs:
629 |                     st = c_stat()
630 |                     set_st_attrs(st, attrs)
631 |                 else:
632 |                     st = None
633 | 
634 |             if filler(buf, name.encode(self.encoding), st, offset) != 0:
635 |                 break
636 | 
637 |         return 0
638 | 
639 |     def releasedir(self, path, fip):
640 |         # Ignore raw_fi
641 |         return self.operations('releasedir', path.decode(self.encoding),
642 |                                fip.contents.fh)
643 | 
644 |     def fsyncdir(self, path, datasync, fip):
645 |         # Ignore raw_fi
646 |         return self.operations('fsyncdir', path.decode(self.encoding),
647 |                                datasync, fip.contents.fh)
648 | 
649 |     def init(self, conn):
650 |         return self.operations('init', '/')
651 | 
652 |     def destroy(self, private_data):
653 |         return self.operations('destroy', '/')
654 | 
655 |     def access(self, path, amode):
656 |         return self.operations('access', path.decode(self.encoding), amode)
657 | 
658 |     def create(self, path, mode, fip):
659 |         fi = fip.contents
660 |         path = path.decode(self.encoding)
661 | 
662 |         if self.raw_fi:
663 |             return self.operations('create', path, mode, fi)
664 |         else:
665 |             fi.fh = self.operations('create', path, mode)
666 |             return 0
667 | 
668 |     def ftruncate(self, path, length, fip):
669 |         if self.raw_fi:
670 |             fh = fip.contents
671 |         else:
672 |             fh = fip.contents.fh
673 | 
674 |         return self.operations('truncate', path.decode(self.encoding),
675 |                                length, fh)
676 | 
677 |     def fgetattr(self, path, buf, fip):
678 |         memset(buf, 0, sizeof(c_stat))
679 | 
680 |         st = buf.contents
681 |         if not fip:
682 |             fh = fip
683 |         elif self.raw_fi:
684 |             fh = fip.contents
685 |         else:
686 |             fh = fip.contents.fh
687 | 
688 |         attrs = self.operations('getattr', path.decode(self.encoding), fh)
689 |         set_st_attrs(st, attrs)
690 |         return 0
691 | 
692 |     def lock(self, path, fip, cmd, lock):
693 |         if self.raw_fi:
694 |             fh = fip.contents
695 |         else:
696 |             fh = fip.contents.fh
697 | 
698 |         return self.operations('lock', path.decode(self.encoding), fh, cmd,
699 |                                lock)
700 | 
701 |     def utimens(self, path, buf):
702 |         if buf:
703 |             atime = time_of_timespec(buf.contents.actime)
704 |             mtime = time_of_timespec(buf.contents.modtime)
705 |             times = (atime, mtime)
706 |         else:
707 |             times = None
708 | 
709 |         return self.operations('utimens', path.decode(self.encoding), times)
710 | 
711 |     def bmap(self, path, blocksize, idx):
712 |         return self.operations('bmap', path.decode(self.encoding), blocksize,
713 |                                idx)
714 | 
715 | 
716 | class Operations(object):
717 |     '''
718 |     This class should be subclassed and passed as an argument to FUSE on
719 |     initialization. All operations should raise a FuseOSError exception on
720 |     error.
721 | 
722 |     When in doubt of what an operation should do, check the FUSE header file
723 |     or the corresponding system call man page.
724 |     '''
725 | 
726 |     def __call__(self, op, *args):
727 |         if not hasattr(self, op):
728 |             raise FuseOSError(EFAULT)
729 |         return getattr(self, op)(*args)
730 | 
731 |     def access(self, path, amode):
732 |         return 0
733 | 
734 |     bmap = None
735 | 
736 |     def chmod(self, path, mode):
737 |         raise FuseOSError(EROFS)
738 | 
739 |     def chown(self, path, uid, gid):
740 |         raise FuseOSError(EROFS)
741 | 
742 |     def create(self, path, mode, fi=None):
743 |         '''
744 |         When raw_fi is False (default case), fi is None and create should
745 |         return a numerical file handle.
746 | 
747 |         When raw_fi is True the file handle should be set directly by create
748 |         and return 0.
749 |         '''
750 | 
751 |         raise FuseOSError(EROFS)
752 | 
753 |     def destroy(self, path):
754 |         'Called on filesystem destruction. Path is always /'
755 | 
756 |         pass
757 | 
758 |     def flush(self, path, fh):
759 |         return 0
760 | 
761 |     def fsync(self, path, datasync, fh):
762 |         return 0
763 | 
764 |     def fsyncdir(self, path, datasync, fh):
765 |         return 0
766 | 
767 |     def getattr(self, path, fh=None):
768 |         '''
769 |         Returns a dictionary with keys identical to the stat C structure of
770 |         stat(2).
771 | 
772 |         st_atime, st_mtime and st_ctime should be floats.
773 | 
774 |         NOTE: There is an incompatibility between Linux and Mac OS X
775 |         concerning st_nlink of directories. Mac OS X counts all files inside
776 |         the directory, while Linux counts only the subdirectories.
777 |         '''
778 | 
779 |         if path != '/':
780 |             raise FuseOSError(ENOENT)
781 |         return dict(st_mode=(S_IFDIR | 0o755), st_nlink=2)
782 | 
783 |     def getxattr(self, path, name, position=0):
784 |         raise FuseOSError(ENOTSUP)
785 | 
786 |     def init(self, path):
787 |         '''
788 |         Called on filesystem initialization. (Path is always /)
789 | 
790 |         Use it instead of __init__ if you start threads on initialization.
791 |         '''
792 | 
793 |         pass
794 | 
795 |     def link(self, target, source):
796 |         'creates a hard link `target -> source` (e.g. ln source target)'
797 | 
798 |         raise FuseOSError(EROFS)
799 | 
800 |     def listxattr(self, path):
801 |         return []
802 | 
803 |     lock = None
804 | 
805 |     def mkdir(self, path, mode):
806 |         raise FuseOSError(EROFS)
807 | 
808 |     def mknod(self, path, mode, dev):
809 |         raise FuseOSError(EROFS)
810 | 
811 |     def open(self, path, flags):
812 |         '''
813 |         When raw_fi is False (default case), open should return a numerical
814 |         file handle.
815 | 
816 |         When raw_fi is True the signature of open becomes:
817 |             open(self, path, fi)
818 | 
819 |         and the file handle should be set directly.
820 |         '''
821 | 
822 |         return 0
823 | 
824 |     def opendir(self, path):
825 |         'Returns a numerical file handle.'
826 | 
827 |         return 0
828 | 
829 |     def read(self, path, size, offset, fh):
830 |         'Returns a string containing the data requested.'
831 | 
832 |         raise FuseOSError(EIO)
833 | 
834 |     def readdir(self, path, fh):
835 |         '''
836 |         Can return either a list of names, or a list of (name, attrs, offset)
837 |         tuples. attrs is a dict as in getattr.
838 |         '''
839 | 
840 |         return ['.', '..']
841 | 
842 |     def readlink(self, path):
843 |         raise FuseOSError(ENOENT)
844 | 
845 |     def release(self, path, fh):
846 |         return 0
847 | 
848 |     def releasedir(self, path, fh):
849 |         return 0
850 | 
851 |     def removexattr(self, path, name):
852 |         raise FuseOSError(ENOTSUP)
853 | 
854 |     def rename(self, old, new):
855 |         raise FuseOSError(EROFS)
856 | 
857 |     def rmdir(self, path):
858 |         raise FuseOSError(EROFS)
859 | 
860 |     def setxattr(self, path, name, value, options, position=0):
861 |         raise FuseOSError(ENOTSUP)
862 | 
863 |     def statfs(self, path):
864 |         '''
865 |         Returns a dictionary with keys identical to the statvfs C structure of
866 |         statvfs(3).
867 | 
868 |         On Mac OS X f_bsize and f_frsize must be a power of 2
869 |         (minimum 512).
870 |         '''
871 | 
872 |         return {}
873 | 
874 |     def symlink(self, target, source):
875 |         'creates a symlink `target -> source` (e.g. ln -s source target)'
876 | 
877 |         raise FuseOSError(EROFS)
878 | 
879 |     def truncate(self, path, length, fh=None):
880 |         raise FuseOSError(EROFS)
881 | 
882 |     def unlink(self, path):
883 |         raise FuseOSError(EROFS)
884 | 
885 |     def utimens(self, path, times=None):
886 |         'times is an (atime, mtime) tuple. If None, use the current time.'
887 | 
888 |         return 0
889 | 
890 |     def write(self, path, data, offset, fh):
891 |         raise FuseOSError(EROFS)
892 | 
893 | 
894 | class LoggingMixIn:
895 |     log = logging.getLogger('fuse.log-mixin')
896 | 
897 |     def __call__(self, op, path, *args):
898 |         self.log.debug('-> %s %s %s', op, path, repr(args))
899 |         ret = '[Unhandled Exception]'
900 |         try:
901 |             ret = getattr(self, op)(path, *args)
902 |             return ret
903 |         except OSError as e:
904 |             ret = str(e)
905 |             raise
906 |         finally:
907 |             self.log.debug('<- %s %s', op, repr(ret))
908 | 
--------------------------------------------------------------------------------
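The `Operations` docstrings above spell out the return conventions: `getattr` returns a stat-like dict, `readdir` returns names or `(name, attrs, offset)` tuples, and `read` returns the requested data. As a minimal sketch of how the pieces fit together, here is a read-only filesystem exposing a single file. The `HelloFS` name and the mountpoint argument are illustrative, and the import path assumes the module bundled above as `yas3fs/fuse.py`:

    from errno import ENOENT
    from stat import S_IFDIR, S_IFREG
    from sys import argv
    from time import time

    from yas3fs.fuse import FUSE, FuseOSError, LoggingMixIn, Operations

    class HelloFS(LoggingMixIn, Operations):
        'Read-only filesystem exposing a single /hello file (illustrative).'

        def getattr(self, path, fh=None):
            now = time()
            if path == '/':
                return dict(st_mode=(S_IFDIR | 0o755), st_nlink=2,
                            st_atime=now, st_mtime=now, st_ctime=now)
            if path == '/hello':
                return dict(st_mode=(S_IFREG | 0o444), st_nlink=1,
                            st_size=6, st_atime=now, st_mtime=now,
                            st_ctime=now)
            raise FuseOSError(ENOENT)  # anything else does not exist

        def readdir(self, path, fh):
            # a plain list of names; (name, attrs, offset) tuples also work
            return ['.', '..', 'hello']

        def read(self, path, size, offset, fh):
            return 'hello\n'[offset:offset + size]

    if __name__ == '__main__':
        # foreground=True maps to the -f switch via FUSE.OPTIONS
        FUSE(HelloFS(), argv[1], foreground=True)

`LoggingMixIn` is listed first so its `__call__` wraps every operation with debug logging; enable it with `logging.basicConfig(level=logging.DEBUG)` to see the `->`/`<-` lines.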
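The `open` and `create` docstrings also describe how the handler signatures change when `FUSE` is constructed with `raw_fi=True`. A hedged sketch of the two modes follows; the handle value and the commented-out mountpoints are arbitrary:

    from yas3fs.fuse import FUSE, Operations

    class DefaultHandles(Operations):
        def open(self, path, flags):
            return 17                 # numeric handle; FUSE stores it in fi.fh

    class RawHandles(Operations):
        def open(self, path, fi):     # fi is the fuse_file_info struct itself
            fi.fh = 17                # the handle must be set directly...
            fi.direct_io = 1          # ...and raw mode exposes flags like this
            return 0

    # FUSE(DefaultHandles(), '/mnt/default')
    # FUSE(RawHandles(), '/mnt/raw', raw_fi=True)

In both cases the handle later arrives as the `fh` argument of `read`, `write`, `flush`, `release` and friends, which is exactly what the `fip.contents` / `fip.contents.fh` branches in the wrapper methods above select between.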