├── .github └── workflows │ └── deploy-docs.yml ├── .gitignore ├── .readthedocs.yml ├── .travis.yml ├── Dockerfile ├── LICENSE ├── Makefile ├── README.md ├── docs ├── Administrator-Guide │ ├── Access-Control-Lists.md │ ├── Accessing-Gluster-from-Windows.md │ ├── Automatic-File-Replication.md │ ├── Bareos.md │ ├── Brick-Naming-Conventions.md │ ├── Building-QEMU-With-gfapi-For-Debian-Based-Systems.md │ ├── Consul.md │ ├── Directory-Quota.md │ ├── Events-APIs.md │ ├── Export-And-Netgroup-Authentication.md │ ├── Geo-Replication.md │ ├── Gluster-On-ZFS.md │ ├── GlusterFS-Cinder.md │ ├── GlusterFS-Coreutils.md │ ├── GlusterFS-Filter.md │ ├── GlusterFS-Introduction.md │ ├── GlusterFS-Keystone-Quickstart.md │ ├── GlusterFS-iSCSI.md │ ├── Handling-of-users-with-many-groups.md │ ├── Hook-scripts.md │ ├── Linux-Kernel-Tuning.md │ ├── Logging.md │ ├── Managing-Snapshots.md │ ├── Managing-Volumes.md │ ├── Mandatory-Locks.md │ ├── Monitoring-Workload.md │ ├── NFS-Ganesha-GlusterFS-Integration.md │ ├── Network-Configurations-Techniques.md │ ├── Object-Storage.md │ ├── Performance-Testing.md │ ├── Performance-Tuning.md │ ├── Puppet.md │ ├── RDMA-Transport.md │ ├── SSL.md │ ├── Setting-Up-Clients.md │ ├── Setting-Up-Volumes.md │ ├── Split-brain-and-ways-to-deal-with-it.md │ ├── Start-Stop-Daemon.md │ ├── Storage-Pools.md │ ├── Thin-Arbiter-Volumes.md │ ├── Trash.md │ ├── Tuning-Volume-Options.md │ ├── arbiter-volumes-and-quorum.md │ ├── formatting-and-mounting-bricks.md │ ├── index.md │ ├── io_uring.md │ ├── overview.md │ └── setting-up-storage.md ├── CLI-Reference │ └── cli-main.md ├── Contributors-Guide │ ├── Adding-your-blog.md │ ├── Bug-Reporting-Guidelines.md │ ├── Bug-Triage.md │ ├── GlusterFS-Release-process.md │ ├── Guidelines-For-Maintainers.md │ └── Index.md ├── Developer-guide │ ├── Backport-Guidelines.md │ ├── Building-GlusterFS.md │ ├── Developers-Index.md │ ├── Development-Workflow.md │ ├── Easy-Fix-Bugs.md │ ├── Fixing-issues-reported-by-tools-for-static-code-analysis.md │ ├── Projects.md │ ├── Simplified-Development-Workflow.md │ ├── compiling-rpms.md │ └── coredump-on-customer-setup.md ├── GlusterFS-Tools │ ├── README.md │ ├── gfind-missing-files.md │ └── glusterfind.md ├── Install-Guide │ ├── Common-criteria.md │ ├── Community-Packages.md │ ├── Configure.md │ ├── Install.md │ ├── Overview.md │ ├── Setup-Bare-metal.md │ ├── Setup-aws.md │ └── Setup-virt.md ├── Ops-Guide │ ├── Overview.md │ └── Tools.md ├── Quick-Start-Guide │ ├── Architecture.md │ └── Quickstart.md ├── Troubleshooting │ ├── README.md │ ├── gfid-to-path.md │ ├── gluster-crash.md │ ├── resolving-splitbrain.md │ ├── statedump.md │ ├── troubleshooting-afr.md │ ├── troubleshooting-filelocks.md │ ├── troubleshooting-georep.md │ ├── troubleshooting-glusterd.md │ ├── troubleshooting-gnfs.md │ └── troubleshooting-memory.md ├── Upgrade-Guide │ ├── README.md │ ├── generic-upgrade-procedure.md │ ├── op-version.md │ ├── upgrade-to-10.md │ ├── upgrade-to-11.md │ ├── upgrade-to-3.10.md │ ├── upgrade-to-3.11.md │ ├── upgrade-to-3.12.md │ ├── upgrade-to-3.13.md │ ├── upgrade-to-3.5.md │ ├── upgrade-to-3.6.md │ ├── upgrade-to-3.7.md │ ├── upgrade-to-3.8.md │ ├── upgrade-to-3.9.md │ ├── upgrade-to-4.0.md │ ├── upgrade-to-4.1.md │ ├── upgrade-to-5.md │ ├── upgrade-to-6.md │ ├── upgrade-to-7.md │ ├── upgrade-to-8.md │ └── upgrade-to-9.md ├── analytics.txt ├── css │ └── custom.css ├── glossary.md ├── google64817fdc11b2f6b6.html ├── images │ ├── 640px-GlusterFS-Architecture.png │ ├── Bugzilla-watching-bugs@gluster.org.png │ ├── 
Bugzilla-watching.png │ ├── Distribute.png │ ├── Distributed-Replicated-Volume.png │ ├── Distributed-Striped-Replicated-Volume.png │ ├── Distributed-Striped-Volume.png │ ├── Distributed-Volume.png │ ├── FUSE-access.png │ ├── FUSE-structure.png │ ├── First-translator.png │ ├── Geo-Rep-LAN.png │ ├── Geo-Rep-WAN.png │ ├── Geo-Rep03-Internet.png │ ├── Geo-Rep04-Cascading.png │ ├── Geo-replication-async.jpg │ ├── Geo-replication-sync.png │ ├── GlusterFS-Architecture.png │ ├── GlusterFS_Translator_Stack.png │ ├── Graph.png │ ├── Hadoop-Architecture.png │ ├── Libgfapi-access.png │ ├── New-DispersedVol.png │ ├── New-Distributed-DisperseVol.png │ ├── New-Distributed-ReplicatedVol.png │ ├── New-DistributedVol.png │ ├── New-ReplicatedVol.png │ ├── Overallprocess.png │ ├── Replicated-Volume.png │ ├── Striped-Replicated-Volume.png │ ├── Striped-Volume.png │ ├── Translator-h.png │ ├── Translator.png │ ├── UFO-Architecture.png │ ├── VSA-Architecture.png │ ├── favicon.ico │ ├── icon.svg │ ├── logo.png │ └── url.png ├── index.md ├── js │ └── custom-features.js ├── presentations │ └── index.md ├── release-notes │ ├── 10.0.md │ ├── 10.1.md │ ├── 10.2.md │ ├── 10.3.md │ ├── 10.4.md │ ├── 10.5.md │ ├── 11.0.md │ ├── 11.1.md │ ├── 3.10.0.md │ ├── 3.10.1.md │ ├── 3.10.10.md │ ├── 3.10.11.md │ ├── 3.10.12.md │ ├── 3.10.2.md │ ├── 3.10.3.md │ ├── 3.10.4.md │ ├── 3.10.5.md │ ├── 3.10.6.md │ ├── 3.10.7.md │ ├── 3.10.8.md │ ├── 3.10.9.md │ ├── 3.11.0.md │ ├── 3.11.1.md │ ├── 3.11.2.md │ ├── 3.11.3.md │ ├── 3.12.0.md │ ├── 3.12.1.md │ ├── 3.12.10.md │ ├── 3.12.11.md │ ├── 3.12.12.md │ ├── 3.12.13.md │ ├── 3.12.14.md │ ├── 3.12.15.md │ ├── 3.12.2.md │ ├── 3.12.3.md │ ├── 3.12.4.md │ ├── 3.12.5.md │ ├── 3.12.6.md │ ├── 3.12.7.md │ ├── 3.12.8.md │ ├── 3.12.9.md │ ├── 3.13.0.md │ ├── 3.13.1.md │ ├── 3.13.2.md │ ├── 3.5.0.md │ ├── 3.5.1.md │ ├── 3.5.2.md │ ├── 3.5.3.md │ ├── 3.5.4.md │ ├── 3.6.0.md │ ├── 3.6.3.md │ ├── 3.7.0.md │ ├── 3.7.1.md │ ├── 3.9.0.md │ ├── 4.0.0.md │ ├── 4.0.1.md │ ├── 4.0.2.md │ ├── 4.1.0.md │ ├── 4.1.1.md │ ├── 4.1.10.md │ ├── 4.1.2.md │ ├── 4.1.3.md │ ├── 4.1.4.md │ ├── 4.1.5.md │ ├── 4.1.6.md │ ├── 4.1.7.md │ ├── 4.1.8.md │ ├── 4.1.9.md │ ├── 5.0.md │ ├── 5.1.md │ ├── 5.10.md │ ├── 5.11.md │ ├── 5.12.md │ ├── 5.13.md │ ├── 5.2.md │ ├── 5.3.md │ ├── 5.5.md │ ├── 5.6.md │ ├── 5.8.md │ ├── 5.9.md │ ├── 6.0.md │ ├── 6.1.md │ ├── 6.10.md │ ├── 6.2.md │ ├── 6.3.md │ ├── 6.4.md │ ├── 6.5.md │ ├── 6.6.md │ ├── 6.7.md │ ├── 6.8.md │ ├── 6.9.md │ ├── 7.0.md │ ├── 7.1.md │ ├── 7.2.md │ ├── 7.3.md │ ├── 7.4.md │ ├── 7.5.md │ ├── 7.6.md │ ├── 7.7.md │ ├── 7.8.md │ ├── 7.9.md │ ├── 8.0.md │ ├── 8.1.md │ ├── 8.2.md │ ├── 8.3.md │ ├── 8.4.md │ ├── 8.5.md │ ├── 8.6.md │ ├── 9.0.md │ ├── 9.1.md │ ├── 9.2.md │ ├── 9.3.md │ ├── 9.4.md │ ├── 9.5.md │ ├── 9.6.md │ ├── geo-rep-in-3.7.md │ ├── glusterfs-selinux2.0.1.md │ └── index.md └── security.md ├── glustertheme └── base.html ├── mkdocs.yml └── requirements.txt /.github/workflows/deploy-docs.yml: -------------------------------------------------------------------------------- 1 | name: Publish docs via GitHub Pages 2 | on: 3 | push: 4 | branches: 5 | - main 6 | 7 | jobs: 8 | build: 9 | name: Deploy docs 10 | runs-on: ubuntu-latest 11 | steps: 12 | - name: Checkout main 13 | uses: actions/checkout@v3 14 | 15 | - name: Deploy docs 16 | uses: mhausenblas/mkdocs-deploy-gh-pages@master 17 | # Or use mhausenblas/mkdocs-deploy-gh-pages@nomaterial to build without the mkdocs-material theme 18 | env: 19 | GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} 20 | CUSTOM_DOMAIN: 
docs.gluster.org 21 | CONFIG_FILE: mkdocs.yml 22 | EXTRA_PACKAGES: build-base 23 | # GITHUB_DOMAIN: github.myenterprise.com 24 | REQUIREMENTS: requirements.txt 25 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | site/ 2 | env/ 3 | .env 4 | /.buildlog 5 | -------------------------------------------------------------------------------- /.readthedocs.yml: -------------------------------------------------------------------------------- 1 | version: 2 2 | 3 | build: 4 | os: ubuntu-22.04 5 | tools: 6 | python: "3.11" 7 | 8 | mkdocs: 9 | configuration: mkdocs.yml 10 | 11 | python: 12 | install: 13 | - requirements: requirements.txt 14 | -------------------------------------------------------------------------------- /.travis.yml: -------------------------------------------------------------------------------- 1 | language: python 2 | python: 3 | - "3.6" 4 | install: "pip install -r requirements.txt" 5 | script: mkdocs build --clean --strict 6 | -------------------------------------------------------------------------------- /Dockerfile: -------------------------------------------------------------------------------- 1 | FROM centos:7 2 | 3 | # RUN yum install -y epel-release 4 | RUN yum install -y python3 python3-setuptools 5 | RUN pip3 install mkdocs mkdocs-material 6 | 7 | ENV LC_ALL=en_US.utf-8 LANG=en_US.utf-8 8 | 9 | ENTRYPOINT ["mkdocs", "build"] 10 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2019 Gluster.org 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /Makefile: -------------------------------------------------------------------------------- 1 | build: docker/build 2 | @docker run --rm -ti --user $(shell id -u):$(shell id -g) -v $(PWD):/docs:ro -v $(PWD)/site:/docs/site:rw -w /docs $(shell grep "Successfully built" .buildlog | cut -d ' ' -f 3) 3 | 4 | docker/build: 5 | @docker build . 
| tee .buildlog 6 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # glusterdocs 2 | 3 | Source code to gluster documentation: http://docs.gluster.org/ 4 | 5 | **Important Note: 6 | This repo had its git history re-written on 19 May 2016. 7 | Please create a fresh fork or clone if you have an older local clone.** 8 | 9 | # Building the docs 10 | 11 | If you are on EPEL 7 or Fedora, the first thing you will need is to install 12 | mkdocs, with the following command : 13 | 14 | # sudo yum install mkdocs 15 | 16 | For Fedora 30+ (run the following in root) 17 | 18 | # dnf install python-pip 19 | # pip install -r requirements.txt 20 | 21 | Then you need to run mkdocs from the root of that repository: 22 | 23 | $ mkdocs build 24 | 25 | If you see an error about `docs_dir` when using recent versions of mkdocs , try running additional steps mentioned below: 26 | 27 | $ cp ./mkdocs.yml ../ 28 | $ cd .. 29 | 30 | Edit below entry in the copied mkdocs.yml file 31 | 32 | docs_dir: ./glusterdocs/ 33 | 34 | Then you need to run mkdocs 35 | 36 | $ mkdocs build 37 | 38 | The result will be in the `site/` subdirectory, in HTML. 39 | 40 | # Building the docs in Docker 41 | 42 | Included is a Makefile and a Dockerfile, which enables you to easily build the 43 | docs inside Docker without installing any dependencies on your system. 44 | 45 | Simply run the following command to compile the docs: 46 | 47 | ```sh 48 | make 49 | ``` 50 | 51 | This Makefile recipe builds a Docker image containing the dependencies required 52 | and runs `mkdocs` inside the built image, taking care to run the container as 53 | the current `uid` and `gid` so that your user has ownership of the results in 54 | the `./site` directory. 55 | -------------------------------------------------------------------------------- /docs/Administrator-Guide/Brick-Naming-Conventions.md: -------------------------------------------------------------------------------- 1 | # Brick Naming Conventions 2 | 3 | FHS-2.3 isn't entirely clear on where data shared by the server should reside. It does state that "_/srv contains site-specific data which is served by this system_", but is GlusterFS data site-specific? 4 | 5 | The consensus seems to lean toward using `/data`. A good hierarchical method for placing bricks is: 6 | 7 | ```text 8 | /data/glusterfs///brick 9 | ``` 10 | 11 | In this example, `` is the filesystem that is mounted. 12 | 13 | ### Example: One Brick Per Server 14 | 15 | A physical disk _/dev/sdb_ is going to be used as brick storage for a volume you're about to create named _myvol1_. You've partitioned and formatted _/dev/sdb1_ with XFS on each of 4 servers. 16 | 17 | On all 4 servers: 18 | 19 | ```console 20 | mkdir -p /data/glusterfs/myvol1/brick1 21 | mount /dev/sdb1 /data/glusterfs/myvol1/brick1 22 | ``` 23 | 24 | We're going to define the actual brick in the `brick` directory on that filesystem. This helps by causing the brick to fail to start if the XFS filesystem isn't mounted. 25 | 26 | On just one server: 27 | 28 | ```console 29 | gluster volume create myvol1 replica 2 server{1..4}:/data/glusterfs/myvol1/brick1/brick 30 | ``` 31 | 32 | This will create the volume _myvol1_ which uses the directory `/data/glusterfs/myvol1/brick1/brick` on all 4 servers. 
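If you want to verify the result before moving on, the volume can be started and inspected straight away; the `Bricks:` section of the info output should list the `/data/glusterfs/myvol1/brick1/brick` path on all four servers:

```console
gluster volume start myvol1
gluster volume info myvol1
```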
33 | 34 | ### Example: Two Bricks Per Server 35 | 36 | Two physical disks _/dev/sdb_ and _/dev/sdc_ are going to be used as brick storage for a volume you're about to create named _myvol2_. You've partitioned and formatted _/dev/sdb1_ and _/dev/sdc1_ with XFS on each of 4 servers. 37 | 38 | On all 4 servers: 39 | 40 | ```console 41 | mkdir -p /data/glusterfs/myvol2/brick{1,2} 42 | mount /dev/sdb1 /data/glusterfs/myvol2/brick1 43 | mount /dev/sdc1 /data/glusterfs/myvol2/brick2 44 | ``` 45 | 46 | Again we're going to define the actual brick in the `brick` directory on these filesystems. 47 | 48 | On just one server: 49 | 50 | ```console 51 | gluster volume create myvol2 replica 2 \ 52 | server{1..4}:/data/glusterfs/myvol2/brick1/brick \ 53 | server{1..4}:/data/glusterfs/myvol2/brick2/brick 54 | ``` 55 | 56 | **Note:** It might be tempting to try `gluster volume create myvol2 replica 2 server{1..4}:/data/glusterfs/myvol2/brick{1,2}/brick` but Bash would expand the last `{}` first, so you would end up replicating between the two bricks on each servers, instead of across servers. 57 | -------------------------------------------------------------------------------- /docs/Administrator-Guide/GlusterFS-Filter.md: -------------------------------------------------------------------------------- 1 | # Modifying .vol files with a filter 2 | 3 | If you need to make manual changes to a .vol file it is recommended to 4 | make these through the client interface ('gluster foo'). Making changes 5 | directly to .vol files is discouraged, because it cannot be predicted 6 | when a .vol file will be reset on disk, for example with a 'gluster set 7 | foo' command. The command line interface was never designed to read the 8 | .vol files, but rather to keep state and rebuild them (from 9 | `/var/lib/glusterd/vols/$vol/info`). There is, however, another way to 10 | do this. 11 | 12 | You can create a shell script in the directory 13 | `/usr/lib*/glusterfs/$VERSION/filter`. All scripts located there will 14 | be executed every time the .vol files are written back to disk. The 15 | first and only argument passed to all script located there is the name 16 | of the .vol file. 17 | 18 | So you could create a script there that looks like this: 19 | 20 | ```console 21 | #!/bin/sh 22 | sed -i 'some-sed-magic' "$1" 23 | ``` 24 | 25 | Which will run the script, which in turn will run the sed command on the 26 | .vol file (passed as \$1). 27 | 28 | Importantly, the script needs to be set as executable (eg via chmod), 29 | else it won't be run. 30 | -------------------------------------------------------------------------------- /docs/Administrator-Guide/GlusterFS-Introduction.md: -------------------------------------------------------------------------------- 1 | # What is Gluster ? 2 | 3 | Gluster is a scalable, distributed file system that aggregates disk storage resources from multiple servers into a single global namespace. 
4 | 5 | ### Advantages 6 | 7 | - Scales to several petabytes 8 | - Handles thousands of clients 9 | - POSIX compatible 10 | - Uses commodity hardware 11 | - Can use any ondisk filesystem that supports extended attributes 12 | - Accessible using industry standard protocols like NFS and SMB 13 | - Provides replication, quotas, geo-replication, snapshots and bitrot detection 14 | - Allows optimization for different workloads 15 | - Open Source 16 | 17 | ![640px-glusterfs_architecture](../images/640px-GlusterFS-Architecture.png) 18 | 19 | Enterprises can scale capacity, performance, and availability on demand, with no vendor lock-in, across on-premise, public cloud, and hybrid environments. 20 | Gluster is used in production at thousands of organisations spanning media, healthcare, government, education, web 2.0, and financial services. 21 | 22 | ### Commercial offerings and support 23 | 24 | Several companies offer support or [consulting](https://www.gluster.org/support/). 25 | 26 | [Red Hat Gluster Storage](http://www.redhat.com/en/technologies/storage/gluster) 27 | is a commercial storage software product, based on Gluster. 28 | -------------------------------------------------------------------------------- /docs/Administrator-Guide/Logging.md: -------------------------------------------------------------------------------- 1 | # GlusterFS service Logs and locations 2 | 3 | Below lists the component, services, and functionality based logs in the GlusterFS Server. As per the File System Hierarchy Standards (FHS) all the log files are placed in the `/var/log` directory. 4 | ⁠ 5 | 6 | ## Glusterd: 7 | 8 | glusterd logs are located at `/var/log/glusterfs/glusterd.log`. One glusterd log file per server. This log file also contains the snapshot and user logs. 9 | 10 | ## Gluster cli command: 11 | 12 | gluster cli logs are located at `/var/log/glusterfs/cli.log`. Gluster commands executed on a node in a GlusterFS Trusted Storage Pool is logged in `/var/log/glusterfs/cmd_history.log`. 13 | 14 | ## Bricks: 15 | 16 | Bricks logs are located at `/var/log/glusterfs/bricks/.log`. One log file per brick on the server 17 | 18 | ## Rebalance: 19 | 20 | rebalance logs are located at `/var/log/glusterfs/VOLNAME-rebalance.log` . One log file per volume on the server. 21 | 22 | ## Self heal deamon: 23 | 24 | self heal deamon are logged at `/var/log/glusterfs/glustershd.log`. One log file per server 25 | 26 | ## Quota: 27 | 28 | `/var/log/glusterfs/quotad.log` are log of the quota daemons running on each node. 29 | `/var/log/glusterfs/quota-crawl.log` Whenever quota is enabled, a file system crawl is performed and the corresponding log is stored in this file. 30 | `/var/log/glusterfs/quota-mount- VOLNAME.log` An auxiliary FUSE client is mounted in /VOLNAME of the glusterFS and the corresponding client logs found in this file. One log file per server and per volume from quota-mount. 31 | 32 | ## Gluster NFS: 33 | 34 | `/var/log/glusterfs/nfs.log ` One log file per server 35 | 36 | ## SAMBA Gluster: 37 | 38 | `/var/log/samba/glusterfs-VOLNAME-.log` . If the client mounts this on a glusterFS server node, the actual log file or the mount point may not be found. In such a case, the mount outputs of all the glusterFS type mount operations need to be considered. 
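A sketch of how to collect those mount outputs on the node in question (FUSE mounts of Gluster volumes show up with the `fuse.glusterfs` filesystem type):

```console
# list all GlusterFS FUSE mounts currently active on this node
mount -t fuse.glusterfs
```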
39 | 40 | ## Ganesha NFS : 41 | 42 | `/var/log/nfs-ganesha.log` 43 | 44 | ## FUSE Mount: 45 | 46 | `/var/log/glusterfs/.log ` 47 | 48 | ## Geo-replication: 49 | 50 | `/var/log/glusterfs/geo-replication/` 51 | `/var/log/glusterfs/geo-replication-secondary ` 52 | 53 | ## Gluster volume heal VOLNAME info command: 54 | 55 | `/var/log/glusterfs/glfsheal-VOLNAME.log` . One log file per server on which the command is executed. 56 | 57 | ## Gluster-swift: 58 | 59 | `/var/log/messages` 60 | 61 | ## SwiftKrbAuth: 62 | 63 | `/var/log/httpd/error_log ` 64 | -------------------------------------------------------------------------------- /docs/Administrator-Guide/Mandatory-Locks.md: -------------------------------------------------------------------------------- 1 | # Mandatory Locks 2 | 3 | Support for mandatory locks inside GlusterFS does not converge all by itself to what Linux kernel provides to user space file systems. Here we enforce core mandatory lock semantics with and without the help of file mode bits. Please read through the [design specification](https://github.com/gluster/glusterfs-specs/blob/master/done/GlusterFS%203.8/Mandatory%20Locks.md) which explains the whole concept behind the mandatory locks implementation done for GlusterFS. 4 | 5 | ## Implications and Usage 6 | 7 | By default, mandatory locking will be disabled for a volume and a volume set options is available to configure volume to operate under 3 different mandatory locking modes. 8 | 9 | ## Volume Option 10 | 11 | ```console 12 | gluster volume set locks.mandatory-locking 13 | ``` 14 | 15 | **off** - Disable mandatory locking for specified volume.
16 | **file** - Enable Linux kernel style mandatory locking semantics with the help of mode bits (not well tested).
17 | **forced** - Check for conflicting byte range locks for every data modifying operation in a volume.
18 | **optimal** - Combinational mode where POSIX clients can live with their advisory lock semantics which will still honour the mandatory locks acquired by other clients like SMB. 19 | 20 | **Note**:- Please refer the design doc for more information on these key values. 21 | 22 | #### Points to be remembered 23 | 24 | - Valid key values available with mandatory-locking volume set option are taken into effect only after a subsequent start/restart of the volume. 25 | - Due to some outstanding issues, it is recommended to turn off the performance translators in order to have the complete functionality of mandatory-locks when volume is configured in any one of the above described mandatory-locking modes. Please see the 'Known issue' section below for more details. 26 | 27 | #### Known issues 28 | 29 | - Since the whole logic of mandatory-locks are implemented within the locks translator loaded at the server side, early success returned to fops like open, read, write to upper/application layer by performance translators residing at the client side will impact the intended functionality of mandatory-locks. One such issue is being tracked in the following bugzilla report: 30 | 31 | 32 | 33 | - There is a possible race window uncovered with respect to mandatory locks and an ongoing read/write operation. For more details refer the bug report given below: 34 | 35 | 36 | -------------------------------------------------------------------------------- /docs/Administrator-Guide/io_uring.md: -------------------------------------------------------------------------------- 1 | # io_uring support in gluster 2 | 3 | io_uring is an asynchronous I/O interface similar to linux-aio, but aims to be more performant. 4 | Refer [https://kernel.dk/io_uring.pdf](https://kernel.dk/io_uring.pdf) and [https://kernel-recipes.org/en/2019/talks/faster-io-through-io_uring](https://kernel-recipes.org/en/2019/talks/faster-io-through-io_uring) for more details. 5 | 6 | Incorporating io_uring in various layers of gluster is an ongoing activity but beginning with glusterfs-9.0, support has been added to the posix translator via the `storage.linux-io_uring` volume option. When this option is enabled, the posix translator in the glusterfs brick process (at the server side) will use io_uring calls for reads, writes and fsyncs as opposed to the normal pread/pwrite based syscalls. 7 | 8 | #### Example: 9 | 10 | ```{ .console .no-copy } 11 | # gluster volume set testvol storage.linux-io_uring on 12 | volume set: success 13 | 14 | # gluster volume set testvol storage.linux-io_uring off 15 | volume set: success 16 | ``` 17 | 18 | This option can be enabled/disabled only when the volume is not running. 19 | i.e. you can toggle the option when the volume is `Created` or is `Stopped` as indicated in `gluster volume status $VOLNAME` 20 | -------------------------------------------------------------------------------- /docs/Administrator-Guide/overview.md: -------------------------------------------------------------------------------- 1 | ### Overview 2 | 3 | The Administration guide covers day to day management tasks as well as advanced configuration methods for your Gluster setup. 4 | 5 | You can manage your Gluster cluster using the [Gluster CLI](../CLI-Reference/cli-main.md) 6 | 7 | See the [glossary](../glossary.md) for an explanation of the various terms used in this document. 
8 | -------------------------------------------------------------------------------- /docs/Administrator-Guide/setting-up-storage.md: -------------------------------------------------------------------------------- 1 | # Setting Up Storage 2 | 3 | A volume is a logical collection of bricks where each brick is an export directory on a server in the trusted storage pool. 4 | Before creating a volume, you need to set up the bricks that will form the volume. 5 | 6 | - [Brick Naming Conventions](./Brick-Naming-Conventions.md) 7 | - [Formatting and Mounting Bricks](./formatting-and-mounting-bricks.md) 8 | - [Posix ACLS](./Access-Control-Lists.md) 9 | -------------------------------------------------------------------------------- /docs/Contributors-Guide/Adding-your-blog.md: -------------------------------------------------------------------------------- 1 | ### Adding your blog 2 | 3 | As a developer/user, you have blogged about gluster and want to share the post to Gluster community. 4 | 5 | OK, you can do that by editing planet-gluster [feeds](https://github.com/gluster/planet-gluster/blob/master/data/feeds.yml) on Github. 6 | 7 | Please find instructions mentioned in the file and send a pull request. 8 | 9 | Once approved, all your gluster related posts will appear in [planet.gluster.org](http://planet.gluster.org) website. 10 | -------------------------------------------------------------------------------- /docs/Contributors-Guide/Index.md: -------------------------------------------------------------------------------- 1 | # Workflow Guide 2 | 3 | ## Bug Handling 4 | 5 | - [Bug reporting guidelines](./Bug-Reporting-Guidelines.md) - 6 | Guideline for reporting a bug in GlusterFS 7 | - [Bug triage guidelines](./Bug-Triage.md) - Guideline on how to 8 | triage bugs for GlusterFS 9 | 10 | ## Release Process 11 | 12 | - [GlusterFS Release process](./GlusterFS-Release-process.md) - 13 | Our release process / checklist 14 | 15 | ## Patch Acceptance 16 | 17 | - The [Guidelines For Maintainers](./Guidelines-For-Maintainers.md) explains when 18 | maintainers can merge patches. 19 | 20 | ## Blogging about gluster 21 | 22 | - The [Adding your gluster blog](./Adding-your-blog.md) explains how to add your 23 | gluster blog to Community blogger. 24 | -------------------------------------------------------------------------------- /docs/Developer-guide/Backport-Guidelines.md: -------------------------------------------------------------------------------- 1 | # Backport Guidelines 2 | 3 | In GlusterFS project, as a policy, any new change, bug fix, etc., are to be 4 | fixed in 'devel' branch before release branches. When a bug is fixed in 5 | the devel branch, it might be desirable or necessary in release branch. 6 | 7 | This page describes the policy GlusterFS has regarding the backports. As 8 | a user, or contributor, being aware of this policy would help you to 9 | understand how to request for backport from community. 10 | 11 | ## Policy 12 | 13 | - No feature from devel would be backported to the release branch 14 | - CVE ie., security vulnerability [(listed on the CVE database)](https://cve.mitre.org/cve/search_cve_list.html) 15 | reported in the existing releases would be backported, after getting fixed 16 | in devel branch. 17 | - Only topics which bring about data loss or, unavailability would be 18 | backported to the release. 19 | - For any other issues, the project recommends that the installation be 20 | upgraded to a newer release where the specific bug has been addressed. 
21 | - Gluster provides 'rolling' upgrade support, i.e., one can upgrade their 22 | server version without stopping the application I/O, so we recommend migrating 23 | to higher version. 24 | 25 | ## Things to pay attention to while backporting a patch. 26 | 27 | If your patch meets the criteria above, or you are a user, who prefer to have a 28 | fix backported, because your current setup is facing issues, below are the 29 | steps you need to take care to submit a patch on release branch. 30 | 31 | - The patch should have same 'Change-Id'. 32 | 33 | ### How to contact release owners? 34 | 35 | All release owners are part of 'gluster-devel@gluster.org' mailing list. 36 | Please write your expectation from next release there, so we can take that 37 | to consideration while making the release. 38 | -------------------------------------------------------------------------------- /docs/Developer-guide/Developers-Index.md: -------------------------------------------------------------------------------- 1 | # Developers 2 | 3 | ### Contributing to the Gluster community 4 | 5 | --- 6 | 7 | Are you itching to send in patches and participate as a developer in the 8 | Gluster community? Here are a number of starting points for getting 9 | involved. All you need is your 'github' account to be handy. 10 | 11 | Remember that, [Gluster community](https://github.com/gluster) has multiple projects, each of which has its own way of handling PRs and patches. Decide on which project you want to contribute. Below documents are mostly about 'GlusterFS' project, which is the core of Gluster Community. 12 | 13 | ## Workflow 14 | 15 | - [Simplified Developer Workflow](./Simplified-Development-Workflow.md) 16 | 17 | - A simpler and faster intro to developing with GlusterFS, than the document below 18 | 19 | - [Developer Workflow](./Development-Workflow.md) 20 | 21 | - Covers detail about requirements from a patch; tools and toolkits used by developers. 22 | This is recommended reading in order to begin contributions to the project. 23 | 24 | - [GD2 Developer Workflow](https://github.com/gluster/glusterd2/blob/master/doc/development-guide.md) 25 | 26 | - Helps in on-boarding developers to contribute in GlusterD2 project. 27 | 28 | ## Compiling Gluster 29 | 30 | - [Building GlusterFS](./Building-GlusterFS.md) - How to compile 31 | Gluster from source code. 32 | 33 | ## Developing 34 | 35 | - [Projects](./Projects.md) - Ideas for projects you could 36 | create 37 | - [Fixing issues reported by tools for static code 38 | analysis](./Fixing-issues-reported-by-tools-for-static-code-analysis.md) 39 | 40 | - This is a good starting point for developers to fix bugs in GlusterFS project. 41 | 42 | ## Releases and Backports 43 | 44 | - [Backport Guidelines](./Backport-Guidelines.md) describe the steps that branches too. 45 | 46 | Some more GlusterFS Developer documentation can be found [in glusterfs documentation directory](https://github.com/gluster/glusterfs/tree/master/doc/developer-guide) 47 | -------------------------------------------------------------------------------- /docs/Developer-guide/Easy-Fix-Bugs.md: -------------------------------------------------------------------------------- 1 | # Easy Fix Bugs 2 | 3 | Fixing easy issues is an excellent method to start contributing patches to Gluster. 4 | 5 | Sometimes an _Easy Fix_ issue has a patch attached. In those cases, 6 | the _Patch_ keyword has been added to the bug. These bugs can be 7 | used by new contributors that would like to verify their workflow. 
[Bug 8 | 1099645](https://bugzilla.redhat.com/1099645) is one example of those. 9 | 10 | All such issues can be found [here](https://github.com/gluster/glusterfs/labels/EasyFix) 11 | 12 | ### Guidelines for new comers 13 | 14 | - While trying to write a patch, do not hesitate to ask questions. 15 | - If something in the documentation is unclear, we do need to know so 16 | that we can improve it. 17 | - There are no stupid questions, and it's more stupid to not ask 18 | questions that others can easily answer. Always assume that if you 19 | have a question, someone else would like to hear the answer too. 20 | 21 | [Reach out](https://www.gluster.org/community/) to the developers 22 | in #gluster on [Gluster Slack](https://gluster.slack.com) channel, or on 23 | one of the mailing lists, try to keep the discussions public so that anyone 24 | can learn from it. 25 | -------------------------------------------------------------------------------- /docs/Developer-guide/Fixing-issues-reported-by-tools-for-static-code-analysis.md: -------------------------------------------------------------------------------- 1 | ## Static Code Analysis Tools 2 | 3 | Bug fixes for issues reported by _Static Code Analysis Tools_ should 4 | follow [Development Work Flow](./Development-Workflow.md) 5 | 6 | ### Coverity 7 | 8 | GlusterFS is part of [Coverity's](https://scan.coverity.com/) scan 9 | program. 10 | 11 | - To see Coverity issues you have to be a member of the GlusterFS 12 | project in Coverity scan website. 13 | - Here is the link to [Coverity scan website](https://scan.coverity.com/projects/987) 14 | - Go to above link and subscribe to GlusterFS project (as 15 | contributor). It will send a request to Admin for including you in 16 | the Project. 17 | - Once admins for the GlusterFS Coverity scan approve your request, 18 | you will be able to see the defects raised by Coverity. 19 | - [Issue #1060](https://github.com/gluster/glusterfs/issues/1060) 20 | can be used as a umbrella bug for Coverity issues in master 21 | branch unless you are trying to fix a specific issue. 22 | - When you decide to work on some issue, please assign it to your name 23 | in the same Coverity website. So that we don't step on each others 24 | work. 25 | - When marking a bug intentional in Coverity scan website, please put 26 | an explanation for the same. So that it will help others to 27 | understand the reasoning behind it. 28 | 29 | _If you have more questions please send it to 30 | [gluster-devel](https://lists.gluster.org/mailman/listinfo/gluster-devel) mailing 31 | list_ 32 | 33 | ### CPP Check 34 | 35 | Cppcheck is available in Fedora and EL's EPEL repo 36 | 37 | - Install Cppcheck 38 | 39 | dnf install cppcheck 40 | 41 | - Clone GlusterFS code 42 | 43 | git clone https://github.com/gluster/glusterfs 44 | 45 | - Run Cpp check 46 | 47 | cppcheck glusterfs/ 2>cppcheck.log 48 | 49 | ### Clang-Scan Daily Runs 50 | 51 | We have daily runs of static source code analysis tool clang-scan on 52 | the glusterfs sources. There are daily analyses of the master and 53 | on currently supported branches. 54 | 55 | Results are posted at 56 | 57 | 58 | [Issue #1000](https://github.com/gluster/glusterfs/issues/1000) 59 | can be used as a umbrella bug for Clang issues in master 60 | branch unless you are trying to fix a specific issue. 
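If you would rather reproduce the analysis locally instead of waiting for the daily run, a rough sketch using clang's `scan-build` wrapper is given below. This assumes the usual GlusterFS build dependencies are already installed; `clang-analyzer` is the package name used by Fedora/EPEL, so adjust it for your distribution:

```console
dnf install clang-analyzer
git clone https://github.com/gluster/glusterfs
cd glusterfs
./autogen.sh
scan-build ./configure
scan-build make
```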
61 | -------------------------------------------------------------------------------- /docs/Developer-guide/Projects.md: -------------------------------------------------------------------------------- 1 | # Projects 2 | 3 | This page contains a list of project ideas which will be suitable for 4 | students (for GSOC, internship etc.) 5 | 6 | ## Projects/Features which needs contributors 7 | 8 | ### RIO 9 | 10 | Issue: https://github.com/gluster/glusterfs/issues/243 11 | 12 | This is a new distribution logic, which can scale Gluster to 1000s of nodes. 13 | 14 | ### Composition xlator for small files 15 | 16 | Merge small files into a designated large file using our own custom 17 | semantics. This can improve our small file performance. 18 | 19 | ### Path based geo-replication 20 | 21 | Issue: https://github.com/gluster/glusterfs/issues/460 22 | 23 | This would allow remote volume to be of different type (NFS/S3 etc etc) too. 24 | 25 | ### Project Quota support 26 | 27 | Issue: https://github.com/gluster/glusterfs/issues/184 28 | 29 | This will make Gluster's Quota faster, and also provide desired behavior. 30 | 31 | ### Cluster testing framework based on gluster-tester 32 | 33 | Repo: https://github.com/aravindavk/gluster-tester 34 | 35 | Build a cluster using docker images (or VMs). Write a tool which would 36 | extend current gluster testing's .t format to take NODE as an addition 37 | parameter to run command. This would make upgrade and downgrade testing 38 | very easy and feasible. 39 | 40 | ### Network layer changes 41 | 42 | Issue: https://github.com/gluster/glusterfs/issues/391 43 | 44 | There is many improvements we can do in this area 45 | -------------------------------------------------------------------------------- /docs/GlusterFS-Tools/README.md: -------------------------------------------------------------------------------- 1 | ## GlusterFS Tools 2 | 3 | - [glusterfind](./glusterfind.md) 4 | - [gfind missing files](./gfind-missing-files.md) 5 | -------------------------------------------------------------------------------- /docs/Install-Guide/Install.md: -------------------------------------------------------------------------------- 1 | ### Installing Gluster 2 | 3 | For RPM based distributions, if you will be using InfiniBand, add the 4 | glusterfs RDMA package to the installations. For RPM based systems, yum/dnf 5 | is used as the install method in order to satisfy external depencies 6 | such as compat-readline5 7 | 8 | ###### Community Packages 9 | 10 | Packages are provided according to this [table](./Community-Packages.md). 
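As a concrete example of the InfiniBand note above: on RPM-based systems the RDMA transport support is shipped as a separate package, so it is installed alongside the server package (package names assumed from the community repositories):

```console
dnf install glusterfs-server glusterfs-rdma
```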
11 | 12 | ###### For Debian 13 | 14 | Download the GPG key to apt config directory: 15 | 16 | ```console 17 | wget -O - https://download.gluster.org/pub/gluster/glusterfs/9/rsa.pub | gpg --dearmor > /etc/apt/trusted.gpg.d/gluster.gpg 18 | ``` 19 | 20 | If the rsa.pub is not available at the above location, please look here https://download.gluster.org/pub/gluster/glusterfs/7/rsa.pub and add the GPG key to apt: 21 | 22 | ```console 23 | wget -O - https://download.gluster.org/pub/gluster/glusterfs/7/rsa.pub | gpg --dearmor > /etc/apt/trusted.gpg.d/gluster.gpg 24 | ``` 25 | 26 | Add the source: 27 | 28 | ```console 29 | DEBID=$(grep 'VERSION_ID=' /etc/os-release | cut -d '=' -f 2 | tr -d '"') 30 | DEBVER=$(grep 'VERSION=' /etc/os-release | grep -Eo '[a-z]+') 31 | DEBARCH=$(dpkg --print-architecture) 32 | echo "deb [signed-by=/etc/apt/trusted.gpg.d/gluster.gpg] https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/${DEBID}/${DEBARCH}/apt ${DEBVER} main" > /etc/apt/sources.list.d/gluster.list 33 | ``` 34 | 35 | Update package list: 36 | 37 | ```console 38 | apt update 39 | ``` 40 | 41 | Install: 42 | 43 | ```console 44 | apt install glusterfs-server 45 | ``` 46 | 47 | ###### For Ubuntu 48 | 49 | Install software-properties-common: 50 | 51 | ```console 52 | apt install software-properties-common 53 | ``` 54 | 55 | Then add the community GlusterFS PPA: 56 | 57 | ```console 58 | add-apt-repository ppa:gluster/glusterfs-7 59 | apt update 60 | ``` 61 | 62 | Finally, install the packages: 63 | 64 | ```console 65 | apt install glusterfs-server 66 | ``` 67 | 68 | _Note: Packages exist for Ubuntu 16.04 LTS, 18.04 69 | LTS, 20.04 LTS, 20.10, 21.04_ 70 | 71 | ###### For Red Hat/CentOS 72 | 73 | RPMs for CentOS and other RHEL clones are available from the 74 | CentOS Storage SIG mirrors. 75 | 76 | For more installation details refer [Gluster Quick start guide](https://wiki.centos.org/SpecialInterestGroup/Storage/gluster-Quickstart) from CentOS Storage SIG. 77 | 78 | ###### For Fedora 79 | 80 | Install the Gluster packages: 81 | 82 | ```console 83 | dnf install glusterfs-server 84 | ``` 85 | 86 | Once you are finished installing, you can move on to [configuration](./Configure.md) section. 87 | 88 | ###### For Arch Linux 89 | 90 | Install the Gluster package: 91 | 92 | ```console 93 | pacman -S glusterfs 94 | ``` 95 | -------------------------------------------------------------------------------- /docs/Install-Guide/Setup-virt.md: -------------------------------------------------------------------------------- 1 | # Setup on Virtual Machine 2 | 3 | _Note: You only need one of the three setup methods!_ 4 | 5 | ### Setup, Method 1 – Setting up in virtual machines 6 | 7 | As we just mentioned, to set up Gluster using virtual machines, you will 8 | need at least two virtual machines with at least 1GB of RAM each. You 9 | may be able to test with less but most users will find it too slow for 10 | their tastes. The particular virtualization product you use is a matter 11 | of choice. Common platforms include Xen, VMware ESX and 12 | Workstation, VirtualBox, and KVM. For purpose of this article, all steps 13 | assume KVM but the concepts are expected to be simple to translate to 14 | other platforms as well. The article assumes you know the particulars of 15 | how to create a virtual machine and have installed a 64 bit linux 16 | distribution already. 
17 | 18 | Create or clone two VM’s, with the following setup on each: 19 | 20 | - 2 disks using the VirtIO driver, one for the base OS and one that we 21 | will use as a Gluster “brick”. You can add more later to try testing 22 | some more advanced configurations, but for now let’s keep it simple. 23 | 24 | _Note: If you have ample space available, consider allocating all the 25 | disk space at once._ 26 | 27 | - 2 NIC’s using VirtIO driver. The second NIC is not strictly 28 | required, but can be used to demonstrate setting up a separate 29 | network for storage and management traffic. 30 | 31 | _Note: Attach each NIC to a separate network._ 32 | 33 | Other notes: Make sure that if you clone the VM, that Gluster has not 34 | already been installed. Gluster generates a UUID to “fingerprint” each 35 | system, so cloning a previously deployed system will result in errors 36 | later on. 37 | 38 | Once these are prepared, you are ready to move on to the 39 | [install](./Install.md) section. 40 | -------------------------------------------------------------------------------- /docs/Ops-Guide/Overview.md: -------------------------------------------------------------------------------- 1 | # Overview 2 | 3 | Over the years the infrastructure and services consumed by the Gluster.org 4 | community have grown organically. There have been instances of design and 5 | planning but the growth has mostly been ad-hoc and need-based. 6 | 7 | Central to the plan of revitalizing the Gluster.org community is the ability to 8 | provide well-maintained infrastructure services with predictable uptimes and 9 | resilience. We're migrating the existing services into the Community Cage. The 10 | implied objective is that the transition would open up ways and means of the 11 | formation of a loose coalition among Infrastructure Administrators who provide 12 | expertise and guidance to the community projects within the OSAS team. 13 | 14 | A small group of Gluster.org community members was asked to assess the current 15 | utilization and propose a planned growth. The ad-hoc nature of the existing 16 | infrastructure impedes the development of a proposal based on 17 | standardized methods of extrapolation. A part of the projection is based on a 18 | combination of patterns and heuristics - problems that have been observed and 19 | how mitigation strategies have enabled the community to continue to consume the 20 | services available. 21 | 22 | The guiding principle for the assessment has been the need to migrate services 23 | to "Software-as-a-Service" models and providers wherever applicable and deemed 24 | fit. To illustrate this specific directive - the documentation/docs aspect of 25 | Gluster.org has been continuously migrating artifacts to readthedocs.org while 26 | focusing on simple integration with the website. The website itself has been 27 | put within the Gluster.org Github.com account to enable ease of maintenance and 28 | sustainability. 29 | 30 | For more details look at the full [Tools List](./Tools.md). 
31 | -------------------------------------------------------------------------------- /docs/Ops-Guide/Tools.md: -------------------------------------------------------------------------------- 1 | ## Tools We Use 2 | 3 | | Service/Tool | Purpose | Hosted At | 4 | | :------------------- | :------------------------------------: | --------------: | 5 | | Github | Code Review | Github | 6 | | Jenkins | CI, build-verification-test | Temporary Racks | 7 | | Backups | Website, Gerrit and Jenkins backup | Rackspace | 8 | | Docs | Documentation content | mkdocs.org | 9 | | download.gluster.org | Official download site of the binaries | Rackspace | 10 | | Mailman | Lists mailman | Rackspace | 11 | | www.gluster.org | Web asset | Rackspace | 12 | 13 | ## Notes 14 | 15 | - download.gluster.org: Resiliency is important for availability and metrics. 16 | Since it's official download, access need to restricted as much as possible. 17 | Few developers building the community packages have access. If anyone requires 18 | access can raise an issue at [gluster/project-infrastructure](https://github.com/gluster/project-infrastructure/issues/new) 19 | with valid reason 20 | - Mailman: Should be migrated to a separate host. Should be made more redundant 21 | (ie, more than 1 MX). 22 | - www.gluster.org: Framework, Artifacts now exist under gluster.github.com. Has 23 | various legacy installation of software (mediawiki, etc ), being cleaned as 24 | we find them. 25 | -------------------------------------------------------------------------------- /docs/Troubleshooting/README.md: -------------------------------------------------------------------------------- 1 | ## Troubleshooting Guide 2 | 3 | This guide describes some commonly seen issues and steps to recover from them. 4 | If that doesn’t help, reach out to the [Gluster community](https://www.gluster.org/community/), in which case the guide also describes what information needs to be provided in order to debug the issue. At minimum, we need the version of gluster running and the output of `gluster volume info`. 5 | 6 | ### Where Do I Start? 7 | 8 | Is the issue already listed in the component specific troubleshooting sections? 9 | 10 | - [CLI and Glusterd Issues](./troubleshooting-glusterd.md) 11 | - [Heal related issues](./troubleshooting-afr.md) 12 | - [Resolving Split brains](./resolving-splitbrain.md) 13 | - [Geo-replication Issues](./troubleshooting-georep.md) 14 | - [Gluster NFS Issues](./troubleshooting-gnfs.md) 15 | - [File Locks](./troubleshooting-filelocks.md) 16 | 17 | If that didn't help, here is how to debug further. 18 | 19 | Identifying the problem and getting the necessary information to diagnose it is the first step in troubleshooting your Gluster setup. As Gluster operations involve interactions between multiple processes, this can involve multiple steps. 20 | 21 | ### What Happened? 22 | 23 | - An operation failed 24 | - [High Memory Usage](./troubleshooting-memory.md) 25 | - [A Gluster process crashed](./gluster-crash.md) 26 | -------------------------------------------------------------------------------- /docs/Troubleshooting/gfid-to-path.md: -------------------------------------------------------------------------------- 1 | # Convert GFID to Path 2 | 3 | GlusterFS internal file identifier (GFID) is a uuid that is unique to each 4 | file across the entire cluster. This is analogous to inode number in a 5 | normal filesystem. The GFID of a file is stored in its xattr named 6 | `trusted.gfid`. 
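For instance, the stored value can be read with `getfattr` directly on a brick back-end (the brick path below is only an example); from a client mount this xattr is normally hidden, which is why the steps below query the virtual `glusterfs.gfid.string` xattr instead:

```console
getfattr -n trusted.gfid -e hex /data/brick1/testvol/dir/file
```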
7 | 8 | #### Special mount using gfid-access translator: 9 | 10 | ```console 11 | mount -t glusterfs -o aux-gfid-mount vm1:test /mnt/testvol 12 | ``` 13 | 14 | Assuming, you have `GFID` of a file from changelog (or somewhere else). 15 | For trying this out, you can get `GFID` of a file from mountpoint: 16 | 17 | ```console 18 | getfattr -n glusterfs.gfid.string /mnt/testvol/dir/file 19 | ``` 20 | 21 | --- 22 | 23 | ### Get file path from GFID (Method 1): 24 | 25 | **(Lists hardlinks delimited by `:`, returns path as seen from mountpoint)** 26 | 27 | #### Turn on build-pgfid option 28 | 29 | ```console 30 | gluster volume set test build-pgfid on 31 | ``` 32 | 33 | Read virtual xattr `glusterfs.ancestry.path` which contains the file path 34 | 35 | ```console 36 | getfattr -n glusterfs.ancestry.path -e text /mnt/testvol/.gfid/ 37 | ``` 38 | 39 | **Example:** 40 | 41 | ```{ .console .no-copy } 42 | [root@vm1 glusterfs]# ls -il /mnt/testvol/dir/ 43 | total 1 44 | 10610563327990022372 -rw-r--r--. 2 root root 3 Jul 17 18:05 file 45 | 10610563327990022372 -rw-r--r--. 2 root root 3 Jul 17 18:05 file3 46 | 47 | [root@vm1 glusterfs]# getfattr -n glusterfs.gfid.string /mnt/testvol/dir/file 48 | getfattr: Removing leading '/' from absolute path names 49 | # file: mnt/testvol/dir/file 50 | glusterfs.gfid.string="11118443-1894-4273-9340-4b212fa1c0e4" 51 | 52 | [root@vm1 glusterfs]# getfattr -n glusterfs.ancestry.path -e text /mnt/testvol/.gfid/11118443-1894-4273-9340-4b212fa1c0e4 53 | getfattr: Removing leading '/' from absolute path names 54 | # file: mnt/testvol/.gfid/11118443-1894-4273-9340-4b212fa1c0e4 55 | glusterfs.ancestry.path="/dir/file:/dir/file3" 56 | ``` 57 | 58 | ### Get file path from GFID (Method 2): 59 | 60 | **(Does not list all hardlinks, returns backend brick path)** 61 | 62 | ```console 63 | getfattr -n trusted.glusterfs.pathinfo -e text /mnt/testvol/.gfid/ 64 | ``` 65 | 66 | **Example:** 67 | 68 | ```console 69 | [root@vm1 glusterfs]# getfattr -n trusted.glusterfs.pathinfo -e text /mnt/testvol/.gfid/11118443-1894-4273-9340-4b212fa1c0e4 70 | getfattr: Removing leading '/' from absolute path names 71 | # file: mnt/testvol/.gfid/11118443-1894-4273-9340-4b212fa1c0e4 72 | trusted.glusterfs.pathinfo="( )" 73 | ``` 74 | 75 | #### References and links: 76 | 77 | [posix: placeholders for GFID to path conversion](http://review.gluster.org/5951) 78 | -------------------------------------------------------------------------------- /docs/Troubleshooting/gluster-crash.md: -------------------------------------------------------------------------------- 1 | # Debugging a Crash 2 | 3 | To find out why a Gluster process terminated abruptly, we need the following: 4 | 5 | - A coredump of the process that crashed 6 | - The exact version of Gluster that is running 7 | - The Gluster log files 8 | - the output of `gluster volume info` 9 | - Steps to reproduce the crash if available 10 | 11 | Contact the [community](https://www.gluster.org/community/) with this information or [open an issue](https://github.com/gluster/glusterfs/issues/new) 12 | -------------------------------------------------------------------------------- /docs/Troubleshooting/troubleshooting-memory.md: -------------------------------------------------------------------------------- 1 | # Troubleshooting High Memory Utilization 2 | 3 | If the memory utilization of a Gluster process increases significantly with time, it could be a leak caused by resources not being freed. 
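One simple way to confirm the growth is to sample the resident set size of the Gluster processes at intervals, for example:

```console
# RSS (in kB) of the management, brick and client processes; repeat over time to spot growth
ps -C glusterd,glusterfsd,glusterfs -o pid,rss,etime,cmd
```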
4 | If you suspect that you may have hit such an issue, try using [statedumps](./statedump.md) to debug the issue. 5 | 6 | If you are unable to figure out where the leak is, please [file an issue](https://github.com/gluster/glusterfs/issues/new) and provide the following details: 7 | 8 | - Gluster version 9 | - The affected process 10 | - The output of `gluster volume info` 11 | - Steps to reproduce the issue if available 12 | - Statedumps for the process collected at intervals as the memory utilization increases 13 | - The Gluster log files for the process (if possible) 14 | -------------------------------------------------------------------------------- /docs/Upgrade-Guide/README.md: -------------------------------------------------------------------------------- 1 | ## Upgrading GlusterFS 2 | 3 | - [About op-version](./op-version.md) 4 | 5 | If you are using GlusterFS version 6.x or above, you can upgrade it to the following: 6 | 7 | - [Upgrading to 10](./upgrade-to-10.md) 8 | - [Upgrading to 9](./upgrade-to-9.md) 9 | - [Upgrading to 8](./upgrade-to-8.md) 10 | - [Upgrading to 7](./upgrade-to-7.md) 11 | 12 | If you are using GlusterFS version 5.x or above, you can upgrade it to the following: 13 | 14 | - [Upgrading to 8](./upgrade-to-8.md) 15 | - [Upgrading to 7](./upgrade-to-7.md) 16 | - [Upgrading to 6](./upgrade-to-6.md) 17 | 18 | If you are using GlusterFS version 4.x or above, you can upgrade it to the following: 19 | 20 | - [Upgrading to 6](./upgrade-to-6.md) 21 | - [Upgrading to 5](./upgrade-to-5.md) 22 | 23 | If you are using GlusterFS version 3.4.x or above, you can upgrade it to following: 24 | 25 | - [Upgrading to 3.5](./upgrade-to-3.5.md) 26 | - [Upgrading to 3.6](./upgrade-to-3.6.md) 27 | - [Upgrading to 3.7](./upgrade-to-3.7.md) 28 | - [Upgrading to 3.9](./upgrade-to-3.9.md) 29 | - [Upgrading to 3.10](./upgrade-to-3.10.md) 30 | - [Upgrading to 3.11](./upgrade-to-3.11.md) 31 | - [Upgrading to 3.12](./upgrade-to-3.12.md) 32 | - [Upgrading to 3.13](./upgrade-to-3.13.md) 33 | -------------------------------------------------------------------------------- /docs/Upgrade-Guide/op-version.md: -------------------------------------------------------------------------------- 1 | ### op-version 2 | 3 | op-version is the operating version of the Gluster which is running. 4 | 5 | op-version was introduced to ensure gluster running with different versions do not end up in a problem and backward compatibility issues can be tackled. 6 | 7 | After Gluster upgrade, it is advisable to have op-version updated. 8 | 9 | ### Updating op-version 10 | 11 | Current op-version can be queried as below: 12 | 13 | For 3.10 onwards: 14 | 15 | ```console 16 | gluster volume get all cluster.op-version 17 | ``` 18 | 19 | For release < 3.10: 20 | 21 | ```{ .console .no-copy } 22 | # gluster volume get cluster.op-version 23 | ``` 24 | 25 | To get the maximum possible op-version a cluster can support, the following query can be used (this is available 3.10 release onwards): 26 | 27 | ```console 28 | gluster volume get all cluster.max-op-version 29 | ``` 30 | 31 | For example, if some nodes in a cluster have been upgraded to X and some to X+, then the maximum op-version supported by the cluster is X, and the cluster.op-version can be bumped up to X to support new features. 32 | 33 | op-version can be updated as below. 
34 | For example, after upgrading to glusterfs-4.0.0, set op-version as: 35 | 36 | ```console 37 | gluster volume set all cluster.op-version 40000 38 | ``` 39 | 40 | Note: 41 | This is not mandatory, but advisable to have updated op-version if you want to make use of latest features in the updated gluster. 42 | 43 | ### Client op-version 44 | 45 | When trying to set a volume option, it might happen that one or more of the connected clients cannot support the feature being set and might need to be upgraded to the op-version the cluster is currently running on. 46 | 47 | To check op-version information for the connected clients and find the offending client, the following query can be used for 3.10 release onwards: 48 | 49 | ```{ .console .no-copy } 50 | # gluster volume status clients 51 | ``` 52 | 53 | The respective clients can then be upgraded to the required version. 54 | 55 | This information could also be used to make an informed decision while bumping up the op-version of a cluster, so that connected clients can support all the new features provided by the upgraded cluster as well. 56 | -------------------------------------------------------------------------------- /docs/Upgrade-Guide/upgrade-to-10.md: -------------------------------------------------------------------------------- 1 | # Upgrade procedure to Gluster 10, from Gluster 9.x, 8.x and 7.x 2 | 3 | We recommend reading the [release notes for 10.0](../release-notes/10.0.md) to be 4 | aware of the features and fixes provided with the release. 5 | 6 | > **NOTE:** Before following the generic upgrade procedure checkout the "**Major Issues**" section given below. 7 | 8 | Refer, to the [generic upgrade procedure](./generic-upgrade-procedure.md) guide and follow documented instructions. 9 | 10 | ## Major issues 11 | 12 | ### The following options are removed from the code base and require to be unset 13 | 14 | before an upgrade from releases older than release 4.1.0, 15 | 16 | - features.lock-heal 17 | - features.grace-timeout 18 | 19 | To check if these options are set use, 20 | 21 | ```console 22 | gluster volume info 23 | ``` 24 | 25 | and ensure that the above options are not part of the `Options Reconfigured:` 26 | section in the output of all volumes in the cluster. 27 | 28 | If these are set, then unset them using the following commands, 29 | 30 | ```{ .console .no-copy } 31 | # gluster volume reset