├── .github └── dco.yml ├── .gitignore ├── .travis.yml ├── CHANGELOG.md ├── CODE_OF_CONDUCT.md ├── CONTRIBUTING.md ├── IBM_Storage_Ceph.png ├── LICENSE ├── MAINTAINERS.md ├── README.md ├── Vagrantfile └── ceph-prep ├── admin-playbook.yml ├── client-playbook.yml ├── common-playbook.yml ├── config └── hosts /.github/dco.yml: -------------------------------------------------------------------------------- 1 | # This enables the DCO bot for you; please take a look at https://github.com/probot/dco 2 | # for more details. 3 | require: 4 | members: false 5 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | **/.vagrant 2 | *~ 3 | ceph-prep/id_rsa* 4 | -------------------------------------------------------------------------------- /.travis.yml: -------------------------------------------------------------------------------- 1 | language: ruby 2 | 3 | before_install: 4 | - echo "#" 5 | - echo "#" 6 | - echo "TravisCI is unbelievably powerful, but you need to do your research first." 7 | - echo "#" 8 | - echo "#" 9 | 10 | script: 11 | - echo "#" 12 | - echo "#" 13 | - echo "Please take a look at https://docs.travis-ci.com/user/tutorial/ for your options." 14 | - echo "#" 15 | - echo "#" 16 | 17 | 18 | after_success: 19 | - echo "#" 20 | - echo "#" 21 | - echo "Don't forget to enable it in the GitHub repository also." 22 | - echo "#" 23 | - echo "#" 24 | -------------------------------------------------------------------------------- /CHANGELOG.md: -------------------------------------------------------------------------------- 1 | # Changelog 2 | 3 | All notable changes to this project will be documented in this file. 
4 | -------------------------------------------------------------------------------- /CODE_OF_CONDUCT.md: -------------------------------------------------------------------------------- 1 | # Contributor Covenant Code of Conduct 2 | 3 | ## Our Pledge 4 | 5 | In the interest of fostering an open and welcoming environment, we as 6 | contributors and maintainers pledge to making participation in our project and 7 | our community a harassment-free experience for everyone, regardless of age, body 8 | size, disability, ethnicity, sex characteristics, gender identity and expression, 9 | level of experience, education, socio-economic status, nationality, personal 10 | appearance, race, religion, or sexual identity and orientation. 11 | 12 | ## Our Standards 13 | 14 | Examples of behavior that contributes to creating a positive environment 15 | include: 16 | 17 | * Using welcoming and inclusive language 18 | * Being respectful of differing viewpoints and experiences 19 | * Gracefully accepting constructive criticism 20 | * Focusing on what is best for the community 21 | * Showing empathy towards other community members 22 | 23 | Examples of unacceptable behavior by participants include: 24 | 25 | * The use of sexualized language or imagery and unwelcome sexual attention or 26 | advances 27 | * Trolling, insulting/derogatory comments, and personal or political attacks 28 | * Public or private harassment 29 | * Publishing others' private information, such as a physical or electronic 30 | address, without explicit permission 31 | * Other conduct which could reasonably be considered inappropriate in a 32 | professional setting 33 | 34 | ## Our Responsibilities 35 | 36 | Project maintainers are responsible for clarifying the standards of acceptable 37 | behavior and are expected to take appropriate and fair corrective action in 38 | response to any instances of unacceptable behavior. 
39 | 40 | Project maintainers have the right and responsibility to remove, edit, or 41 | reject comments, commits, code, wiki edits, issues, and other contributions 42 | that are not aligned to this Code of Conduct, or to ban temporarily or 43 | permanently any contributor for other behaviors that they deem inappropriate, 44 | threatening, offensive, or harmful. 45 | 46 | ## Scope 47 | 48 | This Code of Conduct applies both within project spaces and in public spaces 49 | when an individual is representing the project or its community. Examples of 50 | representing a project or community include using an official project e-mail 51 | address, posting via an official social media account, or acting as an appointed 52 | representative at an online or offline event. Representation of a project may be 53 | further defined and clarified by project maintainers. 54 | 55 | ## Enforcement 56 | 57 | Instances of abusive, harassing, or otherwise unacceptable behavior may be 58 | reported by contacting the [project team](./MAINTAINERS.md). All 59 | complaints will be reviewed and investigated and will result in a response that 60 | is deemed necessary and appropriate to the circumstances. The project team is 61 | obligated to maintain confidentiality with regard to the reporter of an incident. 62 | Further details of specific enforcement policies may be posted separately. 63 | 64 | Project maintainers who do not follow or enforce the Code of Conduct in good 65 | faith may face temporary or permanent repercussions as determined by other 66 | members of the project's leadership. 
67 | 68 | ## Attribution 69 | 70 | This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, 71 | available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html 72 | 73 | [homepage]: https://www.contributor-covenant.org 74 | 75 | For answers to common questions about this code of conduct, see 76 | https://www.contributor-covenant.org/faq 77 | -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | ## Contributing In General 2 | Our project welcomes external contributions. If you have an itch, please feel 3 | free to scratch it. 4 | 5 | To contribute code or documentation, please submit a [pull request](https://github.com/ibm/rhcs5-vagrant/pulls). 6 | 7 | A good way to familiarize yourself with the codebase and contribution process is 8 | to look for and tackle low-hanging fruit in the [issue tracker](https://github.com/ibm/rhcs5-vagrant/issues). 9 | Before embarking on a more ambitious contribution, please quickly [get in touch](#communication) with us. 10 | 11 | **Note: We appreciate your effort, and want to avoid a situation where a contribution 12 | requires extensive rework (by you or by us), sits in backlog for a long time, or 13 | cannot be accepted at all!** 14 | 15 | ### Proposing new features 16 | 17 | If you would like to implement a new feature, please [raise an issue](https://github.com/ibm/rhcs5-vagrant/issues) 18 | before sending a pull request so the feature can be discussed. This is to avoid 19 | you wasting your valuable time working on a feature that the project developers 20 | are not interested in accepting into the code base. 21 | 22 | ### Fixing bugs 23 | 24 | If you would like to fix a bug, please [raise an issue](https://github.com/ibm/rhcs5-vagrant/issues) before sending a 25 | pull request so it can be tracked. 
26 | 27 | ### Merge approval 28 | 29 | The project maintainers use LGTM (Looks Good To Me) in comments on the code 30 | review to indicate acceptance. A change requires LGTMs from two of the 31 | maintainers of each component affected. 32 | 33 | For a list of the maintainers, see the [MAINTAINERS.md](MAINTAINERS.md) page. 34 | 35 | ## Legal 36 | 37 | Each source file must include a license header for the MIT 38 | License. Using the SPDX format is the simplest approach. 39 | e.g. 40 | 41 | ``` 42 | /* 43 | Copyright All Rights Reserved. 44 | 45 | SPDX-License-Identifier: MIT 46 | */ 47 | ``` 48 | 49 | We have tried to make it as easy as possible to make contributions. This 50 | applies to how we handle the legal aspects of contribution. We use the 51 | same approach - the [Developer's Certificate of Origin 1.1 (DCO)](https://github.com/hyperledger/fabric/blob/master/docs/source/DCO1.1.txt) - that the Linux® Kernel [community](https://elinux.org/Developer_Certificate_Of_Origin) 52 | uses to manage code contributions. 53 | 54 | We simply ask that when submitting a patch for review, the developer 55 | must include a sign-off statement in the commit message. 56 | 57 | Here is an example Signed-off-by line, which indicates that the 58 | submitter accepts the DCO: 59 | 60 | ``` 61 | Signed-off-by: John Doe <john.doe@example.com> 62 | ``` 63 | 64 | You can include this automatically when you commit a change to your 65 | local git repository using the following command: 66 | 67 | ``` 68 | git commit -s 69 | ``` 70 | 71 | 72 | ## Setup 73 | **TBD** Add any special setup instructions for your project to help the developer 74 | become productive quickly. 75 | 76 | ## Testing 77 | **TBD** Provide information that helps the developer test any changes they make 78 | before submitting. 79 | 80 | ## Coding style guidelines 81 | **TBD** Share any specific style guidelines you might have for your project. 
82 | -------------------------------------------------------------------------------- /IBM_Storage_Ceph.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/IBM/StorageCephVagrant/d6a57789854731311e0df600d1a6b28eddfb5b84/IBM_Storage_Ceph.png -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2019 Andrew Garner 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 
22 | -------------------------------------------------------------------------------- /MAINTAINERS.md: -------------------------------------------------------------------------------- 1 | # MAINTAINERS 2 | 3 | Harald Seipp - seipp@de.ibm.com 4 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Vagrant / Ansible IBM Storage Ceph Deployment 2 | 3 | ## Scope 4 | 5 | This is an opinionated automated deployment of an IBM Storage Ceph 8.x cluster 6 | installation based on RHEL9 up to the point where you run the preflight Ansible playbook. 7 | 8 | ![IBM Storage Ceph Screenshot](./IBM_Storage_Ceph.png "IBM Storage Ceph Screenshot") 9 | 10 | ## Acknowledgments 11 | 12 | The code to facilitate automated subscription-manager registration was derived from the 13 | [https://github.com/agarthetiger/vagrant-rhel8](https://github.com/agarthetiger/vagrant-rhel8) 14 | project. 15 | 16 | Copyright notice: 17 | 18 | ```text 19 | # Copyright contributors to the rhcs5-vagrant project. 20 | # Based on vagrant-rhel8 code - Copyright (c) 2019 Andrew Garner 21 | ``` 22 | 23 | ## Prerequisites 24 | 25 | You need a Linux host that is ideally equipped with 64GB+ RAM and 8+ vCPUs. 26 | The configuration can be adjusted to make the deployment work on less capable 27 | hardware. 28 | 29 | Fedora Linux (IBM OpenClient for Fedora) was used to develop this automation. 30 | Of course, the instructions below can be adapted to other Linux variants with 31 | their respective package manager commands. 32 | 33 | - Vagrant: `sudo dnf -y install vagrant` 34 | - QEMU/KVM/libvirt: `sudo dnf -y install qemu libvirt libvirt-devel ruby-devel gcc` 35 | - [Vagrant libvirt](https://github.com/vagrant-libvirt/vagrant-libvirt) Provider: 36 | For Fedora use `sudo dnf -y install vagrant-libvirt`. 
Check for a package available 37 | for your Linux distribution before trying to install with `vagrant plugin install vagrant-libvirt`. 38 | - [Vagrant Hostmanager](https://github.com/devopsgroup-io/vagrant-hostmanager) 39 | plugin: For Fedora use `sudo dnf -y install vagrant-hostmanager`. 40 | Check for a package available for your Linux distribution before trying to install 41 | with `vagrant plugin install vagrant-hostmanager`. 42 | - Ansible: `sudo dnf -y install ansible`. 43 | 44 | The host needs internet connectivity to download the required packages and 45 | container images. 46 | 47 | Tested with: 48 | 49 | - Fedora 36: Vagrant 2.2.19, vagrant-libvirt 0.7.0 and Ansible 5.9.0 50 | - Fedora 37: Vagrant 2.2.19, vagrant-libvirt 0.7.0 and Ansible 7.1.0 51 | - Fedora 38: Vagrant 2.2.19, vagrant-libvirt 0.7.0 and Ansible 7.7.0 52 | - Fedora 39: Vagrant 2.3.4, vagrant-libvirt 0.11.2 and Ansible 9.0.0 53 | - Fedora 40: Vagrant 2.3.4, vagrant-libvirt 0.11.2 and Ansible 9.11.0. On Fedora 40, it is highly recommended to manually apply [this patch](https://github.com/net-ssh/net-ssh/commit/efd0ebe882fce04952dcf1dbe2ba5618172f2172) to fix errors causing `vagrant halt` and `vagrant reload` to fail. 54 | - Fedora 41: Vagrant 2.3.4, vagrant-libvirt 0.11.2 and Ansible 9.13.0. 55 | 56 | You need a subscription for RHEL and a pull secret for IBM Storage Ceph. 57 | 58 | Finally, you need to create a private/public key pair in the `ceph-prep` directory: 59 | 60 | ```bash 61 | cd ceph-prep 62 | # use an empty password when prompted 63 | ssh-keygen -t rsa -f ./id_rsa 64 | cd .. 65 | ``` 66 | 67 | There are some options that you can set in the [Vagrantfile](./Vagrantfile); see 68 | the section marked with `Some parameters that can be adjusted`. 
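Before starting the installation, it can help to fail fast when the Red Hat credentials are missing from the environment. This is a minimal sketch; the `check_env` helper and the placeholder values are illustrative and not part of this repository:

```shell
#!/usr/bin/env bash
# check_env VAR...: print an error for every environment variable that is
# unset or empty, and return non-zero if any is missing.
check_env() {
  local missing=0 v
  for v in "$@"; do
    if [ -z "$(printenv "$v")" ]; then
      echo "error: $v is not set" >&2
      missing=1
    fi
  done
  return "$missing"
}

# Placeholder values for illustration only; use your real credentials.
export RH_SUBSCRIPTION_MANAGER_USER='my-user'
export RH_SUBSCRIPTION_MANAGER_PW='my-password'

check_env RH_SUBSCRIPTION_MANAGER_USER RH_SUBSCRIPTION_MANAGER_PW \
  && echo "credentials present"
```

Running such a check up front avoids a `vagrant up` that partially provisions VMs and then fails inside the subscription-manager registration step.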
69 | 70 | ## Installation 71 | 72 | Bring up the cluster by setting the environment variables with your Red Hat 73 | credentials, then start the Vagrant/Ansible-driven deployment: 74 | 75 | ```bash 76 | export RH_SUBSCRIPTION_MANAGER_USER='' 77 | export RH_SUBSCRIPTION_MANAGER_PW='' 78 | # For debugging: vagrant up --no-parallel --no-destroy-on-error 79 | vagrant up --no-parallel 80 | ``` 81 | 82 | The vagrant deployment will automatically register the machines with Red Hat. 83 | 84 | To finalize the installation, ssh into the `ceph-admin` node and execute the 85 | preflight Ansible playbook: 86 | 87 | ```bash 88 | vagrant ssh ceph-admin 89 | cd /usr/share/cephadm-ansible/ 90 | ansible-playbook -i inventory/production/hosts cephadm-preflight.yml --extra-vars "ceph_origin=ibm" 91 | ``` 92 | 93 | For the next step you need to set the environment variables again on the 94 | ceph-admin node: 95 | 96 | ```bash 97 | export IBM_CR_USERNAME='' 98 | export IBM_CR_PASSWORD='' 99 | ``` 100 | 101 | Then you can continue with bootstrapping the cluster: 102 | 103 | ```bash 104 | sudo cephadm bootstrap --cluster-network 172.21.12.0/24 --mon-ip 172.21.12.10 --registry-url cp.icr.io/cp --registry-username $IBM_CR_USERNAME --registry-password $IBM_CR_PASSWORD 105 | ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-server-1 106 | ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-server-2 107 | ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-server-3 108 | ``` 109 | 110 | Note down the admin password that the `cephadm bootstrap` command printed. You 111 | will need it for logging into the console the first time. 112 | 113 | ## Configuration 114 | 115 | You can now complete the installation by logging into the 116 | [IBM Storage Ceph GUI](https://172.21.12.10:8443) to 117 | 118 | - change the password. You will be prompted automatically; the initial password 119 | is the one you noted above as `cephadm bootstrap` output. 120 | - activate the telemetry module. 
See the warning on top of the screen after logging 121 | in. 122 | - add nodes using Cluster->Hosts->Create. Use ceph-server-1 with IP 123 | 172.21.12.12, ceph-server-2 with IP 172.21.12.13, ceph-server-3 124 | with IP 172.21.12.14. 125 | - add OSDs. Please wait until all ceph-server nodes are active; this might take 126 | up to 10 minutes. When active, Cluster->Hosts shows mon service instances on 127 | all nodes. Add OSDs via Cluster->OSDs->Create. 128 | 129 | Most of the following is taken from the course [Hands-on with Red Hat Ceph Storage 5](https://training-lms.redhat.com/sso/saml/auth/rhopen?RelayState=deeplinkoffering%3D44428318) 130 | with some improvements applied. 131 | 132 | Note: When creating an RBD image, ensure that you do not have the `Exclusive Lock` 133 | option set; otherwise there might be access issues mapping the RBD volume on the 134 | client. 135 | 136 | The following sections provide additional commands (on the `ceph-admin` node) 137 | to configure RBD, CephFS, and RGW (S3 and Swift) access. 
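If an image was created with `Exclusive Lock` enabled anyway, the feature can usually be disabled after the fact instead of recreating the image. This is a sketch, assuming the `rbd/test` image created in the RBD section below; `fast-diff` and `object-map` depend on `exclusive-lock`, so they are listed first in case they are active on the image:

```bash
# Disable exclusive-lock (and the features that depend on it) on an existing image
sudo rbd feature disable rbd/test fast-diff object-map exclusive-lock
# Verify the remaining feature set
sudo rbd info rbd/test
```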
138 | 139 | ### Configure Block (RBD) access 140 | 141 | ```bash 142 | vagrant ssh ceph-admin 143 | # Create RBD pool 144 | sudo ceph osd pool create rbd 64 145 | sudo ceph osd pool application enable rbd rbd 146 | # Create RBD user 147 | sudo ceph auth get-or-create client.harald -o /etc/ceph/ceph.client.harald.keyring 148 | sudo ceph auth caps client.harald mon 'allow r' osd 'allow rwx' 149 | sudo ceph auth list 150 | # Create RBD image 151 | sudo rbd create rbd/test --size=128M 152 | sudo rbd ls 153 | ``` 154 | 155 | ### Configure File (CephFS) access 156 | 157 | ```bash 158 | # Create CephFS 159 | sudo ceph fs volume create fs_name --placement=ceph-server-1,ceph-server-2 160 | sudo ceph fs volume ls 161 | cd 162 | cat <<EOF >mds.yaml 163 | service_type: mds 164 | service_id: fs_name 165 | placement: 166 | count: 2 167 | EOF 168 | sudo ceph orch apply -i mds.yaml 169 | sudo ceph orch ls 170 | sudo ceph orch ps 171 | sudo ceph -s 172 | sudo ceph df 173 | sudo ceph fs status fs_name 174 | # Authorize client file system access and grant quota and snapshot rights 175 | sudo ceph fs authorize fs_name client.0 / rwps 176 | sudo ceph auth get client.0 -o /etc/ceph/ceph.client.0.keyring 177 | ``` 178 | 179 | ### Configure Object (RGW) access 180 | 181 | ```bash 182 | # Create RGW 183 | 184 | # Note that this is due to the low number of OSDs. 185 | # While we're setting it on the global level to silence the warnings, setting it 186 | # on the OSD and Mon levels would have been sufficient. Setting it on the Mon level as 187 | # documented does not silence the warning. 
188 | sudo ceph config set global mon_max_pg_per_osd 512 189 | sudo radosgw-admin realm create --rgw-realm=test_realm --default 190 | sudo radosgw-admin zonegroup create --rgw-zonegroup=default --master --default 191 | sudo radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=test_zone --master --default 192 | sudo radosgw-admin period update --rgw-realm=test_realm --commit 193 | sudo ceph orch apply rgw test --realm=test_realm --zone=test_zone --placement="2 ceph-server-2 ceph-server-3" 194 | sudo ceph orch ls 195 | sudo ceph orch ps 196 | sudo ceph -s 197 | sudo radosgw-admin user create --uid='user1' --display-name='First User' --access-key='S3user1' --secret-key='S3user1key' 198 | sudo radosgw-admin subuser create --uid='user1' --subuser='user1:swift' --secret-key='Swiftuser1key' --access=full 199 | sudo radosgw-admin user info --uid='user1' 200 | ``` 201 | 202 | ## Usage 203 | 204 | Now use the created resources on the `ceph-client` node. 205 | 206 | ### RBD volume access 207 | 208 | ```bash 209 | vagrant ssh ceph-client 210 | # Add the configuration according to the admin node equivalent 211 | sudo vi /etc/ceph/ceph.conf 212 | # Add the client keyring according to the admin node equivalent 213 | sudo vi /etc/ceph/ceph.client.harald.keyring 214 | # Create a block device 215 | sudo rbd --id harald ls 216 | sudo rbd --id harald map rbd/test 217 | sudo rbd --id harald showmapped 218 | sudo mkfs.ext4 /dev/rbd0 219 | sudo mkdir /mnt/rbd 220 | sudo mount -o user /dev/rbd0 /mnt/rbd/ 221 | sudo chown vagrant:vagrant /mnt/rbd/ 222 | echo "hello world" > /mnt/rbd/hello.txt 223 | ``` 224 | 225 | ### CephFS file access 226 | 227 | ```bash 228 | # Add the 0 keyring file according to the admin node equivalent 229 | sudo vi /etc/ceph/ceph.client.0.keyring 230 | sudo mkdir /mnt/cephfs 231 | sudo mount -t ceph ceph-server-1:6789:/ /mnt/cephfs -o name=0,fs=fs_name 232 | sudo chown vagrant:vagrant /mnt/cephfs 233 | df -h 234 | echo "hello world" > /mnt/cephfs/hello.txt 
235 | ``` 236 | 237 | ### RGW OpenStack Swift object access 238 | 239 | ```bash 240 | sudo dnf -y install python3-pip 241 | pip3 install --user python-swiftclient 242 | sudo rados --id harald lspools 243 | swift -A http://ceph-server-2:80/auth/1.0 -U user1:swift -K 'Swiftuser1key' list 244 | swift -A http://ceph-server-2:80/auth/1.0 -U user1:swift -K 'Swiftuser1key' post container-1 245 | swift -A http://ceph-server-2:80/auth/1.0 -U user1:swift -K 'Swiftuser1key' list 246 | base64 /dev/urandom | head -c 10000000 >dummy_file1.txt 247 | swift -A http://ceph-server-2:80/auth/1.0 -U user1:swift -K 'Swiftuser1key' upload container-1 dummy_file1.txt 248 | ``` 249 | 250 | ### RGW S3 object access 251 | 252 | Here we are actually testing S3 client access to the same data that was stored 253 | using the Swift protocol. 254 | 255 | ```bash 256 | curl -s https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip -o awscliv2.zip 257 | sudo dnf -y install unzip 258 | unzip awscliv2.zip >/dev/null 2>&1 259 | cd aws 260 | sudo ./install 261 | cd .. 262 | rm -rf awscliv2.zip aws 263 | aws configure set aws_access_key_id S3user1 --profile ceph 264 | aws configure set aws_secret_access_key S3user1key --profile ceph 265 | # Set the region, otherwise bucket creation will fail 266 | aws configure set region us-east-1 --profile ceph 267 | # The following speeds up AWS CLI requests significantly 268 | export AWS_EC2_METADATA_DISABLED=true 269 | aws --profile ceph --endpoint http://ceph-server-2 s3 ls 270 | aws --profile ceph --endpoint http://172.21.12.13 s3 ls s3://container-1 271 | ``` 272 | 273 | ### Grafana Performance Dashboards 274 | 275 | By default, the Grafana dashboards that are part of the Web UI "Overall Performance" 276 | tabs will not show up. 277 | To fix this, you need to 278 | 279 | 1. Add ceph-admin to `/etc/hosts` on your host: `172.21.12.10 ceph-admin` 280 | 2. 
Open a browser tab and point it to [https://ceph-admin:3000](https://ceph-admin:3000), then accept the self-signed certificate 281 | 282 | Afterwards, the Grafana dashboards should show up in the respective 283 | "Overall Performance" tabs for Hosts, OSDs, Pools, Images, File Systems, and 284 | Object Gateway Daemons. 285 | 286 | ### Reducing to 3 mons 287 | 288 | After adding all nodes, a count of 5 mons is set while only four nodes are 289 | available, thus only four mons will be running: 290 | 291 | ```bash 292 | vagrant ssh ceph-admin 293 | $ sudo ceph orch ls 294 | NAME PORTS RUNNING REFRESHED AGE PLACEMENT 295 | ... 296 | mon 4/5 2m ago 5w count:5 297 | ``` 298 | 299 | To get a better-balanced state, reduce the number of mons to 3: 300 | 301 | ```bash 302 | cat <<EOF >3-mons.yaml 303 | service_type: mon 304 | service_name: mon 305 | placement: 306 | count: 3 307 | EOF 308 | sudo ceph orch apply -i 3-mons.yaml 309 | ``` 310 | 311 | This will result in: 312 | 313 | ```bash 314 | $ sudo ceph orch ls 315 | NAME PORTS RUNNING REFRESHED AGE PLACEMENT 316 | ... 317 | mon 3/3 65s ago 6s count:3 318 | ... 319 | ``` 320 | 321 | ## Shutting down and restarting the VMs 322 | 323 | Shutting down the VMs with `vagrant halt` then re-starting them at a later time 324 | with `vagrant reload` works with this solution. 325 | 326 | RBD mappings/mounts and CephFS mounts on the ceph-client VM need to be 327 | re-created after re-starting. 328 | 329 | ## Deleting the deployment 330 | 331 | Ensure that the machines are up and running when destroying them 332 | using `vagrant destroy -f`, because only then will the machines be automatically 333 | unregistered at Red Hat. 334 | 335 | ## Contacting the Project Maintainers, Contributing 336 | 337 | **NOTE: This repository has been configured with the [DCO bot](https://github.com/probot/dco).** 338 | 339 | If you have any questions or issues you can create a new [issue here](https://github.com/IBM/rhcs5-vagrant/issues). 
340 | 341 | Pull requests are very welcome! Make sure your patches are well tested. 342 | Ideally create a topic branch for every separate change you make. For 343 | example: 344 | 345 | 1. Fork the repo 346 | 2. Create your feature branch (`git checkout -b my-new-feature`) 347 | 3. Commit your changes (`git commit -am 'Added some feature'`) 348 | 4. Push to the branch (`git push origin my-new-feature`) 349 | 5. Create new Pull Request 350 | 351 | ## License 352 | 353 | All source files must include a Copyright and License header. The SPDX license header is 354 | preferred because it can be easily scanned. 355 | 356 | If you would like to see the detailed LICENSE click [here](LICENSE). 357 | 358 | ```text 359 | # 360 | # Copyright 2022- IBM Inc. All rights reserved 361 | # SPDX-License-Identifier: MIT 362 | # 363 | ``` 364 | -------------------------------------------------------------------------------- /Vagrantfile: -------------------------------------------------------------------------------- 1 | # Copyright contributors to the rhcs5-vagrant project. 2 | # Based on vagrant-rhel8 code - Copyright (c) 2019 Andrew Garner 3 | 4 | ### Some parameters that can be adjusted 5 | 6 | # Number of ceph-server nodes. If you change the default number of 3 you 7 | # need to adjust the ceph-prep/config and ceph-prep/hosts files accordingly. 8 | N = 3 9 | 10 | # The libvirt storage pool to use 11 | STORAGE_POOL = "default" 12 | 13 | # Amount of Memory (RAM) per node. 8192 is recommended, but 3072 works as well. 14 | RAM_SIZE = 8192 15 | 16 | # IP prefix and start address for the VMs 17 | # The ceph-admin node will get IP_PREFIX.IPSTART (default: 172.21.12.10) 18 | # The ceph-client and ceph-server-x nodes will get subsequent addresses. 19 | IP_PREFIX = "172.21.12." 
20 | IP_START = 10 21 | 22 | ### Do not modify the code below unless you are developing 23 | 24 | Vagrant.require_version ">= 2.1.0" # 2.1.0 minimum required for triggers 25 | 26 | ENV['VAGRANT_DEFAULT_PROVIDER'] = 'libvirt' 27 | 28 | user = ENV['RH_SUBSCRIPTION_MANAGER_USER'] 29 | password = ENV['RH_SUBSCRIPTION_MANAGER_PW'] 30 | if !user or !password 31 | puts 'Required environment variables not found. Please set RH_SUBSCRIPTION_MANAGER_USER and RH_SUBSCRIPTION_MANAGER_PW' 32 | abort 33 | end 34 | 35 | register_script = %{ 36 | if ! subscription-manager status; then 37 | sudo subscription-manager register --username=#{user} --password=#{password} --auto-attach 38 | fi 39 | } 40 | 41 | unregister_script = %{ 42 | if subscription-manager status; then 43 | sudo subscription-manager unregister 44 | fi 45 | } 46 | 47 | timezone_script = %{ 48 | localectl set-keymap de 49 | timedatectl set-timezone Europe/Berlin 50 | } 51 | 52 | Vagrant.configure("2") do |config| 53 | config.vm.box = "roboxes/rhel9" 54 | 55 | config.vm.provider :libvirt do |libvirt| 56 | # Do not use (user) session libvirt - VM networks will not work there on Fedora! 
57 | libvirt.qemu_use_session = false 58 | libvirt.memory = RAM_SIZE 59 | libvirt.cpus = 2 60 | libvirt.disk_bus = "virtio" 61 | libvirt.storage_pool_name = STORAGE_POOL 62 | # To avoid USB device resource conflicts 63 | libvirt.graphics_type = "spice" 64 | (1..2).each do 65 | libvirt.redirdev :type => "spicevmc" 66 | end 67 | end 68 | config.ssh.forward_agent = true 69 | config.ssh.insert_key = false 70 | config.hostmanager.enabled = true 71 | config.vm.synced_folder './', '/vagrant', type: 'rsync' 72 | 73 | # The Ceph client will be our client machine to mount volumes and interact with the cluster 74 | config.vm.define "ceph-client" do |client| 75 | client.vm.hostname = "ceph-client" 76 | client.vm.network "private_network", ip: IP_PREFIX+"#{IP_START + 1}" 77 | client.vm.provision "shell", inline: register_script 78 | client.vm.provision "shell", inline: timezone_script 79 | client.vm.provision "ansible" do |ansible| 80 | ansible.playbook = "ceph-prep/client-playbook.yml" 81 | ansible.extra_vars = { 82 | node_ip: IP_PREFIX+"#{IP_START + 1}", 83 | user: user, 84 | password: password, 85 | } 86 | end 87 | end 88 | 89 | # We need one Ceph admin machine to manage the cluster 90 | config.vm.define "ceph-admin" do |admin| 91 | admin.vm.hostname = "ceph-admin" 92 | admin.vm.network "private_network", ip: IP_PREFIX+"#{IP_START}" 93 | admin.vm.provision "shell", inline: register_script 94 | admin.vm.provision "shell", inline: timezone_script 95 | admin.vm.provision "ansible" do |ansible| 96 | ansible.playbook = "ceph-prep/admin-playbook.yml" 97 | ansible.extra_vars = { 98 | node_ip: IP_PREFIX+"#{IP_START}", 99 | user: user, 100 | password: password, 101 | } 102 | end 103 | end 104 | 105 | # We provision three nodes to be Ceph servers 106 | (1..N).each do |i| 107 | config.vm.define "ceph-server-#{i}" do |server| 108 | server.vm.hostname = "ceph-server-#{i}" 109 | server.vm.network "private_network", ip: IP_PREFIX+"#{IP_START+i+1}" 110 | # Attach disks for Ceph OSDs 111 | 
server.vm.provider "libvirt" do |libvirt| 112 | # 1 small disk 113 | num = 1 114 | (1..num).each do |disk| 115 | libvirt.storage :file, :size => '5G', :cache => 'none' 116 | end 117 | end 118 | server.vm.provision "shell", inline: register_script 119 | server.vm.provision "shell", inline: timezone_script 120 | server.vm.provision "ansible" do |ansible| 121 | ansible.playbook = "ceph-prep/common-playbook.yml" 122 | ansible.extra_vars = { 123 | node_ip: IP_PREFIX+"#{IP_START+i+1}", 124 | user: user, 125 | password: password, 126 | } 127 | end 128 | end 129 | end 130 | 131 | config.trigger.before :destroy do |trigger| 132 | trigger.name = "Before Destroy trigger" 133 | trigger.info = "Unregistering this VM from RedHat Subscription Manager..." 134 | trigger.warn = "If this fails, unregister VMs manually at https://access.redhat.com/management/subscriptions" 135 | trigger.run_remote = {inline: unregister_script} 136 | trigger.on_error = :continue 137 | end # trigger.before :destroy 138 | end # vagrant configure 139 | -------------------------------------------------------------------------------- /ceph-prep/admin-playbook.yml: -------------------------------------------------------------------------------- 1 | # 2 | # Copyright 2022- IBM Inc. 
All rights reserved 3 | # SPDX-License-Identifier: MIT 4 | # 5 | --- 6 | - import_playbook: common-playbook.yml 7 | - hosts: all 8 | become: true 9 | vars: 10 | prereqs_path: "/vagrant/ceph-prep" 11 | tasks: 12 | - name: Install cephadm-ansible 13 | ansible.builtin.dnf: 14 | name: "cephadm-ansible" 15 | state: latest 16 | 17 | - name: Create inventory directory 18 | ansible.builtin.file: 19 | path: /usr/share/cephadm-ansible/inventory/production 20 | state: directory 21 | 22 | - name: Copy the hosts inventory to server location 23 | ansible.builtin.copy: 24 | src: /vagrant/ceph-prep/hosts 25 | dest: /usr/share/cephadm-ansible/inventory/production/hosts 26 | mode: '0644' 27 | remote_src: true 28 | 29 | - name: Copy ssh config 30 | ansible.builtin.copy: 31 | src: /vagrant/ceph-prep/config 32 | dest: /home/vagrant/.ssh/config 33 | mode: '0600' 34 | owner: 'vagrant' 35 | group: 'vagrant' 36 | remote_src: true 37 | -------------------------------------------------------------------------------- /ceph-prep/client-playbook.yml: -------------------------------------------------------------------------------- 1 | # 2 | # Copyright 2022- IBM Inc. 
All rights reserved 3 | # SPDX-License-Identifier: MIT 4 | # 5 | --- 6 | - hosts: all 7 | become: true 8 | vars: 9 | prereqs_path: "/vagrant/ceph-prep" 10 | tasks: 11 | - name: Update all packages 12 | ansible.builtin.dnf: 13 | name: "*" 14 | state: latest 15 | 16 | - name: Autoremove unneeded packages installed as dependencies 17 | ansible.builtin.dnf: 18 | autoremove: yes 19 | 20 | - name: Disable all RHSM repositories 21 | community.general.rhsm_repository: 22 | name: '*' 23 | state: disabled 24 | 25 | - name: Download ceph-tools repository 26 | ansible.builtin.uri: 27 | url: https://public.dhe.ibm.com/ibmdl/export/pub/storage/ceph/ibm-storage-ceph-8-rhel-9.repo 28 | dest: /etc/yum.repos.d/ibm-storage-ceph-8-rhel-9.repo 29 | 30 | - name: Enable select repositories 31 | community.general.rhsm_repository: 32 | name: 33 | - rhel-9-for-x86_64-baseos-rpms 34 | - rhel-9-for-x86_64-appstream-rpms 35 | state: enabled 36 | 37 | - name: Install ceph-common 38 | ansible.builtin.dnf: 39 | name: "ceph-common" 40 | state: latest 41 | -------------------------------------------------------------------------------- /ceph-prep/common-playbook.yml: -------------------------------------------------------------------------------- 1 | # 2 | # Copyright 2022- IBM Inc. 
All rights reserved
 3 | # SPDX-License-Identifier: MIT
 4 | #
 5 | ---
 6 | - hosts: all
 7 |   become: true
 8 |   vars:
 9 |     prereqs_path: "/vagrant/ceph-prep"
10 |   tasks:
11 |   - name: Update all packages
12 |     ansible.builtin.dnf:
13 |       name: "*"
14 |       state: latest
15 | 
16 |   - name: Autoremove unneeded packages installed as dependencies
17 |     ansible.builtin.dnf:
18 |       autoremove: yes
19 | 
20 |   - name: Download ceph-tools repository
21 |     ansible.builtin.uri:
22 |       url: https://public.dhe.ibm.com/ibmdl/export/pub/storage/ceph/ibm-storage-ceph-8-rhel-9.repo
23 |       dest: /etc/yum.repos.d/ibm-storage-ceph-8-rhel-9.repo
24 | 
25 |   - name: Disable all RHSM repositories
26 |     community.general.rhsm_repository:
27 |       name: '*'
28 |       state: disabled
29 | 
30 |   - name: Enable select repositories
31 |     community.general.rhsm_repository:
32 |       name:
33 |       - rhel-9-for-x86_64-baseos-rpms
34 |       - rhel-9-for-x86_64-appstream-rpms
35 |       state: enabled
36 | 
37 |   - name: Install license package (most likely already installed)
38 |     ansible.builtin.dnf:
39 |       name: "ibm-storage-ceph-license"
40 |       state: latest
41 | 
42 |   - name: Accept license conditions
43 |     ansible.builtin.file:
44 |       name: "/usr/share/ibm-storage-ceph-license/accept"
45 |       state: touch
46 | 
47 |   - name: Key exchange, part 1
48 |     ansible.builtin.copy:
49 |       src: /vagrant/ceph-prep/id_rsa
50 |       dest: /home/vagrant/.ssh/id_rsa
51 |       mode: '0600'
52 |       remote_src: true
53 |     become: false
54 | 
55 |   - name: Key exchange, part 2
56 |     ansible.builtin.copy:
57 |       src: /vagrant/ceph-prep/id_rsa.pub
58 |       dest: /home/vagrant/.ssh/id_rsa.pub
59 |       remote_src: true
60 |     become: false
61 | 
62 |   - name: Key exchange, part 3 (append the key only if it is not present yet)
63 |     ansible.builtin.shell: grep -qxF -f /home/vagrant/.ssh/id_rsa.pub /home/vagrant/.ssh/authorized_keys 2>/dev/null || cat /home/vagrant/.ssh/id_rsa.pub >> /home/vagrant/.ssh/authorized_keys
64 |     become: false
65 | 
66 |   - name: Key exchange, part 4
67 |     ansible.builtin.file:
68 |       path: /root/.ssh
69 |       state: directory
70 |       mode: '0700'
71 | 
72 |   - name: Key exchange, part 5 (same guard, so re-provisioning does not duplicate keys)
73 |     ansible.builtin.shell: grep -qxF -f /home/vagrant/.ssh/authorized_keys /root/.ssh/authorized_keys 2>/dev/null || cat /home/vagrant/.ssh/authorized_keys >> /root/.ssh/authorized_keys
74 | 
--------------------------------------------------------------------------------
/ceph-prep/config:
--------------------------------------------------------------------------------
 1 | Host ceph-server-1
 2 |   Hostname ceph-server-1
 3 |   User vagrant
 4 | 
 5 | Host ceph-server-2
 6 |   Hostname ceph-server-2
 7 |   User vagrant
 8 | 
 9 | Host ceph-server-3
10 |   Hostname ceph-server-3
11 |   User vagrant
12 | 
--------------------------------------------------------------------------------
/ceph-prep/hosts:
--------------------------------------------------------------------------------
1 | ceph-server-1
2 | ceph-server-2
3 | ceph-server-3
4 | 
5 | [admin]
6 | ceph-admin
7 | 
--------------------------------------------------------------------------------