├── LICENSE.md
├── README.md
├── _config.yml
└── images
    └── health.png

/LICENSE.md:
--------------------------------------------------------------------------------
MIT License

Copyright (c) 2021 Hippolyte Vergnol

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

# CEPH Storage Cluster on Raspberry Pi
# Installation Sheet

![Ceph](https://ceph.com/wp-content/uploads/2016/07/Ceph_Logo_Standard_RGB_120411_fa.png)

## Prerequisites
* 7 server nodes (1 admin, 3 monitors, 3 OSDs) with Ubuntu 16.04 server installed
* Root privileges on all nodes

For the whole tutorial, we will use **Raspberry Pi 3 Model B** boards.

### Downloading the image
You can download the classic Ubuntu Server image from the [Ubuntu Raspberry Pi wiki](https://wiki.ubuntu.com/ARM/RaspberryPi) (section **Unofficial images**: [`ubuntu-16.04-preinstalled-server-armhf+raspi3.img.xz`](https://www.finnie.org/software/raspberrypi/ubuntu-rpi3/ubuntu-16.04-preinstalled-server-armhf+raspi3.img.xz)). Make sure to choose the Ubuntu Classic Server 16.04 version for the Raspberry Pi 3. Newer Ubuntu versions are available on the [Ubuntu website](https://ubuntu.com/download/raspberry-pi).

### Flashing the image
Now you must flash the image onto the SD card. It is recommended to use [Etcher.io](https://www.balena.io/etcher/), which makes the process incredibly simple, fast and safe. Etcher is open source and available for Windows, macOS and Linux.

Now you are all set to configure the nodes and build up your cluster!

## Configuring the Nodes

### Activate the built-in Wifi

```
mkdir wifi-firmware
cd wifi-firmware
wget https://github.com/RPi-Distro/firmware-nonfree/raw/master/brcm/brcmfmac43430-sdio.bin
wget https://github.com/RPi-Distro/firmware-nonfree/raw/master/brcm/brcmfmac43430-sdio.txt
sudo cp *sdio* /lib/firmware/brcm/
cd ..
```

You must reboot the machine for the changes to take effect.
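After rebooting, you can confirm that the Broadcom firmware was actually picked up — a quick, optional check (the exact kernel log lines may vary with your kernel version):

```
sudo reboot
# once the Pi is back up, look for the brcmfmac driver in the kernel log
sudo dmesg | grep brcmfmac
```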
*Thanks to this [wiki](https://wiki.ubuntu.com/ARM/RaspberryPi).*

### Set up a wireless connection

First, install wifi support:

```
sudo apt-get install wireless-tools wpasupplicant
```

Reboot the machine, since the name of your network interface might change during the process.

```
sudo reboot
```

Get the name of your wireless network interface with:
```
iwconfig
```

Open the network interfaces configuration:

```
sudo nano /etc/network/interfaces
```

Append the following content to the file (make sure to replace ``wlan0`` with the name of your wireless network interface):

```
allow-hotplug wlan0
iface wlan0 inet dhcp
wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf
```

Now, open the wireless configuration file:

```
sudo nano /etc/wpa_supplicant/wpa_supplicant.conf
```

Add the following content to the file, filled in with the information about your wifi network:

```
network={
    ssid="NETWORK-SSID"
    psk="NETWORK-PASSWORD"
}
```
With a WPA2 Enterprise network, you might want to use the following configuration instead:

```
network={
    ssid="NETWORK-SSID"
    scan_ssid=1
    key_mgmt=WPA-EAP
    identity="YOUR_USERNAME"
    password="YOUR_PASSWORD"
    eap=PEAP
    phase1="peaplabel=0"
    phase2="auth=MSCHAPV2"
}
```

### Create the Ceph User

```
sudo useradd -m -s /bin/bash cephuser
sudo passwd cephuser
```

### Add the user to the sudoers

```
echo "cephuser ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephuser
sudo chmod 0440 /etc/sudoers.d/cephuser
sudo sed -i 's/Defaults requiretty/#Defaults requiretty/g' /etc/sudoers
```

### Install and Configure NTP

Install NTP to synchronize the date and time on all nodes, run the ntpdate command once to set the clock from the US pool NTP servers, then enable and start the NTP service so that it runs at boot time.
```
sudo apt-get install -y ntp ntpdate ntp-doc
sudo ntpdate 0.us.pool.ntp.org
sudo systemctl enable ntp
sudo systemctl start ntp
```

### Install Python and parted

We will need the Python packages to build the Ceph cluster, and parted to prepare the OSD disks.
```
sudo apt-get install -y python python-pip parted
```

### Configure the Hosts File

Edit the hosts file on all nodes if the IP addresses don't resolve to their hostnames, which might be the case on an enterprise network.
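You can first check whether the node names already resolve — a quick sanity check using the hostnames from the table below:

```
getent hosts admin mon1 mon2 mon3 osd1 osd2 osd3
```

If some of the names are missing from the output, open the hosts file: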
```
sudo nano /etc/hosts
```

Paste the configuration below (or adapt it according to your setup):

```
192.168.1.50    admin
192.168.1.61    mon1
192.168.1.62    mon2
192.168.1.63    mon3
192.168.1.71    osd1
192.168.1.72    osd2
192.168.1.73    osd3
```

### Adding the CEPH repository to the sources

To add the CEPH repository to the sources, create or edit the ``/etc/apt/sources.list.d/ceph.list`` file using:

```
sudo nano /etc/apt/sources.list.d/ceph.list
```

Replace the content of the file with the following line:

```
deb https://download.ceph.com/debian-jewel/ xenial main
```

## Configure the SSH Server

Our admin node is used for installing and configuring all cluster nodes remotely. Therefore, we need a user on the ceph-admin node with privileges to connect to all nodes without a password. In other words, we need to configure password-less SSH access for the user *cephuser* on the *admin* node.

### Generate SSH keys

First, generate the ssh keys for *cephuser*:

```
ssh-keygen
```

Leave the passphrase blank/empty.

Next, create a configuration file for the ssh client.
```
sudo nano ~/.ssh/config
```

Paste the configuration below into the file:

```
Host admin
    Hostname admin
    User cephuser
Host mon1
    Hostname mon1
    User cephuser
Host mon2
    Hostname mon2
    User cephuser
Host mon3
    Hostname mon3
    User cephuser
Host osd1
    Hostname osd1
    User cephuser
Host osd2
    Hostname osd2
    User cephuser
Host osd3
    Hostname osd3
    User cephuser
```

Change the permission of the configuration file to *644*:

```
sudo chmod 644 ~/.ssh/config
```

### Add the SSH keys to the nodes

Now we will use the ssh-copy-id command to add the key to all nodes:

```
ssh-keyscan osd1 osd2 osd3 mon1 mon2 mon3 >> ~/.ssh/known_hosts
ssh-copy-id osd1
ssh-copy-id osd2
ssh-copy-id osd3
ssh-copy-id mon1
ssh-copy-id mon2
ssh-copy-id mon3
```

## (Optional) Configure the Ubuntu Firewall

For security reasons, we should turn on the firewall on the servers. Preferably we use **Ufw** (*Uncomplicated Firewall*), the default Ubuntu firewall, to protect the system. In this step, enable ufw on all nodes, then open the ports needed by each role: SSH (22/tcp) everywhere, the monitor port (6789/tcp) on the monitor nodes, and the OSD port range (6800-7300/tcp) on the OSD nodes.

## Configure OSD Nodes

For this installation, we have 3 OSD nodes. Each of these nodes has two disks: the micro-SD card holding the OS, and an empty storage device attached over USB.
```
/dev/sda for root partition
/dev/sdb is empty partition
```

We will use ``/dev/sdb`` for the Ceph disk. From the ceph-admin node, log in to each OSD node and format the ``/dev/sdb`` device with the *XFS* file system.
```
ssh osd1
```
Note: repeat the process for osd2 and osd3.
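Before partitioning anything, it is worth confirming that the USB drive really shows up as ``/dev/sdb`` on every board, since device names can differ depending on how the Pi enumerates its storage — a quick check from the admin node, assuming the passwordless SSH access configured above:

```
# list the block devices seen by each OSD node
for node in osd1 osd2 osd3; do
    echo "--- $node ---"
    ssh $node lsblk
done
```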
Check the disk partitioning scheme using the *fdisk* command:

```
sudo fdisk -l /dev/sdb
```

### Format the /dev/sdb partition

We will give ``/dev/sdb`` a *GPT* partition table and an *XFS* file system. First, create the partition table and partition with the parted command:

```
sudo parted -s /dev/sdb mklabel gpt mkpart primary xfs 0% 100%
```

Next, format it as XFS with the mkfs command:

```
sudo mkfs.xfs -f /dev/sdb
```

You can check the partition status using these commands:

```
sudo fdisk -s /dev/sdb
sudo blkid -o value -s TYPE /dev/sdb
```

## Build the Ceph Cluster

In this section, we will install Ceph on all nodes from the admin node. To get started, log in to the admin node.

```
ssh cephuser@admin
```

### Install ceph-deploy on the admin node

Install *ceph-deploy* on the ceph-admin node with the pip command:

```
sudo pip install ceph-deploy
```

### Create a new Cluster

After the ceph-deploy tool has been installed, create a new directory for the Ceph cluster configuration:

```
mkdir cluster
cd cluster/
```

Next, using the *ceph-deploy* command, create a new cluster by passing the monitor node names as parameters:

```
ceph-deploy new mon1 mon2 mon3
```

The command will generate the Ceph cluster configuration file ``ceph.conf`` in the cluster directory.

### Install Ceph on all nodes

Now install Ceph on all nodes from the ceph-admin node with the commands below:

```
ceph-deploy install admin mon1 mon2 mon3
ceph-deploy install osd1 osd2 osd3
```

This step might take some time.

Now deploy the initial monitors and gather the keys:
```
ceph-deploy mon create-initial
```

### Adding OSDs to the Cluster

After Ceph has been installed on all nodes, we can add the OSD daemons to the cluster. The OSD daemons will create the data and journal partitions on the disk ``/dev/sdb``.

You can list the available disks on all OSD nodes and check that ``/dev/sdb`` carries the XFS file system we created before:
```
ceph-deploy disk list osd1 osd2 osd3
```

Next, erase the device partition table and contents on all OSD nodes with the zap option:

```
ceph-deploy disk zap osd1:/dev/sdb
ceph-deploy disk zap osd2:/dev/sdb
ceph-deploy disk zap osd3:/dev/sdb
```

Now we will prepare all OSD nodes:

```
ceph-deploy osd prepare osd1:/dev/sdb
ceph-deploy osd prepare osd2:/dev/sdb
ceph-deploy osd prepare osd3:/dev/sdb
```

The result of these steps is that ``/dev/sdb`` now has two partitions:
```
/dev/sdb1 - Ceph Data
/dev/sdb2 - Ceph Journal
```

You can check it directly on the OSD node:

```
ssh osd1
sudo fdisk -l /dev/sdb
```

### Pushing the admin settings

Next, deploy the management key to all associated nodes:
```
ceph-deploy admin mon1 mon2 mon3 osd1 osd2 osd3
```

Finally, change the permissions of the key file to *644* on **all** nodes.
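You can run the command by hand on each node, or push it to the monitor and OSD nodes from the admin node in one loop — a minimal sketch, assuming the passwordless SSH access configured earlier:

```
# set the keyring permissions on every monitor and OSD node
for node in mon1 mon2 mon3 osd1 osd2 osd3; do
    ssh $node sudo chmod 644 /etc/ceph/ceph.client.admin.keyring
done
```

On each individual node, the command is: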
```
sudo chmod 644 /etc/ceph/ceph.client.admin.keyring
```

The Ceph Cluster on Ubuntu 16.04 has been created!

## Testing the Cluster

### Cluster health
Now, to test the cluster and make sure that it works as intended, you can run the following commands:

```
ssh mon1
sudo ceph health
sudo ceph -s
```

![Ceph Cluster Health](./images/health.png)

### Starting over

If at any point you run into trouble and want to start over, execute the following (from the ``cluster`` directory) to purge the Ceph packages and erase all of their data and configuration:
```
ceph-deploy purge {ceph-node} [{ceph-node}]
ceph-deploy purgedata {ceph-node} [{ceph-node}]
ceph-deploy forgetkeys
rm ceph.*
```

## Special Thanks

Special thanks to the following guides that helped me write this tutorial:

* The definitive guide: Ceph Cluster on Raspberry Pi, *Bryan Apperson* → [**link**](http://bryanapperson.com/blog/the-definitive-guide-ceph-cluster-on-raspberry-pi/).
* Small scale Ceph Replicated Storage, *James Coyle* → [**link**](https://www.jamescoyle.net/how-to/2105-small-scale-ceph-replicated-storage).
* Ceph Pi - Mount Up, *Vess Bakalov* → [**link**](http://millibit.blogspot.com/2014/12/ceph-pi-mount-up-mounting-your-ceph.html).
* How to install a Ceph Storage Cluster on Ubuntu 16.04, *HowToForge* → [**link**](https://www.howtoforge.com/tutorial/how-to-install-a-ceph-cluster-on-ubuntu-16-04/).
* RaspberryPi, *Ubuntu Wiki* → [**link**](https://wiki.ubuntu.com/ARM/RaspberryPi).

--------------------------------------------------------------------------------
/_config.yml:
--------------------------------------------------------------------------------
plugins:
  - jekyll-relative-links
relative_links:
  enabled: true
  collections: true
include:
  - README.md
  - LICENSE.md

--------------------------------------------------------------------------------
/images/health.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/CyberHippo/Ceph-Pi/25dc494f36fc57a9961680e3d9cdc2213b521f09/images/health.png
--------------------------------------------------------------------------------