├── .dockerignore ├── .gitignore ├── CHANGELOG.md ├── Dockerfile ├── Gopkg.lock ├── Gopkg.toml ├── LICENSE ├── Makefile ├── README.md ├── check-ceph-rbd-docker-plugin.sh ├── contrib └── el7 │ ├── Makefile │ ├── README.md │ ├── SOURCES │ ├── rbd-docker-plugin-wrapper │ ├── rbd-docker-plugin.conf │ └── rbd-docker-plugin.service │ └── SPECS │ └── rbd-docker-plugin.spec ├── driver.go ├── driver_test.go ├── etc ├── cron.d │ └── rbd-docker-plugin-checks ├── init │ └── rbd-docker-plugin.conf ├── logrotate.d │ └── rbd-docker-plugin_logrotate └── systemd │ └── rbd-docker-plugin.service ├── main.go ├── marathon-test ├── Dockerfile ├── Makefile ├── marathon-test.json └── run.sh ├── micro-osd.sh ├── postinstall ├── postremove ├── tpkg.yml ├── unlock_test.go ├── utils.go ├── utils_test.go └── version.go /.dockerignore: -------------------------------------------------------------------------------- 1 | rbd-docker-plugin 2 | *.tpkg 3 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | rbd-docker-plugin 2 | *.tpkg 3 | vendor 4 | -------------------------------------------------------------------------------- /CHANGELOG.md: -------------------------------------------------------------------------------- 1 | # Change Log 2 | All notable changes to project should be documented in this file. 3 | We attempt to adhere to [Semantic Versioning](http://semver.org/). 4 | 5 | ## [Unreleased] 6 | 7 | ### Added 8 | ### Removed 9 | ### Changed 10 | 11 | ## [2.0.1] - 2017-08-28 12 | ### Changed 13 | - fixed List and Get methods to only return Mountpoint value if volume is 14 | actually mounted. Hopefully this matches the Docker Volume API better and 15 | fixes some spurious volume creation (See #3, #6, #12, and 16 | yp-engineering/rbd-docker-plugin#36) 17 | 18 | ## [2.0.0] - 2017-08-25 19 | ### Added 20 | - add golang/dep support for a repeatable build 21 | 22 | ### Removed 23 | - Removed all usage of go-ceph library. Simplifies the code and now only 24 | shelling out to 'rbd' executable 25 | 26 | ### Changed 27 | - Updated docker/go-plugin-helpers and latest sdk api 28 | - Fixed List() to call out to 'rbd ls' instead of relying on in memory per-host list 29 | 30 | ## [1.5.2] - 2017-08 31 | ### Changed 32 | - Remove cron script installation from tpkg 33 | 34 | ## [1.5.1] - 2016-09-20 35 | ### Changed 36 | - Update XFS checks to mount/unmount to clear any disk logs 37 | 38 | ## [1.5.0] - 2016-09-01 39 | ### Added 40 | - Updated plugin to include Capabilities() method and new method 41 | signatures for Mount(MountRequest) and Unmount(UnmountRequest) 42 | - add xfs filesystem check before mount - if xfs-repair -n returns 43 | error, Mount operation fails with note to manually repair 44 | 45 | ## [1.4.1] - 2016-08-05 46 | ### Changed 47 | - Bug fix from bad upstream merge: provide openContext and 48 | shutdownContext 49 | 50 | ## [1.4.0] - 2016-08-05 51 | ### Added 52 | - Merged upstream fixes for pull porcupie/rbd-docker-plugin#7 53 | 54 | ## [1.3.0] - 2016-08-05 55 | ### Added 56 | - Added goroutines and timeouts to all shell commands, hoping to prevent 57 | propagation of hung external procs to docker daemon 58 | 59 | ## [1.2.2] - 2016-06-08 60 | ### Changed 61 | - Docker is calling Unmount after failed Mount, causing trouble if 62 | device is still in use by another container (locked by same node). 
63 | The workaround / hack is to bail earlier in Unmount API call if rbd 64 | unmap fails with busy device error. This can leave the device usable but 65 | possibly in a funky state (unmounted from host but still mounted and 66 | accessible inside container) 67 | - related to porcupie/rbd-docker-plugin#5 68 | 69 | ## [1.2.1] - 2016-06-02 70 | ### Changed 71 | - When rbd map does not return device name but no error, try default 72 | device path (/dev/rbd//). Issue porcupie/rbd-docker-plugin#4 73 | 74 | ## [1.2.0] - 2016-06-02 75 | ### Added 76 | - Updated to pull in yp-engineering/rbd-docker-plugin v0.9.1.2, which 77 | includes support for Docker Volume Create options: size, pool, fstype 78 | 79 | ## [1.1.1] - 2016-04-15 80 | ### Changed 81 | - Due to issue with golang 1.6 and strict Host header requirements, we 82 | cannot use golang 1.6 to compile our plugin since docker never sends 83 | correct Host header for plugin socket usage. 84 | * Recompiled with golang 1.5 85 | 86 | ## [1.1.0] - 2016-04-15 87 | ### Added 88 | - added a cron job and check script for the Ceph configs and tpkg update 89 | 90 | ## [1.0.0] - 2016-04-15 91 | ### Changed 92 | - bump major version with deprecated --remove boolean flag 93 | - --remove flag now takes one of three values: ignore, delete or rename 94 | - ignore will just ignore the docker volume Remove call (new default) 95 | - delete will destroy the rbd volume on Remove request 96 | - rename will rename the rbd volume on Remove request, prefixed with `zz_` 97 | 98 | ## [0.5.0] - 2016-04-13 99 | ### Changed 100 | - pulled latest from upstream yp-engineering/rbd-docker-plugin 101 | - add new docker volume api support (Get, List) 102 | - use ceph/go-ceph instead of noahdesu/go-ceph 103 | - use docker/go-plugins-helpers/ instead of calavera/dkvolume 104 | 105 | ## [0.4.2] - 2016-03-16 106 | ### Changed 107 | - Update logrotate config to restart instead of reload 108 | ### Added 109 | - Some new marathon-tester configs for running a test container in 110 | Marathon/Mesos environment 111 | 112 | ## [0.4.1] - 2015-12-03 113 | ### Changed 114 | - Update systemd service unit to add --config /etc/ceph/ceph.conf 115 | - Force default config file in main.go to /etc/ceph/ceph.conf 116 | 117 | ## [0.4.0] - 2015-12-03 118 | ### Changed 119 | - Last ditch effort : Update all Plugin RBD functions to use CLI shell 120 | commands instead of go-ceph library 121 | - Provide command line flag --go-ceph to use go-ceph lib, otherwise default 122 | now is shell CLI command via `rbd` binary 123 | 124 | ## [0.3.1] - 2015-11-30 125 | ### Changed 126 | - Try to open RBD Image without read-only option (no effect) 127 | - Try to use same client-id for every connection -- not possible in 128 | go-ceph 129 | - Adding --conf options to external rbd operations (was having micro-osd 130 | issues) 131 | 132 | ## [0.3.0] - 2015-11-25 133 | ### Changed 134 | - Update go-ceph import to use github.com/ceph/go-ceph instead of 135 | noahdesu/go-ceph 136 | - Recreate ceph connection and pool context for every operation (don't 137 | try to cache them) 138 | 139 | ## [0.2.2] - 2015-11-19 140 | ### Changed 141 | - Disable the reload operation in systemd service unit, having issues 142 | with go-ceph lib and that operation (panics) 143 | - Update the Image Rename and Remove functions to use go-ceph lib 144 | instead of shelling out to rbd binary 145 | - Update the tpkg scripts to start the service on installation 146 | 147 | ## [0.2.1] - 2015-09-11 148 | ### Added 149 | - Merged pull request with some 
RPM scripts for use in generic Redhat EL7 (Thanks Clement Laforet ) 150 | 151 | ## [0.2.0] - 2015-08-25 152 | ### Changed 153 | - Added micro-osd script for testing Ceph locally 154 | 155 | ## [0.1.9] - 2015-08-20 156 | ### Changed 157 | - Added user ID and options to more shell rbd binary exec commands (Thanks Sébastien Han ) 158 | - Moving version definition from tpkg.yml to version.go 159 | - Better blkid integration (Thanks Sébastien Han ) 160 | -------------------------------------------------------------------------------- /Dockerfile: -------------------------------------------------------------------------------- 1 | FROM golang:1.5 2 | 3 | MAINTAINER Adam Avilla 4 | 5 | 6 | # Install Ceph. 7 | ENV CEPH_VERSION infernalis 8 | RUN curl -sSL 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | \ 9 | apt-key add - && \ 10 | echo deb http://ceph.com/debian-${CEPH_VERSION}/ jessie main | \ 11 | tee /etc/apt/sources.list.d/ceph-${CEPH_VERSION}.list && \ 12 | apt-get update && \ 13 | apt-get install -y --force-yes \ 14 | librados-dev \ 15 | librbd-dev \ 16 | ceph 17 | 18 | 19 | ENV SRC_ROOT /go/src/github.com/yp-engineering/rbd-docker-plugin 20 | 21 | # Setup our directory and give convenient path via ln. 22 | RUN mkdir -p ${SRC_ROOT} && \ 23 | ln -s ${SRC_ROOT} /rbd-docker-plugin 24 | WORKDIR ${SRC_ROOT} 25 | 26 | # Used to only go get if sources change. 27 | ADD *.go ${SRC_ROOT}/ 28 | RUN go get -t . 29 | 30 | # Add the rest of the files. 31 | ADD . ${SRC_ROOT} 32 | 33 | 34 | # Clean up all the apt stuff 35 | RUN apt-get clean && \ 36 | rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* 37 | 38 | 39 | CMD ["bash"] 40 | -------------------------------------------------------------------------------- /Gopkg.lock: -------------------------------------------------------------------------------- 1 | # This file is autogenerated, do not edit; changes may be undone by the next 'dep ensure'. 
2 | 3 | 4 | [[projects]] 5 | name = "github.com/Microsoft/go-winio" 6 | packages = ["."] 7 | revision = "78439966b38d69bf38227fbf57ac8a6fee70f69a" 8 | version = "v0.4.5" 9 | 10 | [[projects]] 11 | name = "github.com/coreos/go-systemd" 12 | packages = ["activation"] 13 | revision = "d2196463941895ee908e13531a23a39feb9e1243" 14 | version = "v15" 15 | 16 | [[projects]] 17 | name = "github.com/davecgh/go-spew" 18 | packages = ["spew"] 19 | revision = "346938d642f2ec3594ed81d874461961cd0faa76" 20 | version = "v1.1.0" 21 | 22 | [[projects]] 23 | name = "github.com/docker/go-connections" 24 | packages = ["sockets"] 25 | revision = "3ede32e2033de7505e6500d6c868c2b9ed9f169d" 26 | 27 | [[projects]] 28 | branch = "master" 29 | name = "github.com/docker/go-plugins-helpers" 30 | packages = ["sdk","volume"] 31 | revision = "a9ef19c479cb60e751efa55f7f2b265776af1abf" 32 | 33 | [[projects]] 34 | name = "github.com/pmezard/go-difflib" 35 | packages = ["difflib"] 36 | revision = "792786c7400a136282c1664665ae0a8db921c6c2" 37 | version = "v1.0.0" 38 | 39 | [[projects]] 40 | name = "github.com/stretchr/testify" 41 | packages = ["assert","require"] 42 | revision = "69483b4bd14f5845b5a1e55bca19e954e827f1d0" 43 | version = "v1.1.4" 44 | 45 | [[projects]] 46 | branch = "master" 47 | name = "golang.org/x/net" 48 | packages = ["proxy"] 49 | revision = "57efc9c3d9f91fb3277f8da1cff370539c4d3dc5" 50 | 51 | [[projects]] 52 | branch = "master" 53 | name = "golang.org/x/sys" 54 | packages = ["windows"] 55 | revision = "2d6f6f883a06fc0d5f4b14a81e4c28705ea64c15" 56 | 57 | [solve-meta] 58 | analyzer-name = "dep" 59 | analyzer-version = 1 60 | inputs-digest = "3de365ff1ec407fda506c9fcc6b7cb6680ff3aee5be4694ddb4f3d797a9da5fa" 61 | solver-name = "gps-cdcl" 62 | solver-version = 1 63 | -------------------------------------------------------------------------------- /Gopkg.toml: -------------------------------------------------------------------------------- 1 | 2 | # Gopkg.toml example 3 | # 4 | # Refer to https://github.com/golang/dep/blob/master/docs/Gopkg.toml.md 5 | # for detailed Gopkg.toml documentation. 
6 | # 7 | # required = ["github.com/user/thing/cmd/thing"] 8 | # ignored = ["github.com/user/project/pkgX", "bitbucket.org/user/project/pkgA/pkgY"] 9 | # 10 | # [[constraint]] 11 | # name = "github.com/user/project" 12 | # version = "1.0.0" 13 | # 14 | # [[constraint]] 15 | # name = "github.com/user/project2" 16 | # branch = "dev" 17 | # source = "github.com/myfork/project2" 18 | # 19 | # [[override]] 20 | # name = "github.com/x/y" 21 | # version = "2.4.0" 22 | 23 | 24 | [[constraint]] 25 | branch = "master" 26 | name = "github.com/docker/go-plugins-helpers" 27 | 28 | [[constraint]] 29 | name = "github.com/stretchr/testify" 30 | version = "1.1.4" 31 | 32 | [[constraint]] 33 | name = "github.com/docker/go-connections" 34 | # NOTE: these are broken but requested by another dep: 35 | #revision = "990a1a1a70b0da4c4cb70e117971a4f0babfbf1a" 36 | #version = "v0.2.1" 37 | # NOTE: This rev of master works: 38 | ##revision = "3ede32e2033de7505e6500d6c868c2b9ed9f169d" 39 | revision = "3ede32e2033de7505e6500d6c868c2b9ed9f169d" 40 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | The MIT License (MIT) 2 | 3 | Copyright (c) 2015 YP LLC 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in 13 | all copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN 21 | THE SOFTWARE. 
22 | -------------------------------------------------------------------------------- /Makefile: -------------------------------------------------------------------------------- 1 | # building the rbd docker plugin golang binary with version 2 | # makefile mostly used for packing a tpkg 3 | 4 | .PHONY: all build install clean test version setup systemd dep-tool 5 | 6 | IMAGE_PATH=ypengineering/rbd-docker-plugin 7 | TAG?=latest 8 | IMAGE=$(IMAGE_PATH):$(TAG) 9 | SUDO?= 10 | 11 | 12 | TMPDIR?=/tmp 13 | INSTALL?=install 14 | #TPKG_VERSION=$(VERSION)-2 15 | TPKG_VERSION=$(VERSION) 16 | 17 | BINARY=rbd-docker-plugin 18 | PKG_SRC=main.go driver.go version.go 19 | PKG_SRC_TEST=$(PKG_SRC) driver_test.go unlock_test.go 20 | 21 | PACKAGE_BUILD=$(TMPDIR)/$(BINARY).tpkg.buildtmp 22 | 23 | PACKAGE_BIN_DIR=$(PACKAGE_BUILD)/reloc/bin 24 | PACKAGE_ETC_DIR=$(PACKAGE_BUILD)/reloc/etc 25 | PACKAGE_INIT_DIR=$(PACKAGE_ETC_DIR)/init 26 | PACKAGE_LOG_CONFIG_DIR=$(PACKAGE_ETC_DIR)/logrotate.d 27 | PACKAGE_SYSTEMD_DIR=$(PACKAGE_ETC_DIR)/systemd/system 28 | 29 | CONFIG_FILES=tpkg.yml README.md LICENSE 30 | SYSTEMD_UNIT=etc/systemd/rbd-docker-plugin.service 31 | UPSTART_INIT=etc/init/rbd-docker-plugin.conf 32 | LOG_CONFIG=etc/logrotate.d/rbd-docker-plugin_logrotate 33 | SCRIPT_FILES=postinstall postremove 34 | BIN_FILES=dist/$(BINARY) check-ceph-rbd-docker-plugin.sh 35 | 36 | # Run these if you have a local dev env setup, otherwise they will / can be run 37 | # in the container. 38 | all: build 39 | 40 | dep-tool: 41 | go get -u github.com/golang/dep/cmd/dep 42 | 43 | vendor: dep-tool 44 | dep ensure 45 | 46 | # set VERSION from version.go, eval into Makefile for inclusion into tpkg.yml 47 | version: version.go 48 | $(eval VERSION := $(shell grep "VERSION" version.go | cut -f2 -d'"')) 49 | 50 | build: dist/$(BINARY) 51 | 52 | dist/$(BINARY): $(PKG_SRC) vendor 53 | go build -v -x -o dist/$(BINARY) . 54 | 55 | install: build test 56 | go install . 57 | 58 | clean: 59 | go clean 60 | rm -f dist/$(BINARY) 61 | rm -fr vendor/ 62 | 63 | uninstall: 64 | @$(RM) -iv `which $(BINARY)` 65 | 66 | # FIXME: TODO: this micro-osd script leaves ceph-mds laying around -- fix it up 67 | test: vendor 68 | TMP_DIR=$$(mktemp -d) && \ 69 | ./micro-osd.sh $$TMP_DIR && \ 70 | export CEPH_CONF=$${TMP_DIR}/ceph.conf && \ 71 | ceph -s && \ 72 | go test -v && \ 73 | rm -rf $$TMP_DIR 74 | 75 | 76 | # use existing ceph installation instead of micro-osd.sh - expecting CEPH_CONF to be set ... 77 | CEPH_CONF ?= /etc/ceph/ceph.conf 78 | local_test: vendor 79 | @echo "Using CEPH_CONF=$(CEPH_CONF)" 80 | test -n "${CEPH_CONF}" && \ 81 | $(SUDO) rbd ls && \ 82 | go test -v 83 | 84 | dist: 85 | mkdir dist 86 | 87 | systemd: dist 88 | cp systemd/rbd-docker-plugin.service dist/ 89 | 90 | 91 | 92 | # Used to have build env be inside container and to pull out the binary. 93 | make/%: build_docker 94 | $(SUDO) docker run ${DOCKER_ARGS} --rm -i $(IMAGE) make $* 95 | 96 | run: 97 | $(SUDO) docker run ${DOCKER_ARGS} --rm -it $(IMAGE) 98 | 99 | build_docker: 100 | $(SUDO) docker build -t $(IMAGE) . 
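# Usage sketch (comments only, no new targets): `make make/test` runs the test
# target inside the build container via the make/% pattern rule above, and
# `make binary_from_container` below drops dist/rbd-docker-plugin into the
# current host directory through the $${PWD} bind mount.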
101 | 
102 | binary_from_container:
103 | 	$(SUDO) docker run ${DOCKER_ARGS} --rm -it \
104 | 		-v $${PWD}:/rbd-docker-plugin/dist \
105 | 		-w /rbd-docker-plugin \
106 | 		$(IMAGE) make build
107 | 
108 | local:
109 | 	$(SUDO) docker run ${DOCKER_ARGS} --rm -it \
110 | 		-v $${PWD}:/rbd-docker-plugin \
111 | 		-w /rbd-docker-plugin \
112 | 		$(IMAGE)
113 | 
114 | 
115 | # container actions
116 | test_from_container: make/test
117 | 
118 | 
119 | 
120 | # build relocatable tpkg
121 | # TODO: repair PATHS at install to set TPKG_HOME (assumed /home/ops)
122 | package: version build local_test
123 | 	$(RM) -fr $(PACKAGE_BUILD)
124 | 	mkdir -p $(PACKAGE_BIN_DIR) $(PACKAGE_INIT_DIR) $(PACKAGE_SYSTEMD_DIR) $(PACKAGE_LOG_CONFIG_DIR)
125 | 	$(INSTALL) $(SCRIPT_FILES) $(PACKAGE_BUILD)/.
126 | 	$(INSTALL) $(BIN_FILES) $(PACKAGE_BIN_DIR)/.
127 | 	$(INSTALL) -m 0644 $(CONFIG_FILES) $(PACKAGE_BUILD)/.
128 | 	$(INSTALL) -m 0644 $(SYSTEMD_UNIT) $(PACKAGE_SYSTEMD_DIR)/.
129 | 	$(INSTALL) -m 0644 $(UPSTART_INIT) $(PACKAGE_INIT_DIR)/.
130 | 	$(INSTALL) -m 0644 $(LOG_CONFIG) $(PACKAGE_LOG_CONFIG_DIR)/.
131 | 	sed -i "s/^version:.*/version: $(TPKG_VERSION)/" $(PACKAGE_BUILD)/tpkg.yml
132 | 	tpkg --make $(PACKAGE_BUILD) --out $(CURDIR)
133 | 
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
  1 | # Simple Ceph RBD Docker VolumeDriver Plugin
  2 | 
  3 | * Use Case: Persistent Storage for a Single Docker Container
  4 |   * one RBD Image can only be used by one Docker Container at a time
  5 | 
  6 | * Plugin is a separate process running alongside Docker Daemon
  7 |   * plugin can be configured for a single Ceph User
  8 |   * run multiple plugin instances for varying configs (ceph user, default pool, default size)
  9 |   * OPTIONAL: pass extra config via volume name to override default pool and creation size:
 10 |     * docker run --volume-driver rbd -v poolname/imagename@size:/mnt/disk1 ...
 11 | 
 12 | * plugin supports all Docker VolumeDriver Plugin API commands (Volume Plugin API v1.12.x)
 13 |   * Create - can provision Ceph RBD Image in a pool of a certain size
 14 |     * controlled by `--create` boolean flag (default false)
 15 |     * default size from `--size` flag (default 20480 = 20GB)
 16 |   * Mount - Locks, Maps and Mounts RBD Image to the Host system
 17 |   * Unmount - Unmounts, Unmaps and Unlocks the RBD Image on request
 18 |   * Remove - Removes (destroys) RBD Image on request
 19 |     * only called for `docker run --rm -v ...` or `docker rm -v ...`
 20 |     * action controlled by plugin's `--remove` flag, which can be one of three values:
 21 |       - `ignore` - the call to delete the ceph rbd volume is ignored (default)
 22 |       - `rename` - will cause the image to be renamed with a `zz_` prefix for later culling
 23 |       - `delete` - will actually delete the ceph rbd image (destructive)
 24 |   * Get, List - Return information on accessible RBD volumes
 25 | 
 26 | ## Plugin Setup
 27 | 
 28 | Plugin is a standalone process and places a Socket file in a known location; it
 29 | needs to start before Docker. It does not daemonize by default, so if you need
 30 | it in the background, use normal shell process control (&).
 31 | 
 32 | The driver has a name, also used to name the socket, which is used to refer to
 33 | the plugin via the `--volume-driver=name` docker CLI option, allowing multiple
 34 | uniquely named plugin instances with different default configurations.
 35 | 
 36 | The default name is "rbd"; use `--volume-driver rbd` from docker.
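For example, two differently configured instances can run side by side (the
pool name "liverpool" below is illustrative, not special):

    sudo rbd-docker-plugin --name rbd --pool rbd &
    sudo rbd-docker-plugin --name rbd2 --pool liverpool &
    # then choose the driver per container:
    docker run --volume-driver rbd2 -v foo:/mnt/foo ...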
 37 | 
 38 | General build/run requirements:
 39 | * /usr/bin/rbd for manipulating Ceph RBD images
 40 | * /usr/sbin/mkfs.xfs for fs creation (default fstype)
 41 | * /usr/bin/mount and /usr/bin/umount
 42 | * golang/dep tool
 43 | 
 44 | Tested with Ceph version 0.94.2 on a Centos 7.1 host with Docker 1.12
 45 | 
 46 | ### Building rbd-docker-plugin
 47 | 
 48 | Clone the repo and use the Makefile:
 49 | 
 50 |     make
 51 | 
 52 | to get the `dist/rbd-docker-plugin` binary.
 53 | 
 54 | Or the equivalent shell commands:
 55 | 
 56 |     go get -u github.com/golang/dep/cmd/dep
 57 |     dep ensure
 58 |     go build -v -x -o dist/rbd-docker-plugin .
 59 | 
 60 | If none of the dependencies have changed, you might be able to get away with:
 61 | 
 62 |     go get github.com/porcupie/rbd-docker-plugin
 63 | 
 64 | 
 65 | ### Commandline Options
 66 | 
 67 |     Usage of ./rbd-docker-plugin:
 68 |       --ceph-user="admin": Ceph user to use for RBD
 69 |       --create=false: Can auto Create RBD Images (default: false)
 70 |       --fs="xfs": FS type for the created RBD Image (must have mkfs.type)
 71 |       --logdir="/var/log": Logfile directory for RBD Docker Plugin
 72 |       --mount="/var/lib/docker/volumes": Mount directory for volumes on host
 73 |       --name="rbd": Docker plugin name for use on --volume-driver option
 74 |       --pool="rbd": Default Ceph Pool for RBD operations
 75 |       --remove=false: Can Remove (destroy) RBD Images (default: false, volume will be renamed zz_name)
 76 |       --size=20480: RBD Image size to Create (in MB) (default: 20480 = 20GB)
 77 | 
 78 | ### Start the Plugin
 79 | 
 80 | Start with the default options:
 81 | 
 82 | * socket name=rbd, pool=rbd, user=admin, logfile=/var/log/rbd-docker-plugin.log
 83 | * no creation or removal of volumes
 84 | 
 85 |     sudo rbd-docker-plugin
 86 | 
 87 | For debugging, send the log to STDERR:
 88 | 
 89 |     sudo RBD_DOCKER_PLUGIN_DEBUG=1 rbd-docker-plugin
 90 | 
 91 | Use a different socket name and Ceph pool:
 92 | 
 93 |     sudo rbd-docker-plugin --name rbd2 --pool liverpool
 94 |     # docker run --volume-driver rbd2 -v ...
 95 | 
 96 | To allow creation of new RBD Images:
 97 | 
 98 |     sudo rbd-docker-plugin --create
 99 | 
100 | To allow creation and removal:
101 | 
102 |     sudo rbd-docker-plugin --create --remove
103 | 
104 | Then you would be able to use RBD volumes via the Docker CLI:
105 | 
106 |     docker run --volume-driver rbd -v ...
107 | 
108 | ### Testing
109 | 
110 | You can test using Docker Engine 1.8+, which has `--volume-driver` support.
111 | 
112 | * https://docker.com/
113 | 
114 | Alternatively, you can POST JSON to the socket to manually test. If your curl
115 | is new enough (v7.40+), you can use the `--unix-socket` option and syntax.
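For example (a sketch, assuming the plugin's default socket path as used below):

    curl --unix-socket /run/docker/plugins/rbd.sock \
         -H "Content-Type: application/json" \
         -X POST http://localhost/Plugin.Activate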
You
116 | can also use [this golang version](https://github.com/Soulou/curl-unix-socket)
117 | instead:
118 | 
119 |     go get github.com/Soulou/curl-unix-socket
120 | 
121 | 
122 | Once you have that you can POST JSON to the plugin:
123 | 
124 |     % sudo curl-unix-socket -v -X POST unix:///run/docker/plugins/rbd.sock:/Plugin.Activate
125 |     > POST /Plugin.Activate HTTP/1.1
126 |     > Socket: /run/docker/plugins/rbd.sock
127 |     > Content-Length: 0
128 |     >
129 |     < HTTP/1.1 200 OK
130 |     < Content-Type: appplication/vnd.docker.plugins.v1+json
131 |     < Date: Tue, 28 Jul 2015 18:52:11 GMT
132 |     < Content-Length: 33
133 |     {"Implements": ["VolumeDriver"]}
134 | 
135 | 
136 |     # Plugin started without --create:
137 |     % sudo curl-unix-socket -v -X POST -d '{"Name": "testimage"}' unix:///run/docker/plugins/rbd.sock:/VolumeDriver.Create
138 |     > POST /VolumeDriver.Create HTTP/1.1
139 |     > Socket: /run/docker/plugins/rbd.sock
140 |     > Content-Length: 21
141 |     >
142 |     < HTTP/1.1 500 Internal Server Error
143 |     < Content-Length: 62
144 |     < Content-Type: appplication/vnd.docker.plugins.v1+json
145 |     < Date: Tue, 28 Jul 2015 18:53:20 GMT
146 |     {"Mountpoint":"","Err":"Ceph RBD Image not found: testimage"}
147 | 
148 |     # Plugin started with --create turned on will create an unknown image:
149 |     % sudo curl-unix-socket -v -X POST -d '{"Name": "testimage"}' unix:///run/docker/plugins/rbd.sock:/VolumeDriver.Create
150 |     > POST /VolumeDriver.Create HTTP/1.1
151 |     > Socket: /run/docker/plugins/rbd.sock
152 |     > Content-Length: 21
153 |     >
154 |     < HTTP/1.1 200 OK
155 |     < Content-Length: 27
156 |     < Content-Type: appplication/vnd.docker.plugins.v1+json
157 |     < Date: Fri, 14 Aug 2015 19:47:35 GMT
158 |     {"Mountpoint":"","Err":""}
159 | 
160 | ## Examples
161 | 
162 | If you need persistent storage for your application container, you can use a
163 | Ceph Rados Block Device (RBD) as a persistent disk.
164 | 
165 | You can provision the Block Device and Filesystem first, or allow a
166 | sufficiently configured Plugin instance to create it for you. This plugin can
167 | create RBD images with an XFS filesystem.
168 | 
169 | 1. (Optional) Provision RBD Storage yourself
170 |    * `sudo rbd create --size 1024 foo`
171 |    * `sudo rbd map foo` => /dev/rbd1
172 |    * `sudo mkfs.xfs /dev/rbd1`
173 |    * `sudo rbd unmap /dev/rbd1`
174 | 2. Or run the RBD Docker Plugin with the `--create` option flag and just request a volume
175 |    * `sudo rbd-docker-plugin --create`
176 | 3.
Requesting and Using Volumes
177 |    * `docker run --volume-driver=rbd --volume foo:/mnt/foo -it ubuntu /bin/bash`
178 |    * Volume "foo" will be locked, mapped and mounted to the Host and bind-mounted to the container at `/mnt/foo`
179 |    * When the container exits, the volume will be unmounted, unmapped and unlocked
180 |    * You can control the RBD Pool and initial Size using this syntax sugar:
181 |      * foo@1024 => pool=rbd (default), image=foo, size 1GB
182 |      * deep/foo => pool=deep, image=foo and default `--size` (20GB)
183 |      * deep/foo@1024 => pool=deep, image=foo, size 1GB
184 |        - pool must already exist
185 | 
186 | ### Misc
187 | 
188 | * Create RBD Snapshots: `sudo rbd snap create --image foo --snap foosnap`
189 | * Resize RBD Volume:
190 |   * set max size: `sudo rbd resize --size 2048 --image foo`
191 |   * map/mount and then fix XFS: `sudo xfs_growfs -d /mnt/foo`
192 | 
193 | 
194 | ## Links
195 | 
196 | 
197 | - [Legacy Plugins](https://docs.docker.com/engine/extend/legacy_plugins/)
198 | - [Volume plugins](https://docs.docker.com/engine/extend/plugins_volume/)
199 | 
200 | # Packaging
201 | 
202 | Using [tpkg](http://tpkg.github.io) to distribute and specify native package
203 | dependencies. Tested with Centos 7.1 and yum/rpm packages.
204 | 
205 | 
206 | # License
207 | 
208 | This project is using the MIT License (MIT), see LICENSE file.
209 | 
210 | Copyright (c) 2015 YP LLC
211 | 
--------------------------------------------------------------------------------
/check-ceph-rbd-docker-plugin.sh:
--------------------------------------------------------------------------------
 1 | #!/bin/bash
 2 | 
 3 | # cron script to determine whether the mesos agent node has the correct Ceph and RBD
 4 | # configuration needed for rbd-docker-plugin
 5 | 
 6 | CEPH_CONF=${CEPH_CONF:-/etc/ceph/ceph.conf}
 7 | CEPH_USER=${CEPH_USER:-admin}
 8 | 
 9 | CEPH_KEY=${CEPH_KEY:-"/etc/ceph/ceph.client.${CEPH_USER}.keyring"}
10 | 
11 | if [ ! -f "$CEPH_CONF" ]; then
12 |     echo "$0: $HOSTNAME: ERROR missing CEPH_CONF: $CEPH_CONF"
13 |     exit 1
14 | fi
15 | 
16 | if [ ! -f "$CEPH_KEY" ]; then
17 |     echo "$0: $HOSTNAME: ERROR missing expected keyring for CEPH_USER=$CEPH_USER: $CEPH_KEY"
18 |     exit 3
19 | fi
20 | 
21 | # try running rbd command
22 | RBD_TEST=$(rbd --conf ${CEPH_CONF} --id ${CEPH_USER} ls 2>&1)
23 | if [ $? != 0 ]; then
24 |     echo "$0: $HOSTNAME: ERROR running rbd command: $RBD_TEST"
25 |     exit 5
26 | fi
27 | 
--------------------------------------------------------------------------------
/contrib/el7/Makefile:
--------------------------------------------------------------------------------
 1 | 
 2 | 
 3 | all: build
 4 | 
 5 | build:
 6 | 	rpmbuild -ba ~/rpmbuild/SPECS/rbd-docker-plugin.spec
 7 | 
 8 | env:
 9 | 	rpmdev-setuptree
10 | 	cp -Rvf S* ~/rpmbuild
11 | 	wget -O ~/rpmbuild/SOURCES/0.1.9.tar.gz https://github.com/yp-engineering/rbd-docker-plugin/archive/0.1.9.tar.gz
12 | 	wget -O ~/rpmbuild/SOURCES/rbd-docker-plugin_logrotate https://github.com/yp-engineering/rbd-docker-plugin/blob/master/logrotate.d/rbd-docker-plugin_logrotate
13 | 
14 | depends:
15 | 	yum install -y http://ceph.com/rpm-infernalis/el7/noarch/ceph-release-1-1.el7.noarch.rpm epel-release
16 | 	yum install -y librados2-devel librbd1-devel golang git epel-release rpmdevtools make wget
17 | 
--------------------------------------------------------------------------------
/contrib/el7/README.md:
--------------------------------------------------------------------------------
 1 | # Centos7/RHEL7 contribs
 2 | ## RPM
 3 | ### Runtime dependencies
 4 | - docker from the Docker [RHEL repo](https://docs.docker.com/installation/rhel/)
 5 | - ceph >= 0.94.0
 6 | ### Build the rpm
 7 | ```
 8 | $ git clone https://github.com/yp-engineering/rbd-docker-plugin
 9 | $ cd rbd-docker-plugin/contrib/el7
10 | # optional - install required rpms to build
11 | $ sudo make depends
12 | # mandatory
13 | $ make env build
14 | ```
15 | You can find your freshly built rpm in *~/rpmbuild/BUILDROOT/rbd-docker-plugin-0.1.9-1.el7.centos.x86_64*
16 | 
17 | 
--------------------------------------------------------------------------------
/contrib/el7/SOURCES/rbd-docker-plugin-wrapper:
--------------------------------------------------------------------------------
 1 | #!/bin/bash
 2 | set -o errexit -o nounset -o pipefail
 3 | 
 4 | RBD_BIN=%LIBEXEC%/rbd-docker-plugin
 5 | CONF=${1:-/etc/docker/rbd-docker-plugin.conf}
 6 | 
 7 | ARGS=`cat ${CONF} | \
 8 |     egrep -v '(^[[:space:]]*#)' |
 9 |     awk -F \# '{print $1}' | \
10 |     awk -F = '{
11 |         gsub(/[ \t]/, "", $1);
12 |         gsub(/^[ \t]*/, "", $2);
13 |         gsub(/[ \t]*$/, "", $2);
14 |         printf("-%s=%s ", $1, $2)}'
15 | `
16 | exec ${RBD_BIN} ${ARGS} ${@}
17 | 
--------------------------------------------------------------------------------
/contrib/el7/SOURCES/rbd-docker-plugin.conf:
--------------------------------------------------------------------------------
 1 | ####
 2 | # rbd-docker-plugin config
 3 | #####
 4 | #cluster=
 5 | #config=
 6 | create=true
 7 | #fs=xfs
 8 | ## Don't forget to update logrotate file logrotate.d/rbd-docker-plugin
 9 | #logdir=/var/log
10 | #mount=/var/lib/docker/volumes
11 | #name=rbd
12 | plugins=/run/docker/plugins
13 | #pool=rbd
14 | #remove=false
15 | #size=20480
16 | #user=admin
17 | 
--------------------------------------------------------------------------------
/contrib/el7/SOURCES/rbd-docker-plugin.service:
--------------------------------------------------------------------------------
 1 | # systemd config for rbd-docker-plugin
 2 | [Unit]
 3 | Description=Ceph RBD Docker VolumeDriver Plugin
 4 | Wants=docker.service
 5 | Before=docker.service
 6 | 
 7 | [Service]
 8 | ExecStart=/usr/bin/rbd-docker-plugin-wrapper
 9 | Restart=on-failure
10 | # NOTE: this kill is not synchronous as recommended by systemd *shrug*
11 | ExecReload=/bin/kill -HUP $MAINPID
12 | 
13 | [Install]
14 | WantedBy=multi-user.target
15 | 
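A note on the rbd-docker-plugin-wrapper script above: the awk pipeline turns
every uncommented `key=value` line of `rbd-docker-plugin.conf` into a
`-key=value` command line flag. With the sample config shipped here (only
`create` and `plugins` uncommented), the resulting invocation would be roughly:

    /usr/libexec/rbd-docker-plugin -create=true -plugins=/run/docker/plugins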
-------------------------------------------------------------------------------- /contrib/el7/SPECS/rbd-docker-plugin.spec: -------------------------------------------------------------------------------- 1 | # SRC https://github.com/yp-engineering/rbd-docker-plugin/archive/0.1.9.tar.gz 2 | 3 | %define version 0.1.9 4 | %{!?release: %{!?release: %define release 1}} 5 | 6 | Summary: Ceph RBD docker volume driver plugin. 7 | Name: rbd-docker-plugin 8 | Version: %{version} 9 | Release: %{release}%{?dist} 10 | License: MIT 11 | Group: misc 12 | URL: https://github.com/yp-engineering/rbd-docker-plugin/ 13 | Source0: https://github.com/yp-engineering/rbd-docker-plugin/archive/%{version}.tar.gz 14 | Source1: rbd-docker-plugin.service 15 | Source2: rbd-docker-plugin.conf 16 | Source3: rbd-docker-plugin-wrapper 17 | Source4: rbd-docker-plugin_logrotate 18 | ExclusiveArch: x86_64 19 | BuildRoot: %{_tmppath}/%{name}-%{version} 20 | BuildRequires: golang >= 1.4.2 21 | BuildRequires: make 22 | BuildRequires: librados2-devel >= 0.94.0 23 | BuildRequires: librbd1-devel >= 0.94.0 24 | BuildRequires: pkgconfig(systemd) 25 | 26 | Requires(post): systemd 27 | Requires(preun): systemd 28 | Requires(postun): systemd 29 | Requires: ceph >= 0.94.0 30 | Requires: librados2 >= 0.94.0 31 | Requires: librbd1 >= 0.94.0 32 | Requires: docker-engine >= 1.8.0 33 | 34 | %description 35 | Ceph RBD docker volume driver plugin. 36 | 37 | %prep 38 | %setup -q -n %{name}-%{version} 39 | 40 | %build 41 | export GOPATH=%{_builddir}/go 42 | mkdir -p ${GOPATH} 43 | export GOBIN=${GOPATH}/bin 44 | mkdir -p ${GOBIN} 45 | go get 46 | make 47 | 48 | %install 49 | install -d %{buildroot}%{_bindir} 50 | install -d %{buildroot}%{_libexecdir} 51 | install -d %{buildroot}%{_unitdir} 52 | install -d %{buildroot}%{_sysconfdir}/docker/ 53 | install -p -m 755 dist/%{name} %{buildroot}%{_libexecdir}/%{name} 54 | install -p -m 644 %{S:1} %{buildroot}%{_unitdir}/ 55 | sed -e "s,%%LIBEXEC%%,%{_libexecdir}," %{S:3} > %{buildroot}%{_bindir}/rbd-docker-plugin-wrapper 56 | chmod 755 %{buildroot}%{_bindir}/rbd-docker-plugin-wrapper 57 | sed -e "s,%%LIBEXEC%%,%{_libexecdir}," %{S:2} > %{buildroot}%{_sysconfdir}/docker/rbd-docker-plugin.conf 58 | chmod 644 %{buildroot}%{_sysconfdir}/docker/rbd-docker-plugin.conf 59 | 60 | mkdir -p $RPM_BUILD_ROOT%{_sysconfdir}/logrotate.d 61 | install -m 644 %{S:4} $RPM_BUILD_ROOT%{_sysconfdir}/logrotate.d/rbd-docker-plugin 62 | 63 | %files 64 | %defattr(-,root,root) 65 | %{_unitdir}/rbd-docker-plugin.service 66 | %{_libexecdir}/%{name} 67 | %{_bindir}/rbd-docker-plugin-wrapper 68 | %{_sysconfdir}/logrotate.d/rbd-docker-plugin 69 | %config(noreplace) %{_sysconfdir}/docker/%{name}.conf 70 | 71 | %post 72 | %systemd_post rbd-docker-plugin.service 73 | 74 | %preun 75 | %systemd_preun rbd-docker-plugin.service 76 | 77 | %postun 78 | # When the last version of a package is erased, $1 is 0 79 | # Otherwise it's an upgrade and we need to restart the service 80 | if [ $1 -ge 1 ]; then 81 | /usr/bin/systemctl restart rbd-docker-plugin.service 82 | fi 83 | /usr/bin/systemctl daemon-reload >/dev/null 2>&1 || true 84 | 85 | %changelog 86 | * Thu Sep 10 2015 ct16k 87 | - add logrotate 88 | - fix runtime deps 89 | * Wed Sep 09 2015 sheepkiller 90 | - move plugin to /usr/libexec 91 | - add wrapper + config 92 | * Mon Sep 07 2015 sheepkiller 93 | - Initial version 94 | -------------------------------------------------------------------------------- /driver.go: 
-------------------------------------------------------------------------------- 1 | // Copyright 2015 YP LLC. 2 | // Use of this source code is governed by an MIT-style 3 | // license that can be found in the LICENSE file. 4 | package main 5 | 6 | /** 7 | * Ceph RBD Docker VolumeDriver Plugin 8 | * 9 | * rbd-docker-plugin service creates a UNIX socket that can accept Volume 10 | * Driver requests (JSON HTTP POSTs) from Docker Engine. 11 | * 12 | * Historical note: Due to some issues using the go-ceph library for 13 | * locking/unlocking, we reimplemented all functionality to use shell CLI 14 | * commands via the 'rbd' executable. 15 | * 16 | * System Requirements: 17 | * - requires rbd CLI binary in PATH 18 | * 19 | * Plugin name: rbd -- configurable via --name 20 | * 21 | * % docker run --volume-driver=rbd -v imagename:/mnt/dir IMAGE [CMD] 22 | * 23 | * golang github code examples: 24 | * - https://github.com/docker/docker/blob/master/experimental/plugins_volume.md 25 | * - https://github.com/docker/go-plugins-helpers/tree/master/volume 26 | * - https://github.com/calavera/docker-volume-glusterfs 27 | * - https://github.com/AcalephStorage/docker-volume-ceph-rbd 28 | */ 29 | 30 | import ( 31 | "errors" 32 | "fmt" 33 | "log" 34 | "os" 35 | "os/exec" 36 | "path/filepath" 37 | "regexp" 38 | "strconv" 39 | "strings" 40 | "sync" 41 | "time" 42 | 43 | "github.com/docker/go-plugins-helpers/volume" 44 | ) 45 | 46 | var ( 47 | imageNameRegexp = regexp.MustCompile(`^(([-_.[:alnum:]]+)/)?([-_.[:alnum:]]+)(@([0-9]+))?$`) // optional pool or size in image name 48 | rbdUnmapBusyRegexp = regexp.MustCompile(`^exit status 16$`) 49 | ) 50 | 51 | // Volume is our local struct to store info about Ceph RBD Image 52 | type Volume struct { 53 | Name string // RBD Image name 54 | Device string // local host kernel device (e.g. /dev/rbd1) 55 | Locker string // track the lock name 56 | FStype string 57 | Pool string 58 | ID string 59 | } 60 | 61 | // our driver type for impl func 62 | type cephRBDVolumeDriver struct { 63 | // - using default ceph cluster name ("ceph") 64 | // - using default ceph config (/etc/ceph/.conf) 65 | // 66 | // TODO: when starting, what if there are mounts already for RBD devices? 67 | // do we ingest them as our own or ... currently fails if locked 68 | // 69 | // TODO: use a chan as semaphore instead of mutex in driver? 
70 | 71 | name string // unique name for plugin 72 | cluster string // ceph cluster to use (default: ceph) 73 | user string // ceph user to use (default: admin) 74 | pool string // ceph pool to use (default: rbd) 75 | root string // scratch dir for mounts for this plugin 76 | config string // ceph config file to read 77 | volumes map[string]*Volume // track locally mounted volumes, key on mountpoint 78 | m *sync.Mutex // mutex to guard operations that change volume maps or use conn 79 | } 80 | 81 | // newCephRBDVolumeDriver builds the driver struct, reads config file and connects to cluster 82 | func newCephRBDVolumeDriver(pluginName, cluster, userName, defaultPoolName, rootBase, config string) cephRBDVolumeDriver { 83 | // the root mount dir will be based on docker default root and plugin name - pool added later per volume 84 | mountDir := filepath.Join(rootBase, pluginName) 85 | log.Printf("INFO: newCephRBDVolumeDriver: setting base mount dir=%s", mountDir) 86 | 87 | // fill everything except the connection and context 88 | driver := cephRBDVolumeDriver{ 89 | name: pluginName, 90 | cluster: cluster, 91 | user: userName, 92 | pool: defaultPoolName, 93 | root: mountDir, 94 | config: config, 95 | volumes: map[string]*Volume{}, 96 | m: &sync.Mutex{}, 97 | } 98 | 99 | return driver 100 | } 101 | 102 | // ************************************************************ 103 | // 104 | // Implement the Docker Volume Driver interface 105 | // 106 | // Using https://github.com/docker/go-plugins-helpers/ 107 | // 108 | // ************************************************************ 109 | 110 | // Capabilities 111 | // Scope: global - images managed using rbd plugin can be considered "global" 112 | func (d cephRBDVolumeDriver) Capabilities() *volume.CapabilitiesResponse { 113 | return &volume.CapabilitiesResponse{ 114 | Capabilities: volume.Capability{ 115 | Scope: "global", 116 | }, 117 | } 118 | } 119 | 120 | // Create will ensure the RBD image requested is available. Plugin requires 121 | // --create option flag to be able to provision new RBD images. 122 | // 123 | // Docker Volume Create Options: 124 | // size - in MB 125 | // pool 126 | // fstype 127 | // 128 | // 129 | // POST /VolumeDriver.Create 130 | // 131 | // Request: 132 | // { 133 | // "Name": "volume_name", 134 | // "Opts": {} 135 | // } 136 | // Instruct the plugin that the user wants to create a volume, given a user 137 | // specified volume name. The plugin does not need to actually manifest the 138 | // volume on the filesystem yet (until Mount is called). 139 | // 140 | // Response: 141 | // { "Err": null } 142 | // Respond with a string error if an error occurred. 
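// For example, the options above can be exercised from the CLI (the volume
// and pool names here are illustrative):
//
//	docker volume create --driver rbd --opt size=1024 --opt pool=liverpool --opt fstype=xfs myvol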
143 | // 144 | func (d cephRBDVolumeDriver) Create(r *volume.CreateRequest) error { 145 | log.Printf("INFO: API Create(%q)", r) 146 | d.m.Lock() 147 | defer d.m.Unlock() 148 | 149 | return d.createImage(r) 150 | } 151 | 152 | func (d cephRBDVolumeDriver) createImage(r *volume.CreateRequest) error { 153 | log.Printf("INFO: createImage(%q)", r) 154 | 155 | fstype := *defaultImageFSType 156 | 157 | // parse image name optional/default pieces 158 | pool, name, size, err := d.parseImagePoolNameSize(r.Name) 159 | if err != nil { 160 | log.Printf("ERROR: parsing volume: %s", err) 161 | return err 162 | } 163 | 164 | // Options to override from `docker volume create -o OPT=VAL ...` 165 | if r.Options["pool"] != "" { 166 | pool = r.Options["pool"] 167 | } 168 | if r.Options["size"] != "" { 169 | size, err = strconv.Atoi(r.Options["size"]) 170 | if err != nil { 171 | log.Printf("WARN: using default size. unable to parse int from %s: %s", r.Options["size"], err) 172 | size = *defaultImageSizeMB 173 | } 174 | } 175 | if r.Options["fstype"] != "" { 176 | fstype = r.Options["fstype"] 177 | } 178 | 179 | // check for mount 180 | mount := d.mountpoint(pool, name) 181 | 182 | // do we already know about this volume? return early 183 | if _, found := d.volumes[mount]; found { 184 | log.Println("INFO: Volume is already in known mounts: " + mount) 185 | return nil 186 | } 187 | 188 | exists, err := d.rbdImageExists(pool, name) 189 | if err != nil { 190 | log.Printf("ERROR: checking for RBD Image: %s", err) 191 | return err 192 | } 193 | if !exists { 194 | if !*canCreateVolumes { 195 | errString := fmt.Sprintf("Ceph RBD Image not found: %s", name) 196 | log.Println("ERROR: " + errString) 197 | return errors.New(errString) 198 | } 199 | // try to create it ... use size and default fs-type 200 | err = d.createRBDImage(pool, name, size, fstype) 201 | if err != nil { 202 | errString := fmt.Sprintf("Unable to create Ceph RBD Image(%s): %s", name, err) 203 | log.Println("ERROR: " + errString) 204 | return errors.New(errString) 205 | } 206 | } 207 | 208 | return nil 209 | } 210 | 211 | // POST /VolumeDriver.Remove 212 | // 213 | // Request: 214 | // { "Name": "volume_name" } 215 | // Remove a volume, given a user specified volume name. 216 | // 217 | // Response: 218 | // { "Err": null } 219 | // Respond with a string error if an error occurred. 220 | // 221 | func (d cephRBDVolumeDriver) Remove(r *volume.RemoveRequest) error { 222 | log.Printf("INFO: API Remove(%s)", r) 223 | d.m.Lock() 224 | defer d.m.Unlock() 225 | 226 | // parse full image name for optional/default pieces 227 | pool, name, _, err := d.parseImagePoolNameSize(r.Name) 228 | if err != nil { 229 | log.Printf("ERROR: parsing volume: %s", err) 230 | return err 231 | } 232 | 233 | mount := d.mountpoint(pool, name) 234 | 235 | // do we know about this volume? does it matter? 
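	// (Not being tracked locally is not fatal: d.volumes only reflects what
	// this plugin instance has mounted - it will be empty after a plugin
	// restart - so we log a warning below and continue with the remove.)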
236 | if _, found := d.volumes[mount]; !found { 237 | log.Printf("WARN: Volume is not in known mounts: %s", mount) 238 | } 239 | 240 | exists, err := d.rbdImageExists(pool, name) 241 | if err != nil { 242 | log.Printf("ERROR: checking for RBD Image: %s", err) 243 | return err 244 | } 245 | if !exists { 246 | errString := fmt.Sprintf("Ceph RBD Image not found: %s", name) 247 | log.Println("ERROR: " + errString) 248 | return errors.New(errString) 249 | } 250 | 251 | // attempt to gain lock before remove - lock seems to disappear after rm (but not after rename) 252 | locker, err := d.lockImage(pool, name) 253 | if err != nil { 254 | errString := fmt.Sprintf("Unable to lock image for remove: %s", name) 255 | log.Println("ERROR: " + errString) 256 | return errors.New(errString) 257 | } 258 | 259 | // remove action can be: ignore, delete or rename 260 | if removeActionFlag == "delete" { 261 | // delete it (for real - destroy it ... ) 262 | err = d.removeRBDImage(pool, name) 263 | if err != nil { 264 | errString := fmt.Sprintf("Unable to remove Ceph RBD Image(%s): %s", name, err) 265 | log.Println("ERROR: " + errString) 266 | defer d.unlockImage(pool, name, locker) 267 | return errors.New(errString) 268 | } 269 | defer d.unlockImage(pool, name, locker) 270 | } else if removeActionFlag == "rename" { 271 | // just rename it (in case needed later, or can be culled via script) 272 | // TODO: maybe add a timestamp? 273 | err = d.renameRBDImage(pool, name, "zz_"+name) 274 | if err != nil { 275 | errString := fmt.Sprintf("Unable to rename with zz_ prefix: RBD Image(%s): %s", name, err) 276 | log.Println("ERROR: " + errString) 277 | // unlock by old name 278 | defer d.unlockImage(pool, name, locker) 279 | return errors.New(errString) 280 | } 281 | // unlock by new name 282 | defer d.unlockImage(pool, "zz_"+name, locker) 283 | } else { 284 | // ignore the remove call - but unlock ? 285 | defer d.unlockImage(pool, name, locker) 286 | } 287 | 288 | delete(d.volumes, mount) 289 | return nil 290 | } 291 | 292 | // Mount will Ceph Map the RBD image to the local kernel and create a mount 293 | // point and mount the image. 294 | // 295 | // POST /VolumeDriver.Mount 296 | // 297 | // Request: 298 | // { "Name": "volume_name" } 299 | // Docker requires the plugin to provide a volume, given a user specified 300 | // volume name. This is called once per container start. 301 | // 302 | // Response: 303 | // { "Mountpoint": "/path/to/directory/on/host", "Err": null } 304 | // Respond with the path on the host filesystem where the volume has been 305 | // made available, and/or a string error if an error occurred. 
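// Example exchange (a sketch; the name and ID values are illustrative):
//
//	request:  { "Name": "deep/foo", "ID": "some-client-id" }
//	response: { "Mountpoint": "/var/lib/docker/volumes/rbd/deep/foo", "Err": "" }
//
// where the Mountpoint is the root mount dir + plugin name + pool + image
// name, as built by mountpoint() below.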
306 | // 307 | // TODO: utilize the new MountRequest.ID field to track volumes 308 | func (d cephRBDVolumeDriver) Mount(r *volume.MountRequest) (*volume.MountResponse, error) { 309 | log.Printf("INFO: API Mount(%s)", r) 310 | d.m.Lock() 311 | defer d.m.Unlock() 312 | 313 | // parse full image name for optional/default pieces 314 | pool, name, _, err := d.parseImagePoolNameSize(r.Name) 315 | if err != nil { 316 | log.Printf("ERROR: parsing volume: %s", err) 317 | return nil, err 318 | } 319 | 320 | mount := d.mountpoint(pool, name) 321 | 322 | // attempt to lock 323 | locker, err := d.lockImage(pool, name) 324 | if err != nil { 325 | log.Printf("ERROR: locking RBD Image(%s): %s", name, err) 326 | return nil, errors.New("Unable to get Exclusive Lock") 327 | } 328 | 329 | // map and mount the RBD image -- these are OS level commands, not avail in go-ceph 330 | 331 | // map 332 | device, err := d.mapImage(pool, name) 333 | if err != nil { 334 | log.Printf("ERROR: mapping RBD Image(%s) to kernel device: %s", name, err) 335 | // failsafe: need to release lock 336 | defer d.unlockImage(pool, name, locker) 337 | return nil, errors.New("Unable to map kernel device") 338 | } 339 | 340 | // determine device FS type 341 | fstype, err := d.deviceType(device) 342 | if err != nil { 343 | log.Printf("WARN: unable to detect RBD Image(%s) fstype: %s", name, err) 344 | // NOTE: don't fail - FOR NOW we will assume default plugin fstype 345 | fstype = *defaultImageFSType 346 | } 347 | 348 | // double check image filesystem if possible 349 | err = d.verifyDeviceFilesystem(device, mount, fstype) 350 | if err != nil { 351 | log.Printf("ERROR: filesystem may need repairs: %s", err) 352 | // failsafe: need to release lock and unmap kernel device 353 | defer d.unmapImageDevice(device) 354 | defer d.unlockImage(pool, name, locker) 355 | return nil, errors.New("Image filesystem has errors, requires manual repairs") 356 | } 357 | 358 | // check for mountdir - create if necessary 359 | err = os.MkdirAll(mount, os.ModeDir|os.FileMode(int(0775))) 360 | if err != nil { 361 | log.Printf("ERROR: creating mount directory: %s", err) 362 | // failsafe: need to release lock and unmap kernel device 363 | defer d.unmapImageDevice(device) 364 | defer d.unlockImage(pool, name, locker) 365 | return nil, errors.New("Unable to make mountdir") 366 | } 367 | 368 | // mount 369 | err = d.mountDevice(fstype, device, mount) 370 | if err != nil { 371 | log.Printf("ERROR: mounting device(%s) to directory(%s): %s", device, mount, err) 372 | // need to release lock and unmap kernel device 373 | defer d.unmapImageDevice(device) 374 | defer d.unlockImage(pool, name, locker) 375 | return nil, errors.New("Unable to mount device") 376 | } 377 | 378 | // if all that was successful - add to our list of volumes 379 | d.volumes[mount] = &Volume{ 380 | Name: name, 381 | Device: device, 382 | Locker: locker, 383 | FStype: fstype, 384 | Pool: pool, 385 | ID: r.ID, 386 | } 387 | 388 | return &volume.MountResponse{Mountpoint: mount}, nil 389 | } 390 | 391 | // Get the list of volumes registered with the plugin. 392 | // Default returns Ceph RBD images in default pool. 393 | // 394 | // POST /VolumeDriver.List 395 | // 396 | // Request: 397 | // {} 398 | // List the volumes mapped by this plugin. 
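// (Note: the listing comes from shelling out to `rbd ls` on the plugin's
// default pool via rbdList() below; only volumes this instance has mounted
// are reported with a Mountpoint.)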
399 | // 400 | // Response: 401 | // { "Volumes": [ { "Name": "volume_name", "Mountpoint": "/path/to/directory/on/host" } ], "Err": null } 402 | // Respond with an array containing pairs of known volume names and their 403 | // respective paths on the host filesystem (where the volumes have been 404 | // made available). 405 | // 406 | func (d cephRBDVolumeDriver) List() (*volume.ListResponse, error) { 407 | volNames, err := d.rbdList() 408 | if err != nil { 409 | return nil, err 410 | } 411 | vols := make([]*volume.Volume, 0, len(volNames)) 412 | for _, name := range volNames { 413 | apiVol := &volume.Volume{Name: name} 414 | 415 | // for each known mounted vol, add Mountpoint 416 | // FIXME: assumes default rbd pool - should we keep track of all pools? query each? just assume one pool? 417 | mount := d.mountpoint(d.pool, name) 418 | _, ok := d.volumes[mount] 419 | if ok { 420 | apiVol.Mountpoint = mount 421 | } 422 | 423 | vols = append(vols, apiVol) 424 | } 425 | 426 | log.Printf("INFO: List request => %s", vols) 427 | return &volume.ListResponse{Volumes: vols}, nil 428 | } 429 | 430 | // rbdList performs an `rbd ls` on the default pool 431 | func (d *cephRBDVolumeDriver) rbdList() ([]string, error) { 432 | result, err := d.rbdsh(d.pool, "ls") 433 | if err != nil { 434 | return nil, err 435 | } 436 | // split into lines - should be one rbd image name per line 437 | return strings.Split(result, "\n"), nil 438 | } 439 | 440 | // Get the volume info. 441 | // 442 | // POST /VolumeDriver.Get 443 | // 444 | // GetRequest: 445 | // { "Name": "volume_name" } 446 | // Docker needs reminding of the path to the volume on the host. 447 | // 448 | // GetResponse: 449 | // { "Volume": { "Name": "volume_name", "Mountpoint": "/path/to/directory/on/host" }} 450 | // 451 | func (d cephRBDVolumeDriver) Get(r *volume.GetRequest) (*volume.GetResponse, error) { 452 | // parse full image name for optional/default pieces 453 | pool, name, _, err := d.parseImagePoolNameSize(r.Name) 454 | if err != nil { 455 | log.Printf("ERROR: parsing volume: %s", err) 456 | return nil, err 457 | } 458 | 459 | // Check to see if the image exists 460 | exists, err := d.rbdImageExists(pool, name) 461 | if err != nil { 462 | log.Printf("WARN: checking for RBD Image: %s", err) 463 | return nil, err 464 | } 465 | mountPath := d.mountpoint(pool, name) 466 | if !exists { 467 | log.Printf("WARN: Image %s does not exist", r.Name) 468 | delete(d.volumes, mountPath) 469 | return nil, fmt.Errorf("Image %s does not exist", r.Name) 470 | } 471 | 472 | // for each mounted vol, keep Mountpoint 473 | _, ok := d.volumes[mountPath] 474 | if !ok { 475 | mountPath = "" 476 | } 477 | log.Printf("INFO: Get request(%s) => %s", name, mountPath) 478 | 479 | return &volume.GetResponse{Volume: &volume.Volume{Name: r.Name, Mountpoint: mountPath}}, nil 480 | } 481 | 482 | // Path returns the path to host directory mountpoint for volume. 483 | // 484 | // POST /VolumeDriver.Path 485 | // 486 | // Request: 487 | // { "Name": "volume_name" } 488 | // Docker needs reminding of the path to the volume on the host. 489 | // 490 | // Response: 491 | // { "Mountpoint": "/path/to/directory/on/host", "Err": null } 492 | // Respond with the path on the host filesystem where the volume has been 493 | // made available, and/or a string error if an error occurred. 494 | // 495 | // NOTE: this method does not require the Ceph connection 496 | // FIXME: does volume API require error if Volume requested does not exist/is not mounted? 
Similar to List/Get leaving mountpoint empty? 497 | // 498 | func (d cephRBDVolumeDriver) Path(r *volume.PathRequest) (*volume.PathResponse, error) { 499 | // parse full image name for optional/default pieces 500 | pool, name, _, err := d.parseImagePoolNameSize(r.Name) 501 | if err != nil { 502 | log.Printf("ERROR: parsing volume: %s", err) 503 | return nil, err 504 | } 505 | 506 | mountPath := d.mountpoint(pool, name) 507 | log.Printf("INFO: API Path request(%s) => %s", name, mountPath) 508 | return &volume.PathResponse{Mountpoint: mountPath}, nil 509 | } 510 | 511 | // POST /VolumeDriver.Unmount 512 | // 513 | // - assuming writes are finished and no other containers using same disk on this host? 514 | 515 | // Request: 516 | // { "Name": "volume_name", ID: "client-id" } 517 | // Indication that Docker no longer is using the named volume. This is 518 | // called once per container stop. Plugin may deduce that it is safe to 519 | // deprovision it at this point. 520 | // 521 | // Response: 522 | // Respond with error or nil 523 | // 524 | func (d cephRBDVolumeDriver) Unmount(r *volume.UnmountRequest) error { 525 | log.Printf("INFO: API Unmount(%s)", r) 526 | d.m.Lock() 527 | defer d.m.Unlock() 528 | 529 | var err_msgs = []string{} 530 | 531 | // parse full image name for optional/default pieces 532 | pool, name, _, err := d.parseImagePoolNameSize(r.Name) 533 | if err != nil { 534 | log.Printf("ERROR: parsing volume: %s", err) 535 | return err 536 | } 537 | 538 | mount := d.mountpoint(pool, name) 539 | 540 | // check if it's in our mounts - we may not know about it if plugin was started late? 541 | vol, found := d.volumes[mount] 542 | if !found { 543 | // FIXME: is this an error or just a log and a return nil? 544 | //return fmt.Errorf("WARN: Volume is not in known mounts: ignoring request to unmount: %s/%s", pool, name) 545 | log.Printf("WARN: Volume is not in known mounts: ignoring request to unmount: %s/%s", pool, name) 546 | return nil 547 | /** 548 | // set up a fake Volume with defaults ... 549 | // - device is /dev/rbd// in newer ceph versions 550 | // - assume we are the locker (will fail if locked from another host) 551 | vol = &Volume{ 552 | Pool: pool, 553 | Name: name, 554 | Device: fmt.Sprintf("/dev/rbd/%s/%s", pool, name), 555 | Locker: d.localLockerCookie(), 556 | ID, r.ID, 557 | } 558 | */ 559 | } 560 | 561 | // if found - double check ID 562 | if vol.ID != r.ID { 563 | log.Printf("WARN: Volume client ID(%s) does not match requestor id(%s) for %s/%s", 564 | vol.ID, r.ID, pool, name) 565 | return nil 566 | } 567 | 568 | // unmount 569 | // NOTE: this might succeed even if device is still in use inside container. device will dissappear from host side but still be usable inside container :( 570 | err = d.unmountDevice(vol.Device) 571 | if err != nil { 572 | log.Printf("ERROR: unmounting device(%s): %s", vol.Device, err) 573 | // failsafe: will still attempt to unmap and unlock 574 | err_msgs = append(err_msgs, "Error unmounting device") 575 | } 576 | 577 | // unmap 578 | err = d.unmapImageDevice(vol.Device) 579 | if err != nil { 580 | log.Printf("ERROR: unmapping image device(%s): %s", vol.Device, err) 581 | // NOTE: rbd unmap exits 16 if device is still being used - unlike umount. try to recover differently in that case 582 | if rbdUnmapBusyRegexp.MatchString(err.Error()) { 583 | // can't always re-mount and not sure if we should here ... 
will be cleaned up once original container goes away 584 | log.Printf("WARN: unmap failed due to busy device, early exit from this Unmount request.") 585 | return err 586 | } 587 | // other error, failsafe: proceed to attempt to unlock 588 | err_msgs = append(err_msgs, "Error unmapping kernel device") 589 | } 590 | 591 | // unlock 592 | err = d.unlockImage(vol.Pool, vol.Name, vol.Locker) 593 | if err != nil { 594 | log.Printf("ERROR: unlocking RBD image(%s): %s", vol.Name, err) 595 | err_msgs = append(err_msgs, "Error unlocking image") 596 | } 597 | 598 | // forget it 599 | delete(d.volumes, mount) 600 | 601 | // check for piled up errors 602 | if len(err_msgs) > 0 { 603 | return errors.New(strings.Join(err_msgs, ", ")) 604 | } 605 | 606 | return nil 607 | } 608 | 609 | // 610 | // END Docker VolumeDriver Plugin API methods 611 | // 612 | // *************************************************************************** 613 | // *************************************************************************** 614 | // 615 | 616 | // mountpoint returns the expected path on host 617 | func (d *cephRBDVolumeDriver) mountpoint(pool, name string) string { 618 | return filepath.Join(d.root, pool, name) 619 | } 620 | 621 | // parseImagePoolNameSize parses out any optional parameters from Image Name 622 | // passed from docker run. Fills in unspecified options with default pool or 623 | // size. 624 | // 625 | // Returns: pool, image-name, size, error 626 | // 627 | func (d *cephRBDVolumeDriver) parseImagePoolNameSize(fullname string) (pool string, imagename string, size int, err error) { 628 | // Examples of regexp matches: 629 | // foo: ["foo" "" "" "foo" "" ""] 630 | // foo@1024: ["foo@1024" "" "" "foo" "@1024" "1024"] 631 | // pool/foo: ["pool/foo" "pool/" "pool" "foo" "" ""] 632 | // pool/foo@1024: ["pool/foo@1024" "pool/" "pool" "foo" "@1024" "1024"] 633 | // 634 | // Match indices: 635 | // 0: matched string 636 | // 1: pool with slash 637 | // 2: pool no slash 638 | // 3: image name 639 | // 4: size with @ 640 | // 5: size only 641 | // 642 | matches := imageNameRegexp.FindStringSubmatch(fullname) 643 | if isDebugEnabled() { 644 | log.Printf("DEBUG: parseImagePoolNameSize: \"%s\": %q", fullname, matches) 645 | } 646 | if len(matches) != 6 { 647 | return "", "", 0, errors.New("Unable to parse image name: " + fullname) 648 | } 649 | 650 | // 2: pool 651 | pool = d.pool // defaul pool for plugin 652 | if matches[2] != "" { 653 | pool = matches[2] 654 | } 655 | 656 | // 3: image 657 | imagename = matches[3] 658 | 659 | // 5: size 660 | size = *defaultImageSizeMB 661 | if matches[5] != "" { 662 | var err error 663 | size, err = strconv.Atoi(matches[5]) 664 | if err != nil { 665 | log.Printf("WARN: using default. unable to parse int from %s: %s", matches[5], err) 666 | size = *defaultImageSizeMB 667 | } 668 | } 669 | 670 | return pool, imagename, size, nil 671 | } 672 | 673 | // rbdImageExists will check for an existing Ceph RBD Image 674 | func (d *cephRBDVolumeDriver) rbdImageExists(pool, findName string) (bool, error) { 675 | _, err := d.rbdsh(pool, "info", findName) 676 | if err != nil { 677 | // NOTE: even though method signature returns err - we take the error 678 | // in this instance as the indication that the image does not exist 679 | // TODO: can we double check exit value for exit status 2 ? 
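	// One possible refinement (a sketch, not wired in): rbdsh errors surface
	// as *exec.ExitError values whose string form is "exit status N" (that is
	// what rbdUnmapBusyRegexp matches for unmap's status 16), so a failing
	// "rbd info" with "exit status 2" could be distinguished from other rbd
	// failures here instead of treating every error as "image not found".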
673 | // rbdImageExists will check for an existing Ceph RBD Image
674 | func (d *cephRBDVolumeDriver) rbdImageExists(pool, findName string) (bool, error) {
675 | 	_, err := d.rbdsh(pool, "info", findName)
676 | 	if err != nil {
677 | 		// NOTE: even though the method signature returns err - we take an error
678 | 		// here as the indication that the image does not exist
679 | 		// TODO: can we double check the exit value for exit status 2?
680 | 		return false, nil
681 | 	}
682 | 	return true, nil
683 | }
684 | 
685 | // createRBDImage will create a new Ceph block device and make a filesystem on it
686 | func (d *cephRBDVolumeDriver) createRBDImage(pool string, name string, size int, fstype string) error {
687 | 	log.Printf("INFO: Attempting to create new RBD Image: (%s/%s, %d, %s)", pool, name, size, fstype)
688 | 
689 | 	// check that fs is a valid type (needs mkfs.fstype in PATH)
690 | 	mkfs, err := exec.LookPath("mkfs." + fstype)
691 | 	if err != nil {
692 | 		msg := fmt.Sprintf("Unable to find mkfs for %s in PATH: %s", fstype, err)
693 | 		return errors.New(msg)
694 | 	}
695 | 
696 | 	// create the block device image with format=2 (v2) - features seem heavily dependent on version and configuration of RBD pools
697 | 	// should we enable all v2 image features?: +1: layering support +2: striping v2 support +4: exclusive locking support +8: object map support
698 | 	// NOTE: I tried, but got "2015-08-02 20:24:36.726758 7f87787907e0 -1 librbd: librbd does not support requested features."
699 | 	// NOTE: I also tried just image-features=4 (locking) - but map will fail:
700 | 	//   sudo rbd unmap mynewvol => rbd: 'mynewvol' is not a block device, rbd: unmap failed: (22) Invalid argument
701 | 	//   "--image-features", strconv.Itoa(4),
702 | 
703 | 	_, err = d.rbdsh(
704 | 		pool, "create",
705 | 		"--image-format", strconv.Itoa(2),
706 | 		"--size", strconv.Itoa(size),
707 | 		name,
708 | 	)
709 | 	if err != nil {
710 | 		return err
711 | 	}
712 | 
713 | 	// lock it temporarily for fs creation
714 | 	lockname, err := d.lockImage(pool, name)
715 | 	if err != nil {
716 | 		return err
717 | 	}
718 | 
719 | 	// map to kernel device
720 | 	device, err := d.mapImage(pool, name)
721 | 	if err != nil {
722 | 		defer d.unlockImage(pool, name, lockname)
723 | 		return err
724 | 	}
725 | 
726 | 	// make the filesystem - give it some time
727 | 	_, err = shWithTimeout(5*time.Minute, mkfs, device)
728 | 	if err != nil {
729 | 		defer d.unmapImageDevice(device)
730 | 		defer d.unlockImage(pool, name, lockname)
731 | 		return err
732 | 	}
733 | 
734 | 	// TODO: should we chown/chmod the directory? e.g. non-root container users
735 | 	// won't be able to write. where to get the preferred user id?
736 | 
737 | 	// unmap
738 | 	err = d.unmapImageDevice(device)
739 | 	if err != nil {
740 | 		// ? if we can't unmap -- are we screwed? should we unlock?
741 | 		return err
742 | 	}
743 | 
744 | 	// unlock
745 | 	err = d.unlockImage(pool, name, lockname)
746 | 	if err != nil {
747 | 		return err
748 | 	}
749 | 
750 | 	return nil
751 | }
752 | 
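Editor's note: the feature flags discussed in the comments inside createRBDImage combine as a bitmask that rbd accepts as a numeric sum. A standalone illustration (the constant names are hypothetical; only the bit values come from the comments above):

    package main

    import (
    	"fmt"
    	"strconv"
    )

    // RBD v2 image-feature bits as listed in the createRBDImage comments.
    const (
    	featureLayering      = 1 // +1: layering support
    	featureStripingV2    = 2 // +2: striping v2 support
    	featureExclusiveLock = 4 // +4: exclusive locking support
    	featureObjectMap     = 8 // +8: object map support
    )

    func main() {
    	// e.g. layering + exclusive locking would be passed as "--image-features", "5"
    	fmt.Println(strconv.Itoa(featureLayering | featureExclusiveLock)) // 5
    }
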
753 | // rbdImageIsLocked returns true if the named image is already locked
754 | func (d *cephRBDVolumeDriver) rbdImageIsLocked(pool, name string) (bool, error) {
755 | 	// check the output for a lock -- if blank or error, assume not locked (?)
756 | 	out, err := d.rbdsh(pool, "lock", "ls", name)
757 | 	if err != nil || out == "" {
758 | 		return false, err
759 | 	}
760 | 	// otherwise - no error and output is not blank - assume a lock exists ...
761 | 	return true, nil
762 | }
763 | 
764 | // lockImage locks an image and returns the locker cookie name
765 | func (d *cephRBDVolumeDriver) lockImage(pool, imagename string) (string, error) {
766 | 	cookie := d.localLockerCookie()
767 | 	_, err := d.rbdsh(pool, "lock", "add", imagename, cookie)
768 | 	if err != nil {
769 | 		return "", err
770 | 	}
771 | 	return cookie, nil
772 | }
773 | 
774 | // localLockerCookie returns the hostname, which is used as the lock cookie
775 | func (d *cephRBDVolumeDriver) localLockerCookie() string {
776 | 	host, err := os.Hostname()
777 | 	if err != nil {
778 | 		log.Printf("WARN: HOST_UNKNOWN: unable to get hostname: %s", err)
779 | 		host = "HOST_UNKNOWN"
780 | 	}
781 | 	return host
782 | }
783 | 
784 | // unlockImage releases the exclusive lock on an image
785 | func (d *cephRBDVolumeDriver) unlockImage(pool, imagename, locker string) error {
786 | 	if locker == "" {
787 | 		log.Printf("WARN: Attempting to unlock image(%s/%s) for empty locker using default hostname", pool, imagename)
788 | 		// try to unlock using the local hostname
789 | 		locker = d.localLockerCookie()
790 | 	}
791 | 	log.Printf("INFO: unlockImage(%s/%s, %s)", pool, imagename, locker)
792 | 
793 | 	// first - we need to discover the client id of the locker -- so we have to
794 | 	// `rbd lock list` and grep out fields
795 | 	out, err := d.rbdsh(pool, "lock", "list", imagename)
796 | 	if err != nil || out == "" {
797 | 		log.Printf("ERROR: image not locked or ceph rbd error: %s", err)
798 | 		return err
799 | 	}
800 | 
801 | 	// parse out the client id -- assume we are looking for a line with the locker cookie on it --
802 | 	var clientid string
803 | 	lines := grepLines(out, locker)
804 | 	if isDebugEnabled() {
805 | 		log.Printf("DEBUG: found lines matching %s:\n%s\n", locker, lines)
806 | 	}
807 | 	if len(lines) == 1 {
808 | 		// grab the first word of the first line as the client.id ?
809 | 		tokens := strings.SplitN(lines[0], " ", 2)
810 | 		if tokens[0] != "" {
811 | 			clientid = tokens[0]
812 | 		}
813 | 	}
814 | 
815 | 	if clientid == "" {
816 | 		return errors.New("unlockImage: Unable to determine client.id")
817 | 	}
818 | 
819 | 	_, err = d.rbdsh(pool, "lock", "rm", imagename, locker, clientid)
820 | 	if err != nil {
821 | 		return err
822 | 	}
823 | 	return nil
824 | }
825 | 
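Editor's note: a standalone sketch of the client-id discovery step in unlockImage, run against hypothetical `rbd lock list` output (exact columns vary across Ceph releases, which is why the driver greps for the cookie rather than parsing fixed columns):

    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	// Hypothetical `rbd lock list` output for an image locked by myhost-01.
    	out := "There is 1 exclusive lock on this image.\n" +
    		"Locker      ID        Address\n" +
    		"client.4485 myhost-01 192.168.1.10:0/1012345\n"

    	locker := "myhost-01" // hostname cookie, as from localLockerCookie()
    	for _, line := range strings.Split(out, "\n") {
    		if strings.Contains(line, locker) { // stand-in for grepLines(out, locker)
    			tokens := strings.SplitN(line, " ", 2)
    			fmt.Println("client id:", tokens[0]) // client id: client.4485
    		}
    	}
    }
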
826 | // removeRBDImage will remove a Ceph RBD image - no undo available
827 | func (d *cephRBDVolumeDriver) removeRBDImage(pool, name string) error {
828 | 	log.Printf("INFO: Remove RBD Image(%s/%s)", pool, name)
829 | 
830 | 	// remove the block device image
831 | 	_, err := d.rbdsh(pool, "rm", name)
832 | 
833 | 	if err != nil {
834 | 		return err
835 | 	}
836 | 	return nil
837 | }
838 | 
839 | // renameRBDImage will move a Ceph RBD image to a new name
840 | func (d *cephRBDVolumeDriver) renameRBDImage(pool, name, newname string) error {
841 | 	log.Printf("INFO: Rename RBD Image(%s/%s -> %s)", pool, name, newname)
842 | 
843 | 	out, err := d.rbdsh(pool, "rename", name, newname)
844 | 	if err != nil {
845 | 		log.Printf("ERROR: unable to rename: %s: %s", err, out)
846 | 		return err
847 | 	}
848 | 	return nil
849 | }
850 | 
851 | // mapImage will map the RBD image to a kernel device
852 | func (d *cephRBDVolumeDriver) mapImage(pool, imagename string) (string, error) {
853 | 	device, err := d.rbdsh(pool, "map", imagename)
854 | 	// NOTE: ubuntu rbd map seems to not return the device; if there is no error, assume the default /dev/rbd/<pool>/<name> device
855 | 	if device == "" && err == nil {
856 | 		device = fmt.Sprintf("/dev/rbd/%s/%s", pool, imagename)
857 | 	}
858 | 
859 | 	return device, err
860 | }
861 | 
862 | // unmapImageDevice will release the mapped kernel device
863 | func (d *cephRBDVolumeDriver) unmapImageDevice(device string) error {
864 | 	// NOTE: this does not even require a user nor a pool, just the device name
865 | 	_, err := d.rbdsh("", "unmap", device)
866 | 	return err
867 | }
868 | 
869 | // Callouts to other unix shell commands: blkid, mount, umount
870 | 
871 | // deviceType identifies the image FS type - requires the RBD image to be mapped to a kernel device
872 | func (d *cephRBDVolumeDriver) deviceType(device string) (string, error) {
873 | 	// blkid Output:
874 | 	//   xfs
875 | 	blkid, err := shWithDefaultTimeout("blkid", "-o", "value", "-s", "TYPE", device)
876 | 	if err != nil {
877 | 		return "", err
878 | 	}
879 | 	if blkid != "" {
880 | 		return blkid, nil
881 | 	} else {
882 | 		return "", errors.New("Unable to determine device fs type from blkid")
883 | 	}
884 | }
885 | 
886 | // verifyDeviceFilesystem will attempt to check XFS filesystems for errors
887 | func (d *cephRBDVolumeDriver) verifyDeviceFilesystem(device, mount, fstype string) error {
888 | 	// for now we only handle XFS
889 | 	// TODO: use fsck for ext4?
890 | 	if fstype != "xfs" {
891 | 		return nil
892 | 	}
893 | 
894 | 	// check XFS volume
895 | 	err := d.xfsRepairDryRun(device)
896 | 	if err != nil {
897 | 		switch err.(type) {
898 | 		case ShTimeoutError:
899 | 			// propagate timeout errors - can't recover? system error? don't try to mount at that point
900 | 			return err
901 | 		default:
902 | 			// assume any other error is an xfs error and attempt limited repair
903 | 			return d.attemptLimitedXFSRepair(fstype, device, mount)
904 | 		}
905 | 	}
906 | 
907 | 	return nil
908 | }
909 | 
910 | func (d *cephRBDVolumeDriver) xfsRepairDryRun(device string) error {
911 | 	// "xfs_repair -n (no modify mode) will return a status of 1 if filesystem
912 | 	// corruption was detected and 0 if no filesystem corruption was detected." xfs_repair(8)
913 | 	// TODO: can we check the cmd output and ensure the mount/unmount is suggested by a stale disk log?
914 | 
915 | 	_, err := shWithDefaultTimeout("xfs_repair", "-n", device)
916 | 	return err
917 | }
918 | 
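Editor's note: a sketch of the xfs_repair(8) exit-status convention the dry run relies on, written against plain os/exec (Go 1.12+) rather than this package's shWithDefaultTimeout helper; the device path in main is hypothetical:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // xfsCorrupt interprets xfs_repair -n per the man page quoted above:
    // exit status 0 = no corruption, exit status 1 = corruption detected.
    func xfsCorrupt(device string) (bool, error) {
    	err := exec.Command("xfs_repair", "-n", device).Run()
    	if err == nil {
    		return false, nil // clean
    	}
    	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
    		return true, nil // corruption detected
    	}
    	return false, err // could not run xfs_repair at all
    }

    func main() {
    	corrupt, err := xfsCorrupt("/dev/rbd/rbd/foo") // hypothetical device path
    	fmt.Println(corrupt, err)
    }
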
919 | // attemptLimitedXFSRepair will try a mount/unmount and return the result of another xfs_repair -n dry run
920 | func (d *cephRBDVolumeDriver) attemptLimitedXFSRepair(fstype, device, mount string) (err error) {
921 | 	log.Printf("WARN: attempting limited XFS repair (mount/unmount) of %s %s", device, mount)
922 | 
923 | 	// mount
924 | 	err = d.mountDevice(fstype, device, mount)
925 | 	if err != nil {
926 | 		return err
927 | 	}
928 | 
929 | 	// unmount
930 | 	err = d.unmountDevice(device)
931 | 	if err != nil {
932 | 		return err
933 | 	}
934 | 
935 | 	// try a dry-run again and return the result
936 | 	return d.xfsRepairDryRun(device)
937 | }
938 | 
939 | // mountDevice will call mount on the kernel device with a docker volume subdirectory
940 | func (d *cephRBDVolumeDriver) mountDevice(fstype, device, mountdir string) error {
941 | 	_, err := shWithDefaultTimeout("mount", "-t", fstype, device, mountdir)
942 | 	return err
943 | }
944 | 
945 | // unmountDevice will call umount on the kernel device to unmount it from the host's docker subdirectory
946 | func (d *cephRBDVolumeDriver) unmountDevice(device string) error {
947 | 	_, err := shWithDefaultTimeout("umount", device)
948 | 	return err
949 | }
950 | 
951 | // UTIL
952 | 
953 | // rbdsh will call rbd with the given command arguments, also adding config, user and pool flags;
954 | // e.g. rbdsh("liverpool", "ls") runs: rbd --pool liverpool --conf <config> --id <user> ls
955 | func (d *cephRBDVolumeDriver) rbdsh(pool, command string, args ...string) (string, error) {
956 | 	args = append([]string{"--conf", d.config, "--id", d.user, command}, args...)
957 | 	if pool != "" {
958 | 		args = append([]string{"--pool", pool}, args...)
959 | 	}
960 | 	return shWithDefaultTimeout("rbd", args...)
961 | }
-------------------------------------------------------------------------------- /driver_test.go: --------------------------------------------------------------------------------
1 | // Copyright 2015 YP LLC.
2 | // Use of this source code is governed by an MIT-style
3 | // license that can be found in the LICENSE file.
4 | package main
5 | 
6 | // unit tests that don't rely on ceph
7 | 
8 | import (
9 | 	"flag"
10 | 	"fmt"
11 | 	"os"
12 | 	"testing"
13 | 
14 | 	"github.com/docker/go-plugins-helpers/volume"
15 | 	"github.com/stretchr/testify/assert"
16 | )
17 | 
18 | // TODO: tests that need ceph
19 | //   make fake cluster?
20 | //   use dockerized container with ceph for tests?
21 | 
22 | const (
23 | 	TEST_SOCKET_PATH             = "/tmp/rbd-test.sock"
24 | 	EXPECTED_ACTIVATION_RESPONSE = "{\"Implements\": [\"VolumeDriver\"]}"
25 | )
26 | 
27 | var (
28 | 	testDriver cephRBDVolumeDriver
29 | )
30 | 
31 | func TestMain(m *testing.M) {
32 | 	flag.Parse()
33 | 	cephConf := os.Getenv("CEPH_CONF")
34 | 
35 | 	testDriver = newCephRBDVolumeDriver(
36 | 		"test",
37 | 		"",
38 | 		"admin",
39 | 		"rbd",
40 | 		volume.DefaultDockerRootDirectory,
41 | 		cephConf,
42 | 	)
43 | 
44 | 	handler := volume.NewHandler(testDriver)
45 | 	// Serve won't return, so spin it off in a goroutine
46 | 	go handler.ServeUnix(TEST_SOCKET_PATH, 0)
47 | 
48 | 	os.Exit(m.Run())
49 | }
50 | 
51 | func TestLocalLockerCookie(t *testing.T) {
52 | 	assert.NotEqual(t, "HOST_UNKNOWN", testDriver.localLockerCookie())
53 | }
54 | 
55 | func TestRbdImageExists_noName(t *testing.T) {
56 | 	f_bool, err := testDriver.rbdImageExists(testDriver.pool, "")
57 | 	assert.Equal(t, false, f_bool, fmt.Sprintf("%s", err))
58 | }
59 | 
60 | func TestRbdImageExists_withName(t *testing.T) {
61 | 	t.Skip("This fails for many reasons. Need to figure out how to do this in a container.")
62 | 	err := testDriver.createRBDImage("rbd", "foo", 1, "xfs")
63 | 	assert.Nil(t, err, formatError("createRBDImage", err))
64 | 	t_bool, err := testDriver.rbdImageExists(testDriver.pool, "foo")
65 | 	assert.Equal(t, true, t_bool, formatError("rbdImageExists", err))
66 | }
67 | 
68 | // cephRBDVolumeDriver.parseImagePoolNameSize(string) (string, string, int, error)
69 | func TestParseImagePoolNameSize_name(t *testing.T) {
70 | 	pool, name, size := parseImageAndHandleError(t, "foo")
71 | 
72 | 	assert.Equal(t, testDriver.pool, pool, "Pool should be same")
73 | 	assert.Equal(t, "foo", name, "Name should be same")
74 | 	assert.Equal(t, *defaultImageSizeMB, size, "Size should be same")
75 | }
76 | 
77 | func TestParseImagePoolNameSize_complexName(t *testing.T) {
78 | 	pool, name, size := parseImageAndHandleError(t, "es-data1_v2.3")
79 | 
80 | 	assert.Equal(t, testDriver.pool, pool, "Pool should be same")
81 | 	assert.Equal(t, "es-data1_v2.3", name, "Name should be same")
82 | 	assert.Equal(t, *defaultImageSizeMB, size, "Size should be same")
83 | }
84 | 
85 | func TestParseImagePoolNameSize_withPool(t *testing.T) {
86 | 	pool, name, size := parseImageAndHandleError(t, "liverpool/foo")
87 | 
88 | 	assert.Equal(t, "liverpool", pool, "Pool should be same")
89 | 	assert.Equal(t, "foo", name, "Name should be same")
90 | 	assert.Equal(t, *defaultImageSizeMB, size, "Size should be same")
91 | }
92 | 
93 | func TestParseImagePoolNameSize_withPoolAndSize(t *testing.T) {
94 | 	pool, name, size := parseImageAndHandleError(t, "liverpool/foo@1024")
95 | 
96 | 	assert.Equal(t, "liverpool", pool, "Pool should be same")
97 | 	assert.Equal(t, "foo", name, "Name should be same")
98 | 	assert.Equal(t, 1024, size, "Size should be same")
99 | }
100 | 
101 | func TestParseImagePoolNameSize_withSize(t *testing.T) {
102 | 	pool, name, size := parseImageAndHandleError(t, "foo@1024")
103 | 
104 | 	assert.Equal(t, testDriver.pool, pool, "Pool should be same")
105 | 	assert.Equal(t, "foo", name, "Name should be same")
106 | 	assert.Equal(t, 1024, size, "Size should be same")
107 | }
108 | 
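Editor's note: the socket-activation test below shells out to socat under sudo. A socat-free sketch (not part of the original suite) that speaks HTTP over the Unix socket directly -- the "unix" host in the URL is a placeholder that exists only to satisfy the Host header golang 1.6 started enforcing:

    package main

    import (
    	"context"
    	"fmt"
    	"net"
    	"net/http"
    )

    func main() {
    	// Route every request over the plugin's Unix socket.
    	client := &http.Client{
    		Transport: &http.Transport{
    			DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
    				return net.Dial("unix", "/tmp/rbd-test.sock")
    			},
    		},
    	}
    	resp, err := client.Post("http://unix/Plugin.Activate", "application/json", nil)
    	if err != nil {
    		fmt.Println("activate failed:", err)
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println(resp.Status) // expect 200 OK with the Implements VolumeDriver body
    }
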
109 | // need a way to test the socket access using basic format - since this broke
110 | // in golang 1.6 with strict Host header checking even when using Unix sockets.
111 | // Requires socat and sudo
112 | //
113 | // Error response when built with golang 1.6: 400 Bad Request: missing required Host header
114 | func TestSocketActivate(t *testing.T) {
115 | 	t.Skip("This test requires socat and sudo, so it fails when run as a normal user. TODO: Find a proper workaround.")
116 | 	out, err := sh("bash", "-c", "echo \"POST /Plugin.Activate HTTP/1.1\r\n\" | sudo socat unix-connect:/tmp/rbd-test.sock STDIO")
117 | 	assert.Nil(t, err, formatError("socat plugin activate", err))
118 | 	assert.Contains(t, out, EXPECTED_ACTIVATION_RESPONSE, "Expecting Implements VolumeDriver message")
119 | 
120 | }
121 | 
122 | // Helpers
123 | func formatError(name string, err error) string {
124 | 	return fmt.Sprintf("ERROR calling %s: %q", name, err)
125 | }
126 | 
127 | func parseImageAndHandleError(t *testing.T, name string) (string, string, int) {
128 | 	pool, name, size, err := testDriver.parseImagePoolNameSize(name)
129 | 	assert.Nil(t, err, formatError("parseImagePoolNameSize", err))
130 | 	return pool, name, size
131 | }
-------------------------------------------------------------------------------- /etc/cron.d/rbd-docker-plugin-checks: --------------------------------------------------------------------------------
1 | SHELL=/bin/bash
2 | TPKG_HOME=/home/ops
3 | 
4 | 
5 | # Uncomment to enable automated Ceph RBD Docker plugin checks
6 | 
7 | # check for /etc/ceph configs and keys
8 | #0 0 * * * root $TPKG_HOME/bin/check-ceph-rbd-docker-plugin.sh
9 | 
10 | # check for updated tpkg once a night (randomized start 0-30m)
11 | #0 0 * * * root sleep $(( RANDOM \% 1800 )); TPKG_HOME=$TPKG_HOME tpkg -n -u rbd-docker-plugin >& /dev/null
-------------------------------------------------------------------------------- /etc/init/rbd-docker-plugin.conf: --------------------------------------------------------------------------------
1 | # /etc/init/rbd-docker-plugin.conf
2 | author "YP LLC"
3 | description "Ceph RBD VolumeDriver Docker Plugin daemon"
4 | 
5 | start on filesystem or runlevel [2345]
6 | stop on shutdown
7 | 
8 | respawn
9 | respawn limit 10 5
10 | 
11 | exec /home/ops/bin/rbd-docker-plugin --create
-------------------------------------------------------------------------------- /etc/logrotate.d/rbd-docker-plugin_logrotate: --------------------------------------------------------------------------------
1 | /var/log/rbd-docker-plugin.log {
2 |     missingok
3 |     weekly
4 |     rotate 10
5 |     delaycompress
6 |     compress
7 |     notifempty
8 |     # assuming centos 7.1 with systemd
9 |     postrotate
10 |         systemctl restart rbd-docker-plugin.service > /dev/null 2>/dev/null || true
11 |     endscript
12 | }
13 | # NOTE: config is rbd-docker-plugin_logrotate to avoid clash with binary name and .gitignore
-------------------------------------------------------------------------------- /etc/systemd/rbd-docker-plugin.service: --------------------------------------------------------------------------------
1 | # systemd config for rbd-docker-plugin
2 | [Unit]
3 | Description=Ceph RBD Docker VolumeDriver Plugin
4 | Wants=docker.service
5 | Before=docker.service
6 | 
7 | [Service]
8 | ExecStart=/home/ops/bin/rbd-docker-plugin --create --config /etc/ceph/ceph.conf --plugins /run/docker/plugins
9 | Restart=on-failure
10 | # NOTE: this kill is not synchronous as recommended by systemd *shrug*
11 | # disabled for now - having odd plugin issues on reload, need to debug further
12 | #ExecReload=/bin/kill -HUP $MAINPID
13 | 
14 | [Install]
15 | WantedBy=multi-user.target
-------------------------------------------------------------------------------- /main.go: --------------------------------------------------------------------------------
1 | // Copyright 2015 YP LLC.
2 | // Use of this source code is governed by an MIT-style 3 | // license that can be found in the LICENSE file. 4 | package main 5 | 6 | // Ceph RBD VolumeDriver Docker Plugin, setup config and go 7 | 8 | import ( 9 | "errors" 10 | "flag" 11 | "fmt" 12 | "log" 13 | "os" 14 | "os/signal" 15 | "path/filepath" 16 | "syscall" 17 | 18 | "github.com/docker/go-plugins-helpers/volume" 19 | ) 20 | 21 | var ( 22 | VALID_REMOVE_ACTIONS = []string{"ignore", "delete", "rename"} 23 | 24 | // Plugin Option Flags 25 | versionFlag = flag.Bool("version", false, "Print version") 26 | debugFlag = flag.Bool("debug", false, "Debug output") 27 | pluginName = flag.String("name", "rbd", "Docker plugin name for use on --volume-driver option") 28 | cephUser = flag.String("user", "admin", "Ceph user") 29 | cephConfigFile = flag.String("config", "/etc/ceph/ceph.conf", "Ceph cluster config") // more likely to have config file pointing to cluster 30 | cephCluster = flag.String("cluster", "", "Ceph cluster") // less likely to run multiple clusters on same hardware 31 | defaultCephPool = flag.String("pool", "rbd", "Default Ceph Pool for RBD operations") 32 | pluginDir = flag.String("plugins", "/run/docker/plugins", "Docker plugin directory for socket") 33 | rootMountDir = flag.String("mount", volume.DefaultDockerRootDirectory, "Mount directory for volumes on host") 34 | logDir = flag.String("logdir", "/var/log", "Logfile directory") 35 | canCreateVolumes = flag.Bool("create", false, "Can auto Create RBD Images") 36 | defaultImageSizeMB = flag.Int("size", 20*1024, "RBD Image size to Create (in MB) (default: 20480=20GB)") 37 | defaultImageFSType = flag.String("fs", "xfs", "FS type for the created RBD Image (must have mkfs.type)") 38 | ) 39 | 40 | // setup a validating flag for remove action 41 | type removeAction string 42 | 43 | func (a *removeAction) String() string { 44 | return string(*a) 45 | } 46 | 47 | func (a *removeAction) Set(value string) error { 48 | if !contains(VALID_REMOVE_ACTIONS, value) { 49 | return errors.New(fmt.Sprintf("Invalid value: %s, valid values are: %q", value, VALID_REMOVE_ACTIONS)) 50 | } 51 | *a = removeAction(value) 52 | return nil 53 | } 54 | 55 | func contains(vals []string, check string) bool { 56 | for _, v := range vals { 57 | if check == v { 58 | return true 59 | } 60 | } 61 | return false 62 | } 63 | 64 | var removeActionFlag removeAction = "ignore" 65 | 66 | func init() { 67 | flag.Var(&removeActionFlag, "remove", "Action to take on Remove: ignore, delete or rename") 68 | flag.Parse() 69 | } 70 | 71 | func socketPath() string { 72 | return filepath.Join(*pluginDir, *pluginName+".sock") 73 | } 74 | 75 | func logfilePath() string { 76 | return filepath.Join(*logDir, *pluginName+"-docker-plugin.log") 77 | } 78 | 79 | func main() { 80 | if *versionFlag { 81 | fmt.Printf("%s\n", VERSION) 82 | return 83 | } 84 | 85 | logFile, err := setupLogging() 86 | if err != nil { 87 | log.Fatalf("FATAL: Unable to setup logging: %s", err) 88 | } 89 | defer shutdownLogging(logFile) 90 | 91 | log.Printf("INFO: starting rbd-docker-plugin version %s", VERSION) 92 | log.Printf("INFO: canCreateVolumes=%v, removeAction=%q", *canCreateVolumes, removeActionFlag) 93 | log.Printf( 94 | "INFO: Setting up Ceph Driver for PluginID=%s, cluster=%s, ceph-user=%s, pool=%s, mount=%s, config=%s", 95 | *pluginName, 96 | *cephCluster, 97 | *cephUser, 98 | *defaultCephPool, 99 | *rootMountDir, 100 | *cephConfigFile, 101 | ) 102 | 103 | // double check for config file - required especially for non-standard configs 104 | if 
*cephConfigFile == "" {
105 | 		log.Fatal("FATAL: Unable to use ceph rbd tool without config file")
106 | 	}
107 | 	if _, err = os.Stat(*cephConfigFile); os.IsNotExist(err) {
108 | 		log.Fatalf("FATAL: Unable to find ceph config needed for ceph rbd tool: %s", err)
109 | 	}
110 | 
111 | 	// build driver struct -- but don't create connection yet
112 | 	d := newCephRBDVolumeDriver(
113 | 		*pluginName,
114 | 		*cephCluster,
115 | 		*cephUser,
116 | 		*defaultCephPool,
117 | 		*rootMountDir,
118 | 		*cephConfigFile,
119 | 	)
120 | 
121 | 	log.Println("INFO: Creating Docker VolumeDriver Handler")
122 | 	h := volume.NewHandler(d)
123 | 
124 | 	socket := socketPath()
125 | 	log.Printf("INFO: Opening Socket for Docker to connect: %s", socket)
126 | 	// ensure the directory exists
127 | 	err = os.MkdirAll(filepath.Dir(socket), 0755)
128 | 	if err != nil {
129 | 		log.Fatalf("FATAL: Error creating socket directory: %s", err)
130 | 	}
131 | 
132 | 	// set up signal handling after logging setup and creating the driver, in order to signal the logfile and ceph connection
133 | 	// NOTE: systemd will send SIGTERM followed by SIGKILL after a timeout to stop a service daemon
134 | 	signalChannel := make(chan os.Signal, 2) // chan with buffer size 2
135 | 	signal.Notify(signalChannel, syscall.SIGTERM, syscall.SIGKILL) // NOTE: SIGKILL cannot actually be caught; only SIGTERM will ever be delivered here
136 | 	go func() {
137 | 		for sig := range signalChannel {
138 | 			//sig := <-signalChannel
139 | 			switch sig {
140 | 			case syscall.SIGTERM, syscall.SIGKILL:
141 | 				log.Printf("INFO: received TERM or KILL signal: %s", sig)
142 | 				shutdownLogging(logFile)
143 | 				os.Exit(0)
144 | 			}
145 | 		}
146 | 	}()
147 | 
148 | 	// open socket
149 | 	err = h.ServeUnix(socket, currentGid())
150 | 
151 | 	if err != nil {
152 | 		log.Printf("ERROR: Unable to create UNIX socket: %v", err)
153 | 	}
154 | }
155 | 
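Editor's note: main() above registers syscall.SIGKILL alongside SIGTERM, but Go programs cannot trap SIGKILL (the kernel never delivers it to the process), so only the SIGTERM branch will ever fire; the SIGKILL registration is a harmless no-op. A minimal standalone sketch of the catchable subset:

    package main

    import (
    	"fmt"
    	"os"
    	"os/signal"
    	"syscall"
    )

    func main() {
    	ch := make(chan os.Signal, 2)
    	// SIGKILL omitted: it cannot be caught, blocked, or ignored, so
    	// registering it (as main.go does) has no practical effect.
    	signal.Notify(ch, syscall.SIGTERM)
    	fmt.Println("got signal:", <-ch)
    }
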
156 | // isDebugEnabled checks for the RBD_DOCKER_PLUGIN_DEBUG environment variable
157 | func isDebugEnabled() bool {
158 | 	return *debugFlag || os.Getenv("RBD_DOCKER_PLUGIN_DEBUG") == "1"
159 | }
160 | 
161 | // setupLogging attempts to log to a file, otherwise stderr
162 | func setupLogging() (*os.File, error) {
163 | 	// use date, time and filename for log output
164 | 	log.SetFlags(log.LstdFlags | log.Lshortfile)
165 | 
166 | 	// setup logfile - path is set from logDir and pluginName
167 | 	logfileName := logfilePath()
168 | 	if !isDebugEnabled() && logfileName != "" {
169 | 		logFile, err := os.OpenFile(logfileName, os.O_RDWR|os.O_CREATE|os.O_APPEND, 0666)
170 | 		if err != nil {
171 | 			// check if we can write to the directory - otherwise just log to stderr
172 | 			if os.IsPermission(err) {
173 | 				log.Printf("WARN: logging fallback to STDERR: %v", err)
174 | 			} else {
175 | 				// some other, more extreme system error
176 | 				return nil, err
177 | 			}
178 | 		} else {
179 | 			log.Printf("INFO: setting log file: %s", logfileName)
180 | 			log.SetOutput(logFile)
181 | 			return logFile, nil
182 | 		}
183 | 	}
184 | 	return nil, nil
185 | }
186 | 
187 | func shutdownLogging(logFile *os.File) {
188 | 	// flush and close the file
189 | 	if logFile != nil {
190 | 		log.Println("INFO: closing log file")
191 | 		logFile.Sync()
192 | 		logFile.Close()
193 | 	}
194 | }
195 | 
196 | func reloadLogging(logFile *os.File) (*os.File, error) {
197 | 	log.Println("INFO: reloading log")
198 | 	if logFile != nil {
199 | 		shutdownLogging(logFile)
200 | 	}
201 | 	return setupLogging()
202 | }
-------------------------------------------------------------------------------- /marathon-test/Dockerfile: --------------------------------------------------------------------------------
1 | # Dockerfile to test the RBD Ceph Plugin via a marathon json job
2 | 
3 | # container expects a mounted file, appends data to it and then exits after a short sleep.
4 | 
5 | # JSON file to launch this job: marathon-test.json
6 | #   export MARATHON_HOST=localhost
7 | #   curl -X POST -H "Content-Type: application/json" \
8 | #     http://${MARATHON_HOST}:8080/v2/apps \
9 | #     -d@marathon-test.json
10 | 
11 | FROM gliderlabs/alpine:edge
12 | 
13 | WORKDIR /root
14 | 
15 | ENV RBD_TEST /mnt/foo/
16 | ADD run.sh /root/run.sh
17 | CMD ["/root/run.sh"]
-------------------------------------------------------------------------------- /marathon-test/Makefile: --------------------------------------------------------------------------------
1 | .PHONY: $(CONTAINER)
2 | 
3 | SUDO?=
4 | MARATHON_HOST?=localhost
5 | MARATHON_PASS?=
6 | MARATHON_JSON?=marathon-test.json
7 | 
8 | # update this when you make changes
9 | APP_VERSION=0.2.1
10 | CONTAINER=rbd-marathon-test
11 | TAG?=latest
12 | REMOTE_NAME?=$(CONTAINER)
13 | 
14 | #all: remote
15 | #remote: $(REMOTE_NAME)
16 | 
17 | container:
18 | 	$(SUDO) docker build -t $(REMOTE_NAME):$(APP_VERSION) .
19 | 
20 | push: container
21 | 	$(SUDO) docker tag -f $(REMOTE_NAME):$(APP_VERSION) $(REMOTE_NAME):$(TAG)
22 | 	$(SUDO) docker push $(REMOTE_NAME):$(APP_VERSION)
23 | 	$(SUDO) docker push $(REMOTE_NAME):$(TAG)
24 | 
25 | deploy:
26 | 	curl -X POST -H "Content-Type: application/json" http://$(MARATHON_HOST):8080/v2/apps -d@$(MARATHON_JSON)
-------------------------------------------------------------------------------- /marathon-test/marathon-test.json: --------------------------------------------------------------------------------
1 | {
2 |   "container": {
3 |     "type": "DOCKER",
4 |     "docker": {
5 |       "network": "HOST",
6 |       "image": "rbd-marathon-test:latest",
7 |       "forcePullImage": true,
8 |       "parameters": [
9 |         { "key": "volume-driver", "value": "rbd" },
10 |         { "key": "volume", "value": "foo:/mnt/foo:rw" }
11 |       ]
12 |     },
13 |     "volumes": [ ]
14 |   },
15 |   "id": "rbd-plugin-tester",
16 |   "instances": 1,
17 |   "cpus": 0.1,
18 |   "mem": 256,
19 |   "uris": []
20 | }
-------------------------------------------------------------------------------- /marathon-test/run.sh: --------------------------------------------------------------------------------
1 | #!/bin/sh
2 | 
3 | # append a hello to a log file
4 | 
5 | RBD_TEST=${RBD_TEST:-/mnt/foo}
6 | LOG_FILE=${RBD_TEST}/hello.log
7 | 
8 | 
9 | # don't make marathon churn too much ...
10 | SLEEP_TIME=${SLEEP_TIME:-300}
11 | 
12 | echo "hello from $HOSTNAME"
13 | 
14 | # check for the file
15 | LOG_ERROR=0
16 | if [ ! -d $RBD_TEST ] ; then
17 | 	echo "ERROR: $HOSTNAME: unable to find rbd mount: $RBD_TEST"
18 | 	LOG_ERROR=1
19 | fi
20 | 
21 | if [ ! -f $LOG_FILE ] ; then
22 | 	echo "ERROR: $HOSTNAME: unable to find log file: $LOG_FILE"
23 | 	LOG_ERROR=1
24 | else
25 | 	echo "NOTE: found the existing mounted log file: $LOG_FILE ==>"
26 | 	cat $LOG_FILE
27 | 	echo "----"
28 | fi
29 | 
30 | # append our note to the log
31 | echo "$HOSTNAME `date`" | tee -a $LOG_FILE
32 | 
33 | # sleep a bit and exit
34 | echo -n "sleeping $SLEEP_TIME ... "
35 | sleep $SLEEP_TIME
36 | 
37 | echo "goodbye from $HOSTNAME"
38 | 
39 | if [ $LOG_ERROR != 0 ] ; then
40 | 	exit $LOG_ERROR
41 | fi
-------------------------------------------------------------------------------- /micro-osd.sh: --------------------------------------------------------------------------------
1 | #
2 | # Copyright (C) 2013,2014 Loic Dachary
3 | #
4 | # This program is free software: you can redistribute it and/or modify
5 | # it under the terms of the GNU Affero General Public License as published by
6 | # the Free Software Foundation, either version 3 of the License, or
7 | # (at your option) any later version.
8 | #
9 | # This program is distributed in the hope that it will be useful,
10 | # but WITHOUT ANY WARRANTY; without even the implied warranty of
11 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12 | # GNU Affero General Public License for more details.
13 | #
14 | # You should have received a copy of the GNU Affero General Public License
15 | # along with this program. If not, see <http://www.gnu.org/licenses/>.
16 | #
17 | set -e
18 | set -u
19 | 
20 | DIR=$1
21 | 
22 | #if ! dpkg -l ceph ; then
23 | #    wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
24 | #    echo deb http://ceph.com/debian-dumpling/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
25 | #    sudo apt-get update
26 | #    sudo apt-get --yes install ceph ceph-common
27 | #fi
28 | 
29 | # get rid of process and directory leftovers
30 | pkill ceph-mon || true
31 | pkill ceph-osd || true
32 | pkill ceph-mds || true
33 | rm -fr $DIR
34 | 
35 | # cluster wide parameters
36 | mkdir -p ${DIR}/log
37 | cat >> $DIR/ceph.conf <> $DIR/ceph.conf <> $DIR/ceph.conf <