├── .gitignore
├── LICENSE
├── README.md
├── additional_prep.sh
├── docker-compose.yml
├── docker-machine-hostprep.sh
├── license.xml
└── object_prep.py

/.gitignore:
--------------------------------------------------------------------------------
 1 | # Compiled source #
 2 | ###################
 3 | *.com
 4 | *.class
 5 | *.dll
 6 | *.exe
 7 | *.o
 8 | *.so
 9 | 
10 | # Packages #
11 | ############
12 | # it's better to unpack these files and commit the raw source
13 | # git has its own built in compression methods
14 | *.7z
15 | *.dmg
16 | *.gz
17 | *.iso
18 | *.jar
19 | *.rar
20 | *.tar
21 | *.zip
22 | 
23 | # Logs and databases #
24 | ######################
25 | *.log
26 | *.sql
27 | *.sqlite
28 | 
29 | # OS generated files #
30 | ######################
31 | .DS_Store
32 | .DS_Store?
33 | ._*
34 | .Spotlight-V100
35 | .Trashes
36 | ehthumbs.db
37 | Thumbs.db
38 | 
39 | # Myfiles
40 | ###############
41 | listofcommands
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
 1 | The MIT License (MIT)
 2 | 
 3 | Copyright (c) 2015 EMC Corporation
 4 | 
 5 | Permission is hereby granted, free of charge, to any person obtaining a copy
 6 | of this software and associated documentation files (the "Software"), to deal
 7 | in the Software without restriction, including without limitation the rights
 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 | 
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 | 
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 | 
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
 1 | ecs-dockerswarm
 2 | ======================
 3 | Deploy EMC's ECS as a multi-node Docker container setup using Docker Compose into a Docker Swarm cluster created by Docker Machine.
 4 | 
 5 | ## Description
 6 | [EMC Elastic Cloud Storage (ECS)](https://www.emc.com/storage/ecs-appliance/index.htm?forumID) is a software-defined cloud storage platform that exposes S3, Swift, and Atmos endpoints. This walk-through demonstrates the setup of a 3-node ECS cluster using Docker containers and Docker tools.
 7 | 
 8 | ## Requirements
 9 | * Each ECS node requires 16GB of RAM and an attached volume with a minimum of 512GB. The attached volume stores persistent data that is replicated between the three nodes.
10 | * [Docker Machine](https://docs.docker.com/machine/) installed on your local laptop. Use `0.3.0-rc2` or a later release from [Docker Machine Releases](https://github.com/docker/machine/releases) until all `--swarm` commands are merged into a stable release.
11 | * [Docker Compose](https://docs.docker.com/compose/) installed on your local laptop.
Use `1.3.0 RC2` or later from [Docker Compose Releases](https://github.com/docker/compose/releases).
12 | * [Docker Client](https://docs.docker.com/machine/) installed on your local laptop. (Follow the Docker Client installation directions.)
13 | 
14 | This is for test and development purposes. Not for production use. Please reach out to your local EMC SE for information on production usage. EMC ECS ships as a commodity storage appliance for production use.
15 | 
16 | ## Let's Get Going
17 | 
18 | #### Set Up 3 Ubuntu Hosts with Docker Swarm using Docker Machine
19 | The following examples use Ubuntu. Ubuntu is a requirement for using the setup scripts.
20 | 
21 | [Docker Machine](https://docs.docker.com/machine/) examples are shown with the AWS driver and the standard Ubuntu AMI. However, any cloud or compatible infrastructure supported by Docker Machine can be used.
22 | 
23 | 1. At this beta stage, the security of ports hasn't been defined. For this to work properly, ports 0-65535 need to be opened *inbound* to each of your Docker Machine hosts. Within AWS, this is called a *Security Group*.
24 | 
25 |    More port information can be found in the [ECS Security Configuration Guide](https://community.emc.com/docs/DOC-45012).
26 | 
27 | 2. Create a [Docker Swarm](https://docs.docker.com/swarm/) ID.
28 | 
29 |    1. This can be done by using `docker run swarm:0.3.0-rc2 create` on any host that can run Docker. If no host is available, use Docker Machine to create the host:
30 |    ```
31 |    docker-machine -D create --driver amazonec2 --amazonec2-access-key MYKEY --amazonec2-secret-key MYSECRETKEY --amazonec2-vpc-id vpc-myvpc swarm-create
32 |    ```
33 |    2. Connect to the Docker Machine: `docker-machine env swarm-create`
34 |    3. Get shell access: `eval "$(docker-machine env swarm-create)"`
35 |    4. Run the Swarm container: `docker run swarm:0.3.0-rc2 create` (currently you must use the latest release, `v0.3.0-rc2`, to support Docker Compose)
36 |    5. The output will have a unique token such as `b353bb30194d59ab33e4d47c012ee895`.
37 | 3. Create a 3-node [Docker Swarm](https://docs.docker.com/swarm/) cluster using [Docker Machine](https://docs.docker.com/machine/). Each node requires:
38 |    - a root drive of >=20GB
39 |    - 28GB RAM or more
40 |    - a Swarm token, using image `0.3.0-rc2` or higher until this is merged into the stable release
41 | 
42 |    1. Create Swarm Master:
43 |    ```
44 |    docker-machine -D create --driver amazonec2 --amazonec2-access-key MYKEY --amazonec2-secret-key MYSECRETKEY --amazonec2-vpc-id vpc-myvpc --amazonec2-instance-type r3.xlarge --amazonec2-root-size 50 --swarm --swarm-image=swarm:0.3.0-rc2 --swarm-master --swarm-discovery token://b353bb30194d59ab33e4d47c012ee895 swarm-master
45 |    ```
46 |    2. Create Node 01:
47 |    ```
48 |    docker-machine -D create --driver amazonec2 --amazonec2-access-key MYKEY --amazonec2-secret-key MYSECRETKEY --amazonec2-vpc-id vpc-myvpc --amazonec2-instance-type r3.xlarge --amazonec2-root-size 50 --swarm --swarm-image=swarm:0.3.0-rc2 --swarm-discovery token://b353bb30194d59ab33e4d47c012ee895 swarm-node01
49 |    ```
50 |    3. Create Node 02:
51 |    ```
52 |    docker-machine -D create --driver amazonec2 --amazonec2-access-key MYKEY --amazonec2-secret-key MYSECRETKEY --amazonec2-vpc-id vpc-myvpc --amazonec2-instance-type r3.xlarge --amazonec2-root-size 50 --swarm --swarm-image=swarm:0.3.0-rc2 --swarm-discovery token://b353bb30194d59ab33e4d47c012ee895 swarm-node02
53 |    ```
54 | 
55 |    - note: The `-D` flag is used to look at diagnostics.
It's not necessary, but it is helpful when looking for bugs, since this walk-through relies on Release Candidate versions of all the tooling.
56 | 
57 | #### Add A Volume to Each Node
58 | Docker containers, by nature, are used for stateless applications. By attaching an outside volume (not the root volume), data persistence is achieved. The container running the ECS software is ephemeral, and the volume containing the data can be attached to any ECS container. All ECS nodes replicate data to one another and store logs in this attached volume. The volume needs to be >=512GB.
59 | 
60 | After the volume has been added, retrieve the mounted path. For instance, AWS uses `/dev/xvdf`. This can be retrieved by using `docker-machine ssh swarm-master "sudo fdisk -l"`, which shows the disk layout for the `swarm-master` machine.
61 | 
62 | This example shows how to attach a volume using AWS.
63 | 
64 | 1. Log in to the AWS Console
65 | 2. Navigate to **EC2 -> Volumes -> Create Volume**
66 | 3. Set the size to **512** or greater
67 | 4. Set the **Availability Zone** to where your Docker Machine hosts reside.
68 | 5. Repeat this 2 more times so there are 3 volumes
69 | 6. Select 1 volume and navigate to **Actions -> Attach Volume**
70 | 7. Attach 1 unique volume per Docker Machine Swarm host. Do not attach all three volumes to all three hosts.
71 | 
72 | #### Prepare the hosts
73 | Each host requires a script to be run that formats the attached volume as XFS and creates the required folders and permissions.
74 | 
75 | 1. Using Docker Machine, download the setup script from this repo:
76 | ```
77 | docker-machine ssh swarm-master "curl -O https://raw.githubusercontent.com/emccode/ecs-dockerswarm/master/docker-machine-hostprep.sh"
78 | docker-machine ssh swarm-node01 "curl -O https://raw.githubusercontent.com/emccode/ecs-dockerswarm/master/docker-machine-hostprep.sh"
79 | docker-machine ssh swarm-node02 "curl -O https://raw.githubusercontent.com/emccode/ecs-dockerswarm/master/docker-machine-hostprep.sh"
80 | ```
81 | 2. The hosts require the **internal** IPs of all the ECS nodes in the cluster. Note that `docker-machine ls` only lists the **external** IPs, which are not the ones needed here:
82 | ```
83 | NAME           ACTIVE   DRIVER      STATE     URL                      SWARM
84 | swarm-create   *        amazonec2   Running   tcp://52.4.23.123:2376
85 | swarm-master            amazonec2   Running   tcp://52.7.13.18:2376    swarm-master (master)
86 | swarm-node01            amazonec2   Running   tcp://54.12.23.16:2376   swarm-master
87 | swarm-node02            amazonec2   Running   tcp://52.7.19.173:2376   swarm-master
88 | ```
89 | You will need to go into your cloud provider and get the **internal IP addresses** for each node. This will hopefully be added to `docker-machine` in the near future. Pay attention to [Issue #1342 FR: Get internal IP of cloud hosted machine](https://github.com/docker/machine/issues/1342) for when it will be baked into the master release.
90 | 
91 | 3. The command to kick off the host preparation accepts two arguments:
92 |    1. all three internal IP addresses as comma-separated values with no spaces, such as `172.38.13.18,172.38.23.16,172.38.19.173`
93 |    2. the mounted drive path without the `/dev/` portion, such as `xvdf`. If no argument is specified, `xvdf` is the default.
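If you want to double-check the device name first, listing the block devices over `docker-machine ssh` is a quick optional sanity check (this assumes `lsblk` is available on the Ubuntu AMI, as an alternative to the `fdisk -l` tip shown earlier):
```
docker-machine ssh swarm-master "lsblk"
```
Then run the prep script against all three hosts: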
94 | ```
95 | docker-machine ssh swarm-master "sudo sh docker-machine-hostprep.sh 172.38.13.18,172.38.23.16,172.38.19.173 xvdf"
96 | docker-machine ssh swarm-node01 "sudo sh docker-machine-hostprep.sh 172.38.13.18,172.38.23.16,172.38.19.173 xvdf"
97 | docker-machine ssh swarm-node02 "sudo sh docker-machine-hostprep.sh 172.38.13.18,172.38.23.16,172.38.19.173 xvdf"
98 | ```
99 | 4. The last output line will say `Host has been successfully prepared`.
100 | 
101 | #### Docker Compose Up
102 | The `docker-compose.yml` file will create 3 ECS containers, including all volume mounts and networking.
103 | 
104 | 1. Connect to the Docker Machine Swarm instance: `docker-machine env --swarm swarm-master`
105 | 2. Get shell access to the Swarm cluster: `eval "$(docker-machine env --swarm swarm-master)"`
106 | 3. To test whether Swarm is working correctly, run `docker info` and you will see output such as:
107 | ```
108 | Containers: 4
109 | Images: 3
110 | Storage Driver: 
111 | Strategy: spread
112 | Filters: affinity, health, constraint, port, dependency
113 | Nodes: 3
114 |  swarm-master: 52.7.13.18:2376
115 |   └ Containers: 2
116 |   └ Reserved CPUs: 0 / 4
117 |   └ Reserved Memory: 0 B / 15.42 GiB
118 |  swarm-node01: 54.12.23.16:2376
119 |   └ Containers: 1
120 |   └ Reserved CPUs: 0 / 4
121 |   └ Reserved Memory: 0 B / 15.42 GiB
122 |  swarm-node02: 52.7.19.173:2376
123 |   └ Containers: 1
124 |   └ Reserved CPUs: 0 / 4
125 |   └ Reserved Memory: 0 B / 15.42 GiB
126 | Execution Driver: 
127 | Kernel Version: 
128 | Operating System: 
129 | CPUs: 12
130 | Total Memory: 94.38 GiB
131 | Name: 
132 | ID: 
133 | Http Proxy: 
134 | Https Proxy: 
135 | No Proxy: 
136 | ```
137 | 4. Copy this repo using `git clone https://github.com/emccode/ecs-dockerswarm.git`, or save the `docker-compose.yml` file to a folder.
138 | 5. Navigate to your folder, such as `ecs-dockerswarm`.
139 | 6. Run `docker-compose up -d`. Docker Compose points at the currently registered Docker host, which is our Swarm cluster:
140 | ```
141 | Pulling ecsnode02 (emccorp/ecs-software)...
142 | swarm-node01: Pulling emccorp/ecs-software... : downloaded
143 | swarm-node02: Pulling emccorp/ecs-software... : downloaded
144 | swarm-master: Pulling emccorp/ecs-software... : downloaded
145 | Creating ecsdockerswarm_ecsnode02_1...
146 | Creating ecsdockerswarm_ecsnode03_1...
147 | Creating ecsdockerswarm_ecsnode01_1...
148 | ```
149 | 7. Make sure all containers are up using `docker ps` and wait 10-15 minutes for ECS to completely load.
150 | ```
151 | CONTAINER ID        IMAGE                  COMMAND                CREATED             STATUS              PORTS     NAMES
152 | 9054fc14ee5a        emccorp/ecs-software   "/opt/vipr/boot/boot   36 seconds ago      Up 35 seconds                 swarm-master/ecsdockerswarm_ecsnode01_1
153 | a90ef32ce5ec        emccorp/ecs-software   "/opt/vipr/boot/boot   39 seconds ago      Up 38 seconds                 swarm-node01/ecsdockerswarm_ecsnode02_1
154 | e6356782748c        emccorp/ecs-software   "/opt/vipr/boot/boot   39 seconds ago      Up 38 seconds                 swarm-node02/ecsdockerswarm_ecsnode03_1
155 | ```
156 | **Warning:** `docker-compose.yml` uses affinity rules based on the container name. If a container name is changed within the `docker-compose.yml` file, then the matching affinity environment variables need to be edited as well.
157 | 
158 | #### ECS UI Access
159 | After a few minutes, the ECS UI will be available. To access the UI, point your browser to one of the **Public IP addresses** using `https://`.
You can retrieve the **external** (public) IPs with `docker-machine ls`:
160 | ```
161 | NAME           ACTIVE   DRIVER      STATE     URL                      SWARM
162 | swarm-create            amazonec2   Running   tcp://52.4.23.123:2376
163 | swarm-master            amazonec2   Running   tcp://52.7.13.18:2376    swarm-master (master)
164 | swarm-node01            amazonec2   Running   tcp://54.12.23.16:2376   swarm-master
165 | swarm-node02            amazonec2   Running   tcp://52.7.19.173:2376   swarm-master
166 | ```
167 | Username: root
168 | Password: ChangeMe
169 | ![ecs-login-ui](https://s3.amazonaws.com/kennyonetime/ecsuilogin.png "ECS Login UI")
170 | 
171 | #### ECS Object Preparation
172 | At this point, there are still more steps to be completed, such as adding the license.xml, creating the object store, and so on. These steps have been automated in `object_prep.py`. This script only needs to be run once per cluster.
173 | 
174 | Download two files to one of the hosts:
175 | ```
176 | docker-machine ssh swarm-master "curl -O https://raw.githubusercontent.com/emccode/ecs-dockerswarm/master/license.xml"
177 | docker-machine ssh swarm-master "curl -O https://raw.githubusercontent.com/emccode/ecs-dockerswarm/master/object_prep.py"
178 | ```
179 | 
180 | Next, run the Python script. In the `--ECSNodes` flag, specify the Public IP address of one of the nodes. This script will take about 10-15 minutes to complete. Do not exit out of the script:
181 | ```
182 | docker-machine ssh swarm-master "sudo python object_prep.py --ECSNodes=52.7.13.18 --Namespace=ns1 --ObjectVArray=ova1 --ObjectVPool=ovp1 --UserName=emccode --DataStoreName=ds1 --VDCName=vdc1 --MethodName="
183 | ```
184 | 
185 | At this point, you can put the ECS nodes behind a load balancer and a single IP/DNS name for traffic redirection. Hooray!
186 | 
187 | ## Contribution
188 | Create a fork of the project in your own repository. Make all your necessary changes and create a pull request with a description of what was added or removed and details explaining the changes. If approved, the project owners will merge it.
189 | 
190 | Licensing
191 | ---------
192 | ecs-dockerswarm is freely distributed under the [MIT License](http://opensource.org/licenses/MIT). See LICENSE for details.
193 | 
194 | Support
195 | -------
196 | Please file bugs and issues on the GitHub issues page for this project. This helps keep track of and document everything related to this repo. For general discussions and further support, you can join the [EMC {code} Community slack channel](http://community.emccode.com/). Lastly, for questions asked on [Stackoverflow.com](https://stackoverflow.com), please tag them with **EMC**. The code and documentation are released with no warranties or SLAs and are intended to be supported through a community-driven process.
197 | -------------------------------------------------------------------------------- /additional_prep.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | MOUNT=/bin/mount 4 | UMOUNT=/bin/umount 5 | GREP=/usr/bin/grep 6 | AWK=/usr/bin/awk 7 | DF=/bin/df 8 | FALLOCATE=/usr/bin/fallocate 9 | XFS_DB=/usr/sbin/xfs_db 10 | #XFS_CHECK=/usr/sbin/xfs_check 11 | XFS_REPAIR=/sbin/xfs_repair 12 | DD=/bin/dd 13 | 14 | # size of chunk file 15 | FILE_SIZE_GB=10 16 | 17 | function usage { 18 | echo "usage: $0 " 19 | exit 1 20 | } 21 | 22 | device=$1 23 | mount_point=`$MOUNT | $GREP $device | $AWK '{print $3}'` 24 | 25 | if [ -z "$device" -o -z "$mount_point" ]; then 26 | usage 27 | fi 28 | 29 | # mount file system to create files on it 30 | $MOUNT | $GREP $device -q 31 | if [ $? -eq 1 ]; then 32 | echo "mount $device" 33 | $MOUNT $device $mount_point 34 | fi 35 | 36 | num_files=`$DF -BG $mount_point | $GREP $mount_point | $AWK -v FGB=$FILE_SIZE_GB '{gsub("G", "", $4); print int($4 / FGB)}'` 37 | declare -a inodes 38 | for ((i=0;i<$num_files;i++)) { 39 | file=`printf "%04d\n" $i` 40 | echo "create file $file" 41 | $FALLOCATE -l ${FILE_SIZE_GB}G $mount_point/$file 42 | inodes[$i]=`ls -i $mount_point/$file | $AWK '{print $1}'` 43 | } 44 | 45 | # umount file system and force allocation of data blocks 46 | $UMOUNT $device 47 | 48 | for inode in ${inodes[@]}; do 49 | num_extents=`$XFS_DB -c "inode $inode" -c "print u.bmx" $device | $AWK '{if ($0 ~ "not found") {print 0} else {print NF - 3}}'` 50 | # inode locates on offset inode # * inode size (256 by default) on disk 51 | # and the first extent locates at offset 100 in the inode 52 | extent_offset=$((inode*256+100)) 53 | for ((i=0;i<$num_extents;i++)) { 54 | echo "set extent $i of inode $inode at $extent_offset" 55 | # the extent status flag is the first two MSB 56 | # as the file size is just a few GB, the rest bits in the first byte are all zeros 57 | $DD if=/dev/zero of=$device bs=1 count=1 seek=$extent_offset > /dev/null 2>&1 58 | 59 | # each extent is 16 bytes 60 | extent_offset=$((extent_offset+16)) 61 | } 62 | 63 | # show the extents after force allocation 64 | $XFS_DB -c "inode $inode" -c "print u.bmx" $device 65 | done 66 | 67 | $XFS_REPAIR -n $device 68 | if [ $? -ne 0 ]; then 69 | echo "fs check failed" 70 | exit 1 71 | fi 72 | 73 | # mount again, the file system is now ready 74 | $MOUNT $device $mount_point -o noatime 75 | -------------------------------------------------------------------------------- /docker-compose.yml: -------------------------------------------------------------------------------- 1 | #Created by Kendrick Coleman of the EMC {code} Team and Licensed under MIT. 
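#The three services below are identical apart from their Swarm affinity
#constraints. Each "affinity:container!=*ecsnodeNN*" environment entry tells
#the Swarm scheduler to keep that container off any host already running one
#of the other ECS containers, so the three nodes land on three separate hosts.
#If a service/container name is changed here, the matching affinity entries in
#the other two services must be updated as well (see the Warning in README.md).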
2 | --- 3 | ecsnode01: 4 | environment: 5 | - "SS_GENCONFIG=1" 6 | - "affinity:container!=*ecsnode02*" 7 | - "affinity:container!=*ecsnode03*" 8 | image: "emccorp/ecs-software" 9 | net: host 10 | restart: always 11 | volumes: 12 | - "/host:/host" 13 | - "/var/log/vipr/emcvipr-object:/opt/storageos/logs" 14 | - "/data:/data:rw" 15 | ecsnode02: 16 | environment: 17 | - "SS_GENCONFIG=1" 18 | - "affinity:container!=*ecsnode01*" 19 | - "affinity:container!=*ecsnode03*" 20 | image: "emccorp/ecs-software" 21 | net: host 22 | restart: always 23 | volumes: 24 | - "/host:/host" 25 | - "/var/log/vipr/emcvipr-object:/opt/storageos/logs" 26 | - "/data:/data:rw" 27 | ecsnode03: 28 | environment: 29 | - "SS_GENCONFIG=1" 30 | - "affinity:container!=*ecsnode01*" 31 | - "affinity:container!=*ecsnode02*" 32 | image: "emccorp/ecs-software" 33 | net: host 34 | restart: always 35 | volumes: 36 | - "/host:/host" 37 | - "/var/log/vipr/emcvipr-object:/opt/storageos/logs" 38 | - "/data:/data:rw" -------------------------------------------------------------------------------- /docker-machine-hostprep.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | #Created by Kendrick Coleman of the EMC {code} Team and Licensed under MIT. 3 | #Please visit us at emccode.github.io 4 | #This script will prepare an Ubuntu Host for running an ECS container. 5 | #This has been properly tested with Docker Machine's default image in AWS. 6 | 7 | #Read in variable arguments from command line 8 | if [ -z "$1" ]; then 9 | echo "You forgot to specify 3 IP Addresses" 10 | exit 1 11 | fi 12 | 13 | case "$1" in 14 | *\ * ) 15 | echo "Please remove all spaces and have IP addresses as comma seperated values" 16 | exit 1 17 | ;; 18 | esac 19 | 20 | VOL="$2" 21 | if [ -z "$2" ]; then 22 | VOL="xvdf" 23 | echo "Using xvdf as Volume mount" 24 | fi 25 | 26 | #update system and install xfs 27 | echo "Updating Debian/Ubuntu" 28 | apt-get update -y 29 | echo "Install XFS Tools" 30 | apt-get install xfsprogs -y 31 | 32 | #create the seeds file 33 | echo "Creating Seeds File" 34 | printf '%s' $1 > seeds 35 | 36 | #create network.json file 37 | echo "Creating The network.json File" 38 | hn=$(hostname) 39 | ip=$(/sbin/ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}') 40 | printf '{"private_interface_name":"eth0","public_interface_name":"eth0","hostname":"%s","public_ip":"%s"}' $hn $ip > network.json 41 | 42 | #create some folders, set permissions, and format the attached volume 43 | echo "Creating /ecs/uuid-1 folder" 44 | mkdir -p /ecs/uuid-1 45 | 46 | echo "Formatting attached volume as XFS" 47 | mkfs.xfs /dev/$VOL 48 | 49 | echo "Mounting attached XFS volume to /ecs/uuid-1" 50 | mount /dev/$VOL /ecs/uuid-1 51 | 52 | echo "Adding /dev/$VOL mount to /etc/fstab" 53 | echo "/dev/$VOL /ecs/uuid-1 xfs defaults 0 0" >> /etc/fstab 54 | 55 | echo "Mounting /ecs/uuid-1" 56 | mount -a 57 | 58 | echo "Creating symlink" 59 | ln -s /bin/grep /usr/bin/grep 60 | 61 | echo "Downloading additional_prep.sh" 62 | curl -O https://raw.githubusercontent.com/emccode/ecs-dockerswarm/master/additional_prep.sh 63 | 64 | echo "Changing additional_prep.sh Permissions" 65 | chmod 777 additional_prep.sh 66 | 67 | echo "Starting the additional prep on attached volume" 68 | ./additional_prep.sh /dev/$VOL 69 | 70 | echo "Changing /ecs Permissions" 71 | chown -R 444 /ecs 72 | 73 | echo "Creating /host/data folder" 74 | mkdir -p /host/data 75 | echo "Creating /host/files folder" 76 | mkdir -p /host/files 77 | 
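#The /host tree is bind-mounted into the ECS container ("/host:/host" in
#docker-compose.yml), so the container can read network.json from /host/data
#and the seeds file from /host/files.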
78 | echo "Copying network.json to /host/data" 79 | cp network.json /host/data 80 | echo 'Copying seeds to /host/files' 81 | cp seeds /host/files 82 | 83 | echo "Changing /host Permissions" 84 | chown -R 444 /host 85 | 86 | echo "Creating /var/log/vipr/emcvipr-object folder" 87 | mkdir -p /var/log/vipr/emcvipr-object 88 | echo "Changing /var/log/vipr/emcvipr-object Permissions" 89 | chown 444 /var/log/vipr/emcvipr-object 90 | echo "Creating /data folder" 91 | mkdir /data 92 | echo "Changing /data Permissions" 93 | chown 444 /data 94 | echo "Host has been successfully prepared" -------------------------------------------------------------------------------- /license.xml: -------------------------------------------------------------------------------- 1 | 2 | 50 | 51 | -------------------------------------------------------------------------------- /object_prep.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # Object Prep script create by Magdy Salem of the EMC {code} Team and Licensed under MIT. 3 | # Please visit us at emccode.github.io 4 | 5 | import os,json 6 | import subprocess 7 | import shutil 8 | import getopt 9 | import sys,re 10 | import time 11 | 12 | AuthToken=None 13 | def getAuthToken(ECSNode, User, Password): 14 | curlCommand = "curl -i -k https://%s:4443/login -u %s:%s" % (ECSNode, User, Password) 15 | print ("Executing getAuthToken: %s " % curlCommand) 16 | res=subprocess.check_output(curlCommand, shell=True) 17 | authTokenPattern = "X-SDS-AUTH-TOKEN:(.*)\r\n" 18 | searchObject=re.search(authTokenPattern,res) 19 | assert searchObject, "Get Auth Token failed" 20 | print("Auth Token %s" % searchObject.group(1)) 21 | return searchObject.group(1) 22 | 23 | 24 | def executeRestAPI(url, method, filter, data, ECSNode,contentType='json',checkOutput=0): 25 | if data: 26 | subprocess.call("echo %s > request_body.tmp" % data, shell=True) 27 | data="-d @request_body.tmp" 28 | if "license" in url: 29 | data="-T license.xml" 30 | curlCommand = "curl -s -k -X %s -H 'Content-Type:application/%s' \ 31 | -H 'X-SDS-AUTH-TOKEN:%s' \ 32 | -H 'ACCEPT:application/%s' \ 33 | %s https://%s:9011%s" %(method, contentType, AuthToken, contentType,data, ECSNode, url) 34 | print ("Executing REST API command: %s " % curlCommand) 35 | #print jsonResult 36 | if checkOutput: 37 | subprocess.call(curlCommand, shell=True) 38 | jsonResult = subprocess.check_output(curlCommand, shell=True) 39 | RestOutputDict = {} 40 | RestOutputDict = json.loads(jsonResult) 41 | return RestOutputDict 42 | assert "code" not in jsonResult, "%s %s failed" % (method, url) 43 | else: 44 | res=subprocess.call(curlCommand, shell=True) 45 | print res 46 | 47 | 48 | def retry( numberOfRetries, timeToWaitBetweenTriesInSeconds, functionToRetry, argumentList, keywordArgs = {}): 49 | for i in range(numberOfRetries): 50 | try: 51 | return apply(functionToRetry, argumentList, keywordArgs) 52 | except Exception, e: 53 | print("Method %s threw error %s" % (functionToRetry, e)) 54 | print("Sleep for %s seconds before retry" % timeToWaitBetweenTriesInSeconds) 55 | time.sleep(timeToWaitBetweenTriesInSeconds) 56 | raise e 57 | 58 | 59 | def getVDCID(ECSNode,VDCName): 60 | url ='/object/vdcs/vdc/%s' %VDCName 61 | return executeRestAPI(url, 'GET','.id', "", ECSNode,checkOutput=1)["id"] 62 | 63 | def getVarrayID(ECSNode): 64 | return executeRestAPI('/vdc/data-services/varrays', 'GET','.id', "", ECSNode, checkOutput=1)['varray'][0]["id"] 65 | 66 | def getVpoolID(ECSNode): 67 | return 
executeRestAPI('/vdc/data-service/vpools', 'GET','.id', "", ECSNode, checkOutput=1)['data_service_vpool'][0]["id"] 68 | 69 | def getNamespaces(ECSNode): 70 | return executeRestAPI('/object/namespaces', 'GET','.id', "", ECSNode, checkOutput=1)['namespace'][0]["id"] 71 | 72 | 73 | def DeleteNamespace(ECSNode, Namespace): 74 | url ='/object/namespaces/namespace/%s/deactivate' %Namespace 75 | return executeRestAPI(url, 'POST','', "", ECSNode) 76 | 77 | 78 | def DeleteUser(ECSNode,userName,Namespace): 79 | print("\nDelete User %s" % userName) 80 | DeleteUserPayload ='{\\"user\\":\\"%s\\",\ 81 | \\"namespace\\":\\"%s\\"\ 82 | }' % (userName, Namespace) 83 | executeRestAPI("/object/users/deactivate", 'POST','.id', DeleteUserPayload, ECSNode) 84 | 85 | 86 | def getVDCSecretKey(self): 87 | Log.info(LoggingInfra.logger, "Fetch VDC secret key") 88 | secretKeyDict = self.executeRestAPI("/vdc/secret-key", 'GET', '.secret_key', "") 89 | return secretKeyDict['secret_key'] 90 | 91 | 92 | def UploadLicense(ECSNode): 93 | executeRestAPI("/license", 'POST','', '', ECSNode, contentType='xml') 94 | 95 | def UploadLicenseWithRetry(ECSNode): 96 | retry(5, 60, UploadLicense, [ECSNode]) 97 | 98 | 99 | def CreateObjectVArray(ECSNode, objectVArrayName): 100 | print("\nCreate Object Varray %s" % objectVArrayName) 101 | objectVArrayPayload ='{\\"name\\":\\"%s\\",\ 102 | \\"description\\":\\"%s\\",\ 103 | \\"isProtected\\":\\"%s\\"\ 104 | }' % (objectVArrayName, objectVArrayName, "false") 105 | executeRestAPI("/vdc/data-services/varrays", 'POST','.id', objectVArrayPayload, ECSNode, checkOutput=1) 106 | print("Object Varray %s is created" % objectVArrayName) 107 | 108 | def CreateObjectVarrayWithRetry(ECSNode, objectVArrayName): 109 | retry(5, 60, CreateObjectVArray, [ECSNode, objectVArrayName]) 110 | 111 | 112 | def createDataStoreOnCommodityNodes(ECSNode, dataStoreName, varray): 113 | createDataStorePayLoad ='{ \\"nodes\\":[\ 114 | {\ 115 | \\"nodeId\\":\\"%s\\",\\"name\\":\\"%s\\",\ 116 | \\"virtual_array\\":\\"%s\\",\\"description\\":\\"%s\\"\ 117 | }]}' % (ECSNode, dataStoreName, varray, dataStoreName) 118 | return executeRestAPI('/vdc/data-stores/commodity', 'POST','.id', createDataStorePayLoad, ECSNode) 119 | 120 | 121 | def CreateDataStoreOnCommodityNodesWithRetry(ECSNode, dataStoreName, varray): 122 | retry(5, 60, createDataStoreOnCommodityNodes, [ECSNode, dataStoreName, varray]) 123 | 124 | 125 | 126 | def InsertVDC(ECSNode, VDCName): 127 | secretKey="secret12345" 128 | #secretKey=getVDCSecretKey() 129 | InsertVDCPayload ='{\\"vdcName\\":\\"%s\\",\ 130 | \\"interVdcEndPoints\\":\\"%s\\", \ 131 | \\"secretKeys\\":\\"%s\\"\ 132 | }' % (VDCName, ECSNode, secretKey) 133 | executeRestAPI('/object/vdcs/vdc/%s' % VDCName, 'PUT','',InsertVDCPayload, ECSNode) 134 | return getVDCID(ECSNode,VDCName) 135 | 136 | 137 | def InsertVDCWithRetry(ECSNode, objectVpoolName): 138 | retry(5, 60, InsertVDC, [ECSNode, objectVpoolName]) 139 | 140 | 141 | def CreateObjectVpool(ECSNode, objectVpoolName, VDCName): 142 | vdcID = getVDCID(ECSNode,VDCName) 143 | print("\nVDC ID is %s" % vdcID) 144 | vArrayID = getVarrayID(ECSNode) 145 | print("\nVArray ID is %s" % vArrayID) 146 | objectVpoolPayload ='{\\"description\\":\\"%s\\",\ 147 | \\"name\\":\\"%s\\", \ 148 | \\"zone_mappings\\":[\ 149 | {\ 150 | \\"name\\":\\"%s\\",\\"value\\":\\"%s\\"\ 151 | }]}' % (objectVpoolName, objectVpoolName, vdcID, vArrayID) 152 | print("\nCreate Object VPool %s" % objectVpoolName) 153 | executeRestAPI("/vdc/data-service/vpools", 'POST','.id', 
objectVpoolPayload, ECSNode, checkOutput=1) 154 | print("Object Vpool %s is created" % objectVpoolName) 155 | 156 | def CreateObjectVpoolWithRetry(ECSNode, objectVpoolName, VDCName): 157 | retry(5, 60, CreateObjectVpool, [ECSNode, objectVpoolName, VDCName]) 158 | 159 | 160 | 161 | 162 | def CreateNamespace(ECSNode, Namespace, objectVpoolName): 163 | print("\nCreate Namespace %s" % Namespace) 164 | NamespacePayload='{\\"namespace\\": \\"%s\\", \\"default_data_services_vpool\\": \\"%s\\"}'%(Namespace, objectVpoolName) 165 | executeRestAPI("/object/namespaces/namespace", 'POST','.id', NamespacePayload, ECSNode, checkOutput=1) 166 | print("Namespace %s is created" % Namespace) 167 | 168 | def CreateNamespaceWithRetry(ECSNode, Namespace): 169 | retry(5, 60, CreateNamespace, [ECSNode, Namespace]) 170 | 171 | 172 | def addUser(ECSNode,userName,Namespace): 173 | print("\nCreate User %s" % userName) 174 | createUserPayload ='{\\"user\\":\\"%s\\",\ 175 | \\"namespace\\":\\"%s\\"\ 176 | }' % (userName, Namespace) 177 | executeRestAPI("/object/users", 'POST','.id', createUserPayload, ECSNode) 178 | 179 | 180 | def addUserSecretKey(ECSNode, username): 181 | secretKeyPayload='{\\"existing_key_expiry_time_mins\\":20000}' 182 | secretKeyDict = executeRestAPI("/object/user-secret-keys/%s" % username, 'POST', '.secret_key', secretKeyPayload, ECSNode) 183 | print("\nAdd secret key for user %s" % username) 184 | 185 | def getUserSecretKey(ECSNode, username): 186 | secretKeyDict = executeRestAPI("/object/user-secret-keys/%s" % username, 'GET', '.secret_key', "", ECSNode, checkOutput=1) 187 | print("\n\nUser %s SecretKey is %s" % (username,secretKeyDict['secret_key_1'])) 188 | 189 | 190 | 191 | def main(argv): 192 | try: 193 | opts, argv = getopt.getopt(argv, '', ["ECSNodes=","Namespace=","ObjectVArray=","ObjectVPool=","UserName=","DataStoreName=","VDCName=","MethodName="]) 194 | except getopt.GetoptError, e: 195 | print e 196 | print 'ObjectProvsioning.py --ECSNodes= --Namespace= --ObjectVArray= --ObjectVPool= --UserName= --DataStoreName= --VDCName= --MethodName=\n --MethodName is required only when you need to run a particular step in Object Provisioning.If this option is not provided all the Object Provisioning steps will be run.\n Supported options for --MethodName are:\n UploadLicense \n CreateObjectVarray \n GetVarrayID \n CreateDataStore \n InsertVDC \n CreateObjectVpool \n CreateNamespace \n CreateUserAndSecretKey \n' 197 | sys.exit(2) 198 | ECSNodes="" 199 | MethodName="" 200 | for opt, arg in opts: 201 | if opt == '-h': 202 | print 'ObjectProvsioning.py --ECSNodes= --Namespace= --ObjectVArray= --ObjectVPool= --UserName= --DataStoreName= --VDCName= --MethodName=\n --MethodName is required only when you need to run a particular step in Object Provisioning.If this option is not provided all the Object Provisioning steps will be run.\n Supported options for --MethodName are:\n UploadLicense \n CreateObjectVarray \n GetVarrayID \n CreateDataStore \n InsertVDC \n CreateObjectVpool \n CreateNamespace \n CreateUserAndSecretKey \n' 203 | sys.exit() 204 | elif opt in ("-ECSNodes", "--ECSNodes"): 205 | ECSNodes = arg 206 | ECSNodeList = ECSNodes.split(",") 207 | ECSNode = ECSNodeList[0] 208 | elif opt in ("-Namespace", "--Namespace"): 209 | Namespace = arg 210 | elif opt in ("-ObjectVArray", "--ObjectVArray"): 211 | ObjectVArray = arg 212 | elif opt in ("-ObjectVPool", "--ObjectVPool"): 213 | ObjectVPool = arg 214 | elif opt in ("-UserName", "--UserName"): 215 | UserName = arg 216 | elif opt in 
("-DataStoreName", "--DataStoreName"): 217 | DataStoreName = arg 218 | elif opt in ("-VDCName", "--VDCName"): 219 | VDCName = arg 220 | elif opt in ("-MethodName", "--MethodName"): 221 | MethodName = arg 222 | 223 | global AuthToken 224 | AuthToken=getAuthToken(ECSNode, "root", "ChangeMe") 225 | 226 | print("ECSNodes: %s" %ECSNode) 227 | print("Namespace: %s" %Namespace) 228 | print("ObjectVArray: %s" %ObjectVArray) 229 | print("ObjectVPool: %s" %ObjectVPool) 230 | print("UserName: %s" %UserName) 231 | print("DataStoreName: %s" %DataStoreName) 232 | print("VDCName: %s" %VDCName) 233 | print("MethodName: %s" %MethodName) 234 | 235 | 236 | if MethodName == "UploadLicense": 237 | UploadLicense(ECSNode) 238 | sys.exit() 239 | elif MethodName == "CreateObjectVarray": 240 | CreateObjectVarrayWithRetry(ECSNode, ObjectVArray) 241 | print("Virtual Array: %s" %getVarrayID(ECSNode)) 242 | sys.exit() 243 | elif MethodName == "GetVarrayID": 244 | ObjectVArrayID = getVarrayID(ECSNode) 245 | sys.exit() 246 | elif MethodName == "CreateDataStore": 247 | ObjectVArrayID = getVarrayID(ECSNode) 248 | for node in ECSNodeList: 249 | CreateDataStoreOnCommodityNodesWithRetry(node, DataStoreName, ObjectVArrayID) 250 | time.sleep(20 * 60) 251 | sys.exit() 252 | elif MethodName == "InsertVDC": 253 | InsertVDC(ECSNode, VDCName) 254 | print("VDCID: %s" %getVDCID(ECSNode, VDCName)) 255 | sys.exit() 256 | elif MethodName == "CreateObjectVpool": 257 | CreateObjectVpoolWithRetry(ECSNode, ObjectVPool, VDCName) 258 | print("Data service vPool ID:%s" %getVpoolID(ECSNode)) 259 | sys.exit() 260 | elif MethodName == "CreateNamespace": 261 | ObjectVPoolID = getVpoolID(ECSNode) 262 | CreateNamespace(ECSNode, Namespace, ObjectVPoolID) 263 | print("Namespace: %s" %getNamespaces(ECSNode)) 264 | sys.exit() 265 | elif MethodName == "CreateUser": 266 | addUser(ECSNode, UserName, Namespace) 267 | sys.exit() 268 | elif MethodName == "CreateSecretKey": 269 | addUserSecretKey(ECSNode, UserName) 270 | getUserSecretKey(ECSNode, UserName) 271 | sys.exit() 272 | elif MethodName == "DeleteUser": 273 | DeleteUser(ECSNode, UserName, Namespace) 274 | sys.exit() 275 | elif MethodName == "getUserSecretKey": 276 | getUserSecretKey(ECSNode, UserName) 277 | sys.exit() 278 | 279 | else: 280 | UploadLicense(ECSNode) 281 | CreateObjectVarrayWithRetry(ECSNode, ObjectVArray) 282 | print("Virtual Array: %s" %getVarrayID(ECSNode)) 283 | ObjectVArrayID = getVarrayID(ECSNode) 284 | 285 | for node in ECSNodeList: 286 | CreateDataStoreOnCommodityNodesWithRetry(node, DataStoreName, ObjectVArrayID) 287 | time.sleep(20 * 60) 288 | InsertVDC(ECSNode, VDCName) 289 | print("VDCID: %s" %getVDCID(ECSNode, VDCName)) 290 | CreateObjectVpoolWithRetry(ECSNode, ObjectVPool, VDCName) 291 | print("Data service vPool ID:%s" %getVpoolID(ECSNode)) 292 | ObjectVPoolID = getVpoolID(ECSNode) 293 | CreateNamespace(ECSNode, Namespace, ObjectVPoolID) 294 | print("Namespace: %s" %getNamespaces(ECSNode)) 295 | addUser(ECSNode, UserName, Namespace) 296 | addUserSecretKey(ECSNode, UserName) 297 | getUserSecretKey(ECSNode, UserName) 298 | sys.exit() 299 | 300 | #DeleteUser(ECSNode,UserName,Namespace) 301 | #DeleteNamespace(ECSNode, Namespace) 302 | #print (getNamespaces(ECSNode)) 303 | 304 | 305 | if __name__ == "__main__": 306 | main(sys.argv[1:]) --------------------------------------------------------------------------------