├── .project
├── README.md
├── docker-compose
│   ├── common.yml
│   └── docker-compose.yml
├── docker.md
├── docker
│   └── docker_zookeeper.sh
├── docker_compose.md
├── dynamic_no_docker.md
├── java_client
│   └── demo
│       └── org
│           └── zookeeper
│               └── demo
│                   └── Demo.java
└── no_docker.md

--------------------------------------------------------------------------------
/.project:
--------------------------------------------------------------------------------
<?xml version="1.0" encoding="UTF-8"?>
<projectDescription>
	<name>zookeeper_cluster_deployment</name>
	<comment></comment>
	<projects>
	</projects>
	<buildSpec>
	</buildSpec>
	<natures>
	</natures>
</projectDescription>

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# zookeeper_cluster_deployment
-------------------

## Deploy a ZooKeeper cluster (v3.4.6)

### 1. [Deploy a zookeeper cluster without docker](no_docker.md)

### 2. [Deploy a zookeeper cluster with docker](docker.md)

### 3. [Deploy a zookeeper cluster with docker compose](docker_compose.md)

### 4. [Deploy a dynamic zookeeper cluster (v3.5.1) without docker](dynamic_no_docker.md)

--------------------------------------------------------------------------------
/docker-compose/common.yml:
--------------------------------------------------------------------------------
zookeeper:
  image: garland/zookeeper
  net: "host"
  environment:
    - ADDITIONAL_ZOOKEEPER_1=server.1=localhost:2888:3888
    - ADDITIONAL_ZOOKEEPER_2=server.2=localhost:2889:3889
    - ADDITIONAL_ZOOKEEPER_3=server.3=localhost:2890:3890

--------------------------------------------------------------------------------
/docker-compose/docker-compose.yml:
--------------------------------------------------------------------------------
zk1:
  extends:
    file: common.yml
    service: zookeeper
  volumes:
    - /data/zk1:/tmp/zookeeper
  environment:
    - SERVER_ID=1
    - ADDITIONAL_ZOOKEEPER_4=clientPort=2181
zk2:
  extends:
    file: common.yml
    service: zookeeper
  volumes:
    - /data/zk2:/tmp/zookeeper
  environment:
    - SERVER_ID=2
    - ADDITIONAL_ZOOKEEPER_4=clientPort=2182
zk3:
  extends:
    file: common.yml
    service: zookeeper
  volumes:
    - /data/zk3:/tmp/zookeeper
  environment:
    - SERVER_ID=3
    - ADDITIONAL_ZOOKEEPER_4=clientPort=2183

--------------------------------------------------------------------------------
/docker.md:
--------------------------------------------------------------------------------
## ZooKeeper Cluster (on a single node) Setup with Docker

-------------------

**Environment:**

- Red Hat Enterprise Linux Server release 6.6 (Santiago)

- Kernel: 2.6.32-504.23.4.el6.x86_64

- Docker: Server version 1.7.1
  Storage Driver devicemapper
  Execution Driver native-0.2

- ZooKeeper: 3.4.6-1569965, built on 02/20/2014 09:09 GMT


### Why use the Docker Host Network Setting
-------------------


A ZooKeeper cluster runs multiple instances in replicated mode, also called a **ZooKeeper ensemble**. The minimum recommended number of servers is three,
and five is the most common in a production environment. All the instances in a ZooKeeper ensemble connect to each other in a peer-to-peer fashion.
First, each member of a ZooKeeper cluster needs to know its peers' IP addresses and ports. Docker provides [Container linking](https://docs.docker.com/userguide/dockerlinks/),
which lets a target container read the source container's information from environment variables, so it is a good fit for one-way dependency relationships
(e.g. Tomcat calling a database, or a MySQL master/slave pair). But because container linking only supports one-way dependencies, the source container can't get
information about the target container, so linking can't support applications that depend on peer-to-peer communication.

Docker supports four kinds of network settings: none, bridge, host and container.

#### Mode: none

ZooKeeper can't work without external network access.

#### Mode: bridge

Bridge is the default network mode of docker: the container gets an independent network stack and reaches the outside world through NAT.

##### Issues

1. Each time a container starts, the docker daemon assigns it a new network namespace and a new IP address, but the ZooKeeper cluster configuration has to be written before the containers are created and started.

2. NAT degrades network performance.

#### Mode: container

The container shares the network stack of another container. With this mode, we could first create 3 placeholder containers in bridge mode and record their network settings, then create 3 ZooKeeper containers
that share those network stacks.

##### **Issues**

1. A placeholder container in bridge mode can't be restarted, because restarting changes its IP address:

```
~ docker run --name test -d busybox /bin/sh -c "while true; do sleep 1; done"
~ docker inspect --format '{{ .NetworkSettings.IPAddress }}' test
172.17.0.39

~ docker restart test
test

~ docker inspect --format '{{ .NetworkSettings.IPAddress }}' test
172.17.0.40
```

#### Mode: host

The container shares the host's network stack, and all interfaces from the host are available to the container. The container's hostname matches the hostname of the host system, but the container's file
system and processes still have their own namespaces. Network performance is the same as for native processes.

##### **Issues**

On Red Hat, Docker's storage driver is devicemapper, whose performance is [not so good](http://jpetazzo.github.io/assets/2015-03-03-not-so-deep-dive-into-docker-storage-drivers.html#2).


### Deploy a ZooKeeper cluster on a single machine with the Docker Host Network Setting
-------------------

#### 0. Install docker

Follow the [installation guide](https://docs.docker.com/installation/) to install docker on your system.

#### 1. Install the docker image

Pull the docker image from Docker Hub: [ZooKeeper docker image](https://registry.hub.docker.com/u/garland/zookeeper/)

```
docker pull garland/zookeeper
```

Or build your own image from this [Dockerfile](https://github.com/sekka1/mesosphere-docker/blob/master/zookeeper/Dockerfile).
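Optionally, verify that the image is now available locally (`docker images` accepts a repository name as a filter):

```
~ docker images garland/zookeeper
```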
#### 2. Start the ZooKeeper Cluster with 3 docker containers


2.1 Use the commands below; the staggered ports avoid conflicts between the ZooKeeper cluster members

* container zk1 listens on localhost: 2181, 2888, 3888
* container zk2 listens on localhost: 2182, 2889, 3889
* container zk3 listens on localhost: 2183, 2890, 3890

```
docker run -d \
 --name=zk1 \
 --net=host \
 -v /data/zk1:/tmp/zookeeper \
 -e SERVER_ID=1 \
 -e ADDITIONAL_ZOOKEEPER_1=server.1=localhost:2888:3888 \
 -e ADDITIONAL_ZOOKEEPER_2=server.2=localhost:2889:3889 \
 -e ADDITIONAL_ZOOKEEPER_3=server.3=localhost:2890:3890 \
 -e ADDITIONAL_ZOOKEEPER_4=clientPort=2181 \
 garland/zookeeper

docker run -d \
 --name=zk2 \
 --net=host \
 -v /data/zk2:/tmp/zookeeper \
 -e SERVER_ID=2 \
 -e ADDITIONAL_ZOOKEEPER_1=server.1=localhost:2888:3888 \
 -e ADDITIONAL_ZOOKEEPER_2=server.2=localhost:2889:3889 \
 -e ADDITIONAL_ZOOKEEPER_3=server.3=localhost:2890:3890 \
 -e ADDITIONAL_ZOOKEEPER_4=clientPort=2182 \
 garland/zookeeper

docker run -d \
 --name=zk3 \
 --net=host \
 -v /data/zk3:/tmp/zookeeper \
 -e SERVER_ID=3 \
 -e ADDITIONAL_ZOOKEEPER_1=server.1=localhost:2888:3888 \
 -e ADDITIONAL_ZOOKEEPER_2=server.2=localhost:2889:3889 \
 -e ADDITIONAL_ZOOKEEPER_3=server.3=localhost:2890:3890 \
 -e ADDITIONAL_ZOOKEEPER_4=clientPort=2183 \
 garland/zookeeper
```


2.2 Check the ZooKeeper containers' status

```
~ docker ps
CONTAINER ID        IMAGE                COMMAND         CREATED          STATUS          PORTS   NAMES
c4c68cc4250b        garland/zookeeper    "/opt/run.sh"   17 seconds ago   Up 17 seconds           zk3
0f595e47c2cc        garland/zookeeper    "/opt/run.sh"   18 seconds ago   Up 17 seconds           zk2
1c7c6f420335        garland/zookeeper    "/opt/run.sh"   19 seconds ago   Up 18 seconds           zk1
```

2.3 List the data directories exported to the host

```
~ pwd
/data
~ tree
.
├── zk1
│   ├── myid
│   └── version-2
│       ├── acceptedEpoch
│       ├── currentEpoch
│       └── snapshot.0
├── zk2
│   ├── myid
│   └── version-2
│       ├── acceptedEpoch
│       └── currentEpoch
└── zk3
    ├── myid
    └── version-2
        ├── acceptedEpoch
        ├── currentEpoch
        └── snapshot.100000000
```

#### 3. Test the ZooKeeper Cluster

3.1 Create a new znode in cluster member zk1

```
~ docker exec -it zk1 bash
~ /opt/zookeeper/bin/zkCli.sh -server 127.0.0.1:2181    (inside container zk1)
Connecting to 127.0.0.1:2181
[zk: 127.0.0.1:2181(CONNECTED) 0] create /test1 abc
Created /test1
```

3.2 Check the znode in cluster member zk2
```
~ docker exec -it zk2 bash
~ /opt/zookeeper/bin/zkCli.sh -server 127.0.0.1:2182    (inside container zk2)
Connecting to 127.0.0.1:2182
[zk: 127.0.0.1:2182(CONNECTED) 0] get /test1
abc
cZxid = 0x100000004
ctime = Tue Oct 13 14:44:04 UTC 2015
mZxid = 0x100000004
mtime = Tue Oct 13 14:44:04 UTC 2015
pZxid = 0x100000004
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 3
numChildren = 0
```
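Because the containers share the host's network stack, each member's role can also be checked straight from the host with ZooKeeper's `stat` four-letter command (this assumes `nc` is installed; which member is the leader will of course vary from run to run):

```
~ for port in 2181 2182 2183; do echo stat | nc 127.0.0.1 $port | grep Mode; done
Mode: follower
Mode: leader
Mode: follower
```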
#### 4. Develop a java client for the ZooKeeper Cluster

4.1 Develop the code

```java
package org.zookeeper.demo;

import java.io.IOException;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs.Ids;
import org.apache.zookeeper.ZooKeeper;

public class Demo {

    public static void main(String[] args) throws IOException, KeeperException, InterruptedException {

        // connect to any available member of the ensemble
        ZooKeeper zk = new ZooKeeper("localhost:2181,localhost:2182,localhost:2183", 60000, new Watcher() {
            // monitor all the events
            public void process(WatchedEvent event) {
                System.out.println("EVENT:" + event.getType());
            }
        });

        String key = "/test";
        String value = "abc";

        // create the znode if it does not exist yet, otherwise read it back
        if (zk.exists(key, false) == null) {
            String createPath = zk.create(key, value.getBytes(),
                    Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
            System.out.println("Created znode:" + createPath);
        } else {
            byte[] data = zk.getData(key, false, null);
            System.out.print("Get znode:");
            System.out.printf("[%s,%s]", key, new String(data));
            System.out.println("");
        }

        // check the root znode
        System.out.println("ls / => " + zk.getChildren("/", true));

        zk.close();
    }
}
```


4.2 Compile the code

```
~ javac -cp .:/opt/zookeeper/zookeeper-3.4.6.jar org/zookeeper/demo/Demo.java
```

4.3 Execute the code

```
~ java -cp /opt/zookeeper/zookeeper-3.4.6.jar:/opt/zookeeper/lib/*:. org.zookeeper.demo.Demo
```

4.4 Stop one cluster member

```
~ docker stop zk1
```

4.5 Execute the code again; the client should fail over to the surviving members

```
~ java -cp /opt/zookeeper/zookeeper-3.4.6.jar:/opt/zookeeper/lib/*:. org.zookeeper.demo.Demo
```


#### 5. Delete the ZooKeeper Cluster when finished with it

```
docker rm -f zk1
docker rm -f zk2
docker rm -f zk3
```

**Pros**

1. Quick to start
2. Easy to support rolling updates

**Cons**

1. Logging and debugging are harder to handle
2. Performance loss due to I/O virtualization
3. The docker image may be a concern: should we build it from scratch, or use an image from Docker
Hub, which carries a potential security risk?

--------------------------------------------------------------------------------
/docker/docker_zookeeper.sh:
--------------------------------------------------------------------------------
#!/bin/bash
# Copyright (C) 2015 IBM, Inc.
# This script can be used to set up a ZooKeeper cluster
#
#--------------------------------------------------------------------------

docker run -d \
 --name=zk1 \
 --net=host \
 -v /data/zk1:/tmp/zookeeper \
 -e SERVER_ID=1 \
 -e ADDITIONAL_ZOOKEEPER_1=server.1=localhost:2888:3888 \
 -e ADDITIONAL_ZOOKEEPER_2=server.2=localhost:2889:3889 \
 -e ADDITIONAL_ZOOKEEPER_3=server.3=localhost:2890:3890 \
 -e ADDITIONAL_ZOOKEEPER_4=clientPort=2181 \
 garland/zookeeper

docker run -d \
 --name=zk2 \
 --net=host \
 -v /data/zk2:/tmp/zookeeper \
 -e SERVER_ID=2 \
 -e ADDITIONAL_ZOOKEEPER_1=server.1=localhost:2888:3888 \
 -e ADDITIONAL_ZOOKEEPER_2=server.2=localhost:2889:3889 \
 -e ADDITIONAL_ZOOKEEPER_3=server.3=localhost:2890:3890 \
 -e ADDITIONAL_ZOOKEEPER_4=clientPort=2182 \
 garland/zookeeper

docker run -d \
 --name=zk3 \
 --net=host \
 -v /data/zk3:/tmp/zookeeper \
 -e SERVER_ID=3 \
 -e ADDITIONAL_ZOOKEEPER_1=server.1=localhost:2888:3888 \
 -e ADDITIONAL_ZOOKEEPER_2=server.2=localhost:2889:3889 \
 -e ADDITIONAL_ZOOKEEPER_3=server.3=localhost:2890:3890 \
 -e ADDITIONAL_ZOOKEEPER_4=clientPort=2183 \
 garland/zookeeper

--------------------------------------------------------------------------------
/docker_compose.md:
--------------------------------------------------------------------------------
## ZooKeeper Cluster (on a single node) Setup with Docker Compose

-------------------

**Environment:**

- Red Hat Enterprise Linux Server release 6.6 (Santiago)

- Kernel: 2.6.32-504.23.4.el6.x86_64

- Docker: Server version 1.7.1
  Storage Driver devicemapper
  Execution Driver native-0.2

- Docker Compose: 1.4.2

- ZooKeeper: 3.4.6-1569965, built on 02/20/2014 09:09 GMT


-------------------

**[Docker Compose](https://docs.docker.com/compose/)** is an orchestration tool that makes spinning up multi-container applications effortless.
A `Dockerfile` lets the user define and manage a single container, while `Compose` lets the user define a whole group of containers and their dependencies in a single `yaml` file,
and bring them up or down with a single command.

##### 0. Install docker

Follow the [installation guide](https://docs.docker.com/installation/) to install docker on your system.

##### 1. Install Docker Compose 1.4.2

1.1 Download the binary with `curl` in your terminal.

The command has the following format:

```bash
curl -L https://github.com/docker/compose/releases/download/1.4.2/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
```

1.2 Apply executable permissions to the binary:

```bash
chmod +x /usr/local/bin/docker-compose
```

1.3 Test the installation

```bash
~ docker-compose --version
docker-compose version: 1.4.2
```
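If downloading the binary with `curl` is awkward in your environment, the same release is also published on PyPI; a possible alternative, assuming Python and pip are present:

```bash
pip install docker-compose==1.4.2
```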
##### 2. Define the ZooKeeper cluster in docker compose

2.1 Create `common.yml` to define the sections shared by all ZooKeeper cluster members

```
zookeeper:
  image: garland/zookeeper
  net: "host"
  environment:
    - ADDITIONAL_ZOOKEEPER_1=server.1=localhost:2888:3888
    - ADDITIONAL_ZOOKEEPER_2=server.2=localhost:2889:3889
    - ADDITIONAL_ZOOKEEPER_3=server.3=localhost:2890:3890
```

2.2 Create `docker-compose.yml` to define the cluster

```
zk1:
  extends:
    file: common.yml
    service: zookeeper
  volumes:
    - /data/zk1:/tmp/zookeeper
  environment:
    - SERVER_ID=1
    - ADDITIONAL_ZOOKEEPER_4=clientPort=2181
zk2:
  extends:
    file: common.yml
    service: zookeeper
  volumes:
    - /data/zk2:/tmp/zookeeper
  environment:
    - SERVER_ID=2
    - ADDITIONAL_ZOOKEEPER_4=clientPort=2182
zk3:
  extends:
    file: common.yml
    service: zookeeper
  volumes:
    - /data/zk3:/tmp/zookeeper
  environment:
    - SERVER_ID=3
    - ADDITIONAL_ZOOKEEPER_4=clientPort=2183
```

##### 3. Start the ZooKeeper Cluster with docker compose


3.1 Put `common.yml` and `docker-compose.yml` into a directory, for example

```bash
~ pwd
/zookeeper_compose
~ ls
common.yml  docker-compose.yml
```

3.2 Start the containers with docker compose


```bash
~ docker-compose up -d
Creating zookeepercompose_zk1_1...
Creating zookeepercompose_zk2_1...
Creating zookeepercompose_zk3_1...
```

3.3 Check the status

```bash
~ docker-compose ps
         Name           Command     State   Ports
----------------------------------------------------
zookeepercompose_zk1_1  /opt/run.sh   Up
zookeepercompose_zk2_1  /opt/run.sh   Up
zookeepercompose_zk3_1  /opt/run.sh   Up
```

##### 4. Test the ZooKeeper Cluster

4.1 Create a new znode in cluster member zookeepercompose_zk1_1

```
~ docker exec -it zookeepercompose_zk1_1 bash
~ /opt/zookeeper/bin/zkCli.sh -server 127.0.0.1:2181    (inside container zookeepercompose_zk1_1)
Connecting to 127.0.0.1:2181
[zk: 127.0.0.1:2181(CONNECTED) 0] create /test1 abc
Created /test1
```

4.2 Check the znode in cluster member zookeepercompose_zk2_1
```
~ docker exec -it zookeepercompose_zk2_1 bash
~ /opt/zookeeper/bin/zkCli.sh -server 127.0.0.1:2182    (inside container zookeepercompose_zk2_1)
Connecting to 127.0.0.1:2182
[zk: 127.0.0.1:2182(CONNECTED) 0] get /test1
abc
cZxid = 0x100000004
ctime = Tue Oct 13 14:44:04 UTC 2015
mZxid = 0x100000004
mtime = Tue Oct 13 14:44:04 UTC 2015
pZxid = 0x100000004
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 3
numChildren = 0
```
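To see which member won the leader election, one option is to grep each container's log for the quorum state; `LEADING` and `FOLLOWING` are standard ZooKeeper log markers, and the container names below assume the default compose project name shown above:

```bash
~ for c in zk1 zk2 zk3; do docker logs zookeepercompose_${c}_1 2>&1 | grep -m1 -E "LEADING|FOLLOWING"; done
```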
##### 5. Develop a java client for the ZooKeeper Cluster

5.1 Develop the code

```java
package org.zookeeper.demo;

import java.io.IOException;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs.Ids;
import org.apache.zookeeper.ZooKeeper;

public class Demo {

    public static void main(String[] args) throws IOException, KeeperException, InterruptedException {

        // connect to any available member of the ensemble
        ZooKeeper zk = new ZooKeeper("localhost:2181,localhost:2182,localhost:2183", 60000, new Watcher() {
            // monitor all the events
            public void process(WatchedEvent event) {
                System.out.println("EVENT:" + event.getType());
            }
        });

        String key = "/test";
        String value = "abc";

        // create the znode if it does not exist yet, otherwise read it back
        if (zk.exists(key, false) == null) {
            String createPath = zk.create(key, value.getBytes(),
                    Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
            System.out.println("Created znode:" + createPath);
        } else {
            byte[] data = zk.getData(key, false, null);
            System.out.print("Get znode:");
            System.out.printf("[%s,%s]", key, new String(data));
            System.out.println("");
        }

        // check the root znode
        System.out.println("ls / => " + zk.getChildren("/", true));

        zk.close();
    }
}
```


5.2 Compile the code

```
~ javac -cp .:/opt/zookeeper/zookeeper-3.4.6.jar org/zookeeper/demo/Demo.java
```

5.3 Execute the code

```
~ java -cp /opt/zookeeper/zookeeper-3.4.6.jar:/opt/zookeeper/lib/*:. org.zookeeper.demo.Demo
```

5.4 Stop one cluster member

```
~ docker-compose stop zk1
```

5.5 Execute the code again; the client should fail over to the surviving members

```
~ java -cp /opt/zookeeper/zookeeper-3.4.6.jar:/opt/zookeeper/lib/*:. org.zookeeper.demo.Demo
```

##### 6. Delete the ZooKeeper Cluster when finished with it

```
~ docker-compose stop
Stopping zookeepercompose_zk3_1... done
Stopping zookeepercompose_zk2_1... done
Stopping zookeepercompose_zk1_1... done

~ docker-compose rm -f
Going to remove zookeepercompose_zk3_1, zookeepercompose_zk2_1, zookeepercompose_zk1_1
Removing zookeepercompose_zk3_1... done
Removing zookeepercompose_zk2_1... done
Removing zookeepercompose_zk1_1... done
```


**Pros**

1. Easy to automate: deploy, start, stop and destroy with single commands
2. Portable

**Cons**

1. Has the same weaknesses as plain docker
2. At the time of writing, the Docker team does not recommend Compose for production use

--------------------------------------------------------------------------------
/dynamic_no_docker.md:
--------------------------------------------------------------------------------
### Dynamic ZooKeeper Cluster (on a single node) Setup without Docker

-------------------

**Environment:**

- Red Hat Enterprise Linux Server release 6.6 (Santiago)

- Java: JRE 1.7.0 IBM Linux build pxa6470_27sr3fp10-20150708_01(SR3 FP10)

- ZooKeeper: 3.5.1-Alpha

-------------------
##### 1. Install ZooKeeper

Download and extract the ZooKeeper package

```
~ cd /opt
~ wget -q -O - http://apache.arvixe.com/zookeeper/zookeeper-3.5.1-alpha/zookeeper-3.5.1-alpha.tar.gz | tar -xzf - -C /opt
~ mv /opt/zookeeper-3.5.1-alpha /opt/zookeeper
```

##### 2. Start the first cluster member zk1

2.1 Create the data directory for zk1

```
~ mkdir -p /root/zoo/zk1
```

2.2 Create a file named myid in the data directory

```
~ echo "1" > /root/zoo/zk1/myid
```

2.3 Create the static configuration file for zk1


```
~ vi /opt/zookeeper/conf/zk1.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/root/zoo/zk1
standaloneEnabled=false
dynamicConfigFile=/opt/zookeeper/conf/zk1.cfg.dynamic
```

2.4 Create the dynamic configuration file for zk1

```
~ vi /opt/zookeeper/conf/zk1.cfg.dynamic
server.1=127.0.0.1:2888:3888:participant;2181
```

2.5 Start zk1

```
~ /opt/zookeeper/bin/zkServer.sh start zk1.cfg
JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zk1.cfg
Starting zookeeper ... STARTED
```

##### 3. Add cluster member zk2

3.1 Create the data directory for zk2

```
~ mkdir -p /root/zoo/zk2
```

3.2 Create a file named myid in the data directory

```
~ echo "2" > /root/zoo/zk2/myid
```

3.3 Create the static configuration file for zk2

```
~ vi /opt/zookeeper/conf/zk2.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/root/zoo/zk2
standaloneEnabled=false
dynamicConfigFile=/opt/zookeeper/conf/zk2.cfg.dynamic
```

3.4 Dump the existing cluster configuration into zk2's dynamic configuration file

```
~ echo "`/opt/zookeeper/bin/zkCli.sh -server localhost:2181 get /zookeeper/config|grep ^server`" >> /opt/zookeeper/conf/zk2.cfg.dynamic

~ cat /opt/zookeeper/conf/zk2.cfg.dynamic
server.1=127.0.0.1:2888:3888:participant;0.0.0.0:2181
```

3.5 Add zk2's own entry (as an observer, for joining) to the existing dynamic configuration file

```
vi /opt/zookeeper/conf/zk2.cfg.dynamic
server.1=127.0.0.1:2888:3888:participant;0.0.0.0:2181
server.2=127.0.0.1:2889:3889:observer;0.0.0.0:2182
```

3.6 Start zk2

```
~ /opt/zookeeper/bin/zkServer.sh start zk2.cfg
JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zk2.cfg
Starting zookeeper ... STARTED
```

3.7 Reconfigure the ensemble to add zk2 as a participant

```
/opt/zookeeper/bin/zkCli.sh -server 127.0.0.1:2181 reconfig -add "server.2=127.0.0.1:2889:3889:participant;0.0.0.0:2182"
```


##### 4. Add cluster member zk3
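The procedure mirrors section 3, substituting server id 3, peer ports 2890/3890 and client port 2183; a sketch:

```
~ mkdir -p /root/zoo/zk3
~ echo "3" > /root/zoo/zk3/myid

~ vi /opt/zookeeper/conf/zk3.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/root/zoo/zk3
standaloneEnabled=false
dynamicConfigFile=/opt/zookeeper/conf/zk3.cfg.dynamic
```

Seed zk3's dynamic configuration file from the running ensemble, append zk3's own entry as an observer, then start it and promote it to a participant:

```
~ echo "`/opt/zookeeper/bin/zkCli.sh -server localhost:2181 get /zookeeper/config|grep ^server`" >> /opt/zookeeper/conf/zk3.cfg.dynamic
~ echo "server.3=127.0.0.1:2890:3890:observer;0.0.0.0:2183" >> /opt/zookeeper/conf/zk3.cfg.dynamic

~ /opt/zookeeper/bin/zkServer.sh start zk3.cfg

~ /opt/zookeeper/bin/zkCli.sh -server 127.0.0.1:2181 reconfig -add "server.3=127.0.0.1:2890:3890:participant;0.0.0.0:2183"
```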
##### 5. Test the ZooKeeper Cluster

5.1 Check the ZooKeeper servers' status, and find the leader and the followers in this cluster

```
~ /opt/zookeeper/bin/zkServer.sh status zk1.cfg
JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zk1.cfg
Mode: follower

~ /opt/zookeeper/bin/zkServer.sh status zk2.cfg
JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zk2.cfg
Mode: leader

~ /opt/zookeeper/bin/zkServer.sh status zk3.cfg
JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zk3.cfg
Mode: follower
```
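You can also confirm that the dynamic configuration now contains all three servers; the command below mirrors the one used in step 3.4, and the output should list entries similar to these:

```
~ /opt/zookeeper/bin/zkCli.sh -server 127.0.0.1:2181 get /zookeeper/config | grep ^server
server.1=127.0.0.1:2888:3888:participant;0.0.0.0:2181
server.2=127.0.0.1:2889:3889:participant;0.0.0.0:2182
server.3=127.0.0.1:2890:3890:participant;0.0.0.0:2183
```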
5.2 Create a new znode in cluster member zk1

```
~ /opt/zookeeper/bin/zkCli.sh -server 127.0.0.1:2181
Connecting to 127.0.0.1:2181
[zk: 127.0.0.1:2181(CONNECTED) 0] create /test1 abc
Created /test1
```

5.3 Check the znode in cluster member zk2
```
~ /opt/zookeeper/bin/zkCli.sh -server 127.0.0.1:2182
Connecting to 127.0.0.1:2182
[zk: 127.0.0.1:2182(CONNECTED) 0] get /test1
abc
cZxid = 0x200000002
ctime = Tue Sep 29 05:06:55 CDT 2015
mZxid = 0x200000002
mtime = Tue Sep 29 05:06:55 CDT 2015
pZxid = 0x200000002
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 3
numChildren = 0
```
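Since this cluster supports dynamic reconfiguration, members can also be removed at runtime; a hypothetical sketch that drops zk3 from the ensemble and then adds it back:

```
~ /opt/zookeeper/bin/zkCli.sh -server 127.0.0.1:2181 reconfig -remove 3
~ /opt/zookeeper/bin/zkCli.sh -server 127.0.0.1:2181 reconfig -add "server.3=127.0.0.1:2890:3890:participant;0.0.0.0:2183"
```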
##### 6. Develop a java client for the ZooKeeper Cluster

6.1 Develop the code

```java
package org.zookeeper.demo;

import java.io.IOException;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs.Ids;
import org.apache.zookeeper.ZooKeeper;

public class Demo {

    public static void main(String[] args) throws IOException, KeeperException, InterruptedException {

        // connect to any available member of the ensemble
        ZooKeeper zk = new ZooKeeper("localhost:2181,localhost:2182,localhost:2183", 60000, new Watcher() {
            // monitor all the events
            public void process(WatchedEvent event) {
                System.out.println("EVENT:" + event.getType());
            }
        });

        String key = "/test";
        String value = "abc";

        // create the znode if it does not exist yet, otherwise read it back
        if (zk.exists(key, false) == null) {
            String createPath = zk.create(key, value.getBytes(),
                    Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
            System.out.println("Created znode:" + createPath);
        } else {
            byte[] data = zk.getData(key, false, null);
            System.out.print("Get znode:");
            System.out.printf("[%s,%s]", key, new String(data));
            System.out.println("");
        }

        // check the root znode
        System.out.println("ls / => " + zk.getChildren("/", true));

        zk.close();
    }
}
```


6.2 Compile the code (note that this setup installed ZooKeeper 3.5.1-alpha)

```
~ javac -cp .:/opt/zookeeper/zookeeper-3.5.1-alpha.jar org/zookeeper/demo/Demo.java
```

6.3 Execute the code

```
~ java -cp /opt/zookeeper/zookeeper-3.5.1-alpha.jar:/opt/zookeeper/lib/*:. org.zookeeper.demo.Demo
```

6.4 Stop one cluster member

```
~ /opt/zookeeper/bin/zkServer.sh stop zk1.cfg
```

6.5 Execute the code again; the client should fail over to the surviving members

```
~ java -cp /opt/zookeeper/zookeeper-3.5.1-alpha.jar:/opt/zookeeper/lib/*:. org.zookeeper.demo.Demo
```

##### Compare with the docker solution

**Pros**

1. Compared with docker, better network and storage performance
2. Easy to debug

**Cons**

1. Can't support rolling upgrades
2. A static cluster can't add new members at runtime; ZooKeeper 3.5.0 supports dynamic clusters, as shown in this guide (not available for the docker solution)

--------------------------------------------------------------------------------
/java_client/demo/org/zookeeper/demo/Demo.java:
--------------------------------------------------------------------------------
package org.zookeeper.demo;

import java.io.IOException;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs.Ids;
import org.apache.zookeeper.ZooKeeper;

public class Demo {

    public static void main(String[] args) throws IOException, KeeperException, InterruptedException {

        // connect to any available member of the ensemble
        ZooKeeper zk = new ZooKeeper("localhost:2181,localhost:2182,localhost:2183", 60000, new Watcher() {
            // monitor all the events
            public void process(WatchedEvent event) {
                System.out.println("EVENT:" + event.getType());
            }
        });

        String key = "/test";
        String value = "abc";

        // create the znode if it does not exist yet, otherwise read it back
        if (zk.exists(key, false) == null) {
            String createPath = zk.create(key, value.getBytes(),
                    Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
            System.out.println("Created znode:" + createPath);
        } else {
            byte[] data = zk.getData(key, false, null);
            System.out.print("Get znode:");
            System.out.printf("[%s,%s]", key, new String(data));
            System.out.println("");
        }

        // check the root znode
        System.out.println("ls / => " + zk.getChildren("/", true));

        zk.close();
    }
}

--------------------------------------------------------------------------------
/no_docker.md:
--------------------------------------------------------------------------------
### ZooKeeper Cluster (on a single node) Setup without Docker

-------------------

**Environment:**

- Red Hat Enterprise Linux Server release 6.6 (Santiago)

- Java: JRE 1.7.0 IBM Linux build pxa6470_27sr3fp10-20150708_01(SR3 FP10)

- ZooKeeper: 3.4.6-1569965, built on 02/20/2014 09:09 GMT

-------------------

##### 1. Install ZooKeeper

Download and extract the ZooKeeper package

```
~ cd /opt
~ wget -q -O - http://apache.mirrors.pair.com/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz | tar -xzf - -C /opt
~ mv /opt/zookeeper-3.4.6 /opt/zookeeper
```
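ZooKeeper only needs a JRE at runtime; before continuing, it may be worth confirming that one is on the PATH and that the package landed where expected:

```
~ java -version
~ ls /opt/zookeeper/bin/zkServer.sh
```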
##### 2. Deploy a ZooKeeper Cluster with 3 members

2.1 Create a data directory for each cluster member: zk1, zk2 and zk3

```
~ mkdir -p /root/zoo/zk1
~ mkdir -p /root/zoo/zk2
~ mkdir -p /root/zoo/zk3
```

2.2 Create a file named myid in each data directory

```
~ echo "1" > /root/zoo/zk1/myid
~ echo "2" > /root/zoo/zk2/myid
~ echo "3" > /root/zoo/zk3/myid
```

2.3 Create the cluster members' configuration files one by one


```
~ vi /opt/zookeeper/conf/zk1.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/root/zoo/zk1
clientPort=2181
server.1=127.0.0.1:2888:3888
server.2=127.0.0.1:2889:3889
server.3=127.0.0.1:2890:3890


~ vi /opt/zookeeper/conf/zk2.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/root/zoo/zk2
clientPort=2182
server.1=127.0.0.1:2888:3888
server.2=127.0.0.1:2889:3889
server.3=127.0.0.1:2890:3890


~ vi /opt/zookeeper/conf/zk3.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/root/zoo/zk3
clientPort=2183
server.1=127.0.0.1:2888:3888
server.2=127.0.0.1:2889:3889
server.3=127.0.0.1:2890:3890
```

2.4 Start all the ZooKeeper servers

```
~ /opt/zookeeper/bin/zkServer.sh start zk1.cfg
JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zk1.cfg
Starting zookeeper ... STARTED


~ /opt/zookeeper/bin/zkServer.sh start zk2.cfg
JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zk2.cfg
Starting zookeeper ... STARTED


~ /opt/zookeeper/bin/zkServer.sh start zk3.cfg
JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zk3.cfg
Starting zookeeper ... STARTED
```


2.5 List the data directory structure

```
~ tree
.
├── zk1
│   ├── myid
│   └── version-2
│       ├── acceptedEpoch
│       ├── currentEpoch
│       ├── log.100000001
│       └── snapshot.0
├── zk2
│   ├── myid
│   └── version-2
│       ├── acceptedEpoch
│       ├── currentEpoch
│       └── log.100000001
└── zk3
    ├── myid
    └── version-2
        ├── acceptedEpoch
        ├── currentEpoch
        ├── log.100000001
        ├── snapshot.100000000
        └── zookeeper.out
```


##### 3. Test the ZooKeeper Cluster

3.1 Check the ZooKeeper servers' status, and find the leader and the followers in this cluster
```
~ /opt/zookeeper/bin/zkServer.sh status zk1.cfg
JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zk1.cfg
Mode: follower

~ /opt/zookeeper/bin/zkServer.sh status zk2.cfg
JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zk2.cfg
Mode: leader

~ /opt/zookeeper/bin/zkServer.sh status zk3.cfg
JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zk3.cfg
Mode: follower
```
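As an extra health check, each server should answer ZooKeeper's `ruok` four-letter command with `imok` (this assumes `nc` is installed):

```
~ for port in 2181 2182 2183; do echo ruok | nc 127.0.0.1 $port; echo; done
imok
imok
imok
```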
3.2 Create a new znode in cluster member zk1

```
~ /opt/zookeeper/bin/zkCli.sh -server 127.0.0.1:2181
Connecting to 127.0.0.1:2181
[zk: 127.0.0.1:2181(CONNECTED) 0] create /test1 abc
Created /test1
```

3.3 Check the znode in cluster member zk2
```
~ /opt/zookeeper/bin/zkCli.sh -server 127.0.0.1:2182
Connecting to 127.0.0.1:2182
[zk: 127.0.0.1:2182(CONNECTED) 0] get /test1
abc
cZxid = 0x200000002
ctime = Tue Sep 29 05:06:55 CDT 2015
mZxid = 0x200000002
mtime = Tue Sep 29 05:06:55 CDT 2015
pZxid = 0x200000002
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 3
numChildren = 0
```
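ZooKeeper only serves requests while a quorum (2 of the 3 members here) is alive; a quick hypothetical experiment to observe this, before restoring the ensemble:

```
# Stop two of the three members; the survivor loses quorum and stops
# serving requests, so the client below should report connection errors:
~ /opt/zookeeper/bin/zkServer.sh stop zk2.cfg
~ /opt/zookeeper/bin/zkServer.sh stop zk3.cfg
~ /opt/zookeeper/bin/zkCli.sh -server 127.0.0.1:2181 get /test1

# Bring the members back to restore the quorum:
~ /opt/zookeeper/bin/zkServer.sh start zk2.cfg
~ /opt/zookeeper/bin/zkServer.sh start zk3.cfg
```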
##### 4. Develop a java client for the ZooKeeper Cluster

4.1 Develop the code

```java
package org.zookeeper.demo;

import java.io.IOException;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs.Ids;
import org.apache.zookeeper.ZooKeeper;

public class Demo {

    public static void main(String[] args) throws IOException, KeeperException, InterruptedException {

        // connect to any available member of the ensemble
        ZooKeeper zk = new ZooKeeper("localhost:2181,localhost:2182,localhost:2183", 60000, new Watcher() {
            // monitor all the events
            public void process(WatchedEvent event) {
                System.out.println("EVENT:" + event.getType());
            }
        });

        String key = "/test";
        String value = "abc";

        // create the znode if it does not exist yet, otherwise read it back
        if (zk.exists(key, false) == null) {
            String createPath = zk.create(key, value.getBytes(),
                    Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
            System.out.println("Created znode:" + createPath);
        } else {
            byte[] data = zk.getData(key, false, null);
            System.out.print("Get znode:");
            System.out.printf("[%s,%s]", key, new String(data));
            System.out.println("");
        }

        // check the root znode
        System.out.println("ls / => " + zk.getChildren("/", true));

        zk.close();
    }
}
```


4.2 Compile the code

```
~ javac -cp .:/opt/zookeeper/zookeeper-3.4.6.jar org/zookeeper/demo/Demo.java
```

4.3 Execute the code

```
~ java -cp /opt/zookeeper/zookeeper-3.4.6.jar:/opt/zookeeper/lib/*:. org.zookeeper.demo.Demo
```

4.4 Stop one cluster member

```
~ /opt/zookeeper/bin/zkServer.sh stop zk1.cfg
```

4.5 Execute the code again; the client should fail over to the surviving members

```
~ java -cp /opt/zookeeper/zookeeper-3.4.6.jar:/opt/zookeeper/lib/*:. org.zookeeper.demo.Demo
```

##### Compare with the docker solution

**Pros**

1. Compared with docker, better network and storage performance
2. Easy to debug

**Cons**

1. Can't support rolling upgrades
2. A static cluster can't add new members at runtime; ZooKeeper 3.5.0 supports dynamic clusters (not available for the docker solution)

--------------------------------------------------------------------------------