├── LICENSE ├── README.md ├── banner └── spark_gce.py /LICENSE: -------------------------------------------------------------------------------- 1 | 2 | Apache License 3 | Version 2.0, January 2004 4 | http://www.apache.org/licenses/ 5 | 6 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 7 | 8 | 1. Definitions. 9 | 10 | "License" shall mean the terms and conditions for use, reproduction, 11 | and distribution as defined by Sections 1 through 9 of this document. 12 | 13 | "Licensor" shall mean the copyright owner or entity authorized by 14 | the copyright owner that is granting the License. 15 | 16 | "Legal Entity" shall mean the union of the acting entity and all 17 | other entities that control, are controlled by, or are under common 18 | control with that entity. For the purposes of this definition, 19 | "control" means (i) the power, direct or indirect, to cause the 20 | direction or management of such entity, whether by contract or 21 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 22 | outstanding shares, or (iii) beneficial ownership of such entity. 23 | 24 | "You" (or "Your") shall mean an individual or Legal Entity 25 | exercising permissions granted by this License. 26 | 27 | "Source" form shall mean the preferred form for making modifications, 28 | including but not limited to software source code, documentation 29 | source, and configuration files. 30 | 31 | "Object" form shall mean any form resulting from mechanical 32 | transformation or translation of a Source form, including but 33 | not limited to compiled object code, generated documentation, 34 | and conversions to other media types. 35 | 36 | "Work" shall mean the work of authorship, whether in Source or 37 | Object form, made available under the License, as indicated by a 38 | copyright notice that is included in or attached to the work 39 | (an example is provided in the Appendix below). 40 | 41 | "Derivative Works" shall mean any work, whether in Source or Object 42 | form, that is based on (or derived from) the Work and for which the 43 | editorial revisions, annotations, elaborations, or other modifications 44 | represent, as a whole, an original work of authorship. For the purposes 45 | of this License, Derivative Works shall not include works that remain 46 | separable from, or merely link (or bind by name) to the interfaces of, 47 | the Work and Derivative Works thereof. 48 | 49 | "Contribution" shall mean any work of authorship, including 50 | the original version of the Work and any modifications or additions 51 | to that Work or Derivative Works thereof, that is intentionally 52 | submitted to Licensor for inclusion in the Work by the copyright owner 53 | or by an individual or Legal Entity authorized to submit on behalf of 54 | the copyright owner. For the purposes of this definition, "submitted" 55 | means any form of electronic, verbal, or written communication sent 56 | to the Licensor or its representatives, including but not limited to 57 | communication on electronic mailing lists, source code control systems, 58 | and issue tracking systems that are managed by, or on behalf of, the 59 | Licensor for the purpose of discussing and improving the Work, but 60 | excluding communication that is conspicuously marked or otherwise 61 | designated in writing by the copyright owner as "Not a Contribution." 62 | 63 | "Contributor" shall mean Licensor and any individual or Legal Entity 64 | on behalf of whom a Contribution has been received by Licensor and 65 | subsequently incorporated within the Work. 66 | 67 | 2. Grant of Copyright License. Subject to the terms and conditions of 68 | this License, each Contributor hereby grants to You a perpetual, 69 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 70 | copyright license to reproduce, prepare Derivative Works of, 71 | publicly display, publicly perform, sublicense, and distribute the 72 | Work and such Derivative Works in Source or Object form. 73 | 74 | 3. Grant of Patent License. Subject to the terms and conditions of 75 | this License, each Contributor hereby grants to You a perpetual, 76 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 77 | (except as stated in this section) patent license to make, have made, 78 | use, offer to sell, sell, import, and otherwise transfer the Work, 79 | where such license applies only to those patent claims licensable 80 | by such Contributor that are necessarily infringed by their 81 | Contribution(s) alone or by combination of their Contribution(s) 82 | with the Work to which such Contribution(s) was submitted. If You 83 | institute patent litigation against any entity (including a 84 | cross-claim or counterclaim in a lawsuit) alleging that the Work 85 | or a Contribution incorporated within the Work constitutes direct 86 | or contributory patent infringement, then any patent licenses 87 | granted to You under this License for that Work shall terminate 88 | as of the date such litigation is filed. 89 | 90 | 4. Redistribution. You may reproduce and distribute copies of the 91 | Work or Derivative Works thereof in any medium, with or without 92 | modifications, and in Source or Object form, provided that You 93 | meet the following conditions: 94 | 95 | (a) You must give any other recipients of the Work or 96 | Derivative Works a copy of this License; and 97 | 98 | (b) You must cause any modified files to carry prominent notices 99 | stating that You changed the files; and 100 | 101 | (c) You must retain, in the Source form of any Derivative Works 102 | that You distribute, all copyright, patent, trademark, and 103 | attribution notices from the Source form of the Work, 104 | excluding those notices that do not pertain to any part of 105 | the Derivative Works; and 106 | 107 | (d) If the Work includes a "NOTICE" text file as part of its 108 | distribution, then any Derivative Works that You distribute must 109 | include a readable copy of the attribution notices contained 110 | within such NOTICE file, excluding those notices that do not 111 | pertain to any part of the Derivative Works, in at least one 112 | of the following places: within a NOTICE text file distributed 113 | as part of the Derivative Works; within the Source form or 114 | documentation, if provided along with the Derivative Works; or, 115 | within a display generated by the Derivative Works, if and 116 | wherever such third-party notices normally appear. The contents 117 | of the NOTICE file are for informational purposes only and 118 | do not modify the License. You may add Your own attribution 119 | notices within Derivative Works that You distribute, alongside 120 | or as an addendum to the NOTICE text from the Work, provided 121 | that such additional attribution notices cannot be construed 122 | as modifying the License. 123 | 124 | You may add Your own copyright statement to Your modifications and 125 | may provide additional or different license terms and conditions 126 | for use, reproduction, or distribution of Your modifications, or 127 | for any such Derivative Works as a whole, provided Your use, 128 | reproduction, and distribution of the Work otherwise complies with 129 | the conditions stated in this License. 130 | 131 | 5. Submission of Contributions. Unless You explicitly state otherwise, 132 | any Contribution intentionally submitted for inclusion in the Work 133 | by You to the Licensor shall be under the terms and conditions of 134 | this License, without any additional terms or conditions. 135 | Notwithstanding the above, nothing herein shall supersede or modify 136 | the terms of any separate license agreement you may have executed 137 | with Licensor regarding such Contributions. 138 | 139 | 6. Trademarks. This License does not grant permission to use the trade 140 | names, trademarks, service marks, or product names of the Licensor, 141 | except as required for reasonable and customary use in describing the 142 | origin of the Work and reproducing the content of the NOTICE file. 143 | 144 | 7. Disclaimer of Warranty. Unless required by applicable law or 145 | agreed to in writing, Licensor provides the Work (and each 146 | Contributor provides its Contributions) on an "AS IS" BASIS, 147 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 148 | implied, including, without limitation, any warranties or conditions 149 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 150 | PARTICULAR PURPOSE. You are solely responsible for determining the 151 | appropriateness of using or redistributing the Work and assume any 152 | risks associated with Your exercise of permissions under this License. 153 | 154 | 8. Limitation of Liability. In no event and under no legal theory, 155 | whether in tort (including negligence), contract, or otherwise, 156 | unless required by applicable law (such as deliberate and grossly 157 | negligent acts) or agreed to in writing, shall any Contributor be 158 | liable to You for damages, including any direct, indirect, special, 159 | incidental, or consequential damages of any character arising as a 160 | result of this License or out of the use or inability to use the 161 | Work (including but not limited to damages for loss of goodwill, 162 | work stoppage, computer failure or malfunction, or any and all 163 | other commercial damages or losses), even if such Contributor 164 | has been advised of the possibility of such damages. 165 | 166 | 9. Accepting Warranty or Additional Liability. While redistributing 167 | the Work or Derivative Works thereof, You may choose to offer, 168 | and charge a fee for, acceptance of support, warranty, indemnity, 169 | or other liability obligations and/or rights consistent with this 170 | License. However, in accepting such obligations, You may act only 171 | on Your own behalf and on Your sole responsibility, not on behalf 172 | of any other Contributor, and only if You agree to indemnify, 173 | defend, and hold each Contributor harmless for any liability 174 | incurred by, or claims asserted against, such Contributor by reason 175 | of your accepting any such warranty or additional liability. 176 | 177 | END OF TERMS AND CONDITIONS 178 | 179 | APPENDIX: How to apply the Apache License to your work. 180 | 181 | To apply the Apache License to your work, attach the following 182 | boilerplate notice, with the fields enclosed by brackets "[]" 183 | replaced with your own identifying information. (Don't include 184 | the brackets!) The text should be enclosed in the appropriate 185 | comment syntax for the file format. We also recommend that a 186 | file or class name and description of purpose be included on the 187 | same "printed page" as the copyright notice for easier 188 | identification within third-party archives. 189 | 190 | Copyright [yyyy] [name of copyright owner] 191 | 192 | Licensed under the Apache License, Version 2.0 (the "License"); 193 | you may not use this file except in compliance with the License. 194 | You may obtain a copy of the License at 195 | 196 | http://www.apache.org/licenses/LICENSE-2.0 197 | 198 | Unless required by applicable law or agreed to in writing, software 199 | distributed under the License is distributed on an "AS IS" BASIS, 200 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 201 | See the License for the specific language governing permissions and 202 | limitations under the License. 203 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | Spark GCE 2 | ========= 3 | 4 | Spark GCE is like Spark Ec2 but for those who run their cluster on Google Cloud. 5 | 6 | - Make Sure you have installed and authenticated gcutils where you are running this script. 7 | - Helps you launch a spark cluster in the Google Cloud 8 | - Attaches 500GB empty disk to all nodes in the cluster 9 | - Installs and Configures everything Automatically 10 | - Starts the Shark server Automatically 11 | 12 | Spark GCE is a python script which will help you launch a spark cluster in the google cloud like the way spark_ec2 script does for AWS. 13 | 14 | Usage 15 | ----- 16 | 17 | > ***spark_gce.py project-name number-of-slaves slave-type master-type identity-file zone cluster-name*** 18 | > 19 | >> 20 | >> - **project-id**: Project ID of the project where you are going to launch your spark cluster. 21 | >> 22 | >> - **number-of-slave**: Number of slaves that you want to launch. 23 | >> 24 | >> - **slave-type**: Instance type for the slave machines. 25 | >> 26 | >> - **master-type**: Instance type for the master node. 27 | >> 28 | >> - **identity-file**: Identity file to authenticate with your GCE instances, Usually resides at *~/.ssh/google_compute_engine* once you authenticate using gcutils. 29 | >> 30 | >> - **zone:** Specify the zone where you are going to launch the cluster. 31 | >> 32 | >> - **cluster-name**: Name the cluster that you are going to launch. 33 | >> 34 | > 35 | > ***spark_gce.py project-name cluster-name destroy*** 36 | > 37 | >> - **project-id:** Project id of the project where the spark cluster is at. 38 | >> - **cluster-name:** Name of the cluster that you are going to destroy. 39 | 40 | 41 | Installation 42 | -------------- 43 | 44 | ```sh 45 | git clone https://github.com/sigmoidanalytics/spark_gce.git 46 | cd spark_gce 47 | python spark_gce.py 48 | ``` 49 | 50 | 51 | Need Help? 52 | ------------- 53 | - Drop us an email: mayur@sigmoidanalytics.com 54 | - Read Our Blog: http://www.sigmoidanalytics.com/spark-gce/ 55 | - Read Our Wiki: http://docs.sigmoidanalytics.com/index.php/SparkGCE 56 | -------------------------------------------------------------------------------- /banner: -------------------------------------------------------------------------------- 1 | __ _ _ _ _ _ _ _ 2 | / _(_) __ _ _ __ ___ ___ (_) __| | /_\ _ __ __ _| |_ _| |_(_) ___ ___ 3 | \ \| |/ _` | '_ ` _ \ / _ \| |/ _` |//_\\| '_ \ / _` | | | | | __| |/ __/ __| 4 | _\ \ | (_| | | | | | | (_) | | (_| / _ \ | | | (_| | | |_| | |_| | (__\__ \ 5 | \__/_|\__, |_| |_| |_|\___/|_|\__,_\_/ \_/_| |_|\__,_|_|\__, |\__|_|\___|___/ 6 | |___/ SigmoidAnalytics.com |___/ 7 | 8 | -------------------------------------------------------------------------------- /spark_gce.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | ### 4 | # This script sets up a Spark cluster on Google Compute Engine 5 | # Sigmoidanalytics.com 6 | ### 7 | 8 | from __future__ import with_statement 9 | 10 | import logging 11 | import os 12 | import pipes 13 | import random 14 | import shutil 15 | import subprocess 16 | import sys 17 | import tempfile 18 | import time 19 | import commands 20 | import urllib2 21 | from optparse import OptionParser 22 | from sys import stderr 23 | import shlex 24 | import getpass 25 | import threading 26 | import json 27 | 28 | ### 29 | # Make sure gcutil is installed and authenticated 30 | # Usage: spark_gce.py 31 | # Usage: spark_gce.py destroy 32 | ### 33 | 34 | identity_file = "" 35 | slave_no = "" 36 | slave_type = "" 37 | master_type = "" 38 | zone = "" 39 | cluster_name = "" 40 | username = "" 41 | project = "" 42 | 43 | 44 | def read_args(): 45 | 46 | global identity_file 47 | global slave_no 48 | global slave_type 49 | global master_type 50 | global zone 51 | global cluster_name 52 | global username 53 | global project 54 | 55 | if len(sys.argv) == 8: 56 | project = sys.argv[1] 57 | slave_no = int(sys.argv[2]) 58 | slave_type = sys.argv[3] 59 | master_type = sys.argv[4] 60 | identity_file = sys.argv[5] 61 | zone = sys.argv[6] 62 | cluster_name = sys.argv[7] 63 | username = getpass.getuser() 64 | 65 | elif len(sys.argv) == 4 and sys.argv[3].lower() == "destroy": 66 | 67 | print 'Destroying cluster ' + sys.argv[2] 68 | 69 | project = sys.argv[1] 70 | cluster_name = sys.argv[2] 71 | try: 72 | 73 | command = 'gcloud compute --project ' + project + ' instances list --format json' 74 | output = subprocess.check_output(command, shell=True) 75 | data = json.loads(output) 76 | master_nodes=[] 77 | slave_nodes=[] 78 | 79 | for instance in data: 80 | 81 | try: 82 | host_name = instance['name'] 83 | host_ip = instance['networkInterfaces'][0]['accessConfigs'][0]['natIP'] 84 | if host_name == cluster_name + '-master': 85 | command = 'gcloud compute instances delete ' + host_name + ' --project ' + project 86 | command = shlex.split(command) 87 | subprocess.call(command) 88 | elif cluster_name + '-slave' in host_name: 89 | command = 'gcloud compute instances delete ' + host_name + ' --project ' + project 90 | command = shlex.split(command) 91 | subprocess.call(command) 92 | 93 | except: 94 | pass 95 | 96 | except: 97 | print "Failed to Delete instances" 98 | sys.exit(1) 99 | 100 | sys.exit(0) 101 | 102 | else: 103 | print '# Usage: spark_gce.py ' 104 | print '# Usage: spark_gce.py destroy' 105 | sys.exit(0) 106 | 107 | 108 | 109 | def setup_network(): 110 | 111 | print '[ Setting up Network & Firewall Entries ]' 112 | 113 | try: 114 | command = 'gcloud compute --project=' + project + ' networks create "' + cluster_name + '-network" --range "10.240.0.0/16"' 115 | 116 | command = shlex.split(command) 117 | subprocess.call(command) 118 | 119 | #Uncomment the above and comment the below section if you don't want to open all ports for public. 120 | command = 'gcloud compute firewall-rules delete internal --project '+ project 121 | command = 'gcloud compute firewall-rules create internal --network ' + cluster_name + '-network --allow tcp udp icmp --project '+ project 122 | command = shlex.split(command) 123 | subprocess.call(command) 124 | 125 | except OSError: 126 | print "Failed to setup Network & Firewall. Exiting.." 127 | sys.exit(1) 128 | 129 | 130 | def launch_master(): 131 | 132 | print '[ Launching Master ]' 133 | command = 'gcloud compute --project "' + project + '" instances create "' + cluster_name + '-master" --zone "' + zone + '" --machine-type "' + master_type + '" --network "' + cluster_name + '-network" --maintenance-policy "MIGRATE" --scopes "https://www.googleapis.com/auth/devstorage.read_only" --image "https://www.googleapis.com/compute/v1/projects/centos-cloud/global/images/centos-6-v20141218" --boot-disk-type "pd-standard" --boot-disk-device-name "' + cluster_name + '-md"' 134 | 135 | command = shlex.split(command) 136 | subprocess.call(command) 137 | 138 | 139 | def launch_slaves(): 140 | 141 | print '[ Launching Slaves ]' 142 | 143 | for s_id in range(1,slave_no+1): 144 | 145 | command = 'gcloud compute --project "' + project + '" instances create "' + cluster_name + '-slave' + str(s_id) + '" --zone "' + zone + '" --machine-type "' + slave_type + '" --network "' + cluster_name + '-network" --maintenance-policy "MIGRATE" --scopes "https://www.googleapis.com/auth/devstorage.read_only" --image "https://www.googleapis.com/compute/v1/projects/centos-cloud/global/images/centos-6-v20141218" --boot-disk-type "pd-standard" --boot-disk-device-name "' + cluster_name + '-s' + str(s_id) + 'd"' 146 | 147 | command = shlex.split(command) 148 | subprocess.call(command) 149 | 150 | def launch_cluster(): 151 | 152 | print '[ Creating the Cluster ]' 153 | 154 | setup_network() 155 | 156 | launch_master() 157 | 158 | launch_slaves() 159 | 160 | def check_gcloud(): 161 | 162 | myexec = "gcloud" 163 | print '[ Verifying gcloud ]' 164 | try: 165 | subprocess.call([myexec, 'info']) 166 | 167 | except OSError: 168 | print "%s executable not found. \n# Make sure gcloud is installed and authenticated\nPlease follow https://cloud.google.com/compute/docs/gcloud-compute/" % myexec 169 | sys.exit(1) 170 | 171 | def get_cluster_ips(): 172 | 173 | command = 'gcloud compute --project ' + project + ' instances list --format json' 174 | output = subprocess.check_output(command, shell=True) 175 | data = json.loads(output) 176 | master_nodes=[] 177 | slave_nodes=[] 178 | 179 | for instance in data: 180 | 181 | try: 182 | host_name = instance['name'] 183 | host_ip = instance['networkInterfaces'][0]['accessConfigs'][0]['natIP'] 184 | if host_name == cluster_name + '-master': 185 | master_nodes.append(host_ip) 186 | elif cluster_name + '-slave' in host_name: 187 | slave_nodes.append(host_ip) 188 | 189 | except: 190 | pass 191 | 192 | # Return all the instances 193 | return (master_nodes, slave_nodes) 194 | 195 | def enable_sudo(master,command): 196 | ''' 197 | ssh_command(master,"echo \"import os\" > setuid.py ") 198 | ssh_command(master,"echo \"import sys\" >> setuid.py") 199 | ssh_command(master,"echo \"import commands\" >> setuid.py") 200 | ssh_command(master,"echo \"command=sys.argv[1]\" >> setuid.py") 201 | ssh_command(master,"echo \"os.setuid(os.geteuid())\" >> setuid.py") 202 | ssh_command(master,"echo \"print commands.getstatusoutput(\"command\")\" >> setuid.py") 203 | ''' 204 | os.system("ssh -i " + identity_file + " -t -o 'UserKnownHostsFile=/dev/null' -o 'CheckHostIP=no' -o 'StrictHostKeyChecking no' "+ username + "@" + master + " '" + command + "'") 205 | 206 | def ssh_thread(host,command): 207 | 208 | enable_sudo(host,command) 209 | 210 | def install_java(master_nodes,slave_nodes): 211 | 212 | print '[ Installing Java and Development Tools ]' 213 | master = master_nodes[0] 214 | 215 | master_thread = threading.Thread(target=ssh_thread, args=(master,"sudo yum install -y java-1.7.0-openjdk;sudo yum install -y java-1.7.0-openjdk-devel;sudo yum groupinstall \'Development Tools\' -y")) 216 | master_thread.start() 217 | 218 | #ssh_thread(master,"sudo yum install -y java-1.7.0-openjdk") 219 | for slave in slave_nodes: 220 | 221 | slave_thread = threading.Thread(target=ssh_thread, args=(slave,"sudo yum install -y java-1.7.0-openjdk;sudo yum install -y java-1.7.0-openjdk-devel;sudo yum groupinstall \'Development Tools\' -y")) 222 | slave_thread.start() 223 | 224 | #ssh_thread(slave,"sudo yum install -y java-1.7.0-openjdk") 225 | 226 | slave_thread.join() 227 | master_thread.join() 228 | 229 | 230 | def ssh_command(host,command): 231 | 232 | #print "ssh -i " + identity_file + " -o 'UserKnownHostsFile=/dev/null' -o 'CheckHostIP=no' -o 'StrictHostKeyChecking no' "+ username + "@" + host + " '" + command + "'" 233 | commands.getstatusoutput("ssh -i " + identity_file + " -o 'UserKnownHostsFile=/dev/null' -o 'CheckHostIP=no' -o 'StrictHostKeyChecking no' "+ username + "@" + host + " '" + command + "'" ) 234 | 235 | 236 | def deploy_keys(master_nodes,slave_nodes): 237 | 238 | print '[ Generating SSH Keys on Master ]' 239 | key_file = os.path.basename(identity_file) 240 | master = master_nodes[0] 241 | ssh_command(master,"ssh-keygen -q -t rsa -N \"\" -f ~/.ssh/id_rsa") 242 | ssh_command(master,"cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys") 243 | os.system("scp -i " + identity_file + " -oUserKnownHostsFile=/dev/null -oCheckHostIP=no -oStrictHostKeyChecking=no -o 'StrictHostKeyChecking no' "+ identity_file + " " + username + "@" + master + ":") 244 | ssh_command(master,"chmod 600 " + key_file) 245 | ssh_command(master,"tar czf .ssh.tgz .ssh") 246 | 247 | ssh_command(master,"ssh-keyscan -H $(/sbin/ifconfig eth0 | grep \"inet addr:\" | cut -d: -f2 | cut -d\" \" -f1) >> ~/.ssh/known_hosts") 248 | ssh_command(master,"ssh-keyscan -H $(cat /etc/hosts | grep $(/sbin/ifconfig eth0 | grep \"inet addr:\" | cut -d: -f2 | cut -d\" \" -f1) | cut -d\" \" -f2) >> ~/.ssh/known_hosts") 249 | 250 | print '[ Transfering SSH keys to slaves ]' 251 | for slave in slave_nodes: 252 | print commands.getstatusoutput("ssh -i " + identity_file + " -oUserKnownHostsFile=/dev/null -oCheckHostIP=no -oStrictHostKeyChecking=no " + username + "@" + master + " 'scp -i " + key_file + " -oStrictHostKeyChecking=no .ssh.tgz " + username +"@" + slave + ":'") 253 | ssh_command(slave,"tar xzf .ssh.tgz") 254 | ssh_command(master,"ssh-keyscan -H " + slave + " >> ~/.ssh/known_hosts") 255 | ssh_command(slave,"ssh-keyscan -H $(cat /etc/hosts | grep $(/sbin/ifconfig eth0 | grep \"inet addr:\" | cut -d: -f2 | cut -d\" \" -f1) | cut -d\" \" -f2) >> ~/.ssh/known_hosts") 256 | ssh_command(slave,"ssh-keyscan -H $(/sbin/ifconfig eth0 | grep \"inet addr:\" | cut -d: -f2 | cut -d\" \" -f1) >> ~/.ssh/known_hosts") 257 | 258 | 259 | 260 | def attach_drive(master_nodes,slave_nodes): 261 | 262 | print '[ Adding new 500GB drive on Master ]' 263 | master = master_nodes[0] 264 | 265 | command='gcloud compute --project="' + project + '" disks create "' + cluster_name + '-m-disk" --size 500GB --type "pd-standard" --zone ' + zone 266 | 267 | command = shlex.split(command) 268 | subprocess.call(command) 269 | 270 | command = 'gcloud compute --project="' + project + '" instances attach-disk ' + cluster_name + '-master --device-name "' + cluster_name + '-m-disk" --disk ' + cluster_name + '-m-disk --zone ' + zone 271 | command = shlex.split(command) 272 | subprocess.call(command) 273 | 274 | master_thread = threading.Thread(target=ssh_thread, args=(master,"sudo mkfs.ext3 /dev/disk/by-id/google-"+ cluster_name + "-m-disk " + " -F < /dev/null")) 275 | master_thread.start() 276 | 277 | print '[ Adding new 500GB drive on Slaves ]' 278 | 279 | i = 1 280 | for slave in slave_nodes: 281 | 282 | master = slave 283 | 284 | command='gcloud compute --project="' + project + '" disks create "' + cluster_name + '-s' + str(i) + '-disk" --size 500GB --type "pd-standard" --zone ' + zone 285 | 286 | command = shlex.split(command) 287 | subprocess.call(command) 288 | 289 | command = 'gcloud compute --project="' + project + '" instances attach-disk ' + cluster_name + '-slave' + str(i) + ' --disk ' + cluster_name + '-s' + str(i) + '-disk --device-name "' + cluster_name + '-s' + str(i) + '-disk" --zone ' + zone 290 | 291 | command = shlex.split(command) 292 | subprocess.call(command) 293 | slave_thread = threading.Thread(target=ssh_thread, args=(slave,"sudo mkfs.ext3 /dev/disk/by-id/google-" + cluster_name + "-s" + str(i) + "-disk -F < /dev/null")) 294 | slave_thread.start() 295 | i=i+1 296 | 297 | slave_thread.join() 298 | master_thread.join() 299 | 300 | print '[ Mounting new Volume ]' 301 | enable_sudo(master_nodes[0],"sudo mount /dev/disk/by-id/google-"+ cluster_name + "-m-disk /mnt") 302 | enable_sudo(master_nodes[0],"sudo chown " + username + ":" + username + " /mnt") 303 | i=1 304 | for slave in slave_nodes: 305 | enable_sudo(slave,"sudo mount /dev/disk/by-id/google-"+ cluster_name + "-s" + str(i) +"-disk /mnt") 306 | enable_sudo(slave,"sudo chown " + username + ":" + username + " /mnt") 307 | i=i+1 308 | 309 | print '[ All volumns mounted, will be available at /mnt ]' 310 | 311 | def setup_spark(master_nodes,slave_nodes): 312 | 313 | print '[ Downloading Binaries ]' 314 | 315 | master = master_nodes[0] 316 | 317 | ssh_command(master,"rm -fr sigmoid") 318 | ssh_command(master,"mkdir sigmoid") 319 | ssh_command(master,"cd sigmoid;wget https://s3.amazonaws.com/sigmoidanalytics-builds/spark/1.2.0/spark-1.2.0-bin-cdh4.tgz") 320 | ssh_command(master,"cd sigmoid;wget https://s3.amazonaws.com/sigmoidanalytics-builds/spark/0.9.1/gce/scala.tgz") 321 | ssh_command(master,"cd sigmoid;tar zxf spark-1.2.0-bin-cdh4.tgz;rm spark-1.2.0-bin-cdh4.tgz") 322 | ssh_command(master,"cd sigmoid;tar zxf scala.tgz;rm scala.tgz") 323 | 324 | 325 | print '[ Updating Spark Configurations ]' 326 | ssh_command(master,"cd sigmoid;cd spark-1.2.0-bin-cdh4/conf;cp spark-env.sh.template spark-env.sh") 327 | ssh_command(master,"cd sigmoid;cd spark-1.2.0-bin-cdh4/conf;echo 'export SCALA_HOME=\"/home/`whoami`/sigmoid/scala\"' >> spark-env.sh") 328 | ssh_command(master,"cd sigmoid;cd spark-1.2.0-bin-cdh4/conf;echo 'export SPARK_MEM=2454m' >> spark-env.sh") 329 | ssh_command(master,"cd sigmoid;cd spark-1.2.0-bin-cdh4/conf;echo \"SPARK_JAVA_OPTS+=\\\" -Dspark.local.dir=/mnt/spark \\\"\" >> spark-env.sh") 330 | ssh_command(master,"cd sigmoid;cd spark-1.2.0-bin-cdh4/conf;echo 'export SPARK_JAVA_OPTS' >> spark-env.sh") 331 | ssh_command(master,"cd sigmoid;cd spark-1.2.0-bin-cdh4/conf;echo 'export SPARK_MASTER_IP=PUT_MASTER_IP_HERE' >> spark-env.sh") 332 | ssh_command(master,"cd sigmoid;cd spark-1.2.0-bin-cdh4/conf;echo 'export MASTER=spark://PUT_MASTER_IP_HERE:7077' >> spark-env.sh") 333 | ssh_command(master,"cd sigmoid;cd spark-1.2.0-bin-cdh4/conf;echo 'export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk.x86_64' >> spark-env.sh") 334 | 335 | 336 | for slave in slave_nodes: 337 | ssh_command(master,"echo " + slave + " >> sigmoid/spark-1.2.0-bin-cdh4/conf/slaves") 338 | 339 | 340 | ssh_command(master,"sed -i \"s/PUT_MASTER_IP_HERE/$(/sbin/ifconfig eth0 | grep \"inet addr:\" | cut -d: -f2 | cut -d\" \" -f1)/g\" sigmoid/spark-1.2.0-bin-cdh4/conf/spark-env.sh") 341 | 342 | ssh_command(master,"chmod +x sigmoid/spark-1.2.0-bin-cdh4/conf/spark-env.sh") 343 | 344 | print '[ Rsyncing Spark to all slaves ]' 345 | 346 | #Change permissions 347 | enable_sudo(master,"sudo chown " + username + ":" + username + " /mnt") 348 | i=1 349 | for slave in slave_nodes: 350 | enable_sudo(slave,"sudo chown " + username + ":" + username + " /mnt") 351 | 352 | 353 | for slave in slave_nodes: 354 | ssh_command(master,"rsync -za /home/" + username + "/sigmoid " + slave + ":") 355 | ssh_command(slave,"mkdir /mnt/spark") 356 | 357 | ssh_command(master,"mkdir /mnt/spark") 358 | print '[ Starting Spark Cluster ]' 359 | ssh_command(master,"sigmoid/spark-1.2.0-bin-cdh4/sbin/start-all.sh") 360 | 361 | 362 | #setup_shark(master_nodes,slave_nodes) 363 | 364 | setup_hadoop(master_nodes,slave_nodes) 365 | 366 | 367 | print "\n\nSpark Master Started, WebUI available at : http://" + master + ":8080" 368 | 369 | def setup_hadoop(master_nodes,slave_nodes): 370 | 371 | master = master_nodes[0] 372 | print '[ Downloading hadoop ]' 373 | 374 | ssh_command(master,"cd sigmoid;wget https://s3.amazonaws.com/sigmoidanalytics-builds/hadoop/hadoop-2.0.0-cdh4.2.0.tar.gz") 375 | ssh_command(master,"cd sigmoid;tar zxf hadoop-2.0.0-cdh4.2.0.tar.gz") 376 | ssh_command(master,"cd sigmoid;rm hadoop-2.0.0-cdh4.2.0.tar.gz") 377 | 378 | print '[ Configuring Hadoop ]' 379 | 380 | #Configure .bashrc 381 | ssh_command(master,"echo '#HADOOP_CONFS' >> .bashrc") 382 | ssh_command(master,"echo 'export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk.x86_64' >> .bashrc") 383 | ssh_command(master,"echo 'export HADOOP_INSTALL=/home/`whoami`/sigmoid/hadoop-2.0.0-cdh4.2.0' >> .bashrc") 384 | ssh_command(master,"echo 'export PATH=$PATH:\$HADOOP_INSTALL/bin' >> .bashrc") 385 | ssh_command(master,"echo 'export PATH=$PATH:\$HADOOP_INSTALL/sbin' >> .bashrc") 386 | ssh_command(master,"echo 'export HADOOP_MAPRED_HOME=\$HADOOP_INSTALL' >> .bashrc") 387 | ssh_command(master,"echo 'export HADOOP_COMMON_HOME=\$HADOOP_INSTALL' >> .bashrc") 388 | ssh_command(master,"echo 'export HADOOP_HDFS_HOME=\$HADOOP_INSTALL' >> .bashrc") 389 | ssh_command(master,"echo 'export YARN_HOME=\$HADOOP_INSTALL' >> .bashrc") 390 | 391 | #Remove *-site.xmls 392 | ssh_command(master,"cd sigmoid/hadoop-2.0.0-cdh4.2.0;rm etc/hadoop/core-site.xml") 393 | ssh_command(master,"cd sigmoid/hadoop-2.0.0-cdh4.2.0;rm etc/hadoop/yarn-site.xml") 394 | ssh_command(master,"cd sigmoid/hadoop-2.0.0-cdh4.2.0;rm etc/hadoop/hdfs-site.xml") 395 | #Download Our Confs 396 | ssh_command(master,"cd sigmoid/hadoop-2.0.0-cdh4.2.0/etc/hadoop/;wget https://s3.amazonaws.com/sigmoidanalytics-builds/spark/0.9.1/gce/configs/core-site.xml") 397 | ssh_command(master,"cd sigmoid/hadoop-2.0.0-cdh4.2.0/etc/hadoop/;wget https://s3.amazonaws.com/sigmoidanalytics-builds/spark/0.9.1/gce/configs/hdfs-site.xml") 398 | ssh_command(master,"cd sigmoid/hadoop-2.0.0-cdh4.2.0/etc/hadoop/;wget https://s3.amazonaws.com/sigmoidanalytics-builds/spark/0.9.1/gce/configs/mapred-site.xml") 399 | ssh_command(master,"cd sigmoid/hadoop-2.0.0-cdh4.2.0/etc/hadoop/;wget https://s3.amazonaws.com/sigmoidanalytics-builds/spark/0.9.1/gce/configs/yarn-site.xml") 400 | 401 | #Config Core-site 402 | ssh_command(master,"sed -i \"s/PUT-MASTER-IP/$(/sbin/ifconfig eth0 | grep \"inet addr:\" | cut -d: -f2 | cut -d\" \" -f1)/g\" sigmoid/hadoop-2.0.0-cdh4.2.0/etc/hadoop/core-site.xml") 403 | 404 | #Create data/node dirs 405 | ssh_command(master,"mkdir -p /mnt/hadoop/hdfs/namenode;mkdir -p /mnt/hadoop/hdfs/datanode") 406 | #Config slaves 407 | ssh_command(master,"cd sigmoid/hadoop-2.0.0-cdh4.2.0/etc/hadoop/;rm slaves") 408 | for slave in slave_nodes: 409 | ssh_command(master,"cd sigmoid/hadoop-2.0.0-cdh4.2.0/etc/hadoop/;echo " + slave + " >> slaves") 410 | 411 | print '[ Rsyncing with Slaves ]' 412 | #Rsync everything 413 | for slave in slave_nodes: 414 | ssh_command(master,"rsync -za /home/" + username + "/sigmoid " + slave + ":") 415 | ssh_command(slave,"mkdir -p /mnt/hadoop/hdfs/namenode;mkdir -p /mnt/hadoop/hdfs/datanode") 416 | ssh_command(master,"rsync -za /home/" + username + "/.bashrc " + slave + ":") 417 | 418 | print '[ Formating namenode ]' 419 | #Format namenode 420 | ssh_command(master,"sigmoid/hadoop-2.0.0-cdh4.2.0/bin/hdfs namenode -format") 421 | 422 | print '[ Starting DFS ]' 423 | #Start dfs 424 | ssh_command(master,"sigmoid/hadoop-2.0.0-cdh4.2.0/sbin/start-dfs.sh") 425 | 426 | def setup_shark(master_nodes,slave_nodes): 427 | 428 | master = master_nodes[0] 429 | print '[ Downloading Shark binaries ]' 430 | 431 | ssh_command(master,"cd sigmoid;wget https://s3.amazonaws.com/spark-ui/hive-0.11.0-bin.tgz") 432 | ssh_command(master,"cd sigmoid;wget https://s3.amazonaws.com/spark-ui/shark-0.9-hadoop-2.0.0-mr1-cdh4.2.0.tar.gz") 433 | ssh_command(master,"cd sigmoid;tar zxf hive-0.11.0-bin.tgz") 434 | ssh_command(master,"cd sigmoid;tar zxf shark-0.9-hadoop-2.0.0-mr1-cdh4.2.0.tar.gz") 435 | ssh_command(master,"rm sigmoid/hive-0.11.0-bin.tgz") 436 | ssh_command(master,"rm sigmoid/shark-0.9-hadoop-2.0.0-mr1-cdh4.2.0.tar.gz") 437 | 438 | print '[ Configuring Shark ]' 439 | ssh_command(master,"cd sigmoid/shark/;echo \"export SHARK_MASTER_MEM=1g\" > conf/shark-env.sh") 440 | ssh_command(master,"cd sigmoid/shark/;echo \"SPARK_JAVA_OPTS+=\\\" -Dspark.kryoserializer.buffer.mb=10 \\\"\" >> conf/shark-env.sh") 441 | ssh_command(master,"cd sigmoid/shark/;echo \"export SPARK_JAVA_OPTS\" >> conf/shark-env.sh") 442 | ssh_command(master,"cd sigmoid/shark/;echo \"export HIVE_HOME=/home/`whoami`/sigmoid/hive-0.11.0-bin\" >> conf/shark-env.sh") 443 | ssh_command(master,"cd sigmoid/shark/;echo \"export SPARK_JAVA_OPTS\" >> conf/shark-env.sh") 444 | ssh_command(master,"cd sigmoid/shark/;echo \"export MASTER=spark://PUT_MASTER_IP_HERE:7077\" >> conf/shark-env.sh") 445 | ssh_command(master,"cd sigmoid/shark/;echo \"export SPARK_HOME=/home/`whoami`/sigmoid/spark-0.9.1-bin-cdh4\" >> conf/shark-env.sh") 446 | ssh_command(master,"mkdir /mnt/tachyon") 447 | ssh_command(master,"cd sigmoid/shark/;echo \"export TACHYON_MASTER=PUT_MASTER_IP_HERE:19998\" >> conf/shark-env.sh") 448 | ssh_command(master,"cd sigmoid/shark/;echo \"export TACHYON_WAREHOUSE_PATH=/mnt/tachyon\" >> conf/shark-env.sh") 449 | ssh_command(master,"cd sigmoid/shark/;echo \"source /home/`whoami`/sigmoid/spark-0.9.1-bin-cdh4/conf/spark-env.sh\" >> conf/shark-env.sh") 450 | ssh_command(master,"sed -i \"s/PUT_MASTER_IP_HERE/$(/sbin/ifconfig eth0 | grep \"inet addr:\" | cut -d: -f2 | cut -d\" \" -f1)/g\" sigmoid/shark/conf/shark-env.sh") 451 | 452 | ssh_command(master,"chmod +x sigmoid/shark/conf/shark-env.sh") 453 | 454 | print '[ Rsyncing Shark on slaves ]' 455 | for slave in slave_nodes: 456 | ssh_command(master,"rsync -za /home/" + username + "/sigmoid " + slave + ":") 457 | 458 | print '[ Starting Shark Server ]' 459 | ssh_command(master,"cd sigmoid/shark/;./bin/shark --service sharkserver 10000 > log.txt 2>&1 &") 460 | 461 | def show_banner(): 462 | 463 | os.system("wget -qO- https://s3.amazonaws.com/sigmoidanalytics-builds/spark/0.9.1/gce/configs/banner") 464 | 465 | def real_main(): 466 | 467 | show_banner() 468 | print "[ Script Started ]" 469 | #Read the arguments 470 | read_args() 471 | #Make sure gcloud is accessible. 472 | check_gcloud() 473 | 474 | #Launch the cluster 475 | launch_cluster() 476 | 477 | #Wait some time for machines to bootup 478 | print '[ Waiting 120 Seconds for Machines to start up ]' 479 | time.sleep(120) 480 | 481 | #Get Master/Slave IP Addresses 482 | (master_nodes, slave_nodes) = get_cluster_ips() 483 | 484 | #Install Java and build-essential 485 | install_java(master_nodes,slave_nodes) 486 | 487 | #Generate SSH keys and deploy 488 | deploy_keys(master_nodes,slave_nodes) 489 | 490 | #Attach a new empty drive and format it 491 | attach_drive(master_nodes,slave_nodes) 492 | 493 | #Set up Spark/Shark/Hadoop 494 | setup_spark(master_nodes,slave_nodes) 495 | 496 | 497 | 498 | 499 | def main(): 500 | try: 501 | real_main() 502 | except Exception as e: 503 | print >> stderr, "\nError:\n", e 504 | 505 | 506 | if __name__ == "__main__": 507 | 508 | main() 509 | --------------------------------------------------------------------------------