├── .gitignore
├── README.md
├── configuration
├── flink-ambari-config.xml
└── flink-env.xml
├── kerberos.json
├── metainfo.xml
├── package
└── scripts
│ ├── flink.py
│ ├── params.py
│ └── status_params.py
├── role_command_order.json
└── screenshots
├── Flink-UI-1.png
├── Flink-UI-2.png
├── Flink-UI-3.png
├── Flink-conf.png
├── Flink-wordcount.png
├── Install-wizard.png
├── Installed-service-config.png
├── Installed-service-stop.png
└── YARN-UI.png
/.gitignore:
--------------------------------------------------------------------------------
1 | .DS_Store
2 | .class
3 | .swp
4 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | #### An Ambari Service for Flink
2 | Ambari service for easily installing and managing Flink on HDP clusters.
3 | Apache Flink is an open source platform for distributed stream and batch data processing.
4 | More details on Flink and how it is being used in industry today are available here: [http://flink-forward.org/?post_type=session](http://flink-forward.org/?post_type=session)
5 |
6 |
7 | The Ambari service lets you easily install/compile Flink on HDP 2.6.5.
8 | - Features:
9 | - By default, downloads prebuilt package of Flink 1.8.1, but also gives option to build the latest Flink from source instead
10 | - Exposes flink-conf.yaml in Ambari UI
11 |
12 | Limitations:
13 | - This is not an officially supported service and *is not meant to be deployed in production systems*. It is only meant for demo/testing purposes
14 | - It does not support Ambari/HDP upgrade process and will cause upgrade problems if not removed prior to upgrade
15 |
16 | Author: [Ali Bajwa](https://github.com/abajwa-hw)
17 | - Thanks to [Davide Vergari](https://github.com/dvergari) for enhancing to run in clustered env
18 | - Thanks to [Ben Harris](https://github.com/jamesbenharris) for updating libraries to work with HDP 2.5.3
19 | - Thanks to [Anand Subramanian](https://github.com/anandsubbu) for updating libraries to work with HDP 2.6.5 and flink version 1.8.1
20 | - Thanks to [jzyhappy](https://github.com/jzyhappy) for updating libraries to work with HDP 2.6.5 and flink version 1.9.1
21 |
22 | #### Setup (Chinese walkthrough: https://blog.csdn.net/jzy3711/article/details/104043860)
23 |
24 | - Download HDP 2.6 sandbox VM image (HDP_2.6.5_virtualbox_180626.ova) from [Cloudera website](https://www.cloudera.com/downloads/hortonworks-sandbox/hdp.html)
25 | Import HDP_2.6.5_virtualbox_180626.ova into VirtualBox and set the VM memory size to 8 GB
26 | - Now start the VM
27 | - After it boots up, find the IP address of the VM and add an entry to your machine's hosts file. For example:
28 | ```
29 | 192.168.191.241 sandbox.hortonworks.com sandbox
30 | ```
31 | - Note that you will need to replace the above with the IP for your own VM
32 |
33 | - Connect to the VM via SSH (password hadoop)
34 | ```
35 | ssh root@sandbox.hortonworks.com
36 | ```
37 |
38 |
39 | - To download the Flink service folder, run the following:
40 | ```
41 | VERSION=`hdp-select status hadoop-client | sed 's/hadoop-client - \([0-9]\.[0-9]\).*/\1/'`
42 | sudo git clone https://github.com/abajwa-hw/ambari-flink-service.git /var/lib/ambari-server/resources/stacks/HDP/$VERSION/services/FLINK
43 | ```
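
The `VERSION` line above uses a sed capture over `hdp-select` output. As a sanity check, the extraction can be exercised on a sample line (the sample output format below is an assumption for illustration):

```shell
# Demonstrate the sed capture on a hypothetical hdp-select output line
sample='hadoop-client - 2.6.5.0-292'
VERSION=$(echo "$sample" | sed 's/hadoop-client - \([0-9]\.[0-9]\).*/\1/')
echo "$VERSION"   # → 2.6
```

The capture group grabs only the major.minor stack version, which is what the stacks directory path expects.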
44 |
45 | - Restart Ambari
46 | ```
47 | #sandbox
48 | service ambari restart
49 |
50 | #non sandbox
51 | sudo service ambari-server restart
52 | ```
53 |
54 | - Then you can click on 'Add Service' from the 'Actions' dropdown menu in the bottom left of the Ambari dashboard:
55 |
56 | On bottom left -> Actions -> Add service -> check Flink server -> Next -> Next -> Change any config you like (e.g. install dir, memory sizes, num containers or values in flink-conf.yaml) -> Next -> Deploy
57 |
58 | - By default:
59 |   - Container memory is 1024 MB
60 |   - JobManager memory is 768 MB
61 |   - Number of YARN containers is 1
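
Under the hood, these defaults become flags to `yarn-session.sh` when the service starts. A sketch of the command the service assembles, based on the start logic in `package/scripts/flink.py` (install dir, app name, and queue shown are the config defaults):

```shell
# Build the yarn-session.sh invocation assembled from the default config values
# (-n containers, -s task slots per container, -jm JobManager MB, -tm TaskManager MB, -qu queue)
BIN_DIR=/opt/flink/bin
CMD="$BIN_DIR/yarn-session.sh -d -nm flinkapp-from-ambari -n 1 -s 1 -jm 768 -tm 1024 -qu default"
echo "$CMD"
```

Changing the memory or container counts in the Ambari config changes the corresponding flags on the next start.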
62 |
63 | - On successful deployment you will see the Flink service as part of Ambari stack and will be able to start/stop the service from here:
64 | 
65 |
66 | - You can see the parameters you configured under 'Configs' tab
67 | 
68 |
69 | - One benefit of wrapping the component in an Ambari service is that you can now monitor/manage it remotely via the REST API
70 | ```
71 | export SERVICE=FLINK
72 | export PASSWORD=admin
73 | export AMBARI_HOST=localhost
74 |
75 | #detect name of cluster
76 | output=`curl -u admin:$PASSWORD -i -H 'X-Requested-By: ambari' http://$AMBARI_HOST:8080/api/v1/clusters`
77 | CLUSTER=`echo $output | sed -n 's/.*"cluster_name" : "\([^\"]*\)".*/\1/p'`
78 |
79 |
80 | #get service status
81 | curl -u admin:$PASSWORD -i -H 'X-Requested-By: ambari' -X GET http://$AMBARI_HOST:8080/api/v1/clusters/$CLUSTER/services/$SERVICE
82 |
83 | #start service
84 | curl -u admin:$PASSWORD -i -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo": {"context" :"Start $SERVICE via REST"}, "Body": {"ServiceInfo": {"state": "STARTED"}}}' http://$AMBARI_HOST:8080/api/v1/clusters/$CLUSTER/services/$SERVICE
85 |
86 | #stop service
87 | curl -u admin:$PASSWORD -i -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo": {"context" :"Stop $SERVICE via REST"}, "Body": {"ServiceInfo": {"state": "INSTALLED"}}}' http://$AMBARI_HOST:8080/api/v1/clusters/$CLUSTER/services/$SERVICE
88 | ```
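
The status call returns JSON; the service state can be pulled out with the same sed style used above for the cluster name. The response fragment below is a hypothetical illustration of the shape of the JSON:

```shell
# Extract the "state" field from a (hypothetical) service GET response
response='{ "ServiceInfo" : { "service_name" : "FLINK", "state" : "STARTED" } }'
STATE=$(echo "$response" | sed -n 's/.*"state" : "\([A-Z_]*\)".*/\1/p')
echo "$STATE"   # → STARTED
```

A state of `STARTED` means the service is running; `INSTALLED` means it is stopped.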
89 |
90 | - ...and also install via Blueprint. See example [here](https://github.com/abajwa-hw/ambari-workshops/blob/master/blueprints-demo-security.md) on how to deploy custom services via Blueprints
91 |
92 | #### Set Flink version
93 | - configuration/flink-ambari-config.xml
94 | ```
95 |
96 | flink_download_url
97 | http://X.X.151.15/Package/flink-1.9.0-bin-scala_2.11.tgz
98 | Snapshot download location. Downloaded when setup_prebuilt is true
99 |
100 | ```
101 | Use a value from [http://apachemirror.wuchna.com/flink/](http://apachemirror.wuchna.com/flink/), [http://www.us.apache.org/dist/flink/](http://www.us.apache.org/dist/flink/), or [https://archive.apache.org/dist/](https://archive.apache.org/dist/), or a custom repo
102 |
103 | - metainfo.xml
104 | ```
105 | FLINK
106 | Flink
107 | Apache Flink is a streaming dataflow engine that provides data distribution, communication, and fault tolerance for distributed computations over data streams.
108 | 1.9.0
109 | ```
110 | version = your Flink version
111 |
112 | #### Flink on Yarn
113 | - yarn-site.xml
114 | ```
115 |
116 | yarn.client.failover-proxy-provider
117 | org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider
118 |
119 | ```
120 | Then restart YARN
121 |
122 | #### Flink Configuration
123 | 
124 | - java_home should be consistent with /etc/profile
125 | ```
126 | hdp-select status hadoop-client
127 | hadoop-client -
128 | ```
129 | - hadoop_conf_dir = /etc/hadoop//0
130 |
131 |
132 | #### Use Flink
133 |
134 | - Run word count job
135 | ```
136 | su flink
137 | export HADOOP_CONF_DIR=/etc/hadoop/conf
138 | export HADOOP_CLASSPATH=`hadoop classpath`
139 | cd /opt/flink
140 | ./bin/flink run --jobmanager yarn-cluster -yn 1 -ytm 768 -yjm 768 ./examples/batch/WordCount.jar
141 | ```
142 | - This should generate a series of word counts:
143 | 
144 |
145 | - Open the [YARN ResourceManager UI](http://sandbox.hortonworks.com:8088/cluster). Notice Flink is running on YARN
146 | 
147 |
148 | - Click the ApplicationMaster link to access the Flink web UI
149 | 
150 |
151 | - Use the History tab to review details of the job that ran:
152 | 
153 |
154 | - View metrics in the Task Manager tab:
155 | 
156 |
157 | #### Other things to try
158 |
159 | - [Apache Zeppelin](https://zeppelin.incubator.apache.org/) now also supports Flink. You can install it via the [Zeppelin Ambari service](https://github.com/hortonworks-gallery/ambari-zeppelin-service) for visualization
160 |
161 | More details on Flink and how it is being used in the industry today available here: [http://flink-forward.org/?post_type=session](http://flink-forward.org/?post_type=session)
162 |
163 |
164 | #### Remove service
165 |
166 | - To remove the Flink service:
167 | - Stop the service via Ambari
168 | - Unregister the service
169 |
170 | ```
171 | export SERVICE=FLINK
172 | export PASSWORD=admin
173 | export AMBARI_HOST=localhost
174 |
175 | # detect name of cluster
176 | output=`curl -u admin:$PASSWORD -i -H 'X-Requested-By: ambari' http://$AMBARI_HOST:8080/api/v1/clusters`
177 | CLUSTER=`echo $output | sed -n 's/.*"cluster_name" : "\([^\"]*\)".*/\1/p'`
178 |
179 | curl -u admin:$PASSWORD -i -H 'X-Requested-By: ambari' -X DELETE http://$AMBARI_HOST:8080/api/v1/clusters/$CLUSTER/services/$SERVICE
180 | ```
181 |
182 | If the above errors out, first run the following to fully stop the service:
183 | ```
184 | curl -u admin:$PASSWORD -i -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo": {"context" :"Stop $SERVICE via REST"}, "Body": {"ServiceInfo": {"state": "INSTALLED"}}}' http://$AMBARI_HOST:8080/api/v1/clusters/$CLUSTER/services/$SERVICE
185 | ```
186 |
187 | - Remove artifacts:
188 | ```
189 | rm -rf /opt/flink*
190 | rm /tmp/flink.tgz
191 | ```
192 |
--------------------------------------------------------------------------------
/configuration/flink-ambari-config.xml:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 |
7 |
8 |
9 | flink_install_dir
10 | /opt/flink
11 | Location to install Flink
12 |
13 |
14 |
15 | flink_numcontainers
16 | 1
17 | Number of YARN container to allocate (=Number of Task Managers)
18 |
19 |
20 |
21 | flink_numberoftaskslots
22 | 1
23 | Number of task slots in each container
24 |
25 |
26 |
27 | flink_appname
28 | flinkapp-from-ambari
29 | Flink application name
30 |
31 |
32 |
33 | flink_queue
34 | default
35 | YARN queue to schedule Flink job on
36 |
37 |
38 |
39 | flink_streaming
40 | false
41 | If true, Flink will be started in streaming mode: to be used when only streaming jobs will be executed on Flink
42 |
43 |
44 |
45 | flink_jobmanager_memory
46 | 768
47 | Memory for JobManager Container [in MB]. Must be at least 768
48 |
49 |
50 |
51 | flink_container_memory
52 | 1024
53 | Memory per TaskManager Container [in MB]
54 |
55 |
56 |
57 |
58 | setup_prebuilt
59 | true
60 | If false, will compile Flink from source instead
61 |
62 |
63 |
64 |
65 | hadoop_conf_dir
66 | /etc/hadoop/conf
67 | Hadoop conf dir. Needed to submit to YARN
68 |
69 |
70 |
71 | flink_download_url
72 | http://www.us.apache.org/dist/flink/flink-1.9.1/flink-1.9.1-bin-scala_2.11.tgz
73 | Snapshot download location. Downloaded when setup_prebuilt is true
74 |
75 |
76 |
77 | flink_hadoop_shaded_jar
78 | https://repo.maven.apache.org/maven2/org/apache/flink/flink-shaded-hadoop-2-uber/2.6.5-7.0/flink-shaded-hadoop-2-uber-2.6.5-7.0.jar
79 | Flink shaded hadoop jar download location. Downloaded when setup_prebuilt is true
80 |
81 |
82 |
--------------------------------------------------------------------------------
/configuration/flink-env.xml:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 |
7 | flink_log_dir
8 | /var/log/flink
9 | flink Log dir
10 |
11 |
12 |
13 | flink_pid_dir
14 | /var/run/flink
15 | Dir containing process ID file
16 |
17 |
18 |
19 | flink_user
20 | flink
21 | USER
22 | User flink daemon runs as
23 |
24 |
25 |
26 | flink_group
27 | flink
28 | GROUP
29 | flink group
30 |
31 |
32 |
33 |
34 | content
35 |
36 | #==============================================================================
37 | # Common
38 | #==============================================================================
39 |
40 | jobmanager.rpc.address: localhost
41 |
42 | jobmanager.rpc.port: 6123
43 |
44 | jobmanager.heap.mb: 256
45 |
46 | taskmanager.heap.mb: 512
47 |
48 | taskmanager.numberOfTaskSlots: 1
49 |
50 | parallelism.default: 1
51 |
52 | #==============================================================================
53 | # Web Frontend
54 | #==============================================================================
55 |
56 | # The port under which the web-based runtime monitor listens.
57 | # A value of -1 deactivates the web server.
58 | jobmanager.web.port: 8081
59 |
60 | # The port under which the standalone web client
61 | # (for job upload and submit) listens.
62 |
63 | webclient.port: 8080
64 |
65 | #==============================================================================
66 | # Streaming state checkpointing
67 | #==============================================================================
68 |
69 | # The backend that will be used to store operator state checkpoints if
70 | # checkpointing is enabled.
71 | #
72 | # Supported backends: jobmanager, filesystem
73 |
74 | state.backend: jobmanager
75 |
76 | # Directory for storing checkpoints in a flink supported filesystem
77 | # Note: State backend must be accessible from the JobManager, use file://
78 | # only for local setups.
79 | #
80 | # state.backend.fs.checkpointdir: hdfs://checkpoints
81 |
82 | #==============================================================================
83 | # Advanced
84 | #==============================================================================
85 |
86 | # The number of buffers for the network stack.
87 | #
88 | # taskmanager.network.numberOfBuffers: 2048
89 |
90 | # Directories for temporary files.
91 | #
92 | # Add a delimited list for multiple directories, using the system directory
93 | # delimiter (colon ':' on unix) or a comma, e.g.:
94 | # /data1/tmp:/data2/tmp:/data3/tmp
95 | #
96 | # Note: Each directory entry is read from and written to by a different I/O
97 | # thread. You can include the same directory multiple times in order to create
98 | # multiple I/O threads against that directory. This is for example relevant for
99 | # high-throughput RAIDs.
100 | #
101 | # If not specified, the system-specific Java temporary directory (java.io.tmpdir
102 | # property) is taken.
103 | #
104 | # taskmanager.tmp.dirs: /tmp
105 |
106 | # Path to the Hadoop configuration directory.
107 | #
108 | # This configuration is used when writing into HDFS. Unless specified otherwise,
109 | # HDFS file creation will use HDFS default settings with respect to block-size,
110 | # replication factor, etc.
111 | #
112 | # You can also directly specify the paths to hdfs-default.xml and hdfs-site.xml
113 | # via keys 'fs.hdfs.hdfsdefault' and 'fs.hdfs.hdfssite'.
114 | #
115 | # fs.hdfs.hadoopconf: /path/to/hadoop/conf/
116 | env.java.home: /usr/jdk64/jdk1.8.0_77/jre
117 | taskmanager.memory.fraction: 0.6
118 | env.java.opts: -XX:+UseG1GC
119 |
120 | Template for flink-conf.yaml
121 |
122 |
123 |
124 |
125 |
126 |
--------------------------------------------------------------------------------
/kerberos.json:
--------------------------------------------------------------------------------
1 | {
2 | "services": [
3 | {
4 | "name": "FLINK",
5 | "identities": [
6 | {
7 | "name": "/smokeuser"
8 | }
9 | ],
10 | "components": [
11 | {
12 | "name": "FLINK_MASTER"
13 | }
14 | ]
15 | }
16 | ]
17 | }
--------------------------------------------------------------------------------
/metainfo.xml:
--------------------------------------------------------------------------------
1 |
2 |
3 | 2.0
4 |
5 |
6 | FLINK
7 | Flink
8 | Apache Flink is a streaming dataflow engine that provides data distribution, communication, and fault tolerance for distributed computations over data streams.
9 | 1.9.1
10 |
11 |
12 | FLINK_MASTER
13 | Flink
14 | MASTER
15 | 1
16 |
17 |
18 | PYTHON
19 | 10000
20 |
21 |
22 |
23 |
24 |
25 | redhat7,redhat6
26 |
27 | git
28 | java-1.7.0-openjdk-devel
29 | apache-maven-3.2*
30 |
31 |
32 |
33 |
34 |
35 | flink-ambari-config
36 |
37 | false
38 |
39 |
40 | YARN
41 |
42 |
43 |
44 |
45 |
--------------------------------------------------------------------------------
/package/scripts/flink.py:
--------------------------------------------------------------------------------
1 | import sys, os, pwd, grp, signal, time, glob, subprocess, platform
2 | from resource_management import *
3 | from subprocess import call
4 |
5 |
6 |
7 |
8 |
9 | class Master(Script):
10 | def install(self, env):
11 |
12 | import params
13 | import status_params
14 | user_home = "/home/" + str(params.flink_user)
15 | try:
16 | pwd.getpwnam(params.flink_user)
17 | except KeyError:
18 | User(params.flink_user,
19 | home=user_home,
20 | shell="/bin/bash",
21 | ignore_failures=True)
22 |
23 | #e.g. /var/lib/ambari-agent/cache/stacks/HDP/2.3/services/FLINK/package
24 | service_packagedir = os.path.realpath(__file__).split('/scripts')[0]
25 |
26 | Execute('rm -rf ' + params.flink_install_dir, ignore_failures=True)
27 |
28 | Directory([status_params.flink_pid_dir, params.flink_log_dir, params.flink_install_dir],
29 | owner=params.flink_user,
30 | group=params.flink_group
31 | )
32 |
33 | File(params.flink_log_file,
34 | mode=0644,
35 | owner=params.flink_user,
36 | group=params.flink_group,
37 | content=''
38 | )
39 |
40 |
41 |
42 |
43 | #User selected option to use prebuilt flink package
44 | if params.setup_prebuilt:
45 |
46 | Execute('echo Installing packages')
47 |
48 |
49 | #Fetch and unzip snapshot build, if no cached flink tar package exists on Ambari server node
50 | if not os.path.exists(params.temp_file):
51 | Execute('wget '+params.flink_download_url+' -O '+params.temp_file+' -a ' + params.flink_log_file, user=params.flink_user)
52 | Execute('tar -zxvf '+params.temp_file+' -C ' + params.flink_install_dir + ' >> ' + params.flink_log_file, user=params.flink_user)
53 | Execute('mv '+params.flink_install_dir+'/*/* ' + params.flink_install_dir, user=params.flink_user)
54 | Execute('wget ' + params.flink_hadoop_shaded_jar_url + ' -P ' + params.flink_install_dir+'/lib' + ' >> ' + params.flink_log_file, user=params.flink_user)
55 |
56 | #update the configs specified by user
57 | self.configure(env, True)
58 |
59 |
60 | else:
61 | #User selected option to build flink from source
62 |
63 | #if params.setup_view:
64 | #Install maven repo if needed
65 | self.install_mvn_repo()
66 | # Install packages listed in metainfo.xml
67 | self.install_packages(env)
68 |
69 |
70 | Execute('echo Compiling Flink from source')
71 | Execute('cd '+params.flink_install_dir+'; git clone https://github.com/apache/flink.git '+params.flink_install_dir +' >> ' + params.flink_log_file)
72 | Execute('chown -R ' + params.flink_user + ':' + params.flink_group + ' ' + params.flink_install_dir)
73 |
74 | Execute('cd '+params.flink_install_dir+'; mvn clean install -DskipTests -Dhadoop.version=2.7.1.2.3.2.0-2950 -Pvendor-repos >> ' + params.flink_log_file, user=params.flink_user)
75 |
76 | #update the configs specified by user
77 | self.configure(env, True)
78 |
79 |
80 |
81 | def configure(self, env, isInstall=False):
82 | import params
83 | import status_params
84 | env.set_params(params)
85 | env.set_params(status_params)
86 |
87 | self.set_conf_bin(env)
88 |
89 | #write out flink-conf.yaml
90 | properties_content=InlineTemplate(params.flink_yaml_content)
91 | File(format("{conf_dir}/flink-conf.yaml"), content=properties_content, owner=params.flink_user)
92 |
93 |
94 | def stop(self, env):
95 | import params
96 | import status_params
97 | from resource_management.core import sudo
98 | pid = str(sudo.read_file(status_params.flink_pid_file))
99 | Execute('yarn application -kill ' + pid, user=params.flink_user)
100 |
101 | Execute('rm ' + status_params.flink_pid_file, ignore_failures=True)
102 |
103 |
104 | def start(self, env):
105 | import params
106 | import status_params
107 | self.set_conf_bin(env)
108 | self.configure(env)
109 |
110 | self.create_hdfs_user(params.flink_user)
111 |
112 | Execute('echo bin dir ' + params.bin_dir)
113 | Execute('echo pid file ' + status_params.flink_pid_file)
114 | cmd_open = subprocess.Popen(["hadoop", "classpath"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
115 | hadoop_classpath = cmd_open.communicate()[0].strip()
116 | cmd = format("export HADOOP_CONF_DIR={hadoop_conf_dir}; export HADOOP_CLASSPATH={hadoop_classpath}; {bin_dir}/yarn-session.sh -d -nm {flink_appname} -n {flink_numcontainers} -s {flink_numberoftaskslots} -jm {flink_jobmanager_memory} -tm {flink_container_memory} -qu {flink_queue}")
117 | if params.flink_streaming:
118 | cmd = cmd + ' -st '
119 | Execute (cmd + format(" >> {flink_log_file}"), user=params.flink_user)
120 | Execute("yarn application -list 2>/dev/null | awk '/" + params.flink_appname + "/ {print $1}' | head -n1 > " + status_params.flink_pid_file, user=params.flink_user)
121 | #Execute('chown '+params.flink_user+':'+params.flink_group+' ' + status_params.flink_pid_file)
122 |
123 | if os.path.exists(params.temp_file):
124 | os.remove(params.temp_file)
125 |
126 | def check_flink_status(self, pid_file):
127 | from datetime import datetime
128 | from resource_management.core.exceptions import ComponentIsNotRunning
129 | from resource_management.core import sudo
130 | from subprocess import PIPE,Popen
131 | import shlex, subprocess
132 | if not pid_file or not os.path.isfile(pid_file):
133 | raise ComponentIsNotRunning()
134 | try:
135 | pid = str(sudo.read_file(pid_file))
136 | cmd_line = "/usr/bin/yarn application -list"
137 | args = shlex.split(cmd_line)
138 | proc = Popen(args, stdout=PIPE)
139 | p = str(proc.communicate()[0].split())
140 | if p.find(pid.strip()) < 0:
141 | raise ComponentIsNotRunning()
142 | except Exception, e:
143 | raise ComponentIsNotRunning()
144 |
145 | def status(self, env):
146 | import status_params
147 | from datetime import datetime
148 | self.check_flink_status(status_params.flink_pid_file)
149 |
150 | def set_conf_bin(self, env):
151 | import params
152 | if params.setup_prebuilt:
153 | params.conf_dir = params.flink_install_dir+ '/conf'
154 | params.bin_dir = params.flink_install_dir+ '/bin'
155 | else:
156 | params.conf_dir = glob.glob(params.flink_install_dir+ '/flink-dist/target/flink-*/flink-*/conf')[0]
157 | params.bin_dir = glob.glob(params.flink_install_dir+ '/flink-dist/target/flink-*/flink-*/bin')[0]
158 |
159 |
160 | def install_mvn_repo(self):
161 | #for centos/RHEL 6/7 maven repo needs to be installed
162 | distribution = platform.linux_distribution()[0].lower()
163 | if distribution in ['centos', 'redhat'] and not os.path.exists('/etc/yum.repos.d/epel-apache-maven.repo'):
164 | Execute('curl -o /etc/yum.repos.d/epel-apache-maven.repo https://repos.fedorapeople.org/repos/dchen/apache-maven/epel-apache-maven.repo')
165 |
166 | def create_hdfs_user(self, user):
167 | Execute('hadoop fs -mkdir -p /user/'+user, user='hdfs', ignore_failures=True)
168 | Execute('hadoop fs -chown ' + user + ' /user/'+user, user='hdfs')
169 | Execute('hadoop fs -chgrp ' + user + ' /user/'+user, user='hdfs')
170 |
171 | if __name__ == "__main__":
172 | Master().execute()
173 |
--------------------------------------------------------------------------------
/package/scripts/params.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | from resource_management import *
3 | from resource_management.libraries.script.script import Script
4 | import sys, os, glob
5 | from resource_management.libraries.functions.version import format_stack_version
6 | from resource_management.libraries.functions.default import default
7 |
8 |
9 |
10 | # server configurations
11 | config = Script.get_config()
12 |
13 |
14 |
15 | # params from flink-ambari-config
16 | flink_install_dir = config['configurations']['flink-ambari-config']['flink_install_dir']
17 | flink_numcontainers = config['configurations']['flink-ambari-config']['flink_numcontainers']
18 | flink_numberoftaskslots= config['configurations']['flink-ambari-config']['flink_numberoftaskslots']
19 | flink_jobmanager_memory = config['configurations']['flink-ambari-config']['flink_jobmanager_memory']
20 | flink_container_memory = config['configurations']['flink-ambari-config']['flink_container_memory']
21 | setup_prebuilt = config['configurations']['flink-ambari-config']['setup_prebuilt']
22 | flink_appname = config['configurations']['flink-ambari-config']['flink_appname']
23 | flink_queue = config['configurations']['flink-ambari-config']['flink_queue']
24 | flink_streaming = config['configurations']['flink-ambari-config']['flink_streaming']
25 |
26 | hadoop_conf_dir = config['configurations']['flink-ambari-config']['hadoop_conf_dir']
27 | flink_download_url = config['configurations']['flink-ambari-config']['flink_download_url']
28 | flink_hadoop_shaded_jar_url = config['configurations']['flink-ambari-config']['flink_hadoop_shaded_jar']
29 |
30 | conf_dir=''
31 | bin_dir=''
32 |
33 | # params from flink-conf.yaml
34 | flink_yaml_content = config['configurations']['flink-env']['content']
35 | flink_user = config['configurations']['flink-env']['flink_user']
36 | flink_group = config['configurations']['flink-env']['flink_group']
37 | flink_log_dir = config['configurations']['flink-env']['flink_log_dir']
38 | flink_log_file = os.path.join(flink_log_dir,'flink-setup.log')
39 |
40 |
41 |
42 | temp_file='/tmp/flink.tgz'
43 |
--------------------------------------------------------------------------------
/package/scripts/status_params.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | from resource_management import *
3 | import sys, os
4 |
5 | config = Script.get_config()
6 |
7 | flink_pid_dir=config['configurations']['flink-env']['flink_pid_dir']
8 | flink_pid_file=flink_pid_dir + '/flink.pid'
9 |
10 |
--------------------------------------------------------------------------------
/role_command_order.json:
--------------------------------------------------------------------------------
1 | {
2 | "general_deps" : {
3 | "_comment" : "dependencies for flink",
4 | "FLINK_MASTER-START" : ["RESOURCEMANAGER-START"]
5 | }
6 | }
7 |
--------------------------------------------------------------------------------
/screenshots/Flink-UI-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/abajwa-hw/ambari-flink-service/765d16f09960287de63c9c45b98bd0c45d342c2b/screenshots/Flink-UI-1.png
--------------------------------------------------------------------------------
/screenshots/Flink-UI-2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/abajwa-hw/ambari-flink-service/765d16f09960287de63c9c45b98bd0c45d342c2b/screenshots/Flink-UI-2.png
--------------------------------------------------------------------------------
/screenshots/Flink-UI-3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/abajwa-hw/ambari-flink-service/765d16f09960287de63c9c45b98bd0c45d342c2b/screenshots/Flink-UI-3.png
--------------------------------------------------------------------------------
/screenshots/Flink-conf.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/abajwa-hw/ambari-flink-service/765d16f09960287de63c9c45b98bd0c45d342c2b/screenshots/Flink-conf.png
--------------------------------------------------------------------------------
/screenshots/Flink-wordcount.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/abajwa-hw/ambari-flink-service/765d16f09960287de63c9c45b98bd0c45d342c2b/screenshots/Flink-wordcount.png
--------------------------------------------------------------------------------
/screenshots/Install-wizard.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/abajwa-hw/ambari-flink-service/765d16f09960287de63c9c45b98bd0c45d342c2b/screenshots/Install-wizard.png
--------------------------------------------------------------------------------
/screenshots/Installed-service-config.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/abajwa-hw/ambari-flink-service/765d16f09960287de63c9c45b98bd0c45d342c2b/screenshots/Installed-service-config.png
--------------------------------------------------------------------------------
/screenshots/Installed-service-stop.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/abajwa-hw/ambari-flink-service/765d16f09960287de63c9c45b98bd0c45d342c2b/screenshots/Installed-service-stop.png
--------------------------------------------------------------------------------
/screenshots/YARN-UI.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/abajwa-hw/ambari-flink-service/765d16f09960287de63c9c45b98bd0c45d342c2b/screenshots/YARN-UI.png
--------------------------------------------------------------------------------