├── README.md
├── _config.yml
├── deployment
│   ├── README.md
│   ├── objects
│   │   ├── .ipynb_checkpoints
│   │   │   ├── Untitled-checkpoint.ipynb
│   │   │   ├── aws_objects-checkpoint.ipynb
│   │   │   ├── consumer-checkpoint.ipynb
│   │   │   ├── image_processor-checkpoint.ipynb
│   │   │   └── producer_camera-checkpoint.ipynb
│   │   ├── Dockerfile
│   │   ├── Untitled.ipynb
│   │   ├── aws_objects.ipynb
│   │   ├── consumer.ipynb
│   │   ├── consumer.py
│   │   ├── image-processor.json
│   │   ├── image_processor.ipynb
│   │   ├── image_processor.json
│   │   ├── image_processor.py
│   │   ├── person.jpg
│   │   ├── pet.jpg
│   │   ├── pico-consumer.sh
│   │   ├── producer_camera.ipynb
│   │   ├── producer_camera.py
│   │   ├── producer_camera1.py
│   │   ├── rasp_cluster.jpg
│   │   └── templates
│   │       └── index.html
│   ├── plates
│   │   ├── Dockerfile
│   │   ├── README.md
│   │   ├── plates-consumer.py
│   │   └── plates-producer.py
│   └── raspbi
│       ├── Dockerfile
│       ├── README.md
│       ├── consumer.py
│       ├── producer.py
│       └── templates
│           └── index.html
├── docs
│   └── README.md
├── getting-started
│   └── README.md
├── images
│   ├── README.md
│   ├── arch_pico.png
│   ├── image-9.png
│   ├── pibox.png
│   ├── pibox3.png
│   ├── picbox2.png
│   ├── pico-project-arch.png
│   ├── pico2.png
│   ├── pico_in_3_steps.png
│   ├── rasp_cluster.jpg
│   └── thepicoproject1.png
├── kafka
│   ├── README.md
│   ├── aws
│   │   └── credentials
│   ├── bin
│   │   └── debug.sh
│   ├── config
│   │   ├── AwsLambdaSinkConnector.properties
│   │   ├── connect-avro-docker.properties
│   │   └── connect-json-docker.properties
│   ├── consumer.py
│   ├── docker-compose.yml
│   ├── kafka-common.env
│   ├── producer-consumer.md
│   ├── producer.py
│   ├── src
│   │   ├── main
│   │   │   ├── assembly
│   │   │   │   └── package.xml
│   │   │   ├── java
│   │   │   │   └── com
│   │   │   │       └── tm
│   │   │   │           └── kafka
│   │   │   │               └── connect
│   │   │   │                   └── aws
│   │   │   │                       └── lambda
│   │   │   │                           ├── AwsLambdaSinkConnector.java
│   │   │   │                           ├── AwsLambdaSinkConnectorConfig.java
│   │   │   │                           ├── AwsLambdaSinkTask.java
│   │   │   │                           ├── ConfigurationAWSCredentialsProvider.java
│   │   │   │                           ├── VersionUtil.java
│   │   │   │                           └── converter
│   │   │   │                               ├── DefaultPayloadConverter.java
│   │   │   │                               ├── JsonPayloadConverter.java
│   │   │   │                               └── SinkRecordToPayloadConverter.java
│   │   │   └── resources
│   │   │       └── logback.xml
│   │   └── test
│   │       ├── java
│   │       │   └── com
│   │       │       └── tm
│   │       │           └── kafka
│   │       │               └── connect
│   │       │                   └── aws
│   │       │                       └── lambda
│   │       │                           ├── AwsLambdaSinkConnectorConfigTest.java
│   │       │                           ├── AwsLambdaSinkConnectorTest.java
│   │       │                           └── AwsLambdaSinkTaskTest.java
│   │       └── resources
│   │           └── logback.xml
│   ├── templates
│   │   └── index.html
│   └── zk-common.env
├── lambda
│   ├── README.md
│   ├── Screen Shot 2019-07-01 at 3.31.58 PM.png
│   ├── Screen Shot 2019-07-01 at 3.32.15 PM.png
│   └── function.py
├── onprem
│   └── yolo
│       └── README.md
├── producer-rpi
│   └── Dockerfile
├── raspbi
│   ├── README.md
│   ├── consumer-test.py
│   ├── consumer.py
│   └── producer.py
├── rtmp
│   ├── Dockerfile
│   ├── README.md
│   ├── images
│   │   ├── README.md
│   │   └── pico2.0.jpeg
│   └── nginx.conf
├── sample
│   └── producer-consumer
│       └── Dockerfile
├── testing
│   ├── .ipynb_checkpoints
│   │   ├── consumer-test-checkpoint.ipynb
│   │   └── producer-test-checkpoint.ipynb
│   ├── consumer-test.ipynb
│   ├── producer-test.ipynb
│   └── templates
│       └── index.html
└── workshop
    ├── README.md
    ├── images
    │   ├── README.md
    │   └── pico123.png
    ├── installing-docker.md
    ├── performing-object-detection.md
    ├── preparing-raspberrypi.md
    ├── running-consumer-script.md
    ├── running-kafka-on-swarm-cluster.md
    ├── running-producer-script-on-pi.md
    ├── setting-up-docker-swarm-on-aws.md
    └── turn-your-raspberrypi-into-camera.md
/README.md:
--------------------------------------------------------------------------------
1 | # The Pico Project
2 |
3 | Object Detection & Text Analytics Made Simple using Docker, Apache Kafka, IoT & Amazon Rekognition Service
4 |
5 | 
6 |
7 |
8 |
9 | ## What is Pico all about?
10 |
11 |
12 | Imagine you are able to capture live video streams, identify objects using deep learning, and then trigger actions or notifications based on the identified objects - all using Docker containers. With Pico, you will be able to set up and run a live video capture, analysis, and alerting solution prototype.
13 |
14 | 
15 |
16 |
17 |
18 | A camera surveils a particular area, streaming video over the network to a video capture client. The client samples video frames and sends them over to AWS, where they are analyzed and stored along with metadata. If certain objects are detected in the analyzed video frames, SMS alerts are sent out. Once a person receives an SMS alert, they will likely want to know what caused it. For that, sampled video frames can be monitored with low latency using a web-based user interface.
19 |
20 | The Pico framework uses a Kafka cluster to acquire data in real time. Kafka is a message-based distributed publish-subscribe system, which offers high throughput and a robust fault-tolerance mechanism. The data source is the video generated by the cameras attached to the Raspberry Pis.
21 |
22 |
23 | 
24 |
25 |
26 | ## Offerings
27 |
28 | - Pico for AWS
29 | - Pico for On-Premises (using Swarm & Kubernetes)
30 |
31 | ## Preparing Your Environment
32 |
33 | |Items | Link | Reference |
34 | | ------------- |:-------------:| -----:|
35 | | Raspberry Pi 3 Model B| [Buy](https://robu.in/product/latest-raspberry-pi-3-model-b-original/ref/60/) |  |
36 | | Raspberry Pi Infrared IR Night Vision Surveillance Camera Module 500W Webcam | [Buy](https://robu.in/product/raspberry-pi-infrared-ir-night-vision-surveillance-camera-module-500w-webcam/ref/60/) | |
37 | | 5MP Raspberry Pi 3 Camera Module W/ HBV FFC Cable | [Buy](https://robu.in/product/5mp-raspberry-pi-camera-module-w-hbv-ffc-cable/ref/60) | |
38 |
39 |
40 | ## View of Raspberry Pi Stack
41 |
42 | 
43 |
44 | # Getting Started
45 |
46 | # Running Producer inside Docker Container
47 |
48 | ```
49 | sudo docker run -it --privileged --device /dev/video0:/dev/video0 ajeetraina/pico-producer-rpi python3 producer_camera.py
50 | ```
51 |
52 | # Verify that it is running fine
53 |
54 | ```
55 | CONTAINER ID   IMAGE                          COMMAND                  CREATED         STATUS         PORTS   NAMES
56 | 81891d992daf   ajeetraina/pico-producer-rpi   "/usr/bin/entry.sh p…"   9 minutes ago   Up 9 minutes           jolly_dewdney
57 |
58 | ```
59 |
60 | # Setting up Kafka Cluster on Cloud Platform
61 |
62 | ## Running Kafka on Swarm Cluster on AWS
63 |
64 | In order to run Kafka on AWS, you need t2.medium instances, which don't fall under the Free Tier. You will need to use your free credits or pay for the usage. Alternatively, for development purposes where performance is not a concern, you can use GCP instances.
65 |
66 | I assume that you have Docker and Docker Compose installed on a multi-node Swarm Mode cluster.
67 |
68 | ### Cloning the Repository
69 |
70 | ```
71 | git clone https://github.com/collabnix/pico
72 | cd pico/kafka
73 | ```
74 |
75 | ```
76 | docker stack deploy -c docker-compose.yml mykafka
77 | ```
78 |
79 | That's it. Your Kafka cluster is now up and running on the Docker Swarm nodes.
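To confirm that the stack services came up, you can list them (assuming you named the stack `mykafka` as above):

```
docker stack services mykafka
```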
80 |
81 | # Running the Consumer Script
82 |
83 | To run the consumer script, we need to focus on two files:
84 | - consumer.py
85 | - image_processor.py
86 |
87 | In image_processor.py you need to add the access key details of your AWS account, and in consumer.py you need to point the broker list at the correct Kafka cluster IP, as shown below.
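For reference, these are the only values that need to change (the keys and broker address shown here are placeholders, not working credentials):

```
# image_processor.py
AWS_ACCESS_KEY_ID = 'XXXXXXXXXXXXXXXXXXXXXXXXXX'     # your AWS access key
AWS_SECRET_ACCESS_KEY = 'XXXXXXXXXXXXXXXXXXXXXXXX'   # your AWS secret key
brokers = ["<KAFKA_CLUSTER_IP>:9092"]                # IP of your Kafka cluster

# consumer.py
brokers = ["<KAFKA_CLUSTER_IP>:9092"]                # same Kafka cluster IP
```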
88 |
89 |
90 | Here you go.
91 |
92 | Just place any object in front of the camera module and it will automatically detect the object and tag it with the object type.
93 |
94 | ## Governance
95 |
96 | This project was incubated by Ajeet Singh Raina, [Docker Captain & Docker Community Leader](https://www.docker.com/captains/ajeet-singh-raina), and Avinash Bendigeri (Data Science Engineer).
97 |
98 | ## Getting Started - The Hard Way
99 |
100 | - Stage I - [Installing Docker on Raspberry Pi](https://github.com/collabnix/pico/tree/master/getting-started)
101 | - Stage II - [Turn Your Raspberry Pi into a Night Surveillance Camera using Docker](http://collabnix.com/turn-your-raspberry-pi-into-low-cost-cctv-surveillance-camerawith-night-vision-in-5-minutes-using-docker/)
102 | - Stage III - [Deploy Apache Kafka on AWS Platform using Docker Swarm](https://github.com/collabnix/pico/blob/master/kafka/README.md)
103 | - Stage IV - [Pushing the video frame from Raspberry Pi to Apache Kafka](https://github.com/collabnix/pico/blob/master/kafka/producer-consumer.md)
104 | - Stage V - [Preparing AWS Lambda Deployment Package in Python & Testing Kafka Connect AWS Lambda Connector](https://github.com/collabnix/pico/blob/master/lambda/README.md)
105 |
106 |
107 |
108 |
--------------------------------------------------------------------------------
/_config.yml:
--------------------------------------------------------------------------------
1 | theme: jekyll-theme-cayman
--------------------------------------------------------------------------------
/deployment/README.md:
--------------------------------------------------------------------------------
1 | # Deployment scripts for Pico
2 |
--------------------------------------------------------------------------------
/deployment/objects/.ipynb_checkpoints/Untitled-checkpoint.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [],
3 | "metadata": {},
4 | "nbformat": 4,
5 | "nbformat_minor": 2
6 | }
7 |
--------------------------------------------------------------------------------
/deployment/objects/.ipynb_checkpoints/aws_objects-checkpoint.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [],
3 | "metadata": {},
4 | "nbformat": 4,
5 | "nbformat_minor": 2
6 | }
7 |
--------------------------------------------------------------------------------
/deployment/objects/.ipynb_checkpoints/consumer-checkpoint.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "code",
5 | "execution_count": null,
6 | "metadata": {
7 | "collapsed": true
8 | },
9 | "outputs": [],
10 | "source": [
11 | "import datetime\n",
12 | "from flask import Flask, Response, render_template\n",
13 | "from kafka import KafkaConsumer\n",
14 | "import json\n",
15 | "import base64\n",
16 | "\n",
17 | "# Fire up the Kafka Consumer\n",
18 | "camera_topic_1 = \"camera1\"\n",
19 | "camera_topic_2 = \"camera2\"\n",
20 | "camera_topic_3 = \"camera3\"\n",
21 | "brokers = [\"35.189.130.4:9092\"]\n",
22 | "\n",
23 | "camera1 = KafkaConsumer(\n",
24 | " camera_topic_1, \n",
25 | " bootstrap_servers=brokers,\n",
26 | " value_deserializer=lambda m: json.loads(m.decode('utf-8')))\n",
27 | "\n",
28 | "camera2 = KafkaConsumer(\n",
29 | " camera_topic_2, \n",
30 | " bootstrap_servers=brokers,\n",
31 | " value_deserializer=lambda m: json.loads(m.decode('utf-8')))\n",
32 | "\n",
33 | "\n",
34 | "camera3 = KafkaConsumer(\n",
35 | " camera_topic_3, \n",
36 | " bootstrap_servers=brokers,\n",
37 | " value_deserializer=lambda m: json.loads(m.decode('utf-8')))\n",
38 | "\n",
39 | "\n",
40 | "# Set the consumer in a Flask App\n",
41 | "app = Flask(__name__)\n",
42 | "\n",
43 | "@app.route('/')\n",
44 | "def index():\n",
45 | " return render_template('index.html')\n",
46 | "\n",
47 | "@app.route('/camera_1', methods=['GET'])\n",
48 | "def camera_1():\n",
49 | " id=5\n",
50 | " \"\"\"\n",
51 | " This is the heart of our video display. Notice we set the mimetype to \n",
52 | " multipart/x-mixed-replace. This tells Flask to replace any old images with \n",
53 | " new values streaming through the pipeline.\n",
54 | " \"\"\"\n",
55 | " return Response(\n",
56 | " getCamera1(), \n",
57 | " mimetype='multipart/x-mixed-replace; boundary=frame')\n",
58 | "\n",
59 | "@app.route('/camera_2', methods=['GET'])\n",
60 | "def camera_2():\n",
61 | " id=6\n",
62 | " \"\"\"\n",
63 | " This is the heart of our video display. Notice we set the mimetype to \n",
64 | " multipart/x-mixed-replace. This tells Flask to replace any old images with \n",
65 | " new values streaming through the pipeline.\n",
66 | " \"\"\"\n",
67 | " return Response(\n",
68 | " getCamera2(), \n",
69 | " mimetype='multipart/x-mixed-replace; boundary=frame')\n",
70 | "\n",
71 | "\n",
72 | "@app.route('/camera_3', methods=['GET'])\n",
73 | "def camera_3():\n",
74 | " id=8\n",
75 | " \"\"\"\n",
76 | " This is the heart of our video display. Notice we set the mimetype to \n",
77 | " multipart/x-mixed-replace. This tells Flask to replace any old images with \n",
78 | " new values streaming through the pipeline.\n",
79 | " \"\"\"\n",
80 | " return Response(\n",
81 | " getCamera3(), \n",
82 | " mimetype='multipart/x-mixed-replace; boundary=frame')\n",
83 | "\n",
84 | "def getCamera1():\n",
85 | " \"\"\"\n",
86 | " Here is where we recieve streamed images from the Kafka Server and convert \n",
87 | " them to a Flask-readable format.\n",
88 | " \"\"\"\n",
89 | " for msg in camera1:\n",
90 | " yield (b'--frame\\r\\n'\n",
91 | " b'Content-Type: image/jpg\\r\\n\\r\\n' + base64.b64decode(msg.value['image_bytes']) + b'\\r\\n\\r\\n')\n",
92 | "\n",
93 | "def getCamera2():\n",
94 | " \"\"\"\n",
95 | " Here is where we recieve streamed images from the Kafka Server and convert \n",
96 | " them to a Flask-readable format.\n",
97 | " \"\"\"\n",
98 | " for msg in camera2:\n",
99 | " yield (b'--frame\\r\\n'\n",
100 | " b'Content-Type: image/jpg\\r\\n\\r\\n' + base64.b64decode(msg.value['image_bytes']) + b'\\r\\n\\r\\n')\n",
101 | " \n",
102 | " \n",
103 | "def getCamera3():\n",
104 | " \"\"\"\n",
105 | " Here is where we recieve streamed images from the Kafka Server and convert \n",
106 | " them to a Flask-readable format.\n",
107 | " \"\"\"\n",
108 | " for msg in camera3:\n",
109 | " yield (b'--frame\\r\\n'\n",
110 | " b'Content-Type: image/jpg\\r\\n\\r\\n' + base64.b64decode(msg.value['image_bytes']) + b'\\r\\n\\r\\n') \n",
111 | " \n",
112 | "if __name__ == \"__main__\":\n",
113 | " app.run(host='0.0.0.0', debug=True)"
114 | ]
115 | }
116 | ],
117 | "metadata": {
118 | "kernelspec": {
119 | "display_name": "Python 3",
120 | "language": "python",
121 | "name": "python3"
122 | },
123 | "language_info": {
124 | "codemirror_mode": {
125 | "name": "ipython",
126 | "version": 3
127 | },
128 | "file_extension": ".py",
129 | "mimetype": "text/x-python",
130 | "name": "python",
131 | "nbconvert_exporter": "python",
132 | "pygments_lexer": "ipython3",
133 | "version": "3.6.2"
134 | }
135 | },
136 | "nbformat": 4,
137 | "nbformat_minor": 2
138 | }
139 |
--------------------------------------------------------------------------------
/deployment/objects/.ipynb_checkpoints/image_processor-checkpoint.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "code",
5 | "execution_count": null,
6 | "metadata": {},
7 | "outputs": [
8 | {
9 | "name": "stdout",
10 | "output_type": "stream",
11 | "text": [
12 | "Found a Box of Glasses\n",
13 | "Found a Box of Person\n",
14 | "camera1\n",
15 | "camera1\n",
16 | "Found a Box of Person\n",
17 | "Found a Box of Glasses\n",
18 | "camera1\n",
19 | "camera1\n",
20 | "Found a Box of Glasses\n",
21 | "Found a Box of Person\n",
22 | "camera1\n",
23 | "camera1\n",
24 | "Found a Box of Glasses\n",
25 | "Found a Box of Person\n",
26 | "camera1\n",
27 | "camera1\n",
28 | "Found a Box of Glasses\n",
29 | "Found a Box of Person\n",
30 | "camera1\n",
31 | "camera1\n",
32 | "Found a Box of Glasses\n",
33 | "Found a Box of Person\n",
34 | "camera1\n",
35 | "camera1\n",
36 | "Found a Box of Person\n",
37 | "Found a Box of Glasses\n",
38 | "camera1\n",
39 | "camera1\n",
40 | "Found a Box of Glasses\n",
41 | "Found a Box of Person\n",
42 | "camera1\n",
43 | "camera1\n",
44 | "Found a Box of Glasses\n",
45 | "Found a Box of Person\n",
46 | "camera1\n",
47 | "camera1\n",
48 | "Found a Box of Glasses\n",
49 | "Found a Box of Person\n",
50 | "camera1\n",
51 | "camera1\n",
52 | "Found a Box of Person\n",
53 | "Found a Box of Glasses\n",
54 | "camera1\n",
55 | "camera1\n",
56 | "Found a Box of Glasses\n",
57 | "Found a Box of Person\n",
58 | "camera1\n",
59 | "camera1\n",
60 | "Found a Box of Glasses\n",
61 | "Found a Box of Person\n",
62 | "camera1\n",
63 | "camera1\n",
64 | "Found a Box of Person\n",
65 | "Found a Box of Glasses\n",
66 | "camera1\n",
67 | "camera1\n",
68 | "Found a Box of Person\n",
69 | "Found a Box of Glasses\n",
70 | "camera1\n",
71 | "camera1\n",
72 | "Found a Box of Person\n",
73 | "Found a Box of Glasses\n",
74 | "camera1\n",
75 | "camera1\n",
76 | "Found a Box of Person\n",
77 | "Found a Box of Glasses\n",
78 | "camera1\n",
79 | "camera1\n",
80 | "Found a Box of Glasses\n",
81 | "Found a Box of Person\n",
82 | "camera1\n",
83 | "camera1\n",
84 | "Found a Box of Glasses\n",
85 | "Found a Box of Person\n",
86 | "camera1\n",
87 | "camera1\n",
88 | "Found a Box of Person\n",
89 | "Found a Box of Glasses\n",
90 | "camera1\n",
91 | "camera1\n",
92 | "Found a Box of Person\n",
93 | "Found a Box of Glasses\n",
94 | "camera1\n",
95 | "camera1\n",
96 | "Found a Box of Person\n",
97 | "Found a Box of Glasses\n",
98 | "camera1\n",
99 | "camera1\n",
100 | "Found a Box of Glasses\n",
101 | "Found a Box of Person\n",
102 | "camera1\n",
103 | "camera1\n",
104 | "Found a Box of Person\n",
105 | "Found a Box of Glasses\n",
106 | "camera1\n",
107 | "camera1\n",
108 | "Found a Box of Person\n",
109 | "Found a Box of Glasses\n",
110 | "camera1\n",
111 | "camera1\n",
112 | "Found a Box of Person\n",
113 | "Found a Box of Glasses\n",
114 | "camera1\n",
115 | "camera1\n",
116 | "Found a Box of Glasses\n",
117 | "Found a Box of Person\n",
118 | "camera1\n",
119 | "camera1\n",
120 | "Found a Box of Glasses\n",
121 | "Found a Box of Person\n",
122 | "camera1\n",
123 | "camera1\n",
124 | "Found a Box of Person\n",
125 | "Found a Box of Glasses\n",
126 | "camera1\n",
127 | "camera1\n",
128 | "Found a Box of Person\n",
129 | "Found a Box of Glasses\n",
130 | "camera1\n",
131 | "camera1\n",
132 | "Found a Box of Glasses\n",
133 | "Found a Box of Person\n",
134 | "camera1\n",
135 | "camera1\n",
136 | "Found a Box of Person\n",
137 | "Found a Box of Glasses\n",
138 | "camera1\n",
139 | "camera1\n",
140 | "Found a Box of Glasses\n",
141 | "Found a Box of Person\n",
142 | "camera1\n",
143 | "camera1\n"
144 | ]
145 | }
146 | ],
147 | "source": [
148 | "import boto3\n",
149 | "import json\n",
150 | "import cv2\n",
151 | "import decimal\n",
152 | "from copy import deepcopy\n",
153 | "\n",
154 | "from __future__ import print_function\n",
155 | "import base64\n",
156 | "import datetime\n",
157 | "import time\n",
158 | "import decimal\n",
159 | "import uuid\n",
160 | "import json\n",
161 | "import boto3\n",
162 | "import pytz\n",
163 | "from pytz import timezone\n",
164 | "from copy import deepcopy\n",
165 | "\n",
166 | "from PIL import Image, ImageDraw, ExifTags, ImageColor, ImageFont\n",
167 | "\n",
168 | "import datetime\n",
169 | "from kafka import KafkaConsumer, KafkaProducer\n",
170 | "import boto3\n",
171 | "import json\n",
172 | "import base64\n",
173 | "import io\n",
174 | "\n",
175 | "# Fire up the Kafka Consumer\n",
176 | "topic = \"image-pool\"\n",
177 | "brokers = [\"35.221.215.135:9092\"]\n",
178 | "\n",
179 | "consumer = KafkaConsumer(\n",
180 | " topic, \n",
181 | " bootstrap_servers=brokers,\n",
182 | " value_deserializer=lambda m: json.loads(m.decode('utf-8')))\n",
183 | "\n",
184 | "\n",
185 | "# In[18]:\n",
186 | "\n",
187 | "producer = KafkaProducer(bootstrap_servers=brokers,\n",
188 | " value_serializer=lambda v: json.dumps(v).encode('utf-8'))\n",
189 | "\n",
190 | "\n",
191 | "AWS_ACCESS_KEY_ID = 'XXXXXXXXXXXXXXXXXXXXXXXXXX'\n",
192 | "AWS_SECRET_ACCESS_KEY = 'XXXXXXXXXXXXXXXXXXXXXXXX'\n",
193 | "\n",
194 | "\n",
195 | "session = boto3.session.Session(aws_access_key_id = AWS_ACCESS_KEY_ID,\n",
196 | " aws_secret_access_key = AWS_SECRET_ACCESS_KEY,\n",
197 | " region_name='us-west-2')\n",
198 | "\n",
199 | "\n",
200 | "def load_config():\n",
201 | " '''Load configuration from file.'''\n",
202 | " with open('image-processor.json', 'r') as conf_file:\n",
203 | " conf_json = conf_file.read()\n",
204 | " return json.loads(conf_json)\n",
205 | "\n",
206 | "#Load config\n",
207 | "config = load_config()\n",
208 | "\n",
209 | "def start_processor():\n",
210 | " \n",
211 | " while True:\n",
212 | " \n",
213 | "# raw_frame_messages = consumer.poll(timeout_ms=10, max_records=10)\n",
214 | " raw_frame_messages = consumer.poll()\n",
215 | " \n",
216 | " for topic_partition, msgs in raw_frame_messages.items():\n",
217 | " for msg in msgs:\n",
218 | "\n",
219 | " camera_data = {}\n",
220 | "\n",
221 | " img_bytes = base64.b64decode(msg.value['image_bytes'])\n",
222 | "\n",
223 | " camera_topic = \"camera\"+str(msg.value['camera_id'])\n",
224 | "\n",
225 | " stream = io.BytesIO(img_bytes)\n",
226 | " image=Image.open(stream)\n",
227 | "\n",
228 | " imgWidth, imgHeight = image.size \n",
229 | " draw = ImageDraw.Draw(image) \n",
230 | "\n",
231 | " rekog_client = session.client('rekognition')\n",
232 | " rekog_max_labels = config[\"rekog_max_labels\"]\n",
233 | " rekog_min_conf = float(config[\"rekog_min_conf\"])\n",
234 | "\n",
235 | " label_watch_list = config[\"label_watch_list\"]\n",
236 | " label_watch_min_conf = float(config[\"label_watch_min_conf\"])\n",
237 | " label_watch_phone_num = config.get(\"label_watch_phone_num\", \"\")\n",
238 | " label_watch_sns_topic_arn = config.get(\"label_watch_sns_topic_arn\", \"\")\n",
239 | "\n",
240 | " rekog_response = rekog_client.detect_labels(\n",
241 | " Image={\n",
242 | " 'Bytes': img_bytes\n",
243 | " },\n",
244 | " MaxLabels=rekog_max_labels,\n",
245 | " MinConfidence=rekog_min_conf\n",
246 | " )\n",
247 | "\n",
248 | " boxes = []\n",
249 | " objects = []\n",
250 | " confidence = []\n",
251 | "\n",
252 | " for label in rekog_response['Labels']:\n",
253 | "\n",
254 | " for instance in label['Instances']:\n",
255 | "\n",
256 | " if(instance['BoundingBox']['Top'] > 0):\n",
257 | "\n",
258 | " print(\"Found a Box of {}\".format(label['Name']))\n",
259 | "\n",
260 | " top = imgHeight * instance['BoundingBox']['Top']\n",
261 | " left = imgWidth * instance['BoundingBox']['Left']\n",
262 | " width = imgWidth * instance['BoundingBox']['Width']\n",
263 | " height = imgHeight * instance['BoundingBox']['Height']\n",
264 | "\n",
265 | " boxes.append([top,left,width,height])\n",
266 | "\n",
267 | " objects.append(label['Name'])\n",
268 | "\n",
269 | " confidence.append(label['Confidence']) \n",
270 | "\n",
271 | "\n",
272 | " for i, box in enumerate(boxes):\n",
273 | "\n",
274 | " top = box[0]\n",
275 | " left = box[1]\n",
276 | " width = box[2]\n",
277 | " height = box[3]\n",
278 | "\n",
279 | "\n",
280 | " points = (\n",
281 | " (left,top),\n",
282 | " (left + width, top),\n",
283 | " (left + width, top + height),\n",
284 | " (left , top + height),\n",
285 | " (left, top)\n",
286 | "\n",
287 | " )\n",
288 | "\n",
289 | " font = ImageFont.truetype(\"arial.ttf\", 25)\n",
290 | " draw.line(points, fill='#00d400', width=3)\n",
291 | "\n",
292 | " label = str(objects[i])+\":\"+str(confidence[i])\n",
293 | " color = 'rgb(255,255,0)' # white color\n",
294 | " draw.text((left, top - 25), label, fill=color,font=font)\n",
295 | "\n",
296 | "\n",
297 | " imgByteArr = io.BytesIO()\n",
298 | " image.save(imgByteArr, format=image.format)\n",
299 | " imgByteArr = imgByteArr.getvalue()\n",
300 | "\n",
301 | "\n",
302 | " camera_data['image_bytes'] = base64.b64encode(imgByteArr).decode('utf-8')\n",
303 | "\n",
304 | " # print(camera_topic)\n",
305 | "\n",
306 | " producer.send(camera_topic,camera_data)\n",
307 | " \n",
308 | "\n",
309 | "if __name__ == \"__main__\":\n",
310 | " start_processor()\n"
311 | ]
312 | },
313 | {
314 | "cell_type": "code",
315 | "execution_count": null,
316 | "metadata": {
317 | "collapsed": true
318 | },
319 | "outputs": [],
320 | "source": []
321 | }
322 | ],
323 | "metadata": {
324 | "kernelspec": {
325 | "display_name": "Python 3",
326 | "language": "python",
327 | "name": "python3"
328 | },
329 | "language_info": {
330 | "codemirror_mode": {
331 | "name": "ipython",
332 | "version": 3
333 | },
334 | "file_extension": ".py",
335 | "mimetype": "text/x-python",
336 | "name": "python",
337 | "nbconvert_exporter": "python",
338 | "pygments_lexer": "ipython3",
339 | "version": "3.6.2"
340 | }
341 | },
342 | "nbformat": 4,
343 | "nbformat_minor": 2
344 | }
345 |
--------------------------------------------------------------------------------
/deployment/objects/.ipynb_checkpoints/producer_camera-checkpoint.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "code",
5 | "execution_count": null,
6 | "metadata": {
7 | "collapsed": true
8 | },
9 | "outputs": [],
10 | "source": [
11 | "import sys\n",
12 | "import time\n",
13 | "import cv2\n",
14 | "import json\n",
15 | "import decimal\n",
16 | "\n",
17 | "\n",
18 | "import pytz\n",
19 | "from pytz import timezone\n",
20 | "import datetime\n",
21 | "\n",
22 | "\n",
23 | "from kafka import KafkaProducer\n",
24 | "from kafka.errors import KafkaError\n",
25 | "import base64\n",
26 | "\n",
27 | "topic = \"image-pool\"\n",
28 | "brokers = [\"35.221.215.135:9092\"]\n",
29 | "\n",
30 | "camera_data = {'camera_id':'1',\n",
31 | " 'position':'frontspace',\n",
32 | " 'image_bytes':'123'}\n",
33 | "\n",
34 | "\n",
35 | "# In[18]:\n",
36 | "\n",
37 | "\n",
38 | "def convert_ts(ts, config):\n",
39 | " '''Converts a timestamp to the configured timezone. Returns a localized datetime object.'''\n",
40 | " #lambda_tz = timezone('US/Pacific')\n",
41 | " tz = timezone(config['timezone'])\n",
42 | " utc = pytz.utc\n",
43 | "\n",
44 | " utc_dt = utc.localize(datetime.datetime.utcfromtimestamp(ts))\n",
45 | "\n",
46 | " localized_dt = utc_dt.astimezone(tz)\n",
47 | "\n",
48 | " return localized_dt\n",
49 | "\n",
50 | "\n",
51 | "def publish_camera():\n",
52 | " \"\"\"\n",
53 | " Publish camera video stream to specified Kafka topic.\n",
54 | " Kafka Server is expected to be running on the localhost. Not partitioned.\n",
55 | " \"\"\"\n",
56 | "\n",
57 | " # Start up producer\n",
58 | "\n",
59 | "\n",
60 | " producer = KafkaProducer(bootstrap_servers=brokers,\n",
61 | " value_serializer=lambda v: json.dumps(v).encode('utf-8'))\n",
62 | "\n",
63 | " camera = cv2.VideoCapture(0)\n",
64 | "\n",
65 | " framecount = 0\n",
66 | "\n",
67 | " try:\n",
68 | " while(True):\n",
69 | "\n",
70 | " success, frame = camera.read()\n",
71 | "\n",
72 | " utc_dt = pytz.utc.localize(datetime.datetime.now())\n",
73 | " now_ts_utc = (utc_dt - datetime.datetime(1970, 1, 1, tzinfo=pytz.utc)).total_seconds()\n",
74 | "\n",
75 | " ret, buffer = cv2.imencode('.jpg', frame)\n",
76 | "\n",
77 | " camera_data['image_bytes'] = base64.b64encode(buffer).decode('utf-8')\n",
78 | "\n",
79 | " camera_data['frame_count'] = str(framecount)\n",
80 | "\n",
81 | " camera_data['capture_time'] = str(now_ts_utc)\n",
82 | "\n",
83 | " producer.send(topic, camera_data)\n",
84 | "\n",
85 | " framecount = framecount + 1\n",
86 | "\n",
87 | " # Choppier stream, reduced load on processor\n",
88 | " time.sleep(0.002)\n",
89 | "\n",
90 | "\n",
91 | " except Exception as e:\n",
92 | " print((e))\n",
93 | " print(\"\\nExiting.\")\n",
94 | " sys.exit(1)\n",
95 | "\n",
96 | "\n",
97 | " camera.release()\n",
98 | " producer.close()\n",
99 | "\n",
100 | "\n",
101 | "\n",
102 | "# In[19]:\n",
103 | "\n",
104 | "\n",
105 | "if __name__ == \"__main__\":\n",
106 | " publish_camera()"
107 | ]
108 | },
109 | {
110 | "cell_type": "code",
111 | "execution_count": null,
112 | "metadata": {
113 | "collapsed": true
114 | },
115 | "outputs": [],
116 | "source": []
117 | },
118 | {
119 | "cell_type": "code",
120 | "execution_count": null,
121 | "metadata": {
122 | "collapsed": true
123 | },
124 | "outputs": [],
125 | "source": []
126 | },
127 | {
128 | "cell_type": "code",
129 | "execution_count": null,
130 | "metadata": {
131 | "collapsed": true
132 | },
133 | "outputs": [],
134 | "source": []
135 | }
136 | ],
137 | "metadata": {
138 | "kernelspec": {
139 | "display_name": "Python 3",
140 | "language": "python",
141 | "name": "python3"
142 | },
143 | "language_info": {
144 | "codemirror_mode": {
145 | "name": "ipython",
146 | "version": 3
147 | },
148 | "file_extension": ".py",
149 | "mimetype": "text/x-python",
150 | "name": "python",
151 | "nbconvert_exporter": "python",
152 | "pygments_lexer": "ipython3",
153 | "version": "3.6.2"
154 | }
155 | },
156 | "nbformat": 4,
157 | "nbformat_minor": 2
158 | }
159 |
--------------------------------------------------------------------------------
/deployment/objects/Dockerfile:
--------------------------------------------------------------------------------
1 | FROM ajeetraina/opencv4-python3
2 | RUN apt update
3 | RUN mkdir -p /project/
4 | COPY image_processor.py /project/
5 | COPY image-processor.json /project/
6 | COPY consumer.py /project/
7 | COPY templates /project/
8 | COPY templates/index.html /project/templates/index.html
9 | COPY pico-consumer.sh /project/
10 | WORKDIR /project/
11 | RUN pip3 install pytz boto3 pillow
12 | CMD ["pico-consumer.sh"]
13 | ENTRYPOINT ["/bin/sh"]
14 |
--------------------------------------------------------------------------------
/deployment/objects/Untitled.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "code",
5 | "execution_count": 1,
6 | "metadata": {
7 | "collapsed": true
8 | },
9 | "outputs": [],
10 | "source": [
11 | "from kafka import KafkaProducer"
12 | ]
13 | },
14 | {
15 | "cell_type": "code",
16 | "execution_count": 2,
17 | "metadata": {
18 | "collapsed": true
19 | },
20 | "outputs": [],
21 | "source": [
22 | "producer = KafkaProducer(bootstrap_servers=['35.221.215.135:9092'])"
23 | ]
24 | },
25 | {
26 | "cell_type": "code",
27 | "execution_count": 8,
28 | "metadata": {},
29 | "outputs": [],
30 | "source": [
31 | "var = \"camera\"+str(1)\n",
32 | "\n",
33 | "\n",
34 | "for i in range(0,100):\n",
35 | " producer.send(topic=var,value=b'raw_bytes')"
36 | ]
37 | },
38 | {
39 | "cell_type": "code",
40 | "execution_count": null,
41 | "metadata": {
42 | "collapsed": true
43 | },
44 | "outputs": [],
45 | "source": []
46 | }
47 | ],
48 | "metadata": {
49 | "kernelspec": {
50 | "display_name": "Python 3",
51 | "language": "python",
52 | "name": "python3"
53 | },
54 | "language_info": {
55 | "codemirror_mode": {
56 | "name": "ipython",
57 | "version": 3
58 | },
59 | "file_extension": ".py",
60 | "mimetype": "text/x-python",
61 | "name": "python",
62 | "nbconvert_exporter": "python",
63 | "pygments_lexer": "ipython3",
64 | "version": "3.6.2"
65 | }
66 | },
67 | "nbformat": 4,
68 | "nbformat_minor": 2
69 | }
70 |
--------------------------------------------------------------------------------
/deployment/objects/consumer.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "code",
5 | "execution_count": null,
6 | "metadata": {
7 | "collapsed": true
8 | },
9 | "outputs": [],
10 | "source": [
11 | "import datetime\n",
12 | "from flask import Flask, Response, render_template\n",
13 | "from kafka import KafkaConsumer\n",
14 | "import json\n",
15 | "import base64\n",
16 | "\n",
17 | "# Fire up the Kafka Consumer\n",
18 | "camera_topic_1 = \"camera1\"\n",
19 | "camera_topic_2 = \"camera2\"\n",
20 | "camera_topic_3 = \"camera3\"\n",
21 | "brokers = [\"35.189.130.4:9092\"]\n",
22 | "\n",
23 | "camera1 = KafkaConsumer(\n",
24 | " camera_topic_1, \n",
25 | " bootstrap_servers=brokers,\n",
26 | " value_deserializer=lambda m: json.loads(m.decode('utf-8')))\n",
27 | "\n",
28 | "camera2 = KafkaConsumer(\n",
29 | " camera_topic_2, \n",
30 | " bootstrap_servers=brokers,\n",
31 | " value_deserializer=lambda m: json.loads(m.decode('utf-8')))\n",
32 | "\n",
33 | "\n",
34 | "camera3 = KafkaConsumer(\n",
35 | " camera_topic_3, \n",
36 | " bootstrap_servers=brokers,\n",
37 | " value_deserializer=lambda m: json.loads(m.decode('utf-8')))\n",
38 | "\n",
39 | "\n",
40 | "# Set the consumer in a Flask App\n",
41 | "app = Flask(__name__)\n",
42 | "\n",
43 | "@app.route('/')\n",
44 | "def index():\n",
45 | " return render_template('index.html')\n",
46 | "\n",
47 | "@app.route('/camera_1', methods=['GET'])\n",
48 | "def camera_1():\n",
49 | " id=5\n",
50 | " \"\"\"\n",
51 | " This is the heart of our video display. Notice we set the mimetype to \n",
52 | " multipart/x-mixed-replace. This tells Flask to replace any old images with \n",
53 | " new values streaming through the pipeline.\n",
54 | " \"\"\"\n",
55 | " return Response(\n",
56 | " getCamera1(), \n",
57 | " mimetype='multipart/x-mixed-replace; boundary=frame')\n",
58 | "\n",
59 | "@app.route('/camera_2', methods=['GET'])\n",
60 | "def camera_2():\n",
61 | " id=6\n",
62 | " \"\"\"\n",
63 | " This is the heart of our video display. Notice we set the mimetype to \n",
64 | " multipart/x-mixed-replace. This tells Flask to replace any old images with \n",
65 | " new values streaming through the pipeline.\n",
66 | " \"\"\"\n",
67 | " return Response(\n",
68 | " getCamera2(), \n",
69 | " mimetype='multipart/x-mixed-replace; boundary=frame')\n",
70 | "\n",
71 | "\n",
72 | "@app.route('/camera_3', methods=['GET'])\n",
73 | "def camera_3():\n",
74 | " id=8\n",
75 | " \"\"\"\n",
76 | " This is the heart of our video display. Notice we set the mimetype to \n",
77 | " multipart/x-mixed-replace. This tells Flask to replace any old images with \n",
78 | " new values streaming through the pipeline.\n",
79 | " \"\"\"\n",
80 | " return Response(\n",
81 | " getCamera3(), \n",
82 | " mimetype='multipart/x-mixed-replace; boundary=frame')\n",
83 | "\n",
84 | "def getCamera1():\n",
85 | " \"\"\"\n",
86 | " Here is where we recieve streamed images from the Kafka Server and convert \n",
87 | " them to a Flask-readable format.\n",
88 | " \"\"\"\n",
89 | " for msg in camera1:\n",
90 | " yield (b'--frame\\r\\n'\n",
91 | " b'Content-Type: image/jpg\\r\\n\\r\\n' + base64.b64decode(msg.value['image_bytes']) + b'\\r\\n\\r\\n')\n",
92 | "\n",
93 | "def getCamera2():\n",
94 | " \"\"\"\n",
95 | " Here is where we recieve streamed images from the Kafka Server and convert \n",
96 | " them to a Flask-readable format.\n",
97 | " \"\"\"\n",
98 | " for msg in camera2:\n",
99 | " yield (b'--frame\\r\\n'\n",
100 | " b'Content-Type: image/jpg\\r\\n\\r\\n' + base64.b64decode(msg.value['image_bytes']) + b'\\r\\n\\r\\n')\n",
101 | " \n",
102 | " \n",
103 | "def getCamera3():\n",
104 | " \"\"\"\n",
105 | " Here is where we recieve streamed images from the Kafka Server and convert \n",
106 | " them to a Flask-readable format.\n",
107 | " \"\"\"\n",
108 | " for msg in camera3:\n",
109 | " yield (b'--frame\\r\\n'\n",
110 | " b'Content-Type: image/jpg\\r\\n\\r\\n' + base64.b64decode(msg.value['image_bytes']) + b'\\r\\n\\r\\n') \n",
111 | " \n",
112 | "if __name__ == \"__main__\":\n",
113 | " app.run(host='0.0.0.0', debug=True)"
114 | ]
115 | }
116 | ],
117 | "metadata": {
118 | "kernelspec": {
119 | "display_name": "Python 3",
120 | "language": "python",
121 | "name": "python3"
122 | },
123 | "language_info": {
124 | "codemirror_mode": {
125 | "name": "ipython",
126 | "version": 3
127 | },
128 | "file_extension": ".py",
129 | "mimetype": "text/x-python",
130 | "name": "python",
131 | "nbconvert_exporter": "python",
132 | "pygments_lexer": "ipython3",
133 | "version": "3.6.2"
134 | }
135 | },
136 | "nbformat": 4,
137 | "nbformat_minor": 2
138 | }
139 |
--------------------------------------------------------------------------------
/deployment/objects/consumer.py:
--------------------------------------------------------------------------------
1 |
2 | # coding: utf-8
3 |
4 | # In[ ]:
5 |
6 |
7 | import datetime
8 | from flask import Flask, Response, render_template
9 | from kafka import KafkaConsumer
10 | import json
11 | import base64
12 |
13 | # Fire up the Kafka Consumer
14 | camera_topic_1 = "camera1"
15 | camera_topic_2 = "camera2"
16 | camera_topic_3 = "camera3"
17 | brokers = ["35.221.213.182:9092"]
18 |
19 | camera1 = KafkaConsumer(
20 | camera_topic_1,
21 | bootstrap_servers=brokers,
22 | value_deserializer=lambda m: json.loads(m.decode('utf-8')))
23 |
24 | camera2 = KafkaConsumer(
25 | camera_topic_2,
26 | bootstrap_servers=brokers,
27 | value_deserializer=lambda m: json.loads(m.decode('utf-8')))
28 |
29 |
30 | camera3 = KafkaConsumer(
31 | camera_topic_3,
32 | bootstrap_servers=brokers,
33 | value_deserializer=lambda m: json.loads(m.decode('utf-8')))
34 |
35 |
36 | # Set the consumer in a Flask App
37 | app = Flask(__name__)
38 |
39 | @app.route('/')
40 | def index():
41 | return render_template('index.html')
42 |
43 | @app.route('/camera_1', methods=['GET'])
44 | def camera_1():
45 | id=5
46 | """
47 | This is the heart of our video display. Notice we set the mimetype to
48 | multipart/x-mixed-replace. This tells Flask to replace any old images with
49 | new values streaming through the pipeline.
50 | """
51 | return Response(
52 | getCamera1(),
53 | mimetype='multipart/x-mixed-replace; boundary=frame')
54 |
55 | @app.route('/camera_2', methods=['GET'])
56 | def camera_2():
57 | id=6
58 | """
59 | This is the heart of our video display. Notice we set the mimetype to
60 | multipart/x-mixed-replace. This tells Flask to replace any old images with
61 | new values streaming through the pipeline.
62 | """
63 | return Response(
64 | getCamera2(),
65 | mimetype='multipart/x-mixed-replace; boundary=frame')
66 |
67 |
68 | @app.route('/camera_3', methods=['GET'])
69 | def camera_3():
70 | id=8
71 | """
72 | This is the heart of our video display. Notice we set the mimetype to
73 | multipart/x-mixed-replace. This tells Flask to replace any old images with
74 | new values streaming through the pipeline.
75 | """
76 | return Response(
77 | getCamera3(),
78 | mimetype='multipart/x-mixed-replace; boundary=frame')
79 |
80 | def getCamera1():
81 | """
82 | Here is where we receive streamed images from the Kafka Server and convert
83 | them to a Flask-readable format.
84 | """
85 | for msg in camera1:
86 | yield (b'--frame\r\n'
87 | b'Content-Type: image/jpg\r\n\r\n' + base64.b64decode(msg.value['image_bytes']) + b'\r\n\r\n')
88 |
89 | def getCamera2():
90 | """
91 | Here is where we receive streamed images from the Kafka Server and convert
92 | them to a Flask-readable format.
93 | """
94 | for msg in camera2:
95 | yield (b'--frame\r\n'
96 | b'Content-Type: image/jpg\r\n\r\n' + base64.b64decode(msg.value['image_bytes']) + b'\r\n\r\n')
97 |
98 |
99 | def getCamera3():
100 | """
101 | Here is where we receive streamed images from the Kafka Server and convert
102 | them to a Flask-readable format.
103 | """
104 | for msg in camera3:
105 | yield (b'--frame\r\n'
106 | b'Content-Type: image/jpg\r\n\r\n' + base64.b64decode(msg.value['image_bytes']) + b'\r\n\r\n')
107 |
108 | if __name__ == "__main__":
109 | app.run(host='0.0.0.0', debug=True)
110 |
111 |
--------------------------------------------------------------------------------
/deployment/objects/image-processor.json:
--------------------------------------------------------------------------------
1 | {
2 | "s3_bucket" : "bucketpico",
3 | "s3_key_frames_root" : "frames/",
4 |
5 | "ddb_table" : "EnrichedFrame",
6 |
7 | "rekog_max_labels" : 123,
8 | "rekog_min_conf" : 50.0,
9 |
10 | "label_watch_list" : ["Human", "Pet", "Bag", "Toy"],
11 | "label_watch_min_conf" : 90.0,
12 | "label_watch_phone_num" : "7411763580",
13 | "label_watch_sns_topic_arn" : "mypico",
14 |
15 | "timezone" : "US/Eastern"
16 | }
17 |
--------------------------------------------------------------------------------
/deployment/objects/image_processor.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "code",
5 | "execution_count": null,
6 | "metadata": {},
7 | "outputs": [
8 | {
9 | "name": "stdout",
10 | "output_type": "stream",
11 | "text": [
12 | "Found a Box of Glasses\n",
13 | "Found a Box of Person\n",
14 | "camera1\n",
15 | "camera1\n",
16 | "Found a Box of Person\n",
17 | "Found a Box of Glasses\n",
18 | "camera1\n",
19 | "camera1\n",
20 | "Found a Box of Glasses\n",
21 | "Found a Box of Person\n",
22 | "camera1\n",
23 | "camera1\n",
24 | "Found a Box of Glasses\n",
25 | "Found a Box of Person\n",
26 | "camera1\n",
27 | "camera1\n",
28 | "Found a Box of Glasses\n",
29 | "Found a Box of Person\n",
30 | "camera1\n",
31 | "camera1\n",
32 | "Found a Box of Glasses\n",
33 | "Found a Box of Person\n",
34 | "camera1\n",
35 | "camera1\n",
36 | "Found a Box of Person\n",
37 | "Found a Box of Glasses\n",
38 | "camera1\n",
39 | "camera1\n",
40 | "Found a Box of Glasses\n",
41 | "Found a Box of Person\n",
42 | "camera1\n",
43 | "camera1\n",
44 | "Found a Box of Glasses\n",
45 | "Found a Box of Person\n",
46 | "camera1\n",
47 | "camera1\n",
48 | "Found a Box of Glasses\n",
49 | "Found a Box of Person\n",
50 | "camera1\n",
51 | "camera1\n",
52 | "Found a Box of Person\n",
53 | "Found a Box of Glasses\n",
54 | "camera1\n",
55 | "camera1\n",
56 | "Found a Box of Glasses\n",
57 | "Found a Box of Person\n",
58 | "camera1\n",
59 | "camera1\n",
60 | "Found a Box of Glasses\n",
61 | "Found a Box of Person\n",
62 | "camera1\n",
63 | "camera1\n",
64 | "Found a Box of Person\n",
65 | "Found a Box of Glasses\n",
66 | "camera1\n",
67 | "camera1\n",
68 | "Found a Box of Person\n",
69 | "Found a Box of Glasses\n",
70 | "camera1\n",
71 | "camera1\n",
72 | "Found a Box of Person\n",
73 | "Found a Box of Glasses\n",
74 | "camera1\n",
75 | "camera1\n",
76 | "Found a Box of Person\n",
77 | "Found a Box of Glasses\n",
78 | "camera1\n",
79 | "camera1\n",
80 | "Found a Box of Glasses\n",
81 | "Found a Box of Person\n",
82 | "camera1\n",
83 | "camera1\n",
84 | "Found a Box of Glasses\n",
85 | "Found a Box of Person\n",
86 | "camera1\n",
87 | "camera1\n",
88 | "Found a Box of Person\n",
89 | "Found a Box of Glasses\n",
90 | "camera1\n",
91 | "camera1\n",
92 | "Found a Box of Person\n",
93 | "Found a Box of Glasses\n",
94 | "camera1\n",
95 | "camera1\n",
96 | "Found a Box of Person\n",
97 | "Found a Box of Glasses\n",
98 | "camera1\n",
99 | "camera1\n",
100 | "Found a Box of Glasses\n",
101 | "Found a Box of Person\n",
102 | "camera1\n",
103 | "camera1\n",
104 | "Found a Box of Person\n",
105 | "Found a Box of Glasses\n",
106 | "camera1\n",
107 | "camera1\n",
108 | "Found a Box of Person\n",
109 | "Found a Box of Glasses\n",
110 | "camera1\n",
111 | "camera1\n",
112 | "Found a Box of Person\n",
113 | "Found a Box of Glasses\n",
114 | "camera1\n",
115 | "camera1\n",
116 | "Found a Box of Glasses\n",
117 | "Found a Box of Person\n",
118 | "camera1\n",
119 | "camera1\n",
120 | "Found a Box of Glasses\n",
121 | "Found a Box of Person\n",
122 | "camera1\n",
123 | "camera1\n",
124 | "Found a Box of Person\n",
125 | "Found a Box of Glasses\n",
126 | "camera1\n",
127 | "camera1\n",
128 | "Found a Box of Person\n",
129 | "Found a Box of Glasses\n",
130 | "camera1\n",
131 | "camera1\n",
132 | "Found a Box of Glasses\n",
133 | "Found a Box of Person\n",
134 | "camera1\n",
135 | "camera1\n",
136 | "Found a Box of Person\n",
137 | "Found a Box of Glasses\n",
138 | "camera1\n",
139 | "camera1\n",
140 | "Found a Box of Glasses\n",
141 | "Found a Box of Person\n",
142 | "camera1\n",
143 | "camera1\n"
144 | ]
145 | }
146 | ],
147 | "source": [
148 | "import boto3\n",
149 | "import json\n",
150 | "import cv2\n",
151 | "import decimal\n",
152 | "from copy import deepcopy\n",
153 | "\n",
154 | "from __future__ import print_function\n",
155 | "import base64\n",
156 | "import datetime\n",
157 | "import time\n",
158 | "import decimal\n",
159 | "import uuid\n",
160 | "import json\n",
161 | "import boto3\n",
162 | "import pytz\n",
163 | "from pytz import timezone\n",
164 | "from copy import deepcopy\n",
165 | "\n",
166 | "from PIL import Image, ImageDraw, ExifTags, ImageColor, ImageFont\n",
167 | "\n",
168 | "import datetime\n",
169 | "from kafka import KafkaConsumer, KafkaProducer\n",
170 | "import boto3\n",
171 | "import json\n",
172 | "import base64\n",
173 | "import io\n",
174 | "\n",
175 | "# Fire up the Kafka Consumer\n",
176 | "topic = \"image-pool\"\n",
177 | "brokers = [\"35.221.215.135:9092\"]\n",
178 | "\n",
179 | "consumer = KafkaConsumer(\n",
180 | " topic, \n",
181 | " bootstrap_servers=brokers,\n",
182 | " value_deserializer=lambda m: json.loads(m.decode('utf-8')))\n",
183 | "\n",
184 | "\n",
185 | "# In[18]:\n",
186 | "\n",
187 | "producer = KafkaProducer(bootstrap_servers=brokers,\n",
188 | " value_serializer=lambda v: json.dumps(v).encode('utf-8'))\n",
189 | "\n",
190 | "\n",
191 | "AWS_ACCESS_KEY_ID = 'XXXXXXXXXXXXXXXXXXXXXXXXXX'\n",
192 | "AWS_SECRET_ACCESS_KEY = 'XXXXXXXXXXXXXXXXXXXXXXXX'\n",
193 | "\n",
194 | "\n",
195 | "session = boto3.session.Session(aws_access_key_id = AWS_ACCESS_KEY_ID,\n",
196 | " aws_secret_access_key = AWS_SECRET_ACCESS_KEY,\n",
197 | " region_name='us-west-2')\n",
198 | "\n",
199 | "\n",
200 | "def load_config():\n",
201 | " '''Load configuration from file.'''\n",
202 | " with open('image-processor.json', 'r') as conf_file:\n",
203 | " conf_json = conf_file.read()\n",
204 | " return json.loads(conf_json)\n",
205 | "\n",
206 | "#Load config\n",
207 | "config = load_config()\n",
208 | "\n",
209 | "def start_processor():\n",
210 | " \n",
211 | " while True:\n",
212 | " \n",
213 | "# raw_frame_messages = consumer.poll(timeout_ms=10, max_records=10)\n",
214 | " raw_frame_messages = consumer.poll()\n",
215 | " \n",
216 | " for topic_partition, msgs in raw_frame_messages.items():\n",
217 | " for msg in msgs:\n",
218 | "\n",
219 | " camera_data = {}\n",
220 | "\n",
221 | " img_bytes = base64.b64decode(msg.value['image_bytes'])\n",
222 | "\n",
223 | " camera_topic = \"camera\"+str(msg.value['camera_id'])\n",
224 | "\n",
225 | " stream = io.BytesIO(img_bytes)\n",
226 | " image=Image.open(stream)\n",
227 | "\n",
228 | " imgWidth, imgHeight = image.size \n",
229 | " draw = ImageDraw.Draw(image) \n",
230 | "\n",
231 | " rekog_client = session.client('rekognition')\n",
232 | " rekog_max_labels = config[\"rekog_max_labels\"]\n",
233 | " rekog_min_conf = float(config[\"rekog_min_conf\"])\n",
234 | "\n",
235 | " label_watch_list = config[\"label_watch_list\"]\n",
236 | " label_watch_min_conf = float(config[\"label_watch_min_conf\"])\n",
237 | " label_watch_phone_num = config.get(\"label_watch_phone_num\", \"\")\n",
238 | " label_watch_sns_topic_arn = config.get(\"label_watch_sns_topic_arn\", \"\")\n",
239 | "\n",
240 | " rekog_response = rekog_client.detect_labels(\n",
241 | " Image={\n",
242 | " 'Bytes': img_bytes\n",
243 | " },\n",
244 | " MaxLabels=rekog_max_labels,\n",
245 | " MinConfidence=rekog_min_conf\n",
246 | " )\n",
247 | "\n",
248 | " boxes = []\n",
249 | " objects = []\n",
250 | " confidence = []\n",
251 | "\n",
252 | " for label in rekog_response['Labels']:\n",
253 | "\n",
254 | " for instance in label['Instances']:\n",
255 | "\n",
256 | " if(instance['BoundingBox']['Top'] > 0):\n",
257 | "\n",
258 | " print(\"Found a Box of {}\".format(label['Name']))\n",
259 | "\n",
260 | " top = imgHeight * instance['BoundingBox']['Top']\n",
261 | " left = imgWidth * instance['BoundingBox']['Left']\n",
262 | " width = imgWidth * instance['BoundingBox']['Width']\n",
263 | " height = imgHeight * instance['BoundingBox']['Height']\n",
264 | "\n",
265 | " boxes.append([top,left,width,height])\n",
266 | "\n",
267 | " objects.append(label['Name'])\n",
268 | "\n",
269 | " confidence.append(label['Confidence']) \n",
270 | "\n",
271 | "\n",
272 | " for i, box in enumerate(boxes):\n",
273 | "\n",
274 | " top = box[0]\n",
275 | " left = box[1]\n",
276 | " width = box[2]\n",
277 | " height = box[3]\n",
278 | "\n",
279 | "\n",
280 | " points = (\n",
281 | " (left,top),\n",
282 | " (left + width, top),\n",
283 | " (left + width, top + height),\n",
284 | " (left , top + height),\n",
285 | " (left, top)\n",
286 | "\n",
287 | " )\n",
288 | "\n",
289 | " font = ImageFont.truetype(\"arial.ttf\", 25)\n",
290 | " draw.line(points, fill='#00d400', width=3)\n",
291 | "\n",
292 | " label = str(objects[i])+\":\"+str(confidence[i])\n",
293 | " color = 'rgb(255,255,0)' # white color\n",
294 | " draw.text((left, top - 25), label, fill=color,font=font)\n",
295 | "\n",
296 | "\n",
297 | " imgByteArr = io.BytesIO()\n",
298 | " image.save(imgByteArr, format=image.format)\n",
299 | " imgByteArr = imgByteArr.getvalue()\n",
300 | "\n",
301 | "\n",
302 | " camera_data['image_bytes'] = base64.b64encode(imgByteArr).decode('utf-8')\n",
303 | "\n",
304 | " # print(camera_topic)\n",
305 | "\n",
306 | " producer.send(camera_topic,camera_data)\n",
307 | " \n",
308 | "\n",
309 | "if __name__ == \"__main__\":\n",
310 | " start_processor()\n"
311 | ]
312 | },
313 | {
314 | "cell_type": "code",
315 | "execution_count": null,
316 | "metadata": {
317 | "collapsed": true
318 | },
319 | "outputs": [],
320 | "source": []
321 | }
322 | ],
323 | "metadata": {
324 | "kernelspec": {
325 | "display_name": "Python 3",
326 | "language": "python",
327 | "name": "python3"
328 | },
329 | "language_info": {
330 | "codemirror_mode": {
331 | "name": "ipython",
332 | "version": 3
333 | },
334 | "file_extension": ".py",
335 | "mimetype": "text/x-python",
336 | "name": "python",
337 | "nbconvert_exporter": "python",
338 | "pygments_lexer": "ipython3",
339 | "version": "3.6.2"
340 | }
341 | },
342 | "nbformat": 4,
343 | "nbformat_minor": 2
344 | }
345 |
--------------------------------------------------------------------------------
/deployment/objects/image_processor.json:
--------------------------------------------------------------------------------
1 | {
2 | "s3_bucket" : "bucketpico",
3 | "s3_key_frames_root" : "frames/",
4 |
5 | "ddb_table" : "EnrichedFrame",
6 |
7 | "rekog_max_labels" : 123,
8 | "rekog_min_conf" : 50.0,
9 |
10 | "label_watch_list" : ["Human", "Pet", "Bag", "Toy"],
11 | "label_watch_min_conf" : 90.0,
12 | "label_watch_phone_num" : "7411763580",
13 | "label_watch_sns_topic_arn" : "mypico",
14 |
15 | "timezone" : "US/Eastern"
16 | }
17 |
--------------------------------------------------------------------------------
/deployment/objects/image_processor.py:
--------------------------------------------------------------------------------
1 |
2 | # coding: utf-8
3 |
4 | # In[ ]:
5 | # Boto3 is the Amazon Web Services (AWS) Software Development Kit (SDK) for Python, which allows Python developers to write software that makes use of services like Amazon S3 and Amazon EC2.
6 | # OpenCV is a library of programming functions mainly aimed at real-time computer vision
7 |
8 |
9 | # Standard library
10 | import base64
11 | import datetime
12 | import decimal
13 | import io
14 | import json
15 | import time
16 | import uuid
17 | from copy import deepcopy
18 |
19 | # Third-party: AWS SDK, OpenCV, timezone handling, imaging and the Kafka client
20 | import boto3
21 | import cv2
22 | import pytz
23 | from pytz import timezone
24 | from PIL import Image, ImageDraw, ExifTags, ImageColor, ImageFont
25 | from kafka import KafkaConsumer, KafkaProducer
36 | # Fire up the Kafka Consumer
37 | topic = "image-pool"
38 | brokers = ["35.221.213.182:9092"]
39 |
40 | consumer = KafkaConsumer(
41 | topic,
42 | bootstrap_servers=brokers,
43 | value_deserializer=lambda m: json.loads(m.decode('utf-8')))
44 |
45 |
46 | # In[18]:
47 |
48 | producer = KafkaProducer(bootstrap_servers=brokers,
49 | value_serializer=lambda v: json.dumps(v).encode('utf-8'))
50 |
51 |
52 | AWS_ACCESS_KEY_ID = 'XXXXXXXXXXXXXXXXXXXXXXXXXX'
53 | AWS_SECRET_ACCESS_KEY = 'XXXXXXXXXXXXXXXXXXXXXXXX'
54 |
55 |
56 | session = boto3.session.Session(aws_access_key_id = AWS_ACCESS_KEY_ID,
57 | aws_secret_access_key = AWS_SECRET_ACCESS_KEY,
58 | region_name='us-west-2')
59 |
60 |
61 | def load_config():
62 | '''Load configuration from file.'''
63 | with open('image-processor.json', 'r') as conf_file:
64 | conf_json = conf_file.read()
65 | return json.loads(conf_json)
66 |
67 | #Load config
68 | config = load_config()
69 |
70 | def start_processor():
71 |
72 | while True:
73 |
74 | # raw_frame_messages = consumer.poll(timeout_ms=10, max_records=10)
75 | raw_frame_messages = consumer.poll()
76 |
77 | for topic_partition, msgs in raw_frame_messages.items():
78 | for msg in msgs:
79 |
80 | camera_data = {}
81 |
82 | img_bytes = base64.b64decode(msg.value['image_bytes'])
83 |
84 | camera_topic = "camera"+str(msg.value['camera_id'])
85 |
86 | stream = io.BytesIO(img_bytes)
87 | image=Image.open(stream)
88 |
89 | imgWidth, imgHeight = image.size
90 | draw = ImageDraw.Draw(image)
91 |
92 | rekog_client = session.client('rekognition')
93 | rekog_max_labels = config["rekog_max_labels"]
94 | rekog_min_conf = float(config["rekog_min_conf"])
95 |
96 | label_watch_list = config["label_watch_list"]
97 | label_watch_min_conf = float(config["label_watch_min_conf"])
98 | label_watch_phone_num = config.get("label_watch_phone_num", "")
99 | label_watch_sns_topic_arn = config.get("label_watch_sns_topic_arn", "")
100 |
101 | rekog_response = rekog_client.detect_labels(
102 | Image={
103 | 'Bytes': img_bytes
104 | },
105 | MaxLabels=rekog_max_labels,
106 | MinConfidence=rekog_min_conf
107 | )
108 |
109 | boxes = []
110 | objects = []
111 | confidence = []
112 |
113 | for label in rekog_response['Labels']:
114 |
115 | for instance in label['Instances']:
116 |
117 | if(instance['BoundingBox']['Top'] > 0):
118 |
119 | print("Found a Box of {}".format(label['Name']))
120 |
121 | top = imgHeight * instance['BoundingBox']['Top']
122 | left = imgWidth * instance['BoundingBox']['Left']
123 | width = imgWidth * instance['BoundingBox']['Width']
124 | height = imgHeight * instance['BoundingBox']['Height']
125 |
126 | boxes.append([top,left,width,height])
127 |
128 | objects.append(label['Name'])
129 |
130 | confidence.append(label['Confidence'])
131 |
132 |
133 | for i, box in enumerate(boxes):
134 |
135 | top = box[0]
136 | left = box[1]
137 | width = box[2]
138 | height = box[3]
139 |
140 |
141 | points = (
142 | (left,top),
143 | (left + width, top),
144 | (left + width, top + height),
145 | (left , top + height),
146 | (left, top)
147 |
148 | )
149 |
150 | # font = ImageFont.truetype("arial.ttf", 25)
151 | draw.line(points, fill='#00d400', width=3)
152 |
153 | label = str(objects[i])+":"+str(confidence[i])
154 | color = 'rgb(255,255,0)'  # yellow
155 | draw.text((left, top - 25), label, fill=color)
156 |
157 |
158 | imgByteArr = io.BytesIO()
159 | image.save(imgByteArr, format=image.format)
160 | imgByteArr = imgByteArr.getvalue()
161 |
162 |
163 | camera_data['image_bytes'] = base64.b64encode(imgByteArr).decode('utf-8')
164 |
165 | # print(camera_topic)
166 |
167 | producer.send(camera_topic,camera_data)
168 |
169 |
170 | if __name__ == "__main__":
171 | start_processor()
172 |
173 |
174 | # In[ ]:
175 |
176 |
177 |
178 |
179 |
--------------------------------------------------------------------------------
/deployment/objects/person.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/collabnix/pico/0914b9efe05be859579ab270f3b1c4c91fa8959f/deployment/objects/person.jpg
--------------------------------------------------------------------------------
/deployment/objects/pet.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/collabnix/pico/0914b9efe05be859579ab270f3b1c4c91fa8959f/deployment/objects/pet.jpg
--------------------------------------------------------------------------------
/deployment/objects/pico-consumer.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | set -e
3 | python3 image_processor.py &    # run the Rekognition image processor in the background
4 | exec python3 consumer.py         # run the Flask web consumer in the foreground
5 |
--------------------------------------------------------------------------------
/deployment/objects/producer_camera.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "code",
5 | "execution_count": null,
6 | "metadata": {
7 | "collapsed": true
8 | },
9 | "outputs": [],
10 | "source": [
11 | "import sys\n",
12 | "import time\n",
13 | "import cv2\n",
14 | "import json\n",
15 | "import decimal\n",
16 | "\n",
17 | "\n",
18 | "import pytz\n",
19 | "from pytz import timezone\n",
20 | "import datetime\n",
21 | "\n",
22 | "\n",
23 | "from kafka import KafkaProducer\n",
24 | "from kafka.errors import KafkaError\n",
25 | "import base64\n",
26 | "\n",
27 | "topic = \"image-pool\"\n",
28 | "brokers = [\"35.221.215.135:9092\"]\n",
29 | "\n",
30 | "camera_data = {'camera_id':'1',\n",
31 | " 'position':'frontspace',\n",
32 | " 'image_bytes':'123'}\n",
33 | "\n",
34 | "\n",
35 | "# In[18]:\n",
36 | "\n",
37 | "\n",
38 | "def convert_ts(ts, config):\n",
39 | " '''Converts a timestamp to the configured timezone. Returns a localized datetime object.'''\n",
40 | " #lambda_tz = timezone('US/Pacific')\n",
41 | " tz = timezone(config['timezone'])\n",
42 | " utc = pytz.utc\n",
43 | "\n",
44 | " utc_dt = utc.localize(datetime.datetime.utcfromtimestamp(ts))\n",
45 | "\n",
46 | " localized_dt = utc_dt.astimezone(tz)\n",
47 | "\n",
48 | " return localized_dt\n",
49 | "\n",
50 | "\n",
51 | "def publish_camera():\n",
52 | " \"\"\"\n",
53 | " Publish camera video stream to specified Kafka topic.\n",
54 | " Kafka Server is expected to be running on the localhost. Not partitioned.\n",
55 | " \"\"\"\n",
56 | "\n",
57 | " # Start up producer\n",
58 | "\n",
59 | "\n",
60 | " producer = KafkaProducer(bootstrap_servers=brokers,\n",
61 | " value_serializer=lambda v: json.dumps(v).encode('utf-8'))\n",
62 | "\n",
63 | " camera = cv2.VideoCapture(0)\n",
64 | "\n",
65 | " framecount = 0\n",
66 | "\n",
67 | " try:\n",
68 | " while(True):\n",
69 | "\n",
70 | " success, frame = camera.read()\n",
71 | "\n",
72 | " utc_dt = pytz.utc.localize(datetime.datetime.now())\n",
73 | " now_ts_utc = (utc_dt - datetime.datetime(1970, 1, 1, tzinfo=pytz.utc)).total_seconds()\n",
74 | "\n",
75 | " ret, buffer = cv2.imencode('.jpg', frame)\n",
76 | "\n",
77 | " camera_data['image_bytes'] = base64.b64encode(buffer).decode('utf-8')\n",
78 | "\n",
79 | " camera_data['frame_count'] = str(framecount)\n",
80 | "\n",
81 | " camera_data['capture_time'] = str(now_ts_utc)\n",
82 | "\n",
83 | " producer.send(topic, camera_data)\n",
84 | "\n",
85 | " framecount = framecount + 1\n",
86 | "\n",
87 | " # Choppier stream, reduced load on processor\n",
88 | " time.sleep(0.002)\n",
89 | "\n",
90 | "\n",
91 | " except Exception as e:\n",
92 | " print((e))\n",
93 | " print(\"\\nExiting.\")\n",
94 | " sys.exit(1)\n",
95 | "\n",
96 | "\n",
97 | " camera.release()\n",
98 | " producer.close()\n",
99 | "\n",
100 | "\n",
101 | "\n",
102 | "# In[19]:\n",
103 | "\n",
104 | "\n",
105 | "if __name__ == \"__main__\":\n",
106 | " publish_camera()"
107 | ]
108 | },
109 | {
110 | "cell_type": "code",
111 | "execution_count": null,
112 | "metadata": {
113 | "collapsed": true
114 | },
115 | "outputs": [],
116 | "source": []
117 | },
118 | {
119 | "cell_type": "code",
120 | "execution_count": null,
121 | "metadata": {
122 | "collapsed": true
123 | },
124 | "outputs": [],
125 | "source": []
126 | },
127 | {
128 | "cell_type": "code",
129 | "execution_count": null,
130 | "metadata": {
131 | "collapsed": true
132 | },
133 | "outputs": [],
134 | "source": []
135 | }
136 | ],
137 | "metadata": {
138 | "kernelspec": {
139 | "display_name": "Python 3",
140 | "language": "python",
141 | "name": "python3"
142 | },
143 | "language_info": {
144 | "codemirror_mode": {
145 | "name": "ipython",
146 | "version": 3
147 | },
148 | "file_extension": ".py",
149 | "mimetype": "text/x-python",
150 | "name": "python",
151 | "nbconvert_exporter": "python",
152 | "pygments_lexer": "ipython3",
153 | "version": "3.6.2"
154 | }
155 | },
156 | "nbformat": 4,
157 | "nbformat_minor": 2
158 | }
159 |
--------------------------------------------------------------------------------
/deployment/objects/producer_camera.py:
--------------------------------------------------------------------------------
1 |
2 | # coding: utf-8
3 |
4 | # In[ ]:
5 |
6 |
7 | import sys
8 | import time
9 | import cv2
10 | import json
11 | import decimal
12 |
13 |
14 | import pytz
15 | from pytz import timezone
16 | import datetime
17 |
18 |
19 | from kafka import KafkaProducer
20 | from kafka.errors import KafkaError
21 | import base64
22 |
23 | topic = "image-pool"
24 | brokers = ["35.221.213.182:9092"]
25 |
26 | camera_data = {'camera_id':'1',
27 | 'position':'frontspace',
28 | 'image_bytes':'123'}
29 |
30 |
31 | # In[18]:
32 |
33 |
34 | def convert_ts(ts, config):
35 | '''Converts a timestamp to the configured timezone. Returns a localized datetime object.'''
36 | #lambda_tz = timezone('US/Pacific')
37 | tz = timezone(config['timezone'])
38 | utc = pytz.utc
39 |
40 | utc_dt = utc.localize(datetime.datetime.utcfromtimestamp(ts))
41 |
42 | localized_dt = utc_dt.astimezone(tz)
43 |
44 | return localized_dt
45 |
46 |
47 | def publish_camera():
48 | """
49 | Publish camera video stream to specified Kafka topic.
50 | Kafka Server is expected to be running on the localhost. Not partitioned.
51 | """
52 |
53 | # Start up producer
54 |
55 |
56 | producer = KafkaProducer(bootstrap_servers=brokers,
57 | value_serializer=lambda v: json.dumps(v).encode('utf-8'))
58 |
59 | camera = cv2.VideoCapture(0)
60 |
61 | framecount = 0
62 |
63 | try:
64 | while(True):
65 |
66 | success, frame = camera.read()
67 |
68 | utc_dt = pytz.utc.localize(datetime.datetime.now())
69 | now_ts_utc = (utc_dt - datetime.datetime(1970, 1, 1, tzinfo=pytz.utc)).total_seconds()
70 |
71 | ret, buffer = cv2.imencode('.jpg', frame)
72 |
73 | camera_data['image_bytes'] = base64.b64encode(buffer).decode('utf-8')
74 |
75 | camera_data['frame_count'] = str(framecount)
76 |
77 | camera_data['capture_time'] = str(now_ts_utc)
78 |
79 | producer.send(topic, camera_data)
80 |
81 | framecount = framecount + 1
82 |
83 | # Choppier stream, reduced load on processor
84 | time.sleep(0.002)
85 |
86 |
87 | except Exception as e:
88 | print((e))
89 | print("\nExiting.")
90 | sys.exit(1)
91 |
92 |
93 | camera.release()
94 | producer.close()
95 |
96 |
97 |
98 | # In[19]:
99 |
100 |
101 | if __name__ == "__main__":
102 | publish_camera()
103 |
104 |
105 | # In[ ]:
106 |
107 |
108 |
109 |
110 |
111 | # In[ ]:
112 |
113 |
114 |
115 |
116 |
117 | # In[ ]:
118 |
119 |
120 |
121 |
122 |
--------------------------------------------------------------------------------
/deployment/objects/producer_camera1.py:
--------------------------------------------------------------------------------
1 |
2 | # coding: utf-8
3 |
4 | # In[ ]:
5 |
6 |
7 | import sys
8 | import time
9 | import cv2
10 | import json
11 | import decimal
12 |
13 |
14 | import pytz
15 | from pytz import timezone
16 | import datetime
17 |
18 |
19 | from kafka import KafkaProducer
20 | from kafka.errors import KafkaError
21 | import base64
22 |
23 | topic = "image-pool"
24 | brokers = ["35.221.215.135:9092"]
25 |
26 | camera_data = {'camera_id':'1',
27 | 'position':'frontspace',
28 | 'image_bytes':'123'}
29 |
30 |
31 | # In[18]:
32 |
33 |
34 | def convert_ts(ts, config):
35 | '''Converts a timestamp to the configured timezone. Returns a localized datetime object.'''
36 | #lambda_tz = timezone('US/Pacific')
37 | tz = timezone(config['timezone'])
38 | utc = pytz.utc
39 |
40 | utc_dt = utc.localize(datetime.datetime.utcfromtimestamp(ts))
41 |
42 | localized_dt = utc_dt.astimezone(tz)
43 |
44 | return localized_dt
45 |
46 |
47 | def publish_camera():
48 | """
49 | Publish camera video stream to specified Kafka topic.
50 | Kafka Server is expected to be running on the localhost. Not partitioned.
51 | """
52 |
53 | # Start up producer
54 |
55 |
56 | producer = KafkaProducer(bootstrap_servers=brokers,
57 | value_serializer=lambda v: json.dumps(v).encode('utf-8'))
58 |
59 | camera = cv2.VideoCapture(0)
60 |
61 | framecount = 0
62 |
63 | try:
64 | while(True):
65 |
66 | success, frame = camera.read()
67 |
68 | utc_dt = pytz.utc.localize(datetime.datetime.now())
69 | now_ts_utc = (utc_dt - datetime.datetime(1970, 1, 1, tzinfo=pytz.utc)).total_seconds()
70 |
71 | ret, buffer = cv2.imencode('.jpg', frame)
72 |
73 | camera_data['image_bytes'] = base64.b64encode(buffer).decode('utf-8')
74 |
75 | camera_data['frame_count'] = str(framecount)
76 |
77 | camera_data['capture_time'] = str(now_ts_utc)
78 |
79 | producer.send(topic, camera_data)
80 |
81 | framecount = framecount + 1
82 |
83 | # Choppier stream, reduced load on processor
84 | time.sleep(0.002)
85 |
86 |
87 | except Exception as e:
88 | print((e))
89 | print("\nExiting.")
90 | sys.exit(1)
91 |
92 |
93 | camera.release()
94 | producer.close()
95 |
96 |
97 |
98 | # In[19]:
99 |
100 |
101 | if __name__ == "__main__":
102 | publish_camera()
103 |
104 |
105 | # In[ ]:
106 |
107 |
108 |
109 |
110 |
111 | # In[ ]:
112 |
113 |
114 |
115 |
116 |
117 | # In[ ]:
118 |
119 |
120 |
121 |
122 |
--------------------------------------------------------------------------------
/deployment/objects/rasp_cluster.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/collabnix/pico/0914b9efe05be859579ab270f3b1c4c91fa8959f/deployment/objects/rasp_cluster.jpg
--------------------------------------------------------------------------------
/deployment/objects/templates/index.html:
--------------------------------------------------------------------------------
1 |
2 |
3 | Video Streaming Demonstration
4 |
5 |
6 | Video Streaming Demonstration
7 |
8 |
9 |
10 |
11 |
12 |
13 |
14 |
15 |
16 |
--------------------------------------------------------------------------------
/deployment/plates/Dockerfile:
--------------------------------------------------------------------------------
1 | FROM ajeetraina/rpi-raspbian-opencv
2 | MAINTAINER Ajeet S Raina "ajeetraina@gmail.com"
3 | RUN apt update -y
4 | RUN pip3 install pytz
5 | RUN pip3 install kafka-python
6 | RUN apt install -y git
7 | ADD . /pico/
8 | WORKDIR /pico/
9 | ENTRYPOINT ["python3", "/pico/plates-producer.py" ]
10 |
--------------------------------------------------------------------------------
/deployment/plates/README.md:
--------------------------------------------------------------------------------
1 | # Building Docker Image for Number Plates Detection Service
2 |
3 | ```
4 | docker build -t ajeetraina/pico-raspbi-producer .
5 | ```
6 |
7 | ## Running the Container
8 |
9 | ```
10 | docker run -dit --privileged --device=/dev/vcsm --device=/dev/vchiq -v /dev/video0:/dev/video0 ajeetraina/pico-raspbi-producer
11 | ```
12 |
13 |
14 |
--------------------------------------------------------------------------------
/deployment/plates/plates-consumer.py:
--------------------------------------------------------------------------------
1 |
2 | # coding: utf-8
3 |
4 | # In[17]:
5 |
6 |
7 | import datetime
8 | from kafka import KafkaConsumer
9 | import boto3
10 | import json
11 | import base64
12 |
13 | # Fire up the Kafka Consumer
14 | topic = "testpico"
15 | brokers = ["35.189.130.4:9092"]
16 |
17 | # Initialising Kafka consumer(Lambda) with topic
18 | consumer = KafkaConsumer(
19 | topic,
20 | bootstrap_servers=brokers,
21 | value_deserializer=lambda m: json.loads(m.decode('utf-8')))
22 |
23 |
24 | # In[18]:
25 |
26 | # Initialising AWS session using secret keys
27 | session = boto3.session.Session(aws_access_key_id='XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX',
28 | aws_secret_access_key='XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX',
29 | region_name='us-west-2')
30 |
31 | # Reading every message from the consumer topic (queue) and decoding the image bytes sent by the producer
32 | for msg in consumer:
33 |
34 | img_bytes = base64.b64decode(msg.value['image_bytes'])
35 |
36 |
37 | # Initializing the AWS Rekognition System
38 | rekog_client = session.client('rekognition')
39 |
40 | # Sending the Image byte array to the AWS Rekognition System to detect the text in the image
41 |
42 | response = rekog_client.detect_text(Image={'Bytes':img_bytes})
43 |
44 | # Capturing the text detections from the AWS Rekognition System response
45 |
46 |
47 | textDetections=response['TextDetections']
48 |
49 | for text in textDetections:
50 | print ('Detected text:' + text['DetectedText'])
51 | print ('Confidence: ' + "{:.2f}".format(text['Confidence']) + "%")
52 |
53 | print("#"*50)
54 |
55 |
56 | # In[ ]:
57 |
58 |
59 |
60 |
61 |
--------------------------------------------------------------------------------
/deployment/plates/plates-producer.py:
--------------------------------------------------------------------------------
1 |
2 | # coding: utf-8
3 |
4 | # In[12]:
5 |
6 |
7 | import os
8 | import sys
9 | import time
10 | import cv2
11 | import json
12 | import decimal
13 |
14 |
15 | import pytz
16 | from pytz import timezone
17 | import datetime
18 |
19 |
20 | from kafka import KafkaProducer
21 | from kafka.errors import KafkaError
22 | import base64
23 |
24 | topic = "testpico"
25 | brokers = ["35.189.130.4:9092"]
26 |
27 | imagesfolder = 'images/'
28 |
29 | camera_data = {'camera_id':"1","position":"frontspace","image_bytes":"123"}
30 |
31 | producer = KafkaProducer(bootstrap_servers=brokers,
32 | value_serializer=lambda v: json.dumps(v).encode('utf-8'))
33 |
34 | def convert_ts(ts, config):
35 | '''Converts a timestamp to the configured timezone. Returns a localized datetime object.'''
36 | #lambda_tz = timezone('US/Pacific')
37 | tz = timezone(config['timezone'])
38 | utc = pytz.utc
39 |
40 | utc_dt = utc.localize(datetime.datetime.utcfromtimestamp(ts))
41 |
42 | localized_dt = utc_dt.astimezone(tz)
43 |
44 | return localized_dt
45 |
46 |
47 | camera = cv2.VideoCapture(0)
48 |
49 | framecount = 0
50 | try:
51 |
52 | # for root, dirs, files in os.walk(imagesfolder):
53 | # for filename in files:
54 | # print(filename)
55 |
56 | while(True):
57 |
58 | success, frame = camera.read()
59 | utc_dt = pytz.utc.localize(datetime.datetime.now())
60 | now_ts_utc = (utc_dt - datetime.datetime(1970, 1, 1, tzinfo=pytz.utc)).total_seconds()
61 |
62 | # frame = cv2.imread(imagesfolder+filename, 0)
63 |
64 | retval, buffer = cv2.imencode(".jpg", frame)
65 |
66 | camera_data['image_bytes'] = base64.b64encode(buffer).decode('utf-8')
67 |
68 | camera_data['frame_count'] = str(framecount)
69 |
70 | camera_data['capture_time'] = str(now_ts_utc)
71 |
72 | producer.send(topic, camera_data)
73 |
74 | framecount = framecount + 1
75 |
76 | time.sleep(0.02)
77 |
78 | except Exception as e:
79 | print((e))
80 | print("\nExiting.")
81 | producer.close()
82 | sys.exit(1)
83 |
84 | producer.close()
85 |
86 |
87 |
88 |
89 |
90 |
91 |
92 | # In[ ]:
93 |
94 |
95 |
96 |
97 |
--------------------------------------------------------------------------------
/deployment/raspbi/Dockerfile:
--------------------------------------------------------------------------------
1 | FROM ajeetraina/rpi-raspbian-opencv
2 | MAINTAINER Ajeet S Raina "ajeetraina@gmail.com"
3 |
4 | RUN apt update -y
5 | RUN pip3 install pytz
6 | RUN pip3 install kafka-python
7 | RUN apt install -y git
8 | ADD . /pico/
9 | WORKDIR /pico/raspbi/
10 | ENTRYPOINT ["python3", "/pico/producer.py" ]
11 |
--------------------------------------------------------------------------------
/deployment/raspbi/README.md:
--------------------------------------------------------------------------------
1 | # Getting Raspberry Pi Ready & Tagged with Camera IDs
2 |
3 | ## Pre-requisite
4 |
5 | - Enable the Raspberry Pi camera interface using the raspi-config utility (a quick camera sanity check is sketched below)
6 | - Enable the BCM module:
7 |
8 | ```
9 | sudo modprobe bcm2835-v4l2
10 | ```
11 |
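Before building the image, you can quickly confirm that the camera is exposed as /dev/video0. Below is a minimal sketch using OpenCV (the same cv2 module the producer relies on); the file name check_camera.py is only an illustrative choice and not part of the repository:

```
# check_camera.py -- hypothetical helper: grab one frame to confirm the camera works
import cv2

camera = cv2.VideoCapture(0)       # /dev/video0, as used by producer.py
success, _frame = camera.read()    # success is False if the camera is not accessible
camera.release()

print("camera OK" if success else "camera NOT detected")
```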
12 | ## Cloning the Repository
13 |
14 | ```
15 | git clone https://github.com/collabnix/pico/
16 | cd pico/deployment/raspbi
17 | ```
18 |
19 | ## Modifying the Camera ID (1, 2, 3 for each Raspberry Pi respectively)
20 |
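Each Raspberry Pi should publish to its own topic and carry its own camera ID so that the consumer's camera1/camera2/camera3 streams can tell them apart. A minimal sketch of the lines to edit near the top of producer.py (camera 2 shown as an example; the broker IP is whatever your Kafka cluster advertises):

```
# producer.py -- per-Pi settings (example values for the second Raspberry Pi)
topic = "camera2"                          # camera1 / camera2 / camera3
brokers = ["35.189.130.4:9092"]            # your Kafka broker address

camera_data = {'camera_id': "2",           # keep in sync with the topic number
               'position': "frontspace",
               'image_bytes': "123"}
```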
21 | ## Building Docker Image
22 |
23 | ```
24 | docker build -t ajeetraina/pico-armv71 .
25 | ```
26 |
27 | ## Running the Container
28 |
29 | ```
30 | docker run -d --privileged -v /dev/video0:/dev/video0 ajeetraina/pico-armv71
31 | ```
32 |
--------------------------------------------------------------------------------
/deployment/raspbi/consumer.py:
--------------------------------------------------------------------------------
1 | import datetime
2 | from flask import Flask, Response, render_template
3 | from kafka import KafkaConsumer
4 | import json
5 | import base64
6 |
7 | # Fire up the Kafka Consumer
8 | camera_topic_1 = "camera1"
9 | camera_topic_2 = "camera2"
10 | camera_topic_3 = "camera3"
11 | brokers = ["35.189.130.4:9092"]
12 |
13 | camera1 = KafkaConsumer(
14 | camera_topic_1,
15 | bootstrap_servers=brokers,
16 | value_deserializer=lambda m: json.loads(m.decode('utf-8')))
17 |
18 | camera2 = KafkaConsumer(
19 | camera_topic_2,
20 | bootstrap_servers=brokers,
21 | value_deserializer=lambda m: json.loads(m.decode('utf-8')))
22 |
23 |
24 | camera3 = KafkaConsumer(
25 | camera_topic_3,
26 | bootstrap_servers=brokers,
27 | value_deserializer=lambda m: json.loads(m.decode('utf-8')))
28 |
29 |
30 | # Set the consumer in a Flask App
31 | app = Flask(__name__)
32 |
33 | @app.route('/')
34 | def index():
35 | return render_template('index.html')
36 |
37 | @app.route('/camera_1', methods=['GET'])
38 | def camera_1():
39 | id=5
40 | """
41 | This is the heart of our video display. Notice we set the mimetype to
42 | multipart/x-mixed-replace. This tells Flask to replace any old images with
43 | new values streaming through the pipeline.
44 | """
45 | return Response(
46 | getCamera1(),
47 | mimetype='multipart/x-mixed-replace; boundary=frame')
48 |
49 | @app.route('/camera_2', methods=['GET'])
50 | def camera_2():
51 | id=6
52 | """
53 | This is the heart of our video display. Notice we set the mimetype to
54 | multipart/x-mixed-replace. This tells Flask to replace any old images with
55 | new values streaming through the pipeline.
56 | """
57 | return Response(
58 | getCamera2(),
59 | mimetype='multipart/x-mixed-replace; boundary=frame')
60 |
61 |
62 | @app.route('/camera_3', methods=['GET'])
63 | def camera_3():
64 | id=8
65 | """
66 | This is the heart of our video display. Notice we set the mimetype to
67 | multipart/x-mixed-replace. This tells Flask to replace any old images with
68 | new values streaming through the pipeline.
69 | """
70 | return Response(
71 | getCamera3(),
72 | mimetype='multipart/x-mixed-replace; boundary=frame')
73 |
74 | def getCamera1():
75 | """
76 | Here is where we receive streamed images from the Kafka Server and convert
77 | them to a Flask-readable format.
78 | """
79 | for msg in camera1:
80 | yield (b'--frame\r\n'
81 | b'Content-Type: image/jpg\r\n\r\n' + base64.b64decode(msg.value['image_bytes']) + b'\r\n\r\n')
82 |
83 | def getCamera2():
84 | """
85 | Here is where we receive streamed images from the Kafka Server and convert
86 | them to a Flask-readable format.
87 | """
88 | for msg in camera2:
89 | yield (b'--frame\r\n'
90 | b'Content-Type: image/jpg\r\n\r\n' + base64.b64decode(msg.value['image_bytes']) + b'\r\n\r\n')
91 |
92 |
93 | def getCamera3():
94 | """
95 | Here is where we receive streamed images from the Kafka Server and convert
96 | them to a Flask-readable format.
97 | """
98 | for msg in camera3:
99 | yield (b'--frame\r\n'
100 | b'Content-Type: image/jpg\r\n\r\n' + base64.b64decode(msg.value['image_bytes']) + b'\r\n\r\n')
101 |
102 | if __name__ == "__main__":
103 | app.run(host='0.0.0.0', debug=True)
104 |
--------------------------------------------------------------------------------
/deployment/raspbi/producer.py:
--------------------------------------------------------------------------------
1 | # coding: utf-8
2 |
3 | # In[17]:
4 |
5 |
6 | import sys
7 | import time
8 | import cv2
9 | import json
10 | import decimal
11 |
12 |
13 | import pytz
14 | from pytz import timezone
15 | import datetime
16 |
17 |
18 | from kafka import KafkaProducer
19 | from kafka.errors import KafkaError
20 | import base64
21 |
22 | topic = "camera1"
23 | brokers = ["35.189.130.4:9092"]
24 |
25 |
26 | # In[18]:
27 |
28 |
29 | def convert_ts(ts, config):
30 | '''Converts a timestamp to the configured timezone. Returns a localized datetime object.'''
31 | #lambda_tz = timezone('US/Pacific')
32 | tz = timezone(config['timezone'])
33 | utc = pytz.utc
34 |
35 | utc_dt = utc.localize(datetime.datetime.utcfromtimestamp(ts))
36 |
37 | localized_dt = utc_dt.astimezone(tz)
38 |
39 | return localized_dt
40 |
41 |
42 | def publish_camera():
43 | """
44 | Publish camera video stream to specified Kafka topic.
45 | Kafka Server is expected to be running on the localhost. Not partitioned.
46 | """
47 |
48 | # Start up producer
49 |
50 |
51 | producer = KafkaProducer(bootstrap_servers=brokers,
52 | value_serializer=lambda v: json.dumps(v).encode('utf-8'))
53 |
54 |
55 | camera_data = {'camera_id':"1","position":"frontspace","image_bytes":"123"}
56 |
57 | camera = cv2.VideoCapture(0)
58 |
59 | framecount = 0
60 |
61 | try:
62 | while(True):
63 |
64 | success, frame = camera.read()
65 |
66 | utc_dt = pytz.utc.localize(datetime.datetime.now())
67 | now_ts_utc = (utc_dt - datetime.datetime(1970, 1, 1, tzinfo=pytz.utc)).total_seconds()
68 |
69 | ret, buffer = cv2.imencode('.jpg', frame)
70 |
71 | camera_data['image_bytes'] = base64.b64encode(buffer).decode('utf-8')
72 |
73 | camera_data['frame_count'] = str(framecount)
74 |
75 | camera_data['capture_time'] = str(now_ts_utc)
76 |
77 | producer.send(topic, camera_data)
78 |
79 | framecount = framecount + 1
80 |
81 | # Choppier stream, reduced load on processor
82 | time.sleep(0.2)
83 |
84 |
85 | except Exception as e:
86 | print((e))
87 | print("\nExiting.")
88 | sys.exit(1)
89 |
90 |
91 | camera.release()
92 | producer.close()
93 |
94 |
95 |
96 | # In[19]:
97 |
98 |
99 | if __name__ == "__main__":
100 | publish_camera()
101 |
102 |
103 | # In[12]:
104 |
105 |
106 |
107 |
108 |
109 | # In[ ]:
110 |
111 |
112 |
113 |
114 |
115 | # In[ ]:
116 |
117 |
118 |
119 |
--------------------------------------------------------------------------------
/deployment/raspbi/templates/index.html:
--------------------------------------------------------------------------------
1 |
2 |
3 | Video Streaming Demonstration
4 |
5 |
6 | Video Streaming Demonstration
7 |
8 |
9 |
10 |
11 |
12 |
13 |
14 |
15 |
16 |
--------------------------------------------------------------------------------
/docs/README.md:
--------------------------------------------------------------------------------
1 |
2 |
--------------------------------------------------------------------------------
/getting-started/README.md:
--------------------------------------------------------------------------------
1 | # How to install Docker 18.09.0 on Raspberry Pi 3
2 |
3 |
4 |
5 | ## Tested Infrastructure
6 |
7 | | Platform       | Number of Instances | Reading Time |
8 | | -------------- | ------------------- | ------------ |
9 | | Raspberry Pi 3 | 1                   | 5 min        |
22 |
23 | ## Pre-requisite
24 |
25 |
26 | - Flash Raspbian OS on SD card
27 |
28 | If you are on a Mac, you might need to install the Etcher tool. On Windows, install SDFormatter to format the SD card and Win32 Disk Imager to flash the Raspbian image onto it. You will need an SD card reader for this.
29 |
30 |
31 | ## Booting up Raspbian OS
32 |
33 | Use the same charger you use for your mobile phone to power on the Raspberry Pi. Connect its HDMI port to your TV or display and let it boot up.
34 |
35 |
36 | The default username is pi and password is raspberry.
37 |
38 |
39 | ### Enable SSH to perform remote login
40 |
41 | To log in from your laptop, the SSH service must be running on the Pi. You can find the Pi's IP address with the ifconfig command.
42 |
43 | ```
44 | [Captains-Bay]🚩 > ssh pi@192.168.1.5
45 | pi@192.168.1.5's password:
46 | Linux raspberrypi 4.14.98-v7+ #1200 SMP Tue Feb 12 20:27:48 GMT 2019 armv7l
47 |
48 | The programs included with the Debian GNU/Linux system are free software;
49 | the exact distribution terms for each program are described in the
50 | individual files in /usr/share/doc/*/copyright.
51 |
52 | Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
53 | permitted by applicable law.
54 | Last login: Tue Feb 26 12:30:00 2019 from 192.168.1.4
55 | pi@raspberrypi:~ $ sudo su
56 | root@raspberrypi:/home/pi# cd
57 | ```
58 |
59 | ## Verifying Raspbian OS Version
60 |
61 |
62 | ```
63 | root@raspberrypi:~# cat /etc/os-release
64 | PRETTY_NAME="Raspbian GNU/Linux 9 (stretch)"
65 | NAME="Raspbian GNU/Linux"
66 | VERSION_ID="9"
67 | VERSION="9 (stretch)"
68 | ID=raspbian
69 | ID_LIKE=debian
70 | HOME_URL="http://www.raspbian.org/"
71 | SUPPORT_URL="http://www.raspbian.org/RaspbianForums"
72 | BUG_REPORT_URL="http://www.raspbian.org/RaspbianBugs"
73 | root@raspberrypi:~#
74 | ```
75 |
76 | ## Installing Docker 18.09
77 |
78 | It is a one-line command. In curl, -L follows redirects (location), -s runs silently, and -S shows errors.
79 |
80 | ```
81 | root@raspberrypi:~# curl -sSL https://get.docker.com/ | sh
82 | # Executing docker install script, commit: 40b1b76
83 | + sh -c apt-get update -qq >/dev/null
84 | + sh -c apt-get install -y -qq apt-transport-https ca-certificates curl >/dev/null
85 | + sh -c curl -fsSL "https://download.docker.com/linux/raspbian/gpg" | apt-key add -qq - >/dev/null
86 | Warning: apt-key output should not be parsed (stdout is not a terminal)
87 | + sh -c echo "deb [arch=armhf] https://download.docker.com/linux/raspbian stretch edge" > /etc/apt/sources.list.d/docker.list
88 | + sh -c apt-get update -qq >/dev/null
89 | + sh -c apt-get install -y -qq --no-install-recommends docker-ce >/dev/null
90 | + sh -c docker version
91 | Client:
92 | Version: 18.09.0
93 | API version: 1.39
94 | Go version: go1.10.4
95 | Git commit: 4d60db4
96 | Built: Wed Nov 7 00:57:21 2018
97 | OS/Arch: linux/arm
98 | Experimental: false
99 |
100 | Server: Docker Engine - Community
101 | Engine:
102 | Version: 18.09.0
103 | API version: 1.39 (minimum version 1.12)
104 | Go version: go1.10.4
105 | Git commit: 4d60db4
106 | Built: Wed Nov 7 00:17:57 2018
107 | OS/Arch: linux/arm
108 | Experimental: false
109 | If you would like to use Docker as a non-root user, you should now consider
110 | adding your user to the "docker" group with something like:
111 |
112 | sudo usermod -aG docker your-user
113 |
114 | Remember that you will have to log out and back in for this to take effect!
115 |
116 | WARNING: Adding a user to the "docker" group will grant the ability to run
117 | containers which can be used to obtain root privileges on the
118 | docker host.
119 | Refer to https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface
120 | for more information.
121 |
122 | ** DOCKER ENGINE - ENTERPRISE **
123 |
124 | If you’re ready for production workloads, Docker Engine - Enterprise also includes:
125 |
126 | * SLA-backed technical support
127 | * Extended lifecycle maintenance policy for patches and hotfixes
128 | * Access to certified ecosystem content
129 |
130 | ** Learn more at https://dockr.ly/engine2 **
131 |
132 | ACTIVATE your own engine to Docker Engine - Enterprise using:
133 |
134 | sudo docker engine activate
135 |
136 | ```
137 |
138 | ## Verifying Docker Version
139 |
140 | ```
141 | root@raspberrypi:~# docker version
142 | Client:
143 | Version: 18.09.0
144 | API version: 1.39
145 | Go version: go1.10.4
146 | Git commit: 4d60db4
147 | Built: Wed Nov 7 00:57:21 2018
148 | OS/Arch: linux/arm
149 | Experimental: false
150 |
151 | Server: Docker Engine - Community
152 | Engine:
153 | Version: 18.09.0
154 | API version: 1.39 (minimum version 1.12)
155 | Go version: go1.10.4
156 | Git commit: 4d60db4
157 | Built: Wed Nov 7 00:17:57 2018
158 | OS/Arch: linux/arm
159 | Experimental: false
160 | root@raspberrypi:~#
161 | ```
162 |
163 |
164 | ## Deploying Nginx App
165 |
166 | ```
167 | root@raspberrypi:~# docker run -d -p 80:80 nginx
168 | Unable to find image 'nginx:latest' locally
169 | latest: Pulling from library/nginx
170 | 9c38b5a8a4d5: Pull complete
171 | 1c9b1b3e1e0d: Pull complete
172 | 258951b5612f: Pull complete
173 | Digest: sha256:dd2d0ac3fff2f007d99e033b64854be0941e19a2ad51f174d9240dda20d9f534
174 | Status: Downloaded newer image for nginx:latest
175 | d812bf50d136b0f78353f0a0c763b6b08ecc5e7ce706bac8bd660cdd723e0fcd
176 | root@raspberrypi:~#
177 | ```
178 |
179 | ## Verifying the Nginx Welcome Page
180 |
181 | ```
182 | root@raspberrypi:~# curl localhost:80
183 |
184 |
185 |
186 | Welcome to nginx!
187 |
194 |
195 |
196 | Welcome to nginx!
197 | If you see this page, the nginx web server is successfully installed and
198 | working. Further configuration is required.
199 |
200 | For online documentation and support please refer to
201 | nginx.org .
202 | Commercial support is available at
203 | nginx.com .
204 |
205 | Thank you for using nginx.
206 |
207 |
208 | root@raspberrypi:~#
209 | ```
210 |
211 | ## Checking Docker Info
212 |
213 | ```
214 | root@raspberrypi:~# docker info
215 | Containers: 1
216 | Running: 1
217 | Paused: 0
218 | Stopped: 0
219 | Images: 1
220 | Server Version: 18.09.0
221 | Storage Driver: overlay2
222 | Backing Filesystem: extfs
223 | Supports d_type: true
224 | Native Overlay Diff: true
225 | Logging Driver: json-file
226 | Cgroup Driver: cgroupfs
227 | Plugins:
228 | Volume: local
229 | Network: bridge host macvlan null overlay
230 | Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
231 | Swarm: inactive
232 | Runtimes: runc
233 | Default Runtime: runc
234 | Init Binary: docker-init
235 | containerd version: 9754871865f7fe2f4e74d43e2fc7ccd237edcbce
236 | runc version: 09c8266bf2fcf9519a651b04ae54c967b9ab86ec
237 | init version: fec3683
238 | Security Options:
239 | seccomp
240 | Profile: default
241 | Kernel Version: 4.14.98-v7+
242 | Operating System: Raspbian GNU/Linux 9 (stretch)
243 | OSType: linux
244 | Architecture: armv7l
245 | CPUs: 4
246 | Total Memory: 927.2MiB
247 | Name: raspberrypi
248 | ID: FEUI:RVU6:AWPZ:6P22:TSLT:FDJC:CBIB:D2NU:AQEQ:IHVH:HFRY:HYWF
249 | Docker Root Dir: /var/lib/docker
250 | Debug Mode (client): false
251 | Debug Mode (server): false
252 | Registry: https://index.docker.io/v1/
253 | Labels:
254 | Experimental: false
255 | Insecure Registries:
256 | 127.0.0.0/8
257 | Live Restore Enabled: false
258 | Product License: Community Engine
259 |
260 | WARNING: No memory limit support
261 | WARNING: No swap limit support
262 | WARNING: No kernel memory limit support
263 | WARNING: No oom kill disable support
264 | WARNING: No cpu cfs quota support
265 | WARNING: No cpu cfs period support
266 | ```
267 |
268 |
269 |
270 | ## Verifying Dockerd
271 |
272 |
273 | ```
274 | root@raspberrypi:~/hellowhale# systemctl status docker
275 | ● docker.service - Docker Application Container Engine
276 | Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: e
277 | Active: active (running) since Tue 2019-02-26 13:01:04 IST; 38min ago
278 | Docs: https://docs.docker.com
279 | Main PID: 2437 (dockerd)
280 | CPU: 1min 46.174s
281 | CGroup: /system.slice/docker.service
282 | ├─2437 /usr/bin/dockerd -H unix://
283 | ├─2705 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 8
284 | └─4186 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 8
285 |
286 | Feb 26 13:37:06 raspberrypi dockerd[2437]: time="2019-02-26T13:37:06.400368104+0
287 | Feb 26 13:37:06 raspberrypi dockerd[2437]: time="2019-02-26T13:37:06.402012958+0
288 | Feb 26 13:37:06 raspberrypi dockerd[2437]: time="2019-02-26T13:37:06.402634316+0
289 | Feb 26 13:37:06 raspberrypi dockerd[2437]: time="2019-02-26T13:37:06.403005881+0
290 | Feb 26 13:37:06 raspberrypi dockerd[2437]: time="2019-02-26T13:37:06.408358205+0
291 | Feb 26 13:37:06 raspberrypi dockerd[2437]: time="2019-02-26T13:37:06.810154786+0
292 | Feb 26 13:37:06 raspberrypi dockerd[2437]: time="2019-02-26T13:37:06.810334839+0
293 | Feb 26 13:37:06 raspberrypi dockerd[2437]: time="2019-02-26T13:37:06.811462659+0
294 | Feb 26 13:37:06 raspberrypi dockerd[2437]: time="2019-02-26T13:37:06.811768546+0
295 | Feb 26 13:37:07 raspberrypi dockerd[2437]: time="2019-02-26T13:37:07.402282796+0
296 | ```
297 |
298 |
299 | ## Verifying whether the hello-world image supports armv7
300 |
301 | ```
302 | docker run --rm mplatform/mquery hello-world
303 | Unable to find image 'mplatform/mquery:latest' locally
304 | latest: Pulling from mplatform/mquery
305 | db6020507de3: Pull complete
306 | 5107afd39b7f: Pull complete
307 | Digest: sha256:e15189e3d6fbcee8a6ad2ef04c1ec80420ab0fdcf0d70408c0e914af80dfb107
308 | Status: Downloaded newer image for mplatform/mquery:latest
309 | Image: hello-world
310 | * Manifest List: Yes
311 | * Supported platforms:
312 | - linux/amd64
313 | - linux/arm/v5
314 | - linux/arm/v7
315 | - linux/arm64
316 | - linux/386
317 | - linux/ppc64le
318 | - linux/s390x
319 | - windows/amd64:10.0.14393.2551
320 | - windows/amd64:10.0.16299.846
321 | - windows/amd64:10.0.17134.469
322 | - windows/amd64:10.0.17763.194
323 | ```
324 |
325 |
326 | ## Verifying hellowhale Image
327 |
328 | ```
329 | root@raspberrypi:~# docker run --rm mplatform/mquery ajeetraina/hellowhale
330 | Image: ajeetraina/hellowhale
331 | * Manifest List: No
332 | * Supports: amd64/linux
333 | ```
334 |
335 | ## Verifying Random Images
336 |
337 | ```
338 | root@raspberrypi:~# docker run --rm mplatform/mquery rycus86/prometheus
339 | Image: rycus86/prometheus
340 | * Manifest List: Yes
341 | * Supported platforms:
342 | - linux/amd64
343 | - linux/arm/v7
344 | - linux/arm64
345 | ```
346 |
347 | [Next >> Setting up Apache Kafka on Cloud Platform](https://github.com/collabnix/pico)
348 |
--------------------------------------------------------------------------------
/images/README.md:
--------------------------------------------------------------------------------
1 |
2 |
--------------------------------------------------------------------------------
/images/arch_pico.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/collabnix/pico/0914b9efe05be859579ab270f3b1c4c91fa8959f/images/arch_pico.png
--------------------------------------------------------------------------------
/images/image-9.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/collabnix/pico/0914b9efe05be859579ab270f3b1c4c91fa8959f/images/image-9.png
--------------------------------------------------------------------------------
/images/pibox.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/collabnix/pico/0914b9efe05be859579ab270f3b1c4c91fa8959f/images/pibox.png
--------------------------------------------------------------------------------
/images/pibox3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/collabnix/pico/0914b9efe05be859579ab270f3b1c4c91fa8959f/images/pibox3.png
--------------------------------------------------------------------------------
/images/picbox2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/collabnix/pico/0914b9efe05be859579ab270f3b1c4c91fa8959f/images/picbox2.png
--------------------------------------------------------------------------------
/images/pico-project-arch.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/collabnix/pico/0914b9efe05be859579ab270f3b1c4c91fa8959f/images/pico-project-arch.png
--------------------------------------------------------------------------------
/images/pico2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/collabnix/pico/0914b9efe05be859579ab270f3b1c4c91fa8959f/images/pico2.png
--------------------------------------------------------------------------------
/images/pico_in_3_steps.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/collabnix/pico/0914b9efe05be859579ab270f3b1c4c91fa8959f/images/pico_in_3_steps.png
--------------------------------------------------------------------------------
/images/rasp_cluster.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/collabnix/pico/0914b9efe05be859579ab270f3b1c4c91fa8959f/images/rasp_cluster.jpg
--------------------------------------------------------------------------------
/images/thepicoproject1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/collabnix/pico/0914b9efe05be859579ab270f3b1c4c91fa8959f/images/thepicoproject1.png
--------------------------------------------------------------------------------
/kafka/README.md:
--------------------------------------------------------------------------------
1 | # How to setup Apache Kafka on AWS Platform using Docker Swarm
2 |
3 | Apache Kafka is an open-source stream-processing software platform developed by LinkedIn and donated to the Apache Software Foundation. It is written in Scala and Java. The project aims to provide a unified, high-throughput, low-latency platform for handling real-time data feeds.
4 |
5 | Apache Kafka is a distributed, partitioned, and replicated publish-subscribe messaging system used to send high volumes of data, in the form of messages, from one point to another. It replicates messages across a cluster of servers to prevent data loss and allows both online and offline message consumption. This makes Kafka fault-tolerant in the presence of machine failures while still supporting low-latency message delivery. In a broader sense, Kafka is a unified platform that guards against data loss and handles real-time data feeds.
6 |
7 | ## Pre-requisites:
8 |
9 | - Docker Desktop for Mac or Windows
10 | - AWS Account ( You will require t2.medium instances for this)
11 | - AWS CLI installed
12 |
13 | ## Adding Your Credentials:
14 |
15 | ```
16 | [Captains-Bay]🚩 > cat ~/.aws/credentials
17 | [default]
18 | aws_access_key_id = XXXA
19 | aws_secret_access_key = XX
20 | ```
21 |
22 | ## Verifying AWS Version
23 |
24 |
25 | ```
26 | [Captains-Bay]🚩 > aws --version
27 | aws-cli/1.11.107 Python/2.7.10 Darwin/17.7.0 botocore/1.5.70
28 | ```
29 | ## Setting up Environment Variables
30 |
31 | ```
32 | [Captains-Bay]🚩 > export VPC=vpc-ae59f0d6
33 | [Captains-Bay]🚩 > export REGION=us-west-2a
34 | [Captains-Bay]🚩 > export SUBNET=subnet-827651c9
35 | [Captains-Bay]🚩 > export ZONE=a
36 | [Captains-Bay]🚩 > export REGION=us-west-2
37 | ```
38 |
39 | ## Building up First Node using Docker Machine
40 |
41 | ```
42 | [Captains-Bay]🚩 > docker-machine create --driver amazonec2 --amazonec2-access-key=${ACCESS_KEY_ID} --amazonec2-secret-key=${SECRET_ACCESS_KEY} --amazonec2-region=us-west-2 --amazonec2-vpc-id=vpc-ae59f0d6 --amazonec2-ami=ami-78a22900 --amazonec2-open-port 2377 --amazonec2-open-port 7946 --amazonec2-open-port 4789 --amazonec2-open-port 7946/udp --amazonec2-open-port 4789/udp --amazonec2-open-port 8080 --amazonec2-open-port 443 --amazonec2-open-port 80 --amazonec2-subnet-id=subnet-72dbdb1a --amazonec2-instance-type=t2.micro kafka-swarm-node1
43 | ```
44 |
45 | ## Listing out the Nodes
46 |
47 | ```
48 | [Captains-Bay]🚩 > docker-machine ls
49 | NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
50 | kafka-swarm-node1 - amazonec2 Running tcp://35.161.106.158:2376 v18.09.6
51 | kafka-swarm-node2 - amazonec2 Running tcp://54.201.99.75:2376 v18.09.6
52 | ```
53 |
54 | ## Initializing Docker Swarm Manager Node
55 |
56 | ```
57 | ubuntu@kafka-swarm-node1:~$ sudo docker swarm init --advertise-addr 172.31.53.71 --listen-addr 172.31.53.71:2377
58 | Swarm initialized: current node (yui9wqfu7b12hwt4ig4ribpyq) is now a manager.
59 |
60 | To add a worker to this swarm, run the following command:
61 |
62 | docker swarm join --token SWMTKN-1-xxxxxmr075to2v3k-decb975h5g5da7xxxx 172.31.53.71:2377
63 |
64 | To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
65 | ```
66 |
67 | ## Adding Worker Node
68 |
69 |
70 | ```
71 | ubuntu@kafka-swarm-node2:~$ sudo docker swarm join --token SWMTKN-1-2xjkynhin0n2zl7xxxk-decb975h5g5daxxxxxxxxn 172.31.53.71:2377
72 | This node joined a swarm as a worker.
73 | ```
74 |
75 | ## Verifying 2-Node Docker Swarm Mode Cluster
76 |
77 | ```
78 | ubuntu@kafka-swarm-node1:~$ sudo docker node ls
79 | ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
80 | yui9wqfu7b12hwt4ig4ribpyq * kafka-swarm-node1 Ready Active Leader 18.09.6
81 | vb235xtkejim1hjdnji5luuxh kafka-swarm-node2 Ready Active 18.09.6
82 | ```
83 |
84 | ## Installing Docker Compose
85 |
86 | ```
87 | curl -L https://github.com/docker/compose/releases/download/1.25.0-rc1/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
88 | % Total % Received % Xferd Average Speed Time Time Time Current
89 | Dload Upload Total Spent Left Speed
90 | 100 617 0 617 0 0 2212 0 --:--:-- --:--:-- --:--:-- 2211
91 | 100 15.5M 100 15.5M 0 0 8693k 0 0:00:01 0:00:01 --:--:-- 20.1M
92 | ```
93 |
94 | ```
95 | root@kafka-swarm-node1:/home/ubuntu/dockerlabs/solution/kafka-swarm# chmod +x /usr/local/bin/docker-compose
96 | ```
97 |
98 | ```
99 | ubuntu@kafka-swarm-node1:~/dockerlabs/solution/kafka-swarm$ sudo docker-compose version
100 | docker-compose version 1.25.0-rc1, build 8552e8e2
101 | docker-py version: 4.0.1
102 | CPython version: 3.7.3
103 | OpenSSL version: OpenSSL 1.1.0j 20 Nov 2018
104 | ```
105 |
106 | ## Cloning the Repository
107 |
108 | ```
109 | git clone https://github.com/collabnix/pico
110 | cd pico/kafka/
111 | ```
112 |
113 | ## Building up Kafka Application
114 |
120 | ```
121 | docker stack deploy -c docker-compose.yml mykafka
122 | ```
123 |
124 | By now, you should be able to access Kafka Manager at https://<manager-node-IP>:9000
125 |
126 | ## Adding a cluster
127 |
128 | - Cluster Name = pico (or whatever you want)
129 | - Cluster Zookeeper Hosts = zk-1:2181,zk-2:2181,zk-3:2181
130 | - Kafka Version = leave it at 0.9.01 even though we're running 1.0.0
131 | - Enable JMX Polling = enabled
132 |
133 | ## Adding a Topic
134 |
135 | Click on Topic on the top center of the Kafka Manager to create a new topic with the below details -
136 |
137 | - Topic = testpico
138 | - Partitions = 6
139 | - Replication factor = 2
140 |
141 | This gives an even spread of the topic across the three Kafka nodes. (A scripted alternative to the UI is sketched below.)
142 |
143 | While saving the settings, Kafka Manager may ask you to set some minimal required parameters; follow the instructions it provides.
144 |
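If you prefer scripting over the Kafka Manager UI, the same topic can also be created with kafka-python's admin client. A minimal sketch, assuming kafka-python is installed and a broker is reachable on its external 9092 port (the broker address is a placeholder for your own setup):

```
# create_topic.py -- hypothetical scripted equivalent of the Kafka Manager steps above
from kafka.admin import KafkaAdminClient, NewTopic

admin = KafkaAdminClient(bootstrap_servers=["<Broker-IP>:9092"])
admin.create_topics([NewTopic(name="testpico", num_partitions=6, replication_factor=2)])
admin.close()
```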
145 | [Next >> Pushing a Sample Video to Kafka Cluster using python script](https://github.com/collabnix/pico/blob/master/kafka/producer-consumer.md)
146 |
147 |
--------------------------------------------------------------------------------
/kafka/aws/credentials:
--------------------------------------------------------------------------------
1 | [default]
2 | aws_access_key_id=AKIXXX
3 | aws_secret_access_key=P66ZgXXX
4 |
--------------------------------------------------------------------------------
/kafka/bin/debug.sh:
--------------------------------------------------------------------------------
1 |
2 | #!/usr/bin/env bash
3 |
4 | : ${SUSPEND:='n'}
5 |
6 | set -e
7 |
8 | export KAFKA_JMX_OPTS="-Xdebug -agentlib:jdwp=transport=dt_socket,server=y,suspend=${SUSPEND},address=5005"
9 | export CLASSPATH="$(find target/kafka-connect-aws-lambda-1.0-package/share/java -type f -name '*.jar' | tr '\n' ':')"
10 |
11 | # connect-standalone config/connect-json-docker.properties config/AwsLambdaSinkConnector.properties
12 |
13 | connect-standalone $1 $2
14 |
--------------------------------------------------------------------------------
/kafka/config/AwsLambdaSinkConnector.properties:
--------------------------------------------------------------------------------
1 |
2 | name=AwsLambdaSinkConnector
3 | topics=aws-lambda-topic
4 | tasks.max=1
5 | connector.class=com.tm.kafka.connect.aws.lambda.AwsLambdaSinkConnector
6 |
7 | aws.region=us-west-2
8 | aws.function.name=kafka-aws-lambda-test
9 | aws.lambda.payload.converter.class=com.tm.kafka.connect.aws.lambda.converter.JsonPayloadConverter
10 | # aws.lambda.payload.converter.class=com.tm.kafka.connect.aws.lambda.converter.DefaultPayloadConverter
11 | # retry.backoff.ms=5000
12 | # aws.lambda.invoke.async=RequestResponse
13 | # aws.lambda.invoke.async=Event
14 | # aws.lambda.invoke.async=DryRun
15 |
16 | # aws.credentials.provider.class=com.amazonaws.auth.DefaultAWSCredentialsProviderChain
17 | aws.credentials.provider.class=com.tm.kafka.connect.aws.lambda.ConfigurationAWSCredentialsProvider
18 | aws.credentials.provider.aws.access.key.id=${file:/root/.aws/credentials:aws_access_key_id}
19 | aws.credentials.provider.aws.secret.access.key=${file:/root/.aws/credentials:aws_secret_access_key}
20 |
--------------------------------------------------------------------------------
/kafka/config/connect-avro-docker.properties:
--------------------------------------------------------------------------------
1 | # Sample configuration for a standalone Kafka Connect worker that uses Avro serialization and
2 | # integrates with the Schema Registry. This sample configuration assumes a local installation of
3 | # Confluent Platform with all services running on their default ports.
4 | # Bootstrap Kafka servers. If multiple servers are specified, they should be comma-separated.
5 | bootstrap.servers=kafka:9092
6 | # The converters specify the format of data in Kafka and how to translate it into Connect data.
7 | # Every Connect user will need to configure these based on the format they want their data in
8 | # when loaded from or stored into Kafka
9 | key.converter=org.apache.kafka.connect.storage.StringConverter
10 | key.converter.schemas.enable=false
11 | key.converter.schema.registry.url=http://schema_registry:8081/
12 | value.converter=io.confluent.connect.avro.AvroConverter
13 | value.converter.schemas.enable=true
14 | value.converter.schema.registry.url=http://schema_registry:8081/
15 |
16 | # The internal converter used for offsets and config data is configurable and must be specified,
17 | # but most users will always want to use the built-in default. Offset and config data is never
18 | # visible outside of Connect in this format.
19 | internal.key.converter=org.apache.kafka.connect.json.JsonConverter
20 | internal.value.converter=org.apache.kafka.connect.json.JsonConverter
21 | internal.key.converter.schemas.enable=true
22 | internal.value.converter.schemas.enable=true
23 | # Local storage file for offset data
24 | offset.storage.file.filename=/tmp/connect.offsets
25 | # Confluent Control Center Integration -- uncomment these lines to enable Kafka client interceptors
26 | # that will report audit data that can be displayed and analyzed in Confluent Control Center
27 | # producer.interceptor.classes=io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor
28 | # consumer.interceptor.classes=io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor
29 |
--------------------------------------------------------------------------------
/kafka/config/connect-json-docker.properties:
--------------------------------------------------------------------------------
1 | # Sample configuration for a standalone Kafka Connect worker that uses JSON serialization.
2 | # This sample configuration assumes a local installation of
3 | # Confluent Platform with all services running on their default ports.
4 | # Bootstrap Kafka servers. If multiple servers are specified, they should be comma-separated.
5 | bootstrap.servers=kafka-1:9092,kafka-2:9092,kafka-3:9092
6 | # The converters specify the format of data in Kafka and how to translate it into Connect data.
7 | # Every Connect user will need to configure these based on the format they want their data in
8 | # when loaded from or stored into Kafka
9 | key.converter=org.apache.kafka.connect.storage.StringConverter
10 | key.converter.schemas.enable=false
11 | value.converter=org.apache.kafka.connect.json.JsonConverter
12 | value.converter.schemas.enable=false
13 |
14 | # The internal converter used for offsets and config data is configurable and must be specified,
15 | # but most users will always want to use the built-in default. Offset and config data is never
16 | # visible outside of Connect in this format.
17 | internal.key.converter=org.apache.kafka.connect.json.JsonConverter
18 | internal.value.converter=org.apache.kafka.connect.json.JsonConverter
19 | internal.key.converter.schemas.enable=false
20 | internal.value.converter.schemas.enable=false
21 | # Local storage file for offset data
22 | offset.storage.file.filename=/tmp/connect.offsets
25 | # Confluent Control Center Integration -- uncomment these lines to enable Kafka client interceptors
24 | # that will report audit data that can be displayed and analyzed in Confluent Control Center
25 | # producer.interceptor.classes=io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor
26 | # consumer.interceptor.classes=io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor
27 |
28 | config.providers=file
29 | config.providers.file.class=org.apache.kafka.common.config.provider.FileConfigProvider
30 | config.providers.file.param.secrets=/root/.aws/credentials
31 | config.reload.action=restart
32 |
--------------------------------------------------------------------------------
/kafka/consumer.py:
--------------------------------------------------------------------------------
1 | import datetime
2 | from flask import Flask, Response, render_template
3 | from kafka import KafkaConsumer
4 | import json
5 | import base64
6 |
5 | # Fire up the Kafka Consumer
6 | topic = "testpico"
7 | brokers = ["35.189.130.4:9092"]
8 |
9 | consumer = KafkaConsumer(
10 | topic,
11 | bootstrap_servers=brokers,
12 | value_deserializer=lambda m: json.loads(m.decode('utf-8')))
13 |
14 |
15 | # Set the consumer in a Flask App
16 | app = Flask(__name__)
17 |
18 | @app.route('/')
19 | def index():
20 | return render_template('index.html')
21 |
22 | @app.route('/camera_1', methods=['GET'])
23 | def camera_1():
24 | """
25 | This is the heart of our video display. Notice we set the mimetype to
26 | multipart/x-mixed-replace. This tells Flask to replace any old images with
27 | new values streaming through the pipeline.
28 | """
29 | return Response(
30 | get_video_stream(1),
31 | mimetype='multipart/x-mixed-replace; boundary=frame')
32 |
33 | @app.route('/camera_2', methods=['GET'])
34 | def camera_2():
35 | """
36 | This is the heart of our video display. Notice we set the mimetype to
37 | multipart/x-mixed-replace. This tells Flask to replace any old images with
38 | new values streaming through the pipeline.
39 | """
40 | return Response(
41 | get_video_stream(2),
42 | mimetype='multipart/x-mixed-replace; boundary=frame')
43 |
44 |
45 | @app.route('/camera_3', methods=['GET'])
46 | def camera_3():
47 | """
48 | This is the heart of our video display. Notice we set the mimetype to
49 | multipart/x-mixed-replace. This tells Flask to replace any old images with
50 | new values streaming through the pipeline.
51 | """
52 | return Response(
53 | get_video_stream(3),
54 | mimetype='multipart/x-mixed-replace; boundary=frame')
55 |
56 | def get_video_stream(id):
57 | """
58 | Here is where we receive streamed images from the Kafka Server and convert
59 | them to a Flask-readable format.
60 | """
61 | for msg in consumer:
62 | if str(msg.value['camera_id']) == str(id):
63 | yield (b'--frame\r\n'
64 | b'Content-Type: image/jpg\r\n\r\n' + base64.b64decode(msg.value['image_bytes']) + b'\r\n\r\n')
65 |
66 | if __name__ == "__main__":
67 | app.run(host='0.0.0.0', debug=True)
68 |
--------------------------------------------------------------------------------
/kafka/docker-compose.yml:
--------------------------------------------------------------------------------
1 | version: '3.4'
2 |
3 | services:
4 | zk-1: &zk
5 | image: confluentinc/cp-zookeeper:4.0.0
6 | env_file:
7 | - zk-common.env
8 | environment:
9 | ZOOKEEPER_SERVER_ID: 1
10 | ZOOKEEPER_SERVERS: 0.0.0.0:2888:3888;zk-2:2888:3888;zk-3:2888:3888
11 | volumes:
12 | - zk-1:/var/lib/zookeeper/data
13 | zk-2:
14 | <<: *zk
15 | environment:
16 | ZOOKEEPER_SERVER_ID: 2
17 | ZOOKEEPER_SERVERS: zk-1:2888:3888;0.0.0.0:2888:3888;zk-3:2888:3888
18 | volumes:
19 | - zk-2:/var/lib/zookeeper/data
20 | zk-3:
21 | <<: *zk
22 | environment:
23 | ZOOKEEPER_SERVER_ID: 3
24 | ZOOKEEPER_SERVERS: zk-1:2888:3888;zk-2:2888:3888;0.0.0.0:2888:3888
25 | volumes:
26 | - zk-3:/var/lib/zookeeper/data
27 |
28 | kafka-1: &kafka
29 | image: confluentinc/cp-kafka:4.0.0
30 | env_file:
31 | - kafka-common.env
32 | environment:
33 | KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka-1:9192,EXTERNAL://35.189.130.4:9092
34 | KAFKA_JMX_HOSTNAME: kafka-1
35 | ports:
36 | - 9092:9092
37 | volumes:
38 | - kafka-1:/var/lib/kafka/data
39 | - ./:/data
40 | - ./aws:/root/.aws
41 |
42 | kafka-2:
43 | <<: *kafka
44 | environment:
45 | KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka-2:9192,EXTERNAL://35.189.130.4:9093
46 | KAFKA_JMX_HOSTNAME: kafka-2
47 | ports:
48 | - 9093:9092
49 | volumes:
50 | - kafka-2:/var/lib/kafka/data
51 | - ./:/data
52 | - ./aws:/root/.aws
53 |
54 | kafka-3:
55 | <<: *kafka
56 | environment:
57 | KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka-3:9192,EXTERNAL://35.189.130.4:9094
58 | KAFKA_JMX_HOSTNAME: kafka-3
59 | ports:
60 | - 9094:9092
61 | volumes:
62 | - kafka-3:/var/lib/kafka/data
63 | - ./:/data
64 | - ./aws:/root/.aws
65 |
66 | kafka-manager:
67 | image: sheepkiller/kafka-manager
68 | environment:
69 | ZK_HOSTS: zk-1:2181,zk-2:2181,zk-3:2181
70 | JMX_PORT: 9181
71 | APPLICATION_SECRET: letmein
72 | ports:
73 | - 9000:9000
74 |
75 | #schema-registry:
76 | # hostname: schema-registry
77 | #image: confluentinc/cp-schema-registry:5.0.1
78 | #container_name: schema-registry
79 | #links:
80 | # - kafka-1
81 | # - kafka-2
82 | # - kafka-3
83 | # - zk-1
84 | # - zk-2
85 | # - zk-3
86 | #ports:
87 | # - "8081:8081"
88 | #environment:
89 | # SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: "zk-1:2181,zk-2:2181,zk-3:2181"
90 | #SCHEMA_REGISTRY_HOST_NAME: schema-registry
91 |
92 |
93 |
94 | volumes:
95 | zk-1:
96 | zk-2:
97 | zk-3:
98 | kafka-1:
99 | kafka-2:
100 | kafka-3:
101 |
--------------------------------------------------------------------------------
/kafka/kafka-common.env:
--------------------------------------------------------------------------------
1 | KAFKA_ZOOKEEPER_CONNECT=zk-1:2181,zk-2:2181,zk-3:2181
2 | KAFKA_LISTENERS=EXTERNAL://0.0.0.0:9092,INTERNAL://0.0.0.0:9192
3 | KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
4 | KAFKA_INTER_BROKER_LISTENER_NAME=INTERNAL
5 | KAFKA_DEFAULT_REPLICATION_FACTOR=3
6 | KAFKA_JMX_PORT=9181
7 |
--------------------------------------------------------------------------------
/kafka/producer-consumer.md:
--------------------------------------------------------------------------------
1 | # Testing the Kafka Cluster by pushing a sample video frame
2 |
3 | ## Pre-requisite
4 |
5 | - Apache Kafka Cluster Setup - Follow [this](https://github.com/collabnix/pico/blob/master/kafka/README.md)
6 | - Create an instance outside Kafka Cluster with Docker binaries installed
7 |
8 | ## Setting up Environment for Producer Script
9 |
10 | ## Pulling the Container
11 |
12 | ```
13 | docker pull ajeetraina/opencv4-python3
14 | ```
15 |
16 | ## Running the container exposing port 5000
17 |
18 | ```
19 | docker run -itd -p 5000:5000 ajeetraina/opencv4-python3 bash
20 | docker attach <container-id>
21 | ```
22 |
23 |
24 |
25 | ## Cloning the Repository
26 |
27 | ```
28 | git clone https://github.com/collabnix/pico
29 | cd pico/kafka/
30 | ```
31 |
32 | If it reports that the pico directory already exists, remove it first and then retry the command above. (The packaging still needs to be cleaned up.)
33 |
34 | ## Modify the producer
35 |
36 | Two entries need to be changed:
37 | - topic name (which you supplied during the initial Kafka cluster configuration)
38 | - bootstrap server IP pointing to your Kafka broker
39 |
40 | ```
41 | import sys
42 | import time
43 | import cv2
44 | # from picamera.array import PiRGBArray
45 | # from picamera import PiCamera
46 | from kafka import KafkaProducer
47 | from kafka.errors import KafkaError
48 |
49 | topic = "testpico"
50 |
51 | def publish_video(video_file):
52 | """
53 | Publish given video file to a specified Kafka topic.
54 | Kafka Server is expected to be running on the localhost. Not partitioned.
55 |
56 | :param video_file: path to video file
57 | """
58 | # Start up producer
59 | producer = KafkaProducer(bootstrap_servers='10.140.0.2:9092')
60 | ```
61 |
62 | ```
63 | def publish_camera():
64 | """
65 | Publish camera video stream to specified Kafka topic.
66 | Kafka Server is expected to be running on the localhost. Not partitioned.
67 | """
68 |
69 | # Start up producer
70 | producer = KafkaProducer(bootstrap_servers='10.140.0.2:9092')
71 |
72 | ```
73 |
74 | ## Downloading a sample video
75 |
76 | Download a sample video and rename it Countdown1.mp4. Place it in the same directory where the producer and consumer scripts reside.
77 |
78 | ## Executing the Script
79 |
80 | ```
81 | python producer.py
82 | ```
83 |
84 | ## Setting up Environment for Consumer Script
85 |
86 | Open up the consumer script and modify two items (a minimal sketch follows this list):
87 |
88 | - Topic Name: testpico
89 | - Bootstrap Server: <Broker-IP>:9093
90 |
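For reference, a minimal sketch of what those two settings look like near the top of consumer.py (the broker address below is a placeholder for your own cluster):

```
# consumer.py -- the two settings to adjust before running the script
topic = "testpico"                 # must match the topic the producer publishes to
brokers = ["<Broker-IP>:9093"]     # external listener of one of the Kafka brokers
```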
91 | Executing the Script
92 |
93 | ```
94 | python consumer.py
95 | ```
96 |
97 | ## Verifying the Video Streaming
98 |
99 | You should now be able to browse to http://:5000 (the host running the consumer container) and watch the video stream.
100 |
101 |
102 |
103 |
--------------------------------------------------------------------------------
/kafka/producer.py:
--------------------------------------------------------------------------------
1 | import sys
2 | import time
3 | import cv2
4 | # from picamera.array import PiRGBArray
5 | # from picamera import PiCamera
6 | from kafka import KafkaProducer
7 | from kafka.errors import KafkaError
8 |
9 | topic = "testpico"
10 |
11 | def publish_video(video_file):
12 | """
13 | Publish given video file to a specified Kafka topic.
14 | Kafka Server is expected to be running on the localhost. Not partitioned.
15 |
16 | :param video_file: path to video file
17 | """
18 | # Start up producer
19 | producer = KafkaProducer(bootstrap_servers='10.140.0.2:9092')
20 |
21 | # Open file
22 | video = cv2.VideoCapture(video_file)
23 |
24 | print('publishing video...')
25 |
26 | while(video.isOpened()):
27 | success, frame = video.read()
28 |
29 | # Ensure file was read successfully
30 | if not success:
31 | print("bad read!")
32 | break
33 |
34 | # Convert image to png
35 | ret, buffer = cv2.imencode('.jpg', frame)
36 |
37 | # Convert to bytes and send to kafka
38 | producer.send(topic, buffer.tobytes())
39 |
40 | time.sleep(0.2)
41 | video.release()
42 | print('publish complete')
43 |
44 |
45 | def publish_camera():
46 | """
47 | Publish camera video stream to specified Kafka topic.
48 | Kafka Server is expected to be running on the localhost. Not partitioned.
49 | """
50 |
51 | # Start up producer
52 | producer = KafkaProducer(bootstrap_servers='10.140.0.2:9092')
53 |
54 |
55 | camera = cv2.VideoCapture(0)
56 | try:
57 | while(True):
58 | success, frame = camera.read()
59 |
60 | ret, buffer = cv2.imencode('.jpg', frame)
61 | producer.send(topic, buffer.tobytes())
62 | # Frames are only published to Kafka here; no local copy is written
63 |
64 |
65 | # Choppier stream, reduced load on processor
66 | time.sleep(0.2)
67 |
68 | except:
69 | print("\nExiting.")
70 | sys.exit(1)
71 |
72 |
73 | camera.release()
74 |
75 |
76 |
77 | publish_video('Countdown1.mp4')
78 |
--------------------------------------------------------------------------------
/kafka/src/main/assembly/package.xml:
--------------------------------------------------------------------------------
1 | <?xml version="1.0"?>
2 | <assembly xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.2"
3 |           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
4 |           xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.2
5 |                               http://maven.apache.org/xsd/assembly-1.1.2.xsd">
6 |   <id>package</id>
7 |   <formats>
8 |     <format>dir</format>
9 |   </formats>
10 |   <includeBaseDirectory>false</includeBaseDirectory>
11 |   <fileSets>
12 |     <fileSet>
13 |       <directory>${project.basedir}</directory>
14 |       <outputDirectory>share/doc/${project.name}/</outputDirectory>
15 |       <includes>
16 |         <include>README*</include>
17 |         <include>LICENSE*</include>
18 |         <include>NOTICE*</include>
19 |         <include>licenses/</include>
20 |       </includes>
21 |     </fileSet>
22 |     <fileSet>
23 |       <directory>${project.basedir}/config</directory>
24 |       <outputDirectory>etc/${project.name}</outputDirectory>
25 |       <includes>
26 |         <include>*</include>
27 |       </includes>
28 |     </fileSet>
29 |   </fileSets>
30 |   <dependencySets>
31 |     <dependencySet>
32 |       <outputDirectory>share/java/${project.name}</outputDirectory>
33 |       <useProjectArtifact>true</useProjectArtifact>
34 |       <useTransitiveFiltering>true</useTransitiveFiltering>
35 |       <excludes>
36 |         <exclude>org.apache.kafka:connect-api</exclude>
37 |       </excludes>
38 |     </dependencySet>
39 |   </dependencySets>
40 | </assembly>
41 |
--------------------------------------------------------------------------------
/kafka/src/main/java/com/tm/kafka/connect/aws/lambda/AwsLambdaSinkConnector.java:
--------------------------------------------------------------------------------
1 | package com.tm.kafka.connect.aws.lambda;
2 |
3 | import org.apache.kafka.common.config.ConfigDef;
4 | import org.apache.kafka.connect.connector.Task;
5 | import org.apache.kafka.connect.sink.SinkConnector;
6 | import org.slf4j.Logger;
7 | import org.slf4j.LoggerFactory;
8 |
9 | import java.util.ArrayList;
10 | import java.util.HashMap;
11 | import java.util.List;
12 | import java.util.Map;
13 |
14 | public class AwsLambdaSinkConnector extends SinkConnector {
15 | private static Logger log = LoggerFactory.getLogger(AwsLambdaSinkConnector.class);
16 | private AwsLambdaSinkConnectorConfig config;
17 |
18 | @Override
19 | public String version() {
20 | return VersionUtil.getVersion();
21 | }
22 |
23 | @Override
24 | public void start(Map<String, String> map) {
25 | config = new AwsLambdaSinkConnectorConfig(map);
26 | }
27 |
28 | @Override
29 | public Class<? extends Task> taskClass() {
30 | return AwsLambdaSinkTask.class;
31 | }
32 |
33 | @Override
34 | public List<Map<String, String>> taskConfigs(int maxTasks) {
35 | Map<String, String> taskProps = new HashMap<>(config.originalsStrings());
36 | List<Map<String, String>> taskConfigs = new ArrayList<>(maxTasks);
37 | for (int i = 0; i < maxTasks; ++i) {
38 | taskConfigs.add(taskProps);
39 | }
40 | return taskConfigs;
41 | }
42 |
43 | @Override
44 | public void stop() {
45 | }
46 |
47 | @Override
48 | public ConfigDef config() {
49 | return AwsLambdaSinkConnectorConfig.conf();
50 | }
51 | }
52 |
--------------------------------------------------------------------------------
/kafka/src/main/java/com/tm/kafka/connect/aws/lambda/AwsLambdaSinkConnectorConfig.java:
--------------------------------------------------------------------------------
1 | package com.tm.kafka.connect.aws.lambda;
2 |
3 | import com.amazonaws.auth.AWSCredentialsProvider;
4 | import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
5 | import com.amazonaws.regions.RegionUtils;
6 | import com.amazonaws.services.lambda.model.InvocationType;
7 | import com.amazonaws.services.lambda.model.InvokeRequest;
8 | import com.tm.kafka.connect.aws.lambda.converter.JsonPayloadConverter;
9 | import com.tm.kafka.connect.aws.lambda.converter.SinkRecordToPayloadConverter;
10 | import org.apache.kafka.common.Configurable;
11 | import org.apache.kafka.common.config.AbstractConfig;
12 | import org.apache.kafka.common.config.ConfigDef;
13 | import org.apache.kafka.common.config.ConfigDef.Importance;
14 | import org.apache.kafka.common.config.ConfigDef.Type;
15 | import org.apache.kafka.common.config.ConfigException;
16 | import org.apache.kafka.common.utils.Utils;
17 | import org.apache.kafka.connect.errors.ConnectException;
18 |
19 | import java.lang.reflect.InvocationTargetException;
20 | import java.util.ArrayList;
21 | import java.util.Arrays;
22 | import java.util.Collections;
23 | import java.util.HashMap;
24 | import java.util.List;
25 | import java.util.Map;
26 | import java.util.function.Function;
27 |
28 | import static org.apache.kafka.common.config.ConfigDef.NO_DEFAULT_VALUE;
29 |
30 |
31 | public class AwsLambdaSinkConnectorConfig extends AbstractConfig {
32 |
33 | static final String REGION_CONFIG = "aws.region";
34 | private static final String REGION_DOC_CONFIG = "The AWS region.";
35 | private static final String REGION_DISPLAY_CONFIG = "AWS region";
36 |
37 | private static final String CREDENTIALS_PROVIDER_CLASS_CONFIG = "aws.credentials.provider.class";
38 | private static final Class<? extends AWSCredentialsProvider> CREDENTIALS_PROVIDER_CLASS_DEFAULT =
39 | DefaultAWSCredentialsProviderChain.class;
40 | private static final String CREDENTIALS_PROVIDER_DOC_CONFIG =
41 | "Credentials provider or provider chain to use for authentication to AWS. By default "
42 | + "the connector uses 'DefaultAWSCredentialsProviderChain'.";
43 | private static final String CREDENTIALS_PROVIDER_DISPLAY_CONFIG = "AWS Credentials Provider Class";
44 |
45 | /**
46 | * The properties that begin with this prefix will be used to configure a class, specified by
47 | * {@code s3.credentials.provider.class} if it implements {@link Configurable}.
48 | */
49 | public static final String CREDENTIALS_PROVIDER_CONFIG_PREFIX =
50 | CREDENTIALS_PROVIDER_CLASS_CONFIG.substring(
51 | 0,
52 | CREDENTIALS_PROVIDER_CLASS_CONFIG.lastIndexOf(".") + 1
53 | );
54 |
55 | static final String FUNCTION_NAME_CONFIG = "aws.function.name";
56 | private static final String FUNCTION_NAME_DOC = "The AWS Lambda function name.";
57 | private static final String FUNCTION_NAME_DISPLAY = "AWS Lambda function Name";
58 |
59 | private static final String RETRY_BACKOFF_CONFIG = "retry.backoff.ms";
60 | private static final String RETRY_BACKOFF_DOC =
61 | "The retry backoff in milliseconds. This config is used to notify Kafka connect to retry "
62 | + "delivering a message batch or performing recovery in case of transient exceptions.";
63 | private static final long RETRY_BACKOFF_DEFAULT = 5000L;
64 | private static final String RETRY_BACKOFF_DISPLAY = "Retry Backoff (ms)";
65 |
66 | private static final String INVOCATION_TYPE_CONFIG = "aws.lambda.invocation.type";
67 | private static final String INVOCATION_TYPE_DEFAULT = "RequestResponse";
68 | private static final String INVOCATION_TYPE_DOC_CONFIG = "AWS Lambda function invocation type.";
69 | private static final String INVOCATION_TYPE_DISPLAY_CONFIG = "Invocation type";
70 |
71 | private static final String PAYLOAD_CONVERTER_CONFIG = "aws.lambda.payload.converter.class";
72 | private static final Class<? extends SinkRecordToPayloadConverter> PAYLOAD_CONVERTER_DEFAULT =
73 | JsonPayloadConverter.class;
74 | private static final String PAYLOAD_CONVERTER_DOC_CONFIG =
75 | "Class to be used to convert Kafka messages from SinkRecord to Aws Lambda input";
76 | private static final String PAYLOAD_CONVERTER_DISPLAY_CONFIG = "Payload converter class";
77 |
78 | private final SinkRecordToPayloadConverter sinkRecordToPayloadConverter;
79 | private final InvokeRequest invokeRequest;
80 |
81 | @SuppressWarnings("unchecked")
82 | private AwsLambdaSinkConnectorConfig(ConfigDef config, Map<String, String> parsedConfig) {
83 | super(config, parsedConfig);
84 | try {
85 | sinkRecordToPayloadConverter = ((Class<? extends SinkRecordToPayloadConverter>)
86 | getClass(PAYLOAD_CONVERTER_CONFIG)).getDeclaredConstructor().newInstance();
87 | } catch (IllegalAccessException | InstantiationException | InvocationTargetException | NoSuchMethodException e) {
88 | throw new ConnectException("Invalid class for: " + PAYLOAD_CONVERTER_CONFIG, e);
89 | }
90 | invokeRequest = new InvokeRequest()
91 | .withFunctionName(getAwsFunctionName())
92 | .withInvocationType(getAwsLambdaInvocationType());
93 | }
94 |
95 | AwsLambdaSinkConnectorConfig(Map<String, String> parsedConfig) {
96 | this(conf(), parsedConfig);
97 | }
98 |
99 | public static ConfigDef conf() {
100 | String group = "AWS";
101 | int orderInGroup = 0;
102 | return new ConfigDef()
103 | .define(REGION_CONFIG,
104 | Type.STRING,
105 | NO_DEFAULT_VALUE,
106 | new RegionValidator(),
107 | Importance.HIGH,
108 | REGION_DOC_CONFIG,
109 | group,
110 | ++orderInGroup,
111 | ConfigDef.Width.SHORT,
112 | REGION_DISPLAY_CONFIG,
113 | new RegionRecommender())
114 |
115 | .define(CREDENTIALS_PROVIDER_CLASS_CONFIG,
116 | Type.CLASS,
117 | CREDENTIALS_PROVIDER_CLASS_DEFAULT,
118 | new CredentialsProviderValidator(),
119 | Importance.HIGH,
120 | CREDENTIALS_PROVIDER_DOC_CONFIG,
121 | group,
122 | ++orderInGroup,
123 | ConfigDef.Width.MEDIUM,
124 | CREDENTIALS_PROVIDER_DISPLAY_CONFIG)
125 |
126 | .define(FUNCTION_NAME_CONFIG,
127 | Type.STRING,
128 | NO_DEFAULT_VALUE,
129 | Importance.HIGH,
130 | FUNCTION_NAME_DOC,
131 | group,
132 | ++orderInGroup,
133 | ConfigDef.Width.SHORT,
134 | FUNCTION_NAME_DISPLAY)
135 |
136 | .define(RETRY_BACKOFF_CONFIG,
137 | Type.LONG,
138 | RETRY_BACKOFF_DEFAULT,
139 | Importance.LOW,
140 | RETRY_BACKOFF_DOC,
141 | group,
142 | ++orderInGroup,
143 | ConfigDef.Width.NONE,
144 | RETRY_BACKOFF_DISPLAY)
145 |
146 | .define(INVOCATION_TYPE_CONFIG,
147 | Type.STRING,
148 | INVOCATION_TYPE_DEFAULT,
149 | new InvocationTypeValidator(),
150 | Importance.LOW,
151 | INVOCATION_TYPE_DOC_CONFIG,
152 | group,
153 | ++orderInGroup,
154 | ConfigDef.Width.SHORT,
155 | INVOCATION_TYPE_DISPLAY_CONFIG,
156 | new InvocationTypeRecommender())
157 |
158 | .define(PAYLOAD_CONVERTER_CONFIG,
159 | Type.CLASS,
160 | PAYLOAD_CONVERTER_DEFAULT,
161 | new PayloadConverterValidator(),
162 | Importance.LOW,
163 | PAYLOAD_CONVERTER_DOC_CONFIG,
164 | group,
165 | ++orderInGroup,
166 | ConfigDef.Width.SHORT,
167 | PAYLOAD_CONVERTER_DISPLAY_CONFIG,
168 | new PayloadConverterRecommender())
169 | ;
170 | }
171 |
172 | public String getAwsRegion() {
173 | return this.getString(REGION_CONFIG);
174 | }
175 |
176 | @SuppressWarnings("unchecked")
177 | public AWSCredentialsProvider getAwsCredentialsProvider() {
178 | try {
179 | AWSCredentialsProvider awsCredentialsProvider = ((Class<? extends AWSCredentialsProvider>)
180 | getClass(CREDENTIALS_PROVIDER_CLASS_CONFIG)).getDeclaredConstructor().newInstance();
181 | if (awsCredentialsProvider instanceof Configurable) {
182 | Map<String, Object> configs = originalsWithPrefix(CREDENTIALS_PROVIDER_CONFIG_PREFIX);
183 | configs.remove(CREDENTIALS_PROVIDER_CLASS_CONFIG.substring(CREDENTIALS_PROVIDER_CONFIG_PREFIX.length()));
184 | ((Configurable) awsCredentialsProvider).configure(configs);
185 | }
186 | return awsCredentialsProvider;
187 | } catch (IllegalAccessException | InstantiationException | InvocationTargetException | NoSuchMethodException e) {
188 | throw new ConnectException("Invalid class for: " + CREDENTIALS_PROVIDER_CLASS_CONFIG, e);
189 | }
190 | }
191 |
192 | public String getAwsFunctionName() {
193 | return this.getString(FUNCTION_NAME_CONFIG);
194 | }
195 |
196 | public Long getRetryBackoff() {
197 | return this.getLong(RETRY_BACKOFF_CONFIG);
198 | }
199 |
200 | private InvocationType getAwsLambdaInvocationType() {
201 | return InvocationType.fromValue(this.getString(INVOCATION_TYPE_CONFIG));
202 | }
203 |
204 | public SinkRecordToPayloadConverter getPayloadConverter() {
205 | return sinkRecordToPayloadConverter;
206 | }
207 |
208 | private static class RegionRecommender implements ConfigDef.Recommender {
209 | @Override
210 | public List<Object> validValues(String name, Map<String, Object> connectorConfigs) {
211 | return new ArrayList<>(RegionUtils.getRegions());
212 | }
213 |
214 | @Override
215 | public boolean visible(String name, Map<String, Object> connectorConfigs) {
216 | return true;
217 | }
218 | }
219 |
220 | private static class RegionValidator implements ConfigDef.Validator {
221 | @Override
222 | public void ensureValid(String name, Object region) {
223 | String regionStr = ((String) region).toLowerCase().trim();
224 | if (RegionUtils.getRegion(regionStr) == null) {
225 | throw new ConfigException(name, region, "Value must be one of: " + Utils.join(RegionUtils.getRegions(), ", "));
226 | }
227 | }
228 |
229 | @Override
230 | public String toString() {
231 | return "[" + Utils.join(RegionUtils.getRegions(), ", ") + "]";
232 | }
233 | }
234 |
235 | private static class CredentialsProviderValidator implements ConfigDef.Validator {
236 | @Override
237 | public void ensureValid(String name, Object provider) {
238 | if (provider instanceof Class && AWSCredentialsProvider.class.isAssignableFrom((Class<?>) provider)) {
239 | return;
240 | }
241 | throw new ConfigException(name, provider, "Class must extend: " + AWSCredentialsProvider.class);
242 | }
243 |
244 | @Override
245 | public String toString() {
246 | return "Any class implementing: " + AWSCredentialsProvider.class;
247 | }
248 | }
249 |
250 | private static class InvocationTypeRecommender implements ConfigDef.Recommender {
251 | @Override
252 | public List<Object> validValues(String name, Map<String, Object> connectorConfigs) {
253 | return Arrays.asList(InvocationType.values());
254 | }
255 |
256 | @Override
257 | public boolean visible(String name, Map<String, Object> connectorConfigs) {
258 | return true;
259 | }
260 | }
261 |
262 | private static class InvocationTypeValidator implements ConfigDef.Validator {
263 | @Override
264 | public void ensureValid(String name, Object invocationType) {
265 | try {
266 | InvocationType.fromValue(((String) invocationType).trim());
267 | } catch (Exception e) {
268 | throw new ConfigException(name, invocationType, "Value must be one of: " +
269 | Utils.join(InvocationType.values(), ", "));
270 | }
271 | }
272 |
273 | @Override
274 | public String toString() {
275 | return "[" + Utils.join(InvocationType.values(), ", ") + "]";
276 | }
277 | }
278 |
279 | private static class PayloadConverterRecommender implements ConfigDef.Recommender {
280 | @Override
281 | public List<Object> validValues(String name, Map<String, Object> connectorConfigs) {
282 | return Collections.singletonList(JsonPayloadConverter.class);
283 | }
284 |
285 | @Override
286 | public boolean visible(String name, Map<String, Object> connectorConfigs) {
287 | return true;
288 | }
289 | }
290 |
291 | private static class PayloadConverterValidator implements ConfigDef.Validator {
292 | @Override
293 | public void ensureValid(String name, Object provider) {
294 | if (provider instanceof Class && SinkRecordToPayloadConverter.class.isAssignableFrom((Class<?>) provider)) {
295 | return;
296 | }
297 | throw new ConfigException(name, provider, "Class must extend: " + SinkRecordToPayloadConverter.class);
298 | }
299 |
300 | @Override
301 | public String toString() {
302 | return "Any class implementing: " + SinkRecordToPayloadConverter.class;
303 | }
304 | }
305 |
306 | private static ConfigDef getConfig() {
307 | Map<String, ConfigDef.ConfigKey> everything = new HashMap<>(conf().configKeys());
308 | ConfigDef visible = new ConfigDef();
309 | for (ConfigDef.ConfigKey key : everything.values()) {
310 | visible.define(key);
311 | }
312 | return visible;
313 | }
314 |
315 | public Function<String, InvokeRequest> getInvokeRequestWithPayload() {
316 | return invokeRequest::withPayload;
317 | }
318 |
319 | public static void main(String[] args) {
320 | System.out.println(VersionUtil.getVersion());
321 | System.out.println(getConfig().toEnrichedRst());
322 | }
323 |
324 | }
325 |
--------------------------------------------------------------------------------
/kafka/src/main/java/com/tm/kafka/connect/aws/lambda/AwsLambdaSinkTask.java:
--------------------------------------------------------------------------------
1 | package com.tm.kafka.connect.aws.lambda;
2 |
3 | import com.amazonaws.services.lambda.AWSLambda;
4 | import com.amazonaws.services.lambda.AWSLambdaAsyncClientBuilder;
5 | import com.amazonaws.services.lambda.model.InvokeRequest;
6 | import org.apache.kafka.connect.sink.SinkRecord;
7 | import org.apache.kafka.connect.sink.SinkTask;
8 | import org.slf4j.Logger;
9 | import org.slf4j.LoggerFactory;
10 |
11 | import java.util.Collection;
12 | import java.util.Map;
13 | import java.util.Optional;
14 | import java.util.function.Consumer;
15 | import java.util.stream.Stream;
16 |
17 | import static java.nio.charset.StandardCharsets.UTF_8;
18 |
19 | public class AwsLambdaSinkTask extends SinkTask {
20 | private static Logger log = LoggerFactory.getLogger(AwsLambdaSinkTask.class);
21 |
22 | private AwsLambdaSinkConnectorConfig connectorConfig;
23 | private AWSLambda client;
24 |
25 | @Override
26 | public void start(Map<String, String> map) {
27 | connectorConfig = new AwsLambdaSinkConnectorConfig(map);
28 | context.timeout(connectorConfig.getRetryBackoff());
29 | if (client == null) {
30 | setClient(AWSLambdaAsyncClientBuilder.standard()
31 | .withRegion(connectorConfig.getAwsRegion())
32 | .withCredentials(connectorConfig.getAwsCredentialsProvider())
33 | .build());
34 | }
35 | }
36 |
37 | void setClient(AWSLambda client) {
38 | this.client = client;
39 | }
40 |
41 | @Override
42 | public void stop() {
43 | log.debug("Stopping sink task, setting client to null");
44 | client = null;
45 | }
46 |
47 | @Override
48 | public void put(Collection<SinkRecord> collection) {
49 | loggingWrapper(collection.stream()
50 | .map(connectorConfig.getPayloadConverter())
51 | .map(connectorConfig.getInvokeRequestWithPayload()))
52 | .forEach(client::invoke);
53 |
54 | if (log.isDebugEnabled()) {
55 | log.debug("Read {} records from Kafka", collection.size());
56 | }
57 | }
58 |
59 | private Stream<InvokeRequest> loggingWrapper(final Stream<InvokeRequest> stream) {
60 | return getLogFunction()
61 | .map(stream::peek) // if there is a function, stream to logging
62 | .orElse(stream); // or else just return the stream as is
63 | }
64 |
65 | private Optional<Consumer<InvokeRequest>> getLogFunction() {
66 | if (!log.isDebugEnabled()) {
67 | return Optional.empty();
68 | }
69 | if (!log.isTraceEnabled()) {
70 | return Optional.of(x -> log.debug("Calling " + connectorConfig.getAwsFunctionName()));
71 | }
72 | return Optional.of(x -> log.trace("Calling " + connectorConfig.getAwsFunctionName(),
73 | UTF_8.decode(x.getPayload()).toString()));
74 | }
75 |
76 | @Override
77 | public String version() {
78 | return VersionUtil.getVersion();
79 | }
80 | }
81 |
--------------------------------------------------------------------------------
/kafka/src/main/java/com/tm/kafka/connect/aws/lambda/ConfigurationAWSCredentialsProvider.java:
--------------------------------------------------------------------------------
1 | package com.tm.kafka.connect.aws.lambda;
2 |
3 | import com.amazonaws.auth.AWSCredentials;
4 | import com.amazonaws.auth.AWSCredentialsProvider;
5 | import org.apache.kafka.common.Configurable;
6 |
7 | import java.util.Map;
8 |
9 | public class ConfigurationAWSCredentialsProvider implements AWSCredentialsProvider, Configurable {
10 |
11 | private static final String AWS_ACCESS_KEY_ID_CONFIG = "aws.access.key.id";
12 | private static final String AWS_SECRET_ACCESS_KEY_CONFIG = "aws.secret.access.key";
13 |
14 | private AWSCredentials awsCredentials;
15 |
16 | @Override
17 | public AWSCredentials getCredentials() {
18 | return awsCredentials;
19 | }
20 |
21 | @Override
22 | public void refresh() {
23 |
24 | }
25 |
26 | @Override
27 | public void configure(final Map<String, ?> configs) {
28 | awsCredentials = new AWSCredentials() {
29 | private final String key = (String) configs.get(AWS_ACCESS_KEY_ID_CONFIG);
30 | private final String secret = (String) configs.get(AWS_SECRET_ACCESS_KEY_CONFIG);
31 |
32 | @Override
33 | public String getAWSAccessKeyId() {
34 | return key;
35 | }
36 |
37 | @Override
38 | public String getAWSSecretKey() {
39 | return secret;
40 | }
41 | };
42 | }
43 | }
44 |
--------------------------------------------------------------------------------
/kafka/src/main/java/com/tm/kafka/connect/aws/lambda/VersionUtil.java:
--------------------------------------------------------------------------------
1 | package com.tm.kafka.connect.aws.lambda;
2 |
3 | /**
4 | * Created by jeremy on 5/3/16.
5 | */
6 | class VersionUtil {
7 | public static String getVersion() {
8 | try {
9 | return VersionUtil.class.getPackage().getImplementationVersion();
10 | } catch (Exception ex) {
11 | return "0.0.0.0";
12 | }
13 | }
14 | }
15 |
--------------------------------------------------------------------------------
/kafka/src/main/java/com/tm/kafka/connect/aws/lambda/converter/DefaultPayloadConverter.java:
--------------------------------------------------------------------------------
1 | package com.tm.kafka.connect.aws.lambda.converter;
2 |
3 | import com.google.gson.Gson;
4 | import org.apache.kafka.connect.sink.SinkRecord;
5 | import org.slf4j.Logger;
6 | import org.slf4j.LoggerFactory;
7 |
8 | public class DefaultPayloadConverter implements SinkRecordToPayloadConverter {
9 | private Logger log = LoggerFactory.getLogger(DefaultPayloadConverter.class);
10 | private Gson gson = new Gson();
11 |
12 | public String convert(SinkRecord record) {
13 | String payload = gson.toJson(record);
14 | log.trace("P: {}", payload);
15 | return payload;
16 | }
17 | }
18 |
--------------------------------------------------------------------------------
/kafka/src/main/java/com/tm/kafka/connect/aws/lambda/converter/JsonPayloadConverter.java:
--------------------------------------------------------------------------------
1 | package com.tm.kafka.connect.aws.lambda.converter;
2 |
3 | import com.fasterxml.jackson.core.JsonProcessingException;
4 | import com.fasterxml.jackson.databind.ObjectMapper;
5 | import org.apache.kafka.connect.data.Schema;
6 | import org.apache.kafka.connect.json.JsonConverter;
7 | import org.apache.kafka.connect.json.JsonDeserializer;
8 | import org.apache.kafka.connect.sink.SinkRecord;
9 | import org.slf4j.Logger;
10 | import org.slf4j.LoggerFactory;
11 |
12 | import static java.util.Collections.emptyMap;
13 |
14 | public class JsonPayloadConverter implements SinkRecordToPayloadConverter {
15 | private Logger log = LoggerFactory.getLogger(JsonPayloadConverter.class);
16 | private ObjectMapper objectMapper = new ObjectMapper();
17 | private JsonConverter jsonConverter = new JsonConverter();
18 | private JsonDeserializer jsonDeserializer = new JsonDeserializer();
19 |
20 | public JsonPayloadConverter() {
21 | jsonConverter.configure(emptyMap(), false);
22 | jsonDeserializer.configure(emptyMap(), false);
23 | }
24 |
25 | public String convert(SinkRecord record) throws JsonProcessingException {
26 | String topic = record.topic();
27 | Schema schema = record.valueSchema();
28 | Object value = record.value();
29 |
30 | String payload = objectMapper.writeValueAsString(
31 | jsonDeserializer.deserialize(topic,
32 | jsonConverter.fromConnectData(topic, schema, value)));
33 |
34 | if (log.isTraceEnabled()) {
35 | log.trace("P: {}", payload);
36 | }
37 |
38 | return payload;
39 | }
40 | }
41 |
--------------------------------------------------------------------------------
/kafka/src/main/java/com/tm/kafka/connect/aws/lambda/converter/SinkRecordToPayloadConverter.java:
--------------------------------------------------------------------------------
1 | package com.tm.kafka.connect.aws.lambda.converter;
2 |
3 | import org.apache.kafka.connect.errors.RetriableException;
4 | import org.apache.kafka.connect.sink.SinkRecord;
5 |
6 | import java.util.function.Function;
7 |
8 | public interface SinkRecordToPayloadConverter extends Function<SinkRecord, String> {
9 | String convert(final SinkRecord record) throws Exception;
10 |
11 | default String apply(final SinkRecord record) {
12 | try {
13 | return convert(record);
14 | } catch (final Exception e) {
15 | throw new RetriableException("Payload converter " + getClass().getName() + " failed to convert '" + record.toString() + "'", e);
16 | }
17 | }
18 | }
19 |
--------------------------------------------------------------------------------
/kafka/src/main/resources/logback.xml:
--------------------------------------------------------------------------------
1 | <?xml version="1.0" encoding="UTF-8"?>
2 | <configuration>
3 |
4 |   <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
5 |     <encoder>
6 |       <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger - %msg%n</pattern>
7 |     </encoder>
8 |   </appender>
9 |
10 |   <root level="debug"> <!-- assumed level -->
11 |     <appender-ref ref="STDOUT"/>
12 |   </root>
13 | </configuration>
14 |
--------------------------------------------------------------------------------
/kafka/src/test/java/com/tm/kafka/connect/aws/lambda/AwsLambdaSinkConnectorConfigTest.java:
--------------------------------------------------------------------------------
1 | package com.tm.kafka.connect.aws.lambda;
2 |
3 | import org.junit.Test;
4 |
5 | public class AwsLambdaSinkConnectorConfigTest {
6 | @Test
7 | public void doc() {
8 | System.out.println(AwsLambdaSinkConnectorConfig.conf().toRst());
9 | }
10 | }
11 |
--------------------------------------------------------------------------------
/kafka/src/test/java/com/tm/kafka/connect/aws/lambda/AwsLambdaSinkConnectorTest.java:
--------------------------------------------------------------------------------
1 | package com.tm.kafka.connect.aws.lambda;
2 |
3 | import org.junit.Test;
4 |
5 | public class AwsLambdaSinkConnectorTest {
6 | @Test
7 | public void test() {
8 | // Congrats on a passing test!
9 | }
10 | }
11 |
--------------------------------------------------------------------------------
/kafka/src/test/java/com/tm/kafka/connect/aws/lambda/AwsLambdaSinkTaskTest.java:
--------------------------------------------------------------------------------
1 | package com.tm.kafka.connect.aws.lambda;
2 |
3 | import com.amazonaws.services.lambda.AbstractAWSLambda;
4 | import com.amazonaws.services.lambda.model.InvokeRequest;
5 | import com.amazonaws.services.lambda.model.InvokeResult;
6 | import org.apache.kafka.common.TopicPartition;
7 | import org.apache.kafka.common.record.TimestampType;
8 | import org.apache.kafka.connect.data.Schema;
9 | import org.apache.kafka.connect.data.SchemaBuilder;
10 | import org.apache.kafka.connect.data.Struct;
11 | import org.apache.kafka.connect.sink.SinkRecord;
12 | import org.apache.kafka.connect.sink.SinkTaskContext;
13 | import org.junit.Test;
14 |
15 | import java.util.ArrayList;
16 | import java.util.Collection;
17 | import java.util.HashMap;
18 | import java.util.HashSet;
19 | import java.util.Map;
20 | import java.util.Set;
21 |
22 | import static com.tm.kafka.connect.aws.lambda.AwsLambdaSinkConnectorConfig.FUNCTION_NAME_CONFIG;
23 | import static com.tm.kafka.connect.aws.lambda.AwsLambdaSinkConnectorConfig.REGION_CONFIG;
24 | import static org.apache.kafka.connect.data.Schema.STRING_SCHEMA;
25 | import static org.junit.Assert.assertEquals;
26 |
27 | public class AwsLambdaSinkTaskTest {
28 |
29 | private static final String TOPIC = "aws-lambda-topic";
30 | private static final int PARTITION = 12;
31 | private static final int PARTITION2 = 13;
32 |
33 | private static final TopicPartition TOPIC_PARTITION = new TopicPartition(TOPIC, PARTITION);
34 | private static final TopicPartition TOPIC_PARTITION2 = new TopicPartition(TOPIC, PARTITION2);
35 | private static final String FUNCTION_NAME = "kafka-aws-lambda-test";
36 | private static final String REGION = "us-west-2";
37 |
38 | @Test
39 | public void test() {
40 | Map<String, String> props = new HashMap<String, String>() {{
41 | put(FUNCTION_NAME_CONFIG, FUNCTION_NAME);
42 | put(REGION_CONFIG, REGION);
43 | }};
44 |
45 | Set<TopicPartition> assignment = new HashSet<>();
46 | assignment.add(TOPIC_PARTITION);
47 | assignment.add(TOPIC_PARTITION2);
48 | MockSinkTaskContext context = new MockSinkTaskContext(assignment);
49 |
50 | AwsLambdaSinkTask task = new AwsLambdaSinkTask();
51 |
52 |
53 | Collection<SinkRecord> records = new ArrayList<>();
54 | int partition = 1;
55 | String key = "key1";
56 |
57 | Schema valueSchema = SchemaBuilder.struct()
58 | .name("com.example.Person")
59 | .field("name", STRING_SCHEMA)
60 | .field("age", Schema.INT32_SCHEMA)
61 | .build();
62 |
63 | String bobbyMcGee = "Bobby McGee";
64 | int value21 = 21;
65 |
66 | Struct value = new Struct(valueSchema)
67 | .put("name", bobbyMcGee)
68 | .put("age", value21);
69 |
70 | long offset = 100;
71 | long timestamp = 200L;
72 | SinkRecord sinkRecord = new SinkRecord(
73 | TOPIC,
74 | partition,
75 | STRING_SCHEMA,
76 | key,
77 | valueSchema,
78 | value,
79 | offset,
80 | timestamp,
81 | TimestampType.CREATE_TIME);
82 | records.add(sinkRecord);
83 |
84 | String payload = "{\"schema\":" +
85 | "{\"type\":\"struct\"," +
86 | "\"fields\":[" +
87 | "{\"type\":\"string\",\"optional\":false,\"field\":\"name\"}," +
88 | "{\"type\":\"int32\",\"optional\":false,\"field\":\"age\"}" +
89 | "]," +
90 | "\"optional\":false," +
91 | "\"name\":\"com.example.Person\"}," +
92 | "\"payload\":{\"name\":\"Bobby McGee\",\"age\":21}}";
93 |
94 | task.setClient(new AbstractAWSLambda() {
95 | @Override
96 | public InvokeResult invoke(final InvokeRequest request) {
97 | assertEquals(FUNCTION_NAME, request.getFunctionName());
98 | assertEquals(payload, new String(request.getPayload().array()));
99 | return null;
100 | }
101 | });
102 |
103 | task.initialize(context);
104 | task.start(props);
105 | task.put(records);
106 | }
107 |
108 | protected static class MockSinkTaskContext implements SinkTaskContext {
109 |
110 | private final Map<TopicPartition, Long> offsets;
111 | private long timeoutMs;
112 | private Set<TopicPartition> assignment;
113 |
114 | MockSinkTaskContext(Set<TopicPartition> assignment) {
115 | this.offsets = new HashMap<>();
116 | this.timeoutMs = -1L;
117 | this.assignment = assignment;
118 | }
119 |
120 | @Override
121 | public Map<String, String> configs() {
122 | return null;
123 | }
124 |
125 | @Override
126 | public void offset(Map<TopicPartition, Long> offsets) {
127 | this.offsets.putAll(offsets);
128 | }
129 |
130 | @Override
131 | public void offset(TopicPartition tp, long offset) {
132 | offsets.put(tp, offset);
133 | }
134 |
135 | public Map<TopicPartition, Long> offsets() {
136 | return offsets;
137 | }
138 |
139 | @Override
140 | public void timeout(long timeoutMs) {
141 | this.timeoutMs = timeoutMs;
142 | }
143 |
144 | public long timeout() {
145 | return timeoutMs;
146 | }
147 |
148 | @Override
149 | public Set<TopicPartition> assignment() {
150 | return assignment;
151 | }
152 |
153 | public void setAssignment(Set<TopicPartition> nextAssignment) {
154 | assignment = nextAssignment;
155 | }
156 |
157 | @Override
158 | public void pause(TopicPartition... partitions) {
159 | }
160 |
161 | @Override
162 | public void resume(TopicPartition... partitions) {
163 | }
164 |
165 | @Override
166 | public void requestCommit() {
167 | }
168 | }
169 | }
170 |
--------------------------------------------------------------------------------
/kafka/src/test/resources/logback.xml:
--------------------------------------------------------------------------------
1 | <?xml version="1.0" encoding="UTF-8"?>
2 | <configuration>
3 |
4 |   <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
5 |     <encoder>
6 |       <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger - %msg%n</pattern>
7 |     </encoder>
8 |   </appender>
9 |
10 |   <root level="debug"> <!-- assumed level -->
11 |     <appender-ref ref="STDOUT"/>
12 |   </root>
13 | </configuration>
--------------------------------------------------------------------------------
/kafka/templates/index.html:
--------------------------------------------------------------------------------
1 | <html>
2 |   <head>
3 |     <title>Video Streaming Demonstration</title>
4 |   </head>
5 |   <body>
6 |     <h1>Video Streaming Demonstration</h1>
7 |
8 |     <!-- assumed markup: embeds the MJPEG stream served by the consumer's /video_feed route -->
9 |     <img src="{{ url_for('video_feed') }}">
10 |
11 |   </body>
12 | </html>
--------------------------------------------------------------------------------
/kafka/zk-common.env:
--------------------------------------------------------------------------------
1 | ZOOKEEPER_CLIENT_PORT=2181
2 |
--------------------------------------------------------------------------------
/lambda/README.md:
--------------------------------------------------------------------------------
1 |
2 | # Preparing AWS Lambda Deployment Package in Python & Testing Kafka Connect AWS Lambda Connector
3 |
4 | A deployment package is a ZIP archive that contains your function code and dependencies. You need to create a deployment package if you use the Lambda API to manage functions, or if you need to include libraries and dependencies other than the AWS SDK. You can upload the package directly to Lambda, or you can use an Amazon S3 bucket, and then upload it to Lambda. If the deployment package is larger than 50 MB, you must use Amazon S3.
5 |
6 | Let us try out a simple example: build the Kafka module and package it as a zip file that can be loaded onto AWS Lambda.
7 |
8 |
9 | ## Pre-requisite:
10 |
11 | - Docker Desktop for Mac
12 | - Python 3.6
13 | - Kafka Cluster running on http://35.189.130.4:9000/
14 |
15 | ## Using a Virtual Environment
16 |
17 | You may need to use a virtual environment to install dependencies for your function. This can occur if your function or its dependencies have dependencies on native libraries, or if you used Homebrew to install Python.
18 |
19 | To update a Python function with a virtual environment
20 |
21 | 1. Create a virtual environment.
22 |
23 | ```
24 | [Captains-Bay]🚩 > pwd
25 | /Users/ajeetraina/pico
26 | [Captains-Bay]🚩 > virtualenv v-env
27 | Using base prefix '/Library/Frameworks/Python.framework/Versions/3.6'
28 | New python executable in /Users/ajeetraina/pico/v-env/bin/python3.6
29 | Also creating executable in /Users/ajeetraina/pico/v-env/bin/python
30 | Installing setuptools, pip, wheel...
31 | done.
32 | [Captains-Bay]🚩
33 | ```
34 |
35 | For Python 3.3 and newer, you need to use the built-in venv module to create a virtual environment, instead of installing virtualenv.
36 |
37 | ```
38 | [Captains-Bay]🚩 > python3 -m venv v-env
39 | [Captains-Bay]🚩 >
40 | ```
41 |
42 | ## Activate the environment
43 |
44 | ```
45 | source v-env/bin/activate
46 | ```
47 |
48 | ## Install libraries with pip
49 |
50 | ```
51 | (v-env) [Captains-Bay]🚩 > pip install kafka
52 | Collecting kafka
53 | Downloading https://files.pythonhosted.org/packages/21/71/73286e748ac5045b6a669c2fe44b03ac4c5d3d2af9291c4c6fc76438a9a9/kafka-1.3.5-py2.py3-none-any.whl (207kB)
54 | |████████████████████████████████| 215kB 428kB/s
55 | Installing collected packages: kafka
56 | Successfully installed kafka-1.3.5
57 | (v-env) [Captains-Bay]🚩 >
58 | ```
59 |
60 | ## Deactivate the virtual environment
61 |
62 | ```
63 | deactivate
64 | ```
65 |
66 | ## Create a ZIP archive with the contents of the library
67 |
68 | ```
69 | cd v-env/lib/python3.7/site-packages
70 | ```
71 |
72 |
73 |
74 | ```
75 | zip -r9 ${OLDPWD}/function.zip .
76 | ```
77 |
78 | ```
79 | cd $OLDPWD
80 | ```
81 |
82 | ## Add your function code to the archive
83 |
84 | Add [function.py](https://github.com/collabnix/pico/blob/master/lambda/function.py) here under the same directory
85 |
86 | ```
87 | zip -g function.zip function.py
88 | ```
89 |
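If you want to double-check the archive before uploading it, a quick (purely optional) Python sketch can confirm that both the kafka package and function.py ended up inside:

```
import zipfile

# List the archive contents and confirm the expected entries are present
with zipfile.ZipFile('function.zip') as archive:
    names = archive.namelist()
    print('function.py present:', 'function.py' in names)
    print('kafka package present:', any(n.startswith('kafka/') for n in names))
```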
90 | # Testing Kafka Connect AWS Lambda Connector
91 |
92 | ## Pre-requisite:
93 |
94 | - aws.amazon.com
95 | - Click on Services > Lambda
96 |
97 | ## Steps:
98 |
99 | - Open AWS Lambda Page
100 | - Click on "Create Function"
101 | - Select "Author from Scratch"
102 | - Enter Function name of your Choice
103 | - Choose "Python 3.6" as Runtime
104 | - Click on "Create Function"
105 |
106 | You should see the message "Congratulations! Your Lambda function "kafka-pico-lambda" has been successfully created. You can now change its code and configuration. Choose Test to input a test event when you want to test your function."
107 |
108 | Under the function code, select "Upload as zip file" and upload function.zip. Select Python 3.6 as Runtime and handler as function.lambda_handler.
109 |
110 |
111 | 
112 |
113 | Click on Save.
114 |
115 | ## Triggering the Consumer Code
116 |
117 | Go to any one of your Kafka containers (say, kafka-1) and run the commands below:
118 |
119 | ```
120 | docker exec -it kafka-1 bash
121 | ```
122 |
123 | ```
124 | kafka-console-producer --broker-list kafka-1:9092 --topic aws-lambda-topic
125 | ```
126 | and enter some text.
127 | For example, I typed dharwad.
128 |
129 | Go back to Lambda and click on Test. You should see dharwad in the output, as shown below:
130 |
131 |
132 | 
133 |
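The test output is simply the return value of lambda_handler in function.py (included later in this document): a small dict whose body wraps the consumed Kafka message, roughly of this shape:

```
# Approximate shape of the Lambda test output; the body wraps whatever text
# was produced to the aws-lambda-topic topic (dharwad in this example).
{'statusCode': 200, 'body': '"...dharwad..."'}
```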
134 | ## Troubleshooting:
135 | - If it displays "Unable to import Kafka module", you have probably missed one of the steps above. Go back and rebuild the zip file from the beginning.
136 | - If a timeout error appears, increase the function timeout to 3-4 minutes.
137 |
138 |
--------------------------------------------------------------------------------
/lambda/Screen Shot 2019-07-01 at 3.31.58 PM.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/collabnix/pico/0914b9efe05be859579ab270f3b1c4c91fa8959f/lambda/Screen Shot 2019-07-01 at 3.31.58 PM.png
--------------------------------------------------------------------------------
/lambda/Screen Shot 2019-07-01 at 3.32.15 PM.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/collabnix/pico/0914b9efe05be859579ab270f3b1c4c91fa8959f/lambda/Screen Shot 2019-07-01 at 3.32.15 PM.png
--------------------------------------------------------------------------------
/lambda/function.py:
--------------------------------------------------------------------------------
1 | import json
2 | from kafka import KafkaConsumer
3 |
4 | def lambda_handler(event, context):
5 |
6 | consumer = KafkaConsumer('aws-lambda-topic', bootstrap_servers='35.189.130.4:9092')
7 |
8 | for message in consumer:
9 | return { 'statusCode': 200, 'body': json.dumps(str(message.value))}
11 |
--------------------------------------------------------------------------------
/onprem/yolo/README.md:
--------------------------------------------------------------------------------
1 | # Running YOLO inside a Docker container on Jetson Nano
2 |
3 | ## Pre-requisite:
4 |
5 | - Jetson Nano running Docker 19.03
6 | - Jetson Nano has two power modes, 5W and 10W. Set the power mode of the Jetson Nano to 5W by running the CLI below:
7 |
8 | ```
9 | sudo nvpmodel -m 1
10 | ```
11 |
12 | - Connect webcam using USB port
13 |
14 |
15 | ## Set up a swap partition
16 |
17 | In order to reduce memory pressure (and crashes), it is a good idea to set up a 6GB swap partition (the Nano has only 4GB of RAM).
18 |
19 | ```
20 | git clone https://github.com/collabnix/installSwapfile
21 | cd installSwapfile
22 | chmod 777 installSwapfile.sh
23 | ```
24 | ```
25 | ./installSwapfile.sh
26 | ```
27 |
28 | Reboot the Jetson Nano.
29 |
30 | ## Verify that your USB camera is connected
31 |
32 | ```
33 | ls /dev/video*
34 | ```
35 |
36 | Output should be: /dev/video0
37 |
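If you prefer to verify the camera from Python as well, a minimal OpenCV check (assuming OpenCV is installed on the Nano or inside one of the containers used in this project) looks like this:

```
import cv2

# Grab a single frame from the USB camera (device index 0 corresponds to /dev/video0)
camera = cv2.VideoCapture(0)
ok, frame = camera.read()
print('camera working:', ok)
camera.release()
```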
38 | # Running the scripts
39 |
40 | ```
41 | wget -N https://raw.githubusercontent.com/opendatacam/opendatacam/v2.1.0/docker/install-opendatacam.sh
42 | chmod 777 install-opendatacam.sh
43 | ./install-opendatacam.sh --platform nano
44 | ```
45 |
46 | ```
47 |
48 |
49 | jetson@worker1:~$ sudo docker ps
50 | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
51 | aae5117a06c6 opendatacam/opendatacam:v2.1.0-nano "/bin/sh -c ./docker…" 15 minutes ago Up 5 minutes 0.0.0.0:8070->8070/tcp, 0.0.0.0:8080->8080/tcp, 0.0.0.0:8090->8090/tcp, 27017/tcp heuristic_bardeen
52 | jetson@worker1:~$ sudo docker logs -f aae
53 | 2020-01-05T10:24:01.840+0000 I STORAGE [main] Max cache overflow file size custom option: 0
54 | 2020-01-05T10:24:01.845+0000 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
55 | 2020-01-05T10:24:01.853+0000 I CONTROL [initandlisten] MongoDB starting : pid=8 port=27017 dbpath=/data/db 64-bit host=aae5117a06c6
56 | 2020-01-05T10:24:01.853+0000 I CONTROL [initandlisten] db version v4.0.12
57 | 2020-01-05T10:24:01.853+0000 I CONTROL [initandlisten] git version: 5776e3cbf9e7afe86e6b29e22520ffb6766e95d4
58 | 2020-01-05T10:24:01.853+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.2n 7 Dec 2017
59 | 2020-01-05T10:24:01.853+0000 I CONTROL [initandlisten] allocator: tcmalloc
60 | 2020-01-05T10:24:01.853+0000 I CONTROL [initandlisten] modules: none
61 | 2020-01-05T10:24:01.853+0000 I CONTROL [initandlisten] build environment:
62 | 2020-01-05T10:24:01.853+0000 I CONTROL [initandlisten] distmod: ubuntu1604
63 | 2020-01-05T10:24:01.853+0000 I CONTROL [initandlisten] distarch: aarch64
64 | 2020-01-05T10:24:01.853+0000 I CONTROL [initandlisten] target_arch: aarch64
65 | 2020-01-05T10:24:01.853+0000 I CONTROL [initandlisten] options: {}
66 | 2020-01-05T10:24:01.854+0000 I STORAGE [initandlisten]
67 | 2020-01-05T10:24:01.854+0000 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
68 | 2020-01-05T10:24:01.854+0000 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem
69 | 2020-01-05T10:24:01.854+0000 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=1470M,cache_overflow=(file_max=0M),session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
70 | 2020-01-05T10:24:03.612+0000 I STORAGE [initandlisten] WiredTiger message [1578219843:612093][8:0x7fb6246440], txn-recover: Set global recovery timestamp: 0
71 | 2020-01-05T10:24:03.669+0000 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
72 | 2020-01-05T10:24:03.730+0000 I CONTROL [initandlisten]
73 | 2020-01-05T10:24:03.730+0000 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
74 | 2020-01-05T10:24:03.730+0000 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
75 | 2020-01-05T10:24:03.730+0000 I CONTROL [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
76 | 2020-01-05T10:24:03.730+0000 I CONTROL [initandlisten]
77 | 2020-01-05T10:24:03.731+0000 I CONTROL [initandlisten] ** WARNING: This server is bound to localhost.
78 | 2020-01-05T10:24:03.731+0000 I CONTROL [initandlisten] ** Remote systems will be unable to connect to this server.
79 | 2020-01-05T10:24:03.731+0000 I CONTROL [initandlisten] ** Start the server with --bind_ip to specify which IP
80 | 2020-01-05T10:24:03.731+0000 I CONTROL [initandlisten] ** addresses it should serve responses from, or with --bind_ip_all to
81 | 2020-01-05T10:24:03.731+0000 I CONTROL [initandlisten] ** bind to all interfaces. If this behavior is desired, start the
82 | 2020-01-05T10:24:03.732+0000 I CONTROL [initandlisten] ** server with --bind_ip 127.0.0.1 to disable this warning.
83 | 2020-01-05T10:24:03.732+0000 I CONTROL [initandlisten]
84 | 2020-01-05T10:24:03.733+0000 I CONTROL [initandlisten]
85 | 2020-01-05T10:24:03.734+0000 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
86 | 2020-01-05T10:24:03.734+0000 I CONTROL [initandlisten] ** We suggest setting it to 'never'
87 | 2020-01-05T10:24:03.734+0000 I CONTROL [initandlisten]
88 | 2020-01-05T10:24:03.738+0000 I STORAGE [initandlisten] createCollection: admin.system.version with provided UUID: 2ecaac66-8c6f-403e-b789-2a69113c59fd
89 | 2020-01-05T10:24:03.802+0000 I COMMAND [initandlisten] setting featureCompatibilityVersion to 4.0
90 | 2020-01-05T10:24:03.810+0000 I STORAGE [initandlisten] createCollection: local.startup_log with generated UUID: 847e0215-cc4d-4f84-8bbe-0bccb2f9dfd3
91 | 2020-01-05T10:24:03.858+0000 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
92 | 2020-01-05T10:24:03.862+0000 I NETWORK [initandlisten] waiting for connections on port 27017
93 | 2020-01-05T10:24:03.863+0000 I STORAGE [LogicalSessionCacheRefresh] createCollection: config.system.sessions with generated UUID: 1e2b3be5-a92a-4eb8-b8a5-c6d10cfaadb7
94 | 2020-01-05T10:24:03.961+0000 I INDEX [LogicalSessionCacheRefresh] build index on: config.system.sessions properties: { v: 2, key: { lastUse: 1 }, name: "lsidTTLIndex", ns: "config.system.sessions", expireAfterSeconds: 1800 }
95 | 2020-01-05T10:24:03.961+0000 I INDEX [LogicalSessionCacheRefresh] building index using bulk method; build may temporarily use up to 500 megabytes of RAM
96 | 2020-01-05T10:24:03.965+0000 I INDEX [LogicalSessionCacheRefresh] build index done. scanned 0 total records. 0 secs
97 | 2020-01-05T10:24:03.965+0000 I COMMAND [LogicalSessionCacheRefresh] command config.$cmd command: createIndexes { createIndexes: "system.sessions", indexes: [ { key: { lastUse: 1 }, name: "lsidTTLIndex", expireAfterSeconds: 1800 } ], $db: "config" } numYields:0 reslen:114 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2, W: 1 } }, Collection: { acquireCount: { w: 2 } } } storage:{} protocol:op_msg 102ms
98 |
99 | > OpenDataCam@2.1.0 start /opendatacam
100 | > PORT=8080 NODE_ENV=production node server.js
101 |
102 | Please specify the path to the raw detections file
103 | -----------------------------------
104 | - Opendatacam initialized -
105 | - Config loaded: -
106 | {
107 | "OPENDATACAM_VERSION": "2.1.0",
108 | "PATH_TO_YOLO_DARKNET": "/darknet",
109 | "VIDEO_INPUT": "usbcam",
110 | "NEURAL_NETWORK": "yolov3-tiny",
111 | "VIDEO_INPUTS_PARAMS": {
112 | "file": "opendatacam_videos/demo.mp4",
113 | "usbcam": "v4l2src device=/dev/video0 ! video/x-raw, framerate=30/1, width=640, height=360 ! videoconvert ! appsink",
114 | "usbcam_no_gstreamer": "-c 0",
115 | "experimental_raspberrycam_docker": "v4l2src device=/dev/video2 ! video/x-raw, framerate=30/1, width=640, height=360 ! videoconvert ! appsink",
116 | "raspberrycam_no_docker": "nvarguscamerasrc ! video/x-raw(memory:NVMM),width=1280, height=720, framerate=30/1, format=NV12 ! nvvidconv ! video/x-raw, format=BGRx, width=640, height=360 ! videoconvert ! video/x-raw, format=BGR ! appsink",
117 | "remote_cam": "YOUR IP CAM STREAM (can be .m3u8, MJPEG ...), anything supported by opencv"
118 | },
119 | "VALID_CLASSES": [
120 | "*"
121 | ],
122 | "DISPLAY_CLASSES": [
123 | {
124 | "class": "bicycle",
125 | "icon": "1F6B2.svg"
126 | },
127 | {
128 | "class": "person",
129 | "icon": "1F6B6.svg"
130 | },
131 | {
132 | "class": "truck",
133 | "icon": "1F69B.svg"
134 | },
135 | {
136 | "class": "motorbike",
137 | "icon": "1F6F5.svg"
138 | },
139 | {
140 | "class": "car",
141 | "icon": "1F697.svg"
142 | },
143 | {
144 | "class": "bus",
145 | "icon": "1F68C.svg"
146 | }
147 | ],
148 | "PATHFINDER_COLORS": [
149 | "#1f77b4",
150 | "#ff7f0e",
151 | "#2ca02c",
152 | "#d62728",
153 | "#9467bd",
154 | "#8c564b",
155 | "#e377c2",
156 | "#7f7f7f",
157 | "#bcbd22",
158 | "#17becf"
159 | ],
160 | "COUNTER_COLORS": {
161 | "yellow": "#FFE700",
162 | "turquoise": "#A3FFF4",
163 | "green": "#a0f17f",
164 | "purple": "#d070f0",
165 | "red": "#AB4435"
166 | },
167 | "NEURAL_NETWORK_PARAMS": {
168 | "yolov3": {
169 | "data": "cfg/coco.data",
170 | "cfg": "cfg/yolov3.cfg",
171 | "weights": "yolov3.weights"
172 | },
173 | "yolov3-tiny": {
174 | "data": "cfg/coco.data",
175 | "cfg": "cfg/yolov3-tiny.cfg",
176 | "weights": "yolov3-tiny.weights"
177 | },
178 | "yolov2-voc": {
179 | "data": "cfg/voc.data",
180 | "cfg": "cfg/yolo-voc.cfg",
181 | "weights": "yolo-voc.weights"
182 | }
183 | },
184 | "TRACKER_ACCURACY_DISPLAY": {
185 | "nbFrameBuffer": 300,
186 | "settings": {
187 | "radius": 3.1,
188 | "blur": 6.2,
189 | "step": 0.1,
190 | "gradient": {
191 | "1": "red",
192 | "0.4": "orange"
193 | },
194 | "canvasResolutionFactor": 0.1
195 | }
196 | },
197 | "MONGODB_URL": "mongodb://127.0.0.1:27017"
198 | }
199 | -----------------------------------
200 | Process YOLO initialized
201 | 2020-01-05T10:24:09.844+0000 I NETWORK [listener] connection accepted from 127.0.0.1:33770 #1 (1 connection now open)
202 | > Ready on http://localhost:8080
203 | > Ready on http://172.17.0.2:8080
204 | 2020-01-05T10:24:09.878+0000 I NETWORK [conn1] received client metadata from 127.0.0.1:33770 conn1: { driver: { name: "nodejs", version: "3.2.5" }, os: { type: "Linux", name: "linux", architecture: "arm64", version: "4.9.140-tegra" }, platform: "Node.js v10.16.3, LE, mongodb-core: 3.2.5" }
205 | 2020-01-05T10:24:09.915+0000 I STORAGE [conn1] createCollection: opendatacam.recordings with generated UUID: 0b545873-c40f-4232-8803-9c7c0cbd0ec4
206 | 2020-01-05T10:24:09.917+0000 I NETWORK [listener] connection accepted from 127.0.0.1:33772 #2 (2 connections now open)
207 | Success init db
208 | 2020-01-05T10:24:09.919+0000 I NETWORK [conn2] received client metadata from 127.0.0.1:33772 conn2: { driver: { name: "nodejs", version: "3.2.5" }, os: { type: "Linux", name: "linux", architecture: "arm64", version: "4.9.140-tegra" }, platform: "Node.js v10.16.3, LE, mongodb-core: 3.2.5" }
209 | 2020-01-05T10:24:09.969+0000 I INDEX [conn1] build index on: opendatacam.recordings properties: { v: 2, key: { dateEnd: -1 }, name: "dateEnd_-1", ns: "opendatacam.recordings" }
210 | 2020-01-05T10:24:09.969+0000 I INDEX [conn1] building index using bulk method; build may temporarily use up to 500 megabytes of RAM
211 | 2020-01-05T10:24:09.971+0000 I INDEX [conn1] build index done. scanned 0 total records. 0 secs
212 | 2020-01-05T10:24:09.971+0000 I STORAGE [conn2] createCollection: opendatacam.tracker with generated UUID: 58e46bc1-6f22-4b3b-9e3f-42201351e5b4
213 | 2020-01-05T10:24:10.040+0000 I INDEX [conn2] build index on: opendatacam.tracker properties: { v: 2, key: { recordingId: 1 }, name: "recordingId_1", ns: "opendatacam.tracker" }
214 | 2020-01-05T10:24:10.040+0000 I INDEX [conn2] building index using bulk method; build may temporarily use up to 500 megabytes of RAM
215 | 2020-01-05T10:24:10.042+0000 I INDEX [conn2] build index done. scanned 0 total records. 0 secs
216 | 2020-01-05T10:24:10.043+0000 I COMMAND [conn2] command opendatacam.$cmd command: createIndexes { createIndexes: "tracker", indexes: [ { name: "recordingId_1", key: { recordingId: 1 } } ], lsid: { id: UUID("afed3446-90a2-4a09-b03b-ba2e9e3aa76f") }, $db: "opendatacam" } numYields:0 reslen:114 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2, W: 1 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 47016 } }, Collection: { acquireCount: { w: 2 } } } storage:{} protocol:op_msg 118ms
217 | Process YOLO started
218 | { OPENDATACAM_VERSION: '2.1.0',
219 | PATH_TO_YOLO_DARKNET: '/darknet',
220 | VIDEO_INPUT: 'usbcam',
221 | NEURAL_NETWORK: 'yolov3-tiny',
222 | VIDEO_INPUTS_PARAMS:
223 | { file: 'opendatacam_videos/demo.mp4',
224 | usbcam:
225 | 'v4l2src device=/dev/video0 ! video/x-raw, framerate=30/1, width=640, height=360 ! videoconvert ! appsink',
226 | usbcam_no_gstreamer: '-c 0',
227 | experimental_raspberrycam_docker:
228 | 'v4l2src device=/dev/video2 ! video/x-raw, framerate=30/1, width=640, height=360 ! videoconvert ! appsink',
229 | raspberrycam_no_docker:
230 | 'nvarguscamerasrc ! video/x-raw(memory:NVMM),width=1280, height=720, framerate=30/1, format=NV12 ! nvvidconv ! video/x-raw, format=BGRx, width=640, height=360 ! videoconvert ! video/x-raw, format=BGR ! appsink',
231 | remote_cam:
232 | 'YOUR IP CAM STREAM (can be .m3u8, MJPEG ...), anything supported by opencv' },
233 | VALID_CLASSES: [ '*' ],
234 | DISPLAY_CLASSES:
235 | [ { class: 'bicycle', icon: '1F6B2.svg' },
236 | { class: 'person', icon: '1F6B6.svg' },
237 | { class: 'truck', icon: '1F69B.svg' },
238 | { class: 'motorbike', icon: '1F6F5.svg' },
239 | { class: 'car', icon: '1F697.svg' },
240 | { class: 'bus', icon: '1F68C.svg' } ],
241 | PATHFINDER_COLORS:
242 | [ '#1f77b4',
243 | '#ff7f0e',
244 | '#2ca02c',
245 | '#d62728',
246 | '#9467bd',
247 | '#8c564b',
248 | '#e377c2',
249 | '#7f7f7f',
250 | '#bcbd22',
251 | '#17becf' ],
252 | COUNTER_COLORS:
253 | { yellow: '#FFE700',
254 | turquoise: '#A3FFF4',
255 | green: '#a0f17f',
256 | purple: '#d070f0',
257 | red: '#AB4435' },
258 | NEURAL_NETWORK_PARAMS:
259 | { yolov3:
260 | { data: 'cfg/coco.data',
261 | cfg: 'cfg/yolov3.cfg',
262 | weights: 'yolov3.weights' },
263 | 'yolov3-tiny':
264 | { data: 'cfg/coco.data',
265 | cfg: 'cfg/yolov3-tiny.cfg',
266 | weights: 'yolov3-tiny.weights' },
267 | 'yolov2-voc':
268 | { data: 'cfg/voc.data',
269 | cfg: 'cfg/yolo-voc.cfg',
270 | weights: 'yolo-voc.weights' } },
271 | TRACKER_ACCURACY_DISPLAY:
272 | { nbFrameBuffer: 300,
273 | settings:
274 | { radius: 3.1,
275 | blur: 6.2,
276 | step: 0.1,
277 | gradient: [Object],
278 | canvasResolutionFactor: 0.1 } },
279 | MONGODB_URL: 'mongodb://127.0.0.1:27017' }
280 | layer filters size input output
281 | 0 (node:55) [DEP0001] DeprecationWarning: OutgoingMessage.flush is deprecated. Use flushHeaders instead.
282 | conv 16 3 x 3 / 1 416 x 416 x 3 -> 416 x 416 x 16 0.150 BF
283 | 1 max 2 x 2 / 2 416 x 416 x 16 -> 208 x 208 x 16 0.003 BF
284 | 2 conv 32 3 x 3 / 1 208 x 208 x 16 -> 208 x 208 x 32 0.399 BF
285 | 3 max 2 x 2 / 2 208 x 208 x 32 -> 104 x 104 x 32 0.001 BF
286 | 4 conv 64 3 x 3 / 1 104 x 104 x 32 -> 104 x 104 x 64 0.399 BF
287 | 5 max 2 x 2 / 2 104 x 104 x 64 -> 52 x 52 x 64 0.001 BF
288 | 6 conv 128 3 x 3 / 1 52 x 52 x 64 -> 52 x 52 x 128 0.399 BF
289 | 7 max 2 x 2 / 2 52 x 52 x 128 -> 26 x 26 x 128 0.000 BF
290 | 8 conv 256 3 x 3 / 1 26 x 26 x 128 -> 26 x 26 x 256 0.399 BF
291 | 9 max 2 x 2 / 2 26 x 26 x 256 -> 13 x 13 x 256 0.000 BF
292 | 10 conv 512 3 x 3 / 1 13 x 13 x 256 -> 13 x 13 x 512 0.399 BF
293 | 11 max 2 x 2 / 1 13 x 13 x 512 -> 13 x 13 x 512 0.000 BF
294 | 12 conv 1024 3 x 3 / 1 13 x 13 x 512 -> 13 x 13 x1024 1.595 BF
295 | 13 conv 256 1 x 1 / 1 13 x 13 x1024 -> 13 x 13 x 256 0.089 BF
296 | 14 conv 512 3 x 3 / 1 13 x 13 x 256 -> 13 x 13 x 512 0.399 BF
297 | 15 conv 255 1 x 1 / 1 13 x 13 x 512 -> 13 x 13 x 255 0.044 BF
298 | 16 yolo
299 | [yolo] params: iou loss: mse, iou_norm: 0.75, cls_norm: 1.00, scale_x_y: 1.00
300 | 17 route 13
301 | 18 conv 128 1 x 1 / 1 13 x 13 x 256 -> 13 x 13 x 128 0.011 BF
302 | 19 upsample 2x 13 x 13 x 128 -> 26 x 26 x 128
303 | 20 route 19 8
304 | 21 conv 256 3 x 3 / 1 26 x 26 x 384 -> 26 x 26 x 256 1.196 BF
305 | "#2ca02c",
306 | "#d62728",
307 | "#9467bd",
308 | "#8c564b",
309 | "#e377c2",
310 | "#7f7f7f",
311 | "#bcbd22",
312 | "#17becf"
313 | ],
314 | "COUNTER_COLORS": {
315 | "yellow": "#FFE700",
316 | "turquoise": "#A3FFF4",
317 | "green": "#a0f17f",
318 | "purple": "#d070f0",
319 | "red": "#AB4435"
320 | },
321 | "NEURAL_NETWORK_PARAMS": {
322 | "yolov3": {
323 | "data": "cfg/coco.data",
324 | "cfg": "cfg/yolov3.cfg",
325 | "weights": "yolov3.weights"
326 | },
327 | "yolov3-tiny": {
328 | "data": "cfg/coco.data",
329 | "cfg": "cfg/yolov3-tiny.cfg",
330 | "weights": "yolov3-tiny.weights"
331 | },
332 | "yolov2-voc": {
333 | "data": "cfg/voc.data",
334 | "cfg": "cfg/yolo-voc.cfg",
335 | "weights": "yolo-voc.weights"
336 | }
337 | },
338 | "TRACKER_ACCURACY_DISPLAY": {
339 | "nbFrameBuffer": 300,
340 | "settings": {
341 | "radius": 3.1,
342 | "blur": 6.2,
343 | "step": 0.1,
344 | "gradient": {
345 | "1": "red",
346 | "0.4": "orange"
347 | },
348 | "canvasResolutionFactor": 0.1
349 | }
350 | },
351 | "MONGODB_URL": "mongodb://127.0.0.1:27017"
352 | }
353 | -----------------------------------
354 | Process YOLO initialized
355 | 2020-01-05T10:34:44.353+0000 I NETWORK [listener] connection accepted from 127.0.0.1:40190 #1 (1 connection now open)
356 | > Ready on http://localhost:8080
357 | > Ready on http://172.17.0.2:8080
358 | 2020-01-05T10:34:44.385+0000 I NETWORK [conn1] received client metadata from 127.0.0.1:40190 conn1: { driver: { name: "nodejs", version: "3.2.5" }, os: { type: "Linux", name: "linux", architecture: "arm64", version: "4.9.140-tegra" }, platform: "Node.js v10.16.3, LE, mongodb-core: 3.2.5" }
359 | 2020-01-05T10:34:44.424+0000 I NETWORK [listener] connection accepted from 127.0.0.1:40192 #2 (2 connections now open)
360 | Success init db
361 | 2020-01-05T10:34:44.430+0000 I NETWORK [conn2] received client metadata from 127.0.0.1:40192 conn2: { driver: { name: "nodejs", version: "3.2.5" }, os: { type: "Linux", name: "linux", architecture: "arm64", version: "4.9.140-tegra" }, platform: "Node.js v10.16.3, LE, mongodb-core: 3.2.5" }
362 | (node:52) [DEP0001] DeprecationWarning: OutgoingMessage.flush is deprecated. Use flushHeaders instead.
363 | ```
364 |
365 |
--------------------------------------------------------------------------------
/producer-rpi/Dockerfile:
--------------------------------------------------------------------------------
1 | FROM fradelg/rpi-opencv
2 |
3 | MAINTAINER ajeetraina@gmail.com
4 |
5 | RUN apt-get update
6 | RUN apt-get install -y python-pip python3-pip
7 | RUN pip3 install pytz kafka-python
8 | RUN pip install virtualenv virtualenvwrapper
9 | RUN apt-get install -y git
10 | RUN git clone https://github.com/collabnix/pico
11 | WORKDIR pico/deployment/objects
12 |
--------------------------------------------------------------------------------
/raspbi/README.md:
--------------------------------------------------------------------------------
1 | # Script to be run from Raspberry Pi Boxes
2 |
--------------------------------------------------------------------------------
/raspbi/consumer-test.py:
--------------------------------------------------------------------------------
1 |
2 | # coding: utf-8
3 |
4 | # In[3]:
5 |
6 |
7 | import datetime
8 | import json
9 | from kafka import KafkaConsumer
10 | 
11 | topic = "testpico"
12 | brokers = ["35.189.130.4:9092"]
13 |
14 | consumer = KafkaConsumer(
15 | topic,
16 | bootstrap_servers=brokers,
17 | value_deserializer=lambda m: json.loads(m.decode('utf-8')))
18 |
19 |
20 | # In[4]:
21 |
22 |
23 | for msg in consumer:
24 | print(msg.value['frame'])
25 |
26 |
--------------------------------------------------------------------------------
/raspbi/consumer.py:
--------------------------------------------------------------------------------
1 | import datetime
2 | import json
3 | import base64
4 | from flask import Flask, Response, render_template
5 | from kafka import KafkaConsumer
6 |
7 | # Fire up the Kafka Consumer
8 | topic = "testpico"
9 | brokers = ["35.189.130.4:9092"]
10 |
11 | consumer = KafkaConsumer(
12 | topic,
13 | bootstrap_servers=brokers,
14 |     value_deserializer=lambda m: json.loads(m.decode('utf-8')))
15 |
16 |
17 | # Set the consumer in a Flask App
18 | app = Flask(__name__)
19 |
20 | @app.route('/')
21 | def index():
22 | return render_template('index.html')
23 |
24 | @app.route('/video_feed', methods=['GET'])
25 | def video_feed():
26 | """
27 | This is the heart of our video display. Notice we set the mimetype to
28 | multipart/x-mixed-replace. This tells Flask to replace any old images with
29 | new values streaming through the pipeline.
30 | """
31 | return Response(
32 | get_video_stream(),
33 | mimetype='multipart/x-mixed-replace; boundary=frame')
34 |
35 | def get_video_stream():
36 | """
37 |     Here is where we receive streamed images from the Kafka Server and convert
38 | them to a Flask-readable format.
39 | """
40 | for msg in consumer:
41 | yield (b'--frame\r\n'
42 | b'Content-Type: image/jpg\r\n\r\n' + base64.b64decode(msg.value['image_bytes']) + b'\r\n\r\n')
43 |
44 | if __name__ == "__main__":
45 | app.run(host='0.0.0.0', debug=True)
46 |
--------------------------------------------------------------------------------
/raspbi/producer.py:
--------------------------------------------------------------------------------
1 |
2 | # coding: utf-8
3 |
4 | # In[26]:
5 |
6 |
7 | import sys
8 | import time
9 | import cv2
10 | import json
11 | # from picamera.array import PiRGBArray
12 | # from picamera import PiCamera
13 | from kafka import KafkaProducer
14 | from kafka.errors import KafkaError
15 |
16 | import base64
17 |
18 | topic = "testpico"
19 | brokers = ["35.189.130.4:9092"]
20 |
21 | # framerate =
22 |
23 |
24 | def publish_camera():
25 | """
26 | Publish camera video stream to specified Kafka topic.
27 | Kafka Server is expected to be running on the localhost. Not partitioned.
28 | """
29 |
30 | # Start up producer
31 |
32 | # producer = KafkaProducer(bootstrap_servers=brokers)
33 | producer = KafkaProducer(bootstrap_servers=brokers,
34 | value_serializer=lambda v: json.dumps(v).encode('utf-8'))
35 |
36 |
37 | camera_data = {'camera-id':"1","position":"frontspace","image_bytes":"123","frame":"0"}
38 |
39 | camera = cv2.VideoCapture(0)
40 |
41 | i = 0
42 |
43 | try:
44 | while(True):
45 | success, frame = camera.read()
46 |
47 | ret, buffer = cv2.imencode('.jpg', frame)
48 |
49 | camera_data['image_bytes'] = base64.b64encode(buffer).decode('utf-8')
50 | camera_data['frame'] = str(i)
51 | # producer.send(topic,buffer.tobytes())
52 |
53 | producer.send(topic, camera_data)
54 |
55 | i = i + 1
56 |
57 | # Choppier stream, reduced load on processor
58 | time.sleep(5)
59 |
60 |     except Exception as e:
61 |         print(e)
62 |         print("\nExiting.")
63 |         sys.exit(1)
64 | 
65 |     finally:
66 |         # Always release the camera, even if the loop exits with an error
67 |         camera.release()
68 |         # producer.close()
69 |
70 |
71 | if __name__ == "__main__":
72 | publish_camera()
73 |
74 |
--------------------------------------------------------------------------------
/rtmp/Dockerfile:
--------------------------------------------------------------------------------
1 | FROM buildpack-deps:stretch
2 |
3 | LABEL maintainer="Ajeet S Raina "
4 |
5 | # Versions of Nginx and nginx-rtmp-module to use
6 | ENV NGINX_VERSION nginx-1.15.0
7 | ENV NGINX_RTMP_MODULE_VERSION 1.2.1
8 |
9 | # Install dependencies
10 | RUN apt-get update && \
11 | apt-get install -y ca-certificates openssl libssl-dev && \
12 | rm -rf /var/lib/apt/lists/*
13 |
14 | # Download and decompress Nginx
15 | RUN mkdir -p /tmp/build/nginx && \
16 | cd /tmp/build/nginx && \
17 | wget -O ${NGINX_VERSION}.tar.gz https://nginx.org/download/${NGINX_VERSION}.tar.gz && \
18 | tar -zxf ${NGINX_VERSION}.tar.gz
19 |
20 | # Download and decompress RTMP module
21 | RUN mkdir -p /tmp/build/nginx-rtmp-module && \
22 | cd /tmp/build/nginx-rtmp-module && \
23 | wget -O nginx-rtmp-module-${NGINX_RTMP_MODULE_VERSION}.tar.gz https://github.com/arut/nginx-rtmp-module/archive/v${NGINX_RTMP_MODULE_VERSION}.tar.gz && \
24 | tar -zxf nginx-rtmp-module-${NGINX_RTMP_MODULE_VERSION}.tar.gz && \
25 | cd nginx-rtmp-module-${NGINX_RTMP_MODULE_VERSION}
26 |
27 | # Build and install Nginx
28 | # By default everything is installed under /usr/local/nginx, so override the
29 | # paths explicitly, both for tidiness and so the binary ends up on the PATH
30 | RUN cd /tmp/build/nginx/${NGINX_VERSION} && \
31 | ./configure \
32 | --sbin-path=/usr/local/sbin/nginx \
33 | --conf-path=/etc/nginx/nginx.conf \
34 | --error-log-path=/var/log/nginx/error.log \
35 | --pid-path=/var/run/nginx/nginx.pid \
36 | --lock-path=/var/lock/nginx/nginx.lock \
37 | --http-log-path=/var/log/nginx/access.log \
38 | --http-client-body-temp-path=/tmp/nginx-client-body \
39 | --with-http_ssl_module \
40 | --with-threads \
41 | --with-ipv6 \
42 | --add-module=/tmp/build/nginx-rtmp-module/nginx-rtmp-module-${NGINX_RTMP_MODULE_VERSION} && \
43 | make -j $(getconf _NPROCESSORS_ONLN) && \
44 | make install && \
45 | mkdir /var/lock/nginx && \
46 | rm -rf /tmp/build
47 |
48 | # Forward logs to Docker
49 | RUN ln -sf /dev/stdout /var/log/nginx/access.log && \
50 | ln -sf /dev/stderr /var/log/nginx/error.log
51 |
52 | # Set up config file
53 | COPY nginx.conf /etc/nginx/nginx.conf
54 |
55 | EXPOSE 1935
56 | CMD ["nginx", "-g", "daemon off;"]
57 |
--------------------------------------------------------------------------------
/rtmp/README.md:
--------------------------------------------------------------------------------
1 | # RTMP + Nginx for Video Streaming using Docker on Jetson Nano
2 |
3 | ![Pico 2.0](images/pico2.0.jpeg)
4 |
5 | - Real-Time Messaging Protocol (RTMP) is a streaming protocol developed by Adobe, designed to deliver audio and video while maintaining low-latency connections.
6 | - It is TCP-based, which helps it keep a persistent, low-latency connection for audio and video streaming.
7 | - It carries audio, video, and data over the same connection.
8 | - To increase the amount of data that can be smoothly transmitted, streams are split into smaller fragments called packets.
9 | - RTMP also defines several virtual channels that work independently of each other for packets to be delivered on.
10 | - This means that video and audio are delivered on separate channels simultaneously.
11 | - Clients use a handshake to form a connection with an RTMP server which then allows users to stream video and audio.
12 | - RTMP live streaming generally requires a media server and a content delivery network, but by leveraging StackPath EdgeCompute you can remove the need for a CDN and drastically reduce latency and costs.
13 |
14 | ## Setup:
15 |
16 | - Attach Raspberry Pi with Camera Module
17 | - Turn Your Raspberry Pi into CCTV Camera
18 | - Run RTMP + Nginx inside Docker container on Jetson Nano
19 | - Run Yolo inside Docker container on Jetson Nano
20 |
21 | ## Turn Your Raspberry Pi into CCTV Camera
22 |
23 | Refer [this](http://collabnix.com/turn-your-raspberry-pi-into-low-cost-cctv-surveillance-camerawith-night-vision-in-5-minutes-using-docker/) link
24 |
25 |
26 | ## How to run RTMP inside Docker Container on Jetson Nano
27 |
28 | ```
29 | docker run -d -p 1935:1935 --name nginx-rtmp ajeetraina/nginx-rtmp-arm:latest
30 | ```
31 |
32 | If you want to build the Docker Image from Dockerfile, follow the below steps:
33 |
34 | ```
35 | git clone https://github.com/collabnix/pico
36 | cd pico/rtmp/
37 | docker build -t ajeetraina/nginx-rtmp-arm .
38 | ```
39 |
40 | ## Testing RTMP with OBS Studio and VLC
41 |
42 | This can be tested either on your laptop or on a Raspberry Pi (using omxplayer).
43 | 
44 | Follow the steps below if you have a Windows laptop with OBS Studio and VLC installed; a command-line alternative is sketched at the end of this page.
45 |
46 | - Open OBS Studio
47 | - Click the "Settings" button
48 | - Go to the "Stream" section
49 | - In "Stream Type" select "Custom Streaming Server"
50 | - In the "URL" field enter rtmp://HOST_IP/live, replacing HOST_IP with the IP of the host on which the container is running. For example: rtmp://192.168.0.30/live
51 | - In the "Stream key" use a "key" that will be used later in the client URL to display that specific stream. For example: test
52 | - Click the "OK" button
53 | - In the "Sources" section click the "Add" button (+), select a source (for example "Screen Capture"), and configure it as needed
54 | - Click the "Start Streaming" button
55 | - Open a VLC player (it also works in Raspberry Pi using omxplayer)
56 | - Click in the "Media" menu
57 | - Click in "Open Network Stream"
58 | - Enter the URL from above as rtmp://HOST_IP/live/STREAM_KEY, replacing HOST_IP with the IP of the host on which the container is running and STREAM_KEY with the key you created in OBS Studio. For example: rtmp://192.168.0.30/live/test
59 | - Click "Play"
60 | - Now VLC should start playing whatever you are transmitting from OBS Studio
61 |
62 |
63 |
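64 | ## Testing from the command line (optional)
65 | 
66 | If OBS is not handy, the sketch below is a hedged, minimal way to publish a clip to the same nginx-rtmp container and read it back. It assumes ffmpeg is installed on the client, that opencv-python was built with FFmpeg support, and it uses a placeholder host IP plus a sample.mp4 file; adjust both for your setup.
67 | 
68 | ```
69 | import subprocess
70 | import cv2  # opencv-python, already used elsewhere in this repo
71 | 
72 | RTMP_URL = "rtmp://192.168.0.30/live/test"    # host IP + stream key from the steps above
73 | 
74 | # Publish a sample clip to the "live" application served by nginx-rtmp.
75 | publisher = subprocess.Popen([
76 |     "ffmpeg", "-re", "-i", "sample.mp4",      # -re: read input at its native frame rate
77 |     "-c", "copy", "-f", "flv", RTMP_URL])     # RTMP expects FLV muxing
78 | 
79 | # Read the stream back, the same check VLC performs in the steps above.
80 | cap = cv2.VideoCapture(RTMP_URL)
81 | ok, frame = cap.read()
82 | print("received a frame" if ok else "no frame yet - is the publisher running?")
83 | 
84 | cap.release()
85 | publisher.terminate()
86 | ```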
--------------------------------------------------------------------------------
/rtmp/images/README.md:
--------------------------------------------------------------------------------
1 | # Images
2 |
--------------------------------------------------------------------------------
/rtmp/images/pico2.0.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/collabnix/pico/0914b9efe05be859579ab270f3b1c4c91fa8959f/rtmp/images/pico2.0.jpeg
--------------------------------------------------------------------------------
/rtmp/nginx.conf:
--------------------------------------------------------------------------------
1 | worker_processes auto;
2 | rtmp_auto_push on;
3 | events {}
4 | rtmp {
5 | server {
6 | listen 1935;
7 | listen [::]:1935 ipv6only=on;
8 |
9 | application live {
10 | live on;
11 | record off;
12 | }
13 | }
14 | }
15 |
--------------------------------------------------------------------------------
/sample/producer-consumer/Dockerfile:
--------------------------------------------------------------------------------
1 | FROM python:3.7
2 | MAINTAINER Ajeet S Raina
3 |
4 | RUN apt-get update \
5 | && apt-get install -y \
6 | build-essential \
7 | cmake \
8 | git \
9 | wget \
10 | unzip \
11 | yasm \
12 | pkg-config \
13 | libswscale-dev \
14 | libtbb2 \
15 | libtbb-dev \
16 | libjpeg-dev \
17 | libpng-dev \
18 | libtiff-dev \
19 | libavformat-dev \
20 | libpq-dev \
21 | && rm -rf /var/lib/apt/lists/*
22 |
23 | RUN pip install numpy
24 |
25 | WORKDIR /
26 | ENV OPENCV_VERSION="4.1.0"
27 | RUN wget https://github.com/opencv/opencv/archive/${OPENCV_VERSION}.zip \
28 | && unzip ${OPENCV_VERSION}.zip \
29 | && mkdir /opencv-${OPENCV_VERSION}/cmake_binary \
30 | && cd /opencv-${OPENCV_VERSION}/cmake_binary \
31 | && cmake -DBUILD_TIFF=ON \
32 | -DBUILD_opencv_java=OFF \
33 | -DWITH_CUDA=OFF \
34 | -DWITH_OPENGL=ON \
35 | -DWITH_OPENCL=ON \
36 | -DWITH_IPP=ON \
37 | -DWITH_TBB=ON \
38 | -DWITH_EIGEN=ON \
39 | -DWITH_V4L=ON \
40 | -DBUILD_TESTS=OFF \
41 | -DBUILD_PERF_TESTS=OFF \
42 | -DCMAKE_BUILD_TYPE=RELEASE \
43 | -DCMAKE_INSTALL_PREFIX=$(python3.7 -c "import sys; print(sys.prefix)") \
44 | -DPYTHON_EXECUTABLE=$(which python3.7) \
45 | -DPYTHON_INCLUDE_DIR=$(python3.7 -c "from distutils.sysconfig import get_python_inc; print(get_python_inc())") \
46 | -DPYTHON_PACKAGES_PATH=$(python3.7 -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())") \
47 | .. \
48 | && make install \
49 | && rm /${OPENCV_VERSION}.zip \
50 | && rm -r /opencv-${OPENCV_VERSION}
51 | RUN ln -s \
52 | /usr/local/python/cv2/python-3.7/cv2.cpython-37m-x86_64-linux-gnu.so \
53 | /usr/local/lib/python3.7/site-packages/cv2.so
54 | RUN git clone https://github.com/collabnix/pico \
55 | && cd pico/kafka/
56 |
--------------------------------------------------------------------------------
/testing/.ipynb_checkpoints/consumer-test-checkpoint.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [],
3 | "metadata": {},
4 | "nbformat": 4,
5 | "nbformat_minor": 2
6 | }
7 |
--------------------------------------------------------------------------------
/testing/.ipynb_checkpoints/producer-test-checkpoint.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [],
3 | "metadata": {},
4 | "nbformat": 4,
5 | "nbformat_minor": 2
6 | }
7 |
--------------------------------------------------------------------------------
/testing/consumer-test.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "code",
5 | "execution_count": 9,
6 | "metadata": {},
7 | "outputs": [
8 | {
9 | "name": "stdout",
10 | "output_type": "stream",
11 | "text": [
12 | " * Serving Flask app \"__main__\" (lazy loading)\n",
13 | " * Environment: production\n",
14 | " WARNING: This is a development server. Do not use it in a production deployment.\n",
15 | " Use a production WSGI server instead.\n",
16 | " * Debug mode: on\n"
17 | ]
18 | },
19 | {
20 | "name": "stderr",
21 | "output_type": "stream",
22 | "text": [
23 | " * Restarting with stat\n"
24 | ]
25 | },
26 | {
27 | "ename": "SystemExit",
28 | "evalue": "1",
29 | "output_type": "error",
30 | "traceback": [
31 | "An exception has occurred, use %tb to see the full traceback.\n",
32 | "\u001b[1;31mSystemExit\u001b[0m\u001b[1;31m:\u001b[0m 1\n"
33 | ]
34 | },
35 | {
36 | "name": "stderr",
37 | "output_type": "stream",
38 | "text": [
39 | "C:\\Users\\Avinash Bendigeri\\AppData\\Local\\conda\\conda\\envs\\restframework\\lib\\site-packages\\IPython\\core\\interactiveshell.py:2870: UserWarning: To exit: use 'exit', 'quit', or Ctrl-D.\n",
40 | " warn(\"To exit: use 'exit', 'quit', or Ctrl-D.\", stacklevel=1)\n"
41 | ]
42 | }
43 | ],
44 | "source": [
45 | "import datetime\n",
46 | "from flask import Flask, Response, render_template\n",
47 | "from kafka import KafkaConsumer\n",
48 | "import json\n",
49 | "import base64\n",
50 | "\n",
51 | "# Fire up the Kafka Consumer\n",
52 | "camera_topic_1 = \"camera1\"\n",
53 | "\n",
54 | "brokers = [\"35.189.130.4:9092\"]\n",
55 | "\n",
56 | "camera1 = KafkaConsumer(\n",
57 | " camera_topic_1, \n",
58 | " bootstrap_servers=brokers,\n",
59 | " value_deserializer=lambda m: json.loads(m.decode('utf-8')))\n",
60 | "\n",
61 | "# Set the consumer in a Flask App\n",
62 | "app = Flask(__name__)\n",
63 | "\n",
64 | "@app.route('/')\n",
65 | "def index():\n",
66 | " return render_template('index.html')\n",
67 | "\n",
68 | "@app.route('/camera_1', methods=['GET'])\n",
69 | "def camera_1():\n",
70 | " id=5\n",
71 | " \"\"\"\n",
72 | " This is the heart of our video display. Notice we set the mimetype to \n",
73 | " multipart/x-mixed-replace. This tells Flask to replace any old images with \n",
74 | " new values streaming through the pipeline.\n",
75 | " \"\"\"\n",
76 | " return Response(\n",
77 | " getCamera1(), \n",
78 | " mimetype='multipart/x-mixed-replace; boundary=frame')\n",
79 | "\n",
80 | "\n",
81 | "def getCamera1():\n",
82 | " \"\"\"\n",
83 | " Here is where we recieve streamed images from the Kafka Server and convert \n",
84 | " them to a Flask-readable format.\n",
85 | " \"\"\"\n",
86 | " for msg in camera1:\n",
87 | " yield (b'--frame\\r\\n'\n",
88 | " b'Content-Type: image/jpg\\r\\n\\r\\n' + base64.b64decode(msg.value['image_bytes']) + b'\\r\\n\\r\\n')\n",
89 | " \n",
90 | "if __name__ == \"__main__\":\n",
91 | " try:\n",
92 | " app.run(host='0.0.0.0', debug=True)\n",
93 | " except Exception as e:\n",
94 | " print(e)"
95 | ]
96 | },
97 | {
98 | "cell_type": "code",
99 | "execution_count": null,
100 | "metadata": {
101 | "collapsed": true
102 | },
103 | "outputs": [],
104 | "source": []
105 | }
106 | ],
107 | "metadata": {
108 | "kernelspec": {
109 | "display_name": "Python 3",
110 | "language": "python",
111 | "name": "python3"
112 | },
113 | "language_info": {
114 | "codemirror_mode": {
115 | "name": "ipython",
116 | "version": 3
117 | },
118 | "file_extension": ".py",
119 | "mimetype": "text/x-python",
120 | "name": "python",
121 | "nbconvert_exporter": "python",
122 | "pygments_lexer": "ipython3",
123 | "version": "3.6.2"
124 | }
125 | },
126 | "nbformat": 4,
127 | "nbformat_minor": 2
128 | }
129 |
--------------------------------------------------------------------------------
/testing/producer-test.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "code",
5 | "execution_count": 2,
6 | "metadata": {},
7 | "outputs": [
8 | {
9 | "ename": "SyntaxError",
10 | "evalue": "invalid syntax (, line 51)",
11 | "output_type": "error",
12 | "traceback": [
13 | "\u001b[1;36m File \u001b[1;32m\"\"\u001b[1;36m, line \u001b[1;32m51\u001b[0m\n\u001b[1;33m camera.\u001b[0m\n\u001b[1;37m ^\u001b[0m\n\u001b[1;31mSyntaxError\u001b[0m\u001b[1;31m:\u001b[0m invalid syntax\n"
14 | ]
15 | }
16 | ],
17 | "source": [
18 | "import sys\n",
19 | "import time\n",
20 | "import cv2\n",
21 | "import json\n",
22 | "import decimal\n",
23 | "\n",
24 | "\n",
25 | "import pytz\n",
26 | "from pytz import timezone\n",
27 | "import datetime\n",
28 | "\n",
29 | "\n",
30 | "from kafka import KafkaProducer\n",
31 | "from kafka.errors import KafkaError\n",
32 | "import base64 \n",
33 | "\n",
34 | "topic = \"testpico\"\n",
35 | "brokers = [\"35.189.130.4:9092\"]\n",
36 | "\n",
37 | "\n",
38 | "def convert_ts(ts, config):\n",
39 | " '''Converts a timestamp to the configured timezone. Returns a localized datetime object.'''\n",
40 | " #lambda_tz = timezone('US/Pacific')\n",
41 | " tz = timezone(config['timezone'])\n",
42 | " utc = pytz.utc\n",
43 | " \n",
44 | " utc_dt = utc.localize(datetime.datetime.utcfromtimestamp(ts))\n",
45 | "\n",
46 | " localized_dt = utc_dt.astimezone(tz)\n",
47 | "\n",
48 | " return localized_dt\n",
49 | "\n",
50 | "\n",
51 | "def publish_camera():\n",
52 | " \"\"\"\n",
53 | " Publish camera video stream to specified Kafka topic.\n",
54 | " Kafka Server is expected to be running on the localhost. Not partitioned.\n",
55 | " \"\"\"\n",
56 | "\n",
57 | " # Start up producer\n",
58 | " \n",
59 | " \n",
60 | " producer = KafkaProducer(bootstrap_servers=brokers,\n",
61 | " value_serializer=lambda v: json.dumps(v).encode('utf-8'))\n",
62 | "\n",
63 | " \n",
64 | " camera_data = {'camera_id':\"1\",\"position\":\"frontspace\",\"image_bytes\":\"123\"}\n",
65 | " \n",
66 | " camera = cv2.VideoCapture(0)\n",
67 | " \n",
68 | " camera.\n",
69 | " \n",
70 | " framecount = 0\n",
71 | " \n",
72 | " try:\n",
73 | " while(True):\n",
74 | " \n",
75 | " success, frame = camera.read()\n",
76 | " \n",
77 | " utc_dt = pytz.utc.localize(datetime.datetime.now())\n",
78 | " now_ts_utc = (utc_dt - datetime.datetime(1970, 1, 1, tzinfo=pytz.utc)).total_seconds()\n",
79 | " \n",
80 | " ret, buffer = cv2.imencode('.jpg', frame)\n",
81 | " \n",
82 | " camera_data['image_bytes'] = base64.b64encode(buffer).decode('utf-8')\n",
83 | " \n",
84 | " camera_data['frame_count'] = str(framecount)\n",
85 | " \n",
86 | " camera_data['capture_time'] = str(now_ts_utc)\n",
87 | " \n",
88 | " producer.send(topic, camera_data)\n",
89 | " \n",
90 | " framecount = framecount + 1\n",
91 | " \n",
92 | " # Choppier stream, reduced load on processor\n",
93 | " time.sleep(0.2)\n",
94 | " \n",
95 | " if framecount==20:\n",
96 | " break\n",
97 | " \n",
98 | " except Exception as e:\n",
99 | " print((e))\n",
100 | " print(\"\\nExiting.\")\n",
101 | " sys.exit(1)\n",
102 | "\n",
103 | " \n",
104 | " camera.release()\n",
105 | " producer.close()\n",
106 | "\n",
107 | "\n",
108 | "if __name__ == \"__main__\":\n",
109 | " publish_camera()\n"
110 | ]
111 | },
112 | {
113 | "cell_type": "code",
114 | "execution_count": 3,
115 | "metadata": {
116 | "collapsed": true
117 | },
118 | "outputs": [],
119 | "source": [
120 | "camera = cv2.VideoCapture(0)"
121 | ]
122 | },
123 | {
124 | "cell_type": "code",
125 | "execution_count": 4,
126 | "metadata": {},
127 | "outputs": [
128 | {
129 | "data": {
130 | "text/plain": [
131 | "True"
132 | ]
133 | },
134 | "execution_count": 4,
135 | "metadata": {},
136 | "output_type": "execute_result"
137 | }
138 | ],
139 | "source": [
140 | "camera.set(cv2.CAP_PROP_FRAME_WIDTH,3840)\n",
141 | "\n",
142 | "camera.set(cv2.CAP_PROP_FRAME_HEIGHT,2160)\n",
143 | "\n",
144 | "camera.set(cv2.CAP_PROP_FPS,30)"
145 | ]
146 | },
147 | {
148 | "cell_type": "code",
149 | "execution_count": 9,
150 | "metadata": {},
151 | "outputs": [
152 | {
153 | "data": {
154 | "text/plain": [
155 | "-1.0"
156 | ]
157 | },
158 | "execution_count": 9,
159 | "metadata": {},
160 | "output_type": "execute_result"
161 | }
162 | ],
163 | "source": [
164 | "camera.get(cv2.CAP_PROP_FRAME_COUNT)"
165 | ]
166 | },
167 | {
168 | "cell_type": "code",
169 | "execution_count": null,
170 | "metadata": {
171 | "collapsed": true
172 | },
173 | "outputs": [],
174 | "source": []
175 | }
176 | ],
177 | "metadata": {
178 | "kernelspec": {
179 | "display_name": "Python 3",
180 | "language": "python",
181 | "name": "python3"
182 | },
183 | "language_info": {
184 | "codemirror_mode": {
185 | "name": "ipython",
186 | "version": 3
187 | },
188 | "file_extension": ".py",
189 | "mimetype": "text/x-python",
190 | "name": "python",
191 | "nbconvert_exporter": "python",
192 | "pygments_lexer": "ipython3",
193 | "version": "3.6.2"
194 | }
195 | },
196 | "nbformat": 4,
197 | "nbformat_minor": 2
198 | }
199 |
--------------------------------------------------------------------------------
/testing/templates/index.html:
--------------------------------------------------------------------------------
1 |
2 |
3 | Video Streaming Demonstration
4 |
5 |
6 | Video Streaming Demonstration
7 |
8 |
9 |
10 |
11 |
12 |
13 |
14 |
15 |
16 |
--------------------------------------------------------------------------------
/workshop/README.md:
--------------------------------------------------------------------------------
1 | # Workshop on Pico - Object Detection & Analytics using Docker, Raspberry Pi & AWS Rekognition Service
2 |
3 | ![Pico Workshop](images/pico123.png)
4 |
5 | ## Docker on Raspberry Pi
6 |
7 | - [Preparing Your Raspberry Pi](https://github.com/collabnix/pico/blob/master/workshop/preparing-raspberrypi.md)
8 | - [Installing Docker on Raspberry Pi](https://github.com/collabnix/pico/blob/master/workshop/installing-docker.md)
9 | - [Turn Your Raspberry Pi into CCTV Camera using Docker container](https://github.com/collabnix/pico/blob/master/workshop/turn-your-raspberrypi-into-camera.md)
10 |
11 |
12 | ## Apache Kafka on AWS Cloud
13 |
14 | - [Setting up 2-Node Docker Swarm Cluster on AWS Cloud](https://github.com/collabnix/pico/blob/master/workshop/setting-up-docker-swarm-on-aws.md)
15 | - [Building Apache Kafka on 2-Node Docker Swarm Cluster](https://github.com/collabnix/pico/blob/master/workshop/running-kafka-on-swarm-cluster.md)
16 |
17 |
18 | ## Setting up Pico
19 |
20 | - [Running Consumer Scripts on AWS Cloud Instance](https://github.com/collabnix/pico/blob/master/workshop/running-consumer-script.md)
21 | - [Running Producer Script on Raspberry Pi](https://github.com/collabnix/pico/blob/master/workshop/running-producer-script-on-pi.md)
22 |
23 | ## Testing Object Detection
24 |
25 | - [Performing Object Detection](https://github.com/collabnix/pico/blob/master/workshop/performing-object-detection.md)
26 |
27 |
28 |
29 |
30 |
31 |
32 |
--------------------------------------------------------------------------------
/workshop/images/README.md:
--------------------------------------------------------------------------------
1 | # Images go here
2 |
--------------------------------------------------------------------------------
/workshop/images/pico123.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/collabnix/pico/0914b9efe05be859579ab270f3b1c4c91fa8959f/workshop/images/pico123.png
--------------------------------------------------------------------------------
/workshop/installing-docker.md:
--------------------------------------------------------------------------------
1 |
2 |
3 | # Installing Docker 18.09 on Raspberry Pi
4 |
5 | It's a one-liner. In the curl flags below, -L follows redirects, -s runs silently, and -S still shows errors.
6 |
7 | ```
8 | root@raspberrypi:~# curl -sSL https://get.docker.com/ | sh
9 | # Executing docker install script, commit: 40b1b76
10 | + sh -c apt-get update -qq >/dev/null
11 | + sh -c apt-get install -y -qq apt-transport-https ca-certificates curl >/dev/null
12 | + sh -c curl -fsSL "https://download.docker.com/linux/raspbian/gpg" | apt-key add -qq - >/dev/null
13 | Warning: apt-key output should not be parsed (stdout is not a terminal)
14 | + sh -c echo "deb [arch=armhf] https://download.docker.com/linux/raspbian stretch edge" > /etc/apt/sources.list.d/docker.list
15 | + sh -c apt-get update -qq >/dev/null
16 | + sh -c apt-get install -y -qq --no-install-recommends docker-ce >/dev/null
17 | + sh -c docker version
18 | Client:
19 | Version: 18.09.0
20 | API version: 1.39
21 | Go version: go1.10.4
22 | Git commit: 4d60db4
23 | Built: Wed Nov 7 00:57:21 2018
24 | OS/Arch: linux/arm
25 | Experimental: false
26 |
27 | Server: Docker Engine - Community
28 | Engine:
29 | Version: 18.09.0
30 | API version: 1.39 (minimum version 1.12)
31 | Go version: go1.10.4
32 | Git commit: 4d60db4
33 | Built: Wed Nov 7 00:17:57 2018
34 | OS/Arch: linux/arm
35 | Experimental: false
36 | If you would like to use Docker as a non-root user, you should now consider
37 | adding your user to the "docker" group with something like:
38 |
39 | sudo usermod -aG docker your-user
40 |
41 | Remember that you will have to log out and back in for this to take effect!
42 |
43 | WARNING: Adding a user to the "docker" group will grant the ability to run
44 | containers which can be used to obtain root privileges on the
45 | docker host.
46 | Refer to https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface
47 | for more information.
48 |
49 | ** DOCKER ENGINE - ENTERPRISE **
50 |
51 | If you’re ready for production workloads, Docker Engine - Enterprise also includes:
52 |
53 | * SLA-backed technical support
54 | * Extended lifecycle maintenance policy for patches and hotfixes
55 | * Access to certified ecosystem content
56 |
57 | ** Learn more at https://dockr.ly/engine2 **
58 |
59 | ACTIVATE your own engine to Docker Engine - Enterprise using:
60 |
61 | sudo docker engine activate
62 |
63 | ```
64 |
65 | ## Verifying Docker Version
66 |
67 | ```
68 | root@raspberrypi:~# docker version
69 | Client:
70 | Version: 18.09.0
71 | API version: 1.39
72 | Go version: go1.10.4
73 | Git commit: 4d60db4
74 | Built: Wed Nov 7 00:57:21 2018
75 | OS/Arch: linux/arm
76 | Experimental: false
77 |
78 | Server: Docker Engine - Community
79 | Engine:
80 | Version: 18.09.0
81 | API version: 1.39 (minimum version 1.12)
82 | Go version: go1.10.4
83 | Git commit: 4d60db4
84 | Built: Wed Nov 7 00:17:57 2018
85 | OS/Arch: linux/arm
86 | Experimental: false
87 | root@raspberrypi:~#
88 | ```
89 |
90 |
91 | ## Deploying Nginx App
92 |
93 | ```
94 | root@raspberrypi:~# docker run -d -p 80:80 nginx
95 | Unable to find image 'nginx:latest' locally
96 | latest: Pulling from library/nginx
97 | 9c38b5a8a4d5: Pull complete
98 | 1c9b1b3e1e0d: Pull complete
99 | 258951b5612f: Pull complete
100 | Digest: sha256:dd2d0ac3fff2f007d99e033b64854be0941e19a2ad51f174d9240dda20d9f534
101 | Status: Downloaded newer image for nginx:latest
102 | d812bf50d136b0f78353f0a0c763b6b08ecc5e7ce706bac8bd660cdd723e0fcd
103 | root@raspberrypi:~#
104 | ```
105 |
106 | ## Testing the Nginx App
107 |
108 | ```
109 | root@raspberrypi:~# curl localhost:80
110 |
111 |
112 |
113 | Welcome to nginx!
114 |
121 |
122 |
123 | Welcome to nginx!
124 | If you see this page, the nginx web server is successfully installed and
125 | working. Further configuration is required.
126 |
127 | For online documentation and support please refer to
128 | nginx.org .
129 | Commercial support is available at
130 | nginx.com .
131 |
132 | Thank you for using nginx.
133 |
134 |
135 | root@raspberrypi:~#
136 | ```
137 |
138 | ## Verifying Docker Info
139 |
140 | ```
141 | root@raspberrypi:~# docker info
142 | Containers: 1
143 | Running: 1
144 | Paused: 0
145 | Stopped: 0
146 | Images: 1
147 | Server Version: 18.09.0
148 | Storage Driver: overlay2
149 | Backing Filesystem: extfs
150 | Supports d_type: true
151 | Native Overlay Diff: true
152 | Logging Driver: json-file
153 | Cgroup Driver: cgroupfs
154 | Plugins:
155 | Volume: local
156 | Network: bridge host macvlan null overlay
157 | Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
158 | Swarm: inactive
159 | Runtimes: runc
160 | Default Runtime: runc
161 | Init Binary: docker-init
162 | containerd version: 9754871865f7fe2f4e74d43e2fc7ccd237edcbce
163 | runc version: 09c8266bf2fcf9519a651b04ae54c967b9ab86ec
164 | init version: fec3683
165 | Security Options:
166 | seccomp
167 | Profile: default
168 | Kernel Version: 4.14.98-v7+
169 | Operating System: Raspbian GNU/Linux 9 (stretch)
170 | OSType: linux
171 | Architecture: armv7l
172 | CPUs: 4
173 | Total Memory: 927.2MiB
174 | Name: raspberrypi
175 | ID: FEUI:RVU6:AWPZ:6P22:TSLT:FDJC:CBIB:D2NU:AQEQ:IHVH:HFRY:HYWF
176 | Docker Root Dir: /var/lib/docker
177 | Debug Mode (client): false
178 | Debug Mode (server): false
179 | Registry: https://index.docker.io/v1/
180 | Labels:
181 | Experimental: false
182 | Insecure Registries:
183 | 127.0.0.0/8
184 | Live Restore Enabled: false
185 | Product License: Community Engine
186 |
187 | WARNING: No memory limit support
188 | WARNING: No swap limit support
189 | WARNING: No kernel memory limit support
190 | WARNING: No oom kill disable support
191 | WARNING: No cpu cfs quota support
192 | WARNING: No cpu cfs period support
193 | ```
194 |
195 |
196 |
197 | ## Verifying Dockerd
198 |
199 |
200 | ```
201 | root@raspberrypi:~/hellowhale# systemctl status docker
202 | ● docker.service - Docker Application Container Engine
203 | Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: e
204 | Active: active (running) since Tue 2019-02-26 13:01:04 IST; 38min ago
205 | Docs: https://docs.docker.com
206 | Main PID: 2437 (dockerd)
207 | CPU: 1min 46.174s
208 | CGroup: /system.slice/docker.service
209 | ├─2437 /usr/bin/dockerd -H unix://
210 | ├─2705 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 8
211 | └─4186 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 8
212 |
213 | Feb 26 13:37:06 raspberrypi dockerd[2437]: time="2019-02-26T13:37:06.400368104+0
214 | Feb 26 13:37:06 raspberrypi dockerd[2437]: time="2019-02-26T13:37:06.402012958+0
215 | Feb 26 13:37:06 raspberrypi dockerd[2437]: time="2019-02-26T13:37:06.402634316+0
216 | Feb 26 13:37:06 raspberrypi dockerd[2437]: time="2019-02-26T13:37:06.403005881+0
217 | Feb 26 13:37:06 raspberrypi dockerd[2437]: time="2019-02-26T13:37:06.408358205+0
218 | Feb 26 13:37:06 raspberrypi dockerd[2437]: time="2019-02-26T13:37:06.810154786+0
219 | Feb 26 13:37:06 raspberrypi dockerd[2437]: time="2019-02-26T13:37:06.810334839+0
220 | Feb 26 13:37:06 raspberrypi dockerd[2437]: time="2019-02-26T13:37:06.811462659+0
221 | Feb 26 13:37:06 raspberrypi dockerd[2437]: time="2019-02-26T13:37:06.811768546+0
222 | Feb 26 13:37:07 raspberrypi dockerd[2437]: time="2019-02-26T13:37:07.402282796+0
223 | ```
224 |
225 |
226 | ## Verifying that an armv7 hello-world image is available
227 |
228 | ```
229 | docker run --rm mplatform/mquery hello-world
230 | Unable to find image 'mplatform/mquery:latest' locally
231 | latest: Pulling from mplatform/mquery
232 | db6020507de3: Pull complete
233 | 5107afd39b7f: Pull complete
234 | Digest: sha256:e15189e3d6fbcee8a6ad2ef04c1ec80420ab0fdcf0d70408c0e914af80dfb107
235 | Status: Downloaded newer image for mplatform/mquery:latest
236 | Image: hello-world
237 | * Manifest List: Yes
238 | * Supported platforms:
239 | - linux/amd64
240 | - linux/arm/v5
241 | - linux/arm/v7
242 | - linux/arm64
243 | - linux/386
244 | - linux/ppc64le
245 | - linux/s390x
246 | - windows/amd64:10.0.14393.2551
247 | - windows/amd64:10.0.16299.846
248 | - windows/amd64:10.0.17134.469
249 | - windows/amd64:10.0.17763.194
250 | ```
251 |
252 |
253 | ## Verifying hellowhale Image
254 |
255 | ```
256 | root@raspberrypi:~# docker run --rm mplatform/mquery ajeetraina/hellowhale
257 | Image: ajeetraina/hellowhale
258 | * Manifest List: No
259 | * Supports: amd64/linux
260 | ```
261 |
262 | ## Verifying Random Images
263 |
264 | ```
265 | root@raspberrypi:~# docker run --rm mplatform/mquery rycus86/prometheus
266 | Image: rycus86/prometheus
267 | * Manifest List: Yes
268 | * Supported platforms:
269 | - linux/amd64
270 | - linux/arm/v7
271 | - linux/arm64
272 | ```
273 |
274 | [Next >> Setting up Apache Kafka on Cloud Platform](https://github.com/collabnix/pico)
275 |
--------------------------------------------------------------------------------
/workshop/performing-object-detection.md:
--------------------------------------------------------------------------------
1 | # Performing Object Detection
2 |
3 | ## Sequence of Scripts Execution
4 |
5 | ### Pre-requisite:
6 |
7 | - Ensure that Docker Swarm is up and running on AWS Cloud
8 |
9 | ### Sequence:
10 |
11 | - First, run the image_processor.py script on the AWS instance
12 | - Then run the consumer.py script on the AWS instance
13 | - Finally, run the producer_camera.py script on the Pi
14 |
15 | Place an object in front of the camera module and watch for both text and object detection at http://<broker-ip>:5000. A quick check that the consumer UI is reachable is sketched below.
16 |
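17 | The following is a hedged sanity check, not part of the Pico scripts. It assumes the `requests` package is installed and uses the example broker IP from this repo as a placeholder; it only confirms that the consumer's Flask app is serving before you open it in a browser.
18 | 
19 | ```
20 | import requests
21 | 
22 | BROKER_IP = "35.189.130.4"   # replace with your broker/consumer host IP
23 | 
24 | resp = requests.get("http://{}:5000/".format(BROKER_IP), timeout=5)
25 | print(resp.status_code)      # 200 means the consumer UI is up
26 | ```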
--------------------------------------------------------------------------------
/workshop/preparing-raspberrypi.md:
--------------------------------------------------------------------------------
1 | # How to prepare Raspberry Pi
2 |
3 | Follow the below steps:
4 |
5 | - Flash Raspbian OS on SD card
6 |
7 | If you are on a Mac, you might need to install the Etcher tool. On Windows, install SDFormatter to format the SD card and a flashing tool such as Win32 Disk Imager to write the Raspbian image onto it. You will need an SD card reader for this.
8 |
9 |
10 | ## Booting up Raspbian OS
11 |
12 | You can power the Raspberry Pi with the same charger you use for your mobile phone. Connect it to a TV or display over HDMI and let it boot up.
13 |
14 |
15 | The default username is pi and password is raspberry.
16 |
17 |
18 | ### Enable SSH to perform remote login
19 |
20 | To log in from your laptop, the SSH service must be enabled on the Pi. You can find the Pi's IP address with the ifconfig command.
21 |
22 | ```
23 | [Captains-Bay]🚩 > ssh pi@192.168.1.5
24 | pi@192.168.1.5's password:
25 | Linux raspberrypi 4.14.98-v7+ #1200 SMP Tue Feb 12 20:27:48 GMT 2019 armv7l
26 |
27 | The programs included with the Debian GNU/Linux system are free software;
28 | the exact distribution terms for each program are described in the
29 | individual files in /usr/share/doc/*/copyright.
30 |
31 | Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
32 | permitted by applicable law.
33 | Last login: Tue Feb 26 12:30:00 2019 from 192.168.1.4
34 | pi@raspberrypi:~ $ sudo su
35 | root@raspberrypi:/home/pi# cd
36 | ```
37 |
38 | ## Verifying Raspbian OS Version
39 |
40 |
41 | ```
42 | root@raspberrypi:~# cat /etc/os-release
43 | PRETTY_NAME="Raspbian GNU/Linux 9 (stretch)"
44 | NAME="Raspbian GNU/Linux"
45 | VERSION_ID="9"
46 | VERSION="9 (stretch)"
47 | ID=raspbian
48 | ID_LIKE=debian
49 | HOME_URL="http://www.raspbian.org/"
50 | SUPPORT_URL="http://www.raspbian.org/RaspbianForums"
51 | BUG_REPORT_URL="http://www.raspbian.org/RaspbianBugs"
52 | root@raspberrypi:~#
53 | ```
54 |
55 | [Next >> Installing Docker on Raspberry Pi ](https://github.com/collabnix/pico/blob/master/workshop/installing-docker.md)
56 |
--------------------------------------------------------------------------------
/workshop/running-consumer-script.md:
--------------------------------------------------------------------------------
1 | # Running Consumer Script for Pico
2 |
3 | ## Run the Docker container below to prepare the environment for the consumer scripts
4 |
5 | ```
6 | docker run -itd -p 5000:5000 ajeetraina/opencv4-python3 bash
7 | ```
8 |
9 | ## Open up bash shell inside Docker Container
10 |
11 | ```
12 | docker exec -it <container-id> bash
13 | ```
14 |
15 | ## Remove the existing Pico directory
16 |
17 | ```
18 | rm -fr pico
19 | ```
20 |
21 | ## Cloning the fresh Repository
22 |
23 | ```
24 | git clone https://github.com/collabnix/pico
25 | ```
26 |
27 | ## Locating the right consumer scripts
28 |
29 | You will need 2 scripts - Image Processor and Consumer
30 |
31 | ```
32 | cd pico/deployment/objects/
33 | ```
34 |
35 | ## Execute Image processor Script
36 |
37 | This script lives at https://github.com/collabnix/pico/blob/master/deployment/objects/image_processor.py.
38 | Before you run it, ensure that it contains the right AWS access key and Kafka broker IP address. (An optional broker connectivity check is sketched at the end of this page.)
39 |
40 | ```
41 | python3 image_processor.py
42 | ```
43 |
44 | ## Open up new bash again
45 |
46 | ```
47 | docker exec -it <container-id> bash
48 | ```
49 |
50 | ## Execute Consumer Script
51 | 
52 | This script is placed under the https://github.com/collabnix/pico/blob/master/deployment/objects/ directory.
53 | Before you run it, ensure that it has the right broker IP address.
54 |
55 | ```
56 | python3 consumer.py
57 | ```
58 |
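59 | ## Optional: check broker connectivity first
60 | 
61 | The snippet below is a hedged sanity check, not part of the Pico scripts. It assumes kafka-python is installed in the container and uses the example broker IP from this repo; swap in your own broker IP. It simply confirms the container can reach Kafka before you start image_processor.py and consumer.py.
62 | 
63 | ```
64 | from kafka import KafkaConsumer
65 | 
66 | consumer = KafkaConsumer(bootstrap_servers=["35.189.130.4:9092"])  # your broker IP here
67 | print(consumer.topics())   # the "testpico" topic should appear in this set
68 | consumer.close()
69 | ```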
--------------------------------------------------------------------------------
/workshop/running-kafka-on-swarm-cluster.md:
--------------------------------------------------------------------------------
1 | # Running Apache Kafka on 2-Node Docker Swarm Cluster
2 |
3 | Apache Kafka is an open-source stream-processing software platform developed by LinkedIn and donated to the Apache Software Foundation. It is written in Scala and Java. The project aims to provide a unified, high-throughput, low-latency platform for handling real-time data feeds.
4 |
5 | Apache Kafka is a distributed, partitioned, and replicated publish-subscribe messaging system used to move high volumes of data, in the form of messages, from one point to another. It replicates these messages across a cluster of servers to prevent data loss and allows both online and offline message consumption. This replication makes Kafka fault-tolerant in the presence of machine failures while still supporting low-latency message delivery. In a broader sense, Kafka is a unified platform that guards against data loss and handles real-time data feeds.
6 |
7 | ## Cloning the Repository
8 |
9 | ```
10 | git clone https://github.com/collabnix/pico
11 | cd pico/kafka/
12 | ```
13 |
14 | ## Building up Kafka Application
15 |
16 | From inside the pico/kafka/ directory cloned above, deploy the stack:
20 |
21 | ```
22 | docker stack deploy -c docker-compose.yml mykafka
23 | ```
24 |
25 | By now, you should be able to access Kafka Manager at https://<manager-node-ip>:9000
26 |
27 | ## Adding a cluster
28 |
29 | - Cluster Name = pico (or whatever you want)
30 | - Cluster Zookeeper Hosts = zk-1:2181,zk-2:2181,zk-3:2181
31 | - Kafka Version = leave it at 0.9.0.1 even though we're running 1.0.0
32 | - Enable JMX Polling = enabled
33 |
34 | ## Adding a Topic
35 |
36 | Click on Topic on the top center of the Kafka Manager to create a new topic with the below details -
37 |
38 | - Topic = testpico
39 | - Partitions = 6
40 | - Replication factor = 2
41 |
42 | which gives an even spread of the topic across the three kafka nodes.
43 |
44 | While saving the settings, Kafka Manager might ask you to set a few required parameters; follow the instructions it provides. If you prefer to create the topic from code, see the sketch below.
45 |
46 |
47 |
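48 | As a hedged alternative to the Kafka Manager UI, the same topic can be created with kafka-python (assumed to be installed); the broker address below is the example IP used elsewhere in this repo, so substitute your own manager node IP.
49 | 
50 | ```
51 | from kafka.admin import KafkaAdminClient, NewTopic
52 | 
53 | admin = KafkaAdminClient(bootstrap_servers=["35.189.130.4:9092"])  # your broker IP:port
54 | 
55 | # Same settings as the UI walkthrough: 6 partitions, replication factor 2
56 | admin.create_topics([NewTopic(name="testpico", num_partitions=6, replication_factor=2)])
57 | admin.close()
58 | ```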
--------------------------------------------------------------------------------
/workshop/running-producer-script-on-pi.md:
--------------------------------------------------------------------------------
1 | # Running Producer Script on Pi
2 |
3 | ## Clone the Repository
4 |
5 | ```
6 | git clone https://github.com/collabnix/pico
7 | ```
8 |
9 | ## Locating Producer Script
10 |
11 | ```
12 | cd pico/deployment/objects/
13 | ```
14 |
15 | ## Edit the producer_camera.py script and set the correct Kafka broker IP address:
16 |
17 | ```
18 | brokers = ["35.221.213.182:9092"]
19 | ```
20 |
21 | ## Installing Dependencies
22 |
23 | ```
24 | apt install -y python-pip libatlas-base-dev libjasper-dev libqtgui4 python3-pyqt5 libqt4-test
25 | pip3 install kafka-python opencv-python pytz
26 | pip install virtualenv virtualenvwrapper numpy
27 | ```
28 |
29 | ## Execute the script
30 |
31 | ```
32 | python3 producer_camera.py
33 | ```
34 |
35 | Please note: this script should be run only after the consumer scripts (image_processor.py and consumer.py) have been started. A quick connectivity check is sketched below.
36 |
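37 | The following is a hedged connectivity test, not part of producer_camera.py. It assumes kafka-python is installed on the Pi and reuses the broker IP and topic from this walkthrough; the KafkaProducer constructor raises NoBrokersAvailable if the broker cannot be reached.
38 | 
39 | ```
40 | import json
41 | from kafka import KafkaProducer
42 | 
43 | producer = KafkaProducer(
44 |     bootstrap_servers=["35.221.213.182:9092"],            # same broker IP you edited above
45 |     value_serializer=lambda v: json.dumps(v).encode("utf-8"))
46 | 
47 | producer.send("testpico", {"camera-id": "1", "frame": "ping"})
48 | producer.flush()   # blocks until the test message is delivered
49 | producer.close()
50 | ```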
--------------------------------------------------------------------------------
/workshop/setting-up-docker-swarm-on-aws.md:
--------------------------------------------------------------------------------
1 | # How to setup Docker Swarm on AWS Cloud
2 |
3 | ## Pre-requisites:
4 |
5 | - Docker Desktop for Mac or Windows
6 | - AWS Account ( You will require t2.medium instances for this)
7 | - AWS CLI installed
8 |
9 | ## Adding Your Credentials:
10 |
11 | ```
12 | [Captains-Bay]🚩 > cat ~/.aws/credentials
13 | [default]
14 | aws_access_key_id = XXXA
15 | aws_secret_access_key = XX
16 | ```
17 |
18 | ## Verifying AWS Version
19 |
20 |
21 | ```
22 | [Captains-Bay]🚩 > aws --version
23 | aws-cli/1.11.107 Python/2.7.10 Darwin/17.7.0 botocore/1.5.70
24 | ```
25 | 
26 | ## Setting up Environment Variables
27 | ```
28 | [Captains-Bay]🚩 > export VPC=vpc-ae59f0d6
29 | [Captains-Bay]🚩 > export REGION=us-west-2a
30 | [Captains-Bay]🚩 > export SUBNET=subnet-827651c9
31 | [Captains-Bay]🚩 > export ZONE=a
32 | [Captains-Bay]🚩 > export REGION=us-west-2
33 | ```
34 |
35 | ## Building up First Node using Docker Machine
36 |
37 | ```
38 | [Captains-Bay]🚩 > docker-machine create --driver amazonec2 --amazonec2-access-key=${ACCESS_KEY_ID} --amazonec2-secret-key=${SECRET_ACCESS_KEY} --amazonec2-region=us-west-2 --amazonec2-vpc-id=vpc-ae59f0d6 --amazonec2-ami=ami-78a22900 --amazonec2-open-port 2377 --amazonec2-open-port 7946 --amazonec2-open-port 4789 --amazonec2-open-port 7946/udp --amazonec2-open-port 4789/udp --amazonec2-open-port 8080 --amazonec2-open-port 443 --amazonec2-open-port 80 --amazonec2-subnet-id=subnet-72dbdb1a --amazonec2-instance-type=t2.micro kafka-swarm-node1
39 | ```
40 |
41 | ## Listing out the Nodes
42 |
43 | ```
44 | [Captains-Bay]🚩 > docker-machine ls
45 | NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
46 | kafka-swarm-node1 - amazonec2 Running tcp://35.161.106.158:2376 v18.09.6
47 | kafka-swarm-node2 - amazonec2 Running tcp://54.201.99.75:2376 v18.09.6
48 | ```
49 |
50 | ## Initializing Docker Swarm Manager Node
51 |
52 | ```
53 | ubuntu@kafka-swarm-node1:~$ sudo docker swarm init --advertise-addr 172.31.53.71 --listen-addr 172.31.53.71:2377
54 | Swarm initialized: current node (yui9wqfu7b12hwt4ig4ribpyq) is now a manager.
55 |
56 | To add a worker to this swarm, run the following command:
57 |
58 | docker swarm join --token SWMTKN-1-xxxxxmr075to2v3k-decb975h5g5da7xxxx 172.31.53.71:2377
59 |
60 | To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
61 | ```
62 |
63 | ## Adding Worker Node
64 |
65 |
66 | ```
67 | ubuntu@kafka-swarm-node2:~$ sudo docker swarm join --token SWMTKN-1-2xjkynhin0n2zl7xxxk-decb975h5g5daxxxxxxxxn 172.31.53.71:2377
68 | This node joined a swarm as a worker.
69 | ```
70 |
71 | ## Verifying 2-Node Docker Swarm Mode Cluster
72 |
73 | ```
74 | ubuntu@kafka-swarm-node1:~$ sudo docker node ls
75 | ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
76 | yui9wqfu7b12hwt4ig4ribpyq * kafka-swarm-node1 Ready Active Leader 18.09.6
77 | vb235xtkejim1hjdnji5luuxh kafka-swarm-node2 Ready Active 18.09.6
78 | ```
79 |
80 | ## Installing Docker Compose
81 |
82 | ```
83 | curl -L https://github.com/docker/compose/releases/download/1.25.0-rc1/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
84 | % Total % Received % Xferd Average Speed Time Time Time Current
85 | Dload Upload Total Spent Left Speed
86 | 100 617 0 617 0 0 2212 0 --:--:-- --:--:-- --:--:-- 2211
87 | 100 15.5M 100 15.5M 0 0 8693k 0 0:00:01 0:00:01 --:--:-- 20.1M
88 | ```
89 |
90 | ```
91 | root@kafka-swarm-node1:/home/ubuntu/dockerlabs/solution/kafka-swarm# chmod +x /usr/local/bin/docker-compose
92 | ```
93 |
94 | ```
95 | ubuntu@kafka-swarm-node1:~/dockerlabs/solution/kafka-swarm$ sudo docker-compose version
96 | docker-compose version 1.25.0-rc1, build 8552e8e2
97 | docker-py version: 4.0.1
98 | CPython version: 3.7.3
99 | OpenSSL version: OpenSSL 1.1.0j 20 Nov 2018
100 | ```
101 |
102 |
103 |
104 |
--------------------------------------------------------------------------------
/workshop/turn-your-raspberrypi-into-camera.md:
--------------------------------------------------------------------------------
1 |
2 | # Turn Your Raspberry Pi into CCTV Camera
3 |
4 |
5 | ## Cloning the Repository:
6 |
7 |
8 | ```
9 | $ git clone https://github.com/collabnix/docker-cctv-raspbian
10 | ```
11 |
12 | ## Building Docker Image
13 |
14 | ```
15 | $ cd docker-cctv-raspbian
16 | $ docker build -t collabnix/docker-cctv-raspbian .
17 | ```
18 |
19 | ## Configuring Camera Interface
20 |
21 | Before you execute run.sh, you need to configure Camera Interface by running the below command:
22 |
23 | ```
24 | $ sudo raspi-config
25 | ```
26 |
27 | It will open a command-line UI window; choose Interfacing Options, select Camera, and enable it. Then save and exit the CLI window.
28 |
29 | ## Running the Docker container
30 |
31 | Before you execute run.sh, you will need to load the required driver "bcm2835-v4l2" to make your camera module work. If you miss this step, you will end up seeing a blank screen even though the application comes up without any issue.
32 |
33 | ```
34 | $ sudo modprobe bcm2835-v4l2
35 | ```
36 |
37 | ```
38 | $ sudo sh run.sh
39 | ```
40 |
41 | That's it. Browse to http://<raspberrypi-ip>:8082 (from your Windows laptop or MacBook) to open the CCTV cam, which starts streaming video instantly. Cool, isn't it? A small programmatic check is sketched below.
42 |
43 |
44 |
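45 | The snippet below is a hedged check, not part of docker-cctv-raspbian: it assumes opencv-python is installed on your laptop and that the CCTV stream on port 8082 is an MJPEG stream OpenCV can open; substitute your Pi's IP address.
46 | 
47 | ```
48 | import cv2
49 | 
50 | PI_IP = "192.168.1.5"   # the Pi's IP address from the earlier SSH step
51 | 
52 | cap = cv2.VideoCapture("http://{}:8082".format(PI_IP))
53 | ok, frame = cap.read()
54 | print("camera stream is up" if ok else "no frames - check modprobe and run.sh")
55 | cap.release()
56 | ```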
--------------------------------------------------------------------------------