├── CODE_OF_CONDUCT.md
├── CONTRIBUTING.md
├── Dataset_Preprocess.ipynb
├── IotSensor.py
├── LICENSE
├── Pollylambda.py
├── README.md
├── cols.txt
├── gg_discovery_api.py
├── greengrasssdk
│   ├── IoTDataPlane.py
│   ├── Lambda.py
│   ├── SecretsManager.py
│   ├── __init__.py
│   ├── client.py
│   ├── dskreadme.md
│   └── utils
│       ├── .md
│       ├── __init__.py
│       └── testing.py
├── images
│   ├── AWS_C9_Open_Terminal.png
│   ├── AWS_C9_Show_Home.png
│   ├── Cloud9IDE.png
│   ├── IOT-ML-end2end.png
│   ├── IoT-arch.png
│   ├── ML-arch.png
│   ├── Stepfunctions.png
│   └── cloudformation-launch-stack.png
├── predictive-maintenance-xgboost.ipynb
└── predictlambda.py
/CODE_OF_CONDUCT.md:
--------------------------------------------------------------------------------
1 | ## Code of Conduct
2 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct).
3 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact
4 | opensource-codeofconduct@amazon.com with any additional questions or comments.
5 |
--------------------------------------------------------------------------------
/CONTRIBUTING.md:
--------------------------------------------------------------------------------
1 | # Contributing Guidelines
2 |
3 | Thank you for your interest in contributing to our project. Whether it's a bug report, new feature, correction, or additional
4 | documentation, we greatly value feedback and contributions from our community.
5 |
6 | Please read through this document before submitting any issues or pull requests to ensure we have all the necessary
7 | information to effectively respond to your bug report or contribution.
8 |
9 |
10 | ## Reporting Bugs/Feature Requests
11 |
12 | We welcome you to use the GitHub issue tracker to report bugs or suggest features.
13 |
14 | When filing an issue, please check existing open, or recently closed, issues to make sure somebody else hasn't already
15 | reported the issue. Please try to include as much information as you can. Details like these are incredibly useful:
16 |
17 | * A reproducible test case or series of steps
18 | * The version of our code being used
19 | * Any modifications you've made relevant to the bug
20 | * Anything unusual about your environment or deployment
21 |
22 |
23 | ## Contributing via Pull Requests
24 | Contributions via pull requests are much appreciated. Before sending us a pull request, please ensure that:
25 |
26 | 1. You are working against the latest source on the *master* branch.
27 | 2. You check existing open, and recently merged, pull requests to make sure someone else hasn't addressed the problem already.
28 | 3. You open an issue to discuss any significant work - we would hate for your time to be wasted.
29 |
30 | To send us a pull request, please:
31 |
32 | 1. Fork the repository.
33 | 2. Modify the source; please focus on the specific change you are contributing. If you also reformat all the code, it will be hard for us to focus on your change.
34 | 3. Ensure local tests pass.
35 | 4. Commit to your fork using clear commit messages.
36 | 5. Send us a pull request, answering any default questions in the pull request interface.
37 | 6. Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation.
38 |
39 | GitHub provides additional document on [forking a repository](https://help.github.com/articles/fork-a-repo/) and
40 | [creating a pull request](https://help.github.com/articles/creating-a-pull-request/).
41 |
42 |
43 | ## Finding contributions to work on
44 | Looking at the existing issues is a great way to find something to contribute on. As our projects, by default, use the default GitHub issue labels (enhancement/bug/duplicate/help wanted/invalid/question/wontfix), looking at any 'help wanted' issues is a great place to start.
45 |
46 |
47 | ## Code of Conduct
48 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct).
49 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact
50 | opensource-codeofconduct@amazon.com with any additional questions or comments.
51 |
52 |
53 | ## Security issue notifications
54 | If you discover a potential security issue in this project we ask that you notify AWS/Amazon Security via our [vulnerability reporting page](http://aws.amazon.com/security/vulnerability-reporting/). Please do **not** create a public github issue.
55 |
56 |
57 | ## Licensing
58 |
59 | See the [LICENSE](LICENSE) file for our project's licensing. We will ask you to confirm the licensing of your contribution.
60 |
61 | We may ask you to sign a [Contributor License Agreement (CLA)](http://en.wikipedia.org/wiki/Contributor_License_Agreement) for larger changes.
62 |
--------------------------------------------------------------------------------
/Dataset_Preprocess.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "# PreProcess APS dataset\n",
8 | "\n",
9 | "#### In this notebook, we first download the data from UCI and preprocess it so we can build a machine learning model. \n",
10 | "#### We then store this data in training and testing folders."
11 | ]
12 | },
13 | {
14 | "cell_type": "markdown",
15 | "metadata": {},
16 | "source": [
17 | "## Dataset Description:\n",
18 | "\n",
19 | "The dataset we use here for predictive maintenance comes from the UCI Machine Learning Repository and consists of Air Pressure System (APS) failures recorded on Scania trucks. Read more about the dataset here: https://archive.ics.uci.edu/ml/datasets/APS+Failure+at+Scania+Trucks\n",
20 | "\n",
21 | "The positive class consists of failures attributed to the APS, and the negative class consists of failures in some other system. The goal is to identify APS failures correctly so that a downstream predictive maintenance action can be taken on the system once the origin of the failure has been identified.\n",
22 | "\n",
23 | "This is a typical use case in predictive maintenance (PdM): a first model identifies the root cause of the failure. Once this is identified, a second system estimates how much time remains until a failure occurs, which in turn informs the actions needed to avoid it. Predictive maintenance, like most machine learning problems, can be multifaceted."
24 | ]
25 | },
26 | {
27 | "cell_type": "markdown",
28 | "metadata": {},
29 | "source": [
30 | "### Import Libraries"
31 | ]
32 | },
33 | {
34 | "cell_type": "code",
35 | "execution_count": 3,
36 | "metadata": {},
37 | "outputs": [],
38 | "source": [
39 | "import pandas as pd\n",
40 | "import numpy as np"
41 | ]
42 | },
43 | {
44 | "cell_type": "markdown",
45 | "metadata": {},
46 | "source": [
47 | "#### Download the data"
48 | ]
49 | },
50 | {
51 | "cell_type": "code",
52 | "execution_count": 4,
53 | "metadata": {},
54 | "outputs": [
55 | {
56 | "name": "stdout",
57 | "output_type": "stream",
58 | "text": [
59 | " % Total % Received % Xferd Average Speed Time Time Time Current\n",
60 | " Dload Upload Total Spent Left Speed\n",
61 | "100 42.5M 100 42.5M 0 0 14.7M 0 0:00:02 0:00:02 --:--:-- 14.7M\n"
62 | ]
63 | }
64 | ],
65 | "source": [
66 | "! curl --insecure https://archive.ics.uci.edu/ml/machine-learning-databases/00421/aps_failure_training_set.csv --output aps_failure_training_set.csv"
67 | ]
68 | },
69 | {
70 | "cell_type": "code",
71 | "execution_count": 5,
72 | "metadata": {},
73 | "outputs": [],
74 | "source": [
75 | "df = pd.read_csv('aps_failure_training_set.csv', sep=' ', encoding = 'utf-8', header=None)"
76 | ]
77 | },
78 | {
79 | "cell_type": "code",
80 | "execution_count": 7,
81 | "metadata": {},
82 | "outputs": [
83 | {
84 | "data": {
364 | "text/plain": [
365 | " 0 1 2 \\\n",
366 | "0 This file is \n",
367 | "1 Copyright (c) <2016> \n",
368 | "2 This program (APS \n",
369 | "3 free software: you \n",
370 | "4 it under the \n",
371 | "5 the Free Software \n",
372 | "6 (at your option) \n",
373 | "7 This program is \n",
374 | "8 but WITHOUT ANY \n",
375 | "9 MERCHANTABILITY or FITNESS \n",
376 | "10 GNU General Public \n",
377 | "11 You should have \n",
378 | "12 along with this \n",
379 | "13 ----------------------------------------------... NaN NaN \n",
380 | "14 class,aa_000,ab_000,ac_000,ad_000,ae_000,af_00... NaN NaN \n",
381 | "\n",
382 | " 3 4 5 6 7 \\\n",
383 | "0 part of APS Failure and \n",
384 | "1 NaN NaN \n",
385 | "2 Failure and Operational Data for \n",
386 | "3 can redistribute it and/or modify \n",
387 | "4 terms of the GNU General \n",
388 | "5 Foundation, either version 3 of \n",
389 | "6 any later version. NaN NaN \n",
390 | "7 distributed in the hope that \n",
391 | "8 WARRANTY; without even the implied \n",
392 | "9 FOR A PARTICULAR PURPOSE. NaN \n",
393 | "10 License for more details. NaN \n",
394 | "11 received a copy of the \n",
395 | "12 program. NaN If not, see \n",
396 | "13 NaN NaN NaN NaN NaN \n",
397 | "14 NaN NaN NaN NaN NaN \n",
398 | "\n",
399 | " 8 9 10 11 12 \n",
400 | "0 Operational Data for Scania Trucks. \n",
401 | "1 NaN NaN NaN NaN NaN \n",
402 | "2 Scania Trucks) is NaN NaN \n",
403 | "3 NaN NaN NaN NaN NaN \n",
404 | "4 Public License as published by \n",
405 | "5 the License, or NaN NaN \n",
406 | "6 NaN NaN NaN NaN NaN \n",
407 | "7 it will be useful, NaN \n",
408 | "8 warranty of NaN NaN NaN \n",
409 | "9 See the NaN NaN NaN \n",
410 | "10 NaN NaN NaN NaN NaN \n",
411 | "11 GNU General Public License NaN \n",
412 | "12 . NaN NaN NaN NaN \n",
413 | "13 NaN NaN NaN NaN NaN \n",
414 | "14 NaN NaN NaN NaN NaN "
415 | ]
416 | },
417 | "execution_count": 7,
418 | "metadata": {},
419 | "output_type": "execute_result"
420 | }
421 | ],
422 | "source": [
423 | "df.head(15)"
424 | ]
425 | },
426 | {
427 | "cell_type": "markdown",
428 | "metadata": {},
429 | "source": [
430 | "Notice that the original dataset requires some preprocessing to get it into a suitable format for machine learning. Run the function below to get a preprocessed dataset."
431 | ]
432 | },
433 | {
434 | "cell_type": "code",
435 | "execution_count": 13,
436 | "metadata": {},
437 | "outputs": [],
438 | "source": [
439 | "def preprocessdataset(df):\n",
440 | " ''' Preprocess the input dataset for Machine learning training'''\n",
441 | " \n",
442 | " import os\n",
443 | " try:\n",
444 | " os.makedirs('training_data')\n",
445 | " except Exception as e:\n",
446 | " print(\"directory already exists\")\n",
447 | " \n",
448 | " try:\n",
449 | " os.makedirs('test_data')\n",
450 | " except Exception as e:\n",
451 | " print(\"directory already exists\")\n",
452 | " \n",
453 | " print(\"Start Preprocessing ...\")\n",
456 | " newdf = [df[0][row].split(',') for row in range(15 ,60015)]\n",
457 | " newdf = pd.DataFrame.from_records(newdf)\n",
458 | " newdf.columns = df[0][14].split(',')\n",
459 | " \n",
460 | " print(\"Dropping last 2 columns...\")\n",
461 | " newdf = newdf.drop(columns = ['ef_000', 'eg_000'])\n",
462 | " \n",
463 | " print(\"Shape of the entire dataset ={}\".format(newdf.shape))\n",
464 | " \n",
465 | " print(\"Convert the class categorical label to numerical values for prediction\")\n",
466 | " newdf = newdf.replace({'class': {'neg': 0, 'pos':1}})\n",
467 | " newdf=newdf.replace('na',0)\n",
468 | "\n",
469 | " print(\"Changing data types to numeric...\")\n",
470 | " newdf = newdf.apply(pd.to_numeric)\n",
471 | " \n",
472 | " print(\"Splitting the data into train and test...\")\n",
473 | " \n",
474 | " from sklearn.model_selection import train_test_split\n",
475 | " X_train, X_test = train_test_split(newdf, test_size=0.2, random_state = 1234)\n",
476 | " \n",
477 | " print(\"Saving the data locally in train/test folders...\")\n",
478 | " X_train.to_csv('training_data/train.csv', index = False, header = None)\n",
479 | " X_test.to_csv('test_data/test.csv', index=False, header=None)\n",
480 | " newdf.to_csv('rawdataset.csv', index=False, header=None)\n",
481 | " print(\"Shape of Training data = {}\".format(X_train.shape))\n",
482 | " print(\"Shape of Test data = {}\".format(X_test.shape))\n",
483 | " print(\"Success!\")"
484 | ]
485 | },
486 | {
487 | "cell_type": "code",
488 | "execution_count": 14,
489 | "metadata": {},
490 | "outputs": [
491 | {
492 | "name": "stdout",
493 | "output_type": "stream",
494 | "text": [
495 | "CPU times: user 2 µs, sys: 0 ns, total: 2 µs\n",
496 | "Wall time: 4.53 µs\n",
497 | "directory already exists\n",
498 | "directory already exists\n",
499 | "Start Preprocessing ...\n",
500 | "Dropping last 2 columns...\n",
501 | "Shape of the entire dataset =(60000, 169)\n",
502 | "Convert the class categorical label to numerical values for prediction\n",
503 | "Changing data types to numeric...\n",
504 | "Splitting the data into train and test...\n",
505 | "Saving the data locally in train/test folders...\n",
506 | "Shape of Training data = (48000, 169)\n",
507 | "Shape of Test data = (12000, 169)\n",
508 | "Success!\n"
509 | ]
510 | }
511 | ],
512 | "source": [
513 | "%time\n",
514 | "preprocessdataset(df)"
515 | ]
516 | },
517 | {
518 | "cell_type": "markdown",
519 | "metadata": {},
520 | "source": [
521 | "Now go to \"predictive-maintenance-xgboost.ipynb\" and run the code cells to train your custom XGBoost model using the SageMaker built-in XGBoost algorithm for predictive maintenance."
522 | ]
523 | }
524 | ],
525 | "metadata": {
526 | "kernelspec": {
527 | "display_name": "conda_python3",
528 | "language": "python",
529 | "name": "conda_python3"
530 | },
531 | "language_info": {
532 | "codemirror_mode": {
533 | "name": "ipython",
534 | "version": 3
535 | },
536 | "file_extension": ".py",
537 | "mimetype": "text/x-python",
538 | "name": "python",
539 | "nbconvert_exporter": "python",
540 | "pygments_lexer": "ipython3",
541 | "version": "3.6.5"
542 | }
543 | },
544 | "nbformat": 4,
545 | "nbformat_minor": 2
546 | }
547 |
--------------------------------------------------------------------------------
/IotSensor.py:
--------------------------------------------------------------------------------
1 | #
2 | # Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 | #
4 | # This Greengrass example simulates an IoT Sensor sending data to Greengrass at a fixed interval.
5 | # In addition the IoT device also sends a message to update the thing shadow.
6 |
7 | # Please refer to the AWS Greengrass Getting Started Guide, Module 5 for more information.
8 | #
9 |
10 | from AWSIoTPythonSDK.MQTTLib import AWSIoTMQTTShadowClient, AWSIoTMQTTClient
11 | import sys
12 | import logging
13 | import time
14 | import json
15 | import argparse
16 | import os
17 | import re
18 | from itertools import cycle
19 | import random
20 | from gg_discovery_api import GGDiscovery
21 |
22 |
23 | from AWSIoTPythonSDK.core.greengrass.discovery.providers import DiscoveryInfoProvider
24 | from AWSIoTPythonSDK.core.protocol.connection.cores import ProgressiveBackOffCore
25 | from AWSIoTPythonSDK.exception.AWSIoTExceptions import DiscoveryInvalidRequestException
26 |
27 | MAX_DISCOVERY_RETRIES = 10 # MAX tries at discovery before giving up
28 | GROUP_PATH = "./groupCA/" # directory storing discovery info
29 | CA_NAME = "root-ca.crt" # stores GGC CA cert
30 | GGC_ADDR_NAME = "ggc-host" # stores GGC host address
31 |
32 |
33 |
34 | # Custom Shadow callback for updating the desired state in the shadow
35 | def customShadowCallback_Update(payload, responseStatus, token):
36 | # payload is a JSON string ready to be parsed using json.loads(...)
37 | # in both Py2.x and Py3.x
38 | if responseStatus == "timeout":
39 | print("Update request " + token + " time out!")
40 | if responseStatus == "accepted":
41 | payloadDict = json.loads(payload)
42 | print("~~~~~~~~~~Shadow Update Accepted~~~~~~~~~~~~~")
43 | print("Update request with token: " + token + " accepted!")
44 | print("property: " + str(payloadDict["state"]["desired"]["property"]))
45 | print("~~~~~~~~~~~~~~~~~~~~~~~\n\n")
46 | shadow_update_topic = '$aws/things/' + clientId + '/shadow/update'
47 | logger.info("reporting state to shadow: " + shadow_update_topic)
48 | myAWSIoTMQTTClient.publish(shadow_update_topic, json.dumps(str(payloadDict["state"]["desired"]["property"]), indent=4), 0)
49 | if responseStatus == "rejected":
50 | print("Update request " + token + " rejected!")
51 |
52 | # basic regex check to see if a value looks like an IPv4 address
53 | def isIpAddress(value):
54 |     match = re.match(r'^\d{1,3}(\.\d{1,3}){3}$', value)
55 |     if match:
56 |         return True
57 |     return False
58 |
59 | def customCallback(client, userdata, message):
60 | print("Received a new message: ")
61 | print(message.payload)
62 | print("from topic: ")
63 | print(message.topic)
64 | print("--------------\n\n")
65 |
66 | AllowedActions = ['both', 'publish', 'subscribe']
67 |
68 | # Read in command-line parameters
69 | parser = argparse.ArgumentParser()
70 | parser.add_argument("-e", "--endpoint", action="store", required=True, dest="host", help="Your AWS IoT custom endpoint")
71 | parser.add_argument("-r", "--rootCA", action="store", required=True, dest="rootCAPath", help="Root CA file path")
72 | parser.add_argument("-c", "--cert", action="store", dest="certificatePath", help="Certificate file path")
73 | parser.add_argument("-k", "--key", action="store", dest="privateKeyPath", help="Private key file path")
74 | parser.add_argument("-n", "--thingName", action="store", dest="thingName", default="Bot", help="Targeted thing name")
75 | parser.add_argument("-id", "--clientId", action="store", dest="clientId", default="Iot-Sensor",
76 | help="Targeted client id")
77 | parser.add_argument("-t", "--topic", action="store", dest="topic", default="sensor/test/python", help="Targeted topic")
78 | parser.add_argument("-p", "--port", action="store", dest="port", type=int, help="Port number override")
79 | parser.add_argument("-w", "--websocket", action="store_true", dest="useWebsocket", default=False,
80 | help="Use MQTT over WebSocket")
81 | parser.add_argument("-m", "--mode", action="store", dest="mode", default="both",
82 | help="Operation modes: %s"%str(AllowedActions))
83 | parser.add_argument("--connect-to", action="store", dest="connectTo", default="greengrass", help="Where to connect to. Can be either awsiot or greengrass")
84 |
85 |
86 | args = parser.parse_args()
87 | host = args.host
88 | iotCAPath = args.rootCAPath
89 | certificatePath = args.certificatePath
90 | privateKeyPath = args.privateKeyPath
91 | thingName = args.thingName
92 | clientId = args.clientId
93 | port = args.port
94 | useWebsocket = args.useWebsocket
95 | topic = args.topic
96 | connectTo = args.connectTo
97 | coreCAFile = "core-CAs.crt"
98 |
99 |
100 | # Configure logging
101 | logger = logging.getLogger("AWSIoTPythonSDK.core")
102 | logger.setLevel(logging.INFO) # set to logging.DEBUG for additional logging
103 | streamHandler = logging.StreamHandler()
104 | formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
105 | streamHandler.setFormatter(formatter)
106 | logger.addHandler(streamHandler)
107 |
108 |
109 | rootCAPath = iotCAPath
110 | if args.useWebsocket and not args.port: # When no port override for WebSocket, default to 443
111 | port = 443
112 | if not args.useWebsocket and not args.port: # When no port override for non-WebSocket, default to 8883
113 | port = 8883
114 |
115 |
116 | if connectTo == "greengrass":
117 | CAFile = coreCAFile
118 | logger.info("connecting to GREENGRASS: starting discover")
119 | print("starting discover")
120 | discovery = GGDiscovery(clientId, host, 8443, rootCAPath, certificatePath, privateKeyPath)
121 |
122 | myAWSIoTMQTTClient = None
123 | if useWebsocket:
124 | myAWSIoTMQTTClient = AWSIoTMQTTClient(clientId, useWebsocket=True)
125 | myAWSIoTMQTTClient.configureEndpoint(host, port)
126 | myAWSIoTMQTTClient.configureCredentials(rootCAPath)
127 | else:
128 | myAWSIoTMQTTClient = AWSIoTMQTTClient(clientId)
129 | myAWSIoTMQTTClient.configureEndpoint(host, port)
130 | myAWSIoTMQTTClient.configureCredentials(rootCAPath, privateKeyPath, certificatePath)
131 |
132 | # AWSIoTMQTTClient connection configuration
133 | myAWSIoTMQTTClient.configureAutoReconnectBackoffTime(1, 32, 20)
134 | myAWSIoTMQTTClient.configureOfflinePublishQueueing(-1) # Infinite offline Publish queueing
135 | myAWSIoTMQTTClient.configureDrainingFrequency(2) # Draining: 2 Hz
136 | myAWSIoTMQTTClient.configureConnectDisconnectTimeout(10) # 10 sec
137 | myAWSIoTMQTTClient.configureMQTTOperationTimeout(5) # 5 sec
138 |
139 |
140 | myAWSIoTMQTTClient.connect()
141 | if args.mode == 'both' or args.mode == 'subscribe':
142 | myAWSIoTMQTTClient.subscribe(topic, 1, customCallback)
143 | time.sleep(2)
144 |
145 |
146 | #myAWSIoTMQTTShadowClient = AWSIoTMQTTShadowClient(clientId)
147 | #myAWSIoTMQTTShadowClient.configureEndpoint(host, 8883)
148 | #myAWSIoTMQTTShadowClient.configureCredentials(rootCAPath, privateKeyPath, certificatePath)
149 |
150 | # AWSIoTMQTTShadowClient configuration
151 | #myAWSIoTMQTTShadowClient.configureAutoReconnectBackoffTime(1, 32, 20)
152 | #myAWSIoTMQTTShadowClient.configureConnectDisconnectTimeout(10) # 10 sec
153 | #myAWSIoTMQTTShadowClient.configureMQTTOperationTimeout(5) # 5 sec
154 |
155 | # Connect to AWS IoT
156 | #myAWSIoTMQTTShadowClient.connect()
157 | #deviceShadowHandler = myAWSIoTMQTTShadowClient.createShadowHandlerWithName(thingName, True)
158 |
159 | # This loop simulates an IoT sensor generating a random number corresponding to the reading of a piece of equipment.
160 | # This data will be fed into a lambda function which will generate a response after invoking an ML model.
161 | shadow_topics = '$aws/things/' + clientId + '/shadow/update'
162 | loopCount = 0
163 | do = True
164 | while do:
165 | JSONPayload = '{"state":{"desired":{"property":' + '"' + str(random.random()) + '"}}}'
166 | print(JSONPayload)
167 | myAWSIoTMQTTClient.publish(topic, JSONPayload, 1)
168 | logger.info("subscribe and set sdwCallback: topic: " + shadow_topics)
169 | myAWSIoTMQTTClient.subscribe(shadow_topics, 0, customShadowCallback_Update)
170 | logger.info("reporting state to shadow: " + shadow_topics)
171 | myAWSIoTMQTTClient.publish(shadow_topics, JSONPayload, 0)
172 | # myAWSIoTMQTTClient.shadowUpdate(JSONPayload, customShadowCallback_Update, 5)
173 | loopCount += 1
174 | time.sleep(20)
175 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
2 |
3 | Permission is hereby granted, free of charge, to any person obtaining a copy of
4 | this software and associated documentation files (the "Software"), to deal in
5 | the Software without restriction, including without limitation the rights to
6 | use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
7 | the Software, and to permit persons to whom the Software is furnished to do so.
8 |
9 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
10 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
11 | FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
12 | COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
13 | IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
14 | CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
15 |
16 |
--------------------------------------------------------------------------------
/Pollylambda.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 | # -*- coding: utf-8 -*-
3 | """
4 | Created on Wed Sep 18 22:42:10 2019
5 |
6 | @author: stenatu
7 |
8 | Lambda function triggered whenever anything is published to the SNS topic. Once
9 | triggered, it calls the Amazon Polly speech synthesis API to generate an mp3
10 | file, which is uploaded to S3.
11 | The file can subsequently be downloaded to the on-prem factory servers for
12 | playing over a PA system.
13 |
14 | """
15 |
16 | import boto3
17 | import os
18 | import logging
19 | import uuid
20 | from contextlib import closing
21 |
22 | logger = logging.getLogger(__name__)
23 |
24 | def lambda_handler(event, context):
25 | logger.setLevel(logging.DEBUG)
26 | logger.debug("Event is --- %s" %event)
27 | #pull out the message
28 | speak = event["Records"][0]["Sns"]["Message"] #extracts the message from SNS topic
29 | logger.debug(speak)
30 |
31 |     # Convert the text of the SNS message into an mp3 audio file
32 |     # by calling the Polly API.
33 | 
34 |     polly = boto3.client('polly')
35 |     response = polly.synthesize_speech(OutputFormat='mp3',
36 |                                        Text='ALERT! ' + speak,  # synthesize the alert using Polly
37 |                                        SampleRate='22050',  # TODO: experiment with different sample rates
38 |                                        VoiceId=os.environ['VoiceId'])  # TODO: experiment with different voice Ids
39 | 
40 | logger.debug("Polly Response is-- %s" %response)
41 | id = str(uuid.uuid4())
42 | logger.debug("ID= %s" %id)
43 |
44 | if "AudioStream" in response:
45 | with closing(response["AudioStream"]) as stream:
46 | filename=id + ".mp3"
47 | output = os.path.join("/tmp/",filename)
48 | with open(output, "wb") as file:
49 | file.write(stream.read())
50 |
51 | s3 = boto3.client('s3')
52 | s3upload_response = s3.upload_file('/tmp/' + filename, os.environ['BUCKET_NAME'],filename,ExtraArgs={"ContentType": "audio/mp3"})
53 | logger.debug("S3 UPLOAD RESPONSE IS--- %s" %s3upload_response)
54 |
55 |
56 |     location = s3.get_bucket_location(Bucket=os.environ['BUCKET_NAME'])
57 |     logger.debug("Location response is -- %s" % location)
58 |     region = location['LocationConstraint']
59 | 
60 |     # Buckets in us-east-1 report a null LocationConstraint
61 |     if region is None:
62 |         url_beginning = "https://s3.amazonaws.com/"
63 |     else:
64 |         url_beginning = "https://s3-" + region + ".amazonaws.com/"
65 | 
66 |     url = url_beginning + os.environ['BUCKET_NAME'] + "/" + filename
67 |     print(url)
68 |
69 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Predictive Maintenance workshop using AWS IoT and AI/ML services
2 |
3 | Predictive maintenance techniques are designed to monitor the condition of equipment in industrial or home environments, such as factory machines, pumps and compressors, or oil rigs, and to determine whether the equipment is in need of maintenance, and if so, when. As opposed to routine scheduled maintenance, predictive maintenance has the potential to avoid unexpected downtime arising from issues that go uncaught between maintenance windows; this is often a major cost concern, particularly when the equipment in question is mission critical. Predictive maintenance can also avoid costly repairs when they are not required, and as such can inform when the next scheduled maintenance should occur.
4 |
5 |
6 | In this workshop, you will apply machine learning to a predictive maintenance use case. Imagine that you are in charge of running some equipment in a factory. The equipment is monitored by sensors which generate regular signals about its condition and health. Based on the signals, you want to predict whether the equipment is in need of maintenance. One of the major challenges in your environment is the lack of consistent internet connectivity, so any solution you deploy needs to function even in the absence of a connection to the cloud. Following the instructions in this workshop, you will use AWS IoT and AI/ML capabilities to arrive at a potential proof-of-concept (PoC) solution.
7 |
8 | 1. [Solution Overview](#1-solution-overview)
9 | 2. [Prerequisites](#2-prerequisites)
10 | 3. [Architecture](#3-architecture)
11 | 4. [Getting Started: Deploy your Cloud Formation template](#4-getting-started-deploy-your-cloud-formation-template)
12 | 5. [Install Greengrass, register IoT thing and connect to Greengrass](#5-install-greengrass-register-iot-thing-and-connect-to-greengrass)
13 |
14 | 5.1 [Provision the Greengrass group and core](#51-provision-the-greengrass-group-and-core)
15 |
16 | 5.2 [Register an IoT Thing with AWS IoT](#52-register-an-iot-thing-with-aws-iot)
17 |
18 | 5.3 [Register IoT Device with AWS Greengrass](#53-register-iot-device-with-aws-greengrass)
19 |
20 | 5.4 [Set up the IoT sensor](#54-set-up-the-iot-sensor)
21 |
22 | 6. [Explore data, build, train and deploy a model in Amazon SageMaker](#6-explore-data-build-train-and-deploy-a-model-in-amazon-sagemaker)
23 | 7. [Deploy the predictive-maintenance-advanced Lambda](#7-deploy-the-predictive-maintenance-advanced-lambda)
24 |
25 | 7.1 [Create Lambda function to deploy to Greengrass Core](#71-create-lambda-function-to-deploy-to-greengrass-core)
26 |
27 | 7.2 [Create a SNS topic](#72-create-a-sns-topic)
28 |
29 | 7.3 [Deploy the Lambda function locally to Greengrass Core](#73-deploy-the-lambda-function-locally-to-greengrass-core)
30 |
31 | 8. [Create Polly Lambda function](#8-create-polly-lambda)
32 | 9. [Configure Lambda function to read data from sensors](#9-configure-lambda-function-to-read-data-from-sensors)
33 | 10. [Configure Lambda function to send prediction to AWS IoT and deploy the solution](#10-configure-lambda-function-to-send-prediction-to-aws-iot-and-deploy-the-solution)
34 |
35 | 10.1 [Configure Lambda function](#101-configure-lambda-function)
36 |
37 | 10.2 [Deploy lambda function to Greengrass Core](#102-deploy-lambda-function-to-greengrass-core)
38 |
39 | 10.3 [Troubleshooting](#103-troubleshooting)
40 |
41 | 10.4 [Trigger Polly](#104-trigger-polly)
42 |
43 | ## 1. Solution overview
44 |
45 | You will start by collecting data generated by the sensors. A local Lambda function deployed on the factory floor will make API calls to the machine learning model you trained on the AWS Cloud, which is also deployed locally at the factory (more on why you want to do this later). The Lambda function will notify the IoT cloud whether the part is faulty or not. If the part is not faulty, no further action is taken. If a faulty part is found, the Lambda function will publish a message to an Amazon SNS topic of your choice; a sketch of this step follows. A second Lambda function, listening on this topic, will automatically be triggered. This Lambda function will call the Amazon Polly API to convert the body of the notification to speech. The speech file will be asynchronously generated and saved in Amazon S3. You can download this file and play it on your factory floor to let the floor manager know that there is an issue with a part.
46 |
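| To make this flow concrete, here is a minimal, hypothetical sketch of the notification step. The real logic lives in predictlambda.py; the label convention and message text here are illustrative assumptions:
| 
| ```python
| import boto3
| 
| sns = boto3.client('sns')
| 
| def notify_if_faulty(prediction, topic_arn):
|     """Publish an alert to the SNS topic when the model flags a faulty part."""
|     if prediction == 1:  # assumed convention: 1 = faulty, 0 = healthy
|         sns.publish(TopicArn=topic_arn,
|                     Message='ALERT: sensor reading indicates a faulty part.')
| ```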
47 | **The following AWS Services are leveraged:**
48 |
49 | AWS Greengrass
50 |
51 | AWS IoT
52 |
53 | Amazon S3
54 |
55 | Amazon SageMaker
56 |
57 | Amazon SNS
58 |
59 | Amazon Polly
60 |
61 | AWS Cloud9
62 |
63 | AWS CloudFormation
64 |
65 | Amazon EC2
66 |
67 | **Key takeaways of this workshop are the following:**
68 |
69 | 1) Casting the use case into a supervised learning problem.
70 | 
71 | 2) An understanding of some of the challenges when using ML for predictive maintenance.
72 | 
73 | 3) Deploying an architecture that leverages AWS ML and AI services, and learning how they interact with IoT services such as AWS Greengrass to perform predictive maintenance on-premises.
74 | 
75 | 4) Next steps towards taking this architecture to an end-to-end cloud solution.
76 |
77 | ## 2. Prerequisites
78 |
79 | 1) AWS Account
80 | 2) Laptop
81 | 3) Browser
82 | 4) Basic Linux/Python knowledge
83 |
84 | ## 3. Architecture
85 |
86 | The architecture for this workshop comprises two parts:
87 |
88 | ### Machine Learning:
89 |
90 | We will assume that we have a simple S3 data lake to which we upload the training data used to train the model. The machine learning model will be trained in Amazon SageMaker and the model artifacts will be deployed to the Greengrass core via S3.
91 |
92 | 
93 |
94 | ### IoT Architecture:
95 |
96 | Once the model is deployed, you will build the following architecture, which links the machine learning model to AWS IoT and leverages Amazon Polly to convert text to speech, generating mp3 files that can be played over the PA system on the factory floor.
97 |
98 | 
99 |
100 |
101 | ## 4. Getting Started: Deploy your Cloud Formation template
102 |
103 | Architecturally, this workshop comprises two parts: your factory environment (FE) and the AWS cloud.
104 | 
105 | The FE is where your equipment lives, along with the sensors monitoring its condition. To keep track of the sensor data, the sensors need to send it to the AWS Cloud. The service that accomplishes this is AWS IoT Greengrass. Greengrass is software that can be installed on your local factory servers and allows you to send sensor data and messages to and from the cloud using secure MQTT messaging. For more on the Greengrass service, please check out: https://aws.amazon.com/greengrass/
106 | 
107 | Here we will mimic your local FE with an EC2 instance. Launch the following CloudFormation template in the AWS account that you are using for this workshop.
108 |
109 | Launch CloudFormation stack in **us-east-1** only: [](https://console.aws.amazon.com/cloudformation/home?region=us-east-1#/stacks/new?stackName=predictivemaintenance&templateURL=https://iot-ml-predictive-maintenance.s3.amazonaws.com/iot-ml-predictive-maintenance.json)
110 |
111 | **This template will create the following resources:**
112 |
113 | 1. An **S3 bucket** for use throughout this workshop. You will use this bucket to store your machine learning model, Polly notifications and any data used for training and testing.
114 |
115 | 2. **VPC + public Subnet and security groups** for an EC2 instance.
116 |
117 | 3. **Cloud9** instance where you will run your code and deploy your Greengrass core. Cloud9 is an AWS IDE where you will be able to write code and deploy scripts on your underlying EC2 instance. You will use this environment to install your Greengrass core.
118 |
119 | 4. A **SageMaker notebook environment** to build, train and deploy machine learning models.
120 |
121 | 5. An **EC2** instance which will mimic your FE. The EC2 instance comes bootstrapped with custom python libraries necessary for running this workshop.
122 |
123 | After you have been redirected to the Quick create stack page in the AWS CloudFormation console, take the following steps to launch your stack:
124 |
125 | 1. On 'Create stack', leave everything as default and click Next
126 | 2. Under 'Specify stack details', you can leave everything at its default or change the Parameters, then click Next
127 | - Cloud9 instance type: (Optional) Select a Cloud9 instance type. The preselected m4.large is sufficient to run the workshop.
128 | - SageMaker instance type: (Optional) Select a SageMaker instance type. The preselected ml.t2.medium is sufficient to run the workshop.
129 | 3. Leave everything as default under 'Configure Stack options', click Next
130 | 4. Review and scroll down to Capabilities -> check I acknowledge that AWS CloudFormation might create IAM resources.
131 | 5. At the bottom of the page click Create stack.
132 | 6. Wait until the complete stack is created; it should take about 10 minutes.
133 | 7. In the Outputs section for your stack in the CloudFormation console you will find values for several of the resources that have been created: Cloud9, S3 bucket name, SageMaker instance... You can go back to the Outputs section at any time to find these values.
134 |
135 | ### Access the SageMaker Notebook Instance
136 |
137 | Go to outputs section for your stack **predictivemaintenance** in the AWS CloudFormation console
138 |
139 | **SageMakerInstance**: Right-click the corresponding link and select Open link in new tab. Hint: If the link is not clickable, try copying and pasting it into the address bar of a new web browser window or tab.
140 | You will be redirected to your SageMaker instance.
141 |
142 | ### Access the Cloud9 IDE
143 |
144 | Go to outputs section for your stack **predictivemaintenance** in the AWS CloudFormation console
145 |
146 | **Cloud9IDE**: Right-click the corresponding link and select Open link in new tab. Hint: If the link is not clickable, try copying and pasting it into the address bar of a new web browser window or tab.
147 | You will be redirected to your Cloud9 IDE
148 | You should see a website similar to this one:
149 |
150 | 
151 |
152 | ### Make your home folder visible
153 |
154 | A number of files (to be used in other workshops) have been copied onto the Cloud9 IDE. By default the content of the home folder is not shown, so you need to change this.
155 |
156 | In your Cloud9 IDE in the left pane:
157 |
158 | 
159 |
160 | 1) Click the arrow next to the setting wheel
161 | 2) Click Show Home in Favorites
162 |
163 |
164 | ### Copying Files from/to the Cloud9 IDE
165 |
166 | Files can be uploaded either directly through the Cloud9 IDE, indirectly via an S3 bucket, or locally from your laptop. You will need to copy the configuration file for your Greengrass Core to the Cloud9 instance later in the workshop.
167 |
168 | You will use the following Cloud9 IDE process later in the workshop:
169 |
170 | Upload a file: In the File menu choose Upload Local Files...
171 | Download a file: Right-click on the filename > Download.
172 |
173 | ### Open a Terminal
174 |
175 | To open a terminal (shell) in the Cloud9 IDE click the + in the tab bar and select New Terminal.
176 | You will use this terminal to install and run the Greengrass core.
177 |
178 | 
179 |
180 | ## 5. Install Greengrass, register IoT thing and connect to Greengrass
181 |
182 | ### 5.1. Provision the Greengrass group and core
183 |
184 | The Greengrass group allows you to cluster resources together which need to communicate with one another. For example, multiple sensors on your factory floor, or IoT devices in your home may constitute a Greengrass group. By provisioning this group, you can also create local lambda functions which can run even when the FE goes offline. This is crucial for heavy industrial environments where consistent internet access isn't always a given.
185 |
186 | Furthermore, the Greengrass group allows you to locally deploy machine learning models in your FE, which are trained in the cloud.
187 |
188 | **To get started go to the AWS Greengrass console and create a new Greengrass group and permission**
189 |
190 | 1. Groups
191 | 2. Create Group
192 | 3. Greengrass needs your permission to access other services. Click 'Grant permission' to create 'Greengrass_ServiceRole' to provide permission to Greengrass.
193 | 4. Use default creation
194 | 5. Group Name: greengrass-predictive
195 | 6. Next
196 | 7. Leave Name for Core untouched
197 | 8. Next
198 | 9. Create Group and Core
199 | 10. Download these resources as a tar.gz (a tar.gz file which contains the key/certificate and configuration for Greengrass)
200 | 11. Finish (you might need to scroll down to find this button) !!! Don't forget to click "Finish". Otherwise your group will not be created !!!
201 | 12. Verify in the AWS IoT console that your Greengrass Group has been created
202 |
203 | The Greengrass service role that you just created is an IAM service role that authorizes AWS IoT Greengrass to access resources in your AWS account on your behalf. You need to associate this role with your current AWS account. To allow AWS IoT Greengrass to access your resources, run these commands in a Cloud9 terminal:
204 |
205 | ```bash
206 | #retrieve the service role (note the RoleArn in the output)
207 | aws greengrass get-service-role-for-account --region us-east-1
208 | 
209 | #associate the service role with your account; replace <account-id> with your AWS account ID
210 | aws greengrass associate-service-role-to-account --role-arn arn:aws:iam::<account-id>:role/Greengrass_ServiceRole
211 | ```
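| If you don't have the service role ARN handy, you can look it up with the standard IAM CLI (a sketch, assuming the role name created above):
| 
| ```bash
| # Print the ARN of the Greengrass_ServiceRole created in the console
| aws iam get-role --role-name Greengrass_ServiceRole --query 'Role.Arn' --output text
| ```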
212 |
213 | Now you need to create a Greengrass group role. The Greengrass group role is an AWS Identity and Access Management (IAM) role that authorizes code running on a Greengrass core to access your AWS resources.
214 |
215 | Go to IAM console to create an IAM role.
216 |
217 | 1. Roles
218 | 2. Create role
219 | 3. AWS service
220 | 4. Greengrass
221 | 5. Next: Permissions
222 | 6. Check AWSGreengrassResourceAccessRolePolicy
223 | 7. Next: Review
224 | 8. Role name: GreengrassRole (Note that role names must be unique. You will need to keep track of the RoleARN for the rest of this workshop)
225 | 9. Create Role
226 | 10. After creating the role, make a note of the role ARN to use it later.
227 |
228 | You can also find the role ARN in the IAM console:
229 | 1. Go to the IAM console, click Roles.
230 | 2. Type GreengrassRole in the search field
231 | 3. Click GreengrassRole
232 | 4. You'll find the role ARN at the top of the window
233 |
234 | For this workshop we will need to attach 2 more policies to this role.
235 |
236 | 1. Click on Attach policies
237 | 2. Find AmazonS3ReadOnlyAccess and click Attach policy
238 | 3. Repeat the steps for AmazonSNSFullAccess
239 | 
240 | This will allow Greengrass to obtain the machine learning model artifacts from your S3 bucket for deployment. It will also allow your local Lambda function to publish messages to an SNS topic and call the SNS APIs.
241 |
242 | Now you need to associate this role with the Greengrass group greengrass-predictive. You should then see the permissions associated with the role appear in the Settings of the Greengrass group.
243 |
244 | 1. Go back to Greengrass Console.
245 | 2. Go to Groups --> greengrass-predictive --> Settings
246 | 3. In GroupRole, click on the "Add Role"
247 | 4. For IAM role, select GreengrassRole and click Save
248 | 5. You should see the role and policies in the Settings
249 |
250 |
251 | **Copy and unpack the tar.gz-file**
252 | By default the Cloud9 home folder size is 10GB. Let's expand this folder so that you have more space to work with by running the resize.sh script.
253 |
254 | ```bash
255 |
256 | cd /tmp
257 | ./resize.sh
258 |
259 | ```
260 |
261 | After expanding the home folder size, you can configure the Greengrass core. Copy (use S3/Cloud9 IDE as mentioned above) the downloaded tar.gz file onto your Cloud9 IDE in the home folder /home/ec2-user/. The tar.gz file's name is similar to <hash>-setup.tar.gz.
262 | The tar.gz file contains the keys, certificate and a configuration file (config.json) which will be used to configure your Greengrass Core.
263 |
264 | In a Cloud9 terminal:
265 |
266 | ```bash
267 | sudo tar zxvf /home/ec2-user/<hash>-setup.tar.gz -C /greengrass/
268 | ```
269 |
270 | Now you are ready to start your Greengrass core.
271 |
272 | But before you start the Greengrass daemon, subscribe to the following topics. If the Core starts correctly you can observe activity on those topics.
273 |
274 | Go to the AWS IoT Core console
275 |
276 | 1. Test
277 | 2. Subscribe to $aws/events/# and $aws/things/#
278 | 3. Now fire up Greengrass on your EC2 instance
279 |
280 | In a Cloud9 terminal:
281 |
282 | ```bash
283 | cd /greengrass/ggc/core
284 | sudo ./greengrassd start
285 | ```
286 |
287 | Look at the MQTT client in the AWS IoT console for output.
288 |
289 | You need to become root to access the log-directories on the Greengrass Core:
290 |
291 | ```bash
292 | sudo su -
293 | ```
294 |
295 | In a Cloud9 terminal:
296 |
297 | ```bash
298 | cd /greengrass/ggc/var/log/system/
299 | tail -f *.log
300 | ```
301 |
302 | If there are any problems when starting AWS Greengrass check file "crash.log" for errors:
303 |
304 | ```bash
305 | /greengrass/ggc/var/log/crash.log
306 | ```
307 |
308 | Your AWS Greengrass Core should now be up and running.
309 |
310 | ### 5.2. Register an IoT Thing with AWS IoT
311 |
312 | The IoT Thing is the cloud representation of your IoT device, in this case the sensor that collects data about the equipment in your factory.
313 |
314 | 1. Go to the IoT Core
315 | 2. Onboard --> Get Started
316 | 3. Onboard a device --> Get started
317 | 4. Review the steps to register a device --> click Get started
318 | 5. Choose Platform Linux/OSX and AWS IoT Device SDK Python
319 | 6. Next
320 | 7. Thing Name: Iot-Sensor
321 | 8. Next Step
322 | 9. Download connection kit for Linux/OSX and save it on your machine
323 | 10. Next Step.
324 | 11. Click Done
325 |
326 | In your Cloud9 IDE, right-click the folder **/home/ec2-user/environment** and click New Folder.
327 | Name the new folder IotSensor.
328 | Upload the **connect_device_package.zip** file into this folder and follow the steps indicated in a new terminal window.
329 | 
330 | Unzip the **connect_device_package.zip** file in the IotSensor folder, then change the permissions of the start.sh script so you can start sending data to AWS IoT:
331 |
332 | ```bash
333 | cd /home/ec2-user/environment/IotSensor
334 | unzip connect_device_package.zip
335 | chmod 755 start.sh
336 | ```
337 |
338 | **Note: you may have to enable root access in your terminal for your start shell script to execute correctly. This can be done by typing:**
339 | ```bash
340 | sudo ./start.sh
341 | ```
342 |
343 |
344 | ### 5.3. Register IoT Device with AWS Greengrass
345 |
346 | Once the IoT device has been registered, we still need to connect the IoT device to Greengrass. This way, the IoT device will send messages to Greengrass and will be able to trigger Lambda functions that are deployed on the Greengrass core.
347 |
348 | To do so we first need to register the IoT device with the Greengrass core.
349 |
350 | 1. Go to Greengrass, Groups
351 | 2. Click on greengrass-predictive
352 | 3. Go to Devices --> Add Device --> Select an IoT Thing --> Select Iot-Sensor --> Finish.
353 |
354 | Next we need to change the permission policy of the Iot-Sensor so that it can discover the Greengrass core automatically.
355 |
356 | 1. Click on Manage --> Things --> Iot-Sensor
357 | 2. Security
358 | 3. Click the Certificate
359 | 4. Policies
360 | 5. IoT-Sensor-Policy
361 | 6. Edit Policy Document
362 |
363 | Paste the JSON below into the box. You may need to overwrite the existing JSON document.
364 |
365 | ```json
366 | {
367 | "Version": "2012-10-17",
368 | "Statement": [
369 | {
370 | "Effect": "Allow",
371 | "Action": [
372 | "iot:Publish",
373 | "iot:Subscribe",
374 | "iot:Connect",
375 | "iot:Receive",
376 | "greengrass:Discover",
377 | "iot:DeleteThingShadow",
378 | "iot:GetThingShadow",
379 | "iot:UpdateThingShadow"
380 | ],
381 | "Resource": [
382 | "*"
383 | ]
384 | }
385 | ]
386 | }
387 | ```
388 |
389 | Click *save as new version*.
390 |
391 | Next we will replace the simple Hello World messages coming from the IoT device with actual sensor data.
392 |
393 | ### 5.4. Set up the IoT sensor
394 |
395 | To start sending sensor messages to the Greengrass core and AWS IoT complete the following steps.
396 |
397 | In the Greengrass console, click Groups --> greengrass-predictive --> Devices
398 | Click ... in the top right where it says Local Shadow Only
399 | Select: Sync to the Cloud
400 |
401 | For every IoT thing registered on Greengrass, AWS IoT creates a thing shadow. A shadow is a JSON document that is used to store current or desired state information for a thing. When the thing shadow is syncing to the cloud, it constantly updates itself with the most recent state of the IoT device. AWS IoT Greengrass devices can interact with AWS IoT device shadows in an AWS IoT Greengrass group and update the state of the shadow; an example update payload is shown below. To enable this, create the subscriptions following the steps after the example. The thing shadow interacts with the IoT device and AWS IoT on a special messaging topic **$aws/things/Iot-Sensor/shadow/**
402 |
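| For illustration, the IotSensor.py script in this repository publishes shadow updates of the following shape, where the property value is a simulated sensor reading:
| 
| ```json
| {
|   "state": {
|     "desired": {
|       "property": "0.7368442"
|     }
|   }
| }
| ```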
403 | Go to Greengrass Groups --> greengrass-predictive.
404 | 1. Go to Subscriptions
405 | 2. Add subscription
406 | 3. Source --> Devices --> Iot-Sensor
407 | 4. Target --> Local Shadow Service
408 | 5. Next
409 | 6. In the topic filter enter: $aws/things/Iot-Sensor/shadow/update
410 | 7. Next --> Finish
411 |
412 | Add another subscription this time choosing the Local Shadow Service as the Source and Iot-Sensor as the target
413 | Enter Topic filter: $aws/things/Iot-Sensor/shadow/update/accepted. Click Next --> Finish
414 |
415 | Now go to the Cloud9 terminal.
416 |
417 | Under **/home/ec2-user/environment**, clone the following Github repository: **https://github.com/aws-samples/amazon-sagemaker-predictive-maintenance-deployed-at-edge.git**
418 |
419 | Next, move IotSensor.py and gg_discovery_api.py into the IotSensor folder. This is important because your start shell script will now execute the IotSensor.py file.
420 |
421 | Finally open the start.sh script in Cloud9. Navigate to the last line of the script, replace
422 | "aws-iot-device-sdk-python/samples/basicPubSub/basicPubSub.py" with **IotSensor.py**, and add the following at the end:
423 | 
424 | ```bash
425 | --connect-to greengrass
426 | ```
427 | The final line should look something like this (note: yourhashID-ats is the unique ID for your AWS IoT endpoint. Please keep this hash ID as it is in this script):
428 | 
429 | ```bash
430 | python IotSensor.py -e yourhashID-ats.iot.us-east-1.amazonaws.com -r root-CA.crt -c Iot-Sensor.cert.pem -k Iot-Sensor.private.key --connect-to greengrass
431 | ```
432 |
433 | Now we are ready to connect the IoT device to the Greengrass core.
434 | 
435 | In the Cloud9 terminal, navigate to the folder containing your start.sh shell script and run:
436 |
437 | ```bash
438 | sudo ./start.sh
439 | ```
440 | Your IoT device should successfully discover the Greengrass core.
441 | 
442 | To check that the IoT device is updating the thing shadow, go to AWS IoT --> Test --> Subscribe to topic $aws/things/Iot-Sensor/shadow/update
443 | If things are working correctly, you should start seeing messages come through.
444 |
445 | **Troubleshoot greengrass core**
446 |
447 | If there are any errors, you can check the logs to troubleshoot. To access the logs, open a new terminal window in Cloud9. To do this, click on the + symbol and click New Terminal.
448 |
449 | In the terminal window type in
450 |
451 | ```bash
452 | sudo su
453 | cd /greengrass/ggc/var/log
454 | ls
455 | ```
456 | This will give you access to the runtime and crash logs.
457 | Once a Lambda function is configured, you will also see user logs.
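| For example, once the Lambda function from section 7 is deployed, you could follow its log with something like the sketch below (the exact path depends on your region, and the account ID is a placeholder):
| 
| ```bash
| # User (Lambda) logs are written under a region/account-scoped path
| sudo tail -f /greengrass/ggc/var/log/user/us-east-1/<account-id>/predictive-maintenance-advanced.log
| ```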
458 |
459 | ## 6. Explore data, build, train and deploy a model in Amazon SageMaker
460 |
461 | Next, go to the Outputs section of the CloudFormation template and click on the link to your SageMaker notebook instance. Alternatively, simply go to SageMaker in the AWS Console and you should find your notebook instance up and running.
462 |
463 | The SageMaker notebook instance should already have a GitHub repo cloned into the home directory.
464 |
465 | Go to **amazon-sagemaker-predictive-maintenance-deployed-at-edge** directory and open **Dataset_Preprocess.ipynb**. Run this notebook to generate the train and test datasets.
466 |
467 |
468 | Open **predictive-maintenance-xgboost.ipynb** and run this notebook to build and train your model. For a kernel, choose **conda_python3**.
469 |
470 | **Remember to change the default S3 bucket name to your bucket name (as a string in quotes) in the code cell where the Markdown prompts for a bucket. This is where your training, test and validation data will be stored, as well as your trained model artifacts.**
471 |
472 | To find the S3 bucket name, go to CloudFormation, click on the **predictivemaintenance** stack, then click on **Outputs**, and you will see your bucket next to **S3Bucket**
473 |
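| The cell in question looks roughly like the sketch below; the exact variable names may differ slightly in the notebook, and the bucket name shown is a placeholder:
| 
| ```python
| # Placeholder values -- substitute the S3Bucket name from your
| # CloudFormation Outputs for the string below.
| bucket = 'predictivemaintenance-s3bucket-xxxxxxxxxxxx'
| prefix = 'pred-maintenance-xgboost'  # illustrative key prefix
| ```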
474 | To build and train a machine learning model using Amazon SageMaker for predictive maintenance, execute each code cell and read through the text in the Markdown.
475 |
476 | Note: Ignore the warning you receive when you fetch the Docker image with the `get_image_uri` command.
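
For orientation, the heart of the notebook's training code looks roughly like the sketch below, assuming the SageMaker Python SDK v1 interface (its `get_image_uri` call is what prints that warning); the bucket name, prefix, instance type, and hyperparameters here are illustrative placeholders, not the notebook's exact values:

```python
import boto3
import sagemaker
from sagemaker.amazon.amazon_estimator import get_image_uri

sess = sagemaker.Session()
role = sagemaker.get_execution_role()
bucket = 'your-s3-bucket-name'   # replace with the S3Bucket value from CloudFormation
prefix = 'xgb'

# resolve the managed XGBoost container for the current region
container = get_image_uri(boto3.Session().region_name, 'xgboost')

xgb = sagemaker.estimator.Estimator(
    container,
    role,
    train_instance_count=1,
    train_instance_type='ml.m4.xlarge',
    output_path='s3://{}/{}/output'.format(bucket, prefix),
    sagemaker_session=sess)
xgb.set_hyperparameters(objective='binary:logistic', num_round=100)

# point the job at the train/validation data the preprocessing notebook wrote to S3
xgb.fit({
    'train': sagemaker.s3_input('s3://{}/{}/train'.format(bucket, prefix), content_type='csv'),
    'validation': sagemaker.s3_input('s3://{}/{}/validation'.format(bucket, prefix), content_type='csv'),
})
```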
477 |
478 | As your model trains, training metrics will be generated in the SageMaker notebook as well as in the SageMaker console. Once the training is complete, SageMaker will automatically tear down the compute resources required for model training. You are only billed for the time the training runs and the instance type used.
479 |
480 | Once you run through all the cells in this notebook, navigate to your S3 bucket and make sure a trained ML model has been created in the output folder.
481 |
482 | ### Working with Jupyter notebooks
483 |
484 | Cells in notebooks containing code to execute have square brackets [ ] to the left of the cell.
485 | 
486 | [ ] -- the cell has not been executed
487 | [*] -- the cell is currently executing; depending on the code, this can take a while
488 | [X] -- where X is a number like [6], the code in the cell has been executed
489 |
490 |
491 | Execute cells:
492 | 
493 | * Use Run at the top of the screen, or
494 | * Press Ctrl+Enter on the keyboard
495 |
496 | ## 7. Deploy the predictive-maintenance-advanced Lambda
497 |
498 | ### 7.1. Create Lambda function to deploy to Greengrass Core
499 |
500 | We will create a Lambda function that will be deployed locally to the Greengrass Core. This function will download the machine learning model that you built earlier and use it to run inference on incoming sensor data to predict device failure.
501 |
502 | To prepare the code for this function, copy the Lambda function predictlambda.py and the folder greengrasssdk from the repository you cloned in the Cloud9 environment to your local device.
503 |
504 | Open up a terminal, navigate to the folder where you downloaded the files, and zip them together using the command:
505 |
506 | ```bash
507 | zip -r predictlambda.zip predictlambda.py greengrasssdk
508 | ```
509 |
510 | Next we create an IAM role for the Lambda functions to control permissions. Normally, as a best practice, we want to follow the principle of least privilege and grant the Lambda functions *only* the access they need. However, for simplicity, we will cheat a little here and create a single role for both Lambda functions.
511 |
512 | 1. Go to IAM --> Roles --> Create Role --> AWS Service --> Lambda --> Next: Permissions
513 | 2. Choose AWSLambdaBasicExecutionRole in the list of policies
514 | 3. Click Next: Tags, then Next: Review
515 | 4. Enter Role Name: **Predictivelambdarole**
516 | 5. Hit Create Role
517 |
518 | Now navigate to the role you just created. As before, we will add some policies to this role.
519 |
520 | 1. Click on Attach policies
521 | 2. Attach the following policies to the role: AmazonS3FullAccess, AmazonPollyFullAccess, and AmazonSNSFullAccess
522 |
523 | Now we are ready to create the Lambda function. Navigate to the Lambda console.
524 |
525 | 1. Click on Create Function
526 | 2. Choose Author from Scratch
527 | 3. Call the function **predictive-maintenance-advanced**
528 | 4. For Runtime choose **Python 3.7**
529 | 5. For permissions, expand the arrow titled "Choose or create an execution role", click on Use an existing role, and find the role you just created, **Predictivelambdarole**, in the drop-down menu.
530 | 6. Hit Create Function.
531 |
532 | You should see a Lambda function created. Explore the Lambda function console: you will find listed the services the Lambda function has permission to read from and write to. You will also see triggers. This Lambda function will be deployed on Greengrass and triggered by the Iot-Sensor in your FE, so we won't add a trigger here.
533 |
534 | 1. Next, navigate to Function Code
535 | 2. For Code Entry Type, choose Upload a .zip file --> Upload predictlambda.zip
536 | 3. For Runtime choose Python 3.7
537 | 4. For Handler enter predictlambda.lambda_handler
538 | 5. Click Save
539 |
540 | You should see your Lambda code appear in the IDE.
541 |
542 | Study the Lambda code (a condensed sketch follows this list):
543 | 
544 | 1) Upon being triggered by the IoT sensor, the Lambda function extracts the relevant data point from the sensor.
545 | 2) The Lambda function then invokes the ML model you created earlier.
546 | 3) The Lambda function notifies AWS IoT that a prediction has been made.
547 | 4) If the prediction is faulty (`pred == 1`), the Lambda function sends a message to SNS (you will create the SNS topic in the next step).
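
The sketch below condenses that flow from predictlambda.py in this repo; the deployed version also pads the datapoint with simulated features and reports timing, and the stubbed predict_part and the ARN here are placeholders:

```python
import json

import boto3
import greengrasssdk

iot = greengrasssdk.client('iot-data')   # local client used to publish to AWS IoT
sns = boto3.client('sns')

TOPIC_ARN = 'arn:aws:sns:us-east-1:123456789012:YourTopic'   # placeholder
LAMBDA_TOPIC = 'xgboost/offline'                             # default from predictlambda.py

def predict_part(datapoint):
    # stub: the real version pads the datapoint to 168 features and
    # calls the unpickled XGBoost model (see predictlambda.py)
    return 0

def lambda_handler(event, context):
    # 1) extract the data point published by the IoT sensor
    datapoint = json.loads(event['state']['desired']['property'])
    # 2) invoke the ML model you trained earlier
    pred = predict_part(datapoint)
    # 3) notify AWS IoT that a prediction has been made
    iot.publish(topic=LAMBDA_TOPIC, payload='Predicted Label {}'.format(pred))
    # 4) a faulty prediction also goes to SNS
    if pred == 1:
        sns.publish(TopicArn=TOPIC_ARN, Message='Faulty Part Found on Line 1.')
```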
548 |
549 | ### 7.2. Create an SNS topic
550 |
551 | To receive a notification if the prediction is faulty, you create an SNS topic and subscribe your email to it. In the AWS Console, navigate to SNS and click Topics in the left-hand panel.
552 |
553 | 1. Create Topic
554 | 2. Enter a name --> Create Topic
555 | 3. Create Subscription
556 | 4. Copy the topic ARN to your clipboard
557 | 5. Protocol --> Email
558 | 6. Enter your email address
559 | 7. Create Subscription
560 |
561 | You should receive an email asking you to confirm the subscription.
562 | Once you confirm, you should be all set!
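
If you prefer to script these console steps, a minimal boto3 sketch along these lines should work (the topic name and email address are placeholders):

```python
import boto3

sns = boto3.client('sns')

# create the topic and subscribe your email to it
topic_arn = sns.create_topic(Name='predictive-maintenance-alerts')['TopicArn']
sns.subscribe(TopicArn=topic_arn, Protocol='email', Endpoint='you@example.com')
print(topic_arn)   # keep this handy for the TOPIC_ARN field in the Lambda code
```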
563 |
564 | Navigate back to the Lambda function you just created and, in the IDE:
565 | For TOPIC_ARN, replace the existing value with the ARN of the topic you just created.
566 | For LAMBDA_TOPIC, replace the topic with a different name of your choosing or leave it as is. Save this topic name in a text file for later use.
567 |
568 | Click Save
569 |
570 | ### 7.3. Deploy the Lambda function locally to Greengrass Core
571 |
572 | Our Lambda function should be able to make predictions with the ML model even without internet connectivity. For this reason, the Lambda function needs to be deployed on the Greengrass core rather than live in the AWS Cloud.
573 |
574 | In your Lambda function:
575 |
576 | 1) Save the Lambda function and click on Actions --> Publish as new version.
577 | 2) Leave the Version Description field blank and click Publish.
578 | 3) Next, in Actions, click Create alias. Give the alias a name and, for the version, choose the version number, not $LATEST. Hit Create.
579 | **Complete steps 4-5 every time you update the Lambda function**
580 | 4) If you make any subsequent changes to your Lambda code, you need to Save and Publish as a new version each time, and then associate the alias with that version. To do so, click on Qualifiers -> Alias, click on the alias you just created, and scroll down.
581 | 5) Under Alias configuration, click Edit. Change the version to the most recent version number (this will usually be the highest number). Remember: **do not** set the version to **$LATEST**. Currently Greengrass does not support deploying aliases that point to $LATEST.
582 | 6) Go back to the Greengrass console --> Groups --> greengrass-predictive
583 | 7) Click on Lambdas --> Add Lambda --> Use Existing Lambda --> Enter *predictive-maintenance-advanced* in the search and locate your Lambda function
584 | 8) Click Next --> Choose Alias. Hit Finish
585 | 9) Once you see the Lambda function appear, click on the *...* where it says Using alias -- youraliasname. Click Edit Configuration.
586 | 10) Increase the memory limit to **256MB**
587 | 11) In Lambda lifecycle choose **Make this function long-lived and keep it running indefinitely**
588 | 12) Click Update
589 |
590 | Next navigate back to your Greengrass group and click **Resources**.
591 | 1) Choose Machine Learning --> Add a machine learning resource
592 | 2) Name your model **xgboost-model**
593 | 3) For Model source, choose Upload a Model from S3
594 | 4) Navigate to your S3 bucket --> folder xgb --> sagemaker-xgboost-* --> output. Click model.tar.gz
595 | 5) For the local path enter **/greengrass-machine-learning/xgboost/** (this path has already been entered in your Lambda function and must match)
596 | 6) In Lambda Function Affiliations, select your Lambda function, choose the 'Read and Write access' permission, and click **Save**
597 |
598 | Greengrass will now copy your model.tar.gz file to this local folder and untar the model artifacts. By associating your Lambda function with this local model path, the Lambda function knows where on the Greengrass core to unpickle the model object and make predict calls when triggered.
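
This is why the local path must match: predictlambda.py in this repo unpickles the booster from that exact path once Greengrass has extracted model.tar.gz there:

```python
import pickle

# local resource path populated by Greengrass from model.tar.gz
model_path = '/greengrass-machine-learning/xgboost/xgboost-model'
model = pickle.load(open(model_path, 'rb'))
```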
599 |
600 | In Resources --> Machine Learning you should now see your machine learning model affiliated with your Lambda function. If the model still shows as unaffiliated, give it a few seconds. If the problem persists, make sure you included the correct Lambda function in the affiliations.
601 |
602 | ## 8. Create Polly Lambda
603 |
604 | Next we will create a second lambda function which is triggered whenever a message is published to the SNS topic we just created.
605 |
606 | To create this Lambda function, follow the steps above for creating a Lambda function, but give it a different name from the one you just created; for example, call it PollyLambda.
607 | 1. For IAM roles, assign this function the same role as above **Predictivelambdarole**
608 | 2. Hit Create Function
609 |
610 | Next we will trigger this lambda function using SNS.
611 |
612 | 1. In the Lambda function environment, in the Designer window, click on **+ Add Trigger** and choose **SNS** from the drop down menu.
613 | 2. Select the SNS topic ARN for the topic you created
614 | 3. Make sure "Enable Trigger" box is checked
615 | 4. Click Add
616 |
617 | 5. In the Function Code menu, select **Edit Code Inline**
618 | 6. In a separate window, from the Cloud9 terminal, navigate to the folder where you cloned the Git repo, then select and open Pollylambda.py
619 | 7. Delete the default handler code, then copy and paste the code into the Lambda function editor
620 | 8. For Runtime choose Python 3.7
621 | 9. For the Handler, enter lambda_function.lambda_handler
622 | 10. Increase the timeout for this function: scroll down to 'Basic settings', click Edit, and increase the timeout from 3 to 30 seconds. Click Save
623 |
624 | Examine this Lambda function. It is triggered whenever a message is published to the SNS topic you created earlier. Upon this trigger, the Lambda function is authorized to invoke Amazon Polly, an AI service which converts text into lifelike speech.
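
SNS delivers the published message wrapped in a Records list, so a handler along these lines can recover the text (a minimal sketch; see Pollylambda.py in this repo for the full version):

```python
def lambda_handler(event, context):
    # SNS places the published message inside the first record
    message = event['Records'][0]['Sns']['Message']
    print('Received alert: {}'.format(message))
    # ... hand the text to Polly, as sketched further below
```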
625 |
626 | For this, Polly requires a VoiceId corresponding to one of the many human-like voices it supports. Voice IDs are denoted by name strings and can be found here: https://docs.aws.amazon.com/polly/latest/dg/API_Voice.html.
627 |
628 | The Polly synthesize_speech API also takes inputs such as how quickly or slowly you want the voice to speak, and can include SSML to create custom sounds. You can also create a custom vocabulary within Polly if your use case requires it.
629 | Explore the Polly documentation to learn more.
630 |
631 | The Polly code requires you to specify environment variables such as BUCKET_NAME and VoiceId as part of the Lambda environment.
632 |
633 | To add these, scroll down to the section entitled **Environment variables** in your Lambda function UI.
634 |
635 | In the left box of the first row, enter BUCKET_NAME. In the right box of the same row, enter the bucket name created for you by the CloudFormation template.
636 | In the left box of the second row, enter VoiceId. In the right box, enter a string corresponding to the voice ID you want to hear.
637 |
638 | Once you are done, hit Save.
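
A minimal sketch of what the Polly call looks like with those environment variables (the actual Pollylambda.py also uploads the audio to S3; the object key and message text below are placeholders):

```python
import os

import boto3

polly = boto3.client('polly')
s3 = boto3.client('s3')

bucket = os.environ['BUCKET_NAME']      # set in the Lambda environment variables
voice_id = os.environ['VoiceId']        # e.g. 'Joanna'

# convert the alert text into lifelike speech
response = polly.synthesize_speech(
    Text='Faulty part found. Immediate attention is required.',
    OutputFormat='mp3',
    VoiceId=voice_id)

# store the resulting audio in the workshop bucket (key is a placeholder)
s3.put_object(Bucket=bucket, Key='alerts/alert.mp3',
              Body=response['AudioStream'].read())
```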
639 |
640 | ## 9. Configure Lambda function to read data from sensors
641 |
642 | Once your Lambda function is deployed on the Greengrass group, in order for it to start receiving data from the sensors, you need to create subscriptions from the Iot-Sensor and the Local Shadow Service to the Lambda function.
643 |
644 | Go back to the IoT Core service --> Greengrass --> Groups --> greengrass-predictive:
645 |
646 | 1. Click on Subscriptions
647 | 2. Add subscription
648 | 3. Source --> Devices --> Iot-Sensor
649 | 4. Target --> Lambdas --> predictive-maintenance-advanced
650 | 5. Next
651 | 6. In the topic filter enter: $aws/things/Iot-Sensor/shadow/update/accepted
652 | 7. Next --> Finish
653 |
654 | Now repeat these steps, this time changing the Source to Services --> Local Shadow Service. Keep the target and topic filter the same as in the previous steps.
655 |
656 |
657 | ## 10. Configure Lambda function to send prediction to AWS IoT and deploy the solution
658 |
659 | ### 10.1. Configure Lambda function
660 | Once both Lambda functions (PollyLambda and predictive-maintenance-advanced) are up and running, we need to add a subscription to let the Lambda function on the Greengrass group send messages to AWS IoT.
661 |
662 | 1. To do this, go back to your AWS Greengrass Core
663 | 2. Click on Subscriptions
664 | 3. Add subscription
665 | 4. Source --> Lambdas --> predictive-maintenance-advanced
666 | 5. Target --> Services --> IoT Cloud
667 | 6. Next
668 | 7. In the topic filter enter the topic name you picked for LAMBDA_TOPIC in your **predictive-maintenance-advanced** function.
669 | 8. Next --> Finish
670 |
671 | ### 10.2. Deploy lambda function to Greengrass Core
672 | Next click on Actions --> Deploy, then choose Automatic Detection (recommended). This deploys all updates and changes to the Greengrass group.
673 |
674 | **WARNING:** Your deployment should be quick (typically under 1 minute). If it is taking longer, it is possible that the Greengrass core has shut down. To remedy this, go to the Cloud9 terminal and rerun the following commands:
675 | ```bash
676 | cd /greengrass/ggc/core
677 | sudo ./greengrassd start
678 | ```
679 |
680 | Once your Greengrass group has successfully deployed, navigate to the IotSensor folder in the Cloud9 environment and run:
681 |
682 | ```bash
683 | sudo ./start.sh
684 | ```
685 | Go to AWS IoT --> Test, subscribe to the topic you entered in your Lambda subscription, and you should start seeing model inferences appear.
686 |
687 | ### 10.3. Troubleshooting
688 |
689 | If you are having trouble with your Lambda functions and want to check whether everything is correctly deployed, go to the logs.
690 |
691 | To access them, in the Cloud9 terminal, run:
692 | ```bash
693 | sudo su
694 | cd /greengrass/ggc/var/log/user/us-east-1
695 | ls            # note your account number in the listing
696 | cd <your-account-number>
697 | ls
698 | ```
699 | This should show you the log files associated with your Lambda function predictive-maintenance-advanced. Ignore any other files and use cat to view the logs:
700 |
701 | ```bash
702 | cat predictive-maintenance-advanced.log
703 | ```
704 | Inspect the logs to find the error. If needed, make the necessary changes to the Lambda function, Save, and Publish as a new version. Point the alias to the new version number and redeploy the Greengrass group.
705 |
706 | #### 10.3.1. Potential issue 1: Java 8 not available
707 |
708 | In this workshop, we use Greengrass stream manager to transfer IoT data to the AWS Cloud. Stream manager requires Java 8 to be installed on the Greengrass core. If you see an error about Java 8 not being available, try changing the Java version on Cloud9 by running:
709 |
710 | ```bash
712 | sudo update-alternatives --config java
713 | ```
714 |
715 | Select the option for the Java 8 package, not Java 7 (usually by pressing 2).
716 |
717 | #### 10.3.2. Potential issue 2: Service role isn't associated with the account
718 |
719 | The Greengrass service role should have been associated with your AWS account at step 5.1. However, if you see errors saying the service role isn't associated with your account, run this command again (remember to fill in your account number):
720 |
721 | ```bash
723 | #associate service role with your account
724 | aws greengrass associate-service-role-to-account --role-arn arn:aws:iam:::role/Greengrass_ServiceRole
725 | ```
726 |
727 | #### 10.3.3. Potential issue 3: Not enough space on Cloud9
728 |
729 | If you hit errors because the disk runs out of space on Cloud9, run this script under /tmp to expand the disk size:
730 |
731 | ```bash
733 | cd /tmp
734 | ./resize.sh
735 | ```
736 |
737 | ### 10.4. Trigger Polly
738 |
739 | The default Lambda code only sends a message to SNS if a faulty part is found. Since the data is heavily imbalanced, it may take a long time for a faulty part to be observed.
740 |
741 | To change this and make sure the end-to-end solution is working, go back to the Lambda console and open the **predictive-maintenance-advanced** Lambda.
742 |
743 | Change the Lambda code to send a message to SNS if `pred == 0` instead of `pred == 1`. This will change the system to send messages when "Not Faulty" parts are found, simply for demonstration purposes.
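
In predictlambda.py, this is the final publish guard; the demonstration-only change looks like this:

```python
# demonstration only: alert on "Not Faulty" parts so messages flow frequently
if pred == 0:   # the production version checks pred == 1
    response = sns.publish(
        TopicArn=TOPIC_ARN,
        # message text kept unchanged from the original sample
        Message='Faulty Part Found on Line 1. Immediate attention required.'
    )
```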
748 |
749 | Save the Lambda function and click on Actions --> Publish as new version.
750 | Leave the Version Description field blank and click Publish.
751 | Next go to Qualifiers -> Alias. Click on your alias (**the numbered one, not Unqualified: $LATEST**) and scroll down in the console for that alias.
752 | Change the version to the most recent version number (this will usually be the highest number). Remember: **do not** set the version to **$LATEST**. Currently Greengrass does not support deploying aliases that point to $LATEST.
753 |
754 |
755 | Next, press `Ctrl+C` in the Cloud9 terminal to stop the running Iot-Sensor script.
760 |
761 | Go back to the Greengrass group in the console.
762 | Click on Actions --> Deploy
763 | 
764 | Once the deployment completes, restart the Iot-Sensor by running
765 | ```bash
766 | sudo ./start.sh
767 | ```
768 |
769 | **Congratulations!!!** You should now start to see messages coming into your email at regular intervals as "Not Faulty" parts are found.
770 |
771 | Navigate to the S3 bucket created for this workshop. You should see *.mp3* files: recordings generated from the SNS messages warning you that immediate attention is required.
772 |
773 | ## Clean up
774 |
775 | Empty the S3 bucket first; otherwise, the CloudFormation stack will fail when deleting the bucket.
776 |
777 | In a Cloud9 terminal (replace $S3_BUCKET with your bucket name, or export it first):
778 | ```bash
779 | aws s3 rm s3://$S3_BUCKET --recursive
780 | ```
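
Equivalently from Python, if you prefer boto3 (the bucket name is a placeholder):

```python
import boto3

# batch-delete every object so CloudFormation can remove the bucket
boto3.resource('s3').Bucket('your-s3-bucket-name').objects.all().delete()
```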
781 |
782 | **Delete the CloudFormation stack**
783 |
784 | Go to the AWS CloudFormation console
785 |
786 | 1. Check pred-maintenance-advanced
787 | 2. Actions --> Delete Stack
788 | 
789 | **Delete the Greengrass Group**
790 |
791 | Go to the AWS Greengrass console
792 |
793 | 1. Groups
794 | 2. greengrass-predictive
795 | 3. Actions --> Reset deployment
796 | 4. Actions --> Delete Group
797 | 5. Yes, continue with delete
798 | 
799 | **Delete the Greengrass core in the IoT device registry**
801 |
802 | Go to the AWS IoT Core console
803 |
804 | 1. Manage --> Click greengrass-predictive_Core
805 | 2. Security --> Click the certificate name
806 | 3. Actions --> Delete --> Yes, continue with delete
807 | 4. Manage --> Click ... at greengrass-predictive_Core
808 | 5. Delete --> Yes, continue with delete
809 | 6. Security --> Policies --> Click ... at greengrass-ml_Core-policy
810 | 7. Delete --> Yes, continue with delete
820 |
821 | 1. Manage --> Click Iot-Sensor
822 | 2. Security --> Click the certificate name
823 | 3. Actions --> Delete --> Yes, continue with delete
824 | 4. Manage --> Click ... at Iot-Sensor
825 | 5. Delete --> Yes, continue with delete
826 | 6. Security --> Policies --> Click ... at Iot-Sensor_Core-policy
827 | 7. Delete --> Yes, continue with delete
837 |
838 | **Delete the Lambda functions**
839 |
840 | Go to the AWS Lambda console
841 |
842 | 1. Functions --> Check predictive-maintenance-advanced
843 | 2. Actions --> Delete --> Delete
847 |
848 | Repeat for the Polly Lambda (the one triggered by SNS).
849 |
850 | **Delete IAM roles for Greengrass and Lambda**
851 |
852 | Go to the AWS IAM console
853 |
854 | 1. Roles
855 | 2. Type GreengrassRole in the search field
856 | 3. Check GreengrassRole --> Delete role --> Yes, delete
857 | 4. Type Predictivelambdarole in the search field
858 | 5. Check Predictivelambdarole --> Delete role --> Yes, delete
863 |
864 | **Delete the SNS topic and subscription**
865 |
866 | 1. Go to SNS --> Topics
867 | 2. Click on the topic name you created
868 | 3. Click Delete
869 | 4. Type delete me --> Delete
871 |
872 |
873 | ## Next Steps towards end-to-end solution
874 |
875 | Having finished your POC, the next question is how to fill in the remaining steps to build an end-to-end architecture. A final architecture may look like this:
876 |
877 | 
878 |
879 | The two main changes are in the data ingest and data processing pipelines.
880 |
881 | ### Data ingest and preprocessing pipeline
882 |
883 | Our current architecture is incomplete because we assumed that clean training data somehow magically appeared in our S3 bucket. This is typically not the case.
884 |
885 | To complete the flow, we want a data ingest stream. This can be readily implemented using AWS IoT Rules. An IoT rule consists of an SQL statement that extracts meaningful information from the MQTT messages pushed by our IoT sensor and Lambda functions.
886 |
887 | The IoT rule can then feed a Kinesis Data Firehose stream that buffers the raw MQTT data and pushes it to S3. From there, we may want to use a fully managed ETL (extract-transform-load) platform such as AWS Glue, or EMR for more granular control, to perform ETL jobs on the raw data and convert it into meaningful features in CSV or libsvm formats that the XGBoost algorithm can consume.
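
As a sketch of how such a rule could be wired up with boto3 (the rule name, Firehose stream, and role ARN are assumptions for illustration):

```python
import boto3

iot = boto3.client('iot')

# route everything the sensor publishes into a Firehose delivery stream
iot.create_topic_rule(
    ruleName='SensorToFirehose',                      # assumed name
    topicRulePayload={
        'sql': "SELECT * FROM '$aws/things/Iot-Sensor/shadow/update/accepted'",
        'actions': [{
            'firehose': {
                'roleArn': 'arn:aws:iam::123456789012:role/iot-firehose-role',  # placeholder
                'deliveryStreamName': 'sensor-raw-stream',                      # placeholder
                'separator': '\n',
            }
        }],
    })
```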
888 |
889 | ### MLOps
890 |
891 | 
892 |
893 | Finally, we must discuss when our ML model needs to be retrained. This might occur if the data changes (for example, we swap out the equipment for a different machine) or if the model predictions begin to drift. These situations are usually called data drift and model drift, and we must watch out for both.
894 |
895 | One way to implement this in AWS is to use Step Functions. Since our Lambda function sends a message to AWS IoT whenever a faulty/not-faulty part is found, we can again use IoT rules to collect statistics on the number of faulty and not-faulty predictions. If we notice model drift beyond a preset threshold, this can trigger AWS Step Functions to relaunch the Glue ETL and SageMaker model training jobs to produce a new model.
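
As a hedged sketch, the drift check could be a small function fed by such an IoT rule that starts a retraining state machine once the observed faulty rate crosses a threshold (the state machine ARN and threshold below are assumptions):

```python
import json

import boto3

sfn = boto3.client('stepfunctions')

STATE_MACHINE_ARN = 'arn:aws:states:us-east-1:123456789012:stateMachine:RetrainModel'  # placeholder
DRIFT_THRESHOLD = 0.2   # assumed acceptable faulty-prediction rate

def maybe_retrain(faulty_count, total_count):
    # compare the observed faulty rate against the preset threshold
    rate = faulty_count / max(total_count, 1)
    if rate > DRIFT_THRESHOLD:
        # kick off the Glue ETL + SageMaker training workflow
        sfn.start_execution(
            stateMachineArn=STATE_MACHINE_ARN,
            input=json.dumps({'observed_faulty_rate': rate}))
```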
896 |
897 | Once the new model is generated, we can point the Greengrass core to both the old and new model in a blue-green deployment to test how well the new model performs on unseen data. Once we are convinced of the model's performance against production data, we can switch the traffic over entirely to the new model.
898 |
899 |
900 |
901 | Thank you very much for taking the time to complete this workshop!
902 |
903 | ## License Summary
904 |
905 | This sample code is made available under a modified MIT license. See the LICENSE file.
906 |
907 |
908 |
909 |
--------------------------------------------------------------------------------
/cols.txt:
--------------------------------------------------------------------------------
1 | ["Response", "Sensor_1", "Sensor_2", "Sensor_3", "Sensor_4", "Sensor_5", "Sensor_6", "Sensor_7", "Sensor_8", "Sensor_9", "Sensor_10", "Sensor_11", "Sensor_12", "Sensor_13", "Sensor_14", "Sensor_15", "Sensor_16", "Sensor_17", "Sensor_18", "Sensor_19", "Sensor_20", "Sensor_21", "Sensor_22", "Sensor_23", "Sensor_24", "Sensor_25", "Sensor_26", "Sensor_27", "Sensor_28", "Sensor_29", "Sensor_30", "Sensor_31", "Sensor_32", "Sensor_33", "Sensor_34", "Sensor_35", "Sensor_36", "Sensor_37", "Sensor_38", "Sensor_39", "Sensor_40", "Sensor_41", "Sensor_42", "Sensor_43", "Sensor_44", "Sensor_45", "Sensor_46", "Sensor_47", "Sensor_48", "Sensor_49", "Sensor_50", "Sensor_51", "Sensor_52", "Sensor_53", "Sensor_54", "Sensor_55", "Sensor_56", "Sensor_57", "Sensor_58", "Sensor_59", "Sensor_60", "Sensor_61", "Sensor_62", "Sensor_63", "Sensor_64", "Sensor_65", "Sensor_66", "Sensor_67", "Sensor_68", "Sensor_69", "Sensor_70", "Sensor_71", "Sensor_72", "Sensor_73", "Sensor_74", "Sensor_75", "Sensor_76", "Sensor_77", "Sensor_78", "Sensor_79", "Sensor_80", "Sensor_81", "Sensor_82", "Sensor_83", "Sensor_84", "Sensor_85", "Sensor_86", "Sensor_87", "Sensor_88", "Sensor_89", "Sensor_90", "Sensor_91", "Sensor_92", "Sensor_93", "Sensor_94", "Sensor_95", "Sensor_96", "Sensor_97", "Sensor_98", "Sensor_99", "Sensor_100", "Sensor_101", "Sensor_102", "Sensor_103", "Sensor_104", "Sensor_105", "Sensor_106", "Sensor_107", "Sensor_108", "Sensor_109", "Sensor_110", "Sensor_111", "Sensor_112", "Sensor_113", "Sensor_114", "Sensor_115", "Sensor_116", "Sensor_117", "Sensor_118", "Sensor_119", "Sensor_120", "Sensor_121", "Sensor_122", "Sensor_123", "Sensor_124", "Sensor_125", "Sensor_126", "Sensor_127", "Sensor_128", "Sensor_129", "Sensor_130", "Sensor_131", "Sensor_132", "Sensor_133", "Sensor_134", "Sensor_135", "Sensor_136", "Sensor_137", "Sensor_138", "Sensor_139", "Sensor_140", "Sensor_141", "Sensor_142", "Sensor_143", "Sensor_144", "Sensor_145", "Sensor_146", "Sensor_147", "Sensor_148", "Sensor_149", "Sensor_150", "Sensor_151", "Sensor_152", "Sensor_153", "Sensor_154", "Sensor_155", "Sensor_156", "Sensor_157", "Sensor_158", "Sensor_159", "Sensor_160", "Sensor_161", "Sensor_162", "Sensor_163", "Sensor_164", "Sensor_165", "Sensor_166", "Sensor_167", "Sensor_168"]
--------------------------------------------------------------------------------
/gg_discovery_api.py:
--------------------------------------------------------------------------------
1 | #
2 | # gg-discovery-api.py
3 | #
4 | # python class for the Greengrass Discovery API
5 | # Returns the response document for a given thing.
6 | # Can be used to get the root-ca for a GG-Core.
7 | #
8 | # Documentation: http://docs.aws.amazon.com/greengrass/latest/developerguide/gg-discover-api.html
9 | #
10 | # Create a thing with e.g. gg-discover in AWS IoT and download key/cert file
11 | # The policy mentioned in the official documentation did not work for me but
12 | # the following policy did the job:
13 | #
14 | # {
15 | # "Version": "2012-10-17",
16 | # "Statement": [
17 | # {
18 | # "Effect": "Allow",
19 | # "Action": [
20 | # "greengrass:Discover"
21 | # ],
22 | # "Resource": "*"
23 | # }
24 | # ]
25 | # }
26 |
27 | # usage:
28 | # discovery = GGDiscovery(THING_NAME,
29 | # IOT_ENDPOINT,
30 | # PORT, ROOT_CA_FILE,
31 | # THING_CERT_FILE, THING_KEY_FILE)
32 | # print("discovery url: " + discovery.url)
33 | # (status, response_document) = discovery.discovery()
34 | # print("status: " + str(status))
35 | # print("response_document: " + json.dumps(response_document, indent=4))
36 |
37 |
38 |
39 | import json
40 | import logging
41 | import urllib3
42 | import re
43 | import sys
44 |
45 |
46 | class GGDiscovery:
47 |
48 | def __init__(self, ggad, iot_host, iot_port, ca_cert, cert, key):
49 | self.ggad = ggad
50 | self.iot_host = iot_host
51 | self.iot_port = iot_port
52 | self.ca_cert = ca_cert
53 | self.cert = cert
54 | self.key = key
55 | self.proxy = ""
56 | self.url = "https://" + iot_host + ":" + str(iot_port) + "/greengrass/discover/thing/" + ggad
57 |
58 | def discovery(self):
59 | http = ""
60 | if not self.proxy:
61 | http = urllib3.PoolManager(
62 | ca_certs=self.ca_cert,
63 | cert_reqs='CERT_REQUIRED',
64 | key_file=self.key,
65 | cert_file=self.cert)
66 | else:
67 | http = urllib3.ProxyManager(
68 | self.proxy,
69 | ca_certs=self.ca_cert,
70 | cert_reqs='CERT_REQUIRED',
71 | key_file=self.key,
72 | cert_file=self.cert)
73 |
74 | r = http.request('GET', self.url)
75 | self.status = str(r.status)
76 | self.response_document = json.loads(r.data.decode())
77 |
78 | return(self.status, self.response_document)
79 |
80 |     def num_gggroups(self):
81 |         # number of Greengrass groups in the discovery response
82 |         return len(self.response_document['GGGroups'])
83 | 
84 |     def num_cas(self):
85 |         # number of group CAs across all groups in the discovery response
86 |         return sum(len(group['CAs']) for group in self.response_document['GGGroups'])
--------------------------------------------------------------------------------
/greengrasssdk/IoTDataPlane.py:
--------------------------------------------------------------------------------
1 | #
2 | # Copyright 2010-2016 Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 | #
4 |
5 | import base64
6 | import json
7 | import logging
8 |
9 | from greengrasssdk import Lambda
10 | from greengrass_common.env_vars import SHADOW_FUNCTION_ARN, ROUTER_FUNCTION_ARN, MY_FUNCTION_ARN
11 |
12 | # Log messages in the SDK are part of the customer's log because they're helpful for debugging
13 | # the customer's lambdas. Since we configured the root logger to log to the customer's log and set
14 | # the propagate flag of this logger to True, the log messages submitted from this logger will be
15 | # sent to the customer's local CloudWatch handler.
16 | customer_logger = logging.getLogger(__name__)
17 | customer_logger.propagate = True
18 |
19 |
20 | class ShadowError(Exception):
21 | pass
22 |
23 |
24 | class Client:
25 | def __init__(self):
26 | self.lambda_client = Lambda.Client()
27 |
28 | def get_thing_shadow(self, **kwargs):
29 | r"""
30 | Call shadow lambda to obtain current shadow state.
31 |
32 | :Keyword Arguments:
33 | * *thingName* (``string``) --
34 | [REQUIRED]
35 | The name of the thing.
36 |
37 | :returns: (``dict``) --
38 | The output from the GetThingShadow operation
39 | * *payload* (``bytes``) --
40 | The state information, in JSON format.
41 | """
42 | thing_name = self._get_required_parameter('thingName', **kwargs)
43 | payload = b''
44 |
45 | return self._shadow_op('get', thing_name, payload)
46 |
47 | def update_thing_shadow(self, **kwargs):
48 | r"""
49 | Updates the thing shadow for the specified thing.
50 |
51 | :Keyword Arguments:
52 | * *thingName* (``string``) --
53 | [REQUIRED]
54 | The name of the thing.
55 | * *payload* (``bytes or seekable file-like object``) --
56 | [REQUIRED]
57 | The state information, in JSON format.
58 |
59 | :returns: (``dict``) --
60 | The output from the UpdateThingShadow operation
61 | * *payload* (``bytes``) --
62 | The state information, in JSON format.
63 | """
64 | thing_name = self._get_required_parameter('thingName', **kwargs)
65 | payload = self._get_required_parameter('payload', **kwargs)
66 |
67 | return self._shadow_op('update', thing_name, payload)
68 |
69 | def delete_thing_shadow(self, **kwargs):
70 | r"""
71 | Deletes the thing shadow for the specified thing.
72 |
73 | :Keyword Arguments:
74 | * *thingName* (``string``) --
75 | [REQUIRED]
76 | The name of the thing.
77 |
78 | :returns: (``dict``) --
79 | The output from the DeleteThingShadow operation
80 | * *payload* (``bytes``) --
81 | The state information, in JSON format.
82 | """
83 | thing_name = self._get_required_parameter('thingName', **kwargs)
84 | payload = b''
85 |
86 | return self._shadow_op('delete', thing_name, payload)
87 |
88 | def publish(self, **kwargs):
89 | r"""
90 | Publishes state information.
91 |
92 | :Keyword Arguments:
93 | * *topic* (``string``) --
94 | [REQUIRED]
95 | The name of the MQTT topic.
96 | * *payload* (``bytes or seekable file-like object``) --
97 | The state information, in JSON format.
98 |
99 | :returns: None
100 | """
101 |
102 | topic = self._get_required_parameter('topic', **kwargs)
103 |
104 | # payload is an optional parameter
105 | payload = kwargs.get('payload', b'')
106 |
107 | function_arn = ROUTER_FUNCTION_ARN
108 | client_context = {
109 | 'custom': {
110 | 'source': MY_FUNCTION_ARN,
111 | 'subject': topic
112 | }
113 | }
114 |
115 | customer_logger.debug('Publishing message on topic "{}" with Payload "{}"'.format(topic, payload))
116 | self.lambda_client._invoke_internal(
117 | function_arn,
118 | payload,
119 | base64.b64encode(json.dumps(client_context).encode()),
120 | 'Event'
121 | )
122 |
123 | def _get_required_parameter(self, parameter_name, **kwargs):
124 | if parameter_name not in kwargs:
125 | raise ValueError('Parameter "{parameter_name}" is a required parameter but was not provided.'.format(
126 | parameter_name=parameter_name
127 | ))
128 | return kwargs[parameter_name]
129 |
130 | def _shadow_op(self, op, thing_name, payload):
131 | topic = '$aws/things/{thing_name}/shadow/{op}'.format(thing_name=thing_name, op=op)
132 | function_arn = SHADOW_FUNCTION_ARN
133 | client_context = {
134 | 'custom': {
135 | 'subject': topic
136 | }
137 | }
138 |
139 | customer_logger.debug('Calling shadow service on topic "{}" with payload "{}"'.format(topic, payload))
140 | response = self.lambda_client._invoke_internal(
141 | function_arn,
142 | payload,
143 | base64.b64encode(json.dumps(client_context).encode())
144 | )
145 |
146 | payload = response['Payload'].read()
147 | if response:
148 | response_payload_map = json.loads(payload.decode('utf-8'))
149 | if 'code' in response_payload_map and 'message' in response_payload_map:
150 | raise ShadowError('Request for shadow state returned error code {} with message "{}"'.format(
151 | response_payload_map['code'], response_payload_map['message']
152 | ))
153 |
154 | return {'payload': payload}
155 |
--------------------------------------------------------------------------------
/greengrasssdk/Lambda.py:
--------------------------------------------------------------------------------
1 | #
2 | # Copyright 2010-2016 Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 | #
4 |
5 | import logging
6 | import re
7 |
8 | from io import BytesIO
9 |
10 | from greengrass_common.function_arn_fields import FunctionArnFields
11 | from greengrass_ipc_python_sdk.ipc_client import IPCClient, IPCException
12 | from greengrasssdk.utils.testing import mock
13 |
14 | # Log messages in the SDK are part of the customer's log because they're helpful for debugging
15 | # the customer's lambdas. Since we configured the root logger to log to the customer's log and set
16 | # the propagate flag of this logger to True, the log messages submitted from this logger will be
17 | # sent to the customer's local CloudWatch handler.
18 | customer_logger = logging.getLogger(__name__)
19 | customer_logger.propagate = True
20 |
21 | valid_base64_regex = '^([A-Za-z0-9+/]{4})*([A-Za-z0-9+/]{4}|[A-Za-z0-9+/]{3}=|[A-Za-z0-9+/]{2}==)$'
22 |
23 |
24 | class InvocationException(Exception):
25 | pass
26 |
27 |
28 | class Client:
29 | def __init__(self, endpoint='localhost', port=8000):
30 | """
31 | :param endpoint: Endpoint used to connect to IPC.
32 | :type endpoint: str
33 |
34 | :param port: Port number used to connect to the :code:`endpoint`.
35 | :type port: int
36 | """
37 | self.ipc = IPCClient(endpoint=endpoint, port=port)
38 |
39 | def invoke(self, **kwargs):
40 |
41 | # FunctionName is a required parameter
42 | if 'FunctionName' not in kwargs:
43 | raise ValueError(
44 | '"FunctionName" argument of Lambda.Client.invoke is a required argument but was not provided.'
45 | )
46 |
47 | arn_fields = FunctionArnFields(kwargs['FunctionName'])
48 | arn_qualifier = arn_fields.qualifier
49 |
50 | # A Function qualifier can be provided as part of the ARN in FunctionName, or it can be provided here. The
51 | # behavior of the cloud is to throw an exception if both are specified but not equal
52 | extraneous_qualifier = kwargs.get('Qualifier', '')
53 |
54 | if extraneous_qualifier and arn_qualifier and arn_qualifier != extraneous_qualifier:
55 | raise ValueError('The derived qualifier from the function name does not match the specified qualifier.')
56 |
57 | final_qualifier = arn_qualifier if arn_qualifier else extraneous_qualifier
58 |
59 | try:
60 | # GGC v1.9.0 or newer
61 | function_arn = FunctionArnFields.build_function_arn(arn_fields.unqualified_arn, final_qualifier)
62 | except AttributeError:
63 | # older GGC version
64 | raise AttributeError('class FunctionArnFields has no attribute \'build_function_arn\'. build_function_arn '
65 | 'is introduced in GGC v1.9.0. Please check your GGC version.')
66 |
67 |         # ClientContext must be base64 if given, but is an optional parameter
68 | try:
69 | client_context = kwargs.get('ClientContext', b'').decode()
70 | except AttributeError as e:
71 | customer_logger.exception(e)
72 | raise ValueError(
73 | '"ClientContext" argument must be a byte string or support a decode method which returns a string'
74 | )
75 |
76 | if client_context:
77 | if not re.match(valid_base64_regex, client_context):
78 | raise ValueError('"ClientContext" argument of Lambda.Client.invoke must be base64 encoded.')
79 |
80 | # Payload is an optional parameter
81 | payload = kwargs.get('Payload', b'')
82 | invocation_type = kwargs.get('InvocationType', 'RequestResponse')
83 | customer_logger.debug('Invoking local lambda "{}" with payload "{}" and client context "{}"'.format(
84 | function_arn, payload, client_context))
85 |
86 | # Post the work to IPC and return the result of that work
87 | return self._invoke_internal(function_arn, payload, client_context, invocation_type)
88 |
89 | @mock
90 | def _invoke_internal(self, function_arn, payload, client_context, invocation_type="RequestResponse"):
91 | """
92 |         This private method is separate from the main, public invoke method so that other code within this SDK can
93 | give this Lambda client a raw payload/client context to invoke with, rather than having it built for them.
94 | This lets you include custom ExtensionMap_ values like subject which are needed for our internal pinned Lambdas.
95 | """
96 | customer_logger.debug('Invoking Lambda function "{}" with Greengrass Message "{}"'.format(function_arn, payload))
97 |
98 | try:
99 | invocation_id = self.ipc.post_work(function_arn, payload, client_context, invocation_type)
100 |
101 | if invocation_type == "Event":
102 | # TODO: Properly return errors based on BOTO response
103 | # https://boto3.readthedocs.io/en/latest/reference/services/lambda.html#Lambda.Client.invoke
104 | return {'Payload': b'', 'FunctionError': ''}
105 |
106 | work_result_output = self.ipc.get_work_result(function_arn, invocation_id)
107 | if not work_result_output.func_err:
108 | output_payload = StreamingBody(work_result_output.payload)
109 | else:
110 | output_payload = work_result_output.payload
111 | invoke_output = {
112 | 'Payload': output_payload,
113 | 'FunctionError': work_result_output.func_err,
114 | }
115 | return invoke_output
116 | except IPCException as e:
117 | customer_logger.exception(e)
118 | raise InvocationException('Failed to invoke function due to ' + str(e))
119 |
120 |
121 | class StreamingBody(object):
122 | """Wrapper class for http response payload
123 |
124 | This provides a consistent interface to AWS Lambda Python SDK
125 | """
126 | def __init__(self, payload):
127 | self._raw_stream = BytesIO(payload)
128 | self._amount_read = 0
129 |
130 | def read(self, amt=None):
131 | """Read at most amt bytes from the stream.
132 | If the amt argument is omitted, read all data.
133 | """
134 | chunk = self._raw_stream.read(amt)
135 | self._amount_read += len(chunk)
136 | return chunk
137 |
138 | def close(self):
139 | self._raw_stream.close()
140 |
--------------------------------------------------------------------------------
/greengrasssdk/SecretsManager.py:
--------------------------------------------------------------------------------
1 | #
2 | # Copyright 2010-2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 | #
4 |
5 | import json
6 | import logging
7 | from datetime import datetime
8 | from decimal import Decimal
9 |
10 | from greengrasssdk import Lambda
11 | from greengrass_common.env_vars import MY_FUNCTION_ARN, SECRETS_MANAGER_FUNCTION_ARN
12 |
13 | # Log messages in the SDK are part of the customer's log because they're helpful for debugging
14 | # the customer's lambdas. Since we configured the root logger to log to the customer's log and set
15 | # the propagate flag of this logger to True, the log messages submitted from this logger will be
16 | # sent to the customer's local CloudWatch handler.
17 | customer_logger = logging.getLogger(__name__)
18 | customer_logger.propagate = True
19 |
20 | KEY_NAME_PAYLOAD = 'Payload'
21 | KEY_NAME_STATUS = 'Status'
22 | KEY_NAME_MESSAGE = 'Message'
23 | KEY_NAME_SECRET_ID = 'SecretId'
24 | KEY_NAME_VERSION_ID = 'VersionId'
25 | KEY_NAME_VERSION_STAGE = 'VersionStage'
26 | KEY_NAME_CREATED_DATE = "CreatedDate"
27 |
28 |
29 | class SecretsManagerError(Exception):
30 | pass
31 |
32 |
33 | class Client:
34 | def __init__(self):
35 | self.lambda_client = Lambda.Client()
36 |
37 | def get_secret_value(self, **kwargs):
38 | r"""
39 | Call secrets manager lambda to obtain the requested secret value.
40 |
41 | :Keyword Arguments:
42 | * *SecretId* (``string``) --
43 | [REQUIRED]
44 | Specifies the secret containing the version that you want to retrieve. You can specify either the
45 | Amazon Resource Name (ARN) or the friendly name of the secret.
46 | * *VersionId* (``string``) --
47 | Specifies the unique identifier of the version of the secret that you want to retrieve. If you
48 | specify this parameter then don't specify ``VersionStage`` . If you don't specify either a
49 | ``VersionStage`` or ``SecretVersionId`` then the default is to perform the operation on the version
50 | with the ``VersionStage`` value of ``AWSCURRENT`` .
51 |
52 | This value is typically a UUID-type value with 32 hexadecimal digits.
53 | * *VersionStage* (``string``) --
54 | Specifies the secret version that you want to retrieve by the staging label attached to the
55 | version.
56 |
57 | Staging labels are used to keep track of different versions during the rotation process. If you
58 | use this parameter then don't specify ``SecretVersionId`` . If you don't specify either a
59 | ``VersionStage`` or ``SecretVersionId`` , then the default is to perform the operation on the
60 | version with the ``VersionStage`` value of ``AWSCURRENT`` .
61 |
62 | :returns: (``dict``) --
63 | * *ARN* (``string``) --
64 | The ARN of the secret.
65 | * *Name* (``string``) --
66 | The friendly name of the secret.
67 | * *VersionId* (``string``) --
68 | The unique identifier of this version of the secret.
69 | * *SecretBinary* (``bytes``) --
70 | The decrypted part of the protected secret information that was originally provided as
71 | binary data in the form of a byte array. The response parameter represents the binary data
72 | as a base64-encoded string.
73 |
74 | This parameter is not used if the secret is created by the Secrets Manager console.
75 |
76 | If you store custom information in this field of the secret, then you must code your Lambda
77 | rotation function to parse and interpret whatever you store in the ``SecretString`` or
78 | ``SecretBinary`` fields.
79 | * *SecretString* (``string``) --
80 | The decrypted part of the protected secret information that was originally provided as a
81 | string.
82 |
83 | If you create this secret by using the Secrets Manager console then only the ``SecretString``
84 | parameter contains data. Secrets Manager stores the information as a JSON structure of
85 | key/value pairs that the Lambda rotation function knows how to parse.
86 |
87 | If you store custom information in the secret by using the CreateSecret , UpdateSecret , or
88 | PutSecretValue API operations instead of the Secrets Manager console, or by using the
89 | *Other secret type* in the console, then you must code your Lambda rotation function to
90 | parse and interpret those values.
91 | * *VersionStages* (``list``) --
92 | A list of all of the staging labels currently attached to this version of the secret.
93 | * (``string``) --
94 | * *CreatedDate* (``datetime``) --
95 | The date and time that this version of the secret was created.
96 | """
97 |
98 | secret_id = self._get_required_parameter(KEY_NAME_SECRET_ID, **kwargs)
99 | version_id = kwargs.get(KEY_NAME_VERSION_ID, '')
100 | version_stage = kwargs.get(KEY_NAME_VERSION_STAGE, '')
101 |
102 |         # check the mutually exclusive parameters first, otherwise this branch is unreachable
103 |         if version_id and version_stage:
104 |             raise ValueError('VersionId and VersionStage cannot both be specified at the same time')
105 |         if version_id:  # TODO: Remove this once we support query by VersionId
106 |             raise SecretsManagerError('Query by VersionId is not yet supported')
106 |
107 | request_payload_bytes = self._generate_request_payload_bytes(secret_id=secret_id,
108 | version_id=version_id,
109 | version_stage=version_stage)
110 |
111 | customer_logger.debug('Retrieving secret value with id "{}", version id "{}" version stage "{}"'
112 | .format(secret_id, version_id, version_stage))
113 | response = self.lambda_client._invoke_internal(
114 | SECRETS_MANAGER_FUNCTION_ARN,
115 | request_payload_bytes,
116 | b'', # We do not need client context for Secrets Manager back-end lambda
117 | ) # Use Request/Response here as we are mimicking boto3 Http APIs for SecretsManagerService
118 |
119 | payload = response[KEY_NAME_PAYLOAD].read()
120 | payload_dict = json.loads(payload.decode('utf-8'))
121 |
122 | # All customer facing errors are presented within the response payload. For example:
123 | # {
124 | # "code": 404,
125 | # "message": "Resource not found"
126 | # }
127 | if KEY_NAME_STATUS in payload_dict and KEY_NAME_MESSAGE in payload_dict:
128 | raise SecretsManagerError('Request for secret value returned error code {} with message {}'.format(
129 | payload_dict[KEY_NAME_STATUS], payload_dict[KEY_NAME_MESSAGE]
130 | ))
131 |
132 | # Time is serialized as epoch timestamp (int) upon IPC routing. We need to deserialize it back to datetime object in Python
133 | payload_dict[KEY_NAME_CREATED_DATE] = datetime.fromtimestamp(
134 | # Cloud response contains timestamp in milliseconds while datetime.fromtimestamp is expecting seconds
135 | Decimal(payload_dict[KEY_NAME_CREATED_DATE]) / Decimal(1000)
136 | )
137 |
138 | return payload_dict
139 |
140 | def _generate_request_payload_bytes(self, secret_id, version_id, version_stage):
141 | request_payload = {
142 | KEY_NAME_SECRET_ID: secret_id,
143 | }
144 | if version_stage:
145 | request_payload[KEY_NAME_VERSION_STAGE] = version_stage
146 |
147 | # TODO: Add VersionId once we support query by VersionId
148 |
149 | # The allowed chars for secret id and version stage are strictly enforced when customers are configuring them
150 | # through Secrets Manager Service in the cloud:
151 | # https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_CreateSecret.html#API_CreateSecret_RequestSyntax
152 | return json.dumps(request_payload).encode()
153 |
154 | @staticmethod
155 | def _get_required_parameter(parameter_name, **kwargs):
156 | if parameter_name not in kwargs:
157 | raise ValueError('Parameter "{parameter_name}" is a required parameter but was not provided.'.format(
158 | parameter_name=parameter_name
159 | ))
160 | return kwargs[parameter_name]
161 |
--------------------------------------------------------------------------------
/greengrasssdk/__init__.py:
--------------------------------------------------------------------------------
1 | #
2 | # Copyright 2010-2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 | #
4 |
5 | from .client import client
6 | from .Lambda import StreamingBody
7 |
8 | __version__ = '1.4.0'
9 | INTERFACE_VERSION = '1.3'
10 |
--------------------------------------------------------------------------------
/greengrasssdk/client.py:
--------------------------------------------------------------------------------
1 | #
2 | # Copyright 2010-2016 Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 | #
4 |
5 |
6 | def client(client_type, *args):
7 | if client_type == 'lambda':
8 | from .Lambda import Client
9 | elif client_type == 'iot-data':
10 | from .IoTDataPlane import Client
11 | elif client_type == 'secretsmanager':
12 | from .SecretsManager import Client
13 | else:
14 | raise Exception('Client type {} is not recognized.'.format(repr(client_type)))
15 |
16 | return Client(*args)
17 |
--------------------------------------------------------------------------------
/greengrasssdk/dskreadme.md:
--------------------------------------------------------------------------------
1 | All the python files needed to create the Greengrass sdk.
2 |
--------------------------------------------------------------------------------
/greengrasssdk/utils/.md:
--------------------------------------------------------------------------------
1 | .
2 |
--------------------------------------------------------------------------------
/greengrasssdk/utils/__init__.py:
--------------------------------------------------------------------------------
1 |
2 |
--------------------------------------------------------------------------------
/greengrasssdk/utils/testing.py:
--------------------------------------------------------------------------------
1 | #
2 | # Copyright 2010-2016 Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 | #
4 |
5 | import json
6 | from functools import wraps
7 | from greengrass_common.env_vars import MY_FUNCTION_ARN
8 |
9 |
10 | def mock(func):
11 | """
12 | mock decorates _invoke_internal by checking if MY_FUNCTION_ARN is present
13 | if MY_FUNCTION_ARN is present, the actual _invoke_internal is invoked
14 | otherwise, the mock _invoke_internal is invoked
15 | """
16 | @wraps(func)
17 | def mock_invoke_internal(self, function_arn, payload, client_context, invocation_type="RequestResponse"):
18 | if MY_FUNCTION_ARN is None:
19 | if invocation_type == 'RequestResponse':
20 | return {
21 | 'Payload': json.dumps({
22 | 'TestKey': 'TestValue'
23 | }),
24 | 'FunctionError': ''
25 | }
26 | elif invocation_type == 'Event':
27 | return {
28 | 'Payload': b'',
29 | 'FunctionError': ''
30 | }
31 | else:
32 | raise Exception('Unsupported invocation type {}'.format(invocation_type))
33 | else:
34 | return func(self, function_arn, payload, client_context, invocation_type)
35 | return mock_invoke_internal
36 |
--------------------------------------------------------------------------------
/images/AWS_C9_Open_Terminal.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-predictive-maintenance-deployed-at-edge/c77180e6aec2f8667b5ecb706b861f59ada2c64b/images/AWS_C9_Open_Terminal.png
--------------------------------------------------------------------------------
/images/AWS_C9_Show_Home.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-predictive-maintenance-deployed-at-edge/c77180e6aec2f8667b5ecb706b861f59ada2c64b/images/AWS_C9_Show_Home.png
--------------------------------------------------------------------------------
/images/Cloud9IDE.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-predictive-maintenance-deployed-at-edge/c77180e6aec2f8667b5ecb706b861f59ada2c64b/images/Cloud9IDE.png
--------------------------------------------------------------------------------
/images/IOT-ML-end2end.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-predictive-maintenance-deployed-at-edge/c77180e6aec2f8667b5ecb706b861f59ada2c64b/images/IOT-ML-end2end.png
--------------------------------------------------------------------------------
/images/IoT-arch.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-predictive-maintenance-deployed-at-edge/c77180e6aec2f8667b5ecb706b861f59ada2c64b/images/IoT-arch.png
--------------------------------------------------------------------------------
/images/ML-arch.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-predictive-maintenance-deployed-at-edge/c77180e6aec2f8667b5ecb706b861f59ada2c64b/images/ML-arch.png
--------------------------------------------------------------------------------
/images/Stepfunctions.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-predictive-maintenance-deployed-at-edge/c77180e6aec2f8667b5ecb706b861f59ada2c64b/images/Stepfunctions.png
--------------------------------------------------------------------------------
/images/cloudformation-launch-stack.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-predictive-maintenance-deployed-at-edge/c77180e6aec2f8667b5ecb706b861f59ada2c64b/images/cloudformation-launch-stack.png
--------------------------------------------------------------------------------
/predictlambda.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 | # -*- coding: utf-8 -*-
3 | """
4 | Created on Wed Sep 18 22:24:45 2019
5 |
6 | @author: stenatu
7 |
8 | # Edit this lambda function which invokes your trained XgBoost Model deployed
9 | # on the Greengrass Core to make predictions whenever new sensor data comes in.
10 | # The output of the lambda predictions is sent to IoT. If a Faulty part is found,
11 | # the output is sent to SNS.
12 |
13 | # To get this lambda function to work, fill out the TODOs.
14 | """
15 |
16 | #
17 | # Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
18 | #
19 |
20 |
21 |
22 | from datetime import datetime
23 | import greengrasssdk
24 | import platform
25 | import boto3
26 | import random
27 | import json
28 | import xgboost as xgb
29 | import pickle
30 |
31 | # Creating a greengrass core sdk client
32 | client = greengrasssdk.client('iot-data')
33 | # Retrieving platform information to send from Greengrass Core
34 | my_platform = platform.platform()
35 | sns = boto3.client('sns')
36 | model_path = '/greengrass-machine-learning/xgboost/xgboost-model'
37 | TOPIC_ARN = 'arn:aws:sns:us-east-1:389535300735:Faulty_Sensor_MB' #TODO: enter your SNS Topic ARN here.
38 | LAMBDA_TOPIC = 'xgboost/offline' #TODO: enter your subscription topic from Lambda to IoT
39 |
40 | print("Imports invoked")
41 |
42 |
45 | # Load the model object.
46 | model = pickle.load(open(model_path, 'rb'))
47 |
48 | # TODO: Complete this helper function which gets invoked every time
49 | # the lambda function is triggered by the IoT device.
50 |
51 | # As we saw in the model training, the dataset includes 168 total features
52 | # coming from different sensors. Here we simulate a single datapoint from the
53 | # IoT device and generate the other 167 random numbers for the model to make
54 | # a prediction.
55 |
56 |
57 | def predict_part(datapoint):
58 | 
59 |     # Simulate the remaining 167 sensor features around the real datapoint.
60 |     data = [random.uniform(-1, 1)/10 for _ in range(167)]
61 |     data = [datapoint] + data
62 | 
63 |     start = datetime.now()
64 |     print(start)
65 | 
66 |     # DMatrix expects a 2-D array, so wrap the single row in a list of rows.
67 |     response = model.predict(xgb.DMatrix([data]))
68 | 
69 |     end = datetime.now()
70 | 
71 |     mytime = (end - start).total_seconds()*1000
72 | 
73 |     print("Offline Model RunTime = {} milliseconds".format(mytime))
74 | 
75 |     # Round the predicted probability to a 0/1 label.
76 |     pred = int(round(response[0]))
77 |     print(pred)
80 |
81 |     # If Prediction == 1, then part is Faulty, else it is Not Faulty.
82 |     if pred == 1:
83 | predicted_label = 'Faulty'
84 | else:
85 | predicted_label = 'Not Faulty'
86 |
87 | #publish results to local greengrass topic.
88 | if not my_platform:
89 | client.publish(topic=LAMBDA_TOPIC, payload='Predicted Label {} in {} milliseconds'.format(predicted_label, mytime))
90 | else:
91 | client.publish(topic=LAMBDA_TOPIC, payload=' Predicted Label {} in {} milliseconds. Sent from Greengrass Core running on platform: {}'.format(predicted_label, mytime, my_platform))
92 |
93 | #publish to SNS topic.
94 | if pred == 1:
95 | response = sns.publish(
96 | TopicArn=TOPIC_ARN,
97 | Message='Faulty Part Found on Line 1. Immediate attention required.'
98 | )
99 | print("Published to Topic")
100 |
101 | # This is the handler that will be invoked.
102 |
103 | def lambda_handler(event, context):
104 | # load the datapoint generated by the IoT device.
105 | datapoint = json.loads(event["state"]["desired"]["property"])
106 | return predict_part(datapoint)
--------------------------------------------------------------------------------