├── .gitignore
├── CODE_OF_CONDUCT.md
├── CONTRIBUTING.md
├── LICENSE
├── NOTICE
├── README.md
├── analyze_model_predictions.ipynb
├── code
│   ├── custom_hook.py
│   ├── pretrained_model.py
│   ├── pretrained_model_with_debugger_hook.py
│   └── requirements.txt
├── docker
│   ├── Dockerfile
│   └── evaluation.py
├── images
│   ├── image.png
│   └── screenshot.png
├── model
│   └── model.pt
└── utils.py
/.gitignore:
--------------------------------------------------------------------------------
1 | #### Project Specific:
2 |
3 | advbox/
4 | adversarial_examples/
5 | GTSRB/
6 | **.csv
7 | **.tar.gz
8 | **.zip
9 |
10 | #### Python Generic:
11 |
12 | # Byte-compiled / optimized / DLL files
13 | __pycache__/
14 | *.py[cod]
15 | *$py.class
16 |
17 | # C extensions
18 | *.so
19 |
20 | # Distribution / packaging
21 | .Python
22 | build/
23 | develop-eggs/
24 | dist/
25 | downloads/
26 | eggs/
27 | .eggs/
28 | lib/
29 | lib64/
30 | parts/
31 | sdist/
32 | var/
33 | wheels/
34 | pip-wheel-metadata/
35 | share/python-wheels/
36 | *.egg-info/
37 | .installed.cfg
38 | *.egg
39 | MANIFEST
40 |
41 | # PyInstaller
42 | # Usually these files are written by a python script from a template
43 | # before PyInstaller builds the exe, so as to inject date/other infos into it.
44 | *.manifest
45 | *.spec
46 |
47 | # Installer logs
48 | pip-log.txt
49 | pip-delete-this-directory.txt
50 |
51 | # Unit test / coverage reports
52 | htmlcov/
53 | .tox/
54 | .nox/
55 | .coverage
56 | .coverage.*
57 | .cache
58 | nosetests.xml
59 | coverage.xml
60 | *.cover
61 | *.py,cover
62 | .hypothesis/
63 | .pytest_cache/
64 |
65 | # Translations
66 | *.mo
67 | *.pot
68 |
69 | # Django stuff:
70 | *.log
71 | local_settings.py
72 | db.sqlite3
73 | db.sqlite3-journal
74 |
75 | # Flask stuff:
76 | instance/
77 | .webassets-cache
78 |
79 | # Scrapy stuff:
80 | .scrapy
81 |
82 | # Sphinx documentation
83 | docs/_build/
84 |
85 | # PyBuilder
86 | target/
87 |
88 | # Jupyter Notebook
89 | .ipynb_checkpoints
90 |
91 | # IPython
92 | profile_default/
93 | ipython_config.py
94 |
95 | # pyenv
96 | .python-version
97 |
98 | # pipenv
99 | # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
100 | # However, in case of collaboration, if having platform-specific dependencies or dependencies
101 | # having no cross-platform support, pipenv may install dependencies that don't work, or not
102 | # install all needed dependencies.
103 | #Pipfile.lock
104 |
105 | # PEP 582; used by e.g. github.com/David-OConnor/pyflow
106 | __pypackages__/
107 |
108 | # Celery stuff
109 | celerybeat-schedule
110 | celerybeat.pid
111 |
112 | # SageMath parsed files
113 | *.sage.py
114 |
115 | # Environments
116 | .env
117 | .venv
118 | env/
119 | venv/
120 | ENV/
121 | env.bak/
122 | venv.bak/
123 |
124 | # Spyder project settings
125 | .spyderproject
126 | .spyproject
127 |
128 | # Rope project settings
129 | .ropeproject
130 |
131 | # mkdocs documentation
132 | /site
133 |
134 | # mypy
135 | .mypy_cache/
136 | .dmypy.json
137 | dmypy.json
138 |
139 | # Pyre type checker
140 | .pyre/
141 |
--------------------------------------------------------------------------------
/CODE_OF_CONDUCT.md:
--------------------------------------------------------------------------------
1 | ## Code of Conduct
2 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct).
3 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact
4 | opensource-codeofconduct@amazon.com with any additional questions or comments.
5 |
--------------------------------------------------------------------------------
/CONTRIBUTING.md:
--------------------------------------------------------------------------------
1 | # Contributing Guidelines
2 |
3 | Thank you for your interest in contributing to our project. Whether it's a bug report, new feature, correction, or additional
4 | documentation, we greatly value feedback and contributions from our community.
5 |
6 | Please read through this document before submitting any issues or pull requests to ensure we have all the necessary
7 | information to effectively respond to your bug report or contribution.
8 |
9 |
10 | ## Reporting Bugs/Feature Requests
11 |
12 | We welcome you to use the GitHub issue tracker to report bugs or suggest features.
13 |
14 | When filing an issue, please check existing open, or recently closed, issues to make sure somebody else hasn't already
15 | reported the issue. Please try to include as much information as you can. Details like these are incredibly useful:
16 |
17 | * A reproducible test case or series of steps
18 | * The version of our code being used
19 | * Any modifications you've made relevant to the bug
20 | * Anything unusual about your environment or deployment
21 |
22 |
23 | ## Contributing via Pull Requests
24 | Contributions via pull requests are much appreciated. Before sending us a pull request, please ensure that:
25 |
26 | 1. You are working against the latest source on the *master* branch.
27 | 2. You check existing open, and recently merged, pull requests to make sure someone else hasn't addressed the problem already.
28 | 3. You open an issue to discuss any significant work - we would hate for your time to be wasted.
29 |
30 | To send us a pull request, please:
31 |
32 | 1. Fork the repository.
33 | 2. Modify the source; please focus on the specific change you are contributing. If you also reformat all the code, it will be hard for us to focus on your change.
34 | 3. Ensure local tests pass.
35 | 4. Commit to your fork using clear commit messages.
36 | 5. Send us a pull request, answering any default questions in the pull request interface.
37 | 6. Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation.
38 |
39 | GitHub provides additional documentation on [forking a repository](https://help.github.com/articles/fork-a-repo/) and
40 | [creating a pull request](https://help.github.com/articles/creating-a-pull-request/).
41 |
42 |
43 | ## Finding contributions to work on
44 | Looking at the existing issues is a great way to find something to contribute. As our projects use the default GitHub issue labels (enhancement/bug/duplicate/help wanted/invalid/question/wontfix), any 'help wanted' issues are a great place to start.
45 |
46 |
47 | ## Code of Conduct
48 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct).
49 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact
50 | opensource-codeofconduct@amazon.com with any additional questions or comments.
51 |
52 |
53 | ## Security issue notifications
54 | If you discover a potential security issue in this project, we ask that you notify AWS/Amazon Security via our [vulnerability reporting page](http://aws.amazon.com/security/vulnerability-reporting/). Please do **not** create a public GitHub issue.
55 |
56 |
57 | ## Licensing
58 |
59 | See the [LICENSE](LICENSE) file for our project's licensing. We will ask you to confirm the licensing of your contribution.
60 |
61 | We may ask you to sign a [Contributor License Agreement (CLA)](http://en.wikipedia.org/wiki/Contributor_License_Agreement) for larger changes.
62 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 |
2 | Apache License
3 | Version 2.0, January 2004
4 | http://www.apache.org/licenses/
5 |
6 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
7 |
8 | 1. Definitions.
9 |
10 | "License" shall mean the terms and conditions for use, reproduction,
11 | and distribution as defined by Sections 1 through 9 of this document.
12 |
13 | "Licensor" shall mean the copyright owner or entity authorized by
14 | the copyright owner that is granting the License.
15 |
16 | "Legal Entity" shall mean the union of the acting entity and all
17 | other entities that control, are controlled by, or are under common
18 | control with that entity. For the purposes of this definition,
19 | "control" means (i) the power, direct or indirect, to cause the
20 | direction or management of such entity, whether by contract or
21 | otherwise, or (ii) ownership of fifty percent (50%) or more of the
22 | outstanding shares, or (iii) beneficial ownership of such entity.
23 |
24 | "You" (or "Your") shall mean an individual or Legal Entity
25 | exercising permissions granted by this License.
26 |
27 | "Source" form shall mean the preferred form for making modifications,
28 | including but not limited to software source code, documentation
29 | source, and configuration files.
30 |
31 | "Object" form shall mean any form resulting from mechanical
32 | transformation or translation of a Source form, including but
33 | not limited to compiled object code, generated documentation,
34 | and conversions to other media types.
35 |
36 | "Work" shall mean the work of authorship, whether in Source or
37 | Object form, made available under the License, as indicated by a
38 | copyright notice that is included in or attached to the work
39 | (an example is provided in the Appendix below).
40 |
41 | "Derivative Works" shall mean any work, whether in Source or Object
42 | form, that is based on (or derived from) the Work and for which the
43 | editorial revisions, annotations, elaborations, or other modifications
44 | represent, as a whole, an original work of authorship. For the purposes
45 | of this License, Derivative Works shall not include works that remain
46 | separable from, or merely link (or bind by name) to the interfaces of,
47 | the Work and Derivative Works thereof.
48 |
49 | "Contribution" shall mean any work of authorship, including
50 | the original version of the Work and any modifications or additions
51 | to that Work or Derivative Works thereof, that is intentionally
52 | submitted to Licensor for inclusion in the Work by the copyright owner
53 | or by an individual or Legal Entity authorized to submit on behalf of
54 | the copyright owner. For the purposes of this definition, "submitted"
55 | means any form of electronic, verbal, or written communication sent
56 | to the Licensor or its representatives, including but not limited to
57 | communication on electronic mailing lists, source code control systems,
58 | and issue tracking systems that are managed by, or on behalf of, the
59 | Licensor for the purpose of discussing and improving the Work, but
60 | excluding communication that is conspicuously marked or otherwise
61 | designated in writing by the copyright owner as "Not a Contribution."
62 |
63 | "Contributor" shall mean Licensor and any individual or Legal Entity
64 | on behalf of whom a Contribution has been received by Licensor and
65 | subsequently incorporated within the Work.
66 |
67 | 2. Grant of Copyright License. Subject to the terms and conditions of
68 | this License, each Contributor hereby grants to You a perpetual,
69 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
70 | copyright license to reproduce, prepare Derivative Works of,
71 | publicly display, publicly perform, sublicense, and distribute the
72 | Work and such Derivative Works in Source or Object form.
73 |
74 | 3. Grant of Patent License. Subject to the terms and conditions of
75 | this License, each Contributor hereby grants to You a perpetual,
76 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
77 | (except as stated in this section) patent license to make, have made,
78 | use, offer to sell, sell, import, and otherwise transfer the Work,
79 | where such license applies only to those patent claims licensable
80 | by such Contributor that are necessarily infringed by their
81 | Contribution(s) alone or by combination of their Contribution(s)
82 | with the Work to which such Contribution(s) was submitted. If You
83 | institute patent litigation against any entity (including a
84 | cross-claim or counterclaim in a lawsuit) alleging that the Work
85 | or a Contribution incorporated within the Work constitutes direct
86 | or contributory patent infringement, then any patent licenses
87 | granted to You under this License for that Work shall terminate
88 | as of the date such litigation is filed.
89 |
90 | 4. Redistribution. You may reproduce and distribute copies of the
91 | Work or Derivative Works thereof in any medium, with or without
92 | modifications, and in Source or Object form, provided that You
93 | meet the following conditions:
94 |
95 | (a) You must give any other recipients of the Work or
96 | Derivative Works a copy of this License; and
97 |
98 | (b) You must cause any modified files to carry prominent notices
99 | stating that You changed the files; and
100 |
101 | (c) You must retain, in the Source form of any Derivative Works
102 | that You distribute, all copyright, patent, trademark, and
103 | attribution notices from the Source form of the Work,
104 | excluding those notices that do not pertain to any part of
105 | the Derivative Works; and
106 |
107 | (d) If the Work includes a "NOTICE" text file as part of its
108 | distribution, then any Derivative Works that You distribute must
109 | include a readable copy of the attribution notices contained
110 | within such NOTICE file, excluding those notices that do not
111 | pertain to any part of the Derivative Works, in at least one
112 | of the following places: within a NOTICE text file distributed
113 | as part of the Derivative Works; within the Source form or
114 | documentation, if provided along with the Derivative Works; or,
115 | within a display generated by the Derivative Works, if and
116 | wherever such third-party notices normally appear. The contents
117 | of the NOTICE file are for informational purposes only and
118 | do not modify the License. You may add Your own attribution
119 | notices within Derivative Works that You distribute, alongside
120 | or as an addendum to the NOTICE text from the Work, provided
121 | that such additional attribution notices cannot be construed
122 | as modifying the License.
123 |
124 | You may add Your own copyright statement to Your modifications and
125 | may provide additional or different license terms and conditions
126 | for use, reproduction, or distribution of Your modifications, or
127 | for any such Derivative Works as a whole, provided Your use,
128 | reproduction, and distribution of the Work otherwise complies with
129 | the conditions stated in this License.
130 |
131 | 5. Submission of Contributions. Unless You explicitly state otherwise,
132 | any Contribution intentionally submitted for inclusion in the Work
133 | by You to the Licensor shall be under the terms and conditions of
134 | this License, without any additional terms or conditions.
135 | Notwithstanding the above, nothing herein shall supersede or modify
136 | the terms of any separate license agreement you may have executed
137 | with Licensor regarding such Contributions.
138 |
139 | 6. Trademarks. This License does not grant permission to use the trade
140 | names, trademarks, service marks, or product names of the Licensor,
141 | except as required for reasonable and customary use in describing the
142 | origin of the Work and reproducing the content of the NOTICE file.
143 |
144 | 7. Disclaimer of Warranty. Unless required by applicable law or
145 | agreed to in writing, Licensor provides the Work (and each
146 | Contributor provides its Contributions) on an "AS IS" BASIS,
147 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
148 | implied, including, without limitation, any warranties or conditions
149 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
150 | PARTICULAR PURPOSE. You are solely responsible for determining the
151 | appropriateness of using or redistributing the Work and assume any
152 | risks associated with Your exercise of permissions under this License.
153 |
154 | 8. Limitation of Liability. In no event and under no legal theory,
155 | whether in tort (including negligence), contract, or otherwise,
156 | unless required by applicable law (such as deliberate and grossly
157 | negligent acts) or agreed to in writing, shall any Contributor be
158 | liable to You for damages, including any direct, indirect, special,
159 | incidental, or consequential damages of any character arising as a
160 | result of this License or out of the use or inability to use the
161 | Work (including but not limited to damages for loss of goodwill,
162 | work stoppage, computer failure or malfunction, or any and all
163 | other commercial damages or losses), even if such Contributor
164 | has been advised of the possibility of such damages.
165 |
166 | 9. Accepting Warranty or Additional Liability. While redistributing
167 | the Work or Derivative Works thereof, You may choose to offer,
168 | and charge a fee for, acceptance of support, warranty, indemnity,
169 | or other liability obligations and/or rights consistent with this
170 | License. However, in accepting such obligations, You may act only
171 | on Your own behalf and on Your sole responsibility, not on behalf
172 | of any other Contributor, and only if You agree to indemnify,
173 | defend, and hold each Contributor harmless for any liability
174 | incurred by, or claims asserted against, such Contributor by reason
175 | of your accepting any such warranty or additional liability.
176 |
--------------------------------------------------------------------------------
/NOTICE:
--------------------------------------------------------------------------------
1 | Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved.
2 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | ## Detecting and analyzing incorrect model predictions with Amazon SageMaker Model Monitor and Debugger
2 |
3 | This repository contains the notebook and scripts for the blog post "Detecting and analyzing incorrect model predictions with Amazon SageMaker Model Monitor and Debugger".
4 |
5 | [Create a SageMaker notebook instance](https://docs.aws.amazon.com/sagemaker/latest/dg/howitworks-create-ws.html) and clone the repository:
6 | ```
7 | git clone git@github.com:aws-samples/amazon-sagemaker-analyze-model-predictions.git
8 | ```
9 |
10 | In the notebook [analyze_model_predictions.ipynb](analyze_model_predictions.ipynb) we first deploy a [ResNet18](https://arxiv.org/abs/1512.03385) model that has been trained to distinguish between 43 categories of traffic signs using the [German Traffic Sign dataset](https://ieeexplore.ieee.org/document/6033395).
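
A minimal sketch of that deployment step, assuming `model_artifact` is the S3 URI of the packaged `model.tar.gz` and `role` is a SageMaker execution role ARN (both hypothetical names defined in the notebook; the framework version shown is illustrative):

```python
from sagemaker.pytorch import PyTorchModel

model = PyTorchModel(
    model_data=model_artifact,          # assumption: S3 URI of model.tar.gz containing model/model.pt
    role=role,                          # assumption: SageMaker execution role ARN
    entry_point='pretrained_model.py',  # inference overrides from the code/ directory
    source_dir='code',
    framework_version='1.3.1',          # assumption: a PyTorch version contemporary with this sample
    py_version='py3',
)
predictor = model.deploy(initial_instance_count=1, instance_type='ml.m5.xlarge')
```
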
11 |
12 | We will set up [SageMaker Model Monitor](https://aws.amazon.com/blogs/aws/amazon-sagemaker-model-monitor-fully-managed-automatic-monitoring-for-your-machine-learning-models/) to automatically capture inference requests and predictions.
13 | Afterwards, we launch a [monitoring schedule](https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor-scheduling.html) that periodically kicks off a custom processing job to inspect the collected data and detect unexpected model behavior.
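
A sketch of that setup with the SageMaker Python SDK, assuming `capture_s3_uri` and `results_s3_uri` are S3 destinations and `monitor_image_uri` is the ECR URI of the image built from `docker/` (all hypothetical names; parameter values are illustrative):

```python
from sagemaker.model_monitor import (
    CronExpressionGenerator,
    DataCaptureConfig,
    ModelMonitor,
    MonitoringOutput,
)

# Capture requests/responses by passing this as data_capture_config=... to model.deploy():
data_capture_config = DataCaptureConfig(
    enable_capture=True,
    sampling_percentage=100,
    destination_s3_uri=capture_s3_uri,   # assumption: S3 prefix for captured data
)

# Run the custom evaluation container (docker/evaluation.py) on a schedule:
monitor = ModelMonitor(
    role=role,
    image_uri=monitor_image_uri,         # assumption: ECR URI of the image built from docker/
    instance_count=1,
    instance_type='ml.m5.xlarge',
    env={'THRESHOLD': '0.5'},            # max allowed ratio for any single predicted class
)
monitor.create_monitoring_schedule(
    endpoint_input=predictor.endpoint_name,
    output=MonitoringOutput(source='/opt/ml/processing/resultdata', destination=results_s3_uri),
    schedule_cron_expression=CronExpressionGenerator.hourly(),
)
```
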
14 |
15 | We will then create adversarial images that lead the model to make incorrect predictions. Once Model Monitor detects this issue, we will use [SageMaker Debugger](https://aws.amazon.com/blogs/aws/amazon-sagemaker-debugger-debug-your-machine-learning-models/) to obtain visual explanations of the deployed model: we update the endpoint to emit tensors during inference, and then use those tensors to compute saliency maps.
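
Once the updated endpoint exports tensors, they can be read back with an `smdebug` trial. A sketch, assuming `tensors_output_s3uri` matches the endpoint's `tensors_output` environment variable (tensor names depend on the hook configuration, so list them first):

```python
from smdebug import modes
from smdebug.trials import create_trial

trial = create_trial(tensors_output_s3uri)  # assumption: same URI as the tensors_output env var
print(trial.tensor_names())                 # e.g. image gradients, BatchNorm stats, fc outputs

# Fetch one tensor captured during inference (PREDICT mode) at the first step:
name = trial.tensor_names()[0]
value = trial.tensor(name).value(step_num=0, mode=modes.PREDICT)
```
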
16 |
17 | The saliency map can be rendered as a heat-map and reveals the parts of an image that were critical to the prediction. Below is an example (taken from the [German Traffic Sign dataset](http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset)): the image on the left is the input to the fine-tuned ResNet model, which predicted image class 25 (‘Road work’). The image on the right shows the input image overlaid with a heat-map, where red indicates the most relevant and blue the least relevant pixels for predicting class 25.
18 |
19 | ![Example input image and saliency heat-map overlay](images/image.png)
20 |
21 |
22 |
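For reference, one common way to reduce raw image gradients to such a normalized 0-1 saliency map, which `utils.plot_saliency_map` expects (a sketch; the notebook's exact method may differ):

```python
import numpy as np

def saliency_from_gradients(image_gradients):
    """Reduce per-channel image gradients (C, H, W) to a normalized (H, W) saliency map."""
    saliency = np.max(np.abs(image_gradients), axis=0)   # strongest channel response per pixel
    span = saliency.max() - saliency.min()
    return (saliency - saliency.min()) / (span + 1e-12)  # scale into the 0-1 range
```
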
23 | ## License
24 |
25 | This project is licensed under the Apache-2.0 License.
26 |
27 |
--------------------------------------------------------------------------------
/code/custom_hook.py:
--------------------------------------------------------------------------------
1 | import smdebug.pytorch as smd
2 | import torch
3 |
4 | class CustomHook(smd.Hook):
5 |
6 | def image_gradients(self, image):
7 | """Register input image for backward pass, to get image gradients"""
8 | image.register_hook(self.backward_hook("image"))
9 |
10 | def forward_hook(self, module, inputs, outputs):
11 | module_name = self.module_maps[module]
12 | self._write_inputs(module_name, inputs)
13 |
14 | outputs.register_hook(self.backward_hook(module_name + "_output"))
15 |
16 | # Record the running mean and variance of BatchNorm layers
17 | if isinstance(module, torch.nn.BatchNorm2d):
18 | self._write_outputs(module_name + ".running_mean", module.running_mean)
19 | self._write_outputs(module_name + ".running_var", module.running_var)
20 |
21 | self._write_outputs(module_name, outputs)
22 | self.last_saved_step = self.step
23 |
--------------------------------------------------------------------------------
/code/pretrained_model.py:
--------------------------------------------------------------------------------
1 | '''SageMaker PyTorch inference container overrides to serve plain ResNet18 model'''
2 |
3 | # Python Built-Ins:
4 | import argparse
5 | from io import BytesIO
6 | import os
7 |
8 | # External Dependencies:
9 | import numpy as np
10 | from PIL import Image
11 | import torch
12 | import torch.nn as nn
13 | import torch.optim as optim
14 | from torchvision import models, transforms
15 |
16 |
17 | def model_fn(model_dir):
18 | #create model
19 | model = models.resnet18()
20 |
21 | #traffic sign dataset has 43 classes
22 | nfeatures = model.fc.in_features
23 | model.fc = nn.Linear(nfeatures, 43)
24 |
25 | #load model
26 | weights = torch.load(f'{model_dir}/model/model.pt', map_location=lambda storage, loc: storage)
27 | model.load_state_dict(weights)
28 |
29 | model.eval()
30 | model.cpu()
31 |
32 | return model
33 |
34 |
35 | def transform_fn(model, data, content_type, output_content_type):
36 | transform = transforms.Compose([
37 | transforms.Resize((128,128)),
38 | transforms.ToTensor(),
39 | transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
40 | ])
41 |
42 | image = np.load(BytesIO(data))
43 | image = Image.fromarray(image)
44 | image = transform(image)
45 |
46 | image = image.unsqueeze(0)
47 |
48 | #forward pass
49 | prediction = model(image)
50 |
51 | #get prediction
52 | predicted_class = prediction.data.max(1, keepdim=True)[1]
53 |
54 | response_body = np.array(predicted_class.cpu()).tolist()
55 | return response_body, output_content_type
56 |
--------------------------------------------------------------------------------
/code/pretrained_model_with_debugger_hook.py:
--------------------------------------------------------------------------------
1 | '''SageMaker PyTorch inference container overrides to serve ResNet18 model with debug hook'''
2 |
3 | # Python Built-Ins:
4 | import argparse
5 | from io import BytesIO
6 | import logging
7 | import os
8 | from typing import Any, Union
9 |
10 | # External Dependencies:
11 | import numpy as np
12 | from PIL import Image
13 | import smdebug.pytorch as smd
14 | from smdebug import modes
15 | from smdebug.core.modes import ModeKeys
16 | import torch
17 | import torch.nn as nn
18 | import torch.optim as optim
19 | from torchvision import models, transforms
20 |
21 | # Local Dependencies:
22 | from custom_hook import CustomHook
23 |
24 |
25 | logger = logging.getLogger()
26 |
27 |
28 | class ModelWithDebugHook:
29 | def __init__(self, model: Any, hook: Union[smd.Hook, None]):
30 | '''Simple container to associate a 'model' with a SageMaker debug hook'''
31 | self.model = model
32 | self.hook = hook
33 |
34 |
35 | def model_fn(model_dir: str) -> ModelWithDebugHook:
36 | #create model
37 | model = models.resnet18()
38 |
39 | #traffic sign dataset has 43 classes
40 | nfeatures = model.fc.in_features
41 | model.fc = nn.Linear(nfeatures, 43)
42 |
43 | #load model
44 | weights = torch.load(f'{model_dir}/model/model.pt', map_location=lambda storage, loc: storage)
45 | model.load_state_dict(weights)
46 |
47 | model.eval()
48 | model.cpu()
49 |
50 | #hook configuration
51 | tensors_output_s3uri = os.environ.get('tensors_output')
52 | if tensors_output_s3uri is None:
53 | logger.warning(
54 | 'WARN: Skipping hook configuration as no tensors_output env var provided. '
55 | 'Tensors will not be exported'
56 | )
57 | hook = None
58 | else:
59 | save_config = smd.SaveConfig(mode_save_configs={
60 | smd.modes.PREDICT: smd.SaveConfigMode(save_interval=1),
61 | })
62 |
63 | hook = CustomHook(
64 | tensors_output_s3uri,
65 | save_config=save_config,
66 | include_regex='.*bn|.*bias|.*downsample|.*ResNet_input|.*image|.*fc_output',
67 | )
68 |
69 | #register hook
70 | hook.register_module(model)
71 |
72 | #set mode
73 | hook.set_mode(modes.PREDICT)
74 |
75 | return ModelWithDebugHook(model, hook)
76 |
77 |
78 | def transform_fn(model_with_hook, data, content_type, output_content_type):
79 | model = model_with_hook.model
80 | hook = model_with_hook.hook
81 |
82 | val_transform = transforms.Compose([
83 | transforms.Resize((128,128)),
84 | transforms.ToTensor(),
85 | transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
86 | ])
87 |
88 | image = np.load(BytesIO(data))
89 | image = Image.fromarray(image)
90 | image = val_transform(image)
91 |
92 | image = image.unsqueeze(0)
93 | image = image.to('cpu').requires_grad_()
94 | if hook is not None:
95 | hook.image_gradients(image)
96 |
97 | #forward pass
98 | prediction = model(image)
99 |
100 | #get prediction
101 | predicted_class = prediction.data.max(1, keepdim=True)[1]
102 | output = prediction[0, predicted_class[0]]
103 | model.zero_grad()
104 |
105 | #compute gradients with respect to outputs
106 | output.backward()
107 |
108 | response_body = np.array(predicted_class.cpu()).tolist()
109 | return response_body, output_content_type
110 |
--------------------------------------------------------------------------------
/code/requirements.txt:
--------------------------------------------------------------------------------
1 | Pillow
2 | smdebug==0.5.0
3 |
--------------------------------------------------------------------------------
/docker/Dockerfile:
--------------------------------------------------------------------------------
1 | FROM python:3.7-slim-buster
2 |
3 | RUN pip3 install sagemaker
4 | ENV PYTHONUNBUFFERED=TRUE
5 |
COPY evaluation.py /
7 |
8 | ENTRYPOINT ["python3", "/evaluation.py"]
--------------------------------------------------------------------------------
/docker/evaluation.py:
--------------------------------------------------------------------------------
1 | """Custom Model Monitoring script for classification
2 | """
3 |
4 | # Python Built-Ins:
5 | from collections import defaultdict
6 | import datetime
7 | import json
8 | import os
9 | import traceback
10 | from types import SimpleNamespace
11 |
12 | # External Dependencies:
13 | import numpy as np
14 |
15 |
16 | def get_environment():
17 | """Load configuration variables for SM Model Monitoring job
18 |
19 | See https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor-byoc-contract-inputs.html
20 | """
21 | try:
22 | with open("/opt/ml/config/processingjobconfig.json", "r") as conffile:
23 | defaults = json.loads(conffile.read())["Environment"]
24 | except Exception as e:
25 | traceback.print_exc()
26 | print("Unable to read environment vars from SM processing config file")
27 | defaults = {}
28 |
29 | return SimpleNamespace(
30 | dataset_format=os.environ.get("dataset_format", defaults.get("dataset_format")),
31 | dataset_source=os.environ.get(
32 | "dataset_source",
33 | defaults.get("dataset_source", "/opt/ml/processing/input/endpoint"),
34 | ),
35 | end_time=os.environ.get("end_time", defaults.get("end_time")),
36 | output_path=os.environ.get(
37 | "output_path",
38 | defaults.get("output_path", "/opt/ml/processing/resultdata"),
39 | ),
40 | publish_cloudwatch_metrics=os.environ.get(
41 | "publish_cloudwatch_metrics",
42 | defaults.get("publish_cloudwatch_metrics", "Enabled"),
43 | ),
44 | sagemaker_endpoint_name=os.environ.get(
45 | "sagemaker_endpoint_name",
46 | defaults.get("sagemaker_endpoint_name"),
47 | ),
48 | sagemaker_monitoring_schedule_name=os.environ.get(
49 | "sagemaker_monitoring_schedule_name",
50 | defaults.get("sagemaker_monitoring_schedule_name"),
51 | ),
52 | start_time=os.environ.get("start_time", defaults.get("start_time")),
53 | max_ratio_threshold=float(os.environ.get("THRESHOLD", defaults.get("THRESHOLD", "nan"))),
54 | )
55 |
56 |
57 | if __name__ == "__main__":
58 | env = get_environment()
59 | print(f"Starting evaluation with config:\n{env}")
60 |
61 | print("Analyzing collected data...")
62 | total_record_count = 0 # Including error predictions that we can't read the response for
63 | error_record_count = 0
64 | counts = defaultdict(int) # dict defaulting to 0 when unseen keys are requested
65 | for path, directories, filenames in os.walk(env.dataset_source):
66 | for filename in filter(lambda f: f.lower().endswith(".jsonl"), filenames):
67 | with open(os.path.join(path, filename), "r") as file:
68 | for entry in file:
69 | total_record_count += 1
70 | try:
71 | response = json.loads(json.loads(entry)["captureData"]["endpointOutput"]["data"])
72 | except Exception:
73 | error_record_count += 1
74 | continue
75 |
76 | # response will typically be a 1x1 array (single-request, single output field), but we
77 | # can handle batch inference too by looping over the array:
78 | for record in response:
79 | prediction = record[0]
80 | counts[prediction] += 1
81 | print(f"Class prediction counts: {counts}")
82 |
83 | print("Calculating secondary summaries...")
84 | total_prediction_count = np.sum(list(counts.values()))
85 | max_count = float("-inf")
86 | max_class = None
87 | numeric_class_names = []
88 | for class_name, count in counts.items():
89 | try:
90 | numeric_class_names.append(class_name - 0)  # subtraction fails for non-numeric labels
91 | except TypeError:
92 | pass
93 | if count > max_count:
94 | max_count = count
95 | max_class = class_name
96 | max_class_ratio = max_count / total_prediction_count
97 | mean_numeric_label = np.average(numeric_class_names, weights=[counts[c] for c in numeric_class_names])
98 |
99 | print("Checking for constraint violations...")
100 | violations = []
101 | if max_class_ratio > env.max_ratio_threshold:
102 | violations.append({
103 | "feature_name": "PredictedClass",
104 | "constraint_check_type": "baseline_drift_check",
105 | "description": "Class {} predicted {:.2f}% of the time: Exceeded {:.2f}% threshold".format(
106 | max_class,
107 | max_class_ratio * 100,
108 | env.max_ratio_threshold * 100,
109 | ),
110 | })
111 | if error_record_count > 0:
112 | violations.append({
113 | "feature_name": "PredictedClass",
114 | # TODO: Maybe this should be missing_column_check when error_record_count == total_record_count?
115 | "constraint_check_type": "completeness_check",
116 | "description": "Could not read predicted class for {} req/res pairs ({:.2f}% of total)".format(
117 | error_record_count,
118 | error_record_count * 100 / total_record_count,
119 | ),
120 | })
121 | print(f"Violations: {violations if len(violations) else 'None'}")
122 |
123 | print("Writing violations file...")
124 | with open(os.path.join(env.output_path, "constraints_violations.json"), "w") as outfile:
125 | outfile.write(json.dumps(
126 | { "violations": violations },
127 | indent=4,
128 | ))
129 |
130 | # You could also consider writing a statistics.json and constraints.json here, per the standard results:
131 | # https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor-interpreting-results.html
132 |
133 | print("Writing overall status output...")
134 | with open("/opt/ml/output/message", "w") as outfile:
135 | if len(violations):
136 | msg = f"CompletedWithViolations: {violations[0]['description']}"
137 | else:
138 | msg = "Completed: Job completed successfully with no violations."
139 | outfile.write(msg)
140 | print(msg)
141 |
142 | if env.publish_cloudwatch_metrics == "Enabled":
143 | print("Writing CloudWatch metrics...")
144 | with open("/opt/ml/output/metrics/cloudwatch/cloudwatch_metrics.jsonl", "a+") as outfile:
145 | # One metric per line (JSONLines list of dictionaries)
146 | # Remember these metrics are aggregated in graphs, so we report them as statistics on our dataset
147 | json.dump(
148 | {
149 | "MetricName": f"feature_data_PredictedClass",
150 | "Timestamp": datetime.datetime.now().strftime("%Y-%m-%dT%H:%M:%SZ"),
151 | "Dimensions": [
152 | { "Name": "Endpoint", "Value": env.sagemaker_endpoint_name or "unknown" },
153 | {
154 | "Name": "MonitoringSchedule",
155 | "Value": env.sagemaker_monitoring_schedule_name or "unknown",
156 | },
157 | ],
158 | "StatisticValues": {
159 | "Maximum": np.max(numeric_class_names).item(),
160 | "Minimum": np.min(numeric_class_names).item(),
161 | "SampleCount": int(total_prediction_count),
162 | "Sum": np.sum(
163 | np.array(numeric_class_names)
164 | * np.array([counts[c] for c in numeric_class_names])
165 | ).item(),
166 | },
167 | },
168 | outfile
169 | )
170 | outfile.write("\n")
171 | pct_successful = (total_record_count - error_record_count) / total_record_count
172 | json.dump(
173 | {
174 | "MetricName": f"feature_non_null_PredictedClass",
175 | "Timestamp": datetime.datetime.now().strftime("%Y-%m-%dT%H:%M:%SZ"),
176 | "Dimensions": [
177 | { "Name": "Endpoint", "Value": env.sagemaker_endpoint_name or "unknown" },
178 | {
179 | "Name": "MonitoringSchedule",
180 | "Value": env.sagemaker_monitoring_schedule_name or "unknown",
181 | },
182 | ],
183 | "StatisticValues": {
184 | "Maximum": pct_successful,
185 | "Minimum": pct_successful,
186 | "SampleCount": total_record_count,
187 | "Sum": pct_successful * total_record_count,
188 | },
189 | },
190 | outfile
191 | )
192 | outfile.write("\n")
193 | # numpy types may not be JSON serializable:
194 | max_class_ratio = float(max_class_ratio)
195 | total_prediction_count = int(total_prediction_count)
196 | json.dump(
197 | {
198 | "MetricName": f"feature_baseline_drift_PredictedClass",
199 | "Timestamp": datetime.datetime.now().strftime("%Y-%m-%dT%H:%M:%SZ"),
200 | "Dimensions": [
201 | { "Name": "Endpoint", "Value": env.sagemaker_endpoint_name or "unknown" },
202 | {
203 | "Name": "MonitoringSchedule",
204 | "Value": env.sagemaker_monitoring_schedule_name or "unknown",
205 | },
206 | ],
207 | "StatisticValues": {
208 | "Maximum": max_class_ratio,
209 | "Minimum": max_class_ratio,
210 | "SampleCount": total_prediction_count,
211 | "Sum": max_class_ratio * total_prediction_count,
212 | },
213 | },
214 | outfile
215 | )
216 | outfile.write("\n")
217 | print("Done")
218 |
--------------------------------------------------------------------------------
/images/image.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-analyze-model-predictions/c3db4a3e2bf9afff57c7b17b10b17066aba9316e/images/image.png
--------------------------------------------------------------------------------
/images/screenshot.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-analyze-model-predictions/c3db4a3e2bf9afff57c7b17b10b17066aba9316e/images/screenshot.png
--------------------------------------------------------------------------------
/model/model.pt:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-analyze-model-predictions/c3db4a3e2bf9afff57c7b17b10b17066aba9316e/model/model.pt
--------------------------------------------------------------------------------
/utils.py:
--------------------------------------------------------------------------------
1 | # Python Built-Ins:
2 | import os
3 |
4 | # External Dependencies:
5 | from IPython.display import display, HTML
6 | import matplotlib.pyplot as plt
7 | import numpy as np
8 | import torch
9 | import torch.nn as nn
10 | from torchvision import datasets, models, transforms
11 |
12 | # Configuration:
13 | image_norm_mean = [0.485, 0.456, 0.406]
14 | image_norm_stddev = [0.229, 0.224, 0.225]
15 |
16 |
17 | def create_circular_mask(h, w, center=None, radius=None):
18 | """Create a boolean mask selecting a circular region (e.g. in an image)
19 |
20 | Sourced from the following StackOverflow post, with tweaks:
21 | https://stackoverflow.com/a/44874588/13352657
22 | """
23 | if center is None: # use the middle of the image
24 | center = (int(w/2), int(h/2))
25 | elif np.all(np.array(center) <= 1.): # Convert fractional to absolute
26 | center = (np.array((w, h)) * center).astype(int)
27 | if radius is None: # use the smallest distance between the center and image walls
28 | radius = min(center[0], center[1], w-center[0], h-center[1])
29 | elif radius < 1.: # Convert fractional to absolute
30 | radius = min(w, h) * radius
31 |
32 | Y, X = np.ogrid[:h, :w]
33 | dist_from_center = np.sqrt((X - center[0])**2 + (Y-center[1])**2)
34 |
35 | mask = dist_from_center <= radius
36 | return mask
37 |
38 |
39 | def get_dataloader():
40 | val_transform = transforms.Compose([
41 | transforms.Resize((128, 128)),
42 | transforms.ToTensor(),
43 | transforms.Normalize(image_norm_mean, image_norm_stddev),
44 | ])
45 | dataset = datasets.ImageFolder('GTSRB/Final_Test/', val_transform)
46 | val_dataloader = torch.utils.data.DataLoader(dataset, batch_size=1, shuffle=True)
47 |
48 | return val_dataloader
49 |
50 |
51 | def tensor_to_imgarray(image, floating_point=False):
52 | """Convert a normalized tensor or matrix as used by the model into a standard image array
53 |
54 | Parameters
55 | ----------
56 | image : Union[numpy.ndarray, torch.Tensor]
57 | A mean/std-normalized image tensor or matrix in inference format for the model
58 | floating_point : bool (Optional)
59 | Set True to skip conversion to 0-255 uint8 and return a 0-1.0 float ndarray instead
60 | """
61 | if torch.is_tensor(image):
62 | image = image.cpu().numpy()
63 | if len(image.shape) > 3:
64 | # Leading batch dimension - take first el only
65 | image = image[0]
66 |
67 | image_shape = image.shape
68 | channeldim = image_shape.index(3)
69 | result = image
70 |
71 | # Move channel to correct (trailing) dim if not already:
72 | if channeldim < (len(image_shape) - 1):
73 | result = np.moveaxis(result, channeldim, -1)
74 | image_shape = result.shape
75 | channeldim = len(image_shape) - 1
76 |
77 | # Pad mean and stddev constants to image dimensions
78 | # TODO: Simplify this when we're consistent in what the image dimensions are!
79 | # We use a loop here in case some environments use numpy<1.18 when the functionality to accept a tuple of
80 | # axes was introduced:
81 | stddev = image_norm_stddev
82 | mean = image_norm_mean
83 | for _ in range(channeldim):
84 | stddev = np.expand_dims(stddev, 0)
85 | mean = np.expand_dims(mean, 0)
86 | for _ in range(channeldim + 1, len(image_shape)):
87 | stddev = np.expand_dims(stddev, -1)
88 | mean = np.expand_dims(mean, -1)
89 | result = (result * stddev) + mean
90 | if floating_point:
91 | return np.clip(result, 0., 1.)
92 | else:
93 | return np.clip(result * 255.0, 0, 255).astype(np.uint8)
94 |
95 |
96 | def load_model():
97 | #check if GPU is available and set context
98 | device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
99 |
100 | #load model
101 | model = models.resnet18()
102 |
103 | #traffic sign dataset has 43 classes
104 | nfeatures = model.fc.in_features
105 | model.fc = nn.Linear(nfeatures, 43)
106 |
107 | weights = torch.load('model/model.pt', map_location=lambda storage, loc: storage)
108 | model.load_state_dict(weights)
109 |
110 | for param in model.parameters():
111 | param.requires_grad = False
112 |
113 | model.to(device).eval()
114 | return model
115 |
116 |
117 | def show_images_diff(image, adv_image, adv_label=None, class_names=None, cmap=None):
118 | adv_image = tensor_to_imgarray(adv_image, floating_point=True)
119 | image = tensor_to_imgarray(image, floating_point=True)
120 |
121 | fig, (ax0, ax1, ax2) = plt.subplots(ncols=3, figsize=(12, 4))
122 |
123 | ax0.imshow(image)
124 | ax0.set_title('Original')
125 | ax0.set_axis_off()
126 |
127 | ax1.imshow(adv_image)
128 | if adv_label is None:
129 | ax1.set_title('Adversarial image')
130 | else:
131 | ax1.set_title(f'Model prediction: {class_names[adv_label] if class_names else adv_label}')
132 | ax1.set_axis_off()
133 |
134 | difference = adv_image - image
135 |
136 | # If colormapping, convert RGB to single lightness channel:
137 | if cmap is not None and 3 in difference.shape:
138 | channeldim = difference.shape.index(3)
139 | rgbindices = [
140 | tuple(rgb if dim == channeldim else slice(None) for dim in range(len(difference.shape)))
141 | for rgb in range(3)
142 | ]
143 | # RGB->lightness function per PIL docs, but no need to import the lib just for this:
144 | # https://pillow.readthedocs.io/en/3.2.x/reference/Image.html#PIL.Image.Image.convert
145 | # L = R * 299/1000 + G * 587/1000 + B * 114/1000
146 | difference = (
147 | difference[rgbindices[0]] * 0.299
148 | + difference[rgbindices[1]] * 0.587
149 | + difference[rgbindices[2]] * 0.114
150 | )
151 |
152 | # Scale to a symmetric range around max absolute difference (which we print out), and map that to 0-1
153 | # for imshow. (When colormapping we could just use vmin/vmax, but this way we keep same path for both).
154 | maxdiff = abs(difference).max()
155 | difference = difference / (maxdiff * 2.0) + 0.5
156 | ax2.imshow(difference, cmap=cmap, vmin=0., vmax=1.)
157 | ax2.set_title(f'Diff ({-maxdiff:.4f} to {maxdiff:.4f})')
158 | ax2.set_axis_off()
159 | plt.tight_layout()
160 | plt.show()
161 |
162 |
163 | def plot_saliency_map(
164 | saliency_map,
165 | image,
166 | predicted_class=None,
167 | class_names=None,
168 | confidence=None,
169 | cmap=plt.cm.plasma,
170 | alpha=0.5,
171 | interest_center=(0.5, 0.5),
172 | interest_radius=0.4,
173 | max_bg_saliency_thresh=0.85,
174 | ):
175 | """Plot an image classification result with saliency map
176 |
177 | Parameters
178 | ----------
179 | saliency_map :
180 | A *normalized* (range 0-1.0) importance/saliency map matching image height and width, but with no
181 | channel dimension.
182 | image :
183 | An image with leading channel dimension, normalized values (mean + std).
184 | TODO: Parameterize the normalization rather than hard-coding?
185 | predicted_class : Any (Optional)
186 | If supplied, the saliency overlay plot will be titled to indicate which class was detected.
187 | class_names : Mapping[Any, Any] (Optional)
188 | If supplied as well as predicted_class, the saliency overlay plot title will *also* be annotated with
189 | the "name" looked up from the raw predicted_class label.
190 | confidence : float (Optional)
191 | If supplied as well as predicted_class, the saliency overlay plot title will also be annotated with
192 | the confidence score. Should be in 0-1.0 range, will be displayed as percentage.
193 | cmap : matplotlib.pyplot.colors.ColorMap (Optional)
194 | A PyPlot colormap to apply for the saliency map. Defaults to plt.cm.plasma
195 | alpha : float (Optional)
196 | Opacity of the saliency heatmap to show in the overlay image. Defaults to 0.5
197 | interest_center : Tuple[float] (Optional)
198 | Relative (w, h) center of expected interest area in image (0.5,0.5 = middle by default). Used only
199 | when interest_radius is not None
200 | interest_radius : float (Optional)
201 | Relative radius of interest circle in image (0.4 for 80% diameter by default). When this parameter is
202 | not explicitly set to None, draw a 'circle of interest' on the plots and calculate the maximum and
203 | average saliency of points *outside* this region - to check for unexpected attention focus away from
204 | the subject of the image.
205 | max_bg_saliency_thresh : float (Optional)
206 | Display a warning box in the notebook when the maximum saliency *outside* the circle of interest is
207 | >= this value.
208 | """
209 | # Revert image normalization
210 | image = tensor_to_imgarray(image, floating_point=True)
211 |
212 | # Given the saliency map has already been normalized to 0-1, we can apply pyplot colormap as below:
213 | # (Otherwise see mpl.colors.Normalize and plt.cm.ScalarMappable(norm=norm, cmap=cmap))
214 | heatmap = cmap(saliency_map)
215 | heatmap = heatmap[:, :, :-1] # Trim off the alpha channel (always 1.0 anyway for typical cmaps)
216 |
217 | # Blend image with heatmap:
218 | combined_image = alpha * heatmap + (1-alpha) * image
219 |
220 | # Plot
221 | fig, (ax0, ax1, ax2) = plt.subplots(ncols=3, figsize=(12, 4))
222 | ax0.imshow(image)
223 | ax0.set_axis_off()
224 | ax0.set_title("Input image")
225 | ax1.imshow(combined_image)
226 | ax1.set_axis_off()
227 | if predicted_class is None:
228 | ax1.set_title("Saliency overlay")
229 | else:
230 | ax1.set_title("Predicted '{}'{}{}".format(
231 | str(predicted_class),
232 | f" ({class_names[predicted_class]})" if class_names is not None else "",
233 | f", {confidence * 100:.1f}%" if confidence is not None else "",
234 | ))
235 | ax2.imshow(heatmap)
236 | ax2.set_axis_off()
237 | ax2.set_title("Saliency heatmap")
238 | plt.tight_layout()
239 |
240 | # If required, plot interest circles and calculate background saliency metrics:
241 | if interest_radius is not None:
242 | h = heatmap.shape[0]
243 | w = heatmap.shape[1]
244 | bg_mask = ~create_circular_mask(h, w, center=interest_center, radius=interest_radius)
245 | bg_saliency = bg_mask * saliency_map
246 | max_bg_saliency = np.max(bg_saliency)
247 | print(f"Max bg_saliency {max_bg_saliency}, average bg_saliency {np.mean(bg_saliency)}")
248 |
249 | wh = np.array((w, h))
250 | # Unfortunately you can't re-use a mpl 'artist' between plots, and there's no copy method! Ugh
251 | plt_circle0 = plt.Circle(
252 | wh * interest_center, np.min(wh) * interest_radius, color='white', fill=False,
253 | )
254 | ax0.add_artist(plt_circle0)
255 | plt_circle1 = plt.Circle(
256 | wh * interest_center, np.min(wh) * interest_radius, color='white', fill=False,
257 | )
258 | ax1.add_artist(plt_circle1)
259 | plt_circle2 = plt.Circle(
260 | wh * interest_center, np.min(wh) * interest_radius, color='white', fill=False,
261 | )
262 | ax2.add_artist(plt_circle2)
263 |
264 | plt.show()
265 |
266 | if interest_radius is not None and max_bg_saliency >= max_bg_saliency_thresh:
267 | display(HTML('\n'.join((
268 | '<div class="alert alert-danger">',
269 | 'High saliency outside region of interest: prediction may be attending to unreliable background',
270 | 'context',
271 | '</div>',
272 | ))))
273 |
--------------------------------------------------------------------------------