├── .gitignore
├── README.md
├── detectron2_gradcam.py
├── gradcam.py
├── img
│   ├── grad_cam++.png
│   ├── grad_cam.png
│   └── input.jpg
└── main.py
/.gitignore:
--------------------------------------------------------------------------------
1 | # Byte-compiled / optimized / DLL files
2 | __pycache__/
3 | *.py[cod]
4 | *$py.class
5 |
6 | # C extensions
7 | *.so
8 |
9 | # Distribution / packaging
10 | .Python
11 | build/
12 | develop-eggs/
13 | dist/
14 | downloads/
15 | eggs/
16 | .eggs/
17 | lib/
18 | lib64/
19 | parts/
20 | sdist/
21 | var/
22 | wheels/
23 | pip-wheel-metadata/
24 | share/python-wheels/
25 | *.egg-info/
26 | .installed.cfg
27 | *.egg
28 | MANIFEST
29 |
30 | # PyInstaller
31 | # Usually these files are written by a python script from a template
32 | # before PyInstaller builds the exe, so as to inject date/other infos into it.
33 | *.manifest
34 | *.spec
35 |
36 | # Installer logs
37 | pip-log.txt
38 | pip-delete-this-directory.txt
39 |
40 | # Unit test / coverage reports
41 | htmlcov/
42 | .tox/
43 | .nox/
44 | .coverage
45 | .coverage.*
46 | .cache
47 | nosetests.xml
48 | coverage.xml
49 | *.cover
50 | *.py,cover
51 | .hypothesis/
52 | .pytest_cache/
53 |
54 | # Translations
55 | *.mo
56 | *.pot
57 |
58 | # Django stuff:
59 | *.log
60 | local_settings.py
61 | db.sqlite3
62 | db.sqlite3-journal
63 |
64 | # Flask stuff:
65 | instance/
66 | .webassets-cache
67 |
68 | # Scrapy stuff:
69 | .scrapy
70 |
71 | # Sphinx documentation
72 | docs/_build/
73 |
74 | # PyBuilder
75 | target/
76 |
77 | # Jupyter Notebook
78 | .ipynb_checkpoints
79 |
80 | # IPython
81 | profile_default/
82 | ipython_config.py
83 |
84 | # pyenv
85 | .python-version
86 |
87 | # pipenv
88 | # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
89 | # However, in case of collaboration, if having platform-specific dependencies or dependencies
90 | # having no cross-platform support, pipenv may install dependencies that don't work, or not
91 | # install all needed dependencies.
92 | #Pipfile.lock
93 |
94 | # PEP 582; used by e.g. github.com/David-OConnor/pyflow
95 | __pypackages__/
96 |
97 | # Celery stuff
98 | celerybeat-schedule
99 | celerybeat.pid
100 |
101 | # SageMath parsed files
102 | *.sage.py
103 |
104 | # Environments
105 | .env
106 | .venv
107 | env/
108 | venv/
109 | ENV/
110 | env.bak/
111 | venv.bak/
112 |
113 | # Spyder project settings
114 | .spyderproject
115 | .spyproject
116 |
117 | # Rope project settings
118 | .ropeproject
119 |
120 | # mkdocs documentation
121 | /site
122 |
123 | # mypy
124 | .mypy_cache/
125 | .dmypy.json
126 | dmypy.json
127 |
128 | # Pyre type checker
129 | .pyre/
130 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # detectron2-GradCAM
2 | This repo helps you perform [GradCAM](https://arxiv.org/abs/1610.02391) and [GradCAM++](https://arxiv.org/abs/1710.11063) on models from the [detectron2](https://github.com/facebookresearch/detectron2) model zoo. It follows other GradCAM implementations but also handles the detectron2-API-specific model details. Be sure to have the latest detectron2 version installed.
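
Independent of this repo's code, the core Grad-CAM computation can be sketched in a few lines of numpy (a toy sketch for intuition, not this repo's implementation): average the gradients of the target score over each activation map to get per-channel weights, then take a ReLU of the weighted sum of the maps.

```python
import numpy as np

def grad_cam_map(activations, gradients):
    """Toy Grad-CAM sketch: activations and gradients are (C, H, W) arrays."""
    weights = gradients.mean(axis=(1, 2))              # global-average-pool the gradients -> (C,)
    cam = np.tensordot(weights, activations, axes=1)   # weighted sum over channels -> (H, W)
    return np.maximum(cam, 0)                          # ReLU keeps positively contributing regions

# Dummy feature maps and gradients, e.g. from a 7x7 conv layer with 4 channels
acts = np.random.rand(4, 7, 7)
grads = np.random.rand(4, 7, 7)
cam = grad_cam_map(acts, grads)
print(cam.shape)  # -> (7, 7)
```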
3 |
4 | There is also [this](https://github.com/yizt/Grad-CAM.pytorch) repo for doing GradCAM with detectron2, but it advises you to modify the detectron2 installation itself, which I think is not a good idea.
5 |
6 | | Original | GradCAM (horse) | GradCAM++ (horse) |
7 | | ------------- |:-------------:| :-------------:|
8 | | ![Original](img/input.jpg) | ![GradCAM](img/grad_cam.png) | ![GradCAM++](img/grad_cam++.png) |
9 |
10 |
11 | To run it, check `main.py` and change `img_path`, `config_file`, and `model_file` according to your needs.
12 |
13 | For ResNet-50 models, the layer `backbone.bottom_up.res5.2.conv3` is a good choice for the classification explanation. For larger or smaller models, change the layer accordingly via `layer_name`.
14 |
15 |
16 | For your custom models, either write your own config.yaml or edit [`cfg_list`](https://github.com/alexriedel1/detectron2-GradCAM/blob/main/main.py#L15)
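
As a hedged sketch of what such a `cfg_list` might look like (the key names are standard detectron2 config keys, but the values here are placeholders you must adapt to your model):

```python
# Hypothetical cfg_list for a custom two-class model; adapt values to your setup.
cfg_list = [
    "MODEL.ROI_HEADS.SCORE_THRESH_TEST", "0.5",  # detection confidence threshold
    "MODEL.ROI_HEADS.NUM_CLASSES", "2",          # number of classes in your dataset
    "MODEL.WEIGHTS", "path/to/model_final.pth",  # your trained checkpoint
]
```

detectron2's `cfg.merge_from_list` consumes this as a flat list, so keys and values must alternate.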
17 |
18 |
19 | There's also a Colab with everything set up: [GradCAM Detectron2](https://colab.research.google.com/drive/15GN0juUurMPCDHA3tGp6nJ4mxUiSHknZ)
20 |
--------------------------------------------------------------------------------
/detectron2_gradcam.py:
--------------------------------------------------------------------------------
1 | from gradcam import GradCAM, GradCamPlusPlus
2 | import detectron2.data.transforms as T
3 | import torch
4 | from detectron2.checkpoint import DetectionCheckpointer
5 | from detectron2.config import get_cfg
6 | from detectron2.data import DatasetCatalog, MetadataCatalog
7 | from detectron2.data.detection_utils import read_image
8 | from detectron2.modeling import build_model
9 | from detectron2.data.datasets import register_coco_instances
10 |
11 | class Detectron2GradCAM():
12 | """
13 | Attributes
14 | ----------
15 | config_file : str
16 | detectron2 model config file path
17 | cfg_list : list
18 | List of additional model configurations
19 | root_dir : str [optional]
20 | directory containing coco.json and the dataset images for custom dataset registration
21 | custom_dataset : str [optional]
22 | Name of the custom dataset to register
23 | """
24 | def __init__(self, config_file, cfg_list, img_path, root_dir=None, custom_dataset=None):
25 | # load config from file
26 | cfg = get_cfg()
27 | cfg.merge_from_file(config_file)
28 |
29 | if custom_dataset:
30 | register_coco_instances(custom_dataset, {}, root_dir + "coco.json", root_dir)
31 | cfg.DATASETS.TRAIN = (custom_dataset,)
32 | MetadataCatalog.get(custom_dataset)
33 | DatasetCatalog.get(custom_dataset)
34 |
35 | if torch.cuda.is_available():
36 | cfg.MODEL.DEVICE = "cuda"
37 | else:
38 | cfg.MODEL.DEVICE = "cpu"
39 |
40 | cfg.merge_from_list(cfg_list)
41 | cfg.freeze()
42 | self.cfg = cfg
43 | self._set_input_image(img_path)
44 |
45 | def _set_input_image(self, img_path):
46 | self.image = read_image(img_path, format="BGR")
47 | self.image_height, self.image_width = self.image.shape[:2]
48 | transform_gen = T.ResizeShortestEdge(
49 | [self.cfg.INPUT.MIN_SIZE_TEST, self.cfg.INPUT.MIN_SIZE_TEST], self.cfg.INPUT.MAX_SIZE_TEST
50 | )
51 | transformed_img = transform_gen.get_transform(self.image).apply_image(self.image)
52 | self.input_tensor = torch.as_tensor(transformed_img.astype("float32").transpose(2, 0, 1)).requires_grad_(True)
53 |
54 | def get_cam(self, target_instance, layer_name, grad_cam_instance):
55 | """
56 | Calls the GradCAM instance
57 |
58 | Parameters
59 | ----------
60 | target_instance : int
61 | The target instance index
62 | layer_name : str
63 | Convolutional layer to perform GradCAM on
64 | grad_cam_instance : GradCAM
65 | GradCAM or GradCamPlusPlus instance (for multiple instances of the same object, GradCAM++ can be favorable)
68 |
69 | Returns
70 | -------
71 | image_dict : dict
72 | {"image" : , "cam" : , "output" :