├── .gitignore
├── LICENSE.md
├── README.md
├── gui
├── __init__.py
├── gui.py
├── imagesselectionwindow.ui
└── mainwindow.ui
├── main.py
├── package
├── __init__.py
├── app.py
├── crossvalidation.py
├── dataset.py
├── execution.py
├── feature_extractor.py
├── feature_matcher.py
├── image.py
├── image_set.py
├── post_ranker.py
├── preprocessing.py
├── segmenter.py
├── statistics.py
└── utilities.py
├── resources
└── masks
│ ├── ViperManualMask.txt
│ ├── ViperOptimalMask-1.txt
│ ├── ViperOptimalMask-2.txt
│ ├── ViperOptimalMask-3.txt
│ ├── ViperOptimalMask-4.txt
│ └── ViperOptimalMask.txt
└── tests
└── __init__.py
/.gitignore:
--------------------------------------------------------------------------------
1 | # Created by .ignore support plugin (hsz.mobi)
2 | ### JetBrains template
3 | # Covers JetBrains IDEs: IntelliJ, RubyMine, PhpStorm, AppCode, PyCharm
4 |
5 | *.iml
6 |
7 | ## Directory-based project format:
8 | .idea/
9 | # if you remove the above rule, at least ignore the following:
10 |
11 | # User-specific stuff:
12 | # .idea/workspace.xml
13 | # .idea/tasks.xml
14 | # .idea/dictionaries
15 |
16 | # Sensitive or high-churn files:
17 | # .idea/dataSources.ids
18 | # .idea/dataSources.xml
19 | # .idea/sqlDataSources.xml
20 | # .idea/dynamic.xml
21 | # .idea/uiDesigner.xml
22 |
23 | # Gradle:
24 | # .idea/gradle.xml
25 | # .idea/libraries
26 |
27 | # Mongo Explorer plugin:
28 | # .idea/mongoSettings.xml
29 |
30 | ## File-based project format:
31 | *.ipr
32 | *.iws
33 |
34 | ## Plugin-specific files:
35 |
36 | # IntelliJ
37 | out/
38 |
39 | # mpeltonen/sbt-idea plugin
40 | .idea_modules/
41 |
42 | # JIRA plugin
43 | atlassian-ide-plugin.xml
44 |
45 | # Crashlytics plugin (for Android Studio and IntelliJ)
46 | com_crashlytics_export_strings.xml
47 | crashlytics.properties
48 | crashlytics-build.properties
49 |
50 |
51 | ### Python template
52 | # Byte-compiled / optimized / DLL files
53 | __pycache__/
54 | *.py[cod]
55 |
56 | # C extensions
57 | *.so
58 |
59 | # Distribution / packaging
60 | .Python
61 | env/
62 | build/
63 | develop-eggs/
64 | dist/
65 | downloads/
66 | eggs/
67 | .eggs/
68 | lib/
69 | lib64/
70 | parts/
71 | sdist/
72 | var/
73 | *.egg-info/
74 | .installed.cfg
75 | *.egg
76 |
77 | # PyInstaller
78 | # Usually these files are written by a python script from a template
79 | # before PyInstaller builds the exe, so as to inject date/other infos into it.
80 | *.manifest
81 | *.spec
82 |
83 | # Installer logs
84 | pip-log.txt
85 | pip-delete-this-directory.txt
86 |
87 | # Unit test / coverage reports
88 | htmlcov/
89 | .tox/
90 | .coverage
91 | .coverage.*
92 | .cache
93 | nosetests.xml
94 | coverage.xml
95 |
96 | # Translations
97 | *.mo
98 | *.pot
99 |
100 | # Django stuff:
101 | *.log
102 |
103 | # Sphinx documentation
104 | docs/_build/
105 |
106 | # PyBuilder
107 | target/
108 |
109 | package/Utils.py
110 | package/examples.py
--------------------------------------------------------------------------------
/LICENSE.md:
--------------------------------------------------------------------------------
1 | The MIT License (MIT)
2 |
3 | Copyright (c) 2015 Luis María González Medina
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | PyReID: Python Person Re-identification Framework
2 | =====================================================
3 | A flexible and extensible framework for the [Person Re-identification problem](http://www.sciencedirect.com/science/article/pii/S0262885614000262).
4 |
5 | PyReID allows configuring multiple **preprocessing** steps as a pipeline, followed by a **Feature Extraction** process and finally **Feature Matching**. It can calculate statistics such as CMC or AUC. It also supports **PostRanking** optimization, with a functional GUI made with Qt, and several embedded methods.
6 |
7 | Main Features
8 | -------------
9 | - Allows multiple **preprocessing** methods, including BTF, Illumination Normalization, Foreground/Background Segmentation using GrabCut, Symmetry-based Silhouette Partition, Static Vertical Partition and Weight maps for Weighted Histograms using Gaussian Kernels or Gaussian Mixture Models.
10 |
11 | - **Feature Extraction** based on histogram calculation. Admits 1D and 3D histograms, an independent bin size for each channel, Regions for calculating independent histograms per region, and Weight maps for weighted histograms.
12 |
13 | - **Feature Matching** supports several histogram comparison methods: Correlation, Chi-Square, Intersection, Bhattacharyya distance and Euclidean distance.
14 |
15 | - Automatically creates the Ranking Matrix: for each element of the probe set, it obtains all gallery elements sorted by matching score.
16 |
17 | - **Statistics Module**. With the Ranking Matrix and the Dataset, obtain stats such as CMC, AUC, rank-X and mean value.
18 |
19 | - **Use of multiprocessing** to improve speed. Preprocessing, Feature Extraction and Feature Matching are designed with multiprocessing to dramatically improve speed on multicore processors.
20 |
21 | - Save stats in a complete Excel file for further review or plot creation. Save all your execution statistics in a file for later use (using *Python shelves*).
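
For intuition, the Bhattacharyya distance named above can be sketched in a few lines of plain Python. This is an illustrative sketch mirroring OpenCV's `HISTCMP_BHATTACHARYYA` formula, not PyReID's actual implementation (which delegates to OpenCV's `compareHist`):

```python
import math

def bhattacharyya_distance(h1, h2):
    """Bhattacharyya distance between two histograms of equal length.

    Returns 0.0 for identical normalized histograms and values near 1.0
    for histograms with no overlap, following OpenCV's formulation.
    """
    n = len(h1)
    mean1 = sum(h1) / n
    mean2 = sum(h2) / n
    # Bhattacharyya coefficient: overlap between the two distributions
    score = sum(math.sqrt(a * b) for a, b in zip(h1, h2))
    # max() guards against tiny negative values from floating-point error
    return math.sqrt(max(0.0, 1.0 - score / math.sqrt(mean1 * mean2 * n * n)))
```

For example, two identical normalized histograms yield a distance of 0.0, while fully disjoint ones yield 1.0.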
22 |
23 | Installation
24 | ------------
25 | Right now the project is not prepared for direct installation as a library. In the meantime, you can check out the latest sources with:
26 |
27 |
28 | git clone https://github.com/Luigolas/PyReid.git
29 |
30 | Dependencies
31 | ------------
32 | PyReID is tested with Python 2.7+, and the following dependencies are needed to make it work:
33 |
34 | - **OpenCV 3.0**: This code makes use of OpenCV 3.0 and won't work with previous versions, due to incompatibilities in 3D histograms and predefined constant names.
35 | - **Numpy**: Tested with version *1.9.2*
36 | - **matplotlib**: Tested with version *1.4.2*
37 | - **pandas**: Tested with version *0.15.0*
38 | - **scikit-learn**: Tested with version *0.16.1*
39 | - **scipy**: Tested with version *0.14.1*
40 | - **xlwt**: Tested with version *0.7.5*
41 |
42 | *Under Construction*...
43 |
44 | How to Use
45 | ----------
46 | *This section is under construction. A dedicated page is planned to further explain its usage and full potential.*
47 |
48 | The following code shows how to prepare a simple execution:
49 |
50 | ```python
51 | from package.dataset import Dataset
52 | import package.preprocessing as preprocessing
53 | from package.image import CS_HSV, CS_YCrCb
54 | import package.feature_extractor as feature_extractor
55 | import package.feature_matcher as feature_matcher
56 | from package.execution import Execution
57 | from package.statistics import Statistics
58 |
59 | # Preprocessing
60 | IluNormY = preprocessing.Illumination_Normalization(color_space=CS_YCrCb)
61 | mask_source = "../resources/masks/ViperOptimalMask-4.txt"
62 | grabcut = preprocessing.Grabcut(mask_source) # Segmenter
63 | preproc = [IluNormY, grabcut] # Executes in this order
64 |
65 | fe = feature_extractor.Histogram(CS_HSV, [16, 16, 4], "1D")
66 | f_match = feature_matcher.HistogramsCompare(feature_matcher.HISTCMP_BHATTACHARYYA)
67 |
68 | probe = "../datasets/viper/cam_a"
69 | gallery = "../datasets/viper/cam_b"
70 | ex = Execution(Dataset(probe, gallery), preproc, fe, f_match)
71 | ranking_matrix = ex.run()
72 |
73 | statistic = Statistics()
74 | statistic.run(ex.dataset, ranking_matrix)
75 |
76 | print "Rank 20: %f" % statistic.CMC[19]
77 | print "AUC: %f" % statistic.AUC
78 | ```
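
For intuition, the CMC value queried at the end (`statistic.CMC[19]`, i.e. rank 20) can be derived from a ranking matrix in a few lines. The sketch below is generic and is not PyReID's `Statistics` implementation; it assumes probe `i`'s true match is gallery element `i` (as in VIPeR, where cam_a and cam_b images are paired by position):

```python
def cmc_from_ranking(ranking_matrix):
    """Cumulative Match Characteristic: cmc[k] is the fraction of probes
    whose true match appears within the top (k + 1) ranked gallery elements.

    ranking_matrix[i] is the list of gallery indices sorted by similarity
    to probe i; probe i's correct gallery element is assumed to be index i.
    """
    n_probes = len(ranking_matrix)
    n_gallery = len(ranking_matrix[0])
    hits = [0] * n_gallery
    for probe_idx, ranked in enumerate(ranking_matrix):
        hits[ranked.index(probe_idx)] += 1  # rank of the true match
    cmc, cumulative = [], 0
    for h in hits:
        cumulative += h
        cmc.append(cumulative / float(n_probes))
    return cmc
```

With three probes where one true match lands at rank 1 and two at rank 2, this gives a CMC of [1/3, 1.0, 1.0].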
79 |
80 | To try the GUI for PostRanking just run *main.py*.
81 |
82 | *Note: You will need a Person Re-identification dataset to play with. You can download the VIPeR dataset from [here](https://vision.soe.ucsc.edu/node/178). Example mask seeds for GrabCut are available in the resources folder.*
83 |
84 | Author
85 | ------
86 | This project started as a Final Project at the University of Las Palmas by [**Luis González Medina**](http://www.luigolas.com).
87 |
88 | The tutors of this project are [**Modesto Castrillón Santana**](https://scholar.google.es/citations?user=awve1bIAAAAJ) and [**Javier Lorenzo Navarro**](https://scholar.google.es/citations?user=Flnbxo8AAAAJ).
89 |
90 | License
91 | -------
92 | Copyright (c) 2015 Luis María González Medina.
93 |
94 | **PyReID** is free software made available under the **MIT License**. For details see the LICENSE file.
95 |
--------------------------------------------------------------------------------
/gui/__init__.py:
--------------------------------------------------------------------------------
1 | __author__ = 'luigolas'
2 |
--------------------------------------------------------------------------------
/gui/gui.py:
--------------------------------------------------------------------------------
1 | from copy import copy
2 |
3 | __author__ = 'luigolas'
4 |
5 | from PyQt4 import QtCore, QtGui, uic
6 | import os
7 | import package.feature_matcher as comparator
8 | import package.preprocessing as preprocessing
9 |
10 |
11 | # get the directory of this script
12 | path = os.path.dirname(os.path.abspath(__file__))
13 |
14 | MainWindowUI, MainWindowBase = uic.loadUiType(
15 | os.path.join(path, '../gui/mainwindow.ui'))
16 |
17 | ImagesSelectionWindowUI, ImagesSelectionWindowBase = uic.loadUiType(
18 | os.path.join(path, '../gui/imagesselectionwindow.ui'))
19 |
20 |
21 | class MainWindow(MainWindowBase, MainWindowUI):
22 | def __init__(self, parent=None):
23 | MainWindowBase.__init__(self, parent)
24 | self.setupUi(self)
25 | self.defaultSetup()
26 |
27 | # Main buttons
28 | self.ProbeFolderPicker.clicked.connect(self.probe_picker)
29 | self.GalleryFolderPicker.clicked.connect(self.gallery_picker)
30 | self.buttonGroupFeMatcherWeights.buttonClicked.connect(self.toggle_weight_user)
31 |
32 | # Preprocess buttons
33 | self.IluNormAddButton.clicked.connect(self.add_ilunorm)
34 | self.BTFAddbutton.clicked.connect(self.add_btf)
35 | self.GrabcutMaskFilePicker.clicked.connect(self.grabcut_mask_file_picker)
36 | self.GrabcutAddButton.clicked.connect(self.add_grabcut)
37 | self.MaskFromMatFilePicker.clicked.connect(self.mask_from_mat_file_picker)
38 | self.MaskFromMatAddButton.clicked.connect(self.add_mask_from_mat)
39 | self.VerticalregionsAddButton.clicked.connect(self.add_vertical_regions)
40 | self.SilRegPartAddButton.clicked.connect(self.add_silpart_regions)
41 | self.GaussianAddButton.clicked.connect(self.add_gaussian)
42 |
43 | self.RemoveSelbutton.clicked.connect(self.removeSel)
44 |
45 | # Run Button
46 | self.RunButton.clicked.connect(self.run)
47 |
48 | self.preproc = []
49 |
50 | def defaultSetup(self):
51 | self.ProbeLineEdit.setText(os.path.join(path, '../datasets/viper/cam_a/'))
52 | self.GalleryLineEdit.setText(os.path.join(path, '../datasets/viper/cam_b/'))
53 | self.comboBoxComparator.addItems(comparator.method_names)
54 | self.comboBoxComparator.setCurrentIndex(3)
55 |
56 | # http://stackoverflow.com/a/13661117
57 | list_model = self.listPreprocess.model()
58 | list_model.rowsMoved.connect(self.list_preprocess_reorder)
59 |
60 | def list_preprocess_reorder(self, *args):
61 | print "Layout Changed"
62 | source = args[1]
63 | destination = args[-1]
64 | # http://stackoverflow.com/a/3173159
65 | self.preproc.insert(destination, self.preproc.pop(source))
66 | pass
67 |
68 | def probe_picker(self):
69 | dir_ = QtGui.QFileDialog.getExistingDirectory(None, 'Select a folder:', os.path.join(path, "../datasets/"),
70 | QtGui.QFileDialog.ShowDirsOnly)
71 | self.ProbeLineEdit.setText(dir_)
72 |
73 | def gallery_picker(self):
74 | dir_ = QtGui.QFileDialog.getExistingDirectory(None, 'Select a folder:', os.path.join(path, "../datasets/"),
75 | QtGui.QFileDialog.ShowDirsOnly)
76 | self.GalleryLineEdit.setText(dir_)
77 |
78 | def toggle_weight_user(self, RadioButton):
79 | if RadioButton is self.WeightsUserRadioButton:
80 | self.WeightsUserLineEdit.setEnabled(True)
81 | else:
82 | self.WeightsUserLineEdit.setEnabled(False)
83 |
84 | def add_ilunorm(self):
85 | from package.image import colorspace_name as cs_name
86 | colorspace = cs_name.index(self.IluNormComboBox.currentText())
87 | ilunorm = preprocessing.Illumination_Normalization(colorspace)
88 | self.add_preproc(ilunorm)
89 |
90 | def add_btf(self):
91 | btf = preprocessing.BTF(str(self.comboBoxBTF.currentText()))
92 | self.add_preproc(btf)
93 |
94 | def grabcut_mask_file_picker(self):
95 | file_ = QtGui.QFileDialog.getOpenFileName(None, 'Select a .txt file:',
96 | os.path.join(path, "../resources/masks/"))
97 | self.GrabcutMaskLineEdit.setText(file_)
98 |
99 | def add_grabcut(self):
100 | try:
101 | grabcut = preprocessing.Grabcut(str(self.GrabcutMaskLineEdit.text()), self.GrabcutItersSpinBox.value())
102 | self.add_preproc(grabcut)
103 | except (ValueError, IOError) as e:
104 | QtGui.QMessageBox.about(self, "Error", "File name is not valid")
105 |
106 | def mask_from_mat_file_picker(self):
107 | file_ = QtGui.QFileDialog.getOpenFileName(None, 'Select a .mat file:',
108 | os.path.join(path, "../resources/masks/"))
109 | self.MaskFromMatLineEdit.setText(file_)
110 |
111 | def add_mask_from_mat(self):
112 | try:
113 | maskfrommat = preprocessing.MasksFromMat(str(self.MaskFromMatLineEdit.text()))
114 | self.add_preproc(maskfrommat)
115 | except (ValueError, IOError) as e:
116 | QtGui.QMessageBox.about(self, "Error", "File name is not valid")
117 |
118 | def add_vertical_regions(self):
119 | vert = preprocessing.VerticalRegionsPartition()
120 | self.add_preproc(vert)
121 |
122 | def add_silpart_regions(self):
123 | alpha = self.SilRegPartAlphaSpinBox.value()
124 | sub_divisions = self.SilRegPartDivisionSpinBox.value()
125 | silpart = preprocessing.SilhouetteRegionsPartition(alpha, sub_divisions)
126 | self.add_preproc(silpart)
127 |
128 | def add_gaussian(self):
129 | alpha = self.GaussianAlphaSpinBox.value()
130 | kernel = str(self.GaussianKernelComboBox.currentText())
131 | sigmas = [self.GaussianSigma1SpinBox.value(), self.GaussianSigma2SpinBox.value()]
132 | deviations = [self.GaussianDeviation1SpinBox.value(), self.GaussianDeviation2SpinBox.value()]
133 | gaussian = preprocessing.GaussianMap(alpha, kernel, sigmas, deviations)
134 | self.add_preproc(gaussian)
135 |
136 | def add_preproc(self, item):
137 | self.preproc.append(item)
138 | dict_name = item.dict_name()
139 | name = dict_name['name']
140 | if 'params' in dict_name.keys():
141 | params = dict_name['params']
142 | else:
143 | params = ''
144 | item = QtGui.QListWidgetItem(name + ' ' + params, )
145 | self.listPreprocess.addItem(item)
146 |
147 | def removeSel(self):
148 | listItems = self.listPreprocess.selectedItems()
149 | if not listItems: return
150 | for item in listItems:
151 | row = self.listPreprocess.row(item)
152 | self.listPreprocess.takeItem(row)
153 | self.preproc.pop(row)
154 |
155 | def run(self):
156 | from package.app import run_image_selection
157 | run_image_selection(self.preproc)
158 |
159 |
160 | class ImagesSelectionForm(ImagesSelectionWindowBase, ImagesSelectionWindowUI):
161 | def __init__(self, parent=None, item_size=(72, 192)):
162 | ImagesSelectionWindowBase.__init__(self, parent)
163 | self.setupUi(self)
164 | self.pushButtonNextIteration.clicked.connect(self.next_iteration)
165 | self.pushButtonMarkSolution.clicked.connect(self.mark_solution)
166 | self.pushButtonNextProbe.clicked.connect(self.next_probe)
167 | self.custom_setup()
168 | self.regions = [[0]] # Set via set regions
169 | self.item_size = item_size
170 | # self.regions_parts = 1
171 |
172 | def custom_setup(self):
173 | self.labelSolution.setText("")
174 |
175 | def addImage(self, image_path, regions, enabled=True):
176 | # self.ui.ImagesContainer.addItem(QtGui.QListWidgetItem(QtGui.QIcon(image_path), "0"))
177 | img = QtImage(image_path, regions, self.item_size)
178 |
179 | item = QtGui.QListWidgetItem("Selected")
180 | item.setSizeHint(img.sizeHint())
181 | img.setEnabled(enabled)
182 |
183 | self.ImagesContainer.addItem(item)
184 | self.ImagesContainer.setItemWidget(item, img)
185 |
186 | def setProbe(self, probe_path, size=(72, 192)):
187 | self.labelProbe.setPixmap(QtGui.QPixmap(probe_path))
188 | self.labelProbe.setFixedSize(QtCore.QSize(*size))
189 | self.labelProbe.setScaledContents(True)
190 |
191 | def set_regions(self, regions):
192 | if regions is None:
193 | self.regions = [[0]]
194 | # self.regions_parts = 1
195 | elif len(regions) == 2:
196 | self.regions = regions
197 | # self.regions_parts = sum([len(e) for e in regions])
198 | else:
199 | raise ValueError("Regions size must be 2 (body region and legs region)")
200 |
201 | def update(self, post_ranker):
202 | self.labelIterations.setText(str(post_ranker.iteration))
203 | self.labelTargetPosition.setText(str(post_ranker.target_position))
204 | self.setProbe(post_ranker.execution.dataset.probe.files_test[post_ranker.subject])
205 | self.labelProbeName.setText(post_ranker.probe_name)
206 | self.labelProbeNumber.setText("Probe %d of %d" % (post_ranker.subject, post_ranker.execution.dataset.test_size))
207 | self.ImagesContainer.clear()
208 | image_size = post_ranker.execution.dataset.gallery.images_test[0].shape[:2] # Assumes all images have same shape
209 |         reshape_factor = self.item_size[1] / float(image_size[0]) # scales by height only; width is not considered
210 | for elem in post_ranker.rank_list:
211 | if elem in [sample[0] for sample in post_ranker.strong_negatives + post_ranker.weak_negatives]:
212 | enabled = False
213 | else:
214 | enabled = True
215 | if len(self.regions) == 1:
216 | regions = [[0, image_size[0]]]
217 | else:
218 | img_regions = post_ranker.execution.dataset.gallery.regions_test[elem]
219 | # For each regions take top horizontal line and bottom line (does not consider vertical limits)
220 | regions = [[img_regions[r[0]][0], img_regions[r[-1]][1]] for r in self.regions]
221 | regions = [[int(rs * reshape_factor) for rs in r] for r in regions]
222 | self.addImage(post_ranker.execution.dataset.gallery.files_test[elem], regions, enabled)
223 | if post_ranker.iteration > 0:
224 | self.labelSolution.hide()
225 | self.labelErrorMessage.hide()
226 |
227 | def next_iteration(self):
228 | strong_negatives = []
229 | weak_negatives = []
230 | for index in range(self.ImagesContainer.count()):
231 | # http://www.qtcentre.org/threads/40439-QListWidget-accessing-item-info-%28listWidget-gt-setItemWidget%29
232 | elem = self.ImagesContainer.itemWidget(self.ImagesContainer.item(index))
233 | for rb in elem.selectedRubberBand:
234 | if rb.kind == "similar":
235 | weak_negatives.append([index, rb.region])
236 | else:
237 | strong_negatives.append([index, rb.region])
238 |
239 | if len(strong_negatives) == 0 and len(weak_negatives) == 0:
240 | self.printError("You need to select at least one weak negative or strong negative")
241 | return
242 | from package.app import iterate_images_selection
243 | iterate_images_selection(strong_negatives, weak_negatives)
244 |
245 | def mark_solution(self):
246 | selected = None
247 | for index in range(self.ImagesContainer.count()):
248 | # http://www.qtcentre.org/threads/40439-QListWidget-accessing-item-info-%28listWidget-gt-setItemWidget%29
249 | elem = self.ImagesContainer.itemWidget(self.ImagesContainer.item(index))
250 | # for rb in elem.selectedRubberBand:
251 | if len(elem.selectedRubberBand) > 0:
252 | rb = elem.selectedRubberBand[0]
253 | if rb.kind == "similar":
254 | selected = index
255 | break
256 | if selected is not None:
257 | from package.app import mark_solution
258 | mark_solution(selected)
259 |
260 | def target_correct(self, solution):
261 | if solution:
262 | self.labelSolution.setText("Target correct")
263 | self.labelSolution.setStyleSheet("QLabel { color : green; }")
264 | self.labelSolution.show()
265 | else:
266 | self.labelSolution.setText("Target Wrong")
267 | self.labelSolution.setStyleSheet("QLabel { color : red; }")
268 | self.labelSolution.show()
269 |
270 | @staticmethod
271 | def next_probe():
272 | from package.app import new_probe
273 | new_probe()
274 |
275 | def printError(self, msg):
276 | self.labelErrorMessage.setText(msg)
277 | self.labelErrorMessage.setStyleSheet("QLabel { color : red; }")
278 | self.labelErrorMessage.show()
279 |
280 |
281 | class NiceRubberBand(QtGui.QRubberBand):
282 | def __init__(self, QRubberBand_Shape, parent, kind=None):
283 | QtGui.QRubberBand.__init__(self, QRubberBand_Shape, parent)
284 | alpha = 50
285 | self.kind = kind
286 | if kind == "similar":
287 | self.color = QtGui.QColor(0, 255, 0, alpha)
288 | elif kind == "dissimilar":
289 | self.color = QtGui.QColor(255, 0, 0, alpha)
290 | else:
291 | self.color = QtGui.QColor(255, 255, 255, alpha)
292 | self.region = 0
293 |
294 | def paintEvent(self, QPaintEvent):
295 | rect = QPaintEvent.rect()
296 | painter = QtGui.QPainter(self)
297 | br = QtGui.QBrush(self.color)
298 | painter.setBrush(br)
299 | pen = QtGui.QPen(self.color, 1)
300 | painter.setPen(pen)
301 | # painter.setOpacity(0.3)
302 | painter.drawRect(rect)
303 |
304 | def set_region(self, region):
305 | """
306 | Body(0) or legs(1) region
307 | :param region:
308 | :return:
309 | """
310 | self.region = region
311 |
312 | class QtImage(QtGui.QLabel):
313 | def __init__(self, image_path, regions, size, parent=None):
314 | QtGui.QLabel.__init__(self, parent)
315 | self.setImage(image_path)
316 | self.setFixedSize(QtCore.QSize(*size))
317 | self.setScaledContents(True)
318 | self.overRubberBand = NiceRubberBand(QtGui.QRubberBand.Rectangle, self)
319 | self.setMouseTracking(True)
320 | self.selectedRubberBand = []
321 | self.regions = regions
322 | self.size = size
323 |
324 | def setImage(self, img):
325 | self.setPixmap(QtGui.QPixmap(img))
326 |
327 | def mouseMoveEvent(self, QMouseEvent):
328 | over = False
329 | for idx, r in enumerate(self.regions):
330 | # QRect(int x, int y, int width, int height)
331 | rect = QtCore.QRect(0, r[0], self.size[0], r[1] - r[0])
332 | if rect.contains(QMouseEvent.pos()):
333 | self.overRubberBand.setGeometry(rect)
334 | self.overRubberBand.set_region(idx) # body(0) or legs(1)
335 | self.overRubberBand.show()
336 | over = True
337 | if not over:
338 | self.overRubberBand.hide()
339 |
340 | # if QtCore.QRect(0, 25, 72, 70).contains(QMouseEvent.pos()):
341 | # self.overRubberBand.setGeometry(QtCore.QRect(0, 25, 72, 70))
342 | # self.overRubberBand.show()
343 | # elif QtCore.QRect(0, 70, 72, 192).contains(QMouseEvent.pos()):
344 | # self.overRubberBand.setGeometry(QtCore.QRect(0, 95, 72, 97))
345 | # self.overRubberBand.show()
346 | # else:
347 | # self.overRubberBand.hide()
348 |
349 | # def enterEvent(self, QEvent):
350 | # print "Mouse Entered"
351 | # return super(QtImage, self).enterEvent(QEvent)
352 |
353 | def leaveEvent(self, QEvent):
354 | self.overRubberBand.hide()
355 |         return super(QtImage, self).leaveEvent(QEvent)
356 |
357 | def mousePressEvent(self, QMouseEvent):
358 | if self.overRubberBand.isHidden():
359 | return
360 |
361 | for rubberBand in self.selectedRubberBand:
362 | if self.overRubberBand.geometry() == rubberBand.geometry():
363 | rubberBand.hide()
364 | self.selectedRubberBand.remove(rubberBand) # hide and remove
365 | return
366 |
367 | if QMouseEvent.type() == QtCore.QEvent.MouseButtonPress:
368 | if QMouseEvent.button() == QtCore.Qt.LeftButton:
369 | kind = "similar"
370 | elif QMouseEvent.button() == QtCore.Qt.RightButton:
371 | kind = "dissimilar"
372 | else:
373 | return
374 | rubberBand = NiceRubberBand(QtGui.QRubberBand.Rectangle, self, kind)
375 | rubberBand.setGeometry(self.overRubberBand.geometry())
376 | rubberBand.set_region(self.overRubberBand.region)
377 | self.selectedRubberBand.append(rubberBand)
378 | rubberBand.show()
379 |
--------------------------------------------------------------------------------
/gui/imagesselectionwindow.ui:
--------------------------------------------------------------------------------
1 |
2 |
3 | ImagesSelectionWindow
4 |
5 |
6 |
7 | 0
8 | 0
9 | 816
10 | 647
11 |
12 |
13 |
14 |
15 | 0
16 | 0
17 |
18 |
19 |
20 | MainWindow
21 |
22 |
23 |
24 | -
25 |
26 |
27 | QLayout::SetMaximumSize
28 |
29 |
-
30 |
31 |
-
32 |
33 |
34 | Qt::Vertical
35 |
36 |
37 |
38 | 20
39 | 40
40 |
41 |
42 |
43 |
44 | -
45 |
46 |
47 | Probe Number
48 |
49 |
50 | Qt::AlignCenter
51 |
52 |
53 |
54 | -
55 |
56 |
57 |
58 | 0
59 | 0
60 |
61 |
62 |
63 | ProbeImage
64 |
65 |
66 | Qt::AlignCenter
67 |
68 |
69 |
70 | -
71 |
72 |
73 | Probe Name
74 |
75 |
76 | Qt::AlignCenter
77 |
78 |
79 |
80 | -
81 |
82 |
83 | Qt::Vertical
84 |
85 |
86 |
87 | 20
88 | 40
89 |
90 |
91 |
92 |
93 |
94 |
95 | -
96 |
97 |
98 | QLayout::SetNoConstraint
99 |
100 |
-
101 |
102 |
-
103 |
104 |
105 | Iterations:
106 |
107 |
108 |
109 | -
110 |
111 |
112 | Initial
113 |
114 |
115 |
116 | -
117 |
118 |
119 | Qt::Horizontal
120 |
121 |
122 |
123 | 40
124 | 20
125 |
126 |
127 |
128 |
129 | -
130 |
131 |
132 | Target at:
133 |
134 |
135 |
136 | -
137 |
138 |
139 | Position
140 |
141 |
142 |
143 |
144 |
145 | -
146 |
147 |
148 |
149 | 0
150 | 0
151 |
152 |
153 |
154 |
155 | 485
156 | 405
157 |
158 |
159 |
160 | QAbstractItemView::NoEditTriggers
161 |
162 |
163 | false
164 |
165 |
166 | QAbstractItemView::DragDrop
167 |
168 |
169 | QAbstractItemView::NoSelection
170 |
171 |
172 |
173 | 72
174 | 192
175 |
176 |
177 |
178 | QListView::Adjust
179 |
180 |
181 | QListView::SinglePass
182 |
183 |
184 | 5
185 |
186 |
187 | QListView::IconMode
188 |
189 |
190 |
191 | -
192 |
193 |
194 | Next Iteration ->
195 |
196 |
197 |
198 | -
199 |
200 |
201 | true
202 |
203 |
204 | Error Message
205 |
206 |
207 | Qt::AlignCenter
208 |
209 |
210 |
211 |
212 |
213 | -
214 |
215 |
-
216 |
217 |
218 | Qt::Vertical
219 |
220 |
221 |
222 | 20
223 | 40
224 |
225 |
226 |
227 |
228 | -
229 |
230 |
231 | Mark as Solution
232 |
233 |
234 |
235 | -
236 |
237 |
238 | TextLabel
239 |
240 |
241 | Qt::AlignCenter
242 |
243 |
244 |
245 | -
246 |
247 |
248 | Qt::Vertical
249 |
250 |
251 |
252 | 20
253 | 40
254 |
255 |
256 |
257 |
258 | -
259 |
260 |
261 | Next Probe
262 |
263 |
264 |
265 | -
266 |
267 |
268 | Qt::Vertical
269 |
270 |
271 |
272 | 20
273 | 40
274 |
275 |
276 |
277 |
278 | -
279 |
280 |
281 | Close
282 |
283 |
284 |
285 |
286 |
287 |
288 |
289 |
290 |
291 |
308 |
309 |
310 |
311 | Salpica
312 |
313 |
314 |
315 |
316 |
317 |
318 | pushButton
319 | released()
320 | ImagesSelectionWindow
321 | close()
322 |
323 |
324 | 665
325 | 464
326 |
327 |
328 | 615
329 | 430
330 |
331 |
332 |
333 |
334 |
335 |
--------------------------------------------------------------------------------
/main.py:
--------------------------------------------------------------------------------
1 | __author__ = 'luigolas'
2 |
3 |
4 | if __name__ == '__main__':
5 | import sys
6 | from package import app
7 | sys.exit(app.run())
8 |
--------------------------------------------------------------------------------
/package/__init__.py:
--------------------------------------------------------------------------------
1 | __author__ = 'luigolas'
2 |
--------------------------------------------------------------------------------
/package/app.py:
--------------------------------------------------------------------------------
1 | __author__ = 'luigolas'
2 |
3 | import sys
4 | import ast
5 | from PyQt4 import QtGui
6 | import os
7 | from package import feature_extractor
8 | from package.image import CS_HSV, CS_BGR, CS_IIP
9 | import package.execution as execution
10 | from package.dataset import Dataset
11 | from package.post_ranker import SAA, LabSP, SAL
12 | import package.feature_matcher as FM
13 | from gui.gui import MainWindow, ImagesSelectionForm
14 |
15 |
16 | DB = None
17 | multiprocessing = True
18 | appMainForm = None
19 | appImagesForm = None
20 | POP = None
21 | # get the directory of this script
22 | path = os.path.dirname(os.path.abspath(__file__))
23 |
24 |
25 | def run():
26 | global appMainForm, appImagesForm
27 | app = QtGui.QApplication(sys.argv)
28 | appMainForm = MainWindow()
29 | appImagesForm = ImagesSelectionForm()
30 | appMainForm.show()
31 | sys.exit(app.exec_())
32 |
33 |
34 | def run_image_selection(preproc):
35 | global appMainForm, appImagesForm, POP
36 |
37 | probe = str(appMainForm.ProbeLineEdit.text())
38 | gallery = str(appMainForm.GalleryLineEdit.text())
39 |
40 | if appMainForm.comboBoxFeatureExtraction.currentText() == "Histograms":
41 | if appMainForm.comboBoxColorSpace.currentText() == "HSV":
42 | colorspace = CS_HSV
43 | elif appMainForm.comboBoxColorSpace.currentText() == "RGB":
44 | colorspace = CS_BGR
45 | else: # if self.comboBoxColorSpace.currentText == "IIP":
46 | colorspace = CS_IIP
47 |
48 | if appMainForm.radioButton1D.isChecked():
49 | dim = "1D"
50 | else:
51 | dim = "3D"
52 | fe = feature_extractor.Histogram(colorspace, ast.literal_eval(str(appMainForm.lineEditBins.text())), dim)
53 | else:
54 | fe = None
55 |
56 | if appMainForm.WeightsNoneRadioButton.isChecked():
57 | weights = None
58 | elif appMainForm.Weights5RRadioButton.isChecked():
59 | weights = [0.3, 0.3, 0.15, 0.15, 0.1]
60 | else: # if WeightsUserRadioButton.isChecked():
61 | weights = ast.literal_eval(str(appMainForm.WeightsUserLineEdit.text()))
62 |
63 | Fmatcher = FM.HistogramsCompare(FM.method_names.index(appMainForm.comboBoxComparator.currentText()),
64 | weights)
65 |
66 | train_split = float(appMainForm.lineEditTrainSplit.text())
67 | if train_split > 1:
68 | train_split = int(train_split)
69 |
70 | test_split = float(appMainForm.lineEditTestSplit.text())
71 | if test_split > 1.:
72 | test_split = int(test_split)
73 |
74 | seed_split = str(appMainForm.lineEditSeed.text())
75 | if seed_split == "None":
76 | seed_split = None
77 | else:
78 | seed_split = int(seed_split)
79 |
80 | ex = execution.Execution(Dataset(probe, gallery, train_split, test_split, seed_split), preproc, fe, Fmatcher)
81 |
82 | ranking_matrix = ex.run(fe4train_set=True, njobs=1)
83 |
84 | if appMainForm.radioButtonPOPBalancedAndVE.isChecked():
85 | balanced = True
86 | visual_expansion_use = True
87 | elif appMainForm.radioButtonPOPBalanced.isChecked():
88 | balanced = True
89 | visual_expansion_use = False
90 | else:
91 | balanced = False
92 | visual_expansion_use = False
93 | re_score_alpha = appMainForm.ReScoreAlpha.value()
94 | re_score_method_proportional = appMainForm.radioButtonReScoreProportional.isChecked()
95 |
96 | if appMainForm.radioButtonRegionsNone.isChecked():
97 | regions = None
98 | elif appMainForm.radioButtonRegions2R.isChecked():
99 | regions = [[0], [1]]
100 | elif appMainForm.radioButtonRegions4R.isChecked():
101 | regions = [[0, 1], [2, 3]]
102 | else: # if appMainForm.radioButtonRegions5R.isChecked():
103 | regions = [[0, 1], [2, 3, 4]]
104 |
105 | # if appMainForm.LabSPradioButton.isChecked():
106 | # POP = LabSP(balanced=balanced, visual_expansion_use=visual_expansion_use, re_score_alpha=re_score_alpha,
107 | # re_score_method_proportional=re_score_method_proportional, regions=regions)
108 | # else: # if appMainForm.SAAradioButton.isChecked():
109 | # POP = SAA(balanced=balanced, visual_expansion_use=visual_expansion_use, re_score_alpha=re_score_alpha,
110 | # re_score_method_proportional=re_score_method_proportional, regions=regions)
111 | POP = SAL(balanced=balanced, visual_expansion_use=visual_expansion_use, re_score_alpha=re_score_alpha,
112 | re_score_proportional=re_score_method_proportional, regions=regions)
113 | POP.set_ex(ex, ranking_matrix)
114 | appImagesForm.set_regions(regions)
115 |
116 | appImagesForm.update(POP)
117 | appMainForm.hide()
118 | appImagesForm.show()
119 |
120 |
121 | def iterate_images_selection(strong_negatives, weak_negatives):
122 | global POP, appImagesForm
123 | POP.new_samples(weak_negatives, strong_negatives)
124 | msg = POP.iterate()
125 | if msg == "OK":
126 | appImagesForm.update(POP)
127 | else:
128 | appImagesForm.printError(msg)
129 |
130 |
131 | def mark_solution(solution):
132 | global POP, appImagesForm
133 | if solution == POP.target_position:
134 | appImagesForm.target_correct(True)
135 | POP.new_subject()
136 | appImagesForm.update(POP)
137 | else:
138 | appImagesForm.target_correct(False)
139 |
140 |
141 | def new_probe():
142 | POP.new_subject()
143 | appImagesForm.update(POP)
144 |
145 |
146 |
147 |
148 |
--------------------------------------------------------------------------------
/package/crossvalidation.py:
--------------------------------------------------------------------------------
1 | from package.execution import Execution
2 |
3 | __author__ = 'luigolas'
4 |
5 | import os
6 | from package.statistics import Statistics
7 | import pandas as pd
8 | import numpy as np
9 | import random
10 | import pickle
11 | import shelve
12 |
13 |
14 | class CrossValidation(object):
15 |     """
16 |     Cross-validates an Execution over several train/test splits and averages the resulting statistics.
17 |
18 |     :param execution: Execution instance to run
19 |     :param splits_file: if given, splits are read from this file and num_validations and test_size are ignored
20 |     :param num_validations: number of validation rounds
21 |     :param train_size: train split size (int: absolute count; float: proportion of the dataset)
22 |     :param test_size: test split size (int: absolute count; float: proportion of the dataset)
23 |     """
24 | def __init__(self, execution, splits_file=None, num_validations=10, train_size=0, test_size=0.5):
25 | if not isinstance(execution, Execution):
26 | raise TypeError("execution is not of class Execution")
27 | else:
28 | self.execution = execution
29 | self.statistics = []
30 | self.mean_stat = Statistics()
31 |
32 | if splits_file:
33 | self.files, self.num_validations, self.test_size = self._read_split_file(splits_file)
34 | self.execution.dataset.change_probe_and_gallery(self.files[0][0], self.files[0][1],
35 | train_size=train_size)
36 | else:
37 | self.num_validations = num_validations
38 | self.test_size = test_size
39 | self.files = None
40 | self.execution.dataset.generate_train_set(train_size=train_size, test_size=self.test_size)
41 |
42 | self.train_size = self.execution.dataset.train_size
43 | self.dataframe = None
44 |
45 | @staticmethod
46 | def _read_split_file(path):
47 | """
48 | File format:
49 | % Experiment Number 1
50 |
51 | 1 - cam_a/001_45.bmp cam_b/001_90.bmp
52 | 2 - cam_a/002_45.bmp cam_b/002_90.bmp
53 | 3 - cam_a/003_0.bmp cam_b/003_90.bmp
54 | [...]
55 | 316 - cam_a/872_0.bmp cam_b/872_180.bmp
56 |
57 |
58 | % Experiment Number 2
59 |
60 | 1 - cam_a/000_45.bmp cam_b/000_45.bmp
61 | 2 - cam_a/001_45.bmp cam_b/001_90.bmp
62 | ....
63 |
64 |
65 |         It defines only test elements.
66 |
67 |         :param path: path to the splits file
68 |         :return: (sets, number of sets, size of each set)
69 |         """
71 | sets = []
72 | with open(path, 'r') as f:
73 | new_set = [[], []]
74 | for line in f:
75 | line = line.strip()
76 | if not line:
77 | continue
78 |
79 | if "% " in line:
80 | if new_set[0]:
81 | sets.append(new_set)
82 | new_set = [[], []]
83 | continue
84 |
85 | line = line.split(" - ")[-1]
86 | line = line.split(" ")
87 | new_set[0].append(line[0])
88 | new_set[1].append(line[1])
89 | sets.append(new_set)
90 |
91 | return sets, len(sets), len(sets[0][0])
92 |
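The format documented above can be exercised in isolation; this standalone sketch mirrors the parsing loop of `_read_split_file` (a hypothetical free function reading from any iterable of lines):

```python
import io

def read_split_file(lines):
    """Parse '% Experiment Number' blocks into [probe_files, gallery_files] pairs."""
    sets, new_set = [], [[], []]
    for line in lines:
        line = line.strip()
        if not line:
            continue
        if "% " in line:  # new experiment header: flush the previous block
            if new_set[0]:
                sets.append(new_set)
                new_set = [[], []]
            continue
        probe, gallery = line.split(" - ")[-1].split(" ")
        new_set[0].append(probe)
        new_set[1].append(gallery)
    sets.append(new_set)
    return sets

sample = io.StringIO(
    "% Experiment Number 1\n"
    "1 - cam_a/001_45.bmp cam_b/001_90.bmp\n"
    "2 - cam_a/002_45.bmp cam_b/002_90.bmp\n"
    "\n"
    "% Experiment Number 2\n"
    "1 - cam_a/000_45.bmp cam_b/000_45.bmp\n"
)
splits = read_split_file(sample)
```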
93 | def run(self, verbosity=2):
94 | """
95 |
96 | :return:
97 | """
98 | self.dataframe = None
99 | # self._check_initialization()
100 |
101 |         if self.train_size == 0:  # Full execution once, then statistics per validation split
102 | self.execution.dataset.generate_train_set(train_size=None, test_size=None)
103 | if verbosity >= 0: print("******** Execution 1 of 1 ********")
104 | r_m = self.execution.run(verbosity)
105 | if verbosity > 0: print("Calculating Statistics")
106 | for val in range(self.num_validations):
107 | if self.files:
108 | self.execution.dataset.change_probe_and_gallery(self.files[val][0], self.files[val][1],
109 | train_size=self.train_size)
110 | else:
111 | self.execution.dataset.generate_train_set(train_size=0, test_size=self.test_size)
112 | statistic = Statistics()
113 | statistic.run(self.execution.dataset, r_m)
114 | self.statistics.append(statistic)
115 | else:
116 | for val in range(self.num_validations):
117 | if verbosity >= 0: print("******** Execution %d of %d ********" % (val + 1, self.num_validations))
118 | if self.files:
119 | self.execution.dataset.change_probe_and_gallery(self.files[val][0], self.files[val][1],
120 | train_size=self.train_size)
121 | else:
122 | self.execution.dataset.generate_train_set(train_size=self.train_size, test_size=self.test_size)
123 | r_m = self.execution.run(verbosity)
124 | if verbosity > 0: print("Calculating Statistics")
125 | statistic = Statistics()
126 | statistic.run(self.execution.dataset, r_m)
127 | self.statistics.append(statistic)
128 |
129 | CMCs = np.asarray([stat.CMC for stat in self.statistics])
130 | self.mean_stat.CMC = np.sum(CMCs, axis=0) / float(self.num_validations)
131 | self.mean_stat.AUC = np.mean([stat.AUC for stat in self.statistics])
132 | mean_values = np.asarray([stat.mean_value for stat in self.statistics])
133 | self.mean_stat.mean_value = np.mean(mean_values)
134 | if verbosity > 2:
135 |             print("Range 20: %f" % self.mean_stat.CMC[19])
136 |             print("AUC: %f" % self.mean_stat.AUC)
137 |
138 | def dict_name(self, use_stats=True):
139 | """
140 |
141 |
142 | :param use_stats:
143 | :return:
144 | """
145 | name = {}
146 | name.update(self.execution.dict_name())
147 | if use_stats:
148 | name.update(self.mean_stat.dict_name())
149 | name.update({"NumValidations": self.num_validations})
150 | return name
151 |
152 | # def to_csv(self, path):
153 | # self.dataframe.to_csv(path_or_buf=path, index=False, sep="\t", float_format='%.3f')
154 | # # pd.DataFrame.to_csv(float_format="")
155 |
156 | def id(self):
157 |         import hashlib  # local import: built-in hash() is salted per process in Python 3
158 |         to_encode = str(self.dict_name(False)) + str([type(i).__name__ for i in self.execution.preprocessing])
159 |         return hashlib.md5(to_encode.encode("utf-8")).hexdigest()
159 |
160 | def id_is_saved(self, db_file, id=None):
161 | if not id:
162 | id = self.id()
163 |
165 | db = shelve.open(db_file, protocol=pickle.HIGHEST_PROTOCOL)
166 | try:
167 |             val = str(id) in db  # dict.has_key was removed in Python 3
168 | finally:
169 | db.close()
170 | return val
171 |
172 | def to_excel(self, path, with_id=False, sorting=False):
173 | """
174 |
175 | :param path:
176 | :param with_id:
177 | :param sorting:
178 | :return: id
179 | """
180 | data = self.dict_name()
181 | dataframe = pd.DataFrame(data, columns=self.order_cols(list(data.keys())), index=[0])
182 | if with_id:
183 | # data_id = random.randint(0, 1000000000)
184 | data_id = self.id()
185 | dataframe['id'] = data_id
186 | else:
187 | data_id = None
188 |
189 | if os.path.isfile(path):
190 | df = pd.read_excel(path)
191 | df = pd.concat([df, dataframe])
192 | # df = df.merge(self.dataframe)
193 | if sorting:
194 | cols = sorted([col for col in list(df.columns.values) if "Range" in col], reverse=True)
195 |                 df.sort_values(by=cols, ascending=False, inplace=True)
196 |
197 | df.to_excel(excel_writer=path, index=False, columns=self.order_cols(list(df.keys())), float_format='%.3f')
198 | else:
199 | dataframe.to_excel(excel_writer=path, index=False, float_format='%.3f')
200 |
201 | return data_id
202 |
203 | def save_stats(self, db_file, data_id):
204 | """
205 |
206 | :param db_file:
207 | :param data_id:
208 | :return:
209 | """
210 |
211 | db = shelve.open(db_file, protocol=pickle.HIGHEST_PROTOCOL)
212 | try:
213 | data = {"CMC": self.mean_stat.CMC, "AUC": self.mean_stat.AUC, "mean_value": self.mean_stat.mean_value,
214 | "name": self.dict_name()}
215 | db[str(data_id)] = data
216 | finally:
217 | db.close()
218 | return data
219 |
220 | @staticmethod
221 | def load_stats(db_file, data_id):
222 | """
223 |
224 | :param db_file:
225 | :param data_id:
226 | :return:
227 | """
228 | db = shelve.open(db_file, protocol=pickle.HIGHEST_PROTOCOL, flag='r')
229 | try:
230 | data = db[str(data_id)]
231 | except KeyError:
232 | data = None
233 | finally:
234 | db.close()
235 |
236 | return data
237 |
238 | @staticmethod
239 | def order_cols(cols):
240 | """
241 | Format of columns to save in Dataframe
242 | :param cols:
243 | """
244 | ordered_cols = ["Probe", "Gallery", "Train", "Test"]
245 | ordered_cols.extend(sorted([col for col in cols if "Preproc" in col]))
246 | ordered_cols.extend(["Feature_Extractor"])
247 | ordered_cols.extend(sorted([col for col in cols if "Fe" in col and col != "Feature_Extractor"]))
248 | ordered_cols.extend(["FMatcher"])
249 | ordered_cols.extend(sorted([col for col in cols if "FM" in col and col != "FMatcher"]))
250 | ordered_cols.extend(sorted([col for col in cols if "Range" in col]))
251 | ordered_cols.extend(["AUC", "MeanValue", "NumValidations"])
252 | rest = sorted([item for item in cols if item not in ordered_cols])
253 | ordered_cols.extend(rest)
254 | return ordered_cols
255 |
256 |
257 | # def add_execs_grabcut_btf_histograms(self, probe, gallery, id_regex=None, segmenter_iter=None, masks=None,
258 | # colorspaces=None, binss=None, regions=None, region_name=None, weights=None,
259 | # dimensions=None, compmethods=None, preprocessing=None, train_test_split=None):
260 | # """
261 | # :param probe:
262 | # :param gallery:
263 | # :param id_regex:
264 | # :param segmenter_iter:
265 | # :param masks:
266 | # :param colorspaces:
267 | # :param binss: can be a list of bins or a tuple, like in range. ex:
268 | # binss = [[40,40,40], [50,50,50]]
269 | # binss = (40, 51, 10) # Will generate same result
270 | # :param regions:
271 | # :param dimensions:
272 | # :param compmethods:
273 | # :return:
274 | # """
275 | # print("Generating executions...")
276 | # if not segmenter_iter: segmenter_iter = [2]
277 | # if not compmethods: compmethods = [feature_matcher.HISTCMP_BHATTACHARYYA]
278 | # if not dimensions: dimensions = ["1D"]
279 | # if not colorspaces: colorspaces = [CS_BGR]
280 | # if not masks: raise InitializationError("Mask needed")
281 | # # if not regions: regions = [None]
282 | # if not preprocessing:
283 | # preprocessing = [None]
284 | # if not binss:
285 | # binss = [[32, 32, 32]]
286 | # elif isinstance(binss, tuple):
287 | # binss_temp = []
288 | # for r in range(*binss):
289 | # binss_temp.append([r, r, r])
290 | # binss = binss_temp
291 | # if train_test_split is None:
292 | # train_test_split = [[10, None]]
293 | # # elif type(train_test_split) != list:
294 | # # train_test_split = [train_test_split]
295 | #
296 | # for gc_iter, mask, colorspace, bins, dimension, method, preproc, split in itertools.product(
297 | # segmenter_iter, masks, colorspaces, binss, dimensions, compmethods, preprocessing, train_test_split):
298 | #
299 | # if preproc is None:
300 | # btf = None
301 | # else:
302 | # btf = BTF(preproc)
303 | # ex = Execution(Dataset(probe, gallery, split[0], split[1]), Grabcut(mask, gc_iter, CS_BGR), btf)
304 | #
305 | # if id_regex:
306 | # ex.set_id_regex(id_regex)
307 | #
308 | # # bin = bins[0:len(Histogram.color_channels[colorspace])]
309 | # ex.set_feature_extractor(
310 | # Histogram(colorspace, bins, regions=regions, dimension=dimension, region_name=region_name))
311 | #
312 | # ex.set_feature_matcher(feature_matcher.HistogramsCompare(method, weights))
313 | # self.executions.append(ex)
314 | # print("%d executions generated" % len(self.executions))
315 |
316 |
--------------------------------------------------------------------------------
/package/dataset.py:
--------------------------------------------------------------------------------
1 | __author__ = 'luigolas'
2 |
3 | from package.image_set import ImageSet
4 | import re
5 | from sklearn.cross_validation import ShuffleSplit
6 | from itertools import chain
7 | from sklearn.utils import safe_indexing
8 | import numpy as np
9 |
10 | class Dataset(object):
11 | """
12 |
13 | :param probe:
14 | :param gallery:
15 | :param train_size:
16 | :param test_size:
17 | """
18 |
19 | def __init__(self, probe=None, gallery=None, train_size=None, test_size=None, rand_state=None):
20 | self.probe = ImageSet(probe)
21 | self.gallery = ImageSet(gallery)
22 | if "viper" in probe:
23 | self.id_regex = "[0-9]{3}_"
24 | elif "CUHK" in probe:
25 | self.id_regex = "P[1-6]_[0-9]{3}"
26 |         else:  # Default to viper name convention
27 | self.id_regex = "[0-9]{3}_"
28 | self.train_indexes, self.test_indexes = [], []
29 | self.train_size = 0
30 | self.test_size = 0
31 | self.generate_train_set(train_size, test_size, rand_state=rand_state)
32 |
33 | def set_probe(self, folder):
34 | """
35 |
36 | :param folder:
37 | :return:
38 | """
39 | self.probe = ImageSet(folder)
40 |
41 | def set_gallery(self, folder):
42 | """
43 |
44 | :param folder:
45 | :return:
46 | """
47 | self.gallery = ImageSet(folder)
48 |
49 | def set_id_regex(self, regex):
50 | """
51 |
52 | :param regex:
53 | :return:
54 | """
55 | self.id_regex = regex
56 |
57 | def name(self):
58 | """
59 | example:
60 | P2_cam1_P2_cam2_Grabcut2OptimalMask_Histogram_IIP_[5, 5, 5]_6R_3D_BHATT
61 | :return:
62 | """
63 | name = "%s_%s_Train%s_Test%s" % (self.probe.name, self.gallery.name, self.train_size, self.test_size)
64 |
65 | return name
66 |
67 | def dict_name(self):
68 | """
69 | example:
70 | name = {"Probe": "P2_cam1", "Gallery": "P2_cam2", "Segment": "Grabcut", "SegIter": "2", "Mask": "OptimalMask",
71 | "Evaluator": "Histogram", "EvColorSpace": "IIP", "EvBins": "[5, 5, 5]", "EvDims": "3D", "Regions": "6R",
72 | "Comparator": "BHATT"}
73 | :return:
74 | """
75 | name = {"Probe": self.probe.name, "Gallery": self.gallery.name}
76 | # if self.train_indexes is not None:
77 | name.update({"Train": self.train_size, "Test": self.test_size})
78 |
79 | return name
80 |
81 | def same_individual(self, probe_name, gallery_name):
82 | """
83 |
84 | :param probe_name:
85 | :param gallery_name:
86 | :return:
87 | """
88 | elem_id1 = re.search(self.id_regex, probe_name).group(0)
89 | elem_id2 = re.search(self.id_regex, gallery_name).group(0)
90 | return elem_id1 == elem_id2
91 |
92 | def same_individual_by_pos(self, probe_pos, gallery_pos, selected_set=None):
93 | """
94 |
95 | :param probe_pos:
96 | :param gallery_pos:
97 | :param selected_set:
98 | :return:
99 | """
100 | if selected_set == "train":
101 | probe_name = self.probe.files_train[probe_pos]
102 | gallery_name = self.gallery.files_train[gallery_pos]
103 | elif selected_set == "test":
104 | probe_name = self.probe.files_test[probe_pos]
105 | gallery_name = self.gallery.files_test[gallery_pos]
106 | elif selected_set is None:
107 | probe_name = self.probe.files[probe_pos]
108 | gallery_name = self.gallery.files[gallery_pos]
109 | else:
110 | raise ValueError("selected_set must be None, \"train\" or \"test\"")
111 | return self.same_individual(probe_name, gallery_name)
112 |
113 | def generate_train_set(self, train_size=None, test_size=None, rand_state=None):
114 |         """
115 |         Split probe and gallery files into train and test sets.
116 |         If both sizes are None, everything goes to the test set.
117 |
118 |         :param train_size: float or int. If float, should be between 0.0 and 1.0 and
119 |             represent the proportion of the dataset to include in the train split.
120 |             If int, the absolute number of train samples.
121 |         :param test_size: float or int, interpreted like train_size
122 |         :param rand_state: seed for the shuffle
123 |         :return:
124 |         """
126 | # self.probe.clear()
127 | # self.gallery.clear()
128 |
129 | if train_size is None and test_size is None:
130 | self.probe.files_train, self.probe.files_test = [], self.probe.files
131 | self.gallery.files_train, self.gallery.files_test = [], self.gallery.files
132 | self.train_indexes, self.test_indexes = [], list(range(0, len(self.probe.files)))
133 | else:
134 | n_samples = len(self.probe.files)
135 | cv = ShuffleSplit(n_samples, test_size=test_size, train_size=train_size, random_state=rand_state)
136 | train_indexes, test_indexes = next(iter(cv))
137 | arrays = [self.probe.files, self.gallery.files]
138 | self.probe.files_train, self.probe.files_test, self.gallery.files_train, self.gallery.files_test = \
139 | list(chain.from_iterable((safe_indexing(a, train_indexes),
140 | safe_indexing(a, test_indexes)) for a in arrays))
141 | self.train_indexes, self.test_indexes = train_indexes, test_indexes
142 |
143 | self.train_size = len(self.train_indexes)
144 | self.test_size = len(self.test_indexes)
145 | # self.probe.files_train, self.probe.files_test, self.gallery.files_train, self.gallery.files_test = \
146 | # train_test_split(self.probe.files, self.gallery.files, train_size=train_size, test_size=test_size,
147 | # random_state=0)
148 | # Assuming same gallery and probe size
149 |
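generate_train_set keeps probe and gallery aligned by drawing a single set of indices and slicing both lists with it. A numpy-only sketch of that idea (the real code uses sklearn's ShuffleSplit; the helper name here is illustrative):

```python
import numpy as np

def shuffle_split(n_samples, train_size, test_size, rand_state=None):
    """Draw one permutation and carve disjoint train/test index arrays from it."""
    perm = np.random.RandomState(rand_state).permutation(n_samples)
    return perm[:train_size], perm[train_size:train_size + test_size]

probe = ["cam_a/%03d_45.bmp" % i for i in range(10)]
gallery = ["cam_b/%03d_90.bmp" % i for i in range(10)]

train_idx, test_idx = shuffle_split(len(probe), train_size=4, test_size=6, rand_state=0)
# slicing both lists with the same indices keeps person i paired across cameras
probe_train = [probe[i] for i in train_idx]
gallery_train = [gallery[i] for i in train_idx]
```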
150 | def change_probe_and_gallery(self, probe_list, gallery_list, train_size=0):
151 |         """
152 |         Replace the test probe/gallery with the given file lists; optionally draw a new train set.
153 |         :param probe_list: file name fragments selecting probe test images
154 |         :param gallery_list: file name fragments selecting gallery test images
155 |         :param train_size: if > 0, sample this many train pairs from files not in the test set
156 |         """
157 | probe_files = [(idx, f) for idx, f in enumerate(self.probe.files) for pl_elem in probe_list if pl_elem in f]
158 | self.probe.files_test = [elem[1] for elem in probe_files]
159 | self.test_indexes = [elem[0] for elem in probe_files]
160 | self.gallery.files_test = [f for f in self.gallery.files for gl_elem in gallery_list if gl_elem in f]
161 | self.test_size = len(self.test_indexes)
162 | if train_size > 0:
163 | self.train_size = train_size
164 | # http://stackoverflow.com/a/15940459/3337586
165 | selected = np.in1d(self.probe.files, self.probe.files_test)
166 | probe_train_files = [self.probe.files[i] for i in np.where(~selected)[0]]
167 | permutation = np.random.RandomState().permutation(len(probe_train_files))
168 |             ind_train = permutation[:train_size]
169 |             self.train_indexes = ind_train
170 |             self.probe.files_train = [probe_train_files[i] for i in ind_train]
171 |             gallery_train_files = [self.gallery.files[i] for i in np.where(~selected)[0]]
172 |             self.gallery.files_train = [gallery_train_files[i] for i in ind_train]
173 |
174 | def load_images(self):
175 | """
176 |
177 | :return:
178 | """
179 | self.probe.load_images()
180 | self.gallery.load_images()
181 |
182 | # def unload(self):
183 | # self.probe.unload()
184 | # self.gallery.unload()
185 | # del self.gallery
186 | # del self.probe
187 |
--------------------------------------------------------------------------------
/package/execution.py:
--------------------------------------------------------------------------------
1 | __author__ = 'luigolas'
2 |
3 | # import time
4 | # import re
5 | # from memory_profiler import profile
6 | # import itertools
7 | import sys
8 | from package.utilities import InitializationError
9 | import numpy as np
10 | from package.dataset import Dataset
11 | from package.feature_extractor import FeatureExtractor
12 | from package.feature_matcher import FeatureMatcher
13 |
14 |
15 | class Execution(object):
16 | def __init__(self, dataset, preproc, feature_extractor, feature_matcher, train_split=None):
17 |
18 | if not isinstance(dataset, Dataset):
19 | raise TypeError("dataset is not of class Dataset")
20 | else:
21 | self.dataset = dataset
22 |
23 | if train_split is not None:
24 | self.dataset.generate_train_set(train_split)
25 |
26 | self.preprocessing = preproc
27 |
28 | if not isinstance(feature_extractor, FeatureExtractor):
29 | raise TypeError("feature_extractor is not of class FeatureExtractor")
30 | else:
31 | self.feature_extractor = feature_extractor
32 |
33 | if not isinstance(feature_matcher, FeatureMatcher):
34 | raise TypeError("feature_matcher is not of class FeatureMatcher")
35 | else:
36 | self.feature_matcher = feature_matcher
37 |
38 | self.matching_matrix = None
39 | # self.ranking_matrix = None
40 |
41 | def set_probe(self, folder):
42 | self.dataset.set_probe(folder)
43 |
44 | def set_gallery(self, folder):
45 | self.dataset.set_gallery(folder)
46 |
47 | def set_id_regex(self, regex):
48 | self.dataset.set_id_regex(regex)
49 |
50 | def set_feature_extractor(self, feature_extractor):
51 | self.feature_extractor = feature_extractor
52 |
53 | def set_feature_matcher(self, feature_matcher):
54 | self.feature_matcher = feature_matcher
55 |
56 | def add_preprocessing(self, preprocessing):
57 | if self.preprocessing is None:
58 | self.preprocessing = [preprocessing]
59 | else:
60 | self.preprocessing.append(preprocessing)
61 |
62 | def name(self):
63 | """
64 | example:
65 | P2_cam1_P2_cam2_Grabcut2OptimalMask_Histogram_IIP_[5, 5, 5]_6R_3D_BHATT
66 | :return:
67 | """
68 | # TODO Set unique and descriptive name
69 | name = "%s_%s_%s" % (
70 | self.dataset.name(), self.feature_extractor.name, self.feature_matcher.name)
71 | return name
72 |
73 | def dict_name(self):
74 | """
75 | example:
76 | name = {"Probe": "P2_cam1", "Gallery": "P2_cam2", "Segment": "Grabcut", "SegIter": "2", "Mask": "OptimalMask",
77 | "Evaluator": "Histogram", "EvColorSpace": "IIP", "EvBins": "[5, 5, 5]", "EvDims": "3D", "Regions": "6R",
78 | "Comparator": "BHATT"}
79 | :return:
80 | """
81 | name = {}
82 | if self.preprocessing is not None:
83 | for val, preproc in enumerate(self.preprocessing):
84 | # name.update(preproc.dict_name())
85 | # name.update({"Preproc%d" % val: preproc.dict_name()})
86 | preproc_name = preproc.dict_name()
87 | if preproc_name:
88 | name.update({"Preproc%d" % val: preproc_name["name"]})
89 | if "params" in preproc_name:
90 | name.update({"Preproc%dParams" % val: preproc_name["params"]})
91 | name.update(self.dataset.dict_name())
92 | name.update(self.feature_extractor.dict_name())
93 | name.update(self.feature_matcher.dict_name())
94 | return name
95 |
96 | def run(self, verbosity=2, fe4train_set=False, njobs=-1):
97 | """
98 |
99 | :param verbosity:
100 | :param fe4train_set:
101 | :param njobs:
102 | :return:
103 | """
104 | if sys.gettrace() is None:
105 | n_jobs = njobs
106 | else:
107 | n_jobs = 1
108 |
109 | self._check_initialization()
110 |
111 | if verbosity > 0: print("Loading dataset images")
112 | self.dataset.load_images()
113 |
114 | if verbosity > 0: print("Preprocessing")
115 | self._preprocess(n_jobs, verbosity)
116 |
117 | if verbosity > 0: print("Extracting Features")
118 | self._extract_dataset(n_jobs, verbosity, fe4train_set)
119 |
120 | # Calculate Comparison matrix
121 | if verbosity > 0: print("Matching Features")
122 | self._calc_matching_matrix(n_jobs, verbosity)
123 |
124 | # Calculate Ranking matrix
125 | if verbosity > 0: print("Calculating Ranking Matrix")
126 | ranking_matrix = self._calc_ranking_matrix()
127 |
128 | if verbosity > 0: print("Execution Finished")
129 | return ranking_matrix
130 |
131 | def unload(self):
132 | global probeX, galleryY, probeXtest, galleryYtest
133 | self.dataset.unload()
134 | del self.dataset
135 | self.feature_matcher.weigths = None
136 | del self.feature_matcher
137 | self.feature_extractor.bins = None
138 | self.feature_extractor.regions = None
139 | del self.feature_extractor
140 | for preproc in self.preprocessing:
141 | del preproc
142 | del self.preprocessing
143 | del self.matching_matrix
144 | # del self.ranking_matrix
145 | probeX = None
146 | galleryY = None
147 | probeXtest = None
148 | galleryYtest = None
149 |
150 | def _check_initialization(self):
151 | if self.dataset.probe is None:
152 | raise InitializationError("probe not initialized")
153 | if self.dataset.gallery is None:
154 | raise InitializationError("gallery not initialized")
155 | if self.dataset.id_regex is None:
156 | # self.dataset.set_id_regex("P[1-6]_[0-9]{3}")
157 | raise InitializationError("id_regex not initialized")
158 | if self.feature_extractor is None:
159 | raise InitializationError("feature_extractor not initialized")
160 | if self.feature_matcher is None:
161 | raise InitializationError("feature_matcher not initialized")
162 |
163 | def _preprocess(self, n_jobs=1, verbosity=2):
164 | if not self.preprocessing:
165 | return
166 | else:
167 | for preproc in self.preprocessing:
168 | preproc.preprocess_dataset(self.dataset, n_jobs, verbosity)
169 |
170 | def _extract_dataset(self, n_jobs=-1, verbosity=2, extract_train=False):
171 | self.feature_extractor.extract_dataset(self.dataset, n_jobs, verbosity, extract_train)
172 |
173 | def _calc_matching_matrix(self, n_jobs=-1, verbosity=2):
174 | self.matching_matrix = self.feature_matcher.match_sets(self.dataset.probe.fe_test,
175 | self.dataset.gallery.fe_test,
176 | n_jobs, verbosity)
177 |
178 | def _calc_ranking_matrix(self):
179 | import package.feature_matcher as Comparator
180 | if self.feature_matcher.method == Comparator.HISTCMP_CORRE or \
181 | self.feature_matcher.method == Comparator.HISTCMP_INTERSECT:
182 | # The biggest value, the better
183 | return np.argsort(self.matching_matrix, axis=1)[:, ::-1].astype(np.uint16)
184 | # Reverse order by axis 1
185 | else: # The lower value, the better
186 | return np.argsort(self.matching_matrix).astype(np.uint16)
187 |
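A toy check of the two branches in `_calc_ranking_matrix`: distance-like scores rank ascending, while similarity-like scores (correlation, intersection) rank by the reversed argsort:

```python
import numpy as np

# one row per probe, one column per gallery element
distances = np.array([[0.2, 0.9, 0.5],
                      [0.7, 0.1, 0.3]])

# lower value is better: plain argsort gives the best match first
rank_dist = np.argsort(distances, axis=1).astype(np.uint16)

# higher value is better: argsort, then reverse each row
similarities = 1.0 - distances
rank_sim = np.argsort(similarities, axis=1)[:, ::-1].astype(np.uint16)
```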
--------------------------------------------------------------------------------
/package/feature_extractor.py:
--------------------------------------------------------------------------------
1 | import numpy
2 | import cv2
3 | import time
4 | from package.image import CS_IIP, CS_BGR, CS_HSV, iip_max, iip_min, hsvmax, colorspace_name
5 | from sklearn.externals.joblib import Parallel, delayed
6 | import numpy as np
7 |
8 | __author__ = 'luigolas'
9 |
10 |
11 | def _parallel_transform(fe, *args):
12 | return fe.extract(*args)
13 |
14 |
15 | class FeatureExtractor(object):
16 | def extract_dataset(self, dataset, n_jobs, verbosity=2, calc4train_set=False):
17 | """
18 |
19 |
20 |
21 | :param dataset:
22 | :param n_jobs:
23 | :param verbosity:
24 | :param calc4train_set:
25 | :return:
26 | """
27 | raise NotImplementedError("Please Implement extract_dataset method")
28 |
29 |     def extract(self, image, *args):
30 |         """
31 |
32 |         :param image:
33 |         :param args: extractor-specific arguments (e.g. mask, regions, weight maps)
34 |         :raise NotImplementedError:
35 |         """
36 |         raise NotImplementedError("Please Implement extract method")
37 |
38 | def dict_name(self):
39 | """
40 |
41 | :return:
42 | """
43 | raise NotImplementedError("Please Implement dict_name method")
44 |
45 |
46 | class Histogram(FeatureExtractor):
47 | color_ranges = {CS_BGR: [0, 256, 0, 256, 0, 256],
48 | CS_IIP: [80.86236, 140.3336, -17.9941, 18.1964, -73.6702, -42.93588],
49 | # CS_IIP: [iip_min, iip_max] * 3,
50 | CS_HSV: [0, hsvmax[0], 0, hsvmax[1], 0, hsvmax[2]]}
51 |
52 | # color_channels = {CS_BGR: [0, 1, 2],
53 | # CS_IIP: [0, 1, 2],
54 | # CS_HSV: [0, 1, 2]}
55 |
56 | def __init__(self, color_space, bins=None, dimension="1D"):
57 | if color_space is None or not isinstance(color_space, int):
58 | raise AttributeError("Colorspace parameter must be valid")
59 | self._colorspace = color_space
60 | if bins is None:
61 | bins = 32
62 | if type(bins) == int:
63 | if color_space == CS_BGR:
64 | bins = [bins] * 3
65 | elif color_space == CS_HSV:
66 | bins = [bins] * 2
67 | elif color_space == CS_IIP:
68 | bins = [bins] * 3
69 |
70 | self._channels = [i for i, e in enumerate(bins) if e != 0]
71 | self._original_bins = bins
72 | self._bins = [i for i in bins if i != 0]
73 |
74 | if not isinstance(dimension, str) or not (dimension == "1D" or dimension == "3D"):
75 | raise AttributeError("Dimension must be \"1D\" or \"3D\"")
76 | self._dimension = dimension
77 |
78 | self.name = "Histogram_%s_%s_%s" % (colorspace_name[self._colorspace], self._bins, self._dimension)
79 |
80 | def extract_dataset(self, dataset, n_jobs=-1, verbosity=2, calc4train_set=False):
81 | if verbosity > 1: print(" Calculating Histograms %s, %s" % (colorspace_name[self._colorspace],
82 | str(self._original_bins)))
83 | if calc4train_set:
84 | images = dataset.probe.images_train + dataset.probe.images_test
85 | images += dataset.gallery.images_train + dataset.gallery.images_test
86 | else:
87 | images = dataset.probe.images_test
88 | images += dataset.gallery.images_test
89 |
90 | if dataset.probe.masks_test:
91 | if calc4train_set:
92 | masks = dataset.probe.masks_train + dataset.probe.masks_test
93 | masks += dataset.gallery.masks_train + dataset.gallery.masks_test
94 | else:
95 | masks = dataset.probe.masks_test
96 | masks += dataset.gallery.masks_test
97 | else:
98 | masks = [None] * (len(images))
99 |
100 | if dataset.probe.regions_test:
101 | if calc4train_set:
102 | regions = dataset.probe.regions_train + dataset.probe.regions_test
103 | regions += dataset.gallery.regions_train + dataset.gallery.regions_test
104 | else:
105 | regions = dataset.probe.regions_test
106 | regions += dataset.gallery.regions_test
107 | else:
108 | regions = [None] * (len(images))
109 |
110 | if dataset.probe.maps_test:
111 | if calc4train_set:
112 | maps = dataset.probe.maps_train + dataset.probe.maps_test
113 |                 maps += dataset.gallery.maps_train + dataset.gallery.maps_test
114 | else:
115 | maps = dataset.probe.maps_test
116 | maps += dataset.gallery.maps_test
117 | else:
118 | maps = [None] * (len(images))
119 |
120 | args = ((im, mask, region, m) for im, mask, region, m in zip(images, masks, regions, maps))
121 |
122 | results = Parallel(n_jobs)(delayed(_parallel_transform)(self, im, mask, reg, m) for im, mask, reg, m in args)
123 |
124 | test_len = dataset.test_size
125 | if calc4train_set:
126 | train_len = dataset.train_size
127 | dataset.probe.fe_train = np.asarray(results[:train_len])
128 | dataset.probe.fe_test = np.asarray(results[train_len:train_len + test_len])
129 | dataset.gallery.fe_train = np.asarray(results[train_len + test_len:-test_len])
130 | dataset.gallery.fe_test = np.asarray(results[-test_len:])
131 | else:
132 | dataset.probe.fe_test = np.asarray(results[:test_len])
133 | dataset.gallery.fe_test = np.asarray(results[-test_len:])
134 |
135 | def extract(self, image, mask=None, regions=None, weights=None, normalization=cv2.NORM_MINMAX):
136 | """
137 |
138 | :param image:
139 | :param mask:
140 | :param regions:
141 | :param weights:
142 | :param normalization:
143 | :return:
144 | """
145 |
146 | if self._colorspace and self._colorspace != image.colorspace:
147 | # image = image.to_color_space(self._colorspace, normed=True)
148 | image = image.to_color_space(self._colorspace)
149 |
150 | if regions is None:
151 | regions = [[0, image.shape[0], 0, image.shape[1]]]
152 |
153 | histograms = []
154 | for region in regions:
155 | im = image[region[0]:region[1], region[2]:region[3]]
156 |
157 | if mask is not None and len(mask):
158 | m = mask[region[0]:region[1], region[2]:region[3]]
159 | else:
160 | m = mask
161 |
162 | if weights is not None and len(weights):
163 | weight = weights[region[0]:region[1], region[2]:region[3]]
164 | else:
165 | weight = 1
166 |
167 | hist = self.calcHistNormalized(im, m, weight, normalization)
168 | histograms.append(hist)
169 |
170 | return numpy.asarray(histograms).flatten().clip(0)
171 |
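Region-wise extraction as in `extract` above can be sketched with numpy alone, using `np.histogram` in place of `cv2.calcHist` (an illustrative helper, not the class API):

```python
import numpy as np

def region_histograms(image, regions, bins=8, value_range=(0, 256)):
    """Concatenate one histogram per [row0, row1, col0, col1] region."""
    hists = []
    for r0, r1, c0, c1 in regions:
        h, _ = np.histogram(image[r0:r1, c0:c1], bins=bins, range=value_range)
        hists.append(h)
    return np.concatenate(hists)

img = np.arange(64, dtype=np.uint8).reshape(8, 8)             # toy grayscale image
feats = region_histograms(img, [[0, 4, 0, 8], [4, 8, 0, 8]])  # upper/lower halves
```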
172 | # noinspection PyPep8Naming
173 | def calcHistNormalized(self, image, mask, weight, normalization):
174 | """
175 |
176 |
177 | :param image:
178 | :param mask:
179 | :param weight:
180 | :param normalization:
181 | :return:
182 | """
183 | # Using mask and map
184 | if mask is not None and len(mask):
185 | mask = mask * weight
186 | elif type(weight) != int:
187 | mask = weight
188 | else:
189 | mask = None
190 |
191 | # mask = weight # Using only MAP
192 |
193 | # mask = np.ones_like(mask) # Not using mask nor map
194 |
195 |         # All commented: use only the mask
196 |
197 | if self._dimension == "3D":
198 | # TODO add weighted ?
199 | # _channels = list(range(len(self._bins)))
200 |             channels = self._channels  # channel indices with non-zero bins
201 |
202 | hist = cv2.calcHist([image], channels,
203 | mask, self._bins, Histogram.color_ranges[image.colorspace])
204 | # hist = self.normalize_hist(hist, normalization)
205 | else: # 1D case
206 | hist = []
207 | for channel, bins in zip(self._channels, self._bins):
208 | if type(weight) != int:
209 | h = calc_hist(image, channel, mask, bins,
210 | Histogram.color_ranges[image.colorspace][channel*2:channel*2+2])
211 | # [0., 1.])
212 |
213 | else:
214 | h = cv2.calcHist([image], [channel], mask, [bins],
215 | Histogram.color_ranges[image.colorspace][channel*2:channel*2+2])
216 | # [0., 1.])
217 |
218 | # hist.extend(self.normalize_hist(h.astype(np.float32), normalization))
219 | hist.extend(h.astype(np.float32))
220 | # In 1D case it can't be converted to numpy array as it might have different dimension (bins) sizes
221 | return hist
222 |
223 |
224 | @staticmethod
225 | def normalize_hist(histogram, normalization=cv2.NORM_MINMAX):
226 | """
227 | Normalize a histogram in place. When a mask completely occludes a region the histogram is
228 | all zeros; guard with histogram.max() != 0 before normalizing in that case.
229 | cv2.normalize(histogram, histogram, 0, 1, cv2.NORM_MINMAX, -1)  # Scale values to [0, 1] (default)
230 | cv2.normalize(histogram, histogram, 1, 0, cv2.NORM_L1)  # Divide by the sum, so values add up to 1
231 | cv2.normalize(histogram, histogram, 1, 0, cv2.NORM_INF)  # Divide all values by the maximum
232 |
233 | For comparison purposes any of the three is adequate; NORM_MINMAX is used by default.
234 | :param histogram: histogram to normalize (modified in place)
235 | :param normalization: cv2 normalization flag; if None, no normalization takes place
236 | :return: the normalized histogram
237 | """
238 | if normalization == cv2.NORM_MINMAX:
239 | cv2.normalize(histogram, histogram, 0, 1, cv2.NORM_MINMAX, -1) # Default
240 | elif (normalization == cv2.NORM_L1) or (normalization == cv2.NORM_INF):
241 | cv2.normalize(histogram, histogram, 1, 0, normalization)
242 |
243 | return histogram
244 |
245 | def dict_name(self):
246 | return {"Feature_Extractor": "Histogram", "FeColorSpace": colorspace_name[self._colorspace],
247 | "FeBins": str(self._original_bins), "FeDim": self._dimension}
248 |
249 |
250 | def calc_hist(im, ch, weight, bins, hist_range):
251 | """
252 | Inspiration: https://gist.github.com/nkeim/4455635
253 | Fast 1D histogram calculation. Must have all values initialized
254 | :param im: image to calculate histogram. Must be of shape (Height, Width, channels)
255 | :param ch: channel to calculate histogram
256 | :param weight: weight map for values on histogram. Must be of shape (Height, Width) and float type
257 | :param bins: Numbers of bins for histogram
258 | :param hist_range: Range of values on selected channel
259 | :return:
260 | """
261 | # return np.bincount(np.digitize(im[:, :, ch].flatten(), np.linspace(hist_range[0], hist_range[1], bins + 1)[1:-1]),
262 | # weights=weight.flatten(), minlength=bins)
263 | return np.bincount(np.digitize(im[:, :, ch].flatten(), np.linspace(hist_range[0], hist_range[1], bins)[1:]),
264 | weights=weight.flatten(), minlength=bins)
265 |
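The bincount/digitize trick used by `calc_hist` above can be exercised on its own. A minimal sketch with hypothetical sample values and weights (not data from the project):

```python
import numpy as np

def weighted_hist(values, weights, bins, hist_range):
    """Weighted 1D histogram using the same bincount/digitize trick as calc_hist."""
    # Drop the leftmost edge so digitize maps values into indexes 0..bins-1
    edges = np.linspace(hist_range[0], hist_range[1], bins)[1:]
    idx = np.digitize(values, edges)
    return np.bincount(idx, weights=weights, minlength=bins)

values = np.array([0.1, 0.4, 0.9])   # hypothetical channel values
weights = np.array([1.0, 0.5, 2.0])  # hypothetical per-pixel weights
h = weighted_hist(values, weights, bins=4, hist_range=(0.0, 1.0))
# h is [1.0, 0.5, 2.0, 0.0]: each value lands in its bin with its weight
```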
--------------------------------------------------------------------------------
/package/feature_matcher.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python2
2 | # coding=utf-8
3 | import cv2
4 | import math
5 | import itertools
6 | import multiprocessing
7 | import numpy as np
8 | from sklearn.externals.joblib import Parallel, delayed
9 |
10 | __author__ = 'luigolas'
11 |
12 | HISTCMP_CORRE = cv2.HISTCMP_CORREL # 0
13 | HISTCMP_CHISQR = cv2.HISTCMP_CHISQR # 1
14 | HISTCMP_INTERSECT = cv2.HISTCMP_INTERSECT # 2
15 | HISTCMP_BHATTACHARYYA = cv2.HISTCMP_BHATTACHARYYA # 3
16 | HISTCMP_EUCLID = 4
17 | method_names = ["CORRE", "CHISQ", "INTER", "BHATT", "EUCLI"]
18 |
19 |
20 | def _parallel_match(args):
21 | """Unpack a (matcher, (features1, features2)) tuple, as built in match_sets."""
22 | comp, pair = args
23 | return comp.match(*pair)
25 |
26 |
27 | class FeatureMatcher(object):
28 | def match_sets(self, probe_fe, gallery_fe, n_jobs=-1, verbosity=2):
29 | raise NotImplementedError("Please Implement match_probe_gallery method")
30 |
31 | def match(self, ev1, ev2):
32 | """
33 | Compare two extracted feature vectors.
34 | :param ev1: first feature vector
35 | :param ev2: second feature vector
36 | :raise NotImplementedError: subclasses must implement this method
37 | """
38 | raise NotImplementedError("Please Implement match method")
39 |
40 | def dict_name(self):
41 | raise NotImplementedError("Please Implement dict_name method")
42 |
43 |
44 | class HistogramsCompare(FeatureMatcher):
45 | """
46 | Feature matcher that compares concatenated histogram vectors
47 | :param comp_method: one of the predefined HISTCMP_* comparison methods
48 | :raise AttributeError: if comp_method is not one of the predefined methods
49 | """
50 |
51 | def __init__(self, comp_method, weights=None):
52 | if comp_method not in [0, 1, 2, 3, 4]: # Not the best way to check
53 | raise AttributeError("Comparison method must be one of the predefined for CompHistograms")
54 | self.method = comp_method
55 | self._weights = weights
56 | self.name = method_names[self.method]
57 |
58 | def match_sets(self, set1_fe, set2_fe, n_jobs=-1, verbosity=2):
59 | """
60 | Compare every feature vector in set1_fe against every feature vector in set2_fe.
61 | :param set1_fe: first set of feature vectors (e.g. probe)
62 | :param set2_fe: second set of feature vectors (e.g. gallery)
63 | :param n_jobs: number of parallel jobs; -1 uses all available cores
64 | :return: comparison matrix of shape (len(set1_fe), len(set2_fe))
65 | """
66 | if verbosity > 1: print(" Comparing Histograms")
67 |
68 | args = ((elem1, elem2) for elem1 in set1_fe for elem2 in set2_fe)
69 | args = zip(itertools.repeat(self), args)
70 |
71 | if n_jobs == 1:
72 | results = Parallel(n_jobs)(delayed(_parallel_match)(e) for e in args)
73 | comparison_matrix = np.asarray(results, np.float32)
74 | else:
75 | if n_jobs == -1:
76 | n_jobs = None
77 | pool = multiprocessing.Pool(processes=n_jobs)
78 | comparison_matrix = np.fromiter(pool.map(_parallel_match, args), np.float32)
79 | pool.close()
80 | pool.join()
81 |
82 | # size = math.sqrt(comparison_matrix.shape[0])
83 | # comparison_matrix.shape = (size, size)
84 | comparison_matrix.shape = (len(set1_fe), len(set2_fe))
85 | return comparison_matrix
86 |
87 | def match(self, hists1, hists2):
88 | """
89 | Compare two concatenated histogram vectors region by region, weighting each region by self._weights.
90 | :param hists1: first concatenated histogram vector
91 | :param hists2: second concatenated histogram vector
92 | :return: weighted sum of the per-region comparison values
93 | """
94 | HistogramsCompare._check_size_params(hists1, hists2, self._weights)
95 |
96 | if self._weights is None:
97 | weights = [1]
98 | else:
99 | weights = self._weights
100 |
101 | comp_val = []
102 | num_histograms = len(weights)
103 | hist1 = hists1.reshape((num_histograms, hists1.shape[0] / num_histograms))
104 | hist2 = hists2.reshape((num_histograms, hists2.shape[0] / num_histograms))
105 | # hist1 = np.concatenate((np.asarray([0.] * 68, np.float32), hists1))
106 | # hist2 = np.concatenate((np.asarray([0.] * 68, np.float32), hists2))
107 | # hist1 = hist1.reshape((num_histograms, hist1.shape[0] / num_histograms))
108 | # hist2 = hist2.reshape((num_histograms, hist2.shape[0] / num_histograms))
109 | for h1, h2 in zip(hist1, hist2):
110 | if np.count_nonzero(h1) == 0 and np.count_nonzero(h2) == 0:
111 | # Might return inequality when both histograms are zero. So we compare two simple histogram to ensure
112 | # equality return value
113 | comp_val.append(self._compareHist(np.asarray([1], np.float32), np.asarray([1], np.float32)))
114 | else:
115 | comp_val.append(self._compareHist(h1, h2))
116 | comp_val = sum([i * j for i, j in zip(comp_val, weights)])
117 | return comp_val
118 |
119 | def _compareHist(self, h1, h2):
120 | """
121 | Compare two histograms with the selected method.
122 | :param h1: first histogram
123 | :param h2: second histogram
124 | :return: comparison value (Euclidean distance or cv2.compareHist result)
125 | """
126 | if self.method == HISTCMP_EUCLID:
127 | return np.linalg.norm(h1 - h2)
128 | else:
129 | return cv2.compareHist(h1, h2, self.method)
130 |
131 | @staticmethod
132 | def _check_size_params(hist1, hist2, weights):
133 | """
134 | Check that histogram sizes are compatible with each other and with the weights.
135 | :param hist1: first concatenated histogram vector
136 | :param hist2: second concatenated histogram vector
137 | :param weights: list of per-region weights, or None
138 | :raise IndexError: if the sizes are not compatible
139 | """
140 | # if not isinstance(weights, list):
141 | # raise TypeError("Weights parameter must be a list of values")
142 | if weights: # Both initialized
143 | if hist1.shape[0] % len(weights) != 0:
144 | raise IndexError("Size of hist and weights not compatible; hist: "
145 | + str(hist1.shape[0]) + "; weights: " + str(len(weights)))
146 | if hist2.shape[0] % len(weights) != 0:
147 | raise IndexError("Size of hist and weights not compatible; hist: "
148 | + str(hist2.shape[0]) + "; weights: " + str(len(weights)))
149 | elif hist1.shape[0] != hist2.shape[0]:
150 | raise IndexError(
151 | "Size of histograms must be the same. Size1:%d _ Size2:%d" % (hist1.shape[0], hist2.shape[0]))
152 |
153 | def dict_name(self):
154 | return {"FMatcher": method_names[self.method], "FMatchWeights": str(self._weights)}
155 |
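The per-region weighting scheme in `HistogramsCompare.match` can be sketched without OpenCV by using the `HISTCMP_EUCLID` branch (`np.linalg.norm`). The vectors and weights below are hypothetical:

```python
import numpy as np

def match_euclid(hists1, hists2, weights):
    """Split concatenated region histograms and combine per-region
    Euclidean distances with the given weights (mirrors match())."""
    n = len(weights)
    h1 = hists1.reshape((n, hists1.shape[0] // n))
    h2 = hists2.reshape((n, hists2.shape[0] // n))
    dists = [np.linalg.norm(a - b) for a, b in zip(h1, h2)]
    return sum(d * w for d, w in zip(dists, weights))

# Two regions of two bins each; only the first region differs
a = np.array([1., 0., 0., 1.], np.float32)
b = np.array([0., 1., 0., 1.], np.float32)
score = match_euclid(a, b, [0.5, 0.5])  # 0.5 * sqrt(2) + 0.5 * 0
```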
--------------------------------------------------------------------------------
/package/image.py:
--------------------------------------------------------------------------------
1 | __author__ = 'luigolas'
2 |
3 | import numpy as np
4 | import cv2
5 | from package.utilities import safe_ln, FileNotFoundError
6 | # from package.app import DB
7 |
8 | CS_IIP = 1
9 | CS_BGR = 2
10 | CS_HSV = 3
11 | CS_YCrCb = 4
12 |
13 | colorspace_name = ["", "IIP", "BGR", "HSV", "YCrCb"]
14 |
15 | iipA = np.asarray([[27.07439, -0.2280783, -1.806681],
16 | [-5.646736, -7.722125, 12.86503],
17 | [-4.163133, -4.579428, -4.576049]])
18 |
19 | iipB = np.asarray([[0.9465229, 0.2946927, -0.1313419],
20 | [-0.1179179, 0.9929960, 0.007371554],
21 | [0.09230461, -0.04645794, 0.9946464]])
22 |
23 | # iip_min = -576.559
24 | # iip_min = -622.28125
25 | # iip_max = 306.672
26 | iip_min = -73.670
27 | iip_max = 140.333
28 |
29 | bgr2iipClip = 3 # Minimal value for RGB image before converting to IIP
30 |
31 | hsvmax = [180, 256, 256]
32 |
33 |
34 | class Image(np.ndarray):
35 | """
36 | Image class, basically an ndarray with imgname and colorspace parameters
37 | --To make it work with multiprocessing Queue it needs to be picklable (reduce and getstate)
38 | ----More info: http://mail.scipy.org/pipermail/numpy-discussion/2007-April/027193.html
39 |
40 | :param imgname: path or name of the image file
41 | :param colorspace: one of the CS_* colorspace constants (default CS_BGR)
42 | :return:
43 | :raise FileNotFoundError:
44 | """
45 |
46 | def __new__(cls, ndarray, colorspace=CS_BGR, imgname=None):
47 | if not isinstance(ndarray, np.ndarray):
48 | raise ValueError('Argument must be a valid numpy.ndarray')
49 | return ndarray.view(cls)
50 |
51 | def __init__(self, src, colorspace=CS_BGR, imgname=None):
52 | if not imgname and isinstance(src, Image):
53 | imgname = src.imgname
54 | self.imgname = imgname
55 | self.colorspace = colorspace
56 | # This _method should NOT affect DB
57 |
58 | def __deepcopy__(self, memo):
59 | # Allowing deepcopy of Images
60 | result = Image(self.copy(), self.colorspace, self.imgname)
61 | memo[id(self)] = result
62 | return result
63 |
64 | def __copy__(self):
65 | # Allowing copy of Images
66 | result = Image(self.copy(), self.colorspace, self.imgname)
67 | return result
68 |
69 | def __array_finalize__(self, obj):
70 | if obj is None:
71 | return
72 | self.colorspace, self.imgname = getattr(obj, 'colorspace', CS_BGR), getattr(obj, 'imgname', None)
73 |
74 | def __reduce__(self):
75 | # Enables pickling, needed for multiprocessing
76 | object_state = list(np.ndarray.__reduce__(self))
77 | subclass_state = (self.imgname, self.colorspace)
78 | object_state[2] = (object_state[2], subclass_state)
79 | return tuple(object_state)
80 |
81 | def __setstate__(self, state):
82 | # Enables pickling, needed for multiprocessing
83 | nd_state, own_state = state
84 | np.ndarray.__setstate__(self, nd_state)
85 |
86 | imgname, colorspace = own_state
87 | self.imgname, self.colorspace = imgname, colorspace
88 |
89 | @classmethod
90 | def from_filename(cls, imgname, colorspace=CS_BGR):
91 | """
92 | Read an image from disk and wrap it as an Image in the requested colorspace.
93 | :param imgname: path to the image file
94 | :param colorspace: target CS_* colorspace constant (default CS_BGR)
95 | :return: the loaded Image :raise FileNotFoundError: if the file cannot be read
96 | """
97 | img = cv2.imread(imgname)
98 | if img is None:
99 | raise FileNotFoundError("Image file not found at specified path")
100 | img = Image(img)
101 | img.colorspace = CS_BGR
102 | img.imgname = imgname
103 | if colorspace != CS_BGR:
104 | img = img.to_color_space(colorspace)
105 | return img
106 |
107 | def to_color_space(self, colorspace, normed=False):
108 | """
109 | Convert this image to another colorspace; returns None for unimplemented conversions.
110 | :param colorspace: target CS_* colorspace constant; :param normed: for HSV, scale to [0, 1]
111 | :return: converted Image, or None if the conversion is not implemented
112 | """
113 | img = None
114 | if self.colorspace == CS_BGR:
115 | if colorspace == CS_IIP:
116 | img = self._bgr2iip()
117 | elif colorspace == CS_HSV:
118 | img = self._bgr2hsv(normed)
119 | elif colorspace == CS_YCrCb:
120 | img = self._bgr2YCrCb()
121 | elif self.colorspace == CS_YCrCb:
122 | if colorspace == CS_BGR:
123 | img = self._YCrCb2bgr()
124 | elif self.colorspace == CS_HSV:
125 | if colorspace == CS_BGR:
126 | img = self._hsv2bgr()
127 | return img
128 |
129 | def _bgr2iip(self):
130 | # Convert to CV_32F3 , floating point in range 0.0 , 1.0
131 | # imgf32 = np.float32(self)
132 | # imgf32 = imgf32*1.0/255
133 |
134 | # Clip values to min value
135 | src = self.clip(bgr2iipClip)
136 |
137 | # src_xyz = cv2.cvtColor(imgf32, cv2.COLOR_BGR2XYZ)
138 | src_xyz = cv2.cvtColor(src, cv2.COLOR_BGR2XYZ)
139 |
140 | # img_iip = np.empty_like(src_xyz, np.float32)
141 |
142 | # http://stackoverflow.com/a/25922418/3337586
143 | img_iip = np.einsum('ij,klj->kli', iipB, src_xyz)
144 | img_iip = safe_ln(img_iip)
145 | img_iip = np.einsum('ij,klj->kli', iipA, img_iip)
146 | img_iip = img_iip.astype(np.float32)
147 | # for row_index, row in enumerate(src_xyz):
148 | # for columm_index, element in enumerate(row):
149 | # element = np.dot(iipB, element)
150 | # element = safe_ln(element) # element = np.log(element)
151 | # element = np.dot(iipA, element)
152 | # # element = (element - iip_min)/(iip_max - iip_min) # Normalized to 0.0 ; 1.0
153 | # # element = np.around(element, decimals=6) # Remove some "extreme" precision
154 | # img_iip[row_index][columm_index] = element
155 | return Image(img_iip, CS_IIP, self.imgname)
156 |
157 | def _bgr2hsv(self, normed=False):
158 | if normed:
159 | # Convert to CV_32F3 , floating point in range 0.0 , 1.0
160 | imgf32 = np.float32(self)
161 | imgf32 = imgf32 * 1.0 / 255
162 | img = cv2.cvtColor(imgf32, cv2.COLOR_BGR2HSV)
163 | img[:, :, 0] /= 360.
164 | else:
165 | img = cv2.cvtColor(self, cv2.COLOR_BGR2HSV)
166 |
167 | img = Image(img, CS_HSV, self.imgname)
168 | return img
169 |
170 | def _hsv2bgr(self):
171 | img = cv2.cvtColor(self, cv2.COLOR_HSV2BGR)
172 | img = Image(img, CS_BGR, self.imgname)
173 | return img
174 |
175 | def _bgr2YCrCb(self):
176 | img = cv2.cvtColor(self, cv2.COLOR_BGR2YCrCb)
177 | img = Image(img, CS_YCrCb, self.imgname)
178 | return img
179 |
180 | def _YCrCb2bgr(self):
181 | img = cv2.cvtColor(self, cv2.COLOR_YCrCb2BGR)
182 | img = Image(img, CS_BGR, self.imgname)
183 | return img
184 |
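The `__reduce__`/`__setstate__` pair above is the standard recipe for pickling an `ndarray` subclass that carries extra attributes (needed so `Image` survives a multiprocessing queue). A minimal self-contained sketch of the same pattern, with a hypothetical `Tagged` class standing in for `Image`:

```python
import pickle
import numpy as np

class Tagged(np.ndarray):
    """ndarray subclass carrying a 'tag' attribute through pickling."""
    def __new__(cls, arr, tag=None):
        obj = np.asarray(arr).view(cls)
        obj.tag = tag
        return obj

    def __array_finalize__(self, obj):
        # Called on views/slices too; propagate the attribute with a default
        if obj is None:
            return
        self.tag = getattr(obj, 'tag', None)

    def __reduce__(self):
        # Append our extra state to the ndarray pickle state
        state = list(np.ndarray.__reduce__(self))
        state[2] = (state[2], self.tag)
        return tuple(state)

    def __setstate__(self, state):
        nd_state, tag = state
        np.ndarray.__setstate__(self, nd_state)
        self.tag = tag

t = Tagged(np.arange(3), tag="probe_01")
t2 = pickle.loads(pickle.dumps(t))  # round-trip keeps data and tag
```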
--------------------------------------------------------------------------------
/package/image_set.py:
--------------------------------------------------------------------------------
1 | __author__ = 'luigolas'
2 |
3 | import os
4 | from package.image import Image
5 | from package.utilities import ImagesNotFoundError, NotADirectoryError
6 |
7 |
8 | class ImageSet(object):
9 | def __init__(self, folder_name, name_ids=2):
10 | self.path = ImageSet._valid_directory(folder_name)
11 | self.name = "_".join(self.path.split("/")[-name_ids:])
12 | # name = "_".join(name)
13 | self.files = self._read_all_files()
14 | self.dataset_len = len(self.files)
15 | if self.dataset_len == 0:
16 | raise ImagesNotFoundError("At folder " + self.path)
17 |
18 | self.files_train = []
19 | self.files_test = []
20 | self.images_train = []
21 | self.images_test = []
22 | self.masks_train = []
23 | self.masks_test = []
24 | self.regions_train = []
25 | self.regions_test = []
26 | self.maps_train = []
27 | self.maps_test = []
28 | self.fe_train = []
29 | self.fe_test = []
30 |
31 | def _read_all_files(self):
32 | files = []
33 | for path, subdirs, files_order_list in os.walk(self.path):
34 | for filename in files_order_list:
35 | if ImageSet._valid_format(filename):
36 | f = os.path.join(path, filename)
37 | files.append(f)
38 | return sorted(files)
39 |
40 | def load_images(self):
41 | self.images_train = []
42 | self.images_test = []
43 |
44 | if not self.files_test: # If not initialized
45 | self.files_test = self.files
46 | self.files_train = []
47 |
48 | for imname in self.files_train:
49 | self.images_train.append(Image.from_filename(imname))
50 |
51 | for imname in self.files_test:
52 | self.images_test.append(Image.from_filename(imname))
53 |
54 |
55 | @staticmethod
56 | def _valid_format(name):
57 | return ((".jpg" in name) or (".png" in name) or (
58 | ".bmp" in name)) and "MASK" not in name and "FILTERED" not in name
59 |
60 | @staticmethod
61 | def _valid_directory(folder_name):
62 | if not os.path.isdir(folder_name):
63 | raise NotADirectoryError("Not a valid directory path: " + folder_name)
64 | if folder_name[-1] == '/':
65 | folder_name = folder_name[:-1]
66 | return folder_name
67 |
68 | def unload(self):
69 | self.images_train = None
70 | self.images_test = None
71 | self.masks_train = None
72 | self.masks_test = None
73 | self.files = None
74 | self.files_train = None
75 | self.files_test = None
76 |
77 |
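The file discovery done by `_read_all_files`/`_valid_format` can be sketched standalone with the same substring-based filter; the directory layout below is hypothetical:

```python
import os
import tempfile

def valid_format(name):
    # Same filter as ImageSet._valid_format: substring checks, skip MASK/FILTERED files
    has_ext = any(ext in name for ext in (".jpg", ".png", ".bmp"))
    return has_ext and "MASK" not in name and "FILTERED" not in name

def read_all_files(root):
    # Walk the directory tree and return every valid image path, sorted
    files = []
    for path, _subdirs, names in os.walk(root):
        files.extend(os.path.join(path, n) for n in names if valid_format(n))
    return sorted(files)

# Hypothetical layout: one valid image, one mask file, one image in a subfolder
tmp = tempfile.mkdtemp()
for rel in ("a.jpg", "b_MASK.png"):
    open(os.path.join(tmp, rel), "w").close()
os.mkdir(os.path.join(tmp, "sub"))
open(os.path.join(tmp, "sub", "c.bmp"), "w").close()
found = read_all_files(tmp)  # mask file is excluded, subfolder is included
```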
--------------------------------------------------------------------------------
/package/post_ranker.py:
--------------------------------------------------------------------------------
1 | from copy import copy
2 | import math
3 | import random
4 | import cv2
5 | from package.utilities import InitializationError
6 | from sklearn.ensemble import RandomForestRegressor, RandomTreesEmbedding
7 | from sklearn.semi_supervised import LabelSpreading, LabelPropagation
8 | import numpy as np
9 | from profilehooks import profile
10 |
11 | __author__ = 'luigolas'
12 |
13 |
14 | class PostRankOptimization(object):
15 | """
16 |
17 | :param balanced: if True, require balanced weak/strong negatives (visual expansion fills the gap)
18 | :param visual_expansion_use: whether visual expansions may be generated (only used when balanced)
19 | :param re_score_alpha: strength of the re-scoring update
20 | :param re_score_proportional: if True, scale the update by the current comparison value
21 | :param regions: Define which of the regions to be considered upper body and which legs. If None, not used.
22 | Must be of length 2 if defined.
23 | Example: regions=[[0, 1], [2, 3, 4]]
24 | :return:
25 | """
26 |
27 | def __init__(self, balanced=False, visual_expansion_use=True, re_score_alpha=0.15,
28 | re_score_proportional=True, regions=None, ve_estimators=20, ve_leafs=5): # OK
29 | self.subject = -1 # The order of the person to be Re-identified by the user (Initially -1)
30 | self.probe_name = ""
31 | self.probe_selected = None # Already feature extracted
32 | self.target_position = 0
33 | self.iteration = 0
34 | self.strong_negatives = []
35 | self.weak_negatives = []
36 | self.visual_expanded = []
37 | self.new_strong_negatives = []
38 | self.new_weak_negatives = []
39 | self.new_visual_expanded = []
40 | self.visual_expansion = RandomForestRegressor(n_estimators=ve_estimators, min_samples_leaf=ve_leafs,
41 | n_jobs=-1) # As in POP
42 |
43 | # regions = [[0], [1]]
44 | if regions is None:
45 | self.regions = [[0]]
46 | self.regions_parts = 1
47 | elif len(regions) == 2:
48 | self.regions = regions
49 | self.regions_parts = sum([len(e) for e in regions])
50 | else:
51 | raise ValueError("Regions size must be 2 (body region and legs region)")
52 | self.size_for_each_region_in_fe = 0 # Initialized at initial iteration
53 |
54 | self.execution = None
55 | self.ranking_matrix = None
56 | self.rank_list = None
57 | self.comp_list = None
58 | self.balanced = balanced
59 | if not balanced:
60 | self.use_visual_expansion = False
61 | else:
62 | self.use_visual_expansion = visual_expansion_use
63 | self.re_score_alpha = re_score_alpha
64 | self.re_score_proportional = re_score_proportional
65 |
66 | def set_ex(self, ex, rm): # OK
67 | self.execution = ex
68 | self.ranking_matrix = rm
69 | self.initial_iteration()
70 |
71 | def new_samples(self, weak_negatives_index, strong_negatives_index, absolute_index=False): # OK
72 | self.new_weak_negatives = [[e, idx] for [e, idx] in weak_negatives_index if
73 | [e, idx] not in self.weak_negatives]
74 | self.new_strong_negatives = [[e, idx] for [e, idx] in strong_negatives_index if
75 | [e, idx] not in self.strong_negatives]
76 | if not absolute_index:
77 | self.new_weak_negatives = [[self.rank_list[e], idx] for [e, idx] in self.new_weak_negatives]
78 | self.new_strong_negatives = [[self.rank_list[e], idx] for [e, idx] in self.new_strong_negatives]
79 |
80 | def _generate_visual_expansion(self): # OK
81 | n_estimators = self.visual_expansion.get_params()['n_estimators']
82 | selected_len = int(round(float(n_estimators) * (2 / 3.)))  # int, so it can be used as a slice index
83 | selected = np.random.RandomState()
84 | selected = selected.permutation(n_estimators)
85 | selected = selected[:selected_len]
86 | expansion = np.zeros_like(self.probe_selected)
87 | for s in selected:
88 | expansion += self.visual_expansion[s].predict(self.probe_selected).flatten()
89 | expansion /= float(selected_len)
90 | return expansion
91 |
92 | def new_subject(self): # OK
93 | if self.subject < self.execution.dataset.test_size:
94 | self.subject += 1
95 | self.probe_name = self.execution.dataset.probe.files_test[self.subject]
96 | self.probe_name = "/".join(self.probe_name.split("/")[-2:])
97 | self.probe_selected = self.execution.dataset.probe.fe_test[self.subject]
98 | self.rank_list = self.ranking_matrix[self.subject].copy()
99 | self.comp_list = self.execution.matching_matrix[self.subject].copy()
100 | self._calc_target_position()
101 | self.iteration = 0
102 | self.strong_negatives = []
103 | self.weak_negatives = []
104 | self.visual_expanded = []
105 | else:
106 | return # TODO Control situation
107 |
108 | def initial_iteration(self): # OK
109 | self.new_subject()
110 | self.size_for_each_region_in_fe = self.execution.dataset.gallery.fe_test.shape[1] / self.regions_parts
111 | if self.use_visual_expansion:
112 | self.visual_expansion.fit(self.execution.dataset.probe.fe_train, self.execution.dataset.gallery.fe_train)
113 |
114 | def iterate(self): # OK
115 | self.iteration += 1
116 | # print("Iteration %d" % self.iteration)
117 | to_expand_len = len(self.new_strong_negatives) - len(self.new_weak_negatives)
118 | if self.balanced:
119 | if to_expand_len < 0:
120 | return "There cannot be more weak negatives than strong negatives"
121 | elif to_expand_len > 0 and not self.use_visual_expansion:
122 | return "There must be the same number of weak negatives and strong negatives"
123 |
124 | for i in range(to_expand_len):
125 | # Randomly select if body or legs
126 | if len(self.regions) == 1:
127 | reg = 0
128 | else: # Assumes only two body parts
129 | reg = random.choice([0, 1])
130 | self.new_visual_expanded.append([self._generate_visual_expansion(), reg])
131 |
132 | self.reorder()
133 |
134 | self._calc_target_position()
135 |
136 | self.strong_negatives.extend(self.new_strong_negatives)
137 | self.weak_negatives.extend(self.new_weak_negatives)
138 | self.visual_expanded.extend(self.new_visual_expanded)
139 | self.new_strong_negatives = []
140 | self.new_weak_negatives = []
141 | self.new_visual_expanded = []
142 | return "OK"
143 |
144 | def collage(self, name, cols=5, size=20, min_gap_size=5): # OK
145 | """
146 | Adapted from http://answers.opencv.org/question/13876/
147 | read-multiple-images-from-folder-and-concat/?answer=13890#post-id-13890
148 |
149 | :param name: path to save collage imgf
150 | :param cols: num of columms for the collage
151 | :param size: num of images to show in collage
152 | :param min_gap_size: space between images
153 | :return:
154 | """
155 | # Add reference imgf first
156 | imgs = []
157 |
158 | img = self.execution.dataset.probe.images_test[self.subject].copy()
159 | img[0:10, 0:10] = [0, 255, 0]
160 | cv2.putText(img, "Probe", (10, 10), cv2.FONT_HERSHEY_SIMPLEX, 0.4, 255, 1)
161 | imgs.append(img)
162 |
163 | elements = self.rank_list.copy()
164 |
165 | # Open imgs and save in list
166 | size = min(len(elements), (size - 1))
167 | for i, elem in zip(range(size), elements):
168 | # print files_order_list[elem]
169 | img = self.execution.dataset.gallery.images_test[elem].copy()
170 | if self.execution.dataset.same_individual_by_pos(self.subject, elem, "test"):
171 | img[0:10, 0:10] = [0, 255, 0]
172 | cv2.putText(img, str(i), (5, 15), cv2.FONT_HERSHEY_SIMPLEX, 0.5, [0, 0, 255], 2)
173 |
174 | imgs.append(img)
175 |
176 | # let's find out the maximum dimensions
177 | max_width = 0
178 | max_height = 0
179 |
180 | for img in imgs:
181 | max_height = max(max_height, img.shape[0]) # rows
182 | max_width = max(max_width, img.shape[1]) # cols
183 |
184 | # number of images in y direction
185 | rows = int(math.ceil(len(imgs) / float(cols)))
186 |
187 | result = np.zeros(
188 | (rows * max_height + (rows - 1) * min_gap_size, cols * max_width + (cols - 1) * min_gap_size, 3), np.uint8)
189 |
190 | current_height = current_width = i = 0
191 |
192 | for y in range(rows):
193 | for x in range(cols):
194 | result[current_height:current_height + imgs[i].shape[0],
195 | current_width:current_width + imgs[i].shape[1]] = imgs[i]
196 | i += 1
197 | current_width += max_width + min_gap_size
198 | current_width = 0
199 | current_height += max_height + min_gap_size
200 |
201 | cv2.imwrite(name, result)
202 | cv2.imshow("tal", result)
203 | cv2.waitKey(1)
204 |
205 | def reorder(self): # OK
206 | raise NotImplementedError("Please Implement reorder method")
207 |
208 | def _calc_target_position(self): # OK
209 | for column, elemg in enumerate(self.rank_list):
210 | if self.execution.dataset.same_individual_by_pos(self.subject, elemg, selected_set="test"):
211 | target_position = column # If not multiview we can exit loop here
212 | self.target_position = target_position
213 | break
214 |
215 |
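Every `reorder` implementation in the subclasses below finishes with `np.argsort(self.comp_list)`. A minimal sketch of that re-scoring and re-ranking step, with hypothetical distance values:

```python
import numpy as np

# Distances probe -> gallery: lower means more similar
comp_list = np.array([0.9, 0.2, 0.25, 0.7], np.float32)
rank_list = np.argsort(comp_list).astype(np.uint16)  # gallery indexes, best match first

# Penalize element 1 (e.g. marked as a strong negative) proportionally, then re-rank
comp_list[1] += 0.5 * comp_list[1]  # 0.2 -> 0.3, now worse than element 2
new_rank = np.argsort(comp_list).astype(np.uint16)
```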
216 | class SAA(PostRankOptimization):
217 | """
218 | Based on similarity and affinity to reorder values.
219 | Similarity: Value calculated using Feature Matching
220 | Affinity: Value calculated using clustering methods
221 | """
222 |
223 | def re_score(self, sign, elem, comp_with_probe, affinity, similarity):
224 | """
225 | Calculates new comparison value for elem and updates self.comp_list[elem]
226 | comp_with_probe, affinity and similarity must be normalized (0..1)
227 | Similarity: The lowest value, the more similar (lowest distance)
228 | :param sign:
229 | :param elem:
230 | :param comp_with_probe: The lowest value, the more similar (lowest distance)
231 | :param affinity: The higher value, the more affinity
232 | :param similarity:
233 | :return:
234 | """
235 | # similarity = self.execution.feature_matcher.match(elem2_fe, self.execution.dataset.gallery.fe_test[elem])
236 | increment = sign * self.re_score_alpha
237 | sim_beta = 1 - self.affinity_beta
238 | if self.re_score_proportional:
239 | self.comp_list[elem] = comp_with_probe + (increment * comp_with_probe *
240 | (self.affinity_beta * affinity + sim_beta * (1 - similarity)))
241 | else:
242 | # self.comp_list[elem] = ((1 - self.re_score_alpha) * comp_with_probe) + \
243 | # (sign * affinity * self.re_score_alpha)
244 | self.comp_list[elem] = ((1 - self.re_score_alpha) * comp_with_probe) + \
245 | (increment * (self.affinity_beta * affinity + sim_beta * (1 - similarity)))
246 |
247 | def reorder(self):
248 | for [sn, idx] in self.new_strong_negatives:
249 | region_size = self.size_for_each_region_in_fe * len(self.regions[idx])
250 | initial_pos = self.regions[idx][0] * self.size_for_each_region_in_fe
251 | sn_fe = self.execution.dataset.gallery.fe_test[sn][initial_pos:initial_pos + region_size]
252 | for elem, (elem_comp_w_probe, affinity) in enumerate(zip(self.comp_list, self.affinity_matrix[idx][sn])):
253 | n_estimators = self.cluster_forest[idx].get_params()['n_estimators']
254 | affinity = float(affinity) / n_estimators
255 | elem_fe = self.execution.dataset.gallery.fe_test[elem][initial_pos:initial_pos + region_size]
256 | similarity = self.feature_matcher.match(elem_fe, sn_fe)
257 | self.re_score(+1, elem, elem_comp_w_probe, affinity, similarity)
258 |
259 | for [wn, idx] in self.new_weak_negatives:
260 | region_size = self.size_for_each_region_in_fe * len(self.regions[idx])
261 | initial_pos = self.regions[idx][0] * self.size_for_each_region_in_fe
262 | wn_fe = self.execution.dataset.gallery.fe_test[wn][initial_pos:initial_pos + region_size]
263 | for elem, (elem_comp_w_probe, affinity) in enumerate(zip(self.comp_list, self.affinity_matrix[idx][wn])):
264 | n_estimators = self.cluster_forest[idx].get_params()['n_estimators']
265 | affinity = float(affinity) / n_estimators
266 | elem_fe = self.execution.dataset.gallery.fe_test[elem][initial_pos:initial_pos + region_size]
267 | similarity = self.feature_matcher.match(elem_fe, wn_fe)
268 | self.re_score(-1, elem, elem_comp_w_probe, affinity, similarity)
269 |
270 | for [ve, idx] in self.new_visual_expanded:
271 | region_size = self.size_for_each_region_in_fe * len(self.regions[idx])
272 | initial_pos = self.regions[idx][0] * self.size_for_each_region_in_fe
273 | ve_fe = ve[initial_pos:initial_pos + region_size]
274 | ve_cluster_value = self.cluster_forest[idx].apply(ve_fe)
275 | X = self.execution.dataset.gallery.fe_test[:, initial_pos:initial_pos + region_size]
276 | leaf_indexes = self.cluster_forest[idx].apply(X)
277 | for elem, elem_comp_w_probe in enumerate(self.comp_list):
278 | elem_fe = self.execution.dataset.gallery.fe_test[elem][initial_pos:initial_pos + region_size]
279 | elem_cluster_value = leaf_indexes[elem]
280 | n_estimators = self.cluster_forest[idx].get_params()['n_estimators']
281 | affinity = np.sum(ve_cluster_value == elem_cluster_value)
282 | affinity = float(affinity) / n_estimators
283 | similarity = self.feature_matcher.match(elem_fe, ve_fe)
284 | self.re_score(-1, elem, elem_comp_w_probe, affinity, similarity)
285 |
286 | self.rank_list = np.argsort(self.comp_list).astype(np.uint16)
287 |
288 |
289 | class LabSP(PostRankOptimization):
290 | """
291 | Label Spreading method for reordering
292 | """
293 |
294 | def __init__(self, method="spreading", kernel="knn", alpha=0.2, gamma=20, n_neighbors=7, **kwargs):
295 | super(LabSP, self).__init__(**kwargs)
296 | if method.lower() == "propagation":
297 | self.regressors = [LabelPropagation(kernel=kernel, alpha=alpha, gamma=gamma, n_neighbors=n_neighbors)
298 | for _ in range(len(self.regions))]
299 | elif method.lower() == "spreading":
300 | self.regressors = [LabelSpreading(kernel=kernel, alpha=alpha, gamma=gamma, n_neighbors=n_neighbors)
301 | for _ in range(len(self.regions))]
302 | else:
303 | raise InitializationError("Method %s not valid" % method)
304 |
305 | def re_score(self, elem, proba):
306 | # try:
307 | # proba[1]
308 | # except IndexError:
309 | # print("elem is %d" % elem)
310 | positive_proba = proba[0]
311 | negative_proba = proba[1]
312 | if positive_proba > negative_proba:
313 | increment = self.re_score_alpha * positive_proba
314 | else:
315 | increment = - self.re_score_alpha * negative_proba
316 |
317 | if self.re_score_proportional:
318 | self.comp_list[elem] += increment * self.comp_list[elem]
319 | else:
320 | # self.comp_list[elem] = ((1 - self.re_score_alpha) * comp_with_probe) + \
321 | # (sign * affinity * self.re_score_alpha)
322 | self.comp_list[elem] += increment
323 |
324 | def reorder(self):
325 | """
326 | Updates self.comp_list and self.rank_list, based on self.new_strong_negatives, self.new_weak_negatives and
327 | self.new_visual_expanded
328 | :return:
329 | """
330 | X = self.execution.dataset.gallery.fe_test
331 | y = np.full((X.shape[0], len(self.regressors)), -1, np.int8) # Default value -1
332 |
333 | # Positive = 1
334 | if self.new_weak_negatives:
335 | new_weak = np.array(self.new_weak_negatives)
336 | y[new_weak[:, 0], new_weak[:, 1]] = 1
337 | if self.weak_negatives:
338 | weak = np.array(self.weak_negatives)
339 | y[weak[:, 0], weak[:, 1]] = 1
340 |
341 | # Visual expanded = 1
342 | vesp = self.visual_expanded + self.new_visual_expanded
343 | if vesp:
344 | X = np.concatenate((X, [e[0] for e in vesp]))
345 | vals = np.full((len(vesp), len(self.regions)), -1, np.int8)
346 | vals[range(vals.shape[0]), [e[1] for e in vesp]] = 1
347 | y = np.concatenate((y, vals))
348 |
349 | # Negatives = 2
350 | if self.new_strong_negatives:
351 | new_strong = np.array(self.new_strong_negatives)
352 | y[new_strong[:, 0], new_strong[:, 1]] = 2
353 | if self.strong_negatives:
354 | strong = np.array(self.strong_negatives)
355 | y[strong[:, 0], strong[:, 1]] = 2
356 |
357 | for idx, regressor in enumerate(self.regressors):
358 | region_size = self.size_for_each_region_in_fe * len(self.regions[idx])
359 | initial_pos = self.regions[idx][0] * self.size_for_each_region_in_fe
360 | Xr = X[:, initial_pos:initial_pos + region_size]
361 | yr = y[:, idx]
362 | if yr.max() == -1:
363 | continue
364 | else:
365 | self.regressors[idx].fit(Xr, yr)
366 |
367 | X = self.execution.dataset.gallery.fe_test
368 |
369 | for idx, regressor in enumerate(self.regressors):
370 | if not hasattr(regressor, "n_iter_"): # check if initialized
371 | continue
372 | region_size = self.size_for_each_region_in_fe * len(self.regions[idx])
373 | initial_pos = self.regions[idx][0] * self.size_for_each_region_in_fe
374 |             Xr = X[:, initial_pos:initial_pos + region_size]
375 | for elem in range(len(self.comp_list)):
376 | self.re_score(elem, regressor.predict_proba(Xr[elem])[0])
377 |
378 | self.rank_list = np.argsort(self.comp_list).astype(np.uint16)
379 |
380 |
381 | class SAL(PostRankOptimization):
382 | """
383 |     Reorders the rank list based on similarity, affinity and label propagation scores.
384 | Similarity: Value calculated using Feature Matching
385 | Affinity: Value calculated using clustering methods
386 | Label: Value calculated using LabelPropagation/Spreading
387 | """
388 |
389 | def __init__(self, balanced=False, visual_expansion_use=True, re_score_alpha=0.15,
390 | re_score_proportional=True, regions=None, ve_estimators=20, ve_leafs=5, clf_estimators=20,
391 | clf_leafs=1, method="spreading", kernel="knn", lab_alpha=0.2, lab_gamma=20, lab_n_neighbors=7,
392 | weights=None):
393 | super(SAL, self).__init__(balanced=balanced, visual_expansion_use=visual_expansion_use,
394 | re_score_alpha=re_score_alpha,
395 | re_score_proportional=re_score_proportional,
396 | regions=regions, ve_estimators=ve_estimators, ve_leafs=ve_leafs)
397 | if not weights:
398 | # self.weights = [0.45, 0.45, 0.1]
399 | self.weights = [0.5, 0.5, 0.]
400 | else:
401 | self.weights = weights
402 |
403 | self.cluster_forest = [RandomTreesEmbedding(n_estimators=clf_estimators, min_samples_leaf=clf_leafs, n_jobs=-1)
404 | for _ in range(len(self.regions))]
405 | self.affinity_matrix = []
406 |         self.feature_matcher = None  # Initialized in set_ex()
407 | if method.lower() == "propagation":
408 | self.regressors = [
409 | LabelPropagation(kernel=kernel, alpha=lab_alpha, gamma=lab_gamma, n_neighbors=lab_n_neighbors)
410 | for _ in range(len(self.regions))]
411 | elif method.lower() == "spreading":
412 | self.regressors = [
413 | LabelSpreading(kernel=kernel, alpha=lab_alpha, gamma=lab_gamma, n_neighbors=lab_n_neighbors)
414 | for _ in range(len(self.regions))]
415 | else:
416 | raise InitializationError("Method %s not valid" % method)
417 |
418 | def set_ex(self, ex, rm):
419 | super(SAL, self).set_ex(ex, rm)
420 | self.feature_matcher = copy(self.execution.feature_matcher)
421 | self.feature_matcher._weights = None
422 |
423 | def initial_iteration(self):
424 | super(SAL, self).initial_iteration()
425 | if self.weights[1] > 0:
426 | for idx, cl_forest in enumerate(self.cluster_forest):
427 | region_size = self.size_for_each_region_in_fe * len(self.regions[idx])
428 | initial_pos = self.regions[idx][0] * self.size_for_each_region_in_fe
429 | fe_test_idx = self.execution.dataset.gallery.fe_test[:, initial_pos:region_size]
430 | cl_forest.fit(fe_test_idx)
431 | self.affinity_matrix.append(self.calc_affinity_matrix(cl_forest, fe_test_idx))
432 |
433 | def init_regressors(self, region_size_list, initial_pos_list):
434 | X = self.execution.dataset.gallery.fe_test
435 | y = np.full((X.shape[0], len(self.regressors)), -1, np.int8) # Default value -1
436 |
437 |         # Weak negatives are treated as positives = 1
438 | if self.new_weak_negatives:
439 | new_weak = np.array(self.new_weak_negatives)
440 | y[new_weak[:, 0], new_weak[:, 1]] = 1
441 | if self.weak_negatives:
442 | weak = np.array(self.weak_negatives)
443 | y[weak[:, 0], weak[:, 1]] = 1
444 |
445 | # Visual expanded = 1
446 | vesp = self.visual_expanded + self.new_visual_expanded
447 | if vesp:
448 | X = np.concatenate((X, [e[0] for e in vesp]))
449 | vals = np.full((len(vesp), len(self.regions)), -1, np.int8)
450 | vals[range(vals.shape[0]), [e[1] for e in vesp]] = 1
451 | y = np.concatenate((y, vals))
452 |
453 |         # Strong negatives = 2
454 | if self.new_strong_negatives:
455 | new_strong = np.array(self.new_strong_negatives)
456 | y[new_strong[:, 0], new_strong[:, 1]] = 2
457 | if self.strong_negatives:
458 | strong = np.array(self.strong_negatives)
459 | y[strong[:, 0], strong[:, 1]] = 2
460 |
461 | for idx, regressor in enumerate(self.regressors):
462 | region_size = region_size_list[idx]
463 | initial_pos = initial_pos_list[idx]
464 | Xr = X[:, initial_pos:initial_pos + region_size]
465 | yr = y[:, idx]
466 | if yr.max() == -1:
467 | continue
468 | else:
469 | self.regressors[idx].fit(Xr, yr)
470 |
471 | @staticmethod
472 | def calc_affinity_matrix(cl_forest, X):
473 | # TODO Add visual expanded elements?
474 | leaf_indexes = cl_forest.apply(X)
475 | n_estimators = cl_forest.get_params()['n_estimators']
476 | affinity = np.empty((X.shape[0], X.shape[0]), np.uint16)
477 | # affinity = np.zeros((X.shape[0], X.shape[0]), np.uint16)
478 | # np.append(affinity, [[7, 8, 9]], axis=0) # To add more rows later (visual expanded)
479 | np.fill_diagonal(affinity, n_estimators) # Max value in diagonal
480 | for i1, i2 in zip(*np.triu_indices(affinity.shape[0], 1, affinity.shape[0])):
481 | # for i in np.ndindex(affinity.shape):
482 | # if i[0] >= i[1]: # Already calculated (symmetric matrix)
483 | # continue
484 | affinity[i1, i2] = np.sum(leaf_indexes[i1] == leaf_indexes[i2])
485 | affinity[i2, i1] = affinity[i1, i2] # Symmetric value
486 |
487 | return affinity / float(n_estimators)
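The leaf co-occurrence computed by `calc_affinity_matrix` above can be exercised on its own; this is a minimal sketch in which a hand-made `leaves` array stands in for `cl_forest.apply(X)` (one leaf index per sample and per tree):

```python
import numpy as np

def affinity_from_leaves(leaves):
    """Fraction of trees in which each pair of samples falls in the same leaf."""
    n_samples, n_estimators = leaves.shape
    affinity = np.empty((n_samples, n_samples))
    np.fill_diagonal(affinity, n_estimators)  # a sample always shares its own leaf
    for i1, i2 in zip(*np.triu_indices(n_samples, 1)):
        shared = np.sum(leaves[i1] == leaves[i2])
        affinity[i1, i2] = shared
        affinity[i2, i1] = shared  # symmetric matrix
    return affinity / float(n_estimators)

# Samples 0 and 1 land in the same leaf of every tree; sample 2 never does
leaves = np.array([[3, 1, 4, 1],
                   [3, 1, 4, 1],
                   [5, 9, 2, 6]])
A = affinity_from_leaves(leaves)
print(A[0, 1], A[0, 2])  # 1.0 0.0
```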
488 |
489 | def re_score(self, similarity, affinity, lab_score):
490 |
491 | if self.re_score_proportional:
492 | # self.comp_list[elem] += elem_simil_w_probe * self.re_score_alpha * (self.weights[0] * similarity[elem] +
493 | # self.weights[1] * affinity[elem] +
494 | # self.weights[2] * lab_score[elem])
495 | self.comp_list += self.comp_list * self.re_score_alpha * (self.weights[0] * similarity +
496 | self.weights[1] * affinity +
497 | self.weights[2] * lab_score)
498 | else:
499 | self.comp_list += self.re_score_alpha * (self.weights[0] * similarity +
500 | self.weights[1] * affinity +
501 | self.weights[2] * lab_score)
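The blend performed by `re_score` can be traced on toy numbers; every value below (scores, alpha, weights) is illustrative only, not taken from the dataset:

```python
import numpy as np

# Three gallery elements, three cue scores each (illustrative values)
comp_list = np.array([0.2, 0.5, 0.8])
similarity = np.array([0.10, -0.30, 0.30])
affinity = np.array([0.00, 0.10, -0.10])
lab_score = np.zeros(3)
re_score_alpha, weights = 0.15, [0.5, 0.5, 0.0]

# Proportional variant: increment scales with the current comparison value
blended = (weights[0] * similarity + weights[1] * affinity + weights[2] * lab_score)
comp_list += comp_list * re_score_alpha * blended
print(comp_list)
```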
502 |
503 | # @profile()
504 | def reorder(self):
505 | region_size_list = []
506 | initial_pos_list = []
507 | elems_fe_list = []
508 | for region in range(len(self.regions)):
509 | region_size_list.append(self.size_for_each_region_in_fe * len(self.regions[region]))
510 | initial_pos_list.append(self.regions[region][0] * self.size_for_each_region_in_fe)
511 | region_size = region_size_list[region]
512 | initial_pos = initial_pos_list[region]
513 | elems_fe_list.append(self.execution.dataset.gallery.fe_test[:, initial_pos:initial_pos + region_size])
514 |
515 | similarity_sn = self._similarity(elems_fe_list, self.new_strong_negatives, region_size_list, initial_pos_list)
516 | similarity_wn = self._similarity(elems_fe_list, self.new_weak_negatives + self.new_visual_expanded,
517 | region_size_list, initial_pos_list)
518 |
519 | if self.weights[1] > 0:
520 | affinity_sn, affinity_wn = self._affinity(elems_fe_list, region_size_list, initial_pos_list)
521 | else:
522 | affinity_sn, affinity_wn = np.asarray([0] * len(self.comp_list)), np.asarray([0] * len(self.comp_list))
523 |
524 | if self.weights[2] > 0:
525 | self.init_regressors(region_size_list, initial_pos_list)
526 | lab_score = self._labscore(elems_fe_list)
527 | else:
528 | lab_score = np.asarray([0] * len(self.comp_list))
529 |
530 | self.re_score(similarity_wn - similarity_sn, affinity_wn - affinity_sn, lab_score)
531 | self.rank_list = np.argsort(self.comp_list).astype(np.uint16)
532 |
533 | def _affinity(self, elems_fe_list, region_size_list, initial_pos_list):
534 | n_estimators = self.cluster_forest[0].get_params()['n_estimators'] # Assumes all forests have same size
535 | affinity_sn = []
536 | for [sn, region] in self.new_strong_negatives:
537 | affinity_sn.append(self.affinity_matrix[region][:, sn])
538 | affinity_sn = np.asarray(affinity_sn).mean(axis=0)
539 |
540 | affinity_wn = []
541 | for [wn, region] in self.new_weak_negatives:
542 | affinity_wn.append(self.affinity_matrix[region][:, wn])
543 |
544 | if self.new_visual_expanded:
545 | elems_cluster_value = [self.cluster_forest[region].apply(elems_fe_list[region])
546 | for region in range(len(self.regions))]
547 | for [ve, region] in self.new_visual_expanded:
548 | region_size = region_size_list[region]
549 | initial_pos = initial_pos_list[region]
550 | ve_fe = ve[initial_pos:initial_pos + region_size]
551 | ve_cluster_value = self.cluster_forest[region].apply(ve_fe)
552 | affinity_ve = [np.sum(ve_cluster_value == elem_cl_value) / float(n_estimators)
553 | for elem_cl_value in elems_cluster_value[region]]
554 | affinity_wn.append(np.asarray(affinity_ve))
555 | affinity_wn = np.asarray(affinity_wn).mean(axis=0)
556 | return affinity_sn, affinity_wn
557 |
558 | def _similarity(self, elems_fe_list, samples, region_size_list, initial_pos_list):
559 | samples_fe_list = [[] for _ in range(len(self.regions))] # Separate fe by regions
560 | for [sample, region] in samples:
561 | region_size = region_size_list[region]
562 | initial_pos = initial_pos_list[region]
563 | if type(sample) == np.ndarray:
564 | samples_fe_list[region].append(sample[initial_pos:initial_pos + region_size])
565 | else:
566 | samples_fe_list[region].append(self.execution.dataset.gallery.fe_test[sample]
567 | [initial_pos:initial_pos + region_size])
568 |
569 | similarity_samples = [[] for _ in range(len(self.regions))]
570 | for region in range(len(self.regions)):
571 | if samples_fe_list[region]:
572 | samples_fe_list[region] = np.asarray(samples_fe_list[region])
573 | similarity_samples[region] = self.feature_matcher.match_sets(elems_fe_list[region],
574 | samples_fe_list[region], n_jobs=-1,
575 | verbosity=0)
576 | similarity_samples[region] = similarity_samples[region].mean(axis=1)
577 | else:
578 | similarity_samples[region] = None
579 | similarity_samples = [e for e in similarity_samples if e is not None]
580 | return np.asarray(similarity_samples).mean(axis=0)
581 |
582 | def calc_affinity(self, elem):
583 | n_estimators = self.cluster_forest[0].get_params()['n_estimators'] # Assumes all forests have same size
584 | affinity_sn = []
585 | for [sn, region] in self.new_strong_negatives:
586 | affinity_sn.append(self.affinity_matrix[region][elem][sn])
587 |
588 | affinity_wn = []
589 | for [wn, region] in self.new_weak_negatives:
590 | affinity_wn.append(self.affinity_matrix[region][elem][wn])
591 |
592 | for [ve, region] in self.new_visual_expanded:
593 | region_size = self.size_for_each_region_in_fe * len(self.regions[region])
594 | initial_pos = self.regions[region][0] * self.size_for_each_region_in_fe
595 | ve_fe = ve[initial_pos:initial_pos + region_size]
596 | elem_fe = self.execution.dataset.gallery.fe_test[elem][initial_pos:initial_pos + region_size]
597 | ve_cluster_value = self.cluster_forest[region].apply(ve_fe)
598 | elem_cluster_value = self.cluster_forest[region].apply(elem_fe)
599 | affinity = np.sum(ve_cluster_value == elem_cluster_value)
600 | affinity = float(affinity) / n_estimators
601 | affinity_wn.append(affinity)
602 |
603 | if not affinity_sn:
604 | affinity_sn = [0]
605 | if not affinity_wn:
606 | affinity_wn = [0]
607 |
608 | affinity_wn = np.asarray(affinity_wn).mean()
609 | affinity_sn = np.asarray(affinity_sn).mean()
610 | return affinity_wn - affinity_sn
611 |
612 | def _labscore(self, elems_fe_list):
613 | predictions = []
614 | for idx, regressor in enumerate(self.regressors):
615 |             if not hasattr(regressor, "n_iter_"):  # skip regressors that were never fitted
616 |                 predictions.append(None)
617 |                 continue
618 |             Xr = elems_fe_list[idx]
619 |             pred = np.nan_to_num(regressor.predict_proba(Xr))  # To prevent nan values
620 |             # if pred.max() == pred.min():  # Sometimes it predicts all for the same class... not useful at all
621 |             #     predictions.append(None)
622 |             # else:
623 |             # print(pred.shape)
624 |             if pred.shape[1] == 2:
625 |                 pred = pred[:, 0] - pred[:, 1]
626 |             else:
627 |                 pred.shape = (pred.shape[0], )
628 |             predictions.append(pred)
629 |         predictions = [e for e in predictions if e is not None]
630 |         return np.asarray(predictions).mean(axis=0)
631 |
--------------------------------------------------------------------------------
/package/preprocessing.py:
--------------------------------------------------------------------------------
1 | import itertools
2 | import cv2
3 | import math
4 | from scipy.optimize import fminbound
5 | from package import image
6 | from package.dataset import Dataset
7 | from package.image import Image, CS_YCrCb, CS_HSV, CS_BGR
8 | from sklearn.externals.joblib import Parallel, delayed
9 | import numpy as np
10 | from package.utilities import InitializationError
11 | from package import feature_extractor
12 | from scipy.stats import norm
13 | from scipy.io.matlab import loadmat
14 |
15 | __author__ = 'luigolas'
16 |
17 |
18 | def _parallel_preprocess(preproc, im, *args):
19 | return preproc.process_image(im, *args)
20 |
21 |
22 | class Preprocessing(object):
23 | def __init__(self, skip=False):
24 | self.skip = skip
25 |
26 | def preprocess_dataset(self, dataset, n_jobs=-1, verbosity=2):
27 | raise NotImplementedError("Please Implement preprocess_dataset method")
28 |
29 | # def process_image(self, im, *args):
30 | # raise NotImplementedError("Please Implement process_image method")
31 |
32 | def dict_name(self):
33 | raise NotImplementedError("Please Implement dict_name")
34 |
35 |
36 | class BTF(Preprocessing):
37 | def __init__(self, method="CBTF", skip=False):
38 | super(BTF, self).__init__(skip)
39 | if not (method == "CBTF" or method == "ngMBTF"): # or method == "gMBTF"
40 | raise InitializationError("Method " + method + " is not a valid preprocessing method")
41 | self._method = method
42 | self.btf = None
43 |
44 | def dict_name(self):
45 | return {"name": self._method}
46 |
47 | def preprocess_dataset(self, dataset, n_jobs=-1, verbosity=2):
48 | if self.skip:
49 | return
50 |
51 | if verbosity > 1: print(" BTF (%s)..." % self._method)
52 | self.btf = None
53 | if not dataset.train_size:
54 | raise InitializationError("Can't preprocess without train_set")
55 | if not dataset.probe.masks_train:
56 | raise InitializationError("Can't preprocess without mask/segmentation data")
57 |
58 | probe_images = dataset.probe.images_train
59 | probe_masks = dataset.probe.masks_train
60 | gallery_images = dataset.gallery.images_train
61 | gallery_masks = dataset.gallery.masks_train
62 | if self._method == "CBTF":
63 | self.btf = self._calc_btf(probe_images, gallery_images, probe_masks, gallery_masks)
64 |
65 | # TODO Consider gMBTF
66 | # elif self._method == "gMBTF":
67 | # elements_left = dataset.train_indexes.copy()
68 | # btfs = [np.array([0] * 256), np.array([0] * 256), np.array([0] * 256)]
69 | # count_btfs = 0
70 | # while dataset_len(elements_left) > 0:
71 | # individual = [elements_left.pop(0)]
72 | # aux_list = []
73 | # for elem in elements_left:
74 | # if dataset.same_individual(self.probe.files[individual[0]], self.probe.files[elem]):
75 | # individual.append(elem)
76 | # else:
77 | # aux_list.append(elem)
78 | # elements_left = aux_list
79 | # to_compare = [self.gallery.files.index(x) for x in self.gallery.files
80 | # if self.same_individual(self.gallery.files[individual[0]], x)]
81 | # # Load images
82 | # individual_images = [self.probe.images[x] for x in individual]
83 | # to_compare_images = [self.gallery.images[x] for x in to_compare]
84 | # masks1 = None
85 | # masks2 = None
86 | # if self.probe.masks is not None:
87 | # masks1 = [self.probe.masks[x] for x in individual]
88 | # if self.probe.masks is not None:
89 | # masks2 = [self.gallery.masks[x] for x in to_compare]
90 | # result = btf(individual_images, to_compare_images, masks1, masks2)
91 | # count_btfs += 1
92 | # for channel, elem in enumerate(result):
93 | # btfs[channel] += elem
94 | # f = [np.asarray(np.rint(x / count_btfs), np.int) for x in btfs]
95 |
96 | elif self._method == "ngMBTF":
97 | btfs = [np.array([0] * 256), np.array([0] * 256), np.array([0] * 256)] # TODO Generalize
98 | count_btfs = 0
99 | for im, mask1 in zip(probe_images, probe_masks):
100 | # Keep index of images to compare to
101 | to_compare = [[img, mask] for img, mask in zip(gallery_images, gallery_masks)
102 | if dataset.same_individual(im.imgname, img.imgname)]
103 | for im2, mask2 in to_compare:
104 | # btfs.append(btf(im, im2, mask1, mask2))
105 | result = self._calc_btf(im, im2, mask1, mask2)
106 | count_btfs += 1
107 | # for channel, elem in enumerate(result):
108 | # btfs[channel] += elem
109 | btfs += result
110 | self.btf = np.asarray([np.rint(x / count_btfs) for x in btfs], np.int)
111 |
112 | else:
113 | raise AttributeError("Not a valid preprocessing key")
114 |
115 | if self.btf is None:
116 | raise NotImplementedError
117 |
118 | new_images_train = []
119 | for im in dataset.probe.images_train:
120 | new_images_train.append(self.process_image(im))
121 | dataset.probe.images_train = new_images_train
122 |
123 | new_images_test = []
124 | for im in dataset.probe.images_test:
125 | new_images_test.append(self.process_image(im))
126 | dataset.probe.images_test = new_images_test
127 | # return new_images
128 |
129 | @staticmethod
130 | def _calc_btf(im1, im2, mask1, mask2):
131 | def find_nearest(array, value):
132 | return (np.abs(array - value)).argmin()
133 |
134 | cumh1 = BTF._cummhist(im1, masks=mask1)
135 | cumh2 = BTF._cummhist(im2, masks=mask2)
136 | # For each value in cumh1, look for the closest one (floor, ceil, round?) in cum2, and save index of cum2.
137 | # func = [np.empty_like(h, np.uint8) for h in cumh1]
138 | func = np.empty_like(cumh1, np.uint8)
139 | for f_i, hist_i, hist2_i in zip(func, cumh1, cumh2): # For each channel
140 | for index, value in enumerate(hist_i):
141 | f_i[index] = find_nearest(hist2_i, value)
142 | return func
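The nearest-value lookup inside `_calc_btf` can be tried standalone for one channel; this sketch assumes normalized cumulative histograms, and `btf_channel` is a hypothetical helper name:

```python
import numpy as np

def btf_channel(cumh1, cumh2):
    """Map each intensity level of the first image to the level of the
    second image whose cumulative histogram value is closest."""
    return np.array([np.abs(cumh2 - v).argmin() for v in cumh1], np.uint8)

# With identical cumulative histograms the transfer function is the identity
cum = np.linspace(0.0, 1.0, 256)
f = btf_channel(cum, cum)
print(f[0], f[128], f[255])  # 0 128 255
```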
143 |
144 | @staticmethod
145 | def _cummhist(ims, colorspace=image.CS_BGR, masks=None):
146 | ranges = feature_extractor.Histogram.color_ranges[colorspace]
147 | bins = [int(b - a) for a, b in zip(ranges, ranges[1:])[::2]] # http://stackoverflow.com/a/5394908/3337586
148 | ev = feature_extractor.Histogram(colorspace, bins=bins, dimension="1D")
149 |
150 | if type(ims) is not list:
151 | ims = [ims]
152 | if type(masks) is not list:
153 | masks = [masks] * len(ims)
154 | h = []
155 | for im, mask in zip(ims, masks):
156 | result = ev.extract(im, mask, normalization=None)
157 | h = [a + b for a, b in
158 | itertools.izip_longest(h, list(result), fillvalue=0)] # Accumulate with previous histograms
159 |
160 | # Normalize each histogram
161 | num_channels = ims[0].shape[2]
162 | h = np.asarray(h).reshape(num_channels, len(h) / num_channels)
163 | return np.asarray(
164 | [feature_extractor.Histogram.normalize_hist(h_channel.cumsum(), normalization=cv2.NORM_INF) for h_channel
165 | in h])
166 |
167 | def process_image(self, im, *args):
168 | im_converted = np.empty_like(im)
169 | for row in range(im.shape[0]):
170 | for column in range(im.shape[1]):
171 | pixel = im[row, column]
172 | for channel, elem in enumerate(pixel):
173 | im_converted[row, column, channel] = self.btf[channel][elem]
174 | # imgname = im.imgname.split(".")
175 | # imgname = ".".join(imgname[:-1]) + self._method + "." + imgname[-1]
176 | return image.Image(im_converted, colorspace=im.colorspace, imgname=im.imgname)
177 |
178 |
179 | class Illumination_Normalization(Preprocessing):
180 | def __init__(self, color_space=CS_YCrCb, skip=False):
181 | super(Illumination_Normalization, self).__init__(skip)
182 | self.color_space = color_space
183 | if color_space == CS_HSV:
184 | self.channel = 2
185 | else: # CS_YCrCb
186 | self.channel = 0
187 |
188 | def preprocess_dataset(self, dataset, n_jobs=1, verbosity=2):
189 | if self.skip:
190 | return
191 |
192 | if verbosity > 1: print(" Illumination Normalization...")
193 | assert (type(dataset) == Dataset)
194 |
195 | imgs = dataset.probe.images_train + dataset.probe.images_test
196 | imgs += dataset.gallery.images_train + dataset.gallery.images_test
197 | results = Parallel(n_jobs)(delayed(_parallel_preprocess)(self, im) for im in imgs)
198 | train_len = dataset.train_size
199 | test_len = dataset.test_size
200 | dataset.probe.images_train = results[:train_len]
201 | dataset.probe.images_test = results[train_len:train_len + test_len]
202 | dataset.gallery.images_train = results[train_len + test_len:-test_len]
203 | dataset.gallery.images_test = results[-test_len:]
204 |
205 | def process_image(self, im, *args):
206 | origin_color_space = im.colorspace
207 | im = im.to_color_space(self.color_space)
208 | im[:, :, self.channel] = cv2.equalizeHist(im[:, :, self.channel])
209 | return im.to_color_space(origin_color_space)
210 |
211 | def dict_name(self):
212 |         """
213 |         Name of this preprocessing step for reporting purposes.
214 |         :return: dict with the colorspace-qualified name
215 |         """
216 | if self.color_space == CS_HSV:
217 | colorspace = "HSV"
218 | else: # CS_YCrCb
219 | colorspace = "YCrCb"
220 | # return {"Normalization": "IluNorm_%s" % colorspace}
221 | return {"name": "IluNorm_%s" % colorspace}
222 |
223 |
224 | class Grabcut(Preprocessing):
225 | compatible_color_spaces = [CS_BGR]
226 |
227 | def __init__(self, mask_source, iter_count=2, color_space=CS_BGR, skip=False):
228 | super(Grabcut, self).__init__(skip)
229 | self._mask_name = mask_source.split("/")[-1].split(".")[0]
230 | self._mask = np.loadtxt(mask_source, np.uint8)
231 | self._iter_count = iter_count
232 | if color_space not in Grabcut.compatible_color_spaces:
233 | raise AttributeError("Grabcut can't work with colorspace " + str(color_space))
234 | self._colorspace = color_space
235 | self.name = type(self).__name__ + str(self._iter_count) + self._mask_name
236 |
237 | def preprocess_dataset(self, dataset, n_jobs=-1, verbosity=2):
238 |         """
239 |         Segments every dataset image with GrabCut and stores the resulting masks.
240 |
241 |         :param dataset:
242 |         :param n_jobs:
243 |         :param verbosity:
244 |         :return:
245 |         """
246 | if self.skip:
247 | return
248 |
249 | if verbosity > 1: print(" Generating Masks (Grabcut)...")
250 | imgs = dataset.probe.images_train + dataset.probe.images_test
251 | imgs += dataset.gallery.images_train + dataset.gallery.images_test
252 | results = Parallel(n_jobs)(delayed(_parallel_preprocess)(self, im) for im in imgs)
253 | train_len = dataset.train_size
254 | test_len = dataset.test_size
255 | dataset.probe.masks_train = results[:train_len]
256 | dataset.probe.masks_test = results[train_len:train_len + test_len]
257 | dataset.gallery.masks_train = results[train_len + test_len:-test_len]
258 | dataset.gallery.masks_test = results[-test_len:]
259 |
260 | def process_image(self, im):
261 |         """
262 |         Runs GrabCut on a single BGR image and returns a binary foreground mask.
263 |         :param im:
264 |         :return: :raise TypeError:
265 |         """
266 | if not isinstance(im, Image):
267 | raise TypeError("Must be a valid Image (package.image) object")
268 |
269 | if im.colorspace != self._colorspace:
270 | raise AttributeError("Image must be in BGR color space")
271 |
272 | # if app.DB:
273 | # try:
274 | # mask = app.DB[self.dbname(im.imgname)]
275 | # # print("returning mask for " + imgname + " [0][0:5]: " + str(mask[4][10:25]))
276 | # return mask
277 | # except FileNotFoundError:
278 | # # Not in DataBase, continue calculating
279 | # pass
280 |
281 | bgdmodel = np.zeros((1, 65), np.float64)
282 | fgdmodel = np.zeros((1, 65), np.float64)
283 | mask = self._mask.copy()
284 | # mask = copy.copy(self._mask)
285 | cv2.grabCut(im, mask, None, bgdmodel, fgdmodel, self._iter_count, cv2.GC_INIT_WITH_MASK)
286 |
287 | mask = np.where((mask == 2) | (mask == 0), 0, 1).astype('uint8')
288 |
289 | # if app.DB:
290 | # app.DB[self.dbname(im.imgname)] = mask
291 |
292 | return mask
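The final `np.where` collapses the four GrabCut pixel labels (0 definite background, 1 definite foreground, 2 probable background, 3 probable foreground) into a binary mask; a quick check of that mapping:

```python
import numpy as np

# The four labels cv2.grabCut can assign to a pixel
labels = np.array([[0, 1],
                   [2, 3]], np.uint8)
# Definite/probable background -> 0, definite/probable foreground -> 1
binary = np.where((labels == 2) | (labels == 0), 0, 1).astype('uint8')
print(binary.tolist())  # [[0, 1], [0, 1]]
```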
293 |
294 | def dict_name(self):
295 | """
296 | :return: dictionary
297 | """
298 | # return {"Segmenter": str(type(self).__name__), "SegIter": self._iter_count,
299 | # "SegMask": self._mask_name}
300 | return {"name": str(type(self).__name__), "params": str([self._iter_count, self._mask_name])}
301 |
302 | # def dbname(self, imgname):
303 | # class_name = type(self).__name__
304 | # foldername = imgname.split("/")[-2]
305 | # imgname = imgname.split("/")[-1]
306 | # imgname = imgname.split(".")[0] # Take out file extension
307 | # keys = ["masks", class_name, "iter" + str(self._iter_count), self._mask_name, foldername, imgname]
308 | # return keys
309 |
310 |
311 | class MasksFromMat(Preprocessing):
312 |     """
313 |     Loads probe and gallery masks from a MATLAB .mat file.
314 |
315 |     :param mat_path: path to the .mat file to read
316 |     :param var_name: variable name to look for in the .mat file
317 |     :param invert: if False, even-indexed masks belong to the probe set; if True, they belong to the gallery set
318 |     :return:
319 |     """
320 |
321 | def __init__(self, mat_path, var_name='msk', invert=False, skip=False):
322 | super(MasksFromMat, self).__init__(skip)
323 | # self.pair_order = pair_order
324 | self.path = mat_path
325 | self.var_name = var_name
326 | self.invert = invert
327 | self.name = self._read_name_from_mat()
328 |
329 | def _read_name_from_mat(self):
330 | data = loadmat(self.path)
331 | try:
332 | name = data['msk_name'][0]
333 | keys = name.dtype.names
334 | dict_name = {}
335 | for key in keys:
336 | value = name[0][key]
337 | if value.dtype == int:
338 | value = int(value[0])
339 | else:
340 | value = str(value[0])
341 | dict_name.update({key: value})
342 | except KeyError:
343 | dict_name = None
344 | return dict_name
345 |
346 | def preprocess_dataset(self, dataset, n_jobs=-1, verbosity=2):
347 |         """
348 |         Loads probe and gallery masks from the .mat file and assigns them to the dataset.
349 |         :param dataset:
350 |         :param n_jobs:
351 |         :return:
352 |         """
353 | if self.skip:
354 | return
355 |
356 | if verbosity > 1: print(" Loading masks from .mat file")
357 | data = loadmat(self.path)
358 | masks = data[self.var_name][0]
359 |
360 | if not self.invert:
361 | masks_probe = masks.take(range(0, masks.size, 2))
362 | masks_gallery = masks.take(range(1, masks.size, 2))
363 |         else:
364 |             masks_gallery = masks.take(range(0, masks.size, 2))
365 |             masks_probe = masks.take(range(1, masks.size, 2))
366 |
367 | dataset.probe.masks_train = list(masks_probe[dataset.train_indexes])
368 | dataset.probe.masks_test = list(masks_probe[dataset.test_indexes])
369 | dataset.gallery.masks_train = list(masks_gallery[dataset.train_indexes])
370 | dataset.gallery.masks_test = list(masks_gallery[dataset.test_indexes])
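The alternating probe/gallery split used above is just an even/odd `take`; a minimal sketch with a stand-in array:

```python
import numpy as np

# Stand-in for the mask array loaded from the .mat file:
# masks are stored alternately as probe0, gallery0, probe1, gallery1, ...
masks = np.arange(8)
masks_probe = masks.take(range(0, masks.size, 2))
masks_gallery = masks.take(range(1, masks.size, 2))
print(masks_probe.tolist(), masks_gallery.tolist())  # [0, 2, 4, 6] [1, 3, 5, 7]
```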
371 |
372 | def dict_name(self):
373 |         """
374 |         Name of this preprocessing step for reporting purposes.
375 |         :return: dict read from the .mat file if available, or one built from the file name
376 |         """
377 | if self.name is not None:
378 | return self.name
379 | else:
380 | name = self.path.split("/")[-1]
381 | # return {"Segmenter": str(type(self).__name__), "SegMask": name}
382 | return {"name": str(type(self).__name__), "params": str([name])}
383 |
384 |
385 | class SilhouetteRegionsPartition(Preprocessing):
386 | def __init__(self, alpha=0.5, sub_divisions=1, skip=False):
387 | super(SilhouetteRegionsPartition, self).__init__(skip)
388 | self.I = 0
389 | self.J = 0
390 | self.deltaI = 0
391 | self.deltaJ = 0
392 | self.search_range_H = []
393 | self.alpha = alpha
394 | self.sub_divisions = int(sub_divisions)
395 |
396 | def preprocess_dataset(self, dataset, n_jobs=-1, verbosity=2):
397 |         """
398 |         Computes silhouette-based horizontal regions for all dataset images.
399 |         :param dataset:
400 |         :param n_jobs:
401 |         :return:
402 |         """
403 | if self.skip:
404 | return
405 |
406 | if verbosity > 1: print(" Calculating Silhouette Regions...")
407 | (self.I, self.J, _) = dataset.probe.images_test[0].shape # Assumes all images of same shape
408 | self.deltaI = self.I / 4
409 |
410 | self.search_range_H = [self.deltaI, (self.I - self.deltaI) - 1]
411 |
412 | imgs = dataset.probe.images_train + dataset.probe.images_test
413 | imgs += dataset.gallery.images_train + dataset.gallery.images_test
414 | masks = dataset.probe.masks_train + dataset.probe.masks_test
415 | masks += dataset.gallery.masks_train + dataset.gallery.masks_test
416 |
417 | results = Parallel(n_jobs)(delayed(_parallel_preprocess)(self, im, mask) for im, mask in zip(imgs, masks))
418 |
419 | train_len = dataset.train_size
420 | test_len = dataset.test_size
421 | dataset.probe.regions_train = results[:train_len]
422 | dataset.probe.regions_test = results[train_len:train_len + test_len]
423 | dataset.gallery.regions_train = results[train_len + test_len:-test_len]
424 | dataset.gallery.regions_test = results[-test_len:]
425 |
426 | def process_image(self, im, *args):
427 | mask = args[0]
428 | im_hsv = im.to_color_space(CS_HSV, normed=True)
429 |
430 | lineTL = np.uint16(
431 | fminbound(SilhouetteRegionsPartition._dissym_div, self.search_range_H[0], self.search_range_H[1],
432 | (im_hsv, mask, self.deltaI, self.alpha), 1e-3))
433 | lineHT = np.uint16(fminbound(SilhouetteRegionsPartition._dissym_div_Head, 5,
434 | lineTL, (im_hsv, mask, self.deltaI), 1e-3))
435 |
436 | # consider subdivision
437 | if self.sub_divisions > 1:
438 | regions = []
439 | incr = (lineTL - lineHT) / self.sub_divisions
440 | for i in range(self.sub_divisions - 1):
441 | top_line = lineHT + incr * i
442 | bottom_line = lineHT + incr * (i + 1)
443 | regions.append([top_line, bottom_line, 0, self.J])
444 | # last region:
445 | top_line = regions[-1][1]
446 | bottom_line = lineTL
447 | regions.append([top_line, bottom_line, 0, self.J])
448 |
449 | incr = (self.I - lineTL) / self.sub_divisions
450 | for i in range(self.sub_divisions - 1):
451 | top_line = lineTL + incr * i
452 | bottom_line = lineTL + incr * (i + 1)
453 | regions.append([top_line, bottom_line, 0, self.J])
454 | # last region:
455 | top_line = regions[-1][1]
456 | bottom_line = self.I
457 | regions.append([top_line, bottom_line, 0, self.J])
458 |
459 |             regions = np.asarray(regions)  # Match the array type returned by the else branch
460 |
461 | else:
462 | regions = np.asarray([(lineHT, lineTL, 0, self.J), (lineTL, self.I, 0, self.J)])
463 | # regions: [ region body , region legs ]
464 |
465 | return regions
466 |
467 | @staticmethod
468 | def _init_sym(delta, i, img, mask):
469 | i = int(round(i))
470 | delta = int(delta)
471 | imgUP = img[0:i, :]
472 | imgDOWN = img[i - 1:, :]
473 | MSK_U = mask[0:i, :]
474 | MSK_D = mask[i - 1:, :]
475 | return i, delta, MSK_D, MSK_U, imgDOWN, imgUP
476 |
477 | @staticmethod
478 | def _dissym_div(i, img, mask, delta, alpha):
479 | i, delta, MSK_D, MSK_U, imgDOWN, imgUP = SilhouetteRegionsPartition._init_sym(delta, i, img, mask)
480 |
481 | dimLoc = delta
482 | # dimLoc = delta + 1
483 | indexes = list(range(dimLoc))
484 |
485 | imgUPloc = imgUP[indexes, :, :]
486 | imgDWloc = imgDOWN[indexes, :, :][:, ::-1]
487 |
488 | d = alpha * (1 - math.sqrt(np.sum((imgUPloc - imgDWloc) ** 2)) / dimLoc) + \
489 | (1 - alpha) * (abs(int(np.sum(MSK_U)) - np.sum(MSK_D)) / max([MSK_U.size, MSK_D.size]))
490 |
491 | return d
492 |
493 | @staticmethod
494 | def _dissym_div_Head(i, img, mask, delta):
495 | i, delta, MSK_D, MSK_U, imgDOWN, imgUP = SilhouetteRegionsPartition._init_sym(delta, i, img, mask)
496 |
497 | localderU = list(range(max(i - delta, 0), i))
498 | localderD = list(range(0, delta + 1))
499 |
500 | return - abs(int(np.sum(MSK_U[localderU]))
501 | - np.sum(MSK_D[localderD]))
502 |
503 | def dict_name(self):
504 | # return {"Regions": str(type(self).__name__), "RegAlpha": self.alpha, "RegCount": 2 * self.sub_divisions}
505 | return {"name": str(type(self).__name__), "params": str([self.alpha, 2 * self.sub_divisions])}
506 |
507 |
508 | class VerticalRegionsPartition(Preprocessing):
509 | def __init__(self, regions=None, regions_name=None, skip=False):
510 | super(VerticalRegionsPartition, self).__init__(skip)
511 | if regions is None:
512 | self.regions = [[16, 33], [33, 50], [50, 67], [67, 84], [84, 100]] # Over 100% size, not actual image size
513 | else:
514 | self.regions = regions
515 | if regions_name is None:
516 | self.regions_name = "%dR" % len(self.regions)
517 | else:
518 | self.regions_name = regions_name
519 |
520 | def preprocess_dataset(self, dataset, n_jobs=-1, verbosity=2):
521 | if self.skip:
522 | return
523 |
524 | if verbosity > 1: print(" Generating Vertical regions...")
525 | (I, J, _) = dataset.probe.images_test[0].shape # Assumes all dataset images of same size for speedup
526 | regions = []
527 | for r in self.regions:
528 | regions.append((int(r[0] * I / 100.), int(r[1] * I / 100.), 0, J))
529 | regions = np.asarray(regions)
530 | regions_test = [regions] * dataset.test_size
531 | regions_train = [regions] * dataset.train_size
532 | dataset.probe.regions_test = regions_test
533 | dataset.probe.regions_train = regions_train
534 | dataset.gallery.regions_test = regions_test
535 | dataset.gallery.regions_train = regions_train
536 |
537 | def dict_name(self):
538 | # return {"Regions": self.regions_name, "RegCount": len(self.regions)}
539 | return {"name": self.regions_name, "params": str([len(self.regions)])}
540 |
541 |
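The default regions above are expressed as percentages of the image height; a minimal sketch (not the library API) of the conversion done in `preprocess_dataset`, where the 128x48 image size is only an illustrative assumption:

```python
# Hypothetical helper mirroring VerticalRegionsPartition.preprocess_dataset:
# each [start%, end%] pair becomes a (top, bottom, left, right) pixel box.
import numpy as np

def regions_to_pixels(regions_pct, height, width):
    """Convert [start%, end%] pairs into (top, bottom, left, right) pixel boxes."""
    return np.asarray([(int(r0 * height / 100.), int(r1 * height / 100.), 0, width)
                       for r0, r1 in regions_pct])

boxes = regions_to_pixels([[16, 33], [33, 50], [50, 67], [67, 84], [84, 100]],
                          height=128, width=48)
# boxes[0] -> [20, 42, 0, 48]: rows 20..42 of a 128x48 image, full width.
```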
542 | class GaussianMap(Preprocessing):
543 | def __init__(self, alpha=0.5, kernel="GMM", sigmas=None, deviations=None, skip=False):
544 | super(GaussianMap, self).__init__(skip)
545 | if sigmas is None:
546 | self.sigmas = np.asarray([7.4, 8.7])
547 | else:
548 | self.sigmas = sigmas
549 | self.sigmas = np.asarray(self.sigmas)
550 | if deviations is None:
551 | self.deviations = np.asarray([1., 2.])
552 | else:
553 | self.deviations = deviations
554 | self.deviations = np.asarray(self.deviations)
555 | self.sigmas_str = str(self.sigmas)
556 | self.deviations_str = str(self.deviations)
557 | self.J = 0
558 | self.deltaJ = 0
559 | self.search_range_V = []
560 | self.alpha = alpha
561 | self.kernel_name = kernel
562 | if kernel == "GMM":
563 | self.kernel = _gau_mix_kernel
564 | elif kernel == "Gaussian":
565 | self.kernel = _gau_kernel
566 | else:
567 | raise ValueError("Invalid kernel %s" % kernel)
568 |
569 | def preprocess_dataset(self, dataset, n_jobs=-1, verbosity=2):
570 | if self.skip:
571 | return
572 |
573 | if verbosity > 1: print(" Calculating %s Maps..." % self.kernel_name)
574 | (_, self.J, _) = dataset.probe.images_test[0].shape # Assumes all images of same shape
575 | self.deltaJ = self.J / 3
576 | # self.deviations = self.sigmas / self.deviations
577 | self.sigmas = self.J / self.sigmas
578 |
579 | self.search_range_V = [self.deltaJ, (self.J - self.deltaJ) - 1]
580 | ims = dataset.probe.images_train + dataset.probe.images_test
581 | ims += dataset.gallery.images_train + dataset.gallery.images_test
582 | masks = dataset.probe.masks_train + dataset.probe.masks_test
583 | masks += dataset.gallery.masks_train + dataset.gallery.masks_test
584 | regions = dataset.probe.regions_train + dataset.probe.regions_test
585 | regions += dataset.gallery.regions_train + dataset.gallery.regions_test
586 |
587 | results = Parallel(n_jobs)(delayed(_parallel_preprocess)(self, i, m, r) for i, m, r in zip(ims, masks, regions))
588 |
589 | train_len = dataset.train_size
590 | test_len = dataset.test_size
591 | dataset.probe.maps_train = results[:train_len]
592 | dataset.probe.maps_test = results[train_len:train_len + test_len]
593 | dataset.gallery.maps_train = results[train_len + test_len:-test_len]
594 | dataset.gallery.maps_test = results[-test_len:]
595 |
596 | def process_image(self, im, *args):
597 | """
598 | Compute the symmetry-based kernel map for a single image.
599 | :param im: image to process
600 | :param args: mask and regions of the image
601 | :return: kernel map with the same shape as the mask
602 | """
603 | mask = args[0]
604 | regions = args[1]
605 |
606 | im_hsv = im.to_color_space(CS_HSV, normed=True)
607 |
608 | # kernel_map = []
609 | kernel_map = np.zeros_like(mask, dtype=np.float32)  # renamed to avoid shadowing the builtin map
610 | for region, sigma, deviation in zip(regions, self.sigmas, self.deviations):
611 | lineTop = region[0]
612 | lineDown = region[1]
613 | sim_line = np.uint16(fminbound(self.sym_div, self.search_range_V[0], self.search_range_V[1],
614 | (im_hsv[lineTop:lineDown, :], mask[lineTop:lineDown, :], self.deltaJ,
615 | self.alpha), 1e-3))
616 | kernel_map[lineTop:lineDown, :] = self.kernel(sim_line, sigma, lineDown - lineTop, self.J, deviation)
617 |
618 | return kernel_map
619 |
620 | @staticmethod
621 | def sym_div(i, img, mask, delta, alpha):
622 | i = int(round(i))
623 | delta = int(delta)
624 | imgL = img[:, 0:i]
625 | imgR = img[:, (i - 1):]
626 | MSK_L = mask[:, 0:i]
627 | MSK_R = mask[:, (i - 1):]
628 |
629 | dimLoc = delta
630 | indexes = list(range(dimLoc))
631 |
632 | imgLloc = imgL[:, indexes, :]
633 | imgRloc = imgR[:, indexes, :][:, ::-1]
634 | d = alpha * math.sqrt(np.sum((imgLloc - imgRloc) ** 2)) / dimLoc + \
635 | (1 - alpha) * abs(int(np.sum(MSK_R)) - np.sum(MSK_L)) / max([MSK_L.size, MSK_R.size])
636 | return d
637 |
638 | def dict_name(self):
639 | # name = {"Map": self.kernel_name, "MapSigmas": self.sigmas_str}
640 | name = {"name": self.kernel_name, "params": [self.alpha, self.sigmas_str]}
641 | if self.kernel_name == "GMM":
642 | # name.update({"MapDeviations": self.deviations_str})
643 | name["params"].append(self.deviations_str)
644 | name["params"] = str(name["params"])
645 | return name
646 |
647 |
648 | def _gau_mix_kernel(x, sigma, H, W, dev):
649 | x1 = float(dev)
650 | w1 = 0.5
651 | # w2 = w1
652 | g1 = norm.pdf(list(range(0, W)), x - x1, sigma)
653 | # g1 /= g1.max()
654 | g2 = norm.pdf(list(range(0, W)), x + x1, sigma)
655 | # g2 /= g2.max()
656 | gfinal = w1 * g1 + w1 * g2
657 | gfinal /= gfinal.max()
658 | gfinal = np.tile(gfinal, [H, 1])
659 | return gfinal
660 |
661 |
662 | def _gau_kernel(x, sigma, H, W, *args):
663 | g = norm.pdf(list(range(0, W)), x, sigma)
664 | g /= g.max()
665 | g = np.tile(g, [H, 1])
666 | return g
667 |
668 |
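`_gau_kernel` builds a per-region weighting map: a Gaussian over the columns, peaking at the detected symmetry axis and repeated over all rows. A self-contained numpy sketch, replacing `scipy.stats.norm.pdf` with an explicit Gaussian (equivalent after max-normalisation):

```python
# Sketch of the map produced by _gau_kernel: a Gaussian centred on the
# symmetry column x, normalised to peak at 1 and tiled over all H rows.
import numpy as np

def gaussian_map(x, sigma, H, W):
    g = np.exp(-((np.arange(W) - x) ** 2) / (2.0 * sigma ** 2))
    g /= g.max()                 # peak weight of 1 on the symmetry axis
    return np.tile(g, [H, 1])    # same column weighting for every row

m = gaussian_map(x=24, sigma=6.0, H=42, W=48)
# m.shape == (42, 48); column 24 (the symmetry axis) is all ones.
```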
669 | class LoadFromFile(Preprocessing):
670 | def __init__(self, f, field):
671 | """
672 | Loads a precalculated result for Regions or Maps. Especially useful for batch executions, as it saves
673 | CPU time. Does not appear in the dict name.
674 | :param f: File .npy to load from
675 | :param field: specify if it is Regions or Maps
676 | :return:
677 | """
678 | super(LoadFromFile, self).__init__()
679 | self.file = f
680 | self.field = field.lower()
681 |
682 | def preprocess_dataset(self, dataset, n_jobs=-1, verbosity=2):
683 | data = list(np.load(self.file))
684 | train_len = dataset.train_size
685 | test_len = dataset.test_size
686 | if self.field == "regions":
687 | dataset.probe.regions_train = data[:train_len]
688 | dataset.probe.regions_test = data[train_len:train_len + test_len]
689 | dataset.gallery.regions_train = data[train_len + test_len:-test_len]
690 | dataset.gallery.regions_test = data[-test_len:]
691 | elif self.field == "maps":
692 | dataset.probe.maps_train = data[:train_len]
693 | dataset.probe.maps_test = data[train_len:train_len + test_len]
694 | dataset.gallery.maps_train = data[train_len + test_len:-test_len]
695 | dataset.gallery.maps_test = data[-test_len:]
696 |
697 | def dict_name(self):
698 | return {}
699 |
--------------------------------------------------------------------------------
/package/segmenter.py:
--------------------------------------------------------------------------------
1 | # import copy
2 | # # import package.app as app
3 | # import cv2
4 | # import numpy as np
5 | # from package.image import Image, CS_BGR
6 | # from package.utilities import FileNotFoundError
7 | #
8 | # __author__ = 'luigolas'
9 | #
10 | #
11 | # class Segmenter(object):
12 | # compatible_color_spaces = []
13 | #
14 | # def segment(self, image):
15 | # """
16 | #
17 | # :param image:
18 | # :raise NotImplementedError:
19 | # """
20 | # raise NotImplementedError("Please Implement segment method")
21 | #
22 | #
23 | # class Grabcut(Segmenter):
24 | # compatible_color_spaces = [CS_BGR]
25 | #
26 | # def __init__(self, mask_source, iter_count=2, color_space=CS_BGR):
27 | # self._mask_name = mask_source.split("/")[-1].split(".")[0]
28 | # self._mask = np.loadtxt(mask_source, np.uint8)
29 | # self._iter_count = iter_count
30 | # if color_space not in Grabcut.compatible_color_spaces:
31 | # raise AttributeError("Grabcut can't work with colorspace " + str(color_space))
32 | # self._colorspace = color_space
33 | # self.name = type(self).__name__ + str(self._iter_count) + self._mask_name
34 | # self.dict_name = {"Segmenter": str(type(self).__name__), "SegIter": self._iter_count,
35 | # "Mask": self._mask_name}
36 | #
37 | # def segment(self, image):
38 | # """
39 | #
40 | # :param image:
41 | # :return: :raise TypeError:
42 | # """
43 | # if not isinstance(image, Image):
44 | # raise TypeError("Must be a valid Image (package.image) object")
45 | #
46 | # if image.colorspace != self._colorspace:
47 | # raise AttributeError("Image must be in BGR color space")
48 | #
49 | # # if app.DB:
50 | # # try:
51 | # # mask = app.DB[self.dbname(image.imgname)]
52 | # # # print("returning mask for " + imgname + " [0][0:5]: " + str(mask[4][10:25]))
53 | # # return mask
54 | # # except FileNotFoundError:
55 | # # # Not in DataBase, continue calculating
56 | # # pass
57 | #
58 | # bgdmodel = np.zeros((1, 65), np.float64)
59 | # fgdmodel = np.zeros((1, 65), np.float64)
60 | # # mask = self._mask.copy()
61 | # mask = copy.copy(self._mask)
62 | # cv2.grabCut(image, mask, None, bgdmodel, fgdmodel, self._iter_count, cv2.GC_INIT_WITH_MASK)
63 | #
64 | # mask = np.where((mask == 2) | (mask == 0), 0, 1).astype('uint8')
65 | #
66 | # # if app.DB:
67 | # # app.DB[self.dbname(image.imgname)] = mask
68 | #
69 | # return mask
70 | #
71 | # def dbname(self, imgname):
72 | # class_name = type(self).__name__
73 | # foldername = imgname.split("/")[-2]
74 | # imgname = imgname.split("/")[-1]
75 | # imgname = imgname.split(".")[0] # Take out file extension
76 | # keys = ["masks", class_name, "iter" + str(self._iter_count), self._mask_name, foldername, imgname]
77 | # return keys
78 |
--------------------------------------------------------------------------------
/package/statistics.py:
--------------------------------------------------------------------------------
1 | __author__ = 'luigolas'
2 |
3 | import numpy as np
4 | from scipy.stats import cumfreq
5 |
6 |
7 | class Statistics(object):
8 | """
9 | Position list: for each element in probe, find its same ids in gallery. Format: np.array([[2,14],[1,2],...])
10 | Mean list: means of the position list along axis 0. Format: np.array([1.52, 4.89])
11 | Mode list: TODO calculate the statistical mode along axis 0. Format: (np.array([[2., 4.]]), np.array([[10, 12]]))
12 | Prob admissible range: for each row in the position list, check whether it is lower than a value, then sum and
13 | compute the percentage
14 | """
15 | def __init__(self):
16 | self.matching_order = None
17 | self.mean_value = None
18 | self.CMC = None
19 | self.AUC = None
20 |
21 | def dict_name(self, ranges=None):
22 | """
23 | Build a results summary for the given CMC ranges.
24 | :param ranges: ranks to report (defaults to [1, 5, 10, 20, 50])
25 | :return: dict with AUC, mean value and the CMC value at each requested rank
26 | """
27 | if not ranges:
28 | ranges = [1, 5, 10, 20, 50]
29 | name = {"AUC": self.AUC, "MeanValue": self.mean_value}
30 | for r in ranges:
31 | name.update({"Range%02d" % r: self.CMC[r - 1]})
32 | return name
33 |
34 | def run(self, dataset, ranking_matrix):
35 | """
36 | Compute matching order, mean value, CMC and AUC for the given ranking.
37 | :param dataset: dataset providing test indexes and test size
38 | :param ranking_matrix: ranking matrix to evaluate
39 | :return:
40 | """
41 | # Filter ranking matrix to test values of the dataset
42 | if ranking_matrix.shape[0] != len(dataset.test_indexes):
43 | ranking_matrix = self._ranking_matrix_reshape(ranking_matrix, dataset.test_indexes)
44 |
45 | self._calc_matching_order(ranking_matrix)
46 | self._calc_mean_value()
47 | self._calcCMC(dataset.test_size)
48 | self._calcAUC(dataset.test_size)
49 |
50 | def _calc_matching_order(self, ranking_matrix):
51 | """
52 | For each probe, record the rank (1-based) at which its match appears in the gallery.
53 | :param ranking_matrix: ranking matrix to scan
54 | :return:
55 | """
56 | matching_order = []
57 |
58 | for elemp, rank_list in enumerate(ranking_matrix):
59 | # probe_elem = dataset.test_indexes[elemp]
60 | for column, elemg in enumerate(rank_list):
61 | # if dataset.same_individual_by_pos(elemp, np.where(dataset.test_indexes == elemg)[0][0],
62 | # set="test"):
63 | if elemp == elemg:
64 | matching_order.append(column + 1) # CMC count from position 1
65 | break
66 | self.matching_order = np.asarray(matching_order, np.uint16)
67 |
68 | def _calc_mean_value(self):
69 | """
70 | Mean of the matching order over all probes.
71 | :return:
72 | """
73 | self.mean_value = np.mean(self.matching_order)
74 | # self.mean_value = np.mean(self.matching_order, 1) # For multiview case
75 |
76 | def _calcAUC(self, test_size):
77 | """
78 | Area under the normalized CMC curve.
79 | :param test_size: number of test elements
80 | :return:
81 | """
82 | self.AUC = self.CMC.sum() / test_size # CMC already normalized to 0:100
83 | # self.AUC = (self.CMC.sum() / (test_size * test_size)) * 100. # if CMC were not normalized
84 |
85 | # def plot_position_list(self, fig_name, zoom=None, show=False):
86 | # bins_rank, num_positions = self.position_list.shape
87 | # colors = itertools.cycle(["blue", "red", "green", "yellow", "orange"])
88 | # for i in range(num_positions):
89 | # plt.hist(self.position_list[:, i], bins=bins_rank, label='Pos ' + str(i), histtype='stepfilled', alpha=.8,
90 | # color=next(colors), cumulative=True, normed=True)
91 | # plt.yticks(np.arange(0, 1.01, 0.05))
92 | # plt.grid(True)
93 | # plt.title("Ranking Histogram")
94 | # plt.xlabel("Value")
95 | # plt.ylabel("Frequency")
96 | # # Put a legend below current axis
97 | # plt.legend(loc="upper left", bbox_to_anchor=(1, 1))
98 | # # Zoomed figure
99 | # if zoom:
100 | # plt.axis([0, zoom, 0, 1])
101 | # plt.xticks(range(0, zoom+1, 2))
102 | # plt.savefig(fig_name, bbox_inches='tight')
103 | # if show:
104 | # plt.show()
105 | # # Clean and close figure
106 | # plt.clf()
107 | # plt.close()
108 |
109 | def _calcCMC(self, size):
110 | cumfreqs = (cumfreq(self.matching_order, numbins=size)[0] / size) * 100.
111 | self.CMC = cumfreqs.astype(np.float32)
112 | # len(self.matching_order[self.matching_order <= admissible]) / float(self.dataset.test_size)
113 |
114 | @staticmethod
115 | def _ranking_matrix_reshape(ranking_matrix, test_indexes):
116 | # TODO Optimize or use matching matrix directly
117 | ranking_matrix = ranking_matrix[test_indexes]
118 | length = ranking_matrix.shape[0]
119 | elems = np.in1d(ranking_matrix, test_indexes).reshape(ranking_matrix.shape)
120 | ranking_matrix = ranking_matrix[elems]
121 | ranking_matrix = ranking_matrix.reshape(length, length)
122 | rm = np.empty_like(ranking_matrix)
123 | for pos, val in enumerate(test_indexes):
124 | rm[ranking_matrix == val] = pos
125 | return rm
126 |
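The CMC/AUC computation above can be sketched without scipy's `cumfreq`, since `np.bincount` gives the same cumulative counts when ranks are 1-based and bounded by `test_size`; a minimal, hedged reimplementation:

```python
# Sketch of Statistics._calcCMC/_calcAUC: matching_order holds, for each
# probe, the 1-based rank at which its true match appeared.
import numpy as np

def cmc_auc(matching_order, test_size):
    # counts[r-1] = number of probes whose match appeared exactly at rank r
    counts = np.bincount(matching_order, minlength=test_size + 1)[1:test_size + 1]
    cmc = (np.cumsum(counts) / float(test_size)) * 100.0  # normalised to 0..100
    auc = cmc.sum() / test_size                           # CMC already normalised
    return cmc.astype(np.float32), auc

cmc, auc = cmc_auc(np.array([1, 1, 2, 3]), test_size=4)
# cmc -> [50., 75., 100., 100.]: 50% matched at rank 1, 75% by rank 2.
```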
--------------------------------------------------------------------------------
/package/utilities.py:
--------------------------------------------------------------------------------
1 | __author__ = 'luigolas'
2 |
3 | import numpy as np
4 | from itertools import islice, takewhile, count
5 | import sys
6 | import time
7 |
8 |
9 | def safe_ln(x, minval=0.0000000001):
10 | return np.log(x.clip(min=minval))
11 |
12 |
13 | def status(percent, flush=True):
14 | sys.stdout.write("%3d%%\r" % percent)
15 | if flush:
16 | sys.stdout.flush()
17 | else:
18 | sys.stdout.write("\n")
19 |
20 |
21 | def split_every(n, it):
22 | """
23 | Split an iterator into lists of n items. http://stackoverflow.com/a/22919323
24 | :param n: chunk size
25 | :param it: iterator to split
26 | :return: generator of lists of length at most n
27 | """
28 | return takewhile(bool, (list(islice(it, n)) for _ in count(0)))
29 |
30 |
31 | # split_every = (lambda n, it:
32 | # takewhile(bool, (list(islice(it, n)) for _ in count(0))))
33 |
34 |
35 | def chunks(iterable, n):
36 | """Yields successive lists of at most n items; assumes n is an integer > 0.
37 | """
38 | iterable = iter(iterable)
39 | while True:
40 | result = []
41 | for i in range(n):
42 | try:
43 | a = next(iterable)
44 | except StopIteration:
45 | break
46 | else:
47 | result.append(a)
48 | if result:
49 | yield result
50 | else:
51 | break
52 |
53 |
54 | def time_execution(fun, repeats=1):
55 | times = []
56 | for i in range(repeats):
57 | start_time = time.time()
58 | fun()
59 | end_time = time.time() - start_time
60 | times.append(end_time)
61 | print("Time %d --- %s seconds ---" % (i, end_time))
62 | print("Min time: --- %s seconds ---" % min(times))
63 |
64 |
65 | def send_mail_by_gmail(user, password, files=None, subject="FluentMail", body=""):
66 | from fluentmail import FluentMail, TLS
67 | print("Sending mail: %s" % subject)
68 |
69 | mail = FluentMail('smtp.gmail.com', 587, TLS)
70 |
71 | mail = mail.credentials(user, password) \
72 | .from_address(user) \
73 | .to(user) \
74 | .subject(subject) \
75 | .body(body, 'utf-8')
76 | if files:
77 | for f in files:
78 | mail.attach(f)
79 | mail.send()
80 | # .body(u'Hi, I\'m FluentMail.', 'utf-8') \
81 |
82 |
83 | class Timer(object):
84 | """
85 | http://stackoverflow.com/a/5849861/3337586
86 | """
87 |
88 | def __init__(self, name=None):
89 | self.name = name
90 |
91 | def __enter__(self):
92 | self.tstart = time.time()
93 |
94 | def __exit__(self, type, value, traceback):
95 | if self.name:
96 | print('[%s]' % self.name, end=' ')
97 | print('Elapsed: %s' % (time.time() - self.tstart))
98 |
99 |
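A usage sketch for `Timer` (re-declared here, with Python 3 prints, so the snippet runs standalone):

```python
# Timer is a context manager: it records the start time on entry and prints
# the elapsed time (prefixed with an optional name) on exit.
import time

class Timer(object):
    def __init__(self, name=None):
        self.name = name

    def __enter__(self):
        self.tstart = time.time()

    def __exit__(self, type, value, traceback):
        if self.name:
            print('[%s]' % self.name, end=' ')
        print('Elapsed: %s' % (time.time() - self.tstart))

with Timer('demo'):
    sum(range(1000))  # any timed work goes in the with-block
```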
100 | # Exceptions definitions
101 | # =====================
102 | class InitializationError(Exception):
103 | pass
104 |
105 |
106 | class ImagesNotFoundError(Exception):
107 | pass
108 |
109 |
110 | class NotADirectoryError(Exception):
111 | pass
112 |
113 |
114 | class FileNotFoundError(Exception):
115 | pass
--------------------------------------------------------------------------------
/resources/masks/ViperManualMask.txt:
--------------------------------------------------------------------------------
1 | 0 0 0 0 0 0 0 0 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 0 0 0 0 0 0 0 0
2 | 0 0 0 0 0 0 0 0 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 0 0 0 0 0 0 0 0
3 | 0 0 0 0 0 0 0 0 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 0 0 0 0 0 0 0 0
4 | 0 0 0 0 0 0 0 0 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 0 0 0 0 0 0 0 0
5 | 0 0 0 0 0 0 0 0 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 0 0 0 0 0 0 0 0
6 | 0 0 0 0 0 0 0 0 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 0 0 0 0 0 0 0 0
7 | 0 0 0 0 0 0 0 0 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 0 0 0 0 0 0 0 0
8 | 0 0 0 0 0 0 0 0 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 0 0 0 0 0 0 0 0
9 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
10 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
11 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
12 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
13 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
14 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
15 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
16 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
17 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
18 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
19 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
20 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
21 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
22 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
23 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
24 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
25 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
26 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
27 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
28 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
29 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
30 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
31 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
32 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
33 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
34 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
35 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
36 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
37 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
38 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
39 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
40 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
41 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
42 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
43 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
44 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
45 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
46 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
47 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
48 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
49 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
50 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
51 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 1 1 1 1 1 1 1 1 3 3 3 3 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
52 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 1 1 1 1 1 1 1 1 3 3 3 3 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
53 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 1 1 1 1 1 1 1 1 3 3 3 3 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
54 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 1 1 1 1 1 1 1 1 3 3 3 3 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
55 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 1 1 1 1 1 1 1 1 3 3 3 3 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
56 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 1 1 1 1 1 1 1 1 3 3 3 3 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
57 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 1 1 1 1 1 1 1 1 3 3 3 3 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
58 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 1 1 1 1 1 1 1 1 3 3 3 3 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
59 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 1 1 1 1 1 1 1 1 3 3 3 3 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
60 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 1 1 1 1 1 1 1 1 3 3 3 3 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
61 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 1 1 1 1 1 1 1 1 3 3 3 3 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
62 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 1 1 1 1 1 1 1 1 3 3 3 3 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
63 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 1 1 1 1 1 1 1 1 3 3 3 3 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
64 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 1 1 1 1 1 1 1 1 3 3 3 3 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
65 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 1 1 1 1 1 1 1 1 3 3 3 3 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
66 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 1 1 1 1 1 1 1 1 3 3 3 3 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
67 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 1 1 1 1 1 1 1 1 3 3 3 3 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
68 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 1 1 1 1 1 1 1 1 3 3 3 3 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
69 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 1 1 1 1 1 1 1 1 3 3 3 3 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
70 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 1 1 1 1 1 1 1 1 3 3 3 3 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
71 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 1 1 1 1 1 1 1 1 3 3 3 3 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
72 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 1 1 1 1 1 1 1 1 3 3 3 3 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
73 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 1 1 1 1 1 1 1 1 3 3 3 3 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
74 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 1 1 1 1 1 1 1 1 3 3 3 3 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
75 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 1 1 1 1 1 1 1 1 3 3 3 3 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
76 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 2 2 2 2 2 2 2 2 3 3 3 3 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
77 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 2 2 2 2 2 2 2 2 3 3 3 3 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
78 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 2 2 2 2 2 2 2 2 3 3 3 3 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
79 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 2 2 2 2 2 2 2 2 3 3 3 3 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
80 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 2 2 2 2 2 2 2 2 3 3 3 3 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
81 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 2 2 2 2 2 2 2 2 3 3 3 3 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
82 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 2 2 2 2 2 2 2 2 3 3 3 3 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
83 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 2 2 2 2 2 2 2 2 3 3 3 3 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
84 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 2 2 2 2 2 2 2 2 3 3 3 3 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
85 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 2 2 2 2 2 2 2 2 3 3 3 3 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
86 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 2 2 2 2 2 2 2 2 3 3 3 3 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
87 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 2 2 2 2 2 2 2 2 3 3 3 3 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
88 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 2 2 2 2 2 2 2 2 3 3 3 3 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
89 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 2 2 2 2 2 2 2 2 3 3 3 3 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
90 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 2 2 2 2 2 2 2 2 3 3 3 3 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
91 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 2 2 2 2 2 2 2 2 3 3 3 3 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
92 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 2 2 2 2 2 2 2 2 3 3 3 3 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
93 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 2 2 2 2 2 2 2 2 3 3 3 3 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
94 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 2 2 2 2 2 2 2 2 3 3 3 3 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
95 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 2 2 2 2 2 2 2 2 3 3 3 3 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
96 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
97 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
98 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
99 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
100 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
101 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
102 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
103 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
104 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
105 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
106 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
107 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
108 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
109 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
110 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
111 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
112 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
113 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
114 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
115 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
116 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
117 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
118 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
119 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
120 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
121 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
122 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
123 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
124 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
125 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
126 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
127 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
128 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
129 |
--------------------------------------------------------------------------------
/resources/masks/ViperOptimalMask.txt:
--------------------------------------------------------------------------------
1 | 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2 | 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 2 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
3 | 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 0 0 0 0 0 0 0 0 0 0 0 0 0
4 | 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 2 2 3 3 3 3 3 3 3 3 2 2 2 2 2 0 0 0 0 0 0 0 0 0 0 0 0 0
5 | 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 2 2 3 3 3 3 3 3 1 3 3 3 3 2 2 2 2 0 0 0 0 0 0 0 0 0 0 0 0
6 | 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 2 3 3 3 3 1 1 1 1 1 1 3 3 3 3 2 2 2 2 0 0 0 0 0 0 0 0 0 0 0
7 | 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 2 3 3 3 3 1 1 1 1 1 1 1 3 3 3 3 2 2 2 2 0 0 0 0 0 0 0 0 0 0 0
8 | 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 2 3 3 3 3 3 1 1 1 1 1 1 1 1 1 3 3 3 2 2 2 2 0 0 0 0 0 0 0 0 0 0
9 | 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 2 3 3 3 3 3 1 1 1 1 1 1 1 1 1 3 3 3 2 2 2 2 0 0 0 0 0 0 0 0 0 0
10 | 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 2 2 3 3 3 3 3 1 1 1 1 1 1 1 1 1 3 3 3 3 2 2 2 2 0 0 0 0 0 0 0 0 0
11 | 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 2 2 3 3 3 3 1 1 1 1 1 1 1 1 1 1 3 3 3 3 2 2 2 0 0 0 0 0 0 0 0 0 0
12 | 0 0 0 0 0 0 0 0 0 0 0 0 0 2 2 2 2 3 3 3 3 3 1 1 1 1 1 1 1 1 1 3 3 3 3 2 2 2 2 0 0 0 0 0 0 0 0 0
13 | 0 0 0 0 0 0 0 0 0 0 0 0 0 2 2 2 2 3 3 3 3 1 1 1 1 1 1 1 1 1 1 3 3 3 2 2 2 2 2 0 0 0 0 0 0 0 0 0
14 | 0 0 0 0 0 0 0 0 0 0 0 0 0 2 2 2 2 3 3 3 3 1 1 1 1 1 1 1 1 1 1 3 3 3 2 2 2 2 2 0 0 0 0 0 0 0 0 0
15 | 0 0 0 0 0 0 0 0 0 0 0 0 0 2 2 2 2 3 3 3 3 1 1 1 1 1 1 1 1 1 1 3 3 3 2 2 2 2 0 0 0 0 0 0 0 0 0 0
16 | 0 0 0 0 0 0 0 0 0 0 0 0 0 2 2 2 2 3 3 3 1 1 1 1 1 1 1 1 1 1 1 3 3 3 2 2 2 2 0 0 0 0 0 0 0 0 0 0
17 | 0 0 0 0 0 0 0 0 0 0 0 0 0 2 2 2 3 3 3 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 3 2 2 2 0 0 0 0 0 0 0 0 0 0
18 | 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 2 2 2 0 0 0 0 0 0 0 0 0 0 0
19 | 0 0 0 0 0 0 0 0 0 0 0 0 0 2 3 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 2 2 2 0 0 0 0 0 0 0 0 0 0
20 | 0 0 0 0 0 0 0 0 0 0 0 0 2 3 3 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 2 2 2 2 2 0 0 0 0 0 0 0 0
21 | 0 0 0 0 0 0 0 0 0 0 2 2 3 3 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 2 2 2 2 0 0 0 0 0 0 0 0
22 | 0 0 0 0 0 0 0 0 0 2 2 2 3 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 3 2 2 2 2 2 0 0 0 0 0 0 0
23 | 0 0 0 0 0 0 0 0 2 2 3 3 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 2 2 2 2 2 0 0 0 0 0 0
24 | 0 0 0 0 0 0 0 2 2 3 3 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 2 2 2 2 0 0 0 0 0 0
25 | 0 0 0 0 0 0 2 2 3 3 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 2 2 2 2 2 0 0 0 0 0
26 | 0 0 0 0 0 2 2 2 3 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 2 2 2 2 0 0 0 0 0
27 | 0 0 0 0 2 2 3 3 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 2 2 2 2 0 0 0 0
28 | 0 0 0 0 2 2 3 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 2 2 2 2 0 0 0 0
29 | 0 0 0 2 2 3 3 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 2 2 2 0 0 0 0
30 | 0 0 0 2 3 3 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 2 2 2 0 0 0 0
31 | 0 0 0 2 3 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 2 2 2 2 0 0 0
32 | 0 0 0 2 3 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 3 2 2 2 0 0 0
33 | 0 0 2 2 3 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 2 2 2 0 0 0
34 | 0 0 2 2 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 2 2 2 0 0 0
35 | 0 0 2 3 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 2 2 2 2 0 0
36 | 0 2 2 3 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 3 2 2 2 0 0
37 | 0 2 2 3 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 3 2 2 2 0 0
38 | 0 2 2 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 2 2 2 2 2
39 | 0 2 2 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 2 2 2 2 2
40 | 2 2 3 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 2 2 2 2
41 | 2 2 3 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 2 2 2 0
42 | 2 2 3 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 2 2 2 0
43 | 2 2 3 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 2 2 2 0
44 | 2 2 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 2 2 2 0
45 | 2 3 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 3 2 2 0
46 | 2 3 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 3 2 2 2
47 | 0 2 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 2 2 0
48 | 0 2 2 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 2 2 2
49 | 0 2 2 3 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 2 2 2 2
50 | 0 2 2 3 3 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 1 3 3 3 2 2 2 2
51 | 0 2 2 2 3 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 3 3 2 2 2 2
52 | 0 2 2 2 3 3 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 3 3 2 2 0 0
53 | 0 0 2 2 3 3 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 3 2 2 2 0 0
54 | 0 0 2 2 3 3 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 3 2 2 2 2 0
55 | 0 0 2 2 2 3 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 3 2 2 2 2 0
56 | 0 0 2 2 2 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 3 2 2 2 2 0
57 | 0 0 2 2 2 2 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 3 3 2 2 2 2 2
58 | 0 0 0 2 2 2 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 3 3 2 2 2 2 2
59 | 0 0 0 2 2 2 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 3 3 2 2 2 2 0
60 | 0 0 0 2 2 2 2 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 3 3 3 2 2 2 2 0
61 | 0 0 0 2 2 2 2 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 3 3 3 2 2 2 2 0
62 | 0 0 0 0 2 2 2 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 3 3 3 2 2 2 2 0
63 | 0 0 0 0 2 2 2 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 3 2 2 2 2 2 2 0
64 | 0 0 0 0 2 2 2 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 3 2 2 2 2 2 2 0
65 | 0 0 0 0 2 2 2 3 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 2 2 2 2 2 2 2 0
66 | 0 0 0 0 2 2 2 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 2 3 2 2 2 2 2 2
67 | 0 0 0 0 0 2 2 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 2 2 2 2 2 2 0 0
68 | 0 0 0 0 0 2 2 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 2 2 2 2 2 2 0 0
69 | 0 0 0 0 0 2 2 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 2 2 2 2 2 2 0 0
70 | 0 0 0 0 0 2 2 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 2 2 2 2 2 2 0 0
71 | 0 0 0 0 0 2 2 3 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 2 2 2 0 0 0 0 0
72 | 0 0 0 0 0 2 2 2 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 2 2 2 0 0 0 0 0
73 | 0 0 0 0 0 0 2 2 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 2 2 0 0 0 0 0 0
74 | 0 0 0 0 0 0 2 2 3 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 2 2 0 0 0 0 0 0
75 | 0 0 0 0 0 0 0 2 3 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 2 2 0 0 0 0 0 0
76 | 0 0 0 0 0 0 0 2 2 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 2 2 0 0 0 0 0 0
77 | 0 0 0 0 0 0 0 0 2 2 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 3 2 2 0 0 0 0 0 0
78 | 0 0 0 0 0 0 0 0 2 2 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 3 2 0 0 0 0 0 0 0
79 | 0 0 0 0 0 0 0 0 2 2 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 2 2 0 0 0 0 0 0 0
80 | 0 0 0 0 0 0 0 0 2 2 2 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 2 2 2 0 0 0 0 0 0
81 | 0 0 0 0 0 0 0 0 0 2 2 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 2 2 2 0 0 0 0 0 0
82 | 0 0 0 0 0 0 0 0 0 2 2 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 3 2 2 2 0 0 0 0 0 0
83 | 0 0 0 0 0 0 0 0 0 2 2 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 3 2 2 0 0 0 0 0 0 0
84 | 0 0 0 0 0 0 0 0 0 2 2 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 3 2 2 2 0 0 0 0 0 0
85 | 0 0 0 0 0 0 0 0 0 2 2 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 2 2 2 2 0 0 0 0 0 0
86 | 0 0 0 0 0 0 0 0 0 2 2 3 1 1 1 1 1 1 1 1 1 1 3 1 1 1 1 1 1 1 1 1 1 1 3 3 3 3 2 2 2 2 0 0 0 0 0 0
87 | 0 0 0 0 0 0 0 0 0 2 2 3 3 1 1 1 1 1 1 1 1 1 3 3 1 1 1 1 1 1 1 1 1 1 3 3 3 3 2 2 2 2 0 0 0 0 0 0
88 | 0 0 0 0 0 0 0 0 2 2 2 3 3 1 1 1 1 1 1 1 1 1 3 3 1 3 1 1 1 1 1 1 1 1 3 3 3 2 2 2 2 0 0 0 0 0 0 0
89 | 0 0 0 0 0 0 0 0 0 2 2 3 3 1 1 1 1 1 1 1 1 3 3 3 1 3 1 1 1 1 1 1 1 1 3 3 3 2 2 2 2 0 0 0 0 0 0 0
90 | 0 0 0 0 0 0 0 0 2 2 2 3 3 1 1 1 1 1 1 1 1 3 3 3 3 3 1 1 1 1 1 1 1 1 1 3 3 2 2 2 2 0 0 0 0 0 0 0
91 | 0 0 0 0 0 0 0 0 2 2 2 3 3 1 1 1 1 1 1 1 3 3 3 3 3 3 1 1 1 1 1 1 1 1 1 3 3 2 2 2 2 0 0 0 0 0 0 0
92 | 0 0 0 0 0 0 0 0 2 2 2 3 3 1 1 1 1 1 1 1 3 3 3 3 3 3 3 1 1 1 1 1 1 1 1 3 3 2 2 2 2 0 0 0 0 0 0 0
93 | 0 0 0 0 0 0 0 2 2 2 2 3 3 1 1 1 1 1 1 1 3 3 3 3 3 3 3 1 1 1 1 1 1 1 1 3 3 2 2 2 2 0 0 0 0 0 0 0
94 | 0 0 0 0 0 0 0 2 2 2 2 3 3 3 1 1 1 1 1 1 3 3 3 3 3 3 3 1 1 1 1 1 1 1 1 3 3 2 2 2 2 0 0 0 0 0 0 0
95 | 0 0 0 0 0 0 0 2 2 2 2 3 3 3 1 1 1 1 1 1 3 3 3 3 3 3 3 1 1 1 1 1 1 1 3 3 3 2 2 2 2 0 0 0 0 0 0 0
96 | 0 0 0 0 0 0 0 2 2 2 2 3 3 3 1 1 1 3 1 1 3 3 3 3 3 3 3 1 1 1 1 1 1 3 3 3 3 2 2 2 0 0 0 0 0 0 0 0
97 | 0 0 0 0 0 0 0 2 2 2 3 3 3 3 3 1 1 1 1 1 3 3 3 3 3 3 3 1 1 1 1 1 3 3 3 3 3 2 2 0 0 0 0 0 0 0 0 0
98 | 0 0 0 0 0 0 0 2 2 2 3 3 3 3 3 1 1 1 1 1 3 3 3 3 3 3 3 3 1 1 1 1 3 3 3 3 3 2 2 0 0 0 0 0 0 0 0 0
99 | 0 0 0 0 0 0 2 2 2 2 3 3 3 3 3 1 1 1 1 3 3 3 3 3 3 3 3 3 1 1 1 3 3 3 3 3 3 2 2 0 0 0 0 0 0 0 0 0
100 | 0 0 0 0 0 0 2 2 2 2 2 2 3 3 1 1 1 1 1 3 3 3 3 3 3 3 3 3 1 1 3 3 3 3 3 3 3 2 2 0 0 0 0 0 0 0 0 0
101 | 0 0 0 0 0 0 2 2 2 2 2 3 3 3 1 1 1 1 3 3 3 3 3 3 3 3 3 3 1 1 3 3 3 3 3 3 2 2 2 0 0 0 0 0 0 0 0 0
102 | 0 0 0 0 0 0 2 2 2 2 2 2 3 3 1 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 2 2 2 0 0 0 0 0 0 0 0 0
103 | 0 0 0 0 0 0 2 2 2 2 2 2 3 3 3 1 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 2 2 2 0 0 0 0 0 0 0 0 0
104 | 0 0 0 0 0 0 2 2 2 2 2 2 3 3 3 1 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 2 2 2 2 2 0 0 0 0 0 0 0 0
105 | 0 0 0 0 0 0 0 2 2 2 2 2 3 3 1 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 2 2 2 2 2 0 0 0 0 0 0 0 0
106 | 0 0 0 0 0 0 0 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 2 2 2 2 2 2 0 0 0 0 0 0 0 0
107 | 0 0 0 0 0 0 0 2 2 2 2 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 2 2 2 2 2 2 0 0 0 0 0 0 0 0
108 | 0 0 0 0 0 0 0 2 2 2 2 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 2 2 2 2 2 0 0 0 0 0 0 0 0 0
109 | 0 0 0 0 0 0 0 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 2 3 3 2 2 2 2 2 0 0 0 0 0 0 0 0 0
110 | 0 0 0 0 0 0 0 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 2 3 3 2 2 2 2 2 2 2 0 0 0 0 0 0 0
111 | 0 0 0 0 0 0 0 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 2 2 3 3 2 2 2 2 2 2 2 0 0 0 0 0 0 0
112 | 0 0 0 0 0 0 2 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 2 2 3 3 2 2 2 2 2 2 2 0 0 0 0 0 0 0
113 | 0 0 0 0 0 0 2 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 2 2 3 2 2 2 2 2 2 2 2 0 0 0 0 0 0 0
114 | 0 0 0 0 0 0 0 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 2 2 3 2 2 2 2 2 2 2 0 0 0 0 0 0 0 0
115 | 0 0 0 0 0 0 0 2 2 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 2 2 2 2 2 2 2 2 2 2 2 0 0 0 0 0 0 0
116 | 0 0 0 0 0 0 0 2 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 2 2 2 2 2 2 2 2 2 2 2 0 0 0 0 0 0 0 0
117 | 0 0 0 0 0 0 0 0 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 2 2 2 2 2 2 2 2 2 2 2 0 0 0 0 0 0 0 0
118 | 0 0 0 0 0 0 2 0 0 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 2 2 3 3 2 2 2 2 2 2 2 0 0 0 0 0 0 0 0
119 | 0 0 0 0 0 0 2 0 0 2 2 2 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3 3 3 2 2 3 3 3 2 2 2 2 2 2 2 0 0 0 0 0 0 0
120 | 0 0 0 0 0 0 2 0 2 2 2 2 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3 3 3 2 2 3 2 2 2 2 2 2 2 2 2 0 0 0 0 0 0 0
121 | 0 0 0 0 0 0 0 0 2 2 2 2 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 0 0 0 0 0 0
122 | 0 0 0 0 0 0 0 0 2 2 2 2 2 2 2 2 2 2 3 3 3 2 2 3 3 3 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 0 0 0 0 0 0
123 | 0 0 0 0 0 0 0 0 0 0 2 2 2 2 2 2 2 2 2 3 3 2 2 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 0 0 0 0 0
124 | 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 0 0 0 0 0 0 0
125 | 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 2 2 2 2 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
126 | 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
127 | 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
128 | 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
129 |
--------------------------------------------------------------------------------
/tests/__init__.py:
--------------------------------------------------------------------------------
1 | __author__ = 'luigolas'
2 |
--------------------------------------------------------------------------------