├── LICENSE
├── MANIFEST.in
├── README.md
├── docs
│   ├── images
│   │   ├── 8cats.png
│   │   ├── affineDemo.mp4
│   │   ├── affineDemo.webm
│   │   ├── background_resize.jpg
│   │   ├── cat_original.png
│   │   ├── cats image.jpg
│   │   ├── compositeMatching.png
│   │   ├── dog1_resize3.jpg
│   │   ├── morty.mp4
│   │   └── morty.webm
│   ├── index.html
│   ├── jspolygon.js
│   ├── main.js
│   ├── math.js
│   ├── moment.min.js
│   └── style.css
├── fullEndToEndDemo
│   ├── CMakeLists.txt
│   ├── include
│   │   └── hiredis
│   │       ├── async.h
│   │       ├── dict.h
│   │       ├── fmacros.h
│   │       ├── hiredis.h
│   │       ├── net.h
│   │       ├── read.h
│   │       ├── sds.h
│   │       ├── sdsalloc.h
│   │       └── win32.h
│   ├── inputImages
│   │   ├── 8cats.png
│   │   ├── cat1.png
│   │   ├── cat2.png
│   │   ├── cat3.png
│   │   ├── cat4.png
│   │   ├── cat5.png
│   │   ├── cat6.png
│   │   ├── cat7.png
│   │   ├── cat8.png
│   │   ├── cat_original.png
│   │   ├── mona.jpg
│   │   ├── monaComposite.jpg
│   │   └── van_gogh.jpg
│   ├── lib
│   │   └── libhiredis.a
│   ├── runDemo
│   ├── runDemo1.sh
│   ├── runDemo2.sh
│   ├── setup.sh
│   └── src
│       ├── FragmentHash.h
│       ├── Keypoint.h
│       ├── PerceptualHash.h
│       ├── PerceptualHash_Fast.h
│       ├── ShapeAndPositionInvariantImage.h
│       ├── Triangle.h
│       ├── curvature.py
│       ├── dumpKeypointsToJson.py
│       ├── img_hash_opencv_module
│       │   ├── PHash_Fast.cpp
│       │   ├── PHash_Fast.h
│       │   ├── img_hash.hpp
│       │   ├── img_hash_base.hpp
│       │   ├── phash.cpp
│       │   ├── phash.hpp
│       │   ├── precomp.hpp
│       │   └── precomp.hpp~
│       ├── main.cc
│       ├── mainImageProcessingFunctions.hpp
│       └── utils.hpp
├── setup.py
└── transformation_invariant_image_search
    ├── README.md
    ├── __init__.py
    ├── curvature.py
    ├── keypoints.py
    ├── main.py
    ├── phash.py
    └── requirements.txt
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2017 Tom Murphy
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/MANIFEST.in:
--------------------------------------------------------------------------------
1 | include LICENSE
2 | recursive-include transformation_invariant_image_search/templates *
3 | recursive-include transformation_invariant_image_search/static *
4 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | For more information, check out the discussion on [Hacker News](https://news.ycombinator.com/item?id=14973741).
2 |
3 | # Transformation-Invariant Reverse Image Search
4 |
5 | This repo demos a reverse image search algorithm which performs 2D affine transformation-invariant partial image-matching in sublinear time with respect to the number of images in the database.
6 |
7 | An online demo with a description of how the algorithm works is available here:
8 | [Demo](https://pippy360.github.io/transformationInvariantImageSearch)
9 |
10 | The /docs directory contains the front-end JavaScript demo: https://pippy360.github.io/transformationInvariantImageSearch
11 |
12 | The /fullEndToEndDemo directory contains two full end-to-end C++ demos of the algorithm.
13 |
14 | The two end-to-end C++ demos use Redis as a database and do a direct hash lookup for the constant number of hashes produced for each query image. Each demo runs in O(1) time with respect to the number of images in the database. A nearest-neighbor algorithm could be used instead to find the closest hash within some threshold, which would increase accuracy, but then the algorithm would run in amortized O(log n) time (depending on which NN algorithm was used).
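The direct hash lookup described above can be sketched as follows. This is a minimal, self-contained illustration of the O(1) insert/lookup idea, not the demos' actual Redis schema; the key names and the in-memory `FakeRedis` stand-in are assumptions for the sketch (a real deployment would use `redis-py` with the same two set operations):

```python
# Sketch of the direct hash-lookup scheme: each fragment's perceptual hash
# maps to the set of images that produced it. Key names are illustrative.

class FakeRedis:
    """In-memory stand-in for Redis SADD/SMEMBERS so the sketch is runnable."""
    def __init__(self):
        self.store = {}

    def sadd(self, key, value):
        self.store.setdefault(key, set()).add(value)

    def smembers(self, key):
        return self.store.get(key, set())


def insert_fragment(db, phash, image_path):
    # One O(1) write per fragment hash, regardless of database size.
    db.sadd(f"hash:{phash}", image_path)


def lookup_fragment(db, phash):
    # One O(1) read per query hash -- independent of how many images are stored.
    return db.smembers(f"hash:{phash}")


db = FakeRedis()
insert_fragment(db, "a1b2c3", "inputImages/cat1.png")
matches = lookup_fragment(db, "a1b2c3")
```

Because every query image produces a bounded number of fragment hashes, the total query cost stays constant with respect to the number of images indexed.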
15 |
16 | Processing each fragment/triangle of the image requires only the 3 points of the triangle and a read-only copy of the image, so the preprocessing for an image is embarrassingly parallel. If implemented correctly, there should be a near-linear speedup with respect to the number of cores used.
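A minimal sketch of that parallel preprocessing: each triangle is hashed independently from its 3 points plus read-only image data. `hash_triangle` below is a stand-in for the real warp-and-PHash step, and the pool size is arbitrary:

```python
# Each fragment depends only on its own 3 points, so fragments can be
# hashed concurrently with no shared mutable state.
from concurrent.futures import ThreadPoolExecutor


def hash_triangle(triangle):
    # Placeholder for warping the fragment and computing its perceptual hash.
    p1, p2, p3 = triangle
    return hash((p1, p2, p3))


def hash_all_fragments(triangles, workers=4):
    # The real CPU-bound hashing would use a process pool for true parallelism;
    # a thread pool keeps this sketch simple and portable.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(hash_triangle, triangles))


triangles = [((0, 0), (10, 0), (5, 8)), ((1, 1), (9, 2), (4, 7))]
fragment_hashes = hash_all_fragments(triangles)
```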
17 |
18 | **However, these demos were created quickly as a proof of concept and, as a result, are very slow. They show that the algorithm works and that it can run in O(1) time.**
19 |
20 |
21 |
22 | # Setup
23 |
24 |
25 |
26 | This setup was tested on a newly deployed VM running Debian GNU/Linux 9 (stretch); YMMV on other setups.
27 |
28 | Instead of running the commands below manually, you can run the ./setup.sh script from the /fullEndToEndDemo directory.
29 |
30 | Or if you want to run the commands manually...
31 |
32 | ```
33 | # From the root of the repo go to ./fullEndToEndDemo
34 | cd ./fullEndToEndDemo
35 |
36 | # Grab all the dependencies, this install is pretty huge
37 | sudo apt-get update
38 | sudo apt-get install git cmake g++ redis-server libboost-all-dev libopencv-dev python-opencv python-numpy python-scipy -y
39 |
40 | # Build it
41 | cmake .
42 | make
43 |
44 | # This step is optional. It suppresses a harmless but annoying error that OpenCV prints
45 | # About: https://stackoverflow.com/questions/12689304/ctypes-error-libdc1394-error-failed-to-initialize-libdc1394
46 | sudo ln /dev/null /dev/raw1394
47 |
48 | # Then run either ./runDemo1.sh or ./runDemo2.sh to run the demo
49 |
50 |
51 | ```
52 |
53 | # Python setup
54 |
55 | All credit for the Python code goes to [rachmadaniHaryono](https://github.com/rachmadaniHaryono) and [meowcoder](https://github.com/meowcoder).
56 |
57 | This setup was tested on a newly deployed VM running Ubuntu 18.04 LTS; YMMV on other setups.
58 |
59 | To use the Python package, do the following:
60 |
61 | ```
62 | sudo apt-get update
63 | sudo apt-get install python3-pip python3-opencv redis-server -y
64 |
65 | # On some systems this path is missing
66 | # read more here: https://github.com/pypa/pip/issues/3813
67 | PATH="$PATH:$HOME/.local/bin"
68 |
69 | # cd to the project directory
70 | pip3 install .
71 | ```
72 |
73 | You also need Redis installed and running.
74 |
75 | # Demo 1
76 |
77 |
78 | To run this demo, go to the /fullEndToEndDemo directory and run ./runDemo1.sh
79 |
80 | This demo shows the original image matching the 8 transformed images below. Each image has some combination of 2D affine transformations applied to it. The demo inserts each of the 8 images individually into the database and then queries the database with the original image.
81 |
82 |
83 | 
84 |
85 | 
86 |
87 | ## Output
88 |
89 | Here the 8 cat images are inserted first and then the database is queried with the original cat image. The original image matches all 8 images despite the transformations.
90 |
91 | The low number of partial image matches is because we are doing direct hash lookups, so even a small change (for example, from antialiasing) can make the perceptual hash ever so slightly off. Finding the closest hash using nearest neighbor would solve this issue.
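The brittleness of exact lookup is easy to see with two hashes that differ by a single bit. The hash values below are made up for illustration; the point is that an exact match misses while a Hamming-distance comparison (as a nearest-neighbor search would use) still hits:

```python
# Two perceptually-identical fragments can hash one bit apart
# (e.g. due to antialiasing), so exact-match lookup misses them.

def hamming(h1, h2):
    # Count differing bits between two 64-bit perceptual hashes.
    return bin(h1 ^ h2).count("1")


stored = 0xF0F0F0F0F0F0F0F0  # hash stored in the database (illustrative)
query = 0xF0F0F0F0F0F0F0F1   # query hash, one bit off

exact_match = (stored == query)           # direct lookup: miss
near_match = hamming(stored, query) <= 4  # NN within a small threshold: hit
```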
92 |
93 | The demo takes 2 minutes (1 minute 38 seconds*) to run on a quad core VM but could run orders of magnitude faster with a better implementation.
94 |
95 | *Thanks to [meowcoder](https://github.com/meowcoder) for the speed up!
96 |
97 | ```
98 | user@instance-1:~/transformationInvariantImageSearch/fullEndToEndDemo$ time ./runDemo1.sh
99 | Loading image: inputImages/cat1.png ... done
100 | Added 46725 image fragments to DB
101 | Loading image: inputImages/cat2.png ... done
102 | Added 65769 image fragments to DB
103 | Loading image: inputImages/cat3.png ... done
104 | Added 34179 image fragments to DB
105 | Loading image: inputImages/cat4.png ... done
106 | Added 44388 image fragments to DB
107 | Loading image: inputImages/cat5.png ... done
108 | Added 47799 image fragments to DB
109 | Loading image: inputImages/cat6.png ... done
110 | Added 44172 image fragments to DB
111 | Loading image: inputImages/cat7.png ... done
112 | Added 67131 image fragments to DB
113 | Loading image: inputImages/cat8.png ... done
114 | Added 18078 image fragments to DB
115 | Loading image: inputImages/cat_original.png ... done
116 | Added 30372 image fragments to DB
117 | Loading image: inputImages/cat_original.png ... done
118 | Matches:
119 | inputImages/cat1.png: 12
120 | inputImages/cat2.png: 16
121 | inputImages/cat3.png: 15
122 | inputImages/cat4.png: 1
123 | inputImages/cat5.png: 2
124 | inputImages/cat6.png: 4
125 | inputImages/cat7.png: 43
126 | inputImages/cat8.png: 18
127 | inputImages/cat_original.png: 30352
128 | Number of matches: 30463
129 |
130 | real 1m38.352s
131 | user 2m6.140s
132 | sys 0m6.592s
133 | ```
134 |
135 | Python example:
136 |
137 | ```console
138 | $ time transformation-invariant-image-search insert fullEndToEndDemo/inputImages/cat* && \
139 | time transformation-invariant-image-search lookup fullEndToEndDemo/inputImages/cat_original.png
140 |
141 | loading fullEndToEndDemo/inputImages/cat1.png
142 | 100%|██| 3/3 [00:07<00:00, 2.66s/it]
143 | 100%|██| 3/3 [00:08<00:00, 2.70s/it]
144 | 100%|█| 3/3 [00:00<00:00, 270.58it/s]
145 | 100%|| 1/1 [00:00<00:00, 2457.12it/s]
146 | added 58956 fragments for fullEndToEndDemo/inputImages/cat1.png
147 | loading fullEndToEndDemo/inputImages/cat2.png
148 | 100%|██| 3/3 [00:07<00:00, 2.64s/it]
149 | 100%|██| 3/3 [00:08<00:00, 2.76s/it]
150 | 100%|█| 3/3 [00:00<00:00, 149.91it/s]
151 | 100%|█| 1/1 [00:00<00:00, 902.00it/s]
152 | added 58486 fragments for fullEndToEndDemo/inputImages/cat2.png
153 | loading fullEndToEndDemo/inputImages/cat3.png
154 | 100%|█████████| 3/3 [00:04<00:00, 1.51s/it]
155 | 100%|█████████| 3/3 [00:04<00:00, 1.56s/it]
156 | 100%|█| 5025/5025 [00:01<00:00, 3570.22it/s]
157 | added 30141 fragments for fullEndToEndDemo/inputImages/cat3.png
158 | loading fullEndToEndDemo/inputImages/cat4.png
159 | 100%|███| 3/3 [00:07<00:00, 2.58s/it]
160 | 100%|███| 3/3 [00:07<00:00, 2.62s/it]
161 | 100%|██| 3/3 [00:00<00:00, 434.36it/s]
162 | 100%|█| 1/1 [00:00<00:00, 1709.87it/s]
163 | added 53013 fragments for fullEndToEndDemo/inputImages/cat4.png
164 | loading fullEndToEndDemo/inputImages/cat5.png
165 | 100%|█████████| 3/3 [00:08<00:00, 2.90s/it]
166 | 100%|█████████| 3/3 [00:09<00:00, 3.07s/it]
167 | 100%|█| 9420/9420 [00:02<00:00, 3238.60it/s]
168 | added 56493 fragments for fullEndToEndDemo/inputImages/cat5.png
169 | loading fullEndToEndDemo/inputImages/cat6.png
170 | 100%|█████████| 3/3 [00:07<00:00, 2.41s/it]
171 | 100%|█████████| 3/3 [00:07<00:00, 2.50s/it]
172 | 100%|█| 7347/7347 [00:02<00:00, 2953.52it/s]
173 | added 44030 fragments for fullEndToEndDemo/inputImages/cat6.png
174 | loading fullEndToEndDemo/inputImages/cat7.png
175 | 100%|███████████| 3/3 [00:11<00:00, 3.82s/it]
176 | 100%|███████████| 3/3 [00:11<00:00, 3.94s/it]
177 | 100%|█| 10544/10544 [00:04<00:00, 2393.00it/s]
178 | added 63089 fragments for fullEndToEndDemo/inputImages/cat7.png
179 | loading fullEndToEndDemo/inputImages/cat8.png
180 | 100%|█████████| 3/3 [00:03<00:00, 1.06s/it]
181 | 100%|█████████| 3/3 [00:03<00:00, 1.07s/it]
182 | 100%|█| 3160/3160 [00:01<00:00, 3138.56it/s]
183 | added 18899 fragments for fullEndToEndDemo/inputImages/cat8.png
184 | loading fullEndToEndDemo/inputImages/cat_original.png
185 | 100%|█████████| 3/3 [00:05<00:00, 1.93s/it]
186 | 100%|█████████| 3/3 [00:05<00:00, 1.94s/it]
187 | 100%|█| 5795/5795 [00:01<00:00, 3211.96it/s]
188 | added 34764 fragments for fullEndToEndDemo/inputImages/cat_original.png
189 | transformation-invariant-image-search insert fullEndToEndDemo/inputImages/cat 141,98s user 10,14s system 159% cpu 1:35,54 total
190 | loading fullEndToEndDemo/inputImages/cat_original.png
191 | 100%|█████████| 3/3 [00:05<00:00, 1.83s/it]
192 | 100%|█████████| 3/3 [00:05<00:00, 1.94s/it]
193 | 100%|█| 5795/5795 [00:01<00:00, 3221.91it/s]
194 | matches for fullEndToEndDemo/inputImages/cat_original.png:
195 | 34770 fullEndToEndDemo/inputImages/cat_original.png
196 | 237 fullEndToEndDemo/inputImages/cat7.png
197 | 36 fullEndToEndDemo/inputImages/cat2.png
198 | 19 fullEndToEndDemo/inputImages/cat4.png
199 | 14 fullEndToEndDemo/inputImages/cat8.png
200 | 7 fullEndToEndDemo/inputImages/cat1.png
201 | 4 fullEndToEndDemo/inputImages/cat3.png
202 | 2 fullEndToEndDemo/inputImages/cat5.png
203 | 1 fullEndToEndDemo/inputImages/cat6.png
204 | transformation-invariant-image-search lookup 12,71s user 1,62s system 151% cpu 9,472 total
205 | ```
206 |
207 | # Demo 2
208 |
209 |
210 | To run this demo, go to the /fullEndToEndDemo directory and run ./runDemo2.sh
211 |
212 | This demo shows partial image matching. The query image below (c) is a composite of images (a) and (b). The demo inserts images (a) and (b) into the database and then queries with image (c). Images (d) and (e) show the matching fragments; each coloured triangle is a fragment that matched the composite image (c).
213 |
214 | 
215 |
216 | ## Output
217 |
218 | Here the two images mona.jpg and van_gogh.jpg are inserted into the database and then the database is queried with monaComposite.jpg. The demo takes 5 minutes 17 seconds (4 minutes 36 seconds*) to run on a quad core VM but could run orders of magnitude faster with a better implementation.
219 |
220 | *Thanks to [meowcoder](https://github.com/meowcoder) for the speed up!
221 |
222 | ```
223 | user@instance-1:~/transformationInvariantImageSearch/fullEndToEndDemo$ time ./runDemo2.sh
224 | Loading image: ./inputImages/mona.jpg ... done
225 | Added 26991 image fragments to DB
226 | Loading image: ./inputImages/van_gogh.jpg ... done
227 | Added 1129896 image fragments to DB
228 | Loading image: ./inputImages/monaComposite.jpg ... done
229 | Matches:
230 | ./inputImages/mona.jpg: 5
231 | ./inputImages/van_gogh.jpg: 1478
232 | Number of matches: 1483
233 |
234 | real 4m36.635s
235 | user 6m50.988s
236 | sys 0m18.224s
237 | ```
238 |
239 | Python example:
240 |
241 | ```console
242 | $ time transformation-invariant-image-search insert ./fullEndToEndDemo/inputImages/mona.jpg ./fullEndToEndDemo/inputImages/van_gogh.jpg && \
243 | time transformation-invariant-image-search lookup ./fullEndToEndDemo/inputImages/monaComposite.jpg
244 |
245 | loading ./fullEndToEndDemo/inputImages/mona.jpg
246 | 100%|███| 3/3 [00:03<00:00, 1.24s/it]
247 | 100%|███| 3/3 [00:03<00:00, 1.20s/it]
248 | 100%|██| 3/3 [00:00<00:00, 302.48it/s]
249 | 100%|█| 1/1 [00:00<00:00, 2471.60it/s]
250 | added 24145 fragments for ./fullEndToEndDemo/inputImages/mona.jpg
251 | loading ./fullEndToEndDemo/inputImages/van_gogh.jpg
252 | 100%|█████████████| 3/3 [02:50<00:00, 56.01s/it]
253 | 100%|█████████████| 3/3 [02:50<00:00, 56.14s/it]
254 | 100%|█| 178267/178267 [00:56<00:00, 3170.20it/s]
255 | added 1058329 fragments for ./fullEndToEndDemo/inputImages/van_gogh.jpg
256 | transformation-invariant-image-search insert 384,51s user 12,84s system 168% cpu 3:56,42 total
257 | loading ./fullEndToEndDemo/inputImages/monaComposite.jpg
258 | 100%|███████████| 3/3 [01:01<00:00, 20.88s/it]
259 | 100%|███████████| 3/3 [01:01<00:00, 20.77s/it]
260 | 100%|█| 61563/61563 [00:19<00:00, 3129.92it/s]
261 | matches for ./fullEndToEndDemo/inputImages/monaComposite.jpg:
262 | 1332 ./fullEndToEndDemo/inputImages/van_gogh.jpg
263 | 11 ./fullEndToEndDemo/inputImages/mona.jpg
264 | transformation-invariant-image-search lookup 133,29s user 5,07s system 164% cpu 1:24,30 total
265 | ```
266 |
--------------------------------------------------------------------------------
/docs/images/8cats.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/pippy360/transformationInvariantImageSearch/10800ace74441382a41be1a48fe2e01cd8e89a9f/docs/images/8cats.png
--------------------------------------------------------------------------------
/docs/images/affineDemo.mp4:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/pippy360/transformationInvariantImageSearch/10800ace74441382a41be1a48fe2e01cd8e89a9f/docs/images/affineDemo.mp4
--------------------------------------------------------------------------------
/docs/images/affineDemo.webm:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/pippy360/transformationInvariantImageSearch/10800ace74441382a41be1a48fe2e01cd8e89a9f/docs/images/affineDemo.webm
--------------------------------------------------------------------------------
/docs/images/background_resize.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/pippy360/transformationInvariantImageSearch/10800ace74441382a41be1a48fe2e01cd8e89a9f/docs/images/background_resize.jpg
--------------------------------------------------------------------------------
/docs/images/cat_original.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/pippy360/transformationInvariantImageSearch/10800ace74441382a41be1a48fe2e01cd8e89a9f/docs/images/cat_original.png
--------------------------------------------------------------------------------
/docs/images/cats image.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/pippy360/transformationInvariantImageSearch/10800ace74441382a41be1a48fe2e01cd8e89a9f/docs/images/cats image.jpg
--------------------------------------------------------------------------------
/docs/images/compositeMatching.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/pippy360/transformationInvariantImageSearch/10800ace74441382a41be1a48fe2e01cd8e89a9f/docs/images/compositeMatching.png
--------------------------------------------------------------------------------
/docs/images/dog1_resize3.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/pippy360/transformationInvariantImageSearch/10800ace74441382a41be1a48fe2e01cd8e89a9f/docs/images/dog1_resize3.jpg
--------------------------------------------------------------------------------
/docs/images/morty.mp4:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/pippy360/transformationInvariantImageSearch/10800ace74441382a41be1a48fe2e01cd8e89a9f/docs/images/morty.mp4
--------------------------------------------------------------------------------
/docs/images/morty.webm:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/pippy360/transformationInvariantImageSearch/10800ace74441382a41be1a48fe2e01cd8e89a9f/docs/images/morty.webm
--------------------------------------------------------------------------------
/docs/index.html:
--------------------------------------------------------------------------------
1 |
157 | This demo showcases a reverse image search algorithm which performs 2D affine transformation-invariant partial image-matching
158 | in sublinear time. The algorithm compares an input image to its database of preprocessed images and determines
159 | if the input matches any image in the database. The database need not contain the original image as inputs
160 | can be matched to any 2D affine transformation of the original. This means that images which have been
161 | scaled (uniformly or non-uniformly), skewed, translated, cropped or rotated (or have undergone any combination
162 | of these transformations) can be identified as coming from the same source image (Figure 1).
163 |
164 |
165 | The algorithm runs in sublinear time with respect to the number of images in the database regardless of the number of transformations applied.
166 | Note that if image-matching could not be done in sublinear time, it would not
167 | function at the scale that the likes of Google or Microsoft require.
168 |
177 | If the input is a composite of images or image fragments, the algorithm will return matches for each
178 | image/image fragment (Figure 2).
179 |
180 |
181 |
182 |
Figure 2. The query image (c), which is a composite of (a) and (b), matches the
183 | two images (d) and (e) stored in the database. The code to reproduce this result can be found here.
184 |
185 |
186 |
187 |
188 |
How it Works
189 |
190 |
191 |
1.
192 |
193 | The algorithm finds keypoints in the input using edge detection1.
194 |
195 |
196 |
197 |
2.
198 |
199 | Each set of three keypoints is converted into a triangle2.
200 |
201 |
202 |
203 |
3.
204 |
205 | These triangles are transformed into equilateral triangles.
206 |
207 |
208 |
209 |
4.
210 |
211 | Each equilateral triangle is rotated to each of its 3 edges and a perceptual hashing algorithm (in this case PHash) is used to produce a hash for each side3.
212 |
213 |
214 |
215 |
5.
216 |
217 | The algorithm compares the hash to those stored in the database and returns all matching
218 | images4.
219 |
220 |
221 |
222 |
223 |
224 | All images in the database have been preprocessed in this manner to produce hashes for comparison.
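The triangle normalization in step 3 can be sketched as solving for the affine map that sends a triangle's vertices onto a fixed equilateral triangle. This is pure NumPy for illustration; the real demos warp actual pixels (e.g. with OpenCV), and the unit-edge target vertices are an arbitrary choice of canonical triangle:

```python
# Solve for the 2D affine transform mapping an arbitrary triangle's
# vertices onto a fixed equilateral triangle.
import numpy as np


def affine_to_equilateral(tri):
    # Canonical target: unit-edge equilateral triangle.
    dst = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
    # Homogeneous source coordinates: each row is (x, y, 1).
    src = np.hstack([np.asarray(tri, dtype=float), np.ones((3, 1))])
    # Solve src @ M = dst for the 3x2 affine matrix M (exact for a
    # non-degenerate triangle).
    M, *_ = np.linalg.lstsq(src, dst, rcond=None)
    return M


tri = [(2.0, 1.0), (8.0, 3.0), (4.0, 9.0)]
M = affine_to_equilateral(tri)
mapped = np.hstack([np.array(tri), np.ones((3, 1))]) @ M
# `mapped` rows now sit on the equilateral vertices (up to float error).
```

Applying the same map to every pixel of the fragment yields the normalized patch that gets hashed in step 4.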
225 |
226 |
227 |
228 |
232 |
Figure 3. Step-by-step video guide showing how the algorithm operates
233 |
234 |
235 |
236 |
237 | 1
238 |
239 |
240 | Any keypoint-finding algorithm can be used so long as it is 2D
241 | affine transformation-invariant.
242 |
243 |
244 |
245 |
246 | 2
247 |
248 |
249 | The comparison can be done through a hash lookup (which can be done in constant time with respect to the number of images in the database)
250 | or by finding a ‘nearest-neighbour’ in the database (which can be done
251 | in amortized O(log₂ n) time).
252 |
253 |
254 |
255 |
256 | 3
257 |
258 |
259 | Rotating the triangle to each of its 3 sides and hashing each rotation keeps the algorithm rotation-invariant.
260 |
261 |
262 |
263 |
264 | 4
265 |
266 |
267 | The algorithm will return multiple matches if the input is a
268 | composite of images.
269 |
270 |
271 |
272 |
273 |
274 |
How it Compares to the Competition
275 |
276 | As you can see in Figure 4 below, the algorithm performs better than industry leaders in matching
277 | images which have undergone 2D affine transformations. Even the best-performing service, Google Image
278 | Search, fails to handle a simple 45-degree rotation.
279 |
280 |
281 |
282 |
283 | Figure 4. Comparison of the image-matching capabilities of our
284 | algorithm versus market leaders. The code to reproduce this result can be found here.
285 |
286 |
287 |
288 | Market leaders show limited ability to find matches of images which have undergone certain transformations.
289 | Our algorithm solves this problem for 2D affine transformations and, if used in
290 | conjunction with other modern techniques, offers a significant improvement in reverse-image searching.
291 |