5 | This is a complete Structure from Motion pipeline. Structure from Motion is a technique to construct a 3-D point cloud from a set of images (or a video) of an object. The software in this repository relies heavily on a number of third-party libraries, notably Bundler, CMVS, PMVS, and SIFT.
6 |
"
10 | authors:
11 | -
12 | affiliation: "Netherlands eScience Center"
13 | family-names: Spaaks
14 | given-names: "Jurriaan H."
15 | orcid: "https://orcid.org/0000-0002-7064-4069"
16 | -
17 | affiliation: "Netherlands eScience Center"
18 | family-names: Drost
19 | given-names: Niels
20 | orcid: "https://orcid.org/0000-0001-9795-7981"
21 | -
22 | affiliation: "Netherlands eScience Center"
23 | family-names: Maassen
24 | given-names: Jason
25 | orcid: "https://orcid.org/0000-0002-8172-4865"
26 | -
27 | affiliation: "Netherlands eScience Center"
28 | family-names: Oord
29 | given-names: Gijs
30 | name-particle: "van den"
31 | -
32 | affiliation: "Netherlands eScience Center"
33 | family-names: Georgievska
34 | given-names: Sonja
35 | -
36 | affiliation: "BIMData.io"
37 | family-names: Mor
38 | given-names: "Stéphane"
39 | -
40 | affiliation: "Netherlands eScience Center"
41 | family-names: Meijer
42 | given-names: Christiaan
43 | -
44 | affiliation: "Netherlands eScience Center"
45 | family-names: Verhoeven
46 | given-names: Stefan
47 | orcid: "https://orcid.org/0000-0002-5821-2060"
48 | cff-version: "1.0.3"
49 | date-released: 2018-09-26
50 | doi: "10.5281/zenodo.594751"
51 | keywords:
52 | - "structure from motion"
53 | - sfm
54 | - sift
55 | - bundler
56 | - CMVS
57 | - PMVS
58 | license: "GPL-2.0-only"
59 | message: "If you use this software, please cite it using these metadata."
60 | repository-code: "https://github.com/NLeSC/structure-from-motion/"
61 | title: "Structure from Motion"
62 | version: "1.0.5"
63 | ...
64 |
--------------------------------------------------------------------------------
/CONTRIBUTING.md:
--------------------------------------------------------------------------------
1 | Contributions are welcome! Please create a pull request and adhere to the following list:
2 |
3 | 1. The code follows the standard style; check this by fixing all style errors before submitting
4 | 2. Use the GitHub Flow branching model
5 | 3. For other development and coding style conventions, see the [NLeSC Style Guide](https://guide.esciencecenter.nl/index.html)
6 | 4. Don't include extra dependencies without a good reason. Only use licenses compatible with the license of this project
7 | 5. Please document your code, and provide unit tests
8 |
--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------
1 | # Docker File for structure-from-motion pipeline
2 | # Copyright 2015 Netherlands eScience Center
3 | #
4 | #
5 | # Build this image:
6 | #
7 | # sudo docker build -t sfm_image .
8 |
9 | # Run pipeline on a collection of images in the current working directory:
10 | #
11 | # sudo docker run -u $UID -v $PWD:/data sfm_image
12 | #
13 | # Alternatively, the image is also available ready-made on DockerHub:
14 | #
15 | # sudo docker run -u $UID -v $PWD:/data nlesc/structure-from-motion
16 |
17 | FROM ubuntu:16.04
18 | MAINTAINER Niels Drost
19 |
20 | # Create sfm source dir
21 | RUN mkdir /sfm
22 |
23 | # Copy sources
24 | ADD bundler_sfm /sfm/bundler_sfm
25 | ADD cmvs-pmvs /sfm/cmvs-pmvs
26 | ADD run-sfm.py /sfm/run-sfm.py
27 |
28 | # Install required packages
29 | RUN apt-get update && apt-get install -y --no-install-recommends cmake gfortran libgoogle-glog-dev libatlas-base-dev libeigen3-dev \
30 | libsuitesparse-dev zlib1g-dev libjpeg-dev libboost-dev python-pil git build-essential wget libcholmod3.0.6 && rm -rf /var/lib/apt/lists/* \
31 | # Download ceres
32 | && cd /opt && wget http://ceres-solver.org/ceres-solver-1.10.0.tar.gz && tar -zxf ceres-solver-1.10.0.tar.gz && rm -rf ceres-solver-1.10.0.tar.gz \
33 | # Build ceres
34 | && cd /opt/ceres-solver-1.10.0 && mkdir build && cd build && cmake .. && make -j3 && make test && make install \
35 | # Build bundler
36 | && cd /sfm/bundler_sfm && make \
37 | # Build cmvs
38 | && cd /sfm/cmvs-pmvs/program && mkdir build && cd build && cmake .. && make \
39 | # Clean up redundant packages
40 | && apt-get purge -y cmake gfortran libeigen3-dev wget build-essential && apt-get -y autoremove
41 |
42 | # Mount data volume
43 | VOLUME /data
44 |
45 | # Go to working dir
46 | WORKDIR /data
47 |
48 | # Run main script
49 | CMD /sfm/run-sfm.py
50 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | GNU GENERAL PUBLIC LICENSE
2 | Version 2, June 1991
3 |
4 | Copyright (C) 1989, 1991 Free Software Foundation, Inc.,
5 | 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
6 | Everyone is permitted to copy and distribute verbatim copies
7 | of this license document, but changing it is not allowed.
8 |
9 | Preamble
10 |
11 | The licenses for most software are designed to take away your
12 | freedom to share and change it. By contrast, the GNU General Public
13 | License is intended to guarantee your freedom to share and change free
14 | software--to make sure the software is free for all its users. This
15 | General Public License applies to most of the Free Software
16 | Foundation's software and to any other program whose authors commit to
17 | using it. (Some other Free Software Foundation software is covered by
18 | the GNU Lesser General Public License instead.) You can apply it to
19 | your programs, too.
20 |
21 | When we speak of free software, we are referring to freedom, not
22 | price. Our General Public Licenses are designed to make sure that you
23 | have the freedom to distribute copies of free software (and charge for
24 | this service if you wish), that you receive source code or can get it
25 | if you want it, that you can change the software or use pieces of it
26 | in new free programs; and that you know you can do these things.
27 |
28 | To protect your rights, we need to make restrictions that forbid
29 | anyone to deny you these rights or to ask you to surrender the rights.
30 | These restrictions translate to certain responsibilities for you if you
31 | distribute copies of the software, or if you modify it.
32 |
33 | For example, if you distribute copies of such a program, whether
34 | gratis or for a fee, you must give the recipients all the rights that
35 | you have. You must make sure that they, too, receive or can get the
36 | source code. And you must show them these terms so they know their
37 | rights.
38 |
39 | We protect your rights with two steps: (1) copyright the software, and
40 | (2) offer you this license which gives you legal permission to copy,
41 | distribute and/or modify the software.
42 |
43 | Also, for each author's protection and ours, we want to make certain
44 | that everyone understands that there is no warranty for this free
45 | software. If the software is modified by someone else and passed on, we
46 | want its recipients to know that what they have is not the original, so
47 | that any problems introduced by others will not reflect on the original
48 | authors' reputations.
49 |
50 | Finally, any free program is threatened constantly by software
51 | patents. We wish to avoid the danger that redistributors of a free
52 | program will individually obtain patent licenses, in effect making the
53 | program proprietary. To prevent this, we have made it clear that any
54 | patent must be licensed for everyone's free use or not licensed at all.
55 |
56 | The precise terms and conditions for copying, distribution and
57 | modification follow.
58 |
59 | GNU GENERAL PUBLIC LICENSE
60 | TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
61 |
62 | 0. This License applies to any program or other work which contains
63 | a notice placed by the copyright holder saying it may be distributed
64 | under the terms of this General Public License. The "Program", below,
65 | refers to any such program or work, and a "work based on the Program"
66 | means either the Program or any derivative work under copyright law:
67 | that is to say, a work containing the Program or a portion of it,
68 | either verbatim or with modifications and/or translated into another
69 | language. (Hereinafter, translation is included without limitation in
70 | the term "modification".) Each licensee is addressed as "you".
71 |
72 | Activities other than copying, distribution and modification are not
73 | covered by this License; they are outside its scope. The act of
74 | running the Program is not restricted, and the output from the Program
75 | is covered only if its contents constitute a work based on the
76 | Program (independent of having been made by running the Program).
77 | Whether that is true depends on what the Program does.
78 |
79 | 1. You may copy and distribute verbatim copies of the Program's
80 | source code as you receive it, in any medium, provided that you
81 | conspicuously and appropriately publish on each copy an appropriate
82 | copyright notice and disclaimer of warranty; keep intact all the
83 | notices that refer to this License and to the absence of any warranty;
84 | and give any other recipients of the Program a copy of this License
85 | along with the Program.
86 |
87 | You may charge a fee for the physical act of transferring a copy, and
88 | you may at your option offer warranty protection in exchange for a fee.
89 |
90 | 2. You may modify your copy or copies of the Program or any portion
91 | of it, thus forming a work based on the Program, and copy and
92 | distribute such modifications or work under the terms of Section 1
93 | above, provided that you also meet all of these conditions:
94 |
95 | a) You must cause the modified files to carry prominent notices
96 | stating that you changed the files and the date of any change.
97 |
98 | b) You must cause any work that you distribute or publish, that in
99 | whole or in part contains or is derived from the Program or any
100 | part thereof, to be licensed as a whole at no charge to all third
101 | parties under the terms of this License.
102 |
103 | c) If the modified program normally reads commands interactively
104 | when run, you must cause it, when started running for such
105 | interactive use in the most ordinary way, to print or display an
106 | announcement including an appropriate copyright notice and a
107 | notice that there is no warranty (or else, saying that you provide
108 | a warranty) and that users may redistribute the program under
109 | these conditions, and telling the user how to view a copy of this
110 | License. (Exception: if the Program itself is interactive but
111 | does not normally print such an announcement, your work based on
112 | the Program is not required to print an announcement.)
113 |
114 | These requirements apply to the modified work as a whole. If
115 | identifiable sections of that work are not derived from the Program,
116 | and can be reasonably considered independent and separate works in
117 | themselves, then this License, and its terms, do not apply to those
118 | sections when you distribute them as separate works. But when you
119 | distribute the same sections as part of a whole which is a work based
120 | on the Program, the distribution of the whole must be on the terms of
121 | this License, whose permissions for other licensees extend to the
122 | entire whole, and thus to each and every part regardless of who wrote it.
123 |
124 | Thus, it is not the intent of this section to claim rights or contest
125 | your rights to work written entirely by you; rather, the intent is to
126 | exercise the right to control the distribution of derivative or
127 | collective works based on the Program.
128 |
129 | In addition, mere aggregation of another work not based on the Program
130 | with the Program (or with a work based on the Program) on a volume of
131 | a storage or distribution medium does not bring the other work under
132 | the scope of this License.
133 |
134 | 3. You may copy and distribute the Program (or a work based on it,
135 | under Section 2) in object code or executable form under the terms of
136 | Sections 1 and 2 above provided that you also do one of the following:
137 |
138 | a) Accompany it with the complete corresponding machine-readable
139 | source code, which must be distributed under the terms of Sections
140 | 1 and 2 above on a medium customarily used for software interchange; or,
141 |
142 | b) Accompany it with a written offer, valid for at least three
143 | years, to give any third party, for a charge no more than your
144 | cost of physically performing source distribution, a complete
145 | machine-readable copy of the corresponding source code, to be
146 | distributed under the terms of Sections 1 and 2 above on a medium
147 | customarily used for software interchange; or,
148 |
149 | c) Accompany it with the information you received as to the offer
150 | to distribute corresponding source code. (This alternative is
151 | allowed only for noncommercial distribution and only if you
152 | received the program in object code or executable form with such
153 | an offer, in accord with Subsection b above.)
154 |
155 | The source code for a work means the preferred form of the work for
156 | making modifications to it. For an executable work, complete source
157 | code means all the source code for all modules it contains, plus any
158 | associated interface definition files, plus the scripts used to
159 | control compilation and installation of the executable. However, as a
160 | special exception, the source code distributed need not include
161 | anything that is normally distributed (in either source or binary
162 | form) with the major components (compiler, kernel, and so on) of the
163 | operating system on which the executable runs, unless that component
164 | itself accompanies the executable.
165 |
166 | If distribution of executable or object code is made by offering
167 | access to copy from a designated place, then offering equivalent
168 | access to copy the source code from the same place counts as
169 | distribution of the source code, even though third parties are not
170 | compelled to copy the source along with the object code.
171 |
172 | 4. You may not copy, modify, sublicense, or distribute the Program
173 | except as expressly provided under this License. Any attempt
174 | otherwise to copy, modify, sublicense or distribute the Program is
175 | void, and will automatically terminate your rights under this License.
176 | However, parties who have received copies, or rights, from you under
177 | this License will not have their licenses terminated so long as such
178 | parties remain in full compliance.
179 |
180 | 5. You are not required to accept this License, since you have not
181 | signed it. However, nothing else grants you permission to modify or
182 | distribute the Program or its derivative works. These actions are
183 | prohibited by law if you do not accept this License. Therefore, by
184 | modifying or distributing the Program (or any work based on the
185 | Program), you indicate your acceptance of this License to do so, and
186 | all its terms and conditions for copying, distributing or modifying
187 | the Program or works based on it.
188 |
189 | 6. Each time you redistribute the Program (or any work based on the
190 | Program), the recipient automatically receives a license from the
191 | original licensor to copy, distribute or modify the Program subject to
192 | these terms and conditions. You may not impose any further
193 | restrictions on the recipients' exercise of the rights granted herein.
194 | You are not responsible for enforcing compliance by third parties to
195 | this License.
196 |
197 | 7. If, as a consequence of a court judgment or allegation of patent
198 | infringement or for any other reason (not limited to patent issues),
199 | conditions are imposed on you (whether by court order, agreement or
200 | otherwise) that contradict the conditions of this License, they do not
201 | excuse you from the conditions of this License. If you cannot
202 | distribute so as to satisfy simultaneously your obligations under this
203 | License and any other pertinent obligations, then as a consequence you
204 | may not distribute the Program at all. For example, if a patent
205 | license would not permit royalty-free redistribution of the Program by
206 | all those who receive copies directly or indirectly through you, then
207 | the only way you could satisfy both it and this License would be to
208 | refrain entirely from distribution of the Program.
209 |
210 | If any portion of this section is held invalid or unenforceable under
211 | any particular circumstance, the balance of the section is intended to
212 | apply and the section as a whole is intended to apply in other
213 | circumstances.
214 |
215 | It is not the purpose of this section to induce you to infringe any
216 | patents or other property right claims or to contest validity of any
217 | such claims; this section has the sole purpose of protecting the
218 | integrity of the free software distribution system, which is
219 | implemented by public license practices. Many people have made
220 | generous contributions to the wide range of software distributed
221 | through that system in reliance on consistent application of that
222 | system; it is up to the author/donor to decide if he or she is willing
223 | to distribute software through any other system and a licensee cannot
224 | impose that choice.
225 |
226 | This section is intended to make thoroughly clear what is believed to
227 | be a consequence of the rest of this License.
228 |
229 | 8. If the distribution and/or use of the Program is restricted in
230 | certain countries either by patents or by copyrighted interfaces, the
231 | original copyright holder who places the Program under this License
232 | may add an explicit geographical distribution limitation excluding
233 | those countries, so that distribution is permitted only in or among
234 | countries not thus excluded. In such case, this License incorporates
235 | the limitation as if written in the body of this License.
236 |
237 | 9. The Free Software Foundation may publish revised and/or new versions
238 | of the General Public License from time to time. Such new versions will
239 | be similar in spirit to the present version, but may differ in detail to
240 | address new problems or concerns.
241 |
242 | Each version is given a distinguishing version number. If the Program
243 | specifies a version number of this License which applies to it and "any
244 | later version", you have the option of following the terms and conditions
245 | either of that version or of any later version published by the Free
246 | Software Foundation. If the Program does not specify a version number of
247 | this License, you may choose any version ever published by the Free Software
248 | Foundation.
249 |
250 | 10. If you wish to incorporate parts of the Program into other free
251 | programs whose distribution conditions are different, write to the author
252 | to ask for permission. For software which is copyrighted by the Free
253 | Software Foundation, write to the Free Software Foundation; we sometimes
254 | make exceptions for this. Our decision will be guided by the two goals
255 | of preserving the free status of all derivatives of our free software and
256 | of promoting the sharing and reuse of software generally.
257 |
258 | NO WARRANTY
259 |
260 | 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
261 | FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
262 | OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
263 | PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
264 | OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
265 | MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS
266 | TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE
267 | PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
268 | REPAIR OR CORRECTION.
269 |
270 | 12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
271 | WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
272 | REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
273 | INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
274 | OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
275 | TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
276 | YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
277 | PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
278 | POSSIBILITY OF SUCH DAMAGES.
279 |
280 | END OF TERMS AND CONDITIONS
281 |
282 | How to Apply These Terms to Your New Programs
283 |
284 | If you develop a new program, and you want it to be of the greatest
285 | possible use to the public, the best way to achieve this is to make it
286 | free software which everyone can redistribute and change under these terms.
287 |
288 | To do so, attach the following notices to the program. It is safest
289 | to attach them to the start of each source file to most effectively
290 | convey the exclusion of warranty; and each file should have at least
291 | the "copyright" line and a pointer to where the full notice is found.
292 |
293 | <one line to give the program's name and a brief idea of what it does.>
294 | Copyright (C) <year>  <name of author>
295 |
296 | This program is free software; you can redistribute it and/or modify
297 | it under the terms of the GNU General Public License as published by
298 | the Free Software Foundation; either version 2 of the License, or
299 | (at your option) any later version.
300 |
301 | This program is distributed in the hope that it will be useful,
302 | but WITHOUT ANY WARRANTY; without even the implied warranty of
303 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
304 | GNU General Public License for more details.
305 |
306 | You should have received a copy of the GNU General Public License along
307 | with this program; if not, write to the Free Software Foundation, Inc.,
308 | 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
309 |
310 | Also add information on how to contact you by electronic and paper mail.
311 |
312 | If the program is interactive, make it output a short notice like this
313 | when it starts in an interactive mode:
314 |
315 | Gnomovision version 69, Copyright (C) year name of author
316 | Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
317 | This is free software, and you are welcome to redistribute it
318 | under certain conditions; type `show c' for details.
319 |
320 | The hypothetical commands `show w' and `show c' should show the appropriate
321 | parts of the General Public License. Of course, the commands you use may
322 | be called something other than `show w' and `show c'; they could even be
323 | mouse-clicks or menu items--whatever suits your program.
324 |
325 | You should also get your employer (if you work as a programmer) or your
326 | school, if any, to sign a "copyright disclaimer" for the program, if
327 | necessary. Here is a sample; alter the names:
328 |
329 | Yoyodyne, Inc., hereby disclaims all copyright interest in the program
330 | `Gnomovision' (which makes passes at compilers) written by James Hacker.
331 |
332 | <signature of Ty Coon>, 1 April 1989
333 | Ty Coon, President of Vice
334 |
335 | This General Public License does not permit incorporating your program into
336 | proprietary programs. If your program is a subroutine library, you may
337 | consider it more useful to permit linking proprietary applications with the
338 | library. If this is what you want to do, use the GNU Lesser General
339 | Public License instead of this License.
340 |
--------------------------------------------------------------------------------
/NOTICE:
--------------------------------------------------------------------------------
1 | Structure From Motion Pipeline
2 | Copyright 2015 The Netherlands eScience Center
3 |
4 | This software contains Bundler, which is copyright 2008-2013 Noah Snavely (snavely@cs.cornell.edu). See https://github.com/snavely/bundler_sfm
5 |
6 | This software contains CMVS-PMVS. See https://github.com/pmoulon/CMVS-PMVS
7 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | Structure From Motion Pipeline
2 | ------------------------------
3 |
4 | [](https://travis-ci.org/NLeSC/structure-from-motion)
5 | [](https://www.codacy.com/app/NLeSC/structure-from-motion?utm_source=github.com&utm_medium=referral&utm_content=NLeSC/structure-from-motion&utm_campaign=Badge_Grade)
6 | [](https://coveralls.io/github/NLeSC/structure-from-motion?branch=)
7 | [](http://dx.doi.org/10.5281/zenodo.45937)
8 |
9 | Please cite the tool with its DOI if you use it in a scientific publication.
10 |
11 |
12 |
13 | This repo contains a complete _Structure from Motion_ pipeline. Structure from Motion is a technique to construct a 3-D point cloud from a set of images (or a video) of an object. The software in this repository relies heavily on a number of third-party libraries, notably Bundler, CMVS, PMVS, and SIFT.
14 |
15 |
16 | * Go [here](docs/install-ubuntu-14.10.md) for the installation instructions;
17 | * A conceptual overview of the pipeline is documented [here](docs/structure_from_motion.md);
18 | * The current pipeline has many options that can be configured. [This document](/docs/tuning_guide.md) describes which option does what and how it affects the characteristics of the resulting point cloud;
19 | * [This document](docs/related_work.md) lists a couple of key people, their websites, and tools;
20 | * [Here](docs/ideas.md) we describe some ideas we never found time to look into;
21 | * You can run the pipeline with [docker](https://www.docker.com/) using [this docker image](https://hub.docker.com/r/nlesc/structure-from-motion/). Find the instructions [here](docs/docker.md).
22 |
23 |
24 |
25 |
26 | Example
27 | --------
28 |
29 | ![Example output point cloud](docs/images/example-output.png)
30 |
31 | This software includes a small example, in this case [a rock on the parking lot outside of our building](https://www.google.com/maps/place/52%C2%B021'24.6%22N+4%C2%B057'15.1%22E/@52.356789,4.9542065,49m/data=!3m1!1e3!4m2!3m1!1s0x0:0x0). See [here](docs/example.md) for some info on how to test the pipeline on the example.
32 |
33 |
34 |
35 |
36 | Copyrights & Disclaimers
37 | ------------------------
38 |
39 | The software is copyrighted by the Netherlands eScience Center and
40 | released under the GNU General Public License (GPL), Version 2.0.
41 |
42 | See https://www.esciencecenter.nl for more information on the
43 | Netherlands eScience Center.
44 |
45 |
46 |
47 | See the "LICENSE" and "NOTICE" files for more information.
48 |
49 |
--------------------------------------------------------------------------------
/attic/pmpf/README.md:
--------------------------------------------------------------------------------
1 | This directory contains the pilot job framework file used for committing jobs to a cluster.
2 |
--------------------------------------------------------------------------------
/attic/pmpf/Server.java:
--------------------------------------------------------------------------------
1 | /*
2 | * Copyright 2013 Netherlands eScience Center
3 | *
4 | * Licensed under the Apache License, Version 2.0 (the "License");
5 | * you may not use this file except in compliance with the License.
6 | * You may obtain a copy of the License at
7 | *
8 | * http://www.apache.org/licenses/LICENSE-2.0
9 | *
10 | * Unless required by applicable law or agreed to in writing, software
11 | * distributed under the License is distributed on an "AS IS" BASIS,
12 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | * See the License for the specific language governing permissions and
14 | * limitations under the License.
15 | */
16 |
17 |
18 | import java.io.BufferedReader;
19 | import java.io.File;
20 | import java.io.FileReader;
21 | import java.io.IOException;
22 | import java.net.ServerSocket;
23 | import java.net.Socket;
24 |
25 | /**
26 | * @author Jason Maassen
27 | * @version 1.0
28 | * @since 1.0
29 | *
30 | */
31 | public class Server {
32 |
33 | private static final short DEFAULT_PORT = 19876;
34 |
35 | private ServerSocket ss;
36 | private BufferedReader input;
37 |
38 | public Server(File inputfile, short port) throws IOException {
39 | input = new BufferedReader(new FileReader(inputfile));
40 | ss = new ServerSocket(port, 1024);
41 | }
42 |
43 | public void run() throws IOException {
44 |
45 | boolean done = false;
46 |
47 | System.out.println("Server starting!");
48 |
49 | while (!done) {
50 | String line = input.readLine();
51 |
52 | if (line == null) {
53 | // We've reached EOF!
54 | input.close();
55 | done = true;
56 | } else if (!line.startsWith("#")) {
57 | Socket tmp = ss.accept();
58 | System.out.println("Returning line: " + line);
59 | tmp.getOutputStream().write((line+"\n").getBytes());
60 | tmp.close();
61 | }
62 | }
63 |
64 | System.out.println("No more input -- shutting down");
65 | ss.close();
66 | }
67 |
68 | public static void main(String [] args) {
69 |
70 | short port = DEFAULT_PORT;
71 |
72 | if (args.length < 1 || args.length > 2) {
73 | System.err.println("Usage: nls.esciencecenter.patty.Server <inputfile> [port]");
74 | System.exit(1);
75 | }
76 |
77 | File f = new File(args[0]);
78 |
79 | if (!f.exists() || !f.isFile() || !f.canRead()) {
80 | System.err.println("Cannot access inputfile " + args[0]);
81 | System.exit(1);
82 | }
83 |
84 | if (args.length == 2) {
85 | port = Short.parseShort(args[1]);
86 | }
87 |
88 | try {
89 | Server s = new Server(f, port);
90 | s.run();
91 | } catch (Exception e) {
92 | System.err.println("Server failed: " + e.getLocalizedMessage());
93 | e.printStackTrace(System.err);
94 | System.exit(1);
95 | }
96 |
97 | }
98 | }
99 |
100 |
--------------------------------------------------------------------------------
/attic/pmpf/example-input:
--------------------------------------------------------------------------------
1 | one
2 | two
3 | three
4 | four
5 | five
6 | six
7 | seven
8 | eight
9 | nine
10 | ten
11 | eleven
12 | twelve
13 |
--------------------------------------------------------------------------------
/attic/pmpf/pmpf:
--------------------------------------------------------------------------------
1 | #!/bin/sh
2 |
3 | #This script is in charge of starting the workers, one per core (currently hard-coded to 8).
4 |
5 | for i in $(seq 1 8)
6 | do
7 | ./run-slave $@&
8 | done
9 |
10 | wait
11 |
--------------------------------------------------------------------------------
/attic/pmpf/pmpf-sge-script:
--------------------------------------------------------------------------------
1 | #!/bin/sh
2 | #$ -S /bin/sh
3 | #$ -N patty
4 | #$ -l h_rt=24:00:00
5 | #$ -wd /var/scratch/jason/patty
6 | #$ -pe javagat 1
7 |
8 | echo "script running"
9 |
10 | cat $PE_HOSTFILE
11 |
12 | for host in `cat $PE_HOSTFILE | cut -d " " -f 1` ; do
13 | for i in {0..7}
14 | do
15 | ssh -o StrictHostKeyChecking=false $host "cd `pwd` && /var/scratch/jason/patty/code/PattyAnalytics/scripts/pmpf/run-slave"&
16 | done
17 | done
18 |
19 | wait
20 | exit 0
21 |
--------------------------------------------------------------------------------
/attic/pmpf/run-slave:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | HOST=fs0
4 | PORT=19876
5 |
6 | while true
7 | do
8 | #Talk to server, get next directory name
9 | DIR=`nc $HOST $PORT`
10 |
11 | #Check if netcat reported an error
12 | if [ "$?" -ne "0" ]; then
13 | echo "Failed to contact server"
14 | exit 1
15 | fi
16 |
17 | echo "GOT DIR $DIR"
18 |
19 | if [ "$DIR" = "exit" ]
20 | then
21 | #Exit is a magic string sent by the server when we are done
22 | echo "server says we're done"
23 | exit
24 | else
25 | #Insert actual processing script call here
26 | echo "running next dir: $DIR"
27 | cd /var/scratch/jason/patty/
28 | cp -r SITES_image_based/$DIR processed/$DIR
29 | cd processed/$DIR
30 | time /var/scratch/jason/patty/code/bundler-v0.4-source/RunBundler.sh 2>&1 >RunBundler.log
31 | fi
32 | done
33 |
34 |
--------------------------------------------------------------------------------
/docs/docker.md:
--------------------------------------------------------------------------------
1 | Using Docker to run the Structure From Motion Pipeline
2 | =====================================================
3 |
4 | To facilitate running the pipeline with as little effort as possible we have created a docker image.
5 |
6 | Docker is a system for fast deployment of applications using containers. See here for more information: https://www.docker.com/
7 |
8 |
9 |
10 | Quick HOWTO:
11 |
12 | 1. Install Docker: ```sudo apt-get install docker.io```,
13 | 1. retrieve the docker image by running ```docker pull nlesc/structure-from-motion```,
14 | 1. go to the examples directory 'examples/rock',
15 | 1. start the image using ```sudo docker run -u $UID -v "$PWD:/data" nlesc/structure-from-motion```. To process your own set of images, run this command inside the directory containing your image files.
16 |
17 | By default the docker image will run the entire structure-from-motion pipeline on all pictures in the current working directory. If instead you would like a terminal session to play around with the image, try this command:
18 | ````
19 | sudo docker run -u $UID -v "$PWD:/data" -i -t nlesc/structure-from-motion /bin/bash
20 | ````
21 | The main script to run the pipeline is called 'run-sfm.py'.
22 | The image can also be built from source. To do this yourself, you need to check out the submodules:
23 | ````
24 | git submodule update --init --recursive
25 | ````
26 | Then, build the image using the Dockerfile in the repository root directory:
27 | ````
28 | sudo docker build -t sfm_image .
29 | sudo docker run -u $UID -v $PWD:/data sfm_image
30 | ````
31 |
--------------------------------------------------------------------------------
/docs/example.md:
--------------------------------------------------------------------------------
1 | The example folder (examples/rock) contains an example input for the pipeline, in this case a rock.
2 |
3 | After [installing](install-ubuntu-14.10.md), the pipeline can be started by cd'ing into the data directory, and starting the 'run-sfm.py' script from there:
4 |
5 | ```
6 | $ cd ${HOME}/structure-from-motion/examples/rock
7 | $ python ../../run-sfm.py
8 | ```
9 |
10 | Alternatively, the docker image can be used, see [here](docker.md).
11 |
12 | ```
13 | $ cd examples/rock
14 | $ sudo docker run -u $UID -v $PWD:/data nlesc/structure-from-motion
15 | ```
16 |
17 | The resulting sparse pointcloud (in bundle/bundle.out) and dense pointcloud (in pmvs/models/option-0000.ply) can be viewed with, for example, [meshlab](http://meshlab.sourceforge.net/).
18 |
19 | To test if the point cloud was correctly generated, you can use a test script which prints the number of points in the generated cloud:
20 |
21 | ```
22 | $ cd ${HOME}/structure-from-motion
23 | $ test/number_of_points.py examples/rock
24 |
25 | # Using 'bundle.out' file from here: examples/rock/bundle/bundle.out
26 | # Using 'option-0000.ply' from here: examples/rock/pmvs/models/option-0000.ply
27 | # The results are:
28 | # nPointsSparse = 9753
29 | # nPointsDense = 2497111
30 |
31 | 9753
32 | 2497111
33 | ```
34 |
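If you want to check the outputs by hand, a minimal sketch of such a point count is shown below. This is an illustration only and may differ from the bundled `test/number_of_points.py`; it assumes the dense cloud is a standard PLY file (its header contains an `element vertex <N>` line) and that the second header line of `bundle.out` holds `<num_cameras> <num_points>`.

```
# Hedged sketch: count points in the dense and sparse reconstructions.
# Not the pipeline's own script; the paths below are the example's outputs.

def dense_point_count(ply_path):
    # The PLY header is ASCII even for binary files, so read bytes and
    # look for the 'element vertex <N>' line.
    with open(ply_path, "rb") as f:
        for line in f:
            if line.startswith(b"element vertex"):
                return int(line.split()[-1])
            if line.strip() == b"end_header":
                break
    return 0

def sparse_point_count(bundle_path):
    # bundle.out: a comment line, then '<num_cameras> <num_points>'.
    with open(bundle_path) as f:
        f.readline()
        return int(f.readline().split()[1])

print(sparse_point_count("examples/rock/bundle/bundle.out"))
print(dense_point_count("examples/rock/pmvs/models/option-0000.ply"))
```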
--------------------------------------------------------------------------------
/docs/ideas.md:
--------------------------------------------------------------------------------
1 | This document collects some of the ideas that we never had time to look into.
2 |
3 | Here's the list:
4 | * [Brute force calculation of point clouds](#brute-force-calculation-of-point-clouds)
5 | * [Quick feedback system](#quick-feedback-system)
6 | * [Camera parameters sensitivity analysis](#camera-parameters-sensitivity-analysis)
7 | * [Improve visual quality of point clouds/objects](#improve-visual-quality-of-point-cloudsobjects)
8 | * [Alternative keypoint detectors](#alternative-keypoint-detectors)
9 | * [Improve accuracy of key matching by adding easily identifiable objects](#improve-accuracy-of-key-matching-by-adding-easily-identifiable-objects)
10 |
11 |
12 |
13 | ### Brute force calculation of point clouds
14 | * **context:** In our experience it is difficult to know which optimal settings to use when constructing a point cloud. There are many knobs to turn, and it's often not clear how the settings interact in terms of performance, memory requirements, quality of the result point cloud, etc.
15 | * **proposed solution:** start construction of the point cloud using different settings, and then either combine the results, or select a good one (automatically or by asking the user for visual inspection).
16 |
17 |
18 |
19 |
20 | ### Quick feedback system
21 | * **context:** It turns out that it is quite difficult to take 'good' pictures during data acquisition. In the Via Appia data set, we generally see at least a few images not being used for the point cloud. Furthermore, some photos generate many keypoints, while others have few, and additionally, some keypoints are really informative while others aren't. Photos can be good or bad due to various reasons, such as:
22 |
23 | * angle between adjacent photos is too small (in particular for 'photos' derived from video frames)
24 | * photos that are blurry
25 | * photos with wrong aperture (software assumes pinhole camera)
26 | * photos with much background
27 | * photos with low contrast (dark areas such as shadows, often of the photographer; light areas such as walls and other man-made structures).
28 |
29 | Add to this the many settings of modern cameras, and it becomes a multidimensional nonlinear optimization problem.
30 |
31 | * **proposed solution:** All in all, we think the most robust way of dealing with these factors is to come up with a system that is capable of providing quick feedback to the user. We already did a lot of work on increasing the performance of the pipeline, but this is still offline. It would be good to have quick feedback on how much the pointcloud improved as a result of the photo you _just_ took. Such a setup would probably involve wireless cameras that upload their photos to a cluster/cloud, with almost immediate feedback on the number, location, coverage of keypoints; further diagnostics on the resulting sparse/dense point clouds may be provided (albeit with a small delay, perhaps in the order of minutes). This way, archeologists can quickly get a feel for what makes a good photo for their purposes, given the prevalent lighting conditions, camera settings, photograph positions, etc., ultimately resulting in higher-quality datasets.
32 |
33 |
34 |
35 | ### Camera parameters sensitivity analysis
36 | * **context:** To get a better feel for the optimal camera model and camera settings, we could do a sensitivity analysis of different cameras, and vary the settings used on each camera
37 | * **proposed solution:** Vary:
38 | * camera model
39 | * flash settings
40 | * aperture
41 | * ISO
42 |
43 |
44 |
45 | ### Improve visual quality of point clouds/objects
46 |
47 | * **context:** The current pipeline spits out a point cloud, with colored points. The visual representation can still be improved by calculating meshes (surfaces between points) and then by calculating textures on top of the outer surface of objects. This is good for making visually attractive representations of the objects, and calculating meshes has the added bonus of being able to calculate certain metrics (e.g. volume, surface area) that may be of interest to archeologists.
48 | * **proposed solution:** there are a couple of tools available which calculate meshes. We experimented with meshlab. This works OK, but scripting the mesh calculation was a bit ugly (though not impossible). Perhaps other tools can be used as well; for instance Blender & Python can do mesh and texture calculations.
49 |
50 |
51 |
52 | ### Alternative keypoint detectors
53 | * **context:** We currently use SIFT to do the keypoint detection. This works OK in principle, but has the potential drawback of being patented.
54 | * **proposed solution:** Other keypoint detectors are available, some of which are supposedly quicker (although that is not really where most time is spent, so maybe it's not worth optimizing). Avoiding license issues may be a reason to switch from SIFT to something else though. Also, it's worth investigating whether the results from different keypoint identifiers can be concatenated for a better result.
55 |
56 |
57 |
58 | ### Improve accuracy of key matching by adding easily identifiable objects
59 | * **context:** Key matching is sometimes difficult, in particular when the object has symmetry or repeating shapes (e.g. standard windows, pillars, tiles, etc).
60 | * **proposed solution:** Adding small objects to the scene before the photographs are taken can help correctly stitch together the photographs. Ideally, the objects are rigid, high contrast, and uniquely identifiable from any angle and distance. Perhaps [QR codes](http://en.wikipedia.org/wiki/QR_code) could be used.
61 |
62 |
63 |
64 |
--------------------------------------------------------------------------------
/docs/images/example-output.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/docs/images/example-output.png
--------------------------------------------------------------------------------
/docs/images/sfm.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/docs/images/sfm.png
--------------------------------------------------------------------------------
/docs/install-ubuntu-14.10.md:
--------------------------------------------------------------------------------
1 | Install guide
2 | =============
3 | This install guide explains how to install the structure from motion pipeline in Ubuntu 14.10.
4 |
5 | * [Setting up a virtual machine (optional)](#setting-up-a-virtual-machine-optional)
6 | * [Installing packages from Ubuntu repositories](#installing-packages-from-the-ubuntu-repositories)
7 | * [Downloading and installing other tools](#downloading-and-installing-other-tools)
8 | * [Running an example](#running-an-example)
9 |
10 |
11 | ## Setting up a virtual machine (optional)
12 |
13 |
14 | Creating a virtual machine is an optional step, useful for example if you are using Windows. You can also skip this step and go directly to [Installing packages from Ubuntu repositories](#installing-packages-from-the-ubuntu-repositories) instead.
15 |
16 | Download and install the latest [VirtualBox](https://www.virtualbox.org/wiki/Downloads) if you haven't already.
17 |
18 | Download the latest [Ubuntu](https://www.ubuntu.com/download/desktop) iso.
19 |
20 | Create an image in VirtualBox and install Ubuntu.
21 |
22 | I configured VirtualBox to use:
23 |
24 | * 5000 MB memory
25 | * virtual harddisk
26 | * type: VDI
27 | * dynamically allocated storage
28 | * 64 GB diskspace
29 |
30 | After the VM has been created, you can set the following properties:
31 |
32 | * 2 cores
33 | * 128 MB video memory
34 | * 3D acceleration enabled
35 |
36 | These options are mainly determined by the limitations of the machine you run on (this is about as much as my laptop can handle). Generally, using more cores and more memory is a good idea.
37 |
38 | Start the virtual machine, install Ubuntu from the iso we just downloaded. Tick the box about downloading any available updates when installing.
39 |
40 | We also installed the following optional packages:
41 |
42 | sudo apt-get install virtualbox-guest-utils
43 | sudo apt-get install virtualbox-guest-x11
44 |
45 | Doing so allows you to share the clipboard between the host and the guest.
46 |
47 |
48 |
49 | ## Installing packages from the Ubuntu repositories
50 |
51 | Once you have Ubuntu up and running we need to install the necessary tools and libraries. Open a terminal and install the following packages:
52 |
53 | ```
54 | # git
55 | # You will need git to clone the latest versions of the structure from motion
56 | # software from github:
57 | sudo apt-get install git
58 |
59 | # cmake
60 | # You need cmake to generate the Makefiles needed to build the
61 | # structure from motion software from github:
62 | sudo apt-get install cmake
63 |
64 | # gfortran
65 | # You need a Fortran compiler to compile (parts of) the structure from motion software:
66 | sudo apt-get install gfortran
67 |
68 | # Glog
69 | # a logging library from google (https://github.com/google/glog):
70 | sudo apt-get install libgoogle-glog-dev
71 |
72 | # Atlas
73 | # The "Automatically Tuned Linear Algebra Software" provides C and Fortran77
74 | # interfaces to a portably efficient BLAS implementation, as well as a few routines
75 | # from LAPACK (http://math-atlas.sourceforge.net/):
76 | sudo apt-get install libatlas-base-dev
77 |
78 | # Eigen3
79 | # C++ template library for linear algebra: matrices, vectors, numerical solvers,
80 | # and related algorithms (http://eigen.tuxfamily.org).
81 | sudo apt-get install libeigen3-dev
82 |
83 | # SuiteSparse
84 | # suite of sparse matrix algorithms (http://faculty.cse.tamu.edu/davis/suitesparse.html):
85 | sudo apt-get install libsuitesparse-dev
86 |
87 | # zlib
88 | # a library implementing the deflate compression method found in gzip and PKZIP:
89 | sudo apt-get install zlib1g-dev
90 |
91 | # libjpeg
92 | # a library implementing the loading of jpeg images:
93 | sudo apt-get install libjpeg-dev
94 |
95 | # libboost
96 | # library with 'all the features you wanted in C++ but weren't there'
97 | sudo apt-get install libboost-dev
98 |
99 | # Python imaging library may not be installed by default on the
100 | # lighter flavors of Ubuntu (e.g. Lubuntu 14.10 or Ubuntu 14.10 server)
101 | sudo apt-get install python-pil
102 |
103 | # for viewing the point clouds afterwards
104 | sudo apt-get install meshlab
105 |
106 | ```
107 |
108 |
109 |
110 | ## Downloading and installing other tools
111 |
112 |
113 | ### Cloning NLeSC's structure-from-motion repository
114 |
115 | Our repository includes two other repositories as submodules:
116 |
117 | * [bundler_sfm](http://www.cs.cornell.edu/~snavely/bundler/)
118 | * [cmvs](http://www.di.ens.fr/cmvs/)/[pmvs](http://www.di.ens.fr/pmvs/)
119 |
120 | To make sure you get the contents of the submodules when checking out the structure-from-motion repository, use the ``--recursive`` option to ``git clone``:
121 |
122 |
123 | ```
124 | cd ${HOME}
125 | git clone --recursive https://github.com/NLeSC/structure-from-motion.git
126 |
127 | ```
128 |
129 |
130 |
131 |
132 | ### Installing Ceres
133 |
134 |
135 | The [Ceres Solver](http://ceres-solver.org) is _"an open source C++ library for modeling and solving large,
136 | complicated optimization problems. It is a feature rich, mature and performant library which has been used
137 | in production at Google since 2010."_ This solver is needed by the _bundle adjustment_ step of the structure from motion (SfM) pipeline. We already installed the Ceres dependencies (originally described
138 | [here](http://ceres-solver.org/building.html)) in the previous section, so now we can proceed to download and install Ceres:
139 |
140 | ```
141 | cd ${HOME}/structure-from-motion
142 | wget http://ceres-solver.org/ceres-solver-1.10.0.tar.gz
143 | tar zxf ceres-solver-1.10.0.tar.gz
144 | mkdir ceres-bin
145 | cd ceres-bin
146 | cmake ../ceres-solver-1.10.0
147 | make -j3
148 | make test
149 | sudo make install
150 |
151 | ```
152 |
153 |
154 | ### Compiling Bundler
155 |
156 |
157 | [Bundler](http://www.cs.cornell.edu/~snavely/bundler/) is a structure-from-motion (SfM) system for unordered
158 | image collections. Bundler takes a set of images, image features, and image matches as input, and produces a
159 | 3D reconstruction of camera and (sparse) scene geometry as output.
160 |
161 | Next, compile bundler_sfm:
162 |
163 | ```
164 | cd ${HOME}/structure-from-motion/bundler_sfm
165 | make
166 |
167 | ```
168 |
169 |
170 | ### Compiling CMVS/PMVS2
171 |
172 | [PMVS2](http://www.di.ens.fr/pmvs/) is multi-view stereo software that takes a set of images and camera
173 | parameters (generated by bundler), and then reconstructs 3D structure of an object or a scene visible in the images. The software outputs a _dense point cloud_, that is, a set of oriented points where both the 3D coordinate and the surface normal are estimated for each point.
174 |
175 | [CMVS](http://www.di.ens.fr/cmvs/) is software for _clustering views for multi-view stereo_. It is basically a
176 | pre-processor for PMVS2 that takes the output of bundler and generates one or more (optimized) configuration files for PMVS2. CMVS is normally used to split the PMVS2 processing in multiple independent parts, for example when creating a 3D reconstruction on the basis of thousands of images (which would be too much for PMVS2 to handle all at once). However, even when the number of images used is small, there is an advantage in using CMVS as it also removes unused images from the data set, and provides the order in which PMVS2 should process the images. This significantly reduces the processing time needed by PMVS2. The version we included in the structure-from-motion repository is the fork maintained by [pmoulon](https://github.com/pmoulon/CMVS-PMVS). It contains both CMVS and PMVS2, adds a cmake configuration, and contains several bug and performance fixes.
177 |
178 | Compile CMVS/PMVS like this:
179 |
180 | ```
181 | cd ${HOME}/structure-from-motion/cmvs-pmvs/program
182 | mkdir build
183 | cd build
184 | cmake ..
185 | make
186 |
187 | ```
188 |
189 |
190 |
191 |
192 |
193 |
194 |
195 | ## Running an example
196 |
197 | The pipeline can be started by ``cd``'ing into a data directory, and starting the 'run-sfm.py' script from there:
198 |
199 | ```
200 | cd ${HOME}/structure-from-motion/examples/rock
201 | python ../../run-sfm.py
202 |
203 | ```
204 | On my laptop the example finished in 1 hour. After finishing, you can open a .ply file from ```${HOME}/structure-from-motion/examples/rock/bundle``` with MeshLab to see the outcome.
205 |
--------------------------------------------------------------------------------
/docs/related_work.md:
--------------------------------------------------------------------------------
1 | This document gives a small overview of the various developments on structure-from-motion that we found and/or have experience with.
2 |
3 | List of SFM Tools
4 | =================
5 |
6 | Integrated tools for SFM
7 | ------------------------
8 |
9 | https://github.com/dddExperiments/SFMToolkit
10 | http://www.visual-experiments.com/demos/sfmtoolkit/
11 | Theia: http://cs.ucsb.edu/~cmsweeney/theia/sfm.html
12 |
13 | Keypoint Detection
14 | ------------------------
15 |
16 | Image libraries that contain SIFT/SURF/BRISK
17 |
18 | http://opencv.org/about.html
19 | (An example: http://stackoverflow.com/questions/5461148/sift-implementation-with-opencv-2-2)
20 |
21 | http://www.vlfeat.org/
22 |
23 | ### Sift
24 |
25 | Good explanation of what sift does:
26 |
27 | http://www.aishack.in/2010/05/sift-scale-invariant-feature-transform/
28 |
29 | Original sift:
30 |
31 | http://www.cs.ubc.ca/~lowe/keypoints/
32 |
33 | Some alternative implementations of sift:
34 |
35 | * http://www.robots.ox.ac.uk/~vedaldi/code/siftpp.html
36 | * http://robwhess.github.io/opensift/
37 | * http://www.cs.unc.edu/~ccwu/siftgpu/
38 |
39 | ### Surf
40 |
41 | http://www.vision.ee.ethz.ch/~surf/
42 |
43 | ### Brisk
44 |
45 | https://github.com/rghunter/BRISK
46 |
47 | Bundle adjustment
48 | -----------------
49 |
50 | Bundler Tool
51 |
52 | http://www.cs.cornell.edu/~snavely/
53 | https://github.com/snavely
54 |
55 | http://grail.cs.washington.edu/projects/mcba/
56 |
57 |
58 | Theia
59 |
60 | https://github.com/sweeneychris/TheiaSfM
61 |
62 | Theia is an alternative to bundler (and the processing pipeline preceding bundler).
63 | It consists of a library containing all the elements needed to do bundle adjustment
64 | (keypoint detection, keypoint matching, etc.) and contains several example applications
65 | implementing the entire pipeline. Theia contains several state-of-the-art algorithms,
66 | such as a cascade hashing based keypoint matching, and a global SfM approach that
67 | considers the entire view graph at the same time instead of incrementally adding
68 | more and more images to the reconstruction. As of late 2014, Theia was still in active
69 | development and not completely stable, but it is likely to become an efficient
70 | replacement for bundler.
71 |
72 | Clustering
73 | ----------
74 |
75 | http://www.di.ens.fr/pmvs/
76 | http://www.di.ens.fr/cmvs/
77 |
78 | Misc
79 | ----
80 |
81 | These libraries are used by some of the steps:
82 |
83 | http://www.cs.utexas.edu/users/dml/Software/graclus.html
84 | http://ceres-solver.org/
85 |
86 |
87 | Relevant Papers
88 | ===============
89 |
90 | http://foto.hut.fi/seura/julkaisut/pjf/pjf_e/2014/PJF2014_Lehtola_et_al.pdf
91 |
--------------------------------------------------------------------------------
/docs/structure_from_motion.md:
--------------------------------------------------------------------------------
1 | Structure From Motion
2 | =====================
3 |
4 | Structure from motion is a technique where a collection of images of a single object is transformed into a pointcloud.
5 |
6 | See the Wikipedia page on Structure from Motion: http://en.m.wikipedia.org/wiki/Structure_from_motion
7 |
8 | Basic Workflow
9 | --------------
10 |
11 | The process consists of 6 basic steps shown in the workflow below:
12 |
13 | ![Structure from motion workflow](images/sfm.png)
14 |
15 | - focal point extraction -- extract the focal length and sensor size from the EXIF information in each image (a small sketch of this step follows the list).
16 | - keypoint detection -- detect "points of interest" in each image.
17 | - keypoint matching -- compare the keypoints of each image pair to see if and how the images overlap.
18 | - bundle adjustment -- determine the camera position for each image, using multiple overlapping images as input. This also produces an initial sparse pointcloud.
19 | - undistort images -- fix any distortion in the images caused by the camera.
20 | - reconstruction of 3D structure -- combine the images into a dense pointcloud.
21 |
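As an illustration of the first step, the sketch below reads the focal length from a photo's EXIF data with PIL (a library the pipeline already depends on) and converts it to pixel units. This is not the pipeline's own code, and the sensor (CCD) width used is an example value that would normally be looked up per camera model.

```
from PIL import Image
from PIL.ExifTags import TAGS

def focal_length_mm(path):
    # Return the focal length (in mm) stored in the JPEG's EXIF data, if any.
    exif = Image.open(path)._getexif() or {}
    for tag_id, value in exif.items():
        if TAGS.get(tag_id) == "FocalLength":
            # Older PIL stores a (numerator, denominator) tuple,
            # newer Pillow a rational number.
            if isinstance(value, tuple):
                return value[0] / float(value[1])
            return float(value)
    return None

def focal_length_pixels(path, ccd_width_mm=6.17):
    # Convert the EXIF focal length to pixel units; 6.17 mm is just an
    # example sensor width (a common 1/2.3" CCD), not a pipeline constant.
    f_mm = focal_length_mm(path)
    if f_mm is None:
        return None
    width_px = Image.open(path).size[0]
    return f_mm * width_px / ccd_width_mm
```
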
22 | There are many different implementations for each step in this workflow. In our workflow we use the following combination of tools:
23 |
24 | - [SIFT](http://www.cs.ubc.ca/~lowe/keypoints/) for keypoint detection. Note that SIFT is patented and can only be used for research purposes.
25 | - [Bundler](http://www.cs.cornell.edu/~snavely/bundler/) for keypoint matching, bundle adjustment and undistort images.
26 | - [CMVS/PMVS2](http://www.di.ens.fr/cmvs/) to reconstruct the 3D structure using multi-view stereo.
27 |
28 | Note that we don't use the original versions provided at the links above, but instead use more up-to-date versions from GitHub. The details can be found in the
29 | [install guide](./install-ubuntu-14.10.md).
30 |
31 |
--------------------------------------------------------------------------------
/docs/tuning_guide.md:
--------------------------------------------------------------------------------
1 | Tuning guide for the structure from motion pipeline
2 | ===================================================
3 |
4 | Many of the tools in the structure from motion pipeline require tuning
5 | to improve the quality of the output and/or the performance. For many
6 | tools, the best configuration to use may also depend on the image
7 | resolution or number of images that are used. In this document we
8 | describe what settings we have tried so far.
9 |
10 | SIFT
11 | ----
12 |
13 | Sift generates the keypoints for each image. Each keypoint describes a
14 | _distinctive feature_ in the image in a scale- and rotation-independent
15 | way. By matching the keypoints of each image with those of all other images, the
16 | SfM pipeline can determine which images overlap (partly).
17 |
18 | More information on SIFT can be found
19 | [here](http://en.wikipedia.org/wiki/Scale-invariant_feature_transform).
20 |
21 | The version of SIFT we use can be found in the ``bundler_sfm/src/Sift.cpp``
22 | file. In this SIFT implementation, the following settings are important:
23 |
24 | - ``SIFT::DoubleImSize`` this setting determines whether the image
25 | should be doubled in size before the SIFT algorithm is run. SIFT internally
26 | downscales the image repeatedly to detect features of different sizes. It
27 | always downscales once before detecting the first features. Therefore, very
28 | small features cannot be detected unless the image is doubled in size first.
29 | It is typically good to set this to ``true`` for low resolution images
30 | (e.g. 1024x768) and ``false`` for high resolution images (as produced by
31 | modern cameras). In our pipeline the default is ``false``.
32 |
33 | - ``SIFT::PeakThreshInit`` this setting determines the minimum contrast required
34 | for a point to be considered as a keypoint. Dark areas in images typically result
35 | in _noisy_ keypoints which are easily confused with others. In our pipeline the
36 | default is ``0.08``. Using lower values will include more keypoints from darker
37 | areas.
38 |
39 | KeyPoint matching
40 | -----------------
41 |
42 | After SIFT, the keypoints generated for the images are compared using a keypoint matcher.
43 | The matchers we use, ``KeyMatchFull`` or ``KeyMatchPart``, are part of the Bundler tool set.
44 | These matchers compare the keypoints using an _approximate nearest neighbor KD tree_. More
45 | information on the implementation of these trees can be found [here](https://www.cs.umd.edu/~mount/ANN/).
46 |
47 | The matcher we use can be found in the ``bundler_sfm/src/KeyMatchPart.cpp``
48 | file. In this matcher implementation, the following settings are important:
49 |
50 | - ``ratio`` is the fifth (optional) parameter to KeyMatchPart. During matching, each keypoint in one image is compared to all keypoints in another image by computing the Euclidean distance between the feature vectors of the two keypoints. The ratio of the distances of the two best matches (the best and the runner-up) is then computed. If this ratio is close to 1, the match is considered to be bad, since there are multiple 'potential' matches for the keypoint. If the ratio is closer to 0, the match is considered good, since the difference (and thus the distance) between the best match and the runner-up is large. The ratio parameter determines the threshold above which matches are discarded. The default in our pipeline is `0.6`. Higher values will make the matching less strict and thus produce more matches of lower quality. The ratio test is sketched below.
51 |
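For illustration, the ratio test can be sketched as follows. This is not the ``KeyMatchFull``/``KeyMatchPart`` implementation: it uses brute-force nearest-neighbour search with NumPy instead of the approximate nearest neighbor KD tree, and the function name and arguments are ours.

```python
import numpy as np


def ratio_test_matches(desc_a, desc_b, ratio=0.6):
    """Illustrative brute-force ratio test on two sets of SIFT descriptors.

    desc_a, desc_b: (N, 128) float arrays; returns (index_in_a, index_in_b) pairs.
    """
    matches = []
    for i, d in enumerate(desc_a):
        # Euclidean distance from this descriptor to every descriptor in desc_b.
        dists = np.sqrt(((desc_b - d) ** 2).sum(axis=1))
        order = np.argsort(dists)
        best, runner_up = dists[order[0]], dists[order[1]]
        # Accept only if the best match is clearly better than the runner-up.
        if runner_up > 0 and best / runner_up < ratio:
            matches.append((i, int(order[0])))
    return matches
```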
52 |
53 | Bundler
54 | -------
55 |
56 | After matching, Bundler takes the keypoints and attempts to reconstruct the camera position in 3D space for each image. In addition, a sparse point cloud is created that contains a relatively small number of object points in 3D space.
57 |
58 | Bundler reads its configuration from a text file `options.txt`, which is generated by the `run-sfm.py` script we use in our pipeline. In this configuration file, the following settings are important:
59 |
60 | - ``--use_ceres`` is used to switch between the Ceres solver and the internal solver of Bundler. Using Ceres significantly improves the performance of Bundler, as it is capable of using all cores of a machine.
61 |
62 | - ``--construct_max_connectivity`` is used to instruct Bundler to add images in the order of how _connected_ they are to other images. That is, images containing features that can be matched to many other images are added first. The alternative is to add images based on the number of matches, which tends to add images based on how similar they are to the current set.
63 |
64 | - ``--projection_estimation_threshold 1.1`` is the RANSAC threshold used when doing pose estimation to add a new image. Lower values result in a stricter selection of which estimates are valid. The default value used in Bundler is 4, which seemed to result in overly noisy estimates of the camera positions. A lower value produced a higher quality result. A fragment of a generated `options.txt` is shown below.
65 |
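For reference, the part of a generated `options.txt` that covers the settings discussed above could look like the fragment below. This is only an illustrative excerpt; the full file written by `run-sfm.py` contains additional options (matching tables, output paths, and so on).

```
--use_ceres
--construct_max_connectivity
--projection_estimation_threshold 1.1
```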
66 | CMVS/PMVS2
67 | ----------
68 |
69 | After estimating the camera positions, PMVS2 is used to estimate the 3D positions of object points. Before
70 | PMVS2 is run, CMVS is used to read the output of Bundler and create a configuration file for PMVS2 in `pmvs/option-0000`. This configuration file specifies which images should be used for the reconstruction, and at what resolution the input images should be used. The following settings are important:
71 |
72 | - ``timages`` the images actually used in the reconstruction of the 3D object. Usually only a subset of the input images is used.
73 |
74 | - ``level`` this setting determines how much the input images are downsampled before 3D reconstruction. Level 0 means full resolution, 1 uses half the resolution, etc. We use 0 for the best result. Setting this to 1 will significantly reduce the computation time and the number of points in the result.
75 |
76 | - ``threshold`` this setting determines which patch reconstructions are accepted. Lower values will accept more patches but will produce a noisier result. Higher values will accept fewer patches and will produce a result with fewer errors, but more missing points. We use the default of `0.7`.
77 |
78 | - ``maxAngle`` determines the minimum angle between cameras before they are considered for 3D reconstruction. If the baseline between the cameras is too small, the 3D reconstruction tends to have higher errors. We use the default angle of 10 degrees.
79 |
80 | - ``CPU`` determines the number of cores used by PMVS2. Our `run-sfm.py` script detects the number of
81 | cores automatically and generates the correct configuration. An example fragment of `pmvs/option-0000` is shown below.
82 |
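To illustrate, a `pmvs/option-0000` file containing the settings discussed above might include lines like the fragment below. This is an illustrative excerpt only: the file generated by CMVS contains additional settings (such as `csize`, `wsize`, and `minImageNum`), and the actual `timages` line lists the image indices selected by CMVS.

```
level 0
threshold 0.7
maxAngle 10
CPU 8
timages 5 0 1 2 3 4
```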
83 |
84 |
85 |
86 |
87 |
88 |
89 |
90 |
--------------------------------------------------------------------------------
/examples/rock-section/img_0001.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock-section/img_0001.jpg
--------------------------------------------------------------------------------
/examples/rock-section/img_0002.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock-section/img_0002.jpg
--------------------------------------------------------------------------------
/examples/rock-section/img_0003.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock-section/img_0003.jpg
--------------------------------------------------------------------------------
/examples/rock-section/img_0004.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock-section/img_0004.jpg
--------------------------------------------------------------------------------
/examples/rock-section/img_0005.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock-section/img_0005.jpg
--------------------------------------------------------------------------------
/examples/rock-section/img_0006.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock-section/img_0006.jpg
--------------------------------------------------------------------------------
/examples/rock-section/img_0007.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock-section/img_0007.jpg
--------------------------------------------------------------------------------
/examples/rock-section/img_0008.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock-section/img_0008.jpg
--------------------------------------------------------------------------------
/examples/rock-section/img_0009.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock-section/img_0009.jpg
--------------------------------------------------------------------------------
/examples/rock-section/img_0010.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock-section/img_0010.jpg
--------------------------------------------------------------------------------
/examples/rock-video/.gitignore:
--------------------------------------------------------------------------------
1 | # ignore all files...
2 | *
3 |
4 |
5 | # ...except the current one...
6 | !.gitignore
7 |
8 | # ...and the video file
9 | !testvideo.mp4
10 |
11 |
--------------------------------------------------------------------------------
/examples/rock-video/testvideo.mp4:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock-video/testvideo.mp4
--------------------------------------------------------------------------------
/examples/rock/.gitignore:
--------------------------------------------------------------------------------
1 | # Ignore everything
2 | *
3 |
4 | # except these files:
5 | !.gitignore
6 | !*.jpg
7 | !*.JPG
8 | !*.jpeg
9 | !*.JPEG
10 |
--------------------------------------------------------------------------------
/examples/rock/img_0001.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock/img_0001.jpg
--------------------------------------------------------------------------------
/examples/rock/img_0002.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock/img_0002.jpg
--------------------------------------------------------------------------------
/examples/rock/img_0003.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock/img_0003.jpg
--------------------------------------------------------------------------------
/examples/rock/img_0004.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock/img_0004.jpg
--------------------------------------------------------------------------------
/examples/rock/img_0005.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock/img_0005.jpg
--------------------------------------------------------------------------------
/examples/rock/img_0006.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock/img_0006.jpg
--------------------------------------------------------------------------------
/examples/rock/img_0007.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock/img_0007.jpg
--------------------------------------------------------------------------------
/examples/rock/img_0008.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock/img_0008.jpg
--------------------------------------------------------------------------------
/examples/rock/img_0009.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock/img_0009.jpg
--------------------------------------------------------------------------------
/examples/rock/img_0010.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock/img_0010.jpg
--------------------------------------------------------------------------------
/examples/rock/img_0011.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock/img_0011.jpg
--------------------------------------------------------------------------------
/examples/rock/img_0012.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock/img_0012.jpg
--------------------------------------------------------------------------------
/examples/rock/img_0013.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock/img_0013.jpg
--------------------------------------------------------------------------------
/examples/rock/img_0014.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock/img_0014.jpg
--------------------------------------------------------------------------------
/examples/rock/img_0015.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock/img_0015.jpg
--------------------------------------------------------------------------------
/examples/rock/img_0016.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock/img_0016.jpg
--------------------------------------------------------------------------------
/examples/rock/img_0017.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock/img_0017.jpg
--------------------------------------------------------------------------------
/examples/rock/img_0018.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock/img_0018.jpg
--------------------------------------------------------------------------------
/examples/rock/img_0019.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock/img_0019.jpg
--------------------------------------------------------------------------------
/examples/rock/img_0020.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock/img_0020.jpg
--------------------------------------------------------------------------------
/examples/rock/img_0021.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock/img_0021.jpg
--------------------------------------------------------------------------------
/examples/rock/img_0022.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock/img_0022.jpg
--------------------------------------------------------------------------------
/examples/rock/img_0023.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock/img_0023.jpg
--------------------------------------------------------------------------------
/examples/rock/img_0024.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock/img_0024.jpg
--------------------------------------------------------------------------------
/examples/rock/img_0025.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock/img_0025.jpg
--------------------------------------------------------------------------------
/examples/rock/img_0026.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock/img_0026.jpg
--------------------------------------------------------------------------------
/examples/rock/img_0027.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock/img_0027.jpg
--------------------------------------------------------------------------------
/examples/rock/img_0028.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock/img_0028.jpg
--------------------------------------------------------------------------------
/examples/rock/img_0029.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock/img_0029.jpg
--------------------------------------------------------------------------------
/examples/rock/img_0030.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock/img_0030.jpg
--------------------------------------------------------------------------------
/examples/rock/img_0031.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock/img_0031.jpg
--------------------------------------------------------------------------------
/examples/rock/img_0032.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock/img_0032.jpg
--------------------------------------------------------------------------------
/examples/rock/img_0033.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock/img_0033.jpg
--------------------------------------------------------------------------------
/examples/rock/img_0034.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock/img_0034.jpg
--------------------------------------------------------------------------------
/examples/rock/img_0035.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock/img_0035.jpg
--------------------------------------------------------------------------------
/examples/rock/img_0036.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock/img_0036.jpg
--------------------------------------------------------------------------------
/examples/rock/img_0037.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock/img_0037.jpg
--------------------------------------------------------------------------------
/examples/rock/img_0038.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock/img_0038.jpg
--------------------------------------------------------------------------------
/examples/rock/img_0039.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock/img_0039.jpg
--------------------------------------------------------------------------------
/examples/rock/img_0040.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock/img_0040.jpg
--------------------------------------------------------------------------------
/examples/rock/img_0041.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock/img_0041.jpg
--------------------------------------------------------------------------------
/examples/rock/img_0042.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock/img_0042.jpg
--------------------------------------------------------------------------------
/examples/rock/img_0043.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock/img_0043.jpg
--------------------------------------------------------------------------------
/examples/rock/img_0044.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock/img_0044.jpg
--------------------------------------------------------------------------------
/examples/rock/img_0045.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock/img_0045.jpg
--------------------------------------------------------------------------------
/examples/rock/img_0046.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock/img_0046.jpg
--------------------------------------------------------------------------------
/examples/rock/img_0047.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock/img_0047.jpg
--------------------------------------------------------------------------------
/examples/rock/img_0048.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock/img_0048.jpg
--------------------------------------------------------------------------------
/examples/rock/img_0049.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock/img_0049.jpg
--------------------------------------------------------------------------------
/examples/rock/img_0050.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock/img_0050.jpg
--------------------------------------------------------------------------------
/examples/rock/img_0051.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock/img_0051.jpg
--------------------------------------------------------------------------------
/examples/rock/img_0052.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock/img_0052.jpg
--------------------------------------------------------------------------------
/examples/rock/img_0053.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/examples/rock/img_0053.jpg
--------------------------------------------------------------------------------
/image-preprocessing/README.md:
--------------------------------------------------------------------------------
1 |
2 | In some cases, it can be advantageous to preprocess the images. The repository includes scripts to:
3 | * [crop](cropper) the edges;
4 | * [calculate](masker) a mask;
5 | * [resize](resizer) images to make them more manageable.
6 |
7 | These scripts rely on ImageMagick for the heavy lifting. ImageMagick can be installed with:
8 |
9 | ```
10 | sudo apt-get install imagemagick
11 | ```
12 |
13 |
--------------------------------------------------------------------------------
/image-preprocessing/cropper/README.md:
--------------------------------------------------------------------------------
1 |
2 | Example usage:
3 | ```
4 | ./crop-image-sides.sh --in testimage --out cropped --cropSides 5 --cropTop 3
5 | ```
6 | crops one fifth of the width from each side of the images in the 'testimage' directory, and one third of the height from the top (nothing from the bottom).
7 |
8 |
9 | 'crop-image-sides.sh' crops all ``*.jpg|*.JPG`` (but not ``*.jpeg|*.JPEG``) in the ``--in`` directory. The amount of cropping is controlled by the ``--cropSides`` and ``--cropTop`` arguments.
10 |
11 | 'crop-image-sides.sh' needs imagemagick's ``convert``. Install imagemagick using
12 |
13 | ```
14 | sudo apt-get install imagemagick
15 | ```
16 |
17 |
--------------------------------------------------------------------------------
/image-preprocessing/cropper/crop-image-sides.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | exampleUsageStr="$0 --in testimage --out cropped --cropSides 5 --cropTop 3"
4 | if [ "$1" == "--in" ]; then
5 | theInputDir=$2
6 | else
7 | echo
8 | echo 'example usage:'
9 | echo $exampleUsageStr
10 | echo
11 | exit -1
12 | fi
13 |
14 | if [ "$3" == "--out" ]; then
15 | theOutputDir=$4
16 | else
17 | echo
18 | echo 'example usage:'
19 | echo $exampleUsageStr
20 | echo
21 | exit -1
22 | fi
23 |
24 | if [ "$5" == "--cropSides" ]; then
25 | cropLeft=$6
26 | cropRight=$cropLeft
27 | else
28 | echo
29 | echo 'example usage:'
30 | echo $exampleUsageStr
31 | echo
32 | exit -1
33 | fi
34 |
35 | if [ "$7" == "--cropTop" ]; then
36 | cropTop=$8
37 | else
38 | echo
39 | echo 'example usage:'
40 | echo $exampleUsageStr
41 | echo
42 | exit -1
43 | fi
44 |
45 | if [ "$9" == "--vmirror" ]; then
46 | cropBottom=$cropTop
47 | else
48 | if [ "$9" == "" ]; then
49 | cropBottom=0
50 | else
51 | echo
52 | echo 'example usage:'
53 | echo $exampleUsageStr
54 | echo
55 | exit -1
56 | fi
57 | fi
58 |
59 | echo "cropLeft = $cropLeft"
60 | echo "cropRight = $cropRight"
61 | echo "cropTop = $cropTop"
62 | echo "cropBottom = $cropBottom"
63 |
64 | mkdir -p $theOutputDir
65 |
66 |
67 | #files=$theInputDir/*.$ext
68 |
69 | files=$(find $theInputDir -maxdepth 1 -type f -iname '*.jpg')
70 |
71 | for f in $files; do
72 |
73 | echo "Processing file: $f..."
74 | fileName=${f##*/}
75 |
76 | theInputFile=$theInputDir/$fileName
77 | theOutputFile=$theOutputDir/$fileName
78 |
79 | # doing math in Bash
80 |
81 | curWidth=$(identify -format "%w" $theInputFile)
82 |
83 | if [ "$cropRight" == 0 ] ; then
84 | cropRightPixels=0;
85 | else
86 | cropRightPixels=$(( curWidth / cropRight ))
87 | fi
88 |
89 | if [ "$cropLeft" == 0 ] ; then
90 | cropLeftPixels=0
91 | else
92 | cropLeftPixels=$(( curWidth / cropLeft ))
93 | fi
94 | newWidth=$(( curWidth - cropRightPixels - cropLeftPixels ))
95 |
96 | curHeight=$(identify -format "%h" $theInputFile)
97 | if [ "$cropTop" == 0 ] ; then
98 | cropTopPixels=0
99 | else
100 | cropTopPixels=$(( curHeight / cropTop ))
101 | fi
102 | if [ "$cropBottom" == 0 ] ; then
103 | cropBottomPixels=0
104 | else
105 | cropBottomPixels=$(( curHeight / cropBottom ))
106 | fi
107 | newHeight=$(( curHeight - cropBottomPixels - cropTopPixels ))
108 |
109 | # cropping the image
110 |
111 | convert $theInputFile -crop ${newWidth}x${newHeight}+${cropLeftPixels}+${cropTopPixels} $theOutputFile
112 |
113 | done
114 |
--------------------------------------------------------------------------------
/image-preprocessing/cropper/testimage/5x3-squares.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/image-preprocessing/cropper/testimage/5x3-squares.jpg
--------------------------------------------------------------------------------
/image-preprocessing/cropper/testimage/5x3-squares.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/image-preprocessing/cropper/testimage/5x3-squares.png
--------------------------------------------------------------------------------
/image-preprocessing/cropper/testimage/5x3-squares.svg:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
161 |
--------------------------------------------------------------------------------
/image-preprocessing/masker/README.md:
--------------------------------------------------------------------------------
1 |
2 |
3 | This script tries to guess the characteristics of the object of interest by assuming it is located in the middle of a photo. It then uses selection growing to calculate a mask for each image, cutting out the background. We found that this script works well for some sets of images, but not others.
4 |
5 |
--------------------------------------------------------------------------------
/image-preprocessing/masker/generate_mask.sh:
--------------------------------------------------------------------------------
1 | #!/bin/sh
2 |
3 | for i in IMG*.jpg
4 | do
5 | echo convert $i -channel Blue -separate -background black -combine +channel blue-$i
6 | # convert $i -channel Blue -separate -background black -combine +channel blue-$i
7 | done
8 |
9 | for i in blue-*.jpg
10 | do
11 | echo convert $i -threshold 2% thres-$i
12 | # convert $i -threshold 2% thres-$i
13 | done
14 |
15 | #composite black.png black.png -blend 2 result.png
16 |
17 | for i in thres-blue-*.jpg
18 | do
19 | echo convert $i -blur 0x8 blur-$i
20 | convert $i -blur 0x8 blur-$i
21 | done
22 |
23 | cp black.png out.png
24 |
25 | for i in blur-thres-blue-*.jpg
26 | do
27 | echo composite -compose plus $i out.png out.png
28 | composite -compose plus $i out.png out.png
29 | done
30 |
31 |
32 | TODO
33 |
34 | convert IMG_0034.jpg -fuzz 15000 -fill white -opaque white -black-threshold 90% -blur 0x8 bla.jpg
35 |
36 |
37 |
38 |
39 |
--------------------------------------------------------------------------------
/image-preprocessing/masker/generate_mask2.sh:
--------------------------------------------------------------------------------
1 | #!/bin/sh
2 |
3 | for i in IMG*.jpg
4 | do
5 | echo convert $i -fuzz 15000 -fill white -opaque white -black-threshold 90% -blur 0x8 bla-$i
6 | # convert $i -fuzz 15000 -fill white -opaque white -black-threshold 90% -blur 0x8 bla-$i
7 | done
8 |
9 | cp black.png out.png
10 |
11 | for i in bla-*.jpg
12 | do
13 | echo composite -compose plus $i out.png out.png
14 | composite -compose plus $i out.png out.png
15 | echo convert out.png -fuzz 15000 -fill white -opaque white -black-threshold 25% out.png
16 | convert out.png -fuzz 15000 -fill white -opaque white -black-threshold 25% out.png
17 | done
18 |
19 |
20 |
21 |
22 |
23 |
--------------------------------------------------------------------------------
/image-preprocessing/resizer/README.md:
--------------------------------------------------------------------------------
1 |
2 | Usage example:
3 | ```
4 | ./resize-images.sh --in ./testimage --out resized
5 | ```
6 |
7 |
8 | 'resize-images.sh' resizes all ``*.jpg|*.JPG`` (but not ``*.jpeg|*.JPEG``) in the ``--in`` directory, such that the smallest dimension will be 500 pixels long after conversion. The output is written to the ``--out`` directory, which is created if it doesn't already exist.
9 |
10 | 'resize-images.sh' needs imagemagick's ``convert``. Install imagemagick using:
11 |
12 | ```
13 | sudo apt-get install imagemagick
14 | ```
15 |
--------------------------------------------------------------------------------
/image-preprocessing/resizer/resize-images.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 |
4 | exampleUsageStr='./resize-images.sh --in ./testimage --out resized'
5 | if [ "$1" == "--in" ]; then
6 | theInputDir=$2
7 | else
8 | echo
9 | echo 'example usage:'
10 | echo $exampleUsageStr
11 | echo
12 | exit -1
13 | fi
14 |
15 | if [ "$3" == "--out" ]; then
16 | theOutputDir=$4
17 | else
18 | echo
19 | echo 'example usage:'
20 | echo $exampleUsageStr
21 | echo
22 | exit -1
23 | fi
24 |
25 |
26 | mkdir -p $theOutputDir
27 |
28 |
29 | files=$(find $theInputDir -maxdepth 1 -type f -iname '*.jpg')
30 |
31 | for f in $files; do
32 |
33 | echo "Processing file: $f..."
34 | fileName=${f##*/}
35 |
36 | theInputFile=$theInputDir/$fileName
37 | theOutputFile=$theOutputDir/$fileName
38 |
39 | # resizing the image
40 | # (the smallest dimension will be 500 pixels long after conversion)
41 | convert $theInputFile -resize 500x500^ $theOutputFile
42 |
43 | done
44 |
45 |
--------------------------------------------------------------------------------
/image-preprocessing/resizer/testimage/5x3-squares.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/image-preprocessing/resizer/testimage/5x3-squares.jpg
--------------------------------------------------------------------------------
/image-preprocessing/resizer/testimage/5x3-squares.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLeSC/structure-from-motion/ee079fce6d716edaf217a6974fcce20e8b85e91f/image-preprocessing/resizer/testimage/5x3-squares.png
--------------------------------------------------------------------------------
/image-preprocessing/resizer/testimage/5x3-squares.svg:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
166 |
--------------------------------------------------------------------------------
/images-from-video/README.md:
--------------------------------------------------------------------------------
1 | # extract all frames from the video file:
2 | ./frames-extractor.py 00017.MTS ~/tmp/frames/in
3 |
4 | # add the same camera exif data (focal length, camera make, camera model) to each frame
5 | ./add-exif-data.py ~/tmp/frames/in 2.8mm Panasonic HC-X900 1920 1080
6 |
7 | # make a new directory with links to a subset of all frames, for example, each 50th frame:
8 | ./frame-subsetter.py ~/tmp/frames/in ~/tmp/frames/out-49 49
9 |
--------------------------------------------------------------------------------
/images-from-video/add-exif-data.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 |
3 | import os
4 | import sys
5 | from subprocess import call
6 |
7 |
8 | class ExifDataAdder(object):
9 |
10 | def __init__(self,inputDir):
11 |
12 | self.inputDir = inputDir
13 |
14 |
15 | def printAllExifDataForAllJPEG(self):
16 |
17 | for fname in sorted(os.listdir(self.inputDir)):
18 |
19 | if (fname[-4:] in ['.jpg','.JPG'] or fname[-5:] in ['.jpeg','.JPEG']):
20 |
21 | call(["jhead", os.path.join(self.inputDir,fname)])
22 |
23 |
24 |
25 | def updateExifData(self, focalLengthStr,cameraMakeStr,cameraModelStr,exifImageWidth,exifImageHeight):
26 |
27 | for fname in sorted(os.listdir(self.inputDir)):
28 |
29 | if (fname[-4:] in ['.jpg','.JPG'] or fname[-5:] in ['.jpeg','.JPEG']):
30 |
31 | call(["exiftool",
32 | "-FocalLength=" + focalLengthStr,
33 | "-make=" + cameraMakeStr,
34 | "-model=" + cameraModelStr,
35 | "-makernotes=",
36 | "-ExifImageWidth=" + exifImageWidth,
37 | "-ExifImageHeight=" + exifImageHeight,
38 | "-overwrite_original",
39 | os.path.join(self.inputDir,fname)])
40 |
41 |
42 |
43 |
44 |
45 |
46 | if __name__ == "__main__":
47 |
48 | nArgs = len(sys.argv)
49 |
50 |     if nArgs != 7 or (nArgs == 2 and sys.argv[1] in ["-h", "--help"]):
51 | print
52 | print "# Script '" + sys.argv[0] + "' makes system calls to 'exiftool' and 'jhead'."
53 | print "# You can install these packages from Ubuntu's repositories using"
54 | print "# sudo apt-get install exiftool"
55 | print "# sudo apt-get install jhead"
56 | print "#"
57 | print "# Script " + sys.argv[0] + " needs 6 arguments"
58 | print "# arg1: input directory containing the video frames"
59 | print "# arg2: the focal length string"
60 | print "# arg3: the camera make string"
61 | print "# arg4: the camera model string"
62 | print "# arg5: image width in pixels"
63 | print "# arg6: image height in pixels"
64 | print "#"
65 |         print "# " + sys.argv[0] + " updates the exif data of all images in the input directory."
66 | print
67 | sys.exit(1)
68 |
69 |
70 |
71 | theDir = sys.argv[1]
72 | argIsDir = os.path.isdir(theDir)
73 | if not argIsDir:
74 | print "Input argument should be a directory. Aborting."
75 | sys.exit(1)
76 |
77 | try:
78 | focalLengthStr = sys.argv[2]
79 | except IndexError:
80 | print "An error occurred."
81 | sys.exit(1)
82 |
83 |
84 | try:
85 | cameraMakeStr = sys.argv[3]
86 | except IndexError:
87 | print "An error occurred."
88 | sys.exit(1)
89 |
90 |
91 | try:
92 | cameraModelStr = sys.argv[4]
93 | except IndexError:
94 | print "An error occurred."
95 | sys.exit(1)
96 |
97 | try:
98 | exifImageWidth = sys.argv[5]
99 | except IndexError:
100 | print "An error occurred."
101 | sys.exit(1)
102 |
103 | try:
104 | exifImageHeight = sys.argv[6]
105 | except IndexError:
106 | print "An error occurred."
107 | sys.exit(1)
108 |
109 |
110 | # make: Panasonic
111 | # model: HC-X900
112 |
113 | exifDataAdder = ExifDataAdder(theDir)
114 | exifDataAdder.printAllExifDataForAllJPEG();
115 | exifDataAdder.updateExifData(focalLengthStr,cameraMakeStr,cameraModelStr,exifImageWidth,exifImageHeight)
116 | exifDataAdder.printAllExifDataForAllJPEG();
117 |
--------------------------------------------------------------------------------
/images-from-video/frame-subsetter.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 |
3 | import os
4 | import sys
5 |
6 |
7 | class FrameSubSetter():
8 |
9 | def __init__(self,inputDir,outputDir,nFramesSkip):
10 |
11 | self.inputDir = inputDir
12 | self.outputDir = outputDir
13 |
14 | if not os.path.exists(outputDir):
15 | os.makedirs(outputDir)
16 | else:
17 | print "Output directory exists. Aborting."
18 | sys.exit(1)
19 |
20 | iFile = 0;
21 | for file in sorted(os.listdir(inputDir)):
22 |
23 |             if file.lower().endswith(".jpg") or file.lower().endswith(".jpeg"):
24 |
25 | if iFile % (nFramesSkip + 1) == 0:
26 |
27 | file1 = os.path.join(self.inputDir,file)
28 | file2 = os.path.join(self.outputDir)
29 | src = os.path.relpath(file1,file2)
30 | linkName = os.path.join(outputDir,file)
31 | os.symlink(src,linkName)
32 |
33 | iFile += 1
34 |
35 |
36 |
37 | if __name__ == "__main__":
38 |
39 | if len(sys.argv) != 4:
40 | print
41 | print "# Script " + sys.argv[0] + " needs 3 arguments"
42 | print "# arg1: input data directory that contains the frames"
43 |         print "# arg2: output directory that will contain relative links to the frames in the input directory"
44 | print "# arg3: number of frames to skip between included frames"
45 | print
46 | sys.exit(1)
47 |
48 |
49 | inputDir = sys.argv[1]
50 | if not os.path.isdir(inputDir):
51 | print "Input argument should be a directory. Aborting."
52 | sys.exit(1)
53 |
54 | outputDir = sys.argv[2]
55 |
56 | try:
57 | nFramesSkip = int(sys.argv[3])
58 | except ValueError:
59 | print "Third input argument is not an integer"
60 | sys.exit(1)
61 | except:
62 | print "an error occurred"
63 | sys.exit(1)
64 |
65 | if nFramesSkip < 0:
66 |         print "Third input argument should be a non-negative integer"
67 | sys.exit(1)
68 |
69 |
70 | frameSubSetter = FrameSubSetter(inputDir,outputDir,nFramesSkip)
71 |
72 |
73 |
--------------------------------------------------------------------------------
/images-from-video/frames-extractor.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 |
3 | import os
4 | import sys
5 | from subprocess import call
6 |
7 |
8 | class FramesExtractor():
9 |
10 | def __init__(self,videoFileName,outputDir):
11 |
12 | self.videoFileName = videoFileName
13 | self.outputDir = outputDir
14 |
15 |
16 | def extractAllFrames(self):
17 |
18 | if not os.path.exists(self.outputDir):
19 | os.makedirs(self.outputDir)
20 |
21 | #call(["avconv", "-i", self.videoFileName,"-deinterlace",os.path.join(self.outputDir,"frame-%8d.jpg")])
22 | call(["avconv", "-i", self.videoFileName,"-filter:v","yadif",os.path.join(self.outputDir,"frame-%8d.jpg")])
23 |
24 |
25 |
26 | if __name__ == "__main__":
27 |
28 | nArgs = len(sys.argv)
29 |
30 |     if nArgs != 3 or (nArgs == 2 and sys.argv[1] in ["-h", "--help"]):
31 | print
32 | print "# Script " + sys.argv[0] + " needs 2 arguments:"
33 | print "# arg1: file name of the video file"
34 | print "# arg2: output directory name to write the frames to"
35 | print "#"
36 |         print "# " + sys.argv[0] + " creates the output directory if it doesn't exist yet."
37 | print "# " + sys.argv[0] + " uses system calls to 'avconv' to split the video into frames"
38 | print "# You can install 'avconv' from the ubuntu repositories with:"
39 | print "# sudo apt-get install libav-tools"
40 |
41 | print
42 | sys.exit(1)
43 |
44 |
45 |
46 |
47 | try:
48 | videoFileName = sys.argv[1]
49 | except IndexError:
50 | print "An error occurred. Aborting."
51 | sys.exit(1)
52 |
53 | outputDir = sys.argv[2]
54 |
55 | framesExtractor = FramesExtractor(videoFileName,outputDir)
56 | framesExtractor.extractAllFrames()
57 |
--------------------------------------------------------------------------------
/run-sfm.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/python
2 |
3 | # ###
4 | # This RunSfM.py script is based on the bundler.py script distributed
5 | # with bundler. The original licence can be found below:
6 |
7 | # #### BEGIN LICENSE BLOCK ####
8 | #
9 | # bundler.py - Python convenience module for running Bundler.
10 | # Copyright (C) 2013 Isaac Lenton (aka ilent2)
11 | #
12 | # This program is free software; you can redistribute it and/or
13 | # modify it under the terms of the GNU General Public License
14 | # as published by the Free Software Foundation; either version 2
15 | # of the License, or (at your option) any later version.
16 | #
17 | # This program is distributed in the hope that it will be useful,
18 | # but WITHOUT ANY WARRANTY; without even the implied warranty of
19 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
20 | # GNU General Public License for more details.
21 | #
22 | # You should have received a copy of the GNU General Public License
23 | # along with this program; if not, write to the Free Software Foundation,
24 | # Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
25 | #
26 | # #### END LICENSE BLOCK ####
27 |
28 | import argparse
29 | #import gzip
30 | import os
31 | import sys
32 | #import Image
33 | import glob
34 | import subprocess
35 | import tempfile
36 | import fileinput
37 | import shutil
38 | import time
39 | import multiprocessing
40 |
41 | from multiprocessing.pool import ThreadPool
42 | from PIL import Image, ExifTags
43 |
44 | VERSION = "RunSfM 1.0"
45 | DESCRIPTION = """\
46 | Python convenience module to process a series of images and reconstruct
47 | the scene using Bundler followed by CMVS/PMVS2.
48 |
49 | Bundler is a structure-from-motion system for unordered image
50 | collections (for instance, images from the Internet). Bundler takes a
51 | set of images, image features, and image matches as input, and
52 | produces a 3D reconstruction of the camera and (sparse) scene geometry
53 | as output.
54 |
55 | CMVS/PMVS is multi-view stereo software that takes a set of images and
56 | camera parameters (generated by Bundler), and then reconstructs 3D structure
57 | of an object or a scene visible in the images. The software outputs a dense
58 | point cloud, that is, a set of oriented points where both the 3D coordinate
59 | and the surface normal are estimated for each point."""
60 |
61 | # This module replaces the existing RunBundler.sh script with a more
62 | # cross platform implementation. Additional elements replaced:
63 | # - RunBundler.sh 2008-2013 Noah Snavely
64 | # - ToSift.sh
65 | # - extract_focal.pl 2005-2009 Noah Snavely
66 | # - jhead
67 |
68 | MOD_PATH = os.path.dirname(__file__)
69 | BUNDLER_BIN_PATH = os.path.join(MOD_PATH, "./bundler_sfm/bin")
70 | BUNDLER_LIB_PATH = os.path.join(MOD_PATH, "./bundler_sfm/lib")
71 | CMVS_PMVS_BIN_PATH = os.path.join(MOD_PATH, "./cmvs-pmvs/program/build/main")
72 | BIN_SIFT = None
73 | BIN_BUNDLER = None
74 | BIN_MATCHKEYS = None
75 | BIN_MATCHKEYS_PART = None
76 | BIN_MATCHKEYS_FULL = None
77 | BIN_BUNDLE2PMVS = None
78 | BIN_RADIAL_UNDISTORT = None
79 | BIN_CMVS = None
80 | BIN_GEN_OPTION = None
81 | BIN_PMVS2 = None
82 |
83 | CCD_WIDTHS = {
84 | "Asahi Optical Co.,Ltd. PENTAX Optio330RS" : 7.176, # 1/1.8"
85 | "Canon Canon DIGITAL IXUS 400" : 7.176, # 1/1.8"
86 | "Canon Canon DIGITAL IXUS 40" : 5.76, # 1/2.5"
87 | "Canon Canon DIGITAL IXUS 430" : 7.176, # 1/1.8"
88 | "Canon Canon DIGITAL IXUS 500" : 7.176, # 1/1.8"
89 | "Canon Canon DIGITAL IXUS 50" : 5.76, # 1/2.5"
90 | "Canon Canon DIGITAL IXUS 55" : 5.76, # 1/2.5"
91 | "Canon Canon DIGITAL IXUS 60" : 5.76, # 1/2.5"
92 | "Canon Canon DIGITAL IXUS 65" : 5.76, # 1/2.5"
93 | "Canon Canon DIGITAL IXUS 700" : 7.176, # 1/1.8"
94 | "Canon Canon DIGITAL IXUS 750" : 7.176, # 1/1.8"
95 | "Canon Canon DIGITAL IXUS 800 IS" : 5.76, # 1/2.5"
96 | "Canon Canon DIGITAL IXUS II" : 5.27, # 1/2.7"
97 | "Canon Canon IXUS 240 HS" : 6.16, # 1/2.3"
98 | "Canon Canon EOS 10D" : 22.7,
99 | "Canon Canon EOS-1D Mark II" : 28.7,
100 | "Canon Canon EOS-1Ds Mark II" : 35.95,
101 | "Canon Canon EOS 20D" : 22.5,
102 | "Canon Canon EOS 20D" : 22.5,
103 | "Canon Canon EOS 300D DIGITAL" : 22.66,
104 | "Canon Canon EOS 30D" : 22.5,
105 | "Canon Canon EOS 350D DIGITAL" : 22.2,
106 | "Canon Canon EOS 400D DIGITAL" : 22.2,
107 | "Canon Canon EOS 40D" : 22.2,
108 | "Canon Canon EOS 5D" : 35.8,
109 | "Canon Canon EOS 5D Mark II" : 36.0,
110 | "Canon Canon EOS 5D Mark III" : 36.0,
111 | "Canon Canon EOS DIGITAL REBEL" : 22.66,
112 | "Canon Canon EOS DIGITAL REBEL XT" : 22.2,
113 | "Canon Canon EOS DIGITAL REBEL XTi" : 22.2,
114 | "Canon Canon EOS Kiss Digital" : 22.66,
115 | "Canon Canon EOS 1100D" : 22.2,
116 | "Canon Canon IXY DIGITAL 600" : 7.176, # 1/1.8"
117 | "Canon Canon PowerShot A20" : 7.176, # 1/1.8"
118 | "Canon Canon PowerShot A400" : 4.54, # 1/3.2"
119 | "Canon Canon PowerShot A40" : 5.27, # 1/2.7"
120 | "Canon Canon PowerShot A510" : 5.76, # 1/2.5"
121 | "Canon Canon PowerShot A520" : 5.76, # 1/2.5"
122 | "Canon Canon PowerShot A530" : 5.76, # 1/2.5"
123 | "Canon Canon PowerShot A60" : 5.27, # 1/2.7"
124 | "Canon Canon PowerShot A620" : 7.176, # 1/1.8"
125 | "Canon Canon PowerShot A630" : 7.176, # 1/1.8"
126 | "Canon Canon PowerShot A640" : 7.176, # 1/1.8"
127 | "Canon Canon PowerShot A700" : 5.76, # 1/2.5"
128 | "Canon Canon PowerShot A70" : 5.27, # 1/2.7"
129 | "Canon Canon PowerShot A710 IS" : 5.76, # 1/2.5"
130 | "Canon Canon PowerShot A75" : 5.27, # 1/2.7"
131 | "Canon Canon PowerShot A80" : 7.176, # 1/1.8"
132 | "Canon Canon PowerShot A85" : 5.27, # 1/2.7"
133 | "Canon Canon PowerShot A95" : 7.176, # 1/1.8"
134 | "Canon Canon PowerShot G1" : 7.176, # 1/1.8"
135 | "Canon Canon PowerShot G2" : 7.176, # 1/1.8"
136 | "Canon Canon PowerShot G3" : 7.176, # 1/1.8"
137 | "Canon Canon PowerShot G5" : 7.176, # 1/1.8"
138 | "Canon Canon PowerShot G6" : 7.176, # 1/1.8"
139 | "Canon Canon PowerShot G7" : 7.176, # 1/1.8"
140 | "Canon Canon PowerShot G9" : 7.600, # 1/1.7"
141 | "Canon Canon PowerShot Pro1" : 8.8, # 2/3"
142 | "Canon Canon PowerShot S110" : 5.27, # 1/2.7"
143 | "Canon Canon PowerShot S1 IS" : 5.27, # 1/2.7"
144 | "Canon Canon PowerShot S200" : 5.27, # 1/2.7"
145 | "Canon Canon PowerShot S2 IS" : 5.76, # 1/2.5"
146 | "Canon Canon PowerShot S30" : 7.176, # 1/1.8"
147 | "Canon Canon PowerShot S3 IS" : 5.76, # 1/2.5"
148 | "Canon Canon PowerShot S400" : 7.176, # 1/1.8"
149 | "Canon Canon PowerShot S40" : 7.176, # 1/1.8"
150 | "Canon Canon PowerShot S410" : 7.176, # 1/1.8"
151 | "Canon Canon PowerShot S45" : 7.176, # 1/1.8"
152 | "Canon Canon PowerShot S500" : 7.176, # 1/1.8"
153 | "Canon Canon PowerShot S50" : 7.176, # 1/1.8"
154 | "Canon Canon PowerShot S60" : 7.176, # 1/1.8"
155 | "Canon Canon PowerShot S70" : 7.176, # 1/1.8"
156 | "Canon Canon PowerShot S80" : 7.176, # 1/1.8"
157 | "Canon Canon PowerShot SD1000" : 5.75, # 1/2.5"
158 | "Canon Canon PowerShot SD100" : 5.27, # 1/2.7"
159 | "Canon Canon PowerShot SD10" : 5.75, # 1/2.5"
160 | "Canon Canon PowerShot SD110" : 5.27, # 1/2.7"
161 | "Canon Canon PowerShot SD200" : 5.76, # 1/2.5"
162 | "Canon Canon PowerShot SD300" : 5.76, # 1/2.5"
163 | "Canon Canon PowerShot SD400" : 5.76, # 1/2.5"
164 | "Canon Canon PowerShot SD450" : 5.76, # 1/2.5"
165 | "Canon Canon PowerShot SD500" : 7.176, # 1/1.8"
166 | "Canon Canon PowerShot SD550" : 7.176, # 1/1.8"
167 | "Canon Canon PowerShot SD600" : 5.76, # 1/2.5"
168 | "Canon Canon PowerShot SD630" : 5.76, # 1/2.5"
169 | "Canon Canon PowerShot SD700 IS" : 5.76, # 1/2.5"
170 | "Canon Canon PowerShot SD750" : 5.75, # 1/2.5"
171 | "Canon Canon PowerShot SD800 IS" : 5.76, # 1/2.5"
172 | "Canon Canon PowerShot SX500 IS" : 6.17, # 1/2.3"
173 | "Canon EOS 300D DIGITAL" : 22.66,
174 | "Canon EOS 1100D" : 22.2,
175 | "Canon EOS DIGITAL REBEL" : 22.66,
176 | "Canon PowerShot A510" : 5.76, # 1/2.5" ???
177 | "Canon PowerShot S30" : 7.176, # 1/1.8"
178 | "CASIO COMPUTER CO.,LTD. EX-S500" : 5.76, # 1/2.5"
179 | "CASIO COMPUTER CO.,LTD. EX-Z1000" : 7.716, # 1/1.8"
180 | "CASIO COMPUTER CO.,LTD EX-Z30" : 5.76, # 1/2.5 "
181 | "CASIO COMPUTER CO.,LTD. EX-Z600" : 5.76, # 1/2.5"
182 | "CASIO COMPUTER CO.,LTD. EX-Z60" : 7.176, # 1/1.8"
183 | "CASIO COMPUTER CO.,LTD EX-Z750" : 7.176, # 1/1.8"
184 | "CASIO COMPUTER CO.,LTD. EX-Z850" : 7.176,
185 | "EASTMAN KODAK COMPANY KODAK CX7330 ZOOM DIGITAL CAMERA" : 5.27, # 1/2.7"
186 | "EASTMAN KODAK COMPANY KODAK CX7530 ZOOM DIGITAL CAMERA" : 5.76, # 1/2.5"
187 | "EASTMAN KODAK COMPANY KODAK DX3900 ZOOM DIGITAL CAMERA" : 7.176, # 1/1.8"
188 | "EASTMAN KODAK COMPANY KODAK DX4900 ZOOM DIGITAL CAMERA" : 7.176, # 1/1.8"
189 | "EASTMAN KODAK COMPANY KODAK DX6340 ZOOM DIGITAL CAMERA" : 5.27, # 1/2.7"
190 | "EASTMAN KODAK COMPANY KODAK DX6490 ZOOM DIGITAL CAMERA" : 5.76, # 1/2.5"
191 | "EASTMAN KODAK COMPANY KODAK DX7630 ZOOM DIGITAL CAMERA" : 7.176, # 1/1.8"
192 | "EASTMAN KODAK COMPANY KODAK Z650 ZOOM DIGITAL CAMERA" : 5.76, # 1/2.5"
193 | "EASTMAN KODAK COMPANY KODAK Z700 ZOOM DIGITAL CAMERA" : 5.76, # 1/2.5"
194 | "EASTMAN KODAK COMPANY KODAK Z740 ZOOM DIGITAL CAMERA" : 5.76, # 1/2.5"
195 | "FUJIFILM FinePix2600Zoom" : 5.27, # 1/2.7"
196 | "FUJIFILM FinePix40i" : 7.600, # 1/1.7"
197 | "FUJIFILM FinePix A310" : 5.27, # 1/2.7"
198 | "FUJIFILM FinePix A330" : 5.27, # 1/2.7"
199 | "FUJIFILM FinePix A600" : 7.600, # 1/1.7"
200 | "FUJIFILM FinePix E500" : 5.76, # 1/2.5"
201 | "FUJIFILM FinePix E510" : 5.76, # 1/2.5"
202 | "FUJIFILM FinePix E550" : 7.600, # 1/1.7"
203 | "FUJIFILM FinePix E900" : 7.78, # 1/1.6"
204 | "FUJIFILM FinePix F10" : 7.600, # 1/1.7"
205 | "FUJIFILM FinePix F30" : 7.600, # 1/1.7"
206 | "FUJIFILM FinePix F450" : 5.76, # 1/2.5"
207 | "FUJIFILM FinePix F601 ZOOM" : 7.600, # 1/1.7"
208 | "FUJIFILM FinePix S3Pro" : 23.0,
209 | "FUJIFILM FinePix S5000" : 5.27, # 1/2.7"
210 | "FUJIFILM FinePix S5200" : 5.76, # 1/2.5"
211 | "FUJIFILM FinePix S5500" : 5.27, # 1/2.7"
212 | "FUJIFILM FinePix S6500fd" : 7.600, # 1/1.7"
213 | "FUJIFILM FinePix S7000" : 7.600, # 1/1.7"
214 | "FUJIFILM FinePix Z2" : 5.76, # 1/2.5"
215 | "Hewlett-Packard hp 635 Digital Camera" : 4.54, # 1/3.2"
216 | "Hewlett-Packard hp PhotoSmart 43x series" : 5.27, # 1/2.7"
217 | "Hewlett-Packard HP PhotoSmart 618 (V1.1)" : 5.27, # 1/2.7"
218 | "Hewlett-Packard HP PhotoSmart C945 (V01.61)" : 7.176, # 1/1.8"
219 | "Hewlett-Packard HP PhotoSmart R707 (V01.00)" : 7.176, # 1/1.8"
220 | "KONICA MILOLTA DYNAX 5D" : 23.5,
221 | "Konica Minolta Camera, Inc. DiMAGE A2" : 8.80, # 2/3"
222 | "KONICA MINOLTA CAMERA, Inc. DiMAGE G400" : 5.76, # 1/2.5"
223 | "Konica Minolta Camera, Inc. DiMAGE Z2" : 5.76, # 1/2.5"
224 | "KONICA MINOLTA DiMAGE A200" : 8.80, # 2/3"
225 | "KONICA MINOLTA DiMAGE X1" : 7.176, # 1/1.8"
226 | "KONICA MINOLTA DYNAX 5D" : 23.5,
227 | "Minolta Co., Ltd. DiMAGE F100" : 7.176, # 1/2.7"
228 | "Minolta Co., Ltd. DiMAGE Xi" : 5.27, # 1/2.7"
229 | "Minolta Co., Ltd. DiMAGE Xt" : 5.27, # 1/2.7"
230 | "Minolta Co., Ltd. DiMAGE Z1" : 5.27, # 1/2.7"
231 | "NIKON COOLPIX L3" : 5.76, # 1/2.5"
232 | "NIKON COOLPIX P2" : 7.176, # 1/1.8"
233 | "NIKON COOLPIX S4" : 5.76, # 1/2.5"
234 | "NIKON COOLPIX S7c" : 5.76, # 1/2.5"
235 | "NIKON CORPORATION NIKON D100" : 23.7,
236 | "NIKON CORPORATION NIKON D1" : 23.7,
237 | "NIKON CORPORATION NIKON D1H" : 23.7,
238 | "NIKON CORPORATION NIKON D200" : 23.6,
239 | "NIKON CORPORATION NIKON D2H" : 23.3,
240 | "NIKON CORPORATION NIKON D2X" : 23.7,
241 | "NIKON CORPORATION NIKON D40" : 23.7,
242 | "NIKON CORPORATION NIKON D50" : 23.7,
243 | "NIKON CORPORATION NIKON D60" : 23.6,
244 | "NIKON CORPORATION NIKON D70" : 23.7,
245 | "NIKON CORPORATION NIKON D70s" : 23.7,
246 | "NIKON CORPORATION NIKON D80" : 23.6,
247 | "NIKON CORPORATION NIKON D90" : 23.6,
248 | "NIKON CORPORATION NIKON D3300" : 23.5,
249 | "NIKON E2500" : 5.27, # 1/2.7"
250 | "NIKON E2500" : 5.27, # 1/2.7"
251 | "NIKON E3100" : 5.27, # 1/2.7"
252 | "NIKON E3200" : 5.27,
253 | "NIKON E3700" : 5.27, # 1/2.7"
254 | "NIKON E4200" : 7.176, # 1/1.8"
255 | "NIKON E4300" : 7.18,
256 | "NIKON E4500" : 7.176, # 1/1.8"
257 | "NIKON E4600" : 5.76, # 1/2.5"
258 | "NIKON E5000" : 8.80, # 2/3"
259 | "NIKON E5200" : 7.176, # 1/1.8"
260 | "NIKON E5400" : 7.176, # 1/1.8"
261 | "NIKON E5600" : 5.76, # 1/2.5"
262 | "NIKON E5700" : 8.80, # 2/3"
263 | "NIKON E5900" : 7.176, # 1/1.8"
264 | "NIKON E7600" : 7.176, # 1/1.8"
265 | "NIKON E775" : 5.27, # 1/2.7"
266 | "NIKON E7900" : 7.176, # 1/1.8"
267 | "NIKON E7900" : 7.176, # 1/1.8"
268 | "NIKON E8800" : 8.80, # 2/3"
269 | "NIKON E990" : 7.176, # 1/1.8"
270 | "NIKON E995" : 7.176, # 1/1.8"
271 | "NIKON S1" : 5.76, # 1/2.5"
272 | "Nokia N80" : 5.27, # 1/2.7"
273 | "Nokia N80" : 5.27, # 1/2.7"
274 | "Nokia N93" : 4.536, # 1/3.1"
275 | "Nokia N95" : 5.7, # 1/2.7"
276 | "OLYMPUS CORPORATION C-5000Z" : 7.176, # 1/1.8"
277 | "OLYMPUS CORPORATION C5060WZ" : 7.176, # 1/1.8"
278 | "OLYMPUS CORPORATION C750UZ" : 5.27, # 1/2.7"
279 | "OLYMPUS CORPORATION C765UZ" : 5.76, # 1//2.5"
280 | "OLYMPUS CORPORATION C8080WZ" : 8.80, # 2/3"
281 | "OLYMPUS CORPORATION X250,D560Z,C350Z" : 5.76, # 1/2.5"
282 | "OLYMPUS CORPORATION X-3,C-60Z" : 7.176, # 1.8"
283 | "OLYMPUS CORPORATION X400,D580Z,C460Z" : 5.27, # 1/2.7"
284 | "OLYMPUS IMAGING CORP. E-500" : 17.3, # 4/3?
285 | "OLYMPUS IMAGING CORP. FE115,X715" : 5.76, # 1/2.5"
286 | "OLYMPUS IMAGING CORP. SP310" : 7.176, # 1/1.8"
287 | "OLYMPUS IMAGING CORP. SP510UZ" : 5.75, # 1/2.5"
288 | "OLYMPUS IMAGING CORP. SP550UZ" : 5.76, # 1/2.5"
289 | "OLYMPUS IMAGING CORP. uD600,S600" : 5.75, # 1/2.5"
290 | "OLYMPUS_IMAGING_CORP. X450,D535Z,C370Z" : 5.27, # 1/2.7"
291 | "OLYMPUS IMAGING CORP. X550,D545Z,C480Z" : 5.76, # 1/2.5"
292 | "OLYMPUS OPTICAL CO.,LTD C2040Z" : 6.40, # 1/2"
293 | "OLYMPUS OPTICAL CO.,LTD C211Z" : 5.27, # 1/2.7"
294 | "OLYMPUS OPTICAL CO.,LTD C2Z,D520Z,C220Z" : 4.54, # 1/3.2"
295 | "OLYMPUS OPTICAL CO.,LTD C3000Z" : 7.176, # 1/1.8"
296 | "OLYMPUS OPTICAL CO.,LTD C300Z,D550Z" : 5.4,
297 | "OLYMPUS OPTICAL CO.,LTD C4100Z,C4000Z" : 7.176, # 1/1.8"
298 | "OLYMPUS OPTICAL CO.,LTD C750UZ" : 5.27, # 1/2.7"
299 | "OLYMPUS OPTICAL CO.,LTD X-2,C-50Z" : 7.176, # 1/1.8"
300 | "OLYMPUS SP550UZ" : 5.76, # 1/2.5"
301 | "OLYMPUS X100,D540Z,C310Z" : 5.27, # 1/2.7"
302 | "Panasonic DMC-FX01" : 5.76, # 1/2.5"
303 | "Panasonic DMC-FX07" : 5.75, # 1/2.5"
304 | "Panasonic DMC-FX9" : 5.76, # 1/2.5"
305 | "Panasonic DMC-FZ20" : 5.760, # 1/2.5"
306 | "Panasonic DMC-FZ2" : 4.54, # 1/3.2"
307 | "Panasonic DMC-FZ30" : 7.176, # 1/1.8"
308 | "Panasonic DMC-FZ50" : 7.176, # 1/1.8"
309 | "Panasonic DMC-FZ5" : 5.760, # 1/2.5"
310 | "Panasonic DMC-FZ7" : 5.76, # 1/2.5"
311 | "Panasonic DMC-LC1" : 8.80, # 2/3"
312 | "Panasonic DMC-LC33" : 5.760, # 1/2.5"
313 | "Panasonic DMC-LX1" : 8.50, # 1/6.5"
314 | "Panasonic DMC-LZ2" : 5.76, # 1/2.5"
315 | "Panasonic DMC-TZ1" : 5.75, # 1/2.5"
316 | "Panasonic DMC-TZ3" : 5.68, # 1/2.35"
317 | "Panasonic DMC-TZ10" : 6.23,
318 | "Panasonic HC-X900" : 3.20,
319 | "PENTAX Corporation PENTAX *ist DL" : 23.5,
320 | "PENTAX Corporation PENTAX *ist DS2" : 23.5,
321 | "PENTAX Corporation PENTAX *ist DS" : 23.5,
322 | "PENTAX Corporation PENTAX K100D" : 23.5,
323 | "PENTAX Corporation PENTAX Optio 450" : 7.176, # 1/1.8"
324 | "PENTAX Corporation PENTAX Optio 550" : 7.176, # 1/1.8"
325 | "PENTAX Corporation PENTAX Optio E10" : 5.76, # 1/2.5"
326 | "PENTAX Corporation PENTAX Optio S40" : 5.76, # 1/2.5"
327 | "PENTAX Corporation PENTAX Optio S4" : 5.76, # 1/2.5"
328 | "PENTAX Corporation PENTAX Optio S50" : 5.76, # 1/2.5"
329 | "PENTAX Corporation PENTAX Optio S5i" : 5.76, # 1/2.5"
330 | "PENTAX Corporation PENTAX Optio S5z" : 5.76, # 1/2.5"
331 | "PENTAX Corporation PENTAX Optio SV" : 5.76, # 1/2.5"
332 | "PENTAX Corporation PENTAX Optio WP" : 5.75, # 1/2.5"
333 | "PENTAX Corporation PENTAX K10D" : 23.5,
334 | "RICOH CaplioG3 modelM" : 5.27, # 1/2.7"
335 | "RICOH Caplio GX" : 7.176, # 1/1.8"
336 | "RICOH Caplio R30" : 5.75, # 1/2.5"
337 | "Samsung Digimax 301" : 5.27, # 1/2.7"
338 | "Samsung Techwin " : 5.76, # 1/2.5"
339 | "SAMSUNG TECHWIN Pro 815" : 8.80, # 2/3"
340 | "SONY DSC-F828" : 8.80, # 2/3"
341 | "SONY DSC-N12" : 7.176, # 1/1.8"
342 | "SONY DSC-P100" : 7.176, # 1/1.8"
343 | "SONY DSC-P10" : 7.176, # 1/1.8"
344 | "SONY DSC-P12" : 7.176, # 1/1.8"
345 | "SONY DSC-P150" : 7.176, # 1/1.8"
346 | "SONY DSC-P200" : 7.176, # 1/1.8");
347 | "SONY DSC-P52" : 5.27, # 1/2.7"
348 | "SONY DSC-P72" : 5.27, # 1/2.7"
349 | "SONY DSC-P73" : 5.27,
350 | "SONY DSC-P8" : 5.27, # 1/2.7"
351 | "SONY DSC-R1" : 21.5,
352 | "SONY DSC-S40" : 5.27, # 1/2.7"
353 | "SONY DSC-S600" : 5.760, # 1/2.5"
354 | "SONY DSC-T9" : 7.18,
355 | "SONY DSC-V1" : 7.176, # 1/1.8"
356 | "SONY DSC-W1" : 7.176, # 1/1.8"
357 | "SONY DSC-W30" : 5.760, # 1/2.5"
358 | "SONY DSC-W50" : 5.75, # 1/2.5"
359 | "SONY DSC-W5" : 7.176, # 1/1.8"
360 | "SONY DSC-W7" : 7.176, # 1/1.8"
361 | "SONY DSC-W80" : 5.75, # 1/2.5"
362 | }
363 |
364 | def get_images():
365 | """Searches the present directory for JPEG images."""
366 |
367 | images = glob.glob("./*.[jJ][pP][gG]")
368 | if len(images) == 0:
369 | error_str = ("Error: No images supplied! "
370 | "No JPEG files found in directory!")
371 | raise Exception(error_str)
372 |
373 | return sorted(images)
374 |
375 | def extract_focal_length(images=[], scale=1.0):
376 | """Extracts (pixel) focal length from images where available.
377 | The functions returns a dictionary of image, focal length pairs.
378 | If no focal length is extracted for an image, the second pair is None.
379 | """
380 | if len(images) == 0:
381 |         error_str = ("Error: No images supplied to "
382 |                      "extract_focal_length()!")
383 | raise Exception(error_str)
384 |
385 | ret = []
386 |
387 | for image in images:
388 | print "[Extracting EXIF tags from image {0}]".format(image)
389 |
390 | tags = {}
391 | with open(image, 'rb') as fp:
392 | img = Image.open(fp)
393 | if hasattr(img, '_getexif'):
394 | exifinfo = img._getexif()
395 | if exifinfo is not None:
396 | for tag, value in exifinfo.items():
397 | tags[ExifTags.TAGS.get(tag, tag)] = value
398 |
399 | # Extract Focal Length
400 | focalN, focalD = tags.get('FocalLength', (0, 1))
401 | focal_length = float(focalN)/float(focalD)
402 |
403 | # Extract Resolution from the exif
404 | img_width = tags.get('ExifImageWidth', 0)
405 | img_height = tags.get('ExifImageHeight', 0)
406 |
407 | # Also extract the actual resolution of the image
408 | real_img_width = img.size[0]
409 | real_img_height = img.size[1]
410 |
411 | # Check if we are confused about the resolution
412 | if not img_width == real_img_width:
413 | print "[WARNING: EXIF resolution (%d x %d) does not match image resolution (%d x %d) for image %s (using image resolution instead)]" % (img_width,img_height,real_img_width,real_img_height, image)
414 | img_width = real_img_width
415 | img_height = real_img_height
416 |
417 | if img_width < img_height:
418 | img_width,img_height = img_height,img_width
419 |
420 | # Extract CCD Width (Prefer Lookup Table)
421 | ccd_width = 1.0
422 | make_model = tags.get('Make', '') + ' ' + tags.get('Model', '')
423 | if CCD_WIDTHS.has_key(make_model.strip()):
424 | ccd_width = CCD_WIDTHS[make_model.strip()]
425 | else:
426 | fplaneN, fplaneD = tags.get('FocalPlaneXResolution', (0, 1))
427 | if fplaneN != 0:
428 | ccd_width = 25.4*float(img_width)*float(fplaneD)/float(fplaneN)
429 | print " [Using CCD width from EXIF tags]"
430 | else:
431 | ccd_width = 0
432 |
433 | print " [EXIF focal length = {0}mm]".format(focal_length)
434 | print " [EXIF CCD width = {0}mm]".format(ccd_width)
435 | print " [EXIF resolution = {0} x {1}]".format(img_width, img_height)
436 | if ccd_width == 0:
437 |             print "ERROR: No CCD width available for camera {0}".format(make_model)
438 | exit(1)
439 |
440 | if (img_width==0 or img_height==0 or focalN==0 or ccd_width==0):
441 | print "ERROR: Could not determine pixel focal length for image", image
442 | exit(1)
443 |
444 | # Compute Focal Length in Pixels
445 | result = img_width * (focal_length / ccd_width) * scale
446 |
447 | ret.append((image, result))
448 | print " [Focal length (pixels) = {0}]".format(result)
449 |
450 | return ret
451 |
452 | def sift_image(image):
453 | """Extracts SIFT features from a single image. See sift_images."""
454 |
455 | pgm_filename = image.rsplit('.', 1)[0] + ".pgm"
456 | key_filename = image.rsplit('.', 1)[0] + ".key"
457 |
458 | # Convert image to PGM format (grayscale)
459 | with open(image, 'rb') as fp_img:
460 | image = Image.open(fp_img)
461 | image.convert('L').save(pgm_filename)
462 |
463 | # Extract SIFT data
464 | with open(pgm_filename, 'rb') as fp_in:
465 | with open(key_filename, 'wb') as fp_out:
466 | subprocess.call(BIN_SIFT, stdin=fp_in, stdout=fp_out)
467 |
468 | # Remove pgm file
469 | os.remove(pgm_filename)
470 |
471 | # GZIP compress key file (and remove)
472 | # with open(key_filename, 'rb') as fp_in:
473 | # with gzip.open(key_filename + ".gz", 'wb') as fp_out:
474 | # fp_out.writelines(fp_in)
475 | # os.remove(key_filename)
476 |
477 | return key_filename
478 |
479 | def sift_images(images):
480 | """Extracts SIFT features from images in 'images'.
481 |
482 | 'images' should be a list of file names. The function creates a
483 | SIFT compressed key file for each image in 'images' with a '.key.gz'
484 | extension. A list of the uncompressed key file names is returned.
485 |
486 | """
487 | pool = ThreadPool()
488 | return pool.map(sift_image, images)
489 |
490 | def match_image(image):
491 |     """Runs KeyMatchPart for one image; 'image' is an (index, keys-list-file) tuple."""
492 | # Add lib folder to LD_LIBRARY_PATH
493 | env = dict(os.environ)
494 | if env.has_key('LD_LIBRARY_PATH'):
495 | env['LD_LIBRARY_PATH'] = env['LD_LIBRARY_PATH'] + ':' + BUNDLER_LIB_PATH
496 | else:
497 | env['LD_LIBRARY_PATH'] = BUNDLER_LIB_PATH
498 |
499 | matches_file = 'match.' + str(image[0]) + '.txt'
500 |
501 | subprocess.call([BIN_MATCHKEYS_PART, image[1], str(image[0]), matches_file], env=env)
502 |
503 | return matches_file
504 |
505 |
506 |
507 | def par_match_images(key_file, image_count, matches_file):
508 |     """Executes KeyMatchPart to match key points in each image."""
509 |
510 | images = []
511 |
512 | for i in range(image_count-1, 0, -1):
513 | images.append((i, key_file))
514 |
515 | pool = ThreadPool()
516 | match_filenames = pool.map(match_image, images)
517 |
518 | with open(matches_file, 'w') as fout:
519 | for line in fileinput.input(match_filenames):
520 | fout.write(line)
521 |
522 |
523 | def match_images(key_files, matches_file):
524 |
525 | keys_file = ""
526 | with tempfile.NamedTemporaryFile(delete=False) as fp:
527 | for key in key_files:
528 | fp.write(key + '\n')
529 | keys_file = fp.name
530 |
531 | par_match_images(keys_file, len(key_files), matches_file)
532 | os.remove(keys_file)
533 |
534 |
535 | def bundler(image_list=None, options_file=None, shell=False, *args, **kwargs):
536 |     """Run bundler, passing arguments through from args and kwargs.
537 | For Bundler usage run bundler("--help").
538 |
539 | image_list : File containing list of images.
540 | options_file : Specify an options file for bundler (optional).
541 | shell : Enable full shell support for parsing args (default: False).
542 | """
543 |
544 | def kwargs_bool(b, r):
545 | if b: return r
546 | else: return []
547 |
548 | kwargs_dict = {
549 | 'match_table' : lambda k,v: ['--'+k,v],
550 | 'output' : lambda k,v: ['--'+k,v],
551 | 'output_all' : lambda k,v: ['--'+k,v],
552 | 'output_dir' : lambda k,v: ['--'+k,v],
553 | 'variable_focal_length' : lambda k,v: kwargs_bool(v, ['--'+k]),
554 | 'use_focal_estimate' : lambda k,v: kwargs_bool(v, ['--'+k]),
555 | 'constrain_focal' : lambda k,v: kwargs_bool(v, ['--'+k]),
556 | 'constrain_focal_weight' : lambda k,v: ['--'+k,str(v)],
557 | 'estimate_distortion' : lambda k,v: kwargs_bool(v, ['--'+k]),
558 | 'projection_estimation_threshold' : lambda k,v: ['--'+k,str(v)],
559 | 'construct_max_connectivity' : lambda k,v: kwargs_bool(v, ['--'+k]),
560 | 'use_ceres' : lambda k,v: kwargs_bool(v, ['--'+k]),
561 | 'run_bundle' : lambda k,v: kwargs_bool(v, ['--'+k]),
562 | }
563 |
564 | str_args = [a for a in args if type(a) == str]
565 | for k,v in kwargs.items():
566 | if not kwargs_dict.has_key(k): continue
567 | str_args.extend(kwargs_dict[k](k,v))
568 |
569 | if len(str_args) != 0 and options_file is not None:
570 | with open(options_file, 'wb') as fp:
571 | for o in str_args:
572 | if o.startswith('--'): fp.write('\n')
573 | else: fp.write(' ')
574 | fp.write(o)
575 |
576 | image_list_file = ""
577 | if type(image_list) == dict:
578 | with tempfile.NamedTemporaryFile(delete=False) as fp:
579 | for image,value in image_list.items():
580 | if value == None: fp.write(image + '\n')
581 | else: fp.write(' '.join([image, '0', str(value), '\n']))
582 | image_list_file = fp.name
583 | elif type(image_list) == str:
584 | image_list_file = image_list
585 | else:
586 | raise Exception("Error: Not a valid list or filename for image_list!")
587 |
588 | # Add lib folder to LD_LIBRARY_PATH
589 | env = dict(os.environ)
590 | if env.has_key('LD_LIBRARY_PATH'):
591 | env['LD_LIBRARY_PATH'] = env['LD_LIBRARY_PATH'] + ':' + BUNDLER_LIB_PATH
592 | else:
593 | env['LD_LIBRARY_PATH'] = BUNDLER_LIB_PATH
594 |
595 | try: os.mkdir("bundle")
596 | except: pass
597 |
598 | with open(os.path.join("bundle", "out"), 'wb') as fp_out:
599 | if options_file is not None:
600 | subprocess.call([BIN_BUNDLER, image_list_file, "--options_file",
601 | options_file], shell=shell, env=env, stdout=fp_out)
602 | else:
603 | subprocess.call([BIN_BUNDLER, image_list_file] + str_args,
604 | shell=shell, env=env, stdout=fp_out)
605 |
606 | if type(image_list) == dict:
607 | os.remove(image_list_file)
608 |
609 |
610 | def save_image_list(image_list=None, filename='list.txt'):
611 | with open(filename, 'wb') as fp:
612 | for image in image_list:
613 | fp.write(' '.join([image[0], '0', str(image[1]), '\n']))
614 |
615 |
616 | def create_dense_pointcloud(image_names, image_list='list.txt', bundle_out="bundle.out"):
617 |
618 | # Start by running bundle2pmvs. This will create the pmvs directory, and
619 | # generate the undistortion matrices for each image.
620 |
621 | # Add lib folder to LD_LIBRARY_PATH
622 | env = dict(os.environ)
623 | if env.has_key('LD_LIBRARY_PATH'):
624 | env['LD_LIBRARY_PATH'] = env['LD_LIBRARY_PATH'] + ':' + BUNDLER_LIB_PATH
625 | else:
626 | env['LD_LIBRARY_PATH'] = BUNDLER_LIB_PATH
627 |
628 | bundle_output = os.path.join('bundle', 'bundle.out')
629 |
630 | subprocess.call([BIN_BUNDLE2PMVS, image_list, bundle_output], env=env)
631 |
632 | # Next run the image undistort.
633 | subprocess.call([BIN_RADIAL_UNDISTORT, image_list, bundle_output, 'pmvs'], env=env)
634 |
635 |     # Create the necessary directories
636 | try: os.mkdir(os.path.join('pmvs', 'txt'))
637 | except: pass
638 |
639 | try: os.mkdir(os.path.join('pmvs', 'visualize'))
640 | except: pass
641 |
642 | try: os.mkdir(os.path.join('pmvs', 'models'))
643 | except: pass
644 |
645 | # Move some temp files to their final location. This is needed by CMVS
646 | count = 0
647 |
648 | for image in image_names:
649 | source = image.rsplit('.', 1)[0] + '.rd.jpg'
650 | target = str(count).zfill(8) + '.jpg'
651 | if os.path.exists(os.path.join('pmvs', source)):
652 | shutil.move(os.path.join('pmvs', source), os.path.join('pmvs', 'visualize', target))
653 | source = str(count).zfill(8) + '.txt'
654 | shutil.move(os.path.join('pmvs', source), os.path.join('pmvs', 'txt', source))
655 | count += 1
656 |
657 | # Run CMVS to generate the required centers.ply and vis.dat
658 |     # FIXME: hardcoded maximum images per cluster (100) and CPU count (16)!
659 | subprocess.call([BIN_CMVS, os.path.join('.', 'pmvs', ''), '100', '16'], env=env)
660 |
661 | # Run genOption to generate the required PMVS2 config.
662 | cores = multiprocessing.cpu_count()
663 |
664 | # The first option is a hardcoded 'quality' level. 1 is standard, 0 produces hi-res pointclouds.
665 | # The other options are the default setting, except for the last one, which is the number of
666 | # threads to use in PMVS2.
667 | subprocess.call([BIN_GEN_OPTION, os.path.join('.', 'pmvs', ''), '0', '2', '0.7', '7', '3', str(cores)], env=env)
668 |
669 |     # Run PMVS2 to generate the dense point cloud.
670 | subprocess.call([BIN_PMVS2, os.path.join('.', 'pmvs', ''), 'option-0000'], env=env)
671 |
672 |
673 | def run_bundler():
674 | """Prepare images and run bundler with default options."""
675 |
676 | # Prepare the list of executables we need.
677 | global BIN_SIFT, BIN_BUNDLER, BIN_MATCHKEYS_FULL, BIN_MATCHKEYS_PART, BIN_BUNDLE2PMVS, BIN_RADIAL_UNDISTORT, BIN_CMVS, BIN_GEN_OPTION, BIN_PMVS2, BUNDLER_BIN_PATH, CMVS_PMVS_BIN_PATH
678 |
679 | start = time.time()
680 |
681 | if sys.platform == 'win32' or sys.platform == 'cygwin':
682 | BIN_SIFT = os.path.join(BUNDLER_BIN_PATH, "siftWin32.exe")
683 | BIN_BUNDLER = os.path.join(BUNDLER_BIN_PATH, "Bundler.exe")
684 | BIN_MATCHKEYS_FULL = os.path.join(BUNDLER_BIN_PATH, "KeyMatchFull.exe")
685 | BIN_MATCHKEYS_PART = os.path.join(BUNDLER_BIN_PATH, "KeyMatchPart.exe")
686 | BIN_BUNDLE2PMVS = os.path.join(BUNDLER_BIN_PATH, "Bundle2PMVS.exe")
687 | BIN_RADIAL_UNDISTORT = os.path.join(BUNDLER_BIN_PATH, "RadialUndistort.exe")
688 | BIN_CMVS = os.path.join(CMVS_PMVS_BIN_PATH, "cmvs.exe")
689 | BIN_GEN_OPTION = os.path.join(CMVS_PMVS_BIN_PATH, "genOption.exe")
690 | BIN_PMVS2 = os.path.join(CMVS_PMVS_BIN_PATH, "pmvs2.exe")
691 | else:
692 | BIN_SIFT = os.path.join(BUNDLER_BIN_PATH, "sift")
693 | BIN_BUNDLER = os.path.join(BUNDLER_BIN_PATH, "bundler")
694 | BIN_MATCHKEYS_FULL = os.path.join(BUNDLER_BIN_PATH, "KeyMatchFull")
695 | BIN_MATCHKEYS_PART = os.path.join(BUNDLER_BIN_PATH, "KeyMatchPart")
696 | BIN_BUNDLE2PMVS = os.path.join(BUNDLER_BIN_PATH, "Bundle2PMVS")
697 | BIN_RADIAL_UNDISTORT = os.path.join(BUNDLER_BIN_PATH, "RadialUndistort")
698 | BIN_CMVS = os.path.join(CMVS_PMVS_BIN_PATH, "cmvs")
699 | BIN_GEN_OPTION = os.path.join(CMVS_PMVS_BIN_PATH, "genOption")
700 | BIN_PMVS2 = os.path.join(CMVS_PMVS_BIN_PATH, "pmvs2")
701 |
702 | step,totalsteps = 1,6
703 |
704 | # Create list of images
705 | print "[- Step %d/%d Creating list of images -]" % (step,totalsteps)
706 | images = get_images()
707 | step += 1
708 | time1 = time.time()
709 | if len(images) < 10:
710 | print "ERROR: not enough images (have %d, need at least 10)" % len(images)
711 | exit(1)
712 | else:
713 | print "[- Retrieved %d images in %.1f seconds -]" % (len(images), time1-start)
714 |
715 | # Extract focal length
716 | print "[- Step %d/%d Extracting EXIF tags from images -]" % (step,totalsteps)
717 | images_focal = extract_focal_length(images)
718 | step += 1
719 | time2 = time.time()
720 | print "[- Retrieved EXIF tags of %d images in %.1f seconds -]" % (len(images_focal), time2-time1)
721 | if not len(images_focal) == len(images):
722 |         print "ERROR: some images are missing necessary EXIF information!"
723 | exit(1)
724 |
725 | # Save the image list, since we need it later on.
726 | save_image_list(images_focal)
727 |
728 | # Extract SIFT features from images
729 | print "[- Step %d/%d Extracting keypoints -]" % (step,totalsteps)
730 | key_files = sift_images(images)
731 | step += 1
732 | time3 = time.time()
733 | print "[- Extracting keypoints took %.1f seconds -]" % (time3-time2)
734 |
735 | # Match images
736 | print "[- Step %d/%d Matching keypoints (this can take a while) -]" % (step,totalsteps)
737 | matches_file = "matches.init.txt"
738 | match_images(key_files, matches_file)
739 | step += 1
740 | time4 = time.time()
741 | print "[- Matching keypoints took %.1f seconds -]" % (time4-time3)
742 |
743 | # Run Bundler
744 |
745 | print "[- Step %d/%d Running Bundler (sparse pointcloud generation) -]" % (step,totalsteps)
746 | bundler(image_list='list.txt',
747 | options_file="options.txt",
748 | match_table=matches_file,
749 | output="bundle.out",
750 | output_all="bundle_",
751 | output_dir="bundle",
752 | variable_focal_length=True,
753 | use_focal_estimate=True,
754 | constrain_focal=True,
755 | constrain_focal_weight=0.0001,
756 | estimate_distortion=True,
757 | projection_estimation_threshold=1.1,
758 | construct_max_connectivity=True,
759 | use_ceres=True,
760 | run_bundle=True)
761 | step += 1
762 | time5 = time.time()
763 |     print "[- Bundler took %.1f seconds -]" % (time5-time4)
764 |
765 | print "[- Step %d/%d Creating CMVS/PMVS configuration and running PMVS2 -]" % (step,totalsteps)
766 | create_dense_pointcloud(images, image_list='list.txt', bundle_out="bundle.out")
767 | step += 1
768 | end = time.time()
769 | print "[- Creating dense point cloud took %.1f seconds -]" % (end-time5)
770 |
771 | print "[- Done in %.1f seconds -]" % (end-start)
772 |
773 | if __name__ == '__main__':
774 | run_bundler()
775 |
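The pixel focal length that this script feeds to Bundler comes from a single conversion near the end of extract_focal_length(): the focal length in millimetres from the EXIF tags is divided by the sensor width in millimetres (from the CCD_WIDTHS table or the FocalPlaneXResolution tag) and multiplied by the image width in pixels. Below is a minimal, self-contained sketch of that arithmetic; the camera values used are hypothetical and not taken from the table above.

```python
# Sketch of the pixel-focal-length conversion used in extract_focal_length().
# The sensor width, focal length and image width below are hypothetical values,
# chosen only to illustrate the formula.

def focal_length_in_pixels(image_width_px, focal_length_mm, ccd_width_mm, scale=1.0):
    """Convert a physical focal length to pixels for a given sensor width."""
    return image_width_px * (focal_length_mm / ccd_width_mm) * scale

if __name__ == "__main__":
    # e.g. a 23.6 mm wide APS-C sensor, a 35 mm lens and a 4288 pixel wide image:
    focal_px = focal_length_in_pixels(4288, 35.0, 23.6)
    print("Focal length (pixels): %.1f" % focal_px)   # ~6359.3 pixels
```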
--------------------------------------------------------------------------------
/test/README.md:
--------------------------------------------------------------------------------
1 | Test scripts for checking the output of the structure-from-motion pipeline.
2 |
--------------------------------------------------------------------------------
/test/density.py:
--------------------------------------------------------------------------------
1 | # Calculate the density of a point cloud, defined as the number of points per
2 | # unit volume (approximated below with a bounding box rather than the convex hull).
3 |
4 | from __future__ import division, print_function
5 |
6 | import numpy as np
7 | import pcl
8 | import sys
9 |
10 |
11 | try:
12 | p = pcl.load(sys.argv[1])
13 | except IndexError:
14 | print('usage: python %s file' % sys.argv[0], file=sys.stderr)
15 | print(' File may be in PCD or PLY format.', file=sys.stderr)
16 | sys.exit(1)
17 |
18 | # Use the easiest bounding box that we can find.
19 | arr = p.to_array()
20 | vol = np.prod(arr.max(axis=0) - arr.min(axis=0))
21 |
22 | print("Points per unit volume: %.3g" % (len(arr) / vol))
23 |
24 | arr = arr[:, :2]
25 | area = np.prod(arr.max(axis=0) - arr.min(axis=0))
26 |
27 | print("Points per unit area: %.3g" % (len(arr) / area))
28 |
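As the header comment notes, the volume is approximated with an axis-aligned bounding box rather than the convex hull. If a hull-based density is wanted, something along the following lines could be used; this is only a sketch and assumes scipy is installed (scipy is not a dependency of this repository).

```python
# Sketch: point-cloud density using the convex hull instead of a bounding box.
# Assumes scipy is available; not part of this repository's test scripts.
from __future__ import division, print_function

import sys

import pcl
from scipy.spatial import ConvexHull

p = pcl.load(sys.argv[1])      # PCD or PLY file, as in density.py
arr = p.to_array()

hull = ConvexHull(arr)         # 3-D convex hull of the points
print("Points per unit hull volume: %.3g" % (len(arr) / hull.volume))
```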
--------------------------------------------------------------------------------
/test/number_of_points.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 |
3 | import os
4 | import sys
5 |
6 |
7 | class NumberOfPointsReader(object):
8 |
9 | def __init__(self,theDir):
10 |
11 | self.bundleOutFileName = os.path.join(theDir,'bundle','bundle.out')
12 | self.optionPlyFileName = os.path.join(theDir,'pmvs','models','option-0000.ply')
13 | self.nPointsSparse = -1
14 | self.nPointsDense = -1
15 |
16 |
17 | def readBundleOutFile(self):
18 |
19 | try:
20 | f = open(self.bundleOutFileName, 'r')
21 | # skip the first line, it's just a header
22 | line = f.readline()
23 | # read the second line to find out how many cameras and keypoints there are
24 | (nCameras,nKeypoints) = f.readline().split()
25 |
26 | self.nPointsSparse = int(nKeypoints)
27 |
28 | f.close()
29 |
30 | except IOError:
31 | print "# Can't find file: " + self.bundleOutFileName
32 |
33 |
34 | def readOptionPlyFile(self):
35 |
36 | try:
37 | f = open(self.optionPlyFileName, 'r')
38 | # skip the first 2 lines
39 | line = f.readline()
40 | line = f.readline()
41 |
42 |             # read the next line ('element vertex <n>') to find out how many vertices there are
43 | nPointsDense = f.readline().split()[2]
44 |
45 | self.nPointsDense = int(nPointsDense)
46 |
47 | f.close()
48 |
49 | except IOError:
50 | print "# Can't find file: " + self.optionPlyFileName
51 |
52 |
53 | def myPrint(self):
54 |
55 | if self.nPointsSparse == -1:
56 | pass
57 | else:
58 | print "# Using 'bundle.out' file from here: " + self.bundleOutFileName
59 |
60 | if self.nPointsDense == -1:
61 | pass
62 | else:
63 | print "# Using 'option-0000.ply' from here: " + self.optionPlyFileName
64 |
65 | print "# The results are:"
66 | print "# nPointsSparse =", self.nPointsSparse
67 | print "# nPointsDense =", self.nPointsDense
68 | print
69 |
70 |
71 | if __name__ == "__main__":
72 |
73 |     if len(sys.argv) != 2:
74 | sys.exit(sys.argv[0] + " needs exactly one argument: the folder containing the output of the sfm pipeline")
75 |
76 |
77 | arg1 = sys.argv[1]
78 | argIsDir = os.path.isdir(arg1)
79 |
80 | if argIsDir:
81 |
82 | theDir = sys.argv[1]
83 |
84 | nopReader = NumberOfPointsReader(theDir)
85 | nopReader.readBundleOutFile()
86 | nopReader.readOptionPlyFile()
87 |
88 | nopReader.myPrint()
89 |
90 | print nopReader.nPointsSparse
91 | print nopReader.nPointsDense
92 |
93 |
94 |
95 | else:
96 |
97 | print "Input argument should be a directory. Aborting."
98 |
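NumberOfPointsReader relies on two file-layout assumptions: bundle.out starts with a one-line header followed by a "<num_cameras> <num_points>" line, and the PMVS2 PLY output starts with "ply" and "format ascii 1.0" followed by an "element vertex <n>" line. Below is a condensed sketch of the same two reads with those assumptions spelled out; the example paths are illustrative only.

```python
# Condensed sketch of the two counts read by NumberOfPointsReader.
# Assumed file layouts:
#   bundle.out       line 1: header comment, line 2: "<num_cameras> <num_points>"
#   option-0000.ply  line 1: "ply", line 2: "format ascii 1.0",
#                    line 3: "element vertex <n>"

def read_sparse_count(bundle_out_path):
    with open(bundle_out_path) as f:
        f.readline()                         # skip the header line
        return int(f.readline().split()[1])  # second token: number of keypoints

def read_dense_count(ply_path):
    with open(ply_path) as f:
        f.readline()                         # "ply"
        f.readline()                         # "format ascii 1.0"
        return int(f.readline().split()[2])  # "element vertex <n>"

# Example (illustrative output directory):
# print(read_sparse_count("SITE_662/bundle/bundle.out"))
# print(read_dense_count("SITE_662/pmvs/models/option-0000.ply"))
```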
--------------------------------------------------------------------------------
/test/parse_camera_data.m:
--------------------------------------------------------------------------------
1 |
2 |
3 | clear
4 | close all
5 | clc
6 |
7 |
8 |
9 | theDir = 'SITE_662';
10 |
11 | bundleFile = [theDir,filesep,'bundle',filesep,'bundle.out'];
12 |
13 | fid = fopen(bundleFile,'rt');
14 |
15 | C = textscan(fid,'%d%d',1,'headerlines',1);
16 |
17 | nCameras = double(C{1});
18 | nKeyPoints = double(C{2});
19 |
20 | nDataPerCamera = 5*3;
21 |
22 | C = textscan(fid,'%f',nDataPerCamera*nCameras);
23 |
24 | for iCamera = 1:nCameras
25 |
26 | idx = (iCamera - 1) * nDataPerCamera;
27 |
28 | cameras(iCamera,1).focalLength = C{1}(idx + 1);
29 | cameras(iCamera,1).radialDistortCoefs = transpose(C{1}(idx + (2:3)));
30 | cameras(iCamera,1).rotation = transpose(reshape(C{1}(idx + 3 + (1:9)),[3,3]));
31 | cameras(iCamera,1).translate = transpose(C{1}(idx + 12 + (1:3)));
32 |
33 | % derived properties:
34 | cameras(iCamera,1).position = (-cameras(iCamera,1).rotation' * cameras(iCamera,1).translate')';
35 | cameras(iCamera,1).viewDir = (cameras(iCamera,1).rotation' * [0,0,-1]')';
36 | cameras(iCamera,1).up = (cameras(iCamera,1).rotation' * [0,1,0]')';
37 | end
38 |
39 | fclose(fid);
40 |
41 |
42 | formatStrJSON = ['{"srid": 32633,\n',...
43 | ' "x": %f,\n',...
44 | ' "y": %f,\n',...
45 | ' "z": %f,\n',...
46 | ' "dx": %f,\n',...
47 | ' "dy": %f,\n',...
48 | ' "dz": %f,\n',...
49 | ' "ux": %f,\n',...
50 | ' "uy": %f,\n',...
51 |                  ' "uz": %f\n',...
52 | ' }'];
53 |
54 |
55 |
56 |
57 | formatStr = sprintf('%%0%dd',ceil(log10(nCameras))+1);
58 |
59 | listFile = [theDir,filesep,'prepare',filesep,'list.txt'];
60 | fid = fopen(listFile,'rt');
61 | C = textscan(fid,'%s%d%f');
62 | fclose(fid);
63 |
64 | filenames = C{1};
65 |
66 | for iCamera = 1:nCameras
67 |
68 | filename = [filenames{iCamera},'.json'];
69 |     fid = fopen(filename,'wt');
70 | fprintf(fid,formatStrJSON,cameras(iCamera).position,cameras(iCamera).translate,cameras(iCamera).viewDir);
71 | fclose(fid);
72 |
73 | end
74 |
75 | %%
76 | viewDirScaleFactor = 5.0;
77 | scale2 = 1;
78 |
79 | close all
80 | figure
81 |
82 | ups = zeros(nCameras, 3);
83 | positions = zeros(nCameras, 3);
84 | for iCamera = 1:nCameras
85 | currentcam = cameras(iCamera);
86 | position = currentcam.position;
87 | viewDir = currentcam.viewDir;
88 | translation = currentcam.translate;
89 | up = currentcam.up
90 | plot3(position(1),position(2),position(3),'om','markerfacecolor','m')
91 | hold on
92 | plot3(position(1) + [0,viewDir(1)*viewDirScaleFactor],...
93 | position(2) + [0,viewDir(2)*viewDirScaleFactor],...
94 | position(3) + [0,viewDir(3)*viewDirScaleFactor],...
95 | '-b')
96 | plot3(position(1) + [0,up(1)*scale2],...
97 | position(2) + [0,up(2)*scale2],...
98 | position(3) + [0,up(3)*scale2],...
99 | '-g')
100 | text(position(1),position(2),position(3),num2str(iCamera));
101 | ups(iCamera,:) = currentcam.up';
102 | positions(iCamera,:) = currentcam.position';
103 |
104 | end
105 | meanpos = mean(positions,1)
106 | meanup = mean(ups,1)
107 | up = meanup / sqrt(meanup * meanup')
108 | upfrompos = up + meanpos
109 | upscale = 2;
110 | plot3(meanpos(1) + [0,up(1)*upscale],...
111 | meanpos(2) + [0,up(2)*upscale],...
112 | meanpos(3) + [0,up(3)*upscale],...
113 | '-r')
114 |
115 | grid on
116 | %axis image
117 |
118 |
119 |
120 |
121 |
122 |
123 |
124 |
125 |
--------------------------------------------------------------------------------
/test/readcams.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 |
3 | import os
4 | import sys
5 | from math import sqrt
6 |
7 |
8 |
9 | class BundleFileReader:
10 |
11 | def __init__(self,theDir):
12 |
13 | NaN = float('nan')
14 |
15 | self.bundleFileName = os.path.join(theDir,'bundle','bundle.out')
16 | self.listFileName = os.path.join(theDir,'prepare','list.txt')
17 | self.outputDir = os.getcwd()
18 | self.nCameras = 0
19 | self.nKeypoints = 0
20 | self.cameraData = []
21 | self.boundingBoxCameras = [[NaN,NaN],[NaN,NaN],[NaN,NaN]]
22 | self.upDirectionEstimation = [NaN,NaN,NaN]
23 |
24 |
25 | def readBundleFile(self):
26 |
27 | f = open(self.bundleFileName, 'r')
28 | # skip the first line, it's just a header
29 | line = f.readline()
30 | # read the second line to find out how many cameras and keypoints there are
31 | (nCameras,nKeypoints) = f.readline().split()
32 | self.nCameras = int(nCameras)
33 |         self.nKeypoints = int(nKeypoints)
34 |
35 | for iCamera in range(0,self.nCameras):
36 |
37 | cam = _CameraData(iCamera)
38 | cam.readOneCameraData(f)
39 | cam.calcCameraViewingDirection()
40 | self.cameraData.append(cam)
41 |
42 |
43 | f.close()
44 |
45 |
46 | def calcBoundingBoxCameras(self):
47 |
48 | for camera in self.cameraData:
49 |
50 | (x,y,z) = camera.position
51 |
52 | if camera.index == 0:
53 |
54 | self.boundingBoxCameras[0][0] = x
55 | self.boundingBoxCameras[0][1] = x
56 | self.boundingBoxCameras[1][0] = y
57 | self.boundingBoxCameras[1][1] = y
58 | self.boundingBoxCameras[2][0] = z
59 | self.boundingBoxCameras[2][1] = z
60 |
61 | else:
62 |
63 | if x < self.boundingBoxCameras[0][0]:
64 | self.boundingBoxCameras[0][0] = x
65 |
66 | if x > self.boundingBoxCameras[0][1]:
67 | self.boundingBoxCameras[0][1] = x
68 |
69 | if y < self.boundingBoxCameras[1][0]:
70 | self.boundingBoxCameras[1][0] = y
71 |
72 | if y > self.boundingBoxCameras[1][1]:
73 | self.boundingBoxCameras[1][1] = y
74 |
75 | if z < self.boundingBoxCameras[2][0]:
76 | self.boundingBoxCameras[2][0] = z
77 |
78 | if z > self.boundingBoxCameras[2][1]:
79 | self.boundingBoxCameras[2][1] = z
80 |
81 | def calcUpEstimation(self):
82 | ups = [camera.upDirection for camera in self.cameraData]
83 |
84 | # Calculate mean up direction
85 | nCameras = len(self.cameraData)
86 | meanUp = [0, 0, 0]
87 | for dim in range(3):
88 | meanUp[dim] = sum([up[dim] for up in ups]) / nCameras
89 |
90 | # Normalize upDirectionEstimation
91 | magnitude = sqrt(sum([x**2 for x in meanUp]))
92 | self.upDirectionEstimation = [meanUp[dim] / magnitude for dim in range(3)]
93 |
94 |
95 | def myPrint(self):
96 |
97 | print "bundle file = " + self.bundleFileName
98 | print "list file = " + self.listFileName
99 |
100 | print "nCameras =", self.nCameras
101 | print "nKeypoints =", self.nKeypoints
102 |
103 | iCamera = 0;
104 | for cameraData in self.cameraData:
105 |
106 | print " camera =",iCamera
107 | print(" focalLength = %.2f"% (cameraData.focalLength))
108 | print " radialDistort =",cameraData.radialDistortion
109 | print " rotation =",cameraData.rotation
110 | print " translation =",cameraData.translation
111 | print " position =",cameraData.position
112 | print "viewingDirection =",cameraData.viewingDirection
113 | print " cam relative up =",cameraData.upDirection
114 | print
115 |
116 | iCamera += 1
117 |
118 | def writeCameraDataAsJSON(self):
119 |
120 | f = open(self.listFileName, 'r')
121 | lines = f.readlines()
122 |         f.close()
123 | iCamera = 0;
124 | for line in lines:
125 | jsonFileName = os.path.join(self.outputDir,line.split()[0] + ".json")
126 | f = open(jsonFileName,'w')
127 | f.write('{\n "srid": 32633,\n'
128 | ' "x": %f,\n'
129 | ' "y": %f,\n'
130 | ' "z": %f,\n'
131 | ' "dx": %f,\n'
132 | ' "dy": %f,\n'
133 | ' "dz": %f,\n'
134 | ' "ux": %f,\n'
135 | ' "uy": %f,\n'
136 |                 ' "uz": %f\n}\n' % (
137 | self.cameraData[iCamera].position[0],
138 | self.cameraData[iCamera].position[1],
139 | self.cameraData[iCamera].position[2],
140 | self.cameraData[iCamera].translation[0],
141 | self.cameraData[iCamera].translation[1],
142 | self.cameraData[iCamera].translation[2],
143 | self.cameraData[iCamera].viewingDirection[0],
144 | self.cameraData[iCamera].viewingDirection[1],
145 | self.cameraData[iCamera].viewingDirection[2] ))
146 | f.close()
147 |
148 | iCamera += 1
149 |
150 |
151 | f.close()
152 |
153 |
154 |
155 | def writeBoundingBoxAsJSON(self):
156 |
157 | # this bounding box is the bbox of the cameras, not the
158 | # point cloud itself
159 |
160 | # according to this bounding box specification:
161 | # http://geojson.org/geojson-spec.html#bounding
162 |
163 | fileName = os.path.join(self.outputDir,"bbox-cameras.json")
164 |
165 | f = open(fileName, 'w')
166 |
167 | f.write('\n{\n "bbox":[%f,%f,%f,%f,%f,%f]\n}\n' % (
168 | self.boundingBoxCameras[0][0],
169 | self.boundingBoxCameras[0][1],
170 | self.boundingBoxCameras[1][0],
171 | self.boundingBoxCameras[1][1],
172 | self.boundingBoxCameras[2][0],
173 | self.boundingBoxCameras[2][1] ))
174 | f.close()
175 |
176 |
177 | def writeUpEstimationAsJSON(self):
178 |
179 |         # this writes the estimated up direction of the scene: the
180 |         # normalized mean of the per-camera up vectors, as computed
181 |         # by calcUpEstimation (it is not a bounding box)
182 | 
183 | 
184 |
185 | fileName = os.path.join(self.outputDir,"up-estimation.json")
186 |
187 | f = open(fileName, 'w')
188 |
189 | f.write('{\n "estimatedUpDirection":[%f,%f,%f]\n}\n' % (
190 | self.upDirectionEstimation[0],
191 | self.upDirectionEstimation[1],
192 | self.upDirectionEstimation[2]))
193 | f.close()
194 |
195 |
196 |
197 |
198 | class _CameraData:
199 |
200 | def __init__(self,index):
201 | NaN = float('nan')
202 | self.index = index
203 | self.focalLength = NaN
204 | self.radialDistortion = [NaN,NaN]
205 |         self.rotation = [[NaN,NaN,NaN],[NaN,NaN,NaN],[NaN,NaN,NaN]]
206 |         self.translation = [NaN,NaN,NaN]
207 | self.position = []
208 | self.viewingDirection = [NaN,NaN,NaN]
209 | self.upDirection = [NaN,NaN,NaN]
210 |
211 |
212 | def readOneCameraData(self,f):
213 |
214 | # the data for each camera is stored in 1 + 1 + 1 + 3x3 + 1x3 = 15 floating point numbers
215 |
216 | (focalLength,radialDistortionCoef1,radialDistortionCoef2) = f.readline().split()
217 | (R11,R12,R13) = f.readline().split()
218 | (R21,R22,R23) = f.readline().split()
219 | (R31,R32,R33) = f.readline().split()
220 | (T1,T2,T3) = f.readline().split()
221 |
222 | self.focalLength = float(focalLength)
223 |
224 | self.radialDistortion = [float(radialDistortionCoef1),
225 | float(radialDistortionCoef2)]
226 |
227 | self.rotation = [[float(R11),float(R12),float(R13)],
228 | [float(R21),float(R22),float(R23)],
229 | [float(R31),float(R32),float(R33)]]
230 |
231 | self.translation = [float(T1),float(T2),float(T3)]
232 |
233 | self.calcCameraPosition()
234 | self.calcCameraViewingDirection()
235 | self.calcCameraUp()
236 |
237 |
238 | def calcCameraPosition(self):
239 |
240 | for iCol in range (0,3):
241 | v = 0.0;
242 | for iRow in range (0,3):
243 | v += -1.0 * self.rotation[iRow][iCol] * self.translation[iRow];
244 | self.position.append(v)
245 |
246 |
247 | def calcCameraViewingDirection(self):
248 |
249 | for iCol in range(3):
250 | self.viewingDirection[iCol] = -1.0 * self.rotation[2][iCol]
251 |
252 |
253 | def calcCameraUp(self):
254 |
255 | for iCol in range(3):
256 | self.upDirection[iCol] = self.rotation[1][iCol]
257 |
258 |
259 | if __name__ == "__main__":
260 |
261 | if len(sys.argv) != 2:
262 | print
263 | print "# Script " + sys.argv[0] + " needs 1 argument"
264 | print "# arg1: input data directory that contains at least the following files:"
265 | print "# - bundle/bundle.out"
266 | print "# - prepare/list.txt"
267 | print
268 | sys.exit(1)
269 |
270 | arg1 = sys.argv[1]
271 | argIsDir = os.path.isdir(arg1)
272 |
273 | if argIsDir:
274 |
275 | theDir = sys.argv[1]
276 |
277 | bundleFileReader = BundleFileReader(theDir)
278 | bundleFileReader.readBundleFile()
279 | bundleFileReader.myPrint()
280 | bundleFileReader.writeCameraDataAsJSON()
281 | bundleFileReader.calcBoundingBoxCameras()
282 | bundleFileReader.writeBoundingBoxAsJSON()
283 | bundleFileReader.calcUpEstimation()
284 | bundleFileReader.writeUpEstimationAsJSON()
285 |
286 | print "Boundingbox [[minx, maxx],[miny,maxy],[minz,maxz]] is:" + str(bundleFileReader.boundingBoxCameras)
287 |
288 | else:
289 |
290 | print "Input argument should be a directory. Aborting."
291 |
292 |
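In _CameraData, with rotation matrix R and translation t read from bundle.out, the camera centre is computed as -R^T t, the viewing direction as the negated third row of R (i.e. R^T · [0, 0, -1]), and the up vector as the second row of R (R^T · [0, 1, 0]). A compact numpy sketch of those three computations is given below; numpy is assumed here purely for illustration, while readcams.py itself uses only the standard library.

```python
# Sketch: the camera-geometry formulas from _CameraData, written with numpy.
# numpy is assumed here for brevity; it is not used by readcams.py itself.
import numpy as np

def camera_geometry(R, t):
    """R: 3x3 rotation matrix, t: length-3 translation, as read from bundle.out."""
    R = np.asarray(R, dtype=float)
    t = np.asarray(t, dtype=float)
    position = -R.T.dot(t)   # camera centre:     -R^T * t
    view_dir = -R[2, :]      # viewing direction:  R^T * [0, 0, -1]
    up = R[1, :]             # camera up:          R^T * [0, 1, 0]
    return position, view_dir, up

# Identity rotation with the camera translated one unit along +z gives
# position (0, 0, -1), viewing direction (0, 0, -1) and up (0, 1, 0):
print(camera_geometry(np.eye(3), [0.0, 0.0, 1.0]))
```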
--------------------------------------------------------------------------------
/test/test_rock.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 |
3 | import sys
4 | from number_of_points import NumberOfPointsReader
5 |
6 | if len(sys.argv) != 2:
7 | sys.exit("test needs exactly one argument: the example folder")
8 |
9 | example_folder = sys.argv[1]
10 |
11 | nopReader = NumberOfPointsReader(example_folder)
12 | nopReader.readBundleOutFile()
13 | nopReader.readOptionPlyFile()
14 |
15 | print "number points in sparse pointcloud " + sys.argv[1] + ": " + str(nopReader.nPointsSparse)
16 | print "number points in dense pointcloud " + sys.argv[1] + ": " + str(nopReader.nPointsDense)
17 |
18 | if nopReader.nPointsSparse != 9753:
19 | sys.exit("error! incorrect amount of sparse points!")
20 |
21 | if nopReader.nPointsDense != 2497111:
22 | sys.exit("error! incorrect amount of dense points!")
23 |
24 | print "test ok: output contains the expected number of points"
25 |
--------------------------------------------------------------------------------
/test/test_rock_section.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 |
3 | import sys
4 | from number_of_points import NumberOfPointsReader
5 |
6 | if len(sys.argv) != 2:
7 | sys.exit("test needs exactly one argument: the example folder")
8 |
9 | example_folder = sys.argv[1]
10 |
11 | nopReader = NumberOfPointsReader(example_folder)
12 | nopReader.readBundleOutFile()
13 | nopReader.readOptionPlyFile()
14 |
15 | print "number points in sparse pointcloud " + sys.argv[1] + ": " + str(nopReader.nPointsSparse)
16 | print "number points in dense pointcloud " + sys.argv[1] + ": " + str(nopReader.nPointsDense)
17 |
18 | if nopReader.nPointsSparse != 4041:
19 | sys.exit("error! incorrect amount of sparse points!")
20 |
21 | if nopReader.nPointsDense < 800000:
22 | sys.exit("error! to few dense points! Expect at least 800000")
23 |
24 | if nopReader.nPointsDense > 1000000:
25 | sys.exit("error! to few dense points! Expect 1000000 at maximum")
26 |
27 | print "test ok: output contains the expected number of points"
28 |
--------------------------------------------------------------------------------