├── .gitignore
├── .travis.yml
├── CODE_OF_CONDUCT.md
├── LICENSE
├── MANIFEST.in
├── README.md
├── conftest.py
├── examples
│   ├── README.md
│   ├── closed-loop
│   │   └── README.md
│   └── open-loop
│       ├── open-loop-log.json
│       └── plot_loop.py
├── eyeloop
│   ├── __init__.py
│   ├── config.py
│   ├── constants
│   │   ├── README.md
│   │   ├── __init__.py
│   │   ├── engine_constants.py
│   │   ├── minimum_gui_constants.py
│   │   └── processor_constants.py
│   ├── engine
│   │   ├── README.md
│   │   ├── __init__.py
│   │   ├── engine.py
│   │   ├── models
│   │   │   ├── __init__.py
│   │   │   ├── circular.py
│   │   │   └── ellipsoid.py
│   │   ├── params
│   │   │   └── __init__.py
│   │   └── processor.py
│   ├── extractors
│   │   ├── DAQ.py
│   │   ├── README.md
│   │   ├── __init__.py
│   │   ├── calibration.py
│   │   ├── closed_loop.py
│   │   ├── converter.py
│   │   ├── frametimer.py
│   │   ├── open_loop.py
│   │   ├── template.py
│   │   └── visstim.py
│   ├── guis
│   │   ├── README.md
│   │   ├── __init__.py
│   │   ├── blink_test.py
│   │   └── minimum
│   │       ├── README.md
│   │       ├── __init__.py
│   │       ├── graphics
│   │       │   ├── instructions_md
│   │       │   │   ├── overview.svg
│   │       │   │   ├── rotation.svg
│   │       │   │   └── start.svg
│   │       │   ├── tip_1_cr.png
│   │       │   ├── tip_1_cr_error.png
│   │       │   ├── tip_1_cr_first.png
│   │       │   ├── tip_2_cr.png
│   │       │   ├── tip_3_pupil.png
│   │       │   ├── tip_3_pupil_error.png
│   │       │   ├── tip_4_pupil.png
│   │       │   └── tip_5_start.png
│   │       └── minimum_gui.py
│   ├── importers
│   │   ├── README.md
│   │   ├── __init__.py
│   │   ├── cv.py
│   │   ├── importer.py
│   │   └── vimba.py
│   ├── run_eyeloop.py
│   └── utilities
│       ├── __init__.py
│       ├── argument_parser.py
│       ├── file_manager.py
│       ├── format_print.py
│       ├── general_operations.py
│       ├── logging_config.yaml
│       ├── parser.py
│       └── shared_logging.py
├── misc
│   ├── contributors.md
│   ├── imgs
│   │   ├── aarhusuniversity.svg
│   │   ├── closed-loop.svg
│   │   ├── constant.svg
│   │   ├── contributors.svg
│   │   ├── dandrite.svg
│   │   ├── engine_ill.svg
│   │   ├── extractor_overview.svg
│   │   ├── extractor_scheme.svg
│   │   ├── eyeloop overview.svg
│   │   ├── importer_overview.svg
│   │   ├── logo.svg
│   │   ├── models.svg
│   │   ├── nordicembl.svg
│   │   ├── sample_1.gif
│   │   ├── sample_2.gif
│   │   ├── sample_3.gif
│   │   ├── sample_4.gif
│   │   ├── setup.svg
│   │   ├── software logic.svg
│   │   └── yoneharalab.svg
│   └── travis-sample
│       └── Frmd7.m4v
├── requirements.txt
├── requirements_examples.txt
├── requirements_testing.txt
├── setup.py
├── tests
│   ├── test_integration.py
│   └── testdata
│       ├── short_human_3blink.mp4
│       └── short_mouse_noblink.m4v
└── tox.ini
/.gitignore:
--------------------------------------------------------------------------------
1 | # Byte-compiled / optimized / DLL files
2 | __pycache__/
3 | *.py[cod]
4 | *$py.class
5 |
6 | # C extensions
7 | *.so
8 |
9 | # OS
10 | .DS_Store
11 |
12 | # Distribution / packaging
13 | .Python
14 | build/
15 | develop-eggs/
16 | dist/
17 | downloads/
18 | eggs/
19 | .eggs/
20 | lib/
21 | lib64/
22 | parts/
23 | sdist/
24 | var/
25 | wheels/
26 | *.egg-info/
27 | .installed.cfg
28 | *.egg
29 | MANIFEST
30 | .idea/
31 |
32 | # PyInstaller
33 | # Usually these files are written by a python script from a template
34 | # before PyInstaller builds the exe, so as to inject date/other infos into it.
35 | *.manifest
36 | *.spec
37 |
38 | # Installer logs
39 | pip-log.txt
40 | pip-delete-this-directory.txt
41 |
42 | # Unit test / coverage reports
43 | htmlcov/
44 | .tox/
45 | .coverage
46 | .coverage.*
47 | .cache
48 | nosetests.xml
49 | coverage.xml
50 | *.cover
51 | .hypothesis/
52 | .pytest_cache/
53 | /tests/reports/
54 |
55 | # Translations
56 | *.mo
57 | *.pot
58 |
59 | # Django stuff:
60 | *.log
61 | local_settings.py
62 | db.sqlite3
63 |
64 | # Flask stuff:
65 | instance/
66 | .webassets-cache
67 |
68 | # Scrapy stuff:
69 | .scrapy
70 |
71 | # Sphinx documentation
72 | docs/_build/
73 |
74 | # PyBuilder
75 | target/
76 |
77 | # Jupyter Notebook
78 | .ipynb_checkpoints
79 |
80 | # pyenv
81 | .python-version
82 |
83 | # celery beat schedule file
84 | celerybeat-schedule
85 |
86 | # SageMath parsed files
87 | *.sage.py
88 |
89 | # Environments
90 | .env
91 | .venv
92 | env/
93 | venv/
94 | ENV/
95 | env.bak/
96 | venv.bak/
97 |
98 | # Spyder project settings
99 | .spyderproject
100 | .spyproject
101 |
102 | # Rope project settings
103 | .ropeproject
104 |
105 | # mkdocs documentation
106 | /site
107 |
108 | # eyeloop generated files
109 | data/
110 | *.avi
111 |
--------------------------------------------------------------------------------
/.travis.yml:
--------------------------------------------------------------------------------
1 | language: python
2 |
3 | python:
4 | - "3.8"
5 |
6 | before_install:
7 | - "pip install -U pip"
8 | - "python setup.py install"
9 | install:
10 | - pip install pytest pandas
11 | dist: xenial
12 | services:
13 | - xvfb
14 |
15 | script: pytest
16 |
--------------------------------------------------------------------------------
/CODE_OF_CONDUCT.md:
--------------------------------------------------------------------------------
1 | # Contributor Covenant Code of Conduct
2 |
3 | ## Our Pledge
4 |
5 | In the interest of fostering an open and welcoming environment, we as
6 | contributors and maintainers pledge to making participation in our project and
7 | our community a harassment-free experience for everyone, regardless of age, body
8 | size, disability, ethnicity, sex characteristics, gender identity and expression,
9 | level of experience, education, socio-economic status, nationality, personal
10 | appearance, race, religion, or sexual identity and orientation.
11 |
12 | ## Our Standards
13 |
14 | Examples of behavior that contributes to creating a positive environment
15 | include:
16 |
17 | * Using welcoming and inclusive language
18 | * Being respectful of differing viewpoints and experiences
19 | * Gracefully accepting constructive criticism
20 | * Focusing on what is best for the community
21 | * Showing empathy towards other community members
22 |
23 | Examples of unacceptable behavior by participants include:
24 |
25 | * The use of sexualized language or imagery and unwelcome sexual attention or
26 | advances
27 | * Trolling, insulting/derogatory comments, and personal or political attacks
28 | * Public or private harassment
29 | * Publishing others' private information, such as a physical or electronic
30 | address, without explicit permission
31 | * Other conduct which could reasonably be considered inappropriate in a
32 | professional setting
33 |
34 | ## Our Responsibilities
35 |
36 | Project maintainers are responsible for clarifying the standards of acceptable
37 | behavior and are expected to take appropriate and fair corrective action in
38 | response to any instances of unacceptable behavior.
39 |
40 | Project maintainers have the right and responsibility to remove, edit, or
41 | reject comments, commits, code, wiki edits, issues, and other contributions
42 | that are not aligned to this Code of Conduct, or to ban temporarily or
43 | permanently any contributor for other behaviors that they deem inappropriate,
44 | threatening, offensive, or harmful.
45 |
46 | ## Scope
47 |
48 | This Code of Conduct applies both within project spaces and in public spaces
49 | when an individual is representing the project or its community. Examples of
50 | representing a project or community include using an official project e-mail
51 | address, posting via an official social media account, or acting as an appointed
52 | representative at an online or offline event. Representation of a project may be
53 | further defined and clarified by project maintainers.
54 |
55 | ## Enforcement
56 |
57 | Instances of abusive, harassing, or otherwise unacceptable behavior may be
58 | reported by contacting the project team at . All
59 | complaints will be reviewed and investigated and will result in a response that
60 | is deemed necessary and appropriate to the circumstances. The project team is
61 | obligated to maintain confidentiality with regard to the reporter of an incident.
62 | Further details of specific enforcement policies may be posted separately.
63 |
64 | Project maintainers who do not follow or enforce the Code of Conduct in good
65 | faith may face temporary or permanent repercussions as determined by other
66 | members of the project's leadership.
67 |
68 | ## Attribution
69 |
70 | This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
71 | available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
72 |
73 | [homepage]: https://www.contributor-covenant.org
74 |
75 | For answers to common questions about this code of conduct, see
76 | https://www.contributor-covenant.org/faq
77 |
--------------------------------------------------------------------------------
/MANIFEST.in:
--------------------------------------------------------------------------------
1 | include requirements.txt requirements_testing.txt tox.ini
2 | recursive-include eyeloop/guis/minimum/graphics *.svg *.png
3 | include eyeloop/utilities/logging_config.yaml
4 |
5 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # EyeLoop [license: GPL-3.0](https://www.gnu.org/licenses/gpl-3.0) [issues](https://github.com/simonarvin/eyeloop/issues) [build: Travis CI](https://travis-ci.com/simonarvin/eyeloop)
2 |
3 |
4 |
5 |
6 |
7 |
8 |
9 |
10 |
11 |
12 |
13 |
14 | EyeLoop is a Python 3-based eye-tracker tailored specifically to dynamic, closed-loop experiments on consumer-grade hardware. Users are encouraged to contribute to EyeLoop's development.
15 |
16 | ## Features ##
17 | - [x] **High-speed** > 1000 Hz on non-specialized hardware (no dedicated processing units necessary).
18 | - [x] Modular, readable, **customizable**.
19 | - [x] **Open-source**, and entirely Python 3.
20 | - [x] **Works on any platform**, easy installation.
21 |
22 | ## Overview ##
23 | - [How it works](#how-it-works)
24 | - [Getting started](#getting-started)
25 | - [Your first experiment](#designing-your-first-experiment)
26 | - [Data](#data)
27 | - [User interface](#graphical-user-interface)
28 | - [Authors](#authors)
29 | - [Examples](https://github.com/simonarvin/eyeloop/blob/master/examples)
30 | - [*EyeLoop Playground*](https://github.com/simonarvin/eyeloop_playground)
31 |
32 | ## How it works ##
33 |
34 |
35 |
36 |
37 | EyeLoop consists of two functional domains: the engine and the optional modules. The engine performs the eye-tracking, whereas the modules perform optional tasks, such as:
38 |
39 | - Experiments
40 | - Data acquisition
41 | - Importing video sequences to the engine
42 |
43 | > The modules import or extract data from the engine, and are therefore called *Importers* and *Extractors*, respectively.
44 |
45 | One of EyeLoop's most appealing features is its modularity: Experiments are built simply by combining modules with the core Engine. Thus, the Engine has one task only: to compute eye-tracking data based on an *imported* sequence, and offer the generated data for *extraction*.
46 |
47 | > How does [the Engine](https://github.com/simonarvin/eyeloop/blob/master/eyeloop/engine/README.md) work?\
48 | > How does [the Importer](https://github.com/simonarvin/eyeloop/blob/master/eyeloop/importers/README.md) work?\
49 | > How does [the Extractor](https://github.com/simonarvin/eyeloop/blob/master/eyeloop/extractors/README.md) work?
50 |
51 | ## Getting started ##
52 |
53 | ### Installation ###
54 |
55 | Install EyeLoop by cloning the repository:
56 | ```
57 | git clone https://github.com/simonarvin/eyeloop.git
58 | ```
59 |
60 | >Dependencies: ```python -m pip install -r requirements.txt```
61 |
62 | >Using pip:
63 | > ```pip install .```
64 |
65 | You may want to use a Conda or Python virtual environment when
66 | installing `eyeloop`, to avoid mixing up with your system dependencies.
67 |
68 | >Using pip and a virtual environment:
69 |
70 | > ```python -m venv venv```
71 |
72 | > ```source venv/bin/activate```
73 |
74 | > ```(venv) pip install .```
75 |
76 | Alternatively:
77 |
78 | >- numpy: ```python -m pip install numpy```
79 | >- opencv: ```python -m pip install opencv-python```
80 |
81 | To download full examples with footage, check out EyeLoop's playground repository:
82 |
83 | ```
84 | git clone https://github.com/simonarvin/eyeloop_playground.git
85 | ```
86 |
87 | ---
88 |
89 | ### Initiation ###
90 |
91 | EyeLoop is initiated through the command-line utility `eyeloop`.
92 | ```
93 | eyeloop
94 | ```
95 | To access the video sequence, EyeLoop must be connected to an appropriate *importer class* module. Usually, the default opencv importer class (*cv*) is sufficient. For some machine vision cameras, however, a vimba-based importer (*vimba*) is necessary.
96 | ```
97 | eyeloop --importer cv/vimba
98 | ```
99 | > [Click here](https://github.com/simonarvin/eyeloop/blob/master/eyeloop/importers/README.md) for more information on *importers*.
100 |
101 | To perform offline eye-tracking, we pass the video argument ```--video``` with the path of the video sequence:
102 | ```
103 | eyeloop --video [file]/[folder]
104 | ```
105 |
106 |
107 |
108 |
109 | EyeLoop can be used on a multitude of eye types, including rodents, humans, and non-human primates. Specifically, users can tailor their eye-tracking session to any species using the ```--model``` argument.
110 |
111 | ```
112 | eyeloop --model ellipsoid/circular
113 | ```
114 | > In general, the ellipsoid pupil model is best suited for rodents, whereas the circular model is best suited for primates.
115 |
116 | To learn how to optimize EyeLoop for your video material, see [*EyeLoop Playground*](https://github.com/simonarvin/eyeloop_playground).
117 |
118 | To see all command-line arguments, pass:
119 |
120 | ```
121 | eyeloop --help
122 | ```
123 |
124 | ## Designing your first experiment ##
125 |
126 |
127 |
128 |
129 |
130 | In EyeLoop, experiments are built by stacking modules. By default, EyeLoop imports two base *extractors*, namely an FPS counter and a data acquisition tool. To add custom extractors, e.g., for experimental purposes, use the ```--extractors``` argument:
131 |
132 | ```
133 | eyeloop --extractors [file_path]/p (where p = file prompt)
134 | ```
135 |
136 | Inside the *extractor* file, or a composite python file containing several *extractors*, define the list of *extractors* to be added:
137 | ```python
138 | extractors_add = [extractor1, extractor2, etc]
139 | ```
140 |
141 | *Extractors* are instantiated by EyeLoop at start-up. Then, at every subsequent time-step, the *extractor's* ```fetch()``` function is called by the engine.
142 | ```python
143 | class Extractor:
144 | def __init__(self) -> None:
145 | ...
146 | def fetch(self, core) -> None:
147 | ...
148 | ```
149 | ```fetch()``` gains access to all eye-tracking data in real-time via the *core* pointer.
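
For instance, inside ```fetch()``` the engine's current output dictionary is available as ```core.dataout```. A hypothetical snippet (the ```time``` and ```blink``` keys follow the engine shipped in this repository):
```python
...
    def fetch(self, core) -> None:
        if "blink" not in core.dataout:     # skip time-steps flagged as blinks
            print(core.dataout["time"])     # timestamp of the current time-step
```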
150 |
151 | > [Click here](https://github.com/simonarvin/eyeloop/blob/master/eyeloop/extractors/README.md) for more information on *extractors*.
152 |
153 | ### Open-loop example ###
154 |
155 | As an example, we'll design a simple *open-loop* experiment in which the brightness of a PC monitor is linked to the phase of a sine wave function. We create a new Python file, say "*test_ex.py*", and in it define the sine wave frequency and phase using the instantiator:
156 | ```python
157 | class Experiment:
158 | def __init__(self) -> None:
159 | self.frequency = ...
160 | self.phase = 0
161 | ```
162 | Then, by using ```fetch()```, we shift the phase of the sine wave function at every time-step, and use this to control the brightness of a cv-render.
163 | ```python
164 | ...
165 | def fetch(self, engine) -> None:
166 | self.phase += self.frequency
167 | sine = numpy.sin(self.phase) * .5 + .5
168 | brightness = numpy.ones((height, width), dtype=float) * sine
169 | cv2.imshow("Experiment", brightness)
170 | ```
171 |
172 | To add our test extractor to EyeLoop, we'll need to define an extractors_add array:
173 | ```python
174 | extractors_add = [Experiment()]
175 | ```
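
Putting the pieces together, a complete *test_ex.py* could look like the following sketch. The window size, the sine frequency and the ```cv2.waitKey``` call are illustrative choices, not prescribed by EyeLoop:
```python
import numpy as np
import cv2


class Experiment:
    def __init__(self) -> None:
        self.frequency = 0.1  # phase increment per time-step (illustrative value)
        self.phase = 0

    def fetch(self, engine) -> None:
        # Advance the sine wave and map it onto a 0..1 brightness value.
        self.phase += self.frequency
        sine = np.sin(self.phase) * .5 + .5

        # Render a uniform frame whose brightness follows the sine wave.
        brightness = np.ones((600, 800), dtype=float) * sine  # (height, width) chosen arbitrarily
        cv2.imshow("Experiment", brightness)
        cv2.waitKey(1)  # give the window a chance to refresh


extractors_add = [Experiment()]
```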
176 |
177 | Finally, we test the experiment by running the command:
178 | ```
179 | eyeloop --extractors path/to/test_ex.py
180 | ```
181 |
182 | > See [Examples](https://github.com/simonarvin/eyeloop/blob/master/examples) for demo recordings and experimental designs.
183 |
184 | > For extensive test data, see [*EyeLoop Playground*](https://github.com/simonarvin/eyeloop_playground)
185 |
186 |
187 | ## Data ##
188 | EyeLoop produces a JSON datalog for each eye-tracking session. The datalog's first column is the timestamp.
189 | The next columns define the pupil (if tracked):
190 |
191 | ```((center_x, center_y), radius1, radius2, angle)```
192 |
193 | The next columns define the corneal reflection (if tracked):
194 |
195 | ```((center_x, center_y), radius1, radius2, angle)```
196 |
197 | The next columns contain any data produced by custom Extractor modules.
198 |
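Each line of the datalog is an independent JSON object (one per time-step), so it can be parsed with standard tools. A minimal reading sketch, assuming the default ```datalog.json``` written by the data-acquisition extractor:
```python
import json

# One JSON object per line; keys beyond "time" depend on what was tracked
# and on which extractors were loaded for the session.
with open("datalog.json") as log:
    for line in log:
        entry = json.loads(line)
        timestamp = entry["time"]  # first column: timestamp
        print(timestamp, {key: value for key, value in entry.items() if key != "time"})
```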
199 |
200 | ## Graphical user interface ##
201 | The default graphical user interface in EyeLoop is [*minimum-gui*.](https://github.com/simonarvin/eyeloop/blob/master/eyeloop/guis/minimum/README.md)
202 |
203 | > EyeLoop is compatible with custom graphical user interfaces through its modular logic. [Click here](https://github.com/simonarvin/eyeloop/blob/master/eyeloop/guis/README.md) for instructions on how to build your own.
204 |
205 | ## Running unit tests ##
206 |
207 | Install testing requirements by running in a terminal:
208 |
209 | `pip install -r requirements_testing.txt`
210 |
211 | Then run tox: `tox`
212 |
213 | Reports and results will be written to `/tests/reports`
214 |
215 |
216 | ## Known issues ##
217 | - [ ] Respawning/freezing windows when running *minimum-gui* in Ubuntu.
218 |
219 | ## References ##
220 | If you use any of this code or data, please cite [Arvin et al. 2021] ([article](https://www.frontiersin.org/articles/10.3389/fncel.2021.779628/full)).
221 | ```latex
222 |
223 | @ARTICLE{Arvin2021-tg,
224 | title = "{EyeLoop}: An open-source system for high-speed, closed-loop
225 | eye-tracking",
226 | author = "Arvin, Simon and Rasmussen, Rune and Yonehara, Keisuke",
227 | journal = "Front. Cell. Neurosci.",
228 | volume = 15,
229 | pages = "494",
230 | year = 2021
231 | }
232 |
233 | ```
234 |
235 | ## License ##
236 | This project is licensed under the GNU General Public License v3.0. Note that the software is provided "as is", without warranty of any kind, express or implied.
237 |
238 | ## Authors ##
239 |
240 | **Lead Developer:**
241 | Simon Arvin, sarv@dandrite.au.dk
242 |
266 |
--------------------------------------------------------------------------------
/conftest.py:
--------------------------------------------------------------------------------
1 | # Extra configuration for pytest
2 | # The presence of this file in {project_dir} will force pytest to add the dir to PYTHONPATH
3 |
--------------------------------------------------------------------------------
/examples/README.md:
--------------------------------------------------------------------------------
1 | # Examples #
2 |
3 | These examples are described in-depth in the [preprint article](https://www.biorxiv.org/content/10.1101/2020.07.03.186387v1). They are described from a more programmatic perspective in this repository.
4 | Click into each folder to get started!
5 |
6 |
--------------------------------------------------------------------------------
/examples/closed-loop/README.md:
--------------------------------------------------------------------------------
 4 |
5 | Fig. Closed-loop experiment exhibiting properties of dynamical systems. (A) Closed-loop experiment using reciprocal feedback. Monitor brightness is set as a function of the pupil area. (B) State velocity, v, depends on pupil area, A. (C) Four trials of the closed loop with differing parameters showing distinct dynamic behavior.
6 |
7 |
8 | One of EyeLoop’s most appealing applications is closed-loop experiments (Fig). To demonstrate this, we designed an extractor to use the pupil area to modulate the brightness of a monitor, in effect a reciprocal feedback loop: Here, the light reflex causes the pupil to dilate in dim settings, which causes the extractor to increase monitor brightness. In turn, this causes the pupil to constrict, causing the extractor to decrease brightness and return the experiment to its initial state.
9 |
10 | The brightness formula contains four critical variables (Fig B): The rate of change, I, which is dependent on the pupil area, and its scalar, q. The velocity, v, which applies the rate of change to monitor brightness, and, the velocity friction, f, which decays the velocity towards zero. Interestingly, by varying these parameters, we observe behaviors characteristic of dynamical systems: For the reference and the slow decay trials, we find emergent limit-cycle oscillations (Fig C). This dynamic is dramatically impaired by a small scalar, and abolished in the low rate trial. These findings illustrate how a simple closed-loop experiment may generate self-sustaining dynamics emerging from the eyes engaging with the system, and the system engaging with the eyes.
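
Below is a minimal sketch of that update rule written out as Python. The function name and the parameter values (```q```, ```f```, ```reference_area```) are illustrative; the actual formula and constants live in the closed-loop extractor:
```python
def step(brightness, v, pupil_area, q=1e-4, f=0.95, reference_area=800.0):
    """One closed-loop time-step (illustrative reconstruction)."""
    rate = pupil_area - reference_area   # I: rate of change, dependent on the pupil area
    v = (v + q * rate) * f               # q scales the rate; friction f decays v towards zero
    brightness = min(max(brightness + v, 0.0), 1.0)  # v applies the rate of change to brightness
    return brightness, v
```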
11 |
12 | ## How to reproduce ##
13 | In *run_eyeloop.py*, import the closed-loop *Extractor* module and the *calibrator*:
14 | ```python
15 | from eyeloop.extractors.closed_loop import ClosedLoop_Extractor
16 | from eyeloop.extractors.calibration import Calibration_Extractor
17 | ```
18 |
19 | First, load the *calibrator*:
20 | ```python
21 | ENGINE.load_extractors(Calibration_Extractor())
22 | ```
23 |
24 |
25 |
26 |
27 |
28 | Position a PC monitor in front of the eye of the subject, turn off the lights and run the experiment.
29 | ```
30 | eyeloop
31 | ```
32 |
33 | > Note: Adjust the width, height and x, y coordinates of the visual stimulus (inside calibration.py and closed_loop.py) to fit your setup.
34 |
35 | This returns a calibration value (saved to a file named ```{time_stamp}._cal_```). Now, load the closed-loop *Extractor* and pass this value as its first parameter:
36 | ```python
37 | ENGINE.load_extractors(ClosedLoop_Extractor(_CAL_))
38 | ```
39 |
40 | For example,
41 |
42 | ```python
43 | ENGINE.load_extractors(ClosedLoop_Extractor(794.58))
44 | ```
45 |
46 | That's it! Enjoy your experiment.
47 |
48 | ```
49 | eyeloop
50 | ```
51 |
52 | > Note: If you're using a Vimba-based camera, use command ```eyeloop --importer vimba```
53 |
--------------------------------------------------------------------------------
/examples/open-loop/plot_loop.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | from eyeloop.utilities.parser import Parser
3 | import matplotlib.pyplot as plt
4 | import matplotlib.gridspec as gridspec
5 | import matplotlib.ticker as mticker
6 |
7 |
8 | class Loop_parser(Parser):
9 | def __init__(self):
10 | super().__init__(animal='human')
11 | self.set_style()
12 |
13 | def custom_lower_panel_ticks(self, y, pos) -> str:
14 | """
15 | Changes brightness-ticks to 'Dark' and 'Light'.
16 | """
17 |
18 | if y == 0:
19 | return "Light"
20 | elif y == 1:
21 | return "Dark"
22 |
23 | def set_style(self):
24 | """
25 | Sets the overall style of the plots.
26 | """
27 |
28 | self.color = ["k", "orange", "b", "g", "red", "purple"]
29 | plt.rcParams.update({'font.family': "Arial"})
30 | plt.rcParams.update({'font.weight': "regular"})
31 | plt.rcParams.update({'axes.linewidth': 1})
32 |
33 | def segmentize(self, key: str) -> np.ndarray:
34 | """
35 | Segmentizes the data log based on a key signal, such as a trigger.
36 | """
37 |
38 | segments = []
39 | for index, entry in enumerate(self.data):
40 | if key in entry and entry[key] == 1:
41 | segments.append(index)
42 | return np.array(segments)
43 |
44 | def plot_open_loop(self, rows: int = 2, columns: int = 3) -> None:
45 | """
46 | Retrieves and parses the open-loop demo data set.
47 | """
48 |
49 | print("Select the open-loop data log demo.")
50 | self.load_log()
51 |
52 | print("Computing pupil area.")
53 | pupil_area = self.compute_area()
54 |
55 | print("Extracting monitor brightness.")
56 | monitor_brightness = self.extract_unique_key("open_looptest")
57 |
58 | print("Extracting time-stamps.")
59 | time = self.extract_time()
60 |
61 | print("Segmentizing trial based on 'trigger' entries.")
62 | _segments = self.segmentize("trigger")
63 |
64 | print("Preparing {}x{} grid plot.".format(rows, columns))
65 | fig = plt.figure(figsize=(5, 4))
66 | fig.tight_layout()
67 | main_grid = gridspec.GridSpec(columns, rows, hspace=1, wspace=0.3)
68 |
69 | margin = 50
70 | for grid_index, _ in enumerate(_segments):
71 | segment_index = grid_index * 2
72 | if segment_index == len(_segments) or grid_index == rows * columns:
73 | break
74 |
75 | # We extend each segment by a margin.
76 | # We define the pupil area and monitor brightness values based on this 'padded' segment.
77 | start_crop = _segments[segment_index] - margin
78 | end_crop = _segments[segment_index + 1] + margin
79 | pupil_area_segment = pupil_area[start_crop: end_crop]
80 | monitor_brightness_segment = monitor_brightness[start_crop: end_crop]
81 |
82 | # We extend the time-stamps similarly.
83 | # However, to align the time-line to the segment's start, we add the margin to 'time-zero'.
84 | time_segment = time[start_crop: end_crop]
85 | time_zero = time_segment[margin]
86 | segment_duration = [entry - time_zero for entry in time_segment]
87 |
88 | # We define a 2x1 grid for the pupil area plot and the monitor brightness plot.
89 | segment_grid = gridspec.GridSpecFromSubplotSpec(2, 1, subplot_spec=main_grid[grid_index], hspace=0.0,
90 | height_ratios=[3, 2])
91 |
92 | # We add the upper panel and the lower panel.
93 | upper_panel = fig.add_subplot(segment_grid[0])
94 | lower_panel = fig.add_subplot(segment_grid[1], sharex=upper_panel)
95 |
96 | # We fix some graphical details, such as removing axes spines.
97 | lower_panel.axis(ymin=-.3, ymax=1.3)
98 | lower_panel.yaxis.set_major_formatter(mticker.FuncFormatter(self.custom_lower_panel_ticks))
99 | lower_panel.spines['right'].set_visible(False)
100 | lower_panel.yaxis.set_ticks_position('left')
101 |
102 | upper_panel.spines['right'].set_visible(False)
103 | upper_panel.spines['top'].set_visible(False)
104 | upper_panel.yaxis.set_ticks_position('left')
105 | upper_panel.xaxis.set_ticks_position('bottom')
106 |
107 | # Finally, we plot the data.
108 | upper_panel.plot(segment_duration, pupil_area_segment, self.color[grid_index], linewidth=2)
109 | lower_panel.plot(segment_duration, monitor_brightness_segment, "k", linewidth=1)
110 |
111 | print("Showing data plots.")
112 | plt.show()
113 |
114 |
115 | parser = Loop_parser()
116 | parser.plot_open_loop()
117 |
--------------------------------------------------------------------------------
/eyeloop/__init__.py:
--------------------------------------------------------------------------------
1 | from eyeloop import config
2 | from eyeloop import constants
3 | from eyeloop import engine
4 | from eyeloop import extractors
5 | from eyeloop import guis
6 | from eyeloop import importers
7 | from eyeloop import run_eyeloop
8 | from eyeloop import utilities
9 |
10 | __all__ = [
11 | "constants",
12 | "engine",
13 | "extractors",
14 | "guis",
15 | "importers",
16 | "utilities",
17 | "run_eyeloop",
18 | "config"
19 | ]
20 |
--------------------------------------------------------------------------------
/eyeloop/config.py:
--------------------------------------------------------------------------------
1 | version = "0.35-beta"
2 | importer = 0
3 | eyeloop = 0
4 | engine = 0
5 | arguments = 0
6 | file_manager = 0
7 | graphical_user_interface = 0
8 |
9 | #blink = 142.08
10 | import numpy as np
11 | blink = np.zeros(300, dtype=np.float64)
12 |
13 | blink_i = 0
14 |
--------------------------------------------------------------------------------
/eyeloop/constants/README.md:
--------------------------------------------------------------------------------
1 | # Constants #
2 |
3 |
5 |
6 | The engine processes each frame of the video sequentially. First, the user selects the corneal reflections, then the pupil. The frame is binarized, filtered, and smoothed by a gaussian kernel. Then, the engine utilizes a walk-out algorithm to detect contours. This produces a matrix of points, which is filtered to discard bad matches. Using the corneal reflections, any overlap between the corneal reflections and the pupil is removed. Finally, the shape is parameterized by a fitting model: either an ellipsoid (suitable for rodents, cats, etc.), or a circle model (human, non-human primates, rodents, etc.). The target species is easily changed:
7 | ```
8 | python eyeloop/run_eyeloop.py --model circular/ellipsoid
9 | ```
10 | Lastly, the data is formatted as JSON and passed to all modules, e.g., for rendering, data acquisition, or experiments.
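
A minimal sketch of the preprocessing described above, i.e., binarization followed by gaussian smoothing (threshold and kernel size are illustrative; the actual implementation lives in the *Shape* processor, processor.py):
```python
import cv2

def preprocess(frame, threshold, kernel=(5, 5)):
    # Binarize the grayscale frame, then smooth it with a gaussian kernel
    # before the walk-out contour detection (illustrative parameters).
    _, binary = cv2.threshold(frame, threshold, 255, cv2.THRESH_BINARY)
    return cv2.GaussianBlur(binary, kernel, 0)
```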
11 |
12 | ## Shape processors ##
13 | EyeLoop's engine communicates with the *Shape* class, which processes the walkout contour detection. Accordingly, at least two *Shape*'s are defined by the instantiator, one for the pupil and *n* for the corneal reflections:
14 |
15 | ```python
16 | class Engine:
17 | def __init__(self, ...):
18 | max_cr_processor = 3 #max number of corneal reflections
19 | self.cr_processors = [Shape(self, type = 2) for _ in range(max_cr_processor)]
20 | self.pupil_processor = Shape(self)
21 | ...
22 |
23 | ```
24 |
--------------------------------------------------------------------------------
/eyeloop/engine/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/simonarvin/eyeloop/cd22fb79e8d2c186ed23e2b15cef57887bbdeffe/eyeloop/engine/__init__.py
--------------------------------------------------------------------------------
/eyeloop/engine/engine.py:
--------------------------------------------------------------------------------
1 | import logging
2 | import time
3 | from typing import Optional
4 | from os.path import dirname, abspath
5 | import glob, os
6 |
7 | import cv2
8 |
9 | import eyeloop.config as config
10 | from eyeloop.constants.engine_constants import *
11 | from eyeloop.engine.processor import Shape
12 | from eyeloop.utilities.general_operations import to_int, tuple_int
13 |
14 | logger = logging.getLogger(__name__)
15 | PARAMS_DIR = f"{dirname(dirname(abspath(__file__)))}/engine/params"
16 |
17 | class Engine:
18 | def __init__(self, eyeloop):
19 |
20 | self.live = True # Access this to check if Core is running.
21 |
22 | self.eyeloop = eyeloop
23 | self.model = config.arguments.model # Used for assigning appropriate circular model.
24 |
25 | self.extractors = []
26 |
27 | if config.arguments.tracking == 0: # Recording mode. --tracking 0
28 | self.iterate = self.record
29 | else: # Tracking mode. --tracking 1 (default)
30 | self.iterate = self.track
31 |
32 | self.angle = 0
33 |
34 | self.cr_processor_1 = Shape(type = 2, n = 1)
35 | self.cr_processor_2 = Shape(type = 2, n = 2)
36 | self.pupil_processor = Shape()
37 |
38 | # Via "gui", assign "refresh_pupil" to function "processor.refresh_source"
39 | # when the pupil has been selected.
40 | self.refresh_pupil = lambda x: None
41 |
42 | def load_extractors(self, extractors: list = None) -> None:
43 | if extractors is None:
44 | extractors = []
45 | logger.info(f"loading extractors: {extractors}")
46 | self.extractors = extractors
47 |
48 |
49 | def run_extractors(self) -> None:
50 | """
51 | Calls all extractors at the end of each time-step.
52 | Assign additional extractors to core engine via eyeloop.py.
53 | """
54 |
55 | for extractor in self.extractors:
56 | try:
57 | extractor.fetch(self)
58 | except Exception as e:
59 | print("Error in module class: {}".format(type(extractor).__name__))
60 | print("Error message: ", e)
61 |
62 | def record(self) -> None:
63 | """
64 | Runs Core engine in record mode. Timestamps all frames in data output log.
65 | Runs gui update_record function with no tracking.
66 | Argument -s 1
67 | """
68 |
69 | timestamp = time.time()
70 |
71 | self.dataout = {
72 | "time": timestamp
73 | }
74 |
75 | config.graphical_user_interface.update_record(self.source)
76 |
77 | self.run_extractors()
78 |
79 | def arm(self, width, height, image) -> None:
80 |
81 | self.width, self.height = width, height
82 | config.graphical_user_interface.arm(width, height)
83 | self.center = (width//2, height//2)
84 |
85 | self.iterate(image)
86 |
87 | if config.arguments.blinkcalibration != "":
88 | config.blink = np.load(config.arguments.blinkcalibration)
89 | self.blink_sampled = lambda _:None
90 | logger.info("(success) blink calibration loaded")
91 |
92 | if config.arguments.clear == False or config.arguments.params != "":
93 |
94 | try:
95 | if config.arguments.params != "":
96 | latest_params = max(glob.glob(config.arguments.params), key=os.path.getctime)
97 |
98 | print(config.arguments.params + " loaded")
99 |
100 | else:
101 | latest_params = max(glob.glob(PARAMS_DIR + "/*.npy"), key=os.path.getctime)
102 |
103 | params_ = np.load(latest_params, allow_pickle=True).tolist()
104 |
105 | self.pupil_processor.binarythreshold, self.pupil_processor.blur = params_["pupil"][0], params_["pupil"][1]
106 |
107 | self.cr_processor_1.binarythreshold, self.cr_processor_1.blur = params_["cr1"][0], params_["cr1"][1]
108 | self.cr_processor_2.binarythreshold, self.cr_processor_2.blur = params_["cr2"][0], params_["cr2"][1]
109 |
110 | print("(!) Parameters reloaded. Run --clear 1 to prevent this.")
111 |
112 |
113 | param_dict = {
114 | "pupil" : [self.pupil_processor.binarythreshold, self.pupil_processor.blur],
115 | "cr1" : [self.cr_processor_1.binarythreshold, self.cr_processor_1.blur],
116 | "cr2" : [self.cr_processor_2.binarythreshold, self.cr_processor_2.blur]
117 | }
118 |
119 | logger.info(f"loaded parameters:\n{param_dict}")
120 |
121 | return
122 |
123 | except:
124 | pass
125 |
126 |
127 | filtered_image = image[np.logical_and((image < 220), (image > 30))]
128 | self.pupil_processor.binarythreshold = np.min(filtered_image) * 1 + np.median(filtered_image) * .1#+ 50
129 | self.cr_processor_1.binarythreshold = self.cr_processor_2.binarythreshold = float(np.min(filtered_image)) * .7 + 150
130 |
131 | param_dict = {
132 | "pupil" : [self.pupil_processor.binarythreshold, self.pupil_processor.blur],
133 | "cr1" : [self.cr_processor_1.binarythreshold, self.cr_processor_1.blur],
134 | "cr2" : [self.cr_processor_2.binarythreshold, self.cr_processor_2.blur]
135 | }
136 |
137 | logger.info(f"loaded parameters:\n{param_dict}")
138 |
139 |
140 | def blink_sampled(self, t:int = 1):
141 |
142 | if t == 1:
143 | if config.blink_i% 20 == 0:
144 | print(f"calibrating blink detector {round(config.blink_i/config.blink.shape[0]*100,1)}%")
145 | else:
146 | logger.info("(success) blink detection calibrated")
147 | path = f"{config.file_manager.new_folderpath}/blinkcalibration_{self.dataout['time']}.npy"
148 | np.save(path, config.blink)
149 | print("blink calibration file saved")
150 |
151 | def track(self, img) -> None:
152 | """
153 | Executes the tracking algorithm on the pupil and corneal reflections.
154 | First, blinking is analyzed.
155 | Second, corneal reflections are detected.
156 | Third, corneal reflections are inverted at pupillary overlap.
157 | Fourth, pupil is detected.
158 | Finally, data is logged and extractors are run.
159 | """
160 | mean_img = np.mean(img)
161 | try:
162 |
163 | config.blink[config.blink_i] = mean_img
164 | config.blink_i += 1
165 | self.blink_sampled(1)
166 |
167 | except IndexError:
168 | self.blink_sampled(0)
169 | self.blink_sampled = lambda _:None
170 | config.blink_i = 0
171 |
172 | self.dataout = {
173 | "time": time.time()
174 | }
175 |
176 | if np.abs(mean_img - np.mean(config.blink[np.nonzero(config.blink)])) > 10:
177 |
178 | self.dataout["blink"] = 1
179 | self.pupil_processor.fit_model.params = None
180 | logger.info("Blink detected.")
181 | else:
182 |
183 | self.pupil_processor.track(img)
184 |
185 | self.cr_processor_1.track(img)
186 | #self.cr_processor_2.track(img.copy(), img)
187 |
188 |
189 | try:
190 | config.graphical_user_interface.update(img)
191 | except Exception as e:
192 | logger.exception("Did you assign the graphical user interface (GUI) correctly? Attempting to release()")
193 | self.release()
194 | return
195 |
196 | self.run_extractors()
197 |
198 | def activate(self) -> None:
199 | """
200 | Activates all extractors.
201 | The extractor activate() function is optional.
202 | """
203 |
204 | for extractor in self.extractors:
205 | try:
206 | extractor.activate()
207 | except AttributeError:
208 | logger.warning(f"Extractor {extractor} has no activate() method")
209 |
210 | def release(self) -> None:
211 | """
212 | Releases/deactivates all running process, i.e., importers, extractors.
213 | """
214 | try:
215 | config.graphical_user_interface.out.release()
216 | except:
217 | pass
218 |
219 | param_dict = {
220 | "pupil" : [self.pupil_processor.binarythreshold, self.pupil_processor.blur],
221 | "cr1" : [self.cr_processor_1.binarythreshold, self.cr_processor_1.blur],
222 | "cr2" : [self.cr_processor_2.binarythreshold, self.cr_processor_2.blur]
223 | }
224 |
225 | path = f"{config.file_manager.new_folderpath}/params_{self.dataout['time']}.npy"
226 | np.save(path, param_dict)
227 | print("Parameters saved")
228 |
229 | self.live = False
230 | config.graphical_user_interface.release()
231 |
232 |
233 | for extractor in self.extractors:
234 | try:
235 | extractor.release(self)
236 | except AttributeError:
237 | logger.warning(f"Extractor {extractor} has no release() method")
238 | else:
239 | pass
240 |
241 | config.importer.release()
242 |
--------------------------------------------------------------------------------
/eyeloop/engine/models/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/simonarvin/eyeloop/cd22fb79e8d2c186ed23e2b15cef57887bbdeffe/eyeloop/engine/models/__init__.py
--------------------------------------------------------------------------------
/eyeloop/engine/models/circular.py:
--------------------------------------------------------------------------------
1 | # original script: https://github.com/AlliedToasters/circle-fit
2 | # original script author: Michael Klear/AlliedToasters
3 | # hyper-fit doi: https://doi.org/10.1016/j.csda.2010.12.012
4 | # hyper-fit authors: Kenichi Kanatani & Prasanna Rangarajan
5 |
6 | import numpy as np
7 | np.seterr('raise')
8 |
9 | from eyeloop.utilities.general_operations import tuple_int
10 |
11 |
12 | class Circle:
13 | def __init__(self, processor) -> None:
14 | self.shape_processor = processor
15 | self.fit = self.hyper_fit
16 | self.params = None
17 |
18 | def hyper_fit(self, r) -> tuple:
19 | """
20 | Fits coords to circle using hyperfit algorithm.
21 | Inputs:
22 | - coords, list or numpy array with len>2 of the form:
23 | [
24 | [x_coord, y_coord],
25 | ...,
26 | [x_coord, y_coord]
27 | ]
28 | or numpy array of shape (n, 2)
29 | Outputs:
30 | - (x, y) : coordinates of the fitted circle center, returned as a tuple
31 | - the full parameters ((x, y), R, R, 0) are stored in self.params,
32 | where R is the radius of the fitted circle
33 | - returns False if the fit is degenerate (singular moment matrix)
34 | """
35 | X, Y = r[:,0], r[:,1]
36 | n = X.shape[0]
37 |
38 | mean_X = np.mean(X)
39 | mean_Y = np.mean(Y)
40 | Xi = X - mean_X
41 | Yi = Y - mean_Y
42 | Xi_sq = Xi**2
43 | Yi_sq = Yi**2
44 | Zi = Xi_sq + Yi_sq
45 |
46 | # compute moments
47 |
48 | Mxy = np.sum(Xi * Yi) / n
49 | Mxx = np.sum(Xi_sq) / n
50 | Myy = np.sum(Yi_sq) / n
51 | Mxz = np.sum(Xi * Zi) / n
52 | Myz = np.sum(Yi * Zi) / n
53 |
54 | Mz = Mxx + Myy
55 |
56 | # finding the root of the characteristic polynomial
57 |
58 | det = (Mxx * Myy - Mxy**2)*2
59 | #print(det)
60 | try:
61 | Xcenter = (Mxz * Myy - Myz * Mxy)/ det
62 | Ycenter = (Myz * Mxx - Mxz * Mxy)/ det
63 | except:
64 | return False
65 |
66 | x = Xcenter + mean_X
67 | y = Ycenter + mean_Y
68 | r = np.sqrt(Xcenter ** 2 + Ycenter ** 2 + Mz)
69 | self.params = ((x, y), r, r, 0)
70 | #self.center, self.width, self.height, self.angle = self.params
71 |
72 | return self.params[0]
73 |
--------------------------------------------------------------------------------
/eyeloop/engine/models/ellipsoid.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | np.seterr('raise')
3 |
4 | from eyeloop.utilities.general_operations import tuple_int
5 |
6 | """Demonstration of least-squares fitting of ellipses
7 | __author__ = "Ben Hammel, Nick Sullivan-Molina"
8 | __credits__ = ["Ben Hammel", "Nick Sullivan-Molina"]
9 | __maintainer__ = "Ben Hammel"
10 | __email__ = "bdhammel@gmail.com"
11 | __status__ = "Development"
12 | Requirements
13 | ------------
14 | Python 2.X or 3.X
15 | np
16 | matplotlib
17 | References
18 | ----------
19 | (*) Halir, R., Flusser, J.: 'Numerically Stable Direct Least Squares
20 | Fitting of Ellipses'
21 | (**) http://mathworld.wolfram.com/Ellipse.html
22 | (***) White, A. McHale, B. 'Faraday rotation data analysis with least-squares
23 | elliptical fitting'
24 | """
25 |
26 |
27 | class Ellipse:
28 | def __init__(self, processor):
29 | self.shape_processor = processor
30 | self.params = None
31 |
32 | def fit(self, r):
33 | """Least Squares fitting algorithm
34 | Theory taken from (*)
35 | Solving equation Sa=lCa. with a = |a b c d f g> and a1 = |a b c>
36 | a2 = |d f g>
37 | Args
38 | ----
39 | data (list:list:float): list of two lists containing the x and y data of the
40 | ellipse. of the form [[x1, x2, ..., xi],[y1, y2, ..., yi]]
41 | Returns
42 | ------
43 | coef (list): list of the coefficients describing an ellipse
44 | [a,b,c,d,f,g] corresponding to ax**2+2bxy+cy**2+2dx+2fy+g
45 | """
46 |
47 | x, y = r[:,0], r[:,1]
48 |
49 |
50 | # Quadratic part of design matrix [eqn. 15] from (*)
51 |
52 | D1 = np.mat(np.vstack([x ** 2, x * y, y ** 2])).T
53 | # Linear part of design matrix [eqn. 16] from (*)
54 | D2 = np.mat(np.vstack([x, y, np.ones(len(x))])).T
55 |
56 | # forming scatter matrix [eqn. 17] from (*)
57 | S1 = D1.T * D1
58 | S2 = D1.T * D2
59 | S3 = D2.T * D2
60 |
61 | # Constraint matrix [eqn. 18]
62 | C1 = np.mat('0. 0. 2.; 0. -1. 0.; 2. 0. 0.')
63 |
64 | # Reduced scatter matrix [eqn. 29]
65 | M = C1.I * (S1 - S2 * S3.I * S2.T)
66 |
67 | # M*|a b c >=l|a b c >. Find eigenvalues and eigenvectors from this equation [eqn. 28]
68 | eval, evec = np.linalg.eig(M)
69 |
70 | # eigenvector must meet constraint 4ac - b^2 > 0 to be valid.
71 | cond = 4 * np.multiply(evec[0, :], evec[2, :]) - np.power(evec[1, :], 2)
72 | a1 = evec[:, np.nonzero(cond.A > 0)[1]]
73 | # self.fitscore=eval[np.nonzero(cond.A > 0)[1]]
74 |
75 | # |d f g> = -S3^(-1)*S2^(T)*|a b c> [eqn. 24]
76 | #a2 = -S3.I * S2.T * a1
77 |
78 | # eigenvectors |a b c d f g>
79 | self.coef = np.vstack([a1, -S3.I * S2.T * a1])
80 |
81 |
82 | """finds the important parameters of the fitted ellipse
83 |
84 | Theory taken from http://mathworld.wolfram.com/Ellipse.html
85 | Args
86 | -----
87 | coef (list): list of the coefficients describing an ellipse
88 | [a,b,c,d,f,g] corresponding to ax**2+2bxy+cy**2+2dx+2fy+g
89 | Returns
90 | _______
91 | center (List): of the form [x0, y0]
92 | width (float): major axis
93 | height (float): minor axis
94 | phi (float): rotation of major axis from the x-axis in radians
95 | """
96 |
97 | # eigenvectors are the coefficients of an ellipse in general form
98 | # a*x^2 + 2*b*x*y + c*y^2 + 2*d*x + 2*f*y + g = 0 [eqn. 15) from (**) or (***)
99 | a = self.coef[0, 0]
100 | b = self.coef[1, 0] / 2.
101 | c = self.coef[2, 0]
102 | d = self.coef[3, 0] / 2.
103 | f = self.coef[4, 0] / 2.
104 | g = self.coef[5, 0]
105 |
106 | # if (a - c) == 0:
107 | # return True
108 |
109 | # finding center of ellipse [eqn.19 and 20] from (**)
110 | af = a * f
111 | cd = c * d
112 | bd = b * d
113 | ac = a * c
114 |
115 | b_sq = b ** 2.
116 | z_ = (b_sq - ac)
117 | x0 = (cd - b * f) / z_#(b ** 2. - a * c)
118 | y0 = (af - bd) / z_#(b ** 2. - a * c)
119 |
120 | # Find the semi-axes lengths [eqn. 21 and 22] from (**)
121 | ac_subtr = a - c
122 | numerator = 2 * (af * f + cd * d + g * b_sq - 2 * bd * f - ac * g)
123 | denom = ac_subtr * np.sqrt(1 + 4 * b_sq / ac_subtr**2)
124 | denominator1, denominator2 = (np.array([-denom, denom], dtype=np.float64) - c - a) * z_
125 |
126 | width = np.sqrt(numerator / denominator1)
127 | height = np.sqrt(numerator / denominator2)
128 |
129 | phi = .5 * np.arctan((2. * b) / ac_subtr)
130 | self.params = ((x0, y0), width, height, np.rad2deg(phi) % 360)
131 |
132 | #self.center, self.width, self.height, self.angle = self.params
133 | return self.params[0]
134 |
--------------------------------------------------------------------------------
/eyeloop/engine/params/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/simonarvin/eyeloop/cd22fb79e8d2c186ed23e2b15cef57887bbdeffe/eyeloop/engine/params/__init__.py
--------------------------------------------------------------------------------
/eyeloop/extractors/DAQ.py:
--------------------------------------------------------------------------------
1 | import json
2 | import logging
3 | from pathlib import Path
4 |
5 |
6 | class DAQ_extractor:
7 | def __init__(self, output_dir):
8 | self.output_dir = output_dir
9 | self.datalog_path = Path(output_dir, f"datalog.json")
10 | self.file = open(self.datalog_path, "a")
11 |
12 | def activate(self):
13 | return
14 |
15 | def fetch(self, core):
16 | try:
17 | self.file.write(json.dumps(core.dataout) + "\n")
18 |
19 | except ValueError:
20 | pass
21 |
22 | def release(self, core):
23 | try:
24 | self.file.write(json.dumps(core.dataout) + "\n")
25 | self.file.close()
26 | except ValueError:
27 | pass
28 | # note: no further fetch() here; the datalog file is already closed.
29 | #return
30 | #logging.debug("DAQ_extractor.release() called")
31 |
32 | # def set_digital_line(channel, value):
33 | # digital_output = PyDAQmx.Task()
34 | # digital_output.CreateDOChan(channel,'do', DAQmxConstants.DAQmx_Val_ChanPerLine)
35 | # digital_output.WriteDigitalLines(1,
36 | # True,
37 | # 1.0,
38 | # DAQmxConstants.DAQmx_Val_GroupByChannel,
39 | # numpy.array([int(value)], dtype=numpy.uint8),
40 | # None,
41 | # None)
42 | # digital_output.ClearTask()
43 |
--------------------------------------------------------------------------------
/eyeloop/extractors/README.md:
--------------------------------------------------------------------------------
1 | # Extractors #
2 |
3 |
4 |
5 |
6 |
7 | *Extractors* form the *executive branch* of EyeLoop: Experiments, such as open- or closed-loops, are designed using *Extractors*. Similarly, data acquisition utilizes the *Extractor* class. So how does it work?
8 |
9 | > Check [Examples](https://github.com/simonarvin/eyeloop/blob/master/examples) for full Extractors.
10 |
11 | ## How the *Engine* handles *Extractors* ##
12 |
13 | *Extractors* are utilized by EyeLoop's *Engine* via the *Extractor array*. Users must first *load* all extractors into the *Engine* via *run_eyeloop.py*:
14 | ```python
15 | class EyeLoop:
16 | def __init__(self) -> None:
17 | extractors = [...]
18 | ENGINE = Engine(...)
19 | ENGINE.load_extractors(extractors)
20 | ```
21 |
22 | The *Extractor array* is *activated* by the *Engine* when the trial is initiated:
23 | ```python
24 | class Engine:
25 | def activate(self) -> None:
26 | for extractor in self.extractors:
27 | extractor.activate()
28 | ```
29 |
30 | Finally, the *Extractor array* is loaded by the *Engine* at each time-step:
31 | ```python
32 | def run_extractors(self) -> None:
33 | for extractor in self.extractors:
34 | extractor.fetch(self)
35 | ```
36 |
37 | At the termination of the *Engine*, the *Extractor array* is *released*:
38 | ```python
39 | def release(self) -> None:
40 | for extractor in self.extractors:
41 | extractor.release(self)
42 | ```
43 |
44 | ## Structure ##
45 | The *Extractor* class contains four functions:
46 | ### 1: ```__init__``` ###
47 |
48 | The instantiator sets class variables as soon as the Extractor array is generated, i.e., before the trial has begun.
49 | ```python
50 | class Extractor:
51 | def __init__(self, ...):
52 | (set variables)
53 | ```
54 |
55 | ### 2: ```activate``` ###
56 |
57 | The ```activate()``` function is called once when the trial is started.
58 | ```python
59 | ...
60 | def activate(self):
61 | ...
62 | ```
63 |
64 | An experiment *Extractor* might activate the experiment when the trial is initiated, for example by resetting timers:
65 | ```python
66 | ...
67 | def activate(self) -> None:
68 | self.start = time.time()
69 | ```
70 |
71 | ### 3: ```fetch``` ###
72 | The ```fetch()``` function is called by the *Engine* at every time-step and gains access to all eye-tracking data in real-time via the *core* pointer (see the main README).
73 |
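As a concrete illustration, a data-logging ```fetch()``` might simply copy the engine's current output at every time-step (```self.log``` is a hypothetical list created in ```__init__```):
```python
...
    def fetch(self, engine) -> None:
        # engine.dataout holds the current time-step's eye-tracking data.
        self.log.append(dict(engine.dataout))
```
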
--------------------------------------------------------------------------------
/eyeloop/guis/minimum/README.md:
--------------------------------------------------------------------------------
 7 |
8 | **Overview**\
9 | *minimum-gui* consists of two panels. The left panel contains the source video sequence and an eye-tracking preview. The right panel contains the binary filter of the pupil (top) and corneal reflections (bottom).
10 |
11 | ## Getting started ##
12 | - **A**: Select the corneal reflections by hovering and key-pressing 2, 3 or 4. This initiates the tracking algorithm, which is rendered in the preview panel. Adjust binarization (key-press W/S) and gaussian (key-press E/D) parameters to improve detection. To switch between corneal reflections, key-press the corresponding index (for example, key-press 2 to modify corneal reflection "2").
13 | - **B**: Select the pupil by hovering and key-pressing 1. Similar to the corneal reflections, this initiates tracking. Adjust binarization (key-press R/F) and gaussian parameters (key-press T/G) for optimal detection.
14 | - **C**: To initiate the eye-tracking trial, key-press Z and confirm by key-pressing Y.
15 |
16 | > Key-press "q" to stop tracking.
17 |
18 | ## Optional ##
19 | ### Rotation ###
20 |