├── .gitignore ├── Images ├── Tenere_Muse-EEG_LXStudio.png └── tenere-raspberrypi-reference.png ├── LICENSE ├── README.md ├── muse-sigproc ├── .DS_Store ├── Images │ └── muselab.png ├── README.md ├── environment.yml ├── live_utils.py ├── museProc_tenere.py └── muselab_configurationTenere.json ├── muse-sock ├── README.md ├── dump_osc.py ├── muse-listener.py ├── muse-reconnect ├── muse-reconnect-status ├── muse-sock.py ├── muse │ ├── __init__.py │ └── muse.py ├── museStatus.py └── send_osc.py ├── sensors ├── README.md ├── grove-oled.py └── grove-osc.py ├── tmux_start.sh └── voicecontrol └── README.md /.gitignore: -------------------------------------------------------------------------------- 1 | # Byte-compiled / optimized / DLL files 2 | __pycache__/ 3 | *.py[cod] 4 | *$py.class 5 | 6 | # C extensions 7 | *.so 8 | 9 | # Distribution / packaging 10 | .Python 11 | env/ 12 | build/ 13 | develop-eggs/ 14 | dist/ 15 | downloads/ 16 | eggs/ 17 | .eggs/ 18 | lib/ 19 | lib64/ 20 | parts/ 21 | sdist/ 22 | var/ 23 | wheels/ 24 | *.egg-info/ 25 | .installed.cfg 26 | *.egg 27 | 28 | # PyInstaller 29 | # Usually these files are written by a python script from a template 30 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 31 | *.manifest 32 | *.spec 33 | 34 | # Installer logs 35 | pip-log.txt 36 | pip-delete-this-directory.txt 37 | 38 | # Unit test / coverage reports 39 | htmlcov/ 40 | .tox/ 41 | .coverage 42 | .coverage.* 43 | .cache 44 | nosetests.xml 45 | coverage.xml 46 | *.cover 47 | .hypothesis/ 48 | 49 | # Translations 50 | *.mo 51 | *.pot 52 | 53 | # Django stuff: 54 | *.log 55 | local_settings.py 56 | 57 | # Flask stuff: 58 | instance/ 59 | .webassets-cache 60 | 61 | # Scrapy stuff: 62 | .scrapy 63 | 64 | # Sphinx documentation 65 | docs/_build/ 66 | 67 | # PyBuilder 68 | target/ 69 | 70 | # Jupyter Notebook 71 | .ipynb_checkpoints 72 | 73 | # pyenv 74 | .python-version 75 | 76 | # celery beat schedule file 77 | celerybeat-schedule 78 | 79 | # SageMath parsed files 80 | *.sage.py 81 | 82 | # dotenv 83 | .env 84 | 85 | # virtualenv 86 | .venv 87 | venv/ 88 | ENV/ 89 | 90 | # Spyder project settings 91 | .spyderproject 92 | .spyproject 93 | 94 | # Rope project settings 95 | .ropeproject 96 | 97 | # mkdocs documentation 98 | /site 99 | 100 | # mypy 101 | .mypy_cache/ 102 | -------------------------------------------------------------------------------- /Images/Tenere_Muse-EEG_LXStudio.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/treeoftenere/Interactivity/4a5ba0cb1617eaa08325be543eef789c209fb8ed/Images/Tenere_Muse-EEG_LXStudio.png -------------------------------------------------------------------------------- /Images/tenere-raspberrypi-reference.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/treeoftenere/Interactivity/4a5ba0cb1617eaa08325be543eef789c209fb8ed/Images/tenere-raspberrypi-reference.png -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2017 Tree of Tenere 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, 
sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

# Interactivity and Sensor/Effector integration
A set of scripts to integrate streaming data with LXStudio over OSC for controlling the Tree of Tenere.


This project aims to provide a reference system and suite of sensor integrations for TENERE, written mostly in Python. The initial platform uses a Raspberry Pi (3 Model B, tested) to receive streaming raw data from various sensors, including:

Works on PC/Mac or Raspberry Pi:
* Muse headband (https://choosemuse.com, Bluetooth LE)
* USB microphone


Raspberry Pi specific parts list:
* Heartbeat sensor (http://pulsesensor.com, analog input)
* Grove expansion shield (https://www.dexterindustries.com/grovepi/, I2C)
* Pimoroni Blinkt (https://shop.pimoroni.com/products/blinkt, SPI) - equivalent to 1 of Tenere's leaves
* Grove accelerometer (http://wiki.seeed.cc/Grove-3-Axis_Digital_Accelerometer-16g/, I2C)
* Grove OLED display (http://wiki.seeed.cc/Grove-OLED_Display_0.96inch/, I2C)
* Grove I2C Hub (http://wiki.seeed.cc/Grove-I2C_Hub/)
* Grove Button (https://www.seeedstudio.com/Grove-Button-p-766.html)
* USB Microphone (https://kinobo.co.uk/)


For those that would just like to get 'er done, here is an Amazon wish list (please make sure to change the filter on the list to show both purchased and unpurchased items): http://a.co/fIXIPa8


![TenerePi](/Images/tenere-raspberrypi-reference.png)


# Sensor Integration
The following outlines the installation process for a Raspberry Pi 3 using the latest version of the Raspbian image (July 2017):

## Setting up the Raspberry Pi

* Start by following the installation instructions for downloading and writing the Raspbian image to an SD card: https://www.raspberrypi.org/documentation/installation/installing-images/
* The Raspbian Jessie with Desktop image has been tested: https://www.raspberrypi.org/downloads/raspbian/
* For Tenere, we suggest first configuring various options. Be sure to turn on I2C and SPI, set your locale and keyboard layout, set up networking, etc. Please follow the relevant guides here: https://www.raspberrypi.org/documentation/configuration/
* Don't forget to change the default password to something more secure (use the `raspi-config` tool)!!!
* Now, update all of the base packages and restart:

```
sudo apt-get update
sudo apt-get -y upgrade
sudo apt-get -y dist-upgrade
sudo apt-get -y install fail2ban
sudo shutdown -r now
```
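After the reboot, it can help to confirm that the I2C and SPI interfaces you enabled in `raspi-config` are actually available. A minimal sketch, assuming the default Raspberry Pi device paths:

```
#!/usr/bin/env python
# Quick sanity check that I2C and SPI were enabled via raspi-config.
# The device paths below are the Raspberry Pi defaults; adjust if yours differ.
import os

expected = {'I2C': '/dev/i2c-1', 'SPI': '/dev/spidev0.0'}

for name, path in expected.items():
    state = 'enabled' if os.path.exists(path) else 'missing (re-check raspi-config)'
    print('{}: {} -> {}'.format(name, path, state))
```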
From your home directory (`/home/pi`), let's create a directory to hold all of our software:
```
cd
mkdir SOFTWARE
cd SOFTWARE
```

At the end of the tutorial, you should have a directory structure that looks something like this:
```
pi@raspberrypi:~/SOFTWARE $ ls
GrovePi
grovepi-zero
liblsl
Interactivity
Pimoroni
Tenere
pi@raspberrypi:~/SOFTWARE $
```

* Next, let's install the libraries we are going to use and clone any additional repositories (you may not need all of these for your specific setup; this tutorial includes everything for our reference system):

```
sudo apt-get -y install vim nano git git-core cmake python-pip python-dev
```

## Raspberry Pi accessories

* Install the relevant libraries to enable the Blinkt LED strip:
```
sudo apt-get -y install python-rpi.gpio python3-rpi.gpio
sudo apt-get -y install python-psutil python3-psutil python-tweepy
sudo apt-get -y install pimoroni python-blinkt python3-blinkt
cd ~/SOFTWARE
mkdir Pimoroni
cd Pimoroni
git clone https://github.com/pimoroni/blinkt.git
cd blinkt/library
sudo python setup.py install
```

* Install the relevant libraries for the Grove Pi expansion board (https://www.dexterindustries.com/GrovePi/get-started-with-the-grovepi/setting-software/):
```
sudo apt-get -y install libi2c-dev python-serial i2c-tools python-smbus python3-smbus arduino minicom
cd ~/SOFTWARE
git clone https://github.com/DexterInd/GrovePi.git
cd GrovePi/Script
sudo chmod +x install.sh
sudo ./install.sh
pip install grovepi
cd ~/SOFTWARE
git clone https://github.com/initialstate/grovepi-zero.git
```

## Voice control with Jasper
```
cd ~/SOFTWARE/Interactivity/voicecontrol
```
Please follow the instructions at https://github.com/treeoftenere/Interactivity/tree/master/voicecontrol


## Pulse Sensor and other peripherals with the TenerePi
```
cd ~/SOFTWARE/Interactivity/sensors
```
Please follow the instructions at https://github.com/treeoftenere/Interactivity/tree/master/sensors


## Muse headband

* For Muse integration, several libraries are required. Please note, this setup is only valid for the Muse 2016 (or later) versions:
```
sudo apt-get -y install python-liblo python-matplotlib python-numpy python-scipy python3-scipy python-seaborn liblo-tools
pip install pygatt
pip install bitstring
pip install pexpect
```


Now turn on your Muse and let's figure out its Bluetooth (MAC) address:
```
sudo hcitool lescan
```

You should see something like this (please write down the hex address, as we will use it later to connect):
```
00:55:DA:B0:0B:61 Muse-0B61
```

Now let's get the Muse talking to LXStudio. This assumes you have the latest version of LXStudio running somewhere on your local network; that is, for testing, LXStudio can run on the same computer that the Muse connects to. However, our preference is to have the Muse stream data to the Raspberry Pi, and then have the Pi send this data over the network to a show control computer (typically a Mac or PC) dedicated to running LXStudio.
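To get a feel for what will travel over the wire, here is a minimal pyliblo sketch that sends a single hand-made OSC message to the show computer. The IP address, the port `7878` (Tenere's OSC input used later in this guide) and the `/muse/eeg` path are illustrative assumptions only; the real messages are produced by the scripts below.

```
#!/usr/bin/env python
# Send one illustrative OSC message with pyliblo.
# The target IP, port 7878 and the /muse/eeg path are assumptions for this
# example only; the actual streaming is done by muse-sock.py / muse-listener.py.
import liblo

target = liblo.Address('192.168.0.50', 7878)
# four made-up EEG-like values, one per Muse channel
liblo.send(target, '/muse/eeg', 830.0, 812.5, 845.2, 801.7)
print('sent test OSC message')
```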
To do this, first grab the latest version of Processing and install it for your desired platform (https://processing.org/download/).

Then clone the latest version of LXStudio (see more at: https://github.com/treeoftenere/Tenere)
```
git clone https://github.com/treeoftenere/Tenere.git
```

To get data from the Muse, we first use the Lab Streaming Layer library (previously installed, https://github.com/sccn/labstreaminglayer) to connect to the Muse over Bluetooth LE. We then have a script that reads the streaming messages from LSL and converts them to a format appropriate for OSC (http://opensoundcontrol.org/). The `liblo` python package then takes care of streaming this newly processed sensor stream in OSC format to our show computer running LXStudio.

To test, let's clone this repository and launch our sensor processing pipeline (a big shout-out to @brainwaves for creating this):
```
cd ~/SOFTWARE
git clone https://github.com/treeoftenere/Interactivity
cd Interactivity
cd muse-sock
```

Now, using the address we discovered previously, start the script that connects to the Muse (replace `00:55:DA:B0:0B:61` with the address of your Muse headband):
```
python muse-sock.py --address 00:55:DA:B0:0B:61
```

Then, in a second terminal, start our script for OSC streaming to LXStudio (replace `192.168.0.50` with the IP address of the machine where you are running Tenere's LXStudio):
```
cd ~/SOFTWARE/Interactivity/muse-sock
python muse-listener.py --oscip 192.168.0.50
```

Congratulations, you are now controlling Tenere with your brainwaves!!!!


![Tenere_Muse_LXStudio](/Images/Tenere_Muse-EEG_LXStudio.png)

### Setting up the Raspberry Pi to connect to a Muse on boot
```
sudo apt-get install tmux
```
Now edit `/etc/rc.local` and add the following line before `exit 0`:

```
sudo -u pi bash /home/pi/SOFTWARE/Interactivity/tmux_start.sh
```

`tmux_start.sh` (in https://github.com/treeoftenere/Interactivity) looks like this:
```
$ cat tmux_start.sh
#!/bin/bash
tmux new-session -d -s "musesock" "/home/pi/SOFTWARE/Interactivity/muse-sock/muse-reconnect 00:55:DA:B0:32:B1 10.0.0.2 9999"
```
Replace `00:55:DA:B0:32:B1` with the MAC address of your Muse, replace `10.0.0.2` with the IP address of the machine where you are running Tenere's LXStudio, and replace `9999` with the port that `muse-listener.py` is listening on (it is `9999` by default).

You should now be able to reboot the Pi and have it connect to your Muse on boot, and auto-reconnect if the connection drops.

If you want to check that the process is running, you can ssh into the Pi from another computer and attach to the tmux session:

```
Starfox:~ chris$ ssh pi@10.0.0.12

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Sun Aug 6 08:21:22 2017 from 10.0.0.2
pi@raspberrypi:~ $ tmux attach
```
If you have a Muse connected, your tmux session will show debug output like this:
```
('waited: 0.031495', 'dataloss: 0.0', 'avgloss: 0.000000')
('waited: 0.001427', 'dataloss: 0.0', 'avgloss: 0.000000')
[musesock] 0:bash* "raspberrypi" 15:43 07-Aug-17
```

Detach from the tmux session by pressing `^b` then `d`.
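To double-check from the listener side that packets are actually arriving, a small pyliblo sketch such as the one below can count incoming OSC messages for a few seconds. Port `9999` matches the default used above; stop `muse-listener.py` first, since only one process can bind the port.

```
#!/usr/bin/env python
# Count incoming OSC messages for a few seconds using pyliblo.
# Port 9999 matches the default muse-listener.py port used in this guide.
import time
import liblo

counts = {'n': 0}

def on_any_message(path, args):
    counts['n'] += 1

server = liblo.ServerThread(9999)
server.add_method(None, None, on_any_message)  # None/None matches any path/types
server.start()
time.sleep(5)
server.stop()
print('received {} OSC messages in 5 s'.format(counts['n']))
```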
### Setting up multiple Muses
Connect multiple Muses to Tenere by setting up one Raspberry Pi per Muse.
#### On each Raspberry Pi
Edit `tmux_start.sh` on each of the Pis so that it connects to a different Muse and sends to a different port on the Tenere LXstudio machine. For example, `Pi[0]` would send to port `9910` on the LX machine, `Pi[1]` would send to `9911`, and so on.

#### On the Tenere LXstudio host computer
Run in separate terminal sessions:
For `Pi[0]`:
```
python muse-listener.py --port 9910 --oscip 127.0.0.1 --oscport 7810
```
For `Pi[1]`:
```
python muse-listener.py --port 9911 --oscip 127.0.0.1 --oscport 7811
```
And so on...
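The port convention above is just a fixed offset per Pi: listener ports count up from `9910` and LXstudio OSC ports count up from `7810`. A small, hypothetical helper that prints the host-side command for each Muse:

```
#!/usr/bin/env python
# Hypothetical helper: print one muse-listener.py command per Pi,
# following the 9910+/7810+ port convention described above.
NUM_MUSES = 3
LISTEN_BASE = 9910  # port Pi[i] sends to on this machine
OSC_BASE = 7810     # port Tenere LXstudio listens on for Muse i

for i in range(NUM_MUSES):
    print('Pi[{}]: python muse-listener.py --port {} --oscip 127.0.0.1 --oscport {}'
          .format(i, LISTEN_BASE + i, OSC_BASE + i))
```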
### Tenere LXstudio
Tenere LXstudio will listen for Muse inputs on a different, sequentially numbered port for each Muse: one port per Muse.
(`commit b0cd179031c5277de0cb7bf161bf4b4e2f530473`)

Configure Tenere LXstudio to listen for multiple Muse inputs.
Look at lines `36:41` in `Tenere.pde`:
```
//Multiple Muses
//each muse sends to a different port numbered sequentially starting with musePortOffset
Muse[] muse;
UIMuse[] uiMuse;
int num_muses = 3;
int musePortOffset=7810;
```

Now when you run Tenere.pde, 3 Muses will appear as a collapsible section underneath `Sensors`. The Muses are named with the port associated with them.

There are three sets of sliders for each Muse:
* The first four correspond to the 4 EEG sensors of the Muse (ordered as: back left, front left, front right, back right). This is raw EEG data updated at 256 Hz, scaled to (0, 1). Raw EEG is very hard to use directly; signal processing for these will be added soon.
* The next three are the accelerometer, one for each axis, updated at 50 Hz and scaled to (0, 1).
* The last three are the gyroscope, one for each axis, updated at 50 Hz and scaled to (0, 1).

(`default.lxp` in `commit b0cd179031c5277de0cb7bf161bf4b4e2f530473` will show the Muse gyro output from 3 Muses on the Tree.)

### Visual feedback of a connected Muse using the Pimoroni Blinkt on the Pi

Install liblo and pyliblo to send and receive OSC messages on the Pi:

```
sudo apt-get install liblo-dev cython
sudo pip install pyliblo
```

Mount the Blinkt on the Pi, and run:
```
python museStatus.py
```
Now, in another terminal session on the Pi, run the Muse connection script:
```
./muse-reconnect-status 00:55:DA:B0:32:B1 192.168.1.118 9999
```


### MuseLSL
LSL (Lab Streaming Layer) is a standard for EEG research.
Follow these instructions to get muselsl working on the Pi
so that you can connect to a Muse and send the data out using LSL:
```
sudo pip install pylsl
```

Unfortunately, there is an error in the latest pylsl package: it distributes a library that is compiled for the wrong architecture. We can fix this with the following:

```
cd ~/SOFTWARE
mkdir liblsl
cd liblsl
wget http://sccn.ucsd.edu/pub/software/LSL/SDK/liblsl-C-C++-1.11.zip
unzip liblsl-C-C++-1.11.zip
sudo cp liblsl-bcm2708.so /usr/local/lib/python2.7/dist-packages/pylsl-1.10.5-py2.7.egg/pylsl/liblsl32.so
```

Get muselsl from https://github.com/alexandrebarachant/muse-lsl and follow Alexandre's instructions.


## Muse headband signal processing
Signal processing scripts that receive Muse data and create useful signals to use in Tenere LXstudio.

Check out:
[/muse-sigproc/README.md](/muse-sigproc/README.md)

--------------------------------------------------------------------------------
/muse-sigproc/.DS_Store:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/treeoftenere/Interactivity/4a5ba0cb1617eaa08325be543eef789c209fb8ed/muse-sigproc/.DS_Store
--------------------------------------------------------------------------------
/muse-sigproc/Images/muselab.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/treeoftenere/Interactivity/4a5ba0cb1617eaa08325be543eef789c209fb8ed/muse-sigproc/Images/muselab.png
--------------------------------------------------------------------------------
/muse-sigproc/README.md:
--------------------------------------------------------------------------------

# README #

This repo contains Tenere-specific signal processing for the Muse headband.
* Processing is done in Python and uses numpy and scipy.
* Data is sent in and out using pyliblo (OSC).

## What it currently does
* Filters the EEG data to remove noise and make it zero mean (gets rid of the approx. 800 offset in the data).
* Downsamples the EEG data to 80 Hz to make it easier on Tenere LX.
* Detects blinks and sends the info out as a binary variable and a filtered waveform.
* Detects saccadic eye movement and sends out the magnitudes as a smoothly varying signal.
* Has a simple, slowly varying metric called calm that increases when your eyes become still.
* Band powers for each channel (delta, theta, alpha, beta, gamma).
* Band ratios for each channel (beta/alpha, theta/alpha).
* EMG power for 1 channel (very controllable using a jaw clench).
* Headband status indicator (HSI) from the variance of each channel to judge signal quality; this can be sent back to museStatus.py running on the Pi to show signal quality on the Blinkt LED strip.
* low_freq output for each ear.
* Heart beat filter: extracts heart-related movement of the Muse. Works if you are sitting still (meditation), and works better when you wear the Muse around your neck like a pair of headphones.
* Breath filter: extracts breath-related movement of the Muse. Works best if you wear the Muse around your neck.
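Several of the outputs above (the band powers and band ratios) come from a windowed PSD. As a rough sketch of the idea, the snippet below averages PSD bins within each band, using the same band edges as `BAND_FREQS` in `live_utils.py`; it is an illustration, not the exact pipeline:

```
import numpy as np

# Band edges in Hz, matching BAND_FREQS in live_utils.py
BANDS = {'delta': (1, 4), 'theta': (4, 8), 'alpha': (7.5, 13),
         'beta': (13, 30), 'gamma': (30, 44)}

def band_powers(psd, freqs):
    """Mean PSD within each band; psd has shape (n_freq_bins, n_channels)."""
    return {name: psd[(freqs >= lo) & (freqs <= hi)].mean(axis=0)
            for name, (lo, hi) in BANDS.items()}

# Illustrative usage: random data standing in for a 1 s, 4-channel EEG window
fs, n = 256.0, 256
x = np.random.randn(n, 4)
spectrum = np.fft.rfft(x * np.hamming(n)[:, None], axis=0) / n
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
print(band_powers(np.abs(spectrum) ** 2, freqs))
```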
## Installation

The easiest way to get up and running is to install Anaconda Python 3.6
(`https://www.continuum.io/downloads`)

Create an Anaconda environment using the environment configuration file provided in this repo:

```
conda env create -f environment.yml
```

Next, activate the environment. If you are using macOS or Linux, do it like this:

```
source activate musetools
```

Now run the signal processing script to listen for Muse data coming in from muse-listener.py by configuring `--port`, and choose where to send the processed data (`--oscip`, `--oscport`). If you want to send data directly to Tenere, use `--oscport 7878`:

```
python museProc_tenere.py --port 9810 --oscip 127.0.0.1 --oscport 8001
```

Visualize the data using MuseLab (`http://developer.choosemuse.com/research-tools`).
Use the configuration file `muselab_configurationTenere.json` to have MuseLab automatically configured to listen to port `8001` and plot the data.

![muselab](Images/muselab.png)

### Use MuseLab to forward the OSC messages to Tenere:
* Choose OSC in the pulldown menu
* Select Outgoing
* Hostname is `127.0.0.1` or the IP address of the Tenere computer; port is `7878`
* Select `UDP`
* Then select the messages you want to forward.

You can now visualize the data as well as use it in Tenere!!
* Tip: if you click on a graph in MuseLab, you can use the arrow keys to change the scale and speed.
* Check out the documentation for MuseLab at `http://dev.choosemuse.com/research-tools/getting-started`

### Sending signal quality status back to the Pi
If you want to send signal quality data back to the Pi, use `--sparseip` and `--sparseport`.
Run `museStatus.py` on the Pi and plug in your Pimoroni `Blinkt` LEDs:
```
python museProc_tenere.py --port 9810 --oscip 127.0.0.1 --oscport 8001 --sparseip 10.0.0.14 --sparseport 1234
```


--------------------------------------------------------------------------------
/muse-sigproc/environment.yml:
--------------------------------------------------------------------------------

name: musetools
channels:
  - defaults
dependencies:
  - alabaster=0.7.10=py35_0
  - appnope=0.1.0=py35_0
  - astroid=1.4.9=py35_0
  - babel=2.4.0=py35_0
  - bleach=1.5.0=py35_0
  - chardet=3.0.3=py35_0
  - decorator=4.0.11=py35_0
  - docutils=0.13.1=py35_0
  - entrypoints=0.2.2=py35_1
  - html5lib=0.999=py35_0
  - icu=54.1=0
  - imagesize=0.7.1=py35_0
  - ipykernel=4.6.1=py35_0
  - ipython=6.0.0=py35_1
  - ipython_genutils=0.2.0=py35_0
  - isort=4.2.5=py35_0
  - jedi=0.10.2=py35_2
  - jinja2=2.9.6=py35_0
  - jsonschema=2.6.0=py35_0
  - jupyter_client=5.0.1=py35_0
  - jupyter_core=4.3.0=py35_0
  - lazy-object-proxy=1.2.2=py35_0
  - markupsafe=0.23=py35_2
  - mistune=0.7.4=py35_0
  - mkl=2017.0.1=0
  - nbconvert=5.1.1=py35_0
  - nbformat=4.3.0=py35_0
  - numpy=1.12.1=py35_0
  - numpydoc=0.6.0=py35_0
  - openssl=1.0.2k=2
  - pandocfilters=1.4.1=py35_0
  - path.py=10.3.1=py35_0
  - pep8=1.7.0=py35_0
  - pexpect=4.2.1=py35_0
  - pickleshare=0.7.4=py35_0
  - pip=9.0.1=py35_1
  - prompt_toolkit=1.0.14=py35_0
  - psutil=5.2.2=py35_0
  - ptyprocess=0.5.1=py35_0
  - pyflakes=1.5.0=py35_0
  - pygments=2.2.0=py35_0
  - pylint=1.6.4=py35_1
  - pyqt=5.6.0=py35_2
  - python=3.5.3=1
  - python-dateutil=2.6.0=py35_0
  - python.app=1.2=py35_4
  - pytz=2017.2=py35_0
  - pyzmq=16.0.2=py35_0
  - qt=5.6.2=2
  - qtawesome=0.4.4=py35_0
  - qtconsole=4.3.0=py35_0
  - qtpy=1.2.1=py35_0
  - readline=6.2=2
  - requests=2.14.2=py35_0
  - rope=0.9.4=py35_1
  - scipy=0.19.0=np112py35_0
  - 
setuptools=27.2.0=py35_0 62 | - simplegeneric=0.8.1=py35_1 63 | - sip=4.18=py35_0 64 | - six=1.10.0=py35_0 65 | - snowballstemmer=1.2.1=py35_0 66 | - sphinx=1.5.6=py35_0 67 | - spyder=3.1.4=py35_0 68 | - sqlite=3.13.0=0 69 | - testpath=0.3=py35_0 70 | - tk=8.5.18=0 71 | - tornado=4.5.1=py35_0 72 | - traitlets=4.3.2=py35_0 73 | - wcwidth=0.1.7=py35_0 74 | - wheel=0.29.0=py35_0 75 | - wrapt=1.10.10=py35_0 76 | - xz=5.2.2=1 77 | - zlib=1.2.8=3 78 | - pip: 79 | - appdirs==1.4.3 80 | - bitstring==3.1.5 81 | - cycler==0.10.0 82 | - cython==0.25.2 83 | - emd-signal==0.2.2 84 | - enum34==1.1.6 85 | - future==0.16.0 86 | - ipython-genutils==0.2.0 87 | - joblib==0.11 88 | - jupyter-client==5.0.1 89 | - jupyter-core==4.3.0 90 | - matplotlib==2.0.2 91 | - mne==0.14.1 92 | - packaging==16.8 93 | - pandas==0.20.1 94 | - prompt-toolkit==1.0.14 95 | - protobuf==3.1.0 96 | - pygatt==3.1.1 97 | - pyliblo==0.10.0 98 | - pylsl==1.10.5 99 | - pyopengl==3.1.0 100 | - pyparsing==2.2.0 101 | - pyriemann==0.2.4 102 | - pyserial==3.3 103 | - python-osc==1.6.3 104 | - pywavelets==0.5.2 105 | - rope-py3k==0.9.4.post1 106 | - scikit-learn==0.18.1 107 | -------------------------------------------------------------------------------- /muse-sigproc/live_utils.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # -*- coding: utf-8 -*- 3 | """ 4 | Utilities for real-time processing 5 | """ 6 | 7 | from collections import OrderedDict 8 | from threading import Thread, Event 9 | 10 | import numpy as np 11 | from scipy import signal 12 | 13 | 14 | BAND_FREQS = OrderedDict() 15 | BAND_FREQS['delta'] = (1, 4) 16 | BAND_FREQS['theta'] = (4, 8) 17 | BAND_FREQS['alpha'] = (7.5, 13) 18 | BAND_FREQS['beta'] = (13, 30) 19 | BAND_FREQS['gamma'] = (30, 44) 20 | 21 | RATIOS = OrderedDict() 22 | RATIOS['beta/alpha'] = (3, 2) 23 | RATIOS['theta/alpha'] = (4, 2) 24 | 25 | BLINKWAVE = np.asarray([0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 26 | 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 27 | 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 28 | 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 29 | 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 30 | 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 31 | 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 32 | -17.683,-31.44,-44.534,-56.351,-66.451,-74.543,-80.462, 33 | -84.142,-85.604,-84.934,-82.271,-77.79,-71.697,-64.214, 34 | -55.574,-46.011,-35.757,-25.037,-14.061,-3.028,7.882, 35 | 18.507,28.704,38.352,47.346,55.605,63.064,69.678, 36 | 75.418,80.272,84.239,87.336,89.588,91.031,91.709, 37 | 91.673,90.981,89.691,87.868,85.577,82.882,79.848, 38 | 76.538,73.011,69.325,65.534,61.686,57.826,53.994, 39 | 50.223,46.544,42.981,39.552,36.273,33.154,30.198, 40 | 27.408,24.782,22.313,19.993,17.813,15.758,13.815, 41 | 11.969, 10.204,8.5049,6.8558,5.2423,3.6505,2.0681]) 42 | 43 | 44 | HEARTWAVE = np.asarray([0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,21.623,36.971,59.776,98.036,132.82,175.89,185.95,115.3,-24.952,-202.51,-364.39,-450.96,-412.91,-324.91,-304.24,-333.27,-347.46,-342.08,-261.02,-81.866,84.638,190.3,250.39,279.78,281.17,266.42,256.18,254.9,245.22,229.63,217.99,210.21,197.21,177.98,168.21,161.73,145.03,131.1,121.96,110.82,101.6,88.15,86.381,86.017,72.336,63.964,61.159,51.928,43.105,44.441,42.399,32.123]) 45 | 46 | 47 | def sigmoid(x, a=1, b=0, c=0): 48 | """Sigmoid function. 
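    Computes ``1 / (1 + exp(-(a*x + b))) + c``: a logistic curve whose steepness, horizontal shift and vertical shift are controlled by ``a``, ``b`` and ``c``.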
49 | 50 | Args: 51 | x (array_like): values to map 52 | a (float): control the steepness of the curve 53 | b (float): control the shift in x 54 | c (float): control the shift in y 55 | 56 | Returns: 57 | (numpy.ndarray) output of the sigmoid 58 | """ 59 | return 1 / (1 + np.exp(-(a*x + b))) + c 60 | 61 | 62 | def get_filter_coeff(fs, N, l_freq=None, h_freq=None, method='butter'): 63 | """Get filter coefficients. 64 | 65 | Args: 66 | fs (float): sampling rate of the signal to filter 67 | N (int): order of the filter 68 | 69 | Keyword Args: 70 | l_freq (float or None): lower cutoff frequency in Hz. If provided 71 | without `h_freq`, returns a highpass filter. If both `l_freq` 72 | and `h_freq` are provided and `h_freq` is larger than `l_freq`, 73 | returns a bandpass filter, otherwise returns a bandstop filter. 74 | h_freq (float or None): higher cutoff frequency in Hz. If provided 75 | without `l_freq`, returns a lowpass filter. 76 | method (string): method to compute the coefficients ('butter', etc.) 77 | 78 | Returns: 79 | (numpy.ndarray): b coefficients of the filter 80 | (numpy.ndarray): a coefficients of the filter 81 | 82 | Examples: 83 | Get a 5th order lowpass filter at 30 Hz for a signal sampled at 256 Hz 84 | >>> b, a = get_filter_coeff(256, 5, h_freq=30) 85 | """ 86 | 87 | if l_freq is not None and h_freq is not None: 88 | if l_freq < h_freq: 89 | btype = 'bandpass' 90 | Wn = [l_freq/(float(fs)/2), h_freq/(float(fs)/2)] 91 | elif l_freq > h_freq: 92 | btype = 'bandstop' 93 | Wn = [h_freq/(float(fs)/2), l_freq/(float(fs)/2)] 94 | elif l_freq is not None: 95 | Wn = l_freq/(float(fs)/2) 96 | btype = 'highpass' 97 | elif h_freq is not None: 98 | Wn = h_freq/(float(fs)/2) 99 | btype = 'lowpass' 100 | 101 | if method == 'butter': 102 | b, a = signal.butter(N, Wn, btype=btype) 103 | else: 104 | raise(ValueError('Method ''{}'' not supported.'.format(method))) 105 | 106 | return b, a 107 | 108 | 109 | # def plot_freq_response(b, a, fs): 110 | # """Plot the frequency response of a filter. 
111 | 112 | # Args: 113 | # b (numpy.ndarray): coefficients `b` of the filter 114 | # a (numpy.ndarray): coefficients `a` of the filter 115 | # fs (float): sampling frequency of the signal to be filtered 116 | 117 | # Returns: 118 | # (matplotlib.figure.Figure) : figure 119 | # (matplotlib.axes.Axes) : axes of the plot 120 | 121 | # Taken from https://docs.scipy.org/doc/scipy-0.18.1/reference/generated/ 122 | # scipy.signal.freqz.html 123 | # """ 124 | # w, h = signal.freqz(b, a) 125 | # f = w/np.pi*fs/2 126 | 127 | # fig, ax = plt.subplots() 128 | # ax.set_title('Digital filter frequency response') 129 | # ax.plot(f, 20 * np.log10(abs(h)), 'b') 130 | # ax.set_ylabel('Amplitude [dB]', color='b') 131 | # ax.set_xlabel('Frequency [Hz]]') 132 | 133 | # ax2 = ax.twinx() 134 | # angles = np.unwrap(np.angle(h)) 135 | # ax2.plot(f, angles, 'g') 136 | # ax2.set_ylabel('Angle (radians)', color='g') 137 | 138 | # plt.grid() 139 | # plt.axis('tight') 140 | # plt.show() 141 | 142 | # return fig, ax 143 | 144 | 145 | 146 | def blink_template_match(eegEar): 147 | blinkVal = (np.reshape(BLINKWAVE,(1,256))*np.reshape(eegEar,(1,256))).sum() 148 | return blinkVal 149 | 150 | def heart_template_match(window): 151 | ecgwindow = window[window.size-HEARTWAVE.size:window.size] 152 | #heartVal = (np.reshape(HEARTWAVE,(1,HEARTWAVE.size)*np.reshape(ecgwindow,(1,HEARTWAVE.size))).sum() 153 | heartVal = np.dot(HEARTWAVE.T,ecgwindow).sum() 154 | return heartVal 155 | 156 | def fft_continuous(data, n=None, psd=False, log='log', fs=None, 157 | window='hamming'): 158 | """Apply the Fast Fourier Transform on continuous data. 159 | 160 | Apply the Fast Fourier Transform algorithm on continuous data to get 161 | the spectrum. 162 | Steps: 163 | 1- Demeaning 164 | 2- Apply hamming window 165 | 3- Compute FFT 166 | 4- Grab lower half 167 | 168 | Args: 169 | data (numpy.ndarray): shape (`n_samples`, `n_channels`). Data for 170 | which to get the FFT 171 | 172 | Keyword Args: 173 | n (int): length of the FFT. If longer than `n_samples`, zero-padding 174 | is used; if smaller, then the signal is cropped. If None, use 175 | the same number as the number of samples 176 | psd (bool): if True, return the Power Spectral Density 177 | log (string): can be 'log' (log10(x)), 'log+1' (log10(x+1)) or None 178 | fs (float): Sampling rate of `data`. 179 | window (string): if 'no_window' do not use a window before 180 | applying the FFT. Otherwise, use as the window function. 181 | Currently only supports 'hamming'. 
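    Example (illustrative):
        Log-PSD of a 1 s, 4-channel window sampled at 256 Hz:
        >>> psd, f = fft_continuous(np.random.randn(256, 4), n=256,
        ...                         psd=True, log='log', fs=256.)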
182 | 183 | Returns: 184 | (numpy.ndarray) Fourier Transform of the original signal 185 | (numpy.ndarray): array of frequency bins 186 | """ 187 | if data.ndim == 1: 188 | data = data.reshape((-1, 1)) 189 | [n_samples, n_channels] = data.shape 190 | 191 | data = data - data.mean(axis=0) 192 | if window.lower() == 'hamming': 193 | H = np.hamming(n_samples).reshape((-1, 1)) 194 | elif window.lower() == 'no_window': 195 | H = np.ones(n_samples).reshape((-1, 1)) 196 | else: 197 | raise ValueError('window value {} is not supported'.format(window)) 198 | L = np.min([n_samples, n]) if n else n_samples 199 | Y = np.fft.fft(data * H, n, axis=0) / L 200 | freq_bins = (fs * np.arange(0, Y.shape[0] / 2 + 1) / Y.shape[0]) \ 201 | if fs is not None else None 202 | 203 | out = Y[0:int(Y.shape[0] / 2) + 1, :] 204 | out[:, 0] = 2 * out[:, 0] 205 | 206 | if psd: 207 | out = np.abs(out) ** 2 208 | if log == 'log': 209 | out = np.log10(out) 210 | elif log == 'log+1': 211 | out = np.log10(out + 1) 212 | 213 | return out, freq_bins 214 | 215 | 216 | def compute_band_powers(psd, f, relative=False, band_freqs=BAND_FREQS): 217 | """Compute the standard band powers from a PSD. 218 | 219 | Compute the standard band powers from a PSD. 220 | 221 | Args: 222 | psd (numpy.ndarray): array of shape (n_freq_bins, n_channels) 223 | containing the PSD of each channel 224 | f (array_like): array of shape (n_freq_bins,) containing the 225 | frequency of each bin in `psd` 226 | 227 | Keyword Args: 228 | relative (bool): if True, compute relative band powers 229 | band_freqs (OrderedDict): dictionary containing the band names as 230 | keys, and tuples of frequency boundaries as values. See 231 | BAND_FREQS. 232 | 233 | Returns: 234 | (numpy.ndarray): array of shape (n_bands, n_channels) containing 235 | the band powers 236 | (list): band names 237 | """ 238 | band_powers = np.zeros((len(band_freqs), psd.shape[1])) 239 | for i, bounds in enumerate(band_freqs.values()): 240 | mask = (f >= bounds[0]) & (f <= bounds[1]) 241 | band_powers[i, :] = np.mean(psd[mask, :], axis=0) 242 | 243 | if relative: 244 | band_powers /= band_powers.sum(axis=0) 245 | 246 | return band_powers, list(band_freqs.keys()) 247 | 248 | 249 | def compute_band_ratios(band_powers, ratios=RATIOS): 250 | """Compute ratios of band powers. 251 | 252 | Args: 253 | band_powers (numpy.ndarray): array of shape (n_bands, n_channels) 254 | containing the band powers 255 | 256 | Keyword Args: 257 | ratios (tuple of tuples): contains the indices of band powers to 258 | compute ratios from. E.g., ((3, 2)) is beta/alpha. 259 | See BAND_FREQS and RATIOS. 260 | ratios (OrderedDict): dictionary containing the ratio names as keys 261 | and tuple of indices of the bands to used for each ratio. See 262 | RATIOS. 263 | 264 | Returns: 265 | (numpy.ndarray): array of shape (n_rations, n_channels) 266 | containing the ratios of band powers 267 | (list): ratio names 268 | """ 269 | ratio_powers = np.zeros((len(ratios), band_powers.shape[1])) 270 | for i, ratio in enumerate(ratios.values()): 271 | ratio_powers[i, :] = band_powers[ratio[0], :] / band_powers[ratio[1], :] 272 | 273 | return ratio_powers, list(ratios.keys()) 274 | 275 | 276 | class CircularBuffer(object): 277 | """Circular buffer for multi-channel 1D or 2D signals 278 | 279 | Circular buffer for multi-channel 1D or 2D signals (could be increased 280 | to arbitrary number of dimensions easily). 
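    Example (illustrative):
        Keep 3 s of 4-channel data at 256 Hz and read back the last second:
        >>> buf = CircularBuffer(3 * 256, 4)
        >>> buf.update(np.random.randn(12, 4))
        >>> window = buf.extract(256)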
281 | 282 | Attributes: 283 | buffer (numpy.ndarray): array (n_samples, n_channels(, n_points)) 284 | containing the data 285 | noise (numpy.ndarray): array (n_samples, n_channels(, n_points)) of 286 | booleans marking bad data 287 | ind (int): current index in the circular buffer 288 | pts (int): total number of points seen so far in the buffer 289 | n (int): length of buffer (number of samples) 290 | m (int): number of channels 291 | 292 | Args: 293 | n (int): length of buffer (number of samples) 294 | m (int): number of channels 295 | 296 | Keyword Args: 297 | p (int): (optional) length of third dimension for 3D buffer 298 | fill_value (float): value to fill the buffer with when initializing 299 | 300 | Note: 301 | The indexing syntax of numpy lets us extract data from all trailing 302 | dimensions (e.g. x[0, 2] = x[0, 2, :]). This makes it easy to add 303 | dimensions. 304 | 305 | Todo: 306 | - Implement 3D buffering with argument `p`. 307 | - Add `pts`, `noise` and `buffer` as properties? 308 | """ 309 | 310 | def __init__(self, n, m, p=None, fill_value=0.): 311 | self.n = int(n) 312 | self.m = int(m) 313 | 314 | if p: 315 | self.p = int(p) 316 | self.buffer = np.zeros((self.n, self.m, self.p)) + fill_value 317 | self.noise = np.zeros((self.n, self.m, self.p), dtype=bool) 318 | else: 319 | self.buffer = np.zeros((self.n, self.m)) + fill_value 320 | self.noise = np.zeros((self.n, self.m), dtype=bool) 321 | 322 | self.ind = 0 323 | self.pts = 0 324 | 325 | def update(self, x): 326 | """Update the buffer. 327 | 328 | Args: 329 | x (numpy.ndarray): array of shape 330 | (n_new_samples, n_channels(, n_points)) 331 | """ 332 | if x.ndim != self.buffer.ndim: 333 | raise ValueError('x has not the same number of dimensions as ' 334 | 'the buffer.') 335 | nw = x.shape[0] 336 | 337 | # Determine index at which new values should be put into array 338 | ind = np.arange(self.ind, self.ind + nw, dtype=np.int16) % self.n 339 | self.buffer[ind, :] = x 340 | 341 | # Set self.ind = to the index at which new locations were put. 342 | # Separately defined here to allow new data to be an array rather 343 | # than just one row 344 | self.ind = (ind[-1] + 1) % self.n 345 | self.pts += nw 346 | 347 | def extract(self, nw=None): 348 | """Extract sample(s) from the buffer. 349 | 350 | Keyword Args: 351 | nw (int): number of samples to extract from the buffer. If 352 | None, return n points. 353 | """ 354 | if not nw: 355 | nw = self.n 356 | 357 | ind = np.arange(self.ind - nw, self.ind, dtype=np.int16) % self.n 358 | return self.buffer[ind, :] 359 | 360 | def mark_noise(self, noise, nw=None): 361 | """Mark noisy samples in the buffer. 362 | 363 | Mark the last `nw` samples in the buffer as noisy (noisy -> True; 364 | clean -> False). 365 | 366 | Args: 367 | noise (bool): if True, mark the last nw samples as noise 368 | 369 | Keyword Args: 370 | nw (int): number of samples to mark as noise. If None, use n 371 | points. 
372 | """ 373 | if not nw: 374 | nw = self.n 375 | 376 | ind = np.arange(self.ind - nw, self.ind, dtype=np.int16) % self.n 377 | self.noise[ind, :] = noise 378 | 379 | @property 380 | def n(self): 381 | return self._n 382 | 383 | @n.setter 384 | def n(self, value): 385 | if not isinstance(value, int) or value < 1: 386 | raise TypeError('n must be a non-zero positive integer.') 387 | self._n = value 388 | 389 | @property 390 | def m(self): 391 | return self._m 392 | 393 | @m.setter 394 | def m(self, value): 395 | if not isinstance(value, int) or value < 1: 396 | raise TypeError('m must be a non-zero positive integer.') 397 | self._m = value 398 | 399 | 400 | class NanBuffer(CircularBuffer): 401 | """Circular buffer that can accomodate missing values (NaNs). 402 | 403 | Circular buffer that can accomodate missing values coming in as NaNs. 404 | Previous values are repeated as long as NaNs are received. When a new 405 | valid value finally is received, linear interpolation is performed to 406 | replace the chunk of missing values. 407 | 408 | Attributes: 409 | nan_buffer (numpy.ndarray): array with the same shape as the data 410 | buffer, but containing only 0, 1 or 2. 411 | 0 -> no NaN 412 | 1 -> NaN 413 | 2 -> interpolated NaNs 414 | nan_start_ind (numpy.ndarray): indices of the start of a NaN streak 415 | with shape (n_channels, ) 416 | prev_ind (int): index at which the previous sample was put in the 417 | buffer 418 | 419 | Args: 420 | 421 | """ 422 | def __init__(self, n, m, p=None, fill_value=0.): 423 | """ 424 | """ 425 | if p: 426 | raise NotImplementedError('3D arrays are not yet supported.') 427 | 428 | super().__init__(n, m, p=p, fill_value=fill_value) 429 | 430 | self.nan_buffer = np.zeros_like(self.buffer, dtype=np.int8) 431 | self.nan_start_ind = np.zeros((self.m,), dtype=np.int8) - 1 432 | self.prev_ind = self.n - 1 433 | 434 | def update(self, x): 435 | """Update the buffer. 436 | 437 | Args: 438 | x (numpy.ndarray): array of shape 439 | (n_new_samples, n_channels(, n_points)) 440 | """ 441 | if x.ndim != self.buffer.ndim: 442 | raise ValueError('x has not the same number of dimensions as ' 443 | 'the buffer.') 444 | if x.shape[0] > 1: 445 | raise NotImplementedError('Updating with more than one sample ' 446 | 'at once is not supported yet.') 447 | 448 | # Determine index at which new values should be put into array 449 | ind = [x % self.n for x in range(self.ind, self.ind + 1)] 450 | 451 | # Manage NaNs 452 | self.nan_buffer[ind, :] = np.logical_or((x == 0), np.isnan(x)) 453 | for c in range(self.m): 454 | # Case where the current sample is a NaN 455 | if self.nan_buffer[ind, c] == 1: 456 | # First case: NaNs at the very end of the buffer -> Set to 457 | # previous good value 458 | x[0, c] = self.buffer[self.prev_ind, c] 459 | 460 | if self.nan_start_ind[c] == -1: 461 | self.nan_start_ind[c] = self.prev_ind 462 | 463 | # Case where the previous sample was a NaN, but the current 464 | # sample is good (NaN streak in the middle of the buffer) 465 | elif (self.nan_buffer[ind, c] == 0) and (self.nan_buffer[self.prev_ind, c] == 1): 466 | 467 | # Find the boundaries of the NaN streak 468 | n_cont_nans = np.mod(self.n + (self.ind - self.nan_start_ind[c]), self.n) - 1 469 | 470 | if n_cont_nans > self.n: 471 | # If we reach a point where we have more continuous 472 | # NaNs than the size of the window, change the 473 | # nanStartInd to the start of that window. This is done 474 | # to avoid calling returnInds with a number of 475 | # continuous NaNs higher than approximately 300. 
476 | self.nan_start_ind[c] = np.mod(self.n + (self.ind - self.n), self.n) 477 | n_cont_nans = self.n 478 | 479 | # Find the indices of values to replace 480 | indices = [x % self.n for x in range(self.ind - n_cont_nans, self.ind)] 481 | 482 | # Linearly interpolate 483 | intercept = self.buffer[self.nan_start_ind[c], c] 484 | slope = (x[0, c] - intercept) / (n_cont_nans + 1) 485 | self.buffer[indices, c] = slope * np.arange(1, n_cont_nans + 1) + intercept 486 | 487 | # Set this streak of NaNs to 2 in the nan buffer (to 488 | # distinguish them from the other cases when computing the 489 | # FFT) 490 | self.nan_buffer[indices, c] = 2 491 | 492 | # Reset nan_start_ind to -1 because the NaN streak is over 493 | self.nan_start_ind[c] = -1 494 | 495 | self.buffer[ind, :] = x 496 | self.prev_ind = self.ind 497 | self.ind = (ind[-1] + 1) % self.n 498 | self.pts += 1 499 | 500 | 501 | class Histogram(object): 502 | """Fixed-size histogram for live-scoring a multi-channel random variable. 503 | 504 | Attributes: 505 | hist (numpy.ndarray): shape (n_bins, n_channels), containing counts 506 | for each histogram bins 507 | cum_hist (numpy.ndarray): shape (n_bins, n_channels), containing 508 | cumulative sums for each histogram bins. This is useful to 509 | compute percentiles. 510 | bins (numpy.ndarray): shape (n_bins + 1,), containing the boundary 511 | values of each of the `n_bins` bins 512 | counts (int): number of values collected up until now 513 | min_count (int): minimum count before percentiles can be computed 514 | decay (float): value between 0 and 1 which controls the relative 515 | importance of new samples when updating the histogram. A value 516 | of 0 means that old values are ignored and only new values are 517 | used; a value of 1 means that old values never fade, and that 518 | new values are simply added to the existing histogram. 519 | 520 | Args: 521 | n_bins (int): number of bins in the histogram 522 | n_channels (int): number of channels 523 | 524 | Keyword Args: 525 | bounds (tuple): (min_value, max_value) to collect in the histogram 526 | min_count (int): minimum count before percentiles can be computed 527 | decay (float): see Attributes description. 528 | """ 529 | def __init__(self, n_bins, n_channels, bounds=(-2, 8), min_count=0, 530 | decay=1): 531 | self.n_bins = n_bins 532 | self.bins = np.linspace(bounds[0], bounds[1], n_bins + 1) 533 | self.n_channels = n_channels 534 | self.hist = np.zeros((n_bins, n_channels)) 535 | self.cum_hist = np.zeros((n_bins, n_channels)) 536 | self.counts = 0 537 | self.min_count = min_count 538 | self.decay = decay 539 | 540 | def get_prct_and_add(self, x): 541 | """Get the percentile of a new point, then add it to the histogram. 542 | 543 | Args: 544 | x (array_like): points to add to the histogram, of shape 545 | (n_channels, ) 546 | 547 | Returns: 548 | (numpy.ndarray): percentiles of `x` 549 | """ 550 | if self.decay != 1: 551 | self.hist *= self.decay 552 | 553 | prcts = np.zeros((self.n_channels, )) 554 | for c in range(self.n_channels): 555 | # Find corresponding bin 556 | ind = self._find_bin_ind(x[c]) 557 | if self.counts > self.min_count: 558 | prcts[c] = self.cum_hist[ind, c]/self.cum_hist[self.cum_hist.shape[0]-1,c] 559 | # prcts[c] = self.cum_hist[ind, c] / self.counts 560 | self.hist[ind, c] += 1 561 | 562 | # Update cumulative sum 563 | self.cum_hist = self.hist.cumsum(axis=0) 564 | self.counts += 1 565 | 566 | return prcts 567 | 568 | def reset(self): 569 | """Reset the histogram and cumulative sum. 
570 | """ 571 | self.bins *= 0 572 | self.cum_hist *= 0 573 | self.counts = 0 574 | 575 | def _find_bin_ind(self, x): 576 | """Find histogram bin index. 577 | 578 | Args: 579 | x (float): value for which to find the bin index 580 | 581 | Returns: 582 | (int): bin index 583 | """ 584 | inds = np.where(self.bins >= x)[0].tolist() 585 | return self.n_bins - 1 if not inds else inds[0] - 1 586 | 587 | 588 | class Timer(Thread): 589 | """Repeating timer object. 590 | 591 | This timer calls its callback function every `interval` seconds, until 592 | its `stop` method is called. 593 | 594 | Args: 595 | interval (float): interval between callbacks (in seconds) 596 | callback (function): function to call every `interval` seconds 597 | 598 | """ 599 | def __init__(self, interval, callback): 600 | Thread.__init__(self) 601 | self.interval = interval 602 | self.callback = callback 603 | self.stopped = Event() 604 | 605 | def run(self): 606 | while not self.stopped.wait(self.interval): 607 | self.callback() 608 | 609 | def stop(self): 610 | self.stopped.set() 611 | -------------------------------------------------------------------------------- /muse-sigproc/museProc_tenere.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # -*- coding: utf-8 -*- 3 | """ 4 | Created on Tue Aug 8 10:44:58 2017 5 | 6 | @author: chris 7 | """ 8 | 9 | #!/usr/bin/env python3 10 | # -*- coding: utf-8 -*- 11 | """ 12 | Muse output server 13 | ================== 14 | 15 | This script shows how to process and stream different Muse outputs, such as: 16 | - 17 | 18 | TODO: 19 | - Make musetools.realtime.EEGServer more general so we don't have to 20 | create a new class for this example (and instead only plug in 21 | necessary pieces or inherit from it). 22 | 23 | """ 24 | 25 | import time 26 | from threading import Thread 27 | 28 | import numpy as np 29 | from scipy import signal, interpolate 30 | from pylsl import StreamInlet, resolve_byprop 31 | USE_LIBLO = True # s.name != 'nt' 32 | if USE_LIBLO: 33 | from liblo import ServerThread, Address, Message, Bundle, send 34 | print('using Liblo') 35 | else: 36 | from pythonosc import dispatcher, osc_server, udp_client 37 | print('using pythonOSC') 38 | 39 | import live_utils as ut 40 | 41 | 42 | from optparse import OptionParser 43 | 44 | usage = "python museProc_tenere.py --port 9999 --oscip 127.0.0.1 --oscport 7878 --sparseip 10.0.0.14 --sparseport 1234" 45 | parser = OptionParser(usage=usage) 46 | parser.add_option("-l", "--port", 47 | dest="port", type='int', default=9810, 48 | help="port to listen for muse data on") 49 | parser.add_option("-o", "--oscip", 50 | dest="oscip", type='string', default="127.0.0.1", 51 | help="IP address of Tenere LXstudio to send OSC message to") 52 | parser.add_option("-p", "--oscport", 53 | dest="oscport", type='int', default=7878, 54 | help="The oort that Tenere LXstudio is listening on ") 55 | parser.add_option("-r", "--sparseip", 56 | dest="sparseip", type='string', default="127.0.0.1", 57 | help="IP address of the Pi to send OSC status message to") 58 | parser.add_option("-s", "--sparseport", 59 | dest="sparseport", type='int', default=1234, 60 | help="Port for OSC status messages on the Pi") 61 | 62 | 63 | (options, args) = parser.parse_args() 64 | 65 | 66 | class FFTServer(): 67 | """Server to receive EEG data and stream classifier outputs. 68 | 69 | Attributes: 70 | See args. 71 | 72 | Args: 73 | incoming (str or dict): incoming data stream. 
If provided as a 74 | string, look for an LSL stream with the corresponding type. If 75 | provided as dict with fields `address` and `port`, open an OSC 76 | port at that address and port. 77 | outgoing (str or dict): outgoing data stream. If provided as a 78 | string, stream to an LSL stream with the corresponding type. If 79 | provided as dict with fields `address` and `port`, stream to an 80 | OSC port at that address and port. 81 | 82 | Keyword Args: 83 | config (dict): dictionary containing the configuration and 84 | preprocessing parameters, e.g. (these are the default values): 85 | 86 | config = {'fs': 256., 87 | 'n_channels': 4, 88 | 'raw_buffer_len': 3 * fs, 89 | 'filt_buffer_len': 3 * fs, 90 | 'window_len': fs, 91 | 'step': int(fs / 10), 92 | 'filter': ([1], [1]), 93 | 'psd_window_len': 256., 94 | 'psd_buffer_len': 10} 95 | 96 | device_source (str): Device from which the data is coming from. 97 | 'muse' or 'vive' 98 | streaming_source (str): Software source of the data stream: 99 | 'muselsl' 100 | 'musedirect' 101 | 'musemonitor' 102 | debug_outputs (bool): if True, send debug outputs (not used by VR 103 | experience) 104 | verbose (bool): if True, print status whenever new data is 105 | received or sent. 106 | 107 | """ 108 | def __init__(self, incoming, outgoing, sparseOutput=None, config={}, device_source='Muse', 109 | software_source='muselsl', debug_outputs=True, verbose=False): 110 | 111 | self.incoming = incoming 112 | self.outgoing = outgoing 113 | self.sparseOutput = sparseOutput 114 | self.device_source = device_source 115 | self.software_source = software_source 116 | self.debug_outputs = debug_outputs 117 | self.verbose = verbose 118 | self.eeg_chunk_length = 12 119 | 120 | # 1. Initialize inlet 121 | if isinstance(self.incoming, str): # LSL inlet 122 | print('Looking for the {} stream...'.format(incoming)) 123 | self._stream = resolve_byprop('type', incoming, timeout=2) 124 | 125 | if len(self._stream) == 0: 126 | raise(RuntimeError('Can\'t find {} stream.'.format(incoming))) 127 | print('Aquiring data from the \'{}\' stream...'.format(incoming)) 128 | 129 | self._inlet = StreamInlet(self._stream[0], 130 | max_chunklen=self.eeg_chunk_length) 131 | self._info_in = self._inlet.info() 132 | 133 | else: # OSC port 134 | if USE_LIBLO: 135 | self._osc_server = ServerThread(incoming['port']) 136 | print('OSC server initialized at port {}.'.format( 137 | incoming['port'])) 138 | else: 139 | self._dispatcher = dispatcher.Dispatcher() 140 | print('python-osc dispatcher initialized.') 141 | 142 | # 2. 
Initialize outlets 143 | if not isinstance(self.outgoing, tuple): 144 | self.outgoing = [self.outgoing] 145 | self._output_threads = [] 146 | for out in self.outgoing: 147 | 148 | if isinstance(out, str): # LSL outlet 149 | raise NotImplementedError 150 | 151 | elif isinstance(out, dict): # OSC port 152 | if USE_LIBLO: 153 | self._output_threads.append(Address(out['address'], 154 | out['port'])) 155 | else: 156 | raise NotImplementedError 157 | # self._client = udp_client.SimpleUDPClient( 158 | # outgoing['address'], outgoing['port']) 159 | print('OSC client initialized at {}:{}.'.format( 160 | out['address'], out['port'])) 161 | 162 | if (self.sparseOutput !=None): 163 | if not isinstance(self.sparseOutput, tuple): 164 | self.sparseOutput = [self.sparseOutput] 165 | self._sparseOutput_threads = [] 166 | for out in self.sparseOutput: 167 | if isinstance(out, str): # LSL outlet 168 | raise NotImplementedError 169 | 170 | elif isinstance(out, dict): # OSC port 171 | if USE_LIBLO: 172 | self._sparseOutput_threads.append(Address(out['address'], 173 | out['port'])) 174 | else: 175 | raise NotImplementedError 176 | print('OSC sparse output client initialized at {}:{}.'.format( 177 | out['address'], out['port'])) 178 | 179 | 180 | # 3. Initialize internal buffers and variables 181 | self._init_processing(config) 182 | 183 | def _init_processing(self, config): 184 | """Initialize internal buffers and variables for EEG processing. 185 | 186 | Args: 187 | config (dict): dictionary containing various parameters. See 188 | DEFAULT_CONFIG below for default values. 189 | 190 | fs (float): sampling frequency 191 | n_channels (int): number of channels 192 | raw_buffer_len (int): raw data buffer length 193 | filt_buffer_len (int): filtered data buffer length 194 | window_len (int): processing window length 195 | step (int): number of samples between two consecutive 196 | windows to process 197 | filter (tuple or dict): filtering parameters. If provided 198 | as a tuple, the first and second elements should be 199 | the `b` and `a` coefficients of a filter. If provided 200 | as a dictionary, the fields `order`, `l_freq`, `h_freq` 201 | and `method` are required; the function 202 | pre.get_filter_coeff() will then be used to compute the 203 | coefficients. 204 | If None, don't use a filter (windows will be extracted 205 | from the raw buffer). 
206 | psd_window_len (int): length of the window to use for PSD 207 | psd_buffer_len (int): PSD buffer length 208 | """ 209 | DEFAULT_CONFIG = {'fs': 256., 210 | 'n_channels': 5, 211 | 'raw_buffer_len': 3 * 256, 212 | 'filt_buffer_len': 3 * 256, 213 | 'window_len': 256, 214 | 'step': int(256 / 10), 215 | 'filter': ([1], [1]), 216 | 'filter_bank': {}, 217 | 'psd_window_len': 256., 218 | 'psd_buffer_len': 10} 219 | 220 | config = {**DEFAULT_CONFIG, **config} 221 | 222 | self.fs = config['fs'] 223 | self.n_channels = config['n_channels'] 224 | 225 | # Initialize EEG channel remapping parameters 226 | self.eeg_ch_remap = None 227 | if self.device_source.lower() == 'vive': 228 | self.eeg_ch_remap = [3, 1, 2, 3, 4] 229 | self.n_channels = 5 230 | if self.software_source.lower() == 'musedirect': 231 | self.eeg_ch_remap[-1] = 5 232 | self.n_channels = 5 233 | if self.device_source.lower() == 'leroy': 234 | self.eeg_ch_remap = None 235 | self.n_channels = 4 236 | if self.device_source.lower() == 'muse': 237 | self.eeg_ch_remap = None 238 | self.n_channels = 4 239 | if self.device_source.lower() == 'vivehr': 240 | self.eeg_ch_remap = [3, 1, 2, 3, 0] 241 | self.n_channels = 5 242 | 243 | # Initialize the EEG buffers 244 | raw_buffer_len = int(config['raw_buffer_len']) 245 | filt_buffer_len = int(config['filt_buffer_len']) 246 | 247 | self.eeg_buffer = ut.NanBuffer(raw_buffer_len, self.n_channels) 248 | self.filt_eeg_buffer = ut.CircularBuffer(filt_buffer_len, 249 | self.n_channels) 250 | self.hpfilt_eeg_buffer = ut.CircularBuffer(filt_buffer_len, 251 | self.n_channels) 252 | self.smooth_eeg_buffer = ut.CircularBuffer(filt_buffer_len, 253 | self.n_channels) 254 | self.eyeH_buffer = ut.CircularBuffer(100,1) 255 | 256 | # Initialize the EEG filter 257 | if config['filter']: 258 | if isinstance(config['filter'], tuple): 259 | b = config['filter'][0] 260 | a = config['filter'][1] 261 | elif isinstance(config['filter'], dict): 262 | b, a = ut.get_filter_coeff(self.fs, **config['filter']) 263 | zi = np.tile(signal.lfilter_zi(b, a), (self.n_channels, 1)).T 264 | self.bandpass_filt = {'b': b, 265 | 'a': a, 266 | 'zi': zi} 267 | if config['hpfilter']: 268 | b = config['hpfilter'][0] 269 | a = config['hpfilter'][1] 270 | zi = np.tile(signal.lfilter_zi(b, a), (self.n_channels, 1)).T 271 | self.hp_filt = {'b': b, 272 | 'a': a, 273 | 'zi': zi} 274 | if config['lpfilter']: 275 | b = config['lpfilter'][0] 276 | a = config['lpfilter'][1] 277 | zi = np.tile(signal.lfilter_zi(b, a), (self.n_channels, 1)).T 278 | self.lp_filt = {'b': b, 279 | 'a': a, 280 | 'zi': zi} 281 | 282 | # Initialize the filter bank 283 | if config['filter_bank']: 284 | self.filter_bank = {} 285 | for name, coeff in config['filter_bank'].items(): 286 | zi = np.tile(signal.lfilter_zi(coeff[0], coeff[1]), 287 | (self.n_channels, 1)).T 288 | self.filter_bank[name] = {'b': coeff[0], 289 | 'a': coeff[1], 290 | 'zi': zi} 291 | 292 | 293 | # Initialize processing parameters 294 | self.window_len = int(config['window_len']) 295 | self.step = int(config['step']) 296 | 297 | # Initialize processing buffers 298 | psd_buffer_len = int(config['psd_buffer_len']) 299 | self.psd_buffer = ut.CircularBuffer(psd_buffer_len, 129, 300 | self.n_channels) 301 | 302 | # Initialize scoring histograms 303 | decayRate = 0.997 304 | self.hists = {'delta': ut.Histogram(1000, self.n_channels, bounds=(0, 50), min_count=80, decay=decayRate ), 305 | 'theta': ut.Histogram(1000, self.n_channels, bounds=(0, 30),min_count=80, decay=decayRate), 306 | 'alpha': ut.Histogram(1000, 
self.n_channels,bounds=(0, 20), min_count=80, decay=decayRate), 307 | 'beta': ut.Histogram(1000, self.n_channels,bounds=(0, 10), min_count=80, decay=decayRate), 308 | 'gamma': ut.Histogram(1000, self.n_channels,bounds=(0, 10), min_count=80, decay=decayRate)} 309 | self.eyeH_hist = ut.Histogram(500, 1, bounds=(0, 10000), min_count=80, decay=decayRate ) 310 | self.emg_hist = ut.Histogram(500, 1, bounds=(0, 10), min_count=80, decay=decayRate ) 311 | self.blinkwait = 0 312 | self.blink = 0 313 | self.firstWindowProc = True 314 | self.band_names =0 315 | self.band_powers =0 316 | self.ratio_powers=0 317 | self.ratio_names=0 318 | 319 | # Used for calm score 320 | self.slow_calm_score = 0 321 | self.slow_alpha_score = 0 322 | self.eye_mov_percent_buffer = ut.CircularBuffer(256, 1) 323 | self.slow_calm_score_buffer = ut.CircularBuffer(512, 1) 324 | self.increments_buffer = ut.CircularBuffer(512, 1) 325 | self.low_freq_chs_buffer = ut.CircularBuffer(150, 2) 326 | self.low_freq_chs_std = 1 327 | 328 | ###################################################################### 329 | # BODY Motion Processing, Accelerometer, Gyro 330 | 331 | raw_buffer_len = 150 332 | filt_buffer_len = 150 333 | self.acc_window_len = 50 334 | self.acc_buffer = ut.NanBuffer(raw_buffer_len, 3) 335 | self.filt0_buffer = ut.CircularBuffer(filt_buffer_len,3) 336 | self.heart_buffer = ut.CircularBuffer(150,1) 337 | self.breath_buffer = ut.CircularBuffer(500,1) 338 | 339 | # Initialize the Body Filters 340 | if config['filter0']: 341 | b = config['filter0'][0] 342 | a = config['filter0'][1] 343 | zi = np.tile(signal.lfilter_zi(b, a), (3, 1)).T 344 | self.filter0 = {'b': b,'a': a,'zi': zi} 345 | if config['filter1']: 346 | b = config['filter1'][0] 347 | a = config['filter1'][1] 348 | zi = np.tile(signal.lfilter_zi(b, a), (3, 1)).T 349 | self.filter1 = {'b': b,'a': a,'zi': zi} 350 | if config['filter2']: 351 | b = config['filter2'][0] 352 | a = config['filter2'][1] 353 | zi = signal.lfilter_zi(b, a) 354 | self.filter2 = {'b': b,'a': a,'zi': zi} 355 | if config['filter3']: 356 | b = config['filter3'][0] 357 | a = config['filter3'][1] 358 | zi = np.tile(signal.lfilter_zi(b, a), (3, 1)).T 359 | self.filter3 = {'b': b,'a': a,'zi': zi} 360 | if config['filter4']: 361 | b = config['filter4'][0] 362 | a = config['filter4'][1] 363 | zi = signal.lfilter_zi(b, a) 364 | self.filter4 = {'b': b,'a': a,'zi': zi} 365 | if config['filter5']: 366 | b = config['filter5'][0] 367 | a = config['filter5'][1] 368 | zi = signal.lfilter_zi(b, a) 369 | self.filter5 = {'b': b,'a': a,'zi': zi} 370 | if config['filter6']: 371 | b = config['filter6'][0] 372 | a = config['filter6'][1] 373 | zi = signal.lfilter_zi(b, a) 374 | self.filter6 = {'b': b,'a': a,'zi': zi} 375 | if config['filter7']: 376 | b = config['filter7'][0] 377 | a = config['filter7'][1] 378 | zi = signal.lfilter_zi(b, a) 379 | self.filter7 = {'b': b,'a': a,'zi': zi} 380 | 381 | 382 | 383 | def _update_eeg_liblo_osc(self, path, args): 384 | """Collect new EEG data point(s) from pyliblo OSC and process. 385 | 386 | Args: 387 | path (str): OSC path listened to 388 | args (list): received values 389 | """ 390 | if self.verbose: 391 | print('Receiving OSC packet!') 392 | sample = np.array(args).reshape(1, -1) 393 | self._process_eeg(sample[:, :self.n_channels], 0) 394 | 395 | def _update_eeg_python_osc(self, unused_addr, args, *chs): 396 | """Collect new EEG data point(s) from python-osc and process. 
397 | 398 | Args: 399 | path (str): OSC path listened to 400 | args (list): received values 401 | """ 402 | if self.verbose: 403 | print('Receiving OSC packet!') 404 | sample = np.array(chs).reshape(1, -1) 405 | self._process_eeg(sample[:, :self.n_channels], 0) 406 | 407 | 408 | def _update_acc_liblo_osc(self, path, args): 409 | if self.verbose: 410 | print('Receiving ACC packet!') 411 | sample = np.array(args).reshape(1, -1) 412 | self._process_acc(sample[:, :3], 0) 413 | 414 | def _update_gyro_liblo_osc(self, path, args): 415 | if self.verbose: 416 | print('Receiving GYRO packet!') 417 | sample = np.array(args).reshape(1, -1) 418 | self._process_gyro(sample[:, :3], 0) 419 | 420 | 421 | def _process_eeg(self, samples, timestamp): 422 | """Process EEG. 423 | 424 | Process EEG. Includes buffering, filtering, windowing and pipeline. 425 | 426 | Args: 427 | samples (numpy.ndarray): new EEG samples to process 428 | timestamp (float): timestamp 429 | 430 | Returns: 431 | output (scalar): output of the pipeline 432 | """ 433 | 434 | # Re-map 435 | if self.eeg_ch_remap: 436 | samples = samples[:, self.eeg_ch_remap] 437 | 438 | self.eeg_buffer.update(samples) 439 | # self._send_outputs(samples, timestamp, 'raw_eeg') 440 | 441 | # Apply filtes 442 | filt_samples = samples; 443 | 444 | if config['filter']: 445 | filt_samples, self.bandpass_filt['zi'] = signal.lfilter( 446 | self.bandpass_filt['b'], self.bandpass_filt['a'], 447 | samples, axis=0, zi=self.bandpass_filt['zi']) 448 | # self._send_filtered_eeg(filt_samples, timestamp) 449 | self.filt_eeg_buffer.update(filt_samples) 450 | 451 | if config['hpfilter']: 452 | filt_samples, self.hp_filt['zi'] = signal.lfilter( 453 | self.hp_filt['b'], self.hp_filt['a'], 454 | filt_samples, axis=0, zi=self.hp_filt['zi']) 455 | self.hpfilt_eeg_buffer.update(filt_samples) 456 | 457 | if config['lpfilter']: 458 | smooth_eeg_samples, self.lp_filt['zi'] = signal.lfilter( 459 | self.lp_filt['b'], self.lp_filt['a'], 460 | filt_samples, axis=0, zi=self.lp_filt['zi']) 461 | if self.debug_outputs: 462 | self._send_output_vec(smooth_eeg_samples, timestamp, 'smooth_eeg') 463 | else: 464 | smooth_eeg_samples = filt_samples 465 | self.smooth_eeg_buffer.update(smooth_eeg_samples) 466 | 467 | if config['filter_bank']: 468 | filter_bank_samples = {} 469 | for name, filt_dict in self.filter_bank.items(): 470 | filter_bank_samples[name], self.filter_bank[name]['zi'] = \ 471 | signal.lfilter(filt_dict['b'], filt_dict['a'], 472 | filt_samples, axis=0, 473 | zi=self.filter_bank[name]['zi']) 474 | low_freq_chs = filter_bank_samples['delta'][0, [0, 2]] #+ filter_bank_samples['theta'][0, [0, 1] 475 | 476 | window = self.smooth_eeg_buffer.extract(self.window_len) 477 | 478 | eegEarWindow = window[:, 3] #data from right ear Channel 479 | #eye movement computed from the difference between two frontal channels 480 | eyewindow = self.smooth_eeg_buffer.extract(200) 481 | eegFLWindow = eyewindow[:, 1] 482 | eegFRWindow = eyewindow[:, 2] 483 | # norm_diff_eyes = eegFLWindow[-1] - eegFRWindow[-1]*np.nanstd(eegFLWindow, axis=0)/np.nanstd(eegFRWindow, axis=0) 484 | # eyeH = np.reshape([np.square(norm_diff_eyes)], (1, 1)) 485 | 486 | #find blinks in the left eegEarWindow 487 | blinkVal = ut.blink_template_match(eegEarWindow) 488 | if (blinkVal > 100000 and self.blink == 0): 489 | self.blink = 50 490 | self.blinkwait = 350 491 | else: 492 | if (self.blinkwait > 0): 493 | self.blinkwait -= 1 494 | if (self.blink > 0): 495 | self.blink -= 1 496 | 497 | # LONGER-TERM CALM SCORE based on Saccadic Eye 
Movement 498 | eye_mov_percent = np.reshape(np.percentile(eegFLWindow - eegFRWindow, 90), (1, 1)) 499 | self.eye_mov_percent_buffer.update(eye_mov_percent) 500 | remap_eye_mov_percent = ut.sigmoid(self.eye_mov_percent_buffer.extract().mean(), 0.5, -10, 0) 501 | 502 | 503 | max_value = 1 504 | incr_decr = remap_eye_mov_percent < 0.2 505 | inc = self.increments_buffer.extract().mean() 506 | dpoints_per_second = 0.0005 507 | 508 | if incr_decr: 509 | self.slow_calm_score += dpoints_per_second*inc # 1/max([max_value - self.slow_calm_score, 1]) 510 | else: 511 | self.slow_calm_score -= dpoints_per_second*inc*4 #0.7 # (self.slow_calm_score)/1280 512 | 513 | 514 | self.increments_buffer.update(np.reshape(incr_decr, (1, 1))) 515 | 516 | if self.slow_calm_score > max_value: 517 | self.slow_calm_score = max_value 518 | elif self.slow_calm_score < 0: 519 | self.slow_calm_score = 0 520 | 521 | self.slow_calm_score_buffer.update(np.reshape(self.slow_calm_score, (1, 1))) 522 | 523 | 524 | # Send outputs at a reduced sampling rate 525 | if self.smooth_eeg_buffer.pts%3==0 : 526 | self._send_output_vec(smooth_eeg_samples, timestamp, 'muse/eeg') 527 | if (self.blink > 0): 528 | self._send_output(np.array([[1]]), timestamp, 'blink') 529 | else: 530 | self._send_output(np.array([[0]]), timestamp, 'blink') 531 | self._send_output(blinkVal/300000,timestamp,'blinkVal') 532 | self._send_output(remap_eye_mov_percent, timestamp, 'saccad') 533 | 534 | self._send_output(np.reshape(self.slow_calm_score_buffer.extract().mean(), (1, 1)),timestamp, 'calm') # slow_calm_score 535 | self._send_output(low_freq_chs / self.low_freq_chs_std + 0.5, timestamp, 'low_freq_chs') 536 | 537 | # process and send output at every step. usually about every 1/10s 538 | if self.eeg_buffer.pts > self.step: 539 | self.eeg_buffer.pts = 0 540 | 541 | # Get filtered EEG window 542 | if config['lpfilter']: 543 | window = self.smooth_eeg_buffer.extract(self.window_len) 544 | else: 545 | window = self.eeg_buffer.extract(self.window_len) 546 | psd_raw_buffer = self.eeg_buffer.extract(self.window_len) 547 | 548 | # Get average PSD 549 | psd, f = ut.fft_continuous(psd_raw_buffer, n=int(self.fs), psd=True, 550 | log='psd', fs=self.fs, window='hamming') 551 | self.psd_buffer.update(np.expand_dims(psd, axis=0)) 552 | mean_psd = np.nanmean(self.psd_buffer.extract(), axis=0) 553 | 554 | # find variance of eegWindow for Bad Signal detact 555 | eegVar = np.nanvar(window,axis=0) 556 | self._send_output_vec(eegVar.reshape(1,self.n_channels), timestamp, 'hsi') 557 | 558 | if (self.sparseOutput!=None): 559 | #send channel varience for signal quality indication at source Raspberry Pi 560 | #send(Address('10.0.0.14','1234'), "/hsi", eegVar[0],eegVar[1],eegVar[2],eegVar[3]) 561 | self._send_sparseOutput_vec(eegVar.reshape(1,self.n_channels), timestamp, 'hsi') 562 | 563 | 564 | # Get band powers and ratios 565 | 566 | bandPowers, bandNames = ut.compute_band_powers(mean_psd, f, relative=False) 567 | ratioPowers, ratioNames = ut.compute_band_ratios(bandPowers) 568 | 569 | if (self.firstWindowProc): 570 | self.band_powers = bandPowers 571 | self.band_names = bandNames 572 | self.ratio_powers = ratioPowers 573 | self.ratio_names = ratioNames 574 | self.scores = np.zeros((len(self.band_names), self.n_channels)) 575 | self.firstWindowProc = False 576 | 577 | if (eegVar.mean() < 300 and self.blinkwait == 0 ): #threshold for good data 578 | for i, (name, hist) in enumerate(self.hists.items()): 579 | self.band_powers = bandPowers 580 | self.ratio_powers = ratioPowers 581 | 
#send good data indicator based on mean eegWindow variance and blinkwait 582 | self._send_output(np.array([[1]]), timestamp, 'goodData') #good data 583 | else: 584 | self._send_output(np.array([[0]]), timestamp, 'goodData') #good data 585 | 586 | self._send_outputs(self.band_powers, timestamp, 'bands') 587 | self._send_outputs(self.ratio_powers, timestamp, 'ratios') 588 | 589 | 590 | mask = ((f >= 30 ) & (f<50)) 591 | 592 | self.low_freq_chs_buffer.update(np.reshape(low_freq_chs, (1, -1))) 593 | self.low_freq_chs_std = self.low_freq_chs_buffer.extract().std(axis=0) 594 | 595 | emg_power = np.mean(mean_psd[mask, 0], axis=0) #HF power of right ear 596 | self._send_output(np.array([np.sqrt(emg_power)/2]), timestamp, 'emg') 597 | 598 | def _process_acc(self, samples, timestamp): 599 | self._send_output_vec(samples,0,'muse/acc') 600 | 601 | self.acc_buffer.update(samples) 602 | window = self.acc_buffer.extract(self.acc_window_len) 603 | 604 | timestamps = np.linspace(0,1/50*self.acc_window_len,self.acc_window_len) 605 | new_fs= 250 606 | timestamps_upsampled = np.arange(timestamps[0], timestamps[-1],1/new_fs) 607 | f = interpolate.interp1d(timestamps, window, kind='cubic', axis=0, 608 | fill_value=np.nan, assume_sorted=True) 609 | window_upsampled = f(timestamps_upsampled) 610 | for t in range(timestamps_upsampled.size-5,timestamps_upsampled.size): 611 | if self.debug_outputs: 612 | self._send_output(window_upsampled[t],0,'upsamp') 613 | upsample = np.array(window_upsampled[t]).reshape(1,3) 614 | filt_samples, self.filter0['zi'] = signal.lfilter( 615 | self.filter0['b'], self.filter0['a'], 616 | upsample, axis=0, zi=self.filter0['zi']) 617 | self.filt0_buffer.update(filt_samples) 618 | if self.debug_outputs: 619 | self._send_outputs(filt_samples,0,'filter0') 620 | 621 | filt_samples, self.filter1['zi'] = signal.lfilter( 622 | self.filter1['b'], self.filter1['a'], 623 | filt_samples, axis=0, zi=self.filter1['zi']) 624 | if self.debug_outputs: 625 | self._send_outputs(filt_samples,0,'filter1') 626 | 627 | filt_samples = np.sqrt(np.sum(filt_samples ** 2, axis=1)) 628 | if self.debug_outputs: 629 | self._send_output(filt_samples,0,'filter1L2') 630 | 631 | heart_samples, self.filter2['zi'] = signal.lfilter( 632 | self.filter2['b'], self.filter2['a'], 633 | filt_samples, axis=0, zi=self.filter2['zi']) 634 | 635 | if self.debug_outputs: 636 | self._send_output(heart_samples,0,'filter2') 637 | 638 | breathfilt_samples, self.filter3['zi'] = signal.lfilter( 639 | self.filter3['b'], self.filter3['a'], 640 | upsample, axis=0, zi=self.filter3['zi']) 641 | if self.debug_outputs: 642 | self._send_outputs(breathfilt_samples,0,'filter3') 643 | self.heart_buffer.update(heart_samples.reshape(1,1)) 644 | heartbuf = self.heart_buffer.extract(150) 645 | heartbufMin = heartbuf.min() 646 | heartbufMax = heartbuf.max() 647 | heart = np.reshape((heartbuf[-1]-heartbufMin)/(heartbufMax-heartbufMin), (1, 1)) 648 | self._send_output(heart,0,'heart') 649 | 650 | 651 | breathSmooth = breathfilt_samples[0,2].reshape(1,) 652 | if self.debug_outputs: 653 | self._send_output(breathSmooth,0,'breathRaw') 654 | 655 | breathSmooth, self.filter4['zi'] = signal.lfilter( 656 | self.filter4['b'], self.filter4['a'], 657 | breathSmooth, axis=0, zi=self.filter4['zi']) 658 | if self.debug_outputs: 659 | self._send_output(breathSmooth,0,'breathSmooth') 660 | 661 | breathNorm, self.filter5['zi'] = signal.lfilter( 662 | self.filter5['b'], self.filter5['a'], 663 | breathSmooth, axis=0, zi=self.filter5['zi']) 664 | 665 | if self.debug_outputs: 
666 | self._send_output(breathNorm,0,'breathNorm') 667 | 668 | breathFast, self.filter6['zi'] = signal.lfilter( 669 | self.filter6['b'], self.filter6['a'], 670 | breathSmooth, axis=0, zi=self.filter6['zi']) 671 | 672 | if self.debug_outputs: 673 | self._send_output(breathFast,0,'breathFast') 674 | 675 | breathLow, self.filter7['zi'] = signal.lfilter( 676 | self.filter7['b'], self.filter7['a'], 677 | breathSmooth, axis=0, zi=self.filter7['zi']) 678 | if self.debug_outputs: 679 | self._send_output(breathLow,0,'breathLow') 680 | 681 | 682 | self.breath_buffer.update(breathLow.reshape(1,1)) 683 | breathbuf = self.breath_buffer.extract(1000) 684 | breathbufMin = breathbuf.min() 685 | breathbufMax = breathbuf.max() 686 | breath = np.reshape((breathbuf[-1]-breathbufMin)/(breathbufMax-breathbufMin), (1, 1)) 687 | self._send_output(breath,0,'breath') 688 | 689 | def _process_gyro(self, samples, timestamp): 690 | self._send_output_vec(samples,0,'muse/gyro') 691 | 692 | 693 | def _send_outputs(self, output, timestamp, name): 694 | """Send pipeline outputs through the LSL or OSC stream. 695 | 696 | Args: 697 | output (scalar): output of the pipeline 698 | timestamp (float): timestamp 699 | """ 700 | for out in self._output_threads: 701 | if isinstance(out, str): # LSL outlet 702 | self._outlet.push_sample([output], timestamp=timestamp) 703 | 704 | else: # OSC output stream 705 | if USE_LIBLO: 706 | for c in range(self.n_channels): 707 | new_output = [('f', x) for x in output[:, c]] 708 | message = Message('/{}{}'.format(name, c), *new_output) 709 | #send(out, Bundle(timestamp, message)) 710 | send(out, message) 711 | else: 712 | for c in range(self.n_channels): 713 | self._client.send_message('/{}{}'.format(name, c), 714 | output[:, c]) 715 | 716 | if self.verbose: 717 | print('Output: {}'.format(output)) 718 | def _send_output_vec(self, output, timestamp, name): 719 | """Send pipeline outputs through the LSL or OSC stream. 720 | 721 | Args: 722 | output (scalar): output of the pipeline 723 | timestamp (float): timestamp 724 | """ 725 | for out in self._output_threads: 726 | if isinstance(out, str): # LSL outlet 727 | self._outlet.push_sample([output], timestamp=timestamp) 728 | 729 | else: # OSC output stream 730 | if USE_LIBLO: 731 | new_output = [('f', x) for x in output[0,:]] 732 | message = Message('/{}'.format(name), *new_output) 733 | # send(out, Bundle(timestamp, message)) 734 | send(out, message) 735 | if self.verbose: 736 | print('Output: {}'.format(output)) 737 | 738 | def _send_sparseOutput_vec(self, output, timestamp, name): 739 | """Send pipeline outputs through the LSL or OSC stream. 740 | 741 | Args: 742 | output (scalar): output of the pipeline 743 | timestamp (float): timestamp 744 | """ 745 | for out in self._sparseOutput_threads: 746 | if isinstance(out, str): # LSL outlet 747 | self._outlet.push_sample([output], timestamp=timestamp) 748 | 749 | else: # OSC output stream 750 | if USE_LIBLO: 751 | new_output = [('f', x) for x in output[0,:]] 752 | message = Message('/{}'.format(name), *new_output) 753 | #send(out, Bundle(timestamp, message)) 754 | send(out, message) 755 | if self.verbose: 756 | print('sparseOutput: {}'.format(output)) 757 | 758 | 759 | def _send_output(self, output, timestamp, name): 760 | """Send pipeline outputs through the LSL or OSC stream. 
761 |         Unlike _send_outputs(), the value is sent on a single OSC path (not one message per channel).
762 |         Args:
763 |             output (scalar): output of the pipeline
764 |             timestamp (float): timestamp
765 |         """
766 |         for out in self._output_threads:
767 |             if isinstance(out, str):  # LSL outlet
768 |                 raise NotImplementedError
769 |                 # self._outlet.push_sample([output], timestamp=timestamp)
770 | 
771 |             else:  # OSC output stream
772 |                 if USE_LIBLO:
773 |                     if np.array(output).size == 1:
774 |                         new_output = [('f', np.asarray(output).item())]
775 |                         message = Message('/{}'.format(name), *new_output)
776 |                     else:
777 |                         new_output = [('f', x) for x in output[:]]
778 |                         message = Message('/{}'.format(name), *new_output)
779 |                     # send(out, Bundle(timestamp, message))
780 |                     send(out, message)
781 |                 else:
782 |                     raise NotImplementedError
783 |                     # self._client.send_message('/{}'.format(name), output[:])
784 |         if self.verbose:
785 |             print('Output: {}'.format(output))
786 | 
787 |     def _send_sparseOutput(self, output, timestamp, name):  # like _send_output, but to the sparse output targets
788 |         for out in self._sparseOutput_threads:
789 |             if isinstance(out, str):  # LSL outlet
790 |                 raise NotImplementedError
791 |             else:  # OSC output stream
792 |                 if USE_LIBLO:
793 |                     if np.array(output).size == 1:
794 |                         new_output = [('f', np.asarray(output).item())]
795 |                         message = Message('/{}'.format(name), *new_output)
796 |                     else:
797 |                         new_output = [('f', x) for x in output[:]]
798 |                         message = Message('/{}'.format(name), *new_output)
799 |                     # send(out, Bundle(timestamp, message))
800 |                     send(out, message)
801 |                 else:
802 |                     raise NotImplementedError
803 |         if self.verbose:
804 |             print('sparseOutput: {}'.format(output))
805 | 
806 |     def start(self):
807 |         """Start receiving and processing EEG data.
808 |         """
809 |         self.started = True
810 | 
811 |         if isinstance(self.incoming, str):  # LSL inlet
812 |             self.eeg_thread = Thread(target=self._update_eeg_lsl)
813 |             self.eeg_thread.daemon = True
814 |             self.eeg_thread.start()
815 |         else:  # OSC input stream
816 |             if USE_LIBLO:
817 |                 self._osc_server.add_method('/muse/eeg', None,
818 |                                             self._update_eeg_liblo_osc)
819 |                 self._osc_server.add_method('/muse/acc', None,
820 |                                             self._update_acc_liblo_osc)
821 |                 self._osc_server.add_method('/muse/gyro', None,
822 |                                             self._update_gyro_liblo_osc)
823 |                 self._osc_server.start()
824 |             else:
825 |                 self._dispatcher.map('Person2/eeg',
826 |                                      self._update_eeg_python_osc, 'EEG')
827 |                 self._osc_server = osc_server.ThreadingOSCUDPServer(
828 |                     ('127.0.0.1', self.incoming['port']), self._dispatcher)
829 |                 print('OSC server initialized at port {}.'.format(
830 |                     self.incoming['port']))
831 |                 self._server_thread = Thread(
832 |                     target=self._osc_server.serve_forever)
833 |                 self._server_thread.start()
834 | 
835 |     def stop(self):
836 |         """Stop receiving and processing data.
837 |         """
838 |         self.started = False
839 | 
840 |         if isinstance(self.incoming, dict):
841 |             if not USE_LIBLO:
842 |                 self._osc_server.shutdown()  # stops serve_forever()
843 | 
844 | 
845 | if __name__ == '__main__':
846 | 
847 | 
848 |     # EEG PROCESSING
849 |     FS = 256.
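    # NOTE: the exact behaviour of ut.get_filter_coeff() is defined in
    # live_utils.py; the calls below are consistent with the usual convention
    # that l_freq alone gives a high-pass, h_freq alone a low-pass,
    # l_freq < h_freq a band-pass, and l_freq > h_freq a band-stop -- e.g. the
    # 55-65 Hz "AC notch" on the next line, wide enough to cover both 50 Hz
    # and 60 Hz mains interference (assumed, not verified against live_utils.py).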
850 | #AC notch filter 851 | EEG_b, EEG_a = ut.get_filter_coeff(FS, 6, l_freq=65, h_freq=55,method='butter') 852 | #demean 853 | EEG_b2, EEG_a2 = ut.get_filter_coeff(FS, 3, l_freq=1,method='butter') 854 | #eeg range for clean signal display 855 | EEG_b3, EEG_a3 = ut.get_filter_coeff(FS, 3, h_freq=40,method='butter') 856 | # Filter bank 857 | b_delta, a_delta = ut.get_filter_coeff(FS, 3, l_freq=1, h_freq=4, method='butter') 858 | b_theta, a_theta = ut.get_filter_coeff(FS, 3, l_freq=4, h_freq=7.5, method='butter') 859 | b_alpha, a_alpha = ut.get_filter_coeff(FS, 3, l_freq=7.5, h_freq=13, method='butter') 860 | b_beta, a_beta = ut.get_filter_coeff(FS, 3, l_freq=13, h_freq=30, method='butter') 861 | 862 | 863 | #Motion Sensor Processing 864 | FSb1 = 250. 865 | b0 = np.array([-1, 2, -1]) / 3 866 | a0 = 1 867 | b1, a1 = ut.get_filter_coeff(FSb1, 4, l_freq=10, h_freq=13,method='butter') 868 | b2, a2 = ut.get_filter_coeff(FSb1, 2, l_freq=0.75, h_freq=2.5,method='butter') 869 | min_breath_period = 0.3 # maximal breath frequency 870 | min_n_points_in_breath = int(min_breath_period * FSb1) 871 | b3 = np.ones((min_n_points_in_breath,)) / min_n_points_in_breath 872 | a3 = 1 873 | FSb2 = 50. 874 | b4, a4 = ut.get_filter_coeff(FSb2, 3, h_freq=5,method='butter') 875 | b5, a5 = ut.get_filter_coeff(FSb2, 3, l_freq=0.13, h_freq=1,method='butter') 876 | b6, a6 = ut.get_filter_coeff(FSb2, 3, l_freq=1, h_freq=5,method='butter') 877 | b7, a7 = ut.get_filter_coeff(FSb2, 3, h_freq=1,method='butter') 878 | 879 | config = {'fs': FS, 880 | 'n_channels': 5, 881 | 'raw_buffer_len': int(3 * FS), 882 | 'filt_buffer_len': int(3 * FS), 883 | 'window_len': int(FS), 884 | 'step': int(FS / 10), 885 | 'filter': (EEG_b, EEG_a), 886 | 'hpfilter': (EEG_b2, EEG_a2), 887 | 'lpfilter': (EEG_b3, EEG_a3), 888 | 'filter_bank': {'delta': (b_delta, a_delta)}, #'theta': (b_theta, a_theta)}, #'alpha': (b_alpha, a_alpha), 'beta': (b_beta, a_alpha)}, 889 | 'psd_window_len': int(FS), 890 | 'psd_buffer_len': 5, 891 | 'filter0': (b0, a0), 892 | 'filter1': (b1, a1), 893 | 'filter2': (b2, a2), 894 | 'filter3': (b3, a3), 895 | 'filter4': (b4, a4), 896 | 'filter5': (b5, a5), 897 | 'filter6': (b6, a6), 898 | 'filter7': (b7, a7), 899 | } 900 | 901 | fft_server = FFTServer({'port':options.port}, # 'MEG', # # 902 | ({'address': options.oscip, 'port': options.oscport}), 903 | ({'address': options.sparseip, 'port': options.sparseport}), 904 | config=config, 905 | device_source='muse', #vive, leroy 906 | software_source='muselsl', 907 | debug_outputs=False, 908 | verbose=False) 909 | fft_server.start() 910 | 911 | while True: 912 | try: 913 | time.sleep(1) 914 | except: 915 | fft_server.stop() 916 | print('breaking') 917 | break 918 | 919 | 920 | -------------------------------------------------------------------------------- /muse-sigproc/muselab_configurationTenere.json: -------------------------------------------------------------------------------- 1 | { 2 | "Control Pane": { 3 | "Is Visible": true 4 | }, 5 | "Incoming OSC Config": { 6 | "Receiving Ports": [ 7 | { 8 | "Port": 8001, 9 | "UDP": true 10 | } 11 | ] 12 | }, 13 | "Outgoing OSC Config": { 14 | "Forwarding": [] 15 | }, 16 | "DSP Config": { 17 | "DSP Function Settings": [], 18 | "DSP Signals": [] 19 | }, 20 | "Signal ID Groups": [ 21 | { 22 | "Signal Type": "GROUP", 23 | "Signal": { 24 | "Base": { 25 | "Colour": "Green", 26 | "Label": "/smooth_eeg" 27 | }, 28 | "IDs": [ 29 | { 30 | "Signal Type": "OSC", 31 | "Signal": { 32 | "Base": { 33 | "Colour": "Blue", 34 | "Label": "/smooth_eeg (0)" 35 | 
}, 36 | "Signal Source": "/smooth_eeg", 37 | "Index": 0, 38 | "Port": 8001, 39 | "TCP": false 40 | } 41 | }, 42 | { 43 | "Signal Type": "OSC", 44 | "Signal": { 45 | "Base": { 46 | "Colour": "Cyan", 47 | "Label": "/smooth_eeg (1)" 48 | }, 49 | "Signal Source": "/smooth_eeg", 50 | "Index": 1, 51 | "Port": 8001, 52 | "TCP": false 53 | } 54 | }, 55 | { 56 | "Signal Type": "OSC", 57 | "Signal": { 58 | "Base": { 59 | "Colour": "Dark Khaki", 60 | "Label": "/smooth_eeg (2)" 61 | }, 62 | "Signal Source": "/smooth_eeg", 63 | "Index": 2, 64 | "Port": 8001, 65 | "TCP": false 66 | } 67 | }, 68 | { 69 | "Signal Type": "OSC", 70 | "Signal": { 71 | "Base": { 72 | "Colour": "Fuchsia", 73 | "Label": "/smooth_eeg (3)" 74 | }, 75 | "Signal Source": "/smooth_eeg", 76 | "Index": 3, 77 | "Port": 8001, 78 | "TCP": false 79 | } 80 | } 81 | ] 82 | } 83 | }, 84 | { 85 | "Signal Type": "GROUP", 86 | "Signal": { 87 | "Base": { 88 | "Colour": "Light Blue", 89 | "Label": "/low_freq_chs" 90 | }, 91 | "IDs": [ 92 | { 93 | "Signal Type": "OSC", 94 | "Signal": { 95 | "Base": { 96 | "Colour": "Green Yellow", 97 | "Label": "/low_freq_chs (0)" 98 | }, 99 | "Signal Source": "/low_freq_chs", 100 | "Index": 0, 101 | "Port": 8001, 102 | "TCP": false 103 | } 104 | }, 105 | { 106 | "Signal Type": "OSC", 107 | "Signal": { 108 | "Base": { 109 | "Colour": "Khaki", 110 | "Label": "/low_freq_chs (1)" 111 | }, 112 | "Signal Source": "/low_freq_chs", 113 | "Index": 1, 114 | "Port": 8001, 115 | "TCP": false 116 | } 117 | } 118 | ] 119 | } 120 | }, 121 | { 122 | "Signal Type": "GROUP", 123 | "Signal": { 124 | "Base": { 125 | "Colour": "Cyan", 126 | "Label": "/eegDec" 127 | }, 128 | "IDs": [ 129 | { 130 | "Signal Type": "OSC", 131 | "Signal": { 132 | "Base": { 133 | "Colour": "Light Cyan", 134 | "Label": "/eegDec (0)" 135 | }, 136 | "Signal Source": "/eegDec", 137 | "Index": 0, 138 | "Port": 8001, 139 | "TCP": false 140 | } 141 | }, 142 | { 143 | "Signal Type": "OSC", 144 | "Signal": { 145 | "Base": { 146 | "Colour": "Light Green", 147 | "Label": "/eegDec (1)" 148 | }, 149 | "Signal Source": "/eegDec", 150 | "Index": 1, 151 | "Port": 8001, 152 | "TCP": false 153 | } 154 | }, 155 | { 156 | "Signal Type": "OSC", 157 | "Signal": { 158 | "Base": { 159 | "Colour": "Light Salmon", 160 | "Label": "/eegDec (2)" 161 | }, 162 | "Signal Source": "/eegDec", 163 | "Index": 2, 164 | "Port": 8001, 165 | "TCP": false 166 | } 167 | }, 168 | { 169 | "Signal Type": "OSC", 170 | "Signal": { 171 | "Base": { 172 | "Colour": "Orange", 173 | "Label": "/eegDec (3)" 174 | }, 175 | "Signal Source": "/eegDec", 176 | "Index": 3, 177 | "Port": 8001, 178 | "TCP": false 179 | } 180 | } 181 | ] 182 | } 183 | }, 184 | { 185 | "Signal Type": "GROUP", 186 | "Signal": { 187 | "Base": { 188 | "Colour": "Purple", 189 | "Label": "/bands0" 190 | }, 191 | "IDs": [ 192 | { 193 | "Signal Type": "OSC", 194 | "Signal": { 195 | "Base": { 196 | "Colour": "Red", 197 | "Label": "/bands0 (0)" 198 | }, 199 | "Signal Source": "/bands0", 200 | "Index": 0, 201 | "Port": 8001, 202 | "TCP": false 203 | } 204 | }, 205 | { 206 | "Signal Type": "OSC", 207 | "Signal": { 208 | "Base": { 209 | "Colour": "Salmon", 210 | "Label": "/bands0 (1)" 211 | }, 212 | "Signal Source": "/bands0", 213 | "Index": 1, 214 | "Port": 8001, 215 | "TCP": false 216 | } 217 | }, 218 | { 219 | "Signal Type": "OSC", 220 | "Signal": { 221 | "Base": { 222 | "Colour": "Silver", 223 | "Label": "/bands0 (2)" 224 | }, 225 | "Signal Source": "/bands0", 226 | "Index": 2, 227 | "Port": 8001, 228 | "TCP": false 229 | } 230 | }, 231 | { 
232 | "Signal Type": "OSC", 233 | "Signal": { 234 | "Base": { 235 | "Colour": "White", 236 | "Label": "/bands0 (3)" 237 | }, 238 | "Signal Source": "/bands0", 239 | "Index": 3, 240 | "Port": 8001, 241 | "TCP": false 242 | } 243 | }, 244 | { 245 | "Signal Type": "OSC", 246 | "Signal": { 247 | "Base": { 248 | "Colour": "Yellow", 249 | "Label": "/bands0 (4)" 250 | }, 251 | "Signal Source": "/bands0", 252 | "Index": 4, 253 | "Port": 8001, 254 | "TCP": false 255 | } 256 | } 257 | ] 258 | } 259 | }, 260 | { 261 | "Signal Type": "GROUP", 262 | "Signal": { 263 | "Base": { 264 | "Colour": "Blue", 265 | "Label": "/bands1" 266 | }, 267 | "IDs": [ 268 | { 269 | "Signal Type": "OSC", 270 | "Signal": { 271 | "Base": { 272 | "Colour": "Cyan", 273 | "Label": "/bands1 (0)" 274 | }, 275 | "Signal Source": "/bands1", 276 | "Index": 0, 277 | "Port": 8001, 278 | "TCP": false 279 | } 280 | }, 281 | { 282 | "Signal Type": "OSC", 283 | "Signal": { 284 | "Base": { 285 | "Colour": "Dark Khaki", 286 | "Label": "/bands1 (1)" 287 | }, 288 | "Signal Source": "/bands1", 289 | "Index": 1, 290 | "Port": 8001, 291 | "TCP": false 292 | } 293 | }, 294 | { 295 | "Signal Type": "OSC", 296 | "Signal": { 297 | "Base": { 298 | "Colour": "Fuchsia", 299 | "Label": "/bands1 (2)" 300 | }, 301 | "Signal Source": "/bands1", 302 | "Index": 2, 303 | "Port": 8001, 304 | "TCP": false 305 | } 306 | }, 307 | { 308 | "Signal Type": "OSC", 309 | "Signal": { 310 | "Base": { 311 | "Colour": "Green", 312 | "Label": "/bands1 (3)" 313 | }, 314 | "Signal Source": "/bands1", 315 | "Index": 3, 316 | "Port": 8001, 317 | "TCP": false 318 | } 319 | }, 320 | { 321 | "Signal Type": "OSC", 322 | "Signal": { 323 | "Base": { 324 | "Colour": "Green Yellow", 325 | "Label": "/bands1 (4)" 326 | }, 327 | "Signal Source": "/bands1", 328 | "Index": 4, 329 | "Port": 8001, 330 | "TCP": false 331 | } 332 | } 333 | ] 334 | } 335 | }, 336 | { 337 | "Signal Type": "GROUP", 338 | "Signal": { 339 | "Base": { 340 | "Colour": "Khaki", 341 | "Label": "/bands2" 342 | }, 343 | "IDs": [ 344 | { 345 | "Signal Type": "OSC", 346 | "Signal": { 347 | "Base": { 348 | "Colour": "Light Blue", 349 | "Label": "/bands2 (0)" 350 | }, 351 | "Signal Source": "/bands2", 352 | "Index": 0, 353 | "Port": 8001, 354 | "TCP": false 355 | } 356 | }, 357 | { 358 | "Signal Type": "OSC", 359 | "Signal": { 360 | "Base": { 361 | "Colour": "Light Cyan", 362 | "Label": "/bands2 (1)" 363 | }, 364 | "Signal Source": "/bands2", 365 | "Index": 1, 366 | "Port": 8001, 367 | "TCP": false 368 | } 369 | }, 370 | { 371 | "Signal Type": "OSC", 372 | "Signal": { 373 | "Base": { 374 | "Colour": "Light Green", 375 | "Label": "/bands2 (2)" 376 | }, 377 | "Signal Source": "/bands2", 378 | "Index": 2, 379 | "Port": 8001, 380 | "TCP": false 381 | } 382 | }, 383 | { 384 | "Signal Type": "OSC", 385 | "Signal": { 386 | "Base": { 387 | "Colour": "Light Salmon", 388 | "Label": "/bands2 (3)" 389 | }, 390 | "Signal Source": "/bands2", 391 | "Index": 3, 392 | "Port": 8001, 393 | "TCP": false 394 | } 395 | }, 396 | { 397 | "Signal Type": "OSC", 398 | "Signal": { 399 | "Base": { 400 | "Colour": "Orange", 401 | "Label": "/bands2 (4)" 402 | }, 403 | "Signal Source": "/bands2", 404 | "Index": 4, 405 | "Port": 8001, 406 | "TCP": false 407 | } 408 | } 409 | ] 410 | } 411 | }, 412 | { 413 | "Signal Type": "GROUP", 414 | "Signal": { 415 | "Base": { 416 | "Colour": "Purple", 417 | "Label": "/bands3" 418 | }, 419 | "IDs": [ 420 | { 421 | "Signal Type": "OSC", 422 | "Signal": { 423 | "Base": { 424 | "Colour": "Red", 425 | "Label": "/bands3 (0)" 
426 | }, 427 | "Signal Source": "/bands3", 428 | "Index": 0, 429 | "Port": 8001, 430 | "TCP": false 431 | } 432 | }, 433 | { 434 | "Signal Type": "OSC", 435 | "Signal": { 436 | "Base": { 437 | "Colour": "Salmon", 438 | "Label": "/bands3 (1)" 439 | }, 440 | "Signal Source": "/bands3", 441 | "Index": 1, 442 | "Port": 8001, 443 | "TCP": false 444 | } 445 | }, 446 | { 447 | "Signal Type": "OSC", 448 | "Signal": { 449 | "Base": { 450 | "Colour": "Silver", 451 | "Label": "/bands3 (2)" 452 | }, 453 | "Signal Source": "/bands3", 454 | "Index": 2, 455 | "Port": 8001, 456 | "TCP": false 457 | } 458 | }, 459 | { 460 | "Signal Type": "OSC", 461 | "Signal": { 462 | "Base": { 463 | "Colour": "White", 464 | "Label": "/bands3 (3)" 465 | }, 466 | "Signal Source": "/bands3", 467 | "Index": 3, 468 | "Port": 8001, 469 | "TCP": false 470 | } 471 | }, 472 | { 473 | "Signal Type": "OSC", 474 | "Signal": { 475 | "Base": { 476 | "Colour": "Yellow", 477 | "Label": "/bands3 (4)" 478 | }, 479 | "Signal Source": "/bands3", 480 | "Index": 4, 481 | "Port": 8001, 482 | "TCP": false 483 | } 484 | } 485 | ] 486 | } 487 | }, 488 | { 489 | "Signal Type": "GROUP", 490 | "Signal": { 491 | "Base": { 492 | "Colour": "Silver", 493 | "Label": "/ratios0" 494 | }, 495 | "IDs": [ 496 | { 497 | "Signal Type": "OSC", 498 | "Signal": { 499 | "Base": { 500 | "Colour": "Cyan", 501 | "Label": "/ratios0 (0)" 502 | }, 503 | "Signal Source": "/ratios0", 504 | "Index": 0, 505 | "Port": 8001, 506 | "TCP": false 507 | } 508 | }, 509 | { 510 | "Signal Type": "OSC", 511 | "Signal": { 512 | "Base": { 513 | "Colour": "Dark Khaki", 514 | "Label": "/ratios0 (1)" 515 | }, 516 | "Signal Source": "/ratios0", 517 | "Index": 1, 518 | "Port": 8001, 519 | "TCP": false 520 | } 521 | } 522 | ] 523 | } 524 | }, 525 | { 526 | "Signal Type": "GROUP", 527 | "Signal": { 528 | "Base": { 529 | "Colour": "Blue", 530 | "Label": "/ratios1" 531 | }, 532 | "IDs": [ 533 | { 534 | "Signal Type": "OSC", 535 | "Signal": { 536 | "Base": { 537 | "Colour": "Green", 538 | "Label": "/ratios1 (0)" 539 | }, 540 | "Signal Source": "/ratios1", 541 | "Index": 0, 542 | "Port": 8001, 543 | "TCP": false 544 | } 545 | }, 546 | { 547 | "Signal Type": "OSC", 548 | "Signal": { 549 | "Base": { 550 | "Colour": "Green Yellow", 551 | "Label": "/ratios1 (1)" 552 | }, 553 | "Signal Source": "/ratios1", 554 | "Index": 1, 555 | "Port": 8001, 556 | "TCP": false 557 | } 558 | } 559 | ] 560 | } 561 | }, 562 | { 563 | "Signal Type": "GROUP", 564 | "Signal": { 565 | "Base": { 566 | "Colour": "Fuchsia", 567 | "Label": "/ratios2" 568 | }, 569 | "IDs": [ 570 | { 571 | "Signal Type": "OSC", 572 | "Signal": { 573 | "Base": { 574 | "Colour": "Light Blue", 575 | "Label": "/ratios2 (0)" 576 | }, 577 | "Signal Source": "/ratios2", 578 | "Index": 0, 579 | "Port": 8001, 580 | "TCP": false 581 | } 582 | }, 583 | { 584 | "Signal Type": "OSC", 585 | "Signal": { 586 | "Base": { 587 | "Colour": "Light Cyan", 588 | "Label": "/ratios2 (1)" 589 | }, 590 | "Signal Source": "/ratios2", 591 | "Index": 1, 592 | "Port": 8001, 593 | "TCP": false 594 | } 595 | } 596 | ] 597 | } 598 | }, 599 | { 600 | "Signal Type": "GROUP", 601 | "Signal": { 602 | "Base": { 603 | "Colour": "Khaki", 604 | "Label": "/ratios3" 605 | }, 606 | "IDs": [ 607 | { 608 | "Signal Type": "OSC", 609 | "Signal": { 610 | "Base": { 611 | "Colour": "Light Salmon", 612 | "Label": "/ratios3 (0)" 613 | }, 614 | "Signal Source": "/ratios3", 615 | "Index": 0, 616 | "Port": 8001, 617 | "TCP": false 618 | } 619 | }, 620 | { 621 | "Signal Type": "OSC", 622 | "Signal": { 
623 | "Base": { 624 | "Colour": "Orange", 625 | "Label": "/ratios3 (1)" 626 | }, 627 | "Signal Source": "/ratios3", 628 | "Index": 1, 629 | "Port": 8001, 630 | "TCP": false 631 | } 632 | } 633 | ] 634 | } 635 | }, 636 | { 637 | "Signal Type": "GROUP", 638 | "Signal": { 639 | "Base": { 640 | "Colour": "Khaki", 641 | "Label": "/muse/eeg" 642 | }, 643 | "IDs": [ 644 | { 645 | "Signal Type": "OSC", 646 | "Signal": { 647 | "Base": { 648 | "Colour": "Dark Khaki", 649 | "Label": "/muse/eeg (0)" 650 | }, 651 | "Signal Source": "/muse/eeg", 652 | "Index": 0, 653 | "Port": 8001, 654 | "TCP": false 655 | } 656 | }, 657 | { 658 | "Signal Type": "OSC", 659 | "Signal": { 660 | "Base": { 661 | "Colour": "Fuchsia", 662 | "Label": "/muse/eeg (1)" 663 | }, 664 | "Signal Source": "/muse/eeg", 665 | "Index": 1, 666 | "Port": 8001, 667 | "TCP": false 668 | } 669 | }, 670 | { 671 | "Signal Type": "OSC", 672 | "Signal": { 673 | "Base": { 674 | "Colour": "Green", 675 | "Label": "/muse/eeg (2)" 676 | }, 677 | "Signal Source": "/muse/eeg", 678 | "Index": 2, 679 | "Port": 8001, 680 | "TCP": false 681 | } 682 | }, 683 | { 684 | "Signal Type": "OSC", 685 | "Signal": { 686 | "Base": { 687 | "Colour": "Green Yellow", 688 | "Label": "/muse/eeg (3)" 689 | }, 690 | "Signal Source": "/muse/eeg", 691 | "Index": 3, 692 | "Port": 8001, 693 | "TCP": false 694 | } 695 | } 696 | ] 697 | } 698 | }, 699 | { 700 | "Signal Type": "GROUP", 701 | "Signal": { 702 | "Base": { 703 | "Colour": "White", 704 | "Label": "/muse/gyro" 705 | }, 706 | "IDs": [ 707 | { 708 | "Signal Type": "OSC", 709 | "Signal": { 710 | "Base": { 711 | "Colour": "Red", 712 | "Label": "/muse/gyro (0)" 713 | }, 714 | "Signal Source": "/muse/gyro", 715 | "Index": 0, 716 | "Port": 8001, 717 | "TCP": false 718 | } 719 | }, 720 | { 721 | "Signal Type": "OSC", 722 | "Signal": { 723 | "Base": { 724 | "Colour": "Salmon", 725 | "Label": "/muse/gyro (1)" 726 | }, 727 | "Signal Source": "/muse/gyro", 728 | "Index": 1, 729 | "Port": 8001, 730 | "TCP": false 731 | } 732 | }, 733 | { 734 | "Signal Type": "OSC", 735 | "Signal": { 736 | "Base": { 737 | "Colour": "Silver", 738 | "Label": "/muse/gyro (2)" 739 | }, 740 | "Signal Source": "/muse/gyro", 741 | "Index": 2, 742 | "Port": 8001, 743 | "TCP": false 744 | } 745 | } 746 | ] 747 | } 748 | }, 749 | { 750 | "Signal Type": "GROUP", 751 | "Signal": { 752 | "Base": { 753 | "Colour": "Dark Khaki", 754 | "Label": "/muse/acc" 755 | }, 756 | "IDs": [ 757 | { 758 | "Signal Type": "OSC", 759 | "Signal": { 760 | "Base": { 761 | "Colour": "Yellow", 762 | "Label": "/muse/acc (0)" 763 | }, 764 | "Signal Source": "/muse/acc", 765 | "Index": 0, 766 | "Port": 8001, 767 | "TCP": false 768 | } 769 | }, 770 | { 771 | "Signal Type": "OSC", 772 | "Signal": { 773 | "Base": { 774 | "Colour": "Blue", 775 | "Label": "/muse/acc (1)" 776 | }, 777 | "Signal Source": "/muse/acc", 778 | "Index": 1, 779 | "Port": 8001, 780 | "TCP": false 781 | } 782 | }, 783 | { 784 | "Signal Type": "OSC", 785 | "Signal": { 786 | "Base": { 787 | "Colour": "Cyan", 788 | "Label": "/muse/acc (2)" 789 | }, 790 | "Signal Source": "/muse/acc", 791 | "Index": 2, 792 | "Port": 8001, 793 | "TCP": false 794 | } 795 | } 796 | ] 797 | } 798 | } 799 | ], 800 | "Visualizers": [ 801 | { 802 | "Visualizer Type": "Scrolling Line Graph", 803 | "Visualizer Settings": { 804 | "Label": "EEG", 805 | "Max Amplitude": 312.5, 806 | "Min Amplitude": -312.5, 807 | "Override Master Time": false, 808 | "Time Range": 10000000000, 809 | "Spread Signals Apart": true, 810 | "Spread Factor": 1.0, 811 | "Data 
Lines": [ 812 | { 813 | "Signal": { 814 | "Signal Type": "OSC", 815 | "Signal": { 816 | "Base": { 817 | "Colour": "Green", 818 | "Label": "/muse/eeg (2)" 819 | }, 820 | "Signal Source": "/muse/eeg", 821 | "Index": 2, 822 | "Port": 8001, 823 | "TCP": false 824 | } 825 | }, 826 | "Zeroed": true, 827 | "Assuming Uniform Rate": true 828 | }, 829 | { 830 | "Signal": { 831 | "Signal Type": "OSC", 832 | "Signal": { 833 | "Base": { 834 | "Colour": "Fuchsia", 835 | "Label": "/muse/eeg (1)" 836 | }, 837 | "Signal Source": "/muse/eeg", 838 | "Index": 1, 839 | "Port": 8001, 840 | "TCP": false 841 | } 842 | }, 843 | "Zeroed": true, 844 | "Assuming Uniform Rate": true 845 | }, 846 | { 847 | "Signal": { 848 | "Signal Type": "OSC", 849 | "Signal": { 850 | "Base": { 851 | "Colour": "Green Yellow", 852 | "Label": "/muse/eeg (3)" 853 | }, 854 | "Signal Source": "/muse/eeg", 855 | "Index": 3, 856 | "Port": 8001, 857 | "TCP": false 858 | } 859 | }, 860 | "Zeroed": true, 861 | "Assuming Uniform Rate": true 862 | }, 863 | { 864 | "Signal": { 865 | "Signal Type": "OSC", 866 | "Signal": { 867 | "Base": { 868 | "Colour": "Dark Khaki", 869 | "Label": "/muse/eeg (0)" 870 | }, 871 | "Signal Source": "/muse/eeg", 872 | "Index": 0, 873 | "Port": 8001, 874 | "TCP": false 875 | } 876 | }, 877 | "Zeroed": true, 878 | "Assuming Uniform Rate": true 879 | } 880 | ] 881 | } 882 | }, 883 | { 884 | "Visualizer Type": "Scrolling Line Graph", 885 | "Visualizer Settings": { 886 | "Label": "ProcessedData", 887 | "Max Amplitude": 1.0, 888 | "Min Amplitude": 0.0, 889 | "Override Master Time": false, 890 | "Time Range": 10000000000, 891 | "Spread Signals Apart": false, 892 | "Spread Factor": 1.0, 893 | "Data Lines": [ 894 | { 895 | "Signal": { 896 | "Signal Type": "OSC", 897 | "Signal": { 898 | "Base": { 899 | "Colour": "Salmon", 900 | "Label": "/blinkVal (0)" 901 | }, 902 | "Signal Source": "/blinkVal", 903 | "Index": 0, 904 | "Port": 8001, 905 | "TCP": false 906 | } 907 | }, 908 | "Zeroed": false, 909 | "Assuming Uniform Rate": true 910 | }, 911 | { 912 | "Signal": { 913 | "Signal Type": "OSC", 914 | "Signal": { 915 | "Base": { 916 | "Colour": "Red", 917 | "Label": "/blink (0)" 918 | }, 919 | "Signal Source": "/blink", 920 | "Index": 0, 921 | "Port": 8001, 922 | "TCP": false 923 | } 924 | }, 925 | "Zeroed": false, 926 | "Assuming Uniform Rate": true 927 | }, 928 | { 929 | "Signal": { 930 | "Signal Type": "OSC", 931 | "Signal": { 932 | "Base": { 933 | "Colour": "White", 934 | "Label": "/calm (0)" 935 | }, 936 | "Signal Source": "/calm", 937 | "Index": 0, 938 | "Port": 8001, 939 | "TCP": false 940 | } 941 | }, 942 | "Zeroed": false, 943 | "Assuming Uniform Rate": true 944 | }, 945 | { 946 | "Signal": { 947 | "Signal Type": "OSC", 948 | "Signal": { 949 | "Base": { 950 | "Colour": "Yellow", 951 | "Label": "/emg (0)" 952 | }, 953 | "Signal Source": "/emg", 954 | "Index": 0, 955 | "Port": 8001, 956 | "TCP": false 957 | } 958 | }, 959 | "Zeroed": false, 960 | "Assuming Uniform Rate": true 961 | }, 962 | { 963 | "Signal": { 964 | "Signal Type": "OSC", 965 | "Signal": { 966 | "Base": { 967 | "Colour": "Blue", 968 | "Label": "/goodData (0)" 969 | }, 970 | "Signal Source": "/goodData", 971 | "Index": 0, 972 | "Port": 8001, 973 | "TCP": false 974 | } 975 | }, 976 | "Zeroed": false, 977 | "Assuming Uniform Rate": true 978 | } 979 | ] 980 | } 981 | }, 982 | { 983 | "Visualizer Type": "Scrolling Line Graph", 984 | "Visualizer Settings": { 985 | "Label": "Scrolling Line Graph", 986 | "Max Amplitude": 1.0, 987 | "Min Amplitude": 0.0, 988 | "Override 
Master Time": false, 989 | "Time Range": 10000000000, 990 | "Spread Signals Apart": false, 991 | "Spread Factor": 1.0, 992 | "Data Lines": [ 993 | { 994 | "Signal": { 995 | "Signal Type": "OSC", 996 | "Signal": { 997 | "Base": { 998 | "Colour": "Cyan", 999 | "Label": "/saccad (0)" 1000 | }, 1001 | "Signal Source": "/saccad", 1002 | "Index": 0, 1003 | "Port": 8001, 1004 | "TCP": false 1005 | } 1006 | }, 1007 | "Zeroed": false, 1008 | "Assuming Uniform Rate": true 1009 | }, 1010 | { 1011 | "Signal": { 1012 | "Signal Type": "OSC", 1013 | "Signal": { 1014 | "Base": { 1015 | "Colour": "White", 1016 | "Label": "/calm (0)" 1017 | }, 1018 | "Signal Source": "/calm", 1019 | "Index": 0, 1020 | "Port": 8001, 1021 | "TCP": false 1022 | } 1023 | }, 1024 | "Zeroed": false, 1025 | "Assuming Uniform Rate": true 1026 | } 1027 | ] 1028 | } 1029 | }, 1030 | { 1031 | "Visualizer Type": "Scrolling Line Graph", 1032 | "Visualizer Settings": { 1033 | "Label": "HeartBreath", 1034 | "Max Amplitude": 1.0, 1035 | "Min Amplitude": 0.0, 1036 | "Override Master Time": false, 1037 | "Time Range": 10000000000, 1038 | "Spread Signals Apart": false, 1039 | "Spread Factor": 1.0, 1040 | "Data Lines": [ 1041 | { 1042 | "Signal": { 1043 | "Signal Type": "OSC", 1044 | "Signal": { 1045 | "Base": { 1046 | "Colour": "Light Salmon", 1047 | "Label": "/breath (0)" 1048 | }, 1049 | "Signal Source": "/breath", 1050 | "Index": 0, 1051 | "Port": 8001, 1052 | "TCP": false 1053 | } 1054 | }, 1055 | "Zeroed": false, 1056 | "Assuming Uniform Rate": true 1057 | }, 1058 | { 1059 | "Signal": { 1060 | "Signal Type": "OSC", 1061 | "Signal": { 1062 | "Base": { 1063 | "Colour": "Fuchsia", 1064 | "Label": "/heart (0)" 1065 | }, 1066 | "Signal Source": "/heart", 1067 | "Index": 0, 1068 | "Port": 8001, 1069 | "TCP": false 1070 | } 1071 | }, 1072 | "Zeroed": false, 1073 | "Assuming Uniform Rate": true 1074 | } 1075 | ] 1076 | } 1077 | } 1078 | ], 1079 | "Marker": { 1080 | "Markings": [ 1081 | "/Marker/0", 1082 | "/Marker/1", 1083 | "/Marker/2", 1084 | "/Marker/3", 1085 | "/Marker/4", 1086 | "/Marker/5", 1087 | "/Marker/6", 1088 | "/Marker/7", 1089 | "/Marker/8", 1090 | "/Marker/9" 1091 | ] 1092 | } 1093 | } -------------------------------------------------------------------------------- /muse-sock/README.md: -------------------------------------------------------------------------------- 1 | # muse-sock 2 | 3 | This directory contains several utilities for streaming Muse data in real-time to LXStudio over LSL and OSC 4 | 5 | muse-sock now supports EEG, accelerometer and gyro data from the muse 6 | 7 | muse-sock.py has a watchdog exits if data hasn’t been received for 5s, or too much data is lost over 10s 8 | 9 | debug messages come out once per second and report approx time since last accelerometer data packet, percentage of data lost over the last second, and average data loss over the last 10s 10 | 11 | 12 | Run these on the command line for a short description of the functionality of each script: 13 | 14 | ``` 15 | python muse-sock.py --help 16 | python muse-listener.py --help 17 | ``` 18 | 19 | ## Runtime 20 | Start both `muse-sock.py` and `muse-listener.py` on the pi to receive, parse, and send out OSC messages. 
21 | 22 | Optionally: 23 | muse-reconnect is a bash script that tries to reconnect to a muse forever 24 | 25 | 26 | -------------------------------------------------------------------------------- /muse-sock/dump_osc.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | # 4 | # pyliblo - Python bindings for the liblo OSC library 5 | # 6 | # Copyright (C) 2007-2011 Dominic Sacré 7 | # 8 | # This program is free software; you can redistribute it and/or modify 9 | # it under the terms of the GNU General Public License as published by 10 | # the Free Software Foundation; either version 2 of the License, or 11 | # (at your option) any later version. 12 | # 13 | 14 | import sys 15 | import liblo 16 | 17 | 18 | class DumpOSC: 19 | 20 | def blob_to_hex(self, b): 21 | return " ".join([ (hex(v/16).upper()[-1] + hex(v%16).upper()[-1]) for v in b ]) 22 | 23 | def callback(self, path, args, types, src): 24 | write = sys.stdout.write 25 | ## print source 26 | #write("from " + src.get_url() + ": ") 27 | # print path 28 | write(path + " ,") 29 | # print typespec 30 | write(types) 31 | # loop through arguments and print them 32 | for a, t in zip(args, types): 33 | write(" ") 34 | if t == None: 35 | #unknown type 36 | write("[unknown type]") 37 | elif t == 'b': 38 | # it's a blob 39 | write("[" + self.blob_to_hex(a) + "]") 40 | else: 41 | # anything else 42 | write(str(a)) 43 | write('\n') 44 | 45 | def __init__(self, port = None): 46 | # create server object 47 | try: 48 | self.server = liblo.Server(port) 49 | except liblo.ServerError as err: 50 | sys.exit(str(err)) 51 | 52 | print("listening on URL: " + self.server.get_url()) 53 | 54 | # register callback function for all messages 55 | self.server.add_method(None, None, self.callback) 56 | 57 | def run(self): 58 | # just loop and dispatch messages every 10ms 59 | while True: 60 | self.server.recv(10) 61 | 62 | 63 | if __name__ == '__main__': 64 | # display help 65 | if len(sys.argv) == 1 or sys.argv[1] in ("-h", "--help"): 66 | sys.exit("Usage: " + sys.argv[0] + " port") 67 | 68 | # require one argument (port number) 69 | if len(sys.argv) < 2: 70 | sys.exit("please specify a port or URL") 71 | 72 | app = DumpOSC(sys.argv[1]) 73 | try: 74 | app.run() 75 | except KeyboardInterrupt: 76 | del app 77 | -------------------------------------------------------------------------------- /muse-sock/muse-listener.py: -------------------------------------------------------------------------------- 1 | import socket 2 | import bitstring 3 | import numpy as np 4 | import liblo as lo 5 | from optparse import OptionParser 6 | 7 | usage = "python muse-listener.py --port 9999 --oscip 127.0.0.1 --oscport 7878" 8 | parser = OptionParser(usage=usage) 9 | parser.add_option("-l", "--port", 10 | dest="port", type='int', default=9999, 11 | help="port to listen to") 12 | parser.add_option("-o", "--oscip", 13 | dest="oscip", type='string', default="127.0.0.1", 14 | help="IP address to send OSC message to") 15 | parser.add_option("-p", "--oscport", 16 | dest="oscport", type='int', default=7878, 17 | help="host port") 18 | (options, args) = parser.parse_args() 19 | 20 | sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) # UDP 21 | sock.bind(('', options.port)) 22 | outputAddress = lo.Address(options.oscip, options.oscport) 23 | eegdata = np.zeros((5,12)) 24 | gyrodata = np.zeros((3,3)) 25 | accdata = np.zeros((3,3)) 26 | timestamps = np.zeros(12) 27 | while True: 28 | data, addr = 
sock.recvfrom(1024) # buffer size is 1024 bytes 29 | """ 30 | samples are received in this order : 44, 41, 38, 32, 35 31 | wait until we get 35 and call the data 32 | """ 33 | handle = int(data[0:2]) 34 | #print(handle) 35 | aa = bitstring.Bits(bytes=data[2:]) 36 | 37 | if (handle==20): 38 | pattern = "uint:16,int:16,int:16,int:16,int:16,int:16,int:16,int:16,int:16,int:16" 39 | res = aa.unpack(pattern) 40 | timestamp = res[0] 41 | data = res[1:] 42 | # 16 bits on a 256 dps range 43 | data = np.array(data)*0.0074768 44 | gyrodata[0] = data[0:3] 45 | gyrodata[1] = data[3:6] 46 | gyrodata[2] = data[6:9] 47 | for ind in range(3): 48 | gyroMessage = lo.Message('/muse/gyro', gyrodata[ind][0], gyrodata[ind][1],gyrodata[ind][2]) 49 | lo.send(outputAddress, gyroMessage) 50 | 51 | elif (handle==23): 52 | pattern = "uint:16,int:16,int:16,int:16,int:16,int:16,int:16,int:16,int:16,int:16" 53 | res = aa.unpack(pattern) 54 | timestamp = res[0] 55 | data = res[1:] 56 | # 16 bits on a 256 dps range 57 | data = np.array(data)*0.0074768 58 | accdata[0] = data[0:3] 59 | accdata[1] = data[3:6] 60 | accdata[2] = data[6:9] 61 | for ind in range(3): 62 | accMessage = lo.Message('/muse/acc', accdata[ind][0], accdata[ind][1],accdata[ind][2]) 63 | lo.send(outputAddress, accMessage) 64 | 65 | else: 66 | pattern = "uint:16,uint:12,uint:12,uint:12,uint:12,uint:12,uint:12, \ 67 | uint:12,uint:12,uint:12,uint:12,uint:12,uint:12" 68 | res = aa.unpack(pattern) 69 | timestamp = res[0] 70 | data = res[1:] 71 | data = 0.40293 * np.array(data) 72 | 73 | #print(int(handle)," ", timestamp," ", data) 74 | #print(handle) 75 | 76 | index = int((handle - 32) / 3) 77 | eegdata[index] = data 78 | timestamps[index] = timestamp 79 | if handle == 35: 80 | for ind in range(12): 81 | eegMessage = lo.Message('/muse/eeg', eegdata[0][ind], eegdata[1][ind], 82 | eegdata[2][ind], eegdata[3][ind], 83 | eegdata[4][ind]) 84 | lo.send(outputAddress, eegMessage) 85 | 86 | -------------------------------------------------------------------------------- /muse-sock/muse-reconnect: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | #use like this: ./muse-reconnect 00:55:DA:B0:32:B1 192.168.1.118 9999 3 | while [ 1 ] 4 | do 5 | python /home/pi/SOFTWARE/Interactivity/muse-sock/muse-sock.py --address $1 --host $2 --port $3 6 | sleep 2 7 | done 8 | 9 | 10 | -------------------------------------------------------------------------------- /muse-sock/muse-reconnect-status: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | #use like this: ./muse-reconnect-status 00:55:DA:B0:32:B1 192.168.1.118 9999 3 | while [ 1 ] 4 | do 5 | python muse-sock-osc-status.py --address $1 --host $2 --port $3 6 | python send_osc.py 1234 /muse-reconnect 7 | sleep 2 8 | done 9 | 10 | 11 | -------------------------------------------------------------------------------- /muse-sock/muse-sock.py: -------------------------------------------------------------------------------- 1 | """Connect to a Muse and send its data through the LSL protocol 2 | """ 3 | 4 | from muse import Muse 5 | from time import sleep 6 | from optparse import OptionParser 7 | import time 8 | 9 | now = time.time() 10 | 11 | usage = "python muse-socket.py --address 00:55:DA:B0:32:B1 --host 192.168.1.118 --port 9999" 12 | parser = OptionParser(usage=usage) 13 | parser.add_option("-a", "--address", 14 | dest="address", type='string', default="00:55:DA:B0:06:D6", 15 | help="Device mac address.") 16 | 
parser.add_option("-i", "--host", 17 | dest="host", type='string', default="127.0.0.1", 18 | help="host IP") 19 | parser.add_option("-p", "--port", 20 | dest="port", type='int', default=9999, 21 | help="host port") 22 | parser.add_option("-b", "--backend", 23 | dest="backend", type='string', default="auto", 24 | help="Pygatt backend to use. Can be auto, gatt or bgapi") 25 | parser.add_option("-d", "--device-type", 26 | dest="device_type", type='string', default="muse", 27 | help="Device type.") 28 | 29 | (options, args) = parser.parse_args() 30 | 31 | countACC = 0; 32 | def process(): 33 | global now 34 | global countACC 35 | now = time.time() 36 | countACC+=1.0 37 | 38 | muse = Muse(address=options.address, device_type=options.device_type, 39 | host=options.host, port=options.port, 40 | callback=process, backend=options.backend,interface=None) 41 | 42 | muse.connect() 43 | print('Connected') 44 | muse.start() 45 | print('Streaming') 46 | idx =0 47 | losshist =[0 for i in range(10)] 48 | while 1: 49 | try: 50 | sleep(1) 51 | dataloss =max(0.0,100.0-countACC*3/50*100.0) 52 | losshist[idx] = dataloss 53 | idx=(idx+1)%10 54 | avgloss =sum(losshist)/float(len(losshist)) 55 | print('waited: %2f' % (time.time()-now), 'dataloss: %.1f' % dataloss,'avgloss: %f' % avgloss ) 56 | countACC = 0; 57 | if ((time.time()-now)>500): 58 | break 59 | if ((avgloss>40)): 60 | break 61 | except: 62 | break 63 | 64 | #muse.stop() 65 | #muse.disconnect() 66 | -------------------------------------------------------------------------------- /muse-sock/muse/__init__.py: -------------------------------------------------------------------------------- 1 | from .muse import Muse 2 | -------------------------------------------------------------------------------- /muse-sock/muse/muse.py: -------------------------------------------------------------------------------- 1 | import bitstring 2 | import pygatt 3 | import numpy as np 4 | from time import time 5 | from sys import platform 6 | import socket 7 | import sys 8 | 9 | class Muse(object): 10 | """Muse 2016 headband or SmithX prototype 11 | """ 12 | def __init__(self, address, callback, host, port, device_type=None, 13 | backend='auto', interface=None): 14 | self.address = address 15 | self.callback = callback 16 | self.device_type = device_type 17 | 18 | self.interface = interface 19 | 20 | self.HOST = host #'192.168.1.118' 21 | self.PORT = port #9999 22 | self.s=socket.socket(socket.AF_INET, socket.SOCK_DGRAM) 23 | 24 | if backend == 'auto': 25 | if platform == "linux" or platform == "linux2": 26 | self.backend = 'gatt' 27 | else: 28 | self.backend = 'bgapi' 29 | elif backend in ['gatt', 'bgapi']: 30 | self.backend = backend 31 | else: 32 | raise(ValueError('Backend must be auto, gatt or bgapi')) 33 | 34 | def connect(self, interface=None): 35 | """Connect to the device 36 | 37 | Args: 38 | interface: if specified, call the backend with this interface 39 | """ 40 | if self.backend == 'gatt': 41 | self.interface = self.interface or 'hci0' 42 | self.adapter = pygatt.GATTToolBackend(self.interface) 43 | else: 44 | self.adapter = pygatt.BGAPIBackend(serial_port=self.interface) 45 | 46 | self.adapter.start() 47 | self.device = self.adapter.connect(self.address, timeout=5) 48 | 49 | self._subscribe_eeg() 50 | self._subscribe_acc() 51 | self._subscribe_gyro() 52 | 53 | 54 | def start(self): 55 | """Start streaming.""" 56 | 57 | if self.device_type.lower() is ('muse'): 58 | # Change preset to 31 59 | self.device.char_write_handle(0x00e, 60 | [0x04, 0x50, 0x33, 0x31, 0x0a], 
61 | False) 62 | self.device.char_write_handle(0x000e, [0x02, 0x64, 0x0a], False) 63 | 64 | def stop(self): 65 | """Stop streaming.""" 66 | self.device.char_write_handle(0x000e, [0x02, 0x68, 0x0a], False) 67 | 68 | def disconnect(self): 69 | """Disconnect.""" 70 | self.device.disconnect() 71 | self.adapter.stop() 72 | 73 | def _subscribe_eeg(self): 74 | """Subscribe to EEG stream.""" 75 | self.device.subscribe('273e0003-4c4d-454d-96be-f03bac821358', 76 | callback=self._handle_eeg) 77 | self.device.subscribe('273e0004-4c4d-454d-96be-f03bac821358', 78 | callback=self._handle_eeg) 79 | self.device.subscribe('273e0005-4c4d-454d-96be-f03bac821358', 80 | callback=self._handle_eeg) 81 | self.device.subscribe('273e0006-4c4d-454d-96be-f03bac821358', 82 | callback=self._handle_eeg) 83 | self.device.subscribe('273e0007-4c4d-454d-96be-f03bac821358', 84 | callback=self._handle_eeg) 85 | 86 | def _subscribe_acc(self): 87 | """Subscribe to the accelerometer stream.""" 88 | self.device.subscribe('273e000a-4c4d-454d-96be-f03bac821358', 89 | callback=self._handle_acc) 90 | 91 | def _subscribe_gyro(self): 92 | """Subscribe to the gyroscope stream.""" 93 | self.device.subscribe('273e0009-4c4d-454d-96be-f03bac821358', 94 | callback=self._handle_gyro) 95 | 96 | def _handle_eeg(self, handle, data): 97 | self.s.sendto("%s%s"%(handle,data), (self.HOST,self.PORT)) 98 | 99 | def _handle_acc(self, handle, data): 100 | # print("acc ", handle) 101 | self.s.sendto("%s%s"%(handle,data), (self.HOST,self.PORT)) 102 | self.callback() 103 | 104 | def _handle_gyro(self, handle, data): 105 | # print("gyro ", handle) 106 | self.s.sendto("%s%s"%(handle,data), (self.HOST,self.PORT)) 107 | -------------------------------------------------------------------------------- /muse-sock/museStatus.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # This script receives OSC messages on port 1234 to render status about the muse 3 | # on a Pimironi Blinkt LED strip. 
7 | 
8 | import colorsys
9 | import time
10 | 
11 | import blinkt
12 | import liblo, sys
13 | import os
14 | # create server, listening on port 1234
15 | mode = 0
16 | try:
17 |     server = liblo.Server(1234)
18 | except liblo.ServerError, err:
19 |     print str(err)
20 |     sys.exit()
21 | 
22 | def musesock_callback(path, args):
23 |     i, f = args
24 |     #print "received message '%s' with arguments '%d' and '%f'" % (path, i, f)
25 |     if (i==0):
26 |         blinkt.set_pixel(0, 0, 0, 255)
27 |     elif (i==1):
28 |         blinkt.set_pixel(0, 255, 0, 255)
29 |     elif (i==2):
30 |         blinkt.set_pixel(0, int(255*(1.0-f)), int(255*f), 0)
31 |     blinkt.show()
32 | 
33 | def reconnect_callback(path, args):
34 |     #print "received message '%s' " % (path)
35 |     blinkt.set_pixel(0, 1.0, 1.0, 1.0)
36 |     blinkt.show()
37 | 
38 | def hsi_callback(path, args):
39 |     global mode
40 |     h0, h1, h2, h3 = args
41 |     #print "hsiVals %f %f %f %f" % (h0, h1, h2, h3)
42 |     if (h0<=254):
43 |         blinkt.set_pixel(0, 0, int(255-h0), 0)
44 |     else:
45 |         blinkt.set_pixel(0, 0, 1, 0)
46 |     if (h0<=1000):
47 |         blinkt.set_pixel(1, 0, int(255-h0/4), 0)
48 |     else:
49 |         blinkt.set_pixel(1, 0, 1, 0)
50 | 
51 |     if (h1<=254):
52 |         blinkt.set_pixel(2, int(255-h1), int((255-h1)/2), 0)
53 |     else:
54 |         blinkt.set_pixel(2, 1, 1, 0)
55 |     if (h1<=1000):
56 |         blinkt.set_pixel(3, int(255-h1/4), int((255-h1/4)/2), 0)
57 |     else:
58 |         blinkt.set_pixel(3, 1, 1, 0)
59 | 
60 |     if (h2<=254):
61 |         blinkt.set_pixel(4, int(255-h2), 0, 0)
62 |     else:
63 |         blinkt.set_pixel(4, 1, 0, 0)
64 |     if (h2<=1000):
65 |         blinkt.set_pixel(5, int(255-h2/4), 0, 0)
66 |     else:
67 |         blinkt.set_pixel(5, 1, 0, 0)
68 | 
69 |     if (h3<=254):
70 |         blinkt.set_pixel(6, 0, 0, int(255-h3))
71 |     else:
72 |         blinkt.set_pixel(6, 0, 0, 1)
73 |     if (h3<=1000):
74 |         blinkt.set_pixel(7, 0, 0, int(255-h3/4))
75 |     else:
76 |         blinkt.set_pixel(7, 0, 0, 1)
77 |     blinkt.show()
78 | 
79 | 
80 | 
81 | def mode_callback(path, args):
82 |     global mode
83 |     i = args[0]
84 |     print "switching display mode to %d" % (i)
85 |     mode = i
86 | 
87 | def fallback(path, args, types, src):
88 |     print "got unknown message '%s' from '%s'" % (path, src.url)
89 |     for a, t in zip(args, types):
90 |         print "argument of type '%s': %s" % (t, a)
91 | 
92 | # register method taking an int and a float
93 | server.add_method("/muse-sock", "if", musesock_callback)
94 | server.add_method("/muse-reconnect", None, reconnect_callback)
95 | server.add_method("/hsi", "ffff", hsi_callback)
96 | #server.add_method("/breath", "f", breath_callback)
97 | #server.add_method("/heart", "f", heart_callback)
98 | #server.add_method("/calm", "f", calm_callback)
99 | #server.add_method("/mode", "i", mode_callback)
100 | #server.add_method(None, None, fallback)
101 | 
102 | #light patterns
103 | #LED 1
104 | #muse-sock => connecting=purple, connected=red->green depending on quality,
105 | #muse-reconnect => sleeping=blue
106 | #LED 2
107 | #Blink every second if the data-processing computer can be pinged.
108 | 
109 | #8 LEDs, 4 hsi. 1 breath, 1 heart, 1 calm, 1 blink, 1 status,
110 | 
111 | # register a fallback for unhandled messages
112 | 
113 | 
114 | blinkt.set_clear_on_exit()
115 | blinkt.set_brightness(0.1)
116 | 
117 | # loop and dispatch messages every 100ms
118 | 
119 | while True:
120 |     server.recv(100)
121 |     #hostname = "google.com"
122 |     #response = os.system("ping -c 1 " + hostname)
123 |     #if response == 0:
124 |     #    pingstatus = "Network Active"
125 |     #else:
126 |     #    pingstatus = "Network Error"
127 | 
128 | 
129 | 
130 | #spacing = 360.0 / 16.0
131 | #hue = 0
132 | 
133 | 
134 | #while True:
135 | #    hue = int(time.time() * 100) % 360
136 | #    for x in range(blinkt.NUM_PIXELS):
137 | #        offset = x * spacing
138 | #        h = ((hue + offset) % 360) / 360.0
139 | #        r, g, b = [int(c*255) for c in colorsys.hsv_to_rgb(h, 1.0, 1.0)]
140 | #        blinkt.set_pixel(x, r, g, b)
141 | #    blinkt.show(); time.sleep(0.001)
142 | 
--------------------------------------------------------------------------------
/muse-sock/send_osc.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # -*- coding: utf-8 -*-
3 | #
4 | # pyliblo - Python bindings for the liblo OSC library
5 | #
6 | # Copyright (C) 2007-2011 Dominic Sacré
7 | #
8 | # This program is free software; you can redistribute it and/or modify
9 | # it under the terms of the GNU General Public License as published by
10 | # the Free Software Foundation; either version 2 of the License, or
11 | # (at your option) any later version.
12 | #
13 | 
14 | import sys
15 | import liblo
16 | 
17 | 
18 | def make_message_auto(path, *args):
19 |     msg = liblo.Message(path)
20 | 
21 |     for a in args:
22 |         try: v = int(a)
23 |         except ValueError:
24 |             try: v = float(a)
25 |             except ValueError:
26 |                 v = a
27 |         msg.add(v)
28 | 
29 |     return msg
30 | 
31 | 
32 | def make_message_manual(path, types, *args):
33 |     if len(types) != len(args):
34 |         sys.exit("length of type string doesn't match number of arguments")
35 | 
36 |     msg = liblo.Message(path)
37 |     try:
38 |         for a, t in zip(args, types):
39 |             msg.add((t, a))
40 |     except Exception as e:
41 |         sys.exit(str(e))
42 | 
43 |     return msg
44 | 
45 | 
46 | if __name__ == '__main__':
47 |     # display help
48 |     if len(sys.argv) == 1 or sys.argv[1] in ("-h", "--help"):
49 |         sys.exit("Usage: " + sys.argv[0] + " port path [,types] [args...]")
50 | 
51 |     # require at least two arguments (target port/url and message path)
52 |     if len(sys.argv) < 2:
53 |         sys.exit("please specify a port or URL")
54 |     if len(sys.argv) < 3:
55 |         sys.exit("please specify a message path")
56 | 
57 |     if len(sys.argv) > 3 and sys.argv[3].startswith(','):
58 |         msg = make_message_manual(sys.argv[2], sys.argv[3][1:], *sys.argv[4:])
59 |     else:
60 |         msg = make_message_auto(*sys.argv[2:])
61 | 
62 |     try:
63 |         liblo.send(sys.argv[1], msg)
64 |     except IOError as e:
65 |         sys.exit(str(e))
66 |     else:
67 |         sys.exit(0)
68 | 
--------------------------------------------------------------------------------
/sensors/README.md:
--------------------------------------------------------------------------------
1 | # Streaming data from the Raspberry Pi
2 | 
3 | This folder contains scripts for gathering data from the Raspberry Pi and then streaming it to LX over OSC (Open Sound Control, http://opensoundcontrol.org/).
4 | 
5 | 
6 | For a description of the default settings and a list of command line options, please try:
7 | ```
8 | python grove-osc.py --help
9 | ```
10 | 
11 | Typical usage is:
12 | ```
13 | python grove-osc.py --oscip <IP address of the machine running LX>
14 | ```
15 | 
16 | Presently this script collects data from whatever is connected to the three analog connectors and the one digital connector on the GrovePi Zero board.
17 | 
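To sanity-check the stream without LX running, you can dump the incoming messages with a small pyliblo server on the receiving machine. This is only a minimal sketch: it assumes pyliblo is installed and that `grove-osc.py` is sending to the default port 7878.

```
import liblo, sys

try:
    server = liblo.Server(7878)   # the port grove-osc.py sends to by default
except liblo.ServerError as err:
    sys.exit(str(err))

def dump(path, args, types, src):
    # prints /grove/analog, /grove/digital, /grove/pulsesensor and /grove/accel messages
    print("%s %s" % (path, args))

server.add_method(None, None, dump)

while True:
    server.recv(100)
```
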
18 | For use with the Tenere reference system:
19 | * The Pulse Sensor is plugged into A0
20 | * The Grove Button is connected to D0
21 | * The I2C Hub is connected to the I2C port
22 | * The Grove Accelerometer and OLED display are both connected to the I2C Hub.
23 | 
24 | 
25 | If you would also like to use the OLED display, for example for a menu system or to show data while debugging, we have provided `grove-oled.py`. For more information, please try:
26 | ```
27 | python grove-oled.py --help
28 | ```
29 | 
30 | 
--------------------------------------------------------------------------------
/sensors/grove-oled.py:
--------------------------------------------------------------------------------
1 | import time
2 | import numpy as np
3 | import grovepi
4 | import grove_oled
5 | from adxl345 import ADXL345
6 | from optparse import OptionParser
7 | 
8 | 
9 | usage = "python grove-oled.py --with_accel 1 --debug 1"
10 | parser = OptionParser(usage=usage)
11 | parser.add_option("-a", "--with_accel",
12 |                   dest="withAccel", type='int', default=1,
13 |                   help="Is a Grove Accelerometer ADXL345 connected via I2C?")
14 | parser.add_option("-d", "--debug",
15 |                   dest="printDebug", type='int', default=0,
16 |                   help="Print the sensor values to stdout?")
17 | 
18 | 
19 | (options, args) = parser.parse_args()
20 | 
21 | global axes
22 | 
23 | if options.withAccel:
24 |     adxl345 = ADXL345()
25 |     axes = adxl345.getAxes(True)
26 | 
27 | 
28 | 
29 | print("starting display")
30 | grove_oled.oled_init()
31 | grove_oled.oled_clearDisplay()
32 | grove_oled.oled_setNormalDisplay()
33 | grove_oled.oled_setVerticalMode()
34 | 
35 | while True:
36 |     if options.withAccel:
37 | 
38 |         time.sleep(0.2)  # only update as often as necessary
39 | 
40 |         axes = adxl345.getAxes(True)
41 | 
42 |         if options.printDebug:
43 |             print("accel: x = %.3fG, y = %.3fG, z = %.3fG" % (axes['x'], axes['y'], axes['z']))
44 | 
45 |         grove_oled.oled_clearDisplay()
46 |         grove_oled.oled_setTextXY(0,0)
47 |         grove_oled.oled_putString("x = %.3fG" % (axes['x']))
48 |         grove_oled.oled_setTextXY(1,0)
49 |         grove_oled.oled_putString("y = %.3fG" % (axes['y']))
50 |         grove_oled.oled_setTextXY(2,0)
51 |         grove_oled.oled_putString("z = %.3fG" % (axes['z']))
52 |         grove_oled.oled_setTextXY(3,0)
53 | 
54 | 
55 | 
56 | 
57 | 
--------------------------------------------------------------------------------
/sensors/grove-osc.py:
--------------------------------------------------------------------------------
1 | import time
2 | import numpy as np
3 | import liblo as lo
4 | import grovepi
5 | import grove_oled
6 | from adxl345 import ADXL345
7 | from optparse import OptionParser
8 | 
9 | 
10 | 
11 | usage = "python grove-osc.py --oscip 127.0.0.1 --oscport 7878 --with_accel 1 --with_pulse 1 --debug 0"
12 | parser = OptionParser(usage=usage)
13 | parser.add_option("-o", "--oscip",
14 |                   dest="oscip", type='string', default="127.0.0.1",
15 |                   help="IP address to send OSC message to")
16 | parser.add_option("-p", "--oscport",
17 |                   dest="oscport", type='int', default=7878,
18 |                   help="host port")
19 | parser.add_option("-a", "--with_accel",
20 |                   dest="withAccel", type='int', default=1,
21 |                   help="Is a Grove Accelerometer ADXL345 connected via I2C?")
22 | parser.add_option("-P", "--with_pulse",
23 |                   dest="withPulse", type='int', default=1,
24 |                   help="Is a pulse sensor connected to A0?")
25 | parser.add_option("-d", "--debug",
26 |                   dest="printDebug", type='int', default=0,
27 |                   help="Print the sensor values to stdout?")
28 | 
29 | 
30 | (options, args) = parser.parse_args()
31 | 
32 | outputAddress = lo.Address(options.oscip, options.oscport)
33 | global axes
34 | 
35 | if options.withAccel:
36 |     adxl345 = ADXL345()
37 |     axes = adxl345.getAxes(True)
38 | 
39 | 
40 | 
41 | analogdata = np.zeros(3)
42 | digitaldata = np.zeros(1)
43 | timestamps = np.zeros(3)
44 | 
45 | 
46 | 
47 | 
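# Each pass through the loop below sends one OSC message per enabled source:
#   /grove/analog       the A0, A1, A2 readings (three values)
#   /grove/digital      the D3 reading (one value)
#   /grove/pulsesensor  the raw A0 value again, when --with_pulse is set
#   /grove/accel        x, y, z in G, when --with_accel is set
# These are the addresses a receiver (e.g. LX) should listen for.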
48 | while True:
49 | 
50 |     # this is specific to GrovePi Zero. Additional pins may be used.
51 |     # See https://www.dexterindustries.com/GrovePi/engineering/port-description/
52 |     # for unlocking more I/O
53 | 
54 |     analogdata[0] = grovepi.analogRead(0)
55 |     analogdata[1] = grovepi.analogRead(1)
56 |     analogdata[2] = grovepi.analogRead(2)
57 |     timestamps[0] = time.time()  # let's take a timestamp, in case we want to use it someday
58 |     analogMessage = lo.Message('/grove/analog', analogdata[0], analogdata[1], analogdata[2])
59 |     lo.send(outputAddress, analogMessage)
60 | 
61 | 
62 |     digitaldata[0] = grovepi.digitalRead(3)
63 |     timestamps[1] = time.time()
64 |     digitalMessage = lo.Message('/grove/digital', digitaldata[0])
65 |     lo.send(outputAddress, digitalMessage)
66 | 
67 |     if options.printDebug:
68 |         print("analog: A0 %.3f, A1 %.3f, A2 %.3f" % (analogdata[0], analogdata[1], analogdata[2]))
69 |         print("digital: D3 %d" % (digitaldata[0]))
70 | 
71 |     if options.withPulse:
72 |         pulseMessage = lo.Message('/grove/pulsesensor', analogdata[0])
73 |         lo.send(outputAddress, pulseMessage)
74 | 
75 |     if options.withAccel:
76 |         timestamps[2] = time.time()
77 |         axes = adxl345.getAxes(True)
78 |         accelMessage = lo.Message('/grove/accel', axes['x'], axes['y'], axes['z'])
79 |         lo.send(outputAddress, accelMessage)
80 | 
81 |         if options.printDebug:
82 |             print("accel: x = %.3fG, y = %.3fG, z = %.3fG" % (axes['x'], axes['y'], axes['z']))
83 | 
84 | 
85 | 
86 | 
87 | 
88 | 
89 | 
--------------------------------------------------------------------------------
/tmux_start.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | tmux new-session -d -s "musesock" "/home/pi/SOFTWARE/Interactivity/muse-sock/muse-reconnect 00:55:DA:B0:32:B1 10.0.0.2 9910"
3 | 
--------------------------------------------------------------------------------
/voicecontrol/README.md:
--------------------------------------------------------------------------------
1 | # Voice Control with Jasper
2 | 
3 | 
4 | * For setting up Voice Control, here are some of the prerequisites:
5 | 
6 | ```
7 | sudo apt-get -y install libasound2 libasound2-dev libportaudio-dev python-pyaudio libyaml-dev
8 | sudo apt-get -y install pocketsphinx python-pocketsphinx libfst-tools
9 | sudo apt-get -y install cvs subversion autoconf libtool automake gfortran g++ autoconf-archive
10 | sudo apt-get -y install alsa-tools alsa-oss flex libc-bin libc-dev-bin python-pexpect
11 | sudo apt-get -y install zlib1g-dev flex libesd0-dev libsndfile1-dev
12 | sudo apt-get -y install espeak libmad0-dev libfftw3-dev
13 | sudo apt-get remove pulseaudio
14 | sudo apt-get autoremove
15 | ```
16 | 
17 | Additional instructions: WIP
18 | 
--------------------------------------------------------------------------------