├── .gitignore
├── INSTRUCTIONS.md
├── LICENSE.txt
├── NOTES.md
├── README.md
├── fig
│   ├── decision_plotter.png
│   ├── download_zip.JPG
│   ├── ex1_figures.png
│   ├── ex1_figures_new.png
│   ├── ex2_figure.png
│   ├── lsl_viewer.png
│   ├── mules.png
│   ├── muse-io.png
│   ├── muse.png
│   └── play_button.JPG
├── python
│   ├── bci_workshop_tools.py
│   ├── compute_feature_vector_advanced.py
│   ├── exercise_01.py
│   ├── exercise_01_multichannel.py
│   ├── exercise_02.py
│   ├── extra_stuff
│   │   ├── compute_feature_vector_advanced.py
│   │   ├── livebargraph.py
│   │   └── mules.py
│   ├── livebargraph.py
│   ├── lsl-viewer.py
│   ├── muse-lsl.py
│   └── muse
│       ├── __init__.py
│       └── muse.py
└── workshop_slides.pdf

/.gitignore:
--------------------------------------------------------------------------------
# Mac
.DS_Store

# Matlab
*.asv

# Python
*.pyc

# Markdown
*.md~

# Others
.Rhistory
--------------------------------------------------------------------------------
/INSTRUCTIONS.md:
--------------------------------------------------------------------------------
# BCI workshop - NeuroTechTO

This document will lead users through NeuroTechX's introductory BCI Workshop.

This workshop is intended for people with no or limited experience with Brain-Computer Interfaces (BCIs). The workshop will teach them the basic principles that are necessary to "hack" and develop new applications with BCIs: background on brain activity and brain activity measurement with EEG, structure of a BCI, feature extraction and machine learning. Two hands-on exercises will allow the participants to 1) visualize their EEG signals and some relevant features, and 2) experiment with a very simple BCI design. This should give the participants sufficient knowledge to understand the advantages and limitations of current BCIs, and to devise their own applications.

## Programming languages for the workshop exercises

This version of the workshop currently only supports **Python** 3. (The [original version](https://github.com/NeuroTechX/bci-workshop) also supports MATLAB and GNU Octave, but only works on Windows.)
Python is a very popular, powerful, multi-purpose scripting language that is free, open and simple to read.

## Supported operating systems

The workshop has been tested and works on Ubuntu 17.10, Windows 10, and macOS.

## Required hardware for the workshop

The [Muse 2016](http://www.choosemuse.com/research/) model is the best option for this version of the workshop. However, if you are working with an older device, you can use the `muse-io` command-line tool to get the data (see section A.4 below).

![muse_diagram](fig/muse.png?raw=true "The Muse EEG headband.")

If you are working on macOS or Windows, you will also need a BLED112 Bluetooth dongle.

## A. Installation of software for the workshop

There are many other programming languages (C, C++, Java, Processing, etc.), a diversity of **BCI toolboxes** ([OpenVIBE](http://openvibe.inria.fr/), [BCI2000](http://www.bci2000.org/wiki/index.php/Main_Page), [BCILAB](http://sccn.ucsd.edu/wiki/BCILAB), etc.), and of course **other EEG devices** (OpenBCI, Emotiv EPOC, Neurosky Mindwave, etc.) that could be used for a workshop like this one.

Among those, we chose the **Python-`muse-lsl`-Muse** combination as it provides a lot of flexibility to hackers, but at the same time is simple enough that novice users can understand what they are doing.
Keep in mind though that the goal of this workshop is to teach you about BCIs in general, so that you are able to apply this knowledge to the environment and tools of your choice. We won't focus much on tools here.

These are the steps to set up your computer:

**A.1.** Installing Python and the required packages

**A.2.** Downloading the code for the workshop

**A.3.** Connecting the Muse to `muse-lsl`

**A.4.** (If needed) Connecting older Muse devices to LSL via `muse-io`

### A.1 Installing Python and required packages

Python is a high-level scripting language that has been widely adopted in many fields. It is open, free, simple to read, and has an extensive standard library. Many packages can also be downloaded online to complement its features.

Two packages are especially useful when dealing with scientific computing (as for BCIs): [`numpy`](http://www.numpy.org/) and [`matplotlib`](http://matplotlib.org/). `numpy` allows easy manipulation of arrays and matrices (very similar to [MATLAB](http://mathesaurus.sourceforge.net/matlab-numpy.html)), which is necessary when dealing with data such as neurophysiological signals. `matplotlib` is similar to MATLAB's plotting functionality, and can be used to visualize the signals and features we will be working with.

Other packages we will use in this workshop are:

* [`pylsl`](https://pypi.python.org/pypi/pylsl): the Python interface to the [Lab Streaming Layer](https://github.com/sccn/labstreaminglayer) (LSL), a protocol for real-time streaming of time series data over a network.
* [`muse-lsl`](https://github.com/alexandrebarachant/muse-lsl): a pure-Python library to connect to a Muse headband and stream data using `pylsl`.
* [`scikit-learn`](http://scikit-learn.org/stable/): a machine learning library.

To install Python 3 and some of these required packages, we suggest you download and install the [Anaconda distribution](http://continuum.io/downloads). Anaconda is a Python distribution that includes Python 3.6 (in the case of Anaconda 3), most of the packages we will need for the workshop (as well as plenty of other useful packages), and [Spyder](https://pythonhosted.org/spyder/), a great IDE for scientific computing in Python.

#### Installation of Python with Anaconda (recommended)

1. Download the [Anaconda graphical installer](http://continuum.io/downloads) (if your OS version is 32-bit, make sure to download the 32-bit installer).
2. Follow the instructions to install.

This installs Python, Spyder and some of the packages we will need for the workshop (`numpy`, `matplotlib` and `scikit-learn`).

#### Individual installation of Python and packages (optional)

Alternatively, you can [download Python 3.6 independently](https://www.python.org/downloads/). Make sure to install `pip`, and then grab `numpy`, `matplotlib`, `scipy`, `seaborn` and ``scikit-learn`` by calling ```pip install <package_name>``` in the command line (or any other way you prefer). Make sure you have a text editor or IDE you can work with as well.

#### Installation of additional Python packages

Run the following command in a terminal __*__ to install the remaining packages:

```
pip install git+https://github.com/peplin/pygatt pylsl bitstring pexpect
```

If you do not have `pip` installed on your machine, you can install it via the Anaconda Prompt:

1. Search on your machine for the Anaconda Prompt and open it.
2. Enter the following command: `conda install pip`

If you don't have `git` installed on your machine (it is needed for the `pip install git+...` command above), you can download it [here](https://git-scm.com/downloads).
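Once everything is installed, you can confirm that the key packages import correctly with a short check script (a hypothetical helper, not part of the workshop repository; the versions printed will vary on your machine):

```
# check_setup.py -- sanity check for the workshop dependencies
import importlib

packages = ['numpy', 'matplotlib', 'sklearn', 'pylsl',
            'pygatt', 'bitstring', 'pexpect']

for name in packages:
    try:
        module = importlib.import_module(name)
        version = getattr(module, '__version__', 'unknown version')
        print('{}: OK ({})'.format(name, version))
    except ImportError as err:
        print('{}: MISSING ({})'.format(name, err))
```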
**Fixing a connection bug in `pygatt`**: The latest version of `pygatt` can throw an error on Windows and macOS when trying to connect to a device (see [here](https://github.com/peplin/pygatt/issues/118)). To avoid it, we need to add a single line of code to the file `bgapi.py`. First, find where that file lives by running the following in a terminal:

```
python
import os
import pygatt
print(os.path.join(os.path.dirname(pygatt.__file__), 'backends', 'bgapi', 'bgapi.py'))
```

Then, add the following to line 191 of `bgapi.py` and save:
```
time.sleep(.25)
```

This should fix it!

__*__ The way to open a terminal depends on your OS. On Windows, press Windows + R, type `cmd`, and then press Enter. On macOS, press Cmd + Space, type `terminal`, and press Enter. On Ubuntu, Ctrl + Alt + T will open a terminal.

### A.2. Downloading the code for the workshop

The code for the workshop consists of Python scripts that you can find [here](https://github.com/jdpigeon/bci-workshop).
You can download everything as a ```.zip``` file using the button ![downloadzip](fig/download_zip.JPG?raw=true "Download zip button") on the right. You then need to unzip the folder on your computer.

Alternatively, if you have ```git``` installed on your computer, you can clone the repository by calling ```git clone https://github.com/jdpigeon/bci-workshop.git``` in a terminal.

### A.3. Connecting the Muse to `muse-lsl`

To figure out the name of your Muse, look for the last 4 digits on the inner left part of the headband. The headband's name is then just `Muse-` followed by those digits, e.g., `Muse-0A14`. Alternatively, if you are on Linux, you can use `hcitool` to find your device's MAC address: ```sudo hcitool lescan```.

With your Muse turned on, you should now be able to connect to it with your computer. Use the `cd` command to navigate to where you downloaded the code for the workshop (e.g., `cd C:\Users\Hubert\Documents\bci-workshop\python`). Then, run the `muse-lsl` Python script in a terminal, passing the name of your headset:

```python -u muse-lsl.py --name <YOUR_DEVICE_NAME>```

You can also directly pass the MAC address if you found it with `hcitool` or in some other way (this option is faster at startup):

```python -u muse-lsl.py --address <YOUR_DEVICE_ADDRESS>```

Depending on your OS and hardware, you might need to repeat this command a few times before the connection is established.

Once the stream is up and running, you can test it by calling the following in another terminal (make sure to `cd` to the right directory first, as above):

```python -u lsl-viewer.py```


![ex1_figures](fig/lsl_viewer.png?raw=true "Visualizing EEG with `lsl-viewer.py`")

If the data is out of bounds, press `d` to toggle the filter.
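You can also verify the stream programmatically. The following sketch uses the same `pylsl` calls as the workshop scripts to resolve the stream, print its metadata and pull a single sample (the 5-second timeouts are arbitrary choices for this example):

```
from pylsl import StreamInlet, resolve_byprop

# Look for a stream of type 'EEG' on the network, as the exercises do
streams = resolve_byprop('type', 'EEG', timeout=5)
if not streams:
    raise RuntimeError("Can't find an EEG stream. Is muse-lsl.py running?")

inlet = StreamInlet(streams[0])
info = inlet.info()
print('Stream name:', info.name())
print('Sampling rate:', info.nominal_srate(), 'Hz')
print('Number of channels:', info.channel_count())

# Pull a single sample to confirm that data is actually flowing
sample, timestamp = inlet.pull_sample(timeout=5)
print('First sample:', sample)
```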
### A.4. Connecting to LSL via `muse-io`

If you have an older device that doesn't run on Bluetooth Low Energy, you will need to stream the data with `muse-io` instead. To do this, you will first need the [Muse Research Tools, which can be found here](http://developer.choosemuse.com/research-tools). Once they're installed, go to your command prompt and enter the following (replacing `XXXX` with the last 4 digits of your headband's name):
```
muse-io --device Muse-XXXX --osc osc.udp://localhost:5000 --lsl-eeg EEG
```

## Exercise 1: A simple neurofeedback interface

In this first exercise, we will learn how to visualize the raw EEG signals coming from the Muse inside a Python application. We will also extract and visualize basic features of the raw EEG signals. This will showcase a first basic use of a Brain-Computer Interface: a so-called neurofeedback interface.

> Neurofeedback (also called neurotherapy or neurobiofeedback) uses a real-time representation of brain activity (e.g., as sound or visual effects) to teach users to control their brain activity.

### E1.1 Running the Exercise 1 script

1. Open the script ```exercise_01.py``` in **Spyder** or the text editor of your choice.
2. Read the code - it is thoroughly commented - and modify the experiment parameters as you wish in the **2. SET EXPERIMENTAL PARAMETERS** section.
3. Run the script. In **Spyder**, select an IPython console at the bottom right of the screen, then click on the *Run File* button at the top of the editor.
4. Two figures should appear: one displaying the raw signals of the Muse's 4 EEG sensors, and another showing the basic band power features we are computing on one of the EEG signals. __**__
5. To stop the execution of the script, press Ctrl+C in the IPython console.

![ex1_figures](fig/ex1_figures_new.png?raw=true "Visualization in E1.1")

__**__ If you are using **Spyder** and are not seeing the two figures, you might have to set up the IPython console differently. Using the top dropdown menu, open the `Preferences` dialog. Then, under the `IPython console` section, click on the `Graphics` tab and change the backend to `Automatic`.

### E1.2 Playing around

Here are some things we suggest you do to understand what the script does.

#### Visualizing your raw EEG signals

Run the script and look at the first figure (raw EEG visualization). What makes your signal change?

1. Try blinking, clenching your jaw, moving your head, etc.
2. Imagine repeatedly throwing a ball for a few seconds, then imagine talking to a good friend for another few seconds, while minimizing any movement or blinks.

In the first case, you can see that these movements (blinking, etc.) strongly disturb the EEG signal. We call these disturbances **artifacts**. Some artifacts are so large that they can completely obscure the actual EEG signal coming from your brain. We typically divide artifacts according to their source: *physiological artifacts* (caused by the electrical activity of the heart, muscles, movement of the eyes, etc.) and *motion artifacts* (caused by a relative displacement of the sensor and the skin). Because artifact amplitudes dwarf actual brain activity, even a crude check can spot many of them, as the sketch below illustrates.
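Here is a minimal sketch of such a check: a naive peak-to-peak amplitude threshold applied to one epoch of data. The 100 µV threshold, the simulated signals and the `flag_artifacts` helper are all assumptions made for this illustration; real pipelines use more robust methods:

```
import numpy as np

def flag_artifacts(eeg_epoch, threshold_uv=100.0):
    """Flag channels whose peak-to-peak amplitude exceeds a threshold.

    eeg_epoch: array of shape [n_samples, n_channels], in microvolts.
    Returns a boolean array with one entry per channel.
    """
    peak_to_peak = np.max(eeg_epoch, axis=0) - np.min(eeg_epoch, axis=0)
    return peak_to_peak > threshold_uv

# Simulated example: 1 s of 4-channel "EEG" at 256 Hz, with a large
# blink-like deflection added to the second channel
np.random.seed(42)
epoch = 10 * np.random.randn(256, 4)       # ~10 uV background noise
epoch[100:130, 1] += 300 * np.hanning(30)  # blink-like artifact, ~300 uV
print(flag_artifacts(epoch))               # should flag only channel 1
```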
In the second case, you can see that different mental activities (e.g., imagining throwing a ball or talking to a friend) are not easily recognizable in the raw EEG signals.
First, this is because mental activity is distributed across the brain: for example, sensorimotor processing occurs at the top of the brain, in the central cortex, while speech-related functions occur at the sides of the brain, in the temporal cortex. Therefore, the 4 sensors on the Muse are not necessarily on the right "spots" to capture those EEG signals.
Second, EEG is very, very, very noisy. Indeed, the electrical signals that we pick up on the scalp are "smeared" by the skull, muscles and skin. As you saw, eyeballing the signals is often not enough to analyze brain activity. (That is why we need to extract descriptive characteristics from the signals -- what we call features!)

#### Visualizing your EEG features

Since the raw EEG signals are not easy to read, we will extract **features** that will hopefully be more insightful. Features are simply a different representation, or an individual measurable property, of the EEG signal. Good features give clearer information about the phenomenon being observed.

The features most often used to describe EEG are frequency band powers.

1. Open the script ```bci_workshop_tools.py```.
2. Locate the function ```compute_feature_vector()```.

This function uses the [Fast Fourier Transform](http://en.wikipedia.org/wiki/Fast_Fourier_transform), an algorithm that extracts the frequency information of a time signal. It transforms the EEG time series (i.e., the raw EEG signal that you visualized above) into a list of amplitudes, each corresponding to a specific frequency.

In EEG analysis, we typically look at ranges of frequencies that we call *frequency bands* (the exact boundaries vary across the literature; these are the ones used in ```compute_feature_vector()```):

- Delta (< 4 Hz)
- Theta (4-8 Hz)
- Alpha (8-12 Hz)
- Beta (12-30 Hz)
- Gamma (> 30 Hz)

These are the features that you visualized in Figure 2 of E1.1.

We expect each band to loosely reflect different mental activities. For example, we know that closing one's eyes and relaxing provokes an increase in Alpha band activity and a decrease in Beta band activity, especially at the back of the head. This is thought to be due to the quieting down of the areas of the brain involved in processing visual information. We will now try to reproduce this result.

1. Open the script ```exercise_01.py```.
2. Change the value of the ```buffer_length``` parameter to around 40.
3. Run the script and look at the second figure.
4. Keep your eyes open for 20 seconds (again, try to minimize your movements).
5. Close your eyes and relax for another 20 seconds (minimize your movements).

Do you see a difference in the Alpha and Beta features between the first and the last 20-second windows?

#### Advanced: computing supplementary features

Many other features can be used to describe the EEG activity: finer band powers, Auto-Regressive model coefficients, Wavelet coefficients, coherence, mutual information, etc. As a starting point, adapt the code in the ```compute_feature_vector()``` function to compute additional, finer bands, i.e., low Alpha, medium Alpha, high Alpha, low Beta and high Beta:

- Low Alpha (8-10 Hz)
- Medium Alpha (9-11 Hz)
- High Alpha (10-12 Hz)
- Low Beta (12-21 Hz)
- High Beta (21-30 Hz)

These bands provide more specific information on the EEG activity, and can be more insightful than the standard band powers. Additionally, adapt the code to compute ratios of band powers. For example, the Theta/Beta and Alpha/Beta ratios are often used to study EEG. (See ```compute_feature_vector_advanced.py``` in the workshop folder for one possible implementation.)
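As a concrete starting point, here is a sketch of how the end of ```compute_feature_vector()``` could be adapted to append two ratio features. The dummy band-power values below stand in for the per-channel means computed from the PSD earlier in the function; treat this as one possible adaptation, not the only one:

```
import numpy as np

# Dummy stand-ins for the band powers computed earlier in
# compute_feature_vector(); one value per channel in the real function
meanDelta, meanTheta = np.array([1.2]), np.array([0.8])
meanAlpha, meanBeta = np.array([1.5]), np.array([0.6])

# Ratios are computed on the raw band powers, before the log10
# (note that log10(a/b) = log10(a) - log10(b))
thetaBetaRatio = meanTheta / meanBeta
alphaBetaRatio = meanAlpha / meanBeta

feature_vector = np.concatenate((meanDelta, meanTheta, meanAlpha, meanBeta,
                                 thetaBetaRatio, alphaBetaRatio), axis=0)
feature_vector = np.log10(feature_vector)
print(feature_vector)
```

Remember to extend ```get_feature_names()``` accordingly, otherwise the labels in the feature plot will no longer match the features.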
Repeat the above eyes open/eyes closed procedure with your new features. Can you see a clearer difference between the two mental states?

Other points you can consider to design better features:

* Extract the features for each of the Muse's 4 sensors. Each sensor measures activity from a different part of the brain and with different strengths, so features from different sensors can provide more specific information.
* Similarly, ratios of features between different sensors (e.g., Alpha in the frontal region/Beta in the temporal region) can provide additional useful information.

## Exercise 2: A basic BCI

In this second exercise, we will learn how to use an automatic algorithm called a *classifier* to recognize somebody's mental states from their EEG.

> A *classifier* is an algorithm that, provided some data, learns to recognize patterns. A well-trained classifier can then be used to classify similar, but never-before-seen, data.

For example, let's say we have many [images of either cats or dogs that we want to classify](https://www.kaggle.com/c/dogs-vs-cats). A classifier would first require *training data*: in this case, we could give the classifier 1000 pictures of cats that we identify as cats, and 1000 pictures of dogs that we identify as dogs. The classifier will then learn specific patterns to discriminate a dog from a cat. Once it is trained, the classifier can output *decisions*: if we give it a new, unseen picture, it will try its best to correctly determine whether the picture contains a cat or a dog.

In a Brain-Computer Interface, we might use classifiers to identify what type of mental operation someone is performing. For example, in the previous exercise, you saw that opening and closing your eyes modifies the features of the EEG signal. To use a classifier, we would proceed like this:

1. Collect EEG data of a user performing two mental activities (e.g., eyes open vs. eyes closed, reading vs. relaxing).
2. Input this data to the classifier, while specifying which data corresponds to which mental activity.
3. *Train* the classifier.
4. Use the trained classifier by giving it new EEG data and asking for a decision on which mental activity it represents.

Brain-Computer Interfaces rely heavily on **machine learning**, the field devoted to algorithms that learn from data, such as classifiers. You might already have understood why: in a typical EEG application, we have several features (e.g., many band powers) and we might not know *a priori* what the mental activity we need to classify looks like. Letting the machine find the optimal combination of features by itself simplifies the whole process.

Let's try it now.

### E2.1 Running the Python basic BCI script

1. Open the script ```exercise_02.py```.
2. Read the code - it is thoroughly commented - and modify the experiment parameters as you wish in the **2. SET EXPERIMENTAL PARAMETERS** section. You will see it's very similar to the code of **Exercise 1**, but with a few new sections.
3. When you feel confident about what the code does, run the script.
4. When you hear the **first beep**, keep your eyes open and concentrate (while minimizing your movements).
5. When you hear the **second beep**, close your eyes and relax (while minimizing your movements).
6. When you hear the **third beep**, the classifier is trained and starts outputting decisions in the IPython console: ```0``` when your eyes are open, and ```1``` when you close them. Additionally, a figure will display the decisions over a period of 30 seconds.
7. To stop the execution of the script, press Ctrl+C in the IPython console.

![ex2_figures](fig/decision_plotter.png?raw=true "Visualization of decisions in E2.1")
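Under the hood, ```train_classifier()``` in ```bci_workshop_tools.py``` z-score-normalizes the feature matrices and fits a support vector machine with `scikit-learn`. Here is a condensed, self-contained sketch of what happens between the second and third beeps; the synthetic feature matrices are made up purely for illustration:

```
import numpy as np
from sklearn import svm

# Synthetic stand-ins for the two feature matrices (n_epochs x n_features);
# 96 is roughly the number of 1 s epochs in 20 s of data with 0.8 s overlap
np.random.seed(0)
feat_matrix0 = np.random.normal(0.0, 1.0, size=(96, 16))  # "eyes open"
feat_matrix1 = np.random.normal(1.0, 1.0, size=(96, 16))  # "eyes closed"

# Stack the examples and build the label vector (0s, then 1s)
X = np.concatenate((feat_matrix0, feat_matrix1), axis=0)
y = np.concatenate((np.zeros(len(feat_matrix0)), np.ones(len(feat_matrix1))))

# Z-score normalization, column by column; keep mu and std so the
# exact same transform can be applied to new data at decision time
mu, std = X.mean(axis=0), X.std(axis=0)
X = (X - mu) / std

clf = svm.SVC()
clf.fit(X, y)
print('Training accuracy: {:.1f}%'.format(100 * clf.score(X, y)))

# Classifying a new, unseen feature vector reuses the SAME mu and std
new_epoch = np.random.normal(1.0, 1.0, size=(1, 16))
print('Decision:', clf.predict((new_epoch - mu) / std))
```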
### E2.2 Playing around

Here are some things we suggest you do to understand what the script does.

#### Visualizing the classifier decisions

Try the above procedure to train and use the classifier. If it does not work well, make sure the EEG signals are stable (you can reuse the code of Exercise 1 to visualize the raw signals) and try again. Some people have a stronger Alpha response than others, and it also takes practice to be able to modulate it at will.

#### Using other mental tasks

Try the same procedure again, but train your classifier with different mental activities. For example, perform random mental multiplications during the first 20 seconds, then try to come up with as many words as possible starting with the letter *T* during the next 20 seconds. Once the classifier is trained, repeat the two mental tasks. Is the classifier able to recognize which task you are performing?

Additionally, try the first training procedure again (concentration vs. relaxation). This time, however, don't close your eyes during the second task; instead, relax with your eyes open. Can the classifier still recognize your mental state?

#### Machine learning tricks and other considerations

If you have some experience with machine learning or are interested in the topic, consider adding the following to the script:

* Get an estimate of the classifier's performance by dividing the data into training and testing sets. Perhaps cross-validation would be informative with such a small dataset?
* How much data is necessary to attain stable performance?
* Can a classifier trained on one person's data be used on someone else?
* Visualize the importance of each feature. Are the most important features the ones you expect?
* Run model selection and a hyperparameter search. Can you find an algorithm that is better suited to our task? (This is relatively easy to do with the [TPOT](https://github.com/rhiever/tpot) library.)

Also, keeping in mind our earlier discussion of **artifacts**, what is the impact of artifacts during training and/or live testing?

#### Sending decisions to an external application

Once your BCI framework is functional, you can start thinking about sending your EEG features or classifier decisions to an external application. Many different libraries can be used for that, including standard TCP/IP communication implementations (e.g., Python's [socket](https://docs.python.org/3/library/socket.html) module). Another option is [`pyZMQ`](https://zeromq.github.io/pyzmq/), which allows simple communication between a Python application and any programming language that supports the [ZMQ](http://zeromq.org/) protocol. A minimal `socket`-based sketch is given at the end of this section.

One idea you could work on would be sending the classifier's decisions to a [`Processing`](https://processing.org/) script to create simple animations based on your mental activity. Or how about sending the information to a Unity environment to create a brain-sensing video game?
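For example, here is a minimal sketch that sends each decision as a UDP packet using Python's standard `socket` module. The address, port and message format are arbitrary choices for this example; the receiving application simply has to listen on the same port:

```
import socket

UDP_IP = '127.0.0.1'  # address of the receiving application (localhost here)
UDP_PORT = 5005       # arbitrary port; the receiver must use the same one

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_decision(y_hat):
    """Send a single classifier decision (0 or 1) as a UDP packet."""
    sock.sendto(str(int(y_hat)).encode('utf-8'), (UDP_IP, UDP_PORT))

# In the while loop of exercise_02.py, after y_hat has been computed:
# send_decision(y_hat[0])
```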
## Conclusion

In this workshop, we saw **1)** how to run a simple neurofeedback interface, and **2)** how to build and use a basic Brain-Computer Interface. To do so, we covered the basic principles behind the use of electroencephalography signals in modern BCI applications: properties of the raw EEG time series, extraction of band power features, physiological and motion artifacts, and machine learning-based classification of mental activities.

We used the following tools in this workshop: the **Python** scripting language and the **Muse EEG headset**. All the necessary scripts for this workshop are available [online](https://github.com/NeuroTechX/bci-workshop) and their reuse is strongly encouraged.

Now it's **your turn** to come up with inventive ways of using neurophysiological data! You can follow the pointers in the *References* section for inspiration.

## References

### Tutorials and neurohacks
- A blog with very cool and detailed posts about EEG/BCI hacking: [http://eeghacker.blogspot.ca/](http://eeghacker.blogspot.ca/)
- The [neuralDrift](http://neuraldrift.net/), a MATLAB-based neurogame that exploits the same concepts as this workshop: [https://github.com/hubertjb/neuraldrift](https://github.com/hubertjb/neuraldrift)
- A series of introductory lectures on Brain-Computer Interfacing: [http://sccn.ucsd.edu/wiki/Introduction_To_Modern_Brain-Computer_Interface_Design](http://sccn.ucsd.edu/wiki/Introduction_To_Modern_Brain-Computer_Interface_Design)
- A visualizer in [Unity](https://github.com/syswsi/BCI_Experiments)
- [Using Spotify with the Muse](https://github.com/eisendrachen00/musemusic)
- Other NeuroTechX [resources](https://github.com/NeuroTechX/awesome-bci)

## Authors

* Original version (2015): Hubert Banville & Raymundo Cassani
* Current version (September 2017): updated by Hubert Banville & Dano Morrison

If you use code from this workshop, please don't forget to follow the terms of the [MIT License](http://opensource.org/licenses/MIT).
--------------------------------------------------------------------------------
/LICENSE.txt:
--------------------------------------------------------------------------------
bci_workshop is licensed under the MIT License

Copyright (c) 2015 Hubert Banville and Raymundo Cassani

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
-------------------------------------------------------------------------------- /NOTES.md: -------------------------------------------------------------------------------- 1 | # Notes 2 | 3 | What is eeg time correction? 4 | 5 | Should we use the newer lsl-viewer for plotting style? 6 | 7 | Probably should use lsl-viewer. The raw EEG plot looks bad and continually drops sections of the signal 8 | 9 | How can we improve the visualization of the band powers? 10 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | ## README 2 | 3 | Material for the BCI Workshop to be held with NeuroTechTO 4 | 5 | See the document ```INSTRUCTIONS.md``` for instructions to follow for the workshop. 6 | 7 | ## Prior to the workshop 8 | 9 | Before coming to the workshop, please **download and install all the required dependencies** (see ```INSTRUCTIONS.md```). Since the whole download/installation process takes **around 30 minutes**, you will most likely miss the first part of the workshop if you haven't done it before coming. 10 | 11 | If for some reason you can't download/install the dependencies, please bring a USB stick to the workshop so you can make a copy of somebody else's installation files. 12 | 13 | ## Authors 14 | 15 | Raymundo Cassani & Hubert Banville 16 | 17 | ## License 18 | [MIT](http://opensource.org/licenses/MIT). 19 | -------------------------------------------------------------------------------- /fig/decision_plotter.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/NeuroTechX/bci-workshop/2ce12f3b44e89f35bc1f04c00a184a372cddfe1e/fig/decision_plotter.png -------------------------------------------------------------------------------- /fig/download_zip.JPG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/NeuroTechX/bci-workshop/2ce12f3b44e89f35bc1f04c00a184a372cddfe1e/fig/download_zip.JPG -------------------------------------------------------------------------------- /fig/ex1_figures.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/NeuroTechX/bci-workshop/2ce12f3b44e89f35bc1f04c00a184a372cddfe1e/fig/ex1_figures.png -------------------------------------------------------------------------------- /fig/ex1_figures_new.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/NeuroTechX/bci-workshop/2ce12f3b44e89f35bc1f04c00a184a372cddfe1e/fig/ex1_figures_new.png -------------------------------------------------------------------------------- /fig/ex2_figure.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/NeuroTechX/bci-workshop/2ce12f3b44e89f35bc1f04c00a184a372cddfe1e/fig/ex2_figure.png -------------------------------------------------------------------------------- /fig/lsl_viewer.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/NeuroTechX/bci-workshop/2ce12f3b44e89f35bc1f04c00a184a372cddfe1e/fig/lsl_viewer.png -------------------------------------------------------------------------------- /fig/mules.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/NeuroTechX/bci-workshop/2ce12f3b44e89f35bc1f04c00a184a372cddfe1e/fig/mules.png -------------------------------------------------------------------------------- /fig/muse-io.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/NeuroTechX/bci-workshop/2ce12f3b44e89f35bc1f04c00a184a372cddfe1e/fig/muse-io.png -------------------------------------------------------------------------------- /fig/muse.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/NeuroTechX/bci-workshop/2ce12f3b44e89f35bc1f04c00a184a372cddfe1e/fig/muse.png -------------------------------------------------------------------------------- /fig/play_button.JPG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/NeuroTechX/bci-workshop/2ce12f3b44e89f35bc1f04c00a184a372cddfe1e/fig/play_button.JPG -------------------------------------------------------------------------------- /python/bci_workshop_tools.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | """ 3 | BCI Workshop Auxiliary Tools 4 | 5 | Created on Fri May 08 15:34:59 2015 6 | 7 | @author: Cassani 8 | """ 9 | 10 | import os 11 | import sys 12 | from tempfile import gettempdir 13 | from subprocess import call 14 | 15 | import matplotlib.pyplot as plt 16 | import numpy as np 17 | from sklearn import svm 18 | from scipy.signal import butter, lfilter, lfilter_zi 19 | 20 | 21 | NOTCH_B, NOTCH_A = butter(4, np.array([55, 65])/(256/2), btype='bandstop') 22 | 23 | 24 | def plot_multichannel(data, params=None): 25 | """Create a plot to present multichannel data. 26 | 27 | Args: 28 | data (numpy.ndarray): Multichannel Data [n_samples, n_channels] 29 | params (dict): information about the data acquisition device 30 | 31 | TODO Receive labels as arguments 32 | """ 33 | fig, ax = plt.subplots() 34 | 35 | n_samples = data.shape[0] 36 | n_channels = data.shape[1] 37 | 38 | if params is not None: 39 | fs = params['sampling frequency'] 40 | names = params['names of channels'] 41 | else: 42 | fs = 1 43 | names = [''] * n_channels 44 | 45 | time_vec = np.arange(n_samples) / float(fs) 46 | 47 | data = np.fliplr(data) 48 | offset = 0 49 | for i_channel in range(n_channels): 50 | data_ac = data[:, i_channel] - np.mean(data[:, i_channel]) 51 | offset = offset + 2 * np.max(np.abs(data_ac)) 52 | ax.plot(time_vec, data_ac + offset, label=names[i_channel]) 53 | 54 | ax.set_xlabel('Time [s]') 55 | ax.set_ylabel('Amplitude') 56 | plt.legend() 57 | plt.draw() 58 | 59 | 60 | def epoch(data, samples_epoch, samples_overlap=0): 61 | """Extract epochs from a time series. 
62 | 63 | Given a 2D array of the shape [n_samples, n_channels] 64 | Creates a 3D array of the shape [wlength_samples, n_channels, n_epochs] 65 | 66 | Args: 67 | data (numpy.ndarray or list of lists): data [n_samples, n_channels] 68 | samples_epoch (int): window length in samples 69 | samples_overlap (int): Overlap between windows in samples 70 | 71 | Returns: 72 | (numpy.ndarray): epoched data of shape 73 | """ 74 | 75 | if isinstance(data, list): 76 | data = np.array(data) 77 | 78 | n_samples, n_channels = data.shape 79 | 80 | samples_shift = samples_epoch - samples_overlap 81 | 82 | n_epochs = int(np.floor((n_samples - samples_epoch) / float(samples_shift)) + 1) 83 | 84 | # Markers indicate where the epoch starts, and the epoch contains samples_epoch rows 85 | markers = np.asarray(range(0, n_epochs + 1)) * samples_shift 86 | markers = markers.astype(int) 87 | 88 | # Divide data in epochs 89 | epochs = np.zeros((samples_epoch, n_channels, n_epochs)) 90 | 91 | for i in range(0, n_epochs): 92 | epochs[:, :, i] = data[markers[i]:markers[i] + samples_epoch, :] 93 | 94 | return epochs 95 | 96 | 97 | def compute_feature_vector(eegdata, fs): 98 | """Extract the features from the EEG. 99 | 100 | Args: 101 | eegdata (numpy.ndarray): array of dimension [number of samples, 102 | number of channels] 103 | fs (float): sampling frequency of eegdata 104 | 105 | Returns: 106 | (numpy.ndarray): feature matrix of shape [number of feature points, 107 | number of different features] 108 | """ 109 | # 1. Compute the PSD 110 | winSampleLength, nbCh = eegdata.shape 111 | 112 | # Apply Hamming window 113 | w = np.hamming(winSampleLength) 114 | dataWinCentered = eegdata - np.mean(eegdata, axis=0) # Remove offset 115 | dataWinCenteredHam = (dataWinCentered.T*w).T 116 | 117 | NFFT = nextpow2(winSampleLength) 118 | Y = np.fft.fft(dataWinCenteredHam, n=NFFT, axis=0)/winSampleLength 119 | PSD = 2*np.abs(Y[0:int(NFFT/2), :]) 120 | f = fs/2*np.linspace(0, 1, int(NFFT/2)) 121 | 122 | # SPECTRAL FEATURES 123 | # Average of band powers 124 | # Delta <4 125 | ind_delta, = np.where(f < 4) 126 | meanDelta = np.mean(PSD[ind_delta, :], axis=0) 127 | # Theta 4-8 128 | ind_theta, = np.where((f >= 4) & (f <= 8)) 129 | meanTheta = np.mean(PSD[ind_theta, :], axis=0) 130 | # Alpha 8-12 131 | ind_alpha, = np.where((f >= 8) & (f <= 12)) 132 | meanAlpha = np.mean(PSD[ind_alpha, :], axis=0) 133 | # Beta 12-30 134 | ind_beta, = np.where((f >= 12) & (f < 30)) 135 | meanBeta = np.mean(PSD[ind_beta, :], axis=0) 136 | 137 | feature_vector = np.concatenate((meanDelta, meanTheta, meanAlpha, 138 | meanBeta), axis=0) 139 | 140 | feature_vector = np.log10(feature_vector) 141 | 142 | return feature_vector 143 | 144 | 145 | def nextpow2(i): 146 | """ 147 | Find the next power of 2 for number i 148 | """ 149 | n = 1 150 | while n < i: 151 | n *= 2 152 | return n 153 | 154 | 155 | def compute_feature_matrix(epochs, fs): 156 | """ 157 | Call compute_feature_vector for each EEG epoch 158 | """ 159 | n_epochs = epochs.shape[2] 160 | 161 | for i_epoch in range(n_epochs): 162 | if i_epoch == 0: 163 | feat = compute_feature_vector(epochs[:, :, i_epoch], fs).T 164 | feature_matrix = np.zeros((n_epochs, feat.shape[0])) # Initialize feature_matrix 165 | 166 | feature_matrix[i_epoch, :] = compute_feature_vector( 167 | epochs[:, :, i_epoch], fs).T 168 | 169 | return feature_matrix 170 | 171 | 172 | def train_classifier(feature_matrix_0, feature_matrix_1, algorithm='SVM'): 173 | """Train a binary classifier. 174 | 175 | Train a binary classifier. 
First perform Z-score normalization, then 176 | fit 177 | 178 | Args: 179 | feature_matrix_0 (numpy.ndarray): array of shape (n_samples, 180 | n_features) with examples for Class 0 181 | feature_matrix_0 (numpy.ndarray): array of shape (n_samples, 182 | n_features) with examples for Class 1 183 | alg (str): Type of classifer to use. Currently only SVM is 184 | supported. 185 | 186 | Returns: 187 | (sklearn object): trained classifier (scikit object) 188 | (numpy.ndarray): normalization mean 189 | (numpy.ndarray): normalization standard deviation 190 | """ 191 | # Create vector Y (class labels) 192 | class0 = np.zeros((feature_matrix_0.shape[0], 1)) 193 | class1 = np.ones((feature_matrix_1.shape[0], 1)) 194 | 195 | # Concatenate feature matrices and their respective labels 196 | y = np.concatenate((class0, class1), axis=0) 197 | features_all = np.concatenate((feature_matrix_0, feature_matrix_1), 198 | axis=0) 199 | 200 | # Normalize features columnwise 201 | mu_ft = np.mean(features_all, axis=0) 202 | std_ft = np.std(features_all, axis=0) 203 | 204 | X = (features_all - mu_ft) / std_ft 205 | 206 | # Train SVM using default parameters 207 | clf = svm.SVC() 208 | clf.fit(X, y) 209 | score = clf.score(X, y.ravel()) 210 | 211 | # Visualize decision boundary 212 | # plot_classifier_training(clf, X, y, features_to_plot=[0, 1]) 213 | 214 | return clf, mu_ft, std_ft, score 215 | 216 | 217 | def test_classifier(clf, feature_vector, mu_ft, std_ft): 218 | """Test the classifier on new data points. 219 | 220 | Args: 221 | clf (sklearn object): trained classifier 222 | feature_vector (numpy.ndarray): array of shape (n_samples, 223 | n_features) 224 | mu_ft (numpy.ndarray): normalization mean 225 | std_ft (numpy.ndarray): normalization standard deviation 226 | 227 | Returns: 228 | (numpy.ndarray): decision of the classifier on the data points 229 | """ 230 | 231 | # Normalize feature_vector 232 | x = (feature_vector - mu_ft) / std_ft 233 | y_hat = clf.predict(x) 234 | 235 | return y_hat 236 | 237 | 238 | def beep(waveform=(79, 45, 32, 50, 99, 113, 126, 127)): 239 | """Play a beep sound. 240 | 241 | Cross-platform sound playing with standard library only, no sound 242 | file required. 243 | 244 | From https://gist.github.com/juancarlospaco/c295f6965ed056dd08da 245 | """ 246 | wavefile = os.path.join(gettempdir(), "beep.wav") 247 | if not os.path.isfile(wavefile) or not os.access(wavefile, os.R_OK): 248 | with open(wavefile, "w+") as wave_file: 249 | for sample in range(0, 300, 1): 250 | for wav in range(0, 8, 1): 251 | wave_file.write(chr(waveform[wav])) 252 | if sys.platform.startswith("linux"): 253 | return call("chrt -i 0 aplay '{fyle}'".format(fyle=wavefile), 254 | shell=1) 255 | if sys.platform.startswith("darwin"): 256 | return call("afplay '{fyle}'".format(fyle=wavefile), shell=True) 257 | if sys.platform.startswith("win"): # FIXME: This is Ugly. 258 | return call("start /low /min '{fyle}'".format(fyle=wavefile), 259 | shell=1) 260 | 261 | 262 | def get_feature_names(ch_names): 263 | """Generate the name of the features. 
264 | 265 | Args: 266 | ch_names (list): electrode names 267 | 268 | Returns: 269 | (list): feature names 270 | """ 271 | bands = ['delta', 'theta', 'alpha', 'beta'] 272 | 273 | feat_names = [] 274 | for band in bands: 275 | for ch in range(len(ch_names)): 276 | feat_names.append(band + '-' + ch_names[ch]) 277 | 278 | return feat_names 279 | 280 | 281 | def update_buffer(data_buffer, new_data, notch=False, filter_state=None): 282 | """ 283 | Concatenates "new_data" into "data_buffer", and returns an array with 284 | the same size as "data_buffer" 285 | """ 286 | if new_data.ndim == 1: 287 | new_data = new_data.reshape(-1, data_buffer.shape[1]) 288 | 289 | if notch: 290 | if filter_state is None: 291 | filter_state = np.tile(lfilter_zi(NOTCH_B, NOTCH_A), 292 | (data_buffer.shape[1], 1)).T 293 | new_data, filter_state = lfilter(NOTCH_B, NOTCH_A, new_data, axis=0, 294 | zi=filter_state) 295 | 296 | new_buffer = np.concatenate((data_buffer, new_data), axis=0) 297 | new_buffer = new_buffer[new_data.shape[0]:, :] 298 | 299 | return new_buffer, filter_state 300 | 301 | 302 | def get_last_data(data_buffer, newest_samples): 303 | """ 304 | Obtains from "buffer_array" the "newest samples" (N rows from the 305 | bottom of the buffer) 306 | """ 307 | new_buffer = data_buffer[(data_buffer.shape[0] - newest_samples):, :] 308 | 309 | return new_buffer 310 | 311 | 312 | class DataPlotter(): 313 | """ 314 | Class for creating and updating a line plot. 315 | """ 316 | 317 | def __init__(self, nbPoints, chNames, fs=None, title=None): 318 | """Initialize the figure.""" 319 | 320 | self.nbPoints = nbPoints 321 | self.chNames = chNames 322 | self.nbCh = len(self.chNames) 323 | 324 | self.fs = 1 if fs is None else fs 325 | self.figTitle = '' if title is None else title 326 | 327 | data = np.empty((self.nbPoints, 1))*np.nan 328 | self.t = np.arange(data.shape[0])/float(self.fs) 329 | 330 | # Create offset parameters for plotting multiple signals 331 | self.yAxisRange = 100 332 | self.chRange = self.yAxisRange/float(self.nbCh) 333 | self.offsets = np.round((np.arange(self.nbCh)+0.5)*(self.chRange)) 334 | 335 | # Create the figure and axis 336 | plt.ion() 337 | self.fig, self.ax = plt.subplots() 338 | self.ax.set_yticks(self.offsets) 339 | self.ax.set_yticklabels(self.chNames) 340 | 341 | # Initialize the figure 342 | self.ax.set_title(self.figTitle) 343 | 344 | self.chLinesDict = {} 345 | for i, chName in enumerate(self.chNames): 346 | self.chLinesDict[chName], = self.ax.plot( 347 | self.t, data+self.offsets[i], label=chName) 348 | 349 | self.ax.set_xlabel('Time') 350 | self.ax.set_ylim([0, self.yAxisRange]) 351 | self.ax.set_xlim([np.min(self.t), np.max(self.t)]) 352 | 353 | plt.show() 354 | 355 | def update_plot(self, data): 356 | """ Update the plot """ 357 | 358 | data = data - np.mean(data, axis=0) 359 | std_data = np.std(data, axis=0) 360 | std_data[np.where(std_data == 0)] = 1 361 | data = data/std_data*self.chRange/5.0 362 | 363 | for i, chName in enumerate(self.chNames): 364 | self.chLinesDict[chName].set_ydata(data[:, i] + self.offsets[i]) 365 | 366 | self.fig.canvas.draw() 367 | 368 | def clear(self): 369 | """ Clear the figure """ 370 | 371 | blankData = np.empty((self.nbPoints, 1))*np.nan 372 | 373 | for i, chName in enumerate(self.chNames): 374 | self.chLinesDict[chName].set_ydata(blankData) 375 | 376 | self.fig.canvas.draw() 377 | 378 | def close(self): 379 | """ Close the figure """ 380 | 381 | plt.close(self.fig) 382 | 383 | 384 | def plot_classifier_training(clf, X, y, features_to_plot=[0, 1]): 
385 | """Visualize the decision boundary of a classifier. 386 | 387 | Args: 388 | clf (sklearn object): trained classifier 389 | X (numpy.ndarray): data to visualize the decision boundary for 390 | y (numpy.ndarray): labels for X 391 | 392 | Keyword Args: 393 | features_to_plot (list): indices of the two features to use for 394 | plotting 395 | 396 | Inspired from: http://scikit-learn.org/stable/auto_examples/tree/plot_iris.html 397 | """ 398 | 399 | plot_colors = "bry" 400 | plot_step = 0.02 401 | n_classes = len(np.unique(y)) 402 | 403 | x_min = np.min(X[:, 1])-1 404 | x_max = np.max(X[:, 1])+1 405 | y_min = np.min(X[:, 0])-1 406 | y_max = np.max(X[:, 0])+1 407 | 408 | xx, yy = np.meshgrid(np.arange(x_min, x_max, plot_step), 409 | np.arange(y_min, y_max, plot_step)) 410 | 411 | Z = clf.predict(np.c_[xx.ravel(), yy.ravel()]) 412 | Z = Z.reshape(xx.shape) 413 | 414 | fig, ax = plt.subplots() 415 | ax.contourf(xx, yy, Z, cmap=plt.cm.Paired, alpha=0.5) 416 | 417 | # Plot the training points 418 | for i, color in zip(range(n_classes), plot_colors): 419 | idx = np.where(y == i) 420 | ax.scatter(X[idx, 0], X[idx, 1], c=color, cmap=plt.cm.Paired) 421 | 422 | plt.axis('tight') 423 | -------------------------------------------------------------------------------- /python/compute_feature_vector_advanced.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | 3 | def compute_feature_vector(eegdata, Fs): 4 | """Extract the features from the EEG 5 | Inputs: 6 | eegdata: array of dimension [number of samples, number of channels] 7 | Fs: sampling frequency of eegdata 8 | 9 | Outputs: 10 | feature_vector: [number of features points; 11 | number of different features] 12 | 13 | """ 14 | # Delete last column (Status) 15 | eegdata = np.delete(eegdata, -1, 1) 16 | 17 | # 1. 
Compute the PSD
18 |     winSampleLength, nbCh = eegdata.shape
19 | 
20 |     # Apply Hamming window
21 |     w = np.hamming(winSampleLength)
22 |     dataWinCentered = eegdata - np.mean(eegdata, axis=0)  # Remove offset
23 |     dataWinCenteredHam = (dataWinCentered.T*w).T
24 | 
25 |     NFFT = nextpow2(winSampleLength)
26 |     Y = np.fft.fft(dataWinCenteredHam, n=NFFT, axis=0)/winSampleLength
27 |     PSD = 2*np.abs(Y[0:int(NFFT/2), :])  # int() needed: / returns a float in Python 3
28 |     f = Fs/2*np.linspace(0, 1, int(NFFT/2))
29 | 
30 |     # SPECTRAL FEATURES
31 |     # Average of band powers
32 |     # Delta <4
33 |     ind_delta, = np.where(f < 4)
34 |     meanDelta = np.mean(PSD[ind_delta, :], axis=0)
35 |     # Theta 4-8
36 |     ind_theta, = np.where((f >= 4) & (f <= 8))
37 |     meanTheta = np.mean(PSD[ind_theta, :], axis=0)
38 |     # Low alpha 8-10
39 |     ind_alpha, = np.where((f >= 8) & (f <= 10))
40 |     meanLowAlpha = np.mean(PSD[ind_alpha, :], axis=0)
41 |     # Medium alpha 9-11 (computed but not included in the feature vector below)
42 |     ind_alpha, = np.where((f >= 9) & (f <= 11))
43 |     meanMedAlpha = np.mean(PSD[ind_alpha, :], axis=0)
44 |     # High alpha 10-12
45 |     ind_alpha, = np.where((f >= 10) & (f <= 12))
46 |     meanHighAlpha = np.mean(PSD[ind_alpha, :], axis=0)
47 |     # Low beta 12-21
48 |     ind_beta, = np.where((f >= 12) & (f <= 21))
49 |     meanLowBeta = np.mean(PSD[ind_beta, :], axis=0)
50 |     # High beta 21-30
51 |     ind_beta, = np.where((f >= 21) & (f <= 30))
52 |     meanHighBeta = np.mean(PSD[ind_beta, :], axis=0)
53 |     # Alpha 8-12
54 |     ind_alpha, = np.where((f >= 8) & (f <= 12))
55 |     meanAlpha = np.mean(PSD[ind_alpha, :], axis=0)
56 |     # Beta 12-30
57 |     ind_beta, = np.where((f >= 12) & (f <= 30))
58 |     meanBeta = np.mean(PSD[ind_beta, :], axis=0)
59 | 
60 |     feature_vector = np.concatenate((meanDelta, meanTheta, meanLowAlpha, meanHighAlpha,
61 |                                      meanLowBeta, meanHighBeta,
62 |                                      meanDelta/meanBeta, meanTheta/meanBeta,
63 |                                      meanAlpha/meanBeta, meanAlpha/meanTheta), axis=0)
64 | 
65 |     feature_vector = np.log10(feature_vector)
66 | 
67 |     return feature_vector
68 | 
69 | def feature_names(ch_names):
70 |     """
71 |     Generate the names of the features
72 | 
73 |     Arguments
74 |     ch_names: List with electrode names
75 |     """
76 | 
77 |     bands = ['pwr-delta', 'pwr-theta', 'pwr-low-alpha', 'pwr-high-alpha',
78 |              'pwr-low-beta', 'pwr-high-beta',
79 |              'pwr-delta/beta', 'pwr-theta/beta', 'pwr-alpha/beta', 'pwr-alpha/theta']
80 |     feat_names = []
81 |     for band in bands:
82 |         for ch in range(0, len(ch_names)-1):
83 |             # Last column is omitted because it is the Status channel
84 |             feat_names.append(band + '-' + ch_names[ch])
85 | 
86 |     return feat_names
87 | 
88 | def nextpow2(i):
89 |     """ Find the next power of 2 for number i """
90 |     n = 1
91 |     while n < i:
92 |         n *= 2
93 |     return n
94 | 
--------------------------------------------------------------------------------
/python/exercise_01.py:
--------------------------------------------------------------------------------
 1 | # -*- coding: utf-8 -*-
 2 | """
 3 | Exercise 1: A neurofeedback interface (single-channel)
 4 | ======================================================
 5 | 
 6 | Description:
 7 | In this exercise, we'll try and play around with a simple interface that
 8 | receives EEG from one electrode, computes standard frequency band powers
 9 | and displays both the raw signals and the features.
10 | 
11 | """
12 | 
13 | import numpy as np  # Module that simplifies computations on matrices
14 | import matplotlib.pyplot as plt  # Module used for plotting
15 | from pylsl import StreamInlet, resolve_byprop  # Module to receive EEG data
16 | 
17 | import bci_workshop_tools as BCIw  # Our own functions for the workshop
18 | 
19 | 
20 | if __name__ == "__main__":
21 | 
22 |     """ 1. 
CONNECT TO EEG STREAM """ 23 | 24 | # Search for active LSL stream 25 | print('Looking for an EEG stream...') 26 | streams = resolve_byprop('type', 'EEG', timeout=2) 27 | if len(streams) == 0: 28 | raise RuntimeError('Can\'t find EEG stream.') 29 | 30 | # Set active EEG stream to inlet and apply time correction 31 | print("Start acquiring data") 32 | inlet = StreamInlet(streams[0], max_chunklen=12) 33 | eeg_time_correction = inlet.time_correction() 34 | 35 | # Get the stream info and description 36 | info = inlet.info() 37 | description = info.desc() 38 | 39 | # Get the sampling frequency 40 | # This is an important value that represents how many EEG data points are 41 | # collected in a second. This influences our frequency band calculation. 42 | fs = int(info.nominal_srate()) 43 | 44 | """ 2. SET EXPERIMENTAL PARAMETERS """ 45 | 46 | # Length of the EEG data buffer (in seconds) 47 | # This buffer will hold last n seconds of data and be used for calculations 48 | buffer_length = 15 49 | 50 | # Length of the epochs used to compute the FFT (in seconds) 51 | epoch_length = 1 52 | 53 | # Amount of overlap between two consecutive epochs (in seconds) 54 | overlap_length = 0.8 55 | 56 | # Amount to 'shift' the start of each next consecutive epoch 57 | shift_length = epoch_length - overlap_length 58 | 59 | # Index of the channel (electrode) to be used 60 | # 0 = left ear, 1 = left forehead, 2 = right forehead, 3 = right ear 61 | index_channel = [0] 62 | ch_names = ['ch1'] # Name of our channel for plotting purposes 63 | 64 | # Get names of features 65 | # ex. ['delta - CH1', 'pwr-theta - CH1', 'pwr-alpha - CH1',...] 66 | feature_names = BCIw.get_feature_names(ch_names) 67 | 68 | """ 3. INITIALIZE BUFFERS """ 69 | 70 | # Initialize raw EEG data buffer (for plotting) 71 | eeg_buffer = np.zeros((int(fs * buffer_length), 1)) 72 | filter_state = None # for use with the notch filter 73 | 74 | # Compute the number of epochs in "buffer_length" (used for plotting) 75 | n_win_test = int(np.floor((buffer_length - epoch_length) / 76 | shift_length + 1)) 77 | 78 | # Initialize the feature data buffer (for plotting) 79 | feat_buffer = np.zeros((n_win_test, len(feature_names))) 80 | 81 | # Initialize the plots 82 | plotter_eeg = BCIw.DataPlotter(fs * buffer_length, ch_names, fs) 83 | plotter_feat = BCIw.DataPlotter(n_win_test, feature_names, 84 | 1 / shift_length) 85 | 86 | """ 3. 
GET DATA """ 87 | 88 | # The try/except structure allows to quit the while loop by aborting the 89 | # script with 90 | print('Press Ctrl-C in the console to break the while loop.') 91 | 92 | try: 93 | # The following loop does what we see in the diagram of Exercise 1: 94 | # acquire data, compute features, visualize raw EEG and the features 95 | while True: 96 | 97 | """ 3.1 ACQUIRE DATA """ 98 | # Obtain EEG data from the LSL stream 99 | eeg_data, timestamp = inlet.pull_chunk( 100 | timeout=1, max_samples=int(shift_length * fs)) 101 | 102 | # Only keep the channel we're interested in 103 | ch_data = np.array(eeg_data)[:, index_channel] 104 | 105 | # Update EEG buffer 106 | eeg_buffer, filter_state = BCIw.update_buffer( 107 | eeg_buffer, ch_data, notch=True, 108 | filter_state=filter_state) 109 | 110 | """ 3.2 COMPUTE FEATURES """ 111 | # Get newest samples from the buffer 112 | data_epoch = BCIw.get_last_data(eeg_buffer, 113 | epoch_length * fs) 114 | 115 | # Compute features 116 | feat_vector = BCIw.compute_feature_vector(data_epoch, fs) 117 | feat_buffer, _ = BCIw.update_buffer(feat_buffer, 118 | np.asarray([feat_vector])) 119 | 120 | """ 3.3 VISUALIZE THE RAW EEG AND THE FEATURES """ 121 | plotter_eeg.update_plot(eeg_buffer) 122 | plotter_feat.update_plot(feat_buffer) 123 | plt.pause(0.00001) 124 | 125 | except KeyboardInterrupt: 126 | print('Closing!') 127 | -------------------------------------------------------------------------------- /python/exercise_01_multichannel.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | """ 3 | Exercise 1b: A neurofeedback interface (multi-channel) 4 | ====================================================== 5 | 6 | Description: 7 | In this exercise, we'll try and play around with a simple interface that 8 | receives EEG from multiple electrodes, computes standard frequency band 9 | powers and displays both the raw signals and the features. 10 | 11 | """ 12 | 13 | import numpy as np # Module that simplifies computations on matrices 14 | import matplotlib.pyplot as plt # Module used for plotting 15 | from pylsl import StreamInlet, resolve_byprop # Module to receive EEG data 16 | 17 | import bci_workshop_tools as BCIw # Our own functions for the workshop 18 | 19 | 20 | if __name__ == "__main__": 21 | 22 | """ 1. CONNECT TO EEG STREAM """ 23 | 24 | # Search for active LSL stream 25 | print('Looking for an EEG stream...') 26 | streams = resolve_byprop('type', 'EEG', timeout=2) 27 | if len(streams) == 0: 28 | raise RuntimeError('Can\'t find EEG stream.') 29 | 30 | # Set active EEG stream to inlet and apply time correction 31 | print("Start acquiring data") 32 | inlet = StreamInlet(streams[0], max_chunklen=12) 33 | eeg_time_correction = inlet.time_correction() 34 | 35 | # Get the stream info, description, sampling frequency, number of channels 36 | info = inlet.info() 37 | description = info.desc() 38 | fs = int(info.nominal_srate()) 39 | n_channels = info.channel_count() 40 | 41 | # Get names of all channels 42 | ch = description.child('channels').first_child() 43 | ch_names = [ch.child_value('label')] 44 | for i in range(1, n_channels): 45 | ch = ch.next_sibling() 46 | ch_names.append(ch.child_value('label')) 47 | 48 | """ 2. 
SET EXPERIMENTAL PARAMETERS """ 49 | 50 | # Length of the EEG data buffer (in seconds) 51 | # This buffer will hold last n seconds of data and be used for calculations 52 | buffer_length = 15 53 | 54 | # Length of the epochs used to compute the FFT (in seconds) 55 | epoch_length = 1 56 | 57 | # Amount of overlap between two consecutive epochs (in seconds) 58 | overlap_length = 0.8 59 | 60 | # Amount to 'shift' the start of each next consecutive epoch 61 | shift_length = epoch_length - overlap_length 62 | 63 | # Index of the channel (electrode) to be used 64 | # 0 = left ear, 1 = left forehead, 2 = right forehead, 3 = right ear 65 | index_channel = [0, 1, 2, 3] 66 | # Name of our channel for plotting purposes 67 | ch_names = [ch_names[i] for i in index_channel] 68 | n_channels = len(index_channel) 69 | 70 | # Get names of features 71 | # ex. ['delta - CH1', 'pwr-theta - CH1', 'pwr-alpha - CH1',...] 72 | feature_names = BCIw.get_feature_names(ch_names) 73 | 74 | """3. INITIALIZE BUFFERS """ 75 | 76 | # Initialize raw EEG data buffer (for plotting) 77 | eeg_buffer = np.zeros((int(fs * buffer_length), n_channels)) 78 | filter_state = None # for use with the notch filter 79 | 80 | # Compute the number of epochs in "buffer_length" (used for plotting) 81 | n_win_test = int(np.floor((buffer_length - epoch_length) / 82 | shift_length + 1)) 83 | 84 | # Initialize the feature data buffer (for plotting) 85 | feat_buffer = np.zeros((n_win_test, len(feature_names))) 86 | 87 | # Initialize the plots 88 | plotter_eeg = BCIw.DataPlotter(fs * buffer_length, ch_names, fs) 89 | plotter_feat = BCIw.DataPlotter(n_win_test, feature_names, 90 | 1 / shift_length) 91 | 92 | """ 3. GET DATA """ 93 | 94 | # The try/except structure allows to quit the while loop by aborting the 95 | # script with 96 | print('Press Ctrl-C in the console to break the while loop.') 97 | 98 | try: 99 | # The following loop does what we see in the diagram of Exercise 1: 100 | # acquire data, compute features, visualize raw EEG and the features 101 | while True: 102 | 103 | """ 3.1 ACQUIRE DATA """ 104 | # Obtain EEG data from the LSL stream 105 | eeg_data, timestamp = inlet.pull_chunk( 106 | timeout=1, max_samples=int(shift_length * fs)) 107 | 108 | # Only keep the channel we're interested in 109 | ch_data = np.array(eeg_data)[:, index_channel] 110 | 111 | # Update EEG buffer 112 | eeg_buffer, filter_state = BCIw.update_buffer( 113 | eeg_buffer, ch_data, notch=True, 114 | filter_state=filter_state) 115 | 116 | """ 3.2 COMPUTE FEATURES """ 117 | # Get newest samples from the buffer 118 | data_epoch = BCIw.get_last_data(eeg_buffer, 119 | epoch_length * fs) 120 | 121 | # Compute features 122 | feat_vector = BCIw.compute_feature_vector(data_epoch, fs) 123 | feat_buffer, _ = BCIw.update_buffer(feat_buffer, 124 | np.asarray([feat_vector])) 125 | 126 | """ 3.3 VISUALIZE THE RAW EEG AND THE FEATURES """ 127 | plotter_eeg.update_plot(eeg_buffer) 128 | plotter_feat.update_plot(feat_buffer) 129 | plt.pause(0.00001) 130 | 131 | except KeyboardInterrupt: 132 | 133 | print('Closing!') 134 | -------------------------------------------------------------------------------- /python/exercise_02.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | """ 3 | Exercise 2: A basic Brain-Computer Interface 4 | ============================================= 5 | 6 | Description: 7 | In this second exercise, we will learn how to use an automatic algorithm to 8 | recognize somebody's mental states from their 
--------------------------------------------------------------------------------
/python/exercise_02.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | """
3 | Exercise 2: A basic Brain-Computer Interface
4 | =============================================
5 |
6 | Description:
7 | In this second exercise, we will learn how to use an automatic algorithm to
8 | recognize somebody's mental states from their EEG. We will use a classifier,
9 | i.e., an algorithm that, provided some data, learns to recognize patterns,
10 | and can then classify similar unseen information.
11 |
12 | """
13 |
14 | import argparse
15 | import numpy as np  # Module that simplifies computations on matrices
16 | import matplotlib.pyplot as plt  # Module used for plotting
17 | from pylsl import StreamInlet, resolve_byprop  # Module to receive EEG data
18 |
19 | import bci_workshop_tools as BCIw  # Our own functions for the workshop
20 |
21 |
22 | if __name__ == "__main__":
23 |
24 |     """ 0. PARSE ARGUMENTS """
25 |     parser = argparse.ArgumentParser(description='BCI Workshop example 2')
26 |     parser.add_argument('channels', metavar='N', type=int, nargs='*',
27 |                         default=[0, 1, 2, 3],
28 |                         help='channel numbers to use; if not specified, all channels are used')
29 |
30 |     args = parser.parse_args()
31 |
32 |     """ 1. CONNECT TO EEG STREAM """
33 |
34 |     # Search for active LSL stream
35 |     print('Looking for an EEG stream...')
36 |     streams = resolve_byprop('type', 'EEG', timeout=2)
37 |     if len(streams) == 0:
38 |         raise RuntimeError('Can\'t find EEG stream.')
39 |
40 |     # Set active EEG stream to inlet and apply time correction
41 |     print("Start acquiring data")
42 |     inlet = StreamInlet(streams[0], max_chunklen=12)
43 |     eeg_time_correction = inlet.time_correction()
44 |
45 |     # Get the stream info, description, sampling frequency, number of channels
46 |     info = inlet.info()
47 |     description = info.desc()
48 |     fs = int(info.nominal_srate())
49 |     n_channels = info.channel_count()
50 |
51 |     # Get names of all channels
52 |     ch = description.child('channels').first_child()
53 |     ch_names = [ch.child_value('label')]
54 |     for i in range(1, n_channels):
55 |         ch = ch.next_sibling()
56 |         ch_names.append(ch.child_value('label'))
57 |
58 |     """ 2. SET EXPERIMENTAL PARAMETERS """
59 |
60 |     # Length of the EEG data buffer (in seconds)
61 |     # This buffer will hold the last n seconds of data and be used for calculations
62 |     buffer_length = 15
63 |
64 |     # Length of the epochs used to compute the FFT (in seconds)
65 |     epoch_length = 1
66 |
67 |     # Amount of overlap between two consecutive epochs (in seconds)
68 |     overlap_length = 0.8
69 |
70 |     # Amount to 'shift' the start of each next consecutive epoch
71 |     shift_length = epoch_length - overlap_length
72 |
73 |     # Indices of the channels (electrodes) to be used
74 |     # 0 = left ear, 1 = left forehead, 2 = right forehead, 3 = right ear
75 |     index_channel = args.channels
76 |     # Names of our channels, for plotting purposes
77 |     ch_names = [ch_names[i] for i in index_channel]
78 |     n_channels = len(index_channel)
79 |
80 |     # Get names of features
81 |     # ex. ['pwr-delta - CH1', 'pwr-theta - CH1', 'pwr-alpha - CH1', ...]
82 |     feature_names = BCIw.get_feature_names(ch_names)
83 |
84 |     # Number of seconds to collect training data for (one class)
85 |     training_length = 20
86 |
87 |     """ 3. RECORD TRAINING DATA """
88 |     print('\nKeep your eyes open!\n')
89 |     # Record data for mental activity 0 (eyes open)
90 |     BCIw.beep()
91 |     eeg_data0, timestamps0 = inlet.pull_chunk(
92 |         timeout=training_length+1, max_samples=fs * training_length)
93 |     eeg_data0 = np.array(eeg_data0)[:, index_channel]
94 |
95 |     print('\nClose your eyes!\n')
96 |
97 |     # Record data for mental activity 1 (eyes closed)
98 |     BCIw.beep()  # Beep sound
99 |     eeg_data1, timestamps1 = inlet.pull_chunk(
100 |         timeout=training_length+1, max_samples=fs * training_length)
101 |     eeg_data1 = np.array(eeg_data1)[:, index_channel]
102 |
103 |     # Divide data into epochs
104 |     eeg_epochs0 = BCIw.epoch(eeg_data0, epoch_length * fs,
105 |                              overlap_length * fs)
106 |     eeg_epochs1 = BCIw.epoch(eeg_data1, epoch_length * fs,
107 |                              overlap_length * fs)
108 |
109 |     """ 4. COMPUTE FEATURES AND TRAIN CLASSIFIER """
110 |
111 |     feat_matrix0 = BCIw.compute_feature_matrix(eeg_epochs0, fs)
112 |     feat_matrix1 = BCIw.compute_feature_matrix(eeg_epochs1, fs)
113 |
114 |     [classifier, mu_ft, std_ft, score] = BCIw.train_classifier(
115 |         feat_matrix0, feat_matrix1, 'SVM')
116 |
117 |     print(str(score * 100) + '% correctly predicted')
118 |
119 |     BCIw.beep()
120 |
121 |     """ 5. USE THE CLASSIFIER IN REAL-TIME """
122 |
123 |     # Initialize the buffers for storing raw EEG and decisions
124 |     eeg_buffer = np.zeros((int(fs * buffer_length), n_channels))
125 |     filter_state = None  # for use with the notch filter
126 |     decision_buffer = np.zeros((30, 1))
127 |
128 |     plotter_decision = BCIw.DataPlotter(30, ['Decision'])
129 |
130 |     # The try/except structure allows us to quit the while loop by aborting
131 |     # the script with a keyboard interrupt (Ctrl-C)
132 |     print('Press Ctrl-C in the console to break the while loop.')
133 |
134 |     try:
135 |         while True:
136 |
137 |             """ 5.1 ACQUIRE DATA """
138 |             # Obtain EEG data from the LSL stream
139 |             eeg_data, timestamp = inlet.pull_chunk(
140 |                 timeout=1, max_samples=int(shift_length * fs))
141 |
142 |             # Only keep the channels we're interested in
143 |             ch_data = np.array(eeg_data)[:, index_channel]
144 |
145 |             # Update EEG buffer
146 |             eeg_buffer, filter_state = BCIw.update_buffer(
147 |                 eeg_buffer, ch_data, notch=True,
148 |                 filter_state=filter_state)
149 |
150 |             """ 5.2 COMPUTE FEATURES AND CLASSIFY """
151 |             # Get newest samples from the buffer
152 |             data_epoch = BCIw.get_last_data(eeg_buffer,
153 |                                             epoch_length * fs)
154 |
155 |             # Compute features
156 |             feat_vector = BCIw.compute_feature_vector(data_epoch, fs)
157 |             y_hat = BCIw.test_classifier(classifier,
158 |                                          feat_vector.reshape(1, -1), mu_ft,
159 |                                          std_ft)
160 |             print(y_hat)
161 |
162 |             decision_buffer, _ = BCIw.update_buffer(decision_buffer,
163 |                                                     np.reshape(y_hat, (-1, 1)))
164 |
165 |             """ 5.3 VISUALIZE THE DECISIONS """
166 |             plotter_decision.update_plot(decision_buffer)
167 |             plt.pause(0.00001)
168 |
169 |     except KeyboardInterrupt:
170 |
171 |         print('Closed!')
172 |
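# --- Editor's aside (illustrative sketch, not part of the original file) ---
# BCIw.train_classifier() and BCIw.test_classifier() live in
# bci_workshop_tools.py, which is not reproduced in this dump. The sketch
# below shows one plausible implementation of the same interface with
# scikit-learn; the *_sketch names are ours, and the real helpers may differ
# (for instance, the reported score may come from cross-validation).

import numpy as np
from sklearn.svm import SVC


def train_classifier_sketch(feat_matrix0, feat_matrix1):
    # Stack both classes and build the label vector (0s, then 1s)
    X = np.concatenate((feat_matrix0, feat_matrix1), axis=0)
    y = np.concatenate((np.zeros(len(feat_matrix0)),
                        np.ones(len(feat_matrix1))))
    # Standardize features, keeping mean/std to reuse on unseen epochs
    mu_ft, std_ft = X.mean(axis=0), X.std(axis=0)
    std_ft[std_ft == 0] = 1  # guard against constant features
    clf = SVC(kernel='rbf').fit((X - mu_ft) / std_ft, y)
    score = clf.score((X - mu_ft) / std_ft, y)  # training accuracy
    return clf, mu_ft, std_ft, score


def test_classifier_sketch(clf, feat_vector, mu_ft, std_ft):
    # feat_vector has shape (1, n_features), as in the real-time loop above
    return clf.predict((feat_vector - mu_ft) / std_ft)
# --- end of editor's aside ---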
--------------------------------------------------------------------------------
/python/extra_stuff/compute_feature_vector_advanced.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 |
3 | def compute_feature_vector(eegdata, Fs):
4 |     """Extract the features from the EEG
5 |     Inputs:
6 |     eegdata: array of dimension [number of samples, number of channels]
7 |     Fs: sampling frequency of eegdata
8 |
9 |     Outputs:
10 |     feature_vector: array of dimension [number of feature points,
11 |                     number of different features]
12 |     """
13 |     # Delete last column (Status)
14 |     eegdata = np.delete(eegdata, -1, 1)
15 |
16 |     # 1. Compute the PSD
17 |     winSampleLength, nbCh = eegdata.shape
18 |
19 |     # Apply Hamming window
20 |     w = np.hamming(winSampleLength)
21 |     dataWinCentered = eegdata - np.mean(eegdata, axis=0)  # Remove offset
22 |     dataWinCenteredHam = (dataWinCentered.T*w).T
23 |
24 |     NFFT = nextpow2(winSampleLength)
25 |     Y = np.fft.fft(dataWinCenteredHam, n=NFFT, axis=0)/winSampleLength
26 |     PSD = 2*np.abs(Y[0:NFFT//2, :])  # integer division keeps the index an int in Python 3
27 |     f = Fs/2*np.linspace(0, 1, NFFT//2)
28 |
29 |     # SPECTRAL FEATURES
30 |     # Average of band powers
31 |     # Delta <4
32 |     ind_delta, = np.where(f < 4)
33 |     meanDelta = np.mean(PSD[ind_delta, :], axis=0)
34 |     # Theta 4-8
35 |     ind_theta, = np.where((f >= 4) & (f <= 8))
36 |     meanTheta = np.mean(PSD[ind_theta, :], axis=0)
37 |     # Low alpha 8-10
38 |     ind_alpha, = np.where((f >= 8) & (f <= 10))
39 |     meanLowAlpha = np.mean(PSD[ind_alpha, :], axis=0)
40 |     # Medium alpha 9-11 (computed for reference; not used in the feature vector below)
41 |     ind_alpha, = np.where((f >= 9) & (f <= 11))
42 |     meanMedAlpha = np.mean(PSD[ind_alpha, :], axis=0)
43 |     # High alpha 10-12
44 |     ind_alpha, = np.where((f >= 10) & (f <= 12))
45 |     meanHighAlpha = np.mean(PSD[ind_alpha, :], axis=0)
46 |     # Low beta 12-21
47 |     ind_beta, = np.where((f >= 12) & (f <= 21))
48 |     meanLowBeta = np.mean(PSD[ind_beta, :], axis=0)
49 |     # High beta 21-30
50 |     ind_beta, = np.where((f >= 21) & (f <= 30))
51 |     meanHighBeta = np.mean(PSD[ind_beta, :], axis=0)
52 |     # Alpha 8-12
53 |     ind_alpha, = np.where((f >= 8) & (f <= 12))
54 |     meanAlpha = np.mean(PSD[ind_alpha, :], axis=0)
55 |     # Beta 12-30
56 |     ind_beta, = np.where((f >= 12) & (f <= 30))
57 |     meanBeta = np.mean(PSD[ind_beta, :], axis=0)
58 |
59 |     feature_vector = np.concatenate((meanDelta, meanTheta, meanLowAlpha, meanHighAlpha,
60 |                                      meanLowBeta, meanHighBeta,
61 |                                      meanDelta/meanBeta, meanTheta/meanBeta,
62 |                                      meanAlpha/meanBeta, meanAlpha/meanTheta), axis=0)
63 |
64 |     feature_vector = np.log10(feature_vector)
65 |
66 |     return feature_vector
67 |
68 | def feature_names(ch_names):
69 |     """
70 |     Generate the names of the features
71 |
72 |     Arguments
73 |     ch_names: List with electrode names
74 |     """
75 |
76 |     bands = ['pwr-delta', 'pwr-theta', 'pwr-low-alpha', 'pwr-high-alpha',
77 |              'pwr-low-beta', 'pwr-high-beta',
78 |              'pwr-delta/beta', 'pwr-theta/beta', 'pwr-alpha/beta', 'pwr-alpha/theta']
79 |     feat_names = []
80 |     for band in bands:
81 |         for ch in range(0, len(ch_names) - 1):
82 |             # Last column is omitted because it is the Status channel
83 |             feat_names.append(band + '-' + ch_names[ch])
84 |
85 |     return feat_names
86 |
87 | def nextpow2(i):
88 |     """ Find the next power of 2 for number i """
89 |     n = 1
90 |     while n < i:
91 |         n *= 2
92 |     return n
93 |
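# --- Editor's aside (illustrative sketch, not part of the original file) ---
# A quick sanity check of compute_feature_vector() on synthetic data, assuming
# it is run from this extra_stuff directory. A pure 10 Hz sine should make the
# alpha-related features dominate. All values below are ours.

import numpy as np
from compute_feature_vector_advanced import compute_feature_vector, feature_names

Fs = 256
t = np.arange(Fs) / Fs                       # 1 s of data
eeg = np.sin(2 * np.pi * 10 * t)[:, None]    # one channel, 10 Hz "alpha" wave
status = np.zeros((Fs, 1))                   # dummy Status column (dropped inside)
feats = compute_feature_vector(np.hstack([eeg, status]), Fs)
for name, value in zip(feature_names(['CH1', 'Status']), feats):
    print('%-22s %8.2f' % (name, value))     # log10 band powers and ratios
# --- end of editor's aside ---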
--------------------------------------------------------------------------------
/python/extra_stuff/livebargraph.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 | # -*- coding: utf-8 -*-
3 | """
4 | Created on Fri Aug 25 21:11:45 2017
5 |
6 | @author: hubert
7 | """
8 |
9 | import numpy as np
10 | import matplotlib.pyplot as plt
11 |
12 |
13 | class LiveBarGraph(object):
14 |     """Live bar graph of EEG band powers, one bar per (band, channel) pair.
15 |     """
16 |     def __init__(self, band_names=['delta', 'theta', 'alpha', 'beta'],
17 |                  ch_names=['TP9', 'AF7', 'AF8', 'TP10']):
18 |         """Set up an empty bar plot with len(band_names) * len(ch_names) bars.
19 |         """
20 |         self.band_names = band_names
21 |         self.ch_names = ch_names
22 |         self.n_bars = len(self.band_names) * len(self.ch_names)
23 |
24 |         self.x = np.arange(self.n_bars)
25 |
26 |         self.fig, self.ax = plt.subplots()
27 |         self.ax.set_ylim((0, 1))
28 |
29 |         y = np.zeros((self.n_bars,))
30 |
31 |
32 |         self.rects = self.ax.bar(self.x, y)
33 |
34 |     def update(self, new_y):
35 |         [rect.set_height(y) for rect, y in zip(self.rects, new_y)]
36 |
37 |
38 | if __name__ == '__main__':
39 |
40 |     bar = LiveBarGraph()
41 |     plt.show(block=False)
42 |
43 |     while True:
44 |         bar.update(np.random.random(bar.n_bars))
45 |         plt.pause(0.1)
46 |
47 |
48 |
49 |
50 |
51 |
--------------------------------------------------------------------------------
/python/extra_stuff/mules.py:
--------------------------------------------------------------------------------
1 | '''
2 | Authors: Alexandre Drouin-Picaro and Raymundo Cassani
3 | April 2015
4 |
5 | This file contains the MuLES class, which handles a TCP/IP connection to the
6 | MuLES software: it sends commands, gets data and sends triggers, among other functions.
7 |
8 | Methods:
9 |     __init__(ip, port)
10 |     connect()
11 |     disconnect()
12 |     kill()
13 |     sendcommand(command)
14 |     flushdata()
15 |     sendtrigger(trigger)
16 |     getparams()
17 |     getfs()
18 |     getdevicename()
19 |     getmessage()
20 |     getheader()
21 |     parseheader(package)
22 |     getnames()
23 |     getalldata()
24 |     parsedata(package)
25 |     getdata(seconds, flush)
26 |
27 | '''
28 |
29 | import socket
30 | import numpy as np
31 | import array
32 | import struct
33 | import sys
34 |
35 | class MulesClient():
36 |     """
37 |     This class represents a TCP/IP client for the MuLES software.
38 |
39 |     """
40 |
41 |     def __init__(self, ip, port):
42 |         """
43 |         Constructor method. This method connects to a MuLES instance, then requests and
44 |         retrieves the following information about the data acquisition device being used:
45 |             Device name
46 |             Device hardware
47 |             Sampling frequency (samples/second)
48 |             Data format
49 |             Number of channels
50 |             Extra parameters
51 |
52 |         Arguments:
53 |             ip: the IP address to be used to connect to the MuLES server.
54 |             port: the port to use for a particular MuLES client. Every instance of MuLES should
55 |                 use a different port. To determine which port to use, please refer to the
56 |                 configuration file you are using for each instance of MuLES.
57 |         """
58 |         self.ip = ip
59 |         self.port = port
60 |         self.python2 = sys.version_info < (3, 0)
61 |
62 |         # TCP/IP connection
63 |         self.connect()
64 |         # Header information
65 |         dev_name, dev_hardware, fs, data_format, nCh = self.getheader()
66 |         channel_names = self.getnames()
67 |         # Dictionary containing information about the device
68 |         self.params = {'device name': dev_name,
69 |                        'device hardware': dev_hardware,
70 |                        'sampling frequency': int(fs),
71 |                        'data format': data_format,
72 |                        'number of channels': nCh,
73 |                        'names of channels': channel_names}
74 |
75 |
76 |     def connect(self):
77 |         """
78 |         If, for some reason, the connection should be lost, this method can be used
79 |         to attempt to reconnect to the MuLES server. An exception is raised if the
80 |         reconnection attempt is unsuccessful.
81 |         """
82 |         print('Attempting connection')
83 |         try:
84 |             self.client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
85 |             self.client.connect((self.ip, self.port))
86 |             print('Connection successful')
87 |         except:
88 |             self.client = None
89 |             print('Connection attempt unsuccessful')
90 |             raise
91 |
92 |     def disconnect(self):
93 |         """
94 |         This method shuts down the connection to the MuLES server and sets client to None.
95 |         The connection parameters are preserved, so the connection can later be reestablished
96 |         by using the connect() method.
97 |         """
98 |         self.client.close()
99 |         self.client = None
100 |         print('Connection closed successfully')
101 |
102 |     def kill(self):
103 |         """
104 |         This method sends the 'Kill' command to the MuLES software, which causes
105 |         it to end its execution.
106 |         """
107 |         self.sendcommand('K')
108 |
109 |     def sendcommand(self, command):
110 |         """
111 |         Sends an arbitrary command to the MuLES software.
112 |
113 |         Arguments:
114 |             command: the command to be sent.
115 |         """
116 |         if self.python2:
117 |             self.client.send(command)
118 |         else:
119 |             self.client.send(bytearray(command, 'ISO-8859-1'))
120 |
121 |
122 |     def flushdata(self):
123 |         """
124 |         Convenience method.
125 |
126 |         This method flushes the data from the MuLES software. This is equivalent to calling
127 |         sendcommand('F').
128 |         """
129 |         self.sendcommand('F')
130 |
131 |     def sendtrigger(self, trigger):
132 |         """
133 |         Send a trigger to the MuLES software.
134 |
135 |         Arguments:
136 |             trigger: the trigger to be sent; it has to be in the range [1, 64].
137 |         """
138 |         print('Trigger: ' + str(trigger) + ' was sent')
139 |         self.sendcommand(chr(trigger))
140 |
141 |     def getparams(self):
142 |         """
143 |         Returns the data acquisition device's parameters. These are stored in a dictionary.
144 |         To obtain a value from the dictionary, the following strings should be used:
145 |             'device name'
146 |             'device hardware'
147 |             'sampling frequency'
148 |             'data format'
149 |             'number of channels'
150 |
151 |         Returns:
152 |             A dictionary containing information about the device.
153 |         """
154 |         return self.params
155 |
156 |     def getfs(self):
157 |         """
158 |         Retrieves the sampling frequency 'fs' [Hz]
159 |         """
160 |         return self.params['sampling frequency']
161 |
162 |     def getdevicename(self):
163 |         """
164 |         Retrieves the name of the device
165 |         """
166 |         return self.params['device name']
167 |
168 |     def getmessage(self):
169 |         """
170 |         This gets a message sent by MuLES and returns a byte array with the
171 |         message content
172 |         """
173 |         n_bytes_4B = array.array('B', self.client.recv(4))
174 |         n_bytes = struct.unpack('i', n_bytes_4B[::-1])[0]
175 |         # n_bytes equals the number of bytes to read from the connection
176 |         # [0] is used to get an int32 rather than a tuple
177 |         # The next while loop secures the integrity of the package
178 |         package = ''
179 |
180 |         while len(package) < n_bytes:
181 |             if self.python2:
182 |                 package += self.client.recv(1)
183 |             else:
184 |                 package += self.client.recv(1).decode("ISO-8859-1")
185 |
186 |         return package
187 |
188 |     def getheader(self):
189 |         """
190 |         Requests and retrieves header information from MuLES
191 |         """
192 |         #print('Header request')
193 |         self.sendcommand('H')
194 |         return self.parseheader(self.getmessage())
195 |
196 |     def parseheader(self, package):
197 |         """
198 |         This function parses the header package sent by MuLES to obtain the
199 |         device's parameters: NAME, HARDWARE, FS, DATAFORMAT, #CH, EXTRA
200 |
201 |         Argument:
202 |             package: Header package sent by MuLES.
203 |         """
204 |
205 |         array_header = package.split(',')
206 |         for field in array_header:
207 |             if field.find('NAME') != -1:
208 |                 ind = field.find('NAME')
209 |                 dev_name = field[ind+len('NAME='):]
210 |             elif field.find('HARDWARE') != -1:
211 |                 ind = field.find('HARDWARE')
212 |                 dev_hardware = field[ind+len('HARDWARE='):]
213 |             elif field.find('FS') != -1:
214 |                 ind = field.find('FS')
215 |                 fs = float(field[ind+len('FS='):])
216 |             elif field.find('DATA') != -1:
217 |                 ind = field.find('DATA')
218 |                 data_format = field[ind+len('DATA='):]
219 |             elif field.find('#CH') != -1:
220 |                 ind = field.find('#CH')
221 |                 nCh = int(field[ind+len('#CH='):])
222 |
223 |         return dev_name, dev_hardware, fs, data_format, nCh
224 |
225 |     def getnames(self):
226 |         """
227 |         Requests and retrieves the names of channels from MuLES
228 |         """
229 |         #print('Names Request')
230 |         self.sendcommand('N')
231 |         return self.getmessage().split(',')
232 |
233 |
234 |     def getalldata(self):
235 |         """
236 |         Requests and retrieves ALL data present in the MuLES buffer
237 |         (data collected since the last Flush or last DataRequest)
238 |         in the shape
239 |             [samples, channels]
240 |         """
241 |         #print('Data Request')
242 |         self.sendcommand('R')
243 |         return self.parsedata(self.getmessage())
244 |
245 |     def parsedata(self, package):
246 |         """
247 |         This function parses the data package sent by MuLES to obtain all the data
248 |         available in MuLES as a matrix of size [n_samples, n_columns]; the total
249 |         number of elements in the matrix is therefore n_samples * n_columns. Each
250 |         column represents one channel.
251 |
252 |         Argument:
253 |             package: Data package sent by MuLES.
254 |         """
255 |         size_element = 4  # Size of each one of the elements is 4 bytes
256 |
257 |         n_columns = len(self.params['data format'])
258 |         n_bytes = len(package)
259 |         n_samples = (n_bytes // size_element) // n_columns  # integer division keeps the reshape sizes ints in Python 3
260 |         #### mesData = np.uint8(mesData)  # Convert from binary to integers (not necessary in Python)
261 |         if self.python2:
262 |             bytes_per_element = np.flipud(np.reshape(list(bytearray(package)), [size_element, -1], order='F'))
263 |         else:
264 |             bytes_per_element = np.flipud(np.reshape(list(bytearray(package, 'ISO-8859-1')), [size_element, -1], order='F'))
265 |         # Changes "package" to a list with size (n_bytes, 1)
266 |         # Reshapes the list into a matrix bytes_per_element which has the size (4, n_bytes/4)
267 |         # Flips Up-Down the matrix of size (4, n_bytes/4) to correct the swap in bytes
268 |
269 |         package_correct_order = np.uint8(np.reshape(bytes_per_element, [n_bytes, -1], order='F'))
270 |         # Unrolls the matrix bytes_per_element into "package_correct_order",
271 |         # which has a size of (n_bytes, 1)
272 |
273 |         data_format_tags = self.params['data format']*n_samples
274 |         # Tags used to map the elements into their corresponding representation
275 |         package_correct_order_char = "".join(map(chr, package_correct_order))
276 |
277 |         if self.python2:
278 |             elements = struct.unpack(data_format_tags, package_correct_order_char)
279 |         else:
280 |             elements = struct.unpack(data_format_tags, bytearray(package_correct_order_char, 'ISO-8859-1'))
281 |         # Elements are cast into their corresponding representation
282 |         data = np.reshape(np.array(elements), [n_samples, n_columns], order='C')
283 |         # Elements are reshaped into data [n_samples, n_columns]
284 |
285 |         return data
286 |
287 |     def getdata(self, seconds, flush=True):
288 |         """
289 |         Flushes all the data present in the MuLES buffer, then
290 |         requests and retrieves a certain amount of data, indicated in seconds.
291 |         Data returned has the shape [seconds * sampling_frequency, channels]
292 |
293 |         Argument:
294 |             seconds: used to calculate the amount of samples requested, n_samples
295 |                 n_samples = seconds * sampling_frequency
296 |             flush: Boolean; if True, send the command Flush before getting data.
297 |                 Default = True
298 |         """
299 |         if flush:
300 |             self.flushdata()
301 |
302 |         # Size of data requested
303 |         n_samples = int(round(seconds * self.params['sampling frequency']))
304 |         n_columns = len(self.params['data format'])
305 |         data_buffer = -1 * np.ones((n_samples, n_columns))
306 |
307 |         while (data_buffer[0, n_columns - 1]) < 0:  # While the first row has not been rewritten
308 |             new_data = self.getalldata()
309 |             new_samples = new_data.shape[0]
310 |             data_buffer = np.concatenate((data_buffer, new_data), axis=0)
311 |             data_buffer = np.delete(data_buffer, np.s_[0:new_samples], 0)
312 |
313 |         return data_buffer
314 |
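# --- Editor's aside (illustrative usage sketch, not part of the original file) ---
# Typical use of MulesClient; assumes a MuLES server is already running. The
# IP and port below are placeholders -- the port comes from your own MuLES
# configuration file, not from this repository.

from mules import MulesClient

mules = MulesClient('127.0.0.1', 30000)       # placeholder address and port
print(mules.getdevicename(), mules.getfs())   # device name, sampling rate [Hz]
eeg = mules.getdata(2)                        # ~2 s of data: [2 * fs, n_columns]
mules.disconnect()
# --- end of editor's aside ---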
--------------------------------------------------------------------------------
/python/livebargraph.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 | # -*- coding: utf-8 -*-
3 | """
4 | Created on Fri Aug 25 21:11:45 2017
5 |
6 | @author: hubert
7 | """
8 |
9 | import numpy as np
10 | import matplotlib.pyplot as plt
11 |
12 |
13 | class LiveBarGraph(object):
14 |     """Live bar graph of EEG band powers, one bar per (band, channel) pair.
15 |     """
16 |     def __init__(self, band_names=['delta', 'theta', 'alpha', 'beta'],
17 |                  ch_names=['TP9', 'AF7', 'AF8', 'TP10']):
18 |         """Set up an empty bar plot with len(band_names) * len(ch_names) bars.
19 |         """
20 |         self.band_names = band_names
21 |         self.ch_names = ch_names
22 |         self.n_bars = len(self.band_names) * len(self.ch_names)
23 |
24 |         self.fig, self.ax = plt.subplots()
25 |         self.ax.set_ylim((0, 1))
26 |
27 |         y = np.zeros((self.n_bars,))
28 |         x = range(self.n_bars)
29 |
30 |         self.rects = self.ax.bar(x, y)
31 |
32 |     def update(self, new_y):
33 |         [rect.set_height(y) for rect, y in zip(self.rects, new_y)]
34 |
35 |
36 | if __name__ == '__main__':
37 |
38 |     bar = LiveBarGraph()
39 |     plt.show(block=False)
40 |
41 |     while True:
42 |         bar.update(np.random.random(bar.n_bars))
43 |         plt.pause(0.1)
44 |
--------------------------------------------------------------------------------
/python/lsl-viewer.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | import numpy as np
3 | import matplotlib.pyplot as plt
4 | from scipy.signal import butter, lfilter, lfilter_zi, firwin
5 | from time import sleep
6 | from pylsl import StreamInlet, resolve_byprop
7 | from optparse import OptionParser
8 | import seaborn as sns
9 | from threading import Thread
10 |
11 | sns.set(style="whitegrid")
12 |
13 | parser = OptionParser()
14 |
15 | parser.add_option("-w", "--window",
16 |                   dest="window", type='float', default=5.,
17 |                   help="window length to display in seconds.")
18 | parser.add_option("-s", "--scale",
19 |                   dest="scale", type='float', default=100,
20 |                   help="scale in uV")
21 | parser.add_option("-r", "--refresh",
22 |                   dest="refresh", type='float', default=0.2,
23 |                   help="refresh rate in seconds.")
24 | parser.add_option("-f", "--figure",
25 |                   dest="figure", type='string', default="15x6",
26 |                   help="window size.")
27 |
28 | filt = True
29 | subsample = 2
30 | buf = 12
31 |
32 | (options, args) = parser.parse_args()
33 |
34 | window = options.window
35 | scale = options.scale
36 | figsize = np.int16(options.figure.split('x'))
37 |
38 | print("looking for an EEG stream...")
39 | streams = resolve_byprop('type', 'EEG', timeout=2)
40 |
41 | if len(streams) == 0:
42 |     raise(RuntimeError("Can't find EEG stream"))
43 | print("Start acquiring data")
data") 44 | 45 | 46 | class LSLViewer(): 47 | 48 | def __init__(self, stream, fig, axes, window, scale, dejitter=True): 49 | """Init""" 50 | self.stream = stream 51 | self.window = window 52 | self.scale = scale 53 | self.dejitter = dejitter 54 | self.inlet = StreamInlet(stream, max_chunklen=buf) 55 | self.filt = True 56 | 57 | info = self.inlet.info() 58 | description = info.desc() 59 | 60 | self.sfreq = info.nominal_srate() 61 | self.n_samples = int(self.sfreq * self.window) 62 | self.n_chan = info.channel_count() 63 | 64 | ch = description.child('channels').first_child() 65 | ch_names = [ch.child_value('label')] 66 | 67 | for i in range(self.n_chan): 68 | ch = ch.next_sibling() 69 | ch_names.append(ch.child_value('label')) 70 | 71 | self.ch_names = ch_names 72 | 73 | fig.canvas.mpl_connect('key_press_event', self.OnKeypress) 74 | fig.canvas.mpl_connect('button_press_event', self.onclick) 75 | 76 | self.fig = fig 77 | self.axes = axes 78 | 79 | sns.despine(left=True) 80 | 81 | self.data = np.zeros((self.n_samples, self.n_chan)) 82 | self.times = np.arange(-self.window, 0, 1./self.sfreq) 83 | impedances = np.std(self.data, axis=0) 84 | lines = [] 85 | 86 | for ii in range(self.n_chan): 87 | line, = axes.plot(self.times[::subsample], 88 | self.data[::subsample, ii] - ii, lw=1) 89 | lines.append(line) 90 | self.lines = lines 91 | 92 | axes.set_ylim(-self.n_chan + 0.5, 0.5) 93 | ticks = np.arange(0, -self.n_chan, -1) 94 | 95 | axes.set_xlabel('Time (s)') 96 | axes.xaxis.grid(False) 97 | axes.set_yticks(ticks) 98 | 99 | ticks_labels = ['%s - %.1f' % (ch_names[ii], impedances[ii]) 100 | for ii in range(self.n_chan)] 101 | axes.set_yticklabels(ticks_labels) 102 | 103 | self.display_every = int(0.2 / (12/self.sfreq)) 104 | 105 | # self.bf, self.af = butter(4, np.array([1, 40])/(self.sfreq/2.), 106 | # 'bandpass') 107 | 108 | self.bf = firwin(32, np.array([1, 40])/(self.sfreq/2.), width=0.05, 109 | pass_zero=False) 110 | self.af = [1.0] 111 | 112 | zi = lfilter_zi(self.bf, self.af) 113 | self.filt_state = np.tile(zi, (self.n_chan, 1)).transpose() 114 | self.data_f = np.zeros((self.n_samples, self.n_chan)) 115 | 116 | def update_plot(self): 117 | k = 0 118 | while self.started: 119 | samples, timestamps = self.inlet.pull_chunk(timeout=1.0, 120 | max_samples=12) 121 | if timestamps: 122 | if self.dejitter: 123 | timestamps = np.float64(np.arange(len(timestamps))) 124 | timestamps /= self.sfreq 125 | timestamps += self.times[-1] + 1./self.sfreq 126 | self.times = np.concatenate([self.times, timestamps]) 127 | self.n_samples = int(self.sfreq * self.window) 128 | self.times = self.times[-self.n_samples:] 129 | self.data = np.vstack([self.data, samples]) 130 | self.data = self.data[-self.n_samples:] 131 | filt_samples, self.filt_state = lfilter( 132 | self.bf, self.af, 133 | samples, 134 | axis=0, zi=self.filt_state) 135 | self.data_f = np.vstack([self.data_f, filt_samples]) 136 | self.data_f = self.data_f[-self.n_samples:] 137 | k += 1 138 | if k == self.display_every: 139 | 140 | if self.filt: 141 | plot_data = self.data_f 142 | elif not self.filt: 143 | plot_data = self.data - self.data.mean(axis=0) 144 | for ii in range(self.n_chan): 145 | self.lines[ii].set_xdata(self.times[::subsample] - 146 | self.times[-1]) 147 | self.lines[ii].set_ydata(plot_data[::subsample, ii] / 148 | self.scale - ii) 149 | impedances = np.std(plot_data, axis=0) 150 | 151 | ticks_labels = ['%s - %.2f' % (self.ch_names[ii], 152 | impedances[ii]) 153 | for ii in range(self.n_chan)] 154 | 
154 |                     self.axes.set_yticklabels(ticks_labels)
155 |                     self.axes.set_xlim(-self.window, 0)
156 |                     self.fig.canvas.draw()
157 |                     k = 0
158 |             else:
159 |                 sleep(0.2)
160 |
161 |     def onclick(self, event):
162 |         print((event.button, event.x, event.y, event.xdata, event.ydata))
163 |
164 |     def OnKeypress(self, event):
165 |         if event.key == '/':
166 |             self.scale *= 1.2
167 |         elif event.key == '*':
168 |             self.scale /= 1.2
169 |         elif event.key == '+':
170 |             self.window += 1
171 |         elif event.key == '-':
172 |             if self.window > 1:
173 |                 self.window -= 1
174 |         elif event.key == 'd':
175 |             self.filt = not(self.filt)
176 |
177 |     def start(self):
178 |         self.started = True
179 |         self.thread = Thread(target=self.update_plot)
180 |         self.thread.daemon = True
181 |         self.thread.start()
182 |
183 |     def stop(self):
184 |         self.started = False
185 |
186 |
187 | fig, axes = plt.subplots(1, 1, figsize=figsize, sharex=True)
188 | lslv = LSLViewer(streams[0], fig, axes, window, scale)
189 |
190 | help_str = """
191 |             toggle filter : d
192 |             toggle full screen : f
193 |             zoom out : /
194 |             zoom in : *
195 |             increase time scale : +
196 |             decrease time scale : -
197 |             """
198 | print(help_str)
199 | lslv.start()
200 |
201 | plt.show()
202 | lslv.stop()
203 |
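# --- Editor's aside (not part of the original file) ---
# Typical invocation once an EEG stream is on the network (e.g. started with
# muse-lsl.py below): display a 10 s window, scaled to 200 uV, in a 15x6 figure.
#
#     python lsl-viewer.py -w 10 -s 200 -f 15x6
# --- end of editor's aside ---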
--------------------------------------------------------------------------------
/python/muse-lsl.py:
--------------------------------------------------------------------------------
1 | from muse import Muse
2 | from time import sleep
3 | from pylsl import StreamInfo, StreamOutlet, local_clock
4 | from optparse import OptionParser
5 |
6 | parser = OptionParser()
7 | parser.add_option("-a", "--address",
8 |                   dest="address", type='string', default=None,
9 |                   help="device MAC address.")
10 | parser.add_option("-n", "--name",
11 |                   dest="name", type='string', default=None,
12 |                   help="name of the device.")
13 | parser.add_option("-b", "--backend",
14 |                   dest="backend", type='string', default="auto",
15 |                   help="pygatt backend to use. Can be auto, gatt or bgapi")
16 | parser.add_option("-i", "--interface",
17 |                   dest="interface", type='string', default=None,
18 |                   help="The interface to use, 'hci0' for gatt or a COM port for bgapi")
19 |
20 | (options, args) = parser.parse_args()
21 |
22 | info = StreamInfo('Muse', 'EEG', 5, 256, 'float32',
23 |                   'Muse%s' % options.address)
24 |
25 | info.desc().append_child_value("manufacturer", "Muse")
26 | channels = info.desc().append_child("channels")
27 |
28 | for c in ['TP9', 'AF7', 'AF8', 'TP10', 'Right AUX']:
29 |     channels.append_child("channel") \
30 |         .append_child_value("label", c) \
31 |         .append_child_value("unit", "microvolts") \
32 |         .append_child_value("type", "EEG")
33 | outlet = StreamOutlet(info, 12, 360)
34 |
35 |
36 | def process(data, timestamps):
37 |     for ii in range(12):
38 |         outlet.push_sample(data[:, ii], timestamps[ii])
39 |
40 | muse = Muse(address=options.address, callback=process,
41 |             backend=options.backend, time_func=local_clock,
42 |             interface=options.interface, name=options.name)
43 |
44 | muse.connect()
45 | print('Connected')
46 | muse.start()
47 | print('Streaming')
48 |
49 | while 1:
50 |     try:
51 |         sleep(1)
52 |     except KeyboardInterrupt:
53 |         break
54 |
55 | muse.stop()
56 | muse.disconnect()
57 | print('Disconnected')
58 |
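# --- Editor's aside (not part of the original file) ---
# Example invocations; the MAC address and COM port below are placeholders for
# your own headband and dongle. On macOS/Windows with a BLED112 dongle, select
# the BGAPI backend and its serial port:
#
#     python muse-lsl.py --address 00:55:DA:B0:XX:XX
#     python muse-lsl.py --name Muse-XXXX -b bgapi -i COM6
# --- end of editor's aside ---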
--------------------------------------------------------------------------------
/python/muse/__init__.py:
--------------------------------------------------------------------------------
1 | from .muse import Muse
2 |
--------------------------------------------------------------------------------
/python/muse/muse.py:
--------------------------------------------------------------------------------
1 | import bitstring
2 | import pygatt
3 | import numpy as np
4 | from time import time, sleep
5 | from sys import platform
6 |
7 |
8 | class Muse():
9 |     """Muse 2016 headband"""
10 |
11 |     def __init__(self, address=None, callback=None, eeg=True, accelero=False,
12 |                  giro=False, backend='auto', interface=None, time_func=time,
13 |                  name=None):
14 |         """Initialize"""
15 |         self.address = address
16 |         self.name = name
17 |         self.callback = callback
18 |         self.eeg = eeg
19 |         self.accelero = accelero
20 |         self.giro = giro
21 |         self.interface = interface
22 |         self.time_func = time_func
23 |
24 |         if backend in ['auto', 'gatt', 'bgapi']:
25 |             if backend == 'auto':
26 |                 if platform == "linux" or platform == "linux2":
27 |                     self.backend = 'gatt'
28 |                 else:
29 |                     self.backend = 'bgapi'
30 |             else:
31 |                 self.backend = backend
32 |         else:
33 |             raise(ValueError('Backend must be auto, gatt or bgapi'))
34 |
35 |     def connect(self, interface=None, backend='auto'):
36 |         """Connect to the device"""
37 |
38 |         if self.backend == 'gatt':
39 |             self.interface = self.interface or 'hci0'
40 |             self.adapter = pygatt.GATTToolBackend(self.interface)
41 |         else:
42 |             self.adapter = pygatt.BGAPIBackend(serial_port=self.interface)
43 |
44 |         self.adapter.start()
45 |
46 |         if self.address is None:
47 |             address = self.find_muse_address(self.name)
48 |             if address is None:
49 |                 raise(ValueError("Can't find Muse Device"))
50 |             else:
51 |                 self.address = address
52 |         self.device = self.adapter.connect(self.address)
53 |
54 |         # subscribe to the EEG stream
55 |         if self.eeg:
56 |             self._subscribe_eeg()
57 |
58 |         # subscribe to the accelerometer
59 |         if self.accelero:
60 |             raise(NotImplementedError('Accelerometer not implemented'))
61 |
62 |         # subscribe to the gyroscope
63 |         if self.giro:
64 |             raise(NotImplementedError('Gyroscope not implemented'))
65 |
66 |     def find_muse_address(self, name=None):
67 |         """Look for a BLE device with 'Muse' in its name"""
68 |         list_devices = self.adapter.scan(timeout=10.5)
69 |         for device in list_devices:
70 |             print(device)
71 |             if name:
72 |                 if device['name'] == name:
73 |                     print('Found device %s : %s' % (device['name'],
74 |                                                     device['address']))
75 |                     return device['address']
76 |
77 |             elif 'Muse' in device['name']:
78 |                 print('Found device %s : %s' % (device['name'],
79 |                                                 device['address']))
80 |                 return device['address']
81 |
82 |         return None
83 |
84 |     def start(self):
85 |         """Start streaming."""
86 |         self._init_sample()
87 |         self.last_tm = 0
88 |         self.device.char_write_handle(0x000e, [0x02, 0x64, 0x0a], False)
89 |
90 |     def stop(self):
91 |         """Stop streaming."""
92 |         self.device.char_write_handle(0x000e, [0x02, 0x68, 0x0a], False)
93 |
94 |     def disconnect(self):
95 |         """Disconnect."""
96 |         self.device.disconnect()
97 |         self.adapter.stop()
98 |
99 |     def _subscribe_eeg(self):
100 |         """Subscribe to the EEG stream."""
101 |         self.device.subscribe('273e0003-4c4d-454d-96be-f03bac821358',
102 |                               callback=self._handle_eeg)
103 |         self.device.subscribe('273e0004-4c4d-454d-96be-f03bac821358',
104 |                               callback=self._handle_eeg)
105 |         self.device.subscribe('273e0005-4c4d-454d-96be-f03bac821358',
106 |                               callback=self._handle_eeg)
107 |         self.device.subscribe('273e0006-4c4d-454d-96be-f03bac821358',
108 |                               callback=self._handle_eeg)
109 |         self.device.subscribe('273e0007-4c4d-454d-96be-f03bac821358',
110 |                               callback=self._handle_eeg)
111 |
112 |     def _unpack_eeg_channel(self, packet):
113 |         """Decode data packet of one EEG channel.
114 |
115 |         Each packet is encoded with a 16-bit timestamp followed by 12 time
116 |         samples with a 12-bit resolution.
117 |         """
118 |         aa = bitstring.Bits(bytes=packet)
119 |         pattern = "uint:16,uint:12,uint:12,uint:12,uint:12,uint:12,uint:12, \
120 |                    uint:12,uint:12,uint:12,uint:12,uint:12,uint:12"
121 |         res = aa.unpack(pattern)
122 |         timestamp = res[0]
123 |         data = res[1:]
124 |         # 12 bits on a 2 mVpp range: 2000 uV / 4096 = 0.48828125 uV per bit, centered on 2048
125 |         data = 0.48828125 * (np.array(data) - 2048)
126 |         return timestamp, data
127 |
128 |     def _init_sample(self):
129 |         """Initialize the arrays that store the samples"""
130 |         self.timestamps = np.zeros(5)
131 |         self.data = np.zeros((5, 12))
132 |
133 |     def _handle_eeg(self, handle, data):
134 |         """Callback for receiving a sample.
135 |
136 |         Samples are received in this order: 44, 41, 38, 32, 35;
137 |         wait until we get 35 and call the data callback.
138 |         """
139 |         timestamp = self.time_func()
140 |         index = int((handle - 32) / 3)
141 |         tm, d = self._unpack_eeg_channel(data)
142 |
143 |         if self.last_tm == 0:
144 |             self.last_tm = tm - 1
145 |
146 |         self.data[index] = d
147 |         self.timestamps[index] = timestamp
148 |         # last data received
149 |         if handle == 35:
150 |             if tm != self.last_tm + 1:
151 |                 print("missing sample %d : %d" % (tm, self.last_tm))
152 |             self.last_tm = tm
153 |             # timestamp the 12 samples by counting back from the earliest timestamp received (256 Hz)
154 |             timestamps = np.arange(-12, 0) / 256.
155 |             timestamps += np.min(self.timestamps[self.timestamps != 0])
156 |             self.callback(self.data, timestamps)
157 |             self._init_sample()
158 |
--------------------------------------------------------------------------------
/workshop_slides.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NeuroTechX/bci-workshop/2ce12f3b44e89f35bc1f04c00a184a372cddfe1e/workshop_slides.pdf
--------------------------------------------------------------------------------