├── .gitignore
├── 1-regression.ipynb
├── 2-lvms.ipynb
├── 3-discrete.ipynb
├── LICENSE
├── README.md
└── requirements.txt

/.gitignore:
--------------------------------------------------------------------------------
__pycache__/
.DS_Store
.ipynb_checkpoints
*/.ipynb_checkpoints/*

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
MIT License

Copyright (c) 2023 Kris Jensen, David Liu, Marine Schimel

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# COSYNE 2023 Workshop on Generative Models

*What I cannot create I do not understand: analyzing neural and behavioral data with generative models*

[Website](https://sites.google.com/cam.ac.uk/cosyne-generative-workshop/)

[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.7701723.svg)](https://doi.org/10.5281/zenodo.7701723)

## Introduction

A central goal of systems neuroscience is to understand how high-dimensional neural activity relates to complex stimuli and behaviours. Recent advances in neural and behavioural recording techniques have enabled routine collection of large datasets to answer these questions. With access to such rich data, we are now able to describe activity at the level of neural populations rather than individual neurons, and we can look at the neural underpinnings of increasingly complex and naturalistic behaviours. Unfortunately, it can be challenging to extract interpretable structure from such high-dimensional, often noisy, data. Generative modelling is a powerful approach from probabilistic machine learning that can reveal this structure by learning the statistical properties of the recorded data, often under the assumption that the high-dimensional observations arise from some lower-dimensional ‘latent’ process. Moreover, constructing such generative models makes it possible to build prior knowledge about the data directly into the analysis pipeline, such as multi-region structure or temporal continuity. This makes it possible both to make more efficient use of the available data by building in appropriate inductive biases, and to make the models more interpretable by shaping them according to the known structure of the data. Given the wealth of advances in generative modelling for systems neuroscience in recent years, we think the time is ripe to review this progress and discuss both challenges and opportunities for the future.

## Example notebooks

We provide three example notebooks which implement and discuss a range of generative models commonly used in neuroscience.

**1. Regression**\
This notebook considers methods used for regression: the case where we have both some observations and a set of regressors that we think can predict those observations.
We start from the simple case of linear regression and reformulate it as a Bayesian method, which can be generalized to the more complicated but powerful *Gaussian process* regression (see this [video](https://www.youtube.com/watch?v=cQAPIlMeL_g) for a more thorough overview of the use of Gaussian processes in systems neuroscience).
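
To make the Bayesian reformulation concrete, here is a minimal sketch (an illustration, not code from the notebook) of Bayesian linear regression with a Gaussian prior over the weights; the prior and noise variances `alpha` and `sigma2` are arbitrary choices for this toy example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: noisy observations of a linear function of 1D inputs.
X = np.linspace(-3, 3, 40)[:, None]
Phi = np.hstack([np.ones_like(X), X])      # design matrix with a bias column
y = Phi @ np.array([0.5, -1.2]) + 0.3 * rng.normal(size=len(X))

alpha, sigma2 = 1.0, 0.3**2                # prior and noise variances (illustrative)

# Posterior over the weights under a Gaussian prior N(0, alpha * I).
Sigma = np.linalg.inv(Phi.T @ Phi / sigma2 + np.eye(2) / alpha)
mu = Sigma @ Phi.T @ y / sigma2

# Posterior predictive mean and variance at new inputs.
X_star = np.linspace(-4, 4, 5)[:, None]
Phi_star = np.hstack([np.ones_like(X_star), X_star])
pred_mean = Phi_star @ mu
pred_var = np.einsum("ij,jk,ik->i", Phi_star, Sigma, Phi_star) + sigma2
```

The same posterior-predictive logic carries over to Gaussian process regression, where the prior is placed over functions rather than over a finite set of weights.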
**2. Latent variable models (LVMs)**\
Having treated the case of regression, we then move on to latent variable models. This *unsupervised learning* setting generalizes regression to the case where we do not know the regressors but instead have to *infer* them from the data.
This inference process is often complicated, which calls for simplifying assumptions such as Gaussianity or linearity.
In this notebook, we consider both linear and non-linear methods and look at the importance of such modelling choices when analysing high-dimensional data.
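
As a minimal linear illustration (again a sketch, not code from the notebook), the simplest such model can be fit with scikit-learn's `FactorAnalysis`, which returns the posterior mean of the latents; all dimensions and the noise level here are illustrative:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Simulate high-dimensional observations driven by a low-dimensional latent process.
T, D, K = 500, 20, 2                  # time points, observed dims, latent dims
Z = rng.normal(size=(T, K))           # true latents (unknown to the model)
C = rng.normal(size=(K, D))           # loading matrix
Y = Z @ C + 0.1 * rng.normal(size=(T, D))

# Fit a linear Gaussian LVM (factor analysis) and infer the latents from the data.
fa = FactorAnalysis(n_components=K)
Z_hat = fa.fit_transform(Y)           # posterior mean of the latents, shape (500, 2)
```

Non-linear methods such as GPLVMs replace the linear map `C` with a more flexible function, at the cost of more involved inference.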
**3. Discrete state spaces**\
In both of the above cases, we worked with *continuous* state spaces.
However, it is becoming increasingly common in systems neuroscience to also work with discrete state spaces, such as motivational states or different regimes of neural dynamics.
In this notebook, we start from the simple hidden Markov model (HMM) for inferring discrete states and transitions from data, and then generalize to the increasingly popular switching linear dynamical system (SLDS) for modelling neural data.
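
For orientation, here is a minimal sketch of this HMM workflow using the [ssm](https://github.com/lindermanlab/ssm) library (installation instructions below); the numbers of states, dimensions, and time points are illustrative:

```python
import ssm

K, D, T = 3, 10, 1000   # discrete states, observed dims, time points (illustrative)

# Simulate observations from a ground-truth HMM with Gaussian emissions.
true_hmm = ssm.HMM(K, D, observations="gaussian")
states, obs = true_hmm.sample(T)

# Fit a fresh HMM to the observations and decode the discrete states.
hmm = ssm.HMM(K, D, observations="gaussian")
hmm.fit(obs)
inferred_states = hmm.most_likely_states(obs)
```

An SLDS extends this by attaching a linear dynamical system to each discrete state, so that the model captures both the discrete regimes and the continuous dynamics within them.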
## Instructions to run locally 💻

To run the notebooks, you need to install the software dependencies in Python 3 (3.7 or higher). Note that on Windows, [some extra work](https://github.com/cloudhan/jax-windows-builder) is needed to install JAX. Open a terminal, then:

1. Create a Python 3 virtual environment in a directory of your choice

```
mkdir /path_to_environment/
python3 -m venv /path_to_environment/
```

2. Activate the new environment

```
. /path_to_environment/bin/activate
```

3. Install the required dependencies

```
python3 -m pip install -r requirements.txt
```

4. Register the new environment as a Jupyter kernel

```
python3 -m ipykernel install --user --name=cosyne_2023
```

5. We also make use of the [mgplvm](https://github.com/tachukao/mgplvm-pytorch/tree/cosyne2023) and [ssm](https://github.com/lindermanlab/ssm) libraries, which can be installed as follows:

```
cd ..
python3 -m pip install git+https://github.com/tachukao/mgplvm-pytorch@cosyne2023
git clone https://github.com/lindermanlab/ssm.git
cd ssm
python3 -m pip install numpy cython
python3 -m pip install -e .
cd ../cosyne-2023-generative-models/
```

6. You should now be able to run the notebooks with all dependencies available using the `cosyne_2023` IPython kernel.

## Literature

- **Review of linear Gaussian LVMs** ([Roweis & Ghahramani, 1999](https://ieeexplore.ieee.org/abstract/document/6790691))
- **Gaussian process factor analysis** ([Yu et al., 2009](https://proceedings.neurips.cc/paper/2008/hash/ad972f10e0800b49d76fed33a21f6698-Abstract.html))
- **Bayesian GPFA** ([Jensen et al., 2021](https://proceedings.neurips.cc/paper/2021/hash/58238e9ae2dd305d79c2ebc8c1883422-Abstract.html))
- **Gaussian process latent variable models** ([Wu et al., 2017](https://proceedings.neurips.cc/paper/2017/hash/b3b4d2dbedc99fe843fd3dedb02f086f-Abstract.html))
- **Manifold GPLVMs** ([Jensen et al., 2020](https://proceedings.neurips.cc/paper/2020/hash/fedc604da8b0f9af74b6cfc0fab2163c-Abstract.html))
- **Universal count model** ([Liu & Lengyel, 2021](https://proceedings.neurips.cc/paper/2021/hash/6f5216f8d89b086c18298e043bfe48ed-Abstract.html))
- **LFADS** ([Pandarinath et al., 2018](https://www.nature.com/articles/s41592-018-0109-9))
- **iLQR-VAE** ([Schimel et al., 2022](https://www.biorxiv.org/content/10.1101/2021.10.07.463540v2.abstract))
- **Recurrent switching linear dynamical systems** ([Linderman et al., 2017](https://proceedings.mlr.press/v54/linderman17a.html))

## Acknowledgements

We would like to thank the COSYNE Workshops organizing committee for giving us the opportunity to run our workshop in Montreal.
We are also grateful to Ta-Chu Kao, Guillaume Hennequin, Máté Lengyel, and many others in CBL and beyond for various discussions and for assistance with implementations of some of the methods.

## Citing this

Please cite the Zenodo DOI.

[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.7701723.svg)](https://doi.org/10.5281/zenodo.7701723)

--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
torch>=1.9
jaxlib
jax[cpu]
jupyter
numpy
scipy
matplotlib
scikit-learn
dynamax
autograd

--------------------------------------------------------------------------------