├── .gitignore ├── .travis.yml ├── LICENSE ├── README.rst ├── examples ├── Examples.html ├── Examples.ipynb └── data │ ├── coherentmotion_data.mat │ ├── contrast_data.mat │ └── speech_data.mat ├── pymtrf ├── __init__.py ├── helper.py ├── mtrf.py └── test │ ├── __init__.py │ ├── context.py │ ├── matlab_test_sets.m │ ├── mtrf_test_set.m │ ├── simulate_test_data.py │ ├── test_files │ ├── cross_val_equal_bwd.mat │ ├── cross_val_equal_fwd.mat │ ├── cross_val_unequal_fwd.mat │ ├── gendata.mat │ ├── mtrf_predict_fwd.mat │ ├── mtrf_train_bwd.mat │ ├── mtrf_train_fwd.mat │ ├── mtrf_transform_bwd.mat │ ├── mtrf_transform_fwd.mat │ ├── multicross_val_equal_bwd.mat │ ├── multicross_val_equal_fwd.mat │ └── multicross_val_unequal_fwd.mat │ ├── test_helper.py │ └── test_mtrf.py └── setup.py /.gitignore: -------------------------------------------------------------------------------- 1 | venv/ 2 | *pyc 3 | .ipynb_checkpoints/ 4 | .idea/ 5 | .eggs/ 6 | *.egg-info 7 | *.pyc 8 | pymtrf/**/__pycache__ 9 | pymtrf/pymtrf/test/context.py 10 | .vscode 11 | .pytest_cache 12 | -------------------------------------------------------------------------------- /.travis.yml: -------------------------------------------------------------------------------- 1 | branches: 2 | only: 3 | - master 4 | - stable 5 | 6 | # Test push 7 | matrix: 8 | include: 9 | - os: linux 10 | dist: xenial 11 | 12 | language: python 13 | python: 14 | - "3.6" 15 | - "3.7" 16 | 17 | before_install: 18 | - pip install codecov 19 | - pip install pytest-cov 20 | 21 | - pip install . 22 | 23 | script: 24 | - pytest --cov=./ 25 | 26 | after_success: 27 | - codecov # submit coverage 28 | - bash <(curl -s https://codecov.io/bash) 29 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | BSD 2-Clause License 2 | 3 | Copyright (c) 2019, Simon Steinkamp 4 | All rights reserved. 5 | 6 | Redistribution and use in source and binary forms, with or without 7 | modification, are permitted provided that the following conditions are met: 8 | 9 | 1. Redistributions of source code must retain the above copyright notice, this 10 | list of conditions and the following disclaimer. 11 | 12 | 2. Redistributions in binary form must reproduce the above copyright notice, 13 | this list of conditions and the following disclaimer in the documentation 14 | and/or other materials provided with the distribution. 15 | 16 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" 17 | AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE 18 | IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE 19 | DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE 20 | FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 21 | DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR 22 | SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER 23 | CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, 24 | OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE 25 | OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
26 |
--------------------------------------------------------------------------------
/README.rst:
--------------------------------------------------------------------------------
1 | |Travis| |Codecov| |Codacy|
2 |
3 | Important
4 | =========
5 |
6 | This project is no longer actively maintained. Please consider using https://github.com/powerfulbean/mTRFpy instead.
7 |
8 | pymtrf
9 | ======
10 |
11 | pymtrf is a translation to Python 3.6 of the mTRF Toolbox (v.1.5) for MATLAB, which can be found at http://www.mee.tcd.ie/lalorlab/resources.html or at https://github.com/mickcrosse/mTRF-Toolbox.
12 |
13 | Original mTRF Toolbox Summary
14 | -----------------------------
15 |
16 | (copied from https://sourceforge.net/projects/aespa/?source=navbar )
17 |
18 | mTRF Toolbox is a MATLAB toolbox that permits the fast computation of the linear stimulus-response mapping of any sensory system in the forward or backward direction. It is suitable for analysing EEG, MEG, ECoG and EMG data.
19 |
20 | The forward model, or temporal response function (TRF), can be interpreted using conventional analysis techniques such as time-frequency and source analysis. The TRF can also be used to predict future responses of the system given a new stimulus signal. Similarly, the backward model can be used to reconstruct spectrotemporal stimulus information given new response data.
21 |
22 | mTRF Toolbox facilitates the use of continuous stimuli in electrophysiological studies as opposed to time-locked averaging techniques which require discrete stimuli. This enables examination of how neural systems process more natural and ecologically valid stimuli such as speech, music, motion and contrast.
23 |
24 | Support documentation: http://dx.doi.org/10.3389/fnhum.2016.00604
25 |
26 | Dependencies
27 | ~~~~~~~~~~~~
28 |
29 | pymtrf requires:
30 |
31 | - Python (>=3.6)
32 | - NumPy (>=1.14.0)
33 | - SciPy (>=1.11.0)
34 | - Pytest (5.0.1)
35 |
36 | Examples.ipynb additionally requires:
37 | - seaborn
38 | - matplotlib
39 |
40 | Because f'' format strings (f-strings) are used, the minimum Python version is set to 3.6; this requirement may be removed in a future release. Furthermore, the version requirements for NumPy and SciPy are (still) rather arbitrary; further testing will be required.
41 |
42 | Installation
43 | ~~~~~~~~~~~~
44 |
45 | Clone or download the repository and move (cd) into the pymtrf folder. Run :code:`pip install .`. This has so far only been tested with pip 18.1. You can run the tests from that folder using :code:`python setup.py pytest`; this requires pytest. Another way is to install via pip and git+; however, this also downloads the example data, which is quite large (and probably not wanted).
46 |
47 | Functions
48 | =========
49 |
50 | Inside the package
51 | ------------------
52 |
53 | The functions in the Python version of mtrf are the same as in the MATLAB Toolbox, with similar use. Naming conventions have been adjusted for Python. A minimal usage sketch is shown below, followed by the list of functions.
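The call signatures in this sketch are taken from the package itself, but the data, sample rate, lag window and ridge parameter are made up purely for illustration; treat it as a minimal example rather than a recommended analysis (see examples/Examples.ipynb and the tips further down for realistic settings).

.. code-block:: python

    import numpy as np
    import pymtrf

    # Toy data: 10 s of a univariate stimulus and of 8-channel "EEG" at 64 Hz,
    # both arranged as samples x features and sharing the same length.
    fs = 64
    stim = np.random.randn(10 * fs, 1)
    resp = np.random.randn(10 * fs, 8)

    # Normalise (here: z-score) the response, as recommended in the tips below.
    resp = (resp - resp.mean(axis=0)) / resp.std(axis=0)

    # Fit a forward model (mapping_direction = 1) with lags from 0 to 250 ms
    # and an arbitrary ridge parameter of 1.0.
    model, time_lags, intercept = pymtrf.mtrf_train(stim, resp, fs, 1, 0, 250, 1.0)

    # Predict the response (here simply from the training data again) and get
    # per-channel correlations, p-values and mean squared errors. For choosing
    # the ridge parameter, mtrf_crossval performs leave-one-out cross-validation
    # over a list of lambdas.
    pred, r, p, mse = pymtrf.mtrf_predict(stim, resp, model, fs, 1, 0, 250, intercept)

The individual functions are: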
54 |
55 | - lag_gen: generates lagged time series
56 | - mtrf_train: trains the linear model (forward and backward modelling)
57 | - mtrf_predict: predicts and evaluates the model
58 | - mtrf_crossval: leave-one-out cross-validation function; does prediction and validation
59 | - mtrf_multicrossval: similar to mtrf_crossval, but allows multisensory responses
60 | - mtrf_transform: transforms model weights for better interpretability
61 |
62 | Other
63 | -----
64 |
65 | - matlab_test_sets.m: MATLAB script to recreate the 'test_files' folder (data created using MATLAB 2016b and mTRF Toolbox v1.5)
66 | - mtrf_test_set.m: legacy, used to validate pymtrf against the mTRF Toolbox
67 | - simulate_test_data.py: used to simulate test cases for the precision tests (comparing the Python and MATLAB instances of the Toolbox).
68 |
69 | Usage
70 | =====
71 |
72 | See examples/Examples.ipynb for more detailed use of the different functions.
73 |
74 | Tips on Practical Use
75 | =====================
76 |
77 | See README.txt of the MATLAB Toolbox.
78 |
79 | - Ensure that the stimulus and response data have the same sample rate
80 | and number of samples.
81 | - Downsample the data when conducting large-scale multivariate analyses
82 | to reduce running time, e.g., 128 Hz or 64 Hz.
83 | - Normalise all data, e.g., between [-1,1] or [0,1] or z-score. This will
84 | stabilise regularisation across trials and enable a smaller parameter
85 | search.
86 | - Enter the start and finish time lags in milliseconds. Enter positive
87 | lags for post-stimulus mapping and negative lags for pre-stimulus
88 | mapping. This is the same for both forward and backward mapping - the
89 | code will automatically reverse the lags for backward mapping.
90 | - When using mtrf_predict, always enter the model in its original
91 | 3-dimensional form, i.e., do not remove any singleton dimensions.
92 | - When using mtrf_crossval, the trials do not have to be the same length,
93 | but using trials of the same length will optimise performance.
94 | - When using mtrf_multicrossval, the trials in each of the three sensory
95 | conditions should correspond to the stimuli in STIM.
96 |
97 |
98 | Example Data Sets
99 | =================
100 |
101 | See README.txt of the MATLAB Toolbox.
102 |
103 | contrast_data.mat
104 | This MATLAB file contains 3 variables. The first is a matrix consisting
105 | of 120 seconds of 128-channel EEG data. The second is a vector consisting
106 | of a normalised sequence of numbers that indicate the contrast of a
107 | checkerboard that was presented during the EEG at a rate of 60 Hz. The
108 | third is a scalar which represents the sample rate of the contrast signal
109 | and EEG data (128 Hz). See Lalor et al. (2006) for further details.
110 |
111 | coherentMotion_data.mat
112 | This MATLAB file contains 3 variables. The first is a matrix consisting
113 | of 200 seconds of 128-channel EEG data. The second is a vector consisting
114 | of a normalised sequence of numbers that indicate the motion coherence of
115 | a dot field that was presented during the EEG at a rate of 60 Hz. The
116 | third is a scalar which represents the sample rate of the motion signal
117 | and EEG data (128 Hz). See Gonçalves et al. (2014) for further details.
118 |
119 | speech_data.mat
120 | This MATLAB file contains 4 variables. The first is a matrix consisting
121 | of 120 seconds of 128-channel EEG data. The second is a matrix consisting
122 | of a speech spectrogram.
This was calculated by band-pass filtering the
123 | speech signal into 128 logarithmically-spaced frequency bands between 100
124 | and 4000 Hz and taking the Hilbert transform at each frequency band. The
125 | spectrogram was then downsampled to 16 frequency bands by averaging
126 | across every 8 neighbouring frequency bands. The third variable is the
127 | broadband envelope, obtained by taking the mean across the 16 narrowband
128 | envelopes. The fourth variable is a scalar which represents the sample
129 | rate of the envelope, spectrogram and EEG data (128 Hz). See Lalor &
130 | Foxe (2010) for further details.
131 |
132 |
133 | References
134 | ==========
135 |
136 | - Lalor EC, Pearlmutter BA, Reilly RB, McDarby G, Foxe JJ (2006) The
137 | VESPA: a method for the rapid estimation of a visual evoked potential.
138 | NeuroImage 32:1549-1561. https://doi.org/10.1016/j.neuroimage.2006.05.054
139 | - Gonçalves NR, Whelan R, Foxe JJ, Lalor EC (2014) Towards obtaining
140 | spatiotemporally precise responses to continuous sensory stimuli in
141 | humans: a general linear modeling approach to EEG. NeuroImage 97:196-205.
142 | https://doi.org/10.1016/j.neuroimage.2014.04.012
143 | - Lalor EC, Foxe JJ (2010) Neural responses to uninterrupted natural
144 | speech can be extracted with precise temporal resolution. Eur J Neurosci
145 | 31(1):189-193. https://doi.org/10.1111/j.1460-9568.2009.07055.x
146 | - Crosse MC, Di Liberto GM, Bednar A, Lalor EC (2016) The multivariate
147 | temporal response function (mTRF) toolbox: a MATLAB toolbox for relating
148 | neural signals to continuous stimuli. Front Hum Neurosci 10:604.
149 | https://doi.org/10.3389/fnhum.2016.00604
150 | - Haufe S, Meinecke F, Gorgen K, Dahne S, Haynes JD, Blankertz B,
151 | Bießmann F (2014) On the interpretation of weight vectors of
152 | linear models in multivariate neuroimaging. NeuroImage 87:96-110.
153 | https://doi.org/10.1016/j.neuroimage.2013.10.067
154 | - Crosse MC, Butler JS, Lalor EC (2015) Congruent visual speech
155 | enhances cortical entrainment to continuous auditory speech in
156 | noise-free conditions. J Neurosci 35(42):14195-14204.
157 | https://doi.org/10.1523/JNEUROSCI.1829-15.2015
158 |
159 | TODO
160 | ====
161 |
162 | - Extensive documentation
163 | - More tests
164 | - Tutorial on the method
165 | - mtrf_predict: allow prediction only (skipping the evaluation step)
166 |
167 | Wishlist
168 | ========
169 |
170 | - mtrf_class following scikit-learn API
171 | - mne-python workflow (need data set...)
172 |
173 |
174 | .. |Travis| image:: https://travis-ci.org/SRSteinkamp/pymtrf.svg?branch=master
175 | :target: https://travis-ci.org/SRSteinkamp/pymtrf
176 |
177 | .. |Codecov| image:: https://codecov.io/gh/SRSteinkamp/pymtrf/branch/master/graph/badge.svg
178 | :target: https://codecov.io/gh/SRSteinkamp/pymtrf
179 |
180 |
181 | ..
|Codacy| image:: https://api.codacy.com/project/badge/Grade/f9c888a4b6584a4bb3f72e8fb1920425 182 | :alt: Codacy Badge 183 | :target: https://app.codacy.com/manual/SRSteinkamp/pymtrf?utm_source=github.com&utm_medium=referral&utm_content=SRSteinkamp/pymtrf&utm_campaign=Badge_Grade_Settings 184 | -------------------------------------------------------------------------------- /examples/data/coherentmotion_data.mat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SRSteinkamp/pymtrf/dd32a1f8f96e19bc6e151141c4fc9f7a2d7ac524/examples/data/coherentmotion_data.mat -------------------------------------------------------------------------------- /examples/data/contrast_data.mat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SRSteinkamp/pymtrf/dd32a1f8f96e19bc6e151141c4fc9f7a2d7ac524/examples/data/contrast_data.mat -------------------------------------------------------------------------------- /examples/data/speech_data.mat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SRSteinkamp/pymtrf/dd32a1f8f96e19bc6e151141c4fc9f7a2d7ac524/examples/data/speech_data.mat -------------------------------------------------------------------------------- /pymtrf/__init__.py: -------------------------------------------------------------------------------- 1 | from .mtrf import lag_gen, mtrf_train, mtrf_predict, \ 2 | mtrf_crossval, mtrf_multicrossval, mtrf_transform 3 | from .helper import lag_builder, quadratic_regularization -------------------------------------------------------------------------------- /pymtrf/helper.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | from scipy import linalg 3 | import warnings 4 | 5 | 6 | def lag_builder(time_min, time_max): 7 | """Build the lags for the lag_generator function. Basically the indices of 8 | the time lags (including the starting and stopping points) of the data 9 | matrix. 10 | 11 | Parameters 12 | ---------- 13 | time_min : np.int 14 | The starting index of the matrix as integer. 15 | time_max : np.int 16 | The stopping index of the matrix as integer. 17 | 18 | Returns 19 | ------- 20 | lag_vector : numpy.ndarray, shape (np.abs(time_max) + np.abs(time_min) + 1,) 21 | A numpy array including all the lags. 22 | """ 23 | 24 | if time_min > time_max: 25 | lag_vector = np.arange(time_max, time_min + 1)[::-1] 26 | else: 27 | lag_vector = np.arange(time_min, time_max + 1) 28 | 29 | return lag_vector 30 | 31 | 32 | def quadratic_regularization(dim): 33 | d = 2 * np.eye(dim) 34 | d[[0, -1], [0, -1]] = 1 35 | 36 | upper = np.hstack([np.zeros((dim, 1)), np.eye(dim, dim - 1)]) 37 | lower = np.vstack([np.zeros((1, dim)), np.eye(dim - 1, dim)]) 38 | m = d - upper - lower 39 | 40 | return m 41 | 42 | 43 | def regularized_regression_fit(X, y, m, alpha=1.0): 44 | """Calculate the parameters using regularized reverse correlation. Assumes, 45 | that a intercept term has been added manually! 46 | 47 | Parameters 48 | ---------- 49 | X : np.ndarray 50 | The feature matrix. Should be in shape: times x features 51 | y : np.ndarray 52 | The target matrix. Should be in shape: times x targets 53 | m : np.array 54 | Regularization matrix, either quadratic regularization or identity matrix 55 | alpha : np.float 56 | The regularization parameter. Usually named lambda. 
Must be >0, defaults to 57 | 1.0 58 | 59 | Returns 60 | ---------- 61 | :return beta: np.array 62 | The beta parameters in shape target_features x features 63 | """ 64 | 65 | if X.shape[0] < X.shape[1]: 66 | warnings.warn(f'X: more features {X.shape[1]}' + 67 | f' than samples {X.shape[0]}, check input dimensions!') 68 | if y.shape[0] < y.shape[1]: 69 | warnings.warn(f'y: more features {y.shape[1]}' + 70 | f' than samples {y.shape[0]}, check input dimensions!') 71 | 72 | assert alpha >= 0, 'reg_lambda has to be positive!' 73 | assert X.shape[0] == y.shape[0], f'Cannot multiply X with dim {X.shape[0]}' \ 74 | + f' and y with dim {y.shape[0]}' 75 | 76 | if np.sum(X[:, 0] == 1) != X.shape[0]: 77 | warnings.warn('Please check, whether an intercept term has been added!') 78 | 79 | # TODO Tests: 1, 2 numerical tests 80 | 81 | xtx = X.T.dot(X) 82 | xtx += m * alpha 83 | 84 | xy = X.T.dot(y) 85 | 86 | beta = linalg.solve(xtx, xy, sym_pos=True, overwrite_a=False) 87 | 88 | return beta 89 | 90 | 91 | def regularized_regression_predict(x, coefficients): 92 | # TODO Test output dims 93 | # TODO test numerical things 94 | # TODO assert: input dimensions and test 95 | y_hat = x.dot(coefficients) 96 | return y_hat 97 | 98 | 99 | def model_to_coefficients(model, intercept): 100 | # TODO tests, 101 | # TODO asserts 102 | # Reshaping using fortran due to Matlab combability, maybe this can be dropped later. 103 | 104 | model = np.vstack([intercept.reshape(-1, model.shape[2]), 105 | np.reshape(model, (model.shape[0] * model.shape[1], 106 | model.shape[2]), order='F')]) 107 | 108 | return model 109 | 110 | 111 | def coefficient_to_model(coefficients, x_dim1, n_lags, y_dim1): 112 | assert x_dim1 * (n_lags + 1) * y_dim1 == coefficients.ravel().shape[0], 'Check inputs!' 113 | intercept = coefficients[:x_dim1, :] 114 | # The reshape order is set to Fortran to keep Matlab compability 115 | model = np.reshape(coefficients[x_dim1:, :], 116 | (x_dim1, n_lags, y_dim1), order='F') 117 | return model, intercept 118 | 119 | 120 | def stimulus_mapping(mapping_direction, stim, resp, tmin, tmax): 121 | # TODO Documentation, tests 122 | if mapping_direction == 1: 123 | x = stim.copy() 124 | y = resp.copy() 125 | elif mapping_direction == -1: 126 | x = resp.copy() 127 | y = stim.copy() 128 | (tmin, tmax) = (tmax, tmin) 129 | else: 130 | raise ValueError('Value of mapping_direction must be 1 (forward) or -1 (backward)') 131 | 132 | return x, y, tmin, tmax 133 | 134 | 135 | def test_input_dimensions(x): 136 | if isinstance(x, list): 137 | n_trials = len(x) 138 | n_feat = x[0].shape[1] 139 | for trl in x: 140 | assert trl.shape[-1] == n_feat, 'Number of features have to be equal!' 141 | assert trl.ndim == 2, 'Arrays in list have to be of size times x features' 142 | elif isinstance(x, np.ndarray): 143 | assert x.ndim == 3, 'Nd array of trials x times x features expected!' 144 | n_trials = x.shape[0] 145 | n_feat = x.shape[2] 146 | else: 147 | raise ValueError('Input shut be either a list of np.arrays or a single' + 148 | ' np.array of shape trials x times x features') 149 | 150 | return n_trials, n_feat 151 | 152 | 153 | def test_reg_lambda(reg_lambda): 154 | if isinstance(reg_lambda, (np.ndarray, list)): 155 | for rl in reg_lambda: 156 | assert rl > 0, 'Regularization has to be positive and larger than 0' 157 | elif isinstance(reg_lambda, float): 158 | assert reg_lambda > 0, 'reg_lambda has to be positive and larger than 0!' 
159 | reg_lambda = [reg_lambda] 160 | else: 161 | raise ValueError('reg_lambda has to be a list, np.ndarray or float!') 162 | 163 | return reg_lambda -------------------------------------------------------------------------------- /pymtrf/mtrf.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | from scipy.stats import pearsonr 3 | from .helper import * 4 | import warnings 5 | 6 | 7 | def lag_gen(data, time_lags): 8 | '''lag_gen returns the matrix containing the lagged time series of data for 9 | a range of time lags given by the list or numpy array lags. If the data is 10 | multivariate, lag_gen concatenates the features for each lag along the 11 | columns of the output array. 12 | 13 | Parameters 14 | ---------- 15 | data : {float, array_like}, shape = [n_samples, n_features] 16 | The training data, i.e. the data that is shifted in time. 17 | time_lags : {int, array_like}, shape = [n_lags] 18 | Indices for lags that will be applied to the data. 19 | 20 | Returns 21 | ------- 22 | lagged_data : {float, array_like}, shape = [n_samples, n_features * n_lag} 23 | The data shifted in time, as described above. 24 | 25 | See also 26 | -------- 27 | mtrf_train : calculate forward or backward models. 28 | mtrf_predict : predict stimulus or response based on models. 29 | mtrf_crossval : calculate reconstruction accuracies for a dataset. 30 | 31 | Translation to Python: Simon Richard Steinkamp 32 | Github: 33 | October 2018; Last revision: 18.January 2019 34 | Original MATLAB toolbox, mTRF v. 1.5 35 | Author: Michael Crosse 36 | Lalor Lab, Trinity College Dublin, IRELAND 37 | Email: edmundlalor@gmail.com 38 | Website: http://lalorlab.net/ 39 | April 2014; Last revision: 18 August 2015 40 | ''' 41 | lagged_data = np.zeros((data.shape[0], data.shape[1] * time_lags.shape[0])) 42 | 43 | chan = 0 44 | for lags in time_lags: 45 | 46 | if lags < 0: 47 | lagged_data[:lags, chan:chan + data.shape[1]] = data[-lags:, :] 48 | elif lags > 0: 49 | lagged_data[lags:, chan:chan + data.shape[1]] = data[:-lags, :] 50 | else: 51 | lagged_data[:, chan:chan + data.shape[1]] = data 52 | 53 | chan = chan + data.shape[1] 54 | 55 | return lagged_data 56 | 57 | 58 | def mtrf_train(stim, resp, fs, mapping_direction, tmin, tmax, reg_lambda): 59 | '''performs ridge regression on the stimulus property stim and the neural 60 | response data resp to solve for their linear mapping function. Pass in 61 | mapping_direction = 1 to map in the forward direction or -1 to map 62 | backwards. The sampling frequency fs should be defined in Hertz and the 63 | time lags should be set in milliseconds between tmin and tmap. 64 | Regularisation is controlled by the ridge parameter reg_lambda. 65 | 66 | Parameters 67 | ---------- 68 | stim : {float, array_like}, shape = [n_samples, n_features] 69 | The stimulus property. 70 | resp : {float, array_like}, shape = [n_samples, n_channels] 71 | The neural data. 72 | fs : {float} 73 | sampling frequency 74 | mapping_direction : {1, -1} 75 | mapping direction 76 | tmin : {float} 77 | minimum time lag in ms 78 | tmax : {float} 79 | maximum time lag in ms 80 | reg_lambda : {float} 81 | ridge parameter 82 | 83 | Returns 84 | ------- 85 | model : {float, array_like}, shape = [n_features, n_lags, n_targets] 86 | linear mapping function, features by lags by channels for 87 | mapping_direction = 1, channels by lags by features for 88 | mapping_direction = -1. 
89 | time_lags : {array_like}, shape = [n_lags]
90 | vector of time lags in ms
91 | intercept : {array_like}, shape = [n_features]
92 | the regression constant.
93 | See also
94 | --------
95 | mtrf_train : calculate forward or backward models.
96 | mtrf_predict : predict stimulus or response based on models.
97 | mtrf_crossval : calculate reconstruction accuracies for a dataset.
98 |
99 | References
100 | ----------
101 | [1] Lalor EC, Pearlmutter BA, Reilly RB, McDarby G, Foxe JJ (2006)
102 | The VESPA: a method for the rapid estimation of a visual evoked
103 | potential. NeuroImage 32:1549-1561.
104 | [2] Crosse MC, Di Liberto GM, Bednar A, Lalor EC (2016) The
105 | multivariate temporal response function (mTRF) toolbox: a MATLAB
106 | toolbox for relating neural signals to continuous stimuli. Front
107 | Hum Neurosci 10:604.
108 |
109 | Translation to Python: Simon Richard Steinkamp
110 | Github:
111 | October 2018; Last revision: 18.January 2019
112 | Original MATLAB toolbox, mTRF v. 1.5
113 | Author: Edmund Lalor, Michael Crosse, Giovanni Di Liberto
114 | Lalor Lab, Trinity College Dublin, IRELAND
115 | Email: edmundlalor@gmail.com
116 | Website: http://lalorlab.net/
117 | April 2014; Last revision: 8 January 2016
118 | '''
119 | if stim.shape[0] < stim.shape[1]:
120 | warnings.warn(f'stim: more features {stim.shape[1]} ' +
121 | f'than samples {stim.shape[0]}, check input dimensions!')
122 | if resp.shape[0] < resp.shape[1]:
123 | warnings.warn(f'resp: more features {resp.shape[1]} ' +
124 | f'than samples {resp.shape[0]}, check input dimensions!')
125 |
126 | assert tmin < tmax, 'tmin has to be smaller than tmax'
127 | assert reg_lambda > 0, 'reg_lambda has to be positive and larger than 0!'
128 |
129 | x, y, tmin, tmax = stimulus_mapping(mapping_direction, stim, resp, tmin, tmax)
130 |
131 | t_min = np.floor(tmin / 1e3 * fs * mapping_direction).astype(int)
132 | t_max = np.ceil(tmax / 1e3 * fs * mapping_direction).astype(int)
133 |
134 | lags = lag_builder(t_min, t_max)
135 | lag_x = lag_gen(x, lags)
136 | x_input = np.hstack([np.ones(x.shape), lag_x])
137 | n_feat = x_input.shape[1]
138 |
139 | if x.shape[1] == 1:
140 | reg_matrix = quadratic_regularization(n_feat)
141 | else:
142 | reg_matrix = np.eye(n_feat)
143 |
144 | coefficients = regularized_regression_fit(x_input, y, reg_matrix, reg_lambda)
145 | model, intercept = coefficient_to_model(coefficients, x.shape[1],
146 | lags.shape[0], y.shape[1])
147 | time_lags = lags / fs * 1e3
148 |
149 | return model, time_lags, intercept
150 |
151 |
152 | def mtrf_predict(stim, resp, model, fs, mapping_direction, tmin, tmax, constant):
153 | '''performs a convolution of the stimulus property or the neural response
154 | data with their linear mapping function (estimated by mtrf_train) to
155 | predict the neural response (mapping_direction = 1) or the stimulus property
156 | (mapping_direction = -1).
157 |
158 | Parameters
159 | ----------
160 | stim : {float, array_like}, shape = [n_samples, n_features]
161 | The stimulus property.
162 | resp : {float, array_like}, shape = [n_samples, n_channels]
163 | The neural data.
164 | model : {float, array_like}, shape = [ ]
165 | linear mapping function, features by lags by channels for
166 | mapping_direction = 1, channels by lags by features for
167 | mapping_direction = -1.
168 | fs : {float} 169 | sampling frequency in hz 170 | mapping_direction : {1, -1} 171 | mapping direction for forward or backward modeling 172 | tmin : {float} 173 | minimum time lag in ms 174 | tmax : {float} 175 | maximum time lag in ms 176 | constant : {float, array_like} 177 | Regression constant, if None is given, a zero constant is assumed. 178 | 179 | Returns 180 | ------- 181 | pred : {float, array_like}, shape = [n_times, n_features] 182 | prediction of the regression model 183 | r : {float} 184 | correlation coefficient between prediction and original data 185 | p : {float} 186 | p-value corresponding to r 187 | mse : {float} 188 | mean squared error of prediction 189 | 190 | See also 191 | -------- 192 | mtrf_train : calculate forward or backward models. 193 | mtrf_predict : predict stimulus or response based on models. 194 | mtrf_crossval : calculate reconstruction accuracies for a dataset. 195 | 196 | Translation to Python: Simon Richard Steinkamp 197 | Github: 198 | October 2018; Last revision: 18.January 2019 199 | Original MATLAB toolbox, mTRF v. 1.5 200 | Author: Michael Crosse, Giovanni Di Liberto 201 | Lalor Lab, Trinity College Dublin, IRELAND 202 | Email: edmundlalor@gmail.com 203 | Website: http://lalorlab.net/ 204 | April 2014; Last revision: 8 January 2016 205 | ''' 206 | 207 | # Define x and y 208 | assert tmin < tmax, 'Value of tmin must be < tmax' 209 | 210 | if constant is None: 211 | constant = np.zeros((model.shape[0], model.shape[2])) 212 | else: 213 | assert np.all(constant.shape == np.array([model.shape[0], 214 | model.shape[2]])) 215 | 216 | x, y, tmin, tmax = stimulus_mapping(mapping_direction, stim, resp, tmin, tmax) 217 | 218 | t_min = np.floor(tmin / 1e3 * fs * mapping_direction).astype(int) 219 | t_max = np.ceil(tmax / 1e3 * fs * mapping_direction).astype(int) 220 | 221 | lags = lag_builder(t_min, t_max) 222 | 223 | x_lag = np.hstack([np.ones(x.shape), lag_gen(x, lags)]) 224 | 225 | model = model_to_coefficients(model, constant) 226 | 227 | pred = regularized_regression_predict(x_lag, model) 228 | 229 | # Calculate accuracy 230 | if y is not None: 231 | r = np.zeros((1, y.shape[1])) 232 | p = np.zeros((1, y.shape[1])) 233 | mse = np.zeros((1, y.shape[1])) 234 | for i in range(y.shape[1]): 235 | r[:, i], p[:, i] = pearsonr(y[:, i], pred[:, i]) 236 | mse[:, i] = np.mean((y[:, i] - pred[:, i]) ** 2) 237 | else: 238 | r = None 239 | p = None 240 | mse = None 241 | 242 | return pred, r, p, mse 243 | 244 | 245 | def mtrf_crossval(stim, resp, fs, mapping_direction, tmin, tmax, reg_lambda): 246 | '''performs leave-one-out cross-validation on the set of stimuli and the 247 | neural responses for a range of ridge parameter values. 248 | As a measure of performance, the correlation coefficients between the 249 | predicted and original signals, the corresponding p-values, and the mean 250 | squared errors (mse) are returned. Forward and backward modelling can be 251 | performed. 252 | 253 | Parameters 254 | ---------- 255 | stim : {float, array_like}, shape = [n_trials, n_samples, n_features] 256 | The stimulus property, can be an array including the different trials 257 | in the first dimension or a list of arrays of 258 | shape = [n_sample, n_features]. 259 | resp : {float, array_like}, shape = [n_trials, n_samples, n_features] 260 | The neural data, can be an array including the different trials 261 | in the first dimension or a list of arrays of 262 | shape = [n_sample, n_features]. 
263 | fs : {float} 264 | sampling frequency 265 | mapping_direction : {1, -1} 266 | mapping direction, 1 for forward mapping, -1 for backward mapping 267 | tmin : {float} 268 | minimum time lag in ms 269 | tmax : {float} 270 | maximum time lag in ms 271 | reg_lambda : {float} 272 | list of ridge parameters 273 | 274 | Returns 275 | ------- 276 | r : {float, np.ndarray}, shape = [n_trials, n_lambdas] 277 | correlation coefficient between prediction and original data for each 278 | trial and each parameter. 279 | p : {float, np.ndarray}, shape = [n_trials, n_lambdas] 280 | p-values corresponding to r 281 | mse : {float, np.ndarray}, shape = [n_trials, n_lambdas] 282 | mean squared error of the predictions 283 | pred : {list} 284 | list of length n_trials, containing np.ndarrays with shape = 285 | [n_lambdas, n_timepoints, n_targets] 286 | model : {float, np.ndarray}, shape = [n_trials, n_lambdas, 287 | n_lags, n_targets, n_features] 288 | linear mapping function from resp to stim or vice versa, depending on the 289 | mapping direction. 290 | 291 | References 292 | ---------- 293 | [1] Crosse MC, Di Liberto GM, Bednar A, Lalor EC (2015) The 294 | multivariate temporal response function (mTRF) toolbox: a MATLAB 295 | toolbox for relating neural signals to continuous stimuli. Front 296 | Hum Neurosci 10:604. 297 | 298 | Translation to Python: Simon Richard Steinkamp 299 | Github: 300 | October 2018; Last revision: 18.January 2019 301 | Original MATLAB toolbox, mTRF v. 1.5 302 | Author: Michael Crosse 303 | Lalor Lab, Trinity College Dublin, IRELAND 304 | Email: edmundlalor@gmail.com 305 | Website: http://lalorlab.net/ 306 | April 2014; Last revision: 31 May 2016 307 | ''' 308 | assert tmin < tmax, 'Value of tmin must be < tmax' 309 | 310 | x, y, tmin, tmax = stimulus_mapping(mapping_direction, stim, resp, tmin, tmax) 311 | 312 | n_trials, n_feat = test_input_dimensions(x) 313 | 314 | n_trl_y, n_targets = test_input_dimensions(y) 315 | 316 | assert n_trials == n_trl_y, 'stim and resp should have the same no of trials!' 
317 | 318 | reg_lambda = test_reg_lambda(reg_lambda) 319 | 320 | n_lambda = len(reg_lambda) 321 | 322 | t_min = np.floor(tmin / 1e3 * fs * mapping_direction).astype(int) 323 | t_max = np.ceil(tmax / 1e3 * fs * mapping_direction).astype(int) 324 | 325 | lags = lag_builder(t_min, t_max) 326 | 327 | # Set up regularisation 328 | dim1 = n_feat * lags.shape[0] + n_feat 329 | model = np.zeros((n_trials, n_lambda, dim1, n_targets)) 330 | 331 | if n_feat == 1: 332 | reg_matrix = quadratic_regularization(dim1) 333 | else: 334 | reg_matrix = np.eye(dim1) 335 | 336 | # Training 337 | x_input = [] 338 | 339 | for c_trials in range(n_trials): 340 | # Generate lag matrix 341 | x_input.append(np.hstack([np.ones(x[c_trials].shape), lag_gen(x[c_trials], lags)])) 342 | # Calculate model for each lambda value 343 | for c_lambda in range(n_lambda): 344 | temp = regularized_regression_fit(x_input[c_trials], 345 | y[c_trials], reg_matrix, reg_lambda[c_lambda]) 346 | model[c_trials, c_lambda, :, :] = temp 347 | 348 | r = np.zeros((n_trials, n_lambda, n_targets)) 349 | p = np.zeros(r.shape) 350 | mse = np.zeros(r.shape) 351 | pred = [] 352 | 353 | for trial in range(n_trials): 354 | pred.append(np.zeros((n_lambda, y[trial].shape[0], n_targets))) 355 | 356 | # Perform cross-validation for each lambda value 357 | for c_lambda in range(n_lambda): 358 | # Calculate prediction 359 | cv_coef = np.mean(model[np.arange(n_trials) != trial, c_lambda, :, :], 0, keepdims=False) 360 | pred[trial][c_lambda, :, :] = regularized_regression_predict(x_input[trial], cv_coef) 361 | 362 | # Calculate accuracy 363 | for k in range(n_targets): 364 | temp_pred = np.squeeze(pred[trial][c_lambda, :, k]).T 365 | r[trial, c_lambda, k], p[trial, c_lambda, k] = pearsonr(y[trial][:, k], temp_pred) 366 | mse[trial, c_lambda, k] = np.mean((y[trial][:, k] - temp_pred) ** 2) 367 | 368 | return r, p, mse, pred, model 369 | 370 | 371 | def mtrf_transform(stim, resp, model, fs, mapping_direction, tmin, tmax, constant=None): 372 | '''transforms the coefficients of the model weights into transformed model 373 | coefficients. 374 | Parameters 375 | ---------- 376 | stim : {float, array_like}, shape = [n_samples, n_features] 377 | The stimulus property. 378 | resp : {float, array_like}, shape = [n_samples, n_channels] 379 | The neural data. 380 | model : {float, array_like}, shape = [ ] 381 | linear mapping function, features by lags by channels for 382 | mapping_direction = 1, channels by lags by features for 383 | mapping_direction = -1. 384 | fs : {float} 385 | sampling frequency in hz 386 | mapping_direction : {1, -1} 387 | mapping direction for forward or backward modeling 388 | tmin : {float} 389 | minimum time lag in ms 390 | tmax : {float} 391 | maximum time lag in ms 392 | intercept : {float, array_like} 393 | Regression constant. 394 | 395 | Returns 396 | ------- 397 | model_t : {float, array_like}, shape = [n_times, n_features] 398 | transformed model weights 399 | t : {float}, shape = [n_lags] 400 | vector of time lags used in ms 401 | intercept_t : {float} 402 | transformed model constant 403 | 404 | See also 405 | -------- 406 | mtrf_train : calculate forward or backward models. 407 | mtrf_predict : predict stimulus or response based on models. 408 | mtrf_crossval : calculate reconstruction accuracies for a dataset. 
409 | 410 | References 411 | ---------- 412 | [1] Haufe S, Meinecke F, Gorgen K, Dahne S, Haynes JD, Blankertz B, 413 | Bießmann F (2014) On the interpretation of weight vectors of 414 | linear models in multivariate neuroimaging. NeuroImage 87:96-110. 415 | 416 | Translation to Python: Simon Richard Steinkamp 417 | Github: 418 | October 2018; Last revision: 18.January 2019 419 | 420 | Original MATLAB toolbox, mTRF v. 1.5 421 | Author: Adam Bednar, Emily Teoh, Giovanni Di Liberto, Michael Crosse 422 | Lalor Lab, Trinity College Dublin, IRELAND 423 | Email: edmundlalor@gmail.com 424 | Website: http://lalorlab.net/ 425 | April 2016; Last revision: 15 July 2016 426 | ''' 427 | # Define x and y 428 | assert tmin < tmax, 'Value of tmin must be < tmax' 429 | 430 | x, y, tmin, tmax = stimulus_mapping(mapping_direction, stim, resp, tmin, tmax) 431 | 432 | if constant is None: 433 | constant = np.zeros((model.shape[0], model.shape[2])) 434 | else: 435 | assert np.all(constant.shape == np.array([model.shape[0], 436 | model.shape[2]])) 437 | 438 | t_min = np.floor(tmin / 1e3 * fs * mapping_direction).astype(int) 439 | t_max = np.ceil(tmax / 1e3 * fs * mapping_direction).astype(int) 440 | 441 | lags = lag_builder(t_min, t_max) 442 | 443 | X = np.hstack([np.ones(x.shape), lag_gen(x, lags)]) 444 | 445 | # Transform model weights 446 | model = model_to_coefficients(model, constant) 447 | coef_t = (X.T.dot(X)).dot(model).dot(np.linalg.inv((y.T.dot(y)))) 448 | 449 | model_t, constant_t = coefficient_to_model(coef_t, x.shape[1], lags.shape[0], y.shape[1]) 450 | t = lags / fs * 1e3 451 | 452 | return model_t, t, constant_t 453 | 454 | 455 | def mtrf_multicrossval(stim, resp, resp1, resp2, fs, mapping_direction, tmin, tmax, reg_lambda1, reg_lambda2): 456 | '''performs leave-one-out cross-validation of an additive model for a 457 | multisensory dataset as follows: 458 | 1. Separate unisensory models are calculated using the set of stimuli 459 | properties and unisensory neural responses (resp1, resp2) for each 460 | of their respective ridge parameters (reg_lambda1, reg_lambda2) 461 | 2. The algebraic sums of the unisensory models for every combination of 462 | ridge parameters are calculated, i.e. the additive models. 463 | 3. The additive models are validated by testing them on the set of 464 | multisensory neural responses. 465 | As a measure of performance, the correlation coefficients between the 466 | predicted and original signals, the corresponding p-values, and the mean 467 | squared errors (mse) are returned. Forward and backward modelling can be 468 | performed. 469 | 470 | Parameters 471 | ---------- 472 | stim : {float, array_like}, shape = [n_trials, n_samples, n_features] 473 | The stimulus property, can be an array including the different trials 474 | in the first dimension or a list of arrays of 475 | shape = [n_sample, n_features]. 476 | resp : {float, array_like}, shape = [n_trials, n_samples, n_features] 477 | The multisensory neural responses, can be an array including the different trials 478 | in the first dimension or a list of arrays of 479 | shape = [n_sample, n_features]. 480 | resp1 : {float, array_like}, shape = [n_trials, n_samples, n_features] 481 | The first set of unisensory neural responses, can be an array including 482 | the different trials in the first dimension or a list of arrays of 483 | shape = [n_sample, n_features]. 
484 | resp2 : {float, array_like}, shape = [n_trials, n_samples, n_features] 485 | The first set of unisensory neural responses, can be an array including 486 | the different trials in the first dimension or a list of arrays of 487 | shape = [n_sample, n_features]. 488 | fs : {float} 489 | sampling frequency 490 | mapping_direction : {1, -1} 491 | mapping direction, 1 for forward mapping, -1 for backward mapping 492 | tmin : {float} 493 | minimum time lag in ms 494 | tmax : {float} 495 | maximum time lag in ms 496 | reg_lambda1 : {float} 497 | list of ridge parameters for resp1 498 | reg_lambda2 : {float} 499 | list of ridge parameters for resp2 500 | 501 | Returns 502 | ------- 503 | r : {float, np.ndarray}, shape = [n_trials, n_lambdas1, n_lambdas2] 504 | correlation coefficient between prediction and original data for each 505 | trial and each parameter. 506 | p : {float, np.ndarray}, shape = [n_trials, n_lambdas1, n_lambdas2] 507 | p-values corresponding to r 508 | mse : {float, np.ndarray}, shape = [n_trials, n_lambdas1, n_lambdas2] 509 | mean squared error of the predictions 510 | pred : {list} 511 | list of length n_trials, containing np.ndarrays with shape = 512 | [n_lambdas1, n_lambdas2, n_timepoints, n_targets] 513 | model : {float, np.ndarray}, shape = [n_trials, n_lambdas1, n_lambdas2, 514 | n_lags, n_targets, n_features] 515 | linear mapping function from resp to stim or vice versa, depending on the 516 | mapping direction. 517 | 518 | References 519 | ---------- 520 | [1] Crosse MC, Butler JS, Lalor EC (2015) Congruent visual speech 521 | enhances cortical entrainment to continuous auditory speech in 522 | noise-free conditions. J Neurosci 35(42):14195-14204. 523 | 524 | Translation to Python: Simon Richard Steinkamp 525 | Github: 526 | October 2018; Last revision: 09.January 2019 527 | Original MATLAB toolbox, mTRF v. 1.5 528 | Author: Michael Crosse 529 | Lalor Lab, Trinity College Dublin, IRELAND 530 | Email: edmundlalor@gmail.com 531 | Website: http://lalorlab.net/ 532 | April 2014; Last revision: 13 December 2016 533 | ''' 534 | 535 | assert tmin < tmax, 'Value of tmin must be < tmax' 536 | 537 | x, y, tmin, tmax = stimulus_mapping(mapping_direction, stim, resp, tmin, tmax) 538 | 539 | reg_lambda1 = test_reg_lambda(reg_lambda1) 540 | reg_lambda2 = test_reg_lambda(reg_lambda2) 541 | n_lambda1 = len(reg_lambda1) 542 | n_lambda2 = len(reg_lambda2) 543 | t_min = np.floor(tmin / 1e3 * fs * mapping_direction).astype(int) 544 | t_max = np.ceil(tmax / 1e3 * fs * mapping_direction).astype(int) 545 | 546 | lags = lag_builder(t_min, t_max) 547 | n_trials, n_feat = test_input_dimensions(x) 548 | 549 | n_trl_y, n_targets = test_input_dimensions(y) 550 | 551 | assert n_trials == n_trl_y, 'stim and resp should have the same no of trials!' 
552 | 553 | dim1 = n_feat * lags.shape[0] + n_feat 554 | 555 | model = np.zeros((n_trials, n_lambda1, n_lambda2, dim1, n_targets)) 556 | 557 | if n_feat == 1: 558 | reg_matrix = quadratic_regularization(dim1) 559 | else: 560 | reg_matrix = np.eye(dim1) 561 | 562 | x_input = [] 563 | 564 | for trial in range(n_trials): 565 | # Generate lag matrix 566 | x_input.append(np.hstack([np.ones(x[trial].shape), lag_gen(x[trial], lags)])) 567 | model_1_temp = np.zeros((n_lambda1, dim1, n_targets)) 568 | model_2_temp = np.zeros((n_lambda2, dim1, n_targets)) 569 | if mapping_direction == 1: 570 | # calculating unisensory model for each lambda 571 | # Calculate model for each lambda value 572 | for c_lambda, alpha in enumerate(reg_lambda1): 573 | temp = regularized_regression_fit(x_input[trial], 574 | resp1[trial], reg_matrix, alpha) 575 | model_1_temp[c_lambda, :, :] = temp 576 | 577 | for c_lambda, alpha in enumerate(reg_lambda2): 578 | temp = regularized_regression_fit(x_input[trial], 579 | resp2[trial], reg_matrix, alpha) 580 | model_2_temp[c_lambda, :, :] = temp 581 | 582 | elif mapping_direction == -1: 583 | resp1_lag = np.hstack([np.ones(resp1[trial].shape), 584 | lag_gen(resp1[trial], lags)]) 585 | resp2_lag = np.hstack([np.ones(resp2[trial].shape), 586 | lag_gen(resp2[trial], lags)]) 587 | for c_lambda in range(n_lambda1): 588 | temp = regularized_regression_fit(resp1_lag, y[trial], 589 | reg_matrix, reg_lambda1[c_lambda]) 590 | model_1_temp[c_lambda, :, :] = temp 591 | 592 | for c_lambda in range(n_lambda2): 593 | temp = regularized_regression_fit(resp2_lag, y[trial], 594 | reg_matrix, reg_lambda2[c_lambda]) 595 | model_2_temp[c_lambda, :, :] = temp 596 | 597 | for c_lambda1 in range(n_lambda1): 598 | for c_lambda2 in range(n_lambda2): 599 | model[trial, c_lambda1, c_lambda2, :, :] = (model_1_temp[c_lambda1, :, :] 600 | + model_2_temp[c_lambda2, :, :]) 601 | 602 | r = np.zeros((n_trials, n_lambda1, n_lambda2, n_targets)) 603 | p = np.zeros(r.shape) 604 | mse = np.zeros(r.shape) 605 | pred = [] 606 | 607 | for trial in range(n_trials): 608 | pred.append(np.zeros((n_lambda1, n_lambda2, y[trial].shape[0], n_targets))) 609 | 610 | for c_lambda1 in range(n_lambda1): 611 | for c_lambda2 in range(n_lambda2): 612 | # Calculate prediction 613 | cv_coef = np.mean(model[np.arange(n_trials) != trial, c_lambda1, c_lambda2, :, :], 0, keepdims=False) 614 | pred[trial][c_lambda1, c_lambda2, :, :] = regularized_regression_predict(x_input[trial], cv_coef) 615 | # Calculate accuracy 616 | for c_target in range(n_targets): 617 | temp_pred = np.squeeze(pred[trial][c_lambda1, c_lambda2, :, c_target]).T 618 | 619 | (r[trial, c_lambda1, c_lambda2, c_target], 620 | p[trial, c_lambda1, c_lambda2, c_target]) = pearsonr(y[trial][:, c_target], temp_pred) 621 | mse[trial, c_lambda1, c_lambda2, c_target] = np.mean((y[trial][:, c_target] - temp_pred) ** 2) 622 | 623 | return r, p, mse, pred, model 624 | -------------------------------------------------------------------------------- /pymtrf/test/__init__.py: -------------------------------------------------------------------------------- 1 | #from .test_mtrf import * 2 | #from .test_helper import * -------------------------------------------------------------------------------- /pymtrf/test/context.py: -------------------------------------------------------------------------------- 1 | import sys 2 | import os 3 | sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '../'))) 4 | 
-------------------------------------------------------------------------------- /pymtrf/test/matlab_test_sets.m: -------------------------------------------------------------------------------- 1 | % Script to create matlab test sets for the mtrf toolbox 2 | % These can be seen as a precision test of the python translation 3 | % versus the original toolbox. 4 | % The tests have been performed using simulated data already using matlab9.1 5 | % (2016b) 6 | 7 | MTRF_DIR = ''; % Change towards the directory where the mtrf toolbox is located 8 | OUT_DIR = ['test_files' filesep]; % Directory where test files are stored. 9 | TEST_DATA = []; % Path to the simulated data. 10 | mkdir([OUT_DIR]) % Create result folder 11 | addpath(genpath([MTRF_DIR])) % Put mtrf toolbox on path 12 | 13 | Fs = 64; % Set during simulation 14 | tmin = -60; % ms from simulation 15 | tmax = 60; % ms form simulation 16 | 17 | SIMDATA = load([TEST_DATA filesep 'gendata.mat']); 18 | 19 | 20 | x = SIMDATA.x; % x has shape (8 * 64, 5) 21 | y = SIMDATA.y_sim; % y has shape (8 * 64, 6) 22 | model = SIMDATA.model; 23 | 24 | constant = zeros(size(model, 1), size(model, 3)); 25 | 26 | [w, t, i] = mTRFtrain(x(:,1:3), y(:,1:3), Fs, 1, -60, 60, 1); 27 | save([OUT_DIR filesep 'mtrf_train_fwd.mat'], 'w', 't', 'i') 28 | clear w t i 29 | 30 | [w, t, i] = mTRFtrain(x(:,1:3), y(:,1:3), Fs, -1, -60, 60, 1); 31 | save([OUT_DIR filesep 'mtrf_train_bwd.mat'], 'w', 't', 'i') 32 | clear w t i 33 | 34 | % Create train splits: 35 | train_x = {}; 36 | train_y = {}; 37 | for i = 1 : 4 38 | train_x{i} = x( (i-1) * (Fs*2) + 1 : i * (Fs*2),1:5); 39 | train_y{i} = y( (i-1) * (Fs*2) + 1 : i * (Fs*2),1:4); 40 | end; 41 | 42 | [w, t, i] = mTRFtrain(train_x{1}, train_y{1}, Fs, 1, 0, 60, 1); 43 | [rec, r, p, mse] = mTRFpredict(train_x{2}, train_y{2}, w, Fs, 1, 0, 60, i); 44 | save([OUT_DIR filesep 'mtrf_predict_fwd.mat'], 'rec', 'r', 'p', 'mse') 45 | 46 | clear w t i rec r p mse 47 | 48 | [r,p,mse,pred,model] = mTRFcrossval(train_x, train_y, Fs, -1, -60, 60, [0.1, 1, 10]); 49 | save([OUT_DIR 'cross_val_equal_bwd.mat'], 'r', 'p', 'mse', 'pred', 'model') 50 | clear r p mse mse pred model 51 | 52 | [r,p,mse,pred,model] = mTRFmulticrossval(train_x, train_y, train_y, train_y, Fs, -1, -60, 60, [0.1, 1, 10], [0.1, 1, 10]); 53 | save([OUT_DIR 'multicross_val_equal_bwd.mat'], 'r', 'p', 'mse', 'pred', 'model') 54 | 55 | clear r p mse mse pred model 56 | 57 | [r,p,mse,pred,model] = mTRFcrossval(train_x, train_y, Fs, 1, -60, 60, [0.1, 1, 10]); 58 | save([OUT_DIR 'cross_val_equal_fwd.mat'], 'r', 'p', 'mse', 'pred', 'model') 59 | clear r p mse mse pred model 60 | 61 | [r,p,mse,pred,model] = mTRFmulticrossval(train_x, train_y, train_y, train_y, Fs, 1, -60, 60, [0.1, 1, 10], [0.1, 1, 10]); 62 | save([OUT_DIR 'multicross_val_equal_fwd.mat'], 'r', 'p', 'mse', 'pred', 'model') 63 | clear r p mse mse pred model train_x train_y 64 | 65 | train_x = {}; 66 | train_y = {}; 67 | for i = 1 : 2 68 | train_x{i} = x( (i-1) * (Fs*2) + 1 : i * (Fs*2),1:5); 69 | train_y{i} = y( (i-1) * (Fs*2) + 1 : i * (Fs*2),1:4); 70 | end; 71 | 72 | train_x{i+1} = x( (i-1) * (Fs*2) + 1 : i * (Fs*3),1:5); 73 | train_y{i+1} = y( (i-1) * (Fs*2) + 1 : i * (Fs*3),1:4); 74 | 75 | [r,p,mse,pred,model] = mTRFcrossval(train_x, train_y, Fs, 1, -60, 60, [0.1, 1, 10]); 76 | save([OUT_DIR 'cross_val_unequal_fwd.mat'], 'r', 'p', 'mse', 'pred', 'model') 77 | clear r p mse mse pred model 78 | 79 | [r,p,mse,pred,model] = mTRFmulticrossval(train_x, train_y, train_y, train_y, Fs, 1, -60, 60, [0.1, 1, 10], [0.1, 1, 10]); 80 
| save([OUT_DIR 'multicross_val_unequal_fwd.mat'], 'r', 'p', 'mse', 'pred', 'model') 81 | clear r p mse mse pred train_x train_y 82 | %% 83 | sim_shape = size(SIMDATA.model); 84 | [model_t,t,c_t] = mTRFtransform(x, y, SIMDATA.model, Fs, 1, -60, 60, zeros(sim_shape(1),sim_shape(3))); 85 | save([OUT_DIR 'mtrf_transform_fwd.mat'], 'model_t', 't', 'c_t') 86 | clear model_t t c_t 87 | 88 | [model_t,t,c_t] = mTRFtransform(x, y, permute(SIMDATA.model, [3,2,1]), Fs, -1, -60, 60, zeros(sim_shape(3),sim_shape(1))); 89 | save([OUT_DIR 'mtrf_transform_bwd.mat'], 'model_t', 't', 'c_t') 90 | clear model_t t c_t -------------------------------------------------------------------------------- /pymtrf/test/mtrf_test_set.m: -------------------------------------------------------------------------------- 1 | % Creates matlab tests sets for the toolbox 2 | MTRF_DIR = ''; % Change accordingly; 3 | OUT_DIR = ['test_results' filesep]; % Change accordingly (pymtrf/tests/test_results) 4 | mkdir([OUT_DIR]) 5 | addpath(genpath(MTRF_DIR)) 6 | 7 | % Readme set 1: 8 | load([MTRF_DIR 'contrast_data.mat']); 9 | [w, t, i] = mTRFtrain(contrastLevel, EEG, Fs, 1, -150, 450, 1); 10 | save([OUT_DIR 'rdm_contrast_data.mat'], 'w', 't', 'i') 11 | clear Fs EEG w t i contrastLevel 12 | 13 | % Readme set 2: 14 | load([MTRF_DIR 'coherentmotion_data.mat']); 15 | [w, t, i] = mTRFtrain(coherentMotionLevel, EEG, Fs, 1, -150, 450, 1); 16 | save([OUT_DIR 'rdm_motion_data.mat'], 'w', 't', 'i') 17 | clear Fs EEG w t i coherentMotionLevel 18 | %% 19 | % Readme set 3: 20 | load([MTRF_DIR 'speech_data.mat']); 21 | [w, t, i] = mTRFtrain(envelope, EEG, 128, 1, -150, 450, 0.1); 22 | save([OUT_DIR 'rdm_speech_data_trf.mat'], 'w', 't', 'i') 23 | clear w t i 24 | 25 | % Readme set 4 26 | [w, t, i] = mTRFtrain(spectrogram, EEG, 128, 1, -150, 450, 100); 27 | save([OUT_DIR 'rdm_speech_data_strf.mat'], 'w', 't', 'i') 28 | clear w t i 29 | 30 | % Readme set 5:without resampling because of toolbox requirement 31 | stimTrain = envelope(1:Fs*60,1); 32 | respTrain = EEG(1:Fs*60,:); 33 | stimTest = envelope(Fs*60 + 1:end, 1); 34 | respTest = EEG(Fs*60 + 1:end, :); 35 | tic; 36 | [g, t, con] = mTRFtrain(stimTrain, respTrain, Fs, -1, 0, 500, 1e5); 37 | disp(toc) 38 | [recon,r,p,MSE] = mTRFpredict(stimTest, respTest, g, Fs, -1, 0, 500, con); 39 | save([OUT_DIR 'rdm_speech_data_recon.mat'], 'g', 't', 'con', 'recon', 'r', 'p', 'MSE') 40 | 41 | clear r p MSE w t i g con recon 42 | % Not example: Crossvall predict, unequal 43 | stim1 = envelope(1:Fs*30,1); 44 | stim2 = envelope(Fs*30 +1 : Fs * 60, 1); 45 | stim3 = envelope(Fs * 60 + 1:Fs*100, 1); 46 | stim4 = envelope(Fs * 100 + 1: end, 1); 47 | resp1 = EEG(1:Fs*30,:); 48 | resp2 = EEG(Fs*30 +1 : Fs * 60, :); 49 | resp3 = EEG(Fs * 60 + 1:Fs*100, :); 50 | resp4 = EEG(Fs * 100 + 1: end, :); 51 | 52 | [r,p,mse,pred,model] = mTRFcrossval({stim1, stim2, stim3, stim4}, {resp1, resp2, resp3, resp4}, Fs, -1, -50, 150, [0.1, 1, 10]); 53 | save([OUT_DIR 'rdm_speech_data_cross_val_unequal.mat'], 'r', 'p', 'mse', 'pred', 'model') 54 | 55 | [r,p,mse,pred,model] = mTRFmulticrossval({stim1, stim2, stim3, stim4}, {resp1, resp2, resp3, resp4}, {resp1, resp2, resp3, resp4},{resp1, resp2, resp3, resp4}, Fs, -1, -50, 150, [0.1, 1, 10], [0.1, 1, 10]); 56 | save([OUT_DIR 'rdm_speech_data_multi_cross_val_unequal.mat'], 'r', 'p', 'mse', 'pred', 'model') 57 | 58 | clear 'r' 'p' 'mse' 'pred' 'model' 'stim1' 'stim2' 'stim3' 'stim4' 'resp1' 'resp2' 'resp3' 'resp4' 59 | stim1 = envelope(1:Fs*30,1); 60 | stim2 = envelope(Fs*30 +1 : Fs * 60, 1); 
61 | stim3 = envelope(Fs * 60 + 1:Fs*90, 1); 62 | %stim4 = envelope(Fs * 90 + 1: end, 1); 63 | resp1 = EEG(1:Fs*30,:); 64 | resp2 = EEG(Fs*30 +1 : Fs * 60, :); 65 | resp3 = EEG(Fs * 60 + 1:Fs*90, :); 66 | %resp4 = EEG(Fs * 90 + 1: end, :); 67 | 68 | [r,p,mse,pred,model] = mTRFcrossval({stim1, stim2, stim3}, {resp1, resp2, resp3}, Fs, -1, -50, 150, [0.1, 1, 10]); 69 | save([OUT_DIR 'rdm_speech_data_cross_val_equal.mat'], 'r', 'p', 'mse', 'pred', 'model') 70 | 71 | [r,p,mse,pred,model] = mTRFmulticrossval({stim1, stim2, stim3}, {resp1, resp2, resp3}, {resp1, resp2, resp3}, {resp1, resp2, resp3}, Fs, -1, -50, 150, [0.1, 1, 10], [0.1, 1, 10]); 72 | save([OUT_DIR 'rdm_speech_data_multi_cross_val_equal.mat'], 'r', 'p', 'mse', 'pred', 'model') 73 | 74 | % Not example: Crossvall predict, unequal 75 | stim1 = envelope(1:Fs*30,1); 76 | stim2 = envelope(Fs*30 +1 : Fs * 60, 1); 77 | stim3 = envelope(Fs * 60 + 1:Fs*100, 1); 78 | stim4 = envelope(Fs * 100 + 1: end, 1); 79 | resp1 = EEG(1:Fs*30,:); 80 | resp2 = EEG(Fs*30 +1 : Fs * 60, :); 81 | resp3 = EEG(Fs * 60 + 1:Fs*100, :); 82 | resp4 = EEG(Fs * 100 + 1: end, :); 83 | 84 | [r,p,mse,pred,model] = mTRFcrossval({stim1, stim2, stim3, stim4}, {resp1, resp2, resp3, resp4}, Fs, 1, -50, 150, [0.1, 1, 10]); 85 | save([OUT_DIR 'rdm_speech_data_cross_val_unequal_fwd.mat'], 'r', 'p', 'mse', 'pred', 'model') 86 | %% 87 | [r,p,mse,pred,model] = mTRFmulticrossval({stim1, stim2, stim3, stim4}, {resp1, resp2, resp3, resp4}, {resp1, resp2, resp3, resp4}, {resp1, resp2, resp3, resp4}, Fs, 1, -50, 150, [0.1, 1, 10], [0.1, 1, 10]); 88 | save([OUT_DIR 'rdm_speech_data_multi_cross_val_unequal_fwd.mat'], 'r', 'p', 'mse', 'pred', 'model') 89 | 90 | clear 'r' 'p' 'mse' 'pred' 'model' 'stim1' 'stim2' 'stim3' 'stim4' 'resp1' 'resp2' 'resp3' 'resp4' 91 | stim1 = envelope(1:Fs*30,1); 92 | stim2 = envelope(Fs*30 +1 : Fs * 60, 1); 93 | stim3 = envelope(Fs * 60 + 1:Fs*90, 1); 94 | %stim4 = envelope(Fs * 90 + 1: end, 1); 95 | resp1 = EEG(1:Fs*30,:); 96 | resp2 = EEG(Fs*30 +1 : Fs * 60, :); 97 | resp3 = EEG(Fs * 60 + 1:Fs*90, :); 98 | %resp4 = EEG(Fs * 90 + 1: end, :); 99 | 100 | [r,p,mse,pred,model] = mTRFcrossval({stim1, stim2, stim3}, {resp1, resp2, resp3}, Fs, 1, -50, 150, [0.1, 1, 10]); 101 | save([OUT_DIR 'rdm_speech_data_cross_val_equal_fwd.mat'], 'r', 'p', 'mse', 'pred', 'model') 102 | 103 | [r,p,mse,pred,model] = mTRFmulticrossval({stim1, stim2, stim3}, {resp1, resp2, resp3}, {resp1, resp2, resp3}, {resp1, resp2, resp3}, Fs, 1, -50, 150, [0.1, 1, 10], [0.1, 1, 10]); 104 | save([OUT_DIR 'rdm_speech_data_multi_cross_val_equal_fwd.mat'], 'r', 'p', 'mse', 'pred', 'model') -------------------------------------------------------------------------------- /pymtrf/test/simulate_test_data.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | from scipy.stats import norm 3 | from pymtrf.helper import lag_builder, model_to_coefficients 4 | from pymtrf.helper import regularized_regression_predict 5 | from pymtrf.mtrf import lag_gen 6 | from scipy.io import savemat 7 | 8 | def build_test_data(save_to_file=False, noise=1e-5): 9 | # Model: we define 10 channels, 9 lags, 6 targets. 10 | # model is channel by lags by target. 
11 | np.random.seed(221) 12 | model = np.zeros((5, 9, 6)) 13 | 14 | for i in range(6): 15 | model[0, :, i] = np.sin(np.linspace(0, (1 + (i/10)) * np.pi, 9)) 16 | model[1, :, i] = 0.5 * np.sin(np.linspace(0, (1 + (i/10)) * np.pi, 9)) 17 | model[2, :, i] = np.cos(np.linspace(0, (1 + (i/10)) * np.pi, 9)) 18 | model[3, :, i] = 0.5 * np.cos(np.linspace(0, (1 + (i/10)) * np.pi, 9)) 19 | model[4, :, i] = norm.pdf(np.linspace(-1, 1, 9), scale=1 + (i/10)) 20 | # model[5, :, i] = norm.pdf(np.linspace(-1, 1, 9), loc=0 + (i/10)) 21 | 22 | fs = 64 23 | tmin = -60 24 | tmax = 60 25 | mapping_direction = 1 26 | t_min = np.floor(tmin / 1e3 * fs * mapping_direction).astype(int) 27 | t_max = np.ceil(tmax / 1e3 * fs * mapping_direction).astype(int) 28 | 29 | lags = lag_builder(t_min, t_max) 30 | 31 | x = np.random.rand(8 * fs, 5) 32 | x = x + np.random.randn(x.shape[0], x.shape[1]) * noise 33 | x_lag = lag_gen(x, lags) 34 | x_lag = np.hstack([np.ones(x.shape), x_lag]) 35 | 36 | coef = model_to_coefficients(model[:, :, :], np.zeros((5, 6))) 37 | 38 | y_sim = regularized_regression_predict(x_lag, coef) 39 | 40 | if save_to_file: 41 | savemat('gendata.mat', {'x': x, 'model': model, 'y_sim': y_sim}) 42 | 43 | return x, model, y_sim 44 | -------------------------------------------------------------------------------- /pymtrf/test/test_files/cross_val_equal_bwd.mat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SRSteinkamp/pymtrf/dd32a1f8f96e19bc6e151141c4fc9f7a2d7ac524/pymtrf/test/test_files/cross_val_equal_bwd.mat -------------------------------------------------------------------------------- /pymtrf/test/test_files/cross_val_equal_fwd.mat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SRSteinkamp/pymtrf/dd32a1f8f96e19bc6e151141c4fc9f7a2d7ac524/pymtrf/test/test_files/cross_val_equal_fwd.mat -------------------------------------------------------------------------------- /pymtrf/test/test_files/cross_val_unequal_fwd.mat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SRSteinkamp/pymtrf/dd32a1f8f96e19bc6e151141c4fc9f7a2d7ac524/pymtrf/test/test_files/cross_val_unequal_fwd.mat -------------------------------------------------------------------------------- /pymtrf/test/test_files/gendata.mat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SRSteinkamp/pymtrf/dd32a1f8f96e19bc6e151141c4fc9f7a2d7ac524/pymtrf/test/test_files/gendata.mat -------------------------------------------------------------------------------- /pymtrf/test/test_files/mtrf_predict_fwd.mat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SRSteinkamp/pymtrf/dd32a1f8f96e19bc6e151141c4fc9f7a2d7ac524/pymtrf/test/test_files/mtrf_predict_fwd.mat -------------------------------------------------------------------------------- /pymtrf/test/test_files/mtrf_train_bwd.mat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SRSteinkamp/pymtrf/dd32a1f8f96e19bc6e151141c4fc9f7a2d7ac524/pymtrf/test/test_files/mtrf_train_bwd.mat -------------------------------------------------------------------------------- /pymtrf/test/test_files/mtrf_train_fwd.mat: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/SRSteinkamp/pymtrf/dd32a1f8f96e19bc6e151141c4fc9f7a2d7ac524/pymtrf/test/test_files/mtrf_train_fwd.mat -------------------------------------------------------------------------------- /pymtrf/test/test_files/mtrf_transform_bwd.mat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SRSteinkamp/pymtrf/dd32a1f8f96e19bc6e151141c4fc9f7a2d7ac524/pymtrf/test/test_files/mtrf_transform_bwd.mat -------------------------------------------------------------------------------- /pymtrf/test/test_files/mtrf_transform_fwd.mat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SRSteinkamp/pymtrf/dd32a1f8f96e19bc6e151141c4fc9f7a2d7ac524/pymtrf/test/test_files/mtrf_transform_fwd.mat -------------------------------------------------------------------------------- /pymtrf/test/test_files/multicross_val_equal_bwd.mat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SRSteinkamp/pymtrf/dd32a1f8f96e19bc6e151141c4fc9f7a2d7ac524/pymtrf/test/test_files/multicross_val_equal_bwd.mat -------------------------------------------------------------------------------- /pymtrf/test/test_files/multicross_val_equal_fwd.mat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SRSteinkamp/pymtrf/dd32a1f8f96e19bc6e151141c4fc9f7a2d7ac524/pymtrf/test/test_files/multicross_val_equal_fwd.mat -------------------------------------------------------------------------------- /pymtrf/test/test_files/multicross_val_unequal_fwd.mat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SRSteinkamp/pymtrf/dd32a1f8f96e19bc6e151141c4fc9f7a2d7ac524/pymtrf/test/test_files/multicross_val_unequal_fwd.mat -------------------------------------------------------------------------------- /pymtrf/test/test_helper.py: -------------------------------------------------------------------------------- 1 | #import context 2 | import pymtrf 3 | import numpy as np 4 | from .simulate_test_data import build_test_data 5 | 6 | def test_lag_builder_positive_lags(): 7 | # Test lag_builder for the creation of a positive lag vector 8 | lags = pymtrf.lag_builder(1, 4) 9 | assert np.all(lags == [1, 2, 3, 4]) 10 | 11 | 12 | def test_lag_builder_negative_lags(): 13 | # Test lag_builder for the creation of a negative lag vector, starting with 14 | # a negative value 15 | lags = pymtrf.lag_builder(-2, 2) 16 | assert np.all(lags == [-2, -1, 0, 1, 2]) 17 | 18 | 19 | def test_lag_builder_negative_lags_reverse(): 20 | # Test lag_builder for the creation of a negative lag vector, starting with 21 | # a positive value 22 | lags = pymtrf.lag_builder(2, -2) 23 | assert np.all(lags == [2, 1, 0, -1, -2]) 24 | 25 | 26 | def test_lag_builder_starting_from_zero(): 27 | # Test lag_builder for the creation of a lag vector starting at zero and 28 | # ending with a positive value 29 | lags = pymtrf.lag_builder(0, 3) 30 | assert np.all(lags == [0, 1, 2, 3]) 31 | 32 | 33 | def test_lag_builder_only_zero(): 34 | # Test lag_builder for the creation of a lag vector containing only the 35 | # zero lag 36 | lags = pymtrf.lag_builder(0, 0) 37 | assert np.all(lags == [0]) 38 | 39 | 40 | def test_quadratic_regularization_3(): 41 | m_mat = pymtrf.quadratic_regularization(3) 42 | test_mat = np.array([[1, -1, 0], [-1, 2, -1], [0, -1, 1]]) 43 | assert 
np.all(m_mat == test_mat) 44 | 45 | 46 | def test_quadratic_regularization_5(): 47 | m_mat = pymtrf.quadratic_regularization(5) 48 | test_mat = np.array([[1, -1, 0, 0, 0], [-1, 2, -1, 0, 0], 49 | [0, -1, 2, -1, 0], [0, 0, -1, 2, -1], 50 | [0, 0, 0, -1, 1]]) 51 | assert np.all(m_mat == test_mat) 52 | 53 | def test_create_test_data(): 54 | x_shape = np.array([64 * 8, 5]) 55 | y_shape = np.array([64 * 8, 6]) 56 | model_shape = np.array([5, 9, 6]) 57 | x, model, y =build_test_data() 58 | assert np.all(x.shape == x_shape) 59 | assert np.all(model.shape == model_shape) 60 | assert np.all(y.shape == y_shape) 61 | 62 | -------------------------------------------------------------------------------- /pymtrf/test/test_mtrf.py: -------------------------------------------------------------------------------- 1 | #import context 2 | import pymtrf 3 | from .simulate_test_data import build_test_data 4 | import os 5 | import numpy as np 6 | from scipy.io import loadmat 7 | t_precision = 10 8 | 9 | 10 | def test_lag_gen_shape_neg_lags(): 11 | fake_data = np.random.rand(20, 2) 12 | lags = pymtrf.lag_builder(-2, 0) 13 | shape = pymtrf.lag_gen(fake_data, lags).shape 14 | 15 | assert np.all(shape == (20, 6)) 16 | 17 | 18 | def test_lag_gen_shape_pos_lags(): 19 | fake_data = np.random.rand(20, 2) 20 | lags = pymtrf.lag_builder(2, 0) 21 | shape = pymtrf.lag_gen(fake_data, lags).shape 22 | assert np.all(shape == (20, 6)) 23 | 24 | 25 | def test_lag_gen_shape_no_lags(): 26 | fake_data = np.random.rand(20, 2) 27 | lags = pymtrf.lag_builder(0, 0) 28 | shape = pymtrf.lag_gen(fake_data, lags).shape 29 | assert np.all(shape == (20, 2)) 30 | 31 | 32 | def test_lag_gen_shift_pos(): 33 | fake_data = np.ones((3, 1)) 34 | lags = pymtrf.lag_builder(0, 1) 35 | lag_matrix = pymtrf.lag_gen(fake_data, lags) 36 | test_matrix = np.ones((3, 2)) 37 | test_matrix[0, 1] = 0 38 | assert np.all(test_matrix == lag_matrix) 39 | 40 | 41 | def test_lag_gen_shift_neg(): 42 | fake_data = np.ones((3, 1)) 43 | lags = pymtrf.lag_builder(-1, 0) 44 | lag_matrix = pymtrf.lag_gen(fake_data, lags) 45 | test_matrix = np.ones((3, 2)) 46 | test_matrix[2, 0] = 0 47 | assert np.all(test_matrix == lag_matrix) 48 | 49 | 50 | def test_lag_gen_shift_pos_one(): 51 | fake_data = np.ones((3, 1)) 52 | lags = pymtrf.lag_builder(1, 1) 53 | lag_matrix = pymtrf.lag_gen(fake_data, lags) 54 | test_matrix = np.ones((3, 1)) 55 | test_matrix[0, 0] = 0 56 | assert np.all(test_matrix == lag_matrix) 57 | 58 | 59 | def test_mtrf_train_fwd(): 60 | data = loadmat(f'pymtrf{os.sep}test{os.sep}test_files{os.sep}mtrf_train_fwd.mat') 61 | w = data['w'] 62 | t = np.squeeze(data['t']) 63 | i = data['i'] 64 | x, model, y = build_test_data() 65 | w_t, t_t, i_t = pymtrf.mtrf_train(x[:, :3], y[:, :3], 64, 1, -60, 60, 1) 66 | np.testing.assert_almost_equal(w, w_t, t_precision) 67 | np.testing.assert_almost_equal(t, t_t, t_precision) 68 | np.testing.assert_almost_equal(i, i_t, t_precision) 69 | 70 | 71 | def test_mtrf_train_bwd(): 72 | data = loadmat(f'pymtrf{os.sep}test{os.sep}test_files{os.sep}mtrf_train_bwd.mat') 73 | w = data['w'] 74 | t = np.squeeze(data['t']) 75 | i = data['i'] 76 | x, _, y = build_test_data() 77 | w_t, t_t, i_t = pymtrf.mtrf_train(x[:, :3], y[:, :3], 64, -1, -60, 60, 1) 78 | np.testing.assert_almost_equal(w, w_t, t_precision) 79 | np.testing.assert_almost_equal(t, t_t, t_precision) 80 | np.testing.assert_almost_equal(i, i_t, t_precision) 81 | 82 | 83 | def test_mtrf_predict_fwd(): 84 | data = 
loadmat(f'pymtrf{os.sep}test{os.sep}test_files{os.sep}mtrf_predict_fwd.mat') 85 | rec = data['rec'] 86 | r = data['r'] 87 | p = data['p'] 88 | mse = data['mse'] 89 | x, _, y = build_test_data() 90 | w_t, t_t, i_t = pymtrf.mtrf_train(x[:64 * 2, :], y[:64*2, :4], 64, 1, 0, 60, 1) 91 | rec_t, r_t, p_t, mse_t = pymtrf.mtrf_predict(x[64*2:64*2*2, :], 92 | y[64*2:64*2*2, :4], w_t, 64, 1, 0, 93 | 60, i_t) 94 | 95 | np.testing.assert_almost_equal(rec, rec_t, t_precision) 96 | np.testing.assert_almost_equal(r, r_t, t_precision) 97 | np.testing.assert_almost_equal(p, p_t, t_precision) 98 | np.testing.assert_almost_equal(mse, mse_t, t_precision) 99 | 100 | 101 | def test_mtrf_cross_val_equal_bwd(): 102 | x, _, y = build_test_data() 103 | x_train = [x[i*64*2:(i+1)*64*2, :] for i in range(4)] 104 | y_train = [y[i*64*2:(i+1)*64*2, :4] for i in range(4)] 105 | x_train = np.stack(x_train) 106 | y_train = np.stack(y_train) 107 | data = loadmat(f'pymtrf{os.sep}test{os.sep}test_files{os.sep}cross_val_equal_bwd.mat') 108 | pred = data['pred'] 109 | r = data['r'] 110 | p = data['p'] 111 | mse = data['mse'] 112 | model = data['model'] 113 | r_t, p_t, mse_t, pred_t, model_t = pymtrf.mtrf_crossval(x_train, y_train, 114 | 64, -1, -60, 60, 115 | [0.1, 1, 10]) 116 | 117 | np.testing.assert_almost_equal(r, r_t, t_precision) 118 | np.testing.assert_almost_equal(p, p_t, t_precision) 119 | np.testing.assert_almost_equal(mse, mse_t, t_precision) 120 | for (pr, pr_t) in zip(pred.T, pred_t): 121 | np.testing.assert_almost_equal(pr[0], pr_t, t_precision) 122 | np.testing.assert_almost_equal(model, model_t, t_precision) 123 | 124 | 125 | def test_mtrf_multicross_val_equal_bwd(): 126 | x, _, y = build_test_data() 127 | x_train = [x[i*64*2:(i+1)*64*2, :] for i in range(4)] 128 | y_train = [y[i*64*2:(i+1)*64*2, :4] for i in range(4)] 129 | x_train = np.stack(x_train) 130 | y_train = np.stack(y_train) 131 | data = loadmat(f'pymtrf{os.sep}test{os.sep}test_files{os.sep}multicross_val_equal_bwd.mat') 132 | pred = data['pred'] 133 | r = data['r'] 134 | p = data['p'] 135 | mse = data['mse'] 136 | model = data['model'] 137 | r_t, p_t, mse_t, pred_t, model_t = pymtrf.mtrf_multicrossval(x_train, 138 | y_train, 139 | y_train, 140 | y_train, 141 | 64, -1, -60, 142 | 60, 143 | [0.1, 1, 10], 144 | [0.1, 1, 10]) 145 | 146 | np.testing.assert_almost_equal(r, r_t, t_precision) 147 | np.testing.assert_almost_equal(p, p_t, t_precision) 148 | np.testing.assert_almost_equal(mse, mse_t, t_precision) 149 | for (pr, pr_t) in zip(pred.T, pred_t): 150 | np.testing.assert_almost_equal(pr[0], pr_t, t_precision) 151 | np.testing.assert_almost_equal(model, model_t, t_precision) 152 | 153 | 154 | def test_mtrf_cross_val_equal_fwd(): 155 | x, _, y = build_test_data() 156 | x_train = [x[i*64*2:(i+1)*64*2, :] for i in range(4)] 157 | y_train = [y[i*64*2:(i+1)*64*2, :4] for i in range(4)] 158 | x_train = np.stack(x_train) 159 | y_train = np.stack(y_train) 160 | data = loadmat(f'pymtrf{os.sep}test{os.sep}test_files{os.sep}cross_val_equal_fwd.mat') 161 | pred = data['pred'] 162 | r = data['r'] 163 | p = data['p'] 164 | mse = data['mse'] 165 | model = data['model'] 166 | r_t, p_t, mse_t, pred_t, model_t = pymtrf.mtrf_crossval(x_train, y_train, 167 | 64, 1, -60, 60, 168 | [0.1, 1, 10]) 169 | 170 | np.testing.assert_almost_equal(r, r_t, t_precision) 171 | np.testing.assert_almost_equal(p, p_t, t_precision) 172 | np.testing.assert_almost_equal(mse, mse_t, t_precision) 173 | for (pr, pr_t) in zip(pred.T, pred_t): 174 | np.testing.assert_almost_equal(pr[0], pr_t, 
t_precision) 175 | np.testing.assert_almost_equal(model, model_t, t_precision) 176 | 177 | 178 | def test_mtrf_multicross_val_equal_fwd(): 179 | x, _, y = build_test_data() 180 | x_train = [x[i*64*2:(i+1)*64*2, :] for i in range(4)] 181 | y_train = [y[i*64*2:(i+1)*64*2, :4] for i in range(4)] 182 | x_train = np.stack(x_train) 183 | y_train = np.stack(y_train) 184 | data = loadmat(f'pymtrf{os.sep}test{os.sep}test_files{os.sep}multicross_val_equal_fwd.mat') 185 | pred = data['pred'] 186 | r = data['r'] 187 | p = data['p'] 188 | mse = data['mse'] 189 | model = data['model'] 190 | r_t, p_t, mse_t, pred_t, model_t = pymtrf.mtrf_multicrossval(x_train, 191 | y_train, 192 | y_train, 193 | y_train, 194 | 64, 1, -60, 60, 195 | [0.1, 1, 10], 196 | [0.1, 1, 10]) 197 | 198 | np.testing.assert_almost_equal(r, r_t, t_precision) 199 | np.testing.assert_almost_equal(p, p_t, t_precision) 200 | np.testing.assert_almost_equal(mse, mse_t, t_precision) 201 | for (pr, pr_t) in zip(pred.T, pred_t): 202 | np.testing.assert_almost_equal(pr[0], pr_t, t_precision) 203 | np.testing.assert_almost_equal(model, model_t, t_precision) 204 | 205 | 206 | def test_mtrf_cross_val_unequal_fwd(): 207 | x, _, y = build_test_data() 208 | x_train = [x[i*64*2:(i+1)*64*2, :] for i in range(2)] 209 | y_train = [y[i*64*2:(i+1)*64*2, :4] for i in range(2)] 210 | x_train.append(x[1*64*2: 64*2*3, :]) 211 | y_train.append(y[1*64*2: 64*2*3, :4]) 212 | data = loadmat(f'pymtrf{os.sep}test{os.sep}test_files{os.sep}cross_val_unequal_fwd.mat') 213 | pred = data['pred'] 214 | r = data['r'] 215 | p = data['p'] 216 | mse = data['mse'] 217 | model = data['model'] 218 | r_t, p_t, mse_t, pred_t, model_t = pymtrf.mtrf_crossval(x_train, y_train, 219 | 64, 1, -60, 60, 220 | [0.1, 1, 10]) 221 | 222 | np.testing.assert_almost_equal(r, r_t, t_precision) 223 | np.testing.assert_almost_equal(p, p_t, t_precision) 224 | np.testing.assert_almost_equal(mse, mse_t, t_precision) 225 | for (pr, pr_t) in zip(pred.T, pred_t): 226 | np.testing.assert_almost_equal(pr[0], pr_t, t_precision) 227 | np.testing.assert_almost_equal(model, model_t, t_precision) 228 | 229 | 230 | def test_mtrf_multicross_val_unequal_fwd(): 231 | x, _, y = build_test_data() 232 | x_train = [x[i*64*2:(i+1)*64*2, :] for i in range(2)] 233 | y_train = [y[i*64*2:(i+1)*64*2, :4] for i in range(2)] 234 | x_train.append(x[1*64*2: 64*2*3, :]) 235 | y_train.append(y[1*64*2: 64*2*3, :4]) 236 | data = loadmat(f'pymtrf{os.sep}test{os.sep}test_files{os.sep}multicross_val_unequal_fwd.mat') 237 | pred = data['pred'] 238 | r = data['r'] 239 | p = data['p'] 240 | mse = data['mse'] 241 | model = data['model'] 242 | r_t, p_t, mse_t, pred_t, model_t = pymtrf.mtrf_multicrossval(x_train, 243 | y_train, 244 | y_train, 245 | y_train, 246 | 64, 1, -60, 60, 247 | [0.1, 1, 10], 248 | [0.1, 1, 10]) 249 | 250 | np.testing.assert_almost_equal(r, r_t, t_precision) 251 | np.testing.assert_almost_equal(p, p_t, t_precision) 252 | np.testing.assert_almost_equal(mse, mse_t, t_precision) 253 | for (pr, pr_t) in zip(pred.T, pred_t): 254 | np.testing.assert_almost_equal(pr[0], pr_t, t_precision) 255 | np.testing.assert_almost_equal(model, model_t, t_precision) 256 | 257 | 258 | def test_mtrf_transform_fwd(): 259 | # Output works, but matrix is quite ill conditioned (I think) therefore, 260 | # low precision in this step. 
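# Note that the local t_precision assigned below overrides the module-level value of 10 for this test only.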
261 | t_precision = 1 262 | x, model, y = build_test_data() 263 | data = loadmat(f'pymtrf{os.sep}test{os.sep}test_files{os.sep}mtrf_transform_fwd.mat') 264 | model_t = data['model_t'] 265 | t = np.squeeze(data['t']) 266 | c_t = data['c_t'] 267 | 268 | [model_t_t, t_t, c_t_t] = pymtrf.mtrf_transform(x, y, model, 64, 1, 269 | -60, 60) 270 | 271 | np.testing.assert_almost_equal(model_t, model_t_t, t_precision) 272 | np.testing.assert_almost_equal(t, t_t, t_precision) 273 | np.testing.assert_almost_equal(c_t, c_t_t, t_precision) 274 | 275 | 276 | def test_mtrf_transform_bwd(): 277 | t_precision = 10 278 | x, model, y = build_test_data() 279 | data = loadmat(f'pymtrf{os.sep}test{os.sep}test_files{os.sep}mtrf_transform_bwd.mat') 280 | model_t = data['model_t'] 281 | t = np.squeeze(data['t']) 282 | c_t = data['c_t'] 283 | 284 | [model_t_t, t_t, c_t_t] = pymtrf.mtrf_transform(x, y, 285 | np.transpose(model, 286 | [2, 1, 0]), 287 | 64, -1, -60, 60) 288 | 289 | np.testing.assert_almost_equal(model_t, model_t_t, t_precision) 290 | np.testing.assert_almost_equal(t, t_t, t_precision) 291 | np.testing.assert_almost_equal(c_t, c_t_t, t_precision) 292 | 293 | ''' 294 | Old tests based on mtrf_test_data.m, were used before for for testing of 295 | precision etc. Now we are using simulated data for the tests. See the matlab 296 | functions. There is some copying of files involved to get things running... 297 | def test_mtrf_train_model_contrast(): 298 | contr = loadmat('test_results/rdm_contrast_data') 299 | w = contr['w'] 300 | t = contr['t'] 301 | i = contr['i'] 302 | data = loadmat('../../contrast_data.mat') 303 | eeg = data['EEG'] 304 | contrast_level = data['contrastLevel'] 305 | Fs = data['Fs'] 306 | w_t, t_t, i_t = pymtrf.mtrf_train(contrast_level, eeg, Fs, 1, -150, 450, 1) 307 | 308 | np.testing.assert_almost_equal(w, w_t, decimal=10, err_msg="Failure in weights calculation") 309 | np.testing.assert_almost_equal(t, t_t, decimal=10, err_msg="Failure in lag calculation") 310 | np.testing.assert_almost_equal(i, i_t, decimal=10, err_msg="Failure in intercept calculation") 311 | 312 | def test_mtrf_train_model_motion(): 313 | motion = loadmat('test_results/rdm_motion_data') 314 | w = motion['w'] 315 | t = motion['t'] 316 | i = motion['i'] 317 | data = loadmat('../../coherentmotion_data.mat') 318 | eeg = data['EEG'] 319 | motion_level = data['coherentMotionLevel'] 320 | Fs = data['Fs'] 321 | w_t, t_t, i_t = pymtrf.mtrf_train(motion_level, eeg, Fs, 1, -150, 450, 1) 322 | 323 | np.testing.assert_almost_equal(w, w_t, decimal=10, err_msg="Failure in weights calculation") 324 | np.testing.assert_almost_equal(t, t_t, decimal=10, err_msg="Failure in lag calculation") 325 | np.testing.assert_almost_equal(i, i_t, decimal=10, err_msg="Failure in intercept calculation") 326 | 327 | 328 | def test_mtrf_train_model_speech_trf(): 329 | # TODO rename contrasts 330 | speech = loadmat('test_results/rdm_speech_data_trf') 331 | w = speech['w'] 332 | t = speech['t'] 333 | i = speech['i'] 334 | data = loadmat('../../speech_data.mat') 335 | eeg = data['EEG'] 336 | envelope = data['envelope'] 337 | Fs = data['Fs'] 338 | w_t, t_t, i_t = pymtrf.mtrf_train(envelope, eeg, Fs, 1, -150, 450, 0.1) 339 | 340 | np.testing.assert_almost_equal(w, w_t, decimal=10, err_msg="Failure in weights calculation") 341 | np.testing.assert_almost_equal(t, t_t, decimal=10, err_msg="Failure in lag calculation") 342 | np.testing.assert_almost_equal(i, i_t, decimal=10, err_msg="Failure in intercept calculation") 343 | 344 | 345 | def 
test_mtrf_train_model_speech_strf(): 346 | speech = loadmat('test_results/rdm_speech_data_strf') 347 | w = speech['w'] 348 | t = speech['t'] 349 | i = speech['i'] 350 | data = loadmat('../../speech_data.mat') 351 | eeg = data['EEG'] 352 | spectrogram = data['spectrogram'] 353 | Fs = data['Fs'] 354 | w_t, t_t, i_t = pymtrf.mtrf_train(spectrogram, eeg, Fs, 1, -150, 450, 100) 355 | 356 | np.testing.assert_almost_equal(w, w_t, decimal=10, err_msg="Failure in weights calculation") 357 | np.testing.assert_almost_equal(t, t_t, decimal=10, err_msg="Failure in lag calculation") 358 | np.testing.assert_almost_equal(i, i_t, decimal=10, err_msg="Failure in intercept calculation") 359 | 360 | 361 | def test_mtrf_train_model_speech_recon_train(): 362 | speech = loadmat('test_results/rdm_speech_data_recon') 363 | g = np.expand_dims(speech['g'], -1) 364 | t = speech['t'] 365 | con = np.expand_dims(speech['con'], 0) 366 | data = loadmat('../../speech_data.mat') 367 | eeg = data['EEG'] 368 | envelope = data['envelope'] 369 | Fs = data['Fs'].astype('int')[0][0] 370 | eeg_train = eeg[: Fs * 60, :] 371 | 372 | envelope_train = envelope[: Fs * 60, :] 373 | g_t, t_t, con_t = pymtrf.mtrf_train(envelope_train, eeg_train, Fs, -1, 0, 500, 1e5) 374 | np.testing.assert_almost_equal(g, g_t, decimal=10, err_msg="Failure in weights calculation") 375 | np.testing.assert_almost_equal(t[0], t_t, decimal=10, err_msg="Failure in lag calculation") 376 | np.testing.assert_almost_equal(con[0], con_t, decimal=10, err_msg="Failure in intercept calculation") 377 | 378 | 379 | def test_mtrf_train_model_speech_recon_predict(): 380 | speech = loadmat('test_results/rdm_speech_data_recon') 381 | g = np.expand_dims(speech['g'], -1) 382 | con = np.expand_dims(speech['con'], 0) 383 | recon = speech['recon'] 384 | r = speech['r'] 385 | p = speech['p'] 386 | mse = speech['MSE'] 387 | data = loadmat('../../speech_data.mat') 388 | eeg = data['EEG'] 389 | envelope = data['envelope'] 390 | Fs = data['Fs'].astype('int')[0][0] 391 | eeg_test = eeg[Fs * 60:, :] 392 | envelope_test = envelope[Fs * 60:, :] 393 | 394 | recon_t, r_t, p_t, mse_t = pymtrf.mtrf_predict(envelope_test, eeg_test, g, Fs, -1, 0, 500, con) 395 | 396 | np.testing.assert_almost_equal(recon, recon_t, decimal=10, err_msg="Failure in reconstruction") 397 | np.testing.assert_almost_equal(r, r_t, decimal=10, err_msg="Failure in correlation - r") 398 | np.testing.assert_almost_equal(p, p_t, decimal=10, err_msg="Failure in correlation - p") 399 | np.testing.assert_almost_equal(mse, mse_t, decimal=10, err_msg="Failure in error calculation") 400 | 401 | 402 | def test_mtrf_train_model_speech_cross_val_equal(): 403 | speech = loadmat('test_results/rdm_speech_data_cross_val_equal.mat') 404 | model = np.expand_dims(speech['model'], -1) 405 | r = np.expand_dims(speech['r'], -1) 406 | p = np.expand_dims(speech['p'], -1) 407 | mse = np.expand_dims(speech['mse'], -1) 408 | data = loadmat('../../speech_data.mat') 409 | eeg = data['EEG'] 410 | envelope = data['envelope'] 411 | 412 | Fs = data['Fs'].astype('int')[0][0] 413 | eeg_in = np.stack([eeg[: Fs * 30, :], eeg[Fs * 30: Fs * 60, :], eeg[Fs * 60: Fs * 90]]) 414 | envelope_in = np.stack([envelope[: Fs * 30, :], envelope[Fs * 30: Fs * 60, :], envelope[Fs * 60: Fs * 90]]) 415 | 416 | r_t, p_t, mse_t, _, model_t = pymtrf.mtrf_crossval(envelope_in, eeg_in, Fs, -1, -50, 150, [0.1, 1, 10]) 417 | np.testing.assert_almost_equal(r, r_t, decimal=9, err_msg="Failure in correlation") 418 | np.testing.assert_almost_equal(p, p_t, decimal=8, err_msg="Failure 
in p-values") 419 | np.testing.assert_almost_equal(mse, mse_t, decimal=8, err_msg="Failure in mse") 420 | np.testing.assert_almost_equal(model, model_t, decimal=8, err_msg="Failure in model") 421 | 422 | 423 | def test_mtrf_train_model_speech_cross_val_unequal(): 424 | speech = loadmat('test_results/rdm_speech_data_cross_val_unequal.mat') 425 | model = np.expand_dims(speech['model'], -1) 426 | r = np.expand_dims(speech['r'], -1) 427 | p = np.expand_dims(speech['p'], -1) 428 | mse = np.expand_dims(speech['mse'], -1) 429 | data = loadmat('../../speech_data.mat') 430 | eeg = data['EEG'] 431 | envelope = data['envelope'] 432 | 433 | Fs = data['Fs'].astype('int')[0][0] 434 | eeg_in = [eeg[: Fs * 30, :], eeg[Fs * 30: Fs * 60, :], eeg[Fs * 60: Fs * 100], eeg[Fs * 100:]] 435 | envelope_in = [envelope[: Fs * 30, :], envelope[Fs * 30: Fs * 60, :], envelope[Fs * 60: Fs * 100], envelope[Fs * 100:]] 436 | 437 | r_t, p_t, mse_t, _, model_t = pymtrf.mtrf_crossval(envelope_in, eeg_in, Fs, -1, -50, 150, [0.1, 1, 10]) 438 | np.testing.assert_almost_equal(r, r_t, decimal=9, err_msg="Failure in correlation") 439 | np.testing.assert_almost_equal(p, p_t, decimal=8, err_msg="Failure in p-values") 440 | np.testing.assert_almost_equal(mse, mse_t, decimal=8, err_msg="Failure in mse") 441 | np.testing.assert_almost_equal(model, model_t, decimal=8, err_msg="Failure in model") 442 | 443 | 444 | def test_mtrf_train_model_speech_multi_cross_val_equal(): 445 | speech = loadmat('test_results/rdm_speech_data_multi_cross_val_equal.mat') 446 | model = np.expand_dims(speech['model'], -1) 447 | r = np.expand_dims(speech['r'], -1) 448 | p = np.expand_dims(speech['p'], -1) 449 | mse = np.expand_dims(speech['mse'], -1) 450 | data = loadmat('../../speech_data.mat') 451 | eeg = data['EEG'] 452 | envelope = data['envelope'] 453 | 454 | Fs = data['Fs'].astype('int')[0][0] 455 | eeg_in = np.stack([eeg[: Fs * 30, :], eeg[Fs * 30: Fs * 60, :], eeg[Fs * 60: Fs * 90]]) 456 | envelope_in = np.stack([envelope[: Fs * 30, :], envelope[Fs * 30: Fs * 60, :], envelope[Fs * 60: Fs * 90]]) 457 | 458 | r_t, p_t, mse_t, _, model_t = pymtrf.mtrf_multicrossval(envelope_in, eeg_in, eeg_in, eeg_in, Fs, -1, -50, 150, 459 | [0.1, 1, 10], [0.1, 1, 10]) 460 | np.testing.assert_almost_equal(r, r_t, decimal=9, err_msg="Failure in correlation") 461 | np.testing.assert_almost_equal(p, p_t, decimal=8, err_msg="Failure in p-values") 462 | np.testing.assert_almost_equal(mse, mse_t, decimal=8, err_msg="Failure in mse") 463 | np.testing.assert_almost_equal(model, model_t, decimal=8, err_msg="Failure in model") 464 | 465 | 466 | def test_mtrf_train_model_speech_multi_cross_val_unequal(): 467 | speech = loadmat('test_results/rdm_speech_data_multi_cross_val_unequal.mat') 468 | model = np.expand_dims(speech['model'], -1) 469 | r = np.expand_dims(speech['r'], -1) 470 | p = np.expand_dims(speech['p'], -1) 471 | mse = np.expand_dims(speech['mse'], -1) 472 | data = loadmat('../../speech_data.mat') 473 | eeg = data['EEG'] 474 | envelope = data['envelope'] 475 | 476 | Fs = data['Fs'].astype('int')[0][0] 477 | eeg_in = [eeg[: Fs * 30, :], eeg[Fs * 30: Fs * 60, :], eeg[Fs * 60: Fs * 100], eeg[Fs * 100:]] 478 | envelope_in = [envelope[: Fs * 30, :], envelope[Fs * 30: Fs * 60, :], envelope[Fs * 60: Fs * 100], envelope[Fs * 100:]] 479 | 480 | r_t, p_t, mse_t, _, model_t = pymtrf.mtrf_multicrossval(envelope_in, eeg_in, eeg_in, eeg_in, Fs, -1, -50, 150, 481 | [0.1, 1, 10], [0.1, 1, 10]) 482 | np.testing.assert_almost_equal(r, r_t, decimal=9, err_msg="Failure in correlation") 483 | 
np.testing.assert_almost_equal(p, p_t, decimal=8, err_msg="Failure in p-values") 484 | np.testing.assert_almost_equal(mse, mse_t, decimal=8, err_msg="Failure in mse") 485 | np.testing.assert_almost_equal(model, model_t, decimal=8, err_msg="Failure in model") 486 | 487 | 488 | def test_mtrf_train_model_speech_cross_val_equal_fwd(): 489 | speech = loadmat('test_results/rdm_speech_data_cross_val_equal_fwd.mat') 490 | model = speech['model'] 491 | r = speech['r'] 492 | p =speech['p'] 493 | mse = speech['mse'] 494 | data = loadmat('../../speech_data.mat') 495 | eeg = data['EEG'] 496 | envelope = data['envelope'] 497 | 498 | Fs = data['Fs'].astype('int')[0][0] 499 | eeg_in = np.stack([eeg[: Fs * 30, :], eeg[Fs * 30: Fs * 60, :], eeg[Fs * 60: Fs * 90]]) 500 | envelope_in = np.stack([envelope[: Fs * 30, :], envelope[Fs * 30: Fs * 60, :], envelope[Fs * 60: Fs * 90]]) 501 | 502 | r_t, p_t, mse_t, _, model_t = pymtrf.mtrf_crossval(envelope_in, eeg_in, Fs, 1, -50, 150, [0.1, 1, 10]) 503 | np.testing.assert_almost_equal(r, r_t, decimal=9, err_msg="Failure in correlation") 504 | np.testing.assert_almost_equal(p, p_t, decimal=8, err_msg="Failure in p-values") 505 | np.testing.assert_almost_equal(mse, mse_t, decimal=8, err_msg="Failure in mse") 506 | np.testing.assert_almost_equal(model, model_t, decimal=8, err_msg="Failure in model") 507 | 508 | 509 | def test_mtrf_train_model_speech_cross_val_unequal_fwd(): 510 | speech = loadmat('test_results/rdm_speech_data_cross_val_unequal_fwd.mat') 511 | model = speech['model'] 512 | r = speech['r'] 513 | p =speech['p'] 514 | mse = speech['mse'] 515 | data = loadmat('../../speech_data.mat') 516 | eeg = data['EEG'] 517 | envelope = data['envelope'] 518 | 519 | Fs = data['Fs'].astype('int')[0][0] 520 | eeg_in = [eeg[: Fs * 30, :], eeg[Fs * 30: Fs * 60, :], eeg[Fs * 60: Fs * 100], eeg[Fs * 100:]] 521 | envelope_in = [envelope[: Fs * 30, :], envelope[Fs * 30: Fs * 60, :], envelope[Fs * 60: Fs * 100], envelope[Fs * 100:]] 522 | 523 | r_t, p_t, mse_t, _, model_t = pymtrf.mtrf_crossval(envelope_in, eeg_in, Fs, 1, -50, 150, [0.1, 1, 10]) 524 | np.testing.assert_almost_equal(r, r_t, decimal=9, err_msg="Failure in correlation") 525 | np.testing.assert_almost_equal(p, p_t, decimal=9, err_msg="Failure in p-values") 526 | np.testing.assert_almost_equal(mse, mse_t, decimal=9, err_msg="Failure in mse") 527 | np.testing.assert_almost_equal(model, model_t, decimal=9, err_msg="Failure in model") 528 | 529 | 530 | def test_mtrf_train_model_speech_multi_cross_val_equal_fwd(): 531 | speech = loadmat('test_results/rdm_speech_data_multi_cross_val_equal_fwd.mat') 532 | model = speech['model'] 533 | r = speech['r'] 534 | p =speech['p'] 535 | mse = speech['mse'] 536 | data = loadmat('../../speech_data.mat') 537 | eeg = data['EEG'] 538 | envelope = data['envelope'] 539 | 540 | Fs = data['Fs'].astype('int')[0][0] 541 | eeg_in = np.stack([eeg[: Fs * 30, :], eeg[Fs * 30: Fs * 60, :], eeg[Fs * 60: Fs * 90]]) 542 | envelope_in = np.stack([envelope[: Fs * 30, :], envelope[Fs * 30: Fs * 60, :], envelope[Fs * 60: Fs * 90]]) 543 | 544 | r_t, p_t, mse_t, _, model_t = pymtrf.mtrf_multicrossval(envelope_in, eeg_in, eeg_in, eeg_in, Fs, 1, -50, 150, 545 | [0.1, 1, 10], [0.1, 1, 10]) 546 | np.testing.assert_almost_equal(r, r_t, decimal=9, err_msg="Failure in correlation") 547 | np.testing.assert_almost_equal(p, p_t, decimal=9, err_msg="Failure in p-values") 548 | np.testing.assert_almost_equal(mse, mse_t, decimal=9, err_msg="Failure in mse") 549 | np.testing.assert_almost_equal(model, model_t, decimal=9, 
err_msg="Failure in model") 550 | 551 | 552 | def test_mtrf_train_model_speech_multi_cross_val_unequal_fwd(): 553 | speech = loadmat('test_results/rdm_speech_data_multi_cross_val_unequal_fwd.mat') 554 | model = speech['model'] 555 | r = speech['r'] 556 | p =speech['p'] 557 | mse = speech['mse'] 558 | data = loadmat('../../speech_data.mat') 559 | eeg = data['EEG'] 560 | envelope = data['envelope'] 561 | 562 | Fs = data['Fs'].astype('int')[0][0] 563 | eeg_in = [eeg[: Fs * 30, :], eeg[Fs * 30: Fs * 60, :], eeg[Fs * 60: Fs * 100], eeg[Fs * 100:]] 564 | envelope_in = [envelope[: Fs * 30, :], envelope[Fs * 30: Fs * 60, :], envelope[Fs * 60: Fs * 100], envelope[Fs * 100:]] 565 | 566 | r_t, p_t, mse_t, _, model_t = pymtrf.mtrf_multicrossval(envelope_in, eeg_in, eeg_in, eeg_in, Fs, 1, -50, 150, 567 | [0.1, 1, 10], [0.1, 1, 10]) 568 | np.testing.assert_almost_equal(r, r_t, decimal=9, err_msg="Failure in correlation") 569 | np.testing.assert_almost_equal(p, p_t, decimal=9, err_msg="Failure in p-values") 570 | np.testing.assert_almost_equal(mse, mse_t, decimal=9, err_msg="Failure in mse") 571 | np.testing.assert_almost_equal(model, model_t, decimal=9, err_msg="Failure in model") 572 | ''' 573 | -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | from setuptools import setup 2 | 3 | setup(name='pymtrf', 4 | version='0.0.1a0', 5 | description='Translation of the matlab mtrf toolbox, see http://www.mee.tcd.ie/lalorlab/resources.html', 6 | url='http://github.com/storborg/funniest', 7 | author='Simon R. Steinkamp', 8 | author_email='simon.steinkamp@googlemail.com', 9 | license='tbd', 10 | packages=['pymtrf'], 11 | install_requires=['scipy>=0.11.0', 'numpy>=1.14.0', 'pytest==5.0.1'], 12 | python_requires='>3.6.0', 13 | setup_requires=['pytest-runner'], 14 | tests_require=['pytest'], 15 | zip_safe=False) --------------------------------------------------------------------------------