├── LICENSE
├── README.md
├── Spike_sorting.ipynb
└── images
    ├── Spike_sorting_11_0.png
    ├── Spike_sorting_13_0.png
    ├── Spike_sorting_17_0.png
    ├── Spike_sorting_19_0.png
    ├── Spike_sorting_21_0.png
    ├── Spike_sorting_3_0.png
    ├── Spike_sorting_7_0.png
    ├── output_12_0.png
    ├── output_16_0.png
    ├── output_18_0.png
    ├── output_20_0.png
    ├── output_25_0.png
    ├── output_27_0.png
    ├── output_31_0.png
    ├── output_33_0.png
    ├── output_35_0.png
    ├── output_3_0.png
    └── output_7_0.png

/LICENSE:
--------------------------------------------------------------------------------
MIT License

Copyright (c) 2018 Carsten Klein

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

# Using signal processing and K-means clustering to extract and sort neural events in Python

This is the Python Jupyter Notebook for the Medium articles ([Part I](https://towardsdatascience.com/using-signal-processing-to-extract-neural-events-in-python-964437dc7c0) and [Part II](https://towardsdatascience.com/whos-talking-using-k-means-clustering-to-sort-neural-events-in-python-e7a8a76f316)) on how to use signal processing techniques and K-means clustering to sort spikes.

### Part I
Code to read data from a .ncs file and extract the spike channel from the raw broadband signal through bandpass filtering. Also includes a function to extract and align spikes from the signal.

### Part II
Code to perform PCA on the extracted spike waveforms, followed by a K-means clustering routine that determines the number of clusters in the data and averages the waveforms according to their cluster assignment.

First we import the libraries for reading and processing the data.


```python
from scipy.signal import butter, lfilter
import numpy as np
import matplotlib.pyplot as plt

# Enable plots inside the Jupyter Notebook
%matplotlib inline
```

Before we can start looking at the data (https://www2.le.ac.uk/centres/csn/software) we need a routine to import the .ncs data format. The organization of the format is documented on the company's web page (https://neuralynx.com/software/NeuralynxDataFileFormats.pdf); the information provided there is the basis for the import routine below.
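The first 16 kilobytes of an .ncs file hold a plain-text header with recording metadata. As a quick sanity check we can print that header before parsing the binary records (a minimal sketch; the exact header entries depend on the acquisition software version that wrote the file):


```python
# Peek at the 16 KB ASCII header of the .ncs file
with open('./UCLA_data/CSC4.Ncs', 'rb') as fid:
    raw_header = fid.read(16 * 1024)

# Decode, strip the null-byte padding and print the metadata lines
print(raw_header.decode('latin-1').strip('\x00'))
```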

```python
# Define data path
data_file = './UCLA_data/CSC4.Ncs'

# Header is 16 kilobytes long
header_size = 16 * 1024

# Open file
fid = open(data_file, 'rb')

# Skip header by shifting position by header size
fid.seek(header_size)

# Read data according to Neuralynx information
data_format = np.dtype([('TimeStamp', np.uint64),
                        ('ChannelNumber', np.uint32),
                        ('SampleFreq', np.uint32),
                        ('NumValidSamples', np.uint32),
                        ('Samples', np.int16, 512)])

raw = np.fromfile(fid, dtype=data_format)

# Close file
fid.close()

# Get sampling frequency
sf = raw['SampleFreq'][0]

# Create data vector
data = raw['Samples'].ravel()

# Determine duration of recording in seconds
dur_sec = data.shape[0]/sf

# Create time vector
time = np.linspace(0, dur_sec, data.shape[0])

# Plot first second of data
fig, ax = plt.subplots(figsize=(15, 5))
ax.plot(time[0:sf], data[0:sf])
ax.set_title('Broadband; Sampling Frequency: {}Hz'.format(sf), fontsize=23)
ax.set_xlim(0, time[sf])
ax.set_xlabel('time [s]', fontsize=20)
ax.set_ylabel('amplitude [uV]', fontsize=20)
plt.show()
```


![png](images/output_3_0.png)


# Part I

## Bandpass filter the data
As we can see, the signal contains strong 60Hz noise. The function below will bandpass filter the signal to exclude the 60Hz band.


```python
def filter_data(data, low, high, sf, order=2):
    # Determine Nyquist frequency
    nyq = sf/2

    # Set bands
    low = low/nyq
    high = high/nyq

    # Calculate coefficients
    b, a = butter(order, [low, high], btype='band')

    # Filter signal
    filtered_data = lfilter(b, a, data)

    return filtered_data
```

Using the above function, let's compare the raw data with the filtered signal.


```python
spike_data = filter_data(data, low=500, high=9000, sf=sf)

# Plot signals
fig, ax = plt.subplots(2, 1, figsize=(15, 5))
ax[0].plot(time[0:sf], data[0:sf])
ax[0].set_xticks([])
ax[0].set_title('Broadband', fontsize=23)
ax[0].set_xlim(0, time[sf])
ax[0].set_ylabel('amplitude [uV]', fontsize=16)
ax[0].tick_params(labelsize=12)

ax[1].plot(time[0:sf], spike_data[0:sf])
ax[1].set_title('Spike channel [0.5 to 9kHz]', fontsize=23)
ax[1].set_xlim(0, time[sf])
ax[1].set_xlabel('time [s]', fontsize=20)
ax[1].set_ylabel('amplitude [uV]', fontsize=16)
ax[1].tick_params(labelsize=12)
plt.show()
```


![png](images/output_7_0.png)
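One thing to keep in mind: `lfilter` applies the filter causally, which delays the filtered signal slightly in time. For threshold-based spike extraction this is usually harmless, but if phase matters for your analysis you can filter forward and backward with `filtfilt` instead. A minimal zero-phase variant of the function above might look like this:


```python
from scipy.signal import filtfilt

def filter_data_zero_phase(data, low, high, sf, order=2):
    # Same Butterworth band-pass design as in filter_data, but applied
    # forward and backward with filtfilt, so the output has no phase shift
    nyq = sf/2
    b, a = butter(order, [low/nyq, high/nyq], btype='band')
    return filtfilt(b, a, data)
```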

## Interlude: LFP data processing

Here we will have a brief look at the LFP signal, the low-frequency part of the recording. It is not relevant for the spike extraction or sorting, so you may skip this section. However, it will give you a better understanding of the nature of the recorded data.


```python
from scipy import signal

# First lowpass filter the data to get the LFP signal
lfp_data = filter_data(data, low=1, high=300, sf=sf)
```

Because the LFP signal no longer contains high-frequency components, the original sampling rate can be reduced. This lowers the size of the data and speeds up calculations. Therefore we define a short down-sampling function.


```python
def downsample_data(data, sf, target_sf):
    # decimate() expects an integer factor and is most stable for
    # factors of 10 or less, so larger factors are applied in steps
    factor = int(sf/target_sf)
    if factor <= 10:
        data_down = signal.decimate(data, factor)
        sf = sf/factor
    else:
        factor = 10
        data_down = data
        while factor > 1:
            data_down = signal.decimate(data_down, factor)
            sf = sf/factor
            factor = int(min([10, sf/target_sf]))

    return data_down, sf
```


```python
# Now we use the above function to downsample the signal.
lfp_data, sf_lfp = downsample_data(lfp_data, sf=sf, target_sf=600)

# Let's have a look at the downsampled LFP signal
fig, ax = plt.subplots(figsize=(15, 5))

ax.plot(lfp_data[0:int(sf_lfp)])
ax.set_title('LFP signal', fontsize=23)
ax.set_xlim([0, sf_lfp])
ax.set_xlabel('sample #', fontsize=20)
ax.set_ylabel('amplitude [uV]', fontsize=20)
plt.show()
```


![png](images/output_12_0.png)


As we saw previously, the signal is dominated by strong 60Hz noise. If we wanted to continue working with the LFP signal, this would be a problem. One way of dealing with the issue is to apply a notch filter that removes the 60Hz noise. The code below sets up such a filter.


```python
f0 = 60.0           # Frequency to be removed from signal (Hz)
w0 = f0/(sf_lfp/2)  # Normalized frequency
Q = 30              # Quality factor

# Design notch filter
b, a = signal.iirnotch(w0, Q)

# Filter signal
clean_data = signal.lfilter(b, a, lfp_data)
```
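Line noise often comes with harmonics at 120Hz, 180Hz and so on. Whether they show up depends on the recording setup, but if they do, the same notch design can simply be applied once per harmonic. A small sketch (the `notch_harmonics` helper is our own, not part of SciPy):


```python
def notch_harmonics(data, f0, sf, Q=30):
    # Apply an IIR notch at the line frequency and at each of its
    # harmonics below the Nyquist frequency of the signal
    cleaned = data
    freq = f0
    while freq < sf/2:
        b, a = signal.iirnotch(freq/(sf/2), Q)
        cleaned = signal.lfilter(b, a, cleaned)
        freq += f0
    return cleaned
```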

Before we apply the filter, let's briefly check the filter properties. Mainly we are interested in how *sharply* the filter cuts the 60Hz and how it influences the phase of the signal, because in the end we only want to remove the noise, not alter the signal in other frequency domains.


```python
# Frequency response
w, h = signal.freqz(b, a)

# Generate frequency axis
freq = w*sf_lfp/(2*np.pi)

# Plot
fig, ax = plt.subplots(2, 1, figsize=(8, 6))
ax[0].plot(freq, 20*np.log10(abs(h)), color='blue')
ax[0].set_title("Frequency Response")
ax[0].set_ylabel("Amplitude (dB)", color='blue')
ax[0].set_xlim([0, 100])
ax[0].grid()

ax[1].plot(freq, np.unwrap(np.angle(h))*180/np.pi, color='green')
ax[1].set_ylabel("Angle (degrees)", color='green')
ax[1].set_xlabel("Frequency (Hz)")
ax[1].set_xlim([0, 100])
ax[1].set_yticks([-90, -60, -30, 0, 30, 60, 90])
ax[1].grid()
plt.show()
```


![png](images/output_16_0.png)


OK, it seems like the filter should work. You can play around with the settings to see how the filter behaviour changes. But now let's see how the filtered signal compares to the original LFP signal.


```python
fig, ax = plt.subplots(1, 1, figsize=(15, 5))
ax.plot(lfp_data[0:int(sf_lfp)], label='original data')
ax.plot(clean_data[0:int(sf_lfp)], label='filtered data')
ax.set_title('LFP signal', fontsize=23)
ax.set_xlim([0, sf_lfp])
ax.set_xlabel('sample #', fontsize=20)
ax.set_ylabel('amplitude [uV]', fontsize=20)

plt.legend()
plt.show()
```


![png](images/output_18_0.png)


This looks much better! The 60Hz is removed, and we can now see other frequency components riding on the very low-frequency components.
Finally, to check the effect of our notch filtering, let's calculate the power spectra of the filtered and un-filtered signals.


```python
# Note: newer SciPy versions call the Hann window 'hann' ('hanning' was removed)
f, clean_data_power = signal.welch(clean_data, fs=sf_lfp, window='hann', nperseg=2048, scaling='spectrum')
f, lfp_power = signal.welch(lfp_data, fs=sf_lfp, window='hann', nperseg=2048, scaling='spectrum')

fig, ax = plt.subplots(1, 1, figsize=(15, 5))
ax.semilogy(f, clean_data_power, label='60Hz cleaned')
ax.semilogy(f, lfp_power, label='original LFP signal')
ax.set_title('un-filtered and filtered power spectra', fontsize=23)
ax.set_xlim([0, 100])
ax.set_ylim([.3, 6500])
ax.set_xlabel('frequency [Hz]', fontsize=20)
ax.set_ylabel('PSD [V**2/Hz]', fontsize=20)
ax.legend()

plt.show()
```


![png](images/output_20_0.png)


Clearly the filter removed the 60Hz band in the LFP and left the other frequencies intact. There are many things we could now do with the LFP signal, but that is a different topic. So let's get back to the spike channel and see how we can extract spikes from there.

## Extract spikes from the filtered signal
Now that we have a clean spike channel we can identify and extract spikes. The following function does that for us. It takes five input arguments:

1. the filtered data
2. the number of samples or window which should be extracted from the signal
3. the threshold factor (threshold = mean(|signal|) * tf)
4. an offset, expressed in number of samples, which shifts the maximum peak away from the center
5. the upper threshold which excludes data points above this limit to avoid extracting artifacts


```python
def get_spikes(data, spike_window=80, tf=5, offset=10, max_thresh=350):

    # Calculate threshold based on the mean of the rectified data
    thresh = np.mean(np.abs(data)) * tf

    # Find positions where the threshold is crossed
    pos = np.where(data > thresh)[0]
    pos = pos[pos > spike_window]

    # Extract potential spikes and align them to the maximum
    spike_samp = []
    wave_form = np.empty([0, spike_window*2])
    for i in pos:
        if i < data.shape[0] - (spike_window+1):
            # Data from position where threshold is crossed to end of window
            tmp_waveform = data[i:i+spike_window*2]

            # Check if data in window is below upper threshold (artifact rejection)
            if np.max(tmp_waveform) < max_thresh:
                # Find sample with maximum data point in window
                tmp_samp = np.argmax(tmp_waveform) + i

                # Re-center window on maximum sample and shift it by offset
                tmp_waveform = data[tmp_samp-(spike_window-offset):tmp_samp+(spike_window+offset)]

                # Append data, skipping windows truncated at the edge of the recording
                if tmp_waveform.shape[0] == spike_window*2:
                    spike_samp = np.append(spike_samp, tmp_samp)
                    wave_form = np.append(wave_form, tmp_waveform.reshape(1, spike_window*2), axis=0)

    # Remove duplicates caused by consecutive above-threshold samples
    ind = np.where(np.diff(spike_samp) > 1)[0]
    spike_samp = spike_samp[ind]
    wave_form = wave_form[ind]

    return spike_samp, wave_form
```
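A quick note on the threshold: scaling the mean of the rectified signal works here, but the mean itself grows with the number of spikes on the channel. A common, more robust alternative (e.g. Quian Quiroga et al., 2004) estimates the noise standard deviation from the median instead. A minimal sketch of that estimator, which could be dropped into `get_spikes` in place of the mean-based threshold:


```python
def mad_threshold(data, tf=5):
    # Estimate the noise standard deviation from the median of the
    # rectified signal; 0.6745 is the 75th percentile of the standard
    # normal distribution, which makes the estimate robust to spikes
    sigma_n = np.median(np.abs(data)) / 0.6745
    return tf * sigma_n
```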
Now let's use the function on our filtered spike channel and plot 100 randomly selected waveforms that were extracted.


```python
spike_samp, wave_form = get_spikes(spike_data, spike_window=50, tf=8, offset=20)

np.random.seed(10)
fig, ax = plt.subplots(figsize=(15, 5))

for i in range(100):
    spike = np.random.randint(0, wave_form.shape[0])
    ax.plot(wave_form[spike, :])

ax.set_xlim([0, 90])
ax.set_xlabel('# sample', fontsize=20)
ax.set_ylabel('amplitude [uV]', fontsize=20)
ax.set_title('spike waveforms', fontsize=23)
plt.show()
```


![png](images/output_25_0.png)


# Part II

## Reducing the number of dimensions with PCA
To cluster the waveforms we need some features to work with. A feature could be, for example, the peak amplitude of the spike or the width of the waveform. Another option is to use the principal components of the waveforms. Principal component analysis (PCA) is a dimensionality-reduction method which requires normalized data. Here we will use scikit-learn for both the normalization and the PCA. We will not go into the details of PCA since the focus here is the clustering.


```python
from sklearn.preprocessing import MinMaxScaler
from sklearn.decomposition import PCA

# Apply min-max scaling
scaler = MinMaxScaler()
dataset_scaled = scaler.fit_transform(wave_form)

# Do PCA
pca = PCA(n_components=12)
pca_result = pca.fit_transform(dataset_scaled)

# Plot the 1st principal component against the 2nd and use the 3rd for color
fig, ax = plt.subplots(figsize=(8, 8))
ax.scatter(pca_result[:, 0], pca_result[:, 1], c=pca_result[:, 2])
ax.set_xlabel('1st principal component', fontsize=20)
ax.set_ylabel('2nd principal component', fontsize=20)
ax.set_title('first 3 principal components', fontsize=23)

fig.subplots_adjust(wspace=0.1, hspace=0.1)
plt.show()
```


![png](images/output_27_0.png)
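How many components are worth keeping? scikit-learn stores the fraction of the total variance each component captures, which gives a quick answer. A small check to run after fitting the PCA above:


```python
# Fraction of the total variance captured by each principal component
print(pca.explained_variance_ratio_)

# Cumulative variance explained by the first k components together
print(np.cumsum(pca.explained_variance_ratio_))
```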
The way we will implement K-means is quite straightforward. First, we choose K random datapoints from our sample. These datapoints represent the cluster centers, and their number equals the number of clusters. Next, we calculate the Euclidean distance between each of the random cluster centers and every other datapoint, and assign each datapoint to the cluster center closest to it. Obviously, doing all of this with random datapoints will not give us a good clustering result. So we start over again, but this time we don't use random datapoints as cluster centers. Instead, we calculate the actual cluster centers based on the previous assignments and restart the process… again and again. With every iteration, fewer datapoints switch clusters, and we arrive at a (hopefully) global optimum.


```python
def k_means(data, num_clus=3, steps=200):

    # Convert data to Numpy array
    cluster_data = np.array(data)

    # Initialize by randomly selecting points in the data
    center_init = np.random.randint(0, cluster_data.shape[0], num_clus)

    # Create a list with center coordinates
    center_init = cluster_data[center_init, :]

    # Repeat clustering x times
    for _ in range(steps):

        # Calculate distance of each data point to each cluster center
        distance = []
        for center in center_init:
            tmp_distance = np.sqrt(np.sum((cluster_data - center)**2, axis=1))

            # Add small random noise to the distances to avoid ties
            tmp_distance = tmp_distance + np.abs(np.random.randn(len(tmp_distance))*0.0001)
            distance.append(tmp_distance)

        # Assign each point to the cluster with the minimum distance
        _, cluster = np.where(np.transpose(distance == np.min(distance, axis=0)))

        # Find center of mass for each cluster
        center_init = []
        for i in range(num_clus):
            center_init.append(cluster_data[cluster == i, :].mean(axis=0).tolist())

    return cluster, center_init, distance
```
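Writing K-means ourselves is instructive, but in practice you would probably reach for scikit-learn's implementation, which adds smarter initialization (k-means++) and multiple restarts. A quick cross-check against our own function might look like this (note that cluster labels are arbitrary, so only the grouping should match):


```python
from sklearn.cluster import KMeans

# k-means++ initialization, 10 random restarts, keep the best run
kmeans = KMeans(n_clusters=6, n_init=10, random_state=0)
sk_labels = kmeans.fit_predict(pca_result)
```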
So how should we choose the number of clusters? One way is to run our K-means function many times with different cluster numbers. The resulting plot shows the average distance between the datapoints of a cluster and their cluster center (the within-cluster distance). From the plot we can see that beyond 4 to 6 clusters there is no longer a strong decrease in this distance.


```python
max_num_clusters = 15

average_distance = []
for run in range(20):
    tmp_average_distance = []
    for num_clus in range(1, max_num_clusters + 1):
        cluster, centers, distance = k_means(pca_result, num_clus)
        tmp_average_distance.append(np.mean([np.mean(distance[x][cluster == x]) for x in range(num_clus)], axis=0))
    average_distance.append(tmp_average_distance)

fig, ax = plt.subplots(1, 1, figsize=(15, 5))
ax.plot(range(1, max_num_clusters + 1), np.mean(average_distance, axis=0))
ax.set_xlim([1, max_num_clusters])
ax.set_xlabel('number of clusters', fontsize=20)
ax.set_ylabel('average intra cluster distance', fontsize=20)
ax.set_title('Elbow point', fontsize=23)
plt.show()
```


![png](images/output_31_0.png)


So let's see what we get if we run the K-means algorithm with 6 clusters. We could probably also go with 4, but let's check the results with 6 first.


```python
num_clus = 6
cluster, centers, distance = k_means(pca_result, num_clus)

# Plot the result
fig, ax = plt.subplots(1, 2, figsize=(15, 5))
ax[0].scatter(pca_result[:, 0], pca_result[:, 1], c=cluster)
ax[0].set_xlabel('1st principal component', fontsize=20)
ax[0].set_ylabel('2nd principal component', fontsize=20)
ax[0].set_title('clustered data', fontsize=23)

time = np.linspace(0, wave_form.shape[1]/sf, wave_form.shape[1])*1000
for i in range(num_clus):
    cluster_mean = wave_form[cluster == i, :].mean(axis=0)
    cluster_std = wave_form[cluster == i, :].std(axis=0)

    ax[1].plot(time, cluster_mean, label='Cluster {}'.format(i))
    ax[1].fill_between(time, cluster_mean-cluster_std, cluster_mean+cluster_std, alpha=0.15)

ax[1].set_title('average waveforms', fontsize=23)
ax[1].set_xlim([0, time[-1]])
ax[1].set_xlabel('time [ms]', fontsize=20)
ax[1].set_ylabel('amplitude [uV]', fontsize=20)

plt.legend()
plt.show()
```


![png](images/output_33_0.png)


Looking at the above results, it seems we chose too many clusters. The waveform plot indicates that we may have extracted spikes from three different sources. Clusters 0, 1, 3 and 4 look like they have the same origin, while clusters 2 and 5 seem to be separate neurons. So let's combine clusters 0, 1, 3 and 4.


```python
combine_clusters = [0, 1, 3, 4]
combined_waveforms_mean = wave_form[[x in combine_clusters for x in cluster], :].mean(axis=0)
combined_waveforms_std = wave_form[[x in combine_clusters for x in cluster], :].std(axis=0)

cluster_0_waveform_mean = wave_form[cluster == 2, :].mean(axis=0)
cluster_0_waveform_std = wave_form[cluster == 2, :].std(axis=0)

cluster_1_waveform_mean = wave_form[cluster == 5, :].mean(axis=0)
cluster_1_waveform_std = wave_form[cluster == 5, :].std(axis=0)

fig, ax = plt.subplots(1, 1, figsize=(8, 5))
ax.plot(time, combined_waveforms_mean, label='Cluster 1')
ax.fill_between(time, combined_waveforms_mean-combined_waveforms_std, combined_waveforms_mean+combined_waveforms_std,
                alpha=0.15)

ax.plot(time, cluster_0_waveform_mean, label='Cluster 2')
ax.fill_between(time, cluster_0_waveform_mean-cluster_0_waveform_std, cluster_0_waveform_mean+cluster_0_waveform_std,
                alpha=0.15)

ax.plot(time, cluster_1_waveform_mean, label='Cluster 3')
ax.fill_between(time, cluster_1_waveform_mean-cluster_1_waveform_std, cluster_1_waveform_mean+cluster_1_waveform_std,
                alpha=0.15)

ax.set_title('average waveforms', fontsize=23)
ax.set_xlim([0, time[-1]])
ax.set_xlabel('time [ms]', fontsize=20)
ax.set_ylabel('amplitude [uV]', fontsize=20)

plt.legend()
plt.show()
```


![png](images/output_35_0.png)
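As a closing step it can be useful to map the merged cluster labels back onto the spike times returned by `get_spikes`, for example to report per-unit spike counts and mean firing rates. A small sketch, assuming the three putative units from above:


```python
# Label each spike with its merged unit:
# 0 = combined clusters (0, 1, 3, 4), 1 = old cluster 2, 2 = old cluster 5
unit_labels = np.zeros(len(cluster), dtype=int)
unit_labels[cluster == 2] = 1
unit_labels[cluster == 5] = 2

# Convert spike positions from sample indices to seconds
spike_times = spike_samp / sf

for unit in range(3):
    n_spikes = np.sum(unit_labels == unit)
    rate = n_spikes / dur_sec  # mean firing rate over the whole recording
    print('Unit {}: {} spikes, {:.2f} Hz'.format(unit, n_spikes, rate))
```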
--------------------------------------------------------------------------------
/images/Spike_sorting_11_0.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/akcarsten/spike_sorting/6e72902357510c5cfcd5d907a4bcabc7755405ec/images/Spike_sorting_11_0.png
--------------------------------------------------------------------------------
/images/Spike_sorting_13_0.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/akcarsten/spike_sorting/6e72902357510c5cfcd5d907a4bcabc7755405ec/images/Spike_sorting_13_0.png
--------------------------------------------------------------------------------
/images/Spike_sorting_17_0.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/akcarsten/spike_sorting/6e72902357510c5cfcd5d907a4bcabc7755405ec/images/Spike_sorting_17_0.png
--------------------------------------------------------------------------------
/images/Spike_sorting_19_0.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/akcarsten/spike_sorting/6e72902357510c5cfcd5d907a4bcabc7755405ec/images/Spike_sorting_19_0.png
--------------------------------------------------------------------------------
/images/Spike_sorting_21_0.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/akcarsten/spike_sorting/6e72902357510c5cfcd5d907a4bcabc7755405ec/images/Spike_sorting_21_0.png
--------------------------------------------------------------------------------
/images/Spike_sorting_3_0.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/akcarsten/spike_sorting/6e72902357510c5cfcd5d907a4bcabc7755405ec/images/Spike_sorting_3_0.png
--------------------------------------------------------------------------------
/images/Spike_sorting_7_0.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/akcarsten/spike_sorting/6e72902357510c5cfcd5d907a4bcabc7755405ec/images/Spike_sorting_7_0.png
--------------------------------------------------------------------------------
/images/output_12_0.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/akcarsten/spike_sorting/6e72902357510c5cfcd5d907a4bcabc7755405ec/images/output_12_0.png
--------------------------------------------------------------------------------
/images/output_16_0.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/akcarsten/spike_sorting/6e72902357510c5cfcd5d907a4bcabc7755405ec/images/output_16_0.png
--------------------------------------------------------------------------------
/images/output_18_0.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/akcarsten/spike_sorting/6e72902357510c5cfcd5d907a4bcabc7755405ec/images/output_18_0.png
--------------------------------------------------------------------------------
/images/output_20_0.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/akcarsten/spike_sorting/6e72902357510c5cfcd5d907a4bcabc7755405ec/images/output_20_0.png
--------------------------------------------------------------------------------
/images/output_25_0.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/akcarsten/spike_sorting/6e72902357510c5cfcd5d907a4bcabc7755405ec/images/output_25_0.png
--------------------------------------------------------------------------------
/images/output_27_0.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/akcarsten/spike_sorting/6e72902357510c5cfcd5d907a4bcabc7755405ec/images/output_27_0.png
--------------------------------------------------------------------------------
/images/output_31_0.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/akcarsten/spike_sorting/6e72902357510c5cfcd5d907a4bcabc7755405ec/images/output_31_0.png
--------------------------------------------------------------------------------
/images/output_33_0.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/akcarsten/spike_sorting/6e72902357510c5cfcd5d907a4bcabc7755405ec/images/output_33_0.png
--------------------------------------------------------------------------------
/images/output_35_0.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/akcarsten/spike_sorting/6e72902357510c5cfcd5d907a4bcabc7755405ec/images/output_35_0.png
--------------------------------------------------------------------------------
/images/output_3_0.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/akcarsten/spike_sorting/6e72902357510c5cfcd5d907a4bcabc7755405ec/images/output_3_0.png
--------------------------------------------------------------------------------
/images/output_7_0.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/akcarsten/spike_sorting/6e72902357510c5cfcd5d907a4bcabc7755405ec/images/output_7_0.png
--------------------------------------------------------------------------------