References cited in the notes below:

[HH1952] A. L. Hodgkin and A. F. Huxley. A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol. (Lond.), 117(4):500-544, Aug 1952.

[HH1990] A. L. Hodgkin and A. F. Huxley. A quantitative description of membrane current and its application to conduction and excitation in nerve. 1952. Bull. Math. Biol., 52(1-2):25-71, 1990.
--------------------------------------------------------------------------------
/org/0_Setup_Your_Computer.org:
--------------------------------------------------------------------------------
1 | #+STARTUP: showall
2 |
3 | #+TITLE: 0. Setup Your Computer
4 | #+AUTHOR: Paul Gribble
5 | #+EMAIL: paul@gribblelab.org
6 | #+DATE: fall 2012
7 | #+HTML_LINK_UP: http://www.gribblelab.org/compneuro/index.html
8 | #+HTML_LINK_HOME: http://www.gribblelab.org/compneuro/index.html
9 |
10 |
11 | * Install Options
12 |
13 | ** Option 1: Download and build source code from websites above
14 | - good luck with that, there are many dependencies, it’s a bit of a mess
15 |
16 | ** Option 2: Install the Enthought Python Distribution (all platforms)
17 | - your best bet may be the [[http://www.enthought.com/products/epd.php][Enthought Python Distribution]]
18 | - they have an [[http://www.enthought.com/products/edudownload.php][Academic Version]] which is free
19 | - also a totally free version here: [[http://www.enthought.com/products/epd_free.php][EPD Free]]
20 | - doesn't necessarily include latest versions of packages (e.g. iPython)
21 |
22 | ** Option 3: Install a software virtual machine running Ubuntu GNU/Linux (all platforms)
23 | - perhaps the easiest and most "self-contained" option - everything
24 | is installed inside a virtual machine, leaving your own machine
25 | untouched
26 | - download and install [[https://www.virtualbox.org/][VirtualBox]], it’s free and runs on Windows & Mac
27 | (and GNU/Linux)
28 | - download the pre-configured [[http://www.gribblelab.org/compneuro/installers/UbuntuVM.ova][UbuntuVM.ova]] provided by me (beware,
29 | it’s a 3.8 GB file)
30 | - in VirtualBox, "Import Appliance..." and point to UbuntuVM.ova
31 | - Then start the virtual machine (username is compneuro and password is
32 | compneuro)
33 | - you’re ready to rumble, I have installed all the software already
34 |
35 | ** Option 4 (Mac) : install Python + scientific libraries on your machine
36 | - install [[http://itunes.apple.com/ca/app/xcode/id497799835?mt=12][Xcode]] from the mac app store (the download is LARGE, several
37 | GB)
38 | - in Xcode: Preferences/Downloads/Components and Install the "Command
39 | Line Tools"
40 | - download and run the [[http://fonnesbeck.github.com/ScipySuperpack/][SciPy Superpack install script]]
41 | - note: you may have to download and install [[http://pypi.python.org/pypi/setuptools][python-setuptools]] first... if
42 | the superpack install script doesn’t work, try this
43 | - you’re ready to rumble
44 |
45 | ** Option 5 (windows)
47 | - seriously though I have little to no idea about the windows universe
48 | - your best bet may be the [[http://www.enthought.com/products/epd.php][Enthought Python Distribution]]
49 | - they have an [[http://www.enthought.com/products/edudownload.php][Academic Version]] which is free, you just have to fill
50 | out a form and they send you an email with a download link
51 | - here is a [[http://goo.gl/HSVPp][blog post]] detailing how to get the ipython notebook
52 | running on Windows 7
53 |
54 | * Testing your installation
55 |
56 | ** Launching iPython
57 |
58 | To launch iPython, open up a Terminal and type one of the following commands:
59 |
60 | To make it so Figures appear in their own window on your desktop (like MATLAB):
61 | #+BEGIN_SRC sh
62 | ipython --pylab
63 | #+END_SRC
64 |
65 | To make it so Figures appear in the console itself, right after the
66 | command(s) that produced them:
67 | #+BEGIN_SRC sh
68 | ipython qtconsole --pylab inline
69 | #+END_SRC
70 |
71 | To launch a browser-based "notebook" (this is really neat):
72 | #+BEGIN_SRC sh
73 | ipython notebook --pylab inline
74 | #+END_SRC
75 |
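As a quick sanity check that the scientific stack is importable at all,
you can run the following (a minimal sketch; it assumes the standard
numpy/scipy/matplotlib package names and just prints their versions):

#+BEGIN_SRC python
# verify that the core scientific libraries can be imported
import numpy, scipy, matplotlib
print numpy.__version__, scipy.__version__, matplotlib.__version__
#+END_SRC
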
76 | ** Making a plot
77 |
78 | Type the following (the =arange=, =sin=, =pi= and =plot= names are all available because we launched iPython with =--pylab=):
79 |
80 | #+BEGIN_SRC python
81 | t = arange(0, 1, 0.01)  # time samples: 0 to 1 sec in 0.01 sec steps
82 | y = sin(2*pi*t*3)       # a 3 Hz sine wave
83 | plot(t,y)               # plot y as a function of t
84 | #+END_SRC
85 |
86 | and you should see this plot:
87 |
88 | #+ATTR_HTML: height="200px"
89 | [[file:figs/sin.png]]
90 |
91 |
92 | * Next steps
93 |
94 | In the next topic we will talk about dynamical systems --- what they
95 | are, and how they can be used to address scientific questions through
96 | computer simulation.
97 |
98 | [ [[file:1_Dynamical_Systems.html][next]] ]
99 |
100 |
101 |
102 | -----
103 |
104 | * Links
105 | - python : http://www.python.org/
106 | - numpy : http://numpy.scipy.org/
107 | - scipy : http://www.scipy.org/
108 | - matplotlib : http://matplotlib.sourceforge.net/
109 | - ipython : http://ipython.org/
110 | - Free Virtual Machine software virtualbox (mac, windows, linux) :
111 | [[https://www.virtualbox.org/]]
112 | - Commercial Virtual Machine software
113 | - vmware (mac) :
114 | https://www.vmware.com/products/fusion/overview.html
115 | - vmware (windows) :
116 | https://www.vmware.com/products/workstation/overview.html
117 | - parallels desktop (mac) :
118 | http://www.parallels.com/products/desktop/
119 | - parallels workstation (windows, linux) : http://www.parallels.com/products/workstation/
120 | - Free Ubuntu GNU/Linux distributions
121 | - ubuntu : http://www.ubuntu.com/download/desktop
122 | - Ubuntu Shell scripts to install python + scientific stuff and LaTeX
123 | - python gist : https://gist.github.com/3692447
124 | - LaTeX gist : https://gist.github.com/3692459
125 | - [[http://fperez.org/py4science/starter_kit.html][Py4Science]] a Starter Kit
126 | - [[http://neuro.debian.net/][NeuroDebian]] linux-based turnkey software platform for neuroscience
127 |
--------------------------------------------------------------------------------
/org/1_Dynamical_Systems.org:
--------------------------------------------------------------------------------
1 | #+STARTUP: showall
2 |
3 | #+TITLE: 1. Dynamical Systems
4 | #+AUTHOR: Paul Gribble & Dinant Kistemaker
5 | #+EMAIL: paul@gribblelab.org
6 | #+DATE: fall 2012
7 | #+HTML_LINK_UP: http://www.gribblelab.org/compneuro/0_Setup_Your_Computer.html
8 | #+HTML_LINK_HOME: http://www.gribblelab.org/compneuro/index.html
9 | #+BIBLIOGRAPHY: refs plain option:-d limit:t
10 |
11 | -----
12 |
13 | * What is a dynamical system?
14 |
15 | Systems can be characterized by the specific relation between their
16 | input(s) and output(s). A static system has an output that only
17 | depends on its input. A mechanical example of such a system is an
18 | idealized, massless (mass=0) spring. The length of the spring depends
19 | only on the force (the input) that acts upon it. Change the input
20 | force, and the length of the spring will change, and this will happen
21 | instantaneously (obviously a massless spring is a theoretical
22 | construct). A system becomes dynamical (it is said to have /dynamics/)
23 | when a mass is attached to the spring. Now the position of the mass
24 | (and equivalently, the length of the spring) is no longer directly
25 | dependent on the input force, but is also tied to the acceleration of
26 | the mass, which in turn depends on the sum of all forces acting upon
27 | it (the sum of the input force and the force due to the spring). The
28 | net force depends on the position of the mass, which depends on the
29 | length of the spring, which depends on the spring force. The property
30 | that acceleration of the mass depends on its position makes this a
31 | dynamical system.
32 |
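In equation form, the static (massless) spring is just an algebraic
relation between input and output, with no derivatives involved: at
equilibrium the applied force $F$ and the change in length $x$ satisfy

\begin{equation}
x = \frac{F}{k}
\end{equation}

where $k$ is a constant reflecting the stiffness of the spring.
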
33 | #+ATTR_HTML: :height 200px :align center
34 | #+CAPTION: A spring with a mass attached
35 | [[file:figs/spring-mass.png]]
36 |
37 | Dynamical systems can be characterized by differential equations that
38 | relate the state derivatives (e.g. velocity or acceleration) to the
39 | state variables (e.g. position). The differential equation for the
40 | spring-mass system depicted above is:
41 |
42 | \begin{equation}
43 | m\ddot{x} = -kx + mg
44 | \end{equation}
45 |
46 | Where $x$ is the position of the mass $m$ (the length of the spring),
47 | $\ddot{x}$ is the second derivative of position (i.e. acceleration),
48 | $k$ is a constant (related to the stiffness of the spring), and $g$ is
49 | the acceleration due to gravity.
50 |
51 | The system is said to be a /second order/ system, as the order of the
52 | highest derivative that appears in the differential equation
53 | describing the system is two. The position $x$ and its time derivative $\dot{x}$ are
54 | called /states/ of the system, and $\dot{x}$ and $\ddot{x}$ are called
55 | /state derivatives/.
56 |
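A useful equivalent form, which we will rely on later when simulating
such systems, rewrites the single second-order equation as two coupled
first-order equations, one per state:

\begin{eqnarray}
\dot{x}_{1} &= &x_{2}\\
\dot{x}_{2} &= &\frac{-kx_{1}}{m} + g
\end{eqnarray}

where $x_{1}=x$ is the position and $x_{2}=\dot{x}$ is the velocity.
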
57 | Most systems out there in nature are dynamical systems. For example
58 | most chemical reactions under natural circumstances are dynamical: the
59 | rate of change of a chemical reaction depends on the amount of
60 | chemical present, in other words the state derivative is proportional
61 | to the state. Dynamical systems exist in biology as well. For example
62 | the rate of change of the population of a species depends on the
63 | current size of that population.
64 |
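The chemical-reaction case can be written as a single first-order
differential equation, for example for a substance that breaks down at
a rate proportional to the amount $x$ currently present:

\begin{equation}
\dot{x} = -ax
\end{equation}

whose solution, $x(t) = x(0)e^{-at}$, is exponential decay.
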
65 | Dynamical equations are often described by a set of /coupled
66 | differential equations/. For example, the reproduction rate of
67 | rabbits (state derivative 1) depends on the population of rabbits
68 | (state 1) and on the population size of foxes (state 2). The
69 | reproduction rate of foxes (state derivative 2) depends on the
70 | population of foxes (state 2) and also on the population of rabbits
71 | (state 1). In this case we have two coupled first-order differential
72 | equations, and hence a system of order two. The so-called
73 | predator-prey model is also known as the [[http://en.wikipedia.org/wiki/Lotka_Volterra_equation][Lotka-Volterra equations]].
74 |
75 | \begin{eqnarray}
76 | \dot{x} &= &x(\alpha - \beta y)\\
77 | \dot{y} &= &-y(\gamma - \delta x)
78 | \end{eqnarray}
79 |
80 | * Why make models?
81 |
82 | There are two main reasons: one being practical and one mostly
83 | theoretical. The practical use is prediction. A typical example of a
84 | dynamical system that is modelled for prediction is the weather. The
85 | weather is a very complex (high-order, nonlinear, coupled and
86 | chaotic) system. More theoretically, one reason to make models is to
87 | test the validity of a functional hypothesis of an observed
88 | phenomenon. A beautiful example is the model made by [[http://en.wikipedia.org/wiki/Hodgkin-Huxley_model][Hodgkin and
89 | Huxley]] to understand how action potentials arise and propagate in
90 | neurons \cite{HH1952,HH1990}. They modelled the different
91 | (voltage-gated) ion channels in an axon membrane and showed using
92 | mathematical models that the changes in ionic currents across the
93 | membrane were indeed responsible for the electrical spikes observed
94 | experimentally 7 years earlier.
95 |
96 | #+CAPTION: Hodgkin-Huxley model of voltage-gated ion channels
97 | #+ATTR_HTML: :height 200px
98 | [[file:figs/HH1.png]]
99 |
100 | #+CAPTION: Action potentials across the membrane
101 | #+ATTR_HTML: :height 200px
102 | [[file:figs/HH2.png]]
103 |
104 | A second theoretical reason to make models is that it is sometimes
105 | very difficult, if not impossible, to answer a certain question
106 | empirically. As an example we take the following biomechanical
107 | question: Would you be able to jump higher if your biceps femoris
108 | (part of your hamstrings) were two separate muscles each crossing only
109 | one joint rather than being one muscle crossing both the hip and knee
110 | joint? Not a strange question as one could then independently control
111 | the torques around each joint.
112 |
113 | In order to answer this question empirically, one would like to do the
114 | following experiment:
115 |
116 | - measure the maximal jump height of a subject
117 | - change only the musculoskeletal properties in question
118 | - measure the jump height again
119 |
120 | Of course, such an experiment would raise several major ethical,
121 | practical and theoretical problems. It is unlikely that an ethics
122 | committee would approve the transplantation of the origin and
123 | insertion of the hamstrings in order to examine its effect on jump
124 | height. And even so, one would have some difficulties finding
125 | subjects. Even a volunteer for such a surgery would not bring us any
126 | closer to an answer. After the surgery, the subject would not be able
127 | to jump right away, but would have to undergo extensive
128 | rehabilitation, and during that period many factors would change
129 | undesirably, such as maximal contractile forces. And even if the
130 | subject were to fully recover (apart from the hamstrings
131 | transplantation), his or her nervous system would have to find the new
132 | optimal muscle stimulation pattern.
133 |
134 | If one person jumps lower than another person, is that because she
135 | cannot jump as high with her particular muscles, or was it just that
136 | her CNS was not able to find the optimal muscle activation pattern?
137 | Ultimately, one wants to know through what mechanism the subject's
138 | jump performance changes. To investigate this, one would need to know,
139 | for example, the forces produced by the hamstrings as a function of
140 | time, something that is impossible to obtain experimentally. Of
141 | course, this example is somewhat ridiculous, but its message is
142 | hopefully clear: for many questions a strictly empirical approach
143 | is not suitable. An alternative is provided by mathematical modelling.
144 |
145 | * Next steps
146 |
147 | In the next topic, we will be examining three systems --- a
148 | mass-spring system, a system representing weather patterns, and a
149 | system characterizing predator-prey interactions. In each case we will
150 | see how to go from differential equations characterizing the dynamics
151 | of the system, to Python code, and run that code to simulate the
152 | behaviour of the system over time. We will see the great power of
153 | simulation, namely the ability to change aspects of the system at
154 | will, and simulate to explore the resulting change in system
155 | behaviour.
156 |
157 | [ [[file:2_Modelling_Dynamical_Systems.html][next]] ]
158 |
159 | -----
160 |
--------------------------------------------------------------------------------
/org/2_Modelling_Dynamical_Systems.org:
--------------------------------------------------------------------------------
1 | #+STARTUP: showall
2 |
3 | #+TITLE: 2. Modelling Dynamical Systems
4 | #+AUTHOR: Paul Gribble & Dinant Kistemaker
5 | #+EMAIL: paul@gribblelab.org
6 | #+DATE: fall 2012
7 | #+HTML_LINK_UP: http://www.gribblelab.org/compneuro/1_Dynamical_Systems.html
8 | #+HTML_LINK_HOME: http://www.gribblelab.org/compneuro/index.html
9 |
10 | -----
11 |
12 | * Characterizing a System Using Differential Equations
13 |
14 | A dynamical system, such as the mass-spring system we saw before, can
15 | be characterized by the relationship between state variables $s$ and
16 | their (time) derivatives $\dot{s}$. How do we arrive at the correct
17 | characterization of this relationship? The short answer is, we figure
18 | it out using our knowledge of physics, or we are simply given the
19 | equations by someone else. Let's look at a simple mass-spring system
20 | again.
21 |
22 | #+ATTR_HTML: :height 200px :align center
23 | #+CAPTION: A spring with a mass attached
24 | [[file:figs/spring-mass.png]]
25 |
26 | We know a couple of things about this system. We know from [[http://en.wikipedia.org/wiki/Hooke's_law][Hooke's law]]
27 | of elasticity that the extension of a spring is directly and linearly
28 | proportional to the load applied to it. More precisely, the force that
29 | a spring applies in response to a perturbation from its /resting
30 | length/ (the length at which it doesn't generate any force), is
31 | linearly proportional, through a constant $k$, to the difference in
32 | length between its current length and its resting length (let's call
33 | this distance $x$). By convention, let's assume positive values of $x$
34 | correspond to lengthening the spring beyond its resting length, and
35 | negative values of $x$ correspond to shortening the spring from its
36 | resting length.
37 |
38 | \begin{equation}
39 | F = -kx
40 | \end{equation}
41 |
42 | Let's decide that the /state variable/ that we are interested in for
43 | our system is $x$. We will refer to $x$ instead of $s$ from now on to
44 | denote our state variable.
45 |
46 | We also know from [[http://en.wikipedia.org/wiki/Newton's_laws_of_motion][Newton's laws of motion]] (specifically [[http://en.wikipedia.org/wiki/Newton's_laws_of_motion#Newton.27s_second_law][Newton's
47 | second law]]) that the net force on an object is equal to its mass $m$
48 | multiplied by its acceleration $a$ (the second derivative of
49 | position).
50 |
51 | \begin{equation}
52 | F = ma
53 | \end{equation}
54 |
55 | Instead of using $a$ to denote acceleration let's use a different
56 | notation, in terms of the spring's perturbed length $x$. The rate of
57 | change (velocity) is denoted $\dot{x}$ and the rate of change of the
58 | velocity (i.e. the acceleration) is denoted $\ddot{x}$.
59 |
60 | \begin{equation}
61 | F = m \ddot{x}
62 | \end{equation}
63 |
64 | We also know that the mass is affected by two forces: the force due to
65 | the spring ($-kx$) and also the gravitational force $g$. So the
66 | equation characterizing the /net forces/ on the mass is
67 |
68 | \begin{equation}
69 | \sum{F} = m\ddot{x} = -kx + mg
70 | \end{equation}
71 |
72 | or just
73 |
74 | \begin{equation}
75 | m\ddot{x} = -kx + mg
76 | \end{equation}
77 |
78 | This equation is a /second-order/ differential equation, because the
79 | highest state derivative is a /second derivative/ ($\ddot{x}$, the
80 | acceleration of $x$). The equation
81 | specifies the relationship between the state variables (in this case a
82 | single state variable $x$) and its derivatives (in this case a single
83 | derivative, $\ddot{x}$).
84 |
85 | The reason we want an equation like this, from a practical point of
86 | view, is that we will be using numerical solvers in Python/Scipy to
87 | /integrate/ this differential equation over time, so that we can
88 | /simulate/ the behaviour of the system. What these solvers need is a
89 | Python function that returns state derivatives, given current
90 | states. We can re-arrange the equation above so that it specifies how
91 | to compute the state derivative $\ddot{x}$ given the current state
92 | $x$.
93 |
94 | \begin{equation}
95 | \ddot{x} = \frac{-kx}{m} + g
96 | \end{equation}
97 |
98 | Now we have what we need in order to simulate this system in
99 | Python/Scipy. At any time point, we can compute the acceleration of
100 | the mass by the formula above.
101 |
102 | * Integrating Differential Equations in Python/SciPy
103 |
104 | Here is a Python function that we will be using to simulate the
105 | mass-spring system. All it does, really, is compute the equation
106 | above: what is the value of $\ddot{x}$, given $x$? The one addition we
107 | have is that we are going to keep track not just of one state variable
108 | $x$ but also its first derivative $\dot{x}$ (the rate of change of
109 | $x$, i.e. velocity).
110 |
111 | #+BEGIN_SRC python
112 | def MassSpring(state,t):
113 | # unpack the state vector
114 | x = state[0]
115 | xd = state[1]
116 |
117 | # these are our constants
118 | k = 2.5 # Newtons per metre
119 | m = 1.5 # Kilograms
120 | g = 9.8 # metres per second squared
121 |
122 | # compute acceleration xdd
123 | xdd = ((-k*x)/m) + g
124 |
125 | # return the two state derivatives
126 | return [xd, xdd]
127 | #+END_SRC
128 |
129 | Note that the function we wrote takes two arguments as inputs: =state=
130 | and =t=, which corresponds to time. This is necessary for the
131 | numerical solver that we will use in Python/Scipy. The =state=
132 | variable is actually an /array/ of two values corresponding to $x$ and
133 | $\dot{x}$.
134 |
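You can call the function by hand in iPython to check that it behaves
sensibly (a quick sanity check, not part of the simulation itself):

#+BEGIN_SRC python
# at rest (x=0, xd=0) only gravity acts, so we expect [0.0, 9.8]
print MassSpring([0.0, 0.0], 0.0)
#+END_SRC
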
135 | How does numerical integration (simulation) work? Here is a summary of the steps that a numerical solver takes. First, you have to provide the solver with two things:
136 |
137 | 1. initial conditions (what are the initial states of the system?)
138 | 2. a time vector over which to simulate
139 |
140 | Given this, the numerical solver will go through the following steps to simulate the system:
141 |
142 | - calculate state derivatives $\ddot{x}$ at the initial time ($t=0$)
143 | given the initial states $(x,\dot{x})$
144 | - estimate $x(t+ \Delta t)$ using $x(t=0)$, $\dot{x}(t=0)$ and
145 | $\ddot{x}(t=0)$
146 | - calculate $\ddot{x}(t=t + \Delta t)$ from $x(t=t + \Delta t)$ and
147 | $\dot{x}(t=t + \Delta t)$
148 | - estimate $x(t + 2 \Delta t)$ and $\dot{x}(t + 2 \Delta t)$ using
149 | $x(t=t + \Delta t)$, $\dot{x}(t=t + \Delta t)$ and $\ddot{x}(t=t +
150 | \Delta t)$
151 | - calculate $\ddot{x}(t=t + 2\Delta t)$ from $x(t=t + 2\Delta t)$ and
152 | $\dot{x}(t=t + 2\Delta t)$
153 | - ... etc
154 |
155 | In this way the numerical solver can estimate how the system states
156 | $(x,\dot{x})$ unfold over time, given the initial conditions and the
157 | known relationship between state derivatives and system states. The
158 | details of the "estimate" steps above are not something we are going
159 | to dive into now. Suffice it to say that current estimation algorithms
160 | are based on the work of two German mathematicians named [[http://en.wikipedia.org/wiki/Runge–Kutta_methods][Runge and
161 | Kutta]] at the beginning of the 20th century. These numerical recipes
162 | are readily available in Scipy ([[http://docs.scipy.org/doc/scipy/reference/integrate.html][docs here]]), in MATLAB, and in other
163 | numerical software, and are known as ODE solvers (ODE stands for
164 | /ordinary differential equation/).
165 |
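To make the "estimate" steps concrete, here is a minimal sketch of the
simplest such scheme, the forward Euler method (the function name and
structure here are for illustration only; the =odeint()= solver we use
below implements more sophisticated variants):

#+BEGIN_SRC python
def euler(f, state0, t):
    # integrate d(state)/dt = f(state, t) using the forward Euler method
    states = [state0]
    for i in xrange(1, len(t)):
        dt = t[i] - t[i-1]
        deriv = f(states[-1], t[i-1])
        # state(t+dt) is approximately state(t) + dt * statederivative(t)
        states.append([s + dt*d for s, d in zip(states[-1], deriv)])
    return states
#+END_SRC
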
166 | Here's how we would simulate the mass-spring system above. Launch
167 | iPython with the =--pylab= argument (this automatically imports a
168 | bunch of libraries that we will use, including plotting libraries).
169 |
170 | #+BEGIN_SRC python
171 | from scipy.integrate import odeint
172 |
173 | def MassSpring(state,t):
174 | # unpack the state vector
175 | x = state[0]
176 | xd = state[1]
177 |
178 | # these are our constants
179 | k = 2.5 # Newtons per metre
180 | m = 1.5 # Kilograms
181 | g = 9.8 # metres per second squared
182 |
183 | # compute acceleration xdd
184 | xdd = ((-k*x)/m) + g
185 |
186 | # return the two state derivatives
187 | return [xd, xdd]
188 |
189 | state0 = [0.0, 0.0]
190 | t = arange(0.0, 10.0, 0.1)
191 |
192 | state = odeint(MassSpring, state0, t)
193 |
194 | plot(t, state)
195 | xlabel('TIME (sec)')
196 | ylabel('STATES')
197 | title('Mass-Spring System')
198 | legend(('$x$ (m)', '$\dot{x}$ (m/sec)'))
199 | #+END_SRC
200 |
201 | [[file:code/mass_spring.py][mass\_spring.py]]
202 |
203 | A couple of notes about the code. I have simply chosen, out of the
204 | blue, values for the constants $k$ and $m$. The
205 | [[http://en.wikipedia.org/wiki/Standard_gravity][acceleration due to gravity]] $g$ is of course known. I have also chosen to simulate the system for
206 | 10 seconds, and I have chosen a time /resolution/ of 100 milliseconds
207 | (0.1 seconds). We will talk later about the issue of what is an
208 | appropriate time resolution for simulation.
209 |
210 | You should see a plot like this:
211 |
212 | #+ATTR_HTML: :height 400px :align center
213 | #+CAPTION: Mass-Spring Simulation
214 | [[file:figs/mass-spring-sim.png]]
215 |
216 | The blue line shows the position $x$ of the mass (the length of the
217 | spring) over time, and the green line shows the rate of change of $x$,
218 | in other words the velocity $\dot{x}$, over time. These are the two
219 | states of the system, simulated over time.
220 |
221 | The way to interpret this simulation is, if we start the system at
222 | $x=0$ and $\dot{x}=0$, and simulate for 10 seconds, this is how the
223 | system would behave.
224 |
225 | ** The power of modelling and simulation
226 |
227 | Now you can appreciate the power of mathematical models and
228 | simulation: given a model that characterizes (to some degree of
229 | accuracy) the behaviour of a system we are interested in, we can use
230 | simulation to perform experiments /in simulation/ instead of in
231 | reality. This can be very powerful. We can ask questions of the model,
232 | in simulation, that may be too difficult, or expensive, or time
233 | consuming, or just plain impossible, to do in real-life empirical
234 | studies. The degree to which we regard the results of simulations as
235 | interpretable is a direct reflection of the degree to which we
236 | believe that our mathematical model is a reasonable characterization
237 | of the behaviour of the real system.
238 |
239 | ** Exercises
240 |
241 | 1. We have started the system at $x=0$ which means that the spring is
242 | not stretched beyond its resting length (so spring force due to
243 | stretch should equal zero), and $\dot{x}=0$, which means the
244 | spring's velocity is zero, i.e. it is not moving. Why does the
245 | simulation predict that the spring will begin stretching, then
246 | bouncing back and forth?
247 |
248 | 2. What is the influence of the sign and magnitude of the stiffness
249 | parameter $k$?
250 |
251 | 3. In physics, [[http://en.wikipedia.org/wiki/Damping][damping]] can be used to reduce the magnitude of
252 | oscillations. Damping generates a force that is directly
253 | proportional to velocity ($F = -b\dot{x}$). Add damping to the
254 | mass-spring system and re-run the simulation. Specify the value of
255 | the damping constant $b=-2.0$. What happens?
256 |
257 | 4. What is the influence of the sign and magnitude of the damping
258 | coefficient $b$?
259 |
260 | 5. Double the mass, and re-run the simulation. What happens?
261 |
262 | 6. How would you add an input force to the system?
263 |
264 |
265 | * Lorenz Attractor
266 |
267 | The [[http://en.wikipedia.org/wiki/Lorenz_system][Lorenz system]] is a dynamical system that we will look at briefly,
268 | as it will allow us to discuss several interesting issues around
269 | dynamical systems. It is a system often used to illustrate [[http://en.wikipedia.org/wiki/Nonlinear_system][non-linear
270 | systems]] theory and [[http://en.wikipedia.org/wiki/Chaos_theory][chaos theory]]. It's sometimes used as a simple
271 | demonstration of the [[http://en.wikipedia.org/wiki/Butterfly_effect][butterfly effect]] (sensitivity to initial
272 | conditions).
273 |
274 | The Lorenz system is a simplified mathematical model for atmospheric
275 | convection. Let's not worry about the details of what it represents;
276 | for now the important things to note are that it is a system of three
277 | /coupled/ differential equations, and characterizes a system with
279 | three state variables $(x,y,z)$.
279 |
280 | \begin{eqnarray}
281 | \dot{x} &= &\sigma(y-x)\\
282 | \dot{y} &= &(\rho-z)x - y\\
283 | \dot{z} &= &xy-\beta z
284 | \end{eqnarray}
285 |
286 | If you set the three constants $(\sigma,\rho,\beta)$ to the following
287 | values, the system exhibits /chaotic behaviour/.
288 |
289 | \begin{eqnarray}
290 | \sigma &= &10\\
291 | \rho &= &28\\
292 | \beta &= &\frac{8}{3}
293 | \end{eqnarray}
294 |
295 | Let's implement this system in Python/Scipy. We have been given above
296 | the three equations that characterize how the state derivatives
297 | $(\dot{x},\dot{y},\dot{z})$ depend on $(x,y,z)$ and the constants
298 | $(\sigma,\rho,\beta)$. All we have to do is write a function that
299 | implements this, set some initial conditions, decide on a time array
300 | to simulate over, and run the simulation using =odeint()=.
301 |
302 | #+BEGIN_SRC python
303 | from scipy.integrate import odeint
304 |
305 | def Lorenz(state,t):
306 | # unpack the state vector
307 | x = state[0]
308 | y = state[1]
309 | z = state[2]
310 |
311 | # these are our constants
312 | sigma = 10.0
313 | rho = 28.0
314 | beta = 8.0/3.0
315 |
316 | # compute state derivatives
317 | xd = sigma * (y-x)
318 | yd = (rho-z)*x - y
319 | zd = x*y - beta*z
320 |
321 | # return the state derivatives
322 | return [xd, yd, zd]
323 |
324 | state0 = [2.0, 3.0, 4.0]
325 | t = arange(0.0, 30.0, 0.01)
326 |
327 | state = odeint(Lorenz, state0, t)
328 |
329 | # do some fancy 3D plotting
330 | from mpl_toolkits.mplot3d import Axes3D
331 | fig = figure()
332 | ax = fig.gca(projection='3d')
333 | ax.plot(state[:,0],state[:,1],state[:,2])
334 | ax.set_xlabel('x')
335 | ax.set_ylabel('y')
336 | ax.set_zlabel('z')
337 | show()
338 | #+END_SRC
339 |
340 | [[file:code/lorenz1.py][lorenz1.py]]
341 |
342 | You should see something like this:
343 |
344 | #+ATTR_HTML: :height 400px :align center
345 | #+CAPTION: Lorenz Attractor
346 | [[file:figs/lorenz1.png]]
347 |
348 | The three axes on the plot represent the three states $(x,y,z)$
349 | plotted over the 30 seconds of simulated time. We started the system
350 | with three particular values of $(x,y,z)$ (I chose them arbitrarily),
351 | and we set the simulation in motion. This is the trajectory, in
352 | /state-space/, of the Lorenz system.
353 |
354 | You can see an interesting thing... the system seems to have two
355 | attracting regions, or "attractors": those circular paths. The
356 | system circles around in one "neighborhood" in state-space, and then
357 | flips over and circles around the second neighborhood. The number of
358 | times it circles in a given neighborhood, and the time at which it
359 | switches, displays chaotic behaviour, in the sense that they are
360 | exquisitely sensitive to initial conditions.
361 |
362 | For example let's re-run the simulation but change the initial
363 | conditions. Let's change them by a very small amount, say
364 | 0.0001... and let's only change the $x$ initial state by that very
365 | small amount. We will simulate for 30 seconds.
366 |
367 | #+BEGIN_SRC python
368 | t = arange(0.0, 30, 0.01)
369 |
370 | # original initial conditions
371 | state1_0 = [2.0, 3.0, 4.0]
372 | state1 = odeint(Lorenz, state1_0, t)
373 |
374 | # rerun with very small change in initial conditions
375 | delta = 0.0001
376 | state2_0 = [2.0+delta, 3.0, 4.0]
377 | state2 = odeint(Lorenz, state2_0, t)
378 |
379 | # animation
380 | figure()
381 | pb, = plot(state1[:,0],state1[:,1],'b-',alpha=0.2)
382 | xlabel('x')
383 | ylabel('y')
384 | p, = plot(state1[0:10,0],state1[0:10,1],'b-')
385 | pp, = plot(state1[10,0],state1[10,1],'b.',markersize=10)
386 | p2, = plot(state2[0:10,0],state2[0:10,1],'r-')
387 | pp2, = plot(state2[10,0],state2[10,1],'r.',markersize=10)
388 | tt = title("%4.2f sec" % 0.00)
389 | # animate
390 | step = 3
391 | for i in xrange(1,shape(state1)[0]-10,step):
392 | p.set_xdata(state1[10+i:20+i,0])
393 | p.set_ydata(state1[10+i:20+i,1])
394 | pp.set_xdata(state1[19+i,0])
395 | pp.set_ydata(state1[19+i,1])
396 | p2.set_xdata(state2[10+i:20+i,0])
397 | p2.set_ydata(state2[10+i:20+i,1])
398 | pp2.set_xdata(state2[19+i,0])
399 | pp2.set_ydata(state2[19+i,1])
400 | tt.set_text("%4.2f sec" % (i*0.01))
401 | draw()
402 |
403 | i = 1939 # the two simulations really diverge here!
404 | s1 = state1[i,:]
405 | s2 = state2[i,:]
406 | d12 = norm(s1-s2) # distance
407 | print ("distance = %f for a %f difference in initial conditions") % (d12, delta)
408 | #+END_SRC
409 |
410 | [[file:code/lorenz2.py][lorenz2.py]]
411 |
412 | #+BEGIN_EXAMPLE
413 | distance = 32.757253 for a 0.000100 difference in initial conditions
414 | #+END_EXAMPLE
415 |
416 | You should see an animation of the two state-space trajectories. For
417 | convenience we are only plotting $x$ vs $y$ and ignoring $z$. It turns
418 | out that 3D animations are not trivial in matplotlib (there is a
419 | library called mayavi that is excellent for 3D stuff).
420 |
421 | The original simulation is shown in blue and the new one (in which the
422 | initial condition of $x$ was increased by 0.0001) in red. The two
423 | follow each other quite closely for a long time, and then begin to
424 | diverge at about the 16 second mark. At the end of the animation it
425 | looks like this:
426 |
427 | #+ATTR_HTML: :height 400px :align center
428 | #+CAPTION: Lorenz Attractor
429 | [[file:figs/lorenz2.png]]
430 |
431 | At 19.39 seconds it looks like this:
432 |
433 | #+ATTR_HTML: :height 400px :align center
434 | #+CAPTION: Lorenz Attractor
435 | [[file:figs/lorenz3.png]]
436 |
437 | Note how the two systems are in different "neighborhoods" entirely!
438 |
439 | At the end of the code above we compute the distance between the two
440 | systems (the 3D distance between their respective $(x,y,z)$ positions
441 | in state-space), and the distance is a whopping 32.76 units, for a
442 | 0.0001 difference in initial conditions.
443 |
444 | This illustrates how systems with relatively simple differential
445 | equations characterizing their behaviour can turn out to be
446 | exquisitely sensitive to initial conditions. Just imagine if the
447 | initial conditions of your simulation were gathered from empirical
448 | observations (like the weather, for example). Now imagine you use a
449 | model simulation to predict whether it will be sunny (left-hand
450 | "neighborhood" of the plot above) or thunderstorms (right-hand
451 | "neighborhood"), 30 days from now. If the answer can flip between one
452 | prediction and the other, based on a 1/10,000 difference in
453 | measurement, you had better be sure of your empirical measurement
454 | instruments, when you make a prediction 30 days out! Actually this
455 | won't even solve the problem, no matter how precise your
456 | measurements. The point is that the system as a whole is very
457 | sensitive to even tiny changes in initial conditions. This is why
458 | short-term weather forecasts are relatively accurate, but forecasts
459 | past a couple of days can turn out to be dead wrong.
460 |
461 |
462 | * Predator-Prey model
463 |
464 | The [[http://en.wikipedia.org/wiki/Lotka%E2%80%93Volterra_equation][Lotka-Volterra equations]] are two coupled first-order nonlinear
465 | differential equations that are used to characterize the dynamics of
466 | biological systems in which a predator population and a prey population
467 | interact. The two populations develop over time according to these equations:
468 |
469 | \begin{eqnarray}
470 | \dot{x} &= &x(\alpha-\beta y)\\
471 | \dot{y} &= &-y(\gamma - \sigma x)
472 | \end{eqnarray}
473 |
474 | where $x$ is the number of prey (for example, rabbits), $y$ is the
475 | number of predators (e.g. foxes), and $\dot{x}$ and $\dot{y}$
476 | represent the growth rates (the rates of change over time) of the two
477 | populations. The values $(\alpha,\beta,\gamma,\sigma)$ are parameters
478 | (constants) that characterize different aspects of the two
479 | populations.
480 |
481 | Assumptions of this simple form of the model are:
482 |
483 | 1. prey find ample food at all times
484 | 2. food supply of predators depends entirely on prey population
485 | 3. rate of change of population is proportional to its size
486 | 4. the environment does not change
487 |
488 | The parameters can be interpreted as:
489 |
490 | - $\alpha$ is the natural growth rate of prey in the absence of predation
491 | - $\beta$ is the death rate per encounter of prey due to predation
492 | - $\sigma$ is related to the growth rate of predators
493 | - $\gamma$ is the natural death rate of predators in the absence of food (prey)
494 |
495 | Here is some example code showing how to simulate this system. Just as
496 | before, we need to complete a few steps:
497 |
498 | 1. write a Python function that characterizes how the system's state
499 | derivatives are related to the system's states (this is given by
500 | the equations above)
501 | 2. decide on values of the system parameters
502 | 3. decide on values of the initial conditions of the system (the
503 | initial values of the states)
504 | 4. decide on a time span and time resolution for simulating the system
505 | 5. simulate! (i.e. use an ODE solver to integrate the differential
506 | equations over time)
507 | 6. examine the states, typically by plotting them
508 |
509 | Here is some code:
510 |
511 | #+BEGIN_SRC python
512 | from scipy.integrate import odeint
513 |
514 | def LotkaVolterra(state,t):
515 | x = state[0]
516 | y = state[1]
517 | alpha = 0.1
518 | beta = 0.1
519 | sigma = 0.1
520 | gamma = 0.1
521 | xd = x*(alpha - beta*y)
522 | yd = -y*(gamma - sigma*x)
523 | return [xd,yd]
524 |
525 | t = arange(0,500,1)
526 | state0 = [0.5,0.5]
527 | state = odeint(LotkaVolterra,state0,t)
528 | figure()
529 | plot(t,state)
530 | ylim([0,8])
531 | xlabel('Time')
532 | ylabel('Population Size')
533 | legend(('x (prey)','y (predator)'))
534 | title('Lotka-Volterra equations')
535 | #+END_SRC
536 |
537 | You should see a plot like this:
538 |
539 | #+ATTR_HTML: :height 400px :align center
540 | #+CAPTION: Lotka-Volterra Simulation
541 | [[file:figs/lotkavolterra1.png]]
542 |
543 | We can also plot the trajectory of the system in /state-space/ (much
544 | like we did for the Lorenz system above):
545 |
546 | #+BEGIN_SRC python
547 | # animation in state-space
548 | figure()
549 | pb, = plot(state[:,0],state[:,1],'b-',alpha=0.2)
550 | xlabel('x (prey population size)')
551 | ylabel('y (predator population size)')
552 | p, = plot(state[0:10,0],state[0:10,1],'b-')
553 | pp, = plot(state[10,0],state[10,1],'b.',markersize=10)
554 | tt = title("%4.2f sec" % 0.00)
555 |
556 | # animate
557 | step=2
558 | for i in xrange(1,shape(state)[0]-10,step):
559 | p.set_xdata(state[10+i:20+i,0])
560 | p.set_ydata(state[10+i:20+i,1])
561 | pp.set_xdata(state[19+i,0])
562 | pp.set_ydata(state[19+i,1])
563 | tt.set_text("%d steps" % (i))
564 | draw()
565 | #+END_SRC
566 |
567 | [[file:code/lotkavolterra.py][lotkavolterra.py]]
568 |
569 | You should see a plot like this:
570 |
571 | #+ATTR_HTML: :height 400px :align center
572 | #+CAPTION: Lotka-Volterra State-space plot
573 | [[file:figs/lotkavolterra2.png]]
574 |
575 | ** Exercises
576 |
577 | 1. Increase the $\alpha$ parameter and re-run the simulation. What
578 | happens and why?
579 | 2. Set all parameters to 0.2. What happens and why?
580 | 3. Try the following: $(\alpha,\beta,\gamma,\sigma)$ =
581 | (0.20, 0.20, 0.02, 0.0). What happens and why?
582 |
583 | * Next steps
584 |
585 | We have seen how to take a set of differential equations that
586 | characterize the dynamics of a system, and implement them in Python,
587 | and run a simulation of the behaviour of that system over time. In the
588 | next topic, we will be applying this to models of single neurons, and
589 | simulating the dynamics of voltage-gated ion channels, and examining
590 | how these models predict spiking behaviour.
591 |
592 | [ [[file:3_Modelling_Action_Potentials.html][next]] ]
593 |
--------------------------------------------------------------------------------
/org/3_Modelling_Action_Potentials.org:
--------------------------------------------------------------------------------
1 | #+STARTUP: showall
2 |
3 | #+TITLE: 3. Modelling Action Potentials
4 | #+AUTHOR: Paul Gribble & Dinant Kistemaker
5 | #+EMAIL: paul@gribblelab.org
6 | #+DATE: fall 2012
7 | #+HTML_LINK_UP: http://www.gribblelab.org/compneuro/2_Modelling_Dynamical_Systems.html
8 | #+HTML_LINK_HOME: http://www.gribblelab.org/compneuro/index.html
9 | #+BIBLIOGRAPHY: refs plain option:-d limit:t
10 |
11 | -----
12 |
13 | * Introduction
14 |
15 | In this section we will use a model of voltage-gated ion channels in a
16 | single neuron to simulate action potentials. The model is based on the
17 | work by Hodgkin & Huxley in the 1940s and 1950s
18 | \cite{HH1952,HH1990}. A good reference to refresh your memory about
19 | how ion channels in a neuron work is the Kandel, Schwartz & Jessel
20 | book "Principles of Neural Science" \cite{kandel2000principles}.
21 |
22 | To model the action potential we will use an article by Ekeberg et
23 | al. (1991) published in Biological Cybernetics
24 | \cite{ekeberg1991}. When reading the article you can focus on the
25 | first three pages (up to paragraph 2.3) and try to find answers to the
26 | following questions:
27 |
28 | - How many differential equations are there?
29 | - What is the order of the system described in equations 1-9?
30 | - What are the states and state derivatives of the system?
31 |
32 | * Simulating the Hodgkin & Huxley model
33 |
34 | Before we begin coding up the model, it may be useful to remind you of
35 | a fundamental law of electricity, one that relates electrical
36 | potential $V$ to electric current $I$ and resistance $R$ (or
37 | conductance $G$, the reciprocal of resistance). This of course is
38 | known as [[http://en.wikipedia.org/wiki/Ohm's_law][Ohm's law]]:
39 |
40 | \begin{equation}
41 | V = IR
42 | \end{equation}
43 |
44 | or
45 |
46 | \begin{equation}
47 | V = \frac{I}{G}
48 | \end{equation}
49 |
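As a quick worked example (the numbers are chosen purely for
illustration): a current of $I = 2$ nA flowing through a membrane
resistance of $R = 5$ M$\Omega$ produces a potential of
$V = IR = 10$ mV.
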
50 | Our goal here is to code up a dynamical model of the membrane's
51 | electric circuit including two types of ion channels: sodium and
52 | potassium channels. We will use this model to better understand the
53 | process underlying the origin of an action potential.
54 |
55 | ** The neuron model
56 |
57 | #+ATTR_HTML: :width 400px :align center
58 | #+CAPTION: Schematic of Ekeberg et al. 1991 neuron model
59 | [[file:figs/ekeberg_fig1.png]]
60 |
61 | The figure above, adapted from Ekeberg et al., 1991
62 | \cite{ekeberg1991}, schematically illustrates the model of a
63 | neuron. In panel A we see a soma, and multiple dendrites. Each of
64 | these can be modelled by an electrical "compartment" (Panel B) and the
65 | passive interactions between them can be modelled as a pretty standard
66 | electrical circuit (see [[http://en.wikipedia.org/wiki/Biological_neuron_model][Biological Neuron Model]] for more details about
67 | compartmental models of neurons). In panel C, we see an expanded model
68 | of the Soma from panel A. Here, a number of active ion channels are
69 | included in the model of the soma.
70 |
71 | For our purposes here, we will focus on the soma, and we will not
72 | include any additional dendrites in our implementation of the
73 | model. Thus essentially we will be modelling what appears in panel C,
74 | and at that, only a subset.
75 |
76 | In panel C we see that the soma can be modelled as an electrical
77 | circuit with a sodium ion channel (Na), a potassium ion channel (K), a
78 | calcium ion channel (Ca), and a calcium-dependent potassium channel
79 | (K(Ca)). What we will be concerned with simulating, ultimately, is the
80 | intracellular potential $E$.
81 |
82 | *** Passive Properties
83 |
84 | Equation (1) of Ekeberg is a differential equation describing the
85 | time derivative of the membrane potential $E$ as
86 | a function of the passive leak current through the membrane, and the
87 | current through the ion channels. Note that Ekeberg uses $E$ instead
88 | of the typical $V$ symbol to represent electrical potential.
89 |
90 | \begin{equation}
91 | \frac{dE}{dt} = \frac{(E_{leak}-E)G_{m} + \sum{\left(E_{comp}-E\right)}G_{core} + I_{channels}}{C_{m}}
92 | \end{equation}
93 |
94 | Don't panic, it's not actually that complicated. What this equation is
95 | saying is that the rate of change of electrical potential across the
96 | membrane (the left hand side of the equation, $\frac{dE}{dt}$) is equal
97 | to the sum of a bunch of other terms, divided by membrane capacitance
98 | $C_{m}$ (the right hand side of the equation). Recall from basic
99 | physics that [[http://en.wikipedia.org/wiki/Capacitance][capacitance]] is a measure of the ability of something to
100 | store an electrical charge.
101 |
102 | The "bunch of other things" is a sum of three things, actually, (from
103 | left to right): a passive leakage current, plus a term characterizing
104 | the electrical coupling of different compartments, plus the currents
105 | of the various ion channels. Since we are not going to be modelling
106 | dendrites here, we can ignore the middle term on the right hand side
107 | of the equation $\sum{\left(E_{comp}-E\right)}G_{core}$ which
108 | represents the sum of currents from adjacent compartments (we have
109 | none).
110 |
111 | We are also going to include in our model an external current
112 | $I_{ext}$. This can essentially represent the sum of currents coming
113 | in from the dendrites (which we are not explicitly modelling). It can
114 | also represent external current injected in a [[http://en.wikipedia.org/wiki/Patch_clamp][patch-clamp]]
115 | experiment. This is what we as experimenters can manipulate, for
116 | example, to see how neuron spiking behaviour changes. So what we will
117 | actually be working with is this:
118 |
119 | \begin{equation}
120 | \frac{dE}{dt} = \frac{(E_{leak}-E)G_{m} + I_{channels} + I_{ext}}{C_{m}}
121 | \end{equation}
122 |
123 | What we need to do now is unpack the $I_{channels}$ term representing
124 | the currents from all of the ion channels in the model. Initially we
125 | will only be including two, the potassium channel (K) and the sodium
126 | channel (Na).
127 |
128 | *** Sodium channels (Na)
129 |
130 | The current through sodium channels that enters the soma is
131 | represented by equation (2) in Ekeberg et al. (1991):
132 |
133 | \begin{equation}
134 | I_{Na} = (E_{Na} - E_{soma})G_{Na}m^{3}h
135 | \end{equation}
136 |
137 | where $m$ is the activation of the sodium channel and $h$ is the
138 | inactivation of the sodium channel, and the other terms are constant
139 | parameters: $E_{Na}$ is the reversal potential, $G_{Na}$ is the
140 | maximum sodium conductance through the membrane, and $E_{soma}$ is
141 | the membrane potential of the soma.
142 |
143 | The activation $m$ of the sodium channels is described by the
144 | differential equation (3) in Ekeberg et al. (1991):
145 |
146 | \begin{equation}
147 | \frac{dm}{dt} = \alpha_{m}(1-m) - \beta_{m}m
148 | \end{equation}
149 |
150 | where $\alpha_{m}$ represents the rate at which the channel switches
151 | from a closed to an open state, and $\beta_{m}$ is the rate for the
152 | reverse. These two parameters $\alpha$ and $\beta$ depend on the
153 | membrane potential in the soma. In other words the sodium channel is
154 | voltage-gated. Equation (4) in Ekeberg et al. (1991) gives these
155 | relationships:
156 |
157 | \begin{eqnarray}
158 | \alpha_{m} &= &\frac{A(E_{soma}-B)}{1-e^{(B-E_{soma})/C}}\\
159 | \beta_{m} &= &\frac{A(B-E_{soma})}{1-e^{(E_{soma}-B)/C}}
160 | \end{eqnarray}
161 |
162 | A tricky bit in the Ekeberg et al. (1991) paper is that the $A$, $B$
163 | and $C$ parameters above are different for $\alpha$ and $\beta$ even
164 | though there is no difference in the symbols used in the equations.
165 |
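As a concrete sketch, the two rate equations above translate into
Python like this (the $A$, $B$ and $C$ arguments take whichever values
Table 1 assigns to the particular rate being computed):

#+BEGIN_SRC python
from numpy import exp

# sketch of Ekeberg equation (4); remember that A, B and C have
# different values for the alpha and the beta rate functions
def alpha_m(E_soma, A, B, C):
    return A * (E_soma - B) / (1.0 - exp((B - E_soma) / C))

def beta_m(E_soma, A, B, C):
    return A * (B - E_soma) / (1.0 - exp((E_soma - B) / C))
#+END_SRC
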
166 | The inactivation of the sodium channels is described by a similar set of equations: a differential equation giving the rate of change of sodium channel inactivation, from Ekeberg et al. (1991) equation (5):
167 |
168 | \begin{equation}
169 | \frac{dh}{dt} = \alpha_{h}(1-h) - \beta_{h}h
170 | \end{equation}
171 |
172 | and equations specifying how $\alpha_{h}$ and $\beta_{h}$ are
173 | voltage-dependent, given in Ekeberg et al. (1991) equation (6):
174 |
175 | \begin{eqnarray}
176 | \alpha_{h} &= &\frac{A(B-E_{soma})}{1-e^{(E_{soma}-B)/C}}\\
177 | \beta_{h} &= &\frac{A}{1+e^{(B-E_{soma})/C}}
178 | \end{eqnarray}
179 |
180 | Note again that the terms $A$, $B$ and $C$ are different for
181 | $\alpha_{h}$ and $\beta_{h}$, even though they are represented by the
182 | same symbols in the equations.
183 |
184 | So in summary, for the sodium channels, we have two state variables:
185 | $(m,h)$ representing the activation ($m$) and inactivation ($h$) of
186 | the sodium channels. We have a differential equation for each,
187 | describing how the rate of change (the first derivative) of these
188 | states can be calculated: Ekeberg equations (3) and (5). Those
189 | differential equations involve parameters $(\alpha,\beta)$, one set
190 | for $m$ and a second set for $h$. Those $(\alpha,\beta)$ parameters
191 | are computed from Ekeberg equations (4) (for $m$) and (6) (for
192 | $h$). Those equations involve parameters $(A,B,C)$ that have parameter
193 | values specific to $\alpha$ and $\beta$ and $m$ and $h$ (see Table 1
194 | of Ekeberg et al., 1991).
195 |
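Putting the sodium channel pieces together, a minimal sketch might
look like this (the rate values $\alpha$ and $\beta$ would be computed
from functions like those sketched above, each with its own $(A,B,C)$
triplet from Table 1):

#+BEGIN_SRC python
# sketch of Ekeberg equations (2), (3) and (5): sodium current and
# the derivatives of the activation (m) and inactivation (h) states
def I_Na(E_soma, m, h, E_Na, G_Na):
    return (E_Na - E_soma) * G_Na * (m**3) * h

def dmdt(m, alpha_m, beta_m):
    return alpha_m * (1.0 - m) - beta_m * m

def dhdt(h, alpha_h, beta_h):
    return alpha_h * (1.0 - h) - beta_h * h
#+END_SRC
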
196 | *** Potassium channels (K)
197 |
198 | The potassium channels are represented in a similar way, although in
199 | this case there is only channel activation, and no inactivation. In
200 | Ekeberg et al. (1991) the three equations (7), (8) and (9) represent
201 | the potassium channels:
202 |
203 | \begin{equation}
204 | I_{K} = (E_{K}-E_{soma})G_{K}n^{4}
205 | \end{equation}
206 |
207 | \begin{equation}
208 | \frac{dn}{dt} = \alpha_{n}(1-n) - \beta_{n}n
209 | \end{equation}
210 |
211 | where $n$ is the state variable representing the activation of
212 | potassium channels. As before, we have expressions for
213 | $(\alpha,\beta)$ reflecting the fact that the potassium channel is
214 | also voltage-gated:
215 |
216 | \begin{eqnarray}
217 | \alpha_{n} &= &\frac{A(E_{soma}-B)}{1-e^{(B-E_{soma})/C}}\\
218 | \beta_{n} &= &\frac{A(B-E_{soma})}{1-e^{(E_{soma}-B)/C}}
219 | \end{eqnarray}
220 |
221 | Again, the parameter values for $(A,B,C)$ can be found in Table 1 of
222 | Ekeberg et al. (1991).
223 |
224 | To summarize, the potassium channel has a single state variable $n$
225 | representing the activation of the potassium channel.
226 |
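The corresponding sketch for the potassium channel is even shorter
(again, the $(A,B,C)$ values come from Table 1, and $E_{K}$ and
$G_{K}$ from Table 2):

#+BEGIN_SRC python
# sketch of Ekeberg equations (7) and (8): potassium current and
# the derivative of the activation state n
def I_K(E_soma, n, E_K, G_K):
    return (E_K - E_soma) * G_K * n**4

def dndt(n, alpha_n, beta_n):
    return alpha_n * (1.0 - n) - beta_n * n
#+END_SRC
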
227 | *** Summary
228 |
229 | We have a model now that includes four state variables:
230 |
231 | 1. $E$ representing the potential in the soma, given by differential equation (1) in Ekeberg et al. (1991)
232 | 2. $m$ representing the activation of sodium channels, Ekeberg equation (3)
233 | 3. $h$ representing the inactivation of sodium channels, Ekeberg equation (5)
234 | 4. $n$ representing the activation of potassium channels, Ekeberg equation (8)
235 |
236 | Each of the differential equations that define how to compute the
237 | state derivatives involves $(\alpha,\beta)$ terms that are given by
238 | Ekeberg equations (4) (for $m$), (6) (for $h$) and (9) (for $n$).
239 |
240 | So what we have to do in order to simulate the dynamic behaviour of
241 | this neuron over time is simply to implement these equations in
242 | Python code, give the system some reasonable initial conditions, and
243 | simulate the system over time using the =odeint()= function.
244 |
245 | ** Python code
246 |
247 | A full code listing of a model including sodium and potassium channels
248 | can be found here: [[file:code/ekeberg1.py][ekeberg1.py]]. Admittedly, this system involves more
249 | equations, and more parameters, than the other simple "toy" systems
250 | that we saw in the previous section. The fundamental ideas are the
251 | same, however, so let's step through things bit by bit.
252 |
253 | We begin by setting up all of the necessary model parameters (there
254 | are many). They are found in Ekeberg et al. (1991) Tables 1 and 2. I
255 | have chosen to do this using a Python data type called a
256 | [[http://docs.python.org/tutorial/datastructures.html#dictionaries][dictionary]]. This is a useful data type to parcel all of our parameters
257 | together. Unlike an array or list, which we would have to index using
258 | integer values (and then keep track of which one corresponded to which
259 | parameter), with a dictionary, we can index into it using string
260 | labels.
261 |
262 | #+BEGIN_SRC python
263 | # ipython --pylab
264 |
265 | # import some needed functions
266 | from scipy.integrate import odeint
267 |
268 | # set up a dictionary of parameters
269 |
270 | E_params = {
271 | 'E_leak' : -7.0e-2,
272 | 'G_leak' : 3.0e-09,
273 | 'C_m' : 3.0e-11,
274 | 'I_ext' : 0*1.0e-10
275 | }
276 |
277 | Na_params = {
278 | 'Na_E' : 5.0e-2,
279 | 'Na_G' : 1.0e-6,
280 | 'k_Na_act' : 3.0e+0,
281 | 'A_alpha_m_act' : 2.0e+5,
282 | 'B_alpha_m_act' : -4.0e-2,
283 | 'C_alpha_m_act' : 1.0e-3,
284 | 'A_beta_m_act' : 6.0e+4,
285 | 'B_beta_m_act' : -4.9e-2,
286 | 'C_beta_m_act' : 2.0e-2,
287 | 'l_Na_inact' : 1.0e+0,
288 | 'A_alpha_m_inact' : 8.0e+4,
289 | 'B_alpha_m_inact' : -4.0e-2,
290 | 'C_alpha_m_inact' : 1.0e-3,
291 | 'A_beta_m_inact' : 4.0e+2,
292 | 'B_beta_m_inact' : -3.6e-2,
293 | 'C_beta_m_inact' : 2.0e-3
294 | }
295 |
296 | K_params = {
297 | 'k_E' : -9.0e-2,
298 | 'k_G' : 2.0e-7,
299 | 'k_K' : 4.0e+0,
300 | 'A_alpha_m_act' : 2.0e+4,
301 | 'B_alpha_m_act' : -3.1e-2,
302 | 'C_alpha_m_act' : 8.0e-4,
303 | 'A_beta_m_act' : 5.0e+3,
304 | 'B_beta_m_act' : -2.8e-2,
305 | 'C_beta_m_act' : 4.0e-4
306 | }
307 |
308 | params = {
309 | 'E_params' : E_params,
310 | 'Na_params' : Na_params,
311 | 'K_params' : K_params
312 | }
313 | #+END_SRC
314 |
315 | We could have stored the four values in =E_params= in an array like this:
316 |
317 | #+BEGIN_SRC python
318 | E_params = array([-7.0e-2, 3.0e-09, 3.0e-11, 0*1.0e-10])
319 | #+END_SRC
320 |
321 | but then we would have to access particular values by indexing into that array with integers like this:
322 |
323 | #+BEGIN_SRC python
324 | E_leak = E_params[0]
325 | G_leak = E_params[1]
326 | #+END_SRC
327 |
328 | Instead, if we use a dictionary, we can index into the structure using alphanumeric strings as index values, like this:
329 |
330 | #+BEGIN_SRC python
331 | E_params['E_leak']
332 | E_params['G_leak']
333 | #+END_SRC
334 |
335 | You don't have to use a dictionary to store the parameter values, but
336 | I find it a really useful way to maintain readability.
337 |
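One thing to note: since =params= is a dictionary whose values are
themselves dictionaries, getting at an individual parameter takes two
indexing steps:

#+BEGIN_SRC python
params['E_params']['I_ext']    # external current
params['Na_params']['Na_E']    # sodium reversal potential
#+END_SRC

This is exactly the pattern we will use below when we inject external
current into the model.
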
338 | Our next bit of code is the implementation of the ODE function
339 | itself. Remember, our ultimate goal is to model $E$, the potential
340 | across the soma membrane. We know from Ekeberg equation (1) that the
341 | rate of change of $E$ depends on leakage current and on the sum of
342 | currents from other channels. These other currents are given by
343 | Ekeberg equations (2) (sodium) and (7) (potassium). These equations
344 | involve the three other states in our system: sodium activation $m$,
345 | sodium inactivation $h$ and potassium activation $n$, which are each
346 | defined by their own differential equations, Ekeberg equations (3),
347 | (5) and (8), respectively. It's just a matter of coding things up step
348 | by step.
349 |
350 | #+BEGIN_SRC python
351 | # define our ODE function
352 |
353 | def neuron(state, t, params):
354 | """
355 | Purpose: simulate Hodgkin and Huxley model for the action potential using
356 | the equations from Ekeberg et al, Biol Cyb, 1991.
357 | Input: state ([E m h n] (i.e. [membrane potential; activation of
358 | Na+ channel; inactivation of Na+ channel; activation of K+
359 | channel]),
360 | t (time),
361 | and the params (parameters of neuron; see Ekeberg et al).
362 | Output: statep (state derivatives).
363 | """
364 |
365 | E = state[0]
366 | m = state[1]
367 | h = state[2]
368 | n = state[3]
369 |
370 | Epar = params['E_params']
371 | Na = params['Na_params']
372 | K = params['K_params']
373 |
374 | # external current (from "voltage clamp", other compartments, other neurons, etc)
375 | I_ext = Epar['I_ext']
376 |
377 | # calculate Na rate functions and I_Na
378 | alpha_act = Na['A_alpha_m_act'] * (E-Na['B_alpha_m_act']) / (1.0 - exp((Na['B_alpha_m_act']-E) / Na['C_alpha_m_act']))
379 | beta_act = Na['A_beta_m_act'] * (Na['B_beta_m_act']-E) / (1.0 - exp((E-Na['B_beta_m_act']) / Na['C_beta_m_act']) )
380 | dmdt = ( alpha_act * (1.0 - m) ) - ( beta_act * m )
381 |
382 | alpha_inact = Na['A_alpha_m_inact'] * (Na['B_alpha_m_inact']-E) / (1.0 - exp((E-Na['B_alpha_m_inact']) / Na['C_alpha_m_inact']))
383 | beta_inact = Na['A_beta_m_inact'] / (1.0 + (exp((Na['B_beta_m_inact']-E) / Na['C_beta_m_inact'])))
384 | dhdt = ( alpha_inact*(1.0 - h) ) - ( beta_inact*h )
385 |
386 | # Na-current:
387 | I_Na =(Na['Na_E']-E) * Na['Na_G'] * (m**Na['k_Na_act']) * h
388 |
389 | # calculate K rate functions and I_K
390 | alpha_kal = K['A_alpha_m_act'] * (E-K['B_alpha_m_act']) / (1.0 - exp((K['B_alpha_m_act']-E) / K['C_alpha_m_act']))
391 | beta_kal = K['A_beta_m_act'] * (K['B_beta_m_act']-E) / (1.0 - exp((E-K['B_beta_m_act']) / K['C_beta_m_act']))
392 | dndt = ( alpha_kal*(1.0 - n) ) - ( beta_kal*n )
393 | I_K = (K['k_E']-E) * K['k_G'] * n**K['k_K']
394 |
395 | # leak current
396 | I_leak = (Epar['E_leak']-E) * Epar['G_leak']
397 |
398 | # calculate derivative of E
399 | dEdt = (I_leak + I_K + I_Na + I_ext) / Epar['C_m']
400 | statep = [dEdt, dmdt, dhdt, dndt]
401 |
402 | return statep
403 | #+END_SRC
404 |
405 | Next we run a simulation by setting up our initial states, and a time
406 | array, and then calling =odeint()=. Note that we are injecting some
407 | external current by changing the value of the
408 | =params['E_params']['I_ext']= entry in the =params= dictionary.
409 |
410 | #+BEGIN_SRC python
411 | # simulate
412 |
413 | # set initial states and time vector
414 | state0 = [-70e-03, 0, 1, 0]
415 | t = arange(0, 0.2, 0.001)
416 |
417 | # let's inject some external current
418 | params['E_params']['I_ext'] = 1.0e-10
419 |
420 | # run simulation
421 | state = odeint(neuron, state0, t, args=(params,))
422 | #+END_SRC
423 |
424 | Finally, we plot the results:
425 |
426 | #+BEGIN_SRC python
427 | # plot the results
428 |
429 | figure(figsize=(8,12))
430 | subplot(4,1,1)
431 | plot(t, state[:,0])
432 | title('membrane potential')
433 | subplot(4,1,2)
434 | plot(t, state[:,1])
435 | title('Na+ channel activation')
436 | subplot(4,1,3)
437 | plot(t, state[:,2])
438 | title('Na+ channel inactivation')
439 | subplot(4,1,4)
440 | plot(t, state[:,3])
441 | title('K+ channel activation')
442 | xlabel('TIME (sec)')
443 | #+END_SRC
444 |
445 | Here is what you should see:
446 |
447 | #+ATTR_HTML: :width 400px :align center
448 | #+CAPTION: Spiking neuron simulation based on Ekeberg et al., 1991
449 | [[file:figs/ekeberg1.png]]
450 |
451 | * Things to try
452 |
453 | 1. alter the [[file:code/ekeberg1.py][ekeberg1.py]] code so that the modelled neuron only has the
454 | leakage current and external current. In other words, comment out
455 | the terms related to sodium and potassium channels. Run a
456 | simulation with an initial membrane potential of -70 mV and an
457 | external current of 0.0 A. What happens and why?
458 | 2. Change the external current to 1.0e-10 and re-run the
459 | simulation. What happens and why?
460 | 3. Add in the terms related to the sodium channel (activation and
461 | inactivation). Run a simulation with external current of 1.0e-10
462 | and initial states =[-70e-03, 0, 1]=. What happens and why?
463 | 4. Add in the terms related to the potassium channel. Run a simulation
464 | with external current of 1.0e-10 and initial states =[-70e-03, 0,
465 | 1, 0]=. What happens and why?
466 | 5. Play with the external current level (increase it slightly,
467 | decrease it slightly, etc). What is the effect on the behaviour of
468 | the neuron?
469 | 6. What is the minimum amount of external current necessary to
470 | generate an action potential? Why?
471 |
472 | * Next steps
473 |
474 | Next we will be looking at models of motor control. We will be using
475 | human arm movement as the model system. We will first look at
476 | kinematic models of one- and two-joint arms, so we can talk about the
477 | problem of coordinate transformations between hand-space and
478 | joint-space, and the non-linear geometrical transformations that must
479 | take place. After that we will move on to talking about models of
480 | muscle, force production, and limb dynamics, with an eye towards
481 | modelling the neural control of arm movements such as reaching and
482 | pointing.
483 |
484 | [ [[file:4_Computational_Motor_Control_Kinematics.html][Computational Motor Control: Kinematics]] ]
485 |
--------------------------------------------------------------------------------
/org/index.org:
--------------------------------------------------------------------------------
1 | #+STARTUP: showall
2 |
3 | #+TITLE: Computational Modelling in Neuroscience
4 | #+AUTHOR: Paul Gribble
5 | #+EMAIL: paul@gribblelab.org
6 | #+DATE: Fall 2012
7 | #+OPTIONS: toc:nil
8 | #+LINK_UP: http://www.gribblelab.org/teaching.html
9 | #+LINK_HOME: http://www.gribblelab.org/
10 |
11 | -----
12 | * Administrivia
13 | - This is the homepage for Neuroscience 9520: Computational Modelling in Neuroscience
14 | - Class will be Mondays, 2:00pm - 3:30pm, and Thursdays, 11:30am -
15 | 1:00pm, in NSC 245A
16 | - The instructor is Paul Gribble (email: paul [at] gribblelab [dot] org)
17 | - course [[file:syllabus.pdf][syllabus.pdf]]
18 | - We will use several chapters from a book on computational motor
19 | control: "The Computational Neurobiology of Reaching and Pointing"
20 | by Reza Shadmehr and Steven P. Wise, MIT Press, 2005. [ [[http://goo.gl/QKykK][google books
21 | link]] ] [ [[file:readings/SW_00_cover_info.pdf][cover info]] ]
22 |
23 | -----
24 | * Code
25 |
26 | When there is a python script linked in the notes, you will get a
27 | permissions error when clicking on it. I haven't figured out how to
28 | avoid this yet. In the meantime, all code can be downloaded here in
29 | this tarred gzipped archive: [[file:code.tgz][code.tgz]]
30 |
31 | -----
32 | * Course Notes
33 |
34 | 0. [@0] [[file:0_Setup_Your_Computer.html][Setup Your Computer]]
35 | 1. [[file:1_Dynamical_Systems.html][Dynamical Systems]]
36 | 2. [[file:2_Modelling_Dynamical_Systems.html][Modelling Dynamical Systems]]
37 | 3. [[file:3_Modelling_Action_Potentials.html][Modelling Action Potentials]]
38 | 4. [[file:4_Computational_Motor_Control_Kinematics.html][Computational Motor Control: Kinematics]]
39 | 5. [[file:5_Computational_Motor_Control_Dynamics.html][Computational Motor Control: Dynamics]]
40 | 6. [[file:6_Computational_Motor_Control_Muscle_Models.html][Computational Motor Control: Muscle Models]]
41 |
42 | -----
43 | * Schedule & Topics
44 |
45 | -----
46 | ** Sep 10: Introductions & course schedule
47 | - lecture slides: [[file:lecture1.pdf][lecture1.pdf]]
48 | - please read this: [[file:readings/Trappenberg1.pdf][Trappenberg1.pdf]]
49 | - please read this: [[file:readings/OM1.pdf][OM1.pdf]] (1st Ed.) or [[http://grey.colorado.edu/CompCogNeuro/index.php?title=CCNBook/Intro][CCNBook/Intro]]
50 | - your first assignment: [[file:assignment1.pdf][assignment1.pdf]] (due Sep 23)
51 |
52 | ** Sep 13: Getting your computer set up with Python & scientific libraries
53 | - course notes [[file:0_Setup_Your_Computer.html][0: Setup Your Computer]]
54 |
55 | -----
56 | ** Sep 17 : Modelling Dynamical Systems I
57 | - course notes [[file:1_Dynamical_Systems.html][1: Dynamical Systems]]
58 | - course notes [[file:2_Modelling_Dynamical_Systems.html][2: Modelling Dynamical Systems]]
59 |
60 | ** Sep 20: Modelling Dynamical Systems II
61 | - more on dynamical systems
62 | - [[file:assignment2.pdf][assignment2.pdf]] (due Sep 30)
63 | - code example for using optimization: [[file:code/optimizer_example.py][optimizer\_example.py]]
64 | - cool demo of [[http://www.youtube.com/watch?v=Klw7L0OZbFQ][synchronization of metronomes]] (and [[http://www.youtube.com/watch?v=kqFc4wriBvE][japanese version]]),
65 | plus [[https://github.com/paulgribble/metronomes][python code]] for simulating it
66 |
67 | -----
68 | ** Sep 24, 27 : no class (Paul away)
69 |
70 | -----
71 | ** Oct 1, 4 : Modelling Action Potentials - Hodgkin-Huxley models
72 | - [[file:readings/ekeberg1991.pdf][Ekeberg et al. (1991)]] (please read this)
73 | - optional: see the original Hodgkin & Huxley 1952 paper reprinted in
74 | 1990: [[file:readings/HH1990.pdf][Hodgkin & Huxley 1952 (1990)]]
75 | - optional: a chapter on [[file:readings/spiking_neuron_models.pdf][Spiking Neuron Models]] for a general overview
76 | of the field
77 | - course notes [[file:3_Modelling_Action_Potentials.html][3: Modelling Action Potentials]]
78 | - refresher slides on [[file:readings/action_potentials.pdf][action potentials]]
79 | - YouTube videos on [[http://www.youtube.com/watch?v=7EyhsOewnH4][The Action Potential]] and [[http://www.youtube.com/watch?v=LXdTg9jZYvs][Voltage Gated Channels
80 | and the Action Potential]]
81 | - [[file:assignment3.pdf][assignment3.pdf]] (due Oct 7) [[file:code/assignment3_params.py][assignment3\_params.py]]
82 | - [[file:code/assignment2_sol.py][assignment2\_sol.py]]
83 | - [[file:code/assignment3_sol.py][assignment3\_sol.py]]
84 |
85 | -----
86 | ** Oct 8, 11 : no class (thanksgiving, SFN)
87 |
88 | -----
89 | ** Oct 15, 18 : no class (SFN)
90 |
91 | -----
92 | ** Oct 22, 25 : Computational Motor Control: Kinematics
93 | - course notes: [[file:4_Computational_Motor_Control_Kinematics.html][4: Computational Motor Control: Kinematics]]
94 | - read *at least two* of the papers listed in the course notes
95 | - read Shadmehr & Wise book, [[file:readings/SW_18.pdf][Chapter 18]] and [[file:readings/SW_19.pdf][Chapter 19]]
96 | - [[file:assignment4.pdf][assignment4.pdf]]
97 | - [[file:code/minjerk.py][minjerk.py]]
98 |
99 | -----
100 | ** Oct 29, Nov 1 : Computational Motor Control: Dynamics
101 | - [[file:code/assignment4_sol.py][assignment4\_sol.py]]
102 | - course notes: [[file:5_Computational_Motor_Control_Dynamics.html][5: Computational Motor Control: Dynamics]]
103 | - read *at least two* of the papers listed in the course notes
104 | - read Shadmehr & Wise book, [[file:readings/SW_20.pdf][Chapter 20]] and [[file:readings/SW_21.pdf][Chapter 21]] (and [[file:readings/SW_22.pdf][Chapter 22]]
105 | if you are interested in the topic)
106 | - [[file:code/twojointarm.py][twojointarm.py]] utility functions and Python code for doing inverse
107 | and forward dynamics of a two-joint arm in a horizontal plane (no
108 | gravity) with external driving torques, and animating the resulting
109 | arm motion
110 | - [[file:code/twojointarm_game.py][twojointarm\_game.py]] : try your hand at this game in which you have
111 | to control a two-joint arm to hit as many targets as you can before
112 | time runs out. Use the [d,f,j,k] keys to control [sf,se,ef,ee]
113 | joint torques (s=shoulder, e=elbow, f=flexor, e=extensor). Spacebar
114 | will "reset" the arm to its home position, handy if your arm starts
115 | spinning out of control (though each time you use spacebar your
116 | score will be decremented by one). Start the game by typing =python
117 | twojointarm_game.py= at the command line. At the end of the game
118 | your score will be printed out on the command line.
119 | - [[file:assignment5.pdf][assignment5.pdf]]
120 | -----
121 |
122 | ** Nov 5, 8 : Computational Motor Control: Muscle Models
123 | - [[file:code/assignment5_sol.py][assignment5\_sol.py]] and [[file:figs/assignment5_figures.pdf][assignment5\_figures.pdf]] : coming soon ...
124 | - read Shadmehr & Wise book, [[file:readings/SW_07.pdf][Chapter 7]] and [[file:readings/SW_08.pdf][Chapter 8]] and supplementary
125 | documents: [[http://www.shadmehrlab.org/book/musclemodel.pdf][musclemodel.pdf]]
126 | - course notes: [[file:6_Computational_Motor_Control_Muscle_Models.html][6: Computational Motor Control: Muscle Models]]
127 | - assignment: catch up on readings.
128 | - *note* no class on Thurs Nov 8.
129 |
130 | -----
131 | ** Nov 12, 15 : Computational Models of Learning part 1
132 | - some lecture slides: [[file:readings/nn_slides.pdf][nn\_slides.pdf]]
133 | - Readings:
134 | - [[file:readings/Jain_1996_NNetTutorial.pdf][Artificial Neural Networks: A Tutorial]] Jain & Mao, 1996
135 | - [[file:readings/Trappenberg5.pdf][Trappenberg5.pdf]], [[file:readings/Trappenberg6.pdf][Trappenberg6.pdf]], [[file:readings/Robinson92.pdf][Robinson92.pdf]], [[file:readings/Mitchell4.pdf][Mitchell4.pdf]]
136 | (4.8 optional)
137 | - Optional: [[file:readings/Haykin0.pdf][Haykin0.pdf]], [[file:readings/Haykin1.pdf][Haykin1.pdf]], [[file:readings/Haykin4.pdf][Haykin4.pdf]]
138 | - tutorial: [[http://galaxy.agh.edu.pl/~vlsi/AI/backp_t_en/backprop.html][Principles of training multi-layer neural network using
139 | backpropagation]]
140 | - A classic reference: McClelland & Rumelhart PDP books [[file:readings/PDP.pdf][PDP.pdf]],
141 | [[file:readings/PDP_Handbook.pdf][PDP\_Handbook.pdf]]
142 | - [[http://www.cs.toronto.edu/~hinton/absps/sciam93.pdf][Simulating Brain Damage]]
143 | - for a really nice overview of all sort of NNs, see: [[https://www.coursera.org/course/neuralnets][Neural Networks
144 | for Machine Learning]] (Geoff Hinton, Univ Toronto, Coursera online
145 | course)
146 | - for thoughts about motor learning, read Shadmher & Wise book,
147 | [[file:readings/SW_24.pdf][Chapter 24]]
148 | - Software: [[http://pybrain.org/][PyBrain]]
149 | - there is also this: [[http://leenissen.dk/fann/wp/help/installing-fann/][FANN: Fast Artificial Neural Network Library]]
150 | - code examples:
151 | - [[file:code/xor_aima.py][xor\_aima.py]] from [[http://aima.cs.berkeley.edu/][Norvig & Russell's book]]
152 | - [[file:code/xor.py][xor.py]] my code, vectorized numpy matrices
153 | - [[file:code/xor_plot.py][xor\_plot.py]] same as above, plots during training to visualize network performance
154 | - [[file:code/xor_cg.py][xor\_cg.py]] my code, uses backprop to compute gradients and conjugate gradient descent to optimize weights
155 | - [[http://yann.lecun.com/exdb/mnist/][MNIST Database of handwritten digits]]
156 | - [[http://arxiv.org/abs/1003.0358][Deep Big Simple Neural Nets Excel on Handwritten Digit Recognition]]
157 | - [[file:readings/NeuralNetworks2.pdf][NeuralNetworks2.pdf]] slides
158 |
159 | -----
160 | ** Nov 19, 22 : Computational Models of Learning part 2
161 | - [[file:code/assignment6.py][assignment6.py]] and [[file:code/traindata.pickle][traindata.pickle]]
162 | - [[file:readings/NeuralNetworks3.pdf][NeuralNetworks3.pdf]] slides
163 | - more code demos of feedforward networks
164 | - handwritten digit example [[file:readings/mnist.tgz][mnist.tgz]]
165 | - vowel classification example [[file:readings/PetersonBarneyVowels.tgz][PetersonBarneyVowels.tgz]]
166 | - facial expression example [[file:readings/RadboudFaces.tgz][RadboudFaces.tgz]]
167 | - recurrent neural networks ([[http://en.wikipedia.org/wiki/Recurrent_neural_network][wiki]])
168 | - [[http://130.102.79.1/~mikael/papers/rn_dallas.pdf][A guide to recurrent neural networks and backpropagation]] (M. Boden)
169 | - [[http://www.bcl.hamilton.ie/~barak/papers/CMU-CS-90-196.pdf][Dynamic Recurrent Neural Networks]] (B.A. Pearlmutter)
170 | - [[http://minds.jacobs-university.de/sites/default/files/uploads/papers/ESNTutorialRev.pdf][A tutorial on training recurrent neural networks]] (H. Jaeger)
171 | - echo state networks [[http://www.scholarpedia.org/article/Echo_state_network][scholarpedia]]
172 | - Buonomano D. (2009) Harnessing Chaos in Recurrent Neural
173 | Networks. Neuron 63(4):423-425.
174 | - Sussillo, D., & Abbott, L. F. (2009). Generating coherent
175 | patterns of activity from chaotic neural networks. Neuron, 63(4),
176 | 544-557.
177 | - papers:
178 | - [[file:readings/Wada_1993_NeuralNetworks.pdf][Wada, 1993]] A Neural Network Model for Arm Trajectory Formation
179 | Using Forward and Inverse Dynamics Models
180 | - [[file:readings/Lukashin_1993_BiolCybern.pdf][Lukashin, 1993]] A dynamical neural network model for motor
181 | cortical activity during movement: population coding of movement
182 | trajectories
183 | - [[file:readings/Pearlmutter_1989_NeuralComputation.pdf][Pearlmutter, 1989]] Learning State Space Trajectories in Recurrent
184 | Neural Networks
185 | - intro to unsupervised learning
186 | - autoencoders [[http://en.wikipedia.org/wiki/Autoencoder][wiki]]
187 | - Hopfield nets [[http://en.wikipedia.org/wiki/Hopfield_net][wiki]]
188 | - Boltzmann machines [[http://en.wikipedia.org/wiki/Boltzmann_machine][wiki]]
189 | - Restricted Boltzmann machines [[http://en.wikipedia.org/wiki/Restricted_Boltzmann_machine][wiki]]
190 | - multi-layer generative networks
191 | - [[file:readings/Hinton2006Montreal.pdf][Hinton2006Montreal.pdf]]
192 | - [[file:readings/Hinton2007tics.pdf][Hinton2007tics.pdf]]
193 | - video: [[http://www.youtube.com/watch?v=AyzOUbkUf3M][The Next Generation of Neural Networks]]
194 | - [[http://www.cs.toronto.edu/~hinton/adi/index.htm][deep network digit demo]]
195 | - Mark Schmidt's [[http://www.di.ens.fr/~mschmidt/Software/minFunc.html][minFunc]] MATLAB routines for unconstrained optimization
196 |
197 | -----
198 | ** Nov 26, 29 : Computational Models of Learning part 3
199 | - [[file:readings/NeuralNetworks4.pdf][NeuralNetworks4.pdf]] slides
200 | - self-organizing maps [[http://en.wikipedia.org/wiki/Kohonen_map][wiki]], [[file:readings/AflaloGraziano2006.pdf][AflaloGraziano2006.pdf]]
201 | - [[file:code/hopfield.tgz][hopfield.tgz]] MATLAB demo code
202 | - [[file:code/som1.m][som1.m]] MATLAB demo code
203 | - autoencoders & deep belief nets
204 | - [[https://code.google.com/p/matrbm/][RBM & DBN MATLAB code]]
205 | - reinforcement learning [[http://en.wikipedia.org/wiki/Reinforcement_learning][wiki]], [[http://webdocs.cs.ualberta.ca/~sutton/book/ebook/the-book.html][Sutton & Barto book]]
206 |
207 | -----
208 | ** Dec 3 : student presentations
209 | - each of the 12 students registered in the course will present one
210 | paper from the literature in their research area in which a
211 | computational modelling approach was used to address a question
212 | about how the brain works.
213 | - presentations are limited to *7 minutes each*! Note: this is
214 | difficult to pull off, you will have to practice your talk out
215 | loud. Also be careful to choose your slides carefully. There will be
216 | a timer and a loud gong.
217 | - Question period will be limited to 1 to 2 minutes per talk.
218 | - The order of talks will be alphabetical by your last name. A first,
219 | Z last. We will need to start at 2pm sharp.
220 |
221 | - Each student giving a talk must also submit a short essay on their
222 | chosen paper. Your essay should follow the "Content and Format"
223 | style of the [[http://www.jneurosci.org/site/misc/ifa_features.xhtml]["Journal Club"]] feature in the Journal of
224 | Neuroscience. You can choose any paper you want, it doesn't have to
225 | be a J. Neurosci. paper and it doesn't have to have been published
226 | within the past 2 months.
227 | - Essays are due Sunday Dec 9th, 2012, no later than 11:59pm
228 | EST. Please send your essay to me by email, as a single .pdf
229 | file. The filename should be =yourlastname\_essay.pdf=
230 | (e.g. =gribble\_essay.pdf=).
231 |
232 | -----
233 | * Links
234 |
235 | ** Python Introductory Tutorials
236 |
237 | - [[http://openbookproject.net/thinkcs/python/english2e/][How to Think Like a Computer Scientist: Learning with Python]]
238 | - [[http://learnpythonthehardway.org/book/][Learn Python The Hard Way]]
239 | - [[http://www.diveintopython.net/][Dive Into Python]]
240 | - [[file:readings/SciCompPython.pdf][Introduction to Scientific Computing with Python]]
241 | - [[http://www.pythontutor.com/][Online Python Tutor]]
242 | - [[https://github.com/profjsb/python-bootcamp][Python Bootcamp]] (github)
243 | - [[http://www.youtube.com/playlist?list=PLRdRinj2mDqsnazUsGeFq8Fi-2lL77vFF][Python Bootcamp August 2012]] (YouTube playlist)
244 | - [[http://register.pythonbootcamp.info/agenda][Python Bootcamp August 2012]] (list of topics & downloads)
245 |
246 | ** Numpy / SciPy / Matplotlib
247 |
248 | - [[http://youtu.be/vWkb7VahaXQ][Using Numpy Arrays to Perform Mathematical Operations in Python]]
249 | (youtube video)
250 | - [[http://scipy-lectures.github.com/][Python Scientific Lecture Notes]]
251 | - [[http://www.scipy.org/Plotting_Tutorial][SciPy Plotting Tutorial]]
252 | - [[http://docs.scipy.org/doc/][Numpy and Scipy Documentation]]
253 | - [[http://www.scipy.org/Tentative_Numpy_Tutorial][Numpy Tutorial]]
254 | - [[http://scipy.org/Cookbook][SciPy Cookbook]]
255 | - [[http://scipy.org/Getting_Started][SciPy Getting Started]]
256 | - [[http://matplotlib.org/gallery.html][matplotlib gallery]]
257 |
258 | ** iPython
259 |
260 | - [[http://ipython.org/videos.html][iPython videos]]
261 | - [[http://youtu.be/2G5YTlheCbw][iPython in-depth: high productivity interactive and parallel python]]
262 | (youtube video) iPython Notebook stuff starts at about 1:15:40, and
263 | parallel programming stuff starts at around 2:13:00
264 | - [[http://nbviewer.ipython.org/][IPython Notebook Viewer]]
265 |
266 | ** Machine Learning Resources
267 | - [[http://scikit-learn.org/][scikit-learn: machine learning in Python]]
268 | - [[http://yann.lecun.com/exdb/mnist/][The MNIST Database of handwritten digits]]
269 | - [[http://archive.ics.uci.edu/ml/][UCI Machine Learning Repository]]
270 | - [[http://cs.nyu.edu/~roweis/data.html][Some datasets for machine learning: digits, faces, text, speech]]
271 | - [[http://www.dacya.ucm.es/jam/download.htm][Software tools for reinforcement learning, neural networks and robotics]]
272 | - [[http://kasrl.org/jaffe.html][The Japanese Female Facial Expression (JAFFE) Database]]
273 | - [[http://www.socsci.ru.nl:8180/RaFD2/RaFD?p=main][Radboud Faces Database]]
274 | - [[http://mplab.ucsd.edu/wordpress/?page_id=48][Machine Perception Laboratory Demos]]
275 | - [[http://mplab.ucsd.edu/grants/project1/free-software/MPTWebSite/introduction.html][Machine Perception Toolbox]]
276 | - [[http://www.cs.toronto.edu/~hinton/][Geoff Hinton's Webpage]] (with lots of demos, tutorials, talks and
277 | papers on Neural Networks)
278 | - [[http://www.cs.toronto.edu/~hinton/csc321/][Introduction to Neural Networks and Machine Learning]] (U of T course
279 | by Geoff Hinton)
280 | - [[http://www.cnbc.cmu.edu/~mharm/research/tools/mikenet/][MikeNet Neural Network Simulator]] (C library)
281 | - [[http://deeplearning.net/][Deep Learning]] resource site for deep belief nets etc
282 | - [[http://www.iro.umontreal.ca/~bengioy/papers/ftml_book.pdf][Learning Deep Architectures for AI]] (book) by Yoshua Bengio
283 | - [[http://www.cs.cmu.edu/afs/cs/academic/class/15782-f06/matlab/][MATLAB neural network code]] demos by Dave Touretzky
284 |
285 | -----
286 |
287 | * These notes
288 |
289 | These notes can be viewed (and downloaded) in their entirety from a
290 | [[https://github.com][github]] repository here: [[https://github.com/paulgribble/CompNeuro][CompNeuro]]
291 |
292 |
--------------------------------------------------------------------------------
/org/mystyle.css:
--------------------------------------------------------------------------------
1 | html {
2 | font-family: sans-serif;
3 | font-weight:100;
4 | font-size: 11pt;
5 | height: 100%;
6 | width: 100%;
7 | margin: 0;
8 | padding: 0;
9 | }
10 |
11 | body {
12 | text-align: justify;
13 | padding-left: 8%;
14 | padding-right: 8%;
15 | padding-bottom: 5%;
16 | padding-top: 2%;
17 | line-height: 150%;
18 | }
19 |
20 | body a {
21 | color: #2580a2;
22 | text-decoration: none;
23 | }
24 |
25 | h1 {
26 | font-weight: normal;
27 | }
28 |
29 | h2 {
30 | font-weight: normal;
31 | }
32 |
33 | h3 {
34 | font-weight: normal;
35 | }
36 |
37 | pre {
38 | line-height: 110%;
39 | }
40 |
41 | div#table-of-contents {
42 | line-height: 120%;
43 | }
44 |
45 | div#postamble {
46 | line-height: 120%;
47 | }
48 |
49 | div#bibliography {
50 | line-height: 120%;
51 | }
52 |
53 | ol li {
54 | line-height: 120%;
55 | padding-bottom: 5pt;
56 | }
57 |
58 | ul li {
59 | line-height: 120%;
60 | padding-bottom: 3pt;
61 | }
62 |
--------------------------------------------------------------------------------
/org/refs.bib:
--------------------------------------------------------------------------------
1 | @Article{HH1990,
2 | Author="Hodgkin, A. L. and Huxley, A. F. ",
3 | Title="{{A} quantitative description of membrane current and its application to conduction and excitation in nerve. 1952}",
4 | Journal="Bull. Math. Biol.",
5 | Year="1990",
6 | Volume="52",
7 | Number="1-2",
8 | Pages="25--71",
9 | }
10 |
11 | % 12991237
12 | @Article{HH1952,
13 | Author="Hodgkin, A. L. and Huxley, A. F. ",
14 | Title="{{A} quantitative description of membrane current and its application to conduction and excitation in nerve}",
15 | Journal="J. Physiol. (Lond.)",
16 | Year="1952",
17 | Volume="117",
18 | Number="4",
19 | Pages="500--544",
20 | Month="Aug",
21 | }
22 |
23 | @Article{ekeberg1991,
24 | Author="Ekeberg, O. and Wallen, P. and Lansner, A. and Traven, H. and Brodin, L. and Grillner, S. ",
25 | Title="{{A} computer based model for realistic simulations of neural networks. {I}. {T}he single neuron and synaptic interaction}",
26 | Journal="Biol Cybern",
27 | Year="1991",
28 | Volume="65",
29 | Number="2",
30 | Pages="81--90"
31 | }
32 |
33 | @book{kandel2000principles,
34 | title={Principles of neural science},
35 | author={Kandel, E.R. and Schwartz, J.H. and Jessell, T.M. and others},
36 | volume={4},
37 | year={2000},
38 | publisher={McGraw-Hill New York}
39 | }
40 |
41 |
--------------------------------------------------------------------------------
/org/refs.html:
--------------------------------------------------------------------------------
1 |
2 |
9 | A. L. Hodgkin and A. F. Huxley.
10 | A quantitative description of membrane current and its application
11 | to conduction and excitation in nerve.
12 | J. Physiol. (Lond.), 117(4):500--544, Aug 1952.
13 | [ bib ]
14 |
15 |
24 | A. L. Hodgkin and A. F. Huxley.
25 | A quantitative description of membrane current and its application
26 | to conduction and excitation in nerve. 1952.
27 | Bull. Math. Biol., 52(1-2):25--71, 1990.
28 | [ bib ]
29 |
30 |
39 | O. Ekeberg, P. Wallen, A. Lansner, H. Traven, L. Brodin, and S. Grillner.
40 | A computer based model for realistic simulations of neural
41 | networks. I. The single neuron and synaptic interaction.
42 | Biol Cybern, 65(2):81--90, 1991.
43 | [ bib ]
44 |
45 |