├── .gitattributes
├── .gitignore
├── LICENSE
├── NMmacro
├── __init__.py
└── models.py
├── class_notebooks
├── README.md
├── class_1_template.ipynb
├── class_3_template.ipynb
├── class_4_template.ipynb
├── class_5_template.ipynb
├── class_6_template.ipynb
└── img
│ ├── jupyter_home.png
│ ├── jupyter_notebook.png
│ ├── jupyter_notebook_new.png
│ └── spyder.png
├── code_examples
├── README.md
├── deterministic_methods.py
├── discretizing_ar1_processes.ipynb
├── gpu_computing
│ ├── ReadMe.md
│ ├── bench_cpu.py
│ ├── bench_gpu.py
│ ├── bench_jit.py
│ ├── benchmarks-logscale.pdf
│ ├── benchmarks.pdf
│ ├── benchmarks_cpu.pdf
│ ├── benchmarks_gpu.csv
│ ├── benchmarks_gpu.r
│ ├── benchmarks_gpu_computing.py
│ ├── loop.c
│ ├── loop.py
│ ├── loop.sh
│ ├── results_cpu.csv
│ ├── results_gpu.csv
│ ├── results_jit.csv
│ ├── run-benchmarks.sh
│ └── slides
│ │ ├── commands_definitions.tex
│ │ ├── img
│ │ ├── Perspective_Projection_Principle.jpg
│ │ ├── benchmarks-logscale.pdf
│ │ ├── benchmarks.pdf
│ │ ├── block-thread.pdf
│ │ ├── gpu_parallel_visual.py
│ │ ├── gpu_parallel_visual_1.pdf
│ │ ├── gpu_parallel_visual_2.pdf
│ │ ├── gpu_parallel_visual_3.pdf
│ │ ├── gpu_parallel_visual_4.pdf
│ │ ├── gpu_parallel_visual_5.pdf
│ │ ├── hw-sw-thread_block.jpg
│ │ ├── nvidia-rtx-2080-ti.jpg
│ │ └── stencil.pdf
│ │ ├── ta6_gpu_computing.pdf
│ │ └── ta6_gpu_computing.tex
├── hermgauss_vs_linspace.ipynb
├── monte_carlo_pi.ipynb
├── sncgm.mod
├── sncgm_as_py_object.ipynb
├── sparse_matrices.ipynb
└── vfi_convergence.m
├── other_applications
├── README.md
└── scraping
│ ├── README.md
│ └── xkcd.py
├── readme.md
├── slides
├── README.md
├── assets
│ ├── xkcd-2434.png
│ └── xkcd-home.png
├── common.sty
├── compile-all-slides.ps1
├── references.bib
├── ta1.pdf
├── ta1.tex
├── ta2.pdf
├── ta2.tex
├── ta3.pdf
├── ta3.tex
├── ta4.pdf
├── ta4.tex
├── ta5.pdf
├── ta5.tex
├── ta6.pdf
└── ta6.tex
└── ta_sessions
├── 0_setup.md
├── 1_introduction.ipynb
├── 2_deterministic_methods.ipynb
├── 3_stochastic_methods.ipynb
├── 4_ge_with_prices_and_heterogeneity.ipynb
├── 5_binning_huggett_aiyagari.ipynb
├── 6_web_scraping.ipynb
└── README.md
/.gitattributes:
--------------------------------------------------------------------------------
1 | # Auto detect text files and perform LF normalization
2 | * text=auto
3 |
4 | # Custom for Visual Studio
5 | *.cs diff=csharp
6 |
7 | # Standard to msysgit
8 | *.doc diff=astextplain
9 | *.DOC diff=astextplain
10 | *.docx diff=astextplain
11 | *.DOCX diff=astextplain
12 | *.dot diff=astextplain
13 | *.DOT diff=astextplain
14 | *.pdf diff=astextplain
15 | *.PDF diff=astextplain
16 | *.rtf diff=astextplain
17 | *.RTF diff=astextplain
18 |
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 |
2 | # Created by https://www.toptal.com/developers/gitignore/api/windows,linux,pycharm,visualstudiocode,latex,python,jupyternotebooks
3 | # Edit at https://www.toptal.com/developers/gitignore?templates=windows,linux,pycharm,visualstudiocode,latex,python,jupyternotebooks
4 |
5 | ### JupyterNotebooks ###
6 | # gitignore template for Jupyter Notebooks
7 | # website: http://jupyter.org/
8 |
9 | .ipynb_checkpoints
10 | */.ipynb_checkpoints/*
11 |
12 | # IPython
13 | profile_default/
14 | ipython_config.py
15 |
16 | # Remove previous ipynb_checkpoints
17 | # git rm -r .ipynb_checkpoints/
18 |
19 | ### LaTeX ###
20 | ## Core latex/pdflatex auxiliary files:
21 | *.aux
22 | *.lof
23 | *.log
24 | *.lot
25 | *.fls
26 | *.out
27 | *.toc
28 | *.fmt
29 | *.fot
30 | *.cb
31 | *.cb2
32 | .*.lb
33 |
34 | ## Intermediate documents:
35 | *.dvi
36 | *.xdv
37 | *-converted-to.*
38 | # these rules might exclude image files for figures etc.
39 | # *.ps
40 | # *.eps
41 | # *.pdf
42 |
43 | ## Generated if empty string is given at "Please type another file name for output:"
44 | .pdf
45 |
46 | ## Bibliography auxiliary files (bibtex/biblatex/biber):
47 | *.bbl
48 | *.bcf
49 | *.blg
50 | *-blx.aux
51 | *-blx.bib
52 | *.run.xml
53 |
54 | ## Build tool auxiliary files:
55 | *.fdb_latexmk
56 | *.synctex
57 | *.synctex(busy)
58 | *.synctex.gz
59 | *.synctex.gz(busy)
60 | *.pdfsync
61 |
62 | ## Build tool directories for auxiliary files
63 | # latexrun
64 | latex.out/
65 |
66 | ## Auxiliary and intermediate files from other packages:
67 | # algorithms
68 | *.alg
69 | *.loa
70 |
71 | # achemso
72 | acs-*.bib
73 |
74 | # amsthm
75 | *.thm
76 |
77 | # beamer
78 | *.nav
79 | *.pre
80 | *.snm
81 | *.vrb
82 |
83 | # changes
84 | *.soc
85 |
86 | # comment
87 | *.cut
88 |
89 | # cprotect
90 | *.cpt
91 |
92 | # elsarticle (documentclass of Elsevier journals)
93 | *.spl
94 |
95 | # endnotes
96 | *.ent
97 |
98 | # fixme
99 | *.lox
100 |
101 | # feynmf/feynmp
102 | *.mf
103 | *.mp
104 | *.t[1-9]
105 | *.t[1-9][0-9]
106 | *.tfm
107 |
108 | #(r)(e)ledmac/(r)(e)ledpar
109 | *.end
110 | *.?end
111 | *.[1-9]
112 | *.[1-9][0-9]
113 | *.[1-9][0-9][0-9]
114 | *.[1-9]R
115 | *.[1-9][0-9]R
116 | *.[1-9][0-9][0-9]R
117 | *.eledsec[1-9]
118 | *.eledsec[1-9]R
119 | *.eledsec[1-9][0-9]
120 | *.eledsec[1-9][0-9]R
121 | *.eledsec[1-9][0-9][0-9]
122 | *.eledsec[1-9][0-9][0-9]R
123 |
124 | # glossaries
125 | *.acn
126 | *.acr
127 | *.glg
128 | *.glo
129 | *.gls
130 | *.glsdefs
131 | *.lzo
132 | *.lzs
133 |
134 | # uncomment this for glossaries-extra (will ignore makeindex's style files!)
135 | # *.ist
136 |
137 | # gnuplottex
138 | *-gnuplottex-*
139 |
140 | # gregoriotex
141 | *.gaux
142 | *.gtex
143 |
144 | # htlatex
145 | *.4ct
146 | *.4tc
147 | *.idv
148 | *.lg
149 | *.trc
150 | *.xref
151 |
152 | # hyperref
153 | *.brf
154 |
155 | # knitr
156 | *-concordance.tex
157 | # TODO Comment the next line if you want to keep your tikz graphics files
158 | *.tikz
159 | *-tikzDictionary
160 |
161 | # listings
162 | *.lol
163 |
164 | # luatexja-ruby
165 | *.ltjruby
166 |
167 | # makeidx
168 | *.idx
169 | *.ilg
170 | *.ind
171 |
172 | # minitoc
173 | *.maf
174 | *.mlf
175 | *.mlt
176 | *.mtc
177 | *.mtc[0-9]*
178 | *.slf[0-9]*
179 | *.slt[0-9]*
180 | *.stc[0-9]*
181 |
182 | # minted
183 | _minted*
184 | *.pyg
185 |
186 | # morewrites
187 | *.mw
188 |
189 | # nomencl
190 | *.nlg
191 | *.nlo
192 | *.nls
193 |
194 | # pax
195 | *.pax
196 |
197 | # pdfpcnotes
198 | *.pdfpc
199 |
200 | # sagetex
201 | *.sagetex.sage
202 | *.sagetex.py
203 | *.sagetex.scmd
204 |
205 | # scrwfile
206 | *.wrt
207 |
208 | # sympy
209 | *.sout
210 | *.sympy
211 | sympy-plots-for-*.tex/
212 |
213 | # pdfcomment
214 | *.upa
215 | *.upb
216 |
217 | # pythontex
218 | *.pytxcode
219 | pythontex-files-*/
220 |
221 | # tcolorbox
222 | *.listing
223 |
224 | # thmtools
225 | *.loe
226 |
227 | # TikZ & PGF
228 | *.dpth
229 | *.md5
230 | *.auxlock
231 |
232 | # todonotes
233 | *.tdo
234 |
235 | # vhistory
236 | *.hst
237 | *.ver
238 |
239 | # easy-todo
240 | *.lod
241 |
242 | # xcolor
243 | *.xcp
244 |
245 | # xmpincl
246 | *.xmpi
247 |
248 | # xindy
249 | *.xdy
250 |
251 | # xypic precompiled matrices and outlines
252 | *.xyc
253 | *.xyd
254 |
255 | # endfloat
256 | *.ttt
257 | *.fff
258 |
259 | # Latexian
260 | TSWLatexianTemp*
261 |
262 | ## Editors:
263 | # WinEdt
264 | *.bak
265 | *.sav
266 |
267 | # Texpad
268 | .texpadtmp
269 |
270 | # LyX
271 | *.lyx~
272 |
273 | # Kile
274 | *.backup
275 |
276 | # gummi
277 | .*.swp
278 |
279 | # KBibTeX
280 | *~[0-9]*
281 |
282 | # TeXnicCenter
283 | *.tps
284 |
285 | # auto folder when using emacs and auctex
286 | ./auto/*
287 | *.el
288 |
289 | # expex forward references with \gathertags
290 | *-tags.tex
291 |
292 | # standalone packages
293 | *.sta
294 |
295 | # Makeindex log files
296 | *.lpz
297 |
298 | # REVTeX puts footnotes in the bibliography by default, unless the nofootinbib
299 | # option is specified. Footnotes are the stored in a file with suffix Notes.bib.
300 | # Uncomment the next line to have this generated file ignored.
301 | #*Notes.bib
302 |
303 | ### LaTeX Patch ###
304 | # LIPIcs / OASIcs
305 | *.vtc
306 |
307 | # glossaries
308 | *.glstex
309 |
310 | ### Linux ###
311 | *~
312 |
313 | # temporary files which can be created if a process still has a handle open of a deleted file
314 | .fuse_hidden*
315 |
316 | # KDE directory preferences
317 | .directory
318 |
319 | # Linux trash folder which might appear on any partition or disk
320 | .Trash-*
321 |
322 | # .nfs files are created when an open file is removed but is still being accessed
323 | .nfs*
324 |
325 | ### PyCharm ###
326 | # Covers JetBrains IDEs: IntelliJ, RubyMine, PhpStorm, AppCode, PyCharm, CLion, Android Studio, WebStorm and Rider
327 | # Reference: https://intellij-support.jetbrains.com/hc/en-us/articles/206544839
328 |
329 | # User-specific stuff
330 | .idea/**/workspace.xml
331 | .idea/**/tasks.xml
332 | .idea/**/usage.statistics.xml
333 | .idea/**/dictionaries
334 | .idea/**/shelf
335 |
336 | # Generated files
337 | .idea/**/contentModel.xml
338 |
339 | # Sensitive or high-churn files
340 | .idea/**/dataSources/
341 | .idea/**/dataSources.ids
342 | .idea/**/dataSources.local.xml
343 | .idea/**/sqlDataSources.xml
344 | .idea/**/dynamic.xml
345 | .idea/**/uiDesigner.xml
346 | .idea/**/dbnavigator.xml
347 |
348 | # Gradle
349 | .idea/**/gradle.xml
350 | .idea/**/libraries
351 |
352 | # Gradle and Maven with auto-import
353 | # When using Gradle or Maven with auto-import, you should exclude module files,
354 | # since they will be recreated, and may cause churn. Uncomment if using
355 | # auto-import.
356 | # .idea/artifacts
357 | # .idea/compiler.xml
358 | # .idea/jarRepositories.xml
359 | # .idea/modules.xml
360 | # .idea/*.iml
361 | # .idea/modules
362 | # *.iml
363 | # *.ipr
364 |
365 | # CMake
366 | cmake-build-*/
367 |
368 | # Mongo Explorer plugin
369 | .idea/**/mongoSettings.xml
370 |
371 | # File-based project format
372 | *.iws
373 |
374 | # IntelliJ
375 | out/
376 |
377 | # mpeltonen/sbt-idea plugin
378 | .idea_modules/
379 |
380 | # JIRA plugin
381 | atlassian-ide-plugin.xml
382 |
383 | # Cursive Clojure plugin
384 | .idea/replstate.xml
385 |
386 | # Crashlytics plugin (for Android Studio and IntelliJ)
387 | com_crashlytics_export_strings.xml
388 | crashlytics.properties
389 | crashlytics-build.properties
390 | fabric.properties
391 |
392 | # Editor-based Rest Client
393 | .idea/httpRequests
394 |
395 | # Android studio 3.1+ serialized cache file
396 | .idea/caches/build_file_checksums.ser
397 |
398 | ### PyCharm Patch ###
399 | # Comment Reason: https://github.com/joeblau/gitignore.io/issues/186#issuecomment-215987721
400 |
401 | # *.iml
402 | # modules.xml
403 | # .idea/misc.xml
404 | # *.ipr
405 |
406 | # Sonarlint plugin
407 | # https://plugins.jetbrains.com/plugin/7973-sonarlint
408 | .idea/**/sonarlint/
409 |
410 | # SonarQube Plugin
411 | # https://plugins.jetbrains.com/plugin/7238-sonarqube-community-plugin
412 | .idea/**/sonarIssues.xml
413 |
414 | # Markdown Navigator plugin
415 | # https://plugins.jetbrains.com/plugin/7896-markdown-navigator-enhanced
416 | .idea/**/markdown-navigator.xml
417 | .idea/**/markdown-navigator-enh.xml
418 | .idea/**/markdown-navigator/
419 |
420 | # Cache file creation bug
421 | # See https://youtrack.jetbrains.com/issue/JBR-2257
422 | .idea/$CACHE_FILE$
423 |
424 | # CodeStream plugin
425 | # https://plugins.jetbrains.com/plugin/12206-codestream
426 | .idea/codestream.xml
427 |
428 | ### Python ###
429 | # Byte-compiled / optimized / DLL files
430 | __pycache__/
431 | *.py[cod]
432 | *$py.class
433 |
434 | # C extensions
435 | *.so
436 |
437 | # Distribution / packaging
438 | .Python
439 | build/
440 | develop-eggs/
441 | dist/
442 | downloads/
443 | eggs/
444 | .eggs/
445 | lib/
446 | lib64/
447 | parts/
448 | sdist/
449 | var/
450 | wheels/
451 | pip-wheel-metadata/
452 | share/python-wheels/
453 | *.egg-info/
454 | .installed.cfg
455 | *.egg
456 | MANIFEST
457 |
458 | # PyInstaller
459 | # Usually these files are written by a python script from a template
460 | # before PyInstaller builds the exe, so as to inject date/other infos into it.
461 | *.manifest
462 | *.spec
463 |
464 | # Installer logs
465 | pip-log.txt
466 | pip-delete-this-directory.txt
467 |
468 | # Unit test / coverage reports
469 | htmlcov/
470 | .tox/
471 | .nox/
472 | .coverage
473 | .coverage.*
474 | .cache
475 | nosetests.xml
476 | coverage.xml
477 | *.cover
478 | *.py,cover
479 | .hypothesis/
480 | .pytest_cache/
481 | pytestdebug.log
482 |
483 | # Translations
484 | *.mo
485 | *.pot
486 |
487 | # Django stuff:
488 | local_settings.py
489 | db.sqlite3
490 | db.sqlite3-journal
491 |
492 | # Flask stuff:
493 | instance/
494 | .webassets-cache
495 |
496 | # Scrapy stuff:
497 | .scrapy
498 |
499 | # Sphinx documentation
500 | docs/_build/
501 | doc/_build/
502 |
503 | # PyBuilder
504 | target/
505 |
506 | # Jupyter Notebook
507 |
508 | # IPython
509 |
510 | # pyenv
511 | .python-version
512 |
513 | # pipenv
514 | # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
515 | # However, in case of collaboration, if having platform-specific dependencies or dependencies
516 | # having no cross-platform support, pipenv may install dependencies that don't work, or not
517 | # install all needed dependencies.
518 | #Pipfile.lock
519 |
520 | # PEP 582; used by e.g. github.com/David-OConnor/pyflow
521 | __pypackages__/
522 |
523 | # Celery stuff
524 | celerybeat-schedule
525 | celerybeat.pid
526 |
527 | # SageMath parsed files
528 | *.sage.py
529 |
530 | # Environments
531 | .env
532 | .venv
533 | env/
534 | venv/
535 | ENV/
536 | env.bak/
537 | venv.bak/
538 | pythonenv*
539 |
540 | # Spyder project settings
541 | .spyderproject
542 | .spyproject
543 |
544 | # Rope project settings
545 | .ropeproject
546 |
547 | # mkdocs documentation
548 | /site
549 |
550 | # mypy
551 | .mypy_cache/
552 | .dmypy.json
553 | dmypy.json
554 |
555 | # Pyre type checker
556 | .pyre/
557 |
558 | # pytype static type analyzer
559 | .pytype/
560 |
561 | # profiling data
562 | .prof
563 |
564 | ### VisualStudioCode ###
565 | .vscode/*
566 | !.vscode/tasks.json
567 | !.vscode/launch.json
568 | *.code-workspace
569 |
570 | ### VisualStudioCode Patch ###
571 | # Ignore all local history of files
572 | .history
573 | .ionide
574 |
575 | ### Windows ###
576 | # Windows thumbnail cache files
577 | Thumbs.db
578 | Thumbs.db:encryptable
579 | ehthumbs.db
580 | ehthumbs_vista.db
581 |
582 | # Dump file
583 | *.stackdump
584 |
585 | # Folder config file
586 | [Dd]esktop.ini
587 |
588 | # Recycle Bin used on file shares
589 | $RECYCLE.BIN/
590 |
591 | # Windows Installer files
592 | *.cab
593 | *.msi
594 | *.msix
595 | *.msm
596 | *.msp
597 |
598 | # Windows shortcuts
599 | *.lnk
600 |
601 | ### PyCharm ###
602 | # Covers JetBrains IDEs: IntelliJ, RubyMine, PhpStorm, AppCode, PyCharm, CLion, Android Studio, WebStorm and Rider
603 | # Reference: https://intellij-support.jetbrains.com/hc/en-us/articles/206544839
604 |
605 | # User-specific stuff
606 | .idea/**/workspace.xml
607 | .idea/**/tasks.xml
608 | .idea/**/usage.statistics.xml
609 | .idea/**/dictionaries
610 | .idea/**/shelf
611 |
612 | # Generated files
613 | .idea/**/contentModel.xml
614 |
615 | # Sensitive or high-churn files
616 | .idea/**/dataSources/
617 | .idea/**/dataSources.ids
618 | .idea/**/dataSources.local.xml
619 | .idea/**/sqlDataSources.xml
620 | .idea/**/dynamic.xml
621 | .idea/**/uiDesigner.xml
622 | .idea/**/dbnavigator.xml
623 |
624 | # Gradle
625 | .idea/**/gradle.xml
626 | .idea/**/libraries
627 |
628 | # Gradle and Maven with auto-import
629 | # When using Gradle or Maven with auto-import, you should exclude module files,
630 | # since they will be recreated, and may cause churn. Uncomment if using
631 | # auto-import.
632 | # .idea/artifacts
633 | # .idea/compiler.xml
634 | # .idea/jarRepositories.xml
635 | # .idea/modules.xml
636 | # .idea/*.iml
637 | # .idea/modules
638 | # *.iml
639 | # *.ipr
640 |
641 | # CMake
642 | cmake-build-*/
643 |
644 | # Mongo Explorer plugin
645 | .idea/**/mongoSettings.xml
646 |
647 | # File-based project format
648 | *.iws
649 |
650 | # IntelliJ
651 | out/
652 |
653 | # mpeltonen/sbt-idea plugin
654 | .idea_modules/
655 |
656 | # JIRA plugin
657 | atlassian-ide-plugin.xml
658 |
659 | # Cursive Clojure plugin
660 | .idea/replstate.xml
661 |
662 | # Crashlytics plugin (for Android Studio and IntelliJ)
663 | com_crashlytics_export_strings.xml
664 | crashlytics.properties
665 | crashlytics-build.properties
666 | fabric.properties
667 |
668 | # Editor-based Rest Client
669 | .idea/httpRequests
670 |
671 | # Android studio 3.1+ serialized cache file
672 | .idea/caches/build_file_checksums.ser
673 |
674 | ### PyCharm Patch ###
675 | # Comment Reason: https://github.com/joeblau/gitignore.io/issues/186#issuecomment-215987721
676 |
677 | # *.iml
678 | # modules.xml
679 | # .idea/misc.xml
680 | # *.ipr
681 |
682 | # Sonarlint plugin
683 | # https://plugins.jetbrains.com/plugin/7973-sonarlint
684 | .idea/**/sonarlint/
685 |
686 | # SonarQube Plugin
687 | # https://plugins.jetbrains.com/plugin/7238-sonarqube-community-plugin
688 | .idea/**/sonarIssues.xml
689 |
690 | # Markdown Navigator plugin
691 | # https://plugins.jetbrains.com/plugin/7896-markdown-navigator-enhanced
692 | .idea/**/markdown-navigator.xml
693 | .idea/**/markdown-navigator-enh.xml
694 | .idea/**/markdown-navigator/
695 |
696 | # Cache file creation bug
697 | # See https://youtrack.jetbrains.com/issue/JBR-2257
698 | .idea/$CACHE_FILE$
699 |
700 | # CodeStream plugin
701 | # https://plugins.jetbrains.com/plugin/12206-codestream
702 | .idea/codestream.xml
703 |
704 | # All of ./idea folder (custom rule)
705 | .idea
706 |
707 | # End of https://www.toptal.com/developers/gitignore/api/windows,linux,pycharm,visualstudiocode,latex,python,jupyternotebooks,pycharm
708 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2021 Andrea Pasqualini
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/NMmacro/__init__.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | from scipy import linalg as la
3 |
4 |
5 | class MarkovChain:
6 |
7 | def __init__(self, pi):
8 | if not np.allclose(np.sum(pi, axis=1), np.ones(pi.shape[0])):
9 | raise ValueError('Each row of the input matrix must sum to one.')
10 | self.Pi = pi
11 |
12 | def n_steps_transition(self, n):
13 |         return np.linalg.matrix_power(self.Pi, n)
14 |
15 | @property
16 | def stationary_distribution(self):
17 |         l, v = la.eig(self.Pi.T)  # left eigenvectors of Pi
18 |         vector = np.real(v[:, np.isclose(l, 1.)])
19 |         return vector / np.sum(vector)
20 |
21 | def simulate(self, T, s0):
22 | """
23 |         Simulates the Markov chain for T periods given that the initial
24 |         state is 's0'. The parameter 's0' must be an integer between 0 and
25 |         Pi.shape[0]-1.
26 | """
27 | if T < 1:
28 | raise ValueError('The sample length T must be at least 1.')
29 | if not isinstance(s0, int):
30 | raise TypeError('Initial condition must be an index (integer).')
31 | if s0 < 0 or s0 > self.Pi.shape[0] - 1:
32 | raise ValueError('Initial condition must be a row index of Pi.')
33 |
34 | def draw_state(pdf):
35 | cdf = np.cumsum(pdf)
36 | u = np.random.uniform()
37 | return np.sum(u - cdf > 0)
38 |
39 | sample = np.zeros((T,), dtype=int)
40 | sample[0] = s0
41 | for t in range(1, T):
42 | sample[t] = draw_state(self.Pi[sample[t - 1], :])
43 |
44 | return sample
45 |
--------------------------------------------------------------------------------
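Before moving on, here is a minimal, self-contained sketch of the stationary-distribution computation that `MarkovChain.stationary_distribution` is after. A stationary distribution of a row-stochastic matrix `Pi` is a left eigenvector associated with eigenvalue 1 (equivalently, a right eigenvector of `Pi.T`), rescaled to sum to one. The two-state matrix below is purely illustrative:

```python
import numpy as np

# Two-state chain: each row of Pi sums to one (row-stochastic).
Pi = np.array([[0.9, 0.1],
               [0.5, 0.5]])

# Stationary distribution: right eigenvector of Pi.T with eigenvalue 1,
# taken real and normalized so that the entries sum to one.
l, v = np.linalg.eig(Pi.T)
vec = np.real(v[:, np.isclose(l, 1.0)]).ravel()
pi_star = vec / vec.sum()

# Fixed-point check: pi_star @ Pi should reproduce pi_star.
assert np.allclose(pi_star @ Pi, pi_star)
print(pi_star)  # approximately [5/6, 1/6] for this chain
```

For this particular chain the answer can be verified by hand: solving `pi1 = 0.9 pi1 + 0.5 pi2` together with `pi1 + pi2 = 1` gives `pi_star = (5/6, 1/6)`.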
/NMmacro/models.py:
--------------------------------------------------------------------------------
1 | from time import time
2 | import numpy as np
3 | from scipy import optimize as opt
4 | from matplotlib import pyplot as plt
5 |
6 |
7 | class NCGM:
8 | """
9 | This is the (deterministic) NeoClassical Growth Model (NCGM). The model
10 | is instantiated with a set of calibrated parameters. The 'solve' methods
11 |     (VFI, PFI and projection) will take the grid for the state variable(s) as input
12 | arguments.
13 | """
14 |
15 | def __init__(self, alpha=0.3, beta=0.95, gamma=1.5, delta=0.1):
16 | """
17 | PARAMETERS
18 | ----------
19 | alpha : float (default is 0.3)
20 | The exponent in the production function, a.k.a. the intensity
21 | of capital in production.
22 | beta : float (default is 0.95)
23 |         The discount factor of the agent.
24 | gamma : float (default is 1.5)
25 | The coefficient of relative risk aversion of the agent.
26 | delta : float (default is 0.1)
27 | The depreciation rate of capital.
28 | """
29 | self.alpha = alpha
30 | self.beta = beta
31 | self.gamma = gamma
32 | self.delta = delta
33 | self.u = lambda c: (c**(1-self.gamma)) / (1-self.gamma)
34 | self.k_ss = ((1 - (1-delta) * beta) / (alpha * beta))**(1 / (alpha-1))
35 |
36 |
37 | def _euler(self, c0, k):
38 | """
39 | Implements the Euler Equation given a guess for the consumption level
40 | c_t and for various levels of capital holdings k_t. It returns the
41 | quantity resid = LHS - RHS.
42 | """
43 | k1 = k**self.alpha - c0 + (1-self.delta) * k
44 | pc = np.polyfit(k, c0, 1)
45 | ctp1 = np.polyval(pc, k1)
46 | opr = self.alpha * k1 ** (self.alpha-1) + 1 - self.delta
47 | resid = c0 - ctp1 * (self.beta * opr) ** (-1/self.gamma)
48 | return resid
49 |
50 |
51 | def solve_vfi(self, k, tolerance=1e-6):
52 | """
53 | This method takes a grid for the state variable and solves the Bellman
54 | problem by Value Function Iteration. It returns the policy functions
55 | and the computed value at the optimum. It also prints to display how
56 | much time and how many iterations were necessary to converge to the
57 | solution.
58 |
59 | PARAMETERS
60 | ----------
61 | k : numpy.array
62 | The grid for the state variable over which the Value Function is
63 | computed. The resulting policy functions will be computed at the
64 | gridpoints in this array.
65 | tolerance : float (optional, default is 10**(-6))
66 | The value against which the sup-norm is compared to when
67 | determining whether the algorithm converged or not.
68 |
69 | RETURNS
70 | -------
71 | c_opt : numpy.array
72 | The policy function for consumption, evaluated at the
73 | gridpoints 'k'.
74 | k_opt : numpy.array
75 | The policy function for capital holdings, evaluated at the
76 | gridpoints 'k'.
77 | v_opt : numpy.array
78 | The value function computed at the gridpoints 'k'.
79 | """
80 |
81 | n = k.shape[0]
82 |
83 | v_old = np.zeros((n,))
84 | v = np.zeros((n,))
85 | dr = np.zeros((n,), dtype=int)
86 |
87 | criterion = 1
88 | n_iter = 0
89 |
90 | t0 = time()
91 |
92 | while criterion > tolerance:
93 | n_iter += 1
94 | for i in range(n):
95 | C = (k[i] ** self.alpha) + (1 - self.delta) * k[i] - k
96 | negative_C = C < 0
97 | C[negative_C] = np.nan
98 | objective = self.u(C) + self.beta * v_old
99 | v[i] = np.nanmax(objective)
100 | dr[i] = np.nanargmax(objective)
101 | criterion = np.max(np.abs(v - v_old))
102 |             v_old[:] = v  # copy values in place, not the reference
103 |
104 | t1 = time()
105 |
106 | k_opt = k[dr]
107 | c_opt = k ** self.alpha + (1-self.delta) * k - k_opt
108 |
109 | print('VFI took {} iterations and {:.3f} seconds to converge'.format(n_iter, t1 - t0))
110 | return (c_opt, k_opt, v)
111 |
112 |
113 | def solve_pfi(self, k, c0, tolerance=1e-6):
114 | """
115 | This method takes a grid for the state variable and solves the Bellman
116 | problem by Policy Function Iteration. As the convergence of PFI depends
117 | on the initial condition, a guess must be provided by the user. The
118 | method returns the policy functions. It also prints to display how much
119 | time and how many iterations were necessary to converge to the
120 | solution.
121 |
122 | PARAMETERS
123 | ----------
124 | k : numpy.array
125 | The grid for the state variable over which the Value Function is
126 | computed. The resulting policy functions will be computed at the
127 | gridpoints in this array.
128 | c0 : numpy.array
129 | An initial condition for the guess on the policy function for
130 | consumption.
131 | tolerance : float (optional, default is 10**(-6))
132 | The value against which the sup-norm is compared to when
133 | determining whether the algorithm converged or not.
134 |
135 | RETURNS
136 | -------
137 | c_opt : numpy.array
138 | The policy function for consumption, evaluated at the
139 | gridpoints 'k'.
140 | k_opt : numpy.array
141 | The policy function for capital holdings, evaluated at the
142 | gridpoints 'k'.
143 | """
144 | c_old = np.zeros(c0.shape)
145 | c_old[:] = c0
146 | n_iter = 0
147 | criterion = 1
148 | t0 = time()
149 |
150 | while criterion > tolerance:
151 | n_iter += 1
152 | kp = (k ** self.alpha - c_old) + (1 - self.delta) * k
153 | pc = np.polyfit(k, c_old, 5)
154 | ctp1 = np.polyval(pc, kp)
155 | opr = self.alpha * kp ** (self.alpha-1) + 1 - self.delta
156 | c1 = ctp1 * (self.beta * opr) ** (-1 / self.gamma)
157 | criterion = np.max(np.abs(c1 - c_old))
158 | c_old[:] = c1
159 |
160 | t1 = time()
161 |
162 | c_opt = c1
163 | k_opt = (k ** self.alpha - c_opt) + (1 - self.delta) * k
164 |
165 | print('PFI took {} iterations and {:.3f} seconds to converge'.format(n_iter, t1 - t0))
166 | return (c_opt, k_opt)
167 |
168 |
169 | def solve_proj(self, k, c0, tolerance=1e-6):
170 | """
171 |         This method takes a grid for the state variable and solves for the
172 |         policy function by direct projection on the Euler equation. As the convergence of the
173 | projection method depends on the initial condition, an initial
174 | condition must be provided by the user. The method returns the policy
175 | functions. It also prints to display how much time and how many
176 | iterations were necessary to converge to the solution.
177 | """
178 |
179 | t0 = time()
180 |
181 | c_opt = opt.fsolve(self._euler, c0, args=k)
182 |
183 | t1 = time()
184 | k_opt = k ** self.alpha - c_opt + (1-self.delta) * k
185 |
186 | print('Direct projection took {:.2f} seconds.'.format(t1-t0))
187 | return [c_opt, k_opt]
188 |
189 |
190 | def plot_solution(self, k, c_opt, k_opt, v=None, figSize=None):
191 | """
192 | This method plots the policy functions of this model once they have
193 | been obtained. It optionally plots the value function if this is
194 | available. It essentially is a wrapper around matplotlib.pyplot.plot
195 | with a (optionally custom) grid of plots.
196 |
197 | PARAMETERS
198 | ----------
199 | k : numpy.array
200 | The grid of points over which the policy functions have been
201 | computed.
202 | c_opt : numpy.array
203 | The policy function for consumption.
204 | k_opt : numpy.array
205 | The policy function for capital holdings.
206 | v : numpy.array (optional)
207 | The value function.
208 | figSize : tuple
209 | A tuple of floats representing the size of the resulting
210 | figure in inches, formatted as (width, height).
211 |
212 | RETURNS
213 | -------
214 | fig : matplotlib.figure
215 | The figure object instantiated by this wrapper (mainly for later
216 | saving to disk).
217 | ax : list
218 | The list of matplotlib.axes._subplots.AxesSubplot objects.
219 | """
220 |
221 | if v is not None:
222 |             fig = plt.figure(figsize=figSize)
223 |
224 | ax = [None, None, None]
225 | pltgrid = (2, 4)
226 |
227 | ax[0] = plt.subplot2grid(pltgrid, (0, 0), rowspan=2, colspan=2)
228 | ax[1] = plt.subplot2grid(pltgrid, (0, 2), colspan=2)
229 | ax[2] = plt.subplot2grid(pltgrid, (1, 2), colspan=2)
230 |
231 | ax[0].plot(k, v,
232 | linewidth=2,
233 | color='red',
234 | label=r'$V(k)$')
235 | ax[1].plot(k, k_opt,
236 | linewidth=2,
237 | color='red',
238 | label=r"$k'(k)$",
239 | zorder=2)
240 | ax[2].plot(k, c_opt,
241 | linewidth=2,
242 | color='red',
243 | label=r'$c(k)$')
244 | ax[1].plot(k, k,
245 | linewidth=1,
246 | color='black',
247 | linestyle='dashed',
248 | zorder=1)
249 |
250 | ax[0].set_title('Value function')
251 | ax[1].set_title('Capital accumulation decision')
252 | ax[2].set_title('Consumption decision')
253 |
254 | else:
255 | fig, ax = plt.subplots(nrows=1, ncols=2, figsize=figSize)
256 |
257 | ax[0].plot(k, k_opt,
258 | color='red',
259 | linewidth=2,
260 | zorder=2,
261 | label=r"$k'(k)$")
262 | ax[1].plot(k, c_opt,
263 | color='red',
264 | linewidth=2,
265 | zorder=2,
266 | label=r'$c(k)$')
267 | ax[0].plot(k, k,
268 | color='black',
269 | linewidth=1,
270 | linestyle='dashed',
271 | zorder=1)
272 |
273 | ax[0].set_title('Capital accumulation decision')
274 | ax[1].set_title('Consumption decision')
275 |
276 | for a in range(len(ax)):
277 | ax[a].axvline(self.k_ss,
278 | linewidth=1,
279 | color='black',
280 | linestyle='dotted',
281 | zorder=1)
282 | ax[a].grid(alpha=0.3)
283 | ax[a].set_xlabel('$k$')
284 | ax[a].legend()
285 |
286 | plt.tight_layout()
287 |
288 | return [fig, ax]
289 |
--------------------------------------------------------------------------------
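As a quick, self-contained check of the logic in `NCGM.solve_vfi`, the following sketch runs the same value-function iteration on a coarse grid around the steady state, using the class's default calibration. This is illustrative usage, not part of the package:

```python
import numpy as np

# Default calibration of the NCGM class above.
alpha, beta, gamma, delta = 0.3, 0.95, 1.5, 0.1
u = lambda c: c**(1 - gamma) / (1 - gamma)  # CRRA utility

# Steady-state capital and a coarse grid around it.
k_ss = ((1 - (1 - delta) * beta) / (alpha * beta))**(1 / (alpha - 1))
k = np.linspace(0.5 * k_ss, 1.5 * k_ss, 50)

v_old = np.zeros(k.shape)
v = np.zeros(k.shape)
dr = np.zeros(k.shape, dtype=int)

criterion, tolerance = 1.0, 1e-6
while criterion > tolerance:
    for i in range(k.size):
        # Consumption implied by each candidate choice of next-period capital.
        C = k[i]**alpha + (1 - delta) * k[i] - k
        C[C < 0] = np.nan                 # rule out infeasible choices
        objective = u(C) + beta * v_old
        v[i] = np.nanmax(objective)
        dr[i] = np.nanargmax(objective)
    criterion = np.max(np.abs(v - v_old))
    v_old[:] = v

k_opt = k[dr]  # policy function for next-period capital on the grid
```

Near `k_ss` the capital policy should cross the 45-degree line, so `k_opt` evaluated at the gridpoint closest to `k_ss` should be within a gridpoint or two of the steady state.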
/class_notebooks/README.md:
--------------------------------------------------------------------------------
1 | # How to use Jupyter Notebooks
2 |
3 | This year I will use Jupyter Notebooks as the main interface during our classes.
4 | Using these notebooks may not be obvious at first.
5 | Opening and editing them is not like opening a Stata or Matlab window.
6 | Instead, we need to launch the Jupyter session from the terminal.
7 |
8 | In brief, instead of opening a dedicated application window, we launch a program that keeps running in a terminal.
9 | The graphical user interface is instead provided by your own web browser.
10 |
11 |
12 | ## Launching a Jupyter session
13 |
14 | Open a terminal.
15 | The application you need to use depends on your Operating System.
16 |
17 | - Windows: you can either use the _Command Prompt_ or _PowerShell_.
18 | - macOS: there is an application called _Terminal_.
19 | - Linux: it depends on your distribution, but if you're using Linux, then you know how to open a terminal :-)
20 |
21 | In what follows, lines in code listings that start with a dollar sign (`$`) denote prompts at the terminal.
22 | This is the case on Bash, which is normally found both on macOS and Linux.
23 | On Windows, the default prompt is `>` (e.g., `C:\>`).
24 |
25 | To launch a Jupyter session, we simply type
26 |
27 | ```bash
28 | $ jupyter notebook
29 | ```
30 |
31 | in the terminal (and we obviously press Enter).
32 | The output will look like the following.
33 |
34 | ```
35 | [I 15:01:01.720 NotebookApp] Serving notebooks from local directory: /home/andrea
36 | [I 15:01:01.720 NotebookApp] The Jupyter Notebook is running at:
37 | [I 15:01:01.720 NotebookApp] http://localhost:8888/?token=0d6d95750966e08068ca76efbd091bf383f1c1538e35f6f1
38 | [I 15:01:01.720 NotebookApp] or http://127.0.0.1:8888/?token=0d6d95750966e08068ca76efbd091bf383f1c1538e35f6f1
39 | [I 15:01:01.720 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
40 | [C 15:01:01.767 NotebookApp]
41 |
42 | To access the notebook, open this file in a browser:
43 | file:///home/andrea/.local/share/jupyter/runtime/nbserver-12162-open.html
44 | Or copy and paste one of these URLs:
45 | http://localhost:8888/?token=0d6d95750966e08068ca76efbd091bf383f1c1538e35f6f1
46 | or http://127.0.0.1:8888/?token=0d6d95750966e08068ca76efbd091bf383f1c1538e35f6f1
47 | ```
48 |
49 | Leave this terminal window open (i.e., do not close it).
50 | The program running there is the server that powers your Jupyter Notebooks.
51 | As you work, additional lines will be printed to the terminal: do not worry, they simply log what is happening.
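
If the defaults do not suit you, the launch command accepts options. As a small sketch (run `jupyter notebook --help` for the authoritative list), a few that come in handy:

```bash
# Do not open a browser window automatically
$ jupyter notebook --no-browser

# Use a different port if 8888 is already taken
$ jupyter notebook --port 9999

# Serve notebooks from a specific folder
# (path/to/notebooks is a placeholder for a folder of yours)
$ jupyter notebook --notebook-dir path/to/notebooks
```
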
52 |
53 | Your default web browser will automatically start (or be brought to the foreground) and show a page like the following.
54 |
55 | 
56 |
57 | Although you operate through a web browser, everything you see in this window happens locally on your computer (note the `localhost` address).
58 | This is to say that there are no security concerns at this stage.
59 |
60 | Using the interface in your web browser, navigate to the folder containing the notebook you want to open.
61 | Jupyter Notebooks carry the file extension `.ipynb`.
62 |
63 | Once you open a notebook, you will be greeted by a screen similar to the following.
64 |
65 | 
66 |
67 |
68 | ## The basics of Jupyter Notebooks
69 |
70 | Describing how to work with Jupyter Notebooks in full is a lengthy task that I will not attempt here:
71 | see the [official documentation](https://jupyter-notebook.readthedocs.io/en/stable/index.html).
72 | However, I can provide an overview of the basics.
73 |
74 | From the navigation window (the one where you can browse your files) create a new notebook.
75 | It will look like this.
76 |
77 | 
78 |
79 | Apart from the header and the toolbar, you only see one thing: a _cell_.
80 | There are two main types of cells: _code_ cells and _markdown_ cells.
81 | The former is a place to write your code, so that you can execute it at some point.
82 | The latter instead is a place to write plain text, with the optional formatting offered by the [Markdown syntax](https://daringfireball.net/projects/markdown/basics).
83 | In the ideal scenario, you use markdown cells to write notes about the code, either for your own note-taking or for explaining results (e.g., say you are a data scientist explaining your profit-boosting results to your manager).
84 |
85 | On top of having two types of cells, we have two _modes_ of operation: the _command_ mode and the _editing_ mode.
86 | In the former mode, you move around cells and you execute them.
87 | You are in this mode when the thick line on the left of the cell is blue.
88 | In the latter mode, you modify the contents of your cells.
89 | You are in this mode when the thick line on the left of the cell is green.
90 |
91 | Each code cell that you can edit has the text `In [ ]:` on its left.
92 | The square brackets will populate with numbers as you execute the code cells.
93 | These numbers only keep track of the order of execution.
94 | When you execute a code cell that is supposed to provide output (either in the form of text or figures), Jupyter will automatically fill the space beneath the cell with the requested output.
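
As a concrete (made-up) example, a code cell containing the snippet below would print one line and, in addition, display the value of the last expression right beneath the cell, because Jupyter automatically shows the result of the last expression in a code cell:

```python
# Explicit output: print() writes to the area beneath the cell
print("2 + 2 =", 2 + 2)

# Implicit output: the last expression of a cell is displayed
# automatically, with no print() needed
[i ** 2 for i in range(5)]  # displays [0, 1, 4, 9, 16]
```
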
95 |
96 | If you want to considerably speed up your use of Jupyter Notebooks, you should learn its keyboard shortcuts.
97 | To see them, go to Help -> Keyboard Shortcuts.
98 |
99 |
100 | ## I've heard of Spyder: why are we not using it?
101 |
102 | Good question!
103 | [Spyder](https://www.spyder-ide.org/) is a traditional-looking [Integrated Development Environment (IDE)](https://en.wikipedia.org/wiki/Integrated_development_environment) that resembles applications like Matlab and RStudio.
104 | 
105 |
106 | I have four reasons for using notebooks as opposed to showing everything in an IDE.
107 |
108 | 1. Using Jupyter Notebooks is more pedagogical.
109 | While I will exclusively use code cells in class, you will have the template of the notebook, so that you can use markdown cells to write your own comments.
110 | The code in this course may not be trivial in places.
111 | Being able to take notes right next to the code is a plus (IMHO).
112 | 1. I want to make the point that using terminals is not scary.
113 | There is some unjustified stigma attached to using terminals.
114 | They either look too nerdy or they give a sense of insecurity.
115 | People are afraid of terminals because using them is not as intuitive as point-and-click interfaces.
116 | Here I take my small step towards encouraging the use of command-line interfaces.
117 | Why do I think this is important?
118 | [You'll be impressed with what you can do in a terminal](https://ux.stackexchange.com/questions/101990) (that you cannot do in a point-and-click interface).
119 | 1. We as economists should stop being dinosaurs when it comes to computing.
120 | Some people use Scientific Workplace for writing LaTeX documents.
121 | Some people use Beamer to create their posters for conferences.
122 | Some people use PowerPoint to create graphs that are then included in their LaTeX documents.
123 | We are competent people who should use the right tool for the right job.
124 | Using Jupyter Notebooks allows me to show how we can use web technologies to make our job a bit easier or more exciting (I use notebooks all the time for my research).
125 | Also consider that, at some point, you will need to come up with a personal website.
126 | Jupyter Notebooks can easily be embedded in your website, if you want to show off your skills.
127 | 1. The choice of IDE is a very personal one.
128 | Discussion around the question _"What IDE is best for programming language \_\_\_?"_ just goes on indefinitely.
129 | Spyder may be the first choice (as it has been in the previous two iterations of this course) because it comes together with Anaconda.
130 | Its interface is familiar to Matlab or RStudio users.
131 | But Spyder is not the only IDE for Python out there.
132 | [PyCharm](https://www.jetbrains.com/pycharm/) and [Visual Studio Code](https://code.visualstudio.com/) are other popular alternatives.
133 | Linux people may also bring [Vim](https://www.vim.org/) to the table.
134 | Who am I to choose the IDE for you?
135 |
136 | None of these points _alone_ justifies my decision to move to Jupyter Notebooks.
137 | However, all of them combined convinced me.
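
A small bonus related to point 3: the `nbconvert` tool that ships with Jupyter renders notebooks to static documents, which is one way to show your work on a website without requiring visitors to run Jupyter. A sketch, where `my_notebook.ipynb` stands for any notebook of yours:

```bash
# Render a notebook to a standalone HTML page
$ jupyter nbconvert --to html my_notebook.ipynb

# Extract the code cells into a plain Python script
$ jupyter nbconvert --to script my_notebook.ipynb
```
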
138 |
--------------------------------------------------------------------------------
/class_notebooks/class_1_template.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "# Class 1 - Python and The Basics"
8 | ]
9 | },
10 | {
11 | "cell_type": "markdown",
12 | "metadata": {},
13 | "source": [
14 | "This year I provide templates of Jupyter Notebooks.\n",
15 | "While I will only write code during class, you can take advantage of this space to take notes and replicate what we do in class.\n",
16 | "In any case, there are annotated, complete notebooks on the [GitHub repository](https://github.com/apsql/numerical_methods_macroeconomics) as reference material."
17 | ]
18 | },
19 | {
20 | "cell_type": "markdown",
21 | "metadata": {},
22 | "source": [
23 | "## First Steps"
24 | ]
25 | },
26 | {
27 | "cell_type": "markdown",
28 | "metadata": {},
29 | "source": [
30 | "## Data Types"
31 | ]
32 | },
33 | {
34 | "cell_type": "markdown",
35 | "metadata": {},
36 | "source": [
37 | "## Modules"
38 | ]
39 | },
40 | {
41 | "cell_type": "markdown",
42 | "metadata": {},
43 | "source": [
44 | "## Modules: Numpy"
45 | ]
46 | },
47 | {
48 | "cell_type": "markdown",
49 | "metadata": {},
50 | "source": [
51 | "## Modules: Scipy"
52 | ]
53 | },
54 | {
55 | "cell_type": "markdown",
56 | "metadata": {},
57 | "source": [
58 | "## Modules: Pandas"
59 | ]
60 | },
61 | {
62 | "cell_type": "markdown",
63 | "metadata": {},
64 | "source": [
65 | "## Modules: Matplotlib"
66 | ]
67 | },
68 | {
69 | "cell_type": "markdown",
70 | "metadata": {},
71 | "source": [
72 | "## Modules: Plotly"
73 | ]
74 | }
75 | ],
76 | "metadata": {
77 | "kernelspec": {
78 | "display_name": "Python 3",
79 | "language": "python",
80 | "name": "python3"
81 | },
82 | "language_info": {
83 | "codemirror_mode": {
84 | "name": "ipython",
85 | "version": 3
86 | },
87 | "file_extension": ".py",
88 | "mimetype": "text/x-python",
89 | "name": "python",
90 | "nbconvert_exporter": "python",
91 | "pygments_lexer": "ipython3",
92 | "version": "3.7.6"
93 | }
94 | },
95 | "nbformat": 4,
96 | "nbformat_minor": 2
97 | }
98 |
--------------------------------------------------------------------------------
/class_notebooks/class_3_template.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "# Script for Class #3"
8 | ]
9 | },
10 | {
11 | "cell_type": "code",
12 | "execution_count": null,
13 | "metadata": {},
14 | "outputs": [],
15 | "source": [
16 | "import numpy as np\n",
17 | "from scipy import linalg as la\n",
18 | "from scipy import stats as st\n",
19 | "from matplotlib import pyplot as plt\n",
20 | "from time import time\n",
21 | "\n",
22 | "from IPython.display import set_matplotlib_formats\n",
23 | "%matplotlib inline\n",
24 | "set_matplotlib_formats('svg')\n",
25 | "plt.rcParams['figure.figsize'] = [10, 5]"
26 | ]
27 | },
28 | {
29 | "cell_type": "markdown",
30 | "metadata": {},
31 | "source": [
32 | "### Tauchen (1986)"
33 | ]
34 | },
35 | {
36 | "cell_type": "code",
37 | "execution_count": null,
38 | "metadata": {},
39 | "outputs": [],
40 | "source": [
41 | "def tauchen(n, m, mu, rho, sigma):\n",
42 | " pass"
43 | ]
44 | },
45 | {
46 | "cell_type": "markdown",
47 | "metadata": {},
48 | "source": [
49 | "### Tauchen and Hussey (1991)"
50 | ]
51 | },
52 | {
53 | "cell_type": "code",
54 | "execution_count": null,
55 | "metadata": {},
56 | "outputs": [],
57 | "source": [
58 | "def tauchussey(n, mu, rho, sigma):\n",
59 | " pass"
60 | ]
61 | },
62 | {
63 | "cell_type": "markdown",
64 | "metadata": {},
65 | "source": [
66 | "### Rouwenhorst (see Kopecky and Suen, 2010)"
67 | ]
68 | },
69 | {
70 | "cell_type": "code",
71 | "execution_count": null,
72 | "metadata": {},
73 | "outputs": [],
74 | "source": [
75 | "def rouwenhorst(n, mu, rho, sigma):\n",
76 | " pass"
77 | ]
78 | },
79 | {
80 | "cell_type": "markdown",
81 | "metadata": {},
82 | "source": [
83 | "### Ergodic distribution of a discrete Markov chain"
84 | ]
85 | },
86 | {
87 | "cell_type": "code",
88 | "execution_count": null,
89 | "metadata": {},
90 | "outputs": [],
91 | "source": [
92 | "def ergodic_distribution(Pi):\n",
93 | " pass"
94 | ]
95 | },
96 | {
97 | "cell_type": "markdown",
98 | "metadata": {},
99 | "source": [
100 | "## VFI and solution to the stochastic neo-classical growth model"
101 | ]
102 | },
103 | {
104 | "cell_type": "code",
105 | "execution_count": null,
106 | "metadata": {},
107 | "outputs": [],
108 | "source": [
109 | "alpha = 0.3\n",
110 | "beta = 0.95\n",
111 | "delta = 0.1\n",
112 | "gamma = 1.5\n",
113 | "u = lambda c : c**(1-gamma) / (1-gamma)\n",
114 | "\n",
115 | "mu = 0\n",
116 | "rho = 0.7\n",
117 | "sigma = 0.1"
118 | ]
119 | },
120 | {
121 | "cell_type": "code",
122 | "execution_count": null,
123 | "metadata": {},
124 | "outputs": [],
125 | "source": [
126 | "Nk = 500\n",
127 | "\n",
128 | "k_dss = ((1 - (1-delta) * beta) / (alpha * beta)) ** (1 / (alpha-1))\n",
129 | "k_lo, k_hi = np.array([0.1, 2.5]) * k_dss\n",
130 | "\n",
131 | "K = np.linspace(k_lo, k_hi, num=Nk)"
132 | ]
133 | },
134 | {
135 | "cell_type": "code",
136 | "execution_count": null,
137 | "metadata": {},
138 | "outputs": [],
139 | "source": [
140 | "Na = 2\n",
141 | "\n",
142 | "A, P = rouwenhorst(Na, mu, rho, sigma)\n",
143 | "A = np.exp(A)"
144 | ]
145 | },
146 | {
147 | "cell_type": "code",
148 | "execution_count": null,
149 | "metadata": {},
150 | "outputs": [],
151 | "source": [
152 | "print(' Low productivity: exp(a) = {:.3f}\\n'.format(A[0]) + \n",
153 | " ' High productivity: exp(a) = {:.3f}\\n'.format(A[-1]) +\n",
154 | " 'Average productivity: exp(a) = {:.3f}'.format(np.exp(mu + sigma**2/2)))"
155 | ]
156 | },
157 | {
158 | "cell_type": "markdown",
159 | "metadata": {},
160 | "source": [
161 | "Note that the average productivity is not unity.\n",
162 | "Because productivity is [log-normal](https://en.wikipedia.org/wiki/Log-normal_distribution), the average is $\\exp(\\mu + \\sigma^2/2)$."
163 | ]
164 | },
165 | {
166 | "cell_type": "code",
167 | "execution_count": null,
168 | "metadata": {},
169 | "outputs": [],
170 | "source": [
171 | "U = np.zeros((Nk, Na))\n",
172 | "V0 = np.zeros((Nk, Na))\n",
173 | "V1 = np.zeros((Nk, Na))\n",
174 | "DRk = np.zeros((Nk, Na), dtype=int)\n",
175 | "\n",
176 | "criterion = 1.\n",
177 | "tolerance = 1e-6\n",
178 | "n_iter = 0"
179 | ]
180 | },
181 | {
182 | "cell_type": "code",
183 | "execution_count": null,
184 | "metadata": {},
185 | "outputs": [],
186 | "source": [
187 | "t0 = time()\n",
188 | "while criterion > tolerance:\n",
189 | " pass\n",
190 | "t1 = time()\n",
191 | "\n",
192 | "K1 = K[DRk]\n",
193 | "C = np.zeros((Nk, Na))\n",
194 | "for j in range(Na):\n",
195 | " C[:, j] = A[j] * K**alpha + (1 - delta) * K - K1[:, j]\n",
196 | "\n",
197 | "k_ss = np.zeros((Na,))\n",
198 | "for a in range(Na):\n",
199 | " k_ss[a] = K[np.abs(K - K1[:, a].reshape((-1,))).argmin()]\n",
200 | "\n",
201 | "print('Algorithm took {:.3f} seconds with {} iterations'.format((t1-t0),\n",
202 | " n_iter))"
203 | ]
204 | },
205 | {
206 | "cell_type": "code",
207 | "execution_count": null,
208 | "metadata": {},
209 | "outputs": [],
210 | "source": [
211 | "colorstate = ['firebrick', 'green']\n",
212 | "V_labels = [r'$V(k, a^l)$', r'$V(k, a^h)$']\n",
213 | "C_labels = [r'$c(k, a^l)$', r'$c(k, a^h)$']\n",
214 | "K_labels = [r\"$k'(k, a^l)$\", r\"$k'(k, a^h)$\"]\n",
215 | "\n",
216 | "fig = plt.subplots(figsize=(8, 6))\n",
217 | "ax = [None] * 3\n",
218 | "\n",
219 | "pltgrid = (2, 2)\n",
220 | "ax[0] = plt.subplot2grid(pltgrid, (0, 0), rowspan=2)\n",
221 | "ax[1] = plt.subplot2grid(pltgrid, (0, 1))\n",
222 | "ax[2] = plt.subplot2grid(pltgrid, (1, 1))\n",
223 | "\n",
224 | "for a in range(Na):\n",
225 | " ax[0].plot(K, V1[:, a],\n",
226 | " linewidth=2,\n",
227 | " color=colorstate[a],\n",
228 | " label=V_labels[a])\n",
229 | " ax[1].plot(K, K1[:, a],\n",
230 | " linewidth=2,\n",
231 | " color=colorstate[a],\n",
232 | " label=K_labels[a],\n",
233 | " zorder=2)\n",
234 | " ax[2].plot(K, C[:, a],\n",
235 | " linewidth=2,\n",
236 | " color=colorstate[a],\n",
237 | " label=C_labels[a])\n",
238 | "ax[1].plot(K, K,\n",
239 | " linewidth=1,\n",
240 | " color='black',\n",
241 | " linestyle='dashed',\n",
242 | " zorder=1)\n",
243 | "\n",
244 | "ax[0].set_title('Value function')\n",
245 | "ax[1].set_title('Capital accumulation decision')\n",
246 | "ax[2].set_title('Consumption decision')\n",
247 | "\n",
248 | "for a in range(3):\n",
249 | " ax[a].axvline(k_ss[0],\n",
250 | " color=colorstate[0],\n",
251 | " linestyle='dotted',\n",
252 | " zorder=1)\n",
253 | " ax[a].axvline(k_ss[1],\n",
254 | " color=colorstate[1],\n",
255 | " linestyle='dotted',\n",
256 | " zorder=1)\n",
257 | " ax[a].grid(alpha=0.3)\n",
258 | " ax[a].set_xlabel('$k$')\n",
259 | " ax[a].legend()\n",
260 | "\n",
261 | "plt.tight_layout()"
262 | ]
263 | },
264 | {
265 | "cell_type": "code",
266 | "execution_count": null,
267 | "metadata": {},
268 | "outputs": [],
269 | "source": [
270 | "print(' Low steady state: k = {:.3f}\\n'.format(k_ss[0]) +\n",
271 | " ' High steady state: k = {:.3f}\\n'.format(k_ss[1]) + \n",
272 | " 'Deterministic steady state: k = {:.3f}'.format(k_dss))"
273 | ]
274 | },
275 | {
276 | "cell_type": "markdown",
277 | "metadata": {},
278 | "source": [
279 | "## Simulating the model"
280 | ]
281 | },
282 | {
283 | "cell_type": "code",
284 | "execution_count": null,
285 | "metadata": {},
286 | "outputs": [],
287 | "source": [
288 | "def draw_state(pdf):\n",
289 | " pass"
290 | ]
291 | },
292 | {
293 | "cell_type": "code",
294 | "execution_count": null,
295 | "metadata": {},
296 | "outputs": [],
297 | "source": [
298 | "def find_nearest(array, value, give_idx=False):\n",
299 | " if array.ndim != 1:\n",
300 | " raise ValueError('Input vector must be uni-dimensional')\n",
301 | " idx = (np.abs(array - value)).argmin()\n",
302 | " if give_idx:\n",
303 | " return idx\n",
304 | " else:\n",
305 | " return array[idx]"
306 | ]
307 | },
308 | {
309 | "cell_type": "code",
310 | "execution_count": null,
311 | "metadata": {},
312 | "outputs": [],
313 | "source": [
314 | "T = 250\n",
315 | "\n",
316 | "a = np.zeros((T,), dtype=int)\n",
317 | "k = np.zeros((T,), dtype=int)"
318 | ]
319 | },
320 | {
321 | "cell_type": "code",
322 | "execution_count": null,
323 | "metadata": {},
324 | "outputs": [],
325 | "source": [
326 | "a[0] = draw_state(ergodic_distribution(P)) # drawing an index for grid A\n",
327 | "k[0] = find_nearest(K, k_dss, give_idx=True) # getting index for K"
328 | ]
329 | },
330 | {
331 | "cell_type": "code",
332 | "execution_count": null,
333 | "metadata": {},
334 | "outputs": [],
335 | "source": [
336 | "for t in range(T-1):\n",
337 | " pass\n",
338 | "\n",
339 | "capital = K[k]\n",
340 | "shocks = A[a]"
341 | ]
342 | },
343 | {
344 | "cell_type": "code",
345 | "execution_count": null,
346 | "metadata": {},
347 | "outputs": [],
348 | "source": [
349 | "production = np.zeros((T,))\n",
350 | "investment = np.zeros((T,))\n",
351 | "consumption = np.zeros((T,))\n",
352 | "\n",
353 | "for t in range(T-1):\n",
354 | " pass"
355 | ]
356 | },
357 | {
358 | "cell_type": "code",
359 | "execution_count": null,
360 | "metadata": {},
361 | "outputs": [],
362 | "source": [
363 | "production[-1] = shocks[-1] * capital[-1] ** alpha\n",
364 | "investment[-1] = np.nan\n",
365 | "consumption[-1] = np.nan"
366 | ]
367 | },
368 | {
369 | "cell_type": "code",
370 | "execution_count": null,
371 | "metadata": {},
372 | "outputs": [],
373 | "source": [
374 | "y_ss = A * k_ss ** alpha\n",
375 | "i_ss = delta * k_ss # k_ss - (1 - delta) * k_ss\n",
376 | "c_ss = y_ss - i_ss"
377 | ]
378 | },
379 | {
380 | "cell_type": "code",
381 | "execution_count": null,
382 | "metadata": {},
383 | "outputs": [],
384 | "source": [
385 | "lows = shocks < 1\n",
386 | "low_in = [i for i in range(1, T) if (lows[i-1] == False and lows[i] == True)]\n",
387 | "low_out = [i for i in range(T-1) if (lows[i] == True and lows[i+1] == False)]\n",
388 | "if lows[0] == True:\n",
389 | " low_in.insert(0, 0)\n",
390 | "if lows[T-1] == True:\n",
391 | " low_out.append(T-1)\n",
392 | "\n",
393 | "prop_sims = {'color': 'blue',\n",
394 | " 'linewidth': 1.5,\n",
395 | " 'zorder': 3,\n",
396 | " 'label': 'Sample path'}\n",
397 | "\n",
398 | "prop_ss_lo = {'color': colorstate[0],\n",
399 | " 'linewidth': 1,\n",
400 | " 'linestyle': 'dashed',\n",
401 | " 'zorder': 2,\n",
402 | " 'label': 'Low steady state'}\n",
403 | "\n",
404 | "prop_ss_hi = {'color': colorstate[1],\n",
405 | " 'linewidth': 1,\n",
406 | " 'linestyle': 'dashed',\n",
407 | " 'zorder': 2,\n",
408 | " 'label': 'High steady state'}\n",
409 | "\n",
410 | "fig1, ax1 = plt.subplots(nrows=2, ncols=2, sharex=True, sharey=True,\n",
411 | " figsize=(8, 6))\n",
412 | "\n",
413 | "ax1[0, 0].plot(consumption, **prop_sims)\n",
414 | "ax1[0, 1].plot(investment, **prop_sims)\n",
415 | "ax1[1, 0].plot(capital, **prop_sims)\n",
416 | "ax1[1, 1].plot(production, **prop_sims)\n",
417 | "\n",
418 | "ax1[0, 0].axhline(c_ss[0], **prop_ss_lo)\n",
419 | "ax1[0, 0].axhline(c_ss[1], **prop_ss_hi)\n",
420 | "ax1[0, 1].axhline(i_ss[0], **prop_ss_lo)\n",
421 | "ax1[0, 1].axhline(i_ss[1], **prop_ss_hi)\n",
422 | "ax1[1, 0].axhline(k_ss[0], **prop_ss_lo)\n",
423 | "ax1[1, 0].axhline(k_ss[1], **prop_ss_hi)\n",
424 | "ax1[1, 1].axhline(y_ss[0], **prop_ss_lo)\n",
425 | "ax1[1, 1].axhline(y_ss[1], **prop_ss_hi)\n",
426 | "\n",
427 | "for i in range(2):\n",
428 | " for j in range(2):\n",
429 | " ax1[i, j].set_xlabel('Time')\n",
430 | " ax1[i, j].legend(framealpha=1)\n",
431 | " for a, b in zip(low_in, low_out):\n",
432 | " ax1[i, j].axvspan(a, b, color='black', alpha=0.1, zorder=1)\n",
433 | "\n",
434 | "ax1[0, 0].set_title('Consumption')\n",
435 | "ax1[0, 1].set_title('Investment')\n",
436 | "ax1[1, 0].set_title('Capital')\n",
437 | "ax1[1, 1].set_title('Production')\n",
438 | "\n",
439 | "plt.tight_layout()"
440 | ]
441 | }
442 | ],
443 | "metadata": {
444 | "kernelspec": {
445 | "display_name": "Python 3",
446 | "language": "python",
447 | "name": "python3"
448 | },
449 | "language_info": {
450 | "codemirror_mode": {
451 | "name": "ipython",
452 | "version": 3
453 | },
454 | "file_extension": ".py",
455 | "mimetype": "text/x-python",
456 | "name": "python",
457 | "nbconvert_exporter": "python",
458 | "pygments_lexer": "ipython3",
459 | "version": "3.8.5"
460 | }
461 | },
462 | "nbformat": 4,
463 | "nbformat_minor": 4
464 | }
465 |
--------------------------------------------------------------------------------
/class_notebooks/class_4_template.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "id": "stopped-implement",
6 | "metadata": {},
7 | "source": [
8 | "# Script for Class #4"
9 | ]
10 | },
11 | {
12 | "cell_type": "code",
13 | "execution_count": null,
14 | "id": "third-resource",
15 | "metadata": {},
16 | "outputs": [],
17 | "source": [
18 | "import numpy as np\n",
19 | "from scipy import linalg as la\n",
20 | "from scipy import stats as st\n",
21 | "from scipy import optimize as opt\n",
22 | "from matplotlib import pyplot as plt\n",
23 | "from time import time\n",
24 | "\n",
25 | "from IPython.display import set_matplotlib_formats\n",
26 | "%matplotlib inline\n",
27 | "set_matplotlib_formats('svg')\n",
28 | "plt.rcParams['figure.figsize'] = [12, 5]"
29 | ]
30 | },
31 | {
32 | "cell_type": "markdown",
33 | "id": "graphic-postage",
34 | "metadata": {},
35 | "source": [
36 | "## Solving for the General Equilibrium when Explicit Prices are Involved"
37 | ]
38 | },
39 | {
40 | "cell_type": "code",
41 | "execution_count": null,
42 | "id": "horizontal-knitting",
43 | "metadata": {},
44 | "outputs": [],
45 | "source": [
46 | "beta = 0.97\n",
47 | "gamma = 1.5\n",
48 | "y = 1.0\n",
49 | "n = 100 + 1\n",
50 | "a = np.linspace(-5, 5, num=n) # ensuring there's a value that is exactly zero, see later"
51 | ]
52 | },
53 | {
54 | "cell_type": "code",
55 | "execution_count": null,
56 | "id": "reserved-criminal",
57 | "metadata": {},
58 | "outputs": [],
59 | "source": [
60 | "rSol = 1 / beta - 1\n",
61 | "print('r* = {:.5f}'.format(rSol))"
62 | ]
63 | },
64 | {
65 | "cell_type": "code",
66 | "execution_count": null,
67 | "id": "lightweight-kidney",
68 | "metadata": {},
69 | "outputs": [],
70 | "source": [
71 | "class Agent:\n",
72 | " \n",
73 | " def __init__(self, beta, gamma, a, y):\n",
74 | " self.beta = beta\n",
75 | " self.gamma = gamma\n",
76 | " self.y = y\n",
77 | " self.a = a\n",
78 | " \n",
79 | " def __call__(self, r, tol=1e-6):\n",
80 | " n = self.a.size\n",
81 | " v = np.zeros((n,1))\n",
82 | " v_new = np.zeros((n,1))\n",
83 | " dr = np.zeros((n,1), dtype=int)\n",
84 | " criterion = 1\n",
85 | " n_iter = 0\n",
86 | " t0 = time()\n",
87 | " while criterion > tol:\n",
88 | " n_iter += 1\n",
89 | " for i in range(n):\n",
90 | " c = self.y + self.a[i] * (1 + r) - self.a\n",
91 | " c[c<=0] = np.nan\n",
92 | " u = c ** (1 - self.gamma) / (1 - self.gamma)\n",
93 | " obj = u + self.beta * v[:, -1]\n",
94 | " v_new[i] = np.nanmax( obj )\n",
95 | " dr[i] = obj.tolist().index(v_new[i])\n",
96 | " v = np.block([v, v_new])\n",
97 | " criterion = np.max(np.abs(v[:, -1] - v[:, -2]))\n",
98 | " t1 = time()\n",
99 | " a_opt = self.a[dr]\n",
100 | " self.v = v\n",
101 | " print('VFI took {0:.3f} seconds, {1} iterations (r={2:.3f}%).'.format(t1-t0, n_iter, r*100))\n",
102 | " # c_opt = self.y + self.a * (1 + r) - a_opt\n",
103 | " return a_opt"
104 | ]
105 | },
106 | {
107 | "cell_type": "code",
108 | "execution_count": null,
109 | "id": "crucial-feedback",
110 | "metadata": {},
111 | "outputs": [],
112 | "source": [
113 | "rLo, rHi = np.array([0.75, 1.25]) * rSol\n",
114 | "ra = Agent(beta, gamma, a, y)"
115 | ]
116 | },
117 | {
118 | "cell_type": "code",
119 | "execution_count": null,
120 | "id": "acquired-agriculture",
121 | "metadata": {},
122 | "outputs": [],
123 | "source": [
124 | "\"fill me in!\""
125 | ]
126 | },
127 | {
128 | "cell_type": "code",
129 | "execution_count": null,
130 | "id": "representative-tenant",
131 | "metadata": {},
132 | "outputs": [],
133 | "source": [
134 | "print('Analytical solution: r = {:.50f}'.format(rSol))\n",
135 | "print(' Numerical solution: r = {:.50f}'.format(rStar))"
136 | ]
137 | },
138 | {
139 | "cell_type": "markdown",
140 | "id": "promising-representation",
141 | "metadata": {},
142 | "source": [
143 | "## From Policy Functions to Endogenous Ergodic Distributions"
144 | ]
145 | },
146 | {
147 | "cell_type": "markdown",
148 | "id": "simplified-somewhere",
149 | "metadata": {},
150 | "source": [
151 | "Same problem as before, add uncertainty in $Y_t$"
152 | ]
153 | },
154 | {
155 | "cell_type": "code",
156 | "execution_count": null,
157 | "id": "rising-twins",
158 | "metadata": {},
159 | "outputs": [],
160 | "source": [
161 | "a_num = 100\n",
162 | "a_min = -5\n",
163 | "a_max = 5\n",
164 | "A = np.linspace(a_min, a_max, num=a_num)\n",
165 | "Y = np.array([0.5, 1.5])\n",
166 | "Pi = np.array([[0.75, 0.25],\n",
167 | " [0.25, 0.75]])\n",
168 | "# Y = np.array([0.25, 1.00, 1.75])\n",
169 | "# Pi = np.array([[0.65, 0.25, 0.10],\n",
170 | "# [0.20, 0.60, 0.20],\n",
171 | "# [0.10, 0.25, 0.65]])\n",
172 | "beta = 0.97\n",
173 | "gamma = 2.0"
174 | ]
175 | },
176 | {
177 | "cell_type": "code",
178 | "execution_count": null,
179 | "id": "italic-compensation",
180 | "metadata": {},
181 | "outputs": [],
182 | "source": [
183 | "def ergodic_distribution(P):\n",
184 | " eigvalues, eigvectors = la.eig(P)\n",
185 | " real_eigvalues, positions = [], []\n",
186 | " for i, l in enumerate(eigvalues):\n",
187 | " if np.imag(l) == 0.0:\n",
188 | " positions.append(i)\n",
189 | " real_eigvalues.append(l)\n",
190 | " real_eigvalues = np.array(real_eigvalues)\n",
191 | " real_eigvectors = np.real( eigvectors[:, positions] )\n",
192 | " unit_eigvalue = np.argmin( np.abs( real_eigvalues - 1 ) )\n",
193 | " ergo_dist = real_eigvectors[:, unit_eigvalue]\n",
194 | " ergo_dist /= ergo_dist.sum()\n",
195 | " return ergo_dist"
196 | ]
197 | },
198 | {
199 | "cell_type": "code",
200 | "execution_count": null,
201 | "id": "universal-spoke",
202 | "metadata": {},
203 | "outputs": [],
204 | "source": [
205 | "def solve_vfi(r, A, Y, beta, gamma, tol=1e-6):\n",
206 | " na = A.size\n",
207 | " ny = Y.size\n",
208 | " V0 = np.zeros((na, ny))\n",
209 | " dr = np.zeros((na, ny), dtype=int)\n",
210 | " crit = 1.0\n",
211 | " n_iter = 0\n",
212 | " t0 = time()\n",
213 | " while crit > tol:\n",
214 | " n_iter += 1\n",
215 | " V1 = np.zeros_like(V0)\n",
216 | " U = np.zeros((na, ny))\n",
217 | " for i in range(na):\n",
218 | " for j in range(ny):\n",
219 | " C = Y[j] + (1 + r) * A[i] - A\n",
220 | "C[C <= 0] = np.nan\n",
221 | " U[:, j] = C ** (1 - gamma) / (1 - gamma)\n",
222 | " objective = U + beta * V0 @ Pi.T\n",
223 | " V1[i, :] = np.nanmax(objective, axis=0)\n",
224 | " crit = np.max( np.max( np.abs( V1 - V0 ) ) )\n",
225 | " V0[:] = V1\n",
226 | " t1 = time()\n",
227 | " for i in range(na):\n",
228 | " for j in range(ny):\n",
229 | " C = Y[j] + (1 + r) * A[i] - A\n",
230 | "C[C <= 0] = np.nan\n",
231 | " U[:, j] = C ** (1 - gamma) / (1 - gamma)\n",
232 | " objective = U + beta * V0 @ Pi.T\n",
233 | " dr[i, :] = np.nanargmax(objective, axis=0)\n",
234 | " pf_a = A[dr]\n",
235 | " print('VFI solved with r = {0:.10f}%; {1:.3f} seconds'.format(r*100, t1-t0))\n",
236 | " return pf_a"
237 | ]
238 | },
239 | {
240 | "cell_type": "code",
241 | "execution_count": null,
242 | "id": "million-glance",
243 | "metadata": {},
244 | "outputs": [],
245 | "source": [
246 | "def market_clearing(r, beta=0.97, gamma=2.0, tol=1e-6, full_output=False):\n",
247 | " na = A.size\n",
248 | " ny = Y.size\n",
249 | " ns = na * ny\n",
250 | " pa = np.zeros((na, na, ny), dtype=int)\n",
251 | " pf_a = solve_vfi(r, A, Y, beta, gamma)\n",
252 | " pass"
253 | ]
254 | },
255 | {
256 | "cell_type": "code",
257 | "execution_count": null,
258 | "id": "hundred-thompson",
259 | "metadata": {},
260 | "outputs": [],
261 | "source": [
262 | "rStar, diagnostics = \"fill me in!\""
263 | ]
264 | },
265 | {
266 | "cell_type": "code",
267 | "execution_count": null,
268 | "id": "focal-powder",
269 | "metadata": {},
270 | "outputs": [],
271 | "source": [
272 | "diagnostics"
273 | ]
274 | },
275 | {
276 | "cell_type": "code",
277 | "execution_count": null,
278 | "id": "natural-freedom",
279 | "metadata": {},
280 | "outputs": [],
281 | "source": [
282 | "pass"
283 | ]
284 | },
285 | {
286 | "cell_type": "code",
287 | "execution_count": null,
288 | "id": "favorite-composer",
289 | "metadata": {},
290 | "outputs": [],
291 | "source": [
292 | "ergo_dist = ergo_dist.reshape((Y.size, A.size)).T\n",
293 | "marginal_dist_income = ergodic_distribution(Pi)\n",
294 | "dist_if_y_lo = ergo_dist[:, 0] / marginal_dist_income[0]\n",
295 | "dist_if_y_hi = ergo_dist[:, -1] / marginal_dist_income[-1]"
296 | ]
297 | },
298 | {
299 | "cell_type": "code",
300 | "execution_count": null,
301 | "id": "collectible-mounting",
302 | "metadata": {},
303 | "outputs": [],
304 | "source": [
305 | "fig, ax = plt.subplots(nrows=1, ncols=2)\n",
306 | "ax[0].plot(A, A, color='black', alpha=0.5, linestyle='dashed')\n",
307 | "ax[0].plot(A, pf_a[:, 0], color='red', linewidth=2, label=\"$A'(A, Y^l)$\")\n",
308 | "ax[0].plot(A, pf_a[:, 1], color='green', linewidth=2, label=\"$A'(A, Y^h)$\")\n",
309 | "ax[0].legend()\n",
310 | "ax[0].set_title('Pol. fun. assets')\n",
311 | "ax[0].set_xlabel('$A$')\n",
312 | "ax[0].set_ylabel(\"$A'(A, Y)$\")\n",
313 | "ax[1].plot(A, dist_if_y_lo, color='red', linewidth=2, label='$\\lambda(A | Y^l)$')\n",
314 | "ax[1].plot(A, dist_if_y_hi, color='green', linewidth=2, label='$\\lambda(A | Y^h)$')\n",
315 | "ax[1].legend()\n",
316 | "ax[1].set_xlabel('$A$')\n",
317 | "ax[1].set_title('$\\lambda(A | Y)$')\n",
318 | "plt.tight_layout()\n",
319 | "plt.show()"
320 | ]
321 | },
322 | {
323 | "cell_type": "markdown",
324 | "id": "modern-telling",
325 | "metadata": {},
326 | "source": [
327 | "The last plot on the right makes it look like some probabilities in the endogenous distribution are negative. They are not. The $y$-axis has been scaled and shifted by `1e-14+1e-2`, which means that each tick label $i$ on the vertical axis corresponds to $i \times 10^{-14} + 1 \times 10^{-2}$. Therefore, the zero displayed is actually $0.01$, and the label $-1$ corresponds to $0.01 - 10^{-14} > 0$."
328 | ]
329 | }
330 | ],
331 | "metadata": {
332 | "kernelspec": {
333 | "display_name": "Python 3",
334 | "language": "python",
335 | "name": "python3"
336 | },
337 | "language_info": {
338 | "codemirror_mode": {
339 | "name": "ipython",
340 | "version": 3
341 | },
342 | "file_extension": ".py",
343 | "mimetype": "text/x-python",
344 | "name": "python",
345 | "nbconvert_exporter": "python",
346 | "pygments_lexer": "ipython3",
347 | "version": "3.8.5"
348 | }
349 | },
350 | "nbformat": 4,
351 | "nbformat_minor": 5
352 | }
353 |
--------------------------------------------------------------------------------
/class_notebooks/class_5_template.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "id": "brutal-variety",
6 | "metadata": {},
7 | "source": [
8 | "# Binning & Transition Dynamics"
9 | ]
10 | },
11 | {
12 | "cell_type": "code",
13 | "execution_count": null,
14 | "id": "missing-mileage",
15 | "metadata": {},
16 | "outputs": [],
17 | "source": [
18 | "from time import time\n",
19 | "import numpy as np\n",
20 | "from scipy import linalg as la\n",
21 | "from scipy import stats as st\n",
22 | "from scipy import optimize as opt\n",
23 | "from matplotlib import pyplot as plt\n",
24 | "\n",
25 | "from IPython.display import set_matplotlib_formats\n",
26 | "%matplotlib inline\n",
27 | "set_matplotlib_formats('svg')\n",
28 | "plt.rcParams['figure.figsize'] = [8, 5]"
29 | ]
30 | },
31 | {
32 | "cell_type": "markdown",
33 | "id": "dynamic-couple",
34 | "metadata": {},
35 | "source": [
36 | "## Binning (a.k.a., Non-Stochastic Simulations)"
37 | ]
38 | },
39 | {
40 | "cell_type": "code",
41 | "execution_count": null,
42 | "id": "activated-disclaimer",
43 | "metadata": {},
44 | "outputs": [],
45 | "source": [
46 | "class Huggett:\n",
47 | "\n",
48 | "\n",
49 | " def __init__(self, a_num, y_num, a_min=-3.0, a_max=15.0, beta=0.97,\n",
50 | " gamma=2.0, mu=0.0, rho=0.53, sigma=0.296, vfi_tol=1e-6):\n",
51 | " self.na = a_num\n",
52 | " self.ny = y_num\n",
53 | " self.ns = a_num * y_num\n",
54 | " self.a_min = a_min\n",
55 | " self.beta = beta\n",
56 | " self.gamma = gamma\n",
57 | " self.mu = mu\n",
58 | " self.rho = rho\n",
59 | " self.sigma = sigma\n",
60 | " self.A = np.linspace(a_min, a_max, a_num)\n",
61 | " log_Y, self.Pi = self._rouwenhorst(y_num, mu, rho, sigma)\n",
62 | " self.Y = np.exp(log_Y)\n",
63 | " self.vfi_tol = vfi_tol\n",
64 | "\n",
65 | "\n",
66 | " def solve_vfi_ip(self, r):\n",
67 | " \"\"\"\n",
68 | " Solves the households' problem with VFI scaling down the state space.\n",
69 | " \"\"\"\n",
70 | " na = self.na // 5\n",
71 | " A = np.linspace(self.A.min(), self.A.max(), na)\n",
72 | " V0 = np.zeros((na, self.ny))\n",
73 | " dr = np.zeros((na, self.ny), dtype=int)\n",
74 | " crit = 1.0\n",
75 | " n_iter = 0\n",
76 | " while crit > self.vfi_tol:\n",
77 | " n_iter += 1\n",
78 | " V1 = np.zeros_like(V0)\n",
79 | " U = np.zeros((na, self.ny))\n",
80 | " for i in range(na):\n",
81 | " for k in range(self.ny):\n",
82 | " C = self.Y[k] + (1 + r) * A[i] - A\n",
83 | " C[C < 0] = np.nan\n",
84 | " U[:, k] = self.u(C)\n",
85 | " objective = U + self.beta * ( V0 @ self.Pi.T )\n",
86 | " V1[i, :] = np.nanmax(objective, axis=0)\n",
87 | " dr[i, :] = np.nanargmax(objective, axis=0)\n",
88 | " crit = np.nanmax( np.nanmax( np.abs( V1 - V0 ) ) )\n",
89 | " V0[:] = V1\n",
90 | " pf_a = A[dr]\n",
91 | " A_opt = np.zeros((self.na, self.ny))\n",
92 | " for k in range(self.ny):\n",
93 | " coeffs = np.polyfit(A, pf_a[:, k], 3)\n",
94 | " A_opt[:, k] = np.polyval(coeffs, self.A)\n",
95 | " A_opt[A_opt <= self.A.min()] = self.A.min()\n",
96 | " return A_opt\n",
97 | "\n",
98 | "\n",
99 | " def market_clearing(self, r, binning=False, full_output=False):\n",
100 | " t0 = time()\n",
101 | " pfa = self.solve_vfi_ip(r)\n",
102 | " Q = self._compute_Q_smooth(pfa) if binning else self._compute_Q(pfa)\n",
103 | " dist = self._ergodic_distribution(Q).reshape((self.ny, self.na)).T\n",
104 | " net_excess_demand = np.sum(dist * pfa)\n",
105 | " t1 = time()\n",
106 | " print('Done! r = {0:.5f}% {1:.3f}s.'.format(r*100, t1-t0))\n",
107 | " if full_output:\n",
108 | " return net_excess_demand, dist\n",
109 | " else:\n",
110 | " return net_excess_demand\n",
111 | "\n",
112 | "\n",
113 | " def u(self, c):\n",
114 | " return (c ** (1 - self.gamma)) / (1 - self.gamma)\n",
115 | "\n",
116 | "\n",
117 | " @staticmethod\n",
118 | " def _rouwenhorst(n, mu, rho, sigma):\n",
119 | " \"\"\"\n",
120 | " Discretizes any stationary AR(1) process.\n",
121 | " \"\"\"\n",
122 | " def compute_P(p, n):\n",
123 | " if n == 2:\n",
124 | " P = np.array([[p, 1-p], [1-p, p]], dtype=float)\n",
125 | " else:\n",
126 | " Q = compute_P(p, n-1)\n",
127 | " A = np.zeros((n, n))\n",
128 | " B = np.zeros((n, n))\n",
129 | " A[:n-1, :n-1] += Q\n",
130 | " A[1:n, 1:n] += Q\n",
131 | " B[:n-1, 1:n] += Q\n",
132 | " B[1:n, :n-1] += Q\n",
133 | " P = p * A + (1-p) * B\n",
134 | " P[1:-1, :] /= 2\n",
135 | " return P\n",
136 | " p = (1 + rho) / 2\n",
137 | " Pi = compute_P(p, n)\n",
138 | " f = np.sqrt(n-1) * (sigma / np.sqrt(1 - rho**2))\n",
139 | " S = np.linspace(-f, f, n) + mu\n",
140 | " return S, Pi\n",
141 | "\n",
142 | "\n",
143 | " @staticmethod\n",
144 | " def _ergodic_distribution(P, tol=1e-12):\n",
145 | " \"\"\"\n",
146 | " Returns the ergodic distribution of a matrix P by iterating it.\n",
147 | " (fast, if P is sparse)\n",
148 | " \"\"\"\n",
149 | " n = P.shape[0]\n",
150 | " p0 = np.zeros((1, n))\n",
151 | " p0[0, 0] = 1.0\n",
152 | " diff = 1.0\n",
153 | " while diff > tol:\n",
154 | " p1 = p0 @ P\n",
155 | " p0 = p1 @ P\n",
156 | " diff = la.norm(p1 - p0)\n",
157 | " return p0.reshape((-1, )) / p0.sum()\n",
158 | "\n",
159 | "\n",
160 | " def _compute_Q(self, pf_a):\n",
161 | " \"\"\"\n",
162 | " Translates a policy function into a transition matrix.\n",
163 | " \"\"\"\n",
164 | " n = self.na\n",
165 | " blocks = []\n",
166 | " for k in range(self.ny):\n",
167 | " pa = np.zeros((n, n), dtype=int)\n",
168 | " for i in range(n):\n",
169 | " j = np.argmin( np.abs( pf_a[i, k] - self.A ) )\n",
170 | " pa[i, j] = 1\n",
171 | " blocks.append(pa)\n",
172 | " PA = la.block_diag(*blocks)\n",
173 | " PY = np.kron( self.Pi, np.eye(self.na) )\n",
174 | " Q = PY @ PA\n",
175 | " return Q\n",
176 | "\n",
177 | "\n",
178 | " def _compute_Q_smooth(self, pf_a):\n",
179 | " \"\"\"\n",
180 | " Translates a policy function into a transition matrix, with binning.\n",
181 | " \"\"\"\n",
182 | " pass"
183 | ]
184 | },
185 | {
186 | "cell_type": "code",
187 | "execution_count": null,
188 | "id": "chemical-shield",
189 | "metadata": {},
190 | "outputs": [],
191 | "source": [
192 | "mdl = Huggett(a_num=500, y_num=5)\n",
193 | "rStar, checks = opt.ridder(mdl.market_clearing, 0.020, 0.025, full_output=True)"
194 | ]
195 | },
196 | {
197 | "cell_type": "code",
198 | "execution_count": null,
199 | "id": "filled-canberra",
200 | "metadata": {},
201 | "outputs": [],
202 | "source": [
203 | "checks"
204 | ]
205 | },
206 | {
207 | "cell_type": "code",
208 | "execution_count": null,
209 | "id": "thermal-darwin",
210 | "metadata": {},
211 | "outputs": [],
212 | "source": [
213 | "pfa = mdl.solve_vfi_ip(rStar)\n",
214 | "ned_bin, dist_bin = mdl.market_clearing(rStar, binning=True, full_output=True)\n",
215 | "ned_nobin, dist_nobin = mdl.market_clearing(rStar, binning=False, full_output=True)"
216 | ]
217 | },
218 | {
219 | "cell_type": "code",
220 | "execution_count": null,
221 | "id": "acoustic-length",
222 | "metadata": {},
223 | "outputs": [],
224 | "source": [
225 | "fig, ax = plt.subplots(nrows=2, ncols=2, sharex=True)\n",
226 | "ax[0, 0].plot(mdl.A, dist_nobin.sum(axis=1))\n",
227 | "ax[1, 0].plot(mdl.A, dist_nobin.sum(axis=1).cumsum())\n",
228 | "ax[0, 1].plot(mdl.A, dist_bin.sum(axis=1))\n",
229 | "ax[1, 1].plot(mdl.A, dist_bin.sum(axis=1).cumsum())\n",
230 | "for j in range(2):\n",
231 | " ax[0, j].set_title('Ergodic marginal PDF')\n",
232 | " ax[1, j].set_title('Ergodic marginal CDF')\n",
233 | " for i in range(2):\n",
234 | " ax[i, j].grid(alpha=0.3)\n",
235 | "plt.tight_layout()\n",
236 | "plt.show()"
237 | ]
238 | },
239 | {
240 | "cell_type": "markdown",
241 | "id": "necessary-indiana",
242 | "metadata": {},
243 | "source": [
244 | "_En passant,_ we have just replicated Huggett (1993) 😉"
245 | ]
246 | },
247 | {
248 | "cell_type": "code",
249 | "execution_count": null,
250 | "id": "comprehensive-china",
251 | "metadata": {},
252 | "outputs": [],
253 | "source": [
254 | "print(\" Complete-insurance economy: r = {}%.\".format(\"???\"))\n",
255 | "print(\"Incomplete-insurance economy: r = {:.3f}%.\".format(rStar * 100))"
256 | ]
257 | },
258 | {
259 | "cell_type": "markdown",
260 | "id": "sixth-darwin",
261 | "metadata": {},
262 | "source": [
263 | "## Transition Dynamics (a.k.a., MIT shocks)"
264 | ]
265 | },
266 | {
267 | "cell_type": "markdown",
268 | "id": "closing-large",
269 | "metadata": {},
270 | "source": [
271 | "_Coming soon..._"
272 | ]
273 | },
274 | {
275 | "cell_type": "markdown",
276 | "id": "entitled-charter",
277 | "metadata": {},
278 | "source": [
279 | "## The Aiyagari (1994) Model"
280 | ]
281 | },
282 | {
283 | "cell_type": "markdown",
284 | "id": "intermediate-institution",
285 | "metadata": {},
286 | "source": [
287 | "See the code provided by Maffezzoli."
288 | ]
289 | }
290 | ],
291 | "metadata": {
292 | "kernelspec": {
293 | "display_name": "Python 3",
294 | "language": "python",
295 | "name": "python3"
296 | },
297 | "language_info": {
298 | "codemirror_mode": {
299 | "name": "ipython",
300 | "version": 3
301 | },
302 | "file_extension": ".py",
303 | "mimetype": "text/x-python",
304 | "name": "python",
305 | "nbconvert_exporter": "python",
306 | "pygments_lexer": "ipython3",
307 | "version": "3.8.8"
308 | }
309 | },
310 | "nbformat": 4,
311 | "nbformat_minor": 5
312 | }
313 |
--------------------------------------------------------------------------------
/class_notebooks/class_6_template.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "id": "distributed-toolbox",
6 | "metadata": {},
7 | "source": [
8 | "# Web Scraping"
9 | ]
10 | },
11 | {
12 | "cell_type": "code",
13 | "execution_count": null,
14 | "id": "eight-baseball",
15 | "metadata": {},
16 | "outputs": [],
17 | "source": [
18 | "from time import time\n",
19 | "import pandas as pd\n",
20 | "import requests # HTTP programming\n",
21 | "from bs4 import BeautifulSoup # HTML parsing\n",
22 | "from selenium import webdriver # Browser automation\n",
23 | "from selenium.common.exceptions import NoSuchElementException"
24 | ]
25 | },
26 | {
27 | "cell_type": "code",
28 | "execution_count": null,
29 | "id": "incoming-lodge",
30 | "metadata": {},
31 | "outputs": [],
32 | "source": [
33 | "latest_xkcd_comic = 2436\n",
34 | "oldest_xkcd_comic = 2350"
35 | ]
36 | },
37 | {
38 | "cell_type": "markdown",
39 | "id": "lucky-corporation",
40 | "metadata": {},
41 | "source": [
42 | "## HTTP programming"
43 | ]
44 | },
45 | {
46 | "cell_type": "code",
47 | "execution_count": null,
48 | "id": "revolutionary-tissue",
49 | "metadata": {},
50 | "outputs": [],
51 | "source": [
52 | "class xkcdComicJson:\n",
53 | " \"\"\"\n",
54 | " Uses the JSON interface at https://xkcd.com/ for retrieving information about a single xkcd comic.\n",
55 | " \"\"\"\n",
56 | "\n",
57 | " def __init__(self, comic_no):\n",
58 | " pass\n",
59 | " \n",
60 | " def save_img_to_disk(self, directory='./'):\n",
61 | " response = requests.get(self.img_url)\n",
62 | " response.raise_for_status()\n",
63 | " if directory[-1] != '/':\n",
64 | " directory += '/'\n",
65 | " with open(directory + f'{self.number}-{self.img_name}', mode='wb') as f:\n",
66 | " f.write(response.content)"
67 | ]
68 | },
69 | {
70 | "cell_type": "code",
71 | "execution_count": null,
72 | "id": "animated-viking",
73 | "metadata": {},
74 | "outputs": [],
75 | "source": [
76 | "df_json_rows = []\n",
77 | "t0 = time()\n",
78 | "for no in range(oldest_xkcd_comic, latest_xkcd_comic+1):\n",
79 | " comic = xkcdComicJson(no)\n",
80 | " df_json_rows.append(comic.json)\n",
81 | " # comic.save_img_to_disk()\n",
82 | "t1 = time()\n",
83 | "time_json = t1 - t0\n",
84 | "df_json = pd.DataFrame(df_json_rows)\n",
85 | "print(\"Data download completed in {:.3f} seconds.\".format(time_json))"
86 | ]
87 | },
88 | {
89 | "cell_type": "markdown",
90 | "id": "vietnamese-amplifier",
91 | "metadata": {},
92 | "source": [
93 | "Note that we collect the rows in a list and build the `pandas.DataFrame` once at the end: growing a `pandas.DataFrame` iteratively, row by row, is slow. An accurate and detailed account of the reasons can be found [here](https://stackoverflow.com/a/56746204)."
94 | ]
95 | },
96 | {
97 | "cell_type": "code",
98 | "execution_count": null,
99 | "id": "organic-necklace",
100 | "metadata": {},
101 | "outputs": [],
102 | "source": [
103 | "df_json.tail()"
104 | ]
105 | },
106 | {
107 | "cell_type": "markdown",
108 | "id": "cheap-institution",
109 | "metadata": {},
110 | "source": [
111 | "## HTML parsing"
112 | ]
113 | },
114 | {
115 | "cell_type": "code",
116 | "execution_count": null,
117 | "id": "funky-warrant",
118 | "metadata": {},
119 | "outputs": [],
120 | "source": [
121 | "class xkcdComicSoup:\n",
122 | " \"\"\"\n",
123 | " Uses Beautiful Soup to parse the HTML page for a given comic.\n",
124 | " \"\"\"\n",
125 | " \n",
126 | " def __init__(self, comic_no):\n",
127 | " pass\n",
128 | " \n",
129 | " def save_img_to_disk(self, directory='./'):\n",
130 | " if directory[-1] != '/':\n",
131 | " directory += '/'\n",
132 | " with open(directory + f'{self.number}-{self.img_name}', mode='wb') as f:\n",
133 | " f.write(self.img_response.content)"
134 | ]
135 | },
136 | {
137 | "cell_type": "code",
138 | "execution_count": null,
139 | "id": "rental-printing",
140 | "metadata": {},
141 | "outputs": [],
142 | "source": [
143 | "df_soup_rows = []\n",
144 | "t0 = time()\n",
145 | "for no in range(oldest_xkcd_comic, latest_xkcd_comic+1):\n",
146 | " comic = xkcdComicSoup(no)\n",
147 | " row = {\n",
148 | " 'number': comic.number,\n",
149 | " 'date': comic.date,\n",
150 | " 'title': comic.title,\n",
151 | " 'caption': comic.caption,\n",
152 | " 'img_name': comic.img_name,\n",
153 | " 'img': comic.img_url\n",
154 | " }\n",
155 | " df_soup_rows.append(row)\n",
156 | " # comic.save_img_to_disk()\n",
157 | "t1 = time()\n",
158 | "time_soup = t1 - t0\n",
159 | "df_soup = pd.DataFrame(df_soup_rows)\n",
160 | "print(\"Data download completed in {:.3f} seconds.\".format(time_soup))"
161 | ]
162 | },
163 | {
164 | "cell_type": "code",
165 | "execution_count": null,
166 | "id": "dirty-dating",
167 | "metadata": {},
168 | "outputs": [],
169 | "source": [
170 | "df_soup.tail()"
171 | ]
172 | },
173 | {
174 | "cell_type": "markdown",
175 | "id": "trying-captain",
176 | "metadata": {},
177 | "source": [
178 | "## Browser Automation"
179 | ]
180 | },
181 | {
182 | "cell_type": "code",
183 | "execution_count": null,
184 | "id": "public-roommate",
185 | "metadata": {},
186 | "outputs": [],
187 | "source": [
188 | "browser = webdriver.Firefox(executable_path='C:/Users/Andrea/Documents/geckodriver.exe')"
189 | ]
190 | },
191 | {
192 | "cell_type": "code",
193 | "execution_count": null,
194 | "id": "above-liver",
195 | "metadata": {},
196 | "outputs": [],
197 | "source": [
198 | "df_dom_rows = []\n",
199 | "t0 = time()\n",
200 | "browser.get('https://xkcd.com') # point the browser to the homepage\n",
201 | "number = 3000\n",
202 | "\n",
203 | "while number > oldest_xkcd_comic:\n",
204 | " # Find the number of the comic\n",
205 | " pass\n",
206 | " \n",
207 | " # Find the title of the comic\n",
208 | " pass\n",
209 | " \n",
210 | " # Find the caption of the comic\n",
211 | " pass\n",
212 | " \n",
213 | " # Find the URL of the comic image\n",
214 | " pass\n",
215 | " \n",
216 | " # Find the name of the PNG file\n",
217 | " pass\n",
218 | " \n",
219 | " # Collect information for dataset\n",
220 | " row = {\n",
221 | " 'number': number,\n",
222 | " 'title': title,\n",
223 | " 'caption': caption,\n",
224 | " 'img_name': img_name,\n",
225 | " 'img': img_url\n",
226 | " }\n",
227 | " \n",
228 | " # Append information to list\n",
229 | " df_dom_rows.append(row)\n",
230 | " \n",
231 | " # Go to the previous comic\n",
232 | " pass\n",
233 | " \n",
234 | "browser.quit() # close the automated browser window\n",
235 | "t1 = time()\n",
236 | "time_dom = t1-t0\n",
237 | "print(\"Data download completed in {:.3f} seconds.\".format(time_dom))"
238 | ]
239 | },
240 | {
241 | "cell_type": "code",
242 | "execution_count": null,
243 | "id": "common-gravity",
244 | "metadata": {},
245 | "outputs": [],
246 | "source": [
247 | "df_dom = pd.DataFrame(df_dom_rows)\n",
248 | "df_dom.head()"
249 | ]
250 | },
251 | {
252 | "cell_type": "code",
253 | "execution_count": null,
254 | "id": "understood-living",
255 | "metadata": {},
256 | "outputs": [],
257 | "source": [
258 | "print('Comics retrieved: {:d}.'.format(latest_xkcd_comic - oldest_xkcd_comic + 1))\n",
259 | "print('HTTP programming took {:.3f} seconds.'.format(time_json))\n",
260 | "print('HTML parsing took {:.3f} seconds.'.format(time_soup))\n",
261 | "print('Browser automation took {:.3f} seconds.'.format(time_dom))"
262 | ]
263 | }
264 | ],
265 | "metadata": {
266 | "kernelspec": {
267 | "display_name": "Python 3",
268 | "language": "python",
269 | "name": "python3"
270 | },
271 | "language_info": {
272 | "codemirror_mode": {
273 | "name": "ipython",
274 | "version": 3
275 | },
276 | "file_extension": ".py",
277 | "mimetype": "text/x-python",
278 | "name": "python",
279 | "nbconvert_exporter": "python",
280 | "pygments_lexer": "ipython3",
281 | "version": "3.8.8"
282 | }
283 | },
284 | "nbformat": 4,
285 | "nbformat_minor": 5
286 | }
287 |
--------------------------------------------------------------------------------
/class_notebooks/img/jupyter_home.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AndreaPasqualini/numerical_methods_macroeconomics/60333ff64f88f2cb17ac045ee35541e7fc7d47d6/class_notebooks/img/jupyter_home.png
--------------------------------------------------------------------------------
/class_notebooks/img/jupyter_notebook.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AndreaPasqualini/numerical_methods_macroeconomics/60333ff64f88f2cb17ac045ee35541e7fc7d47d6/class_notebooks/img/jupyter_notebook.png
--------------------------------------------------------------------------------
/class_notebooks/img/jupyter_notebook_new.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AndreaPasqualini/numerical_methods_macroeconomics/60333ff64f88f2cb17ac045ee35541e7fc7d47d6/class_notebooks/img/jupyter_notebook_new.png
--------------------------------------------------------------------------------
/class_notebooks/img/spyder.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AndreaPasqualini/numerical_methods_macroeconomics/60333ff64f88f2cb17ac045ee35541e7fc7d47d6/class_notebooks/img/spyder.png
--------------------------------------------------------------------------------
/code_examples/README.md:
--------------------------------------------------------------------------------
1 | # Examples
2 |
3 | ## Uses of `NMmacro`
4 |
5 | This folder contains some scripts that use methods from the [`NMmacro`](../NMmacro/) package.
6 |
7 |
8 | ## Other notebooks
9 |
10 | The notebook [`discretizing_ar1_processes.ipynb`](./discretizing_ar1_processes.ipynb) shows the use of the [Tauchen (1986)](https://www.sciencedirect.com/science/article/pii/0165176586901680) and [Tauchen and Hussey (1991)](https://doi.org/10.2307/2938261) methods to discretize AR(1) processes.
11 | It compares simulations of two different AR(1) models with simulations of the corresponding discrete Markov chains to show how the "fit" changes as the structural parameters change.
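
For reference, the Tauchen (1986) step can be sketched in a few lines. This is a hedged sketch, not the notebook's code: the function name, the grid-width parameter `m`, and its default are my own choices.

```python
import numpy as np
from scipy.stats import norm

def tauchen(n, mu, rho, sigma, m=3.0):
    """Discretize y' = mu + rho * y + eps, eps ~ N(0, sigma^2), on n grid points."""
    sd_y = sigma / np.sqrt(1 - rho**2)                 # unconditional std. dev.
    y = mu / (1 - rho) + np.linspace(-m * sd_y, m * sd_y, n)
    step = y[1] - y[0]                                 # distance between grid points
    P = np.empty((n, n))
    for i in range(n):
        z = (y - mu - rho * y[i]) / sigma              # standardized distances to y'
        P[i, :] = norm.cdf(z + step / (2 * sigma)) - norm.cdf(z - step / (2 * sigma))
        P[i, 0] = norm.cdf(z[0] + step / (2 * sigma))  # pile tail mass on endpoints
        P[i, -1] = 1 - norm.cdf(z[-1] - step / (2 * sigma))
    return y, P
```

Each row of `P` integrates the conditional normal density over the interval around each grid point, so rows sum to one by construction.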
12 |
13 | The notebook [`hermgauss_vs_linspace.ipynb`](./hermgauss_vs_linspace.ipynb) compares a linearly spaced grid (obtained with `numpy.linspace`) to the Gauss-Hermite nodes (obtained with `numpy.polynomial.hermite.hermgauss`).
14 | It provides an intuition on why the [Tauchen and Hussey (1991)](https://doi.org/10.2307/2938261) method is an improvement relative to [Tauchen (1986)](https://www.sciencedirect.com/science/article/pii/0165176586901680).
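
The comparison in that notebook boils down to one call each; here is a minimal illustration (the node count and variable names are mine, not the notebook's).

```python
import numpy as np

n = 9
gh_nodes, gh_weights = np.polynomial.hermite.hermgauss(n)   # Gauss-Hermite nodes/weights
lin_nodes = np.linspace(gh_nodes.min(), gh_nodes.max(), n)  # same support, even spacing

# Gauss-Hermite nodes cluster where the weight function exp(-x^2) puts its mass,
# which is why an n-point rule integrates polynomials up to degree 2n-1 exactly,
# while a linearly spaced grid enjoys no such guarantee.
print(np.round(np.diff(gh_nodes), 3))   # spacing shrinks toward the center
print(np.round(np.diff(lin_nodes), 3))  # spacing is constant
```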
15 |
16 |
17 | ## Miscellanea
18 |
19 | The file [`vfi_convergence.m`](./vfi_convergence.m) is a Matlab script that visualizes VFI for the deterministic Neoclassical Growth Model.
20 | It creates a figure that is updated at every mouse click (or key press).
21 | Every update corresponds to a new proposal for the value function.
22 | It finally shows the approximate solution to the problem.
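
The same convergence pattern can be reproduced in a few lines of Python. Below is a hedged sketch of VFI for a deterministic NCGM with log utility and full depreciation; all parameter values and variable names are illustrative, not taken from the Matlab script.

```python
import numpy as np

alpha, beta = 0.3, 0.95
k = np.linspace(0.05, 0.5, 200)   # capital grid
V = np.zeros(k.size)              # initial guess for the value function

# c[i, j] is consumption when moving from capital k[i] to k[j] (full depreciation)
c = k[:, None] ** alpha - k[None, :]
u = np.where(c > 0, np.log(np.where(c > 0, c, 1.0)), -np.inf)  # -inf rules out c <= 0

for it in range(1000):
    V_new = np.max(u + beta * V[None, :], axis=1)  # Bellman update on the grid
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

policy = k[np.argmax(u + beta * V[None, :], axis=1)]  # approximate k'(k)
```

With log utility and full depreciation the exact policy is known, $k' = \alpha\beta k^\alpha$, so the grid-based solution can be checked against it.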
23 |
24 |
--------------------------------------------------------------------------------
/code_examples/deterministic_methods.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | """
3 | title: Deterministic Methods
4 | author: Andrea Pasqualini
5 | created: 20 January 2019
6 |
7 | This script showcases the use of the class NMmacro.models.NCGM that I wrote.
8 | """
9 |
10 | #%% Importing necessary packages and setting up working directory
11 |
12 | import sys
13 | sys.path.insert(0, '../')
14 |
15 | import numpy as np
16 | from NMmacro.models import NCGM
17 |
18 |
19 | #%% Calibrating and solving the model
20 |
21 | alpha = 0.3
22 | beta = 0.95
23 | gamma = 1.5
24 | delta = 0.1
25 |
26 | k_ss = ((1 - (1-delta) * beta) / (alpha * beta)) ** (1 / (alpha-1))
27 | k_lo, k_hi = np.array([0.1, 1.9]) * k_ss
28 |
29 | k = np.linspace(start=k_lo, stop=k_hi, num=1000)
30 |
31 | # initial condition for PFI
32 | guess_c_pfi = 0.1 * np.ones(k.shape)
33 |
34 | # initial condition for direct projection
35 | guess_c_proj = 0.4 + 0.35 * k - 0.02 * k**2
36 |
37 | mdl = NCGM(alpha, beta, gamma, delta)
38 | vfi_cp, vfi_kp, vfi_v = mdl.solve_vfi(k)
39 | pfi_cp, pfi_kp = mdl.solve_pfi(k, guess_c_pfi)
40 | pro_cp, pro_kp = mdl.solve_proj(k, guess_c_proj)
41 |
42 |
43 | #%% Plotting results
44 |
45 | mdl.plot_solution(k, vfi_cp, vfi_kp, vfi_v)
46 | mdl.plot_solution(k, pfi_cp, pfi_kp)
47 | mdl.plot_solution(k, pro_cp, pro_kp)
48 |
--------------------------------------------------------------------------------
/code_examples/gpu_computing/ReadMe.md:
--------------------------------------------------------------------------------
1 | # GPU Computing for Economists
2 |
3 | This folder contains resources I used to create a class on GPU computing.
4 |
5 |
6 | ## Replication instructions
7 |
8 |
9 | ### C vs Python loop speeds
10 |
11 | The file [`loop.sh`](./loop.sh) executes [`loop.py`](./loop.py) and [`loop.c`](./loop.c).
12 | You need both a POSIX-compatible shell (e.g., Bash, zsh) and a GCC-compatible C compiler.
13 | Linux and macOS ship with both out of the box (Debian---and derivatives---users may need to `apt install build-essential`).
14 | Windows has neither by default, although you can easily obtain both through the [Windows Subsystem for Linux](https://docs.microsoft.com/en-us/windows/wsl/about) (WSL).
15 |
16 | The `time` in [`loop.sh`](./loop.sh) refers to the [Bash/zsh keyword](https://en.wikipedia.org/wiki/Time_(Unix)#Bash), not the [standalone GNU program](https://en.wikipedia.org/wiki/Time_(Unix)), although the two do essentially the same thing.
17 |
18 |
19 | ### VFI benchmarks
20 |
21 | The file [`run-benchmarks.sh`](./run-benchmarks.sh) is a shell script that executes [`bench_cpu.py`](./bench_cpu.py), [`bench_jit.py`](./bench_jit.py) and [`bench_gpu.py`](./bench_gpu.py).
22 | They produce the files [`results_cpu.csv`](./results_cpu.csv), [`results_jit.csv`](./results_jit.csv) and [`results_gpu.csv`](./results_gpu.csv) respectively.
23 |
24 | I ran the code on an [Asus N550JK "VivoBook Pro"](https://www.asus.com/Laptops/N550JK/specifications/), which has the following hardware and software.
25 |
26 | - [Intel Core i7-4700HQ](https://ark.intel.com/content/www/us/en/ark/products/75116/intel-core-i7-4700hq-processor-6m-cache-up-to-3-40-ghz.html)
27 | - [Nvidia GeForce GTX 850M](https://www.geforce.com/hardware/notebook-gpus/geforce-gtx-850m/specifications)
28 | - 8 GB of RAM
29 | - 256 GB SSD, SATA 3 (6 GB/s)
30 | - [Ubuntu](https://ubuntu.com/desktop) 19.10 (Linux kernel 5.3.0)
31 |
32 | I installed Miniconda and the following packages
33 |
34 | ```bash
35 | $ conda install numpy numba pandas tqdm cudatoolkit=10.1.243
36 | ```
37 |
38 | The package `tqdm` is used to draw fancy progress bars on the terminal.
39 | This specific version of `cudatoolkit` is a hard requirement of the GTX 850M.
40 |
41 | After the `.csv` files have been generated, I used [`benchmarks_gpu.r`](./benchmarks_gpu.r) with [R](https://www.r-project.org/), together with the [tidyverse](https://www.tidyverse.org/) packages to produce the charts [`benchmarks.pdf`](./benchmarks.pdf) and [`benchmarks-logscale.pdf`](benchmarks-logscale.pdf).
42 |
43 |
44 | ## Slides
45 |
46 | The file [`ta6_gpu_computing.tex`](./slides/ta6_gpu_computing.tex) uses the [Metropolis theme](https://github.com/matze/mtheme) and requires [xelatex](https://en.wikipedia.org/wiki/XeTeX) and the [Fira font family](http://mozilla.github.io/Fira/) to compile with the proper fonts.
47 |
48 | The file [`gpu_parallel_visual.py`](./slides/img/gpu_parallel_visual.py) generates a bunch of images that illustrate why GPU computing times tend to grow "less exponentially" with the size of the problem than CPU times do.
49 |
50 |
51 | ## Credits
52 |
53 | > Render to Caesar the things that are Caesar's.
54 |
55 | Huge shout out to [@giacomobattiston](https://github.com/giacomobattiston), for understanding GPUs together with me.
56 |
57 | - I took the images [`hw-sw-thread_block.jpg`](./slides/img/hw-sw-thread_block.jpg) and [`block-thread.pdf`](./slides/img/block-thread.pdf) from the [Wikipedia page on thread blocks](https://en.wikipedia.org/wiki/Thread_block_(CUDA_programming)).
58 | - I took the image [`stencil.pdf`](./slides/img/stencil.pdf) from the [Wikipedia page on stencil code](https://en.wikipedia.org/wiki/Stencil_code).
59 | - I took the image [`nvidia-rtx-2080-ti.jpg`](./slides/img/nvidia-rtx-2080-ti.jpg) from this [TechSpot article](https://www.techspot.com/products/graphics-cards/nvidia-geforce-rtx-2080-ti-11gb-gddr6-pcie.187702/).
60 |
--------------------------------------------------------------------------------
/code_examples/gpu_computing/bench_cpu.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 |
3 | from sys import argv
4 | from time import time # keep track of time
5 | import numpy as np
6 | import pandas as pd
7 | from tqdm import trange
8 |
9 |
10 | def vmax(V, b_grid, r, y, beta):
11 | V_old = np.copy(V)
12 | for ix in range(b_grid.size):
13 | cons = (1+r) * b_grid[ix] + y - b_grid
14 | cons[cons<=0] = np.nan
15 | period_util = np.log(cons)
16 | V[ix] = np.nanmax(period_util + beta * V_old)
17 |
18 |
19 | def time_vfi_cpu(nk, r=0.01, y=1, beta=0.95, tol=1e-6):
20 | b_grid = np.linspace(0.1, 10, num=nk)
21 | V_cpu = np.zeros((nk,), dtype=np.float64)
22 | # n_iter = 0
23 | t0_cpu = time()
24 | while True:
25 | # n_iter += 1
26 | V_cpu_old = np.copy(V_cpu)
27 | vmax(V_cpu, b_grid, r, y, beta)
28 | crit_cpu = np.max(np.abs(V_cpu - V_cpu_old))
29 | # print(crit_cpu)
30 | if crit_cpu < tol:
31 | # print('VFI converged in {} iterations.'.format(n_iter))
32 | break
33 | t1_cpu = time()
34 | return t1_cpu - t0_cpu
35 |
36 |
37 | if __name__ == '__main__':
38 |
39 | N = int(argv[1])
40 | out_csv_file = argv[2]
41 |
42 | # N = 1000
43 | # out_csv_file = './tmp.csv'
44 |
45 | # grid_sizes = range(32, 4352+1, 32)
46 | grid_sizes = range(25, 1000+1, 25)
47 |
48 | times_cpu = np.zeros((N, len(grid_sizes)))
49 |
50 | print('Solving with CPU...')
51 | for j, nk in enumerate(grid_sizes):
52 | for i in trange(N, desc='nk = {}'.format(nk)):
53 | times_cpu[i, j] = time_vfi_cpu(nk)
54 |
55 | tmp0_cpu = pd.DataFrame(times_cpu, columns=list(map(str, grid_sizes)))
56 | tmp1_cpu = tmp0_cpu.melt(var_name='nk', value_name='time')
57 | results = tmp1_cpu.assign(target='cpu')[['target', 'nk', 'time']]
58 |
59 | results.to_csv(out_csv_file)
60 |
--------------------------------------------------------------------------------
/code_examples/gpu_computing/bench_gpu.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 |
3 | from sys import argv
4 | from time import time # keep track of time
5 | from math import log
6 | from operator import pow as pwr # exponentiation operator (**)
7 | import numpy as np
8 | import pandas as pd
9 | from numba import cuda, void, float64
10 | from tqdm import trange
11 |
12 |
13 | @cuda.jit(void(float64[:], float64[:], float64, float64, float64))
14 | def vmax_cuda(V, k_grid, r, y, beta):
15 | ix = cuda.blockIdx.x * cuda.blockDim.x + cuda.threadIdx.x
16 | VV = pwr(-10.0, 5)
17 | for ixp in range(k_grid.size):
18 | cons = (1 + r) * k_grid[ix] + y - k_grid[ixp]
19 | if cons <= 0:
20 | period_util = pwr(-10, 5)
21 | else:
22 | period_util = log(cons)
23 | expected = V[ixp]
24 | values = period_util + beta * expected
25 | if values > VV:
26 | VV = values
27 | V[ix] = VV
28 |
29 |
30 | def time_vfi_gpu(nk, r=0.01, y=1, beta=0.95, tol=1e-4):
31 | cuda_threads = nk
32 | # cuda_tpb = 640 # Threads Per Block (TPB)
33 | cuda_blocks = 25
34 | cuda_tpb = cuda_threads // cuda_blocks
35 | # cuda_blocks = cuda_threads // cuda_tpb + 1 # ceil division
36 | block_dims = (cuda_tpb, ) # no. of threads per block
37 | grid_dims = (cuda_blocks, ) # no. of blocks on grid
38 | k_grid = np.linspace(0.1, 10, num=nk)
39 | V_gpu = np.zeros((nk,), dtype=np.float64)
40 | t0_gpu = time()
41 | while True:
42 | V_gpu_old = np.copy(V_gpu)
43 | vmax_cuda[grid_dims, block_dims](V_gpu, k_grid, r, y, beta)
44 | cuda.synchronize() # before proceeding, wait that all cores finish
45 | crit_gpu = np.max(np.abs(V_gpu - V_gpu_old))
46 | if crit_gpu < tol:
47 | break
48 | t1_gpu = time()
49 | return t1_gpu - t0_gpu
50 |
51 |
52 | if __name__ == '__main__':
53 |
54 | N = int(argv[1])
55 | out_csv_file = argv[2]
56 |
57 | # grid_sizes = range(32, 4352+1, 32)
58 | grid_sizes = range(25, 1000+1, 25)
59 |
60 | times_gpu = np.zeros((N, len(grid_sizes)))
61 |
62 | print('Solving with GPU...')
63 | for j, nk in enumerate(grid_sizes):
64 | for i in trange(N, desc='nk = {}'.format(nk)):
65 | times_gpu[i, j] = time_vfi_gpu(nk)
66 |
67 | tmp0_gpu = pd.DataFrame(times_gpu, columns=list(map(str, grid_sizes)))
68 | tmp1_gpu = tmp0_gpu.melt(var_name='nk', value_name='time')
69 | results = tmp1_gpu.assign(target='gpu')[['target', 'nk', 'time']]
70 |
71 | results.to_csv(out_csv_file)
72 |
--------------------------------------------------------------------------------
/code_examples/gpu_computing/bench_jit.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 |
3 | from sys import argv
4 | from time import time # keep track of time
5 | from math import log
6 | from operator import pow as pwr # exponentiation operator (**)
7 | import numpy as np
8 | import pandas as pd
9 | from numba import jit, void, float64
10 | from tqdm import trange
11 |
12 |
13 | @jit(void(float64[:], float64[:], float64, float64, float64), nopython=True)
14 | def vmax_jit(V, k_grid, r, y, beta):
15 | for ix in range(k_grid.size):
16 | VV = pwr(-10.0, 5)
17 | for ixp in range(k_grid.size):
18 | cons = (1 + r) * k_grid[ix] + y - k_grid[ixp]
19 | if cons <= 0:
20 | period_util = pwr(-10, 5)
21 | else:
22 | period_util = log(cons)
23 | expected = V[ixp]
24 | values = period_util + beta * expected
25 | if values > VV:
26 | VV = values
27 | V[ix] = VV
28 |
29 |
30 | def time_vfi_jit(nk, r=0.01, y=1, beta=0.95, tol=1e-4):
31 | k_grid = np.linspace(0.1, 10, num=nk)
32 | V_jit = np.zeros((nk,), dtype=np.float64)
33 | t0_jit = time()
34 | while True:
35 | V_jit_old = np.copy(V_jit)
36 | vmax_jit(V_jit, k_grid, r, y, beta)
37 | crit_jit = np.max(np.abs(V_jit - V_jit_old))
38 | if crit_jit < tol:
39 | break
40 | t1_jit = time()
41 | return t1_jit - t0_jit
42 |
43 |
44 | if __name__ == '__main__':
45 |
46 | N = int(argv[1])
47 | out_csv_file = argv[2]
48 |
49 | # grid_sizes = range(32, 4352+1, 32)
50 | grid_sizes = range(25, 1000+1, 25)
51 |
52 | times_jit = np.zeros((N, len(grid_sizes)))
53 |
54 | print('Solving with JIT...')
55 | for j, nk in enumerate(grid_sizes):
56 | for i in trange(N, desc='nk = {}'.format(nk)):
57 | times_jit[i, j] = time_vfi_jit(nk)
58 |
59 | tmp0_jit = pd.DataFrame(times_jit, columns=list(map(str, grid_sizes)))
60 | tmp1_jit = tmp0_jit.melt(var_name='nk', value_name='time')
61 | results = tmp1_jit.assign(target='jit')[['target', 'nk', 'time']]
62 |
63 | results.to_csv(out_csv_file)
64 |
--------------------------------------------------------------------------------
/code_examples/gpu_computing/benchmarks-logscale.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AndreaPasqualini/numerical_methods_macroeconomics/60333ff64f88f2cb17ac045ee35541e7fc7d47d6/code_examples/gpu_computing/benchmarks-logscale.pdf
--------------------------------------------------------------------------------
/code_examples/gpu_computing/benchmarks.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AndreaPasqualini/numerical_methods_macroeconomics/60333ff64f88f2cb17ac045ee35541e7fc7d47d6/code_examples/gpu_computing/benchmarks.pdf
--------------------------------------------------------------------------------
/code_examples/gpu_computing/benchmarks_cpu.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AndreaPasqualini/numerical_methods_macroeconomics/60333ff64f88f2cb17ac045ee35541e7fc7d47d6/code_examples/gpu_computing/benchmarks_cpu.pdf
--------------------------------------------------------------------------------
/code_examples/gpu_computing/benchmarks_gpu.r:
--------------------------------------------------------------------------------
1 | library(tidyverse)
2 |
3 | results_cpu <- read_csv('./results_cpu.csv') %>% rename(obs=X1, nb=nk)
4 | results_jit <- read_csv('./results_jit.csv') %>% rename(obs=X1, nb=nk)
5 | results_gpu <- read_csv('./results_gpu.csv') %>% rename(obs=X1, nb=nk)
6 |
7 | master <- bind_rows(results_cpu, results_jit, results_gpu)
8 |
9 | table <- master %>%
10 | spread(key='target', value='time') %>%
11 | select(obs, nb, cpu, jit, gpu)
12 |
13 | statistics <- master %>%
14 | mutate(time = na_if(time, 0)) %>%
15 | group_by(target, nb) %>%
16 | summarize(
17 | avg_time = mean(time, na.rm=TRUE),
18 | std_time = sd(time, na.rm=TRUE),
19 | min_time = min(time, na.rm=TRUE),
20 | q25_time = quantile(time, 0.25, na.rm=TRUE),
21 | q50_time = quantile(time, 0.50, na.rm=TRUE),
22 | q75_time = quantile(time, 0.75, na.rm=TRUE),
23 | max_time = max(time, na.rm=TRUE)
24 | )
25 |
26 | statistics %>% ggplot(aes(x=nb)) +
27 | geom_ribbon(aes(ymin=min_time, ymax=max_time, fill=target), alpha=0.3) +
28 | geom_line(aes(y=avg_time, color=target)) +
29 | labs(color="Function", fill="Function") +
30 | xlab('Number of gridpoints on state space') +
31 | ylab('VFI time (seconds)')
32 | ggsave('./slides/img/benchmarks.pdf', width=20, height=8, units='cm')
33 |
34 | statistics %>% ggplot(aes(x=nb)) +
35 | geom_ribbon(aes(ymin=min_time, ymax=max_time, fill=target), alpha=0.3) +
36 | geom_line(aes(y=avg_time, color=target)) +
37 | coord_trans(y='log10') +
38 | labs(color="Function", fill="Function") +
39 | xlab('Number of gridpoints on state space') +
40 | ylab('VFI time (seconds, log scale)')
41 | ggsave('./slides/img/benchmarks-logscale.pdf', width=20, height=8, units='cm')
42 |
43 | # statistics %>%
44 | # filter(nb == 100 | nb == 1000) %>%
45 | # arrange(nb, desc(target)) %>%
46 | # as.data.frame() %>%
47 | # stargazer::stargazer(out='./summary_results.tex', type='text', align=TRUE,
48 | # summary=FALSE, rownames=FALSE, digits=5)
49 |
--------------------------------------------------------------------------------
/code_examples/gpu_computing/benchmarks_gpu_computing.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | """
3 | Created on Thu Mar 5 19:16:22 2020
4 |
5 | @author: Andrea
6 | """
7 |
8 | from sys import argv
9 | from time import time # keep track of time
10 | from math import log
11 | from operator import pow as pwr # exponentiation operator (**)
12 | import numpy as np
13 | import pandas as pd
14 | from numba import jit, cuda, void, float64
15 | from tqdm import trange
16 |
17 |
18 | #%% Function definitions
19 |
20 | # def vmax(V, k_grid, r, y, beta):
21 | # for ix in range(k_grid.size):
22 | # VV = pwr(-10.0, 5)
23 | # for ixp in range(k_grid.size):
24 | # cons = (1 + r) * k_grid[ix] + y - k_grid[ixp]
25 | # if cons <= 0:
26 | # period_util = pwr(-10, 5)
27 | # else:
28 | # period_util = log(cons)
29 | # expected = V[ixp]
30 | # values = period_util + beta * expected
31 | # if values > VV:
32 | # VV = values
33 | # V[ix] = VV
34 |
35 |
36 | def vmax(V, k_grid, r, y, beta):
37 | V_old = np.copy(V)
38 | for ix in range(k_grid.size):
39 | cons = (1+r) * k_grid[ix] + y - k_grid
40 | cons[cons<=0] = np.nan
41 | period_util = np.log(cons)
42 | V[ix] = np.nanmax(period_util + beta * V_old)
43 |
44 |
45 | @jit(void(float64[:], float64[:], float64, float64, float64), nopython=True)
46 | def vmax_jit(V, k_grid, r, y, beta):
47 | for ix in range(k_grid.size):
48 | VV = pwr(-10.0, 5)
49 | for ixp in range(k_grid.size):
50 | cons = (1 + r) * k_grid[ix] + y - k_grid[ixp]
51 | if cons <= 0:
52 | period_util = pwr(-10, 5)
53 | else:
54 | period_util = log(cons)
55 | expected = V[ixp]
56 | values = period_util + beta * expected
57 | if values > VV:
58 | VV = values
59 | V[ix] = VV
60 |
61 |
62 | @cuda.jit(void(float64[:], float64[:], float64, float64, float64))
63 | def vmax_cuda(V, k_grid, r, y, beta):
64 |     ix = cuda.blockIdx.x * cuda.blockDim.x + cuda.threadIdx.x
65 |     if ix >= k_grid.size: return  # guard: surplus threads in the last block must not write out of bounds
66 |     VV = pwr(-10.0, 5)
66 | for ixp in range(k_grid.size):
67 | cons = (1 + r) * k_grid[ix] + y - k_grid[ixp]
68 | if cons <= 0:
69 | period_util = pwr(-10, 5)
70 | else:
71 | period_util = log(cons)
72 | expected = V[ixp]
73 | values = period_util + beta * expected
74 | if values > VV:
75 | VV = values
76 | V[ix] = VV
77 |
78 |
79 | def time_vfi_cpu(nk, r=0.01, y=1, beta=0.95, tol=1e-4):
80 | k_grid = np.linspace(0.1, 10, num=nk)
81 | V_cpu = np.zeros((nk,), dtype=np.float64)
82 | t0_cpu = time()
83 | while True:
84 | V_cpu_old = np.copy(V_cpu)
85 | vmax(V_cpu, k_grid, r, y, beta)
86 | crit_cpu = np.max(np.abs(V_cpu - V_cpu_old))
87 | if crit_cpu < tol:
88 | break
89 | t1_cpu = time()
90 | return t1_cpu - t0_cpu
91 |
92 |
93 | def time_vfi_jit(nk, r=0.01, y=1, beta=0.95, tol=1e-4):
94 | k_grid = np.linspace(0.1, 10, num=nk)
95 | V_jit = np.zeros((nk,), dtype=np.float64)
96 | t0_jit = time()
97 | while True:
98 | V_jit_old = np.copy(V_jit)
99 | vmax_jit(V_jit, k_grid, r, y, beta)
100 | crit_jit = np.max(np.abs(V_jit - V_jit_old))
101 | if crit_jit < tol:
102 | break
103 | t1_jit = time()
104 | return t1_jit - t0_jit
105 |
106 |
107 | def time_vfi_gpu(nk, r=0.01, y=1, beta=0.95, tol=1e-4):
108 | cuda_tpb = 1024 # max Threads Per Block (TPB) on RTX 2080Ti
109 | cuda_threads = nk
110 |     cuda_blocks = (cuda_threads + cuda_tpb - 1) // cuda_tpb  # ceil division
111 | block_dims = (cuda_tpb, ) # no. of threads per block
112 | grid_dims = (cuda_blocks, ) # no. of blocks on grid
113 | k_grid = np.linspace(0.1, 10, num=nk)
114 | V_gpu = np.zeros((nk,), dtype=np.float64)
115 | t0_gpu = time()
116 | while True:
117 | V_gpu_old = np.copy(V_gpu)
118 | vmax_cuda[grid_dims, block_dims](V_gpu, k_grid, r, y, beta)
119 | cuda.synchronize() # before proceeding, wait that all cores finish
120 | crit_gpu = np.max(np.abs(V_gpu - V_gpu_old))
121 | if crit_gpu < tol:
122 | break
123 | t1_gpu = time()
124 | return t1_gpu - t0_gpu
125 |
126 |
127 |
128 | #%% Main
129 |
130 | if __name__ == '__main__':
131 |
132 | out_csv_file = argv[1]
133 | target = argv[2] # ['all', 'cpu', 'jit', 'gpu']
134 |
135 | N = 100
136 | # grid_sizes = range(32, 4352+1, 32)
137 | grid_sizes = range(5, 1000+1, 5)
138 |
139 | times_cpu = np.zeros((N, len(grid_sizes)))
140 | times_jit = np.zeros((N, len(grid_sizes)))
141 | times_gpu = np.zeros((N, len(grid_sizes)))
142 |
143 | if target == 'cpu':
144 | print('Solving with CPU...')
145 | for j, nk in enumerate(grid_sizes):
146 | for i in trange(N, desc='nk = {}'.format(nk)):
147 | times_cpu[i, j] = time_vfi_cpu(nk)
148 |
149 | elif target == 'jit':
150 | print('Solving with JIT...')
151 | for j, nk in enumerate(grid_sizes):
152 | for i in trange(N, desc='nk = {}'.format(nk)):
153 | times_jit[i, j] = time_vfi_jit(nk)
154 |
155 | elif target == 'gpu':
156 | print('Solving with GPU...')
157 | for j, nk in enumerate(grid_sizes):
158 | for i in trange(N, desc='nk = {}'.format(nk)):
159 | times_gpu[i, j] = time_vfi_gpu(nk)
160 |
161 | elif target == 'all':
162 | for j, nk in enumerate(grid_sizes):
163 | for i in trange(N, desc='nk = {}'.format(nk)):
164 | times_cpu[i, j] = time_vfi_cpu(nk)
165 | times_jit[i, j] = time_vfi_jit(nk)
166 | times_gpu[i, j] = time_vfi_gpu(nk)
167 |
168 | tmp0_cpu = pd.DataFrame(times_cpu, columns=list(map(str, grid_sizes)))
169 | tmp0_jit = pd.DataFrame(times_jit, columns=list(map(str, grid_sizes)))
170 | tmp0_gpu = pd.DataFrame(times_gpu, columns=list(map(str, grid_sizes)))
171 |
172 | tmp1_cpu = tmp0_cpu.melt(var_name='nk', value_name='time')
173 | tmp1_jit = tmp0_jit.melt(var_name='nk', value_name='time')
174 | tmp1_gpu = tmp0_gpu.melt(var_name='nk', value_name='time')
175 |
176 | results_cpu = tmp1_cpu.assign(target='cpu')[['target', 'nk', 'time']]
177 | results_jit = tmp1_jit.assign(target='jit')[['target', 'nk', 'time']]
178 | results_gpu = tmp1_gpu.assign(target='gpu')[['target', 'nk', 'time']]
179 |
180 |     results = pd.concat([results_cpu, results_jit, results_gpu],
181 |                         ignore_index=True)
182 |
183 | results.to_csv(out_csv_file)
184 |
--------------------------------------------------------------------------------
/code_examples/gpu_computing/loop.c:
--------------------------------------------------------------------------------
1 | int main () {
2 | int x = 0;
3 | int x_max = 1000000000;
4 | for (int i = 1; i <= x_max; i++) {
5 | x += 1;
6 | }
7 | }
8 |
--------------------------------------------------------------------------------
/code_examples/gpu_computing/loop.py:
--------------------------------------------------------------------------------
1 | x = 0
2 | x_max = 1000000000
3 | for i in range(x_max):
4 | x += 1
5 |
--------------------------------------------------------------------------------
/code_examples/gpu_computing/loop.sh:
--------------------------------------------------------------------------------
1 | g++ ./loop.c
2 | time ./a.out
3 |
4 | time python3 ./loop.py
5 |
--------------------------------------------------------------------------------
/code_examples/gpu_computing/run-benchmarks.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | python ./bench_jit.py 1000 ./results_jit.csv
4 | python ./bench_cpu.py 1000 ./results_cpu.csv
5 | python ./bench_gpu.py 1000 ./results_gpu.csv
6 |
--------------------------------------------------------------------------------
/code_examples/gpu_computing/slides/commands_definitions.tex:
--------------------------------------------------------------------------------
1 | \newcommand{\code}[1]{\texttt{\smaller#1}}
2 | \newcommand{\reals}{\mathbb{R}}
3 | \newcommand{\complex}{\mathbb{C}}
4 | \newcommand{\E}{\mathbf{E}}
5 | \newcommand{\var}{\mathrm{Var}}
6 | \newcommand{\cov}{\mathrm{Cov}}
7 |
8 | \definecolor{mygreen}{rgb}{0,0.6,0}
9 | \definecolor{mygray}{rgb}{0.5,0.5,0.5}
10 | \definecolor{mymauve}{rgb}{0.58,0,0.82}
11 | \definecolor{bggray}{rgb}{0.95,0.95,0.975}
12 |
13 | \lstdefinestyle{prompt}{ %
14 | backgroundcolor=\color{bggray},
15 | basicstyle=\ttfamily\footnotesize,
16 | language=bash,
17 | frame=none,
18 | morekeywords={\$},
19 | keywordstyle=\ttfamily\sl,
20 | numbers=left,
21 | numberstyle=\ttfamily\tiny\color{mygray},
22 | autogobble=true
23 | }
24 |
25 | \lstdefinestyle{python_output}{ %
26 | backgroundcolor=\color{bggray},
27 | basicstyle=\ttfamily\footnotesize,
28 | language=bash,
29 | frame=none,
30 | keywordstyle=\color{black},
31 | numbers=left,
32 | numberstyle=\ttfamily\tiny\color{mygray},
33 | stringstyle=\color{black},
34 | autogobble=true
35 | }
36 |
37 | \lstset{ %
38 | backgroundcolor=\color{bggray}, % choose the background color; you must add \usepackage{color} or \usepackage{xcolor}; should come as last argument
39 | basicstyle=\ttfamily\footnotesize, % the size of the fonts that are used for the code
40 | breakatwhitespace=false, % sets if automatic breaks should only happen at whitespace
41 | breaklines=true, % sets automatic line breaking
42 | captionpos=b, % sets the caption-position to bottom
43 | commentstyle=\color{mygreen}, % comment style
44 | deletekeywords={...}, % if you want to delete keywords from the given language
45 | escapeinside={\%*}{*)}, % if you want to add LaTeX within your code
46 | extendedchars=true, % lets you use non-ASCII characters; for 8-bits encodings only, does not work with UTF-8
47 | frame=none, % adds a frame around the code
48 | keepspaces=true, % keeps spaces in text, useful for keeping indentation of code (possibly needs columns=flexible)
49 | keywordstyle=\color{blue}, % keyword style
50 | language=Python, % the language of the code
51 | morekeywords={*,True,False,numpy,scipy,matplotlib,np,sc,plt,la,linalg,whos,det,inv,diag,array,allclose,linspace,pyplot,plt,show,subplots,plot,grid,legend,set_xlabel,set_ylabel,set_title,argmax,num,linewidth,color,label,rc,usetex,math,pandas,DataFrame,random,zeros,nanmax,nanargmax,nan,ones,polyfit,polyval,optimize,fsolve,interpolate,interp1d,stats,norm,diff,cdf,pdf,reshape,polynomial,hermite,hermgauss,sqrt,fftpack,as,sparse,spmatrix,jit,njit,cuda,vectorize,guvectorize,stencil,@jit,@njit,@cuda,@vectorize,@guvectorize,@stencil...}, % if you want to add more keywords to the set
52 | numbers=left, % where to put the line-numbers; possible values are (none, left, right)
53 | numbersep=5pt, % how far the line-numbers are from the code
54 | numberstyle=\ttfamily\tiny\color{mygray}, % the style that is used for the line-numbers
55 | rulecolor=\color{mygray}, % if not set, the frame-color may be changed on line-breaks within not-black text (e.g. comments (green here))
56 | showspaces=false, % show spaces everywhere adding particular underscores; it overrides 'showstringspaces'
57 | showstringspaces=false, % underline spaces within strings only
58 | showtabs=false, % show tabs within strings adding particular underscores
59 | stepnumber=1, % the step between two line-numbers. If it's 1, each line will be numbered
60 | stringstyle=\color{mymauve}, % string literal style
61 | tabsize=2, % sets default tabsize to 2 spaces
62 | autogobble=true%, % adjusts indentation and newline characters
63 | %title=\lstname % show the filename of files included with \lstinputlisting; also try caption instead of title
64 | }
65 |
--------------------------------------------------------------------------------
/code_examples/gpu_computing/slides/img/Perspective_Projection_Principle.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AndreaPasqualini/numerical_methods_macroeconomics/60333ff64f88f2cb17ac045ee35541e7fc7d47d6/code_examples/gpu_computing/slides/img/Perspective_Projection_Principle.jpg
--------------------------------------------------------------------------------
/code_examples/gpu_computing/slides/img/benchmarks-logscale.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AndreaPasqualini/numerical_methods_macroeconomics/60333ff64f88f2cb17ac045ee35541e7fc7d47d6/code_examples/gpu_computing/slides/img/benchmarks-logscale.pdf
--------------------------------------------------------------------------------
/code_examples/gpu_computing/slides/img/benchmarks.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AndreaPasqualini/numerical_methods_macroeconomics/60333ff64f88f2cb17ac045ee35541e7fc7d47d6/code_examples/gpu_computing/slides/img/benchmarks.pdf
--------------------------------------------------------------------------------
/code_examples/gpu_computing/slides/img/block-thread.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AndreaPasqualini/numerical_methods_macroeconomics/60333ff64f88f2cb17ac045ee35541e7fc7d47d6/code_examples/gpu_computing/slides/img/block-thread.pdf
--------------------------------------------------------------------------------
/code_examples/gpu_computing/slides/img/gpu_parallel_visual.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import matplotlib.pyplot as plt
3 |
4 | n_frames = 10
5 | n_blocks = 15
6 | n_tbp = 15
7 | x = np.zeros((n_tbp, 1), dtype=int)
8 |
9 | fig_size = np.array((16, 8)) / 2.54
10 |
11 | fig, ax = plt.subplots(ncols=n_blocks, figsize=fig_size)
12 | for i in range(n_frames):
13 | x[n_tbp-(i+1), 0] = 1
14 | for j in range(n_blocks):
15 | ax[j].imshow(x, cmap='Oranges')
16 | ax[j].get_xaxis().set_visible(False)
17 | ax[j].get_yaxis().set_visible(False)
18 | fig.savefig('./img/gpu_parallel_visual_{}.pdf'.format(i+1))
19 |
--------------------------------------------------------------------------------
/code_examples/gpu_computing/slides/img/gpu_parallel_visual_1.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AndreaPasqualini/numerical_methods_macroeconomics/60333ff64f88f2cb17ac045ee35541e7fc7d47d6/code_examples/gpu_computing/slides/img/gpu_parallel_visual_1.pdf
--------------------------------------------------------------------------------
/code_examples/gpu_computing/slides/img/gpu_parallel_visual_2.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AndreaPasqualini/numerical_methods_macroeconomics/60333ff64f88f2cb17ac045ee35541e7fc7d47d6/code_examples/gpu_computing/slides/img/gpu_parallel_visual_2.pdf
--------------------------------------------------------------------------------
/code_examples/gpu_computing/slides/img/gpu_parallel_visual_3.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AndreaPasqualini/numerical_methods_macroeconomics/60333ff64f88f2cb17ac045ee35541e7fc7d47d6/code_examples/gpu_computing/slides/img/gpu_parallel_visual_3.pdf
--------------------------------------------------------------------------------
/code_examples/gpu_computing/slides/img/gpu_parallel_visual_4.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AndreaPasqualini/numerical_methods_macroeconomics/60333ff64f88f2cb17ac045ee35541e7fc7d47d6/code_examples/gpu_computing/slides/img/gpu_parallel_visual_4.pdf
--------------------------------------------------------------------------------
/code_examples/gpu_computing/slides/img/gpu_parallel_visual_5.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AndreaPasqualini/numerical_methods_macroeconomics/60333ff64f88f2cb17ac045ee35541e7fc7d47d6/code_examples/gpu_computing/slides/img/gpu_parallel_visual_5.pdf
--------------------------------------------------------------------------------
/code_examples/gpu_computing/slides/img/hw-sw-thread_block.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AndreaPasqualini/numerical_methods_macroeconomics/60333ff64f88f2cb17ac045ee35541e7fc7d47d6/code_examples/gpu_computing/slides/img/hw-sw-thread_block.jpg
--------------------------------------------------------------------------------
/code_examples/gpu_computing/slides/img/nvidia-rtx-2080-ti.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AndreaPasqualini/numerical_methods_macroeconomics/60333ff64f88f2cb17ac045ee35541e7fc7d47d6/code_examples/gpu_computing/slides/img/nvidia-rtx-2080-ti.jpg
--------------------------------------------------------------------------------
/code_examples/gpu_computing/slides/img/stencil.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AndreaPasqualini/numerical_methods_macroeconomics/60333ff64f88f2cb17ac045ee35541e7fc7d47d6/code_examples/gpu_computing/slides/img/stencil.pdf
--------------------------------------------------------------------------------
/code_examples/gpu_computing/slides/ta6_gpu_computing.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AndreaPasqualini/numerical_methods_macroeconomics/60333ff64f88f2cb17ac045ee35541e7fc7d47d6/code_examples/gpu_computing/slides/ta6_gpu_computing.pdf
--------------------------------------------------------------------------------
/code_examples/sncgm.mod:
--------------------------------------------------------------------------------
1 | %% sncgm.mod
2 | % This Dynare file shows how first and second order approximations to the
3 | % Stochastic NeoClassical Growth Model already do a good job at approximating
4 | % the policy functions of the problem.
5 | %
6 | % To run this file, you need a working installation of Matlab and the
7 | % associated plugin package Dynare (https://www.dynare.org/).
8 | % With the requirements in place, you need to run
9 | % dynare sncgm.mod
10 | % which yields some variables in the workspace. The policy functions are stored
11 | % in the object 'dr'.
12 | %
13 |
--------------------------------------------------------------------------------
/code_examples/vfi_convergence.m:
--------------------------------------------------------------------------------
1 | %% Value Function Iteration for Deterministic Optimal Growth Model
2 |
3 | sigma = 1.50;
4 | delta = 0.10;
5 | beta = 0.95;
6 | alpha = 0.30;
7 | ks = ( (1 - beta * (1 - delta)) / (alpha * beta))^(1 / (alpha - 1) );
8 | kmin = 0.1*ks;
9 | kmax = 1.9*ks;
10 | u = @(c) c.^(1-sigma) ./ (1-sigma);
11 |
12 | nbk = 1000;
13 | dk = (kmax-kmin)/(nbk-1);
14 | k = linspace(kmin,kmax,nbk)';
15 | v0 = zeros(nbk,1);
16 | v = zeros(nbk,1);
17 | dr = zeros(nbk,1);
18 |
19 | tolerance = 1e-6;
20 | criterion = 1;
21 | num_iter = 0;
22 |
23 | storeV = []; % final size depends on num_iter, so cannot preallocate ex-ante
24 | storeK = []; % final size depends on num_iter, so cannot preallocate ex-ante
25 | storeC = []; % final size depends on num_iter, so cannot preallocate ex-ante
26 |
27 |
28 | %% Beginning of VFI
29 |
30 | while criterion > tolerance
31 |
32 | for i = 1 : nbk
33 | c = k(i) ^ alpha + (1 - delta) * k(i) - k(:);
34 | neg = find( c < 0 );
35 | c(neg) = NaN;
36 | [v(i), dr(i)] = max( u(c) + beta * v0(:) );
37 | end
38 |
39 | criterion = max(abs(v-v0));
40 |
41 | % storing objects for later visual representation
42 | tmp_v = v0;
43 | tmp_k = k(dr);
44 | tmp_c = tmp_k.^alpha + (1 - delta) * k - tmp_k;
45 |
46 | storeV = [storeV, tmp_v];
47 | storeK = [storeK, tmp_k];
48 | storeC = [storeC, tmp_c];
49 |
50 | v0 = v;
51 | num_iter = num_iter + 1;
52 | end
53 |
54 |
55 | % evaluating solutions
56 | k1 = k(dr);
57 | c = k.^alpha + (1-delta) * k - k1;
58 |
59 |
60 | %% Plotting solutions
61 |
62 | L = size( storeV, 2 );
63 |
64 | steps = [1:1:10, 11:10:100, 101:20:L];
65 |
66 | shades = 1 - logspace( log10(0.4), log10(0.7), L )';
67 | greyShades = repmat( shades, [1, 3] );
68 |
69 | f = figure('Name', 'Paths explored by VFI algorithm');
70 |
71 | for l = steps
72 | subplot( 2, 2, [1, 3] )
73 | hold on
74 | plot( k, storeV(:, l), ...
75 | 'Color', greyShades(l, :), ...
76 | 'LineWidth', 1 )
77 | hold off
78 | box on; grid on
79 | title( 'Value Function' )
80 | subplot( 2, 2, 2 )
81 | hold on
82 | plot( k, storeK(:, l), ...
83 | 'Color', greyShades(l, :), ...
84 | 'LineWidth', 1 )
85 | hold off
86 | box on; grid on
87 | title( 'PolFun - Capital' )
88 | subplot( 2, 2, 4 )
89 | hold on
90 | plot( k, storeC(:, l), ...
91 | 'Color', greyShades(l, :), ...
92 | 'LineWidth', 1 )
93 | hold off
94 | box on; grid on
95 | title( 'PolFun - Consumption' )
96 |
97 | waitforbuttonpress
98 | end
99 |
100 | subplot( 2, 2, [1, 3] )
101 | hold on
102 | plot( k, v, 'Color', 'red', 'LineWidth', 3 )
103 | xline( ks, 'LineWidth', 1, 'LineStyle', ':', 'Color', 'black' );
104 | hold off
105 | subplot( 2, 2, 2 )
106 | hold on
107 | plot( k, k1, 'Color', 'red', 'LineWidth', 3 )
108 | plot( k, k, 'Color', 'black', 'LineWidth', 1, 'LineStyle', '--' )
109 | xline( ks, 'LineWidth', 1, 'LineStyle', ':', 'Color', 'black' );
110 | hold off
111 | subplot( 2, 2, 4 )
112 | hold on
113 | plot( k, c, 'Color', 'red', 'LineWidth', 3 )
114 | xline( ks, 'LineWidth', 1, 'LineStyle', ':', 'Color', 'black' );
115 | hold off
116 |
117 |
118 | %% Clean up
119 |
120 | clearvars l L u
121 |
--------------------------------------------------------------------------------
/other_applications/README.md:
--------------------------------------------------------------------------------
1 | # Other Python Applications in Economics
2 |
3 | This folder contains Python code that is not related to numerical methods in Macroeconomics.
4 | Instead, it provides examples of other tasks Python is useful for, which economists might find interesting.
5 |
6 |
7 | ## Web-scraping with BeautifulSoup
8 |
9 | The folder [`scraping`](./scraping/) contains an example of how to use the package [`beautifulsoup4`](https://www.crummy.com/software/BeautifulSoup/) to programmatically retrieve certain content from the web.
10 | The code highlights the importance of analyzing the HTML structure of a webpage in order to understand where a certain piece of information can be found.
11 |
12 | Note that Beautiful Soup only parses the HTML code that a web server returns on the initial page load.
13 | For webpages whose content is built dynamically in the browser (because of Javascript, e.g., Facebook, Twitter, Reddit), Beautiful Soup might be useless.
14 | In such cases, [Selenium](https://www.seleniumhq.org/) can help.
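The core idea — locating a node in the HTML tree and extracting the piece of information it holds — can be illustrated without any third-party dependency. The sketch below uses the standard library's `html.parser` to pull the text out of a `<div>` with a given `id`, which is essentially what Beautiful Soup's `find('div', {'id': ...})` does for you (the class name and the sample HTML are mine, not part of this repository, and the parser is a deliberate simplification that ignores nested `<div>` tags):

```python
from html.parser import HTMLParser


class DivTextExtractor(HTMLParser):
    """Collect the text inside the first <div> with a given id.

    Simplified sketch: does not handle <div> tags nested inside the
    target one, which Beautiful Soup handles for free.
    """

    def __init__(self, target_id):
        super().__init__()
        self.target_id = target_id
        self._inside = False  # are we currently between the target's tags?
        self.text = ''

    def handle_starttag(self, tag, attrs):
        if tag == 'div' and dict(attrs).get('id') == self.target_id:
            self._inside = True

    def handle_endtag(self, tag):
        if tag == 'div' and self._inside:
            self._inside = False

    def handle_data(self, data):
        if self._inside:
            self.text += data


html_doc = '<html><body><div id="comic">Hello</div><div>other</div></body></html>'
parser = DivTextExtractor('comic')
parser.feed(html_doc)
print(parser.text)  # -> Hello
```

Writing even this toy extractor by hand makes it clear why inspecting the page's HTML structure first (e.g., with the browser's developer tools) is the essential step: you need to know which tag and attributes identify the content before you can ask any parser for it.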
15 |
16 |
17 | ## Natural Language Processing (NLP)
18 |
19 | Example for the package [`nltk`](http://www.nltk.org/)
20 |
21 | `incomplete`
22 |
23 |
--------------------------------------------------------------------------------
/other_applications/scraping/README.md:
--------------------------------------------------------------------------------
1 | # Web Scraping using BeautifulSoup
2 |
3 | The contents in this folder show examples of how to use [Beautiful Soup](https://www.crummy.com/software/BeautifulSoup/) to scrape web content.
4 | Note that Beautiful Soup works best with websites that serve static HTML. Websites that build their content client-side with Javascript might not expose the information you are aiming at in the initial HTML, so you might be unable to scrape it with Beautiful Soup alone.
5 |
6 |
7 | ## XKCD
8 | The file [xkcd.py](./xkcd.py) shows a simple application of how to use BeautifulSoup to scrape information from the [XKCD website](https://xkcd.com/).
9 |
10 | The class `XkcdComic` provides a Python interface to comic-related information: the object is instantiated with the ID number of the comic (found in the URL) and stores the title, the caption and the link to the image file.
11 | The class also has a method that downloads the image file to the local disk.
12 |
13 | The code also shows an example of how to use the `XkcdComic` class to download comics and create a simple textual log of the files the user downloads.
14 | There is also an example of how to be gentle on web servers: waiting some time between requests prevents excessive server load (and its consequences, e.g., an IP address ban).
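That pacing pattern can be sketched in isolation as follows. Here `fetch_one` is a hypothetical stand-in for the real network request (in `xkcd.py` it would be instantiating `XkcdComic` and saving the image), and the delay value is arbitrary:

```python
from time import sleep


def fetch_one(comic_id):
    # hypothetical stand-in for the real network request (e.g., requests.get)
    return 'comic {}'.format(comic_id)


def fetch_politely(ids, delay=1.0):
    """Fetch each item, sleeping between requests to avoid hammering the server."""
    results = []
    for i in ids:
        results.append(fetch_one(i))
        if i != ids[-1]:      # no need to sleep after the last request
            sleep(delay)
    return results


pages = fetch_politely([1900, 1901, 1902], delay=0.01)
print(len(pages))  # -> 3
```

A fixed one-second pause (as in `xkcd.py`) is a reasonable default for a small batch; for larger jobs you would also want to honor the site's `robots.txt` and back off on errors.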
15 |
16 | You can freely use the [xkcd.py](./xkcd.py) file as long as you observe [Randall Munroe's license notice](https://xkcd.com/license.html), as he is the creator and the owner of the comics.
17 |
18 |
19 | # Disclaimer
20 | I want the reader to be aware of the following: **web scraping might get you into trouble**.
21 | It might be illegal in your country, it might violate a website's Terms and Conditions, or it might simply be considered unethical.
22 | Whatever you do with the material you find here, make sure you understand the risks and the responsibility that you are taking on.
23 | I suggest you read [this zine](https://www.crummy.com/software/BeautifulSoup/zine/), written by the developer of Beautiful Soup.
24 | Let me also remind you that the code you find here is covered by the [MIT license](../LICENSE), meaning in particular that I take no responsibility for what _you_ do with my code.
25 |
--------------------------------------------------------------------------------
/other_applications/scraping/xkcd.py:
--------------------------------------------------------------------------------
1 | """
2 | This code showcases the use of BeautifulSoup for pedagogical purposes. It
3 | shows the way HTML code is parsed. It gives an idea about the way you can
4 | navigate an HTML tree.
5 | Read this before using BeautifulSoup:
6 | https://www.crummy.com/software/BeautifulSoup/zine/
7 | """
8 |
9 | from requests import get
10 | from urllib.request import urlretrieve
11 | from bs4 import BeautifulSoup
12 | from time import sleep
13 |
14 |
15 | class XkcdComic:
16 | """
17 | This class connects to https://xkcd.com/ and fetches data related to
18 | comics on that website. The HTML of the page is obtained using
19 | requests.get(). Parsing of the HTML code is done using BeautifulSoup.
20 | Retrieval of images is done using urllib.request.urlretrieve().
21 | The class offers a method to save images on disk. Please note that terms
22 | at https://xkcd.com/license.html apply at all times. The images are not
23 | my work. Only this code is mine.
24 | """
25 |
26 | def __init__(self, comic_number):
27 | self.number = str(comic_number)
28 | self.url = 'https://xkcd.com/' + self.number + '/'
29 | self._webpage = get(self.url)
30 | self._soup = BeautifulSoup(self._webpage.text, 'html.parser')
31 | self._container = self._soup.find('div', {'id': 'comic'})
32 | self._img_url = 'https:' + self._container.img['src']
33 | self._png_name = self._img_url.split('/')[-1]
34 | self.caption = self._container.img['title']
35 | self.title = self._soup.find('div', {'id': 'ctitle'}).text
36 |
37 | def save_img_to_disk(self, directory='./', filename=None):
38 | if filename is None:
39 | filename = self._img_url.split('/')[-1]
40 |         if directory[-1] != '/':
41 | directory += '/'
42 | urlretrieve(self._img_url, directory + filename)
43 |
44 |
45 | if __name__ == '__main__':
46 |
47 | dir = 'C:/Users/Andrea/Pictures/xkcd/'
48 | numbers = [i for i in range(1900, 1968)]
49 |
50 | with open((dir + 'index.txt'), mode='w', encoding='utf-8') as index:
51 | index.write('Index of comics (w/captions) \n\n\n')
52 |
53 | for no in numbers:
54 |
55 | comic = XkcdComic(no)
56 |
57 | print('Saving to disk comic no. {}: {}'.format(no, comic.title))
58 | fname = '{}-{}'.format(no, comic._png_name)
59 | comic.save_img_to_disk(directory=dir,
60 | filename=fname)
61 | sleep(1) # pause execution for 1 second
62 |
63 | with open((dir + 'index.txt'), mode='a', encoding='utf-8') as index:
64 | index.write('#{}, {}: {}\n\n'.format(comic.number,
65 | comic.title,
66 | comic.caption))
67 |
--------------------------------------------------------------------------------
/readme.md:
--------------------------------------------------------------------------------
1 | # Numerical Methods for Macroeconomics
2 |
3 | [](https://mybinder.org/v2/gh/apsql/numerical_methods_macroeconomics/master)
4 |
5 | This repository contains material for the PhD course in [Macroeconomics 3](https://www.unibocconi.eu/wps/wcm/connect/Bocconi/SitoPubblico_EN/Navigation+Tree/Home/Programs/PhD/PhD+in+Economics+and+Finance/Courses+and+Requirements/), taught by [Marco Maffezzoli](http://faculty.unibocconi.eu/marcomaffezzoli/) at [Bocconi University](https://www.unibocconi.eu/).
6 | I'm the TA for the Spring 2018, Spring 2019, Spring 2020 and Spring 2021 iterations.
7 |
8 | However, this repo is not limited in scope to the course.
9 | Given enough time, it should become a full-fledged library (with examples) aimed at learning Python for (Macro)Economic applications.
10 | I'll update this repo with teaching (or self-learning) material as I progress in life.
11 |
12 |
13 | ## Overview of material in this repo
14 |
15 | The folder [`ta_sessions`](./ta_sessions) contains Jupyter notebooks covering material for (surprise surprise) the TA sessions of the course mentioned above.
16 |
17 | The package [`NMmacro`](./NMmacro) contains code developed around the topics of the course.
18 | The code can be re-used for various applications.
19 | However, it has not been developed with generality in mind: code in the package is tailored to teaching how to deal with classic examples in Economics.
20 |
21 | The folder [`code_examples`](./code_examples) includes scripts that use the functions and classes written in [`NMmacro`](./NMmacro).
22 | It's more of a showcase than anything else.
23 |
24 | Inside [`other_applications`](./other_applications) there is some code that showcases the use of Python in applications other than numerical computation.
25 | At the moment, it contains an application of _Beautiful Soup_ to scrape HTML code found online.
26 | I'd like to add code examples for general data cleaning with _Pandas_, _Selenium_ and _NLTK_.
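The scraping example relies on _Beautiful Soup_. As a dependency-free sketch of the same idea (illustrative only, not the repo's actual code), Python's built-in `html.parser` can locate the comic `<img>` the way `xkcd.py` does:

```python
from html.parser import HTMLParser

class ComicFinder(HTMLParser):
    """Find the <img> inside <div id="comic">, as xkcd.py does with Beautiful Soup."""

    def __init__(self):
        super().__init__()
        self._inside_comic = False
        self.img_src = None    # e.g. '//imgs.xkcd.com/comics/....png'
        self.img_title = None  # the hover-text caption

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == 'div' and attrs.get('id') == 'comic':
            self._inside_comic = True
        elif self._inside_comic and tag == 'img':
            self.img_src = attrs.get('src')
            self.img_title = attrs.get('title')

    def handle_endtag(self, tag):
        if tag == 'div':
            self._inside_comic = False

page = '<div id="comic"><img src="//imgs.xkcd.com/comics/python.png" title="Hello world!"></div>'
finder = ComicFinder()
finder.feed(page)
print(finder.img_src, '|', finder.img_title)
```

_Beautiful Soup_ remains the better tool for real pages: it tolerates malformed HTML and offers a friendlier search API.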
27 |
28 |
29 | ## Other references
30 |
31 | Here is a list of references and related material you might want to check out.
32 | There is no specific reason they are here, other than a gut feeling that they are worth sharing.
33 |
34 | - [This overwhelming bunch of stuff](http://www.wouterdenhaan.com/notes.htm) from Wouter Den Haan
35 | - [The QuantEcon (Py)Lectures](https://lectures.quantecon.org/py/) website
36 | - [This GitHub repository](https://github.com/zhouweimin233/QuantMacro) of somebody at The University of Alabama at Birmingham with a PhD course similar to this one, with code and notebooks
37 | - [Gianluca Violante's course in Quantitative Macro](https://sites.google.com/a/nyu.edu/glviolante/teaching/quantmacro) at NYU, referenced by the previous source
38 | - [This GitHub repo](https://github.com/jstac/nyu_macro_fall_2018) of John Stachurski for a course at NYU, with (a lot more) math-y material
39 | - [This GitHub repo](https://github.com/OpenSourceMacro/BootCamp2018) of OSM Boot Camp 2018, a [summer school](https://bfi.uchicago.edu/osm18) offered by uChicago
40 | - Aruoba, S.B. and Fernández-Villaverde, J., 2015. "A Comparison of Programming Languages in Macroeconomics." _Journal of Economic Dynamics and Control, 58_, pp.265-273 ([Published version](https://doi.org/10.1016/j.jedc.2015.05.009)) ([Working-paper version](https://www.sas.upenn.edu/~jesusfv/comparison_languages.pdf)) ([Code](https://github.com/jesusfv/Comparison-Programming-Languages-Economics)) ([Update](https://www.sas.upenn.edu/~jesusfv/Update_March_23_2018.pdf))
41 |
--------------------------------------------------------------------------------
/slides/README.md:
--------------------------------------------------------------------------------
1 | # Slides for Macroeconomics 3
2 |
3 | An innovation this year, prompted by the need to teach online, is the use of slides.
4 | In this folder there are both the source LaTeX files and the compiled PDFs.
5 |
6 |
7 | ## Contents
8 |
9 | 1. Introduction to Python and Numerical Methods
10 | 1. Value Function Iteration, Policy Function Iteration and Direct Projection in Deterministic Environments
11 | 1. VFI, PFI and DP in Stochastic Environments and Discretization of AR(1) Processes
12 | 1. General Equilibrium with Prices and Heterogeneous Agents
13 | 1. Bewley-type Models: Huggett & Aiyagari
14 | 1. Web Scraping
15 |
--------------------------------------------------------------------------------
/slides/assets/xkcd-2434.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AndreaPasqualini/numerical_methods_macroeconomics/60333ff64f88f2cb17ac045ee35541e7fc7d47d6/slides/assets/xkcd-2434.png
--------------------------------------------------------------------------------
/slides/assets/xkcd-home.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AndreaPasqualini/numerical_methods_macroeconomics/60333ff64f88f2cb17ac045ee35541e7fc7d47d6/slides/assets/xkcd-home.png
--------------------------------------------------------------------------------
/slides/common.sty:
--------------------------------------------------------------------------------
1 | \NeedsTeXFormat{LaTeX2e}
2 | \ProvidesPackage{common}[2021/02/18 Common Preamble Package]
3 |
4 | \usepackage[utf8]{inputenc}
5 | \usepackage{txfonts}
6 | \usepackage{amsfonts,amssymb,amsmath}
7 | \usepackage[T1]{fontenc}
8 | \usepackage{appendixnumberbeamer,booktabs,tikz,pgfplots,listings,lstautogobble}
9 | \pgfplotsset{compat=1.17}
10 |
11 | \setbeamertemplate{navigation symbols}{%
12 | \usebeamerfont{footline}%
13 | \usebeamercolor[fg]{footline}%
14 | \hspace{1em}%
15 | \insertframenumber/\inserttotalframenumber
16 | }
17 |
18 | \hypersetup{
19 | colorlinks=true,
20 | linkcolor=black,
21 | urlcolor=blue,
22 | citecolor=black
23 | }
24 |
25 | \definecolor{mygreen}{rgb}{0,0.6,0}
26 | \definecolor{mygray}{rgb}{0.5,0.5,0.5}
27 | \definecolor{mymauve}{rgb}{0.58,0,0.82}
28 | \definecolor{bggray}{rgb}{0.95,0.95,0.975}
29 | \definecolor{dimgray}{RGB}{170, 170, 170}
30 | \definecolor{alert}{RGB}{220, 0, 0}
31 | \definecolor{main}{RGB}{51, 51, 179}
32 | \definecolor{background}{RGB}{255, 255, 255}
33 | \newcommand{\dimmer}[1]{\textcolor{dimgray}{#1}}
34 | \newcommand{\gotobutton}[2]{\hyperlink{#1}{\beamergotobutton{#2}}}
35 | \newcommand{\backbutton}[1]{\hyperlink{#1}{\beamerreturnbutton{Back}}}
36 | \newcommand{\gototitlebutton}[2]{\hfill\gotobutton{#1}{#2}}
37 | \newcommand{\backtitlebutton}[1]{\hfill\backbutton{#1}}
38 | \setbeamercolor{button}{bg=main,fg=background}
39 | \setbeamercolor{alerted text}{fg=alert}
40 |
41 | % \newcommand{\mycite}[1]{\citeauthor{#1} (\citeyear{#1})}
42 | \newcommand{\E}{\mathbf{E}}
43 | \newcommand{\email}[1]{\href{mailto:#1}{#1}}
44 | \newcommand{\website}[1]{\href{https://#1}{#1}}
45 |
46 | \lstdefinestyle{prompt}{ %
47 | backgroundcolor=\color{bggray},
48 | basicstyle=\ttfamily\footnotesize,
49 | language=bash,
50 | frame=none,
51 | morekeywords={\$},
52 | keywordstyle=\color{blue},
53 | numbers=left,
54 | numberstyle=\ttfamily\tiny\color{mygray},
55 | autogobble=true
56 | }
57 |
58 | \lstdefinestyle{python_output}{ %
59 | backgroundcolor=\color{bggray},
60 | basicstyle=\ttfamily\footnotesize,
61 | language=bash,
62 | frame=none,
63 | keywordstyle=\color{black},
64 | numbers=left,
65 | numberstyle=\ttfamily\tiny\color{mygray},
66 | stringstyle=\color{black},
67 | autogobble=true
68 | }
69 |
70 | \lstset{ %
71 | backgroundcolor=\color{bggray}, % choose the background color; you must add \usepackage{color} or \usepackage{xcolor}; should come as last argument
72 | basicstyle=\ttfamily\footnotesize, % the size of the fonts that are used for the code
73 | breakatwhitespace=false, % sets if automatic breaks should only happen at whitespace
74 | breaklines=true, % sets automatic line breaking
75 | captionpos=b, % sets the caption-position to bottom
76 | commentstyle=\color{mygreen}, % comment style
77 | deletekeywords={...}, % if you want to delete keywords from the given language
78 | escapeinside={\%*}{*)}, % if you want to add LaTeX within your code
79 | extendedchars=true, % lets you use non-ASCII characters; for 8-bits encodings only, does not work with UTF-8
80 | frame=none, % adds a frame around the code
81 | keepspaces=true, % keeps spaces in text, useful for keeping indentation of code (possibly needs columns=flexible)
82 | keywordstyle=\color{blue}, % keyword style
83 | language=Python, % the language of the code
84 | morekeywords={*,GET,HTTP,OK,Host...}, % if you want to add more keywords to the set
85 | % morekeywords={*,True,False,numpy,scipy,matplotlib,np,sc,plt,la,linalg,whos,det,inv,diag,array,allclose,linspace,pyplot,plt,show,subplots,plot,grid,legend,set_xlabel,set_ylabel,set_title,argmax,num,linewidth,color,label,rc,usetex,math,pandas,DataFrame,random,zeros,nanmax,nanargmax,nan,ones,polyfit,polyval,optimize,fsolve,interpolate,interp1d,stats,norm,diff,cdf,pdf,reshape,polynomial,hermite,hermgauss,sqrt,fftpack,as,sparse,spmatrix...}, % if you want to add more keywords to the set
86 | numbers=left, % where to put the line-numbers; possible values are (none, left, right)
87 | numbersep=5pt, % how far the line-numbers are from the code
88 | numberstyle=\ttfamily\tiny\color{mygray}, % the style that is used for the line-numbers
89 | rulecolor=\color{mygray}, % if not set, the frame-color may be changed on line-breaks within not-black text (e.g. comments (green here))
90 | showspaces=false, % show spaces everywhere adding particular underscores; it overrides 'showstringspaces'
91 | showstringspaces=false, % underline spaces within strings only
92 | showtabs=false, % show tabs within strings adding particular underscores
93 | showlines=true,
94 | stepnumber=1, % the step between two line-numbers. If it's 1, each line will be numbered
95 | stringstyle=\color{mymauve}, % string literal style
96 | tabsize=2, % sets default tabsize to 2 spaces
97 | autogobble=true%, % adjusts indentation and newline characters
98 | %title=\lstname % show the filename of files included with \lstinputlisting; also try caption instead of title
99 | }
100 |
--------------------------------------------------------------------------------
/slides/compile-all-slides.ps1:
--------------------------------------------------------------------------------
1 | Get-ChildItem -File | Where-Object {$_.Extension -eq ".tex"} | ForEach-Object { latexmk.exe -pdf $_.FullName }
2 |
3 | # Remove everything except files with extensions {tex, sty, pdf, bib, md, ps1}
4 | Get-ChildItem -File | `
5 | Where-Object { $_.Extension -ne ".tex" -and $_.Extension -ne ".sty" -and $_.Extension -ne ".pdf" -and $_.Extension -ne ".bib" -and $_.Extension -ne ".md" -and $_.Extension -ne ".ps1" } | `
6 | Remove-Item
7 |
--------------------------------------------------------------------------------
/slides/references.bib:
--------------------------------------------------------------------------------
1 | @article{Edelman2012,
2 | author = {Edelman, Benjamin},
3 | title = {{Using Internet Data for Economic Research}},
4 | journal = {Journal of Economic Perspectives},
5 | volume = {26},
6 | number = {2},
7 | year = {2012},
8 | month = {May},
9 | pages = {189-206},
10 | doi = {10.1257/jep.26.2.189},
11 | url = {https://www.aeaweb.org/articles?id=10.1257/jep.26.2.189}
12 | }
13 | @article{Aiyagari1994,
14 | abstract = {We present a qualitative and quantitative analysis of the standard growth model modified to include precautionary saving motives and liquidity constraints. We address the impact on the aggregate saving rate, the importance of asset trading to individuals, and the relative inequality of wealth and income distributions.},
15 | author = {Aiyagari, S Rao},
16 | doi = {10.2307/2118417},
17 | isbn = {DR000523 00335533 DI976347 97P01742},
18 | issn = {0033-5533},
19 | journal = {The Quarterly Journal of Economics},
20 | month = {aug},
21 | number = {3},
22 | pages = {659--684},
23 | title = {{Uninsured Idiosyncratic Risk and Aggregate Saving}},
24 | url = {http://www.jstor.org/stable/2118417},
25 | volume = {109},
26 | year = {1994}
27 | }
28 | @article{Anderson1985,
29 | abstract = {This paper presents a failsafe method for analyzing any linear perfect foresight model. It describes a procedure which either computes the reduced-form solution or indicates why the model has no reduced form. {\textcopyright} 1985.},
30 | author = {Anderson, Gary and Moore, George},
31 | doi = {10.1016/0165-1765(85)90211-3},
32 | issn = {01651765},
33 | journal = {Economics Letters},
34 | month = {jan},
35 | number = {3},
36 | pages = {247--252},
37 | title = {{A Linear Algebraic Procedure for Solving Linear Perfect Foresight Models}},
38 | url = {http://linkinghub.elsevier.com/retrieve/pii/0165176585902113},
39 | volume = {17},
40 | year = {1985}
41 | }
42 | @article{Huggett1993,
43 | abstract = {Why has the average real risk-free interest rate been less than one percent? The question is motivated by the failure of a class of calibrated representative-agent economies to explain the average return to equity and risk-free debt. I construct an economy where agents experience uninsurable idiosyncratic endowment shocks and smooth consumption by holding a risk-free asset. I calibrate the economy and characterize equilibria computationally. With a borrowing constraint of one year's income, the resulting risk-free rate is more than one percent below the rate in the comparable representative-agent economy. {\textcopyright} 1993.},
44 | author = {Huggett, Mark},
45 | doi = {10.1016/0165-1889(93)90024-M},
46 | isbn = {0165-1889},
47 | issn = {01651889},
48 | journal = {Journal of Economic Dynamics and Control},
49 | month = {sep},
50 | number = {5-6},
51 | pages = {953--969},
52 | title = {{The Risk-Free Rate in Heterogeneous-Agent Incomplete-Insurance Economies}},
53 | url = {http://linkinghub.elsevier.com/retrieve/pii/016518899390024M},
54 | volume = {17},
55 | year = {1993}
56 | }
57 | @incollection{Judd1996,
58 | abstract = {This chapter examines local and global approximation methods that have been used or have potential future value in economic and econometric analysis. The chapter presents the related projection method for solving operator equations and illustrates its application to dynamic economic analysis, dynamic games, and asset market equilibrium with asymmetric information. In the chapter, it is shown that a general class of techniques from the numerical partial differential equations literature can be usefully applied and adapted to solve nonlinear economic problems. Despite the specificity of the applications discussed in the chapter the general description makes clear the general usefulness of perturbation and projection methods for economic problems, both theoretical modeling and empirical analysis. The application of perturbation and projection methods and the underlying approximation ideas have substantially improved the efficiency of economic computations. In addition, exploitation of these ideas will surely lead to progress.},
59 | author = {Judd, Kenneth L.},
60 | booktitle = {Handbook of Computational Economics},
61 | chapter = {12},
62 | doi = {10.1016/S1574-0021(96)01014-3},
63 | editor = {Amman, H. M. and Kendrick, D. A. and Rust, J.},
64 | isbn = {9780444898579},
65 | issn = {15740021},
66 | pages = {509--585},
67 | publisher = {Elsevier Science},
68 | title = {{Approximation, Perturbation, and Projection Methods in Economic Analysis}},
69 | url = {http://linkinghub.elsevier.com/retrieve/pii/S1574002196010143},
70 | volume = {1},
71 | year = {1996}
72 | }
73 | @article{Kopecky2010a,
74 | abstract = {The Rouwenhorst method of approximating stationary AR(1) processes has been overlooked by much of the literature despite having many desirable properties unmatched by other methods. In particular, we prove that it can match the conditional and unconditional mean and variance, and the first-order autocorrelation of any stationary AR(1) process. These properties make the Rouwenhorst method more reliable than others in approximating highly persistent processes and generating accurate model solutions. To illustrate this, we compare the performances of the Rouwenhorst method and four others in solving the stochastic growth model and an income fluctuation problem. We find that (i) the choice of approximation method can have a large impact on the computed model solutions, and (ii) the Rouwenhorst method is more robust than others with respect to variation in the persistence of the process, the number of points used in the discrete approximation and the procedure used to generate model statistics. {\textcopyright} 2010.},
75 | author = {Kopecky, Karen A. and Suen, Richard M H},
76 | doi = {10.1016/j.red.2010.02.002},
77 | issn = {10942025},
78 | journal = {Review of Economic Dynamics},
79 | keywords = {Finite state approximations,Numerical methods},
80 | number = {3},
81 | pages = {701--714},
82 | title = {{Finite State Markov-Chain Approximations to Highly Persistent Processes}},
83 | volume = {13},
84 | year = {2010}
85 | }
86 | @article{Lucas1977,
87 | author = {Lucas, Robert E.},
88 | doi = {10.1016/0167-2231(77)90002-1},
89 | issn = {01672231},
90 | journal = {Carnegie-Rochester Conference Series on Public Policy},
91 | month = {jan},
92 | number = {C},
93 | pages = {7--29},
94 | title = {{Understanding Business Cycles}},
95 | url = {http://linkinghub.elsevier.com/retrieve/pii/0167223177900021},
96 | volume = {5},
97 | year = {1977}
98 | }
99 | @article{Ma2009,
100 | abstract = {In recent years, there has been a growing interest in analyzing and quantifying the effects of random inputs in the solution of ordinary/partial differential equations. To this end, the spectral stochastic finite element method (SSFEM) is the most popular method due to its fast convergence rate. Recently, the stochastic sparse grid collocation method has emerged as an attractive alternative to SSFEM. It approximates the solution in the stochastic space using Lagrange polynomial interpolation. The collocation method requires only repetitive calls to an existing deterministic solver, similar to the Monte Carlo method. However, both the SSFEM and current sparse grid collocation methods utilize global polynomials in the stochastic space. Thus when there are steep gradients or finite discontinuities in the stochastic space, these methods converge very slowly or even fail to converge. In this paper, we develop an adaptive sparse grid collocation strategy using piecewise multi-linear hierarchical basis functions. Hierarchical surplus is used as an error indicator to automatically detect the discontinuity region in the stochastic space and adaptively refine the collocation points in this region. Numerical examples, especially for problems related to long-term integration and stochastic discontinuity, are presented. Comparisons with Monte Carlo and multi-element based random domain decomposition methods are also given to show the efficiency and accuracy of the proposed method. {\textcopyright} 2009 Elsevier Inc. All rights reserved.},
101 | author = {Ma, Xiang and Zabaras, Nicholas},
102 | doi = {10.1016/j.jcp.2009.01.006},
103 | isbn = {0021-9991},
104 | issn = {00219991},
105 | journal = {Journal of Computational Physics},
106 | keywords = {Adaptive sparse grid,Collocation,Discontinuities,Hierarchical multiscale method,Smolyak algorithm,Sparse grid,Stochastic partial differential equations},
107 | month = {may},
108 | number = {8},
109 | pages = {3084--3113},
110 | title = {{An Adaptive Hierarchical Sparse Grid Collocation Algorithm for the Solution of Stochastic Differential Equations}},
111 | url = {http://linkinghub.elsevier.com/retrieve/pii/S002199910900028X},
112 | volume = {228},
113 | year = {2009}
114 | }
115 | @article{Maliar2015a,
116 | abstract = {We introduce an algorithm for solving dynamic economic models that merges stochastic simulation and projection approaches: we use simulation to approximate the ergodic measure of the solution, we construct a fixed grid covering the support of the constructed ergodic measure, and we use projection techniques to accurately solve the model on that grid. The grid construction is the key novel piece of our analysis: we select an $\epsilon$-distinguishable subset of simulated points that covers the support of the ergodic measure roughly uniformly. The proposed algorithm is tractable in problems with high dimensionality (hundreds of state variables) on a desktop computer. As an illustration, we solve one- and multicountry neoclassical growth models and a large-scale new Keynesian model with a zero lower bound on nominal interest rates.},
117 | author = {Maliar, Lilia and Maliar, Serguei},
118 | doi = {10.3982/QE364},
119 | isbn = {820447943},
120 | issn = {17597331},
121 | journal = {Quantitative Economics},
122 | keywords = {C61,C63,C68,E31,E52,Ergodic set,ZLB,adaptive grid,clusters,discrepancy,epsilon-distinguishable set,large-scale model,new Keynesian model,stochastic simulation},
123 | pages = {1--47},
124 | title = {{Merging Simulation and Projection Approaches to Solve High-Dimensional Problems with an Application to a New Keynesian Model}},
125 | volume = {6},
126 | year = {2015}
127 | }
128 | @article{Mehra1985,
129 | abstract = {Restrictions that a class of general equilibrium models place upon the average returns of equity and Treasury bills are found to be strongly violated by the U.S. data in the 1889-1978 period. This result is robust to model specification and measurement problems. We conclude that, most likely, an equilibrium model which is not an Arrow-Debreu economy will be the one that simultaneously rationalizes both historically observed large average equity return and the small average risk-free return.},
130 | author = {Mehra, Rajnish and Prescott, Edward C.},
131 | doi = {10.1016/0304-3932(85)90061-3},
132 | isbn = {0304-3932},
133 | issn = {03043932},
134 | journal = {Journal of Monetary Economics},
135 | month = {mar},
136 | number = {2},
137 | pages = {145--161},
138 | pmid = {158960},
139 | title = {{The Equity Premium: A Puzzle}},
140 | url = {http://linkinghub.elsevier.com/retrieve/pii/0304393285900613},
141 | volume = {15},
142 | year = {1985}
143 | }
144 | @article{Tauchen1986,
145 | abstract = {The paper develops a procedure for finding a discrete-valued Markov chain whose sample paths approximate well those of a vector autoregression. The procedure has applications in those areas of economics, finance, and econometrics where approximate solutions to integral equations are required. {\textcopyright} 1986.},
146 | author = {Tauchen, George},
147 | doi = {10.1016/0165-1765(86)90168-0},
148 | isbn = {0165-1765},
149 | issn = {01651765},
150 | journal = {Economics Letters},
151 | month = {jan},
152 | number = {2},
153 | pages = {177--181},
154 | title = {{Finite State Markov-Chain Approximations to Univariate and Vector Autoregressions}},
155 | url = {http://linkinghub.elsevier.com/retrieve/pii/0165176586901680},
156 | volume = {20},
157 | year = {1986}
158 | }
159 | @article{Tauchen1991,
160 | abstract = {The paper develops a discrete state space solution method for a class of nonlinear rational expectations models. The method works by using numerical quadrature rules to approximate the integral operators that arise in stochastic intertemporal models. The method is particularly useful for approximating asset pricing models and has potential applications in other problems as well. An empirical application uses the method to study the relationship between the risk premium and the conditional variability of the equity return under an ARCH endowment process.},
161 | author = {Tauchen, George and Hussey, Robert},
162 | doi = {10.2307/2938261},
163 | isbn = {00129682},
164 | issn = {00129682},
165 | journal = {Econometrica},
166 | number = {2},
167 | pages = {371},
168 | title = {{Quadrature-Based Methods for Obtaining Approximate Solutions to Nonlinear Asset Pricing Models}},
169 | url = {http://www.jstor.org/stable/2938261?origin=crossref},
170 | volume = {59},
171 | year = {1991}
172 | }
173 |
--------------------------------------------------------------------------------
/slides/ta1.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AndreaPasqualini/numerical_methods_macroeconomics/60333ff64f88f2cb17ac045ee35541e7fc7d47d6/slides/ta1.pdf
--------------------------------------------------------------------------------
/slides/ta1.tex:
--------------------------------------------------------------------------------
1 | \documentclass[10pt, aspectratio=1610, handout]{beamer}
2 | \usepackage{common}
3 |
4 | \title[Intro to Numerical Methods for Macro]{
5 | \textbf{Introduction to Numerical Methods for Macroeconomics}
6 | }
7 |
8 | \subtitle[Macro 3: TA\#1]{
9 | \textbf{Macroeconomics 3:} TA class \#1
10 | }
11 |
12 | \author[A.~Pasqualini]{
13 | Andrea Pasqualini
14 | }
15 |
16 | \institute[Bocconi]{Bocconi University}
17 |
18 | \date{
19 | 8 February 2021
20 | }
21 |
22 | \begin{document}
23 |
24 | \begin{frame}
25 | \maketitle
26 | \end{frame}
27 |
28 | \begin{frame}
29 | \frametitle{About Myself}
30 |
31 | Hi, I am Andrea Pasqualini!
32 |
33 | \vfill\pause
34 |
35 | \begin{itemize}
36 | \item Graduated a couple of weeks ago! (yay!)
37 | \item Research interests: Banking, Macroeconomics
38 | \item JMP: markups on lending rates and markdowns on deposit rates
39 | \item Side project: Unemployment, SDFs and Dual Labor Markets in Europe
40 | \end{itemize}
41 |
42 | \vfill\pause
43 |
44 | \begin{description}
45 | \item[Email] \email{andrea.pasqualini@unibocconi.it} (also for MS Teams)
46 | \item[Website] \website{andrea.pasqualini.io}
47 | \item[Office] 5-E2-FM02
48 | \item[Material] \href{https://github.com/AndreaPasqualini/numerical_methods_macroeconomics}{https://github.com/AndreaPasqualini/numerical\_methods\_macroeconomics}
49 | \end{description}
50 |
51 | \end{frame}
52 |
53 | \begin{frame}
54 | \frametitle{About the TA Classes for Macro 3}
55 |
56 | \begin{itemize}
57 | \item So far, you saw the theoretical tools in Macroeconomics
58 | \item Now it's time for fun: empirical tools!
59 | \end{itemize}
60 |
61 | \vfill\pause
62 |
63 | \begin{itemize}
64 | \item Solving Macro models analytically may be impossible (it is, very often)
65 | \item Need to obtain numerical solutions
66 | \item Two options
67 | \begin{itemize}
68 | \item Perturbation methods
69 | \item Projection methods
70 | \end{itemize}
71 | \end{itemize}
72 |
73 | \vfill\pause
74 |
75 | \begin{itemize}
76 | \item These TA classes: \textbf{projection methods}
77 | \item Applications: \textbf{macro models with heterogeneous agents}
78 | \end{itemize}
79 |
80 | \vfill\pause
81 |
82 | Objective: \textbf{get familiar with projection methods and related applications}
83 |
84 | \end{frame}
85 |
86 | \begin{frame}
87 | \frametitle{About the Tools}
88 |
89 | Objective: \textbf{manipulate numerical objects (e.g., matrices), plot results}
90 |
91 | \vfill\pause
92 |
93 | \begin{itemize}
94 | \item Many options available: Matlab, R, Python, etc.
95 | \item This course: Python
96 | \end{itemize}
97 |
98 | \vfill\pause
99 |
100 | Advantages of Python
101 | \begin{itemize}
102 | \item Free and open-source, reliable tool
103 | \item Unbeatable flexibility
104 | \end{itemize}
105 |
106 | \vfill\pause
107 |
108 | \begin{itemize}
109 | \item Many options to work with Python: VSCode, Spyder, PyCharm, Jupyter Notebooks, etc.
110 | \item These classes: Jupyter Notebooks (VSCode behind the scenes)
111 | \end{itemize}
112 |
113 | \end{frame}
114 |
115 | \begin{frame}
116 | \frametitle{Intro to Numerical Methods for Economists}
117 |
118 | Objective: \textbf{solve a model}
119 |
120 | \vfill\pause
121 |
122 | \begin{description}
123 | \item[What?] $\E_t \left( f \left( X_{t-1}, X_t, X_{t+1} \right) \right) = 0$ \hspace{2em} \textcolor{gray}{(only rational expectations)}
124 | \pause
125 | \item[Who?] Economists in Macroeconomics, Development Econ, Applied Micro
126 | \pause
127 | \item[Why?] Obtain predictions and counterfactuals, compare with data
128 | \pause
129 | \item[How?] Techniques based on a model's mathematical properties
130 | \pause
131 | \item[When?] All the effing time
132 | \end{description}
133 |
134 | \vfill\pause
135 |
136 | \begin{columns}
137 | \begin{column}{0.475\textwidth}
138 | \begin{block}{Perturbation methods}
139 | \begin{itemize}
140 | \item Rely on Taylor expansion
141 | \item Require differentiability of the model
142 | \item Low computational costs
143 | \end{itemize}
144 | \end{block}
145 | \end{column}
146 | \begin{column}{0.475\textwidth}
147 | \begin{block}{Projection methods}
148 | \begin{itemize}
149 | \item Rely on Bellman equations
150 | \item Allow for heterogeneity, discontinuities
151 | \item High computational costs
152 | \end{itemize}
153 | \end{block}
154 | \end{column}
155 | \end{columns}
156 |
157 | \end{frame}
158 |
159 | \begin{frame}
160 | \frametitle{Intro to Numerical Methods for Economists: Hands-on Example}
161 |
162 | Example: \textbf{Neoclassical Stochastic Growth Model}
163 | \begin{align*}
164 | \max_{C_t, K_{t+1}} &\; \E_0 \sum_{t=0}^{\infty} \beta^t u(C_t) \\
165 | \text{s.t.} &\; \begin{cases}
166 | C_t + K_{t+1} = Z_t K_t^\alpha + (1 - \delta) K_t &\forall\ t\\
167 | \log(Z_{t+1}) = (1-\rho) \mu + \rho \log(Z_t) + \varepsilon_{t+1} &\forall\ t\\
168 | \varepsilon_{t+1} \overset{iid}{\sim} \mathcal{N}(0, \sigma^2) &\forall\ t\\
169 | C_t, K_{t+1} > 0 &\forall\ t\\
170 | K_{0}, Z_{0} \text{ given}
171 | \end{cases}
172 | \end{align*}
173 |
174 | \vfill\pause
175 |
176 | \begin{columns}[T]
177 | \begin{column}{0.475\textwidth}
178 | Variables
179 | \begin{itemize}
180 | \item Endogenously predetermined: $Z_t$, $K_t$
181 | \item Exogenous shocks: $\varepsilon_{t+1}$
182 | \item Controls: $C_t$, $K_{t+1}$
183 | \item Forward looking: $C_{t+1}$
184 | \end{itemize}
185 | \end{column}
186 | \begin{column}{0.475\textwidth}
187 | Equations for the equilibrium
188 | \begin{equation*}
189 | \begin{cases}
190 | u'(C_t) = \beta \cdot \E_t \left( u'(C_{t+1}) \left[ \alpha Z_{t+1} K_{t+1}^{\alpha-1} + 1 - \delta \right] \right) \\
191 | C_t + K_{t+1} = Z_t K_t^\alpha + (1 - \delta) K_t \\
192 | \log(Z_{t+1}) = (1-\rho) \mu + \rho \log(Z_t) + \varepsilon_{t+1}
193 | \end{cases}
194 | \end{equation*}
195 | \end{column}
196 | \end{columns}
197 |
198 | \end{frame}
199 |
200 |
201 | \begin{frame}
202 | \frametitle{Intro to Numerical Methods for Economists: Perturbation Methods}
203 |
204 | \textbf{Focus on the equations that characterize the equilibrium (w/ CRRA utility)}
205 | \begin{equation*}
206 | \begin{cases}
207 | C_t^{-\gamma} = \beta \cdot \E_t \left( C_{t+1}^{-\gamma} \left[ \alpha Z_{t+1} K_{t+1}^{\alpha-1} + 1 - \delta \right] \right) \\
208 | C_t + K_{t+1} = Z_t K_t^\alpha + (1 - \delta) K_t \\
209 | \log(Z_{t+1}) = (1-\rho) \mu + \rho \log(Z_t) + \varepsilon_{t+1}
210 | \end{cases}
211 | \end{equation*}
212 |
213 | \vfill\pause
214 |
215 | There exist
216 | \begin{itemize}
217 | \item A (deterministic) steady state
218 | \item Derivatives of each equation
219 | \end{itemize}
220 |
221 | \vfill\pause
222 |
223 | Log-linear representation of the model (1\textsuperscript{st} order Taylor expansion around the steady state; lowercase letters denote log-deviations, bars denote steady-state levels)
224 | \begin{equation*}
225 | \begin{cases}
226 | c_t = \E_t ( c_{t+1} ) - \frac{1 - \beta (1 - \delta)}{\gamma} \E_t \left( z_{t+1} + (\alpha - 1) k_{t+1} \right) \\
227 | \bar{C} c_t + \bar{K} k_{t+1} = \bar{Y} \left( z_t + \alpha k_t \right) + (1 - \delta) \bar{K} k_t \\
228 | z_{t+1} = \rho z_t + \varepsilon_{t+1}
229 | \end{cases}
230 | \end{equation*}
231 |
232 | \vfill\pause
233 |
234 | Can solve this system of linear equations with linear algebra (e.g., Schur decomposition)
235 |
236 | \end{frame}
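The perturbation recipe above presupposes the deterministic steady state. As a minimal numerical sketch (the parameter values are illustrative assumptions, not the course calibration), the steady-state capital stock follows from the Euler equation evaluated at $C_{t+1} = C_t$:

```python
import numpy as np

# Illustrative parameter values (assumptions, not the course calibration)
alpha, beta, delta, mu = 0.33, 0.96, 0.10, 0.0

# At the steady state the Euler equation reads 1 = beta * (alpha*Z*K**(alpha-1) + 1 - delta)
Z_ss = np.exp(mu)
K_ss = (alpha * Z_ss / (1 / beta - 1 + delta)) ** (1 / (1 - alpha))
C_ss = Z_ss * K_ss ** alpha - delta * K_ss  # resource constraint with K' = K

# Sanity check: the Euler residual should vanish at the steady state
euler_residual = beta * (alpha * Z_ss * K_ss ** (alpha - 1) + 1 - delta) - 1
print(K_ss, C_ss, euler_residual)
```

The log-linearized system is then a perturbation around the $(\bar{K}, \bar{C}, \bar{Z})$ computed this way.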
237 |
238 | \begin{frame}
239 | \frametitle{Intro to Numerical Methods for Economists: Projection Methods}
240 |
241 | \textbf{Focus on the optimization problem (write the associated Bellman equation)}
242 | \begin{align*}
243 | V(K, Z) = \max_{C, K'} &\; u(C) + \beta \E \left( V(K', Z') | Z \right) \\
244 | \text{s.t.} &\; \begin{cases}
245 | C + K' = Z K^\alpha + (1 - \delta) K \\
246 | \log(Z') = (1-\rho) \mu + \rho \log(Z) + \varepsilon', & \varepsilon' \overset{iid}{\sim} \mathcal{N}(0, \sigma^2) %\\ C, K' > 0
247 | \end{cases}
248 | \end{align*}
249 |
250 | \vfill\pause
251 |
252 | There exist
253 | \begin{itemize}
254 | \item A contraction mapping $\mathbf{T}$ induced by the ``max'' operator
255 | \item A unique fixed point $V(K, Z)$
256 | \end{itemize}
257 |
258 | \vfill\pause
259 |
260 | In a computer
261 | \begin{itemize}
262 | \item Define the domains for $K$ and $Z$
263 | \item Define a function that maximizes $u(C) + \beta \E (\ldots) \text{ s.t.} \ldots$
264 | \item Iterate the function until convergence
265 | \end{itemize}
266 |
267 | \vfill\pause
268 |
269 | Can crack this by letting the computer iterate the contraction mapping $\mathbf{T}$ until convergence
270 |
271 | \end{frame}
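The three bullets above can be sketched in a few lines of NumPy. This is a minimal illustration of iterating the max operator on a grid, not the course code; the parameter values and grid bounds are placeholders.

```python
import numpy as np

# Minimal value function iteration for the deterministic NCGM on a grid.
# Parameter values and grid bounds are illustrative placeholders.
alpha, beta, gamma, delta = 0.3, 0.95, 1.5, 0.1

K = np.linspace(0.5, 10.0, 200)                                  # domain for K
C = K[:, None] ** alpha + (1 - delta) * K[:, None] - K[None, :]  # C implied by (K, K')

U = np.full_like(C, -np.inf)                             # infeasible choices get -inf
feasible = C > 0
U[feasible] = C[feasible] ** (1 - gamma) / (1 - gamma)   # CRRA utility

V = np.zeros(K.size)
for _ in range(2000):                        # iterate the contraction mapping T
    TV = np.max(U + beta * V[None, :], axis=1)
    if np.max(np.abs(TV - V)) < 1e-8:        # sup-norm convergence check
        V = TV
        break
    V = TV

policy = K[np.argmax(U + beta * V[None, :], axis=1)]     # K'(K)
```

The one-shot `U` matrix trades memory for speed: each application of $\mathbf{T}$ is then a single vectorized max over the $K'$ dimension.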
272 |
273 | \begin{frame}
274 | \frametitle{Intro to Numerical Methods for Economists: Projection Methods (cont'd)}
275 |
276 | For more complicated supply-demand models (let the ``real'' equilibrium price be $P^*$)
277 | \begin{enumerate}
278 | \item Guess an equilibrium price $P^{(h)}$
279 | \item Obtain the policy functions associated with the Bellman equation, for the given price $P^{(h)}$
280 | \item Define the excess demand function $D(P)$
281 | \item Observe that $D(P)$ is decreasing in $P$
282 | \begin{itemize}
283 | \item If $D(P^{(h)}) > 0$, then $P^{(h)} < P^*$
284 | \item If $D(P^{(h)}) < 0$, then $P^{(h)} > P^*$
285 | \end{itemize}
286 | \item Propose a new guess $P^{(h+1)}$ accordingly
287 | \item Repeat steps 1--5 until $| P^{(h+1)} - P^{(h)} | < \epsilon$
288 | \end{enumerate}
289 |
290 | \end{frame}
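Steps 1–6 amount to a bisection on a decreasing function. A minimal sketch, with a made-up linear excess demand standing in for the real $D(P)$:

```python
def bisect_price(D, p_low, p_high, eps=1e-10):
    """Bisection on a decreasing excess demand, with D(p_low) > 0 > D(p_high)."""
    while p_high - p_low > eps:
        p = (p_low + p_high) / 2
        if D(p) > 0:          # D(P) > 0 means the guess is below P*
            p_low = p
        else:                 # D(P) < 0 means the guess is above P*
            p_high = p
    return (p_low + p_high) / 2

# toy excess demand with known equilibrium price P* = 2 (illustration only)
p_star = bisect_price(lambda p: 2.0 - p, 0.0, 5.0)
```

In practice, evaluating $D(P)$ is the expensive part (it requires solving the Bellman equation at the guessed price), so each bisection step is costly.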
291 |
292 | \begin{frame}
293 | \frametitle{Intro to Numerical Methods for Economists: Takeaways}
294 |
295 | We will see both projection (these classes) and perturbation methods (in Macro 4, hopefully)
296 |
297 | \vfill\pause
298 |
299 | What do we do with these numbers? E.g.
300 | \begin{itemize}
301 | \item Analyses of policy functions (if non-trivial)
302 | \item Impulse-Response Functions (IRFs)
303 | \item Counterfactual simulations
304 | \end{itemize}
305 |
306 | \vfill\pause
307 |
308 | Why do we need all of this?
309 | \begin{itemize}
310 | \item Can a mechanism explain macro phenomena? Write model, see variables move up/down
311 | \item Do these mechanisms matter quantitatively? Write model, compare simulations with data
312 | \end{itemize}
313 |
314 | \vfill\pause
315 |
316 | Why do projection methods matter?
317 | \begin{itemize}
318 | \item Models poorly approximated by derivatives (i.e., where higher-order moments matter)
319 | \item Models with heterogeneous agents
320 | \item Models with binding constraints
321 | \end{itemize}
322 |
323 | \end{frame}
324 |
325 | \begin{frame}
326 | \frametitle{Intro to Python}
327 |
328 | Moving to a Jupyter Notebook
329 |
330 | \end{frame}
331 |
332 | \end{document}
333 |
--------------------------------------------------------------------------------
/slides/ta2.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AndreaPasqualini/numerical_methods_macroeconomics/60333ff64f88f2cb17ac045ee35541e7fc7d47d6/slides/ta2.pdf
--------------------------------------------------------------------------------
/slides/ta3.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AndreaPasqualini/numerical_methods_macroeconomics/60333ff64f88f2cb17ac045ee35541e7fc7d47d6/slides/ta3.pdf
--------------------------------------------------------------------------------
/slides/ta3.tex:
--------------------------------------------------------------------------------
1 | \documentclass[10pt, aspectratio=1610, handout]{beamer}
2 | \usepackage{common}
3 |
4 | \title[VFI, PFI and DP]{
5 | \textbf{VFI, PFI and DP in Stochastic Environments}
6 | }
7 |
8 | \subtitle[Macro 3: TA\#3]{
9 | \textbf{Macroeconomics 3:} TA class \#3
10 | }
11 |
12 | \author[A.~Pasqualini]{
13 | Andrea Pasqualini
14 | }
15 |
16 | \institute[Bocconi]{Bocconi University}
17 |
18 | \date{
19 | 22 February 2021
20 | }
21 |
22 | \begin{document}
23 |
24 | \begin{frame}
25 | \maketitle
26 | \end{frame}
27 |
28 | \begin{frame}
29 | \frametitle{Plan for Today}
30 |
31 | Objective: \textbf{Solve numerically for $V(K, A)$ and $K'(K, A)$}
32 |
33 | \vfill\pause
34 |
35 | Operating example: Neoclassical Growth Model (stochastic version)
36 | \begin{align*}
37 | V(K, \alert{A}) \equiv \max_{C, K'} &\; \frac{C^{1 - \gamma}}{1 - \gamma} + \beta \alert{\E} \left( V(K', \alert{A'}) \alert{| A} \right) \\
38 | \text{s.t.} &\;
39 | \begin{cases}
40 | C + K' \leq \alert{A} K^\alpha + (1 - \delta) K \\
41 | C, K' > 0 \\
42 | \alert{\log(A') = (1-\rho) \log(\mu) + \rho \log(A) + \epsilon} \\
43 | \alert{\epsilon \overset{iid}{\sim} \mathcal{N}(0, \sigma^2)}
44 | \end{cases}
45 | \end{align*}
46 |
47 | \vfill\pause
48 |
49 | \begin{columns}[T]
50 | \begin{column}{0.45\textwidth}
51 | \begin{itemize}
52 | \item Same methods as last time
53 | \item Same objects of interest
54 | \item Adding discretization methods for time series processes
55 | \end{itemize}
56 | \end{column}
57 | \begin{column}{0.45\textwidth}
58 | Shocks are operationally useful
59 | \begin{itemize}
60 | \item Simulations
61 | \item Impulse-Response Functions
62 | \item Forecast Error-Variance Decomposition
63 | \end{itemize}
64 | \end{column}
65 | \end{columns}
66 |
67 | \end{frame}
68 |
69 | \begin{frame}
70 | \frametitle{The Discretization Problem}
71 |
72 | Objective: \textbf{approximate a continuous stochastic process with a discrete one}
73 |
74 | \vfill\pause
75 |
76 | Same problem faced in our last class: the computer has no concept of set density
77 |
78 | \vfill\pause
79 |
80 | \begin{align*}
81 | \log(A') &= (1 - \rho) \log(\mu) + \rho \log(A) + \epsilon & \epsilon \overset{iid}{\sim} \mathcal{N}(0, \sigma^2)
82 | \end{align*}
83 | \begin{align*}
84 | \underbrace{\begin{bmatrix}
85 | \E \left( \log(A') | A_1 \right) \\ \E \left( \log(A') | A_2 \right) \\ \vdots \\ \E \left( \log(A') | A_m \right)
86 | \end{bmatrix}}_{\text{conditional expectations of $\log(A')$}}
87 | &=
88 | \underbrace{\begin{bmatrix}
89 | \Pi_{1,1} & \Pi_{1,2} & \cdots & \Pi_{1,m} \\
90 | \Pi_{2,1} & \Pi_{2,2} & \cdots & \Pi_{2,m} \\
91 | \vdots & \vdots & \ddots & \vdots \\
92 | \Pi_{m,1} & \Pi_{m,2} & \cdots & \Pi_{m,m} \\
93 | \end{bmatrix}}_{\text{transition probabilities, $\Pi$}}
94 | \cdot
95 | \underbrace{\begin{bmatrix}
96 | \log(A_1) \\ \log(A_2) \\ \vdots \\ \log(A_m)
97 | \end{bmatrix}}_{\text{grid for $\log(A)$}}
98 | \end{align*}
99 |
100 | \vfill\pause
101 |
102 | \begin{itemize}
103 | \item Need to set up a grid for $A$ \dimmer{(same as before)}
104 | \item Need to figure out the transition probabilities \dimmer{(new!)}
105 | \begin{itemize}
106 | \item Important to compute the conditional expected continuation value $\E \left( V(K', A') | A \right)$
107 | \end{itemize}
108 | \end{itemize}
109 |
110 | \end{frame}
111 |
112 | \begin{frame}
113 | \frametitle{The Discretization Problem (cont'd)}
114 |
115 | Objective: \textbf{approximate a continuous stochastic process with a discrete one}
116 |
117 | \vfill
118 |
119 | \begin{columns}
120 | \begin{column}{0.35\textwidth}
121 | Must match
122 | \begin{itemize}
123 | \item Unconditional exp.~value
124 | \item Conditional exp.~value
125 | \item Unconditional variance
126 | \item Conditional variance
127 | \item (Optional) skewness
128 | \item (Optional) kurtosis
129 | \item (Optional) higher-order moments
130 | \end{itemize}
131 | \end{column}
132 | \begin{column}{0.6\textwidth}
133 | \begin{figure}
134 | \centering
135 | \begin{tikzpicture}
136 | \begin{axis}[footnotesize, width=8cm, height=7cm, domain=-3:3, xtick={-3, -2, -1, 0, 1, 2, 3}, ytick={0, 0.1, 0.2, 0.3, 0.4}, xmajorgrids=true, ymajorgrids=true, grid style=dashed, title={$\log(A')|A \sim \mathcal{N}(0, 1)$}]
137 | \addplot[samples=500, alert, very thick]{exp(-x^2) / (sqrt(2 * pi))};
138 | \only<2->{\addplot[ybar interval, fill=main, opacity=0.2] coordinates {(-2.5, 0.032) (-1.5, 0.183) (-0.5, 0.387) (0.5, 0.183) (1.5, 0.032) (2.5, 0)};}
139 | \end{axis}
140 | \end{tikzpicture}
141 |
142 | {\scriptsize\dimmer{\textbf{Note:} the parameters in this figure are chosen exclusively for illustration purposes}}
143 | \end{figure}
144 | \end{column}
145 | \end{columns}
146 |
147 | \end{frame}
148 |
149 | \begin{frame}
150 | \frametitle{Overview of Methods}
151 |
152 | \textbf{Tauchen}
153 | \begin{itemize}
154 | \item Constructs a histogram for a conditional distribution function
155 | \item Can control the grid directly
156 | \item Easy to code, easy intuition
157 | \item Approximation errors with high-persistence processes
158 | \end{itemize}
159 |
160 | \vfill\pause
161 |
162 | \textbf{Tauchen-Hussey}
163 | \begin{itemize}
164 | \item Constructs a histogram for a conditional distribution function
165 | \item Imposes a fancy grid, no control over it (except for no.~of points)
166 | \item Approximation errors with high-persistence processes
167 | \end{itemize}
168 |
169 | \vfill\pause
170 |
171 | \textbf{Rouwenhorst}
172 | \begin{itemize}
173 | \item Recursively approximates a conditional distribution function
174 | \item No control over the grid (except for no.~of points)
175 | \item Robust to high-persistence processes
176 | \item ``It just works,'' non-obvious intuition
177 | \end{itemize}
178 |
179 | \end{frame}
180 |
181 | \begin{frame}
182 | \frametitle{The Tauchen Algorithm}
183 |
184 | \begin{enumerate}
185 | \item Forget about the unconditional average of the process \hfill \dimmer{(will recover it later)}
186 | \vfill\pause
187 | \item Create a grid for the support $S$ of the probability distribution function \hfill \dimmer{(this is a vector)}
188 | \vfill\pause
189 | \item Compute all possible transitions $S' - \rho S$ \hfill \dimmer{(this is a matrix)}
190 | \vfill\pause
191 | \item Evaluate the relevant CDF (e.g., Gaussian) at the possible transitions
192 | \vfill\pause
193 | \item Normalize the resulting matrix so that each row sums to one
194 | \vfill\pause
195 | \item Shift the grid by the unconditional average, if needed
196 | \end{enumerate}
197 |
198 | \end{frame}
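A bare-bones sketch of the six steps above. The choice of a $\pm 3$ unconditional-standard-deviation bandwidth for the grid is a common convention, not part of the algorithm itself:

```python
import numpy as np
from scipy.stats import norm

def tauchen(m, rho, sigma, mu=0.0, n_std=3):
    """Discretize log(A') = (1 - rho) mu + rho log(A) + eps, eps ~ N(0, sigma^2)."""
    # steps 1-2: work with the demeaned process, grid over +/- n_std
    # unconditional standard deviations
    sd = sigma / np.sqrt(1 - rho ** 2)
    s = np.linspace(-n_std * sd, n_std * sd, m)
    d = s[1] - s[0]
    # steps 3-5: evaluate the Gaussian CDF at midpoints between grid nodes;
    # assigning the leftover tail mass to the endpoints makes rows sum to one
    pi = np.empty((m, m))
    for i in range(m):
        z = s - rho * s[i]                       # possible transitions
        pi[i, 0] = norm.cdf((z[0] + d / 2) / sigma)
        pi[i, -1] = 1 - norm.cdf((z[-1] - d / 2) / sigma)
        pi[i, 1:-1] = (norm.cdf((z[1:-1] + d / 2) / sigma)
                       - norm.cdf((z[1:-1] - d / 2) / sigma))
    # step 6: shift the grid by the unconditional average
    return s + mu, pi
```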
199 |
200 | \begin{frame}
201 | \frametitle{The Tauchen-Hussey Algorithm}
202 |
203 | \begin{enumerate}
204 | \item Forget about the unconditional average of the process \hfill \dimmer{(will recover it later)}
205 | \vfill\pause
206 | \item Obtain the grid for $S$ by computing the zeros of a Gauss-Hermite polynomial of degree $m$
207 | \vfill\pause
208 | \item Rescale the grid points by $\sqrt{2\sigma^2}$ \hfill \dimmer{(this is a vector)}
209 | \vfill\pause
210 | \item Compute the relevant \textit{conditional} PDF at the possible transitions \hfill \dimmer{(this is a matrix)}
211 | \vfill\pause
212 | \item Rescale the computed conditional PDF to account for discrete points
213 | \vfill\pause
214 | \item Normalize the matrix so that rows sum to one
215 | \vfill\pause
216 | \item Shift the grid by the unconditional average, if needed
217 | \end{enumerate}
218 |
219 | \end{frame}
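A sketch of the algorithm using NumPy's Gauss-Hermite nodes. The conditional-over-unconditional density ratio used to weight the quadrature points follows the standard Tauchen-Hussey construction:

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.stats import norm

def tauchen_hussey(m, rho, sigma, mu=0.0):
    """Discretize y' = rho y + eps, eps ~ N(0, sigma^2), on Gauss-Hermite nodes."""
    # steps 1-3: zeros/weights of the degree-m Hermite polynomial,
    # with the grid rescaled by sqrt(2 sigma^2)
    nodes, weights = hermgauss(m)
    s = np.sqrt(2) * sigma * nodes
    # steps 4-5: conditional density of s[j] given s[i], rescaled by the
    # N(0, sigma^2) density implicit in the quadrature weights
    pi = np.empty((m, m))
    for i in range(m):
        pdf_cond = norm.pdf(s, loc=rho * s[i], scale=sigma)
        pdf_uncond = norm.pdf(s, loc=0.0, scale=sigma)
        pi[i, :] = weights / np.sqrt(np.pi) * pdf_cond / pdf_uncond
    # steps 6-7: normalize rows, shift grid by the unconditional average
    pi /= pi.sum(axis=1, keepdims=True)
    return s + mu, pi
```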
220 |
221 | \begin{frame}
222 | \frametitle{The Rouwenhorst Algorithm}
223 |
224 | \begin{enumerate}
225 | \item Set $p, q \in (0, 1)$
226 | \vfill\pause
227 | \item For $m=2$ grid points, construct $\Pi_2$ as
228 | \begin{equation*}
229 | \Pi_2 =
230 | \begin{bmatrix}
231 | p & 1 - p \\
232 | 1 - q & q
233 | \end{bmatrix}
234 | \end{equation*}
235 | \vfill\pause
236 | \item For $m > 2$ grid points
237 | \begin{enumerate}
238 | \item Construct $\Pi_m$ as
239 | \begin{equation*}
240 | \Pi_m =
241 | p
242 | \begin{bmatrix}
243 | \Pi_{m-1} & 0 \\ 0 & 0
244 | \end{bmatrix}
245 | + (1 - p)
246 | \begin{bmatrix}
247 | 0 & \Pi_{m-1} \\ 0 & 0
248 | \end{bmatrix}
249 | + (1 - q)
250 | \begin{bmatrix}
251 | 0 & 0 \\ \Pi_{m-1} & 0
252 | \end{bmatrix}
253 | + q
254 | \begin{bmatrix}
255 | 0 & 0 \\ 0 & \Pi_{m-1}
256 | \end{bmatrix}
257 | \end{equation*}
258 | \item Divide by 2 all but the top and bottom rows of $\Pi_m$ \hfill \dimmer{(those rows in the middle sum to 2)}
259 | \end{enumerate}
260 | \vfill\pause
261 | \item Create a grid of linearly spaced points for the support of the PDF
262 | \begin{enumerate}
263 | \item Compute $f = \sqrt{m - 1} \cdot \sigma / \sqrt{1 - \rho^2}$ \hfill \dimmer{(it relates to the uncond.~variance of the AR(1))}
264 | \item Create $A = \{ a_1, \ldots, a_m \}$ with $a_1 = -f$ and $a_m = f$
265 | \item Shift $A$ by the unconditional average, if necessary
266 | \end{enumerate}
267 | \end{enumerate}
268 |
269 | \vfill\pause
270 |
271 | \begin{itemize}
272 | \item Setting $p = q$ ensures homoskedasticity in the structure of shocks/innovations
273 | \item Setting $p = q = (1 + \rho) / 2$ matches the persistence (first-order autocorrelation) of the original AR(1) process
274 | \end{itemize}
275 |
276 | \end{frame}
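The recursion above can be sketched as follows, with $p = q = (1+\rho)/2$ and the grid half-width $f$ set as on the slide:

```python
import numpy as np

def rouwenhorst(m, rho, sigma, mu=0.0):
    """Rouwenhorst discretization of an AR(1) with persistence rho and
    innovation st.dev. sigma, on m grid points."""
    p = q = (1 + rho) / 2            # matches the persistence of the AR(1)
    pi = np.array([[p, 1 - p],
                   [1 - q, q]])
    for k in range(3, m + 1):        # build Pi_3, ..., Pi_m recursively
        a = np.zeros((k, k)); b = np.zeros((k, k))
        c = np.zeros((k, k)); d = np.zeros((k, k))
        a[:-1, :-1] = pi; b[:-1, 1:] = pi
        c[1:, :-1] = pi;  d[1:, 1:] = pi
        pi = p * a + (1 - p) * b + (1 - q) * c + q * d
        pi[1:-1, :] /= 2             # middle rows sum to 2: rescale them
    # linearly spaced grid, shifted by the unconditional average
    f = np.sqrt(m - 1) * sigma / np.sqrt(1 - rho ** 2)
    return np.linspace(-f, f, m) + mu, pi
```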
277 |
278 | \begin{frame}
279 | \frametitle{Ergodic Distribution}
280 |
281 | Objective: \textbf{Compute the ergodic PDF of a Markov Chain}
282 |
283 | \vfill
284 |
285 | The ergodic distribution $\pi$ of a Markov Chain with transition matrix $\Pi$ is such that
286 | \begin{equation*}
287 | \begin{cases}
288 | \pi = \Pi' \pi \\
289 | \pi' \iota = 1
290 | \end{cases}
291 | \end{equation*}
292 | where $\iota$ is a vector of 1's
293 |
294 | \vfill
295 |
296 | The system of equations above says that
297 | \begin{itemize}
298 | \item The vector $\pi$ is an eigenvector of the matrix $\Pi'$, with eigenvalue 1\dots
299 | \item \dots\ in particular, the one whose elements sum to one
300 | \end{itemize}
301 | There are countless ways to compute the ergodic distribution, but this one works quite well
302 |
303 | \end{frame}
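A minimal sketch of the eigenvector computation described above:

```python
import numpy as np

def ergodic_distribution(Pi):
    """Ergodic distribution pi solving pi = Pi' pi, with elements summing to one."""
    eigvals, eigvecs = np.linalg.eig(Pi.T)
    # pick the eigenvector associated with the unit eigenvalue
    idx = np.argmin(np.abs(eigvals - 1.0))
    pi = np.real(eigvecs[:, idx])
    return pi / pi.sum()             # normalize so that pi' iota = 1
```

The division by `pi.sum()` also takes care of the arbitrary sign that `np.linalg.eig` may attach to the eigenvector.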
304 |
305 | \begin{frame}
306 | \frametitle{Calibration}
307 |
308 | \begin{table}
309 | \centering
310 | \begin{tabular}{clc}
311 | \toprule
312 | Symbol & Meaning & Value \\
313 | \midrule
314 | \dimmer{$\alpha$} & \dimmer{Capital intensity in PF} & \dimmer{0.30} \\
315 | \dimmer{$\beta$ } & \dimmer{Discount parameter } & \dimmer{0.95} \\
316 | \dimmer{$\gamma$} & \dimmer{CRRA parameter } & \dimmer{1.50} \\
317 | \dimmer{$\delta$} & \dimmer{Capital depreciation } & \dimmer{0.10} \\
318 | $\mu$ & Uncond.~avg.~of productivity & 1.00 \\
319 | $\rho$ & Persistence of productivity & 0.70 \\
320 | $\sigma$ & St.dev.~of productivity shocks & 0.10 \\
321 | \bottomrule
322 | \end{tabular}
323 | \end{table}
324 |
325 | \vfill
326 |
327 | \dimmer{The same disclaimer as in the previous class applies}
328 |
329 | \dimmer{$\to$ The calibration presented here is not credible in any meaningful empirical setting}
330 |
331 | \end{frame}
332 |
333 | \begin{frame}
334 | \frametitle{Simulation}
335 |
336 | Consider the necessary and sufficient conditions for the equilibrium in any model with rational expectations
337 | \begin{equation*}
338 | \E_t \left( f \left( X_{t-1}, X_t, X_{t+1} \right) \right) = 0
339 | \end{equation*}
340 |
341 | \vfill\pause
342 |
343 | The solution to such model is a ``policy function'' $g(\cdot)$ such that
344 | \begin{equation*}
345 | X_{t+1} = g(X_{t-1}, X_t)
346 | \end{equation*}
347 | What we call here ``policy function'' $g(\cdot)$ is a vector function containing
348 | \begin{itemize}
349 | \item The policy functions (strictly speaking) from the Bellman problem
350 | \item The laws of motion (e.g., the one for capital)
351 | \item Exogenous stochastic processes (e.g., the one for productivity)
352 | \end{itemize}
353 |
354 | \vfill\pause
355 |
356 | A simulation takes some initial conditions for $X_{t-1}$ and $X_t$ and applies the function $g(\cdot)$ repeatedly for a given series of shocks
357 |
358 | \end{frame}
359 |
360 | \begin{frame}
361 | \frametitle{Simulation (cont'd)}
362 |
363 | Steps to simulate from the Stochastic Neoclassical Growth Model
364 |
365 | \vfill\pause
366 |
367 | \begin{enumerate}
368 | \item Set a number of periods $T$ to simulate
369 | \vfill\pause
370 | \item Set $K_0$, that is the initial condition
371 | \vfill\pause
372 | \item For each $t \in \{0, \ldots, T\}$
373 | \vfill\pause
374 | \begin{enumerate}
375 | \item Draw a state $A_t$ from the relevant CDF
376 | \vfill\pause
377 | \item Compute current consumption and future capital holdings using the policy functions $C_t = C(K_t, A_t)$ and $K_{t+1} = K'(K_t, A_t)$
378 | \vfill\pause
379 | \item Compute all other endogenous variables using other equations of the model (e.g., production, investment)
380 | \end{enumerate}
381 | \end{enumerate}
382 |
383 | \end{frame}
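The steps above can be sketched as follows. The arrays `K_pol` and `C_pol` are hypothetical policy functions stored on the grid (with `K_pol` holding grid indices of $K'$), so shocks are forced onto the grid for $A$ as in step 3a:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(T, k0_idx, A_grid, K_grid, Pi, K_pol, C_pol):
    """Simulate the model for T periods, forcing shocks onto the grid for A.

    K_pol, C_pol: arrays of shape (len(A_grid), len(K_grid)); K_pol holds grid
    indices of K', C_pol holds consumption values (hypothetical layout)."""
    m = len(A_grid)
    a_idx = np.empty(T + 1, dtype=int)
    k_idx = np.empty(T + 1, dtype=int)
    a_idx[0] = m // 2                      # start from a middle productivity state
    k_idx[0] = k0_idx                      # initial condition K_0
    C = np.empty(T)
    for t in range(T):
        C[t] = C_pol[a_idx[t], k_idx[t]]           # policies at the current state
        k_idx[t + 1] = K_pol[a_idx[t], k_idx[t]]
        # draw next period's state from the row of Pi for the current state
        a_idx[t + 1] = rng.choice(m, p=Pi[a_idx[t]])
    return A_grid[a_idx], K_grid[k_idx], C
```

All other endogenous variables (production, investment) can then be computed period by period from the simulated $(A_t, K_t, C_t)$ paths.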
384 |
385 | \begin{frame}
386 | \frametitle{Impulse-Response Functions}
387 |
388 | Objective: \textbf{Marginal effect of an exogenous shock on an endogenous variable}
389 |
390 | \vfill\pause
391 |
392 | IRFs are the marginal effects of shocks on endogenous variables predicted by the model
393 |
394 | \vfill\pause
395 |
396 | Formally, the response at horizon $h$ of variable $X_{t+h}$ to a shock (impulse) to $S_t$ is
397 | \begin{equation*}
398 | IRF_{X,S}(h) \equiv \frac{\partial X_{t+h}}{\partial S_t}
399 | \end{equation*}
400 |
401 | \vfill\pause
402 |
403 | IRFs are simple simulations
404 | \begin{itemize}
405 | \item The initial condition is typically the steady state of the model
406 | \item At time $t$, a sudden unexpected shock realizes
407 | \item At time $t+h$, for all $h>0$, all shocks are shut down
408 | \end{itemize}
409 |
410 | \end{frame}
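The three bullets translate into a short routine. The transition function `g` below is a hypothetical stand-in for the model's solved policy functions:

```python
import numpy as np

def irf(g, x_ss, shock, H):
    """IRF at horizons 0..H: start at the steady state, hit the economy with a
    one-off impulse at t = 0, shut all later shocks down, report deviations.

    g(x, eps) -> next state is a stand-in for the model's policy function."""
    x_ss = np.asarray(x_ss, dtype=float)
    x = x_ss.copy()
    path = np.empty((H + 1, x_ss.size))
    for h in range(H + 1):
        x = g(x, shock if h == 0 else 0.0)   # shock only at impact
        path[h] = x - x_ss                   # deviation from the steady state
    return path
```

For a scalar AR(1) with persistence 0.5 and a unit impulse, this returns the familiar geometric decay.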
411 |
412 | \begin{frame}
413 | \frametitle{Exercises}
414 |
415 | \begin{enumerate}
416 | \item Use the code for VFI/PFI I have shown in class \#2
417 | \begin{enumerate}
418 | \item Write code that, given an initial condition for the state variable, simulates the model
419 | \item In what sense is such a simulation uninteresting?
420 | \end{enumerate}
421 | \vfill
422 | \item Use the code for VFI I have shown in this class
423 | \begin{enumerate}
424 | \item How would the discretization of the stochastic process work if $\rho = 0$ (i.e., the process of $A$ itself is a sequence of i.i.d.~random variables)?
425 | \item Code up the related discretization method and solve for the policy function
426 | \end{enumerate}
427 | \vfill
428 | \item Consider the code I have shown in this class for simulating the model
429 | \begin{enumerate}
430 | \item My code forces the shock to be on the grid for $A$: how would you modify the numerical policy functions (and those only!) to accommodate any $A \in \mathbb{R}$?
431 | \item Code up your answer to the previous question
432 | \item Simulate the model for some periods (e.g., $T=250$)
433 | \item Compute the impulse-response functions of consumption, investment and production to a one-standard deviation shock to productivity
434 | \item Provide the economic intuition behind the IRFs you have obtained
435 | \end{enumerate}
436 | \end{enumerate}
437 |
438 | \end{frame}
439 |
440 | \end{document}
441 |
--------------------------------------------------------------------------------
/slides/ta4.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AndreaPasqualini/numerical_methods_macroeconomics/60333ff64f88f2cb17ac045ee35541e7fc7d47d6/slides/ta4.pdf
--------------------------------------------------------------------------------
/slides/ta4.tex:
--------------------------------------------------------------------------------
1 | \documentclass[10pt, aspectratio=1610, natbib, handout]{beamer}
2 | \usepackage{common}
3 |
4 | \title[GE with Prices]{
5 | \textbf{General Equilibrium in Representative- and Heterogeneous-Agents Models with Explicit Prices}
6 | }
7 |
8 | \subtitle[Macro 3: TA\#4]{
9 | \textbf{Macroeconomics 3:} TA class \#4
10 | }
11 |
12 | \author[A.~Pasqualini]{
13 | Andrea Pasqualini
14 | }
15 |
16 | \institute[Bocconi]{Bocconi University}
17 |
18 | \date{
19 | 1 March 2021
20 | }
21 |
22 | \begin{document}
23 |
24 | \begin{frame}
25 | \maketitle
26 | \end{frame}
27 |
28 | \begin{frame}
29 | \frametitle{Plan for Today}
30 |
31 | Objective: \textbf{Solve for the equilibrium when explicit prices are involved}
32 |
33 | \vfill\pause
34 |
35 | Two (sub-) goals:
36 | \begin{itemize}
37 | \item Learn how to solve GE macro models when prices are involved
38 | \item Learn how to solve GE macro models when agents are heterogeneous
39 | \end{itemize}
40 |
41 | \end{frame}
42 |
43 | \begin{frame}
44 | \frametitle{Working Example}
45 |
46 | Consider this simple exchange economy with exogenous endowments
47 | \begin{align*}
48 | \max_{C_t, A_{t+1}} &\; \E_0 \left( \sum_{t=0}^{\infty} \beta^t \frac{C_t^{1-\gamma}}{1-\gamma} \right) \\
49 | \text{s.t.} &\;
50 | \begin{cases}
51 | C_t + A_{t+1} \leq \alert{Y_t} + (1 + \alert{r}_t) A_t \\
52 | A_{t+1} \geq \underline{A}
53 | \end{cases}
54 | \end{align*}
55 |
56 | \vfill\pause
57 |
58 | Today we look at two versions:
59 | \begin{itemize}
60 | \item $Y_t = Y$ deterministically \hfill\dimmer{(representative-agent economy)}
61 | \item $Y_t$ is stochastic and idiosyncratic \hfill\dimmer{(heterogeneous-agents economy)}
62 | \end{itemize}
63 |
64 | \vfill\pause
65 |
66 | When $Y_t$ is stochastic, we assume $Y_t \in \{ Y^l, Y^h \}$, with
67 | \begin{equation*}
68 | P(Y_{t+1} | Y_t) = \Pi =
69 | \begin{bmatrix}
70 | \pi & 1 - \pi \\
71 | 1 - \pi & \pi
72 | \end{bmatrix}
73 | \end{equation*}
74 |
75 | \end{frame}
76 |
77 | \begin{frame}
78 | \frametitle{RA Economy: Market Clearing}
79 |
80 | The model is numerically uninteresting: because there is a single representative agent, the net financial position must be zero in every period, so imposing $A_t^* = 0$ for all $t$ yields the closed-form solution for the price $r$
81 | \begin{equation*}
82 | \begin{cases}
83 | r_t^* = 1 / \beta - 1 & \forall\ t \\
84 | C_t^* = Y & \forall\ t \\
85 | A_t^* = 0 & \forall\ t
86 | \end{cases}
87 | \end{equation*}
88 |
89 | \vfill\pause
90 |
91 | The Bellman equation
92 | \begin{align*}
93 | V(A) = \max_{C, A'} &\; \frac{C^{1-\gamma}}{1-\gamma} + \beta\ V(A') \\
94 | \text{s.t.} &\;
95 | \begin{cases}
96 | C + A' \leq Y + (1 + r) A \\
97 | A' \geq \underline{A}
98 | \end{cases}
99 | \end{align*}
100 |
101 | \vfill\pause
102 |
103 | The market clearing condition $A_t^* = 0$ translates into this condition on the policy function: $A'(0) = 0$
104 | \begin{itemize}
105 | \item If $A'(0) > 0$, excess demand: RA wants to save, but nobody is there to sell assets
106 | \item If $A'(0) < 0$, excess supply: RA wants to borrow, but nobody is there to buy assets
107 | \end{itemize}
108 |
109 | \end{frame}
110 |
111 | \begin{frame}
112 | \frametitle{RA Economy: Strategy for Numerical Solution}
113 |
114 | New element relative to past TA classes: price $r$
115 | \begin{itemize}
116 | \item Solve VFI/PFI given a numerical value for $r$
117 | \item Check market clearing condition for asset holdings
118 | \begin{itemize}
119 | \item If net excess demand \textgreater\ 0 (i.e., excess demand), $r$ was too low: do it all again with higher $r$
120 | \item If net excess demand \textless\ 0 (i.e., excess supply), $r$ was too high: do it all again with lower $r$
121 | \item If there is zero excess supply/demand, $r$ was just right: model solved!
122 | \end{itemize}
123 | \end{itemize}
124 |
125 | \vfill\pause
126 |
127 | The net excess demand in this context is exactly $A'(0)$
128 |
129 | \vfill\pause
130 |
131 | \dimmer{You can see why VFI/PFI must be fast: need to solve for policy functions over and over again}
132 |
133 | \end{frame}
134 |
135 | \begin{frame}
136 | \frametitle{RA Economy: Intuition on Why/How This Works}
137 |
138 | \begin{columns}[T]
139 | \begin{column}{0.5\textwidth}
140 | \begin{itemize}
141 | \item Net excess demand: $Z(r) \equiv D(r) - S(r)$
142 | \item From theory, $Z(r)$ is decreasing
143 | \item From theory, $\exists\ r^*: Z(r^*) = 0$
144 | \end{itemize}
145 | \end{column}
146 | \begin{column}{0.4\textwidth}
147 | \textbf{Algorithm:} Given a guess $r^{(j)}$
148 | \begin{itemize}
149 | \item if $Z \left( r^{(j)} \right) > 0$, then $r^{(j)} < r^*$
150 | \item if $Z \left( r^{(j)} \right) < 0$, then $r^{(j)} > r^*$
151 | \item Set $r^{(j+1)}$ accordingly and repeat
152 | \end{itemize}
153 | \end{column}
154 | \end{columns}
155 |
156 | \vfill\pause
157 |
158 | \begin{figure}
159 | \centering
160 | \begin{tikzpicture}
161 | \begin{axis}[footnotesize, xmin=0, xmax=5, enlarge x limits={1}, width=12cm, height=5cm, ticks=none, axis lines=middle, xlabel={$r$}, ylabel={$Z(r)$}]
162 | \addplot[domain=0.5:4.5, thick, color=blue, samples=3]{2.5 - x};
163 | \draw[black, dashed] (1.25, 0) -- (1.25, 1.25) -- (0, 1.25); % r1
164 | \draw[black, dashed] (3.75, 0) -- (3.75, -1.25) -- (0, -1.25); % r2
165 | \draw[black, dashed] (1.75, 0) -- (1.75, 0.75) -- (0, 0.75); % r3
166 | \draw[black, dashed] (3.25, 0) -- (3.25, -0.75) -- (0, -0.75); % r4
167 | \fill[red]{(2.5, 0) circle (2pt)};
168 | \node[below] at (2.5, 0) {$r^*$};
169 | \node[below] at (1.25, 0) {$r^{(1)}$};
170 | \node[above] at (3.75, 0) {$r^{(2)}$};
171 | \node[below] at (1.75, 0) {$r^{(3)}$};
172 | \node[above] at (3.25, 0) {$r^{(4)}$};
173 | \node[left] at (0, 1.25) {$Z(r^{(1)})$};
174 | \node[left] at (0, -1.25) {$Z(r^{(3)})$};
175 | \node[left] at (0, 0.75) {$Z(r^{(4)})$};
176 | \node[left] at (0, -0.75) {$Z(r^{(2)})$};
177 | \end{axis}
178 | \end{tikzpicture}
179 | \end{figure}
180 |
181 | \end{frame}
182 |
183 | \begin{frame}
184 | \frametitle{RA Economy: Coding Approach}
185 |
186 | \textbf{Objective:} write a function that takes a price and returns the net excess demand at that price \\
187 | \textbf{Objective:} use a zero-finding routine that finds the zero of the aforementioned function
188 |
189 | \vfill\pause
190 |
191 | The function $Z(r)$, given calibrated parameters and relevant grids
192 | \begin{enumerate}
193 | \item Solves VFI/PFI and extracts the policy functions
194 | \item Computes the net excess demand
195 | \item Returns the numerical value of the net excess demand
196 | \end{enumerate}
197 |
198 | \vfill\pause
199 |
200 | Then, use any of the appropriate functions in \texttt{scipy.optimize}:
201 | \begin{itemize}
202 | \item \texttt{bisect}
203 | \item \texttt{brentq}
204 | \item \texttt{ridder}
205 | \item \texttt{toms748}
206 | \end{itemize}
207 |
208 | \vfill
209 |
210 | Learn more at \url{https://docs.scipy.org/doc/scipy/reference/optimize.html}
211 |
212 | \end{frame}
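A minimal sketch of the two objectives. The `excess_demand` below is a toy stand-in whose zero is the known RA equilibrium rate $r^* = 1/\beta - 1$; a real implementation would run VFI/PFI inside it:

```python
from scipy.optimize import brentq

BETA = 0.95

def excess_demand(r):
    """Stand-in for Z(r). A real implementation would solve VFI/PFI at this r
    and return the net excess demand A'(0); here we use a toy decreasing
    function whose zero is the RA equilibrium rate r* = 1/beta - 1."""
    return (1 / BETA - 1) - r

# bracket the equilibrium rate, then let the zero-finder run the guessing loop
r_star = brentq(excess_demand, -0.5, 0.5)
```

`brentq` (like `bisect`, `ridder`, and `toms748`) needs a bracket $[a, b]$ with $Z(a)$ and $Z(b)$ of opposite signs, which the theory guarantees exists around $r^*$.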
213 |
214 | \begin{frame}
215 | \frametitle{HA Economy: Market Clearing}
216 |
217 | \begin{columns}[T]
218 | \begin{column}{0.48\textwidth}
219 | The Bellman equation
220 | \begin{align*}
221 | V(A, Y) = \max_{C, A'} &\; \frac{C^{1-\gamma}}{1-\gamma} + \beta\ \E \left( V(A', Y') \middle| A, Y \right) \\
222 | \text{s.t.} &\;
223 | \begin{cases}
224 | C + A' \leq Y + (1 + r) A \\
225 | A' \geq \underline{A} \\
226 | P(Y' | Y) = \Pi
227 | \end{cases}
228 | \end{align*}
229 | \end{column}
230 | \begin{column}{0.48\textwidth}
231 | We have
232 | \begin{itemize}
233 | \item An exogenous Markov chain $P(Y' | Y)$
234 | \item A policy function $A'(A, Y)$
235 | \end{itemize}
236 | \vspace{1em}
237 | We obtain
238 | \begin{itemize}
239 | \item An endogenous distribution $\lambda_t(A, Y)$
240 | \item An ergodic endogenous distr.~$\lambda(A, Y)$
241 | \end{itemize}
242 | \end{column}
243 | \end{columns}
244 |
245 | \vfill\pause
246 |
247 | Market clearing: total savings = total borrowings
248 | \begin{equation*}
249 | \int_A \int_Y \lambda(A, Y)\ A'(A, Y)\ \text{d} Y\ \text{d} A = 0
250 | \end{equation*}
251 | \begin{itemize}
252 | \item For the household, $r$ is taken as given (like a parameter)
253 | \item For the equilibrium, $r$ depends on the infinite-dimensional object $\lambda(A, Y)$
254 | \item We say that $\lambda(A, Y)$ is an infinite-dimensional state variable (for the equilibrium!): infeasible in a computer, must approximate
255 | \end{itemize}
256 |
257 | \vfill\pause
258 |
259 | \textbf{Objective:} approximate the ergodic endogenous distribution $\lambda(A, Y)$
260 | \end{frame}
261 |
262 | \begin{frame}
263 | \frametitle{HA Economy: The Endogenous Distribution of Agents}
264 |
265 | \begin{itemize}
266 | \item The exogenous matrix $\Pi$ maps $Y$ into $Y'$
267 | \item The endogenous policy function $A'(A, Y)$ maps $(A, Y)$ into $A'$
268 | \item Combine them to map $(A, Y)$ into $(A', Y')$
269 | \end{itemize}
270 |
271 | \vfill\pause
272 |
273 | Formally, let $\lambda_t(A, Y)$ be the endogenous joint distribution of agents at period $t$
274 | \begin{equation*}
275 | \lambda_{t+1}(A', Y') = P(Y' | Y) \cdot A'(A, Y) \cdot \lambda_t(A, Y)
276 | \end{equation*}
277 |
278 | \vfill\pause
279 |
280 | \begin{itemize}
281 | \item The transition from $(A, Y)$ to $(A', Y')$ is regulated by an endogenous Markov process
282 | \item The distribution $\lambda(A, Y)$ is the ergodic distribution associated to such Markov process
283 | \item We normally focus on \textbf{ergodic recursive equilibria} \hfill\dimmer{(else, too much going on)}
284 | \end{itemize}
285 |
286 | \end{frame}
287 |
288 | \begin{frame}
289 | \frametitle{HA Economy: Strategy for Numerical Solution}
290 |
291 | \begin{itemize}
292 | \item Solve VFI/PFI given a numerical value for $r$
293 | \item Recode the policy function as a set of transition matrices ${(\bar{A}^k)}_{k=0}^{m}$ such that
294 | \begin{equation*}
295 | \bar{A}^k_{[i, j]} \equiv
296 | \begin{cases}
297 | 1 & \text{ if } A'(A_i, Y_k) = A_j \\
298 | 0 & \text{ if } A'(A_i, Y_k) \neq A_j
299 | \end{cases}
300 | \end{equation*}
301 | \item Combine the matrices ${(\bar{A}^k)}_{k=0}^{m}$ in a block diagonal matrix such that
302 | \begin{equation*}
303 | \underset{[n m \times n m]}{\bar{A}} \equiv
304 | \begin{bmatrix}
305 | \bar{A}^1 & 0 & \cdots & 0 \\
306 | 0 & \bar{A}^2 & \cdots & 0 \\
307 | \vdots & \vdots & \ddots & \vdots \\
308 | 0 & 0 & \cdots & \bar{A}^{m}
309 | \end{bmatrix}
310 | \end{equation*}
311 | \item Compute the endogenous transition matrix $Q$ as \hfill\dimmer{(maps $(A, Y)$ into $(A', Y')$)}
312 | \begin{equation*}
313 | \underset{[n m \times n m]}{Q} \equiv \bar{A} \cdot (\Pi \otimes I_n)
314 | \end{equation*}
315 | \item Compute the ergodic distribution associated with the transition matrix $Q$: that is $\lambda(A, Y)$
316 | \end{itemize}
317 |
318 | \end{frame}
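A sketch of the construction above, assuming states are ordered $Y$-major (index $k \cdot n + i$ for $(Y_k, A_i)$) and the policy stored as grid indices. Note the order of the matrix product: with this ordering, the policy block must condition the asset choice on the current state:

```python
import numpy as np

def transition_Q(Pi, pol_idx):
    """Endogenous transition matrix over the joint state (Y, A).

    Pi: (m, m), Pi[k, l] = P(Y' = Y_l | Y = Y_k)
    pol_idx: (m, n), pol_idx[k, i] = j such that A'(A_i, Y_k) = A_j
    States are ordered Y-major: index = k * n + i for (Y_k, A_i)."""
    m, n = pol_idx.shape
    # block-diagonal matrix of indicator policies, one block per current Y_k
    A_bar = np.zeros((m * n, m * n))
    for k in range(m):
        for i in range(n):
            A_bar[k * n + i, k * n + pol_idx[k, i]] = 1.0
    # combine with the exogenous chain; A_bar conditions the asset choice on
    # the *current* state, so it multiplies from the left under this ordering
    return A_bar @ np.kron(Pi, np.eye(n))
```

Both factors are row-stochastic, so $Q$ is row-stochastic too, and its ergodic distribution (reshaped to $m \times n$) is the approximation to $\lambda(A, Y)$.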
319 |
320 | \begin{frame}
321 | \frametitle{HA Economy: Coding Approach}
322 |
323 | \textbf{Objective:} write a function that takes a price and returns the net excess demand at that price \\
324 | \textbf{Objective:} use a zero-finding routine that finds the zero of the aforementioned function
325 |
326 | \vfill\pause
327 |
328 | The function $Z(r)$, given calibrated parameters and relevant grids
329 | \begin{enumerate}
330 | \item Solves VFI/PFI and extracts the policy functions
331 | \item \alert{Constructs the ergodic distribution of agents}
332 | \item Computes the net excess demand
333 | \item Returns the numerical value of the net excess demand
334 | \end{enumerate}
335 |
336 | \vfill\pause
337 |
338 | Then, use any of the appropriate functions in \texttt{scipy.optimize}:
339 | \begin{itemize}
340 | \item \texttt{bisect}
341 | \item \texttt{brentq}
342 | \item \texttt{ridder}
343 | \item \texttt{toms748}
344 | \end{itemize}
345 |
346 | \vfill
347 |
348 | Learn more at \url{https://docs.scipy.org/doc/scipy/reference/optimize.html}
349 |
350 | \end{frame}
351 |
352 | \begin{frame}
353 | \frametitle{Practice Time}
354 |
355 | Moving to a Jupyter Notebook
356 |
357 | \end{frame}
358 |
359 | \begin{frame}
360 | \frametitle{Exercises}
361 |
362 | \begin{enumerate}
363 | \item Use the code I have shown for both examples
364 | \begin{enumerate}
365 | \item Replace VFI with PFI
366 | \item Report on the speed improvements
367 | \end{enumerate}
368 | \vfill
369 | \item The second example we saw today is essentially the Huggett model
370 | \begin{enumerate}
371 | \item Adapt the code such that it is written as one coherent Python \texttt{class}
372 | \item Generalize the code to accept any AR(1) process for the endowment process
373 | \item In what sense is the transition matrix $Q$ obtained in a quick-and-dirty way? How could you address the issue?
374 | \end{enumerate}
375 | \end{enumerate}
376 |
377 | \end{frame}
378 |
379 | \end{document}
380 |
--------------------------------------------------------------------------------
/slides/ta5.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AndreaPasqualini/numerical_methods_macroeconomics/60333ff64f88f2cb17ac045ee35541e7fc7d47d6/slides/ta5.pdf
--------------------------------------------------------------------------------
/slides/ta5.tex:
--------------------------------------------------------------------------------
1 | \documentclass[10pt, aspectratio=1610, natbib, handout]{beamer}
2 | \usepackage{common}
3 |
4 | \title[Huggett-Aiyagari]{
5 | \textbf{Bewley-type Models: Huggett \& Aiyagari}
6 | }
7 |
8 | \subtitle[Macro 3: TA\#5]{
9 | \textbf{Macroeconomics 3:} TA class \#5
10 | }
11 |
12 | \author[A.~Pasqualini]{
13 | Andrea Pasqualini
14 | }
15 |
16 | \institute[Bocconi]{Bocconi University}
17 |
18 | \date{
19 | 8 March 2021
20 | }
21 |
22 | \begin{document}
23 |
24 | \begin{frame}
25 | \maketitle
26 | \end{frame}
27 |
28 | \begin{frame}
29 | \frametitle{Plan for Today}
30 |
31 | Objective: \textbf{Replicate the results in \cite{Huggett1993} and \cite{Aiyagari1994}}
32 |
33 | \vfill\pause
34 |
35 | Sub-goals:
36 | \begin{itemize}
37 | \item Understand the papers
38 | \item Learn about ``binning'' (a.k.a., non-stochastic simulations)
39 | \item Learn about transition dynamics (a.k.a., MIT shocks)
40 | \end{itemize}
41 |
42 | \end{frame}
43 |
44 | \begin{frame}
45 | \frametitle{Overview}
46 |
47 | \cite{Huggett1993}
48 | \begin{itemize}
49 | \item Tries to address the Equity Premium Puzzle
50 | \item Uses a simple exchange economy with incomplete insurance
51 | \item Solid numerical approach: guaranteed to converge
52 | \item Finds that incomplete insurance cannot account for the EPP
53 | \end{itemize}
54 |
55 | \vfill\pause
56 |
57 | \cite{Aiyagari1994}
58 | \begin{itemize}
59 | \item Tries to see if precautionary savings explain aggregate savings
60 | \item Uses an RBC economy with incomplete insurance
61 | \item Tentative numerical approach: converges, maybe, nobody knows why
62 | \item Finds that precautionary savings cannot account for aggregate savings
63 | \end{itemize}
64 |
65 | \end{frame}
66 |
67 | \begin{frame}
68 | \frametitle{Huggett: Overview}
69 |
70 | \begin{description}
71 | \item[Question] How to address the Equity Premium Puzzle (EPP) in macro models?
72 | \vfill\pause
73 | \item[Approach] A heterogeneous-agents model with incomplete insurance
74 | \vfill\pause
75 | \item[Challenge] Need to nail mechanism and quantification
76 | \vfill\pause
77 | \item[Findings] Curbing demand for the risk-free asset works qualitatively, but not quantitatively
78 | \end{description}
79 |
80 | \end{frame}
81 |
82 | \begin{frame}
83 | \frametitle{Huggett: Methodology}
84 |
85 | \textbf{Equity Premium Puzzle:} spread between risk-free rate and risky rate too small in models relative to the data
86 |
87 | \vfill\pause
88 |
89 | Two avenues
90 | \begin{itemize}
91 | \item Risky rate too small in models
92 | \item Risk-free rate too large in models
93 | \end{itemize}
94 |
95 | \vfill\pause
96 |
97 | Huggett
98 | \begin{itemize}
99 | \item Risk-free rate is too large in models \hfill \dimmer{(at the time, innovative!)}
100 | \item Can reduce the risk-free rate by reducing the demand for the risk-free asset
101 | \item Two alternatives
102 | \begin{itemize}
103 | \item Prevent lenders from saving too much \hfill \dimmer{(unrealistic assumption)}
104 | \item Prevent borrowers from taking too much debt \hfill \dimmer{(more realistic)}
105 | \end{itemize}
106 | \item Representative-agent models cannot deliver this: GE imposes a zero net financial position
107 | \end{itemize}
108 |
109 | \end{frame}
110 |
111 | \begin{frame}
112 | \frametitle{Huggett: Model}
113 |
114 | Ex-ante identical consumers solve the following
115 | \begin{align*}
116 | \max_{C_t, A_{t+1}} &\; \E_0 \left( \sum_{t=0}^{\infty} \beta^t \frac{C_t^{1-\gamma}}{1-\gamma} \right) \\
117 | \text{s.t.} &\;
118 | \begin{cases}
119 | C_t + A_{t+1} \leq Y_t + (1 + r_t) A_t & \forall\ t \\
120 | A_{t+1} \geq \underline{A} & \forall\ t \\
121 | \log(Y_{t+1}) = (1 - \rho) \mu + \rho \log(Y_t) + \varepsilon_{t+1} & \forall\ t \\
122 | \varepsilon_{t} \overset{iid}{\sim} \mathcal{N}(0, \sigma^2) & \forall\ t
123 | \end{cases}
124 | \end{align*}
125 |
126 | \vfill\pause
127 |
128 | The borrowing constraint $\underline{A}$ is such that it may bind for some consumers (i.e., $\underline{A}$ is higher than the natural debt limit)
129 |
130 | \end{frame}
131 |
132 | \begin{frame}
133 | \frametitle{Huggett: Numerical Approach}
134 |
135 | \begin{enumerate}
136 | \item At iteration $j$, guess an equilibrium interest rate $r^{(j)}$
137 | \vfill\pause
138 | \item Solve for the policy function $A'(A, Y)$
139 | \vfill\pause
140 | \item Combine $A'(A, Y)$ with $\Pi$ to obtain the endogenous transition matrix $Q$
141 | \vfill\pause
142 | \item Compute the ergodic distribution $\lambda(A, Y)$ by iterating $Q$ enough times
143 | \vfill\pause
144 | \item Compute the net excess demand $E^d(r) = \sum_A \sum_Y \lambda(A, Y)\ A'(A, Y)$
145 | \vfill\pause
146 | \item Use a root-finder to drive $E^d(r)$ to zero
147 | \begin{itemize}
148 | \item If $E^d \left( r^{(j)} \right) > 0$, then $r^{(j)} < r^*$
149 | \item If $E^d \left( r^{(j)} \right) < 0$, then $r^{(j)} > r^*$
150 | \item Set $r^{(j+1)}$ accordingly and repeat 2--6
151 | \end{itemize}
152 | \end{enumerate}
153 |
154 | \end{frame}
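
\begin{frame}[fragile]
\frametitle{Sketch: Ergodic Distribution and Net Excess Demand}

Steps 4--5 can be sketched as follows; the transition matrix and the policy below are toy stand-ins for the model-based $Q$ and $A'(A, Y)$

\begin{verbatim}
import numpy as np

# Toy stand-ins: 2 asset states x 2 income states = 4 states
Q = np.array([[0.6, 0.2, 0.1, 0.1],     # endogenous transition
              [0.1, 0.6, 0.2, 0.1],     # matrix (rows sum to 1)
              [0.1, 0.2, 0.6, 0.1],
              [0.1, 0.1, 0.2, 0.6]])
a_prime = np.array([-0.5, 0.0, 0.2, 0.4])  # A'(A, Y), flattened

lam = np.full(4, 0.25)       # start from a uniform distribution
for _ in range(10_000):      # iterate Q "enough times"
    lam_new = lam @ Q
    if np.abs(lam_new - lam).max() < 1e-12:
        break
    lam = lam_new

excess_demand = lam @ a_prime   # E^d(r) at the guessed rate
\end{verbatim}

\end{frame}
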
155 |
156 | \begin{frame}
157 | \frametitle{Aiyagari: Overview}
158 |
159 | \begin{description}
160 | \item[Question] What are the determinants of aggregate savings in the US?
161 | \vfill\pause
162 | \item[Approach] A heterogeneous-agents RBC model
163 | \vfill\pause
164 | \item[Challenge] Lucas' diversification argument: idiosyncrasies average out in the aggregate
165 | \vfill\pause
166 | \item[Findings]
167 | \begin{itemize}
168 | \item Precautionary savings do not matter for aggregate savings, quantitatively
169 | \item Can generate precautionary savings without prudence
170 | \item Model matches the US cross-sectional distribution of household wealth
171 | \end{itemize}
172 | \end{description}
173 |
174 | \end{frame}
175 |
176 | \begin{frame}
177 | \frametitle{Aiyagari: Methodology}
178 |
179 | Fix aggregate (capital) savings $K$
180 |
181 | \vfill\pause
182 |
183 | Look at composition of the aggregate
184 | \begin{itemize}
185 | \item Thorough exploration of the endogenous distribution of agents
186 | \item Focus on the left tail of the wealth distribution (constrained, or almost constrained, agents)
187 | \end{itemize}
188 |
189 | \vfill\pause
190 |
191 | Features of the model
192 | \begin{itemize}
193 | \item Relatability: just an RBC model
194 | \item Source of heterogeneity: labor endowments
195 | \item No aggregate uncertainty: prices $r$ and $w$ fixed in equilibrium
196 | \end{itemize}
197 |
198 | \end{frame}
199 |
200 | \begin{frame}
201 | \frametitle{Aiyagari: Model}
202 |
203 | \begin{columns}[T]
204 | \begin{column}{0.45\textwidth}
205 | Households (demand side)
206 | \begin{align*}
207 | \max_{c_t, k_{t+1}} &\; \E_0 \left( \sum_{t=0}^{\infty} \beta^t\ \frac{c_t^{1-\gamma}}{1-\gamma} \right) \\
208 | \text{s.t.} &\;
209 | \begin{cases}
210 | c_t + k_{t+1} \leq w l_t + (1 + r) k_t & \forall\ t \\
211 | c_t, k_{t+1} \geq 0 & \forall\ t \\
212 | l_{t+1} = (1 - \rho) \mu + \rho l_t + \varepsilon_{t+1} & \forall\ t \\
213 | \varepsilon_t \overset{iid}{\sim} \mathcal{N}(0, \sigma^2) & \forall\ t
214 | \end{cases}
215 | \end{align*}
216 | \end{column}
217 | \begin{column}{0.45\textwidth}
218 | Firms (supply side) \\ \dimmer{(note: no time subscripts)}
219 | \begin{align*}
220 | \max_{K, L} &\; A K^\alpha L^{1-\alpha} - r K - w L
221 | \end{align*}
222 | \end{column}
223 | \end{columns}
224 |
225 | \vfill\pause
226 |
227 | Market clearing (recursive notation, ergodic equilibrium)
228 | \begin{equation*}
229 | K = \int_{k} \int_{l} \lambda(k, l)\ k'(k, l) \text{d} l\ \text{d} k
230 | \end{equation*}
231 |
232 | \end{frame}
233 |
234 | \begin{frame}
235 | \frametitle{Aiyagari: Numerical Approach}
236 |
237 | \begin{enumerate}
238 | \item At iteration $j$, guess an aggregate level of capital holdings $K^{(j)}$
239 | \vfill\pause
240 | \item Use the FOCs of the firm to compute $r$ and $w$
241 | \vfill\pause
242 | \item Solve for the households' policy function $k'(k, l)$
243 | \vfill\pause
244 | \item Combine $\Pi$ with $k'(k, l)$ to obtain the endogenous transition matrix $Q$
245 | \vfill\pause
246 | \item Compute the ergodic distribution $\lambda(k, l)$ by iterating $Q$ enough times
247 | \vfill\pause
248 | \item Compute aggregate savings $\hat{K}$ as
249 | \begin{equation*}
250 | \hat{K} \equiv \sum_{k} \sum_l \lambda(k, l)\ k'(k, l)
251 | \end{equation*}
252 | \vfill\pause
253 | \item Check if $\hat{K}$ is consistent with $K^{(j)}$
254 | \begin{itemize}
255 | \item If $\hat{K} \neq K^{(j)}$, then set $K^{(j+1)}$ using the dampening scheme for $\theta \in [0, 1]$
256 | \begin{equation*}
257 | K^{(j+1)} \equiv \theta \hat{K} + (1 - \theta) K^{(j)}
258 | \end{equation*}
259 | \item Repeat steps 2--7
260 | \end{itemize}
261 | \end{enumerate}
262 |
263 | \end{frame}
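
\begin{frame}[fragile]
\frametitle{Sketch: Dampened Fixed-Point Iteration}

The outer loop (steps 1 and 7) can be sketched as follows; the inner function is a hypothetical stand-in for steps 2--6

\begin{verbatim}
def aggregate_savings(K):
    # Hypothetical stand-in: in the model, this computes r and w
    # from the firm's FOCs, solves the household problem, and
    # aggregates k'(k, l) over the ergodic distribution
    return 0.5 * K + 2.0

K, theta = 1.0, 0.3              # initial guess, dampening weight
for _ in range(500):
    K_hat = aggregate_savings(K)
    if abs(K_hat - K) < 1e-10:   # consistent with the guess?
        break
    K = theta * K_hat + (1 - theta) * K   # dampened update
\end{verbatim}

\end{frame}
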
264 |
265 | \begin{frame}
266 | \frametitle{Are These Bewley-Type Models? Yes!}
267 |
268 | \begin{itemize}
269 | \item Consumers are ex-ante identical
270 | \begin{itemize}
271 | \item One maximization problem describes everybody (ex-ante!)
272 | \item Idiosyncratic uncertainty and policy functions place consumers differently on the distribution of asset holdings
273 | \end{itemize}
274 |
275 | \vfill\pause
276 |
277 | \item In equilibrium, consumers are heterogeneous
278 | \begin{itemize}
279 | \item At the ``dawn of time,'' there is a distribution of agents, depending on endowments
280 | \item Based on (different) income, consumers choose (different) savings
281 | \end{itemize}
282 |
283 | \vfill\pause
284 |
285 | \item At the \textit{ergodic} equilibrium
286 | \begin{itemize}
287 | \item Consumers are distributed according to an endogenous ergodic distribution
288 | \item An \textit{ergodic} distribution does \textbf{not} mean there are no dynamics (i.e., it is not a deterministic steady state)
289 | \item Across periods, consumers are ``reshuffled'' such that the ergodic distribution is maintained
290 | \end{itemize}
291 |
292 | \vfill\pause
293 |
294 | \item Technical note (nothing to do with Bewley type-ness)
295 | \begin{itemize}
296 | \item There is idiosyncratic uncertainty
297 | \item There is \textbf{no} aggregate uncertainty
298 | \end{itemize}
299 | \end{itemize}
300 |
301 | \end{frame}
302 |
303 | \begin{frame}
304 | \frametitle{Comparing Huggett \& Aiyagari}
305 |
306 | They look like the same model\dots\ They are, almost
307 |
308 | \vfill\pause
309 |
310 | \begin{table}
311 | \centering
312 | \begin{tabular}{lll}
313 | \toprule
314 | & \textbf{Huggett} & \textbf{Aiyagari} \\
315 | \cmidrule{2-3}
316 | Supply side & Exogenous & Endogenous \\
317 | Financial market & Asset & Capital \\
318 | Research question & Equity Premium Puzzle & Composition of aggr.~savings \\
319 | Numerical solution & Net excess demand & Consistency w/~guess \\
320 | \bottomrule
321 | \end{tabular}
322 | \end{table}
323 |
324 | \vfill\pause
325 |
326 | \textbf{Fundamental difference} in numerical algorithm:
327 | \begin{itemize}
328 | \item Aiyagari assumes that $K$ is a sufficient statistic for $\lambda(k, l)$
329 | \item The assumption is baked into the firms' optimization problem (i.e., the FOC w.r.t.~$K$)
330 | \item This assumption makes the algorithm unreliable, from a maths/theory point of view
331 | \item ``It converges, but nobody knows why''
332 | \begin{itemize}
333 | \item Is the convergence point an equilibrium?
334 | \item Is the equilibrium unique?
335 | \item Is this equilibrium cherry-picked?
336 | \end{itemize}
337 | \end{itemize}
338 |
339 | \end{frame}
340 |
341 | \begin{frame}
342 | \frametitle{Practice Time}
343 |
344 | Moving to a Jupyter Notebook
345 |
346 | \end{frame}
347 |
348 | \appendix
349 |
350 |
351 | \begin{frame}
352 | \frametitle{References}
353 |
354 | \bibliographystyle{apalike}
355 | \bibliography{references}
356 |
357 | \end{frame}
358 |
359 |
360 | \end{document}
361 |
--------------------------------------------------------------------------------
/slides/ta6.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AndreaPasqualini/numerical_methods_macroeconomics/60333ff64f88f2cb17ac045ee35541e7fc7d47d6/slides/ta6.pdf
--------------------------------------------------------------------------------
/ta_sessions/0_setup.md:
--------------------------------------------------------------------------------
1 | # Setting up Python for Data Science (and Macro)
2 |
3 | These are instructions you might want to follow if you want to set up your local machine (e.g., your laptop) to run Python.
4 |
5 |
6 | ## Installing Python through Anaconda
7 |
8 | Python alone cannot do much.
9 | It's a bit like installing a new OS on a computer: can you do stuff with it?
10 | Yes, but not much: you'll need additional applications.
11 | With Python, we need [modules](https://docs.python.org/3/tutorial/modules.html).
12 |
13 | As [it is not easy to manage modules](https://en.wikipedia.org/wiki/Dependency_hell) manually, we're going to need a [package manager](https://en.wikipedia.org/wiki/Package_manager).
14 | The one we choose for this setup is [Anaconda](https://www.anaconda.com/), which is popular among people who need to do numerical and statistical work (some prefer calling it data science).
15 |
16 | Go to https://www.anaconda.com/download/ and download the file that is relevant for your OS.
17 | You should choose the Python 3.x version, as the 2.x is [legacy (read: old) software](https://en.wikipedia.org/wiki/Legacy_system) that is there for compatibility (i.e., targeting audiences different from us).
18 | You can find instructions on how to install Anaconda for [Windows](https://docs.anaconda.com/anaconda/install/windows/), [macOS](https://docs.anaconda.com/anaconda/install/mac-os/) and [Linux](https://docs.anaconda.com/anaconda/install/linux/).
19 |
20 | As the typical Anaconda installation includes all the packages we'll need (including the Python interpreter), **we're done**.
21 | To confirm this is the case, search for [Spyder](https://www.spyder-ide.org/) among your applications and launch it: this will be our [IDE](https://en.wikipedia.org/wiki/Integrated_development_environment) for the rest of the course.
22 |
23 | If one day you want to get rid of Anaconda, you can just follow the [instructions to uninstall](https://docs.anaconda.com/anaconda/install/uninstall/) it.
24 |
25 | Anaconda by default installs the most common packages typical "data scientists" use.
26 | We are not going to use all of them in this course.
27 | If you're very mindful about your computer and you're interested in having a leaner installation, read the following section.
28 | You're also going to find extra software you might want to check out.
29 |
30 |
31 | ## [Optional] A leaner way: Miniconda
32 |
33 | The magic behind Anaconda is called `conda`.
34 | It is a small [CLI](https://en.wikipedia.org/wiki/Command-line_interface) program that manages all the modules and makes sure they work together (it also manages [virtual environments](https://docs.python.org/3/library/venv.html), but that's a feature we're not going to use).
35 | If you care about your local machine being lean and avoiding unnecessary software, then you might want to check it out.
36 |
37 | You can find the appropriate installer at https://conda.io/miniconda.html.
38 | Again, you should choose Python 3.x instead of 2.x.
39 |
40 | For instructions on how to install (as well as uninstall, if ever interested), refer to the official documentation for [Windows](https://conda.io/docs/user-guide/install/windows.html), [macOS](https://conda.io/docs/user-guide/install/macos.html) and [Linux](https://conda.io/docs/user-guide/install/linux.html).
41 | Once you have installed the conda system, you will access it through a terminal (a.k.a. Command Prompt or PowerShell on Windows).
42 |
43 | Using the syntax `conda install pkg1 pkg2 pkg3`, you can install what we need.
44 | We are going to use the following packages for the course:
45 |
46 | - `numpy` (N-dimensional numeric arrays);
47 | - `scipy` (mathematical and statistical recipes);
48 | - `matplotlib` (plotting); and
49 | - `spyder` (IDE).
50 |
51 | Additionally, you might be interested in the following packages
52 |
53 | - `jupyter` (the web-based app to produce [notebooks](https://jupyter.org/) with Python code and [markdown](https://daringfireball.net/projects/markdown/) commentaries);
54 | - `pandas` (Stata-like features to manage real-world data);
55 | - `sympy` (symbolic mathematics);
56 | - `numba` (LLVM interface and parallel computing); and
57 | - `xlrd` (interface to Excel files).
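
For example, the required packages listed above can be installed in one command (add any of the optional ones to taste):

```shell
conda install numpy scipy matplotlib spyder
```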
58 |
59 | As Python adheres to the [Unix philosophy](https://en.wikipedia.org/wiki/Unix_philosophy), these packages will depend on many others.
60 | Luckily, `conda` will pull in other necessary dependencies (it's a package manager, whose main job is exactly this).
61 |
62 | To understand the difference between Anaconda and Miniconda, you can look at the list of packages included in Anaconda (entirely omitted in Miniconda) for [Windows](https://docs.anaconda.com/anaconda/packages/py3.7_win-64/), [macOS](https://docs.anaconda.com/anaconda/packages/py3.7_osx-64/) and [Linux](https://docs.anaconda.com/anaconda/packages/py3.7_linux-64/).
63 |
64 |
65 | ## [Optional] Other useful tools
66 |
67 | If you think you'll often deal with programming, you might want to have a look at the following great software:
68 |
69 | - [Sublime Text](https://www.sublimetext.com/) (an extensible, lean text editor for any programming language);
70 | - [Visual Studio Code](https://code.visualstudio.com/) (an extensible text editor with IDE-like features for the more popular programming languages);
71 | - [JetBrains PyCharm](https://www.jetbrains.com/pycharm/) (a full-fledged Python IDE, with all you need to develop _anything_ in Python);
72 | - [Git](https://git-scm.com/) (a versioning system for source code management; free hosting at [GitHub](https://github.com/), [BitBucket](https://bitbucket.org/) or [GitLab](https://about.gitlab.com/));
73 | - [SQLite Browser](https://sqlitebrowser.org/) (a [GUI](https://en.wikipedia.org/wiki/Graphical_user_interface) for exploring SQL databases).
74 |
--------------------------------------------------------------------------------
/ta_sessions/README.md:
--------------------------------------------------------------------------------
1 | # Material for TA sessions at Bocconi University
2 |
3 | ## Structure of TA sessions
4 |
5 | **This is a tentative schedule**: even though most of the material is ready and present in this folder, unexpected things might come up.
6 | However, this is what I will try to cover in each of the 6 meetings:
7 |
8 | 1. Introduction to Numerical Methods in Macroeconomics and introduction to Python
9 | - Perturbation methods
10 | - Projection methods
11 | - Numpy
12 | - Scipy
13 | - Matplotlib
14 | 2. Global solution methods under no uncertainty (example: household's problem in the neoclassical growth model)
15 | - Value Function iteration (VFI)
16 | - Policy Function iteration (PFI)
17 | - Time iteration (TI)
18 | 3. Global solution methods under uncertainty (example: household's problem in the stochastic neoclassical growth model)
19 | - Markov chains as discretized AR(1) processes
20 | - VFI, PFI and TI under stochastic environments
21 | - Simulation
22 | - Endogenous grid methods (if time allows)
23 | 4. Bewley-like models: idiosyncratic shocks to endowments in a simple exchange economy
24 | - Reading [Huggett (1993)](https://doi.org/10.1016/0165-1889(93)90024-M)
25 | - Replicating it
26 | 5. Bewley-like models: idiosyncratic shocks to labor income in an RBC model
27 | - Reading [Aiyagari (1994)](https://doi.org/10.2307/2118417)
28 | - Replicating it
29 | 6. Advanced tools
30 | - Binning (theory in class, here only code)
31 | - Transition dynamics (aka, MIT shocks; theory in class, here only code)
32 | - [Krusell and Smith (1998)](https://doi.org/10.1086/250034): overview of code and highlight of main steps (if time allows)
33 | - [Reiter's (2009)](https://doi.org/10.1016/j.jedc.2008.08.010) method: solving models with idiosyncratic _and_ aggregate shocks (if time allows)
34 |
35 | For each session, there is a [Jupyter notebook](https://jupyter.org/).
36 | I might use extra material in class (e.g., slides), but it will have no additional content relative to the notebooks, so I will not post it here.
37 |
38 |
39 | ### Extra topics: web scraping, machine learning and natural language processing
40 |
41 | Economists are learning their way through programming, and some used Python to [scrape the web](https://en.wikipedia.org/wiki/Web_scraping), run [machine learning](https://en.wikipedia.org/wiki/Machine_learning) algorithms or [parse natural language](https://en.wikipedia.org/wiki/Natural_language_processing).
42 | These practices are becoming popular, so they deserve a bit of our attention.
43 |
44 | While I will not have time to cover them, I want to showcase how those tools can be useful to us.
45 | I will post some code I used in the past and, if I have time, I will accompany it with notebooks explaining the main steps and decisions.
46 |
--------------------------------------------------------------------------------