├── lectures
│ ├── _static
│ │ ├── lecture_specific
│ │ │ ├── opt_transport
│ │ │ │ ├── optimal_transport_splitting_experiment.aux
│ │ │ │ ├── optimal_transport_splitting_experiment.pdf
│ │ │ │ ├── optimal_transport_splitting_experiment.png
│ │ │ │ ├── optimal_transport_splitting_experiment.synctex.gz
│ │ │ │ └── optimal_transport_splitting_experiment.tex
│ │ │ ├── mle
│ │ │ │ └── fp.dta
│ │ │ ├── ifp
│ │ │ │ ├── pi2.pdf
│ │ │ │ ├── ifp_policies.png
│ │ │ │ ├── ifp_histogram.png
│ │ │ │ └── ifp_agg_savings.png
│ │ │ ├── ols
│ │ │ │ ├── maketable1.dta
│ │ │ │ ├── maketable2.dta
│ │ │ │ └── maketable4.dta
│ │ │ ├── optgrowth
│ │ │ │ ├── 3ndp.pdf
│ │ │ │ └── solution_og_ex2.png
│ │ │ ├── short_path
│ │ │ │ ├── graph.png
│ │ │ │ ├── graph2.png
│ │ │ │ ├── graph3.png
│ │ │ │ └── graph4.png
│ │ │ ├── kalman
│ │ │ │ ├── kalman_ex3.png
│ │ │ │ ├── kl_ex1_fig.png
│ │ │ │ └── kl_ex2_fig.png
│ │ │ ├── linear_models
│ │ │ │ ├── tsh.png
│ │ │ │ ├── tsh0.png
│ │ │ │ ├── tsh_hg.png
│ │ │ │ ├── ensemble_mean.png
│ │ │ │ ├── iteration_notes.pdf
│ │ │ │ ├── solution_lss_ex1.png
│ │ │ │ ├── solution_lss_ex2.png
│ │ │ │ ├── covariance_stationary.png
│ │ │ │ └── paths_and_stationarity.png
│ │ │ ├── markov_perf
│ │ │ │ ├── judd_fig1.png
│ │ │ │ ├── judd_fig2.png
│ │ │ │ ├── mpe_vs_monopolist.png
│ │ │ │ └── duopoly_mpe.py
│ │ │ ├── aiyagari
│ │ │ │ └── aiyagari_obit.pdf
│ │ │ ├── finite_markov
│ │ │ │ ├── web_graph.png
│ │ │ │ ├── mc_ex1_plot.png
│ │ │ │ ├── hamilton_graph.png
│ │ │ │ ├── mc_aperiodicity1.png
│ │ │ │ ├── mc_aperiodicity2.png
│ │ │ │ ├── mc_irreducibility1.png
│ │ │ │ ├── mc_irreducibility2.png
│ │ │ │ ├── mc_aperiodicity1.gv
│ │ │ │ ├── mc_aperiodicity2.gv
│ │ │ │ ├── mc_irreducibility2.gv
│ │ │ │ ├── mc_irreducibility1.gv
│ │ │ │ └── web_graph_data.txt
│ │ │ ├── pandas_panel
│ │ │ │ └── venn_diag.png
│ │ │ ├── troubleshooting
│ │ │ │ └── launch.png
│ │ │ ├── heavy_tails
│ │ │ │ ├── rank_size_fig1.png
│ │ │ │ └── light_heavy_fig1.png
│ │ │ ├── lqcontrol
│ │ │ │ ├── solution_lqc_ex1.png
│ │ │ │ ├── solution_lqc_ex2.png
│ │ │ │ ├── solution_lqc_ex3_g1.png
│ │ │ │ ├── solution_lqc_ex3_g10.png
│ │ │ │ └── solution_lqc_ex3_g50.png
│ │ │ ├── schelling
│ │ │ │ ├── schelling_fig1.png
│ │ │ │ ├── schelling_fig2.png
│ │ │ │ ├── schelling_fig3.png
│ │ │ │ └── schelling_fig4.png
│ │ │ ├── wealth_dynamics
│ │ │ │ └── htop_again.png
│ │ │ ├── linear_algebra
│ │ │ │ └── course_notes.pdf
│ │ │ ├── wald_friedman
│ │ │ │ ├── wald_dec_rule.pdf
│ │ │ │ ├── wald_dec_rule.png
│ │ │ │ ├── wald_dec_rule.tex
│ │ │ │ ├── wf_first_pass.py
│ │ │ │ └── wald_class.py
│ │ │ ├── arellano
│ │ │ │ ├── arellano_bond_prices.png
│ │ │ │ ├── arellano_bond_prices_2.png
│ │ │ │ ├── arellano_default_probs.png
│ │ │ │ ├── arellano_time_series.png
│ │ │ │ └── arellano_value_funcs.png
│ │ │ ├── career
│ │ │ │ └── career_solutions_ex1_py.png
│ │ │ ├── wald_friedman_2
│ │ │ │ ├── wald_dec_rule.pdf
│ │ │ │ ├── wald_dec_rule.png
│ │ │ │ └── wald_dec_rule.tex
│ │ │ ├── cass_koopmans_1
│ │ │ │ └── fig_stable_manifold.png
│ │ │ ├── uncertainty_traps
│ │ │ │ ├── uncertainty_traps_45.png
│ │ │ │ ├── uncertainty_traps_mu.png
│ │ │ │ └── uncertainty_traps_sim.png
│ │ │ ├── mccall
│ │ │ │ ├── mccall_vf_plot1.py
│ │ │ │ ├── mccall_resw_c.py
│ │ │ │ ├── mccall_resw_beta.py
│ │ │ │ ├── mccall_resw_alpha.py
│ │ │ │ └── mccall_resw_gamma.py
│ │ │ ├── coleman_policy_iter
│ │ │ │ └── solve_time_iter.py
│ │ │ ├── perm_income
│ │ │ │ └── perm_inc_ir.py
│ │ │ ├── optgrowth_fast
│ │ │ │ ├── ogm.py
│ │ │ │ └── ogm_crra.py
│ │ │ └── odu
│ │ │   └── odu.py
│ │ ├── qe-logo-large.png
│ │ ├── lectures-favicon.ico
│ │ └── includes
│ │   ├── lecture_howto_py.raw
│ │   └── header.raw
│ ├── zreferences.md
│ ├── _admonition
│ │ └── gpu.md
│ ├── intro.md
│ ├── web_graph_data.txt
│ ├── status.md
│ ├── troubleshooting.md
│ ├── _toc.yml
│ ├── graph.txt
│ ├── _config.yml
│ ├── cross_product_trick.md
│ ├── os_egm.md
│ ├── sir_model.md
│ ├── rand_resp.md
│ ├── inventory_dynamics.md
│ ├── os_egm_jax.md
│ ├── qr_decomp.md
│ ├── ar1_bayes.md
│ └── ifp_opi.md
├── .gitignore
├── _notebook_repo
│ ├── environment.yml
│ └── README.md
├── .github
│ ├── runs-on.yml
│ ├── dependabot.yml
│ ├── workflows
│ │ ├── linkcheck.yml
│ │ ├── cache.yml
│ │ ├── collab.yml
│ │ └── publish.yml
│ └── copilot-instructions.md
├── environment.yml
├── README.md
└── scripts
  └── test-jax-install.py
/lectures/_static/lecture_specific/opt_transport/optimal_transport_splitting_experiment.aux:
--------------------------------------------------------------------------------
1 | \relax
2 | \gdef \@abspage@last{1}
3 |
--------------------------------------------------------------------------------
/lectures/_static/qe-logo-large.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/qe-logo-large.png
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | .DS_Store
2 | _build/
3 | lectures/_build/
4 | .ipynb_checkpoints/
5 | .virtual_documents/
6 | node_modules/
7 | package-lock.json
--------------------------------------------------------------------------------
/lectures/_static/lectures-favicon.ico:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lectures-favicon.ico
--------------------------------------------------------------------------------
/_notebook_repo/environment.yml:
--------------------------------------------------------------------------------
1 | name: lecture-python
2 | channels:
3 |   - default
4 | dependencies:
5 |   - python=3.8
6 |   - anaconda
7 |
8 |
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/mle/fp.dta:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/mle/fp.dta
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/ifp/pi2.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/ifp/pi2.pdf
--------------------------------------------------------------------------------
/.github/runs-on.yml:
--------------------------------------------------------------------------------
1 | images:
2 |   quantecon_ubuntu2404:
3 |     platform: "linux"
4 |     arch: "x64"
5 |     ami: "ami-0edec81935264b6d3"
6 |     region: "us-west-2"
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/ifp/ifp_policies.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/ifp/ifp_policies.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/ols/maketable1.dta:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/ols/maketable1.dta
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/ols/maketable2.dta:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/ols/maketable2.dta
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/ols/maketable4.dta:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/ols/maketable4.dta
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/optgrowth/3ndp.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/optgrowth/3ndp.pdf
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/short_path/graph.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/short_path/graph.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/ifp/ifp_histogram.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/ifp/ifp_histogram.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/kalman/kalman_ex3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/kalman/kalman_ex3.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/kalman/kl_ex1_fig.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/kalman/kl_ex1_fig.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/kalman/kl_ex2_fig.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/kalman/kl_ex2_fig.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/linear_models/tsh.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/linear_models/tsh.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/linear_models/tsh0.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/linear_models/tsh0.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/short_path/graph2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/short_path/graph2.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/short_path/graph3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/short_path/graph3.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/short_path/graph4.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/short_path/graph4.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/ifp/ifp_agg_savings.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/ifp/ifp_agg_savings.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/linear_models/tsh_hg.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/linear_models/tsh_hg.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/markov_perf/judd_fig1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/markov_perf/judd_fig1.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/markov_perf/judd_fig2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/markov_perf/judd_fig2.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/aiyagari/aiyagari_obit.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/aiyagari/aiyagari_obit.pdf
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/finite_markov/web_graph.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/finite_markov/web_graph.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/pandas_panel/venn_diag.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/pandas_panel/venn_diag.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/troubleshooting/launch.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/troubleshooting/launch.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/finite_markov/mc_ex1_plot.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/finite_markov/mc_ex1_plot.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/heavy_tails/rank_size_fig1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/heavy_tails/rank_size_fig1.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/lqcontrol/solution_lqc_ex1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/lqcontrol/solution_lqc_ex1.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/lqcontrol/solution_lqc_ex2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/lqcontrol/solution_lqc_ex2.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/optgrowth/solution_og_ex2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/optgrowth/solution_og_ex2.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/schelling/schelling_fig1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/schelling/schelling_fig1.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/schelling/schelling_fig2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/schelling/schelling_fig2.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/schelling/schelling_fig3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/schelling/schelling_fig3.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/schelling/schelling_fig4.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/schelling/schelling_fig4.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/wealth_dynamics/htop_again.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/wealth_dynamics/htop_again.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/finite_markov/hamilton_graph.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/finite_markov/hamilton_graph.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/heavy_tails/light_heavy_fig1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/heavy_tails/light_heavy_fig1.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/linear_algebra/course_notes.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/linear_algebra/course_notes.pdf
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/linear_models/ensemble_mean.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/linear_models/ensemble_mean.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/wald_friedman/wald_dec_rule.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/wald_friedman/wald_dec_rule.pdf
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/wald_friedman/wald_dec_rule.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/wald_friedman/wald_dec_rule.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/arellano/arellano_bond_prices.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/arellano/arellano_bond_prices.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/arellano/arellano_bond_prices_2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/arellano/arellano_bond_prices_2.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/arellano/arellano_default_probs.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/arellano/arellano_default_probs.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/arellano/arellano_time_series.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/arellano/arellano_time_series.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/arellano/arellano_value_funcs.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/arellano/arellano_value_funcs.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/career/career_solutions_ex1_py.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/career/career_solutions_ex1_py.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/finite_markov/mc_aperiodicity1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/finite_markov/mc_aperiodicity1.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/finite_markov/mc_aperiodicity2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/finite_markov/mc_aperiodicity2.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/linear_models/iteration_notes.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/linear_models/iteration_notes.pdf
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/linear_models/solution_lss_ex1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/linear_models/solution_lss_ex1.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/linear_models/solution_lss_ex2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/linear_models/solution_lss_ex2.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/lqcontrol/solution_lqc_ex3_g1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/lqcontrol/solution_lqc_ex3_g1.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/lqcontrol/solution_lqc_ex3_g10.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/lqcontrol/solution_lqc_ex3_g10.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/lqcontrol/solution_lqc_ex3_g50.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/lqcontrol/solution_lqc_ex3_g50.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/markov_perf/mpe_vs_monopolist.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/markov_perf/mpe_vs_monopolist.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/wald_friedman_2/wald_dec_rule.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/wald_friedman_2/wald_dec_rule.pdf
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/wald_friedman_2/wald_dec_rule.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/wald_friedman_2/wald_dec_rule.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/finite_markov/mc_irreducibility1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/finite_markov/mc_irreducibility1.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/finite_markov/mc_irreducibility2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/finite_markov/mc_irreducibility2.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/cass_koopmans_1/fig_stable_manifold.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/cass_koopmans_1/fig_stable_manifold.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/finite_markov/mc_aperiodicity1.gv:
--------------------------------------------------------------------------------
1 | digraph G{
2 | rankdir=LR;
3 | "a" -> "b" [label = "1.0"];
4 | "b" -> "c" [label = "1.0"];
5 | "c" -> "a" [label = "1.0"];
6 | }
--------------------------------------------------------------------------------
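The `.gv` files in `finite_markov` are the Graphviz sources for the matching `.png` diagrams. A minimal sketch of regenerating the images, assuming the Graphviz `dot` binary is installed and on `PATH` (the path below is the folder shown in the tree):

```python
import subprocess
from pathlib import Path

# Render every Graphviz source in the finite_markov folder to PNG
src_dir = Path("lectures/_static/lecture_specific/finite_markov")
for gv_file in src_dir.glob("*.gv"):
    png_file = gv_file.with_suffix(".png")
    subprocess.run(["dot", "-Tpng", str(gv_file), "-o", str(png_file)],
                   check=True)
    print(f"rendered {gv_file.name} -> {png_file.name}")
```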
/lectures/_static/lecture_specific/linear_models/covariance_stationary.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/linear_models/covariance_stationary.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/linear_models/paths_and_stationarity.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/linear_models/paths_and_stationarity.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/uncertainty_traps/uncertainty_traps_45.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/uncertainty_traps/uncertainty_traps_45.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/uncertainty_traps/uncertainty_traps_mu.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/uncertainty_traps/uncertainty_traps_mu.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/uncertainty_traps/uncertainty_traps_sim.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/uncertainty_traps/uncertainty_traps_sim.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/opt_transport/optimal_transport_splitting_experiment.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/opt_transport/optimal_transport_splitting_experiment.pdf
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/opt_transport/optimal_transport_splitting_experiment.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/opt_transport/optimal_transport_splitting_experiment.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/opt_transport/optimal_transport_splitting_experiment.synctex.gz:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-python.myst/main/lectures/_static/lecture_specific/opt_transport/optimal_transport_splitting_experiment.synctex.gz
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/finite_markov/mc_aperiodicity2.gv:
--------------------------------------------------------------------------------
1 | digraph G{
2 | rankdir=LR;
3 | "a" -> "b" [label = "1.0"];
4 | "b" -> "c" [label = "0.5"];
5 | "b" -> "a" [label = "0.5"];
6 | "c" -> "b" [label = "0.5"];
7 | "c" -> "d" [label = "0.5"];
8 | "d" -> "c" [label = "1.0"];
9 | }
--------------------------------------------------------------------------------
/lectures/zreferences.md:
--------------------------------------------------------------------------------
1 | ---
2 | jupytext:
3 |   text_representation:
4 |     extension: .md
5 |     format_name: myst
6 | kernelspec:
7 |   display_name: Python 3
8 |   language: python
9 |   name: python3
10 | ---
11 |
12 | (references)=
13 | # References
14 |
15 | ```{bibliography} _static/quant-econ.bib
16 | ```
17 |
18 |
--------------------------------------------------------------------------------
/lectures/_static/includes/lecture_howto_py.raw:
--------------------------------------------------------------------------------
1 | .. raw:: html
2 |
3 |
8 |
--------------------------------------------------------------------------------
/_notebook_repo/README.md:
--------------------------------------------------------------------------------
1 | # lecture-python.notebooks
2 |
3 | [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/QuantEcon/lecture-python.notebooks/master)
4 |
5 | Notebooks for https://python.quantecon.org
6 |
7 | **Note:** This README should be edited [here](https://github.com/quantecon/lecture-python.myst/_notebook_repo)
8 |
--------------------------------------------------------------------------------
/lectures/_static/includes/header.raw:
--------------------------------------------------------------------------------
1 | .. raw:: html
2 |
3 |
8 |
--------------------------------------------------------------------------------
/lectures/_admonition/gpu.md:
--------------------------------------------------------------------------------
1 | ```{admonition} GPU
2 | :class: warning
3 |
4 | This lecture was built using a machine with access to a GPU.
5 |
6 | [Google Colab](https://colab.research.google.com/) has a free tier with GPUs
7 | that you can access as follows:
8 |
9 | 1. Click on the "play" icon at the top right
10 | 2. Select Colab
11 | 3. Set the runtime environment to include a GPU
12 | ```
13 |
--------------------------------------------------------------------------------
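After switching the runtime, a quick way to confirm that a GPU is actually visible is the same device check this repo uses in `status.md` and `scripts/test-jax-install.py`; a minimal sketch, assuming JAX is installed:

```python
import jax

# On a GPU runtime the first device's platform is "gpu";
# on a CPU-only runtime it is "cpu".
devices = jax.devices()
print(f"JAX backend: {devices[0].platform}")
```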
/lectures/intro.md:
--------------------------------------------------------------------------------
1 | ---
2 | jupytext:
3 |   text_representation:
4 |     extension: .md
5 |     format_name: myst
6 | kernelspec:
7 |   display_name: Python 3
8 |   language: python
9 |   name: python3
10 | ---
11 |
12 | # Intermediate Quantitative Economics with Python
13 |
14 | This website presents a set of lectures on quantitative economic modeling.
15 |
16 | ```{tableofcontents}
17 | ```
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/finite_markov/mc_irreducibility2.gv:
--------------------------------------------------------------------------------
1 | digraph G{
2 | rankdir=LR;
3 | "poor" -> "poor" [label = "1.0"];
4 | "middle class" -> "poor" [label = "0.1"];
5 | "middle class" -> "middle class" [label = "0.8"];
6 | "middle class" -> "rich" [label = "0.1"];
7 | "rich" -> "middle class" [label = "0.2"];
8 | "rich" -> "rich" [label = "0.8"];
9 | }
10 |
11 |
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/mccall/mccall_vf_plot1.py:
--------------------------------------------------------------------------------
1 | import matplotlib.pyplot as plt
2 |
3 | # McCallModel and solve_mccall_model are defined earlier in the lecture
4 | mcm = McCallModel()
5 | V, U = solve_mccall_model(mcm)
6 |
7 | fig, ax = plt.subplots(figsize=(10, 6))
8 |
9 | ax.plot(mcm.w_vec, V, 'b-', lw=2, alpha=0.7, label='$V$')
10 | ax.plot(mcm.w_vec, [U]*len(mcm.w_vec), 'g-', lw=2, alpha=0.7, label='$U$')
11 | ax.set_xlim(min(mcm.w_vec), max(mcm.w_vec))
12 | ax.legend(loc='upper left')
13 | ax.grid()
14 |
15 | plt.show()
16 |
--------------------------------------------------------------------------------
/environment.yml:
--------------------------------------------------------------------------------
1 | name: quantecon
2 | channels:
3 |   - default
4 | dependencies:
5 |   - python=3.13
6 |   - anaconda=2025.06
7 |   - pip
8 |   - pip:
9 |     - jupyter-book==1.0.4post1
10 |     - quantecon-book-theme==0.15.0
11 |     - sphinx-tojupyter==0.4.0
12 |     - sphinxext-rediraffe==0.2.7
13 |     - sphinx-exercise==1.2.0
14 |     - sphinx-proof==0.2.1
15 |     - sphinxcontrib-youtube==1.4.1
16 |     - sphinx-togglebutton==0.3.2
17 |     - sphinx-reredirects==0.1.4
18 |
19 |
20 |
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/finite_markov/mc_irreducibility1.gv:
--------------------------------------------------------------------------------
1 | digraph G{
2 | rankdir=LR;
3 | "poor" -> "poor" [label = "0.9"];
4 | "poor" -> "middle class" [label = "0.1"];
5 | "middle class" -> "poor" [label = "0.4"];
6 | "middle class" -> "middle class" [label = "0.4"];
7 | "middle class" -> "rich" [label = "0.2"];
8 | "rich" -> "poor" [label = "0.1"];
9 | "rich" -> "middle class" [label = "0.1"];
10 | "rich" -> "rich" [label = "0.8"];
11 | }
--------------------------------------------------------------------------------
/lectures/web_graph_data.txt:
--------------------------------------------------------------------------------
1 | a -> d;
2 | a -> f;
3 | b -> j;
4 | b -> k;
5 | b -> m;
6 | c -> c;
7 | c -> g;
8 | c -> j;
9 | c -> m;
10 | d -> f;
11 | d -> h;
12 | d -> k;
13 | e -> d;
14 | e -> h;
15 | e -> l;
16 | f -> a;
17 | f -> b;
18 | f -> j;
19 | f -> l;
20 | g -> b;
21 | g -> j;
22 | h -> d;
23 | h -> g;
24 | h -> l;
25 | h -> m;
26 | i -> g;
27 | i -> h;
28 | i -> n;
29 | j -> e;
30 | j -> i;
31 | j -> k;
32 | k -> n;
33 | l -> m;
34 | m -> g;
35 | n -> c;
36 | n -> j;
37 | n -> m;
38 |
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/finite_markov/web_graph_data.txt:
--------------------------------------------------------------------------------
1 | a -> d;
2 | a -> f;
3 | b -> j;
4 | b -> k;
5 | b -> m;
6 | c -> c;
7 | c -> g;
8 | c -> j;
9 | c -> m;
10 | d -> f;
11 | d -> h;
12 | d -> k;
13 | e -> d;
14 | e -> h;
15 | e -> l;
16 | f -> a;
17 | f -> b;
18 | f -> j;
19 | f -> l;
20 | g -> b;
21 | g -> j;
22 | h -> d;
23 | h -> g;
24 | h -> l;
25 | h -> m;
26 | i -> g;
27 | i -> h;
28 | i -> n;
29 | j -> e;
30 | j -> i;
31 | j -> k;
32 | k -> n;
33 | l -> m;
34 | m -> g;
35 | n -> c;
36 | n -> j;
37 | n -> m;
38 |
--------------------------------------------------------------------------------
/.github/dependabot.yml:
--------------------------------------------------------------------------------
1 | # To get started with Dependabot version updates, you'll need to specify which
2 | # package ecosystems to update and where the package manifests are located.
3 | # Please see the documentation for all configuration options:
4 | # https://docs.github.com/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file
5 |
6 | version: 2
7 | updates:
8 |   - package-ecosystem: github-actions
9 |     directory: /
10 |     commit-message:
11 |       prefix: ⬆️
12 |     schedule:
13 |       interval: weekly
14 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Intermediate Quantitative Economics with Python
2 |
3 | This website presents a set of lectures on quantitative economic modeling.
4 |
5 | ## Jupyter notebooks
6 |
7 | Jupyter notebook versions of each lecture are available for download
8 | via the website.
9 |
10 | ## Contributions
11 |
12 | To comment on the lectures please add to or open an issue in the issue tracker (see above).
13 |
14 | We welcome pull requests!
15 |
16 | Please read the [QuantEcon style guide](https://manual.quantecon.org/intro.html) first, so that you can match our style.
17 |
--------------------------------------------------------------------------------
/scripts/test-jax-install.py:
--------------------------------------------------------------------------------
1 | import jax
2 | import jax.numpy as jnp
3 |
4 | devices = jax.devices()
5 | print(f"The available devices are: {devices}")
6 |
7 | @jax.jit
8 | def matrix_multiply(a, b):
9 |     return jnp.dot(a, b)
10 |
11 | # Example usage:
12 | key = jax.random.PRNGKey(0)
13 | x = jax.random.normal(key, (1000, 1000))
14 | y = jax.random.normal(key, (1000, 1000))
15 | z = matrix_multiply(x, y)
16 |
17 | # Now the function is JIT compiled and will likely run on GPU (if available)
18 | print(z)
19 |
20 | devices = jax.devices()
21 | print(f"The available devices are: {devices}")
--------------------------------------------------------------------------------
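One caveat the script's final comment glosses over: JAX dispatches work asynchronously, so `print(z)` forces the result, but naively timing `matrix_multiply` would mostly measure dispatch and compilation. A hedged sketch of timing the jitted call properly with `block_until_ready()` (a standard method on JAX device arrays; this block is illustration, not part of the script):

```python
import time

import jax
import jax.numpy as jnp

@jax.jit
def matrix_multiply(a, b):
    return jnp.dot(a, b)

key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (1000, 1000))

# First call includes compilation; run once to warm up
matrix_multiply(x, x).block_until_ready()

# Time the compiled kernel; block_until_ready() waits for the
# asynchronous dispatch to finish before the clock stops
start = time.perf_counter()
matrix_multiply(x, x).block_until_ready()
print(f"elapsed: {time.perf_counter() - start:.4f}s")
```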
/lectures/_static/lecture_specific/mccall/mccall_resw_c.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import matplotlib.pyplot as plt
3 |
4 | # McCallModel and compute_reservation_wage are defined earlier in the lecture
5 | grid_size = 25
6 | c_vals = np.linspace(2, 12, grid_size)  # values of unemployment compensation
7 | w_bar_vals = np.empty_like(c_vals)
8 |
9 | mcm = McCallModel()
10 |
11 | fig, ax = plt.subplots(figsize=(10, 6))
12 |
13 | for i, c in enumerate(c_vals):
14 |     mcm.c = c
15 |     w_bar = compute_reservation_wage(mcm)
16 |     w_bar_vals[i] = w_bar
17 |
18 | ax.set_xlabel('unemployment compensation')
19 | ax.set_ylabel('reservation wage')
20 | txt = r'$\bar w$ as a function of $c$'
21 | ax.plot(c_vals, w_bar_vals, 'b-', lw=2, alpha=0.7, label=txt)
22 | ax.legend(loc='upper left')
23 | ax.grid()
24 |
25 | plt.show()
26 |
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/mccall/mccall_resw_beta.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import matplotlib.pyplot as plt
3 |
4 | # McCallModel and compute_reservation_wage are defined earlier in the lecture
5 | grid_size = 25
6 | β_vals = np.linspace(0.8, 0.99, grid_size)
7 | w_bar_vals = np.empty_like(β_vals)
8 |
9 | mcm = McCallModel()
10 |
11 | fig, ax = plt.subplots(figsize=(10, 6))
12 |
13 | for i, β in enumerate(β_vals):
14 |     mcm.β = β
15 |     w_bar = compute_reservation_wage(mcm)
16 |     w_bar_vals[i] = w_bar
17 |
18 | ax.set_xlabel('discount factor')
19 | ax.set_ylabel('reservation wage')
20 | ax.set_xlim(β_vals.min(), β_vals.max())
21 | txt = r'$\bar w$ as a function of $\beta$'
22 | ax.plot(β_vals, w_bar_vals, 'b-', lw=2, alpha=0.7, label=txt)
23 | ax.legend(loc='upper left')
24 | ax.grid()
25 |
26 | plt.show()
27 |
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/mccall/mccall_resw_alpha.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import matplotlib.pyplot as plt
3 |
4 | # McCallModel and compute_reservation_wage are defined earlier in the lecture
5 | grid_size = 25
6 | α_vals = np.linspace(0.05, 0.5, grid_size)
7 | w_bar_vals = np.empty_like(α_vals)
8 |
9 | mcm = McCallModel()
10 |
11 | fig, ax = plt.subplots(figsize=(10, 6))
12 |
13 | for i, α in enumerate(α_vals):
14 |     mcm.α = α
15 |     w_bar = compute_reservation_wage(mcm)
16 |     w_bar_vals[i] = w_bar
17 |
18 | ax.set_xlabel('job separation rate')
19 | ax.set_ylabel('reservation wage')
20 | ax.set_xlim(α_vals.min(), α_vals.max())
21 | txt = r'$\bar w$ as a function of $\alpha$'
22 | ax.plot(α_vals, w_bar_vals, 'b-', lw=2, alpha=0.7, label=txt)
23 | ax.legend(loc='upper right')
24 | ax.grid()
25 |
26 | plt.show()
27 |
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/mccall/mccall_resw_gamma.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import matplotlib.pyplot as plt
3 |
4 | # McCallModel and compute_reservation_wage are defined earlier in the lecture
5 | grid_size = 25
6 | γ_vals = np.linspace(0.05, 0.95, grid_size)
7 | w_bar_vals = np.empty_like(γ_vals)
8 |
9 | mcm = McCallModel()
10 |
11 | fig, ax = plt.subplots(figsize=(10, 6))
12 |
13 | for i, γ in enumerate(γ_vals):
14 |     mcm.γ = γ
15 |     w_bar = compute_reservation_wage(mcm)
16 |     w_bar_vals[i] = w_bar
17 |
18 | ax.set_xlabel('job offer rate')
19 | ax.set_ylabel('reservation wage')
20 | ax.set_xlim(γ_vals.min(), γ_vals.max())
21 | txt = r'$\bar w$ as a function of $\gamma$'
22 | ax.plot(γ_vals, w_bar_vals, 'b-', lw=2, alpha=0.7, label=txt)
23 | ax.legend(loc='upper left')
24 | ax.grid()
25 |
26 | plt.show()
27 |
28 |
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/coleman_policy_iter/solve_time_iter.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 |
3 | def solve_model_time_iter(model,    # Class with model information
4 |                           σ,        # Initial condition
5 |                           tol=1e-4,
6 |                           max_iter=1000,
7 |                           verbose=True,
8 |                           print_skip=25):
9 |
10 |     # Set up loop
11 |     i = 0
12 |     error = tol + 1
13 |
14 |     while i < max_iter and error > tol:
15 |         σ_new = K(σ, model)   # K is the operator defined earlier in the lecture
16 |         error = np.max(np.abs(σ - σ_new))
17 |         i += 1
18 |         if verbose and i % print_skip == 0:
19 |             print(f"Error at iteration {i} is {error}.")
20 |         σ = σ_new
21 |
22 |     if error > tol:
23 |         print("Failed to converge!")
24 |     elif verbose:
25 |         print(f"\nConverged in {i} iterations.")
26 |
27 |     return σ_new
28 |
--------------------------------------------------------------------------------
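In the lecture, `K` is the time-iteration operator and `model` carries the grids and parameters, so the snippet is not runnable on its own. As a self-contained sketch of how the loop behaves, one can feed it any contraction; the stand-in `K` below (not from the lecture) has fixed point 2.0, and `model` is unused:

```python
import numpy as np

def K(σ, model):
    # Stand-in contraction for illustration only; the lecture's K
    # is the time-iteration operator. Fixed point is 2.0.
    return 0.5 * σ + 1.0

σ_star = solve_model_time_iter(None, np.zeros(5), print_skip=5)
print(σ_star)  # ≈ [2. 2. 2. 2. 2.]
```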
/lectures/_static/lecture_specific/markov_perf/duopoly_mpe.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import quantecon as qe
3 |
4 | # Parameters
5 | a0 = 10.0
6 | a1 = 2.0
7 | β = 0.96
8 | γ = 12.0
9 |
10 | # In LQ form
11 | A = np.eye(3)
12 | B1 = np.array([[0.], [1.], [0.]])
13 | B2 = np.array([[0.], [0.], [1.]])
14 |
15 |
16 | R1 = [[ 0., -a0 / 2, 0.],
17 | [-a0 / 2., a1, a1 / 2.],
18 | [ 0, a1 / 2., 0.]]
19 |
20 | R2 = [[ 0., 0., -a0 / 2],
21 | [ 0., 0., a1 / 2.],
22 | [-a0 / 2, a1 / 2., a1]]
23 |
24 | Q1 = Q2 = γ
25 | S1 = S2 = W1 = W2 = M1 = M2 = 0.0
26 |
27 | # Solve using QE's nnash function
28 | F1, F2, P1, P2 = qe.nnash(A, B1, B2, R1, R2, Q1,
29 | Q2, S1, S2, W1, W2, M1,
30 | M2, beta=β)
31 |
32 | # Display policies
33 | print("Computed policies for firm 1 and firm 2:\n")
34 | print(f"F1 = {F1}")
35 | print(f"F2 = {F2}")
36 | print("\n")
37 |
--------------------------------------------------------------------------------
/lectures/status.md:
--------------------------------------------------------------------------------
1 | ---
2 | jupytext:
3 |   text_representation:
4 |     extension: .md
5 |     format_name: myst
6 | kernelspec:
7 |   display_name: Python 3
8 |   language: python
9 |   name: python3
10 | ---
11 |
12 | # Execution Statistics
13 |
14 | This table contains the latest execution statistics.
15 |
16 | ```{nb-exec-table}
17 | ```
18 |
19 | (status:machine-details)=
20 |
21 | These lectures are built on `linux` instances through `github actions`.
22 |
23 | These lectures use the following Python version
24 |
25 | ```{code-cell} ipython
26 | !python --version
27 | ```
28 |
29 | and the following package versions
30 |
31 | ```{code-cell} ipython
32 | :tags: [hide-output]
33 | !conda list
34 | ```
35 |
36 | This lecture series has access to the following GPU
37 |
38 | ```{code-cell} ipython
39 | !nvidia-smi
40 | ```
41 |
42 | You can check the backend used by JAX using:
43 |
44 | ```{code-cell} ipython3
45 | import jax
46 | # Check if JAX is using GPU
47 | print(f"JAX backend: {jax.devices()[0].platform}")
48 | ```
--------------------------------------------------------------------------------
/.github/workflows/linkcheck.yml:
--------------------------------------------------------------------------------
1 | name: Link Checker (lychee)
2 | on:
3 |   schedule:
4 |     # UTC 23:00 is early morning in Australia (9am)
5 |     - cron: '0 23 * * 1'
6 |   workflow_dispatch:
7 | jobs:
8 |   link-checking:
9 |     name: Link Checking
10 |     runs-on: "ubuntu-latest"
11 |     permissions:
12 |       issues: write # required for peter-evans/create-issue-from-file
13 |     steps:
14 |       # Checkout the live site (html)
15 |       - name: Checkout
16 |         uses: actions/checkout@v6
17 |         with:
18 |           ref: gh-pages
19 |       - name: Link Checker
20 |         id: lychee
21 |         uses: lycheeverse/lychee-action@v2
22 |         with:
23 |           fail: false
24 |           args: --accept 200,403,503 *.html
25 |       - name: Create Issue From File
26 |         if: steps.lychee.outputs.exit_code != 0
27 |         uses: peter-evans/create-issue-from-file@v6
28 |         with:
29 |           title: Link Checker Report
30 |           content-filepath: ./lychee/out.md
31 |           labels: report, automated issue, linkchecker
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/perm_income/perm_inc_ir.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import matplotlib.pyplot as plt
3 |
4 | r = 0.05
5 | β = 1 / (1 + r)
6 | T = 20  # Time horizon
7 | S = 5   # Impulse date
8 | σ1 = σ2 = 0.15
9 |
10 |
11 | def time_path(permanent=False):
12 |     "Time path of consumption and debt given shock sequence"
13 |     w1 = np.zeros(T+1)
14 |     w2 = np.zeros(T+1)
15 |     b = np.zeros(T+1)
16 |     c = np.zeros(T+1)
17 |     if permanent:
18 |         w1[S+1] = 1.0
19 |     else:
20 |         w2[S+1] = 1.0
21 |     for t in range(1, T):
22 |         b[t+1] = b[t] - σ2 * w2[t]
23 |         c[t+1] = c[t] + σ1 * w1[t+1] + (1 - β) * σ2 * w2[t+1]
24 |     return b, c
25 |
26 |
27 | fig, axes = plt.subplots(2, 1, figsize=(10, 8))
28 | p_args = {'lw': 2, 'alpha': 0.7}
29 | titles = ['transitory', 'permanent']
30 |
31 | L = 0.175
32 |
33 | # Pair permanent=False with 'transitory' and permanent=True with 'permanent'
34 | for ax, permanent, title in zip(axes, (False, True), titles):
35 |     b, c = time_path(permanent=permanent)
36 |     ax.set_title(f'Impulse response: {title} income shock')
37 |     ax.plot(list(range(T+1)), c, 'g-', label="consumption", **p_args)
38 |     ax.plot(list(range(T+1)), b, 'b-', label="debt", **p_args)
39 |     ax.plot((S, S), (-L, L), 'k-', lw=0.5)
40 |     ax.grid(alpha=0.5)
41 |     ax.set(xlabel=r'Time', ylim=(-L, L))
42 |
43 | axes[0].legend(loc='lower right')
44 |
45 | plt.tight_layout()
46 | plt.show()
--------------------------------------------------------------------------------
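For reference, the law of motion the loop above implements, with β = 1/(1+r): debt absorbs transitory shocks, while consumption adjusts fully to permanent shocks and only by the annuity value of transitory ones, since the annuitized value of a one-unit transitory receipt is r/(1+r) = 1 - β.

```latex
b_{t+1} = b_t - \sigma_2 w_{2,t}, \qquad
c_{t+1} = c_t + \sigma_1 w_{1,t+1} + (1 - \beta)\,\sigma_2 w_{2,t+1},
\qquad \beta = \frac{1}{1+r}
```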
/lectures/_static/lecture_specific/wald_friedman/wald_dec_rule.tex:
--------------------------------------------------------------------------------
1 | \documentclass[convert={density=300,size=1080x800,outext=.png}]{standalone}
2 | \usepackage{tikz}
3 | \usetikzlibrary{decorations.pathreplacing}
4 | \begin{document}
5 |
6 | %.. tikz::
7 | \begin{tikzpicture}
8 | [scale=5, every node/.style={color=black}, decoration={brace,amplitude=7pt}] \coordinate (a0) at (0, 0.0);
9 | \coordinate (a1) at (1, 0.0);
10 | \coordinate (a2) at (2, 0.0);
11 | \coordinate (a3) at (3, 0.0);
12 | \coordinate (s0) at (0, 0.1);
13 | \coordinate (s1) at (1, 0.1);
14 | \coordinate (s2) at (2, 0.1);
15 | \coordinate (s3) at (3, 0.1);
16 | % axis
17 | \draw[thick] (0, 0) -- (3, 0) node[below] {};
18 | %curly bracket
19 | \draw [decorate, very thick] (s0) -- (s1)
20 | node [midway, anchor=south, outer sep=10pt]{accept $f_0$};
21 | \draw [decorate, very thick] (s1) -- (s2)
22 | node [midway, anchor=south, outer sep=10pt]{draw again};
23 | \draw [decorate, very thick] (s2) -- (s3)
24 | node [midway, anchor=south, outer sep=10pt]{accept $f_1$};
25 | \node[circle, draw, thin, blue, fill=white!10, scale=0.45] at (a1){};
26 | \node[below, outer sep=5pt] at (a1){$B$};
27 | \node[circle, draw, thin, blue, fill=white!10, scale=0.45] at (a2){};
28 | \node[below, outer sep=5pt] at (a2){$A$};
29 | \node[below, outer sep=25pt] at (1.5, 0){value of $L_m$};
30 | \end{tikzpicture}
31 |
32 | \end{document}
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/optgrowth_fast/ogm.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | from numba import float64
3 | from numba.experimental import jitclass
4 |
5 | opt_growth_data = [
6 |     ('α', float64),        # Production parameter
7 |     ('β', float64),        # Discount factor
8 |     ('μ', float64),        # Shock location parameter
9 |     ('s', float64),        # Shock scale parameter
10 |     ('grid', float64[:]),  # Grid (array)
11 |     ('shocks', float64[:]) # Shock draws (array)
12 | ]
13 |
14 | @jitclass(opt_growth_data)
15 | class OptimalGrowthModel:
16 |
17 |     def __init__(self,
18 |                  α=0.4,
19 |                  β=0.96,
20 |                  μ=0,
21 |                  s=0.1,
22 |                  grid_max=4,
23 |                  grid_size=120,
24 |                  shock_size=250,
25 |                  seed=1234):
26 |
27 |         self.α, self.β, self.μ, self.s = α, β, μ, s
28 |
29 |         # Set up grid
30 |         self.grid = np.linspace(1e-5, grid_max, grid_size)
31 |
32 |         # Store shocks (with a seed, so results are reproducible)
33 |         np.random.seed(seed)
34 |         self.shocks = np.exp(μ + s * np.random.randn(shock_size))
35 |
36 |     def f(self, k):
37 |         "The production function"
38 |         return k**self.α
39 |
40 |     def u(self, c):
41 |         "The utility function"
42 |         return np.log(c)
43 |
44 |     def f_prime(self, k):
45 |         "Derivative of f"
46 |         return self.α * (k**(self.α - 1))
47 |
48 |     def u_prime(self, c):
49 |         "Derivative of u"
50 |         return 1/c
51 |
52 |     def u_prime_inv(self, c):
53 |         "Inverse of u'"
54 |         return 1/c
55 |
--------------------------------------------------------------------------------
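A minimal smoke test of the `jitclass` above (assuming `numpy` and `numba` are installed; the defaults are the lecture's parameterization, and the first instantiation triggers JIT compilation):

```python
og = OptimalGrowthModel()

# The typed fields behave as plain float64/array attributes
print(og.α, og.β)         # 0.4 0.96
print(og.grid.shape)      # (120,)
print(og.f(og.grid[-1]))  # output at the top grid point, 4 ** 0.4
```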
/.github/workflows/cache.yml:
--------------------------------------------------------------------------------
1 | name: Build Cache [using jupyter-book]
2 | on:
3 |   schedule:
4 |     # Execute cache weekly at 3am on Monday
5 |     - cron: '0 3 * * 1'
6 |   workflow_dispatch:
7 | jobs:
8 |   cache:
9 |     runs-on: "runs-on=${{ github.run_id }}/family=g4dn.2xlarge/image=quantecon_ubuntu2404/disk=large"
10 |     steps:
11 |       - uses: actions/checkout@v6
12 |         with:
13 |           ref: ${{ github.event.pull_request.head.sha }}
14 |       - name: Setup Anaconda
15 |         uses: conda-incubator/setup-miniconda@v3
16 |         with:
17 |           auto-update-conda: true
18 |           auto-activate-base: true
19 |           miniconda-version: 'latest'
20 |           python-version: "3.13"
21 |           environment-file: environment.yml
22 |           activate-environment: quantecon
23 |       - name: Install JAX and Numpyro
24 |         shell: bash -l {0}
25 |         run: |
26 |           pip install -U "jax[cuda13]"
27 |           pip install numpyro
28 |           python scripts/test-jax-install.py
29 |       - name: Check nvidia drivers
30 |         shell: bash -l {0}
31 |         run: |
32 |           nvidia-smi
33 |       - name: Build HTML
34 |         shell: bash -l {0}
35 |         run: |
36 |           jb build lectures --path-output ./ -W --keep-going
37 |       - name: Upload Execution Reports
38 |         uses: actions/upload-artifact@v6
39 |         if: failure()
40 |         with:
41 |           name: execution-reports
42 |           path: _build/html/reports
43 |       - name: Upload "_build" folder (cache)
44 |         uses: actions/upload-artifact@v6
45 |         with:
46 |           name: build-cache
47 |           path: _build
48 |           include-hidden-files: true
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/wald_friedman_2/wald_dec_rule.tex:
--------------------------------------------------------------------------------
1 | \documentclass[convert={density=300,size=1080x800,outext=.png}]{standalone}
2 | \usepackage{tikz}
3 | \usetikzlibrary{decorations.pathreplacing}
4 | \begin{document}
5 |
6 | %.. tikz::
7 | \begin{tikzpicture}
8 | [scale=5, every node/.style={color=black}, decoration={brace,amplitude=7pt}] \coordinate (a0) at (0, 0.0);
9 | \coordinate (a1) at (1, 0.0);
10 | \coordinate (a2) at (2, 0.0);
11 | \coordinate (a3) at (3, 0.0);
12 | \coordinate (s0) at (0, 0.1);
13 | \coordinate (s1) at (1, 0.1);
14 | \coordinate (s2) at (2, 0.1);
15 | \coordinate (s3) at (3, 0.1);
16 | % axis
17 | \draw[thick] (0, 0) -- (3, 0) node[below] {};
18 | %curly bracket
19 | \draw [decorate, very thick] (s0) -- (s1)
20 | node [midway, anchor=south, outer sep=10pt]{accept $f_0$};
21 | \draw [decorate, very thick] (s1) -- (s2)
22 | node [midway, anchor=south, outer sep=10pt]{draw again};
23 | \draw [decorate, very thick] (s2) -- (s3)
24 | node [midway, anchor=south, outer sep=10pt]{accept $f_1$};
25 | \node[circle, draw, thin, blue, fill=white!10, scale=0.45] at (a0){};
26 | \node[below, outer sep=5pt] at (a0){$0$};
27 | \node[circle, draw, thin, blue, fill=white!10, scale=0.45] at (a1){};
28 | \node[below, outer sep=5pt] at (a1){$B$};
29 | \node[circle, draw, thin, blue, fill=white!10, scale=0.45] at (a2){};
30 | \node[below, outer sep=5pt] at (a2){$A$};
31 | \node[circle, draw, thin, blue, fill=white!10, scale=0.45] at (a3){};
32 | \node[below, outer sep=5pt] at (a3){$1$};
33 | \node[below, outer sep=25pt] at (1.5, 0){values of $\pi$};
34 | \end{tikzpicture}
35 |
36 | \end{document}
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/optgrowth_fast/ogm_crra.py:
--------------------------------------------------------------------------------
1 | from numba import float64
2 | from numba.experimental import jitclass
3 |
4 | opt_growth_data = [
5 | ('α', float64), # Production parameter
6 | ('β', float64), # Discount factor
7 | ('μ', float64), # Shock location parameter
8 | ('γ', float64), # Preference parameter
9 | ('s', float64), # Shock scale parameter
10 | ('grid', float64[:]), # Grid (array)
11 | ('shocks', float64[:]) # Shock draws (array)
12 | ]
13 |
14 | @jitclass(opt_growth_data)
15 | class OptimalGrowthModel_CRRA:
16 |
17 | def __init__(self,
18 | α=0.4,
19 | β=0.96,
20 | μ=0,
21 | s=0.1,
22 | γ=1.5,
23 | grid_max=4,
24 | grid_size=120,
25 | shock_size=250,
26 | seed=1234):
27 |
28 | self.α, self.β, self.γ, self.μ, self.s = α, β, γ, μ, s
29 |
30 | # Set up grid
31 | self.grid = np.linspace(1e-5, grid_max, grid_size)
32 |
33 | # Store shocks (with a seed, so results are reproducible)
34 | np.random.seed(seed)
35 | self.shocks = np.exp(μ + s * np.random.randn(shock_size))
36 |
37 |
38 | def f(self, k):
39 | "The production function."
40 | return k**self.α
41 |
42 | def u(self, c):
43 | "The utility function."
44 | return c**(1 - self.γ) / (1 - self.γ)
45 |
46 | def f_prime(self, k):
47 | "Derivative of f."
48 | return self.α * (k**(self.α - 1))
49 |
50 | def u_prime(self, c):
51 | "Derivative of u."
52 | return c**(-self.γ)
53 |
54 | def u_prime_inv(self, c):
55 | "Inverse of u'."
56 | return c**(-1 / self.γ)
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/wald_friedman/wf_first_pass.py:
--------------------------------------------------------------------------------
1 | import numpy as np, scipy.stats as st
2 | import scipy.interpolate as interp
3 | import quantecon as qe
4 | def expect_loss_choose_0(p, L0):
5 | "For a given probability return expected loss of choosing model 0"
6 | return (1 - p) * L0
7 |
8 | def expect_loss_choose_1(p, L1):
9 | "For a given probability return expected loss of choosing model 1"
10 | return p * L1
11 |
12 | def EJ(p, f0, f1, J):
13 | """
14 | Evaluates the expectation of our value function J. To do this, we
15 | need the current probability that model 0 is correct (p), the
16 | distributions (f0, f1), and the function J.
17 | """
18 | # Get the current distribution we believe (p*f0 + (1-p)*f1)
19 | curr_dist = p * f0 + (1 - p) * f1
20 |
21 | # Get tomorrow's expected distribution through Bayes' law
22 | tp1_dist = np.clip((p * f0) / (p * f0 + (1 - p) * f1), 0, 1)
23 |
24 | # Evaluate the expectation
25 | EJ = curr_dist @ J(tp1_dist)
26 |
27 | return EJ
28 |
29 | def expect_loss_cont(p, c, f0, f1, J):
30 | return c + EJ(p, f0, f1, J)
31 |
32 |
33 | def bellman_operator(pgrid, c, f0, f1, L0, L1, J):
34 | """
35 | Evaluates the value function for a given continuation value
36 | function; that is, evaluates
37 |
38 | J(p) = min((1 - p) L0, p L1, c + E J(p'))
39 |
40 | Uses linear interpolation between points.
41 | """
42 | m = np.size(pgrid)
43 | assert m == np.size(J)
44 |
45 | J_out = np.zeros(m)
46 | J_interp = interp.UnivariateSpline(pgrid, J, k=1, ext=0)
47 |
48 | for (p_ind, p) in enumerate(pgrid):
49 | # Payoff of choosing model 0
50 | p_c_0 = expect_loss_choose_0(p, L0)
51 | p_c_1 = expect_loss_choose_1(p, L1)
52 | p_con = expect_loss_cont(p, c, f0, f1, J_interp)
53 |
54 | J_out[p_ind] = min(p_c_0, p_c_1, p_con)
55 |
56 | return J_out
57 |
58 |
59 | # == Now run at given parameters == #
60 |
61 | # First set up distributions
62 | p_m1 = np.linspace(0, 1, 50)
63 | f0 = np.clip(st.beta.pdf(p_m1, a=1, b=1), 1e-8, np.inf)
64 | f0 = f0 / np.sum(f0)
65 | f1 = np.clip(st.beta.pdf(p_m1, a=9, b=9), 1e-8, np.inf)
66 | f1 = f1 / np.sum(f1)
67 |
68 | # Build a grid
69 | pg = np.linspace(0, 1, 251)
70 | # Turn the Bellman operator into a function with one argument
71 | bell_op = lambda vf: bellman_operator(pg, 0.5, f0, f1, 5.0, 5.0, vf)
72 | # Pass it to qe's built in iteration routine
73 | J = qe.compute_fixed_point(bell_op,
74 | np.zeros(pg.size), # Initial guess
75 | error_tol=1e-6,
76 | verbose=True,
77 | print_skip=5)
78 |
79 |
--------------------------------------------------------------------------------
/lectures/troubleshooting.md:
--------------------------------------------------------------------------------
1 | ---
2 | jupytext:
3 | text_representation:
4 | extension: .md
5 | format_name: myst
6 | kernelspec:
7 | display_name: Python 3
8 | language: python
9 | name: python3
10 | ---
11 |
12 | (troubleshooting)=
13 | ```{raw} jupyter
14 |
19 | ```
20 |
21 | # Troubleshooting
22 |
23 | ```{contents} Contents
24 | :depth: 2
25 | ```
26 |
27 | This page is for readers experiencing errors when running the code from the lectures.
28 |
29 | ## Fixing Your Local Environment
30 |
31 | The basic assumption of the lectures is that code in a lecture should execute whenever
32 |
33 | 1. it is executed in a Jupyter notebook and
34 | 1. the notebook is running on a machine with the latest version of Anaconda Python.
35 |
36 | You have installed Anaconda, haven't you, following the instructions in [this lecture](https://python-programming.quantecon.org/getting_started.html)?
37 |
38 | Assuming that you have, the most common source of problems for our readers is that their Anaconda distribution is not up to date.
39 |
40 | [Here's a useful article](https://www.anaconda.com/blog/keeping-anaconda-date)
41 | on how to update Anaconda.
42 |
43 | Another option is to simply remove Anaconda and reinstall.
44 |
45 | You also need to keep the external code libraries, such as [QuantEcon.py](https://quantecon.org/quantecon-py/), up to date.
46 |
47 | For this task you can either
48 |
49 | * use `pip install --upgrade quantecon` on the command line, or
50 | * execute `!pip install --upgrade quantecon` within a Jupyter notebook.
51 |
52 | If your local environment is still not working you can do two things.
53 |
54 | First, you can use a remote machine instead, by clicking on the Launch Notebook icon available for each lecture
55 |
56 | ```{image} _static/lecture_specific/troubleshooting/launch.png
57 |
58 | ```
59 |
60 | Second, you can report an issue, so we can try to fix your local setup.
61 |
62 | We like getting feedback on the lectures, so please don't hesitate to get in
63 | touch.
64 |
65 | ## Reporting an Issue
66 |
67 | One way to give feedback is to raise an issue through our [issue tracker](https://github.com/QuantEcon/lecture-python/issues).
68 |
69 | Please be as specific as possible. Tell us where the problem is and as much
70 | detail about your local setup as you can provide.
71 |
72 | Finally, you can provide direct feedback to [contact@quantecon.org](mailto:contact@quantecon.org)
73 |
74 |
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/opt_transport/optimal_transport_splitting_experiment.tex:
--------------------------------------------------------------------------------
1 | \documentclass[tikz, border=1mm]{standalone}
2 | \usepackage{tikz}
3 | \usepackage{tikz-cd}
4 | \usetikzlibrary{positioning}
5 | \usetikzlibrary{arrows}
6 | \usetikzlibrary{calc}
7 | \usetikzlibrary{intersections}
8 | \usetikzlibrary{matrix}
9 | \usetikzlibrary{decorations}
10 | \usepackage{pgf}
11 | \usepackage{pgfplots}
12 | \pgfplotsset{compat=1.16} % Tom %'d out May 8, 2020 because it was causing hiccups
13 | \usetikzlibrary{shapes, fit}
14 | \usetikzlibrary{arrows.meta} % from fazeleh
15 | \usetikzlibrary{decorations.pathreplacing} %for brac
16 |
17 |
18 | \begin{document}
19 |
20 | \begin{tikzpicture}
21 | \node[circle, draw, scale=1.3856, red] (1) at (1, 1) {$q_1$};
22 | \node[circle, draw, scale=1.6, red] (2) at (3, 3) {$q_2$};
23 | \node[circle, draw, scale=1.3856, red] (3) at (5, 0) {$q_3$};
24 | \node[circle, draw, scale=1.6, red] (4) at (7, 2) {$q_4$};
25 | \node[circle, draw, scale=1.1312, blue] (01) at (1.5, 2.5) {${p}_1$};
26 | \node[circle, draw, scale=0.8, blue] (02) at (0, 0) {${p}_2$};
27 | \node[circle, draw, scale=1.3856, blue] (03) at (3, 1) {${p}_3$};
28 | \node[circle, draw, scale=1.1312, blue] (04) at (6.5, 0.5) {${p}_4$};
29 | \node[circle, draw, scale=0.8, blue] (05) at (8, 3) {${p}_5$};
30 | \node[circle, draw, scale=1.1312, blue] (06) at (5, 3) {${p}_6$};
31 | \node[circle, draw, scale=1.3856, blue] (07) at (4.5, 1.5) {${p}_7$};
32 |
33 |
34 | \draw[->, thick, blue]
35 | (01) edge [bend left=0, left, -{Stealth[scale=1]}, line width=1pt] node {}(1)
36 | (01) edge [bend left=0, below, -{Stealth[scale=1]}, line width=1pt] node {} (2)
37 | (02) edge [bend left=0, below, -{Stealth[scale=1]}, line width=1pt] node {} (1)
38 | (03) edge [bend left=0, below, -{Stealth[scale=1]}, line width=1pt] node {} (1)
39 | (03) edge [bend left=0, below, -{Stealth[scale=1]}, line width=1pt] node {} (2)
40 | (03) edge [bend left=0, below, -{Stealth[scale=1]}, line width=1pt] node {} (3)
41 | (04) edge [bend left=0, below, -{Stealth[scale=1]}, line width=1pt] node {} (3)
42 | (04) edge [bend left=0, below, -{Stealth[scale=1]}, line width=1pt] node {} (4)
43 | (05) edge [bend left=0, below, -{Stealth[scale=1]}, line width=1pt] node {} (4)
44 | (06) edge [bend left=0, below, -{Stealth[scale=1]}, line width=1pt] node {} (2)
45 | (06) edge [bend left=0, below, -{Stealth[scale=1]}, line width=1pt] node {} (4)
46 | (07) edge [bend left=0, below, -{Stealth[scale=1]}, line width=1pt] node {} (2)
47 | (07) edge [bend left=0, below, -{Stealth[scale=1]}, line width=1pt] node {} (3)
48 | (07) edge [bend left=0, below, -{Stealth[scale=1]}, line width=1pt] node {} (4);
49 | \end{tikzpicture}
50 |
51 | \end{document}
52 |
--------------------------------------------------------------------------------
/lectures/_toc.yml:
--------------------------------------------------------------------------------
1 | format: jb-book
2 | root: intro
3 | parts:
4 | - caption: Tools and Techniques
5 | numbered: true
6 | chapters:
7 | - file: sir_model
8 | - file: linear_algebra
9 | - file: qr_decomp
10 | - file: eig_circulant
11 | - file: svd_intro
12 | - file: var_dmd
13 | - file: newton_method
14 | - caption: Elementary Statistics
15 | numbered: true
16 | chapters:
17 | - file: prob_matrix
18 | - file: stats_examples
19 | - file: lln_clt
20 | - file: prob_meaning
21 | - file: multi_hyper
22 | - file: multivariate_normal
23 | - file: hoist_failure
24 | - file: back_prop
25 | - file: rand_resp
26 | - file: util_rand_resp
27 | - caption: Bayes Law
28 | numbered: true
29 | chapters:
30 | - file: bayes_nonconj
31 | - file: ar1_bayes
32 | - file: ar1_turningpts
33 | - caption: Statistics and Information
34 | numbered: true
35 | chapters:
36 | - file: divergence_measures
37 | - file: likelihood_ratio_process
38 | - file: likelihood_ratio_process_2
39 | - file: likelihood_var
40 | - file: imp_sample
41 | - file: wald_friedman
42 | - file: wald_friedman_2
43 | - file: exchangeable
44 | - file: likelihood_bayes
45 | - file: mix_model
46 | - file: navy_captain
47 | - caption: Linear Programming
48 | numbered: true
49 | chapters:
50 | - file: opt_transport
51 | - file: von_neumann_model
52 | - caption: Introduction to Dynamics
53 | numbered: true
54 | chapters:
55 | - file: finite_markov
56 | - file: inventory_dynamics
57 | - file: linear_models
58 | - file: samuelson
59 | - file: kesten_processes
60 | - file: wealth_dynamics
61 | - file: kalman
62 | - file: kalman_2
63 | - caption: Search
64 | numbered: true
65 | chapters:
66 | - file: mccall_model
67 | - file: mccall_model_with_separation
68 | - file: mccall_model_with_sep_markov
69 | - file: mccall_fitted_vfi
70 | - file: mccall_persist_trans
71 | - file: career
72 | - file: jv
73 | - file: odu
74 | - file: mccall_q
75 | - caption: Introduction to Optimal Savings
76 | numbered: true
77 | chapters:
78 | - file: os
79 | - file: os_numerical
80 | - file: os_stochastic
81 | - file: os_time_iter
82 | - file: os_egm
83 | - file: os_egm_jax
84 | - caption: Household Problems
85 | numbered: true
86 | chapters:
87 | - file: ifp_discrete
88 | - file: ifp_opi
89 | - file: ifp_egm
90 | - file: ifp_egm_transient_shocks
91 | - file: ifp_advanced
92 | - caption: LQ Control
93 | numbered: true
94 | chapters:
95 | - file: lqcontrol
96 | - file: lagrangian_lqdp
97 | - file: cross_product_trick
98 | - file: perm_income
99 | - file: perm_income_cons
100 | - file: lq_inventories
101 | - caption: Optimal Growth
102 | numbered: true
103 | chapters:
104 | - file: cass_koopmans_1
105 | - file: cass_koopmans_2
106 | - file: cass_fiscal
107 | - file: cass_fiscal_2
108 | - file: ak2
109 | - caption: Multiple Agent Models
110 | numbered: true
111 | chapters:
112 | - file: lake_model
113 | - file: endogenous_lake
114 | - file: rational_expectations
115 | - file: re_with_feedback
116 | - file: markov_perf
117 | - file: uncertainty_traps
118 | - file: aiyagari
119 | - file: ak_aiyagari
120 | - caption: Asset Pricing and Finance
121 | numbered: true
122 | chapters:
123 | - file: markov_asset
124 | - file: ge_arrow
125 | - file: harrison_kreps
126 | - file: morris_learn
127 | - caption: Data and Empirics
128 | numbered: true
129 | chapters:
130 | - file: pandas_panel
131 | - file: ols
132 | - file: mle
133 | - caption: Auctions
134 | numbered: true
135 | chapters:
136 | - file: two_auctions
137 | - file: house_auction
138 | - caption: Other
139 | numbered: true
140 | chapters:
141 | - file: troubleshooting
142 | - file: zreferences
143 | - file: status
144 |
--------------------------------------------------------------------------------
/.github/workflows/collab.yml:
--------------------------------------------------------------------------------
1 | name: Build Project on Google Colab (Execution)
2 | on:
3 | schedule:
4 | # Execute weekly on Monday at 4am UTC (offset from cache.yml)
5 | - cron: '0 4 * * 1'
6 | workflow_dispatch:
7 | jobs:
8 | execution-checks:
9 | runs-on: "runs-on=${{ github.run_id }}/family=g4dn.2xlarge/image=ubuntu24-gpu-x64/disk=large"
10 | permissions:
11 | issues: write # required for creating issues on execution failure
12 | container:
13 | image: docker://us-docker.pkg.dev/colab-images/public/runtime:latest
14 | options: --gpus all
15 | steps:
16 | - uses: actions/checkout@v6
17 | # Install build software
18 | - name: Install Build Software & LaTeX
19 | shell: bash -l {0}
20 | run: |
21 | pip install jupyter-book==1.0.3 quantecon-book-theme==0.8.2 sphinx-tojupyter==0.3.0 sphinxext-rediraffe==0.2.7 sphinxcontrib-youtube==1.3.0 sphinx-togglebutton==0.3.2 arviz sphinx-proof sphinx-exercise sphinx-reredirects
22 | apt-get update
23 | apt-get install dvipng texlive texlive-latex-extra texlive-fonts-recommended cm-super
24 | - name: Check nvidia drivers
25 | shell: bash -l {0}
26 | run: |
27 | nvidia-smi
28 | - name: Check python version
29 | shell: bash -l {0}
30 | run: |
31 | python --version
32 | - name: Display Pip Versions
33 | shell: bash -l {0}
34 | run: pip list
35 | - name: Download "build" folder (cache)
36 | uses: dawidd6/action-download-artifact@v11
37 | with:
38 | workflow: cache.yml
39 | branch: main
40 | name: build-cache
41 | path: _build
42 | # Build of HTML (Execution Testing)
43 | - name: Build HTML
44 | shell: bash -l {0}
45 | run: |
46 | jb build lectures --path-output ./ -n -W --keep-going
47 | - name: Upload Execution Reports
48 | uses: actions/upload-artifact@v6
49 | if: failure()
50 | with:
51 | name: execution-reports
52 | path: _build/html/reports
53 | - name: Create execution failure report
54 | if: failure()
55 | run: |
56 | cat > execution-failure-report.md << 'EOF'
57 | # Colab Execution Failure Report
58 |
59 | The weekly Google Colab execution check has failed. This indicates that one or more notebooks failed to execute properly in the Colab environment.
60 |
61 | ## Details
62 |
63 | **Workflow Run:** [${{ github.run_id }}](https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }})
64 | **Date:** ${{ github.event.head_commit.timestamp || github.event.schedule }}
65 | **Branch:** ${{ github.ref_name }}
66 | **Commit:** ${{ github.sha }}
67 |
68 | ## Execution Reports
69 |
70 | Detailed execution reports have been uploaded as artifacts to this workflow run. Please check the following:
71 |
72 | 1. Download the `execution-reports` artifact from the workflow run
73 | 2. Review the failed notebook execution logs
74 | 3. Fix any execution issues in the notebooks
75 | 4. Test locally or in Colab before merging
76 |
77 | ## Next Steps
78 |
79 | 1. Investigate the failure by reviewing the execution reports
80 | 2. Fix the identified issues
81 | 3. Test the fixes
82 | 4. Close this issue once resolved
83 |
84 | This is an automated issue created by the weekly Colab execution check.
85 | EOF
86 | - name: Create Issue on Execution Failure
87 | if: failure()
88 | uses: peter-evans/create-issue-from-file@v6
89 | with:
90 | title: "Weekly Colab Execution Check Failed - ${{ github.run_id }}"
91 | content-filepath: execution-failure-report.md
92 | labels: execution-failure, automated-issue, colab
93 |
--------------------------------------------------------------------------------
/lectures/graph.txt:
--------------------------------------------------------------------------------
1 | node0, node1 0.04, node8 11.11, node14 72.21
2 | node1, node46 1247.25, node6 20.59, node13 64.94
3 | node2, node66 54.18, node31 166.80, node45 1561.45
4 | node3, node20 133.65, node6 2.06, node11 42.43
5 | node4, node75 3706.67, node5 0.73, node7 1.02
6 | node5, node45 1382.97, node7 3.33, node11 34.54
7 | node6, node31 63.17, node9 0.72, node10 13.10
8 | node7, node50 478.14, node9 3.15, node10 5.85
9 | node8, node69 577.91, node11 7.45, node12 3.18
10 | node9, node70 2454.28, node13 4.42, node20 16.53
11 | node10, node89 5352.79, node12 1.87, node16 25.16
12 | node11, node94 4961.32, node18 37.55, node20 65.08
13 | node12, node84 3914.62, node24 34.32, node28 170.04
14 | node13, node60 2135.95, node38 236.33, node40 475.33
15 | node14, node67 1878.96, node16 2.70, node24 38.65
16 | node15, node91 3597.11, node17 1.01, node18 2.57
17 | node16, node36 392.92, node19 3.49, node38 278.71
18 | node17, node76 783.29, node22 24.78, node23 26.45
19 | node18, node91 3363.17, node23 16.23, node28 55.84
20 | node19, node26 20.09, node20 0.24, node28 70.54
21 | node20, node98 3523.33, node24 9.81, node33 145.80
22 | node21, node56 626.04, node28 36.65, node31 27.06
23 | node22, node72 1447.22, node39 136.32, node40 124.22
24 | node23, node52 336.73, node26 2.66, node33 22.37
25 | node24, node66 875.19, node26 1.80, node28 14.25
26 | node25, node70 1343.63, node32 36.58, node35 45.55
27 | node26, node47 135.78, node27 0.01, node42 122.00
28 | node27, node65 480.55, node35 48.10, node43 246.24
29 | node28, node82 2538.18, node34 21.79, node36 15.52
30 | node29, node64 635.52, node32 4.22, node33 12.61
31 | node30, node98 2616.03, node33 5.61, node35 13.95
32 | node31, node98 3350.98, node36 20.44, node44 125.88
33 | node32, node97 2613.92, node34 3.33, node35 1.46
34 | node33, node81 1854.73, node41 3.23, node47 111.54
35 | node34, node73 1075.38, node42 51.52, node48 129.45
36 | node35, node52 17.57, node41 2.09, node50 78.81
37 | node36, node71 1171.60, node54 101.08, node57 260.46
38 | node37, node75 269.97, node38 0.36, node46 80.49
39 | node38, node93 2767.85, node40 1.79, node42 8.78
40 | node39, node50 39.88, node40 0.95, node41 1.34
41 | node40, node75 548.68, node47 28.57, node54 53.46
42 | node41, node53 18.23, node46 0.28, node54 162.24
43 | node42, node59 141.86, node47 10.08, node72 437.49
44 | node43, node98 2984.83, node54 95.06, node60 116.23
45 | node44, node91 807.39, node46 1.56, node47 2.14
46 | node45, node58 79.93, node47 3.68, node49 15.51
47 | node46, node52 22.68, node57 27.50, node67 65.48
48 | node47, node50 2.82, node56 49.31, node61 172.64
49 | node48, node99 2564.12, node59 34.52, node60 66.44
50 | node49, node78 53.79, node50 0.51, node56 10.89
51 | node50, node85 251.76, node53 1.38, node55 20.10
52 | node51, node98 2110.67, node59 23.67, node60 73.79
53 | node52, node94 1471.80, node64 102.41, node66 123.03
54 | node53, node72 22.85, node56 4.33, node67 88.35
55 | node54, node88 967.59, node59 24.30, node73 238.61
56 | node55, node84 86.09, node57 2.13, node64 60.80
57 | node56, node76 197.03, node57 0.02, node61 11.06
58 | node57, node86 701.09, node58 0.46, node60 7.01
59 | node58, node83 556.70, node64 29.85, node65 34.32
60 | node59, node90 820.66, node60 0.72, node71 0.67
61 | node60, node76 48.03, node65 4.76, node67 1.63
62 | node61, node98 1057.59, node63 0.95, node64 4.88
63 | node62, node91 132.23, node64 2.94, node76 38.43
64 | node63, node66 4.43, node72 70.08, node75 56.34
65 | node64, node80 47.73, node65 0.30, node76 11.98
66 | node65, node94 594.93, node66 0.64, node73 33.23
67 | node66, node98 395.63, node68 2.66, node73 37.53
68 | node67, node82 153.53, node68 0.09, node70 0.98
69 | node68, node94 232.10, node70 3.35, node71 1.66
70 | node69, node99 247.80, node70 0.06, node73 8.99
71 | node70, node76 27.18, node72 1.50, node73 8.37
72 | node71, node89 104.50, node74 8.86, node91 284.64
73 | node72, node76 15.32, node84 102.77, node92 133.06
74 | node73, node83 52.22, node76 1.40, node90 243.00
75 | node74, node81 1.07, node76 0.52, node78 8.08
76 | node75, node92 68.53, node76 0.81, node77 1.19
77 | node76, node85 13.18, node77 0.45, node78 2.36
78 | node77, node80 8.94, node78 0.98, node86 64.32
79 | node78, node98 355.90, node81 2.59
80 | node79, node81 0.09, node85 1.45, node91 22.35
81 | node80, node92 121.87, node88 28.78, node98 264.34
82 | node81, node94 99.78, node89 39.52, node92 99.89
83 | node82, node91 47.44, node88 28.05, node93 11.99
84 | node83, node94 114.95, node86 8.75, node88 5.78
85 | node84, node89 19.14, node94 30.41, node98 121.05
86 | node85, node97 94.51, node87 2.66, node89 4.90
87 | node86, node97 85.09
88 | node87, node88 0.21, node91 11.14, node92 21.23
89 | node88, node93 1.31, node91 6.83, node98 6.12
90 | node89, node97 36.97, node99 82.12
91 | node90, node96 23.53, node94 10.47, node99 50.99
92 | node91, node97 22.17
93 | node92, node96 10.83, node97 11.24, node99 34.68
94 | node93, node94 0.19, node97 6.71, node99 32.77
95 | node94, node98 5.91, node96 2.03
96 | node95, node98 6.17, node99 0.27
97 | node96, node98 3.32, node97 0.43, node99 5.87
98 | node97, node98 0.30
99 | node98, node99 0.33
100 | node99,
101 |
--------------------------------------------------------------------------------
/lectures/_config.yml:
--------------------------------------------------------------------------------
1 | title: Intermediate Quantitative Economics with Python
2 | author: Thomas J. Sargent & John Stachurski
3 | logo: _static/qe-logo-large.png
4 | description: This website presents a set of lectures on quantitative economic modeling, designed and written by Thomas J. Sargent and John Stachurski.
5 |
6 | parse:
7 | myst_enable_extensions:
8 | - amsmath
9 | - colon_fence
10 | - deflist
11 | - dollarmath
12 | - html_admonition
13 | - html_image
14 | - linkify
15 | - replacements
16 | - smartquotes
17 | - substitution
18 |
19 | only_build_toc_files: true
20 | execute:
21 | execute_notebooks: "cache"
22 | timeout: 2400
23 |
24 | bibtex_bibfiles:
25 | - _static/quant-econ.bib
26 |
27 | html:
28 | baseurl: https://python.quantecon.org/
29 |
30 | latex:
31 | latex_documents:
32 | targetname: quantecon-python.tex
33 |
34 | sphinx:
35 | extra_extensions: [sphinx_multitoc_numbering, sphinxext.rediraffe, sphinx_tojupyter, sphinxcontrib.youtube, sphinx.ext.todo, sphinx_exercise, sphinx_proof, sphinx_togglebutton, sphinx.ext.intersphinx, sphinx_reredirects]
36 | config:
37 | # false-positive links
38 | linkcheck_ignore: ['https://online.stat.psu.edu/stat415/book/export/html/834']
39 | bibtex_reference_style: author_year
40 | suppress_warnings: ["mystnb.unknown_mime_type"]
41 | nb_merge_streams: true
42 | nb_mime_priority_overrides: [
43 | # HTML
44 | ['html', 'application/vnd.jupyter.widget-view+json', 10],
45 | ['html', 'application/javascript', 20],
46 | ['html', 'text/html', 30],
47 | ['html', 'text/latex', 40],
48 | ['html', 'image/svg+xml', 50],
49 | ['html', 'image/png', 60],
50 | ['html', 'image/jpeg', 70],
51 | ['html', 'text/markdown', 80],
52 | ['html', 'text/plain', 90],
53 | # Jupyter Notebooks
54 | ['jupyter', 'application/vnd.jupyter.widget-view+json', 10],
55 | ['jupyter', 'application/javascript', 20],
56 | ['jupyter', 'text/html', 30],
57 | ['jupyter', 'text/latex', 40],
58 | ['jupyter', 'image/svg+xml', 50],
59 | ['jupyter', 'image/png', 60],
60 | ['jupyter', 'image/jpeg', 70],
61 | ['jupyter', 'text/markdown', 80],
62 | ['jupyter', 'text/plain', 90],
63 | # LaTeX
64 | ['latex', 'text/latex', 10],
65 | ['latex', 'application/pdf', 20],
66 | ['latex', 'image/png', 30],
67 | ['latex', 'image/jpeg', 40],
68 | ['latex', 'text/markdown', 50],
69 | ['latex', 'text/plain', 60],
70 | # Link Checker
71 | ['linkcheck', 'text/plain', 10],
72 | ]
73 | html_favicon: _static/lectures-favicon.ico
74 | html_theme: quantecon_book_theme
75 | html_static_path: ['_static']
76 | html_theme_options:
77 | authors:
78 | - name: Thomas J. Sargent
79 | url: http://www.tomsargent.com/
80 | - name: John Stachurski
81 | url: https://johnstachurski.net/
82 | header_organisation_url: https://quantecon.org
83 | header_organisation: QuantEcon
84 | repository_url: https://github.com/QuantEcon/lecture-python.myst
85 | nb_repository_url: https://github.com/QuantEcon/lecture-python.notebooks
86 | path_to_docs: lectures
87 | twitter: quantecon
88 | twitter_logo_url: https://assets.quantecon.org/img/qe-twitter-logo.png
89 | og_logo_url: https://assets.quantecon.org/img/qe-og-logo.png
90 | description: This website presents a set of lectures on quantitative economic modeling, designed and written by Thomas J. Sargent and John Stachurski.
91 | keywords: Python, QuantEcon, Quantitative Economics, Economics, Sloan, Alfred P. Sloan Foundation, Tom J. Sargent, John Stachurski
92 | analytics:
93 | google_analytics_id: G-J0SMYR4SG3
94 | launch_buttons:
95 | colab_url : https://colab.research.google.com
96 | intersphinx_mapping:
97 | intro:
98 | - "https://intro.quantecon.org/"
99 | - null
100 | advanced:
101 | - "https://python-advanced.quantecon.org"
102 | - null
103 | jax:
104 | - "https://jax.quantecon.org/"
105 | - null
106 | mathjax3_config:
107 | tex:
108 | macros:
109 | "argmax" : "arg\\,max"
110 | "argmin" : "arg\\,min"
111 | mathjax_path: https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js
112 | # Local Redirects
113 | rediraffe_redirects:
114 | index_toc.md: intro.md
115 | # Remote Redirects
116 | redirects:
117 | heavy_tails: https://intro.quantecon.org/heavy_tails.html
118 | ar1_processes: https://intro.quantecon.org/ar1_processes.html
119 | geom_series: https://intro.quantecon.org/geom_series.html
120 | lp_intro: https://intro.quantecon.org/lp_intro.html
121 | short_path: https://intro.quantecon.org/short_path.html
122 | schelling: https://intro.quantecon.org/schelling.html
123 | scalar_dynam: https://intro.quantecon.org/scalar_dynam.html
124 | complex_and_trig: https://intro.quantecon.org/complex_and_trig.html
125 | # sphinx-proof
126 | proof_minimal_theme: true
127 | # sphinx-exercise
128 | exercise_style: "solution_follow_exercise"
129 | # sphinx-tojupyter
130 | tojupyter_static_file_path: ["source/_static", "_static"]
131 | tojupyter_target_html: true
132 | tojupyter_urlpath: "https://python.quantecon.org/"
133 | tojupyter_image_urlpath: "https://python.quantecon.org/_static/"
134 | tojupyter_lang_synonyms: ["ipython", "ipython3", "python"]
135 | tojupyter_kernels:
136 | python3:
137 | kernelspec:
138 | display_name: "Python"
139 | language: python3
140 | name: python3
141 | file_extension: ".py"
142 | tojupyter_images_markdown: true
143 |
--------------------------------------------------------------------------------
/.github/workflows/publish.yml:
--------------------------------------------------------------------------------
1 | name: Build & Publish to GH Pages
2 | on:
3 | push:
4 | tags:
5 | - 'publish*'
6 | jobs:
7 | publish:
8 | if: github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags')
9 | runs-on: "runs-on=${{ github.run_id }}/family=g4dn.2xlarge/image=quantecon_ubuntu2404/disk=large"
10 | steps:
11 | - name: Checkout
12 | uses: actions/checkout@v6
13 | with:
14 | fetch-depth: 0
15 | - name: Setup Anaconda
16 | uses: conda-incubator/setup-miniconda@v3
17 | with:
18 | auto-update-conda: true
19 | auto-activate-base: true
20 | miniconda-version: 'latest'
21 | python-version: "3.13"
22 | environment-file: environment.yml
23 | activate-environment: quantecon
24 | - name: Install JAX and Numpyro
25 | shell: bash -l {0}
26 | run: |
27 | pip install -U "jax[cuda13]"
28 | pip install numpyro
29 | python scripts/test-jax-install.py
30 | - name: Check nvidia drivers
31 | shell: bash -l {0}
32 | run: |
33 | nvidia-smi
34 | - name: Display Conda Environment Versions
35 | shell: bash -l {0}
36 | run: conda list
37 | - name: Display Pip Versions
38 | shell: bash -l {0}
39 | run: pip list
40 | # Download Build Cache from cache.yml
41 | - name: Download "build" folder (cache)
42 | uses: dawidd6/action-download-artifact@v11
43 | with:
44 | workflow: cache.yml
45 | branch: main
46 | name: build-cache
47 | path: _build
48 | # Build Assets (Download Notebooks, PDF via LaTeX)
49 | - name: Build PDF from LaTeX
50 | shell: bash -l {0}
51 | run: |
52 | jb build lectures --builder pdflatex --path-output ./ -n -W --keep-going
53 | - name: Upload Execution Reports (LaTeX)
54 | uses: actions/upload-artifact@v6
55 | if: failure()
56 | with:
57 | name: execution-reports-latex
58 | path: _build/latex/reports
59 | - name: Copy LaTeX PDF for GH-PAGES
60 | shell: bash -l {0}
61 | run: |
62 | mkdir -p _build/html/_pdf
63 | cp -u _build/latex/*.pdf _build/html/_pdf
64 | - name: Build Download Notebooks (sphinx-tojupyter)
65 | shell: bash -l {0}
66 | run: |
67 | jb build lectures --path-output ./ --builder=custom --custom-builder=jupyter -n -W --keep-going
68 | zip -r download-notebooks.zip _build/jupyter
69 | - name: Upload Execution Reports (Download Notebooks)
70 | uses: actions/upload-artifact@v6
71 | if: failure()
72 | with:
73 | name: execution-reports-notebooks
74 | path: _build/jupyter/reports
75 | - uses: actions/upload-artifact@v6
76 | with:
77 | name: download-notebooks
78 | path: download-notebooks.zip
79 | - name: Copy Download Notebooks for GH-PAGES
80 | shell: bash -l {0}
81 | run: |
82 | mkdir -p _build/html/_notebooks
83 | cp -u _build/jupyter/*.ipynb _build/html/_notebooks
84 | # Final Build of HTML (with assets)
85 | - name: Build HTML
86 | shell: bash -l {0}
87 | run: |
88 | jb build lectures --path-output ./ -n -W --keep-going
89 | - name: Upload Execution Reports (HTML)
90 | uses: actions/upload-artifact@v6
91 | if: failure()
92 | with:
93 | name: execution-reports-html
94 | path: _build/html/reports
95 | # Create HTML archive for release assets
96 | - name: Create HTML archive
97 | shell: bash -l {0}
98 | run: |
99 | tar -czf lecture-python-html-${{ github.ref_name }}.tar.gz -C _build/html .
100 | sha256sum lecture-python-html-${{ github.ref_name }}.tar.gz > html-checksum.txt
101 |
102 | # Create metadata manifest
103 | cat > html-manifest.json << EOF
104 | {
105 | "tag": "${{ github.ref_name }}",
106 | "commit": "${{ github.sha }}",
107 | "timestamp": "$(date -Iseconds)",
108 | "size_mb": $(du -sm _build/html | cut -f1),
109 | "file_count": $(find _build/html -type f | wc -l)
110 | }
111 | EOF
112 | - name: Upload archives to release
113 | uses: softprops/action-gh-release@v2
114 | with:
115 | files: |
116 | lecture-python-html-${{ github.ref_name }}.tar.gz
117 | html-checksum.txt
118 | html-manifest.json
119 | env:
120 | GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
121 | - name: Deploy website to gh-pages
122 | uses: peaceiris/actions-gh-pages@v4
123 | with:
124 | github_token: ${{ secrets.GITHUB_TOKEN }}
125 | publish_dir: _build/html/
126 | cname: python.quantecon.org
127 | - name: Prepare lecture-python.notebooks sync
128 | shell: bash -l {0}
129 | run: |
130 | mkdir -p _build/lecture-python.notebooks
131 | cp -a _notebook_repo/. _build/lecture-python.notebooks
132 | cp _build/jupyter/*.ipynb _build/lecture-python.notebooks
133 | ls -a _build/lecture-python.notebooks
134 | - name: Commit notebooks to lecture-python.notebooks
135 | shell: bash -l {0}
136 | env:
137 | QE_SERVICES_PAT: ${{ secrets.QUANTECON_SERVICES_PAT }}
138 | run: |
139 | git clone https://quantecon-services:$QE_SERVICES_PAT@github.com/quantecon/lecture-python.notebooks
140 |
141 | cp _build/lecture-python.notebooks/*.ipynb lecture-python.notebooks
142 |
143 | cd lecture-python.notebooks
144 | git config user.name "QuantEcon Services"
145 | git config user.email "admin@quantecon.org"
146 | git add *.ipynb
147 | git commit -m "auto publishing updates to notebooks"
148 | git push origin main
149 |
--------------------------------------------------------------------------------
/lectures/cross_product_trick.md:
--------------------------------------------------------------------------------
1 | ---
2 | jupytext:
3 | text_representation:
4 | extension: .md
5 | format_name: myst
6 | format_version: 0.13
7 | jupytext_version: 1.11.1
8 | kernelspec:
9 | display_name: Python 3
10 | language: python
11 | name: python3
12 | ---
13 |
14 | # Eliminating Cross Products
15 |
16 | ## Overview
17 |
18 | This lecture describes formulas for eliminating
19 |
20 | * cross products between states and control in linear-quadratic dynamic programming problems
21 |
22 | * covariances between state and measurement noises in Kalman filtering problems
23 |
24 |
25 | For a linear-quadratic dynamic programming problem, the idea involves these steps
26 |
27 | * transform states and controls in a way that leads to an equivalent problem with no cross-products between transformed states and controls
28 | * solve the transformed problem using standard formulas for problems with no cross-products between states and controls presented in this lecture {doc}`Linear Control: Foundations `
29 | * transform the optimal decision rule for the altered problem into the optimal decision rule for the original problem with cross-products between states and controls
30 |
31 | +++
32 |
33 | ## Undiscounted Dynamic Programming Problem
34 |
35 | Here is a nonstochastic, undiscounted LQ dynamic programming problem with cross products between
36 | states and controls in the objective function.
37 |
38 |
39 |
40 | The problem is defined by the 5-tuple of matrices $(A, B, R, Q, H)$
41 | where $R$ and $Q$ are positive definite symmetric matrices and
42 | $A \sim m \times m, B \sim m \times k, Q \sim k \times k, R \sim m \times m$ and $H \sim k \times m$.
43 |
44 |
45 | The problem is to choose $\{x_{t+1}, u_t\}_{t=0}^\infty$ to maximize
46 |
47 | $$
48 | - \sum_{t=0}^\infty (x_t' R x_t + u_t' Q u_t + 2 u_t H x_t)
49 | $$
50 |
51 | subject to the linear constraints
52 |
53 | $$ x_{t+1} = A x_t + B u_t, \quad t \geq 0 $$
54 |
55 | where $x_0$ is a given initial condition.
56 |
57 | The solution to this undiscounted infinite-horizon problem is a time-invariant feedback rule
58 |
59 | $$ u_t = -F x_t $$
60 |
61 | where
62 |
63 | $$ F = (Q + B'PB)^{-1} (B'PA + H) $$
64 |
65 | and $P \sim m \times m $ is a positive definite solution of the algebraic matrix Riccati equation
66 |
67 | $$
68 | P = R + A'PA - (A'PB + H')(Q + B'PB)^{-1}(B'PA + H).
69 | $$
70 |
71 |
72 | +++
73 |
74 | It can be verified that an **equivalent** problem without cross-products between states and controls
75 | is defined by the 4-tuple of matrices $(A^*, B, R^*, Q)$.
76 | 
77 | The omission of the matrix $H$ (equivalently, setting $H = 0$) indicates that there are no cross products between states and controls
78 | in the equivalent problem.
79 |
80 | The matrices $(A^*, B, R^*, Q)$ that define the equivalent problem, together with the value function and policy function matrices $P, F^*$ that solve it, are related to the matrices $(A, B, R, Q, H)$ that define the original problem and its solution matrices $P, F$ by
81 |
82 | \begin{align*}
83 | A^* & = A - B Q^{-1} H, \\
84 | R^* & = R - H'Q^{-1} H, \\
85 | P & = R^* + {A^*}' P A^* - ({A^*}' P B) (Q + B' P B)^{-1} B' P A^*, \\
86 | F^* & = (Q + B' P B)^{-1} B' P A^*, \\
87 | F & = F^* + Q^{-1} H.
88 | \end{align*}
89 |
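For concreteness, here is a minimal NumPy sketch of the transformation; the function name `solve_lq_cross` and the plain fixed-point iteration on the Riccati equation are illustrative choices rather than code from the lecture.

```{code-cell} ipython3
import numpy as np

def solve_lq_cross(A, B, R, Q, H, tol=1e-10, max_iter=10_000):
    """Solve the undiscounted LQ problem with cross products by
    transforming to an equivalent problem with H = 0."""
    Q_inv_H = np.linalg.solve(Q, H)
    A_star = A - B @ Q_inv_H                 # A* = A - B Q^{-1} H
    R_star = R - H.T @ Q_inv_H               # R* = R - H' Q^{-1} H

    # Iterate the Riccati equation of the transformed problem to convergence
    P = np.zeros_like(R, dtype=float)
    for _ in range(max_iter):
        M = Q + B.T @ P @ B
        P_new = (R_star + A_star.T @ P @ A_star
                 - A_star.T @ P @ B @ np.linalg.solve(M, B.T @ P @ A_star))
        if np.max(np.abs(P_new - P)) < tol:
            P = P_new
            break
        P = P_new

    # Policy F* of the transformed problem, then the original policy F
    F_star = np.linalg.solve(Q + B.T @ P @ B, B.T @ P @ A_star)
    F = F_star + Q_inv_H
    return P, F
```

Iterating from $P = 0$ converges for well-behaved (stabilizable and detectable) problems; a production implementation would call a dedicated Riccati solver instead.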
90 | +++
91 |
92 | ## Kalman Filter
93 |
94 | The **duality** that prevails between a linear-quadratic optimal control problem and a Kalman filtering problem means that there is an analogous transformation that allows us to transform a Kalman filtering problem
95 | with a non-zero covariance matrix between shocks to states and shocks to measurements into an equivalent Kalman filtering problem with zero covariance between shocks to states and measurements.
96 |
97 | Let's look at the appropriate transformations.
98 |
99 |
100 | First, let's recall the Kalman filter with covariance between noises to states and measurements.
101 |
102 | The hidden Markov model is
103 |
104 | \begin{align*}
105 | x_{t+1} & = A x_t + B w_{t+1}, \\
106 | z_{t+1} & = D x_t + F w_{t+1},
107 | \end{align*}
108 |
109 | where $A \sim m \times m, B \sim m \times p $ and $D \sim k \times m, F \sim k \times p $,
110 | and $w_{t+1}$ is the time $t+1$ component of a sequence of i.i.d. $p \times 1$ normally distributed
111 | random vectors with mean vector zero and covariance matrix equal to a $p \times p$ identity matrix.
112 |
113 | Thus, $x_t$ is $m \times 1$ and $z_t$ is $k \times 1$.
114 |
115 | The Kalman filtering formulas are
116 |
117 |
118 | $$
119 | \begin{aligned} K(\Sigma_t) & = (A \Sigma_t D' + BF')(D \Sigma_t D' + FF')^{-1}, \\
120 | \Sigma_{t+1} & = A \Sigma_t A' + BB' - (A \Sigma_t D' + BF')(D \Sigma_t D' + FF')^{-1} (D \Sigma_t A' + FB'). \end{aligned}
121 | $$ (eq:Kalman102)
122 |
123 |
124 | Define the transformed matrices
125 |
126 | \begin{align*}
127 | A^* & = A - BF' (FF')^{-1} D, \\
128 | B^* {B^*}' & = BB' - BF' (FF')^{-1} FB'.
129 | \end{align*}
130 |
131 | ### Algorithm
132 |
133 | A consequence of formulas {eq}`eq:Kalman102` is that we can use the following algorithm to solve Kalman filtering problems that involve non-zero covariances between state and measurement noises.
134 |
135 | First, compute $\Sigma, K^*$ using the ordinary Kalman filtering formula with $BF' = 0$, i.e.,
136 | with zero covariance matrix between random shocks to states and random shocks to measurements.
137 |
138 | That is, compute $K^*$ and $\Sigma$ that satisfy
139 |
140 | \begin{align*}
141 | K^* & = (A^* \Sigma D')(D \Sigma D' + FF')^{-1} \\
142 | \Sigma & = A^* \Sigma {A^*}' + B^* {B^*}' - (A^* \Sigma D')(D \Sigma D' + FF')^{-1} (D \Sigma {A^*}').
143 | \end{align*}
144 |
145 | The Kalman gain for the original problem **with non-zero covariance** between shocks to states and measurements is then
146 |
147 | $$
148 | K = K^* + BF' (FF')^{-1}.
149 | $$
150 |
151 | The state reconstruction covariance matrix $\Sigma$ for the original problem equals the state reconstruction covariance matrix for the transformed problem.
152 |
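The following sketch mirrors this algorithm: it forms $A^*$ and $B^* {B^*}'$, iterates the transformed problem's Riccati equation to convergence, and then recovers the gain for the original problem. Again, the function name and the iteration scheme are illustrative assumptions, not code from the lecture.

```{code-cell} ipython3
import numpy as np

def kalman_cross(A, B, D, F, tol=1e-10, max_iter=10_000):
    """Stationary Kalman gain K and covariance Σ when shocks to
    states and measurements are correlated (BF' nonzero)."""
    FF = F @ F.T
    A_star = A - B @ F.T @ np.linalg.solve(FF, D)                # A* = A - BF'(FF')^{-1} D
    BB_star = B @ B.T - B @ F.T @ np.linalg.solve(FF, F @ B.T)   # B* B*'

    # Iterate the Riccati equation of the transformed problem (BF' = 0)
    Σ = np.zeros_like(BB_star, dtype=float)
    for _ in range(max_iter):
        S = D @ Σ @ D.T + FF
        Σ_new = (A_star @ Σ @ A_star.T + BB_star
                 - A_star @ Σ @ D.T @ np.linalg.solve(S, D @ Σ @ A_star.T))
        if np.max(np.abs(Σ_new - Σ)) < tol:
            Σ = Σ_new
            break
        Σ = Σ_new

    # Gain K* of the transformed problem, then the original gain K
    K_star = A_star @ Σ @ D.T @ np.linalg.inv(D @ Σ @ D.T + FF)
    K = K_star + B @ F.T @ np.linalg.inv(FF)
    return K, Σ
```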
153 | +++
154 |
155 | ## Duality table
156 |
157 | Here is a handy table to remember how the Kalman filter and dynamic program are related.
158 |
159 |
160 | | Dynamic Program | Kalman Filter |
161 | | :-------------: | :-----------: |
162 | | $A$ | $A'$ |
163 | | $B$ | $D'$ |
164 | | $H$ | $FB'$ |
165 | | $Q$ | $FF'$ |
166 | | $R$ | $BB'$ |
167 | | $F$ | $K'$ |
168 | | $P$ | $\Sigma$ |
169 |
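As a numerical sanity check on the table, we can feed the dual objects into the two sketch functions defined above: passing $(A', D', BB', FF', FB')$ to the LQ solver should reproduce $\Sigma$ and $K'$ from the filtering problem. The random test matrices below are, of course, arbitrary.

```{code-cell} ipython3
np.random.seed(42)
m, k, p = 3, 2, 3
A = 0.5 * np.random.randn(m, m)            # scaled so A is (generically) stable
B = np.random.randn(m, p)
D = np.random.randn(k, m)
F = np.random.randn(k, p) + np.eye(k, p)   # keep FF' well conditioned

K, Σ = kalman_cross(A, B, D, F)
P, F_dp = solve_lq_cross(A.T, D.T, B @ B.T, F @ F.T, F @ B.T)

print(np.allclose(P, Σ), np.allclose(F_dp, K.T))
```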
170 | +++
171 |
172 |
176 |
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/odu/odu.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | from scipy.stats import beta
3 | from scipy.interpolate import LinearNDInterpolator
4 | from scipy.integrate import fixed_quad
5 | from numpy import maximum as npmax
6 | class SearchProblem:
7 | """
8 | A class to store a given parameterization of the "offer distribution
9 | unknown" model.
10 |
11 | Parameters
12 | ----------
13 | β : scalar(float), optional(default=0.95)
14 | The discount parameter
15 | c : scalar(float), optional(default=0.6)
16 | The unemployment compensation
17 | F_a : scalar(float), optional(default=1)
18 | First parameter of β distribution on F
19 | F_b : scalar(float), optional(default=1)
20 | Second parameter of β distribution on F
21 | G_a : scalar(float), optional(default=3)
22 | First parameter of β distribution on G
23 | G_b : scalar(float), optional(default=1.2)
24 | Second parameter of β distribution on G
25 | w_max : scalar(float), optional(default=2)
26 | Maximum wage possible
27 | w_grid_size : scalar(int), optional(default=40)
28 | Size of the grid on wages
29 | π_grid_size : scalar(int), optional(default=40)
30 | Size of the grid on probabilities
31 |
32 | Attributes
33 | ----------
34 | β, c, w_max : see Parameters
35 | w_grid : np.ndarray
36 | Grid points over wages, ndim=1
37 | π_grid : np.ndarray
38 | Grid points over π, ndim=1
39 | grid_points : np.ndarray
40 | Combined grid points, ndim=2
41 | F : scipy.stats._distn_infrastructure.rv_frozen
42 | Beta distribution with params (F_a, F_b), scaled by w_max
43 | G : scipy.stats._distn_infrastructure.rv_frozen
44 | Beta distribution with params (G_a, G_b), scaled by w_max
45 | f : function
46 | Density of F
47 | g : function
48 | Density of G
49 | π_min : scalar(float)
50 | Minimum of grid over π
51 | π_max : scalar(float)
52 | Maximum of grid over π
53 | """
54 |
55 | def __init__(self, β=0.95, c=0.6, F_a=1, F_b=1, G_a=3, G_b=1.2,
56 | w_max=2, w_grid_size=40, π_grid_size=40):
57 |
58 | self.β, self.c, self.w_max = β, c, w_max
59 | self.F = beta(F_a, F_b, scale=w_max)
60 | self.G = beta(G_a, G_b, scale=w_max)
61 | self.f, self.g = self.F.pdf, self.G.pdf # Density functions
62 | self.π_min, self.π_max = 1e-3, 1 - 1e-3 # Avoids instability
63 | self.w_grid = np.linspace(0, w_max, w_grid_size)
64 | self.π_grid = np.linspace(self.π_min, self.π_max, π_grid_size)
65 | x, y = np.meshgrid(self.w_grid, self.π_grid)
66 | self.grid_points = np.column_stack((x.ravel(order='F'), y.ravel(order='F')))
67 |
68 |
69 | def q(self, w, π):
70 | """
71 | Updates π using Bayes' rule and the current wage observation w.
72 |
73 | Returns
74 | -------
75 |
76 | new_π : scalar(float)
77 | The updated probability
78 |
79 | """
80 |
81 | new_π = 1.0 / (1 + ((1 - π) * self.g(w)) / (π * self.f(w)))
82 |
83 | # Return new_π when in [π_min, π_max] and else end points
84 | new_π = np.maximum(np.minimum(new_π, self.π_max), self.π_min)
85 |
86 | return new_π
87 |
88 | def bellman_operator(self, v):
89 | """
90 |
91 | The Bellman operator. Including for comparison. Value function
92 | iteration is not recommended for this problem. See the
93 | reservation wage operator below.
94 |
95 | Parameters
96 | ----------
97 | v : array_like(float, ndim=1, length=len(π_grid))
98 | An approximate value function represented as a
99 | one-dimensional array.
100 |
101 | Returns
102 | -------
103 | new_v : array_like(float, ndim=1, length=len(π_grid))
104 | The updated value function
105 |
106 | """
107 | # == Simplify names == #
108 | f, g, β, c, q = self.f, self.g, self.β, self.c, self.q
109 |
110 | vf = LinearNDInterpolator(self.grid_points, v)
111 | N = len(v)
112 | new_v = np.empty(N)
113 |
114 | for i in range(N):
115 | w, π = self.grid_points[i, :]
116 | v1 = w / (1 - β)
117 | integrand = lambda m: vf(m, q(m, π)) * (π * f(m) +
118 | (1 - π) * g(m))
119 | integral, error = fixed_quad(integrand, 0, self.w_max)
120 | v2 = c + β * integral
121 | new_v[i] = max(v1, v2)
122 |
123 | return new_v
124 |
125 | def get_greedy(self, v):
126 | """
127 | Compute optimal actions taking v as the value function.
128 |
129 | Parameters
130 | ----------
131 | v : array_like(float, ndim=1, length=len(π_grid))
132 | An approximate value function represented as a
133 | one-dimensional array.
134 |
135 | Returns
136 | -------
137 | policy : array_like(float, ndim=1, length=len(π_grid))
138 | The decision to accept or reject an offer where 1 indicates
139 | accept and 0 indicates reject
140 |
141 | """
142 | # == Simplify names == #
143 | f, g, β, c, q = self.f, self.g, self.β, self.c, self.q
144 |
145 | vf = LinearNDInterpolator(self.grid_points, v)
146 | N = len(v)
147 | policy = np.zeros(N, dtype=int)
148 |
149 | for i in range(N):
150 | w, π = self.grid_points[i, :]
151 | v1 = w / (1 - β)
152 | integrand = lambda m: vf(m, q(m, π)) * (π * f(m) +
153 | (1 - π) * g(m))
154 | integral, error = fixed_quad(integrand, 0, self.w_max)
155 | v2 = c + β * integral
156 | policy[i] = v1 > v2 # Evaluates to 1 or 0
157 |
158 | return policy
159 |
160 | def res_wage_operator(self, ϕ):
161 | """
162 |
163 | Updates the reservation wage function guess ϕ via the operator
164 | Q.
165 |
166 | Parameters
167 | ----------
168 | ϕ : array_like(float, ndim=1, length=len(π_grid))
169 | This is reservation wage guess
170 |
171 | Returns
172 | -------
173 | new_ϕ : array_like(float, ndim=1, length=len(π_grid))
174 | The updated reservation wage guess.
175 |
176 | """
177 | # == Simplify names == #
178 | β, c, f, g, q = self.β, self.c, self.f, self.g, self.q
179 | # == Turn ϕ into a function == #
180 | ϕ_f = lambda p: np.interp(p, self.π_grid, ϕ)
181 |
182 | new_ϕ = np.empty(len(ϕ))
183 | for i, π in enumerate(self.π_grid):
184 | def integrand(x):
185 | "Integral expression on right-hand side of operator"
186 | return npmax(x, ϕ_f(q(x, π))) * (π * f(x) + (1 - π) * g(x))
187 | integral, error = fixed_quad(integrand, 0, self.w_max)
188 | new_ϕ[i] = (1 - β) * c + β * integral
189 |
190 | return new_ϕ
191 |
--------------------------------------------------------------------------------
/.github/copilot-instructions.md:
--------------------------------------------------------------------------------
1 | # Intermediate Quantitative Economics with Python - Lecture Materials
2 |
3 | **Always reference these instructions first and fall back to search or bash commands only when you encounter unexpected information that does not match the info here.**
4 |
5 | ## Working Effectively
6 |
7 | ### Bootstrap and Build the Repository
8 | Execute these commands in order to set up a complete working environment:
9 |
10 | **NEVER CANCEL: Each step has specific timing expectations - wait for completion.**
11 |
12 | ```bash
13 | # 1. Set up conda environment (takes ~3 minutes)
14 | conda env create -f environment.yml
15 | # NEVER CANCEL: Wait 3-5 minutes for completion
16 |
17 | # 2. Activate environment (required for all subsequent commands)
18 | source /usr/share/miniconda/etc/profile.d/conda.sh
19 | conda activate quantecon
20 |
21 | # 3. Install PyTorch with CUDA support (takes ~3 minutes)
22 | pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu128
23 | # NEVER CANCEL: Wait 3-5 minutes for completion
24 |
25 | # 4. Install Pyro (takes ~10 seconds)
26 | pip install pyro-ppl
27 |
28 | # 5. Install JAX with CUDA support (takes ~30 seconds)
29 | pip install --upgrade "jax[cuda12-local]==0.6.2"
30 |
31 | # 6. Install NumPyro (takes ~5 seconds)
32 | pip install numpyro
33 |
34 | # 7. Test JAX installation (takes ~5 seconds)
35 | python scripts/test-jax-install.py
36 | ```
37 |
38 | ### Build Commands
39 |
40 | **CRITICAL TIMING WARNING: NEVER CANCEL BUILD COMMANDS - They take 45+ minutes to complete.**
41 |
42 | ```bash
43 | # HTML build (primary build target) - takes 45-60 minutes
44 | jb build lectures --path-output ./ -W --keep-going
45 | # NEVER CANCEL: Set timeout to 90+ minutes. Executes 80+ notebooks sequentially.
46 |
47 | # PDF build via LaTeX - takes 30-45 minutes
48 | jb build lectures --builder pdflatex --path-output ./ -n -W --keep-going
49 | # NEVER CANCEL: Set timeout to 75+ minutes.
50 |
51 | # Jupyter notebook build - takes 30-45 minutes
52 | jb build lectures --path-output ./ --builder=custom --custom-builder=jupyter -n -W --keep-going
53 | # NEVER CANCEL: Set timeout to 75+ minutes.
54 | ```
55 |
56 | ### Environment and Dependency Details
57 |
58 | - **Python**: 3.12 with Anaconda 2024.10
59 | - **Primary Framework**: Jupyter Book 1.0.3 with MyST markdown
60 | - **Scientific Computing**: JAX 0.6.2, PyTorch (nightly), NumPyro, SciPy
61 | - **Content Format**: MyST markdown files in `/lectures/` directory
62 | - **Build System**: Sphinx-based via Jupyter Book
63 | - **Execution**: Notebooks are executed during build and cached
64 |
65 | ### Validation Scenarios
66 |
67 | **Always run these validation steps after making changes:**
68 |
69 | 1. **Environment Test**: `python scripts/test-jax-install.py` - verifies JAX/scientific stack
70 | 2. **Build Test**: Start HTML build and verify first few notebooks execute successfully
71 | 3. **Content Verification**: Check `_build/html/` directory contains expected output files
72 | 4. **Notebook Validation**: Verify generated notebooks in `_build/jupyter/` are executable
73 |
74 | ### Known Limitations and Workarounds
75 |
76 | - **Network Access**: Intersphinx inventory warnings for external sites (intro.quantecon.org, python-advanced.quantecon.org) are expected in sandboxed environments - build continues normally
77 | - **GPU Support**: JAX runs in CPU mode in most environments - this is expected and functional
78 | - **Build Cache**: Uses `_build/.jupyter_cache` to avoid re-executing unchanged notebooks
79 | - **Memory Usage**: Large notebooks may require substantial RAM during execution phase
80 |
81 | ## Project Architecture
82 |
83 | ### Key Directories
84 | ```
85 | lectures/ # MyST markdown lecture files (80+ files)
86 | ├── _config.yml # Jupyter Book configuration
87 | ├── _toc.yml # Table of contents structure
88 | ├── _static/ # Static assets (images, CSS, etc.)
89 | └── *.md # Individual lecture files
90 |
91 | _build/ # Build outputs (created during build)
92 | ├── html/ # HTML website output
93 | ├── latex/ # PDF build intermediate files
94 | ├── jupyter/ # Generated notebook files
95 | └── .jupyter_cache/ # Execution cache
96 |
97 | scripts/ # Utility scripts
98 | └── test-jax-install.py # JAX installation validator
99 |
100 | .github/ # CI/CD workflows
101 | └── workflows/ # GitHub Actions definitions
102 | ```
103 |
104 | ### Content Structure
105 | - **80+ lecture files** covering intermediate quantitative economics
106 | - **MyST markdown format** with embedded Python code blocks
107 | - **Executable notebooks** - code is run during build process
108 | - **Multiple output formats**: HTML website, PDF via LaTeX, downloadable notebooks
109 |
110 | ### Build Targets
111 | 1. **HTML**: Main website at `_build/html/` - primary deliverable
112 | 2. **PDF**: Single PDF document via LaTeX at `_build/latex/`
113 | 3. **Notebooks**: Individual .ipynb files at `_build/jupyter/`
114 |
115 | ## Development Workflow
116 |
117 | ### Making Changes
118 | 1. **Always activate environment first**: `conda activate quantecon`
119 | 2. **Edit lecture files**: Modify `.md` files in `/lectures/` directory
120 | 3. **Test changes**: Run quick build test on subset if possible
121 | 4. **Full validation**: Complete HTML build to verify all notebooks execute
122 | 5. **Check outputs**: Verify `_build/html/` contains expected results
123 |
124 | ### Common Tasks
125 |
126 | **View repository structure:**
127 | ```bash
128 | ls -la /home/runner/work/lecture-python.myst/lecture-python.myst/
129 | # Output: .git .github .gitignore README.md _notebook_repo environment.yml lectures scripts
130 | ```
131 |
132 | **Check lecture content:**
133 | ```bash
134 | ls lectures/ | head -10
135 | # Shows: intro.md, various economics topic files (.md format)
136 | ```
137 |
138 | **Monitor build progress:**
139 | - Build shows progress as "reading sources... [X%] filename"
140 | - Each notebook execution time varies: 5-120 seconds per file
141 | - Total build time: 45-60 minutes for full HTML build
142 |
143 | **Environment verification:**
144 | ```bash
145 | conda list | grep -E "(jax|torch|jupyter-book)"
146 | # Should show: jax 0.6.2, torch 2.9.0.dev, jupyter-book 1.0.3
147 | ```
148 |
149 | ## Troubleshooting
150 |
151 | ### Common Issues
152 | - **"cuInit(0) failed"**: Expected JAX warning in CPU-only environments - build continues normally
153 | - **Intersphinx warnings**: Network inventory fetch failures are expected - build continues normally
154 | - **Debugger warnings**: "frozen modules" warnings during notebook execution are normal
155 | - **Long execution times**: Some notebooks (like ar1_bayes.md) take 60+ seconds - this is normal
156 |
157 | ### Performance Notes
158 | - **First build**: Takes longest due to fresh notebook execution
159 | - **Subsequent builds**: Faster due to caching system
160 | - **Cache location**: `_build/.jupyter_cache` stores execution results
161 | - **Cache invalidation**: Changes to notebook content trigger re-execution
162 |
163 | ## CI/CD Integration
164 |
165 | The repository uses GitHub Actions with:
166 | - **Cache workflow**: Weekly rebuild of execution cache
167 | - **CI workflow**: Pull request validation builds
168 | - **Publish workflow**: Production deployment on tags
169 |
170 | **Local builds should match CI behavior** - use same commands and expect similar timing.
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/wald_friedman/wald_class.py:
--------------------------------------------------------------------------------
1 | import numpy as np, scipy.interpolate as interp, quantecon as qe
2 | 
3 | class WaldFriedman:
4 | """Solve and simulate Wald's sequential decision problem of choosing between distributions f0 and f1."""
5 | def __init__(self, c, L0, L1, f0, f1, m=25):
6 | self.c = c
7 | self.L0, self.L1 = L0, L1
8 | self.m = m
9 | self.pgrid = np.linspace(0.0, 1.0, m)
10 |
11 | # Renormalize distributions so nothing is "too" small
12 | f0 = np.clip(f0, 1e-8, 1-1e-8)
13 | f1 = np.clip(f1, 1e-8, 1-1e-8)
14 | self.f0 = f0 / np.sum(f0)
15 | self.f1 = f1 / np.sum(f1)
16 | self.J = np.zeros(m)
17 |
18 | def current_distribution(self, p):
19 | """
20 | This function takes a value for the probability with which
21 | the correct model is model 0 and returns the mixed
22 | distribution that corresponds with that belief.
23 | """
24 | return p*self.f0 + (1-p)*self.f1
25 |
26 | def bayes_update_k(self, p, k):
27 | """
28 | This function takes a value for p, and a realization of the
29 | random variable and calculates the value for p tomorrow.
30 | """
31 | f0_k = self.f0[k]
32 | f1_k = self.f1[k]
33 |
34 | p_tp1 = p * f0_k / (p * f0_k + (1 - p) * f1_k)
35 |
36 | return np.clip(p_tp1, 0, 1)
37 |
38 | def bayes_update_all(self, p):
39 | """
40 | This is similar to `bayes_update_k` except it returns a
41 | new value for p for each realization of the random variable
42 | """
43 | return np.clip(p * self.f0 / (p * self.f0 + (1 - p) * self.f1), 0, 1)
44 |
45 | def payoff_choose_f0(self, p):
46 | "For a given probability specify the cost of accepting model 0"
47 | return (1 - p) * self.L0
48 |
49 | def payoff_choose_f1(self, p):
50 | "For a given probability specify the cost of accepting model 1"
51 | return p * self.L1
52 |
53 | def EJ(self, p, J):
54 | """
55 | This function evaluates the expectation of the value function
56 | at period t+1. It does so by taking the current probability
57 | distribution over outcomes:
58 |
59 | p(z_{k+1}) = p_k f_0(z_{k+1}) + (1-p_k) f_1(z_{k+1})
60 |
61 | and evaluating the value function at the possible states
62 | tomorrow J(p_{t+1}) where
63 |
64 | p_{t+1} = p f0 / ( p f0 + (1-p) f1)
65 |
66 | Parameters
67 | ----------
68 | p : Scalar(Float64)
69 | The current believed probability that model 0 is the true
70 | model.
71 | J : Function
72 | The current value function for a decision to continue
73 |
74 | Returns
75 | -------
76 | EJ : Scalar(Float64)
77 | The expected value of the value function tomorrow
78 | """
79 | # Pull out information
80 | f0, f1 = self.f0, self.f1
81 |
82 | # Get the current believed distribution and tomorrow's possible distributions
83 | # Need to clip to make sure things don't blow up (go to infinity)
84 | curr_dist = self.current_distribution(p)
85 | tp1_dist = self.bayes_update_all(p)
86 |
87 | # Evaluate the expectation
88 | EJ = curr_dist @ J(tp1_dist)
89 |
90 | return EJ
91 |
92 | def payoff_continue(self, p, J):
93 | """
94 | For a given probability distribution and value function give
95 | cost of continuing the search for correct model
96 | """
97 | return self.c + self.EJ(p, J)
98 |
99 | def bellman_operator(self, J):
100 | """
101 | Evaluates the value function for a given continuation value
102 | function; that is, evaluates
103 |
104 | J(p) = min((1 - p) L0, p L1, c + E[J(p')])
105 |
106 | Uses linear interpolation between points
107 | """
108 | payoff_choose_f0 = self.payoff_choose_f0
109 | payoff_choose_f1 = self.payoff_choose_f1
110 | payoff_continue = self.payoff_continue
111 | c, L0, L1, f0, f1 = self.c, self.L0, self.L1, self.f0, self.f1
112 | m, pgrid = self.m, self.pgrid
113 |
114 | J_out = np.empty(m)
115 | J_interp = interp.UnivariateSpline(pgrid, J, k=1, ext=0)
116 |
117 | for (p_ind, p) in enumerate(pgrid):
118 | # Payoff of choosing model 0
119 | p_c_0 = payoff_choose_f0(p)
120 | p_c_1 = payoff_choose_f1(p)
121 | p_con = payoff_continue(p, J_interp)
122 |
123 | J_out[p_ind] = min(p_c_0, p_c_1, p_con)
124 |
125 | return J_out
126 |
127 | def solve_model(self):
128 | J = qe.compute_fixed_point(self.bellman_operator, np.zeros(self.m),
129 | error_tol=1e-7, verbose=False)
130 |
131 | self.J = J
132 | return J
133 |
134 | def find_cutoff_rule(self, J):
135 | """
136 | This function takes a value function and returns the corresponding
137 | cutoffs of where you transition between continue and choosing a
138 | specific model
139 | """
140 | payoff_choose_f0 = self.payoff_choose_f0
141 | payoff_choose_f1 = self.payoff_choose_f1
142 | m, pgrid = self.m, self.pgrid
143 |
144 | # Evaluate cost at all points on grid for choosing a model
145 | p_c_0 = payoff_choose_f0(pgrid)
146 | p_c_1 = payoff_choose_f1(pgrid)
147 |
148 | # The cutoff points can be found by differencing these costs with
149 | # the Bellman equation (J is always less than or equal to p_c_i)
150 | lb = pgrid[np.searchsorted(p_c_1 - J, 1e-10) - 1]
151 | ub = pgrid[np.searchsorted(J - p_c_0, -1e-10)]
152 |
153 | return (lb, ub)
154 |
155 | def simulate(self, f, p0=0.5):
156 | """
157 | This function takes an initial condition and simulates until it
158 | stops (when a decision is made).
159 | """
160 | # Check whether vf is computed
161 | if np.sum(self.J) < 1e-8:
162 | self.solve_model()
163 |
164 | # Unpack useful info
165 | lb, ub = self.find_cutoff_rule(self.J)
166 | update_p = self.bayes_update_k
167 | curr_dist = self.current_distribution
168 |
169 | # Initialize a couple useful variables
170 | decision_made = False
171 | p = p0
172 | t = 0
173 |
174 | while not decision_made:
175 | # Draw a realization from f, the true data generating process
176 | k = int(qe.random.draw(np.cumsum(f)))
177 | t += 1
179 | p = update_p(p, k)
180 | if p < lb:
181 | decision_made = True
182 | decision = 1
183 | elif p > ub:
184 | decision_made = True
185 | decision = 0
186 |
187 | return decision, p, t
188 |
189 | def simulate_tdgp_f0(self, p0=0.5):
190 | """
191 | Uses the distribution f0 as the true data generating
192 | process
193 | """
194 | decision, p, t = self.simulate(self.f0, p0)
195 |
196 | if decision == 0:
197 | correct = True
198 | else:
199 | correct = False
200 |
201 | return correct, p, t
202 |
203 | def simulate_tdgp_f1(self, p0=0.5):
204 | """
205 | Uses the distribution f1 as the true data generating
206 | process
207 | """
208 | decision, p, t = self.simulate(self.f1, p0)
209 |
210 | if decision == 1:
211 | correct = True
212 | else:
213 | correct = False
214 |
215 | return correct, p, t
216 |
217 | def stopping_dist(self, ndraws=250, tdgp="f0"):
218 | """
219 | Simulates repeatedly to get distributions of time needed to make a
220 | decision and how often they are correct.
221 | """
222 | if tdgp == "f0":
223 | simfunc = self.simulate_tdgp_f0
224 | else:
225 | simfunc = self.simulate_tdgp_f1
226 |
227 | # Allocate space
228 | tdist = np.empty(ndraws, int)
229 | cdist = np.empty(ndraws, bool)
230 |
231 | for i in range(ndraws):
232 | correct, p, t = simfunc()
233 | tdist[i] = t
234 | cdist[i] = correct
235 |
236 | return cdist, tdist
237 |
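# Example usage (a sketch): assuming `model` is an instance of the class
# defined above, with its parameters c, L0, L1, f0, f1 set earlier in the
# file, the stopping-time distribution can be simulated as follows:
#
#     cdist, tdist = model.stopping_dist(ndraws=500, tdgp="f0")
#     print("fraction correct:", cdist.mean())
#     print("average stopping time:", tdist.mean())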
--------------------------------------------------------------------------------
/lectures/os_egm.md:
--------------------------------------------------------------------------------
1 | ---
2 | jupytext:
3 | text_representation:
4 | extension: .md
5 | format_name: myst
6 | kernelspec:
7 | display_name: Python 3
8 | language: python
9 | name: python3
10 | ---
11 |
12 | ```{raw} jupyter
13 |
18 | ```
19 |
20 | # {index}`Optimal Savings V: The Endogenous Grid Method `
21 |
22 | ```{contents} Contents
23 | :depth: 2
24 | ```
25 |
26 |
27 | ## Overview
28 |
29 | Previously, we solved the optimal savings problem using
30 |
31 | 1. {doc}`value function iteration `
32 | 1. {doc}`Euler equation based time iteration `
33 |
34 | We found time iteration to be significantly more accurate and efficient.
35 |
36 | In this lecture, we'll look at a clever twist on time iteration called the **endogenous grid method** (EGM).
37 |
38 | EGM is a numerical method for implementing policy iteration invented by [Chris Carroll](https://econ.jhu.edu/directory/christopher-carroll/).
39 |
40 | The original reference is {cite}`Carroll2006`.
41 |
42 | For now we will focus on a clean and simple implementation of EGM that stays
43 | close to the underlying mathematics.
44 |
45 | Then, in {doc}`os_egm_jax`, we will construct a fully vectorized and parallelized version of EGM based on JAX.
46 |
47 | Let's start with some standard imports:
48 |
49 | ```{code-cell} python3
50 | import matplotlib.pyplot as plt
51 | import numpy as np
52 | import quantecon as qe
53 | ```
54 |
55 | ## Key Idea
56 |
57 | First we remind ourselves of the theory and then we turn to numerical methods.
58 |
59 | ### Theory
60 |
61 | We work with the model set out in {doc}`os_time_iter`, following the same terminology and notation.
62 |
63 | As we saw, the Coleman-Reffett operator is a nonlinear operator $K$ engineered so that the optimal policy
64 | $\sigma^*$ is a fixed point of $K$.
65 |
66 | It takes as its argument a continuous strictly increasing consumption policy $\sigma \in \Sigma$.
67 |
68 | It returns a new function $K \sigma$, where $(K \sigma)(x)$ is the $c \in (0, \infty)$ that solves
69 |
70 | ```{math}
71 | :label: egm_coledef
72 |
73 | u'(c)
74 | = \beta \int (u' \circ \sigma) (f(x - c) z ) f'(x - c) z \phi(dz)
75 | ```
76 |
77 | ### Exogenous Grid
78 |
79 | As discussed in {doc}`os_time_iter`, to implement the method on a
80 | computer, we represent a policy function by a set of values on a finite grid.
81 |
82 | The function itself is reconstructed from this representation when necessary,
83 | using interpolation or some other method.
84 |
85 | Our previous strategy in {doc}`os_time_iter` for obtaining a finite representation of an updated consumption policy was to
86 |
87 | * fix a grid of income points $\{x_i\}$
88 | * calculate the consumption value $c_i$ corresponding to each $x_i$ using
89 | {eq}`egm_coledef` and a root-finding routine
90 |
91 | Each $c_i$ is then interpreted as the value of the function $K \sigma$ at $x_i$.
92 |
93 | Thus, with the pairs $\{(x_i, c_i)\}$ in hand, we can reconstruct $K \sigma$ via approximation.
94 |
95 | Iteration then continues...
96 |
97 |
98 | ### Endogenous Grid
99 |
100 | The method discussed above requires a root-finding routine to find the
101 | $c_i$ corresponding to a given income value $x_i$.
102 |
103 | Root-finding is costly because it typically involves a significant number of
104 | function evaluations.
105 |
106 | As pointed out by Carroll {cite}`Carroll2006`, we can avoid this step if
107 | $x_i$ is chosen endogenously.
108 |
109 | The only assumption required is that $u'$ is invertible on $(0, \infty)$.
110 |
111 | Let $(u')^{-1}$ be the inverse function of $u'$.
112 |
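For example, with the CRRA specification $u(c) = c^{1-\gamma}/(1-\gamma)$ we have $u'(c) = c^{-\gamma}$, so that $(u')^{-1}(x) = x^{-1/\gamma}$.
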
113 | The idea is this:
114 |
115 | * First, we fix an *exogenous* grid $\{s_i\}$ for savings ($s = x - c$).
116 | * Then we obtain $c_i$ via
117 |
118 | ```{math}
119 | :label: egm_getc
120 |
121 | c_i =
122 | (u')^{-1}
123 | \left\{
124 | \beta \int (u' \circ \sigma) (f(s_i) z ) \, f'(s_i) \, z \, \phi(dz)
125 | \right\}
126 | ```
127 |
128 | * Finally, for each $c_i$ we set $x_i = c_i + s_i$.
129 |
130 | Importantly, each $(x_i, c_i)$ pair constructed in this manner satisfies {eq}`egm_coledef`.
131 |
132 | With the pairs $\{(x_i, c_i)\}$ in hand, we can reconstruct $K \sigma$ via approximation as before.
133 |
134 | The name EGM comes from the fact that the grid $\{x_i\}$ is determined **endogenously**.
135 |
136 |
137 | ## Implementation
138 |
139 | As in {doc}`os_time_iter`, we will start with a simple setting where
140 |
141 | * $u(c) = \ln c$,
142 | * the function $f$ has a Cobb-Douglas specification, and
143 | * the shocks are lognormal.
144 |
145 | This will allow us to make comparisons with the analytical solutions.
146 |
147 | ```{code-cell} python3
148 | def v_star(x, α, β, μ):
149 | """
150 | True value function
151 | """
152 | c1 = np.log(1 - α * β) / (1 - β)
153 | c2 = (μ + α * np.log(α * β)) / (1 - α)
154 | c3 = 1 / (1 - β)
155 | c4 = 1 / (1 - α * β)
156 | return c1 + c2 * (c3 - c4) + c4 * np.log(x)
157 |
158 | def σ_star(x, α, β):
159 | """
160 | True optimal policy
161 | """
162 | return (1 - α * β) * x
163 | ```
164 |
165 | We reuse the `Model` structure from {doc}`os_time_iter`.
166 |
167 | ```{code-cell} python3
168 | from typing import NamedTuple, Callable
169 |
170 | class Model(NamedTuple):
171 | u: Callable # utility function
172 | f: Callable # production function
173 | β: float # discount factor
174 | μ: float # shock location parameter
175 | ν: float # shock scale parameter
176 | s_grid: np.ndarray # exogenous savings grid
177 | shocks: np.ndarray # shock draws
178 | α: float # production function parameter
179 | u_prime: Callable # derivative of utility
180 | f_prime: Callable # derivative of production
181 | u_prime_inv: Callable # inverse of u_prime
182 |
183 |
184 | def create_model(
185 | u: Callable,
186 | f: Callable,
187 | β: float = 0.96,
188 | μ: float = 0.0,
189 | ν: float = 0.1,
190 | grid_max: float = 4.0,
191 | grid_size: int = 120,
192 | shock_size: int = 250,
193 | seed: int = 1234,
194 | α: float = 0.4,
195 | u_prime: Callable = None,
196 | f_prime: Callable = None,
197 | u_prime_inv: Callable = None
198 | ) -> Model:
199 | """
200 | Creates an instance of the optimal savings model.
201 | """
202 | # Set up exogenous savings grid
203 | s_grid = np.linspace(1e-4, grid_max, grid_size)
204 |
205 | # Store shocks (with a seed, so results are reproducible)
206 | np.random.seed(seed)
207 | shocks = np.exp(μ + ν * np.random.randn(shock_size))
208 |
209 | return Model(
210 | u, f, β, μ, ν, s_grid, shocks, α, u_prime, f_prime, u_prime_inv
211 | )
212 | ```
213 |
214 | ### The Operator
215 |
216 | Here's an implementation of $K$ using EGM as described above.
217 |
218 | ```{code-cell} python3
219 | def K(
220 | c_in: np.ndarray, # Consumption values on the endogenous grid
221 | x_in: np.ndarray, # Current endogenous grid
222 | model: Model # Model specification
223 | ):
224 | """
225 | An implementation of the Coleman-Reffett operator using EGM.
226 |
227 | """
228 |
229 | # Simplify names
230 | u, f, β, μ, ν, s_grid, shocks, α, u_prime, f_prime, u_prime_inv = model
231 |
232 | # Linear interpolation of policy on the endogenous grid
233 | σ = lambda x: np.interp(x, x_in, c_in)
234 |
235 | # Allocate memory for new consumption array
236 | c_out = np.empty_like(s_grid)
237 |
238 | for i, s in enumerate(s_grid):
239 | # Approximate marginal utility ∫ u'(σ(f(s, α)z)) f'(s, α) z ϕ(z)dz
240 | vals = u_prime(σ(f(s, α) * shocks)) * f_prime(s, α) * shocks
241 | mu = np.mean(vals)
242 | # Compute consumption
243 | c_out[i] = u_prime_inv(β * mu)
244 |
245 | # Determine corresponding endogenous grid
246 | x_out = s_grid + c_out # x_i = s_i + c_i
247 |
248 | return c_out, x_out
249 | ```
250 |
251 | Note the lack of any root-finding algorithm.
252 |
253 | ```{note}
254 | The routine is still not particularly fast because we are using pure Python loops.
255 |
256 | But in the next lecture ({doc}`os_egm_jax`) we will use a fully vectorized and efficient solution.
257 | ```
258 |
259 | ### Testing
260 |
261 | First we create an instance.
262 |
263 | ```{code-cell} python3
264 | # Define utility and production functions with derivatives
265 | u = lambda c: np.log(c)
266 | u_prime = lambda c: 1 / c
267 | u_prime_inv = lambda x: 1 / x
268 | f = lambda k, α: k**α
269 | f_prime = lambda k, α: α * k**(α - 1)
270 |
271 | model = create_model(u=u, f=f, u_prime=u_prime,
272 | f_prime=f_prime, u_prime_inv=u_prime_inv)
273 | s_grid = model.s_grid
274 | ```
275 |
276 | Here's our solver routine:
277 |
278 | ```{code-cell} python3
279 | def solve_model_time_iter(
280 | model: Model, # Model details
281 | c_init: np.ndarray, # initial guess of consumption on EG
282 | x_init: np.ndarray, # initial guess of endogenous grid
283 | tol: float = 1e-5, # Error tolerance
284 | max_iter: int = 1000, # Max number of iterations of K
285 | verbose: bool = True # If true print output
286 | ):
287 | """
288 | Solve the model using time iteration with EGM.
289 | """
290 | c, x = c_init, x_init
291 | error = tol + 1
292 | i = 0
293 |
294 | while error > tol and i < max_iter:
295 | c_new, x_new = K(c, x, model)
296 | error = np.max(np.abs(c_new - c))
297 | c, x = c_new, x_new
298 | i += 1
299 | if verbose:
300 | print(f"Iteration {i}, error = {error}")
301 |
302 | if i == max_iter:
303 | print("Warning: maximum iterations reached")
304 |
305 | return c, x
306 | ```
307 |
308 | Let's call it:
309 |
310 | ```{code-cell} python3
311 | c_init = np.copy(s_grid)
312 | x_init = s_grid + c_init
313 | c, x = solve_model_time_iter(model, c_init, x_init)
314 | ```
315 |
316 | Here is a plot of the resulting policy, compared with the true policy:
317 |
318 | ```{code-cell} python3
319 | fig, ax = plt.subplots()
320 |
321 | ax.plot(x, c, lw=2,
322 | alpha=0.8, label='approximate policy function')
323 |
324 | ax.plot(x, σ_star(x, model.α, model.β), 'k--',
325 | lw=2, alpha=0.8, label='true policy function')
326 |
327 | ax.legend()
328 | plt.show()
329 | ```
330 |
331 | The maximal absolute deviation between the two policies is
332 |
333 | ```{code-cell} python3
334 | np.max(np.abs(c - σ_star(x, model.α, model.β)))
335 | ```
336 |
337 | Here's the execution time:
338 |
339 | ```{code-cell} python3
340 | with qe.Timer():
341 | c, x = solve_model_time_iter(model, c_init, x_init, verbose=False)
342 | ```
343 |
344 | EGM is faster than time iteration because it avoids numerical root-finding.
345 |
346 | Instead, we invert the marginal utility function directly, which is much more efficient.
347 |
348 | In {doc}`os_egm_jax`, we will use a fully vectorized
349 | and efficient version of EGM that is also parallelized using JAX.
350 |
351 | This provides an extremely fast way to solve the optimal consumption problem we
352 | have been studying for the last few lectures.
353 |
--------------------------------------------------------------------------------
/lectures/sir_model.md:
--------------------------------------------------------------------------------
1 | ---
2 | jupytext:
3 | text_representation:
4 | extension: .md
5 | format_name: myst
6 | kernelspec:
7 | display_name: Python 3
8 | language: python
9 | name: python3
10 | ---
11 |
12 | ```{raw} jupyter
13 |
18 | ```
19 |
20 | # {index}`Modeling COVID 19 `
21 |
22 | ```{contents} Contents
23 | :depth: 2
24 | ```
25 |
26 | ## Overview
27 |
28 | This is a Python version of the code for analyzing the COVID-19 pandemic
29 | provided by [Andrew Atkeson](https://sites.google.com/site/andyatkeson/).
30 |
31 | See, in particular
32 |
33 | * [NBER Working Paper No. 26867](https://www.nber.org/papers/w26867)
34 | * [COVID-19 Working papers and code](https://sites.google.com/site/andyatkeson/home?authuser=0)
35 |
36 | The purpose of his notes is to introduce economists to quantitative modeling
37 | of infectious disease dynamics.
38 |
39 | Dynamics are modeled using a standard SIR (Susceptible-Infected-Removed) model
40 | of disease spread.
41 |
42 | The model dynamics are represented by a system of ordinary differential
43 | equations.
44 |
45 | The main objective is to study the impact of suppression through social
46 | distancing on the spread of the infection.
47 |
48 | The focus is on US outcomes but the parameters can be adjusted to study
49 | other countries.
50 |
51 | We will use the following standard imports:
52 |
53 | ```{code-cell} ipython3
54 | import matplotlib.pyplot as plt
55 | import numpy as np
56 | from numpy import exp
57 | ```
58 |
59 | We will also use SciPy's numerical routine odeint for solving differential
60 | equations.
61 |
62 | ```{code-cell} ipython3
63 | from scipy.integrate import odeint
64 | ```
65 |
66 | This routine calls into compiled code from the Fortran library ODEPACK.
67 |
68 | ## The SIR Model
69 |
70 | In the version of the SIR model we will analyze there are four states (this extension is sometimes called an SEIR model).
71 |
72 | All individuals in the population are assumed to be in one of these four states.
73 |
74 | The states are: susceptible (S), exposed (E), infected (I) and removed (R).
75 |
76 | Comments:
77 |
78 | * Those in state R have been infected and either recovered or died.
79 | * Those who have recovered are assumed to have acquired immunity.
80 | * Those in the exposed group are not yet infectious.
81 |
82 | ### Time Path
83 |
84 | The flow across states follows the path $S \to E \to I \to R$.
85 |
86 | Provided the transmission rate is positive and $i(0) > 0$, a large fraction
87 | of the population is eventually infected.
88 |
89 | The interest is primarily in
90 |
91 | * the number of infections at a given time (which determines whether or not the health care system is overwhelmed) and
92 | * how long the caseload can be deferred (hopefully until a vaccine arrives)
93 |
94 | Using lower case letters for the fraction of the population in each state, the
95 | dynamics are
96 |
97 | ```{math}
98 | :label: sir_system
99 |
100 | \begin{aligned}
101 | \dot s(t) & = - \beta(t) \, s(t) \, i(t)
102 | \\
103 | \dot e(t) & = \beta(t) \, s(t) \, i(t) - \sigma \, e(t)
104 | \\
105 | \dot i(t) & = \sigma \, e(t) - \gamma \, i(t)
106 | \end{aligned}
107 | ```
108 |
109 | In these equations,
110 |
111 | * $\beta(t)$ is called the **transmission rate** (the rate at which individuals bump into others and expose them to the virus).
112 | * $\sigma$ is called the **infection rate** (the rate at which those who are exposed become infected)
113 | * $\gamma$ is called the **recovery rate** (the rate at which infected people recover or die).
114 | * the dot symbol $\dot y$ represents the time derivative $dy/dt$.
115 |
116 | We do not need to model the fraction $r$ of the population in state $R$ separately because the states form a partition.
117 |
118 | In particular, the "removed" fraction of the population is $r = 1 - s - e - i$.
119 |
120 | We will also track $c = i + r$, which is the cumulative caseload
121 | (i.e., all those who have or have had the infection).
122 |
123 | The system {eq}`sir_system` can be written in vector form as
124 |
125 | ```{math}
126 | :label: dfcv
127 |
128 | \dot x = F(x, t), \qquad x := (s, e, i)
129 | ```
130 |
131 | for suitable definition of $F$ (see the code below).
132 |
133 | ### Parameters
134 |
135 | Both $\sigma$ and $\gamma$ are thought of as fixed, biologically determined parameters.
136 |
137 | As in Atkeson's note, we set
138 |
139 | * $\sigma = 1/5.2$ to reflect an average incubation period of 5.2 days.
140 | * $\gamma = 1/18$ to match an average illness duration of 18 days.
141 |
142 | The transmission rate is modeled as
143 |
144 | * $\beta(t) := R(t) \gamma$ where $R(t)$ is the **effective reproduction number** at time $t$.
145 |
146 | (The notation is slightly confusing, since $R(t)$ is different from
147 | $R$, the symbol that represents the removed state.)
148 |
149 | ## Implementation
150 |
151 | First we set the population size to match the US.
152 |
153 | ```{code-cell} ipython3
154 | pop_size = 3.3e8
155 | ```
156 |
157 | Next we fix parameters as described above.
158 |
159 | ```{code-cell} ipython3
160 | γ = 1 / 18
161 | σ = 1 / 5.2
162 | ```
163 |
164 | Now we construct a function that represents $F$ in {eq}`dfcv`
165 |
166 | ```{code-cell} ipython3
167 | def F(x, t, R0=1.6):
168 | """
169 | Time derivative of the state vector.
170 |
171 | * x is the state vector (array_like)
172 | * t is time (scalar)
173 | * R0 is the effective reproduction number, defaulting to a constant
174 |
175 | """
176 | s, e, i = x
177 |
178 | # New exposure of susceptibles
179 | β = R0(t) * γ if callable(R0) else R0 * γ
180 | ne = β * s * i
181 |
182 | # Time derivatives
183 | ds = - ne
184 | de = ne - σ * e
185 | di = σ * e - γ * i
186 |
187 | return ds, de, di
188 | ```
189 |
190 | Note that `R0` can be either constant or a given function of time.
191 |
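For instance, both call signatures work, as this quick check shows (the state vector here is purely illustrative, not the initial conditions used below):

```{code-cell} ipython3
x_test = 0.99, 0.005, 0.005       # hypothetical state (s, e, i)
F(x_test, 0.0, R0=1.6)            # constant R0
F(x_test, 0.0, R0=lambda t: 1.6)  # time-dependent R0
```
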
192 | The initial conditions are set to
193 |
194 | ```{code-cell} ipython3
195 | # initial conditions of s, e, i
196 | i_0 = 1e-7
197 | e_0 = 4 * i_0
198 | s_0 = 1 - i_0 - e_0
199 | ```
200 |
201 | In vector form the initial condition is
202 |
203 | ```{code-cell} ipython3
204 | x_0 = s_0, e_0, i_0
205 | ```
206 |
207 | We solve for the time path numerically using odeint, at a sequence of dates
208 | `t_vec`.
209 |
210 | ```{code-cell} ipython3
211 | def solve_path(R0, t_vec, x_init=x_0):
212 | """
213 | Solve for i(t) and c(t) via numerical integration,
214 | given the time path for R0.
215 |
216 | """
217 | G = lambda x, t: F(x, t, R0)
218 | s_path, e_path, i_path = odeint(G, x_init, t_vec).transpose()
219 |
220 | c_path = 1 - s_path - e_path # cumulative cases
221 | return i_path, c_path
222 | ```
223 |
224 | ## Experiments
225 |
226 | Let's run some experiments using this code.
227 |
228 | The time period we investigate will be 550 days, or around 18 months:
229 |
230 | ```{code-cell} ipython3
231 | t_length = 550
232 | grid_size = 1000
233 | t_vec = np.linspace(0, t_length, grid_size)
234 | ```
235 |
236 | ### Experiment 1: Constant R0 Case
237 |
238 | Let's start with the case where `R0` is constant.
239 |
240 | We calculate the time path of infected people under different assumptions for `R0`:
241 |
242 | ```{code-cell} ipython3
243 | R0_vals = np.linspace(1.6, 3.0, 6)
244 | labels = [f'$R_0 = {r:.2f}$' for r in R0_vals]
245 | i_paths, c_paths = [], []
246 |
247 | for r in R0_vals:
248 | i_path, c_path = solve_path(r, t_vec)
249 | i_paths.append(i_path)
250 | c_paths.append(c_path)
251 | ```
252 |
253 | Here's some code to plot the time paths.
254 |
255 | ```{code-cell} ipython3
256 | def plot_paths(paths, labels, times=t_vec):
257 |
258 | fig, ax = plt.subplots()
259 |
260 | for path, label in zip(paths, labels):
261 | ax.plot(times, path, label=label)
262 |
263 | ax.legend(loc='upper left')
264 |
265 | plt.show()
266 | ```
267 |
268 | Let's plot current cases as a fraction of the population.
269 |
270 | ```{code-cell} ipython3
271 | plot_paths(i_paths, labels)
272 | ```
273 |
274 | As expected, lower effective transmission rates defer the peak of infections.
275 |
276 | They also lead to a lower peak in current cases.
277 |
278 | Here are cumulative cases, as a fraction of population:
279 |
280 | ```{code-cell} ipython3
281 | plot_paths(c_paths, labels)
282 | ```
283 |
284 | ### Experiment 2: Changing Mitigation
285 |
286 | Let's look at a scenario where mitigation (e.g., social distancing) is
287 | successively imposed.
288 |
289 | Here's a specification for `R0` as a function of time.
290 |
291 | ```{code-cell} ipython3
292 | def R0_mitigating(t, r0=3, η=1, r_bar=1.6):
293 | R0 = r0 * exp(- η * t) + (1 - exp(- η * t)) * r_bar
294 | return R0
295 | ```
296 |
297 | The idea is that `R0` starts off at 3 and falls to 1.6.
298 |
299 | This is due to progressive adoption of stricter mitigation measures.
300 |
301 | The parameter `η` controls the speed at which restrictions are imposed.
303 |
304 | We consider several different rates:
305 |
306 | ```{code-cell} ipython3
307 | η_vals = 1/5, 1/10, 1/20, 1/50, 1/100
308 | labels = [fr'$\eta = {η:.2f}$' for η in η_vals]
309 | ```
310 |
311 | This is what the time path of `R0` looks like at these alternative rates:
312 |
313 | ```{code-cell} ipython3
314 | fig, ax = plt.subplots()
315 |
316 | for η, label in zip(η_vals, labels):
317 | ax.plot(t_vec, R0_mitigating(t_vec, η=η), label=label)
318 |
319 | ax.legend()
320 | plt.show()
321 | ```
322 |
323 | Let's calculate the time path of infected people:
324 |
325 | ```{code-cell} ipython3
326 | i_paths, c_paths = [], []
327 |
328 | for η in η_vals:
329 | R0 = lambda t: R0_mitigating(t, η=η)
330 | i_path, c_path = solve_path(R0, t_vec)
331 | i_paths.append(i_path)
332 | c_paths.append(c_path)
333 | ```
334 |
335 | These are current cases under the different scenarios:
336 |
337 | ```{code-cell} ipython3
338 | plot_paths(i_paths, labels)
339 | ```
340 |
341 | Here are cumulative cases, as a fraction of population:
342 |
343 | ```{code-cell} ipython3
344 | plot_paths(c_paths, labels)
345 | ```
346 |
347 | ## Ending Lockdown
348 |
349 | The following replicates [additional results](https://drive.google.com/file/d/1uS7n-7zq5gfSgrL3S0HByExmpq4Bn3oh/view) by Andrew Atkeson on the timing of lifting lockdown.
350 |
351 | Consider these two mitigation scenarios:
352 |
353 | 1. $R_t = 0.5$ for 30 days and then $R_t = 2$ for the remaining 17 months. This corresponds to lifting lockdown in 30 days.
354 | 1. $R_t = 0.5$ for 120 days and then $R_t = 2$ for the remaining 14 months. This corresponds to lifting lockdown in 4 months.
355 |
356 | The parameters considered here start the model with 25,000 active infections
357 | and 75,000 agents already exposed to the virus and thus soon to be contagious.
358 |
359 | ```{code-cell} ipython3
360 | # initial conditions
361 | i_0 = 25_000 / pop_size
362 | e_0 = 75_000 / pop_size
363 | s_0 = 1 - i_0 - e_0
364 | x_0 = s_0, e_0, i_0
365 | ```
366 |
367 | Let's calculate the paths:
368 |
369 | ```{code-cell} ipython3
370 | R0_paths = (lambda t: 0.5 if t < 30 else 2,
371 | lambda t: 0.5 if t < 120 else 2)
372 |
373 | labels = [f'scenario {i}' for i in (1, 2)]
374 |
375 | i_paths, c_paths = [], []
376 |
377 | for R0 in R0_paths:
378 | i_path, c_path = solve_path(R0, t_vec, x_init=x_0)
379 | i_paths.append(i_path)
380 | c_paths.append(c_path)
381 | ```
382 |
383 | Here is the number of active infections:
384 |
385 | ```{code-cell} ipython3
386 | plot_paths(i_paths, labels)
387 | ```
388 |
389 | What kind of mortality can we expect under these scenarios?
390 |
391 | Suppose that 1% of cases result in death
392 |
393 | ```{code-cell} ipython3
394 | ν = 0.01
395 | ```
396 |
397 | This is the cumulative number of deaths:
398 |
399 | ```{code-cell} ipython3
400 | paths = [path * ν * pop_size for path in c_paths]
401 | plot_paths(paths, labels)
402 | ```
403 |
404 | This is the daily death rate:
405 |
406 | ```{code-cell} ipython3
407 | paths = [path * ν * γ * pop_size for path in i_paths]
408 | plot_paths(paths, labels)
409 | ```
410 |
411 | Pushing the peak of the curve further into the future may reduce cumulative deaths
412 | if a vaccine is found.
413 |
--------------------------------------------------------------------------------
/lectures/rand_resp.md:
--------------------------------------------------------------------------------
1 | ---
2 | jupytext:
3 | text_representation:
4 | extension: .md
5 | format_name: myst
6 | kernelspec:
7 | display_name: Python 3
8 | language: python
9 | name: python3
10 | ---
11 |
12 | # Randomized Response Surveys
13 |
14 |
15 | ## Overview
16 |
17 | Social stigmas can inhibit people from confessing potentially embarrassing activities or opinions.
18 |
19 | When asked to participate in a sample survey about personally sensitive issues, people might decline, and even if they do participate, they might choose to provide incorrect answers to sensitive questions.
20 |
21 | These problems induce **selection** biases that present challenges to interpreting and designing surveys.
22 |
23 | To illustrate how social scientists have thought about estimating the prevalence of such embarrassing activities and opinions, this lecture describes a classic approach of S. L. Warner {cite}`warner1965randomized`.
24 |
25 | Warner used elementary probability to construct a way to protect the privacy of **individual** respondents to surveys while still estimating the fraction of a **collection** of individuals who have a socially stigmatized characteristic or who engage in a socially stigmatized activity.
26 |
27 | Warner's idea was to add **noise** between the respondent's answer and the **signal** about that answer that the survey maker ultimately receives.
28 |
29 | Knowing about the structure of the noise assures the respondent that the survey maker cannot infer his true status from his response.
30 |
31 | Statistical properties of the noise injection procedure provide the respondent **plausible deniability**.
32 |
33 | Related ideas underlie modern **differential privacy** systems.
34 |
35 | (See https://en.wikipedia.org/wiki/Differential_privacy)
36 |
37 |
38 | ## Warner's Strategy
39 |
40 | As usual, let's bring in the Python modules we'll be using.
41 |
42 |
43 | ```{code-cell} ipython3
44 | import numpy as np
45 | import pandas as pd
46 | ```
47 |
48 | Suppose that every person in the population belongs to either Group A or Group B.
49 |
50 | We want to estimate the proportion $\pi$ who belong to Group A while protecting individual respondents' privacy.
51 |
52 |
53 | Warner {cite}`warner1965randomized` proposed and analyzed the following procedure.
54 |
55 | - A random sample of $n$ people is drawn with replacement from the population and each person is interviewed.
56 | - Prepare a **random spinner** that points to the Letter A with probability $p$ and to the Letter B with probability $(1-p)$.
57 | - Each subject spins the spinner and sees an outcome (A or B) that the interviewer does **not** observe.
58 | - The subject truthfully reports "yes" if the spinner points to the group to which he belongs, and "no" otherwise.
62 |
63 | Warner constructed a maximum likelihood estimator of the proportion of the population in Group A.
64 |
65 | Let
66 |
67 | - $\pi$ : True probability of A in the population
68 | - $p$ : Probability that the spinner points to A
69 | - $X_{i}=\begin{cases}1,\text{ if the } i\text{th} \ \text{ subject says yes}\\0,\text{ if the } i\text{th} \ \text{ subject says no}\end{cases}$
70 |
71 |
72 | Index the sample set so that the first $n_1$ report "yes", while the second $n-n_1$ report "no".
73 |
74 | The likelihood function of a sample set is
75 |
76 | $$
77 | L=\left[\pi p + (1-\pi)(1-p)\right]^{n_{1}}\left[(1-\pi) p +\pi (1-p)\right]^{n-n_{1}}
78 | $$ (eq:one)
79 |
80 | The log of the likelihood function is:
81 |
82 | $$
83 | \log(L)= n_1 \log \left[\pi p + (1-\pi)(1-p)\right] + (n-n_{1}) \log \left[(1-\pi) p +\pi (1-p)\right]
84 | $$ (eq:two)
85 |
86 | The first-order necessary condition for maximizing the log likelihood function with respect to $\pi$ is:
87 |
88 | $$
89 | \frac{(n-n_1)(2p-1)}{(1-\pi) p +\pi (1-p)}=\frac{n_1 (2p-1)}{\pi p + (1-\pi)(1-p)}
90 | $$
91 |
92 | or
93 |
94 | $$
95 | \pi p + (1-\pi)(1-p)=\frac{n_1}{n}
96 | $$ (eq:3)
97 |
98 | If $p \neq \frac{1}{2}$, then the maximum likelihood estimator (MLE) of $\pi$ is:
99 |
100 | $$
101 | \hat{\pi}=\frac{p-1}{2p-1}+\frac{n_1}{(2p-1)n}
102 | $$ (eq:four)
103 |
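Before moving on, let's check {eq}`eq:four` with some illustrative numbers: if $p = 0.8$ and the true $\pi$ is $0.6$, the probability of a "yes" is $0.6 \times 0.8 + 0.4 \times 0.2 = 0.56$, so a sample with $n_1/n = 0.56$ should recover $\hat{\pi} = 0.6$.

```{code-cell} ipython3
p, n, n1 = 0.8, 1000, 560
(p - 1) / (2 * p - 1) + n1 / ((2 * p - 1) * n)
```
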
104 | We compute the mean and variance of the MLE $\hat \pi$ to be:
105 |
106 | $$
107 | \begin{aligned}
108 | \mathbb{E}(\hat{\pi})&= \frac{1}{2 p-1}\left[p-1+\frac{1}{n} \sum_{i=1}^{n} \mathbb{E} X_i \right] \\
109 | &=\frac{1}{2 p-1} \left[ p -1 + \pi p + (1-\pi)(1-p)\right] \\
110 | &=\pi
111 | \end{aligned}
112 | $$ (eq:five)
113 |
114 | and
115 |
116 | $$
117 | \begin{aligned}
118 | Var(\hat{\pi})&=\frac{n Var(X_i)}{(2p - 1 )^2 n^2} \\
119 | &= \frac{\left[\pi p + (1-\pi)(1-p)\right]\left[(1-\pi) p +\pi (1-p)\right]}{(2p - 1 )^2 n}\\
120 | &=\frac{\frac{1}{4}+(2 p^2 - 2 p +\frac{1}{2})(- 2 \pi^2 + 2 \pi -\frac{1}{2})}{(2p - 1 )^2 n}\\
121 | &=\frac{1}{n}\left[\frac{1}{16(p-\frac{1}{2})^2}-(\pi-\frac{1}{2})^2 \right]
122 | \end{aligned}
123 | $$ (eq:six)
124 |
125 | Equation {eq}`eq:five` indicates that $\hat{\pi}$ is an **unbiased estimator** of $\pi$ while equation {eq}`eq:six` tells us the variance of the estimator.
126 |
127 | To compute a confidence interval, first rewrite {eq}`eq:six` as:
128 |
129 | $$
130 | Var(\hat{\pi})=\frac{\frac{1}{4}-(\pi-\frac{1}{2})^2}{n}+\frac{\frac{1}{16(p-\frac{1}{2})^2}-\frac{1}{4}}{n}
131 | $$ (eq:seven)
132 |
133 | This equation indicates that the variance of $\hat{\pi}$ can be represented as a sum of the variance due to sampling plus the variance due to the random device.
134 |
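This decomposition is easy to evaluate numerically; here is a sketch with illustrative values $\pi = 0.6$, $p = 0.8$ and $n = 1000$.

```{code-cell} ipython3
π, p, n = 0.6, 0.8, 1000
var_sampling = (1/4 - (π - 1/2)**2) / n        # variance due to sampling
var_device = (1/(16 * (p - 1/2)**2) - 1/4) / n # variance due to the random device
var_sampling, var_device, var_sampling + var_device
```
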
135 | From the expressions above we can find that:
136 |
137 | - When $p$ is $\frac{1}{2}$, expression {eq}`eq:one` degenerates to a constant.
138 |
139 | - When $p$ is $1$ or $0$, the randomized estimate degenerates to an estimator without randomized sampling.
140 |
141 |
142 | We shall only discuss situations in which $p \in (\frac{1}{2},1)$
143 |
144 | (the case $p \in (0,\frac{1}{2})$ is symmetric).
145 |
146 | From expressions {eq}`eq:five` and {eq}`eq:seven` we can deduce that:
147 |
148 | - The MSE of $\hat{\pi}$ decreases as $p$ increases.
149 |
150 |
151 | ## Comparing Two Survey Designs
152 |
153 | Let's compare the preceding randomized-response method with a stylized non-randomized response method.
154 |
155 | In our non-randomized response method, we suppose that:
156 |
157 | - Members of Group A tell the truth with probability $T_a$ while members of Group B tell the truth with probability $T_b$
158 | - $Y_i$ is $1$ or $0$ according to whether the $i$th member of the sample reports being in Group A or not.
159 |
160 | Then we can estimate $\pi$ as:
161 |
162 | $$
163 | \hat{\pi}=\frac{\sum_{i=1}^{n}Y_i}{n}
164 | $$ (eq:eight)
165 |
166 | We calculate the expectation, bias, and variance of the estimator to be:
167 |
168 | $$
169 | \begin{aligned}
170 | \mathbb{E}(\hat{\pi})&=\pi T_a + \left[ (1-\pi)(1-T_b)\right]\\
171 | \end{aligned}
172 | $$ (eq:nine)
173 |
174 | $$
175 | \begin{aligned}
176 | Bias(\hat{\pi})&=\mathbb{E}(\hat{\pi}-\pi)\\
177 | &=\pi [T_a + T_b -2 ] + [1- T_b] \\
178 | \end{aligned}
179 | $$ (eq:ten)
180 |
181 | $$
182 | \begin{aligned}
183 | Var(\hat{\pi})&=\frac{ \left[ \pi T_a + (1-\pi)(1-T_b)\right] \left[1- \pi T_a -(1-\pi)(1-T_b)\right] }{n}
184 | \end{aligned}
185 | $$ (eq:eleven)
186 |
187 | It is useful to define a
188 |
189 | $$
190 | \text{MSE Ratio}=\frac{\text{Mean Square Error Randomized}}{\text{Mean Square Error Regular}}
191 | $$
192 |
193 | We can compute MSE Ratios for different survey designs associated with different parameter values.
194 |
195 | The following Python code computes objects we want to stare at in order to make comparisons
196 | under different values of $\pi_A$ and $n$:
197 |
198 | ```{code-cell} ipython3
199 | class Comparison:
200 | def __init__(self, A, n):
201 | self.A = A
202 | self.n = n
203 | TaTb = np.array([[0.95, 1], [0.9, 1], [0.7, 1],
204 | [0.5, 1], [1, 0.95], [1, 0.9],
205 | [1, 0.7], [1, 0.5], [0.95, 0.95],
206 | [0.9, 0.9], [0.7, 0.7], [0.5, 0.5]])
207 | self.p_arr = np.array([0.6, 0.7, 0.8, 0.9])
208 | self.p_map = dict(zip(self.p_arr, [f"MSE Ratio: p = {x}" for x in self.p_arr]))
209 | self.template = pd.DataFrame(columns=self.p_arr)
210 | self.template[['T_a','T_b']] = TaTb
211 | self.template['Bias'] = None
212 |
213 | def theoretical(self):
214 | A = self.A
215 | n = self.n
216 | df = self.template.copy()
217 | df['Bias'] = A * (df['T_a'] + df['T_b'] - 2) + (1 - df['T_b'])
218 | for p in self.p_arr:
219 | df[p] = (1 / (16 * (p - 1/2)**2) - (A - 1/2)**2) / n / \
220 | (df['Bias']**2 + ((A * df['T_a'] + (1 - A) * (1 - df['T_b'])) * (1 - A * df['T_a'] - (1 - A) * (1 - df['T_b'])) / n))
221 | df[p] = df[p].round(2)
222 | df = df.set_index(["T_a", "T_b", "Bias"]).rename(columns=self.p_map)
223 | return df
224 |
225 | def MCsimulation(self, size=1000, seed=123456):
226 | A = self.A
227 | n = self.n
228 | df = self.template.copy()
229 | np.random.seed(seed)
230 | sample = np.random.rand(size, self.n) <= A
231 | random_device = np.random.rand(size, n)
232 | mse_rd = {}
233 | for p in self.p_arr:
234 | spinner = random_device <= p
235 | rd_answer = sample * spinner + (1 - sample) * (1 - spinner)
236 | n1 = rd_answer.sum(axis=1)
237 | pi_hat = (p - 1) / (2 * p - 1) + n1 / n / (2 * p - 1)
238 | mse_rd[p] = np.sum((pi_hat - A)**2)
239 | for inum, irow in df.iterrows():
240 | truth_a = np.random.rand(size, self.n) <= irow.T_a
241 | truth_b = np.random.rand(size, self.n) <= irow.T_b
242 | trad_answer = sample * truth_a + (1 - sample) * (1 - truth_b)
243 | pi_trad = trad_answer.sum(axis=1) / n
244 | df.loc[inum, 'Bias'] = pi_trad.mean() - A
245 | mse_trad = np.sum((pi_trad - A)**2)
246 | for p in self.p_arr:
247 | df.loc[inum, p] = (mse_rd[p] / mse_trad).round(2)
248 | df = df.set_index(["T_a", "T_b", "Bias"]).rename(columns=self.p_map)
249 | return df
250 | ```
251 |
252 | Let's put the code to work for parameter values
253 |
254 | - $\pi_A=0.6$
255 | - $n=1000$
256 |
257 | We can generate MSE Ratios theoretically using the above formulas.
258 |
259 | We can also perform Monte Carlo simulations of a MSE Ratio.
260 |
261 | ```{code-cell} ipython3
262 | cp1 = Comparison(0.6, 1000)
263 | df1_theoretical = cp1.theoretical()
264 | df1_theoretical
265 | ```
266 |
267 | ```{code-cell} ipython3
268 | df1_mc = cp1.MCsimulation()
269 | df1_mc
270 | ```
271 |
272 | The theoretical calculations do a good job of predicting Monte Carlo results.
273 |
274 | We see that in many situations, especially when the bias is not small, the MSE of the randomized-sampling method is smaller than that of the non-randomized sampling method.
275 |
276 | These differences become larger as $p$ increases.
277 |
278 | By adjusting parameters $\pi_A$ and $n$, we can study outcomes in different situations.
279 |
280 | For example, for another situation described in Warner {cite}`warner1965randomized`:
281 |
282 | - $\pi_A=0.5$
283 | - $n=1000$
284 |
285 | we can use the code
286 |
287 | ```{code-cell} ipython3
288 | cp2 = Comparison(0.5, 1000)
289 | df2_theoretical = cp2.theoretical()
290 | df2_theoretical
291 | ```
292 |
293 | ```{code-cell} ipython3
294 | df2_mc = cp2.MCsimulation()
295 | df2_mc
296 | ```
297 |
298 | We can also revisit a calculation in the concluding section of Warner {cite}`warner1965randomized` in which
299 |
300 | - $\pi_A=0.6$
301 | - $n=2000$
302 |
303 | We use the code
304 |
305 | ```{code-cell} ipython3
306 | cp3 = Comparison(0.6, 2000)
307 | df3_theoretical = cp3.theoretical()
308 | df3_theoretical
309 | ```
310 |
311 | ```{code-cell} ipython3
312 | df3_mc = cp3.MCsimulation()
313 | df3_mc
314 | ```
315 |
316 | Evidently, as $n$ increases, the randomized response method performs better in more situations.
317 |
318 | ## Concluding Remarks
319 |
320 | {doc}`This QuantEcon lecture ` describes some alternative randomized response surveys.
321 |
322 | That lecture presents a utilitarian analysis of those alternatives conducted by Lars Ljungqvist
323 | {cite}`ljungqvist1993unified`.
324 |
--------------------------------------------------------------------------------
/lectures/inventory_dynamics.md:
--------------------------------------------------------------------------------
1 | ---
2 | jupytext:
3 | text_representation:
4 | extension: .md
5 | format_name: myst
6 | kernelspec:
7 | display_name: Python 3
8 | language: python
9 | name: python3
10 | ---
11 |
12 | ```{raw} jupyter
13 |
18 | ```
19 |
20 | # Inventory Dynamics
21 |
22 | ```{index} single: Markov process, inventory
23 | ```
24 |
25 | ```{contents} Contents
26 | :depth: 2
27 | ```
28 |
29 | ## Overview
30 |
31 | In this lecture we will study the time path of inventories for firms that
32 | follow so-called s-S inventory dynamics.
33 |
34 | Such firms
35 |
36 | 1. wait until inventory falls below some level $s$ and then
37 | 1. order sufficient quantities to bring their inventory back up to capacity $S$.
38 |
39 | These kinds of policies are common in practice and also optimal in certain circumstances.
40 |
41 | A review of early literature and some macroeconomic implications can be found in {cite}`caplin1985variability`.
42 |
43 | Here our main aim is to learn more about simulation, time series and Markov dynamics.
44 |
45 | While our Markov environment and many of the concepts we consider are related to those found in our {doc}`lecture on finite Markov chains `, the state space is a continuum in the current application.
46 |
47 | Let's start with some imports
48 |
49 | ```{code-cell} ipython3
50 | import matplotlib.pyplot as plt
51 | import numpy as np
52 | from numba import jit, float64, prange
53 | from numba.experimental import jitclass
54 | ```
55 |
56 | ## Sample Paths
57 |
58 | Consider a firm with inventory $X_t$.
59 |
60 | The firm waits until $X_t \leq s$ and then restocks up to $S$ units.
61 |
62 | It faces stochastic demand $\{ D_t \}$, which we assume is IID.
63 |
64 | With notation $a^+ := \max\{a, 0\}$, inventory dynamics can be written
65 | as
66 |
67 | $$
68 | X_{t+1} =
69 | \begin{cases}
70 | ( S - D_{t+1})^+ & \quad \text{if } X_t \leq s \\
71 | ( X_t - D_{t+1} )^+ & \quad \text{if } X_t > s
72 | \end{cases}
73 | $$
74 |
75 | In what follows, we will assume that each $D_t$ is lognormal, so that
76 |
77 | $$
78 | D_t = \exp(\mu + \sigma Z_t)
79 | $$
80 |
81 | where $\mu$ and $\sigma$ are parameters and $\{Z_t\}$ is IID
82 | and standard normal.
83 |
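Under the default parameters adopted below ($\mu = 1.0$, $\sigma = 0.5$), the median demand draw is $e^{\mu} \approx 2.7$ units, small relative to the capacity level; here are a few illustrative draws.

```{code-cell} ipython3
mu, sigma = 1.0, 0.5   # default values used in the Firm class below
np.exp(mu + sigma * np.random.randn(5))
```
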
84 | Here's a class that stores parameters and generates time paths for inventory.
85 |
86 | ```{code-cell} python3
87 | firm_data = [
88 | ('s', float64), # restock trigger level
89 | ('S', float64), # capacity
90 | ('mu', float64), # shock location parameter
91 | ('sigma', float64) # shock scale parameter
92 | ]
93 |
94 |
95 | @jitclass(firm_data)
96 | class Firm:
97 |
98 | def __init__(self, s=10, S=100, mu=1.0, sigma=0.5):
99 |
100 | self.s, self.S, self.mu, self.sigma = s, S, mu, sigma
101 |
102 | def update(self, x):
103 | "Update the state from t to t+1 given current state x."
104 |
105 | Z = np.random.randn()
106 | D = np.exp(self.mu + self.sigma * Z)
107 | if x <= self.s:
108 | return max(self.S - D, 0)
109 | else:
110 | return max(x - D, 0)
111 |
112 | def sim_inventory_path(self, x_init, sim_length):
113 |
114 | X = np.empty(sim_length)
115 | X[0] = x_init
116 |
117 | for t in range(sim_length-1):
118 | X[t+1] = self.update(X[t])
119 | return X
120 | ```
121 |
122 | Let's run a first simulation, of a single path:
123 |
124 | ```{code-cell} ipython3
125 | firm = Firm()
126 |
127 | s, S = firm.s, firm.S
128 | sim_length = 100
129 | x_init = 50
130 |
131 | X = firm.sim_inventory_path(x_init, sim_length)
132 |
133 | fig, ax = plt.subplots()
134 | bbox = (0., 1.02, 1., .102)
135 | legend_args = {'ncol': 3,
136 | 'bbox_to_anchor': bbox,
137 | 'loc': 3,
138 | 'mode': 'expand'}
139 |
140 | ax.plot(X, label="inventory")
141 | ax.plot(np.full(sim_length, s), 'k--', label="$s$")
142 | ax.plot(np.full(sim_length, S), 'k-', label="$S$")
143 | ax.set_ylim(0, S+10)
144 | ax.set_xlabel("time")
145 | ax.legend(**legend_args)
146 |
147 | plt.show()
148 | ```
149 |
150 | Now let's simulate multiple paths in order to build a more complete picture of
151 | the probabilities of different outcomes:
152 |
153 | ```{code-cell} ipython3
154 | sim_length = 200
155 | fig, ax = plt.subplots()
156 |
157 | ax.plot(np.full(sim_length, s), 'k--', label="$s$")
158 | ax.plot(np.full(sim_length, S), 'k-', label="$S$")
159 | ax.set_ylim(0, S+10)
160 | ax.legend(**legend_args)
161 |
162 | for i in range(400):
163 | X = firm.sim_inventory_path(x_init, sim_length)
164 | ax.plot(X, 'b', alpha=0.2, lw=0.5)
165 |
166 | plt.show()
167 | ```
168 |
169 | ## Marginal Distributions
170 |
171 | Now let’s look at the marginal distribution $\psi_T$ of $X_T$ for some
172 | fixed $T$.
173 |
174 | We will do this by generating many draws of $X_T$ given initial
175 | condition $X_0$.
176 |
177 | With these draws of $X_T$ we can build up a picture of its distribution $\psi_T$.
178 |
179 | Here's one visualization, with $T=50$.
180 |
181 | ```{code-cell} ipython3
182 | T = 50
183 | M = 200 # Number of draws
184 |
185 | ymin, ymax = 0, S + 10
186 |
187 | fig, axes = plt.subplots(1, 2, figsize=(11, 6))
188 |
189 | for ax in axes:
190 | ax.grid(alpha=0.4)
191 |
192 | ax = axes[0]
193 |
194 | ax.set_ylim(ymin, ymax)
195 | ax.set_ylabel('$X_t$', fontsize=16)
196 | ax.vlines((T,), ymin, ymax)
197 |
198 | ax.set_xticks((T,))
199 | ax.set_xticklabels((r'$T$',))
200 |
201 | sample = np.empty(M)
202 | for m in range(M):
203 | X = firm.sim_inventory_path(x_init, 2 * T)
204 | ax.plot(X, 'b-', lw=1, alpha=0.5)
205 | ax.plot((T,), (X[T],), 'ko', alpha=0.5)
206 | sample[m] = X[T]
207 |
208 | axes[1].set_ylim(ymin, ymax)
209 |
210 | axes[1].hist(sample,
211 | bins=16,
212 | density=True,
213 | orientation='horizontal',
214 | histtype='bar',
215 | alpha=0.5)
216 |
217 | plt.show()
218 | ```
219 |
220 | We can build up a clearer picture by drawing more samples
221 |
222 | ```{code-cell} ipython3
223 | T = 50
224 | M = 50_000
225 |
226 | fig, ax = plt.subplots()
227 |
228 | sample = np.empty(M)
229 | for m in range(M):
230 | X = firm.sim_inventory_path(x_init, T+1)
231 | sample[m] = X[T]
232 |
233 | ax.hist(sample,
234 | bins=36,
235 | density=True,
236 | histtype='bar',
237 | alpha=0.75)
238 |
239 | plt.show()
240 | ```
241 |
242 | Note that the distribution is bimodal
243 |
244 | * Most firms have restocked twice but a few have restocked only once (see figure with paths above).
245 | * Firms in the second category have lower inventory.
246 |
247 | We can also approximate the distribution using a [kernel density estimator](https://en.wikipedia.org/wiki/Kernel_density_estimation).
248 |
249 | Kernel density estimators can be thought of as smoothed histograms.
250 |
251 | They are preferable to histograms when the distribution being estimated is likely to be smooth.
252 |
253 | We will use a kernel density estimator from [scikit-learn](https://scikit-learn.org/stable/)
254 |
255 | ```{code-cell} ipython3
256 | from sklearn.neighbors import KernelDensity
257 |
258 | def plot_kde(sample, ax, label=''):
259 |
260 | xmin, xmax = 0.9 * min(sample), 1.1 * max(sample)
261 | xgrid = np.linspace(xmin, xmax, 200)
262 | kde = KernelDensity(kernel='gaussian').fit(sample[:, None])
263 | log_dens = kde.score_samples(xgrid[:, None])
264 |
265 | ax.plot(xgrid, np.exp(log_dens), label=label)
266 | ```
267 |
268 | ```{code-cell} ipython3
269 | fig, ax = plt.subplots()
270 | plot_kde(sample, ax)
271 | plt.show()
272 | ```
273 |
274 | The allocation of probability mass is similar to what was shown by the
275 | histogram just above.
276 |
277 | ## Exercises
278 |
279 | ```{exercise}
280 | :label: id_ex1
281 |
282 | This model is asymptotically stationary, with a unique stationary
283 | distribution.
284 |
285 | (See the discussion of stationarity in {doc}`our lecture on AR(1) processes ` for background --- the fundamental concepts are the same.)
286 |
287 | In particular, the sequence of marginal distributions $\{\psi_t\}$
288 | is converging to a unique limiting distribution that does not depend on
289 | initial conditions.
290 |
291 | Although we will not prove this here, we can investigate it using simulation.
292 |
293 | Your task is to generate and plot the sequence $\{\psi_t\}$ at times
294 | $t = 10, 50, 250, 500, 750$ based on the discussion above.
295 |
296 | (The kernel density estimator is probably the best way to present each
297 | distribution.)
298 |
299 | You should see convergence, in the sense that differences between successive distributions are getting smaller.
300 |
301 | Try different initial conditions to verify that, in the long run, the distribution is invariant across initial conditions.
302 | ```
303 |
304 | ```{solution-start} id_ex1
305 | :class: dropdown
306 | ```
307 |
308 | Below is one possible solution.
309 |
310 | The computations involve a lot of CPU cycles so we have tried to write the
311 | code efficiently.
312 |
313 | This meant writing a specialized function rather than using the class above.
314 |
315 | ```{code-cell} ipython3
316 | s, S, mu, sigma = firm.s, firm.S, firm.mu, firm.sigma
317 |
318 | @jit(parallel=True)
319 | def shift_firms_forward(current_inventory_levels, num_periods):
320 |
321 | num_firms = len(current_inventory_levels)
322 | new_inventory_levels = np.empty(num_firms)
323 |
324 | for f in prange(num_firms):
325 | x = current_inventory_levels[f]
326 | for t in range(num_periods):
327 | Z = np.random.randn()
328 | D = np.exp(mu + sigma * Z)
329 | if x <= s:
330 | x = max(S - D, 0)
331 | else:
332 | x = max(x - D, 0)
333 | new_inventory_levels[f] = x
334 |
335 | return new_inventory_levels
336 | ```
337 |
338 | ```{code-cell} ipython3
339 | x_init = 50
340 | num_firms = 50_000
341 |
342 | sample_dates = 0, 10, 50, 250, 500, 750
343 |
344 | first_diffs = np.diff(sample_dates)
345 |
346 | fig, ax = plt.subplots()
347 |
348 | X = np.full(num_firms, x_init)
349 |
350 | current_date = 0
351 | for d in first_diffs:
352 | X = shift_firms_forward(X, d)
353 | current_date += d
354 | plot_kde(X, ax, label=f't = {current_date}')
355 |
356 | ax.set_xlabel('inventory')
357 | ax.set_ylabel('probability')
358 | ax.legend()
359 | plt.show()
360 | ```
361 |
362 | Notice that by $t=500$ or $t=750$ the densities are barely
363 | changing.
364 |
365 | We have reached a reasonable approximation of the stationary density.
366 |
367 | You can convince yourself that initial conditions don’t matter by
368 | testing a few of them.
369 |
370 | For example, try rerunning the code above with all firms starting at
371 | $X_0 = 20$ or $X_0 = 80$.
372 |
373 | ```{solution-end}
374 | ```
375 |
376 | ```{exercise}
377 | :label: id_ex2
378 |
379 | Using simulation, calculate the probability that firms that start with
380 | $X_0 = 70$ need to order twice or more in the first 50 periods.
381 |
382 | You will need a large sample size to get an accurate reading.
383 | ```
384 |
385 |
386 | ```{solution-start} id_ex2
387 | :class: dropdown
388 | ```
389 |
390 | Here is one solution.
391 |
392 | Again, the computations are relatively intensive so we have written a
393 | specialized function rather than using the class above.
394 |
395 | We will also use parallelization across firms.
396 |
397 | ```{code-cell} ipython3
398 | @jit(parallel=True)
399 | def compute_freq(sim_length=50, x_init=70, num_firms=1_000_000):
400 |
401 | firm_counter = 0 # Records number of firms that restock 2x or more
402 | for m in prange(num_firms):
403 | x = x_init
404 | restock_counter = 0 # Will record number of restocks for firm m
405 |
406 | for t in range(sim_length):
407 | Z = np.random.randn()
408 | D = np.exp(mu + sigma * Z)
409 | if x <= s:
410 | x = max(S - D, 0)
411 | restock_counter += 1
412 | else:
413 | x = max(x - D, 0)
414 |
415 | if restock_counter > 1:
416 | firm_counter += 1
417 |
418 | return firm_counter / num_firms
419 | ```
420 |
421 | Note the time the routine takes to run, as well as the output.
422 |
423 | ```{code-cell} ipython3
424 | %%time
425 |
426 | freq = compute_freq()
427 | print(f"Frequency of at least two stock outs = {freq}")
428 | ```
429 |
430 | Try switching the `parallel` flag to `False` in the jitted function
431 | above.
432 |
433 | Depending on your system, the difference can be substantial.
434 |
435 | (On our desktop machine, the speed up is by a factor of 5.)
436 |
437 | ```{solution-end}
438 | ```
439 |
--------------------------------------------------------------------------------
/lectures/os_egm_jax.md:
--------------------------------------------------------------------------------
1 | ---
2 | jupytext:
3 | text_representation:
4 | extension: .md
5 | format_name: myst
6 | kernelspec:
7 | display_name: Python 3
8 | language: python
9 | name: python3
10 | ---
11 |
12 | ```{raw} jupyter
13 |
18 | ```
19 |
20 | # {index}`Optimal Savings VI: EGM with JAX `
21 |
22 | ```{include} _admonition/gpu.md
23 | ```
24 |
25 | ```{contents} Contents
26 | :depth: 2
27 | ```
28 |
29 |
30 | ## Overview
31 |
32 | In this lecture, we'll implement the endogenous grid method (EGM) using JAX.
33 |
34 | This lecture builds on {doc}`os_egm`, which introduced EGM using NumPy.
35 |
36 | By converting to JAX, we can leverage fast linear algebra, hardware accelerators, and JIT compilation for improved performance.
37 |
38 | We'll also use JAX's `vmap` function to fully vectorize the Coleman-Reffett operator.
39 |
40 | Let's start with some standard imports:
41 |
42 | ```{code-cell} python3
43 | import matplotlib.pyplot as plt
44 | import jax
45 | import jax.numpy as jnp
46 | import quantecon as qe
47 | from typing import NamedTuple
48 | ```
49 |
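Since `vmap` plays a central role below, here is a minimal illustration of what it does: it converts a function of a scalar into a function that acts elementwise on arrays, in a form that JAX can compile efficiently.

```{code-cell} python3
square = lambda s: s**2
jax.vmap(square)(jnp.arange(3.0))   # array([0., 1., 4.])
```
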
50 | ## Implementation
51 |
52 | For details on the savings problem and the endogenous grid method (EGM), please see {doc}`os_egm`.
53 |
54 | Here we focus on the JAX implementation of EGM.
55 |
56 | We use the same setting as in {doc}`os_egm`:
57 |
58 | * $u(c) = \ln c$,
59 | * production is Cobb-Douglas, and
60 | * the shocks are lognormal.
61 |
62 | Here are the analytical solutions for comparison.
63 |
64 | ```{code-cell} python3
65 | def v_star(x, α, β, μ):
66 | """
67 | True value function
68 | """
69 | c1 = jnp.log(1 - α * β) / (1 - β)
70 | c2 = (μ + α * jnp.log(α * β)) / (1 - α)
71 | c3 = 1 / (1 - β)
72 | c4 = 1 / (1 - α * β)
73 | return c1 + c2 * (c3 - c4) + c4 * jnp.log(x)
74 |
75 | def σ_star(x, α, β):
76 | """
77 | True optimal policy
78 | """
79 | return (1 - α * β) * x
80 | ```
81 |
82 | The `Model` class stores only the data (grids, shocks, and parameters).
83 |
84 | Utility and production functions will be defined globally to work with JAX's JIT compiler.
85 |
86 | ```{code-cell} python3
87 | class Model(NamedTuple):
88 | β: float # discount factor
89 | μ: float # shock location parameter
90 | s: float # shock scale parameter
91 | s_grid: jnp.ndarray # exogenous savings grid
92 | shocks: jnp.ndarray # shock draws
93 | α: float # production function parameter
94 |
95 |
96 | def create_model(
97 | β: float = 0.96,
98 | μ: float = 0.0,
99 | s: float = 0.1,
100 | grid_max: float = 4.0,
101 | grid_size: int = 120,
102 | shock_size: int = 250,
103 | seed: int = 1234,
104 | α: float = 0.4
105 | ) -> Model:
106 | """
107 | Creates an instance of the optimal savings model.
108 | """
109 | # Set up exogenous savings grid
110 | s_grid = jnp.linspace(1e-4, grid_max, grid_size)
111 |
112 | # Store shocks (with a seed, so results are reproducible)
113 | key = jax.random.PRNGKey(seed)
114 | shocks = jnp.exp(μ + s * jax.random.normal(key, shape=(shock_size,)))
115 |
116 | return Model(β, μ, s, s_grid, shocks, α)
117 | ```
118 |
119 |
120 | We define utility and production functions globally.
121 |
122 | ```{code-cell} python3
123 | # Define utility and production functions with derivatives
124 | u = lambda c: jnp.log(c)
125 | u_prime = lambda c: 1 / c
126 | u_prime_inv = lambda x: 1 / x
127 | f = lambda k, α: k**α
128 | f_prime = lambda k, α: α * k**(α - 1)
129 | ```
130 | Here's the Coleman-Reffett operator using EGM.
131 |
132 | The key JAX feature here is `vmap`, which vectorizes the computation over the grid points.
133 |
134 | ```{code-cell} python3
135 | def K(
136 | c_in: jnp.ndarray, # Consumption values on the endogenous grid
137 | x_in: jnp.ndarray, # Current endogenous grid
138 | model: Model # Model specification
139 | ):
140 | """
141 | The Coleman-Reffett operator using EGM
142 |
143 | """
144 | β, μ, s, s_grid, shocks, α = model
145 | σ = lambda x_val: jnp.interp(x_val, x_in, c_in)
146 |
147 | # Define function to compute consumption at a single grid point
148 | def compute_c(s):
149 | # Approximate marginal utility ∫ u'(σ(f(s, α)z)) f'(s, α) z ϕ(z)dz
150 | vals = u_prime(σ(f(s, α) * shocks)) * f_prime(s, α) * shocks
151 | mu = jnp.mean(vals)
152 | # Calculate consumption
153 | return u_prime_inv(β * mu)
154 |
155 | # Vectorize and calculate on all exogenous grid points
156 | compute_c_vectorized = jax.vmap(compute_c)
157 | c_out = compute_c_vectorized(s_grid)
158 |
159 | # Determine corresponding endogenous grid
160 | x_out = s_grid + c_out # x_i = s_i + c_i
161 |
162 | return c_out, x_out
163 | ```
164 |
165 |
166 | Now we create a model instance.
167 |
168 | ```{code-cell} python3
169 | model = create_model()
170 | s_grid = model.s_grid
171 | ```
172 |
173 | The solver uses JAX's `jax.lax.while_loop` for the iteration and is JIT-compiled for speed.
174 |
175 | ```{code-cell} python3
176 | @jax.jit
177 | def solve_model_time_iter(
178 | model: Model,
179 | c_init: jnp.ndarray,
180 | x_init: jnp.ndarray,
181 | tol: float = 1e-5,
182 | max_iter: int = 1000
183 | ):
184 | """
185 | Solve the model using time iteration with EGM.
186 | """
187 |
188 | def condition(loop_state):
189 | i, c, x, error = loop_state
190 | return (error > tol) & (i < max_iter)
191 |
192 | def body(loop_state):
193 | i, c, x, error = loop_state
194 | c_new, x_new = K(c, x, model)
195 | error = jnp.max(jnp.abs(c_new - c))
196 | return i + 1, c_new, x_new, error
197 |
198 | # Initialize loop state
199 | initial_state = (0, c_init, x_init, tol + 1)
200 |
201 | # Run the loop
202 | i, c, x, error = jax.lax.while_loop(condition, body, initial_state)
203 |
204 | return c, x
205 | ```
206 |
207 | We solve the model starting from an initial guess.
208 |
209 | ```{code-cell} python3
210 | c_init = jnp.copy(s_grid)
211 | x_init = s_grid + c_init
212 | c, x = solve_model_time_iter(model, c_init, x_init)
213 | ```
214 |
215 | Let's plot the resulting policy against the analytical solution.
216 |
217 | ```{code-cell} python3
218 | fig, ax = plt.subplots()
219 |
220 | ax.plot(x, c, lw=2,
221 | alpha=0.8, label='approximate policy function')
222 |
223 | ax.plot(x, σ_star(x, model.α, model.β), 'k--',
224 | lw=2, alpha=0.8, label='true policy function')
225 |
226 | ax.legend()
227 | plt.show()
228 | ```
229 |
230 | The fit is very good.
231 |
232 | ```{code-cell} python3
233 | max_dev = jnp.max(jnp.abs(c - σ_star(x, model.α, model.β)))
234 | print(f"Maximum absolute deviation: {max_dev:.7}")
235 | ```
236 |
237 | The JAX implementation is very fast thanks to JIT compilation and vectorization.
238 |
239 | ```{code-cell} python3
240 | with qe.Timer(precision=8):
241 | c, x = solve_model_time_iter(model, c_init, x_init)
242 | jax.block_until_ready(c)
243 | ```
244 |
245 | This speed comes from:
246 |
247 | * JIT compilation of the entire solver
248 | * Vectorization via `vmap` in the Coleman-Reffett operator
249 | * Use of `jax.lax.while_loop` instead of a Python loop
250 | * Efficient JAX array operations throughout
251 |
252 | ## Exercises
253 |
254 | ```{exercise}
255 | :label: cake_egm_jax_ex1
256 |
257 | Solve the optimal savings problem with CRRA utility
258 |
259 | $$
260 | u(c) = \frac{c^{1 - \gamma} - 1}{1 - \gamma}
261 | $$
262 |
263 | Compare the optimal policies for values of $\gamma$ approaching 1 from above (e.g., 1.05, 1.1, 1.2).
264 |
265 | Show that as $\gamma \to 1$, the optimal policy converges to the policy obtained with log utility ($\gamma = 1$).
266 |
267 | Hint: Use values of $\gamma$ close to 1 to ensure the endogenous grids have similar coverage and make visual comparison easier.
268 | ```
269 |
270 | ```{solution-start} cake_egm_jax_ex1
271 | :class: dropdown
272 | ```
273 |
274 | We need to create a version of the Coleman-Reffett operator and solver that work with CRRA utility.
275 |
276 | The key is to parameterize the utility functions by $\gamma$.
277 |
278 | ```{code-cell} python3
279 | def u_crra(c, γ):
280 | return (c**(1 - γ) - 1) / (1 - γ)
281 |
282 | def u_prime_crra(c, γ):
283 | return c**(-γ)
284 |
285 | def u_prime_inv_crra(x, γ):
286 | return x**(-1/γ)
287 | ```
288 |
289 | Now we create a version of the Coleman-Reffett operator that takes $\gamma$ as a parameter.
290 |
291 | ```{code-cell} python3
292 | def K_crra(
293 | c_in: jnp.ndarray, # Consumption values on the endogenous grid
294 | x_in: jnp.ndarray, # Current endogenous grid
295 | model: Model, # Model specification
296 | γ: float # CRRA parameter
297 | ):
298 | """
299 | The Coleman-Reffett operator using EGM with CRRA utility
300 | """
301 | # Simplify names
302 | β, α = model.β, model.α
303 | s_grid, shocks = model.s_grid, model.shocks
304 |
305 | # Linear interpolation of policy using endogenous grid
306 | σ = lambda x_val: jnp.interp(x_val, x_in, c_in)
307 |
308 | # Define function to compute consumption at a single grid point
309 | def compute_c(s):
310 | vals = u_prime_crra(σ(f(s, α) * shocks), γ) * f_prime(s, α) * shocks
311 | return u_prime_inv_crra(β * jnp.mean(vals), γ)
312 |
313 | # Vectorize over grid using vmap
314 | compute_c_vectorized = jax.vmap(compute_c)
315 | c_out = compute_c_vectorized(s_grid)
316 |
317 | # Determine corresponding endogenous grid
318 | x_out = s_grid + c_out # x_i = s_i + c_i
319 |
320 | return c_out, x_out
321 | ```
322 |
323 | We also need a solver that uses this operator.
324 |
325 | ```{code-cell} python3
326 | @jax.jit
327 | def solve_model_crra(model: Model,
328 | c_init: jnp.ndarray,
329 | x_init: jnp.ndarray,
330 | γ: float,
331 | tol: float = 1e-5,
332 | max_iter: int = 1000):
333 | """
334 | Solve the model using time iteration with EGM and CRRA utility.
335 | """
336 |
337 | def condition(loop_state):
338 | i, c, x, error = loop_state
339 | return (error > tol) & (i < max_iter)
340 |
341 | def body(loop_state):
342 | i, c, x, error = loop_state
343 | c_new, x_new = K_crra(c, x, model, γ)
344 | error = jnp.max(jnp.abs(c_new - c))
345 | return i + 1, c_new, x_new, error
346 |
347 | # Initialize loop state
348 | initial_state = (0, c_init, x_init, tol + 1)
349 |
350 | # Run the loop
351 | i, c, x, error = jax.lax.while_loop(condition, body, initial_state)
352 |
353 | return c, x
354 | ```
355 |
Now we solve for $\gamma = 1$ (log utility) and for values approaching 1 from above.

Although `u_crra` itself is undefined at $\gamma = 1$, the EGM iteration only uses marginal utility and its inverse, and both `u_prime_crra` and `u_prime_inv_crra` are well-defined there, reducing to the log-utility functions $1/c$ and $1/x$.
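
As a quick sanity check, we can confirm numerically that the CRRA functions reduce to their log-utility counterparts at $\gamma = 1$:

```{code-cell} python3
c_test = jnp.linspace(0.1, 5.0, 10)
print(jnp.allclose(u_prime_crra(c_test, 1.0), 1 / c_test))      # u'(c) = 1/c
print(jnp.allclose(u_prime_inv_crra(c_test, 1.0), 1 / c_test))  # inverse is 1/x
```

Now let's run the iteration for each $\gamma$: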
357 |
358 | ```{code-cell} python3
359 | γ_values = [1.0, 1.05, 1.1, 1.2]
360 | policies = {}
361 | endogenous_grids = {}
362 |
363 | model_crra = create_model()
364 |
365 | for γ in γ_values:
366 | c_init = jnp.copy(model_crra.s_grid)
367 | x_init = model_crra.s_grid + c_init
368 | c_gamma, x_gamma = solve_model_crra(model_crra, c_init, x_init, γ)
369 | jax.block_until_ready(c_gamma)
370 | policies[γ] = c_gamma
371 | endogenous_grids[γ] = x_gamma
372 | print(f"Solved for γ = {γ}")
373 | ```
374 |
375 | Plot the policies on their endogenous grids.
376 |
377 | ```{code-cell} python3
378 | fig, ax = plt.subplots()
379 |
380 | for γ in γ_values:
381 | x = endogenous_grids[γ]
382 | if γ == 1.0:
383 | ax.plot(x, policies[γ], 'k-', linewidth=2,
384 | label=f'γ = {γ:.2f} (log utility)', alpha=0.8)
385 | else:
386 | ax.plot(x, policies[γ], label=f'γ = {γ:.2f}', alpha=0.8)
387 |
388 | ax.set_xlabel('State x')
389 | ax.set_ylabel('Consumption σ(x)')
390 | ax.legend()
391 | ax.set_title('Optimal policies: CRRA utility approaching log case')
392 | plt.show()
393 | ```
394 |
395 | Note that the plots for $\gamma > 1$ do not cover the entire x-axis range shown.
396 |
397 | This is because the endogenous grid $x = s + \sigma(s)$ depends on the consumption policy, which varies with $\gamma$.
398 |
399 | Let's check the maximum deviation between the log utility case ($\gamma = 1.0$) and values approaching from above.
400 |
401 | ```{code-cell} python3
402 | for γ in [1.05, 1.1, 1.2]:
403 | max_diff = jnp.max(jnp.abs(policies[1.0] - policies[γ]))
404 | print(f"Max difference between γ=1.0 and γ={γ}: {max_diff:.6}")
405 | ```
406 |
407 | As expected, the differences decrease as $\gamma$ approaches 1 from above, confirming convergence.
408 |
409 | ```{solution-end}
410 | ```
411 |
--------------------------------------------------------------------------------
/lectures/qr_decomp.md:
--------------------------------------------------------------------------------
1 | ---
2 | jupytext:
3 | text_representation:
4 | extension: .md
5 | format_name: myst
6 | format_version: 0.13
7 | jupytext_version: 1.10.3
8 | kernelspec:
9 | display_name: Python 3
10 | language: python
11 | name: python3
12 | ---
13 |
14 | # QR Decomposition
15 |
16 | ## Overview
17 |
18 | This lecture describes the QR decomposition and how it relates to
19 |
20 | * Orthogonal projection and least squares
21 |
22 | * A Gram-Schmidt process
23 |
24 | * Eigenvalues and eigenvectors
25 |
26 |
We'll write some Python code to help consolidate our understanding.
28 |
29 | ## Matrix Factorization
30 |
The QR decomposition (also called the QR factorization) expresses a matrix as the product of an orthogonal matrix and a triangular matrix.
32 |
33 | A QR decomposition of a real matrix $A$
34 | takes the form
35 |
36 | $$
37 | A=QR
38 | $$
39 |
40 | where
41 |
42 | * $Q$ is an orthogonal matrix (so that $Q^TQ = I$)
43 |
44 | * $R$ is an upper triangular matrix
45 |
46 |
We'll use a **Gram-Schmidt process** to compute a QR decomposition.

Because doing so is so educational, we'll write our own Python code to do the job.
50 |
51 | ## Gram-Schmidt process
52 |
We'll start with a **square** matrix $A$.

If a square matrix $A$ is nonsingular, then its $QR$ factorization is unique once we require the diagonal entries of $R$ to be positive.

We'll deal with a rectangular matrix $A$ later; in fact, the algorithm below works unchanged when $A$ is rectangular.
60 |
61 | ### Gram-Schmidt process for square $A$
62 |
63 | Here we apply a Gram-Schmidt process to the **columns** of matrix $A$.
64 |
65 | In particular, let
66 |
67 | $$
68 | A= \left[ \begin{array}{c|c|c|c} a_1 & a_2 & \cdots & a_n \end{array} \right]
69 | $$
70 |
71 | Let $|| · ||$ denote the L2 norm.
72 |
73 | The Gram-Schmidt algorithm repeatedly combines the following two steps in a particular order
74 |
75 | * **normalize** a vector to have unit norm
76 |
77 | * **orthogonalize** the next vector
78 |
79 | To begin, we set $u_1 = a_1$ and then **normalize**:
80 |
81 | $$
82 | u_1=a_1, \ \ \ e_1=\frac{u_1}{||u_1||}
83 | $$
84 |
85 | We **orthogonalize** first to compute $u_2$ and then **normalize** to create $e_2$:
86 |
87 | $$
88 | u_2=a_2-(a_2· e_1)e_1, \ \ \ e_2=\frac{u_2}{||u_2||}
89 | $$
90 |
91 | We invite the reader to verify that $e_1$ is orthogonal to $e_2$ by checking that
92 | $e_1 \cdot e_2 = 0$.
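
One way to see this: since $e_1 \cdot e_1 = 1$,

$$
e_1 \cdot u_2 = e_1 \cdot a_2 - (a_2 \cdot e_1)(e_1 \cdot e_1) = e_1 \cdot a_2 - a_2 \cdot e_1 = 0
$$

and $e_2$ inherits the orthogonality because it is just $u_2$ rescaled.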
93 |
94 | The Gram-Schmidt procedure continues iterating.
95 |
96 | Thus, for $k= 2, \ldots, n-1$ we construct
97 |
98 | $$
99 | u_{k+1}=a_{k+1}-(a_{k+1}· e_1)e_1-\cdots-(a_{k+1}· e_k)e_k, \ \ \ e_{k+1}=\frac{u_{k+1}}{||u_{k+1}||}
100 | $$
101 |
102 |
103 | Here $(a_j \cdot e_i)$ can be interpreted as the linear least squares **regression coefficient** of $a_j$ on $e_i$
104 |
* it is the inner product of $a_j$ and $e_i$ divided by the inner product of $e_i$ with itself, where
$e_i \cdot e_i = 1$, as *normalization* has assured us.
107 |
108 | * this regression coefficient has an interpretation as being a **covariance** divided by a **variance**
109 |
110 |
111 | It can be verified that
112 |
113 | $$
114 | A= \left[ \begin{array}{c|c|c|c} a_1 & a_2 & \cdots & a_n \end{array} \right]=
115 | \left[ \begin{array}{c|c|c|c} e_1 & e_2 & \cdots & e_n \end{array} \right]
116 | \left[ \begin{matrix} a_1·e_1 & a_2·e_1 & \cdots & a_n·e_1\\ 0 & a_2·e_2 & \cdots & a_n·e_2
117 | \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_n·e_n \end{matrix} \right]
118 | $$
119 |
Thus, we have constructed the decomposition
121 |
122 | $$
123 | A = Q R
124 | $$
125 |
126 | where
127 |
$$
Q = \left[ \begin{array}{c|c|c|c} e_1 & e_2 & \cdots & e_n \end{array} \right]
$$
132 |
133 | and
134 |
135 | $$
136 | R = \left[ \begin{matrix} a_1·e_1 & a_2·e_1 & \cdots & a_n·e_1\\ 0 & a_2·e_2 & \cdots & a_n·e_2
137 | \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_n·e_n \end{matrix} \right]
138 | $$
139 |
140 | ### $A$ not square
141 |
142 | Now suppose that $A$ is an $n \times m$ matrix where $m > n$.
143 |
144 | Then a $QR$ decomposition is
145 |
146 | $$
147 | A= \left[ \begin{array}{c|c|c|c} a_1 & a_2 & \cdots & a_m \end{array} \right]=\left[ \begin{array}{c|c|c|c} e_1 & e_2 & \cdots & e_n \end{array} \right]
148 | \left[ \begin{matrix} a_1·e_1 & a_2·e_1 & \cdots & a_n·e_1 & a_{n+1}\cdot e_1 & \cdots & a_{m}\cdot e_1 \\
149 | 0 & a_2·e_2 & \cdots & a_n·e_2 & a_{n+1}\cdot e_2 & \cdots & a_{m}\cdot e_2 \\ \vdots & \vdots & \ddots & \quad \vdots & \vdots & \ddots & \vdots
150 | \\ 0 & 0 & \cdots & a_n·e_n & a_{n+1}\cdot e_n & \cdots & a_{m}\cdot e_n \end{matrix} \right]
151 | $$
152 |
153 | which implies that
154 |
155 | \begin{align*}
156 | a_1 & = (a_1\cdot e_1) e_1 \cr
157 | a_2 & = (a_2\cdot e_1) e_1 + (a_2\cdot e_2) e_2 \cr
158 | \vdots & \quad \vdots \cr
159 | a_n & = (a_n\cdot e_1) e_1 + (a_n\cdot e_2) e_2 + \cdots + (a_n \cdot e_n) e_n \cr
160 | a_{n+1} & = (a_{n+1}\cdot e_1) e_1 + (a_{n+1}\cdot e_2) e_2 + \cdots + (a_{n+1}\cdot e_n) e_n \cr
161 | \vdots & \quad \vdots \cr
162 | a_m & = (a_m\cdot e_1) e_1 + (a_m\cdot e_2) e_2 + \cdots + (a_m \cdot e_n) e_n \cr
163 | \end{align*}
164 |
165 | ## Some Code
166 |
167 | Now let's write some homemade Python code to implement a QR decomposition by deploying the Gram-Schmidt process described above.
168 |
169 | ```{code-cell} ipython3
170 | import numpy as np
171 | from scipy.linalg import qr
172 | ```
173 |
174 | ```{code-cell} ipython3
175 | def QR_Decomposition(A):
176 | n, m = A.shape # get the shape of A
177 |
178 | Q = np.empty((n, n)) # initialize matrix Q
179 | u = np.empty((n, n)) # initialize matrix u
180 |
181 | u[:, 0] = A[:, 0]
182 | Q[:, 0] = u[:, 0] / np.linalg.norm(u[:, 0])
183 |
184 | for i in range(1, n):
185 |
186 | u[:, i] = A[:, i]
187 | for j in range(i):
188 | u[:, i] -= (A[:, i] @ Q[:, j]) * Q[:, j] # get each u vector
189 |
        Q[:, i] = u[:, i] / np.linalg.norm(u[:, i]) # compute each e vector
191 |
192 | R = np.zeros((n, m))
193 | for i in range(n):
194 | for j in range(i, m):
195 | R[i, j] = A[:, j] @ Q[:, i]
196 |
197 | return Q, R
198 | ```
199 |
200 | The preceding code is fine but can benefit from some further housekeeping.
201 |
202 | We want to do this because later in this notebook we want to compare results from using our homemade code above with the code for a QR that the Python `scipy` package delivers.
203 |
There can be sign differences between the $Q$ and $R$ matrices produced by different numerical algorithms.

All of these are valid QR decompositions because the sign differences cancel out when we form the product $QR$.
207 |
208 | However, to make the results from our homemade function and the QR module in `scipy` comparable, let's require that $Q$ have positive diagonal entries.
209 |
210 | We do this by adjusting the signs of the columns in $Q$ and the rows in $R$ appropriately.
211 |
212 | To accomplish this we'll define a pair of functions.
213 |
214 | ```{code-cell} ipython3
215 | def diag_sign(A):
216 | "Compute the signs of the diagonal of matrix A"
217 |
218 | D = np.diag(np.sign(np.diag(A)))
219 |
220 | return D
221 |
222 | def adjust_sign(Q, R):
223 | """
224 | Adjust the signs of the columns in Q and rows in R to
225 | impose positive diagonal of Q
226 | """
227 |
228 | D = diag_sign(Q)
229 |
230 | Q[:, :] = Q @ D
231 | R[:, :] = D @ R
232 |
233 | return Q, R
234 | ```
235 |
236 | ## Example
237 |
238 | Now let's do an example.
239 |
240 | ```{code-cell} ipython3
241 | A = np.array([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
242 | # A = np.array([[1.0, 0.5, 0.2], [0.5, 0.5, 1.0], [0.0, 1.0, 1.0]])
243 | # A = np.array([[1.0, 0.5, 0.2], [0.5, 0.5, 1.0]])
244 |
245 | A
246 | ```
247 |
248 | ```{code-cell} ipython3
249 | Q, R = adjust_sign(*QR_Decomposition(A))
250 | ```
251 |
252 | ```{code-cell} ipython3
253 | Q
254 | ```
255 |
256 | ```{code-cell} ipython3
257 | R
258 | ```
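
As a quick check that this is indeed a QR decomposition, we can verify that $Q$ is orthogonal and that the product $QR$ recovers $A$:

```{code-cell} ipython3
# Q should be orthogonal and QR should reproduce A
print(np.allclose(Q.T @ Q, np.identity(3)))
print(np.allclose(Q @ R, A))
```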
259 |
260 | Let's compare outcomes with what the `scipy` package produces
261 |
262 | ```{code-cell} ipython3
263 | Q_scipy, R_scipy = adjust_sign(*qr(A))
264 | ```
265 |
266 | ```{code-cell} ipython3
267 | print('Our Q: \n', Q)
268 | print('\n')
269 | print('Scipy Q: \n', Q_scipy)
270 | ```
271 |
272 | ```{code-cell} ipython3
273 | print('Our R: \n', R)
274 | print('\n')
275 | print('Scipy R: \n', R_scipy)
276 | ```
277 |
278 | The above outcomes give us the good news that our homemade function agrees with what
279 | scipy produces.
280 |
281 |
282 | Now let's do a QR decomposition for a rectangular matrix $A$ that is $n \times m$ with
283 | $m > n$.
284 |
285 | ```{code-cell} ipython3
286 | A = np.array([[1, 3, 4], [2, 0, 9]])
287 | ```
288 |
289 | ```{code-cell} ipython3
290 | Q, R = adjust_sign(*QR_Decomposition(A))
291 | Q, R
292 | ```
293 |
294 | ```{code-cell} ipython3
295 | Q_scipy, R_scipy = adjust_sign(*qr(A))
296 | Q_scipy, R_scipy
297 | ```
298 |
299 | ## Using QR Decomposition to Compute Eigenvalues
300 |
301 | Now for a useful fact about the QR algorithm.
302 |
303 | The following iterations on the QR decomposition can be used to compute **eigenvalues**
304 | of a **square** matrix $A$.
305 |
306 | Here is the algorithm:
307 |
308 | 1. Set $A_0 = A$ and form $A_0 = Q_0 R_0$
309 |
2. Form $A_1 = R_0 Q_0$. Note that $A_1 = Q_0^{-1} A_0 Q_0 = Q_0^T A_0 Q_0$, so $A_1$ is similar to $A_0$ and has the same eigenvalues.
311 |
312 | 3. Form $A_1 = Q_1 R_1$ (i.e., form the $QR$ decomposition of $A_1$).
313 |
314 | 4. Form $ A_2 = R_1 Q_1 $ and then $A_2 = Q_2 R_2$ .
315 |
316 | 5. Iterate to convergence.
317 |
318 | 6. Compute eigenvalues of $A$ and compare them to the diagonal values of the limiting $A_n$ found from this process.
319 |
320 | ```{todo}
321 | @mmcky to migrate this to use [sphinx-proof](https://sphinx-proof.readthedocs.io/en/latest/syntax.html#algorithms)
322 | ```
323 |
324 | **Remark:** this algorithm is close to one of the most efficient ways of computing eigenvalues!
325 |
326 | Let's write some Python code to try out the algorithm
327 |
328 | ```{code-cell} ipython3
329 | def QR_eigvals(A, tol=1e-12, maxiter=1000):
330 | "Find the eigenvalues of A using QR decomposition."
331 |
332 | A_old = np.copy(A)
333 | A_new = np.copy(A)
334 |
335 | diff = np.inf
336 | i = 0
337 | while (diff > tol) and (i < maxiter):
338 | A_old[:, :] = A_new
339 | Q, R = QR_Decomposition(A_old)
340 |
341 | A_new[:, :] = R @ Q
342 |
343 | diff = np.abs(A_new - A_old).max()
344 | i += 1
345 |
346 | eigvals = np.diag(A_new)
347 |
348 | return eigvals
349 | ```
350 |
Now let's try the code and compare the results with what `np.linalg.eigvals` gives us
352 |
353 | Here goes
354 |
355 | ```{code-cell} ipython3
# experiment with a random A matrix
357 | A = np.random.random((3, 3))
358 | ```
359 |
360 | ```{code-cell} ipython3
361 | sorted(QR_eigvals(A))
362 | ```
363 |
Compare with the eigenvalues computed by `numpy`.
365 |
366 | ```{code-cell} ipython3
367 | sorted(np.linalg.eigvals(A))
368 | ```
369 |
370 | ## $QR$ and PCA
371 |
372 | There are interesting connections between the $QR$ decomposition and principal components analysis (PCA).
373 |
374 | Here are some.
375 |
1. Let $X'$ be a $k \times n$ random matrix where the $j$th column is a random draw
from ${\mathcal N}(\mu, \Sigma)$, where $\mu$ is a $k \times 1$ vector of means and $\Sigma$ is a $k \times k$
covariance matrix. We want $n \gg k$ -- this is an "econometrics example".
379 |
380 | 2. Form $X' = Q R $ where $Q $ is $k \times k$ and $R$ is $k \times n$.
381 |
382 | 3. Form the eigenvalues of $ R R'$, i.e., we'll compute $R R' = \tilde P \Lambda \tilde P' $.
383 |
384 | 4. Form $X' X = Q \tilde P \Lambda \tilde P' Q'$ and compare it with the eigen decomposition
385 | $ X'X = P \hat \Lambda P'$.
386 |
5. It will turn out that $\Lambda = \hat \Lambda$ and that $P = Q \tilde P$.
388 |
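Here is a short sketch of the algebra behind conjecture 5.

Substituting $X' = QR$ and $RR' = \tilde P \Lambda \tilde P'$ into $X'X$ gives

$$
X' X = Q R R' Q' = Q \tilde P \Lambda \tilde P' Q' = (Q \tilde P) \Lambda (Q \tilde P)'
$$

Since $Q \tilde P$ is orthogonal (a product of orthogonal matrices), the right-hand side is itself an eigendecomposition of $X'X$, so $\hat \Lambda = \Lambda$ and $P = Q \tilde P$, up to the signs and ordering of the eigenvectors.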
389 |
390 | Let's verify conjecture 5 with some Python code.
391 |
392 | Start by simulating a random $\left(n, k\right)$ matrix $X$.
393 |
394 | ```{code-cell} ipython3
395 | k = 5
396 | n = 1000
397 |
398 | # generate some random moments
399 | 𝜇 = np.random.random(size=k)
400 | C = np.random.random((k, k))
401 | Σ = C.T @ C
402 | ```
403 |
404 | ```{code-cell} ipython3
405 | # X is random matrix where each column follows multivariate normal dist.
406 | X = np.random.multivariate_normal(𝜇, Σ, size=n)
407 | ```
408 |
409 | ```{code-cell} ipython3
410 | X.shape
411 | ```
412 |
413 | Let's apply the QR decomposition to $X^{\prime}$.
414 |
415 | ```{code-cell} ipython3
416 | Q, R = adjust_sign(*QR_Decomposition(X.T))
417 | ```
418 |
419 | Check the shapes of $Q$ and $R$.
420 |
421 | ```{code-cell} ipython3
422 | Q.shape, R.shape
423 | ```
424 |
425 | Now we can construct $R R^{\prime}=\tilde{P} \Lambda \tilde{P}^{\prime}$ and form an eigen decomposition.
426 |
427 | ```{code-cell} ipython3
428 | RR = R @ R.T
429 |
430 | 𝜆, P_tilde = np.linalg.eigh(RR)
431 | Λ = np.diag(𝜆)
432 | ```
433 |
434 | We can also apply the decomposition to $X^{\prime} X=P \hat{\Lambda} P^{\prime}$.
435 |
436 | ```{code-cell} ipython3
437 | XX = X.T @ X
438 |
439 | 𝜆_hat, P = np.linalg.eigh(XX)
440 | Λ_hat = np.diag(𝜆_hat)
441 | ```
442 |
443 | Compare the eigenvalues that are on the diagonals of $\Lambda$ and $\hat{\Lambda}$.
444 |
445 | ```{code-cell} ipython3
446 | 𝜆, 𝜆_hat
447 | ```
448 |
449 | Let's compare $P$ and $Q \tilde{P}$.
450 |
451 | Again we need to be careful about sign differences between the columns of $P$ and $Q\tilde{P}$.
452 |
453 | ```{code-cell} ipython3
454 | QP_tilde = Q @ P_tilde
455 |
456 | np.abs(P @ diag_sign(P) - QP_tilde @ diag_sign(QP_tilde)).max()
457 | ```
458 |
459 | Let's verify that $X^{\prime}X$ can be decomposed as $Q \tilde{P} \Lambda \tilde{P}^{\prime} Q^{\prime}$.
460 |
461 | ```{code-cell} ipython3
462 | QPΛPQ = Q @ P_tilde @ Λ @ P_tilde.T @ Q.T
463 | ```
464 |
465 | ```{code-cell} ipython3
466 | np.abs(QPΛPQ - XX).max()
467 | ```
--------------------------------------------------------------------------------
/lectures/ar1_bayes.md:
--------------------------------------------------------------------------------
1 | ---
2 | jupytext:
3 | text_representation:
4 | extension: .myst
5 | format_name: myst
6 | format_version: 0.13
7 | jupytext_version: 1.13.8
8 | kernelspec:
9 | display_name: Python 3
10 | language: python
11 | name: python3
12 | ---
13 |
14 | # Posterior Distributions for AR(1) Parameters
15 |
16 | ```{include} _admonition/gpu.md
17 | ```
18 |
19 | ```{code-cell} ipython3
20 | :tags: [hide-output]
21 |
22 | !pip install numpyro jax
23 | ```
24 |
25 | In addition to what's included in base Anaconda, we need to install the following packages
26 |
27 | ```{code-cell} ipython3
28 | :tags: [hide-output]
29 |
30 | !pip install arviz pymc
31 | ```
32 |
33 | We'll begin with some Python imports.
34 |
35 | ```{code-cell} ipython3
36 |
37 | import arviz as az
38 | import pymc as pmc
39 | import numpyro
40 | from numpyro import distributions as dist
41 |
42 | import numpy as np
43 | import jax.numpy as jnp
44 | from jax import random
45 | import matplotlib.pyplot as plt
46 |
47 | import logging
48 | logging.basicConfig()
49 | logger = logging.getLogger('pymc')
50 | logger.setLevel(logging.CRITICAL)
51 |
52 | ```
53 |
54 | This lecture uses Bayesian methods offered by [pymc](https://www.pymc.io/projects/docs/en/stable/) and [numpyro](https://num.pyro.ai/en/stable/) to make statistical inferences about two parameters of a univariate first-order autoregression.
55 |
56 |
57 | The model is a good laboratory for illustrating
58 | consequences of alternative ways of modeling the distribution of the initial $y_0$:
59 |
60 | - As a fixed number
61 |
62 | - As a random variable drawn from the stationary distribution of the $\{y_t\}$ stochastic process
63 |
64 |
65 | The first component of the statistical model is
66 |
67 | $$
68 | y_{t+1} = \rho y_t + \sigma_x \epsilon_{t+1}, \quad t \geq 0
69 | $$ (eq:themodel)
70 |
71 | where the scalars $\rho$ and $\sigma_x$ satisfy $|\rho| < 1$ and $\sigma_x > 0$;
72 | $\{\epsilon_{t+1}\}$ is a sequence of i.i.d. normal random variables with mean $0$ and variance $1$.
73 |
74 | The second component of the statistical model is
75 |
76 | $$
77 | y_0 \sim {\cal N}(\mu_0, \sigma_0^2)
78 | $$ (eq:themodel_2)
79 |
80 |
81 |
82 | Consider a sample $\{y_t\}_{t=0}^T$ governed by this statistical model.
83 |
84 | The model
85 | implies that the likelihood function of $\{y_t\}_{t=0}^T$ can be **factored**:
86 |
87 | $$
88 | f(y_T, y_{T-1}, \ldots, y_0) = f(y_T| y_{T-1}) f(y_{T-1}| y_{T-2}) \cdots f(y_1 | y_0 ) f(y_0)
89 | $$
90 |
91 | where we use $f$ to denote a generic probability density.
92 |
93 | The statistical model {eq}`eq:themodel`-{eq}`eq:themodel_2` implies
94 |
95 | $$
96 | \begin{aligned}
97 | f(y_t | y_{t-1}) & \sim {\mathcal N}(\rho y_{t-1}, \sigma_x^2) \\
98 | f(y_0) & \sim {\mathcal N}(\mu_0, \sigma_0^2)
99 | \end{aligned}
100 | $$
101 |
102 | We want to study how inferences about the unknown parameters $(\rho, \sigma_x)$ depend on what is assumed about the parameters $\mu_0, \sigma_0$ of the distribution of $y_0$.
103 |
104 | Below, we study two widely used alternative assumptions:
105 |
106 | - $(\mu_0,\sigma_0) = (y_0, 0)$ which means that $y_0$ is drawn from the distribution ${\mathcal N}(y_0, 0)$; in effect, we are **conditioning on an observed initial value**.
107 |
108 | - $\mu_0,\sigma_0$ are functions of $\rho, \sigma_x$ because $y_0$ is drawn from the stationary distribution implied by $\rho, \sigma_x$.
109 |
110 |
111 |
112 | **Note:** We do **not** treat a third possible case in which $\mu_0,\sigma_0$ are free parameters to be estimated.
113 |
114 | Unknown parameters are $\rho, \sigma_x$.
115 |
116 | We have independent **prior probability distributions** for $\rho, \sigma_x$ and want to compute a posterior probability distribution after observing a sample $\{y_{t}\}_{t=0}^T$.
117 |
The notebook uses `pymc` and `numpyro` to compute a posterior distribution of $\rho, \sigma_x$. We will use NUTS samplers to generate samples from the posterior in a chain. Both of these libraries support NUTS samplers.

NUTS is a form of Markov chain Monte Carlo (MCMC) algorithm that bypasses random-walk behaviour and converges to a target distribution more quickly. This not only has the advantage of speed, but also allows complex models to be fitted without requiring specialised knowledge of the theory underlying the fitting methods.
121 |
122 | Thus, we explore consequences of making these alternative assumptions about the distribution of $y_0$:
123 |
124 | - A first procedure is to condition on whatever value of $y_0$ is observed. This amounts to assuming that the probability distribution of the random variable $y_0$ is a Dirac delta function that puts probability one on the observed value of $y_0$.
125 |
- A second procedure assumes that $y_0$ is drawn from the stationary distribution of the process described by {eq}`eq:themodel`,
so that $y_0 \sim {\cal N} \left(0, {\sigma_x^2 \over 1-\rho^2} \right)$, as derived just below
128 |
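To see where this stationary variance comes from, take variances of both sides of {eq}`eq:themodel` and impose stationarity, $\mathrm{Var}(y_{t+1}) = \mathrm{Var}(y_t) = \sigma_y^2$:

$$
\sigma_y^2 = \rho^2 \sigma_y^2 + \sigma_x^2
\quad \Longrightarrow \quad
\sigma_y^2 = \frac{\sigma_x^2}{1-\rho^2}
$$
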
129 | When the initial value $y_0$ is far out in a tail of the stationary distribution, conditioning on an initial value gives a posterior that is **more accurate** in a sense that we'll explain.
130 |
131 | Basically, when $y_0$ happens to be in a tail of the stationary distribution and we **don't condition on $y_0$**, the likelihood function for $\{y_t\}_{t=0}^T$ adjusts the posterior distribution of the parameter pair $\rho, \sigma_x $ to make the observed value of $y_0$ more likely than it really is under the stationary distribution, thereby adversely twisting the posterior in short samples.
132 |
133 | An example below shows how not conditioning on $y_0$ adversely shifts the posterior probability distribution of $\rho$ toward larger values.
134 |
135 |
136 | We begin by solving a **direct problem** that simulates an AR(1) process.
137 |
138 | How we select the initial value $y_0$ matters.
139 |
140 | * If we think $y_0$ is drawn from the stationary distribution ${\mathcal N}(0, \frac{\sigma_x^{2}}{1-\rho^2})$, then it is a good idea to use this distribution as $f(y_0)$. Why? Because $y_0$ contains information about $\rho, \sigma_x$.
141 |
* If we suspect that $y_0$ is far in the tails of the stationary distribution -- so that variation in early observations in the sample has a significant **transient component** -- it is better to condition on $y_0$ by setting $f(y_0) = 1$.
143 |
144 |
145 | To illustrate the issue, we'll begin by choosing an initial $y_0$ that is far out in a tail of the stationary distribution.
146 |
147 | ```{code-cell} ipython3
148 |
149 | def ar1_simulate(rho, sigma, y0, T):
150 |
151 | # Allocate space and draw epsilons
152 | y = np.empty(T)
153 | eps = np.random.normal(0.,sigma,T)
154 |
155 | # Initial condition and step forward
156 | y[0] = y0
157 | for t in range(1, T):
158 | y[t] = rho*y[t-1] + eps[t]
159 |
160 | return y
161 |
162 | sigma = 1.
163 | rho = 0.5
164 | T = 50
165 |
166 | np.random.seed(145353452)
167 | y = ar1_simulate(rho, sigma, 10, T)
168 | ```
169 |
170 | ```{code-cell} ipython3
171 | plt.plot(y)
172 | plt.tight_layout()
173 | ```
174 |
175 | Now we shall use Bayes' law to construct a posterior distribution, conditioning on the initial value of $y_0$.
176 |
177 | (Later we'll assume that $y_0$ is drawn from the stationary distribution, but not now.)
178 |
First we'll use **pymc**.
180 |
181 | ## PyMC Implementation
182 |
183 | For a normal distribution in `pymc`,
184 | $var = 1/\tau = \sigma^{2}$.
185 |
186 | ```{code-cell} ipython3
187 |
188 | AR1_model = pmc.Model()
189 |
190 | with AR1_model:
191 |
192 | # Start with priors
193 | rho = pmc.Uniform('rho', lower=-1., upper=1.) # Assume stable rho
194 | sigma = pmc.HalfNormal('sigma', sigma = np.sqrt(10))
195 |
196 | # Expected value of y at the next period (rho * y)
197 | yhat = rho * y[:-1]
198 |
199 | # Likelihood of the actual realization
200 | y_like = pmc.Normal('y_obs', mu=yhat, sigma=sigma, observed=y[1:])
201 | ```
202 |
203 | [pmc.sample](https://www.pymc.io/projects/docs/en/v5.10.0/api/generated/pymc.sample.html#pymc-sample) by default uses the NUTS samplers to generate samples as shown in the below cell:
204 |
205 | ```{code-cell} ipython3
:tags: [hide-output]
207 |
208 | with AR1_model:
209 | trace = pmc.sample(50000, tune=10000, return_inferencedata=True)
210 | ```
211 |
212 | ```{code-cell} ipython3
213 | with AR1_model:
214 | az.plot_trace(trace, figsize=(17,6))
215 | ```
216 |
217 | Evidently, the posteriors aren't centered on the true values of $.5, 1$ that we used to generate the data.
218 |
219 | This is a symptom of the classic **Hurwicz bias** for first order autoregressive processes (see Leonid Hurwicz {cite}`hurwicz1950least`.)
220 |
221 | The Hurwicz bias is worse the smaller is the sample (see {cite}`Orcutt_Winokur_69`).
222 |
223 |
224 | Be that as it may, here is more information about the posterior.
225 |
226 | ```{code-cell} ipython3
227 | with AR1_model:
228 | summary = az.summary(trace, round_to=4)
229 |
230 | summary
231 | ```
232 |
233 | Now we shall compute a posterior distribution after seeing the same data but instead assuming that $y_0$ is drawn from the stationary distribution.
234 |
235 | This means that
236 |
237 | $$
238 | y_0 \sim N \left(0, \frac{\sigma_x^{2}}{1 - \rho^{2}} \right)
239 | $$
240 |
241 | We alter the code as follows:
242 |
243 | ```{code-cell} ipython3
244 | AR1_model_y0 = pmc.Model()
245 |
246 | with AR1_model_y0:
247 |
248 | # Start with priors
249 | rho = pmc.Uniform('rho', lower=-1., upper=1.) # Assume stable rho
250 | sigma = pmc.HalfNormal('sigma', sigma=np.sqrt(10))
251 |
252 | # Standard deviation of ergodic y
253 | y_sd = sigma / np.sqrt(1 - rho**2)
254 |
255 | # yhat
256 | yhat = rho * y[:-1]
257 | y_data = pmc.Normal('y_obs', mu=yhat, sigma=sigma, observed=y[1:])
258 | y0_data = pmc.Normal('y0_obs', mu=0., sigma=y_sd, observed=y[0])
259 | ```
260 |
261 | ```{code-cell} ipython3
:tags: [hide-output]
263 |
264 | with AR1_model_y0:
265 | trace_y0 = pmc.sample(50000, tune=10000, return_inferencedata=True)
266 |
267 | # Grey vertical lines are the cases of divergence
268 | ```
269 |
270 | ```{code-cell} ipython3
271 | with AR1_model_y0:
272 | az.plot_trace(trace_y0, figsize=(17,6))
273 | ```
274 |
275 | ```{code-cell} ipython3
with AR1_model_y0:
277 | summary_y0 = az.summary(trace_y0, round_to=4)
278 |
279 | summary_y0
280 | ```
281 |
Notice how the posterior for $\rho$ has shifted to the right, relative to the posterior we computed earlier when we conditioned on $y_0$ rather than assuming that $y_0$ was drawn from the stationary distribution.
283 |
284 | Think about why this happens.
285 |
286 | ```{hint}
287 | It is connected to how Bayes Law (conditional probability) solves an **inverse problem** by putting high probability on parameter values
288 | that make observations more likely.
289 | ```
290 |
291 | We'll return to this issue after we use `numpyro` to compute posteriors under our two alternative assumptions about the distribution of $y_0$.
292 |
293 | We'll now repeat the calculations using `numpyro`.
294 |
295 | ## Numpyro Implementation
296 |
297 | ```{code-cell} ipython3
298 |
299 |
300 | def plot_posterior(sample):
301 | """
302 | Plot trace and histogram
303 | """
304 | # To np array
305 | rhos = sample['rho']
306 | sigmas = sample['sigma']
307 | rhos, sigmas, = np.array(rhos), np.array(sigmas)
308 |
309 | fig, axs = plt.subplots(2, 2, figsize=(17, 6))
310 | # Plot trace
311 | axs[0, 0].plot(rhos) # rho
312 | axs[1, 0].plot(sigmas) # sigma
313 |
314 | # Plot posterior
315 | axs[0, 1].hist(rhos, bins=50, density=True, alpha=0.7)
316 | axs[0, 1].set_xlim([0, 1])
317 | axs[1, 1].hist(sigmas, bins=50, density=True, alpha=0.7)
318 |
319 | axs[0, 0].set_title("rho")
320 | axs[0, 1].set_title("rho")
321 | axs[1, 0].set_title("sigma")
322 | axs[1, 1].set_title("sigma")
323 | plt.show()
324 | ```
325 |
326 | ```{code-cell} ipython3
327 | def AR1_model(data):
328 | # set prior
329 | rho = numpyro.sample('rho', dist.Uniform(low=-1., high=1.))
330 | sigma = numpyro.sample('sigma', dist.HalfNormal(scale=np.sqrt(10)))
331 |
332 | # Expected value of y at the next period (rho * y)
333 | yhat = rho * data[:-1]
334 |
335 | # Likelihood of the actual realization.
336 | y_data = numpyro.sample('y_obs', dist.Normal(loc=yhat, scale=sigma), obs=data[1:])
337 |
338 | ```
339 |
340 | ```{code-cell} ipython3
:tags: [hide-output]
342 |
343 | # Make jnp array
344 | y = jnp.array(y)
345 |
# Set NUTS kernel
347 | NUTS_kernel = numpyro.infer.NUTS(AR1_model)
348 |
349 | # Run MCMC
350 | mcmc = numpyro.infer.MCMC(NUTS_kernel, num_samples=50000, num_warmup=10000, progress_bar=False)
351 | mcmc.run(rng_key=random.PRNGKey(1), data=y)
352 | ```
353 |
354 | ```{code-cell} ipython3
355 | plot_posterior(mcmc.get_samples())
356 | ```
357 |
358 | ```{code-cell} ipython3
359 | mcmc.print_summary()
360 | ```
361 |
362 | Next, we again compute the posterior under the assumption that $y_0$ is drawn from the stationary distribution, so that
363 |
364 | $$
365 | y_0 \sim N \left(0, \frac{\sigma_x^{2}}{1 - \rho^{2}} \right)
366 | $$
367 |
368 | Here's the new code to achieve this.
369 |
370 | ```{code-cell} ipython3
371 | def AR1_model_y0(data):
372 | # Set prior
373 | rho = numpyro.sample('rho', dist.Uniform(low=-1., high=1.))
374 | sigma = numpyro.sample('sigma', dist.HalfNormal(scale=np.sqrt(10)))
375 |
376 | # Standard deviation of ergodic y
377 | y_sd = sigma / jnp.sqrt(1 - rho**2)
378 |
379 | # Expected value of y at the next period (rho * y)
380 | yhat = rho * data[:-1]
381 |
382 | # Likelihood of the actual realization.
383 | y_data = numpyro.sample('y_obs', dist.Normal(loc=yhat, scale=sigma), obs=data[1:])
384 | y0_data = numpyro.sample('y0_obs', dist.Normal(loc=0., scale=y_sd), obs=data[0])
385 | ```
386 |
387 | ```{code-cell} ipython3
:tags: [hide-output]
389 |
390 | # Make jnp array
391 | y = jnp.array(y)
392 |
# Set NUTS kernel
394 | NUTS_kernel = numpyro.infer.NUTS(AR1_model_y0)
395 |
396 | # Run MCMC
397 | mcmc2 = numpyro.infer.MCMC(NUTS_kernel, num_samples=50000, num_warmup=10000, progress_bar=False)
398 | mcmc2.run(rng_key=random.PRNGKey(1), data=y)
399 | ```
400 |
401 | ```{code-cell} ipython3
402 | plot_posterior(mcmc2.get_samples())
403 | ```
404 |
405 | ```{code-cell} ipython3
406 | mcmc2.print_summary()
407 | ```
408 |
409 | Look what happened to the posterior!
410 |
411 | It has moved far from the true values of the parameters used to generate the data because of how Bayes' Law (i.e., conditional probability)
412 | is telling `numpyro` to explain what it interprets as "explosive" observations early in the sample.
413 |
414 | Bayes' Law is able to generate a plausible likelihood for the first observation by driving $\rho \rightarrow 1$ and $\sigma \uparrow$ in order to raise the variance of the stationary distribution.
415 |
416 | Our example illustrates the importance of what you assume about the distribution of initial conditions.
417 |
--------------------------------------------------------------------------------
/lectures/ifp_opi.md:
--------------------------------------------------------------------------------
1 | ---
2 | jupytext:
3 | text_representation:
4 | extension: .md
5 | format_name: myst
6 | format_version: 0.13
7 | jupytext_version: 1.16.1
8 | kernelspec:
9 | display_name: Python 3 (ipykernel)
10 | language: python
11 | name: python3
12 | ---
13 |
14 | # The Income Fluctuation Problem II: Optimistic Policy Iteration
15 |
16 | ```{include} _admonition/gpu.md
17 | ```
18 |
19 | ## Overview
20 |
21 | In {doc}`ifp_discrete` we studied the income fluctuation problem and solved it
22 | using value function iteration (VFI).
23 |
24 | In this lecture we'll solve the same problem using **optimistic policy
25 | iteration** (OPI), which is very general, typically faster than VFI and only
26 | slightly more complex.
27 |
28 | OPI combines elements of both value function iteration and policy iteration.
29 |
30 | A detailed discussion of the algorithm can be found in [DP1](https://dp.quantecon.org).
31 |
32 | Here our aim is to implement OPI and test whether or not it yields significant
33 | speed improvements over standard VFI for the income fluctuation problem.
34 |
35 | In addition to Anaconda, this lecture will need the following libraries:
36 |
37 | ```{code-cell} ipython3
38 | :tags: [hide-output]
39 |
40 | !pip install quantecon jax
41 | ```
42 |
43 | We will use the following imports:
44 |
45 | ```{code-cell} ipython3
46 | import quantecon as qe
47 | import jax
48 | import jax.numpy as jnp
49 | import matplotlib.pyplot as plt
50 | from typing import NamedTuple
51 | from time import time
52 | ```
53 |
54 |
55 |
56 | ## Model and Primitives
57 |
58 | The model and parameters are the same as in {doc}`ifp_discrete`.
59 |
60 | We repeat the key elements here for convenience.
61 |
62 | The household's problem is to maximize
63 |
64 | $$
65 | \mathbb{E} \, \sum_{t=0}^{\infty} \beta^t u(c_t)
66 | $$
67 |
68 | subject to
69 |
70 | $$
71 | a_{t+1} + c_t \leq R a_t + y_t
72 | $$
73 |
74 | where $u(c) = c^{1-\gamma}/(1-\gamma)$.
75 |
76 | Here's the model structure:
77 |
78 | ```{code-cell} ipython3
79 | class Model(NamedTuple):
80 | β: float # Discount factor
81 | R: float # Gross interest rate
82 | γ: float # CRRA parameter
83 | a_grid: jnp.ndarray # Asset grid
84 | y_grid: jnp.ndarray # Income grid
85 | Q: jnp.ndarray # Markov matrix for income
86 |
87 |
88 | def create_consumption_model(
89 | R=1.01, # Gross interest rate
90 | β=0.98, # Discount factor
91 | γ=2, # CRRA parameter
92 | a_min=0.01, # Min assets
93 | a_max=10.0, # Max assets
94 | a_size=150, # Grid size
95 | ρ=0.9, ν=0.1, y_size=100 # Income parameters
96 | ):
97 | """
98 | Creates an instance of the consumption-savings model.
99 |
100 | """
101 | a_grid = jnp.linspace(a_min, a_max, a_size)
102 | mc = qe.tauchen(n=y_size, rho=ρ, sigma=ν)
103 | y_grid, Q = jnp.exp(mc.state_values), jax.device_put(mc.P)
104 | return Model(β, R, γ, a_grid, y_grid, Q)
105 | ```
106 |
107 | ## Operators and Policies
108 |
109 | We repeat some functions from {doc}`ifp_discrete`.
110 |
111 | Here is the right hand side of the Bellman equation:
112 |
113 | ```{code-cell} ipython3
114 | def B(v, model, i, j, ip):
115 | """
116 | The right-hand side of the Bellman equation before maximization, which takes
117 | the form
118 |
119 | B(a, y, a′) = u(Ra + y - a′) + β Σ_y′ v(a′, y′) Q(y, y′)
120 |
121 | The indices are (i, j, ip) -> (a, y, a′).
122 | """
123 | β, R, γ, a_grid, y_grid, Q = model
124 | a, y, ap = a_grid[i], y_grid[j], a_grid[ip]
125 | c = R * a + y - ap
126 | EV = jnp.sum(v[ip, :] * Q[j, :])
127 | return jnp.where(c > 0, c**(1-γ)/(1-γ) + β * EV, -jnp.inf)
128 | ```
129 |
130 | Now we successively apply `vmap` to vectorize over all indices:
131 |
132 | ```{code-cell} ipython3
133 | B_1 = jax.vmap(B, in_axes=(None, None, None, None, 0))
134 | B_2 = jax.vmap(B_1, in_axes=(None, None, None, 0, None))
135 | B_vmap = jax.vmap(B_2, in_axes=(None, None, 0, None, None))
136 | ```
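
To make the index order concrete, here is a quick shape check on a deliberately tiny model (the small grid sizes are an assumption for illustration only); the output axes are ordered $(a, y, a')$:

```{code-cell} ipython3
# Expect shape (a_size, y_size, a_size) = (5, 3, 5)
test_model = create_consumption_model(a_size=5, y_size=3)
v_test = jnp.zeros((5, 3))
a_idx, y_idx = jnp.arange(5), jnp.arange(3)
print(B_vmap(v_test, test_model, a_idx, y_idx, a_idx).shape)
```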
137 |
138 | Here's the Bellman operator:
139 |
140 | ```{code-cell} ipython3
141 | def T(v, model):
142 | "The Bellman operator."
143 | a_indices = jnp.arange(len(model.a_grid))
144 | y_indices = jnp.arange(len(model.y_grid))
145 | B_values = B_vmap(v, model, a_indices, y_indices, a_indices)
146 | return jnp.max(B_values, axis=-1)
147 | ```
148 |
149 | Here's the function that computes a $v$-greedy policy:
150 |
151 | ```{code-cell} ipython3
152 | def get_greedy(v, model):
153 | "Computes a v-greedy policy, returned as a set of indices."
154 | a_indices = jnp.arange(len(model.a_grid))
155 | y_indices = jnp.arange(len(model.y_grid))
156 | B_values = B_vmap(v, model, a_indices, y_indices, a_indices)
157 | return jnp.argmax(B_values, axis=-1)
158 | ```
159 |
160 | Now we define the policy operator $T_\sigma$, which is the Bellman operator with
161 | policy $\sigma$ fixed.
162 |
163 | For a given policy $\sigma$, the policy operator is defined by
164 |
165 | $$
166 | (T_\sigma v)(a, y) = u(Ra + y - \sigma(a, y)) + \beta \sum_{y'} v(\sigma(a, y), y') Q(y, y')
167 | $$
168 |
169 | ```{code-cell} ipython3
170 | def T_σ(v, σ, model, i, j):
171 | """
172 | The σ-policy operator for indices (i, j) -> (a, y).
173 | """
174 | β, R, γ, a_grid, y_grid, Q = model
175 |
176 | # Get values at current state
177 | a, y = a_grid[i], y_grid[j]
178 | # Get policy choice
179 | ap = a_grid[σ[i, j]]
180 |
181 | # Compute current reward
182 | c = R * a + y - ap
183 | r = jnp.where(c > 0, c**(1-γ)/(1-γ), -jnp.inf)
184 |
185 | # Compute expected value
186 | EV = jnp.sum(v[σ[i, j], :] * Q[j, :])
187 |
188 | return r + β * EV
189 | ```
190 |
191 | Apply vmap to vectorize:
192 |
193 | ```{code-cell} ipython3
194 | T_σ_1 = jax.vmap(T_σ, in_axes=(None, None, None, None, 0))
195 | T_σ_vmap = jax.vmap(T_σ_1, in_axes=(None, None, None, 0, None))
196 |
197 | def T_σ_vec(v, σ, model):
198 | """Vectorized version of T_σ."""
199 | a_size, y_size = len(model.a_grid), len(model.y_grid)
200 | a_indices = jnp.arange(a_size)
201 | y_indices = jnp.arange(y_size)
202 | return T_σ_vmap(v, σ, model, a_indices, y_indices)
203 | ```
204 |
Now we need a function that applies the policy operator $m$ times:
206 |
207 | ```{code-cell} ipython3
208 | def iterate_policy_operator(σ, v, m, model):
209 | """
210 | Apply the policy operator T_σ exactly m times to v.
211 | """
212 | def update(i, v):
213 | return T_σ_vec(v, σ, model)
214 |
215 | v = jax.lax.fori_loop(0, m, update, v)
216 | return v
217 | ```
218 |
219 | ## Value Function Iteration
220 |
221 | For comparison, here's VFI from {doc}`ifp_discrete`:
222 |
223 | ```{code-cell} ipython3
224 | @jax.jit
225 | def value_function_iteration(model, tol=1e-5, max_iter=10_000):
226 | """
227 | Implements VFI using successive approximation.
228 | """
229 | def body_fun(k_v_err):
230 | k, v, error = k_v_err
231 | v_new = T(v, model)
232 | error = jnp.max(jnp.abs(v_new - v))
233 | return k + 1, v_new, error
234 |
235 | def cond_fun(k_v_err):
236 | k, v, error = k_v_err
237 | return jnp.logical_and(error > tol, k < max_iter)
238 |
239 | v_init = jnp.zeros((len(model.a_grid), len(model.y_grid)))
240 | k, v_star, error = jax.lax.while_loop(cond_fun, body_fun,
241 | (1, v_init, tol + 1))
242 | return v_star, get_greedy(v_star, model)
243 | ```
244 |
245 | ## Optimistic Policy Iteration
246 |
247 | Now we implement OPI.
248 |
249 | The algorithm alternates between
250 |
251 | 1. Performing $m$ policy operator iterations to update the value function
252 | 2. Computing a new greedy policy based on the updated value function
253 |
254 | ```{code-cell} ipython3
255 | @jax.jit
256 | def optimistic_policy_iteration(model, m=10, tol=1e-5, max_iter=10_000):
257 | """
258 | Implements optimistic policy iteration with step size m.
259 |
260 | Parameters:
261 | -----------
262 | model : Model
263 | The consumption-savings model
264 | m : int
265 | Number of policy operator iterations per step
266 | tol : float
267 | Tolerance for convergence
268 | max_iter : int
269 | Maximum number of iterations
270 | """
271 | v_init = jnp.zeros((len(model.a_grid), len(model.y_grid)))
272 |
273 | def condition_function(inputs):
274 | i, v, error = inputs
275 | return jnp.logical_and(error > tol, i < max_iter)
276 |
277 | def update(inputs):
278 | i, v, error = inputs
279 | last_v = v
280 | σ = get_greedy(v, model)
281 | v = iterate_policy_operator(σ, v, m, model)
282 | error = jnp.max(jnp.abs(v - last_v))
283 | i += 1
284 | return i, v, error
285 |
286 | num_iter, v, error = jax.lax.while_loop(condition_function,
287 | update,
288 | (0, v_init, tol + 1))
289 |
290 | return v, get_greedy(v, model)
291 | ```
292 |
293 | ## Timing Comparison
294 |
295 | Let's create a model and compare the performance of VFI and OPI.
296 |
297 | ```{code-cell} ipython3
298 | model = create_consumption_model()
299 | ```
300 |
301 | First, let's time VFI:
302 |
303 | ```{code-cell} ipython3
304 | print("Starting VFI.")
305 | start = time()
306 | v_star_vfi, σ_star_vfi = value_function_iteration(model)
307 | v_star_vfi.block_until_ready()
308 | vfi_time_with_compile = time() - start
309 | print(f"VFI completed in {vfi_time_with_compile:.2f} seconds.")
310 | ```
311 |
312 | Run it again to eliminate compile time:
313 |
314 | ```{code-cell} ipython3
315 | start = time()
316 | v_star_vfi, σ_star_vfi = value_function_iteration(model)
317 | v_star_vfi.block_until_ready()
318 | vfi_time = time() - start
319 | print(f"VFI completed in {vfi_time:.2f} seconds.")
320 | ```
321 |
322 | Now let's time OPI with different values of m:
323 |
324 | ```{code-cell} ipython3
325 | print("Starting OPI with m=50.")
326 | start = time()
327 | v_star_opi, σ_star_opi = optimistic_policy_iteration(model, m=50)
328 | v_star_opi.block_until_ready()
329 | opi_time_with_compile = time() - start
330 | print(f"OPI completed in {opi_time_with_compile:.2f} seconds.")
331 | ```
332 |
333 | Run it again:
334 |
335 | ```{code-cell} ipython3
336 | start = time()
337 | v_star_opi, σ_star_opi = optimistic_policy_iteration(model, m=50)
338 | v_star_opi.block_until_ready()
339 | opi_time = time() - start
340 | print(f"OPI completed in {opi_time:.2f} seconds.")
341 | ```
342 |
343 | Check that we get the same result:
344 |
345 | ```{code-cell} ipython3
346 | print(f"Values match: {jnp.allclose(v_star_vfi, v_star_opi)}")
347 | ```
348 |
349 | The value functions match, confirming both algorithms converge to the same solution.
350 |
351 | Let's visually compare the asset dynamics under both policies:
352 |
353 | ```{code-cell} ipython3
354 | fig, axes = plt.subplots(1, 2, figsize=(12, 5))
355 |
356 | # VFI policy
357 | for j, label in zip([0, -1], ['low income', 'high income']):
358 | a_next_vfi = model.a_grid[σ_star_vfi[:, j]]
359 | axes[0].plot(model.a_grid, a_next_vfi, label=label)
360 | axes[0].plot(model.a_grid, model.a_grid, 'k--', linewidth=0.5, alpha=0.5)
361 | axes[0].set(xlabel='current assets', ylabel='next period assets', title='VFI')
362 | axes[0].legend()
363 |
364 | # OPI policy
365 | for j, label in zip([0, -1], ['low income', 'high income']):
366 | a_next_opi = model.a_grid[σ_star_opi[:, j]]
367 | axes[1].plot(model.a_grid, a_next_opi, label=label)
368 | axes[1].plot(model.a_grid, model.a_grid, 'k--', linewidth=0.5, alpha=0.5)
369 | axes[1].set(xlabel='current assets', ylabel='next period assets', title='OPI')
370 | axes[1].legend()
371 |
372 | plt.tight_layout()
373 | plt.show()
374 | ```
375 |
376 | The policies are visually indistinguishable, confirming both methods produce the same solution.
377 |
378 | Here's the speedup:
379 |
380 | ```{code-cell} ipython3
381 | print(f"Speedup factor: {vfi_time / opi_time:.2f}")
382 | ```
383 |
384 | Let's try different values of m to see how it affects performance:
385 |
386 | ```{code-cell} ipython3
387 | m_vals = [1, 5, 10, 25, 50, 100, 200, 400]
388 | opi_times = []
389 |
390 | for m in m_vals:
391 | start = time()
392 | v_star, σ_star = optimistic_policy_iteration(model, m=m)
393 | v_star.block_until_ready()
394 | elapsed = time() - start
395 | opi_times.append(elapsed)
396 | print(f"OPI with m={m:3d} completed in {elapsed:.2f} seconds.")
397 | ```
398 |
399 | Plot the results:
400 |
401 | ```{code-cell} ipython3
402 | fig, ax = plt.subplots()
403 | ax.plot(m_vals, opi_times, 'o-', label='OPI')
404 | ax.axhline(vfi_time, linestyle='--', color='red', label='VFI')
405 | ax.set_xlabel('m (policy steps per iteration)')
406 | ax.set_ylabel('time (seconds)')
407 | ax.legend()
408 | ax.set_title('OPI execution time vs step size m')
409 | plt.show()
410 | ```
411 |
412 | Here's a summary of the results
413 |
414 | * OPI outperforms VFI for a large range of $m$ values.
415 |
416 | * For very large $m$, OPI performance begins to degrade as we spend too much
417 | time iterating the policy operator.
418 |
419 |
420 | ## Exercises
421 |
422 | ```{exercise}
423 | :label: ifp_opi_ex1
424 |
425 | The speed gains achieved by OPI are quite robust to parameter changes.
426 |
427 | Confirm this by experimenting with different parameter values for the income process ($\rho$ and $\nu$).
428 |
429 | Measure how they affect the relative performance of VFI vs OPI.
430 |
431 | Try:
432 | * $\rho \in \{0.8, 0.9, 0.95\}$
433 | * $\nu \in \{0.05, 0.1, 0.2\}$
434 |
435 | For each combination, compute the speedup factor (VFI time / OPI time) and report your findings.
436 | ```
437 |
438 | ```{solution-start} ifp_opi_ex1
439 | :class: dropdown
440 | ```
441 |
442 | Here's one solution:
443 |
444 | ```{code-cell} ipython3
445 | ρ_vals = [0.8, 0.9, 0.95]
446 | ν_vals = [0.05, 0.1, 0.2]
447 |
448 | results = []
449 |
450 | for ρ in ρ_vals:
451 | for ν in ν_vals:
452 | print(f"\nTesting ρ={ρ}, ν={ν}")
453 |
454 | # Create model
455 | model = create_consumption_model(ρ=ρ, ν=ν)
456 |
457 | # Time VFI
458 | start = time()
459 | v_vfi, σ_vfi = value_function_iteration(model)
460 | v_vfi.block_until_ready()
461 | vfi_t = time() - start
462 |
463 | # Time OPI
464 | start = time()
465 | v_opi, σ_opi = optimistic_policy_iteration(model, m=10)
466 | v_opi.block_until_ready()
467 | opi_t = time() - start
468 |
469 | speedup = vfi_t / opi_t
470 | results.append((ρ, ν, speedup))
471 | print(f" VFI: {vfi_t:.2f}s, OPI: {opi_t:.2f}s, Speedup: {speedup:.2f}x")
472 |
473 | # Print summary
474 | print("\nSummary of speedup factors:")
475 | for ρ, ν, speedup in results:
476 | print(f"ρ={ρ}, ν={ν}: {speedup:.2f}x")
477 | ```
478 |
479 | ```{solution-end}
480 | ```
481 |
--------------------------------------------------------------------------------