├── .gitignore
├── lectures
│   ├── _static
│   │   ├── qe-logo-large.png
│   │   ├── lectures-favicon.ico
│   │   ├── lecture_specific
│   │   │   ├── ifp
│   │   │   │   ├── pi2.pdf
│   │   │   │   ├── ifp_histogram.png
│   │   │   │   ├── ifp_policies.png
│   │   │   │   └── ifp_agg_savings.png
│   │   │   ├── optgrowth
│   │   │   │   ├── 3ndp.pdf
│   │   │   │   ├── solution_og_ex2.png
│   │   │   │   ├── cd_analytical.py
│   │   │   │   ├── solve_model.py
│   │   │   │   └── bellman_operator.py
│   │   │   ├── short_path
│   │   │   │   ├── graph.png
│   │   │   │   ├── graph2.png
│   │   │   │   ├── graph3.png
│   │   │   │   └── graph4.png
│   │   │   ├── troubleshooting
│   │   │   │   └── launch.png
│   │   │   ├── orth_proj
│   │   │   │   ├── orth_proj_def1.png
│   │   │   │   ├── orth_proj_def2.png
│   │   │   │   ├── orth_proj_def3.png
│   │   │   │   ├── orth_proj_thm1.png
│   │   │   │   ├── orth_proj_thm2.png
│   │   │   │   ├── orth_proj_thm3.png
│   │   │   │   ├── orth_proj_def1.tex
│   │   │   │   ├── orth_proj_def2.tex
│   │   │   │   ├── orth_proj_def3.tex
│   │   │   │   ├── orth_proj_thm1.tex
│   │   │   │   ├── orth_proj_thm2.tex
│   │   │   │   └── orth_proj_thm3.tex
│   │   │   ├── heavy_tails
│   │   │   │   ├── rank_size_fig1.png
│   │   │   │   └── light_heavy_fig1.png
│   │   │   ├── wealth_dynamics
│   │   │   │   └── htop_again.png
│   │   │   ├── career
│   │   │   │   └── career_solutions_ex1_py.png
│   │   │   ├── lucas_model
│   │   │   │   ├── solution_mass_ex2.png
│   │   │   │   └── lucastree.py
│   │   │   ├── asset_pricing_lph
│   │   │   │   └── AssetPricing_v1.jpg
│   │   │   ├── lake_model
│   │   │   │   └── lake_distribution_wages.png
│   │   │   ├── mccall_model_with_separation
│   │   │   │   ├── mccall_resw_c.png
│   │   │   │   ├── mccall_resw_beta.png
│   │   │   │   └── mccall_resw_alpha.png
│   │   │   ├── cake_eating_numerical
│   │   │   │   └── analytical.py
│   │   │   ├── mccall
│   │   │   │   ├── mccall_vf_plot1.py
│   │   │   │   ├── mccall_resw_c.py
│   │   │   │   ├── mccall_resw_beta.py
│   │   │   │   ├── mccall_resw_alpha.py
│   │   │   │   └── mccall_resw_gamma.py
│   │   │   ├── coleman_policy_iter
│   │   │   │   └── solve_time_iter.py
│   │   │   ├── optgrowth_fast
│   │   │   │   ├── ogm.py
│   │   │   │   └── ogm_crra.py
│   │   │   └── odu
│   │   │       └── odu.py
│   │   ├── includes
│   │   │   ├── lecture_howto_py.raw
│   │   │   └── header.raw
│   │   └── downloads
│   │       └── amss_environment.yml
│   ├── zreferences.md
│   ├── intro.md
│   ├── status.md
│   ├── _toc.yml
│   ├── troubleshooting.md
│   ├── _config.yml
│   ├── egm_policy_iter.md
│   ├── optgrowth_fast.md
│   ├── sir_model.md
│   ├── inventory_dynamics.md
│   ├── mccall_fitted_vfi.md
│   ├── mccall_correlated.md
│   ├── coleman_policy_iter.md
│   ├── mccall_model_with_separation.md
│   ├── career.md
│   ├── jv.md
│   └── cake_eating_problem.md
├── _notebook_repo
│   ├── environment.yml
│   └── README.md
├── environment.yml
├── README.md
└── .github
    └── workflows
        ├── linkcheck.yml
        ├── cache.yml
        ├── ci.yml
        └── publish.yml
/.gitignore:
--------------------------------------------------------------------------------
1 | _build/
2 | .DS_Store
--------------------------------------------------------------------------------
/lectures/_static/qe-logo-large.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-dynamics/main/lectures/_static/qe-logo-large.png
--------------------------------------------------------------------------------
/lectures/_static/lectures-favicon.ico:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-dynamics/main/lectures/_static/lectures-favicon.ico
--------------------------------------------------------------------------------
/_notebook_repo/environment.yml:
--------------------------------------------------------------------------------
1 | name: lecture-python
2 | channels:
3 |   - defaults
4 | dependencies:
5 |   - python=3.8
6 |   - anaconda
7 |
8 |
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/ifp/pi2.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-dynamics/main/lectures/_static/lecture_specific/ifp/pi2.pdf
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/optgrowth/3ndp.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-dynamics/main/lectures/_static/lecture_specific/optgrowth/3ndp.pdf
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/ifp/ifp_histogram.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-dynamics/main/lectures/_static/lecture_specific/ifp/ifp_histogram.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/ifp/ifp_policies.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-dynamics/main/lectures/_static/lecture_specific/ifp/ifp_policies.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/short_path/graph.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-dynamics/main/lectures/_static/lecture_specific/short_path/graph.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/short_path/graph2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-dynamics/main/lectures/_static/lecture_specific/short_path/graph2.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/short_path/graph3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-dynamics/main/lectures/_static/lecture_specific/short_path/graph3.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/short_path/graph4.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-dynamics/main/lectures/_static/lecture_specific/short_path/graph4.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/ifp/ifp_agg_savings.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-dynamics/main/lectures/_static/lecture_specific/ifp/ifp_agg_savings.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/troubleshooting/launch.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-dynamics/main/lectures/_static/lecture_specific/troubleshooting/launch.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/optgrowth/solution_og_ex2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-dynamics/main/lectures/_static/lecture_specific/optgrowth/solution_og_ex2.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/orth_proj/orth_proj_def1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-dynamics/main/lectures/_static/lecture_specific/orth_proj/orth_proj_def1.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/orth_proj/orth_proj_def2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-dynamics/main/lectures/_static/lecture_specific/orth_proj/orth_proj_def2.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/orth_proj/orth_proj_def3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-dynamics/main/lectures/_static/lecture_specific/orth_proj/orth_proj_def3.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/orth_proj/orth_proj_thm1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-dynamics/main/lectures/_static/lecture_specific/orth_proj/orth_proj_thm1.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/orth_proj/orth_proj_thm2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-dynamics/main/lectures/_static/lecture_specific/orth_proj/orth_proj_thm2.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/orth_proj/orth_proj_thm3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-dynamics/main/lectures/_static/lecture_specific/orth_proj/orth_proj_thm3.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/heavy_tails/rank_size_fig1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-dynamics/main/lectures/_static/lecture_specific/heavy_tails/rank_size_fig1.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/wealth_dynamics/htop_again.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-dynamics/main/lectures/_static/lecture_specific/wealth_dynamics/htop_again.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/career/career_solutions_ex1_py.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-dynamics/main/lectures/_static/lecture_specific/career/career_solutions_ex1_py.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/heavy_tails/light_heavy_fig1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-dynamics/main/lectures/_static/lecture_specific/heavy_tails/light_heavy_fig1.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/lucas_model/solution_mass_ex2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-dynamics/main/lectures/_static/lecture_specific/lucas_model/solution_mass_ex2.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/asset_pricing_lph/AssetPricing_v1.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-dynamics/main/lectures/_static/lecture_specific/asset_pricing_lph/AssetPricing_v1.jpg
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/lake_model/lake_distribution_wages.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-dynamics/main/lectures/_static/lecture_specific/lake_model/lake_distribution_wages.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/mccall_model_with_separation/mccall_resw_c.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-dynamics/main/lectures/_static/lecture_specific/mccall_model_with_separation/mccall_resw_c.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/mccall_model_with_separation/mccall_resw_beta.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-dynamics/main/lectures/_static/lecture_specific/mccall_model_with_separation/mccall_resw_beta.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/mccall_model_with_separation/mccall_resw_alpha.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/QuantEcon/lecture-dynamics/main/lectures/_static/lecture_specific/mccall_model_with_separation/mccall_resw_alpha.png
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/cake_eating_numerical/analytical.py:
--------------------------------------------------------------------------------
1 | def c_star(x, β, γ):
2 |
3 |     return (1 - β ** (1/γ)) * x
4 |
5 |
6 | def v_star(x, β, γ):
7 |
8 |     return (1 - β**(1 / γ))**(-γ) * (x**(1-γ) / (1-γ))
9 |
10 |
--------------------------------------------------------------------------------
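
A minimal usage sketch for analytical.py (not part of the repo): it evaluates and plots the closed-form cake-eating policy and value function defined above. The parameter values are illustrative assumptions, not taken from the lectures.

```python
import numpy as np
import matplotlib.pyplot as plt

def c_star(x, β, γ):
    "Closed-form optimal consumption policy (from analytical.py)."
    return (1 - β**(1/γ)) * x

def v_star(x, β, γ):
    "Closed-form value function (from analytical.py)."
    return (1 - β**(1/γ))**(-γ) * (x**(1-γ) / (1-γ))

β, γ = 0.96, 1.5                      # illustrative parameter values
x_grid = np.linspace(0.1, 2.5, 120)   # grid of cake sizes

fig, ax = plt.subplots()
ax.plot(x_grid, c_star(x_grid, β, γ), label='$c^*(x)$')
ax.plot(x_grid, v_star(x_grid, β, γ), label='$v^*(x)$')
ax.set_xlabel('cake size $x$')
ax.legend()
plt.show()
```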
/lectures/zreferences.md:
--------------------------------------------------------------------------------
1 | ---
2 | jupytext:
3 |   text_representation:
4 |     extension: .md
5 |     format_name: myst
6 | kernelspec:
7 |   display_name: Python 3
8 |   language: python
9 |   name: python3
10 | ---
11 |
12 | (references)=
13 | # References
14 |
15 | ```{bibliography} _static/quant-econ.bib
16 | ```
17 |
18 |
--------------------------------------------------------------------------------
/lectures/_static/includes/lecture_howto_py.raw:
--------------------------------------------------------------------------------
1 | .. raw:: html
2 |
3 |
8 |
--------------------------------------------------------------------------------
/_notebook_repo/README.md:
--------------------------------------------------------------------------------
1 | # lecture-python.notebooks
2 |
3 | [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/QuantEcon/lecture-python.notebooks/master)
4 |
5 | Notebooks for https://python.quantecon.org
6 |
7 | **Note:** This README should be edited [here](https://github.com/quantecon/lecture-python.myst/_notebook_repo)
8 |
--------------------------------------------------------------------------------
/lectures/_static/includes/header.raw:
--------------------------------------------------------------------------------
1 | .. raw:: html
2 |
3 |
8 |
--------------------------------------------------------------------------------
/lectures/intro.md:
--------------------------------------------------------------------------------
1 | ---
2 | jupytext:
3 |   text_representation:
4 |     extension: .md
5 |     format_name: myst
6 | kernelspec:
7 |   display_name: Python 3
8 |   language: python
9 |   name: python3
10 | ---
11 |
12 | # Introduction to Economic Dynamics
13 |
14 | This website presents an introductory set of lectures on economic dynamics.
15 |
16 | ```{tableofcontents}
17 | ```
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/optgrowth/cd_analytical.py:
--------------------------------------------------------------------------------
1 |
2 | def v_star(y, α, β, μ):
3 | """
4 | True value function
5 | """
6 | c1 = np.log(1 - α * β) / (1 - β)
7 | c2 = (μ + α * np.log(α * β)) / (1 - α)
8 | c3 = 1 / (1 - β)
9 | c4 = 1 / (1 - α * β)
10 | return c1 + c2 * (c3 - c4) + c4 * np.log(y)
11 |
12 | def σ_star(y, α, β):
13 | """
14 | True optimal policy
15 | """
16 | return (1 - α * β) * y
17 |
18 |
--------------------------------------------------------------------------------
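
Note that cd_analytical.py, like the other lecture includes, assumes numpy has already been imported as np by the including lecture. A self-contained sketch of how one of its closed forms might be evaluated (parameter values are illustrative):

```python
import numpy as np

def σ_star(y, α, β):
    "Closed-form optimal policy from cd_analytical.py."
    return (1 - α * β) * y

α, β = 0.4, 0.96
y = np.linspace(0.5, 4.0, 4)
print(σ_star(y, α, β))   # optimal consumption at each income level
```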
/lectures/_static/lecture_specific/mccall/mccall_vf_plot1.py:
--------------------------------------------------------------------------------
1 | import matplotlib.pyplot as plt
2 |
3 | mcm = McCallModel()
4 | V, U = solve_mccall_model(mcm)
5 |
6 | fig, ax = plt.subplots(figsize=(10, 6))
7 |
8 | ax.plot(mcm.w_vec, V, 'b-', lw=2, alpha=0.7, label='$V$')
9 | ax.plot(mcm.w_vec, [U]*len(mcm.w_vec), 'g-', lw=2, alpha=0.7, label='$U$')
10 | ax.set_xlim(min(mcm.w_vec), max(mcm.w_vec))
11 | ax.legend(loc='upper left')
12 | ax.grid()
13 |
14 | plt.show()
15 |
--------------------------------------------------------------------------------
/lectures/_static/downloads/amss_environment.yml:
--------------------------------------------------------------------------------
1 | name: amss
2 | channels:
3 |   - defaults
4 |   - conda-forge
5 | dependencies:
6 |   - pip
7 |   - python
8 |   - jupyter
9 |   - jupyterlab
10 |   - nbconvert
11 |   - pandoc
12 |   - pandas
13 |   - numba
14 |   - scipy=1.4.1
15 |   - numpy
16 |   - matplotlib
17 |   - networkx
18 |   - sphinx=2.4.4
19 |   - interpolation
20 |   - seaborn
21 |   - pip:
22 |       - sphinxcontrib-jupyter
23 |       - sphinxcontrib-bibtex
24 |       - quantecon
25 |       - joblib
26 |
--------------------------------------------------------------------------------
/environment.yml:
--------------------------------------------------------------------------------
1 | name: quantecon
2 | channels:
3 |   - defaults
4 | dependencies:
5 |   - python=3.11
6 |   - anaconda=2024.02
7 |   - pip
8 |   - pip:
9 |       - jupyter-book==0.15.1
10 |       - docutils==0.17.1
11 |       - quantecon-book-theme==0.7.1
12 |       - sphinx-reredirects==0.1.3
13 |       - sphinx-tojupyter==0.3.0
14 |       - sphinxext-rediraffe==0.2.7
15 |       - sphinx-exercise==0.4.1
16 |       - ghp-import==2.1.0
17 |       - sphinxcontrib-youtube==1.2.0
18 |       - sphinx-togglebutton==0.3.2
19 |       - arviz==0.13.0
20 |       - kaleido
21 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Introduction to Economic Dynamics
2 |
3 | This website presents an introductory set of lectures on economic dynamics.
4 |
5 | ## Jupyter notebooks
6 |
7 | Jupyter notebook versions of each lecture are available for download
8 | via the website.
9 |
10 | ## Contributions
11 |
12 | To comment on the lectures, please add to an existing issue or open a new one in the issue tracker (see above).
13 |
14 | We welcome pull requests!
15 |
16 | Please read the [QuantEcon style guide](https://manual.quantecon.org/intro.html) first, so that you can match our style.
17 |
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/mccall/mccall_resw_c.py:
--------------------------------------------------------------------------------
1 | grid_size = 25
2 | c_vals = np.linspace(2, 12, grid_size) # values of unemployment compensation
3 | w_bar_vals = np.empty_like(c_vals)
4 |
5 | mcm = McCallModel()
6 |
7 | fig, ax = plt.subplots(figsize=(10, 6))
8 |
9 | for i, c in enumerate(c_vals):
10 |     mcm.c = c
11 |     w_bar = compute_reservation_wage(mcm)
12 |     w_bar_vals[i] = w_bar
13 |
14 | ax.set_xlabel('unemployment compensation')
15 | ax.set_ylabel('reservation wage')
16 | txt = r'$\bar w$ as a function of $c$'
17 | ax.plot(c_vals, w_bar_vals, 'b-', lw=2, alpha=0.7, label=txt)
18 | ax.legend(loc='upper left')
19 | ax.grid()
20 |
21 | plt.show()
22 |
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/mccall/mccall_resw_beta.py:
--------------------------------------------------------------------------------
1 | grid_size = 25
2 | β_vals = np.linspace(0.8, 0.99, grid_size)
3 | w_bar_vals = np.empty_like(β_vals)
4 |
5 | mcm = McCallModel()
6 |
7 | fig, ax = plt.subplots(figsize=(10, 6))
8 |
9 | for i, β in enumerate(β_vals):
10 |     mcm.β = β
11 |     w_bar = compute_reservation_wage(mcm)
12 |     w_bar_vals[i] = w_bar
13 |
14 | ax.set_xlabel('discount factor')
15 | ax.set_ylabel('reservation wage')
16 | ax.set_xlim(β_vals.min(), β_vals.max())
17 | txt = r'$\bar w$ as a function of $\beta$'
18 | ax.plot(β_vals, w_bar_vals, 'b-', lw=2, alpha=0.7, label=txt)
19 | ax.legend(loc='upper left')
20 | ax.grid()
21 |
22 | plt.show()
23 |
--------------------------------------------------------------------------------
/lectures/status.md:
--------------------------------------------------------------------------------
1 | ---
2 | jupytext:
3 |   text_representation:
4 |     extension: .md
5 |     format_name: myst
6 | kernelspec:
7 |   display_name: Python 3
8 |   language: python
9 |   name: python3
10 | ---
11 |
12 | # Execution Statistics
13 |
14 | This table contains the latest execution statistics.
15 |
16 | ```{nb-exec-table}
17 | ```
18 |
19 | (status:machine-details)=
20 |
21 | These lectures are built on `linux` instances through `github actions`.
22 |
23 | These lectures use the following Python version
24 |
25 | ```{code-cell} ipython
26 | !python --version
27 | ```
28 |
29 | and the following package versions
30 |
31 | ```{code-cell} ipython
32 | :tags: [hide-output]
33 | !conda list
34 | ```
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/mccall/mccall_resw_alpha.py:
--------------------------------------------------------------------------------
1 | import matplotlib.pyplot as plt
2 |
3 | grid_size = 25
4 | α_vals = np.linspace(0.05, 0.5, grid_size)
5 | w_bar_vals = np.empty_like(α_vals)
6 |
7 | mcm = McCallModel()
8 |
9 | fig, ax = plt.subplots(figsize=(10, 6))
10 |
11 | for i, α in enumerate(α_vals):
12 |     mcm.α = α
13 |     w_bar = compute_reservation_wage(mcm)
14 |     w_bar_vals[i] = w_bar
15 |
16 | ax.set_xlabel('job separation rate')
17 | ax.set_ylabel('reservation wage')
18 | ax.set_xlim(α_vals.min(), α_vals.max())
19 | txt = r'$\bar w$ as a function of $\alpha$'
20 | ax.plot(α_vals, w_bar_vals, 'b-', lw=2, alpha=0.7, label=txt)
21 | ax.legend(loc='upper right')
22 | ax.grid()
23 |
24 | plt.show()
25 |
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/mccall/mccall_resw_gamma.py:
--------------------------------------------------------------------------------
1 | import matplotlib.pyplot as plt
2 |
3 | grid_size = 25
4 | γ_vals = np.linspace(0.05, 0.95, grid_size)
5 | w_bar_vals = np.empty_like(γ_vals)
6 |
7 | mcm = McCallModel()
8 |
9 | fig, ax = plt.subplots(figsize=(10, 6))
10 |
11 | for i, γ in enumerate(γ_vals):
12 |     mcm.γ = γ
13 |     w_bar = compute_reservation_wage(mcm)
14 |     w_bar_vals[i] = w_bar
15 |
16 | ax.set_xlabel('job offer rate')
17 | ax.set_ylabel('reservation wage')
18 | ax.set_xlim(γ_vals.min(), γ_vals.max())
19 | txt = r'$\bar w$ as a function of $\gamma$'
20 | ax.plot(γ_vals, w_bar_vals, 'b-', lw=2, alpha=0.7, label=txt)
21 | ax.legend(loc='upper left')
22 | ax.grid()
23 |
24 | plt.show()
25 |
26 |
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/optgrowth/solve_model.py:
--------------------------------------------------------------------------------
1 | def solve_model(og,
2 |                 tol=1e-4,
3 |                 max_iter=1000,
4 |                 verbose=True,
5 |                 print_skip=25):
6 |     """
7 |     Solve model by iterating with the Bellman operator.
8 |
9 |     """
10 |
11 |     # Set up loop
12 |     v = og.u(og.grid)  # Initial condition
13 |     i = 0
14 |     error = tol + 1
15 |
16 |     while i < max_iter and error > tol:
17 |         v_greedy, v_new = T(v, og)
18 |         error = np.max(np.abs(v - v_new))
19 |         i += 1
20 |         if verbose and i % print_skip == 0:
21 |             print(f"Error at iteration {i} is {error}.")
22 |         v = v_new
23 |
24 |     if error > tol:
25 |         print("Failed to converge!")
26 |     elif verbose:
27 |         print(f"\nConverged in {i} iterations.")
28 |
29 |     return v_greedy, v_new
30 |
--------------------------------------------------------------------------------
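
solve_model.py is also a lecture include: it presumes that a Bellman operator T and a model instance og already exist in the calling namespace. The convergence logic of its loop can be seen in isolation with a toy contraction (a sketch; T_toy is a made-up stand-in, not the lecture's operator):

```python
import numpy as np

def T_toy(v):
    "Stand-in operator: a contraction with fixed point v* = 2."
    return 0.5 * v + 1

v = np.zeros(5)                 # initial condition
tol, max_iter = 1e-4, 1000
i, error = 0, tol + 1

while i < max_iter and error > tol:
    v_new = T_toy(v)
    error = np.max(np.abs(v - v_new))
    i += 1
    v = v_new

print(i, v)   # roughly 15 iterations; v is close to an array of 2s
```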
/lectures/_static/lecture_specific/coleman_policy_iter/solve_time_iter.py:
--------------------------------------------------------------------------------
1 | def solve_model_time_iter(model,    # Class with model information
2 |                           σ,        # Initial condition
3 |                           tol=1e-4,
4 |                           max_iter=1000,
5 |                           verbose=True,
6 |                           print_skip=25):
7 |
8 |     # Set up loop
9 |     i = 0
10 |     error = tol + 1
11 |
12 |     while i < max_iter and error > tol:
13 |         σ_new = K(σ, model)
14 |         error = np.max(np.abs(σ - σ_new))
15 |         i += 1
16 |         if verbose and i % print_skip == 0:
17 |             print(f"Error at iteration {i} is {error}.")
18 |         σ = σ_new
19 |
20 |     if error > tol:
21 |         print("Failed to converge!")
22 |     elif verbose:
23 |         print(f"\nConverged in {i} iterations.")
24 |
25 |     return σ_new
26 |
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/orth_proj/orth_proj_def1.tex:
--------------------------------------------------------------------------------
1 | \documentclass[convert={density=300,size=1080x800,outext=.png}]{standalone}
2 | \usepackage{tikz}
3 |
4 | \usetikzlibrary{arrows.meta, arrows}
5 |
6 | \begin{document}
7 |
8 | %.. tikz::
9 | \begin{tikzpicture}
10 | [scale=5, axis/.style={<->, >=stealth'}, important line/.style={thick}, dotted line/.style={dotted, thick,red}, every node/.style={color=black}]\coordinate(O) at (0,0);
11 | \coordinate (X) at (-0.2,0.3);
12 | \coordinate (Z) at (0.6,0.3);
13 | \draw[axis] (-0.4,0) -- (0.9,0) node(xline)[right] {};
14 | \draw[axis] (0,-0.3) -- (0,0.7) node(yline)[above] {};
15 | \draw[important line,blue, ->] (O) -- (X) node[left] {$x$};
16 | \draw[important line,blue, ->] (O) -- (Z) node[right] {$z$};
17 | \draw[dotted line] (-0.03,0.045) -- (0.03,0.075);
18 | \draw[dotted line] (0.06,0.03) -- (0.03,0.075);
19 |
20 | \end{tikzpicture}
21 |
22 | \end{document}
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/orth_proj/orth_proj_def2.tex:
--------------------------------------------------------------------------------
1 | \documentclass[convert={density=300,size=1080x800,outext=.png}]{standalone}
2 | \usepackage{tikz}
3 | \usetikzlibrary{arrows.meta, arrows}
4 | \begin{document}
5 |
6 | %.. tikz::
7 | \begin{tikzpicture}
8 | [scale=5, axis/.style={<->, >=stealth'}, important line/.style={thick}, dotted line/.style={dotted, thick,red}, every node/.style={color=black} ] \coordinate(O) at (0,0);
9 | \coordinate (X) at (-0.2,0.3);
10 | \coordinate (Z1) at (-0.3,-0.15);
11 | \coordinate (Z2) at (0.8,0.4);
12 | \draw[axis] (-0.4,0) -- (0.9,0) node(xline)[right] {};
13 | \draw[axis] (0,-0.3) -- (0,0.7) node(yline)[above] {};
14 | \draw[important line,blue, ->] (O) -- (X) node[left] {$x$};
15 | \draw[important line] (Z1) -- (Z2) node[right] {$S$};
16 | \draw[dotted line] (-0.03,0.045) -- (0.03,0.075);
17 | \draw[dotted line] (0.06,0.03) -- (0.03,0.075);
18 | \end{tikzpicture}
19 |
20 | \end{document}
21 |
22 |
23 |
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/orth_proj/orth_proj_def3.tex:
--------------------------------------------------------------------------------
1 | \documentclass[convert={density=300,size=1080x800,outext=.png}]{standalone}
2 | \usepackage{tikz}
3 | \usetikzlibrary{arrows.meta, arrows}
4 | \begin{document}
5 |
6 | %.. tikz::
7 | \begin{tikzpicture}
8 | [scale=5, axis/.style={<->, >=stealth'}, important line/.style={thick}, dotted line/.style={dotted, thick,red}, dashed line/.style={dashed, thin}, every node/.style={color=black}] \coordinate(O) at (0,0);
9 | \coordinate (S1) at (-0.4,-0.2);
10 | \coordinate (S2) at (0.8,0.4);
11 | \coordinate (S3) at (-0.25,0.5);
12 | \coordinate (S4) at (0.12,-0.24);
13 | \draw[axis] (-0.5,0) -- (0.9,0) node(xline)[right] {};
14 | \draw[axis] (0,-0.3) -- (0,0.7) node(yline)[above] {};
15 | \draw[important line, thick] (S1) -- (S2) node[right] {$S$};
16 | \draw[important line, thick] (S4) -- (S3) node[left] {$S^{\perp}$};
17 | \draw[dotted line] (-0.03,0.06) -- (0.03,0.09);
18 | \draw[dotted line] (0.06,0.03) -- (0.03,0.09);
19 | \end{tikzpicture}
20 |
21 | \end{document}
--------------------------------------------------------------------------------
/lectures/_toc.yml:
--------------------------------------------------------------------------------
1 | format: jb-book
2 | root: intro
3 | parts:
4 |   - caption: Introduction to Dynamics
5 |     numbered: true
6 |     chapters:
7 |       - file: sir_model
8 |       - file: inventory_dynamics
9 |       - file: samuelson
10 |       - file: kesten_processes
11 |       - file: wealth_dynamics
12 |   - caption: Asset Pricing & Finance
13 |     numbered: true
14 |     chapters:
15 |       - file: markov_asset
16 |       - file: ge_arrow
17 |       - file: harrison_kreps
18 |       - file: orth_proj
19 |       - file: asset_pricing_lph
20 |       - file: black_litterman
21 |       - file: BCG_complete_mkts
22 |       - file: BCG_incomplete_mkts
23 |   - caption: Search
24 |     numbered: true
25 |     chapters:
26 |       - file: mccall_model
27 |       - file: mccall_model_with_separation
28 |       - file: mccall_fitted_vfi
29 |       - file: mccall_correlated
30 |       - file: career
31 |       - file: jv
32 |       - file: odu
33 |       - file: mccall_q
34 |       - file: lake_model
35 |   - caption: Optimal Savings
36 |     numbered: true
37 |     chapters:
38 |       - file: cake_eating_problem
39 |       - file: cake_eating_numerical
40 |       - file: optgrowth
41 |       - file: optgrowth_fast
42 |       - file: coleman_policy_iter
43 |       - file: egm_policy_iter
44 |       - file: ifp
45 |       - file: ifp_advanced
46 |   - caption: Other
47 |     numbered: true
48 |     chapters:
49 |       - file: troubleshooting
50 |       - file: zreferences
51 |       - file: status
52 |
--------------------------------------------------------------------------------
/.github/workflows/linkcheck.yml:
--------------------------------------------------------------------------------
1 | name: Link Checker [Anaconda, Linux]
2 | on:
3 |   pull_request:
4 |     types: [opened, reopened]
5 |   schedule:
6 |     # UTC 12:00 is early morning in Australia
7 |     - cron: '0 12 * * *'
8 | jobs:
9 |   link-check-linux:
10 |     name: Link Checking (${{ matrix.python-version }}, ${{ matrix.os }})
11 |     runs-on: ${{ matrix.os }}
12 |     strategy:
13 |       fail-fast: false
14 |       matrix:
15 |         os: ["ubuntu-latest"]
16 |         python-version: ["3.11"]
17 |     steps:
18 |       - name: Checkout
19 |         uses: actions/checkout@v4
20 |       - name: Setup Anaconda
21 |         uses: conda-incubator/setup-miniconda@v3
22 |         with:
23 |           auto-update-conda: true
24 |           auto-activate-base: true
25 |           miniconda-version: 'latest'
26 |           python-version: "3.11"
27 |           environment-file: environment.yml
28 |           activate-environment: quantecon
29 |       - name: Download "build" folder (cache)
30 |         uses: dawidd6/action-download-artifact@v3
31 |         with:
32 |           workflow: cache.yml
33 |           branch: main
34 |           name: build-cache
35 |           path: _build
36 |       - name: Link Checker
37 |         shell: bash -l {0}
38 |         run: jb build lectures --path-output=./ --builder=custom --custom-builder=linkcheck
39 |       - name: Upload Link Checker Reports
40 |         uses: actions/upload-artifact@v4
41 |         if: failure()
42 |         with:
43 |           name: linkcheck-reports
44 |           path: _build/linkcheck
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/orth_proj/orth_proj_thm1.tex:
--------------------------------------------------------------------------------
1 | \documentclass[convert={density=300,size=1080x800,outext=.png}]{standalone}
2 | \usepackage{tikz}
3 | \usetikzlibrary{arrows.meta, arrows}
4 | \begin{document}
5 |
6 | %.. tikz::
7 | \begin{tikzpicture}
8 | [scale=5, axis/.style={<->, >=stealth'}, important line/.style={thick}, dotted line/.style={dotted, thick,red}, dashed line/.style={dashed, thin}, every node/.style={color=black}] \coordinate(O) at (0,0);
9 | \coordinate (y-yhat) at (-0.2,0.4);
10 | \coordinate (yhat) at (0.6,0.3);
11 | \coordinate (y) at (0.4,0.7);
12 | \coordinate (Z1) at (-0.4,-0.2);
13 | \coordinate (Z2) at (0.8,0.4);
14 | \draw[axis] (-0.5,0) -- (0.9,0) node(xline)[right] {};
15 | \draw[axis] (0,-0.3) -- (0,0.7) node(yline)[above] {};
16 | \draw[important line,blue,thick, ->] (O) -- (yhat) node[below] {$\hat y$};
17 | \draw[important line,blue, ->] (O) -- (y-yhat) node[left] {$y - \hat y$};
18 | \draw[important line, thick] (Z1) -- (O) node[right] {};
19 | \draw[important line, thick] (yhat) -- (Z2) node[right] {$S$};
20 | \draw[important line, blue,->] (O) -- (y) node[right] {$y$};
21 | \draw[dotted line] (-0.03,0.06) -- (0.03,0.09);
22 | \draw[dotted line] (0.06,0.03) -- (0.03,0.09);
23 | \draw[dotted line] (0.54,0.27) -- (0.51,0.33);
24 | \draw[dotted line] (0.57,0.36) -- (0.51,0.33);
25 | \draw[dashed line, black] (y) -- (yhat);
26 | \draw[-latex, very thin] (0.5,0.4) to [out=210,in=50] (-0.1,0.2);
27 |
28 | \end{tikzpicture}
29 |
30 | \end{document}
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/orth_proj/orth_proj_thm2.tex:
--------------------------------------------------------------------------------
1 | \documentclass[convert={density=300,size=1080x800,outext=.png}]{standalone}
2 | \usepackage{tikz}
3 | \usetikzlibrary{arrows.meta, arrows}
4 | \begin{document}
5 |
6 | %.. tikz::
7 | \begin{tikzpicture}
8 | [scale=5, axis/.style={<->, >=stealth'}, important line/.style={thick}, dotted line/.style={dotted, thick,red}, dashed line/.style={dashed, thin}, every node/.style={color=black}] \coordinate(O) at (0,0);
9 | \coordinate (y') at (-0.4,0.1);
10 | \coordinate (Py) at (0.6,0.3);
11 | \coordinate (y) at (0.4,0.7);
12 | \coordinate (Z1) at (-0.4,-0.2);
13 | \coordinate (Z2) at (0.8,0.4);
14 | \coordinate (Py') at (-0.28,-0.14);
15 | \draw[axis] (-0.5,0) -- (0.9,0) node(xline)[right] {};
16 | \draw[axis] (0,-0.3) -- (0,0.7) node(yline)[above] {};
17 | \draw[important line,blue,thick, ->] (O) -- (Py) node[anchor = north west, text width=2em] {$P y$};
18 | \draw[important line,blue, ->] (O) -- (y') node[left] {$y'$};
19 | \draw[important line, thick] (Z1) -- (O) node[right] {};
20 | \draw[important line, thick] (Py) -- (Z2) node[right] {$S$};
21 | \draw[important line, blue,->] (O) -- (y) node[right] {$y$};
22 | \draw[dotted line] (0.54,0.27) -- (0.51,0.33);
23 | \draw[dotted line] (0.57,0.36) -- (0.51,0.33);
24 | \draw[dotted line] (-0.22,-0.11) -- (-0.25,-0.05);
25 | \draw[dotted line] (-0.31,-0.08) -- (-0.25,-0.05);
26 | \draw[dashed line, black] (y) -- (Py);
27 | \draw[dashed line, black] (y') -- (Py') node[anchor = north west, text width=5em] {$P y'$};
28 | \end{tikzpicture}
29 |
30 | \end{document}
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/optgrowth_fast/ogm.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | from numba import float64
3 | from numba.experimental import jitclass
4 |
5 | opt_growth_data = [
6 |     ('α', float64),          # Production parameter
7 |     ('β', float64),          # Discount factor
8 |     ('μ', float64),          # Shock location parameter
9 |     ('s', float64),          # Shock scale parameter
10 |     ('grid', float64[:]),    # Grid (array)
11 |     ('shocks', float64[:])   # Shock draws (array)
12 | ]
13 |
14 | @jitclass(opt_growth_data)
15 | class OptimalGrowthModel:
16 |
17 |     def __init__(self,
18 |                  α=0.4,
19 |                  β=0.96,
20 |                  μ=0,
21 |                  s=0.1,
22 |                  grid_max=4,
23 |                  grid_size=120,
24 |                  shock_size=250,
25 |                  seed=1234):
26 |
27 |         self.α, self.β, self.μ, self.s = α, β, μ, s
28 |
29 |         # Set up grid
30 |         self.grid = np.linspace(1e-5, grid_max, grid_size)
31 |
32 |         # Store shocks (with a seed, so results are reproducible)
33 |         np.random.seed(seed)
34 |         self.shocks = np.exp(μ + s * np.random.randn(shock_size))
35 |
36 |     def f(self, k):
37 |         "The production function"
38 |         return k**self.α
39 |
40 |     def u(self, c):
41 |         "The utility function"
42 |         return np.log(c)
43 |
44 |     def f_prime(self, k):
45 |         "Derivative of f"
46 |         return self.α * (k**(self.α - 1))
47 |
48 |     def u_prime(self, c):
49 |         "Derivative of u"
50 |         return 1/c
51 |
52 |     def u_prime_inv(self, c):
53 |         "Inverse of u'"
54 |         return 1/c
--------------------------------------------------------------------------------
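
A short instantiation sketch (assumes ogm.py above has been executed so that OptimalGrowthModel is defined; numba compiles the class on first use):

```python
og = OptimalGrowthModel()                 # default parameters from the signature above
print(og.grid[0], og.grid[-1])            # grid spans [1e-5, grid_max]
print(og.shocks.mean())                   # lognormal draws, reproducible via the seed
print(og.u_prime_inv(og.u_prime(2.0)))    # round trip: returns 2.0 for log utility
```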
/lectures/_static/lecture_specific/optgrowth/bellman_operator.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | from interpolation import interp
3 | from numba import njit, prange
4 | from quantecon.optimize.scalar_maximization import brent_max
5 |
6 |
7 | def operator_factory(og, parallel_flag=True):
8 |     """
9 |     A function factory for building the Bellman operator, as well as
10 |     a function that computes greedy policies.
11 |
12 |     Here og is an instance of OptimalGrowthModel.
13 |     """
14 |
15 |     f, u, β = og.f, og.u, og.β
16 |     grid, shocks = og.grid, og.shocks
17 |
18 |     @njit
19 |     def objective(c, v, y):
20 |         """
21 |         The right-hand side of the Bellman equation
22 |         """
23 |         # First turn v into a function via interpolation
24 |         v_func = lambda x: interp(grid, v, x)
25 |         return u(c) + β * np.mean(v_func(f(y - c) * shocks))
26 |
27 |     @njit(parallel=parallel_flag)
28 |     def T(v):
29 |         """
30 |         The Bellman operator
31 |         """
32 |         v_new = np.empty_like(v)
33 |         for i in prange(len(grid)):
34 |             y = grid[i]
35 |             # Solve for optimal v at y
36 |             v_max = brent_max(objective, 1e-10, y, args=(v, y))[1]
37 |             v_new[i] = v_max
38 |         return v_new
39 |
40 |     @njit
41 |     def get_greedy(v):
42 |         """
43 |         Computes the v-greedy policy of a given function v
44 |         """
45 |         σ = np.empty_like(v)
46 |         for i in range(len(grid)):
47 |             y = grid[i]
48 |             # Solve for optimal c at y
49 |             c_max = brent_max(objective, 1e-10, y, args=(v, y))[0]
50 |             σ[i] = c_max
51 |         return σ
52 |
53 |     return T, get_greedy
54 |
--------------------------------------------------------------------------------
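
operator_factory illustrates a design choice used throughout these lectures: the factory captures the model's primitives once, then returns operators closed over them, keeping the jitted hot loops free of repeated attribute lookups. A miniature, numba-free version of the same pattern (factory and toy operators here are hypothetical stand-ins, not the lecture's code):

```python
import numpy as np

def factory(β):
    "Capture β once; return operators closed over it."
    def T(v):
        return 1 + β * v          # stand-in for the Bellman update
    def get_greedy(v):
        return np.argmax(v)       # stand-in for policy extraction
    return T, get_greedy

T, get_greedy = factory(0.96)
v = np.zeros(3)
print(T(v), get_greedy(T(v)))     # [1. 1. 1.] 0
```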
/lectures/_static/lecture_specific/optgrowth_fast/ogm_crra.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | from numba import float64
3 | from numba.experimental import jitclass
4 |
5 | opt_growth_data = [
6 |     ('α', float64),          # Production parameter
7 |     ('β', float64),          # Discount factor
8 |     ('μ', float64),          # Shock location parameter
9 |     ('γ', float64),          # Preference parameter
10 |     ('s', float64),          # Shock scale parameter
11 |     ('grid', float64[:]),    # Grid (array)
12 |     ('shocks', float64[:])   # Shock draws (array)
13 | ]
14 |
15 | @jitclass(opt_growth_data)
16 | class OptimalGrowthModel_CRRA:
17 |
18 |     def __init__(self,
19 |                  α=0.4,
20 |                  β=0.96,
21 |                  μ=0,
22 |                  s=0.1,
23 |                  γ=1.5,
24 |                  grid_max=4,
25 |                  grid_size=120,
26 |                  shock_size=250,
27 |                  seed=1234):
28 |
29 |         self.α, self.β, self.γ, self.μ, self.s = α, β, γ, μ, s
30 |
31 |         # Set up grid
32 |         self.grid = np.linspace(1e-5, grid_max, grid_size)
33 |
34 |         # Store shocks (with a seed, so results are reproducible)
35 |         np.random.seed(seed)
36 |         self.shocks = np.exp(μ + s * np.random.randn(shock_size))
37 |
38 |     def f(self, k):
39 |         "The production function."
40 |         return k**self.α
41 |
42 |     def u(self, c):
43 |         "The utility function."
44 |         return c**(1 - self.γ) / (1 - self.γ)
45 |
46 |     def f_prime(self, k):
47 |         "Derivative of f."
48 |         return self.α * (k**(self.α - 1))
49 |
50 |     def u_prime(self, c):
51 |         "Derivative of u."
52 |         return c**(-self.γ)
53 |
54 |     def u_prime_inv(self, c):
55 |         "Inverse of u'."
56 |         return c**(-1 / self.γ)
56 |
--------------------------------------------------------------------------------
/.github/workflows/cache.yml:
--------------------------------------------------------------------------------
1 | name: Build Cache [using jupyter-book]
2 | on:
3 |   push:
4 |     branches:
5 |       - main
6 | jobs:
7 |   tests:
8 |     runs-on: ubuntu-latest
9 |     steps:
10 |       - name: Checkout
11 |         uses: actions/checkout@v4
12 |       - name: Setup Anaconda
13 |         uses: conda-incubator/setup-miniconda@v3
14 |         with:
15 |           auto-update-conda: true
16 |           auto-activate-base: true
17 |           miniconda-version: 'latest'
18 |           python-version: "3.11"
19 |           environment-file: environment.yml
20 |           activate-environment: quantecon
21 |       - name: Graphics Support #TODO: Review if graphviz is needed
22 |         run: |
23 |           sudo apt-get -qq update && sudo apt-get install -y graphviz
24 |       - name: Install latex dependencies
25 |         run: |
26 |           sudo apt-get -qq update
27 |           sudo apt-get install -y \
28 |             texlive-latex-recommended \
29 |             texlive-latex-extra \
30 |             texlive-fonts-recommended \
31 |             texlive-fonts-extra \
32 |             texlive-xetex \
33 |             latexmk \
34 |             xindy \
35 |             dvipng \
36 |             cm-super
37 |       - name: Build HTML
38 |         shell: bash -l {0}
39 |         run: |
40 |           jb build lectures --path-output ./ -W --keep-going
41 |       - name: Upload Execution Reports (HTML)
42 |         uses: actions/upload-artifact@v4
43 |         if: failure()
44 |         with:
45 |           name: execution-reports
46 |           path: _build/html/reports
47 |       - name: Upload "_build" folder (cache)
48 |         uses: actions/upload-artifact@v4
49 |         with:
50 |           name: build-cache
51 |           path: _build
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/orth_proj/orth_proj_thm3.tex:
--------------------------------------------------------------------------------
1 | \documentclass[convert={density=300,size=1080x800,outext=.png}]{standalone}
2 | \usepackage{tikz}
3 | \usetikzlibrary{arrows.meta, arrows}
4 | \begin{document}
5 |
6 | %.. tikz::
7 | \begin{tikzpicture}
8 | [scale=5, axis/.style={<->, >=stealth'}, important line/.style={thick}, dotted line/.style={dotted, thick,red}, dashed line/.style={dashed, thin}, every node/.style={color=black}] \coordinate(O) at (0,0);
9 | \coordinate (uhat) at (-0.2,0.4);
10 | \coordinate (yhat) at (0.6,0.3);
11 | \coordinate (y) at (0.4,0.7);
12 | \coordinate (S1) at (-0.4,-0.2);
13 | \coordinate (S2) at (0.8,0.4);
14 | \coordinate (S3) at (-0.3,0.6);
15 | \coordinate (S4) at (0.12,-0.24);
16 | \draw[axis] (-0.5,0) -- (0.9,0) node(xline)[right] {};
17 | \draw[axis] (0,-0.3) -- (0,0.7) node(yline)[above] {};
18 | \draw[important line,blue,thick, ->] (O) -- (yhat) node[anchor = north west, text width=4em] {$P y$};
19 | \draw[important line,blue, ->] (O) -- (uhat) node[anchor = north east, text width=4em] {$M y$};
20 | \draw[important line,thick] (uhat) -- (S3) node [anchor = south east, text width=0.5em] {$S^{\perp}$};
21 | \draw[important line,thick] (O) -- (S4);
22 | \draw[important line, thick] (S1) -- (O) node[right] {};
23 | \draw[important line, thick] (yhat) -- (S2) node[right] {$S$};
24 | \draw[important line, blue,->] (O) -- (y) node[right] {$y$};
25 | \draw[dotted line] (-0.03,0.06) -- (0.03,0.09);
26 | \draw[dotted line] (0.06,0.03) -- (0.03,0.09);
27 | \draw[dotted line] (0.54,0.27) -- (0.51,0.33);
28 | \draw[dotted line] (0.57,0.36) -- (0.51,0.33);
29 | \draw[dotted line] (-0.17,0.34) -- (-0.11,0.37);
30 | \draw[dotted line] (-0.14,0.43) -- (-0.11,0.37);
31 | \draw[dashed line, black] (y) -- (yhat);
32 | \draw[dashed line, black] (y) -- (uhat);
33 | \end{tikzpicture}
34 |
35 | \end{document}
--------------------------------------------------------------------------------
/lectures/troubleshooting.md:
--------------------------------------------------------------------------------
1 | ---
2 | jupytext:
3 |   text_representation:
4 |     extension: .md
5 |     format_name: myst
6 | kernelspec:
7 |   display_name: Python 3
8 |   language: python
9 |   name: python3
10 | ---
11 |
12 | (troubleshooting)=
13 | ```{raw} html
14 |
19 | ```
20 |
21 | # Troubleshooting
22 |
23 | This page is for readers experiencing errors when running the code from the lectures.
24 |
25 | ## Fixing Your Local Environment
26 |
27 | The basic assumption of the lectures is that code in a lecture should execute whenever
28 |
29 | 1. it is executed in a Jupyter notebook and
30 | 1. the notebook is running on a machine with the latest version of Anaconda Python.
31 |
32 | You have installed Anaconda, haven't you, following the instructions in [this lecture](https://python-programming.quantecon.org/getting_started.html)?
33 |
34 | Assuming that you have, the most common source of problems for our readers is that their Anaconda distribution is not up to date.
35 |
36 | [Here's a useful article](https://www.anaconda.com/blog/keeping-anaconda-date)
37 | on how to update Anaconda.
38 |
39 | Another option is to simply remove Anaconda and reinstall.
40 |
41 | You also need to keep the external code libraries, such as [QuantEcon.py](https://quantecon.org/quantecon-py) up to date.
42 |
43 | For this task you can either
44 |
45 | * use `conda install -y quantecon` on the command line, or
46 | * execute `!conda install -y quantecon` within a Jupyter notebook.
47 |
48 | If your local environment is still not working, you can do two things.
49 |
50 | First, you can use a remote machine instead, by clicking on the Launch Notebook icon available for each lecture
51 |
52 | ```{image} _static/lecture_specific/troubleshooting/launch.png
53 |
54 | ```
55 |
56 | Second, you can report an issue, so we can try to fix your local setup.
57 |
58 | We like getting feedback on the lectures so please don't hesitate to get in
59 | touch.
60 |
61 | ## Reporting an Issue
62 |
63 | One way to give feedback is to raise an issue through our [issue tracker](https://github.com/QuantEcon/lecture-python/issues).
64 |
65 | Please be as specific as possible. Tell us where the problem is and as much
66 | detail about your local setup as you can provide.
67 |
68 | Another feedback option is to use our [discourse forum](https://discourse.quantecon.org/).
69 |
70 | Finally, you can provide direct feedback to [contact@quantecon.org](mailto:contact@quantecon.org).
71 |
72 |
--------------------------------------------------------------------------------
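
As a concrete form of the update step described in troubleshooting.md above, here is the notebook variant of the command it mentions (a sketch; in Jupyter, the leading `!` forwards the line to the system shell):

```python
# Run in a Jupyter notebook cell to update the QuantEcon library,
# per the instructions in troubleshooting.md above.
!conda install -y quantecon
```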
/.github/workflows/ci.yml:
--------------------------------------------------------------------------------
1 | name: Build HTML [using jupyter-book]
2 | on: [pull_request]
3 | jobs:
4 |   preview:
5 |     runs-on: ubuntu-latest
6 |     steps:
7 |       - name: Checkout
8 |         uses: actions/checkout@v4
9 |       - name: Setup Anaconda
10 |         uses: conda-incubator/setup-miniconda@v3
11 |         with:
12 |           auto-update-conda: true
13 |           auto-activate-base: true
14 |           miniconda-version: 'latest'
15 |           python-version: "3.11"
16 |           environment-file: environment.yml
17 |           activate-environment: quantecon
18 |       - name: Graphics Support #TODO: Review if graphviz is needed
19 |         run: |
20 |           sudo apt-get -qq update && sudo apt-get install -y graphviz
21 |       - name: Install latex dependencies
22 |         run: |
23 |           sudo apt-get -qq update
24 |           sudo apt-get install -y \
25 |             texlive-latex-recommended \
26 |             texlive-latex-extra \
27 |             texlive-fonts-recommended \
28 |             texlive-fonts-extra \
29 |             texlive-xetex \
30 |             latexmk \
31 |             xindy \
32 |             dvipng \
33 |             cm-super
34 |       - name: Display Conda Environment Versions
35 |         shell: bash -l {0}
36 |         run: conda list
37 |       - name: Display Pip Versions
38 |         shell: bash -l {0}
39 |         run: pip list
40 |       - name: Download "build" folder (cache)
41 |         uses: dawidd6/action-download-artifact@v3
42 |         with:
43 |           workflow: cache.yml
44 |           branch: main
45 |           name: build-cache
46 |           path: _build
47 |       # Build Assets (Download Notebooks and PDF via LaTeX)
48 |       - name: Build PDF from LaTeX
49 |         shell: bash -l {0}
50 |         run: |
51 |           jb build lectures --builder pdflatex --path-output ./ -n --keep-going
52 |           mkdir -p _build/html/_pdf
53 |           cp -u _build/latex/*.pdf _build/html/_pdf
54 |       - name: Build Download Notebooks (sphinx-tojupyter)
55 |         shell: bash -l {0}
56 |         run: |
57 |           jb build lectures --path-output ./ --builder=custom --custom-builder=jupyter
58 |           mkdir -p _build/html/_notebooks
59 |           cp -u _build/jupyter/*.ipynb _build/html/_notebooks
60 |       # Build HTML (Website)
61 |       # BUG: rm .doctrees to remove `sphinx` rendering issues for ipywidget mimetypes
62 |       # and clear the sphinx cache for building final HTML documents.
63 |       - name: Build HTML
64 |         shell: bash -l {0}
65 |         run: |
66 |           rm -r _build/.doctrees
67 |           jb build lectures --path-output ./ -nW --keep-going
68 |       - name: Upload Execution Reports (HTML)
69 |         uses: actions/upload-artifact@v4
70 |         if: failure()
71 |         with:
72 |           name: execution-reports
73 |           path: _build/html/reports
74 |       - name: Preview Deploy to Netlify
75 |         uses: nwtgck/actions-netlify@v2
76 |         with:
77 |           publish-dir: '_build/html/'
78 |           production-branch: main
79 |           github-token: ${{ secrets.GITHUB_TOKEN }}
80 |           deploy-message: "Preview Deploy from GitHub Actions"
81 |         env:
82 |           NETLIFY_AUTH_TOKEN: ${{ secrets.NETLIFY_AUTH_TOKEN }}
83 |           NETLIFY_SITE_ID: ${{ secrets.NETLIFY_SITE_ID }}
84 |
--------------------------------------------------------------------------------
/.github/workflows/publish.yml:
--------------------------------------------------------------------------------
1 | name: Build & Publish to GH-PAGES
2 | on:
3 |   push:
4 |     tags:
5 |       - 'publish*'
6 | jobs:
7 |   publish:
8 |     if: github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags')
9 |     runs-on: ubuntu-latest
10 |     steps:
11 |       - name: Checkout
12 |         uses: actions/checkout@v4
13 |       - name: Setup Anaconda
14 |         uses: conda-incubator/setup-miniconda@v3
15 |         with:
16 |           auto-update-conda: true
17 |           auto-activate-base: true
18 |           miniconda-version: 'latest'
19 |           python-version: "3.11"
20 |           environment-file: environment.yml
21 |           activate-environment: quantecon
22 |       - name: Install latex dependencies
23 |         run: |
24 |           sudo apt-get -qq update
25 |           sudo apt-get install -y \
26 |             texlive-latex-recommended \
27 |             texlive-latex-extra \
28 |             texlive-fonts-recommended \
29 |             texlive-fonts-extra \
30 |             texlive-xetex \
31 |             latexmk \
32 |             xindy \
33 |             dvipng \
34 |             cm-super
35 |       - name: Display Conda Environment Versions
36 |         shell: bash -l {0}
37 |         run: conda list
38 |       - name: Display Pip Versions
39 |         shell: bash -l {0}
40 |         run: pip list
41 |       - name: Download "build" folder (cache)
42 |         uses: dawidd6/action-download-artifact@v3
43 |         with:
44 |           workflow: cache.yml
45 |           branch: main
46 |           name: build-cache
47 |           path: _build
48 |       # Build Assets (Download Notebooks and PDF via LaTeX)
49 |       - name: Build PDF from LaTeX
50 |         shell: bash -l {0}
51 |         run: |
52 |           jb build lectures --builder pdflatex --path-output ./ -n --keep-going
53 |       - name: Copy LaTeX PDF for GH-PAGES
54 |         shell: bash -l {0}
55 |         run: |
56 |           mkdir -p _build/html/_pdf
57 |           cp -u _build/latex/*.pdf _build/html/_pdf
58 |       - name: Build Download Notebooks (sphinx-tojupyter)
59 |         shell: bash -l {0}
60 |         run: |
61 |           jb build lectures --path-output ./ --builder=custom --custom-builder=jupyter
62 |       - name: Copy Download Notebooks for GH-PAGES
63 |         shell: bash -l {0}
64 |         run: |
65 |           mkdir -p _build/html/_notebooks
66 |           cp -u _build/jupyter/*.ipynb _build/html/_notebooks
67 |       # Build HTML (Website)
68 |       # BUG: rm .doctrees to remove `sphinx` rendering issues for ipywidget mimetypes
69 |       # and clear the sphinx cache for building final HTML documents.
70 |       - name: Build HTML
71 |         shell: bash -l {0}
72 |         run: |
73 |           rm -r _build/.doctrees
74 |           jb build lectures --path-output ./
75 |       - name: Deploy to Netlify
76 |         uses: nwtgck/actions-netlify@v2
77 |         with:
78 |           publish-dir: '_build/html/'
79 |           production-branch: main
80 |           github-token: ${{ secrets.GITHUB_TOKEN }}
81 |           deploy-message: "Deploy from GitHub Actions"
82 |         env:
83 |           NETLIFY_AUTH_TOKEN: ${{ secrets.NETLIFY_AUTH_TOKEN }}
84 |           NETLIFY_SITE_ID: ${{ secrets.NETLIFY_SITE_ID }}
85 |       - name: Deploy website to gh-pages
86 |         uses: peaceiris/actions-gh-pages@v3
87 |         with:
88 |           github_token: ${{ secrets.GITHUB_TOKEN }}
89 |           publish_dir: _build/html/
90 |           cname: dynamics.quantecon.org
91 |       - name: Upload "_build" folder (cache)
92 |         uses: actions/upload-artifact@v4
93 |         with:
94 |           name: build-publish
95 |           path: _build
96 |       # Sync notebooks
97 |       - name: Prepare lecture-dynamics.notebooks sync
98 |         shell: bash -l {0}
99 |         run: |
100 |           mkdir -p _build/lecture-dynamics.notebooks
101 |           cp -a _notebook_repo/. _build/lecture-dynamics.notebooks
102 |           cp _build/jupyter/*.ipynb _build/lecture-dynamics.notebooks
103 |           ls -a _build/lecture-dynamics.notebooks
104 |       - name: Commit latest notebooks to lecture-dynamics.notebooks
105 |         uses: cpina/github-action-push-to-another-repository@main
106 |         env:
107 |           API_TOKEN_GITHUB: ${{ secrets.QUANTECON_SERVICES_PAT }}
108 |         with:
109 |           source-directory: '_build/lecture-dynamics.notebooks/'
110 |           destination-repository-username: 'QuantEcon'
111 |           destination-repository-name: 'lecture-dynamics.notebooks'
112 |           commit-message: 'auto publishing updates to notebooks'
113 |           destination-github-username: 'quantecon-services'
114 |           user-email: services@quantecon.org
115 |
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/lucas_model/lucastree.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | from scipy.stats import lognorm
3 | from scipy.integrate import fixed_quad
4 |
5 |
6 | class LucasTree:
7 | """
8 | Class to store parameters of a the Lucas tree model, a grid for the
9 | iteration step and some other helpful bits and pieces.
10 |
11 | Parameters
12 | ----------
13 | γ : scalar(float)
14 | The coefficient of risk aversion in the household's CRRA utility
15 | function
16 | β : scalar(float)
17 | The household's discount factor
18 | α : scalar(float)
19 | The correlation coefficient in the shock process
20 | σ : scalar(float)
21 | The volatility of the shock process
22 | grid_size : int
23 | The size of the grid to use
24 |
25 | Attributes
26 | ----------
27 | γ, β, α, σ, grid_size : see Parameters
28 | grid : ndarray
29 | Properties for grid upon which prices are evaluated
30 | ϕ : scipy.stats.lognorm
31 | The distribution for the shock process
32 |
33 | Examples
34 | --------
35 | >>> tree = LucasTree(γ=2, β=0.95, α=0.90, σ=0.1)
36 | >>> price_vals = solve_lucas_model(tree)
37 |
38 | """
39 |
40 | def __init__(self,
41 | γ=2,
42 | β=0.95,
43 | α=0.90,
44 | σ=0.1,
45 | grid_size=100):
46 |
47 | self.γ, self.β, self.α, self.σ = γ, β, α, σ
48 |
49 | # == Set the grid interval to contain most of the mass of the
50 | # stationary distribution of the consumption endowment == #
51 | ssd = self.σ / np.sqrt(1 - self.α**2)
52 | grid_min, grid_max = np.exp(-4 * ssd), np.exp(4 * ssd)
53 | self.grid = np.linspace(grid_min, grid_max, grid_size)
54 | self.grid_size = grid_size
55 |
56 | # == set up distribution for shocks == #
57 | self.ϕ = lognorm(σ)
58 | self.draws = self.ϕ.rvs(500)
59 |
60 | # == h(y) = β * int G(y,z)^(1-γ) ϕ(dz) == #
61 | self.h = np.empty(self.grid_size)
62 | for i, y in enumerate(self.grid):
63 | self.h[i] = β * np.mean((y**α * self.draws)**(1 - γ))
64 |
65 |
66 |
67 | # == Now the functions that act on a Lucas tree == #
68 |
69 | def lucas_operator(f, tree, Tf=None):
70 | """
71 | The approximate Lucas operator, which computes and returns the
72 | updated function Tf on the grid points.
73 |
74 | Parameters
75 | ----------
76 | f : array_like(float)
77 | A candidate function on R_+ represented as points on a grid
78 | and should be flat NumPy array with len(f) = len(grid)
79 |
80 | tree : instance of LucasTree
81 | Stores the parameters of the problem
82 |
83 | Tf : array_like(float)
84 | Optional storage array for Tf
85 |
86 | Returns
87 | -------
88 | Tf : array_like(float)
89 | The updated function Tf
90 |
91 | Notes
92 | -----
93 | The argument `Tf` is optional, but recommended. If it is passed
94 | into this function, then we do not have to allocate any memory
95 | for the array here. As this function is often called many times
96 | in an iterative algorithm, this can save significant computation
97 | time.
98 |
99 | """
100 | grid, h = tree.grid, tree.h
101 | α, β = tree.α, tree.β
102 | z_vec = tree.draws
103 |
104 | # == turn f into a function == #
105 | Af = lambda x: np.interp(x, grid, f)
106 |
107 | # == set up storage if needed == #
108 | if Tf is None:
109 | Tf = np.empty_like(f)
110 |
111 | # == Apply the T operator to f using Monte Carlo integration == #
112 | for i, y in enumerate(grid):
113 | Tf[i] = h[i] + β * np.mean(Af(y**α * z_vec))
114 |
115 | return Tf
116 |
117 | def solve_lucas_model(tree, tol=1e-6, max_iter=500):
118 | """
119 | Compute the equilibrium price function associated with Lucas
120 | tree
121 |
122 | Parameters
123 | ----------
124 | tree : An instance of LucasTree
125 | Contains parameters
126 | tol : float
127 | error tolerance
128 | max_iter : int
129 | the maximum number of iterations
130 |
131 | Returns
132 | -------
133 | price : array_like(float)
134 | The prices at the grid points in the attribute `grid` of the object
135 |
136 | """
137 |
138 | # == simplify notation == #
139 | grid, grid_size = tree.grid, tree.grid_size
140 | γ = tree.γ
141 |
142 | # == Create storage array for lucas_operator. Reduces memory
143 | # allocation and speeds code up == #
144 | Tf = np.empty(grid_size)
145 |
146 | i = 0
147 | f = np.empty(grid_size) # Initial guess of f
148 | error = tol + 1
149 |
150 | while error > tol and i < max_iter:
151 | f_new = lucas_operator(f, tree, Tf)
152 | error = np.max(np.abs(f_new - f))
153 | f[:] = f_new
154 | i += 1
155 |
156 | price = f * grid**γ # Back out price vector
157 |
158 | return price
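159 |
160 |
161 | # == Illustrative usage (a sketch; assumes matplotlib is installed) == #
162 |
163 | if __name__ == '__main__':
164 |     import matplotlib.pyplot as plt
165 |
166 |     tree = LucasTree()
167 |     price_vals = solve_lucas_model(tree)
168 |
169 |     fig, ax = plt.subplots()
170 |     ax.plot(tree.grid, price_vals, label='price function')
171 |     ax.set_xlabel('$y$')
172 |     ax.legend()
173 |     plt.show()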
--------------------------------------------------------------------------------
/lectures/_config.yml:
--------------------------------------------------------------------------------
1 | title: Introduction to Economic Dynamics
2 | author: Thomas J. Sargent and John Stachurski
3 | logo: _static/qe-logo-large.png
4 | description: This website presents an introductory set of lectures on economic dynamics.
5 |
6 | parse:
7 | myst_enable_extensions:
8 | - amsmath
9 | - colon_fence
10 | - deflist
11 | - dollarmath
12 | - html_admonition
13 | - html_image
14 | - linkify
15 | - replacements
16 | - smartquotes
17 | - substitution
18 |
19 | only_build_toc_files: true
20 | execute:
21 | execute_notebooks: "cache"
22 | timeout: 2400
23 |
24 | bibtex_bibfiles:
25 | - _static/quant-econ.bib
26 |
27 | html:
28 | baseurl: https://dynamics.quantecon.org/
29 |
30 | latex:
31 | latex_documents:
32 | targetname: quantecon-dynamics.tex
33 |
34 | sphinx:
35 | extra_extensions: [sphinx_multitoc_numbering, sphinxext.rediraffe, sphinx_tojupyter, sphinxcontrib.youtube, sphinx.ext.todo, sphinx_exercise, sphinx_togglebutton, sphinx.ext.intersphinx, sphinx_reredirects]
36 | config:
37 | bibtex_reference_style: author_year
38 | nb_mime_priority_overrides: [
39 | # HTML
40 | ['html', 'application/vnd.jupyter.widget-view+json', 10],
41 | ['html', 'application/javascript', 20],
42 | ['html', 'text/html', 30],
43 | ['html', 'text/latex', 40],
44 | ['html', 'image/svg+xml', 50],
45 | ['html', 'image/png', 60],
46 | ['html', 'image/jpeg', 70],
47 | ['html', 'text/markdown', 80],
48 | ['html', 'text/plain', 90],
49 | # Jupyter Notebooks
50 | ['jupyter', 'application/vnd.jupyter.widget-view+json', 10],
51 | ['jupyter', 'application/javascript', 20],
52 | ['jupyter', 'text/html', 30],
53 | ['jupyter', 'text/latex', 40],
54 | ['jupyter', 'image/svg+xml', 50],
55 | ['jupyter', 'image/png', 60],
56 | ['jupyter', 'image/jpeg', 70],
57 | ['jupyter', 'text/markdown', 80],
58 | ['jupyter', 'text/plain', 90],
59 | # LaTeX
60 | ['latex', 'text/latex', 10],
61 | ['latex', 'application/pdf', 20],
62 | ['latex', 'image/png', 30],
63 | ['latex', 'image/jpeg', 40],
64 | ['latex', 'text/markdown', 50],
65 | ['latex', 'text/plain', 60],
66 | # Link Checker
67 | ['linkcheck', 'text/plain', 10],
68 | ]
69 | html_favicon: _static/lectures-favicon.ico
70 | html_theme: quantecon_book_theme
71 | html_static_path: ['_static']
72 | html_theme_options:
73 | authors:
74 | - name: Thomas J. Sargent
75 | url: http://www.tomsargent.com/
76 | - name: John Stachurski
77 | url: https://johnstachurski.net/
78 | header_organisation_url: https://quantecon.org
79 | header_organisation: QuantEcon
80 | repository_url: https://github.com/QuantEcon/lecture-dynamics.myst
81 | nb_repository_url: https://github.com/QuantEcon/lecture-dynamics.notebooks
82 | twitter: quantecon
83 | twitter_logo_url: https://assets.quantecon.org/img/qe-twitter-logo.png
84 | og_logo_url: https://assets.quantecon.org/img/qe-og-logo.png
85 | description: This website presents an introductory set of lectures on economic dynamics.
86 | keywords: Python, QuantEcon, Quantitative Economics, Economics, Sloan, Alfred P. Sloan Foundation, Tom J. Sargent, John Stachurski
87 | analytics:
88 | google_analytics_id: G-Y9JY7E8Z7N
89 | launch_buttons:
90 | colab_url : https://colab.research.google.com
91 | intersphinx_mapping:
92 | intro:
93 | - https://intro.quantecon.org/
94 | - null
95 | dle:
96 | - https://quantecon.github.io/lecture-dle/
97 | - null
98 | dps:
99 | - https://quantecon.github.io/lecture-dps/
100 | - null
101 | eqm:
102 | - https://quantecon.github.io/lecture-eqm/
103 | - null
104 | stats:
105 | - https://quantecon.github.io/lecture-stats/
106 | - null
107 | tools:
108 | - https://quantecon.github.io/lecture-tools-techniques/
109 | - null
110 | dynam:
111 | - https://quantecon.github.io/lecture-dynamics/
112 | - null
113 | mathjax3_config:
114 | tex:
115 | macros:
116 | "argmax" : "arg\\,max"
117 | "argmin" : "arg\\,min"
118 | mathjax_path: https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js
119 | # Local Redirects
120 | rediraffe_redirects:
121 | index_toc.md: intro.md
122 | # Remote Redirects
123 | # redirects:
124 | # heavy_tails: https://intro.quantecon.org/heavy_tails.html
125 | tojupyter_static_file_path: ["source/_static", "_static"]
126 | tojupyter_target_html: true
127 | tojupyter_urlpath: "https://dynamics.quantecon.org/"
128 | tojupyter_image_urlpath: "https://dynamics.quantecon.org/_static/"
129 | tojupyter_lang_synonyms: ["ipython", "ipython3", "python"]
130 | tojupyter_kernels:
131 | python3:
132 | kernelspec:
133 | display_name: "Python"
134 | language: python3
135 | name: python3
136 | file_extension: ".py"
137 | tojupyter_images_markdown: true
138 |
--------------------------------------------------------------------------------
/lectures/_static/lecture_specific/odu/odu.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | from scipy.stats import beta
3 | from scipy.interpolate import LinearNDInterpolator
4 | from scipy.integrate import fixed_quad
5 | from numpy import maximum as npmax
6 | class SearchProblem:
7 | """
8 | A class to store a given parameterization of the "offer distribution
9 | unknown" model.
10 |
11 | Parameters
12 | ----------
13 | β : scalar(float), optional(default=0.95)
14 | The discount parameter
15 | c : scalar(float), optional(default=0.6)
16 | The unemployment compensation
17 | F_a : scalar(float), optional(default=1)
18 | First parameter of β distribution on F
19 | F_b : scalar(float), optional(default=1)
20 | Second parameter of β distribution on F
21 | G_a : scalar(float), optional(default=3)
22 | First parameter of β distribution on G
23 | G_b : scalar(float), optional(default=1.2)
24 | Second parameter of β distribution on G
25 | w_max : scalar(float), optional(default=2)
26 | Maximum wage possible
27 | w_grid_size : scalar(int), optional(default=40)
28 | Size of the grid on wages
29 | π_grid_size : scalar(int), optional(default=40)
30 | Size of the grid on probabilities
31 |
32 | Attributes
33 | ----------
34 | β, c, w_max : see Parameters
35 | w_grid : np.ndarray
36 | Grid points over wages, ndim=1
37 | π_grid : np.ndarray
38 | Grid points over π, ndim=1
39 | grid_points : np.ndarray
40 | Combined grid points, ndim=2
41 | F : scipy.stats._distn_infrastructure.rv_frozen
42 | Beta distribution with params (F_a, F_b), scaled by w_max
43 | G : scipy.stats._distn_infrastructure.rv_frozen
44 | Beta distribution with params (G_a, G_b), scaled by w_max
45 | f : function
46 | Density of F
47 | g : function
48 | Density of G
49 | π_min : scalar(float)
50 | Minimum of grid over π
51 | π_max : scalar(float)
52 | Maximum of grid over π
53 | """
54 |
55 | def __init__(self, β=0.95, c=0.6, F_a=1, F_b=1, G_a=3, G_b=1.2,
56 | w_max=2, w_grid_size=40, π_grid_size=40):
57 |
58 | self.β, self.c, self.w_max = β, c, w_max
59 | self.F = beta(F_a, F_b, scale=w_max)
60 | self.G = beta(G_a, G_b, scale=w_max)
61 | self.f, self.g = self.F.pdf, self.G.pdf # Density functions
62 | self.π_min, self.π_max = 1e-3, 1 - 1e-3 # Avoids instability
63 | self.w_grid = np.linspace(0, w_max, w_grid_size)
64 | self.π_grid = np.linspace(self.π_min, self.π_max, π_grid_size)
65 | x, y = np.meshgrid(self.w_grid, self.π_grid)
66 | self.grid_points = np.column_stack((x.ravel(order='F'), y.ravel(order='F')))
67 |
68 |
69 | def q(self, w, π):
70 | """
71 | Updates π using Bayes' rule and the current wage observation w.
72 |
73 | Returns
74 | -------
75 |
76 | new_π : scalar(float)
77 | The updated probability
78 |
79 | """
80 |         # Bayes' rule: new_π = π f(w) / (π f(w) + (1 - π) g(w))
81 |         new_π = 1.0 / (1 + ((1 - π) * self.g(w)) / (π * self.f(w)))
82 |
83 |         # Clip new_π to the interval [π_min, π_max]
84 | new_π = np.maximum(np.minimum(new_π, self.π_max), self.π_min)
85 |
86 | return new_π
87 |
88 | def bellman_operator(self, v):
89 | """
90 |
91 |         The Bellman operator. Included for comparison. Value function
92 | iteration is not recommended for this problem. See the
93 | reservation wage operator below.
94 |
95 | Parameters
96 | ----------
97 | v : array_like(float, ndim=1, length=len(π_grid))
98 | An approximate value function represented as a
99 | one-dimensional array.
100 |
101 | Returns
102 | -------
103 | new_v : array_like(float, ndim=1, length=len(π_grid))
104 | The updated value function
105 |
106 | """
107 | # == Simplify names == #
108 | f, g, β, c, q = self.f, self.g, self.β, self.c, self.q
109 |
110 | vf = LinearNDInterpolator(self.grid_points, v)
111 | N = len(v)
112 | new_v = np.empty(N)
113 |
114 | for i in range(N):
115 | w, π = self.grid_points[i, :]
116 | v1 = w / (1 - β)
117 | integrand = lambda m: vf(m, q(m, π)) * (π * f(m) +
118 | (1 - π) * g(m))
119 | integral, error = fixed_quad(integrand, 0, self.w_max)
120 | v2 = c + β * integral
121 | new_v[i] = max(v1, v2)
122 |
123 | return new_v
124 |
125 | def get_greedy(self, v):
126 | """
127 | Compute optimal actions taking v as the value function.
128 |
129 | Parameters
130 | ----------
131 | v : array_like(float, ndim=1, length=len(π_grid))
132 | An approximate value function represented as a
133 | one-dimensional array.
134 |
135 | Returns
136 | -------
137 | policy : array_like(float, ndim=1, length=len(π_grid))
138 | The decision to accept or reject an offer where 1 indicates
139 | accept and 0 indicates reject
140 |
141 | """
142 | # == Simplify names == #
143 | f, g, β, c, q = self.f, self.g, self.β, self.c, self.q
144 |
145 | vf = LinearNDInterpolator(self.grid_points, v)
146 | N = len(v)
147 | policy = np.zeros(N, dtype=int)
148 |
149 | for i in range(N):
150 | w, π = self.grid_points[i, :]
151 | v1 = w / (1 - β)
152 | integrand = lambda m: vf(m, q(m, π)) * (π * f(m) +
153 | (1 - π) * g(m))
154 | integral, error = fixed_quad(integrand, 0, self.w_max)
155 | v2 = c + β * integral
156 | policy[i] = v1 > v2 # Evaluates to 1 or 0
157 |
158 | return policy
159 |
160 | def res_wage_operator(self, ϕ):
161 | """
162 |
163 | Updates the reservation wage function guess ϕ via the operator
164 | Q.
165 |
166 | Parameters
167 | ----------
168 | ϕ : array_like(float, ndim=1, length=len(π_grid))
169 |             The current guess of the reservation wage function
170 |
171 | Returns
172 | -------
173 | new_ϕ : array_like(float, ndim=1, length=len(π_grid))
174 | The updated reservation wage guess.
175 |
176 | """
177 | # == Simplify names == #
178 | β, c, f, g, q = self.β, self.c, self.f, self.g, self.q
179 | # == Turn ϕ into a function == #
180 | ϕ_f = lambda p: np.interp(p, self.π_grid, ϕ)
181 |
182 | new_ϕ = np.empty(len(ϕ))
183 | for i, π in enumerate(self.π_grid):
184 | def integrand(x):
185 | "Integral expression on right-hand side of operator"
186 | return npmax(x, ϕ_f(q(x, π))) * (π * f(x) + (1 - π) * g(x))
187 | integral, error = fixed_quad(integrand, 0, self.w_max)
188 | new_ϕ[i] = (1 - β) * c + β * integral
189 |
190 | return new_ϕ
191 |
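192 | # == Illustrative usage (a sketch, not part of the lecture code): iterate == #
193 | # == the reservation wage operator to an approximate fixed point          == #
194 |
195 | if __name__ == '__main__':
196 |     sp = SearchProblem()
197 |     ϕ = np.ones(len(sp.π_grid))  # initial guess
198 |     for _ in range(200):
199 |         ϕ_new = sp.res_wage_operator(ϕ)
200 |         error = np.max(np.abs(ϕ_new - ϕ))
201 |         ϕ = ϕ_new
202 |         if error < 1e-6:
203 |             break
204 |     print("Approximate reservation wages:", ϕ)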
--------------------------------------------------------------------------------
/lectures/egm_policy_iter.md:
--------------------------------------------------------------------------------
1 | ---
2 | jupytext:
3 | text_representation:
4 | extension: .md
5 | format_name: myst
6 | kernelspec:
7 | display_name: Python 3
8 | language: python
9 | name: python3
10 | ---
11 |
19 |
20 | # {index}`Optimal Growth IV: The Endogenous Grid Method `
21 |
22 | ```{contents} Contents
23 | :depth: 2
24 | ```
25 |
26 | In addition to what's in Anaconda, this lecture will need the following libraries:
27 |
28 | ```{code-cell} ipython
29 | ---
30 | tags: [hide-output]
31 | ---
32 | !pip install interpolation
33 | ```
34 |
35 | ## Overview
36 |
37 | Previously, we solved the stochastic optimal growth model using
38 |
39 | 1. {doc}`value function iteration `
40 | 1. {doc}`Euler equation based time iteration `
41 |
42 | We found time iteration to be significantly more accurate and efficient.
43 |
44 | In this lecture, we'll look at a clever twist on time iteration called the **endogenous grid method** (EGM).
45 |
46 | EGM is a numerical method for implementing policy iteration invented by [Chris Carroll](http://www.econ2.jhu.edu/people/ccarroll/).
47 |
48 | The original reference is {cite}`Carroll2006`.
49 |
50 | Let's start with some standard imports:
51 |
52 | ```{code-cell} ipython
53 | import matplotlib.pyplot as plt
54 | plt.rcParams["figure.figsize"] = (11, 5) #set default figure size
55 | import numpy as np
56 | from interpolation import interp
57 | from numba import njit
58 | ```
59 |
60 | ## Key Idea
61 |
62 | Let's start by reminding ourselves of the theory and then see how the numerics fit in.
63 |
64 | ### Theory
65 |
66 | Take the model set out in {doc}`the time iteration lecture `, following the same terminology and notation.
67 |
68 | The Euler equation is
69 |
70 | ```{math}
71 | :label: egm_euler
72 |
73 | (u'\circ \sigma^*)(y)
74 | = \beta \int (u'\circ \sigma^*)(f(y - \sigma^*(y)) z) f'(y - \sigma^*(y)) z \phi(dz)
75 | ```
76 |
77 | As we saw, the Coleman-Reffett operator is a nonlinear operator $K$ engineered so that $\sigma^*$ is a fixed point of $K$.
78 |
79 | It takes as its argument a continuous strictly increasing consumption policy $\sigma \in \Sigma$.
80 |
81 | It returns a new function $K \sigma$, where $(K \sigma)(y)$ is the $c \in (0, \infty)$ that solves
82 |
83 | ```{math}
84 | :label: egm_coledef
85 |
86 | u'(c)
87 | = \beta \int (u' \circ \sigma) (f(y - c) z ) f'(y - c) z \phi(dz)
88 | ```
89 |
90 | ### Exogenous Grid
91 |
92 | As discussed in {doc}`the lecture on time iteration `, to implement the method on a computer, we need a numerical approximation.
93 |
94 | In particular, we represent a policy function by a set of values on a finite grid.
95 |
96 | The function itself is reconstructed from this representation when necessary, using interpolation or some other method.
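
For instance, with linear interpolation (a small sketch --- the grid and the function values here are hypothetical):

```{code-cell} python3
y_grid = np.linspace(0.1, 4, 10)  # hypothetical grid
vals = np.sqrt(y_grid)            # hypothetical function values on the grid
np.interp(2.5, y_grid, vals)      # evaluate the interpolant off the grid
```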
97 |
98 | {doc}`Previously `, to obtain a finite representation of an updated consumption policy, we
99 |
100 | * fixed a grid of income points $\{y_i\}$
101 | * calculated the consumption value $c_i$ corresponding to each
102 | $y_i$ using {eq}`egm_coledef` and a root-finding routine
103 |
104 | Each $c_i$ is then interpreted as the value of the function $K \sigma$ at $y_i$.
105 |
106 | Thus, with the points $\{y_i, c_i\}$ in hand, we can reconstruct $K \sigma$ via approximation.
107 |
108 | Iteration then continues...
109 |
110 | ### Endogenous Grid
111 |
112 | The method discussed above requires a root-finding routine to find the
113 | $c_i$ corresponding to a given income value $y_i$.
114 |
115 | Root-finding is costly because it typically involves a significant number of
116 | function evaluations.
117 |
118 | As pointed out by Carroll {cite}`Carroll2006`, we can avoid this if
119 | $y_i$ is chosen endogenously.
120 |
121 | The only assumption required is that $u'$ is invertible on $(0, \infty)$.
122 |
123 | Let $(u')^{-1}$ be the inverse function of $u'$. (For example, under CRRA utility $u(c) = c^{1-\gamma}/(1-\gamma)$, we have $u'(c) = c^{-\gamma}$, so $(u')^{-1}(x) = x^{-1/\gamma}$.)
124 |
125 | The idea is this:
126 |
127 | * First, we fix an *exogenous* grid $\{k_i\}$ for capital ($k = y - c$).
128 | * Then we obtain $c_i$ via
129 |
130 | ```{math}
131 | :label: egm_getc
132 |
133 | c_i =
134 | (u')^{-1}
135 | \left\{
136 | \beta \int (u' \circ \sigma) (f(k_i) z ) \, f'(k_i) \, z \, \phi(dz)
137 | \right\}
138 | ```
139 |
140 | * Finally, for each $c_i$ we set $y_i = c_i + k_i$.
141 |
142 | It is clear that each $(y_i, c_i)$ pair constructed in this manner satisfies {eq}`egm_coledef`.
143 |
144 | With the points $\{y_i, c_i\}$ in hand, we can reconstruct $K \sigma$ via approximation as before.
145 |
146 | The name EGM comes from the fact that the grid $\{y_i\}$ is determined **endogenously**.
147 |
148 | ## Implementation
149 |
150 | As {doc}`before `, we will start with a simple setting
151 | where
152 |
153 | * $u(c) = \ln c$,
154 | * production is Cobb-Douglas, and
155 | * the shocks are lognormal.
156 |
157 | This will allow us to make comparisons with the analytical solutions
158 |
159 | ```{code-cell} python3
160 | :load: _static/lecture_specific/optgrowth/cd_analytical.py
161 | ```
162 |
163 | We reuse the `OptimalGrowthModel` class
164 |
165 | ```{code-cell} python3
166 | :load: _static/lecture_specific/optgrowth_fast/ogm.py
167 | ```
168 |
169 | ### The Operator
170 |
171 | Here's an implementation of $K$ using EGM as described above.
172 |
173 | ```{code-cell} python3
174 | @njit
175 | def K(σ_array, og):
176 | """
177 | The Coleman-Reffett operator using EGM
178 |
179 | """
180 |
181 | # Simplify names
182 | f, β = og.f, og.β
183 | f_prime, u_prime = og.f_prime, og.u_prime
184 | u_prime_inv = og.u_prime_inv
185 | grid, shocks = og.grid, og.shocks
186 |
187 | # Determine endogenous grid
188 | y = grid + σ_array # y_i = k_i + c_i
189 |
190 | # Linear interpolation of policy using endogenous grid
191 | σ = lambda x: interp(y, σ_array, x)
192 |
193 | # Allocate memory for new consumption array
194 | c = np.empty_like(grid)
195 |
196 | # Solve for updated consumption value
197 | for i, k in enumerate(grid):
198 | vals = u_prime(σ(f(k) * shocks)) * f_prime(k) * shocks
199 | c[i] = u_prime_inv(β * np.mean(vals))
200 |
201 | return c
202 | ```
203 |
204 | Note the lack of any root-finding algorithm.
205 |
206 | ### Testing
207 |
208 | First we create an instance.
209 |
210 | ```{code-cell} python3
211 | og = OptimalGrowthModel()
212 | grid = og.grid
213 | ```
214 |
215 | Here's our solver routine:
216 |
217 | ```{code-cell} python3
218 | :load: _static/lecture_specific/coleman_policy_iter/solve_time_iter.py
219 | ```
220 |
221 | Let's call it:
222 |
223 | ```{code-cell} python3
224 | σ_init = np.copy(grid)
225 | σ = solve_model_time_iter(og, σ_init)
226 | ```
227 |
228 | Here is a plot of the resulting policy, compared with the true policy:
229 |
230 | ```{code-cell} python3
231 | y = grid + σ # y_i = k_i + c_i
232 |
233 | fig, ax = plt.subplots()
234 |
235 | ax.plot(y, σ, lw=2,
236 | alpha=0.8, label='approximate policy function')
237 |
238 | ax.plot(y, σ_star(y, og.α, og.β), 'k--',
239 | lw=2, alpha=0.8, label='true policy function')
240 |
241 | ax.legend()
242 | plt.show()
243 | ```
244 |
245 | The maximal absolute deviation between the two policies is
246 |
247 | ```{code-cell} python3
248 | np.max(np.abs(σ - σ_star(y, og.α, og.β)))
249 | ```
250 |
251 | How long does it take to converge?
252 |
253 | ```{code-cell} python3
254 | %%timeit -n 3 -r 1
255 | σ = solve_model_time_iter(og, σ_init, verbose=False)
256 | ```
257 |
258 | Relative to time iteration, which we already found to be highly efficient, EGM
259 | has managed to shave off still more run time without compromising accuracy.
260 |
261 | This is due to the lack of a numerical root-finding step.
262 |
263 | We can now solve the optimal growth model at given parameters extremely fast.
264 |
--------------------------------------------------------------------------------
/lectures/optgrowth_fast.md:
--------------------------------------------------------------------------------
1 | ---
2 | jupytext:
3 | text_representation:
4 | extension: .md
5 | format_name: myst
6 | kernelspec:
7 | display_name: Python 3
8 | language: python
9 | name: python3
10 | ---
11 |
12 | (optgrowth)=
20 |
21 | # {index}`Optimal Growth II: Accelerating the Code with Numba `
22 |
23 | In addition to what's in Anaconda, this lecture will need the following libraries:
24 |
25 | ```{code-cell} ipython
26 | ---
27 | tags: [hide-output]
28 | ---
29 | !pip install quantecon
30 | !pip install interpolation
31 | ```
32 |
33 | ## Overview
34 |
35 | {doc}`Previously `, we studied a stochastic optimal
36 | growth model with one representative agent.
37 |
38 | We solved the model using dynamic programming.
39 |
40 | In writing our code, we focused on clarity and flexibility.
41 |
42 | These are important, but there's often a trade-off between flexibility and
43 | speed.
44 |
45 | The reason is that, when code is less flexible, we can exploit structure more
46 | easily.
47 |
48 | (This is true about algorithms and mathematical problems more generally:
49 | more specific problems have more structure, which, with some thought, can be
50 | exploited for better results.)
51 |
52 | So, in this lecture, we are going to accept less flexibility while gaining
53 | speed, using just-in-time (JIT) compilation to
54 | accelerate our code.
55 |
56 | Let's start with some imports:
57 |
58 | ```{code-cell} ipython
59 | import matplotlib.pyplot as plt
60 | plt.rcParams["figure.figsize"] = (11, 5) #set default figure size
61 | import numpy as np
62 | from interpolation import interp
63 | from numba import jit, njit
64 | from quantecon.optimize.scalar_maximization import brent_max
65 | ```
66 |
67 | We are using an interpolation function from
68 | [interpolation.py](https://github.com/EconForge/interpolation.py) because it
69 | helps us JIT-compile our code.
70 |
71 | The function `brent_max` is also designed for embedding in JIT-compiled code.
72 |
73 | These are alternatives to similar functions in SciPy (which, unfortunately, are not JIT-aware).
74 |
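As a quick illustration of the interface (this example is ours, not part of the lecture's model code): `brent_max` maximizes a scalar function on an interval and returns the maximizer, the maximum value, and some diagnostic information.

```{code-cell} python3
@njit
def g(x):
    return -(x - 2.0)**2  # maximized at x = 2

x_star, g_max, info = brent_max(g, 0.0, 4.0)
x_star, g_max
```
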
75 | ## The Model
76 |
77 | ```{index} single: Optimal Growth; Model
78 | ```
79 |
80 | The model is the same as discussed in our {doc}`previous lecture `
81 | on optimal growth.
82 |
83 | We will start with log utility:
84 |
85 | $$
86 | u(c) = \ln(c)
87 | $$
88 |
89 | We continue to assume that
90 |
91 | * $f(k) = k^{\alpha}$
92 | * $\phi$ is the distribution of $\xi := \exp(\mu + s \zeta)$ when $\zeta$ is standard normal
93 |
94 | We will once again use value function iteration to solve the model.
95 |
96 | In particular, the algorithm is unchanged, and the only difference is in the implementation itself.
97 |
98 | As before, we will be able to compare with the true solutions. (For this log/Cobb-Douglas specification, the optimal policy has the known closed form $\sigma^*(y) = (1 - \alpha \beta) y$.)
99 |
100 | ```{code-cell} python3
101 | :load: _static/lecture_specific/optgrowth/cd_analytical.py
102 | ```
103 |
104 | ## Computation
105 |
106 | ```{index} single: Dynamic Programming; Computation
107 | ```
108 |
109 | We will again store the primitives of the optimal growth model in a class.
110 |
111 | But now we are going to use [Numba's](https://python-programming.quantecon.org/numba.html) `@jitclass` decorator to target our class for JIT compilation.
112 |
113 | Because we are going to use Numba to compile our class, we need to specify the data types.
114 |
115 | You will see this as a list called `opt_growth_data` above our class.
116 |
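If you have not seen this pattern before, here is a minimal sketch of how a `jitclass` specification works (the names `example_data` and `ExampleModel` are ours, for illustration only):

```{code-cell} python3
from numba import float64
from numba.experimental import jitclass

# Each entry pairs an attribute name with a Numba type
example_data = [
    ('α', float64),       # a scalar parameter
    ('grid', float64[:])  # a one-dimensional array of floats
]

@jitclass(example_data)
class ExampleModel:

    def __init__(self, α=0.4):
        self.α = α
        self.grid = np.linspace(1e-5, 4.0, 120)

m = ExampleModel()  # the class is compiled on first instantiation
m.α, m.grid.shape
```
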
117 | Unlike in the {doc}`previous lecture `, we
118 | hardwire the production and utility specifications into the
119 | class.
120 |
121 | This is where we sacrifice flexibility in order to gain more speed.
122 |
123 | ```{code-cell} python3
124 | :load: _static/lecture_specific/optgrowth_fast/ogm.py
125 | ```
126 |
127 | The class includes some methods such as `u_prime` that we do not need now
128 | but will use in later lectures.
129 |
130 | ### The Bellman Operator
131 |
132 | We will use JIT compilation to accelerate the Bellman operator.
133 |
134 | First, here's a function that returns the value of a particular consumption choice `c`, given state `y`, as per the Bellman equation {eq}`fpb30`.
135 |
136 | ```{code-cell} python3
137 | @njit
138 | def state_action_value(c, y, v_array, og):
139 | """
140 | Right hand side of the Bellman equation.
141 |
142 | * c is consumption
143 | * y is income
144 | * og is an instance of OptimalGrowthModel
145 | * v_array represents a guess of the value function on the grid
146 |
147 | """
148 |
149 | u, f, β, shocks = og.u, og.f, og.β, og.shocks
150 |
151 | v = lambda x: interp(og.grid, v_array, x)
152 |
153 | return u(c) + β * np.mean(v(f(y - c) * shocks))
154 | ```
155 |
156 | Now we can implement the Bellman operator, which maximizes the right hand side
157 | of the Bellman equation:
158 |
159 | ```{code-cell} python3
160 | @jit(nopython=True)
161 | def T(v, og):
162 | """
163 | The Bellman operator.
164 |
165 | * og is an instance of OptimalGrowthModel
166 | * v is an array representing a guess of the value function
167 |
168 | """
169 |
170 | v_new = np.empty_like(v)
171 | v_greedy = np.empty_like(v)
172 |
173 | for i in range(len(og.grid)):
174 | y = og.grid[i]
175 |
176 | # Maximize RHS of Bellman equation at state y
177 | result = brent_max(state_action_value, 1e-10, y, args=(y, v, og))
178 | v_greedy[i], v_new[i] = result[0], result[1]
179 |
180 | return v_greedy, v_new
181 | ```
182 |
183 | We use the `solve_model` function to perform iteration until convergence.
184 |
185 | ```{code-cell} python3
186 | :load: _static/lecture_specific/optgrowth/solve_model.py
187 | ```
188 |
189 | Let's compute the approximate solution at the default parameters.
190 |
191 | First we create an instance:
192 |
193 | ```{code-cell} python3
194 | og = OptimalGrowthModel()
195 | ```
196 |
197 | Now we call `solve_model`, using the `%%time` magic to check how long it
198 | takes.
199 |
200 | ```{code-cell} python3
201 | %%time
202 | v_greedy, v_solution = solve_model(og)
203 | ```
204 |
205 | You will notice that this is *much* faster than our {doc}`original implementation `.
206 |
207 | Here is a plot of the resulting policy, compared with the true policy:
208 |
209 | ```{code-cell} python3
210 | fig, ax = plt.subplots()
211 |
212 | ax.plot(og.grid, v_greedy, lw=2,
213 | alpha=0.8, label='approximate policy function')
214 |
215 | ax.plot(og.grid, σ_star(og.grid, og.α, og.β), 'k--',
216 | lw=2, alpha=0.8, label='true policy function')
217 |
218 | ax.legend()
219 | plt.show()
220 | ```
221 |
222 | Again, the fit is excellent --- this is as expected since we have not changed
223 | the algorithm.
224 |
225 | The maximal absolute deviation between the two policies is
226 |
227 | ```{code-cell} python3
228 | np.max(np.abs(v_greedy - σ_star(og.grid, og.α, og.β)))
229 | ```
230 |
231 | ## Exercises
232 |
233 | ```{exercise}
234 | :label: ogfast_ex1
235 |
236 | Time how long it takes to iterate with the Bellman operator
237 | 20 times, starting from initial condition $v(y) = u(y)$.
238 |
239 | Use the default parameterization.
240 | ```
241 |
242 | ```{solution-start} ogfast_ex1
243 | :class: dropdown
244 | ```
245 |
246 | Let's set up the initial condition.
247 |
248 | ```{code-cell} ipython3
249 | v = og.u(og.grid)
250 | ```
251 |
252 | Here's the timing:
253 |
254 | ```{code-cell} ipython3
255 | %%time
256 |
257 | for i in range(20):
258 | v_greedy, v_new = T(v, og)
259 | v = v_new
260 | ```
261 |
262 | Compared with our {ref}`timing ` for the non-compiled version of
263 | value function iteration, the JIT-compiled code is usually an order of magnitude faster.
264 |
265 | ```{solution-end}
266 | ```
267 |
268 | ```{exercise}
269 | :label: ogfast_ex2
270 |
271 | Modify the optimal growth model to use the CRRA utility specification.
272 |
273 | $$
274 | u(c) = \frac{c^{1 - \gamma} } {1 - \gamma}
275 | $$
276 |
277 | Set `γ = 1.5` as the default value and maintain the other specifications.
278 |
279 | (Note that `jitclass` currently does not support inheritance, so you will
280 | have to copy the class and change the relevant parameters and methods.)
281 |
282 | Compute an estimate of the optimal policy, plot it and compare visually with
283 | the same plot from the {ref}`analogous exercise ` in the first optimal
284 | growth lecture.
285 |
286 | Compare execution time as well.
287 | ```
288 |
289 |
290 | ```{solution-start} ogfast_ex2
291 | :class: dropdown
292 | ```
293 |
294 | Here's our CRRA version of `OptimalGrowthModel`:
295 |
296 | ```{code-cell} python3
297 | :load: _static/lecture_specific/optgrowth_fast/ogm_crra.py
298 | ```
299 |
300 | Let's create an instance:
301 |
302 | ```{code-cell} python3
303 | og_crra = OptimalGrowthModel_CRRA()
304 | ```
305 |
306 | Now we call `solve_model`, using the `%%time` magic to check how long it
307 | takes.
308 |
309 | ```{code-cell} python3
310 | %%time
311 | v_greedy, v_solution = solve_model(og_crra)
312 | ```
313 |
314 | Here is a plot of the resulting policy:
315 |
316 | ```{code-cell} python3
317 | fig, ax = plt.subplots()
318 |
319 | ax.plot(og_crra.grid, v_greedy, lw=2,
320 |         alpha=0.6, label='approximate policy function')
321 |
322 | ax.legend(loc='lower right')
323 | plt.show()
324 | ```
325 |
326 | This matches the solution that we obtained in our non-jitted code,
327 | {ref}`in the exercises `.
328 |
329 | Execution time is an order of magnitude faster.
330 |
331 | ```{solution-end}
332 | ```
333 |
334 |
335 | ```{exercise-start}
336 | :label: ogfast_ex3
337 | ```
338 |
339 | In this exercise we return to the original log utility specification.
340 |
341 | Once an optimal consumption policy $\sigma$ is given, income follows
342 |
343 | $$
344 | y_{t+1} = f(y_t - \sigma(y_t)) \xi_{t+1}
345 | $$
346 |
347 | The next figure shows a simulation of 100 elements of this sequence for three
348 | different discount factors (and hence three different policies).
349 |
350 | ```{figure} /_static/lecture_specific/optgrowth/solution_og_ex2.png
351 | ```
352 |
353 | In each sequence, the initial condition is $y_0 = 0.1$.
354 |
355 | The discount factors are `discount_factors = (0.8, 0.9, 0.98)`.
356 |
357 | We have also dialed down the shocks a bit with `s = 0.05`.
358 |
359 | Otherwise, the parameters and primitives are the same as the log-linear model discussed earlier in the lecture.
360 |
361 | Notice that more patient agents typically have higher wealth.
362 |
363 | Replicate the figure modulo randomness.
364 |
365 | ```{exercise-end}
366 | ```
367 |
368 | ```{solution-start} ogfast_ex3
369 | :class: dropdown
370 | ```
371 |
372 | Here's one solution:
373 |
374 | ```{code-cell} python3
375 | def simulate_og(σ_func, og, y0=0.1, ts_length=100):
376 | '''
377 | Compute a time series given consumption policy σ.
378 | '''
379 | y = np.empty(ts_length)
380 | ξ = np.random.randn(ts_length-1)
381 | y[0] = y0
382 | for t in range(ts_length-1):
383 | y[t+1] = (y[t] - σ_func(y[t]))**og.α * np.exp(og.μ + og.s * ξ[t])
384 | return y
385 | ```
386 |
387 | ```{code-cell} python3
388 | fig, ax = plt.subplots()
389 |
390 | for β in (0.8, 0.9, 0.98):
391 |
392 | og = OptimalGrowthModel(β=β, s=0.05)
393 |
394 | v_greedy, v_solution = solve_model(og, verbose=False)
395 |
396 | # Define an optimal policy function
397 | σ_func = lambda x: interp(og.grid, v_greedy, x)
398 | y = simulate_og(σ_func, og)
399 | ax.plot(y, lw=2, alpha=0.6, label=rf'$\beta = {β}$')
400 |
401 | ax.legend(loc='lower right')
402 | plt.show()
403 | ```
404 |
405 | ```{solution-end}
406 | ```
407 |
--------------------------------------------------------------------------------
/lectures/sir_model.md:
--------------------------------------------------------------------------------
1 | ---
2 | jupytext:
3 | text_representation:
4 | extension: .md
5 | format_name: myst
6 | kernelspec:
7 | display_name: Python 3
8 | language: python
9 | name: python3
10 | ---
11 |
19 |
20 | # {index}`Modeling COVID 19 `
21 |
22 | ## Overview
23 |
24 | This is a Python version of the code for analyzing the COVID-19 pandemic
25 | provided by [Andrew Atkeson](https://sites.google.com/site/andyatkeson/).
26 |
27 | See, in particular
28 |
29 | * [NBER Working Paper No. 26867](https://www.nber.org/papers/w26867)
30 | * [COVID-19 Working papers and code](https://sites.google.com/site/andyatkeson/home?authuser=0)
31 |
32 | The purpose of his notes is to introduce economists to quantitative modeling
33 | of infectious disease dynamics.
34 |
35 | Dynamics are modeled using a standard SIR (Susceptible-Infected-Removed) model
36 | of disease spread.
37 |
38 | The model dynamics are represented by a system of ordinary differential
39 | equations.
40 |
41 | The main objective is to study the impact of suppression through social
42 | distancing on the spread of the infection.
43 |
44 | The focus is on US outcomes but the parameters can be adjusted to study
45 | other countries.
46 |
47 | We will use the following standard imports:
48 |
49 | ```{code-cell} ipython3
50 | import matplotlib.pyplot as plt
51 | plt.rcParams["figure.figsize"] = (11, 5) #set default figure size
52 | import numpy as np
53 | from numpy import exp
54 | ```
55 |
56 | We will also use SciPy's numerical routine odeint for solving differential
57 | equations.
58 |
59 | ```{code-cell} ipython3
60 | from scipy.integrate import odeint
61 | ```
62 |
63 | This routine calls into compiled code from the FORTRAN library odepack.
64 |
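As a quick sanity check of the interface (this check is ours, not part of Atkeson's code), we can solve a scalar ODE with a known solution:

```{code-cell} ipython3
# Solve dy/dt = -y with y(0) = 1; the exact solution is y(t) = exp(-t)
t_test = np.linspace(0, 1, 25)
y_path = odeint(lambda y, t: -y, 1.0, t_test)
y_path[-1, 0], exp(-1)  # these two numbers should nearly coincide
```
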
65 | ## The SIR Model
66 |
67 | In the version of the SIR model we will analyze there are four states.
68 |
69 | All individuals in the population are assumed to be in one of these four states.
70 |
71 | The states are: susceptible (S), exposed (E), infected (I) and removed (R).
72 |
73 | Comments:
74 |
75 | * Those in state R have been infected and either recovered or died.
76 | * Those who have recovered are assumed to have acquired immunity.
77 | * Those in the exposed group are not yet infectious.
78 |
79 | ### Time Path
80 |
81 | The flow across states follows the path $S \to E \to I \to R$.
82 |
83 | All individuals in the population are eventually infected when
84 | the transmission rate is positive and $i(0) > 0$.
85 |
86 | The interest is primarily in
87 |
88 | * the number of infections at a given time (which determines whether or not the health care system is overwhelmed) and
89 | * how long the caseload can be deferred (hopefully until a vaccine arrives)
90 |
91 | Using lower case letters for the fraction of the population in each state, the
92 | dynamics are
93 |
94 | ```{math}
95 | :label: sir_system
96 |
97 | \begin{aligned}
98 | \dot s(t) & = - \beta(t) \, s(t) \, i(t)
99 | \\
100 | \dot e(t) & = \beta(t) \, s(t) \, i(t) - σ e(t)
101 | \\
102 | \dot i(t) & = σ e(t) - γ i(t)
103 | \end{aligned}
104 | ```
105 |
106 | In these equations,
107 |
108 | * $\beta(t)$ is called the *transmission rate* (the rate at which individuals bump into others and expose them to the virus).
109 | * $\sigma$ is called the *infection rate* (the rate at which those who are exposed become infected)
110 | * $\gamma$ is called the *recovery rate* (the rate at which infected people recover or die).
111 | * the dot symbol $\dot y$ represents the time derivative $dy/dt$.
112 |
113 | We do not need to model the fraction $r$ of the population in state $R$ separately because the states form a partition.
114 |
115 | In particular, the "removed" fraction of the population is $r = 1 - s - e - i$.
116 |
117 | We will also track $c = i + r$, which is the cumulative caseload
118 | (i.e., all those who have or have had the infection).
119 |
120 | The system {eq}`sir_system` can be written in vector form as
121 |
122 | ```{math}
123 | :label: dfcv
124 |
125 | \dot x = F(x, t), \qquad x := (s, e, i)
126 | ```
127 |
128 | for suitable definition of $F$ (see the code below).
129 |
130 | ### Parameters
131 |
132 | Both $\sigma$ and $\gamma$ are thought of as fixed, biologically determined parameters.
133 |
134 | As in Atkeson's note, we set
135 |
136 | * $\sigma = 1/5.2$ to reflect an average incubation period of 5.2 days.
137 | * $\gamma = 1/18$ to match an average illness duration of 18 days.
138 |
139 | The transmission rate is modeled as
140 |
141 | * $\beta(t) := R(t) \gamma$ where $R(t)$ is the *effective reproduction number* at time $t$.
142 |
143 | (The notation is slightly confusing, since $R(t)$ is different to
144 | $R$, the symbol that represents the removed state.)
145 |
146 | ## Implementation
147 |
148 | First we set the population size to match the US.
149 |
150 | ```{code-cell} ipython3
151 | pop_size = 3.3e8
152 | ```
153 |
154 | Next we fix parameters as described above.
155 |
156 | ```{code-cell} ipython3
157 | γ = 1 / 18
158 | σ = 1 / 5.2
159 | ```
160 |
161 | Now we construct a function that represents $F$ in {eq}`dfcv`
162 |
163 | ```{code-cell} ipython3
164 | def F(x, t, R0=1.6):
165 | """
166 | Time derivative of the state vector.
167 |
168 | * x is the state vector (array_like)
169 | * t is time (scalar)
170 |     * R0 is the effective reproduction number, defaulting to a constant
171 |
172 | """
173 | s, e, i = x
174 |
175 | # New exposure of susceptibles
176 | β = R0(t) * γ if callable(R0) else R0 * γ
177 | ne = β * s * i
178 |
179 | # Time derivatives
180 | ds = - ne
181 | de = ne - σ * e
182 | di = σ * e - γ * i
183 |
184 | return ds, de, di
185 | ```
186 |
187 | Note that `R0` can be either constant or a given function of time.
188 |
189 | The initial conditions are set to
190 |
191 | ```{code-cell} ipython3
192 | # initial conditions of s, e, i
193 | i_0 = 1e-7
194 | e_0 = 4 * i_0
195 | s_0 = 1 - i_0 - e_0
196 | ```
197 |
198 | In vector form the initial condition is
199 |
200 | ```{code-cell} ipython3
201 | x_0 = s_0, e_0, i_0
202 | ```
203 |
204 | We solve for the time path numerically using odeint, at a sequence of dates
205 | `t_vec`.
206 |
207 | ```{code-cell} ipython3
208 | def solve_path(R0, t_vec, x_init=x_0):
209 | """
210 | Solve for i(t) and c(t) via numerical integration,
211 | given the time path for R0.
212 |
213 | """
214 | G = lambda x, t: F(x, t, R0)
215 | s_path, e_path, i_path = odeint(G, x_init, t_vec).transpose()
216 |
217 |     c_path = 1 - s_path - e_path       # cumulative cases: c = i + r = 1 - s - e
218 | return i_path, c_path
219 | ```
220 |
221 | ## Experiments
222 |
223 | Let's run some experiments using this code.
224 |
225 | The time period we investigate will be 550 days, or around 18 months:
226 |
227 | ```{code-cell} ipython3
228 | t_length = 550
229 | grid_size = 1000
230 | t_vec = np.linspace(0, t_length, grid_size)
231 | ```
232 |
233 | ### Experiment 1: Constant R0 Case
234 |
235 | Let's start with the case where `R0` is constant.
236 |
237 | We calculate the time path of infected people under different assumptions for `R0`:
238 |
239 | ```{code-cell} ipython3
240 | R0_vals = np.linspace(1.6, 3.0, 6)
241 | labels = [f'$R0 = {r:.2f}$' for r in R0_vals]
242 | i_paths, c_paths = [], []
243 |
244 | for r in R0_vals:
245 | i_path, c_path = solve_path(r, t_vec)
246 | i_paths.append(i_path)
247 | c_paths.append(c_path)
248 | ```
249 |
250 | Here's some code to plot the time paths.
251 |
252 | ```{code-cell} ipython3
253 | def plot_paths(paths, labels, times=t_vec):
254 |
255 | fig, ax = plt.subplots()
256 |
257 | for path, label in zip(paths, labels):
258 | ax.plot(times, path, label=label)
259 |
260 | ax.legend(loc='upper left')
261 |
262 | plt.show()
263 | ```
264 |
265 | Let's plot current cases as a fraction of the population.
266 |
267 | ```{code-cell} ipython3
268 | plot_paths(i_paths, labels)
269 | ```
270 |
271 | As expected, lower effective transmission rates defer the peak of infections.
272 |
273 | They also lead to a lower peak in current cases.
274 |
275 | Here are cumulative cases, as a fraction of population:
276 |
277 | ```{code-cell} ipython3
278 | plot_paths(c_paths, labels)
279 | ```
280 |
281 | ### Experiment 2: Changing Mitigation
282 |
283 | Let's look at a scenario where mitigation (e.g., social distancing) is
284 | successively imposed.
285 |
286 | Here's a specification for `R0` as a function of time.
287 |
288 | ```{code-cell} ipython3
289 | def R0_mitigating(t, r0=3, η=1, r_bar=1.6):
290 | R0 = r0 * exp(- η * t) + (1 - exp(- η * t)) * r_bar
291 | return R0
292 | ```
293 |
294 | The idea is that `R0` starts off at 3 and falls to 1.6.
295 |
296 | This is due to progressive adoption of stricter mitigation measures.
297 |
298 | The parameter `η` controls the rate, or the speed at which restrictions are
299 | imposed.
300 |
301 | We consider several different rates:
302 |
303 | ```{code-cell} ipython3
304 | η_vals = 1/5, 1/10, 1/20, 1/50, 1/100
305 | labels = [fr'$\eta = {η:.2f}$' for η in η_vals]
306 | ```
307 |
308 | This is what the time path of `R0` looks like at these alternative rates:
309 |
310 | ```{code-cell} ipython3
311 | fig, ax = plt.subplots()
312 |
313 | for η, label in zip(η_vals, labels):
314 | ax.plot(t_vec, R0_mitigating(t_vec, η=η), label=label)
315 |
316 | ax.legend()
317 | plt.show()
318 | ```
319 |
320 | Let's calculate the time path of infected people:
321 |
322 | ```{code-cell} ipython3
323 | i_paths, c_paths = [], []
324 |
325 | for η in η_vals:
326 | R0 = lambda t: R0_mitigating(t, η=η)
327 | i_path, c_path = solve_path(R0, t_vec)
328 | i_paths.append(i_path)
329 | c_paths.append(c_path)
330 | ```
331 |
332 | These are current cases under the different scenarios:
333 |
334 | ```{code-cell} ipython3
335 | plot_paths(i_paths, labels)
336 | ```
337 |
338 | Here are cumulative cases, as a fraction of population:
339 |
340 | ```{code-cell} ipython3
341 | plot_paths(c_paths, labels)
342 | ```
343 |
344 | ## Ending Lockdown
345 |
346 | The following replicates [additional results](https://drive.google.com/file/d/1uS7n-7zq5gfSgrL3S0HByExmpq4Bn3oh/view) by Andrew Atkeson on the timing of lifting lockdown.
347 |
348 | Consider these two mitigation scenarios:
349 |
350 | 1. $R_t = 0.5$ for 30 days and then $R_t = 2$ for the remaining 17 months. This corresponds to lifting lockdown in 30 days.
351 | 1. $R_t = 0.5$ for 120 days and then $R_t = 2$ for the remaining 14 months. This corresponds to lifting lockdown in 4 months.
352 |
353 | The parameters considered here start the model with 25,000 active infections
354 | and 75,000 agents already exposed to the virus and thus soon to be contagious.
355 |
356 | ```{code-cell} ipython3
357 | # initial conditions
358 | i_0 = 25_000 / pop_size
359 | e_0 = 75_000 / pop_size
360 | s_0 = 1 - i_0 - e_0
361 | x_0 = s_0, e_0, i_0
362 | ```
363 |
364 | Let's calculate the paths:
365 |
366 | ```{code-cell} ipython3
367 | R0_paths = (lambda t: 0.5 if t < 30 else 2,
368 | lambda t: 0.5 if t < 120 else 2)
369 |
370 | labels = [f'scenario {i}' for i in (1, 2)]
371 |
372 | i_paths, c_paths = [], []
373 |
374 | for R0 in R0_paths:
375 | i_path, c_path = solve_path(R0, t_vec, x_init=x_0)
376 | i_paths.append(i_path)
377 | c_paths.append(c_path)
378 | ```
379 |
380 | Here is the number of active infections:
381 |
382 | ```{code-cell} ipython3
383 | plot_paths(i_paths, labels)
384 | ```
385 |
386 | What kind of mortality can we expect under these scenarios?
387 |
388 | Suppose that 1% of cases result in death
389 |
390 | ```{code-cell} ipython3
391 | ν = 0.01
392 | ```
393 |
394 | This is the cumulative number of deaths:
395 |
396 | ```{code-cell} ipython3
397 | paths = [path * ν * pop_size for path in c_paths]
398 | plot_paths(paths, labels)
399 | ```
400 |
401 | This is the daily death rate (the flow out of the infected state is $\gamma \, i(t)$, of which a fraction $\nu$ are deaths):
402 |
403 | ```{code-cell} ipython3
404 | paths = [path * ν * γ * pop_size for path in i_paths]
405 | plot_paths(paths, labels)
406 | ```
407 |
408 | Pushing the peak of the curve further into the future may reduce cumulative deaths
409 | if a vaccine is found.
410 |
--------------------------------------------------------------------------------
/lectures/inventory_dynamics.md:
--------------------------------------------------------------------------------
1 | ---
2 | jupytext:
3 | text_representation:
4 | extension: .md
5 | format_name: myst
6 | kernelspec:
7 | display_name: Python 3
8 | language: python
9 | name: python3
10 | ---
11 |
19 |
20 | # Inventory Dynamics
21 |
22 | ```{index} single: Markov process, inventory
23 | ```
24 |
25 | ## Overview
26 |
27 | In this lecture we will study the time path of inventories for firms that
28 | follow so-called s-S inventory dynamics.
29 |
30 | Such firms
31 |
32 | 1. wait until inventory falls below some level $s$ and then
33 | 1. order sufficient quantities to bring their inventory back up to capacity $S$.
34 |
35 | These kinds of policies are common in practice and also optimal in certain circumstances.
36 |
37 | A review of early literature and some macroeconomic implications can be found in {cite}`caplin1985variability`.
38 |
39 | Here our main aim is to learn more about simulation, time series and Markov dynamics.
40 |
41 | While our Markov environment and many of the concepts we consider are related to those found in our {doc}`lecture on finite Markov chains `, the state space is a continuum in the current application.
42 |
43 | Let's start with some imports
44 |
45 | ```{code-cell} ipython3
46 | import matplotlib.pyplot as plt
47 | plt.rcParams["figure.figsize"] = (11, 5) #set default figure size
48 | import numpy as np
49 | from numba import njit, float64, prange
50 | from numba.experimental import jitclass
51 | ```
52 |
53 | ## Sample Paths
54 |
55 | Consider a firm with inventory $X_t$.
56 |
57 | The firm waits until $X_t \leq s$ and then restocks up to $S$ units.
58 |
59 | It faces stochastic demand $\{ D_t \}$, which we assume is IID.
60 |
61 | With notation $a^+ := \max\{a, 0\}$, inventory dynamics can be written
62 | as
63 |
64 | $$
65 | X_{t+1} =
66 | \begin{cases}
67 | ( S - D_{t+1})^+ & \quad \text{if } X_t \leq s \\
68 | ( X_t - D_{t+1} )^+ & \quad \text{if } X_t > s
69 | \end{cases}
70 | $$
71 |
72 | In what follows, we will assume that each $D_t$ is lognormal, so that
73 |
74 | $$
75 | D_t = \exp(\mu + \sigma Z_t)
76 | $$
77 |
78 | where $\mu$ and $\sigma$ are parameters and $\{Z_t\}$ is IID
79 | and standard normal.
80 |
81 | Here's a class that stores parameters and generates time paths for inventory.
82 |
83 | ```{code-cell} python3
84 | firm_data = [
85 | ('s', float64), # restock trigger level
86 | ('S', float64), # capacity
87 | ('mu', float64), # shock location parameter
88 | ('sigma', float64) # shock scale parameter
89 | ]
90 |
91 |
92 | @jitclass(firm_data)
93 | class Firm:
94 |
95 | def __init__(self, s=10, S=100, mu=1.0, sigma=0.5):
96 |
97 | self.s, self.S, self.mu, self.sigma = s, S, mu, sigma
98 |
99 | def update(self, x):
100 | "Update the state from t to t+1 given current state x."
101 |
102 | Z = np.random.randn()
103 | D = np.exp(self.mu + self.sigma * Z)
104 | if x <= self.s:
105 | return max(self.S - D, 0)
106 | else:
107 | return max(x - D, 0)
108 |
109 | def sim_inventory_path(self, x_init, sim_length):
110 |
111 | X = np.empty(sim_length)
112 | X[0] = x_init
113 |
114 | for t in range(sim_length-1):
115 | X[t+1] = self.update(X[t])
116 | return X
117 | ```
118 |
119 | Let's run a first simulation, of a single path:
120 |
121 | ```{code-cell} ipython3
122 | firm = Firm()
123 |
124 | s, S = firm.s, firm.S
125 | sim_length = 100
126 | x_init = 50
127 |
128 | X = firm.sim_inventory_path(x_init, sim_length)
129 |
130 | fig, ax = plt.subplots()
131 | bbox = (0., 1.02, 1., .102)
132 | legend_args = {'ncol': 3,
133 | 'bbox_to_anchor': bbox,
134 | 'loc': 3,
135 | 'mode': 'expand'}
136 |
137 | ax.plot(X, label="inventory")
138 | ax.plot(np.full(sim_length, s), 'k--', label="$s$")
139 | ax.plot(np.full(sim_length, S), 'k-', label="$S$")
140 | ax.set_ylim(0, S+10)
141 | ax.set_xlabel("time")
142 | ax.legend(**legend_args)
143 |
144 | plt.show()
145 | ```
146 |
147 | Now let's simulate multiple paths in order to build a more complete picture of
148 | the probabilities of different outcomes:
149 |
150 | ```{code-cell} ipython3
151 | sim_length = 200
152 | fig, ax = plt.subplots()
153 |
154 | ax.plot(np.full(sim_length, s), 'k--', label="$s$")
155 | ax.plot(np.full(sim_length, S), 'k-', label="$S$")
156 | ax.set_ylim(0, S+10)
157 | ax.legend(**legend_args)
158 |
159 | for i in range(400):
160 | X = firm.sim_inventory_path(x_init, sim_length)
161 | ax.plot(X, 'b', alpha=0.2, lw=0.5)
162 |
163 | plt.show()
164 | ```
165 |
166 | ## Marginal Distributions
167 |
168 | Now let’s look at the marginal distribution $\psi_T$ of $X_T$ for some
169 | fixed $T$.
170 |
171 | We will do this by generating many draws of $X_T$ given initial
172 | condition $X_0$.
173 |
174 | With these draws of $X_T$ we can build up a picture of its distribution $\psi_T$.
175 |
176 | Here's one visualization, with $T=50$.
177 |
178 | ```{code-cell} ipython3
179 | T = 50
180 | M = 200 # Number of draws
181 |
182 | ymin, ymax = 0, S + 10
183 |
184 | fig, axes = plt.subplots(1, 2, figsize=(11, 6))
185 |
186 | for ax in axes:
187 | ax.grid(alpha=0.4)
188 |
189 | ax = axes[0]
190 |
191 | ax.set_ylim(ymin, ymax)
192 | ax.set_ylabel('$X_t$', fontsize=16)
193 | ax.vlines((T,), ymin, ymax)
194 |
195 | ax.set_xticks((T,))
196 | ax.set_xticklabels((r'$T$',))
197 |
198 | sample = np.empty(M)
199 | for m in range(M):
200 | X = firm.sim_inventory_path(x_init, 2 * T)
201 | ax.plot(X, 'b-', lw=1, alpha=0.5)
202 |     ax.plot((T,), (X[T],), 'ko', alpha=0.5)
203 |     sample[m] = X[T]
204 |
205 | axes[1].set_ylim(ymin, ymax)
206 |
207 | axes[1].hist(sample,
208 | bins=16,
209 | density=True,
210 | orientation='horizontal',
211 | histtype='bar',
212 | alpha=0.5)
213 |
214 | plt.show()
215 | ```
216 |
217 | We can build up a clearer picture by drawing more samples
218 |
219 | ```{code-cell} ipython3
220 | T = 50
221 | M = 50_000
222 |
223 | fig, ax = plt.subplots()
224 |
225 | sample = np.empty(M)
226 | for m in range(M):
227 | X = firm.sim_inventory_path(x_init, T+1)
228 | sample[m] = X[T]
229 |
230 | ax.hist(sample,
231 | bins=36,
232 | density=True,
233 | histtype='bar',
234 | alpha=0.75)
235 |
236 | plt.show()
237 | ```
238 |
239 | Note that the distribution is bimodal
240 |
241 | * Most firms have restocked twice but a few have restocked only once (see figure with paths above).
242 | * Firms in the second category have lower inventory.
243 |
244 | We can also approximate the distribution using a [kernel density estimator](https://en.wikipedia.org/wiki/Kernel_density_estimation).
245 |
246 | Kernel density estimators can be thought of as smoothed histograms.
247 |
248 | They are preferable to histograms when the distribution being estimated is likely to be smooth.
249 |
250 | We will use a kernel density estimator from [scikit-learn](https://scikit-learn.org/stable/)
251 |
252 | ```{code-cell} ipython3
253 | from sklearn.neighbors import KernelDensity
254 |
255 | def plot_kde(sample, ax, label=''):
256 |
257 | xmin, xmax = 0.9 * min(sample), 1.1 * max(sample)
258 | xgrid = np.linspace(xmin, xmax, 200)
259 | kde = KernelDensity(kernel='gaussian').fit(sample[:, None])
260 | log_dens = kde.score_samples(xgrid[:, None])
261 |
262 | ax.plot(xgrid, np.exp(log_dens), label=label)
263 | ```
264 |
265 | ```{code-cell} ipython3
266 | fig, ax = plt.subplots()
267 | plot_kde(sample, ax)
268 | plt.show()
269 | ```
270 |
271 | The allocation of probability mass is similar to what was shown by the
272 | histogram just above.
273 |
274 | ## Exercises
275 |
276 | ```{exercise}
277 | :label: id_ex1
278 |
279 | This model is asymptotically stationary, with a unique stationary
280 | distribution.
281 |
282 | (See the discussion of stationarity in {doc}`our lecture on AR(1) processes ` for background --- the fundamental concepts are the same.)
283 |
284 | In particular, the sequence of marginal distributions $\{\psi_t\}$
285 | is converging to a unique limiting distribution that does not depend on
286 | initial conditions.
287 |
288 | Although we will not prove this here, we can investigate it using simulation.
289 |
290 | Your task is to generate and plot the sequence $\{\psi_t\}$ at times
291 | $t = 10, 50, 250, 500, 750$ based on the discussion above.
292 |
293 | (The kernel density estimator is probably the best way to present each
294 | distribution.)
295 |
296 | You should see convergence, in the sense that differences between successive distributions are getting smaller.
297 |
298 | Try different initial conditions to verify that, in the long run, the distribution is invariant across initial conditions.
299 | ```
300 |
301 | ```{solution-start} id_ex1
302 | :class: dropdown
303 | ```
304 |
305 | Below is one possible solution:
306 |
307 | The computations involve a lot of CPU cycles so we have tried to write the
308 | code efficiently.
309 |
310 | This meant writing a specialized function rather than using the class above.
311 |
312 | ```{code-cell} ipython3
313 | s, S, mu, sigma = firm.s, firm.S, firm.mu, firm.sigma
314 |
315 | @njit(parallel=True)
316 | def shift_firms_forward(current_inventory_levels, num_periods):
317 |
318 | num_firms = len(current_inventory_levels)
319 | new_inventory_levels = np.empty(num_firms)
320 |
321 | for f in prange(num_firms):
322 | x = current_inventory_levels[f]
323 | for t in range(num_periods):
324 | Z = np.random.randn()
325 | D = np.exp(mu + sigma * Z)
326 | if x <= s:
327 | x = max(S - D, 0)
328 | else:
329 | x = max(x - D, 0)
330 | new_inventory_levels[f] = x
331 |
332 | return new_inventory_levels
333 | ```
334 |
335 | ```{code-cell} ipython3
336 | x_init = 50
337 | num_firms = 50_000
338 |
339 | sample_dates = 0, 10, 50, 250, 500, 750
340 |
341 | first_diffs = np.diff(sample_dates)
342 |
343 | fig, ax = plt.subplots()
344 |
345 | X = np.full(num_firms, x_init)
346 |
347 | current_date = 0
348 | for d in first_diffs:
349 | X = shift_firms_forward(X, d)
350 | current_date += d
351 | plot_kde(X, ax, label=f't = {current_date}')
352 |
353 | ax.set_xlabel('inventory')
354 | ax.set_ylabel('probability')
355 | ax.legend()
356 | plt.show()
357 | ```
358 |
359 | Notice that by $t=500$ or $t=750$ the densities are barely
360 | changing.
361 |
362 | We have reached a reasonable approximation of the stationary density.
363 |
364 | You can convince yourself that initial conditions don’t matter by
365 | testing a few of them.
366 |
367 | For example, try rerunning the code above with all firms starting at
368 | $X_0 = 20$ or $X_0 = 80$.
369 |
370 | ```{solution-end}
371 | ```
372 |
373 | ```{exercise}
374 | :label: id_ex2
375 |
376 | Using simulation, calculate the probability that firms that start with
377 | $X_0 = 70$ need to order twice or more in the first 50 periods.
378 |
379 | You will need a large sample size to get an accurate reading.
380 | ```
381 |
382 |
383 | ```{solution-start} id_ex2
384 | :class: dropdown
385 | ```
386 |
387 | Here is one solution.
388 |
389 | Again, the computations are relatively intensive so we have written a
390 | specialized function rather than using the class above.
391 |
392 | We will also use parallelization across firms.
393 |
394 | ```{code-cell} ipython3
395 | @njit(parallel=True)
396 | def compute_freq(sim_length=50, x_init=70, num_firms=1_000_000):
397 |
398 | firm_counter = 0 # Records number of firms that restock 2x or more
399 | for m in prange(num_firms):
400 | x = x_init
401 | restock_counter = 0 # Will record number of restocks for firm m
402 |
403 | for t in range(sim_length):
404 | Z = np.random.randn()
405 | D = np.exp(mu + sigma * Z)
406 | if x <= s:
407 | x = max(S - D, 0)
408 | restock_counter += 1
409 | else:
410 | x = max(x - D, 0)
411 |
412 | if restock_counter > 1:
413 | firm_counter += 1
414 |
415 | return firm_counter / num_firms
416 | ```
417 |
418 | Note the time the routine takes to run, as well as the output.
419 |
420 | ```{code-cell} ipython3
421 | %%time
422 |
423 | freq = compute_freq()
424 | print(f"Frequency of at least two restocks = {freq}")
425 | ```
426 |
427 | Try switching the `parallel` flag to `False` in the jitted function
428 | above.
429 |
430 | Depending on your system, the difference can be substantial.
431 |
432 | (On our desktop machine, the speed up is by a factor of 5.)
433 |
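One way to do this without editing the function (a sketch using Numba's `.py_func` attribute, which recovers the original undecorated Python function; timings will vary by machine) is:

```{code-cell} ipython3
compute_freq_serial = njit(compute_freq.py_func)  # re-jit without parallel=True

%time freq = compute_freq_serial()
```
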
434 | ```{solution-end}
435 | ```
436 |
--------------------------------------------------------------------------------
/lectures/mccall_fitted_vfi.md:
--------------------------------------------------------------------------------
1 | ---
2 | jupytext:
3 | text_representation:
4 | extension: .md
5 | format_name: myst
6 | kernelspec:
7 | display_name: Python 3
8 | language: python
9 | name: python3
10 | ---
11 |
19 |
20 | # Job Search III: Fitted Value Function Iteration
21 |
22 | In addition to what's in Anaconda, this lecture will need the following libraries:
23 |
24 | ```{code-cell} ipython
25 | ---
26 | tags: [hide-output]
27 | ---
28 | !pip install interpolation
29 | ```
30 |
31 | ## Overview
32 |
33 | In this lecture we again study the {doc}`McCall job search model with separation `, but now with a continuous wage distribution.
34 |
35 | While we already considered continuous wage distributions briefly in the
36 | exercises of the {doc}`first job search lecture `,
37 | the change was relatively trivial in that case.
38 |
39 | This is because we were able to reduce the problem to solving for a single
40 | scalar value (the continuation value).
41 |
42 | Here, with separation, the change is less trivial, since a continuous wage distribution leads to an uncountably infinite state space.
43 |
44 | The infinite state space leads to additional challenges, particularly when it
45 | comes to applying value function iteration (VFI).
46 |
47 | These challenges will lead us to modify VFI by adding an interpolation step.
48 |
49 | The combination of VFI and this interpolation step is called **fitted value function iteration** (fitted VFI).
50 |
51 | Fitted VFI is very common in practice, so we will take some time to work through the details.
52 |
53 | We will use the following imports:
54 |
55 | ```{code-cell} ipython3
56 | import matplotlib.pyplot as plt
57 | plt.rcParams["figure.figsize"] = (11, 5) #set default figure size
58 | import numpy as np
59 | from interpolation import interp
60 | from numba import njit, float64
61 | from numba.experimental import jitclass
62 | ```
63 |
64 | ## The Algorithm
65 |
The model is the same as the McCall model with job separation we {doc}`studied before <mccall_model_with_separation>`, except that the wage offer distribution is continuous.
67 |
We are going to start with the two Bellman equations we obtained for the model with job separation after {ref}`a simplifying transformation <ast_mcm>`.
69 |
70 | Modified to accommodate continuous wage draws, they take the following form:
71 |
72 | ```{math}
73 | :label: bell1mcmc
74 |
75 | d = \int \max \left\{ v(w'), \, u(c) + \beta d \right\} q(w') d w'
76 | ```
77 |
78 | and
79 |
80 | ```{math}
81 | :label: bell2mcmc
82 |
83 | v(w) = u(w) + \beta
84 | \left[
85 | (1-\alpha)v(w) + \alpha d
86 | \right]
87 | ```
88 |
89 | The unknowns here are the function $v$ and the scalar $d$.
90 |
The differences between these and the pair of Bellman equations we previously worked on are that

1. in {eq}`bell1mcmc`, what used to be a sum over a finite number of wage values is an integral over an infinite set, and
1. the function $v$ in {eq}`bell2mcmc` is defined over all $w \in \mathbb R_+$.
95 |
96 | The function $q$ in {eq}`bell1mcmc` is the density of the wage offer distribution.
97 |
98 | Its support is taken as equal to $\mathbb R_+$.
99 |
100 | ### Value Function Iteration
101 |
102 | In theory, we should now proceed as follows:
103 |
104 | 1. Begin with a guess $v, d$ for the solutions to {eq}`bell1mcmc`--{eq}`bell2mcmc`.
105 | 1. Plug $v, d$ into the right hand side of {eq}`bell1mcmc`--{eq}`bell2mcmc` and
106 | compute the left hand side to obtain updates $v', d'$
107 | 1. Unless some stopping condition is satisfied, set $(v, d) = (v', d')$
108 | and go to step 2.
109 |
110 | However, there is a problem we must confront before we implement this procedure:
111 | The iterates of the value function can neither be calculated exactly nor stored on a computer.
112 |
113 | To see the issue, consider {eq}`bell2mcmc`.
114 |
115 | Even if $v$ is a known function, the only way to store its update $v'$
116 | is to record its value $v'(w)$ for every $w \in \mathbb R_+$.
117 |
118 | Clearly, this is impossible.
119 |
120 | ### Fitted Value Function Iteration
121 |
122 | What we will do instead is use **fitted value function iteration**.
123 |
124 | The procedure is as follows:
125 |
126 | Let a current guess $v$ be given.
127 |
128 | Now we record the value of the function $v'$ at only
129 | finitely many "grid" points $w_1 < w_2 < \cdots < w_I$ and then reconstruct $v'$ from this information when required.
130 |
131 | More precisely, the algorithm will be
132 |
133 | (fvi_alg)=
134 | 1. Begin with an array $\mathbf v$ representing the values of an initial guess of the value function on some grid points $\{w_i\}$.
135 | 1. Build a function $v$ on the state space $\mathbb R_+$ by interpolation or approximation, based on $\mathbf v$ and $\{ w_i\}$.
136 | 1. Obtain and record the samples of the updated function $v'(w_i)$ on each grid point $w_i$.
137 | 1. Unless some stopping condition is satisfied, take this as the new array and go to step 1.
138 |
139 | How should we go about step 2?
140 |
141 | This is a problem of function approximation, and there are many ways to approach it.
142 |
What's important here is that the function approximation scheme must not only
produce a good approximation to each $v$, but also combine well with the broader iteration algorithm described above.

One good choice on both counts is continuous piecewise linear interpolation.
147 |
148 | This method
149 |
1. combines well with value function iteration (see, e.g.,
151 | {cite}`gordon1995stable` or {cite}`stachurski2008continuous`) and
152 | 1. preserves useful shape properties such as monotonicity and concavity/convexity.
153 |
154 | Linear interpolation will be implemented using a JIT-aware Python interpolation library called [interpolation.py](https://github.com/EconForge/interpolation.py).
155 |
156 | The next figure illustrates piecewise linear interpolation of an arbitrary
157 | function on grid points $0, 0.2, 0.4, 0.6, 0.8, 1$.
158 |
159 | ```{code-cell} python3
160 | def f(x):
161 | y1 = 2 * np.cos(6 * x) + np.sin(14 * x)
162 | return y1 + 2.5
163 |
164 | c_grid = np.linspace(0, 1, 6)
165 | f_grid = np.linspace(0, 1, 150)
166 |
167 | def Af(x):
168 | return interp(c_grid, f(c_grid), x)
169 |
170 | fig, ax = plt.subplots()
171 |
172 | ax.plot(f_grid, f(f_grid), 'b-', label='true function')
173 | ax.plot(f_grid, Af(f_grid), 'g-', label='linear approximation')
174 | ax.vlines(c_grid, c_grid * 0, f(c_grid), linestyle='dashed', alpha=0.5)
175 |
176 | ax.legend(loc="upper center")
177 |
178 | ax.set(xlim=(0, 1), ylim=(0, 6))
179 | plt.show()
180 | ```
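
To connect interpolation with the iteration loop, here is a minimal sketch of fitted VFI applied to a toy contraction (purely illustrative and unrelated to the model; the map $Tv(x) = 1 + 0.9 \, v(x/2)$ has the constant function $v \equiv 10$ as its unique fixed point):

```{code-cell} python3
grid = np.linspace(0, 1, 20)
v = np.zeros(len(grid))                     # step 1: values on a grid

for it in range(500):
    v_func = lambda x: interp(grid, v, x)   # step 2: interpolate
    v_new = 1 + 0.9 * v_func(grid / 2)      # step 3: update on the grid
    if np.max(np.abs(v_new - v)) < 1e-8:    # stopping condition
        break
    v = v_new

print(v.min(), v.max())  # both should be close to 10
```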
181 |
182 | ## Implementation
183 |
184 | The first step is to build a jitted class for the McCall model with separation and
185 | a continuous wage offer distribution.
186 |
187 | We will take the utility function to be the log function for this application, with $u(c) = \ln c$.
188 |
We will adopt the lognormal distribution for wages, with $w = \exp(\mu + \sigma z)$
where $z$ is standard normal and $\mu, \sigma$ are parameters.
191 |
192 | ```{code-cell} python3
193 | @njit
194 | def lognormal_draws(n=1000, μ=2.5, σ=0.5, seed=1234):
195 | np.random.seed(seed)
196 | z = np.random.randn(n)
197 | w_draws = np.exp(μ + σ * z)
198 | return w_draws
199 | ```
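
As a quick sanity check (an aside), the mean of $w = \exp(\mu + \sigma z)$ with $z$ standard normal is $\exp(\mu + \sigma^2/2)$, so the sample mean of the draws should be close to this value:

```{code-cell} python3
w = lognormal_draws()
print(w.mean(), np.exp(2.5 + 0.5**2 / 2))  # sample mean vs theoretical mean
```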
200 |
201 | Here's our class.
202 |
203 | ```{code-cell} python3
204 | mccall_data_continuous = [
205 | ('c', float64), # unemployment compensation
206 | ('α', float64), # job separation rate
207 | ('β', float64), # discount factor
208 | ('w_grid', float64[:]), # grid of points for fitted VFI
209 | ('w_draws', float64[:]) # draws of wages for Monte Carlo
210 | ]
211 |
212 | @jitclass(mccall_data_continuous)
213 | class McCallModelContinuous:
214 |
215 | def __init__(self,
216 | c=1,
217 | α=0.1,
218 | β=0.96,
219 | grid_min=1e-10,
220 | grid_max=5,
221 | grid_size=100,
222 | w_draws=lognormal_draws()):
223 |
224 | self.c, self.α, self.β = c, α, β
225 |
226 | self.w_grid = np.linspace(grid_min, grid_max, grid_size)
227 | self.w_draws = w_draws
228 |
229 | def update(self, v, d):
230 |
231 | # Simplify names
232 | c, α, β = self.c, self.α, self.β
233 | w = self.w_grid
234 | u = lambda x: np.log(x)
235 |
236 | # Interpolate array represented value function
237 | vf = lambda x: interp(w, v, x)
238 |
239 | # Update d using Monte Carlo to evaluate integral
240 | d_new = np.mean(np.maximum(vf(self.w_draws), u(c) + β * d))
241 |
242 | # Update v
243 | v_new = u(w) + β * ((1 - α) * v + α * d)
244 |
245 | return v_new, d_new
246 | ```
247 |
We now iterate with these Bellman equations until successive iterates are sufficiently close together, and then return the current iterate as an approximate solution.
249 |
250 | ```{code-cell} python3
251 | @njit
252 | def solve_model(mcm, tol=1e-5, max_iter=2000):
253 | """
254 | Iterates to convergence on the Bellman equations
255 |
256 | * mcm is an instance of McCallModel
257 | """
258 |
259 | v = np.ones_like(mcm.w_grid) # Initial guess of v
260 | d = 1 # Initial guess of d
261 | i = 0
262 | error = tol + 1
263 |
264 | while error > tol and i < max_iter:
265 | v_new, d_new = mcm.update(v, d)
266 | error_1 = np.max(np.abs(v_new - v))
267 | error_2 = np.abs(d_new - d)
268 | error = max(error_1, error_2)
269 | v = v_new
270 | d = d_new
271 | i += 1
272 |
273 | return v, d
274 | ```
275 |
276 | Here's a function `compute_reservation_wage` that takes an instance of `McCallModelContinuous`
277 | and returns the associated reservation wage.
278 |
If $v(w) < h$ for all $w$, where $h := u(c) + \beta d$ is the continuation value of unemployment, then the function returns `np.inf`.
280 |
281 | ```{code-cell} python3
282 | @njit
283 | def compute_reservation_wage(mcm):
284 | """
285 | Computes the reservation wage of an instance of the McCall model
286 | by finding the smallest w such that v(w) >= h.
287 |
288 | If no such w exists, then w_bar is set to np.inf.
289 | """
290 | u = lambda x: np.log(x)
291 |
292 | v, d = solve_model(mcm)
293 | h = u(mcm.c) + mcm.β * d
294 |
295 | w_bar = np.inf
296 | for i, wage in enumerate(mcm.w_grid):
297 | if v[i] > h:
298 | w_bar = wage
299 | break
300 |
301 | return w_bar
302 | ```
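
For example, here is the reservation wage at the default parameters (the call returns `np.inf` if the reservation wage lies above every grid point):

```{code-cell} python3
mcm = McCallModelContinuous()
compute_reservation_wage(mcm)
```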
303 |
304 | The exercises ask you to explore the solution and how it changes with parameters.
305 |
306 | ## Exercises
307 |
308 | ```{exercise}
309 | :label: mfv_ex1
310 |
311 | Use the code above to explore what happens to the reservation wage when the wage parameter $\mu$
312 | changes.
313 |
314 | Use the default parameters and $\mu$ in `mu_vals = np.linspace(0.0, 2.0, 15)`.
315 |
316 | Is the impact on the reservation wage as you expected?
317 | ```
318 |
319 | ```{solution-start} mfv_ex1
320 | :class: dropdown
321 | ```
322 |
Here is one solution.
324 |
325 | ```{code-cell} python3
326 | mcm = McCallModelContinuous()
327 | mu_vals = np.linspace(0.0, 2.0, 15)
328 | w_bar_vals = np.empty_like(mu_vals)
329 |
330 | fig, ax = plt.subplots()
331 |
332 | for i, m in enumerate(mu_vals):
333 | mcm.w_draws = lognormal_draws(μ=m)
334 | w_bar = compute_reservation_wage(mcm)
335 | w_bar_vals[i] = w_bar
336 |
337 | ax.set(xlabel='mean', ylabel='reservation wage')
338 | ax.plot(mu_vals, w_bar_vals, label=r'$\bar w$ as a function of $\mu$')
339 | ax.legend()
340 |
341 | plt.show()
342 | ```
343 |
344 | Not surprisingly, the agent is more inclined to wait when the distribution of
345 | offers shifts to the right.
346 |
347 | ```{solution-end}
348 | ```
349 |
350 | ```{exercise}
351 | :label: mfv_ex2
352 |
353 | Let us now consider how the agent responds to an increase in volatility.
354 |
355 | To try to understand this, compute the reservation wage when the wage offer
356 | distribution is uniform on $(m - s, m + s)$ and $s$ varies.
357 |
358 | The idea here is that we are holding the mean constant and spreading the
359 | support.
360 |
361 | (This is a form of *mean-preserving spread*.)
362 |
363 | Use `s_vals = np.linspace(1.0, 2.0, 15)` and `m = 2.0`.
364 |
365 | State how you expect the reservation wage to vary with $s$.
366 |
367 | Now compute it. Is this as you expected?
368 | ```
369 |
370 | ```{solution-start} mfv_ex2
371 | :class: dropdown
372 | ```
373 |
Here is one solution.
375 |
376 | ```{code-cell} python3
377 | mcm = McCallModelContinuous()
378 | s_vals = np.linspace(1.0, 2.0, 15)
379 | m = 2.0
380 | w_bar_vals = np.empty_like(s_vals)
381 |
382 | fig, ax = plt.subplots()
383 |
384 | for i, s in enumerate(s_vals):
385 | a, b = m - s, m + s
386 | mcm.w_draws = np.random.uniform(low=a, high=b, size=10_000)
387 | w_bar = compute_reservation_wage(mcm)
388 | w_bar_vals[i] = w_bar
389 |
390 | ax.set(xlabel='volatility', ylabel='reservation wage')
391 | ax.plot(s_vals, w_bar_vals, label=r'$\bar w$ as a function of wage volatility')
392 | ax.legend()
393 |
394 | plt.show()
395 | ```
396 |
397 | The reservation wage increases with volatility.
398 |
399 | One might think that higher volatility would make the agent more inclined to
400 | take a given offer, since doing so represents certainty and waiting represents
401 | risk.
402 |
403 | But job search is like holding an option: the worker is only exposed to upside risk (since, in a free market, no one can force them to take a bad offer).
404 |
405 | More volatility means higher upside potential, which encourages the agent to wait.
406 |
407 | ```{solution-end}
408 | ```
409 |
--------------------------------------------------------------------------------
/lectures/mccall_correlated.md:
--------------------------------------------------------------------------------
1 | ---
2 | jupytext:
3 | text_representation:
4 | extension: .md
5 | format_name: myst
6 | kernelspec:
7 | display_name: Python 3
8 | language: python
9 | name: python3
10 | ---
11 |
12 | ```{raw} html
13 |
18 | ```
19 |
20 | # Job Search IV: Correlated Wage Offers
21 |
22 | In addition to what's in Anaconda, this lecture will need the following libraries:
23 |
24 | ```{code-cell} ipython
25 | ---
26 | tags: [hide-output]
27 | ---
28 | !pip install quantecon
29 | !pip install interpolation
30 | ```
31 |
32 | ## Overview
33 |
In this lecture we solve a {doc}`McCall style job search model <mccall_model>` with persistent and
35 | transitory components to wages.
36 |
37 | In other words, we relax the unrealistic assumption that randomness in wages is independent over time.
38 |
39 | At the same time, we will go back to assuming that jobs are permanent and no separation occurs.
40 |
41 | This is to keep the model relatively simple as we study the impact of correlation.
42 |
43 | We will use the following imports:
44 |
45 | ```{code-cell} ipython3
46 | import matplotlib.pyplot as plt
47 | plt.rcParams["figure.figsize"] = (11, 5) #set default figure size
48 | import numpy as np
49 | import quantecon as qe
50 | from interpolation import interp
51 | from numpy.random import randn
52 | from numba import njit, prange, float64
53 | from numba.experimental import jitclass
54 | ```
55 |
56 | ## The Model
57 |
58 | Wages at each point in time are given by
59 |
60 | $$
61 | w_t = \exp(z_t) + y_t
62 | $$
63 |
64 | where
65 |
66 | $$
y_t = \exp(\mu + s \zeta_t)
68 | \quad \text{and} \quad
69 | z_{t+1} = d + \rho z_t + \sigma \epsilon_{t+1}
70 | $$
71 |
72 | Here $\{ \zeta_t \}$ and $\{ \epsilon_t \}$ are both IID and standard normal.
73 |
74 | Here $\{y_t\}$ is a transitory component and $\{z_t\}$ is persistent.
75 |
76 | As before, the worker can either
77 |
78 | 1. accept an offer and work permanently at that wage, or
79 | 1. take unemployment compensation $c$ and wait till next period.
80 |
81 | The value function satisfies the Bellman equation
82 |
83 | $$
84 | v^*(w, z) =
85 | \max
86 | \left\{
87 | \frac{u(w)}{1-\beta}, u(c) + \beta \, \mathbb E_z v^*(w', z')
88 | \right\}
89 | $$
90 |
In this expression, $u$ is a utility function and $\mathbb E_z$ is the expectation of next period's variables given current $z$.
92 |
93 | The variable $z$ enters as a state in the Bellman equation because its current value helps predict future wages.
94 |
95 | ### A Simplification
96 |
97 | There is a way that we can reduce dimensionality in this problem, which greatly accelerates computation.
98 |
99 | To start, let $f^*$ be the continuation value function, defined
100 | by
101 |
102 | $$
103 | f^*(z) := u(c) + \beta \, \mathbb E_z v^*(w', z')
104 | $$
105 |
106 | The Bellman equation can now be written
107 |
108 | $$
109 | v^*(w, z) = \max \left\{ \frac{u(w)}{1-\beta}, \, f^*(z) \right\}
110 | $$
111 |
112 | Combining the last two expressions, we see that the continuation value
113 | function satisfies
114 |
115 | $$
116 | f^*(z) = u(c) + \beta \, \mathbb E_z \max \left\{ \frac{u(w')}{1-\beta}, f^*(z') \right\}
117 | $$
118 |
119 | We’ll solve this functional equation for $f^*$ by introducing the
120 | operator
121 |
122 | $$
123 | Qf(z) = u(c) + \beta \, \mathbb E_z \max \left\{ \frac{u(w')}{1-\beta}, f(z') \right\}
124 | $$
125 |
126 | By construction, $f^*$ is a fixed point of $Q$, in the sense that
127 | $Q f^* = f^*$.
128 |
129 | Under mild assumptions, it can be shown that $Q$ is a [contraction mapping](https://en.wikipedia.org/wiki/Contraction_mapping) over a suitable space of continuous functions on $\mathbb R$.
130 |
131 | By Banach's contraction mapping theorem, this means that $f^*$ is the unique fixed point and we can calculate it by iterating with $Q$ from any reasonable initial condition.
132 |
133 | Once we have $f^*$, we can solve the search problem by stopping when the reward for accepting exceeds the continuation value, or
134 |
135 | $$
136 | \frac{u(w)}{1-\beta} \geq f^*(z)
137 | $$
138 |
139 | For utility we take $u(c) = \ln(c)$.
140 |
The reservation wage is the wage at which equality holds in the last expression, that is, $\ln(\bar w)/(1-\beta) = f^*(z)$.

Solving for $\bar w$ gives
144 |
145 | ```{math}
146 | :label: corr_mcm_barw
147 |
148 | \bar w (z) := \exp(f^*(z) (1-\beta))
149 | ```
150 |
151 | Our main aim is to solve for the reservation rule and study its properties and implications.
152 |
153 | ## Implementation
154 |
155 | Let $f$ be our initial guess of $f^*$.
156 |
When we iterate, we use the {doc}`fitted value function iteration <mccall_fitted_vfi>` algorithm.
158 |
159 | In particular, $f$ and all subsequent iterates are stored as a vector of values on a grid.
160 |
161 | These points are interpolated into a function as required, using piecewise linear interpolation.
162 |
163 | The integral in the definition of $Qf$ is calculated by Monte Carlo.
164 |
165 | The following list helps Numba by providing some type information about the data we will work with.
166 |
167 | ```{code-cell} python3
168 | job_search_data = [
169 | ('μ', float64), # transient shock log mean
    ('s', float64),          # transient shock log standard deviation
171 | ('d', float64), # shift coefficient of persistent state
172 | ('ρ', float64), # correlation coefficient of persistent state
173 | ('σ', float64), # state volatility
174 | ('β', float64), # discount factor
175 | ('c', float64), # unemployment compensation
176 | ('z_grid', float64[:]), # grid over the state space
177 | ('e_draws', float64[:,:]) # Monte Carlo draws for integration
178 | ]
179 | ```
180 |
181 | Here's a class that stores the data and the right hand side of the Bellman equation.
182 |
183 | Default parameter values are embedded in the class.
184 |
185 | ```{code-cell} ipython3
186 | @jitclass(job_search_data)
187 | class JobSearch:
188 |
189 | def __init__(self,
190 | μ=0.0, # transient shock log mean
191 | s=1.0, # transient shock log variance
192 | d=0.0, # shift coefficient of persistent state
193 | ρ=0.9, # correlation coefficient of persistent state
194 | σ=0.1, # state volatility
195 | β=0.98, # discount factor
196 | c=5, # unemployment compensation
197 | mc_size=1000,
198 | grid_size=100):
199 |
        self.μ, self.s, self.d = μ, s, d
        self.ρ, self.σ, self.β, self.c = ρ, σ, β, c
202 |
203 | # Set up grid
204 | z_mean = d / (1 - ρ)
205 | z_sd = σ / np.sqrt(1 - ρ**2)
206 | k = 3 # std devs from mean
207 | a, b = z_mean - k * z_sd, z_mean + k * z_sd
208 | self.z_grid = np.linspace(a, b, grid_size)
209 |
210 | # Draw and store shocks
211 | np.random.seed(1234)
212 | self.e_draws = randn(2, mc_size)
213 |
214 | def parameters(self):
215 | """
216 | Return all parameters as a tuple.
217 | """
218 | return self.μ, self.s, self.d, \
219 | self.ρ, self.σ, self.β, self.c
220 | ```
221 |
222 | Next we implement the $Q$ operator.
223 |
224 | ```{code-cell} ipython3
225 | @njit(parallel=True)
226 | def Q(js, f_in, f_out):
227 | """
228 | Apply the operator Q.
229 |
230 | * js is an instance of JobSearch
231 | * f_in and f_out are arrays that represent f and Qf respectively
232 |
233 | """
234 |
235 | μ, s, d, ρ, σ, β, c = js.parameters()
236 | M = js.e_draws.shape[1]
237 |
238 | for i in prange(len(js.z_grid)):
239 | z = js.z_grid[i]
240 | expectation = 0.0
241 | for m in range(M):
242 | e1, e2 = js.e_draws[:, m]
243 | z_next = d + ρ * z + σ * e1
244 | go_val = interp(js.z_grid, f_in, z_next) # f(z')
245 | y_next = np.exp(μ + s * e2) # y' draw
246 | w_next = np.exp(z_next) + y_next # w' draw
247 | stop_val = np.log(w_next) / (1 - β)
248 | expectation += max(stop_val, go_val)
249 | expectation = expectation / M
250 | f_out[i] = np.log(c) + β * expectation
251 | ```
252 |
253 | Here's a function to compute an approximation to the fixed point of $Q$.
254 |
255 | ```{code-cell} ipython3
256 | def compute_fixed_point(js,
257 | use_parallel=True,
258 | tol=1e-4,
259 | max_iter=1000,
260 | verbose=True,
261 | print_skip=25):
262 |
263 | f_init = np.full(len(js.z_grid), np.log(js.c))
264 | f_out = np.empty_like(f_init)
265 |
266 | # Set up loop
267 | f_in = f_init
268 | i = 0
269 | error = tol + 1
270 |
271 | while i < max_iter and error > tol:
272 | Q(js, f_in, f_out)
273 | error = np.max(np.abs(f_in - f_out))
274 | i += 1
275 | if verbose and i % print_skip == 0:
276 | print(f"Error at iteration {i} is {error}.")
277 | f_in[:] = f_out
278 |
279 | if error > tol:
280 | print("Failed to converge!")
281 | elif verbose:
282 | print(f"\nConverged in {i} iterations.")
283 |
284 | return f_out
285 | ```
286 |
287 | Let's try generating an instance and solving the model.
288 |
289 | ```{code-cell} ipython3
290 | js = JobSearch()
291 |
292 | qe.tic()
293 | f_star = compute_fixed_point(js, verbose=True)
294 | qe.toc()
295 | ```
296 |
297 | Next we will compute and plot the reservation wage function defined in {eq}`corr_mcm_barw`.
298 |
299 | ```{code-cell} ipython3
300 | res_wage_function = np.exp(f_star * (1 - js.β))
301 |
302 | fig, ax = plt.subplots()
303 | ax.plot(js.z_grid, res_wage_function, label="reservation wage given $z$")
304 | ax.set(xlabel="$z$", ylabel="wage")
305 | ax.legend()
306 | plt.show()
307 | ```
308 |
309 | Notice that the reservation wage is increasing in the current state $z$.
310 |
311 | This is because a higher state leads the agent to predict higher future wages,
312 | increasing the option value of waiting.
313 |
314 | Let's try changing unemployment compensation and look at its impact on the
315 | reservation wage:
316 |
317 | ```{code-cell} ipython3
318 | c_vals = 1, 2, 3
319 |
320 | fig, ax = plt.subplots()
321 |
322 | for c in c_vals:
323 | js = JobSearch(c=c)
324 | f_star = compute_fixed_point(js, verbose=False)
325 | res_wage_function = np.exp(f_star * (1 - js.β))
326 | ax.plot(js.z_grid, res_wage_function, label=rf"$\bar w$ at $c = {c}$")
327 |
328 | ax.set(xlabel="$z$", ylabel="wage")
329 | ax.legend()
330 | plt.show()
331 | ```
332 |
333 | As expected, higher unemployment compensation shifts the reservation wage up
334 | at all state values.
335 |
336 | ## Unemployment Duration
337 |
338 | Next we study how mean unemployment duration varies with unemployment compensation.
339 |
340 | For simplicity we’ll fix the initial state at $z_t = 0$.
341 |
342 | ```{code-cell} ipython3
343 | def compute_unemployment_duration(js, seed=1234):
344 |
345 | f_star = compute_fixed_point(js, verbose=False)
346 | μ, s, d, ρ, σ, β, c = js.parameters()
347 | z_grid = js.z_grid
348 | np.random.seed(seed)
349 |
350 | @njit
351 | def f_star_function(z):
352 | return interp(z_grid, f_star, z)
353 |
354 | @njit
355 | def draw_tau(t_max=10_000):
        z = 0
        t = 0
        τ = t_max  # default duration if the agent never accepts by t_max
358 |
359 | unemployed = True
360 | while unemployed and t < t_max:
361 | # draw current wage
362 | y = np.exp(μ + s * np.random.randn())
363 | w = np.exp(z) + y
364 | res_wage = np.exp(f_star_function(z) * (1 - β))
365 | # if optimal to stop, record t
366 | if w >= res_wage:
367 | unemployed = False
368 | τ = t
369 | # else increment data and state
370 | else:
371 | z = ρ * z + d + σ * np.random.randn()
372 | t += 1
373 | return τ
374 |
375 | @njit(parallel=True)
376 | def compute_expected_tau(num_reps=100_000):
377 | sum_value = 0
378 | for i in prange(num_reps):
379 | sum_value += draw_tau()
380 | return sum_value / num_reps
381 |
382 | return compute_expected_tau()
383 | ```
384 |
385 | Let's test this out with some possible values for unemployment compensation.
386 |
387 | ```{code-cell} ipython3
388 | c_vals = np.linspace(1.0, 10.0, 8)
389 | durations = np.empty_like(c_vals)
390 | for i, c in enumerate(c_vals):
391 | js = JobSearch(c=c)
392 | τ = compute_unemployment_duration(js)
393 | durations[i] = τ
394 | ```
395 |
396 | Here is a plot of the results.
397 |
398 | ```{code-cell} ipython3
399 | fig, ax = plt.subplots()
400 | ax.plot(c_vals, durations)
401 | ax.set_xlabel("unemployment compensation")
402 | ax.set_ylabel("mean unemployment duration")
403 | plt.show()
404 | ```
405 |
406 | Not surprisingly, unemployment duration increases when unemployment compensation is higher.
407 |
408 | This is because the value of waiting increases with unemployment compensation.
409 |
410 | ## Exercises
411 |
412 | ```{exercise}
413 | :label: mc_ex1
414 |
415 | Investigate how mean unemployment duration varies with the discount factor $\beta$.
416 |
417 | * What is your prior expectation?
418 | * Do your results match up?
419 | ```
420 |
421 | ```{solution-start} mc_ex1
422 | :class: dropdown
423 | ```
424 |
Here is one solution.
426 |
427 | ```{code-cell} ipython3
428 | beta_vals = np.linspace(0.94, 0.99, 8)
429 | durations = np.empty_like(beta_vals)
430 | for i, β in enumerate(beta_vals):
431 | js = JobSearch(β=β)
432 | τ = compute_unemployment_duration(js)
433 | durations[i] = τ
434 | ```
435 |
436 | ```{code-cell} ipython3
437 | fig, ax = plt.subplots()
438 | ax.plot(beta_vals, durations)
439 | ax.set_xlabel(r"$\beta$")
440 | ax.set_ylabel("mean unemployment duration")
441 | plt.show()
442 | ```
443 |
444 | The figure shows that more patient individuals tend to wait longer before accepting an offer.
445 |
446 | ```{solution-end}
447 | ```
448 |
--------------------------------------------------------------------------------
/lectures/coleman_policy_iter.md:
--------------------------------------------------------------------------------
1 | ---
2 | jupytext:
3 | text_representation:
4 | extension: .md
5 | format_name: myst
6 | kernelspec:
7 | display_name: Python 3
8 | language: python
9 | name: python3
10 | ---
11 |
12 | ```{raw} html
13 |
18 | ```
19 |
20 | # {index}`Optimal Growth III: Time Iteration `
21 |
22 | In addition to what's in Anaconda, this lecture will need the following libraries:
23 |
24 | ```{code-cell} ipython
25 | ---
26 | tags: [hide-output]
27 | ---
28 | !pip install quantecon
29 | !pip install interpolation
30 | ```
31 |
32 | ## Overview
33 |
In this lecture, we'll continue our {doc}`earlier <optgrowth>` study of the stochastic optimal growth model.
35 |
36 | In that lecture, we solved the associated dynamic programming
37 | problem using value function iteration.
38 |
39 | The beauty of this technique is its broad applicability.
40 |
41 | With numerical problems, however, we can often attain higher efficiency in
42 | specific applications by deriving methods that are carefully tailored to the
43 | application at hand.
44 |
45 | The stochastic optimal growth model has plenty of structure to exploit for
46 | this purpose, especially when we adopt some concavity and smoothness
47 | assumptions over primitives.
48 |
49 | We'll use this structure to obtain an Euler equation based method.
50 |
51 | This will be an extension of the time iteration method considered
in our elementary lecture on {doc}`cake eating <cake_eating_numerical>`.
53 |
In a {doc}`subsequent lecture <egm_policy_iter>`, we'll see that time
55 | iteration can be further adjusted to obtain even more efficiency.
56 |
57 | Let's start with some imports:
58 |
59 | ```{code-cell} ipython
60 | import matplotlib.pyplot as plt
61 | plt.rcParams["figure.figsize"] = (11, 5) #set default figure size
62 | import numpy as np
63 | from interpolation import interp
64 | from quantecon.optimize import brentq
65 | from numba import njit
66 | ```
67 |
68 | ## The Euler Equation
69 |
70 | Our first step is to derive the Euler equation, which is a generalization of
the Euler equation we obtained in the {doc}`lecture on cake eating <cake_eating_problem>`.
72 |
We take the model set out in {doc}`the stochastic growth model lecture <optgrowth>` and add the following assumptions:
74 |
75 | 1. $u$ and $f$ are continuously differentiable and strictly concave
76 | 1. $f(0) = 0$
77 | 1. $\lim_{c \to 0} u'(c) = \infty$ and $\lim_{c \to \infty} u'(c) = 0$
78 | 1. $\lim_{k \to 0} f'(k) = \infty$ and $\lim_{k \to \infty} f'(k) = 0$
79 |
80 | The last two conditions are usually called **Inada conditions**.
81 |
82 | Recall the Bellman equation
83 |
84 | ```{math}
85 | :label: cpi_fpb30
86 |
87 | v^*(y) = \max_{0 \leq c \leq y}
88 | \left\{
89 | u(c) + \beta \int v^*(f(y - c) z) \phi(dz)
90 | \right\}
91 | \quad \text{for all} \quad
92 | y \in \mathbb R_+
93 | ```
94 |
95 | Let the optimal consumption policy be denoted by $\sigma^*$.
96 |
97 | We know that $\sigma^*$ is a $v^*$-greedy policy so that $\sigma^*(y)$ is the maximizer in {eq}`cpi_fpb30`.
98 |
99 | The conditions above imply that
100 |
101 | * $\sigma^*$ is the unique optimal policy for the stochastic optimal growth model
102 | * the optimal policy is continuous, strictly increasing and also **interior**, in the sense that $0 < \sigma^*(y) < y$ for all strictly positive $y$, and
103 | * the value function is strictly concave and continuously differentiable, with
104 |
105 | ```{math}
106 | :label: cpi_env
107 |
108 | (v^*)'(y) = u' (\sigma^*(y) ) := (u' \circ \sigma^*)(y)
109 | ```
110 |
111 | The last result is called the **envelope condition** due to its relationship with the [envelope theorem](https://en.wikipedia.org/wiki/Envelope_theorem).
112 |
113 | To see why {eq}`cpi_env` holds, write the Bellman equation in the equivalent
114 | form
115 |
116 | $$
117 | v^*(y) = \max_{0 \leq k \leq y}
118 | \left\{
119 | u(y-k) + \beta \int v^*(f(k) z) \phi(dz)
\right\}
121 | $$
122 |
123 | Differentiating with respect to $y$, and then evaluating at the optimum yields {eq}`cpi_env`.
124 |
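In more detail, the envelope theorem allows us to differentiate through the maximum while holding $k$ fixed at its optimizing value $k^* = y - \sigma^*(y)$, so that

$$
(v^*)'(y) = u'(y - k^*) = (u' \circ \sigma^*)(y)
$$
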
125 | (Section 12.1 of [EDTC](https://johnstachurski.net/edtc.html) contains full proofs of these results, and closely related discussions can be found in many other texts.)
126 |
127 | Differentiability of the value function and interiority of the optimal policy
128 | imply that optimal consumption satisfies the first order condition associated
129 | with {eq}`cpi_fpb30`, which is
130 |
131 | ```{math}
132 | :label: cpi_foc
133 |
134 | u'(\sigma^*(y)) = \beta \int (v^*)'(f(y - \sigma^*(y)) z) f'(y - \sigma^*(y)) z \phi(dz)
135 | ```
136 |
137 | Combining {eq}`cpi_env` and the first-order condition {eq}`cpi_foc` gives the **Euler equation**
138 |
139 | ```{math}
140 | :label: cpi_euler
141 |
142 | (u'\circ \sigma^*)(y)
143 | = \beta \int (u'\circ \sigma^*)(f(y - \sigma^*(y)) z) f'(y - \sigma^*(y)) z \phi(dz)
144 | ```
145 |
146 | We can think of the Euler equation as a functional equation
147 |
148 | ```{math}
149 | :label: cpi_euler_func
150 |
151 | (u'\circ \sigma)(y)
152 | = \beta \int (u'\circ \sigma)(f(y - \sigma(y)) z) f'(y - \sigma(y)) z \phi(dz)
153 | ```
154 |
155 | over interior consumption policies $\sigma$, one solution of which is the optimal policy $\sigma^*$.
156 |
157 | Our aim is to solve the functional equation {eq}`cpi_euler_func` and hence obtain $\sigma^*$.
158 |
159 | ### The Coleman-Reffett Operator
160 |
161 | Recall the Bellman operator
162 |
163 | ```{math}
164 | :label: fcbell20_coleman
165 |
166 | Tv(y) := \max_{0 \leq c \leq y}
167 | \left\{
168 | u(c) + \beta \int v(f(y - c) z) \phi(dz)
169 | \right\}
170 | ```
171 |
172 | Just as we introduced the Bellman operator to solve the Bellman equation, we
173 | will now introduce an operator over policies to help us solve the Euler
174 | equation.
175 |
176 | This operator $K$ will act on the set of all $\sigma \in \Sigma$
177 | that are continuous, strictly increasing and interior.
178 |
Henceforth we denote this set of policies by $\mathscr P$.

1. The operator $K$ takes as its argument a $\sigma \in \mathscr P$ and
1. returns a new function $K\sigma$, where $K\sigma(y)$ is the $c \in (0, y)$ that solves
183 |
184 | ```{math}
185 | :label: cpi_coledef
186 |
187 | u'(c)
188 | = \beta \int (u' \circ \sigma) (f(y - c) z ) f'(y - c) z \phi(dz)
189 | ```
190 |
191 | We call this operator the **Coleman-Reffett operator** to acknowledge the work of
192 | {cite}`Coleman1990` and {cite}`Reffett1996`.
193 |
194 | In essence, $K\sigma$ is the consumption policy that the Euler equation tells
195 | you to choose today when your future consumption policy is $\sigma$.
196 |
197 | The important thing to note about $K$ is that, by
198 | construction, its fixed points coincide with solutions to the functional
199 | equation {eq}`cpi_euler_func`.
200 |
201 | In particular, the optimal policy $\sigma^*$ is a fixed point.
202 |
203 | Indeed, for fixed $y$, the value $K\sigma^*(y)$ is the $c$ that
204 | solves
205 |
206 | $$
207 | u'(c)
208 | = \beta \int (u' \circ \sigma^*) (f(y - c) z ) f'(y - c) z \phi(dz)
209 | $$
210 |
211 | In view of the Euler equation, this is exactly $\sigma^*(y)$.
212 |
213 | ### Is the Coleman-Reffett Operator Well Defined?
214 |
215 | In particular, is there always a unique $c \in (0, y)$ that solves
216 | {eq}`cpi_coledef`?
217 |
218 | The answer is yes, under our assumptions.
219 |
220 | For any $\sigma \in \mathscr P$, the right side of {eq}`cpi_coledef`
221 |
222 | * is continuous and strictly increasing in $c$ on $(0, y)$
223 | * diverges to $+\infty$ as $c \uparrow y$
224 |
225 | The left side of {eq}`cpi_coledef`
226 |
227 | * is continuous and strictly decreasing in $c$ on $(0, y)$
228 | * diverges to $+\infty$ as $c \downarrow 0$
229 |
230 | Sketching these curves and using the information above will convince you that they cross exactly once as $c$ ranges over $(0, y)$.
231 |
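To make this concrete in a special case: with $u'(c) = 1/c$, $f(k) = k^\alpha$ and the candidate policy $\sigma(y) = y$, the shock terms cancel and the right side of {eq}`cpi_coledef` reduces to $\beta \alpha / (y - c)$. Here is a quick plot of the two sides at illustrative parameter values:

```{code-cell} python3
y, α, β = 1.0, 0.4, 0.96              # illustrative values only
c = np.linspace(0.05, 0.95 * y, 100)

fig, ax = plt.subplots()
ax.plot(c, 1 / c, label="left side $u'(c) = 1/c$")
ax.plot(c, β * α / (y - c), label=r'right side $\beta \alpha / (y - c)$')
ax.set_xlabel('$c$')
ax.legend()
plt.show()
```
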
232 | With a bit more analysis, one can show in addition that $K \sigma \in \mathscr P$
233 | whenever $\sigma \in \mathscr P$.
234 |
235 | ### Comparison with VFI (Theory)
236 |
237 | It is possible to prove that there is a tight relationship between iterates of
238 | $K$ and iterates of the Bellman operator.
239 |
240 | Mathematically, the two operators are *topologically conjugate*.
241 |
242 | Loosely speaking, this means that if iterates of one operator converge then
243 | so do iterates of the other, and vice versa.
244 |
245 | Moreover, there is a sense in which they converge at the same rate, at least
246 | in theory.
247 |
248 | However, it turns out that the operator $K$ is more stable numerically
249 | and hence more efficient in the applications we consider.
250 |
251 | Examples are given below.
252 |
253 | ## Implementation
254 |
As in our {doc}`previous study <optgrowth>`, we continue to assume that
256 |
257 | * $u(c) = \ln c$
258 | * $f(k) = k^{\alpha}$
259 | * $\phi$ is the distribution of $\xi := \exp(\mu + s \zeta)$ when $\zeta$ is standard normal
260 |
This will allow us to compare our results to the analytical solutions.
262 |
263 | ```{code-cell} python3
264 | :load: _static/lecture_specific/optgrowth/cd_analytical.py
265 | ```
266 |
267 | As discussed above, our plan is to solve the model using time iteration, which
268 | means iterating with the operator $K$.
269 |
270 | For this we need access to the functions $u'$ and $f, f'$.
271 |
272 | These are available in a class called `OptimalGrowthModel` that we
constructed in an {doc}`earlier lecture <optgrowth_fast>`.
274 |
275 | ```{code-cell} python3
276 | :load: _static/lecture_specific/optgrowth_fast/ogm.py
277 | ```
278 |
279 | Now we implement a method called `euler_diff`, which returns
280 |
281 | ```{math}
282 | :label: euler_diff
283 |
284 | u'(c) - \beta \int (u' \circ \sigma) (f(y - c) z ) f'(y - c) z \phi(dz)
285 | ```
286 |
287 | ```{code-cell} ipython
288 | @njit
289 | def euler_diff(c, σ, y, og):
290 | """
291 | Set up a function such that the root with respect to c,
292 | given y and σ, is equal to Kσ(y).
293 |
294 | """
295 |
296 | β, shocks, grid = og.β, og.shocks, og.grid
297 | f, f_prime, u_prime = og.f, og.f_prime, og.u_prime
298 |
299 | # First turn σ into a function via interpolation
300 | σ_func = lambda x: interp(grid, σ, x)
301 |
302 | # Now set up the function we need to find the root of.
303 | vals = u_prime(σ_func(f(y - c) * shocks)) * f_prime(y - c) * shocks
304 | return u_prime(c) - β * np.mean(vals)
305 | ```
306 |
307 | The function `euler_diff` evaluates integrals by Monte Carlo and
308 | approximates functions using linear interpolation.
309 |
We will use a root-finding algorithm to find the $c$ that sets {eq}`euler_diff` equal to zero, given the
state $y$ and $\sigma$, the current guess of the policy.

Here's the operator $K$, which implements the root-finding step.
314 |
315 | ```{code-cell} ipython3
316 | @njit
317 | def K(σ, og):
318 | """
319 | The Coleman-Reffett operator
320 |
321 | Here og is an instance of OptimalGrowthModel.
322 | """
323 |
324 | β = og.β
325 | f, f_prime, u_prime = og.f, og.f_prime, og.u_prime
326 | grid, shocks = og.grid, og.shocks
327 |
328 | σ_new = np.empty_like(σ)
329 | for i, y in enumerate(grid):
330 | # Solve for optimal c at y
331 | c_star = brentq(euler_diff, 1e-10, y-1e-10, args=(σ, y, og))[0]
332 | σ_new[i] = c_star
333 |
334 | return σ_new
335 | ```
336 |
337 | ### Testing
338 |
339 | Let's generate an instance and plot some iterates of $K$, starting from $σ(y) = y$.
340 |
341 | ```{code-cell} python3
342 | og = OptimalGrowthModel()
343 | grid = og.grid
344 |
345 | n = 15
346 | σ = grid.copy() # Set initial condition
347 |
348 | fig, ax = plt.subplots()
lb = r'initial condition $\sigma(y) = y$'
350 | ax.plot(grid, σ, color=plt.cm.jet(0), alpha=0.6, label=lb)
351 |
352 | for i in range(n):
353 | σ = K(σ, og)
354 | ax.plot(grid, σ, color=plt.cm.jet(i / n), alpha=0.6)
355 |
356 | # Update one more time and plot the last iterate in black
357 | σ = K(σ, og)
358 | ax.plot(grid, σ, color='k', alpha=0.8, label='last iterate')
359 |
360 | ax.legend()
361 |
362 | plt.show()
363 | ```
364 |
365 | We see that the iteration process converges quickly to a limit
that resembles the solution we obtained in {doc}`the previous lecture <optgrowth>`.
367 |
368 | Here is a function called `solve_model_time_iter` that takes an instance of
369 | `OptimalGrowthModel` and returns an approximation to the optimal policy,
370 | using time iteration.
371 |
372 | ```{code-cell} python3
373 | :load: _static/lecture_specific/coleman_policy_iter/solve_time_iter.py
374 | ```
375 |
376 | Let's call it:
377 |
378 | ```{code-cell} python3
379 | σ_init = np.copy(og.grid)
380 | σ = solve_model_time_iter(og, σ_init)
381 | ```
382 |
383 | Here is a plot of the resulting policy, compared with the true policy:
384 |
385 | ```{code-cell} python3
386 | fig, ax = plt.subplots()
387 |
388 | ax.plot(og.grid, σ, lw=2,
389 | alpha=0.8, label='approximate policy function')
390 |
391 | ax.plot(og.grid, σ_star(og.grid, og.α, og.β), 'k--',
392 | lw=2, alpha=0.8, label='true policy function')
393 |
394 | ax.legend()
395 | plt.show()
396 | ```
397 |
398 | Again, the fit is excellent.
399 |
400 | The maximal absolute deviation between the two policies is
401 |
402 | ```{code-cell} python3
403 | np.max(np.abs(σ - σ_star(og.grid, og.α, og.β)))
404 | ```
405 |
406 | How long does it take to converge?
407 |
408 | ```{code-cell} python3
409 | %%timeit -n 3 -r 1
410 | σ = solve_model_time_iter(og, σ_init, verbose=False)
411 | ```
412 |
Convergence is very fast, even compared to our {doc}`JIT-compiled value function iteration <optgrowth_fast>`.
414 |
415 | Overall, we find that time iteration provides a very high degree of efficiency
416 | and accuracy, at least for this model.
417 |
418 | ## Exercises
419 |
420 | ```{exercise}
421 | :label: cpi_ex1
422 |
423 | Solve the model with CRRA utility
424 |
425 | $$
426 | u(c) = \frac{c^{1 - \gamma}} {1 - \gamma}
427 | $$
428 |
429 | Set `γ = 1.5`.
430 |
431 | Compute and plot the optimal policy.
432 | ```
433 |
434 | ```{solution-start} cpi_ex1
435 | :class: dropdown
436 | ```
437 |
We use the class `OptimalGrowthModel_CRRA` from our {doc}`VFI lecture <optgrowth_fast>`.
439 |
440 | ```{code-cell} python3
441 | :load: _static/lecture_specific/optgrowth_fast/ogm_crra.py
442 | ```
443 |
444 | Let's create an instance:
445 |
446 | ```{code-cell} python3
447 | og_crra = OptimalGrowthModel_CRRA()
448 | ```
449 |
450 | Now we solve and plot the policy:
451 |
452 | ```{code-cell} python3
453 | %%time
454 | σ = solve_model_time_iter(og_crra, σ_init)
455 |
456 |
457 | fig, ax = plt.subplots()
458 |
ax.plot(og_crra.grid, σ, lw=2,
460 | alpha=0.8, label='approximate policy function')
461 |
462 | ax.legend()
463 | plt.show()
464 | ```
465 |
466 | ```{solution-end}
467 | ```
468 |
--------------------------------------------------------------------------------
/lectures/mccall_model_with_separation.md:
--------------------------------------------------------------------------------
1 | ---
2 | jupytext:
3 | text_representation:
4 | extension: .md
5 | format_name: myst
6 | kernelspec:
7 | display_name: Python 3
8 | language: python
9 | name: python3
10 | ---
11 |
12 | (mccall_with_sep)=
13 | ```{raw} html
14 |
19 | ```
20 |
21 | # Job Search II: Search and Separation
22 |
23 | ```{index} single: An Introduction to Job Search
24 | ```
25 |
26 | In addition to what's in Anaconda, this lecture will need the following libraries:
27 |
28 | ```{code-cell} ipython
29 | ---
30 | tags: [hide-output]
31 | ---
32 | !pip install quantecon
33 | ```
34 |
35 | ## Overview
36 |
Previously {doc}`we looked <mccall_model>` at the McCall job search model {cite}`McCall1970` as a way of understanding unemployment and worker decisions.
38 |
39 | One unrealistic feature of the model is that every job is permanent.
40 |
41 | In this lecture, we extend the McCall model by introducing job separation.
42 |
43 | Once separation enters the picture, the agent comes to view
44 |
45 | * the loss of a job as a capital loss, and
46 | * a spell of unemployment as an *investment* in searching for an acceptable job
47 |
48 | The other minor addition is that a utility function will be included to make
49 | worker preferences slightly more sophisticated.
50 |
51 | We'll need the following imports
52 |
53 | ```{code-cell} ipython
54 | import matplotlib.pyplot as plt
55 | plt.rcParams["figure.figsize"] = (11, 5) #set default figure size
56 | import numpy as np
57 | from numba import njit, float64
58 | from numba.experimental import jitclass
59 | from quantecon.distributions import BetaBinomial
60 | ```
61 |
62 | ## The Model
63 |
The model is similar to the {doc}`baseline McCall job search model <mccall_model>`.
65 |
66 | It concerns the life of an infinitely lived worker and
67 |
68 | * the opportunities he or she (let's say he to save one character) has to work at different wages
69 | * exogenous events that destroy his current job
70 | * his decision making process while unemployed
71 |
72 | The worker can be in one of two states: employed or unemployed.
73 |
74 | He wants to maximize
75 |
76 | ```{math}
77 | :label: objective
78 |
79 | {\mathbb E} \sum_{t=0}^\infty \beta^t u(y_t)
80 | ```
81 |
At this stage the only difference from the {doc}`baseline model <mccall_model>` is that we've added some flexibility to preferences by
83 | introducing a utility function $u$.
84 |
85 | It satisfies $u'> 0$ and $u'' < 0$.
86 |
87 | ### The Wage Process
88 |
For now we will drop the distinction between the state process and the wage process that we
maintained in the {doc}`baseline model <mccall_model>`.
91 |
92 | In particular, we simply suppose that wage offers $\{ w_t \}$ are IID with common distribution $q$.
93 |
94 | The set of possible wage values is denoted by $\mathbb W$.
95 |
96 | (Later we will go back to having a separate state process $\{s_t\}$
97 | driving random outcomes, since this formulation is usually convenient in more sophisticated
98 | models.)
99 |
100 | ### Timing and Decisions
101 |
102 | At the start of each period, the agent can be either
103 |
104 | * unemployed or
105 | * employed at some existing wage level $w_e$.
106 |
107 | At the start of a given period, the current wage offer $w_t$ is observed.
108 |
109 | If currently *employed*, the worker
110 |
111 | 1. receives utility $u(w_e)$ and
112 | 1. is fired with some (small) probability $\alpha$.
113 |
114 | If currently *unemployed*, the worker either accepts or rejects the current offer $w_t$.
115 |
116 | If he accepts, then he begins work immediately at wage $w_t$.
117 |
118 | If he rejects, then he receives unemployment compensation $c$.
119 |
120 | The process then repeats.
121 |
122 | ```{note}
We do not allow for job search while employed---this topic is taken up in a {doc}`later lecture <jv>`.
124 | ```
125 |
126 | ## Solving the Model
127 |
128 | We drop time subscripts in what follows and primes denote next period values.
129 |
130 | Let
131 |
132 | * $v(w_e)$ be total lifetime value accruing to a worker who enters the current period *employed* with existing wage $w_e$
* $h(w)$ be total lifetime value accruing to a worker who enters the current period *unemployed* and receives
134 | wage offer $w$.
135 |
136 | Here *value* means the value of the objective function {eq}`objective` when the worker makes optimal decisions at all future points in time.
137 |
138 | Our first aim is to obtain these functions.
139 |
140 | ### The Bellman Equations
141 |
142 | Suppose for now that the worker can calculate the functions $v$ and $h$ and use them in his decision making.
143 |
144 | Then $v$ and $h$ should satisfy
145 |
146 | ```{math}
147 | :label: bell1_mccall
148 |
149 | v(w_e) = u(w_e) + \beta
150 | \left[
151 | (1-\alpha)v(w_e) + \alpha \sum_{w' \in \mathbb W} h(w') q(w')
152 | \right]
153 | ```
154 |
155 | and
156 |
157 | ```{math}
158 | :label: bell2_mccall
159 |
160 | h(w) = \max \left\{ v(w), \, u(c) + \beta \sum_{w' \in \mathbb W} h(w') q(w') \right\}
161 | ```
162 |
163 | Equation {eq}`bell1_mccall` expresses the value of being employed at wage $w_e$ in terms of
164 |
165 | * current reward $u(w_e)$ plus
166 | * discounted expected reward tomorrow, given the $\alpha$ probability of being fired
167 |
168 | Equation {eq}`bell2_mccall` expresses the value of being unemployed with offer
169 | $w$ in hand as a maximum over the value of two options: accept or reject
170 | the current offer.
171 |
172 | Accepting transitions the worker to employment and hence yields reward $v(w)$.
173 |
174 | Rejecting leads to unemployment compensation and unemployment tomorrow.
175 |
176 | Equations {eq}`bell1_mccall` and {eq}`bell2_mccall` are the Bellman equations for this model.
177 |
178 | They provide enough information to solve for both $v$ and $h$.
179 |
180 | (ast_mcm)=
181 | ### A Simplifying Transformation
182 |
183 | Rather than jumping straight into solving these equations, let's see if we can
184 | simplify them somewhat.
185 |
186 | (This process will be analogous to our {ref}`second pass ` at the plain vanilla
187 | McCall model, where we simplified the Bellman equation.)
188 |
189 | First, let
190 |
191 | ```{math}
192 | :label: defd_mm
193 |
194 | d := \sum_{w' \in \mathbb W} h(w') q(w')
195 | ```
196 |
197 | be the expected value of unemployment tomorrow.
198 |
199 | We can now write {eq}`bell2_mccall` as
200 |
201 | $$
202 | h(w) = \max \left\{ v(w), \, u(c) + \beta d \right\}
203 | $$
204 |
205 | or, shifting time forward one period
206 |
207 | $$
208 | \sum_{w' \in \mathbb W} h(w') q(w')
209 | = \sum_{w' \in \mathbb W} \max \left\{ v(w'), \, u(c) + \beta d \right\} q(w')
210 | $$
211 |
212 | Using {eq}`defd_mm` again now gives
213 |
214 | ```{math}
215 | :label: bell02_mccall
216 |
217 | d = \sum_{w' \in \mathbb W} \max \left\{ v(w'), \, u(c) + \beta d \right\} q(w')
218 | ```
219 |
220 | Finally, {eq}`bell1_mccall` can now be rewritten as
221 |
222 | ```{math}
223 | :label: bell01_mccall
224 |
225 | v(w) = u(w) + \beta
226 | \left[
227 | (1-\alpha)v(w) + \alpha d
228 | \right]
229 | ```
230 |
231 | In the last expression, we wrote $w_e$ as $w$ to make the notation
232 | simpler.
233 |
234 | ### The Reservation Wage
235 |
236 | Suppose we can use {eq}`bell02_mccall` and {eq}`bell01_mccall` to solve for
237 | $d$ and $v$.
238 |
239 | (We will do this soon.)
240 |
241 | We can then determine optimal behavior for the worker.
242 |
243 | From {eq}`bell2_mccall`, we see that an unemployed agent accepts current offer
244 | $w$ if $v(w) \geq u(c) + \beta d$.
245 |
246 | This means precisely that the value of accepting is higher than the expected value of rejecting.
247 |
248 | It is clear that $v$ is (at least weakly) increasing in $w$, since the agent is never made worse off by a higher wage offer.
249 |
250 | Hence, we can express the optimal choice as accepting wage offer $w$ if and only if
251 |
252 | $$
253 | w \geq \bar w
254 | \quad \text{where} \quad
255 | \bar w \text{ solves } v(\bar w) = u(c) + \beta d
256 | $$
257 |
258 | ### Solving the Bellman Equations
259 |
260 | We'll use the same iterative approach to solving the Bellman equations that we
261 | adopted in the {doc}`first job search lecture `.
262 |
263 | Here this amounts to
264 |
265 | 1. make guesses for $d$ and $v$
266 | 1. plug these guesses into the right-hand sides of {eq}`bell02_mccall` and {eq}`bell01_mccall`
267 | 1. update the left-hand sides from this rule and then repeat
268 |
269 | In other words, we are iterating using the rules
270 |
271 | ```{math}
272 | :label: bell1001
273 |
274 | d_{n+1} = \sum_{w' \in \mathbb W}
275 | \max \left\{ v_n(w'), \, u(c) + \beta d_n \right\} q(w')
276 | ```
277 |
278 | ```{math}
279 | :label: bell2001
280 |
281 | v_{n+1}(w) = u(w) + \beta
282 | \left[
283 | (1-\alpha)v_n(w) + \alpha d_n
284 | \right]
285 | ```
286 |
287 | starting from some initial conditions $d_0, v_0$.
288 |
289 | As before, the system always converges to the true solutions---in this case,
290 | the $v$ and $d$ that solve {eq}`bell02_mccall` and {eq}`bell01_mccall`.
291 |
292 | (A proof can be obtained via the Banach contraction mapping theorem.)
293 |
294 | ## Implementation
295 |
296 | Let's implement this iterative process.
297 |
298 | In the code, you'll see that we use a class to store the various parameters and other
299 | objects associated with a given model.
300 |
301 | This helps to tidy up the code and provides an object that's easy to pass to functions.
302 |
303 | The default utility function is a CRRA utility function
304 |
305 | ```{code-cell} python3
306 | @njit
307 | def u(c, σ=2.0):
308 | return (c**(1 - σ) - 1) / (1 - σ)
309 | ```
310 |
311 | Also, here's a default wage distribution, based around the BetaBinomial
312 | distribution:
313 |
314 | ```{code-cell} python3
315 | n = 60 # n possible outcomes for w
316 | w_default = np.linspace(10, 20, n) # wages between 10 and 20
317 | a, b = 600, 400 # shape parameters
318 | dist = BetaBinomial(n-1, a, b)
319 | q_default = dist.pdf()
320 | ```
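
As a quick check (an aside), the probabilities sum to one, and the implied mean offer lies between 10 and 20:

```{code-cell} python3
print(q_default.sum())        # should be (close to) 1.0
print(w_default @ q_default)  # mean wage offer under q
```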
321 |
322 | Here's our jitted class for the McCall model with separation.
323 |
324 | ```{code-cell} python3
325 | mccall_data = [
326 | ('α', float64), # job separation rate
327 | ('β', float64), # discount factor
328 | ('c', float64), # unemployment compensation
329 | ('w', float64[:]), # list of wage values
330 | ('q', float64[:]) # pmf of random variable w
331 | ]
332 |
333 | @jitclass(mccall_data)
334 | class McCallModel:
335 | """
336 | Stores the parameters and functions associated with a given model.
337 | """
338 |
339 | def __init__(self, α=0.2, β=0.98, c=6.0, w=w_default, q=q_default):
340 |
341 | self.α, self.β, self.c, self.w, self.q = α, β, c, w, q
342 |
343 |
344 | def update(self, v, d):
345 |
346 | α, β, c, w, q = self.α, self.β, self.c, self.w, self.q
347 |
348 | v_new = np.empty_like(v)
349 |
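        # Update employed values: flow utility plus discounted continuation,
        # accounting for the separation probability α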
350 | for i in range(len(w)):
351 | v_new[i] = u(w[i]) + β * ((1 - α) * v[i] + α * d)
352 |
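        # Update d, the expected value of unemployment tomorrow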
353 | d_new = np.sum(np.maximum(v, u(c) + β * d) * q)
354 |
355 | return v_new, d_new
356 | ```
357 |
358 | Now we iterate until successive realizations are closer together than some small tolerance level.
359 |
360 | We then return the current iterate as an approximate solution.
361 |
362 | ```{code-cell} python3
363 | @njit
364 | def solve_model(mcm, tol=1e-5, max_iter=2000):
365 | """
366 | Iterates to convergence on the Bellman equations
367 |
368 | * mcm is an instance of McCallModel
369 | """
370 |
371 | v = np.ones_like(mcm.w) # Initial guess of v
372 | d = 1 # Initial guess of d
373 | i = 0
374 | error = tol + 1
375 |
376 | while error > tol and i < max_iter:
377 | v_new, d_new = mcm.update(v, d)
378 | error_1 = np.max(np.abs(v_new - v))
379 | error_2 = np.abs(d_new - d)
380 | error = max(error_1, error_2)
381 | v = v_new
382 | d = d_new
383 | i += 1
384 |
385 | return v, d
386 | ```
387 |
388 | ### The Reservation Wage: First Pass
389 |
390 | The optimal choice of the agent is summarized by the reservation wage.
391 |
392 | As discussed above, the reservation wage is the $\bar w$ that solves
393 | $v(\bar w) = h$ where $h := u(c) + \beta d$ is the continuation
394 | value.
395 |
396 | Let's compare $v$ and $h$ to see what they look like.
397 |
398 | We'll use the default parameterizations found in the code above.
399 |
400 | ```{code-cell} python3
401 | mcm = McCallModel()
402 | v, d = solve_model(mcm)
403 | h = u(mcm.c) + mcm.β * d
404 |
405 | fig, ax = plt.subplots()
406 |
407 | ax.plot(mcm.w, v, 'b-', lw=2, alpha=0.7, label='$v$')
408 | ax.plot(mcm.w, [h] * len(mcm.w),
409 | 'g-', lw=2, alpha=0.7, label='$h$')
410 | ax.set_xlim(min(mcm.w), max(mcm.w))
411 | ax.legend()
412 |
413 | plt.show()
414 | ```
415 |
416 | The value $v$ is increasing because higher $w$ generates a higher wage flow conditional on staying employed.
417 |
418 | ### The Reservation Wage: Computation
419 |
420 | Here's a function `compute_reservation_wage` that takes an instance of `McCallModel`
421 | and returns the associated reservation wage.
422 |
423 | ```{code-cell} python3
424 | @njit
425 | def compute_reservation_wage(mcm):
426 | """
427 | Computes the reservation wage of an instance of the McCall model
428 | by finding the smallest w such that v(w) >= h.
429 |
430 | If no such w exists, then w_bar is set to np.inf.
431 | """
432 |
433 | v, d = solve_model(mcm)
434 | h = u(mcm.c) + mcm.β * d
435 |
    i = np.searchsorted(v, h, side='right')
    w_bar = mcm.w[i] if i < len(v) else np.inf  # no such w exists
438 |
439 | return w_bar
440 | ```
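
For example, at the default parameters:

```{code-cell} python3
mcm = McCallModel()
compute_reservation_wage(mcm)
```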
441 |
442 | Next we will investigate how the reservation wage varies with parameters.
443 |
444 | ## Impact of Parameters
445 |
446 | In each instance below, we'll show you a figure and then ask you to reproduce it in the exercises.
447 |
448 | ### The Reservation Wage and Unemployment Compensation
449 |
450 | First, let's look at how $\bar w$ varies with unemployment compensation.
451 |
452 | In the figure below, we use the default parameters in the `McCallModel` class, apart from
`c` (which takes the values given on the horizontal axis).
454 |
455 | ```{figure} /_static/lecture_specific/mccall_model_with_separation/mccall_resw_c.png
456 |
457 | ```
458 |
459 | As expected, higher unemployment compensation causes the worker to hold out for higher wages.
460 |
461 | In effect, the cost of continuing job search is reduced.
462 |
463 | ### The Reservation Wage and Discounting
464 |
465 | Next, let's investigate how $\bar w$ varies with the discount factor.
466 |
The next figure plots the reservation wage associated with different values of
$\beta$.
469 |
470 | ```{figure} /_static/lecture_specific/mccall_model_with_separation/mccall_resw_beta.png
471 |
472 | ```
473 |
474 | Again, the results are intuitive: More patient workers will hold out for higher wages.
475 |
476 | ### The Reservation Wage and Job Destruction
477 |
478 | Finally, let's look at how $\bar w$ varies with the job separation rate $\alpha$.
479 |
480 | Higher $\alpha$ translates to a greater chance that a worker will face termination in each period once employed.
481 |
482 | ```{figure} /_static/lecture_specific/mccall_model_with_separation/mccall_resw_alpha.png
483 |
484 | ```
485 |
486 | Once more, the results are in line with our intuition.
487 |
488 | If the separation rate is high, then the benefit of holding out for a higher wage falls.
489 |
490 | Hence the reservation wage is lower.
491 |
492 | ## Exercises
493 |
494 | ```{exercise-start}
495 | :label: mmws_ex1
496 | ```
497 |
498 | Reproduce all the reservation wage figures shown above.
499 |
500 | Regarding the values on the horizontal axis, use
501 |
502 | ```{code-cell} python3
503 | grid_size = 25
504 | c_vals = np.linspace(2, 12, grid_size) # unemployment compensation
505 | beta_vals = np.linspace(0.8, 0.99, grid_size) # discount factors
506 | alpha_vals = np.linspace(0.05, 0.5, grid_size) # separation rate
507 | ```
508 |
509 | ```{exercise-end}
510 | ```
511 |
512 | ```{solution-start} mmws_ex1
513 | :class: dropdown
514 | ```
515 |
516 | Here's the first figure.
517 |
518 | ```{code-cell} python3
519 | mcm = McCallModel()
520 |
521 | w_bar_vals = np.empty_like(c_vals)
522 |
523 | fig, ax = plt.subplots()
524 |
525 | for i, c in enumerate(c_vals):
526 | mcm.c = c
527 | w_bar = compute_reservation_wage(mcm)
528 | w_bar_vals[i] = w_bar
529 |
530 | ax.set(xlabel='unemployment compensation',
531 | ylabel='reservation wage')
532 | ax.plot(c_vals, w_bar_vals, label=r'$\bar w$ as a function of $c$')
533 | ax.legend()
534 |
535 | plt.show()
536 | ```
537 |
538 | Here's the second one.
539 |
540 | ```{code-cell} python3
541 | fig, ax = plt.subplots()
542 |
543 | for i, β in enumerate(beta_vals):
544 | mcm.β = β
545 | w_bar = compute_reservation_wage(mcm)
546 | w_bar_vals[i] = w_bar
547 |
548 | ax.set(xlabel='discount factor', ylabel='reservation wage')
549 | ax.plot(beta_vals, w_bar_vals, label=r'$\bar w$ as a function of $\beta$')
550 | ax.legend()
551 |
552 | plt.show()
553 | ```
554 |
555 | Here's the third.
556 |
557 | ```{code-cell} python3
    | mcm = McCallModel()    # Reset β to its default value
558 | fig, ax = plt.subplots()
559 |
560 | for i, α in enumerate(alpha_vals):
561 | mcm.α = α
562 | w_bar = compute_reservation_wage(mcm)
563 | w_bar_vals[i] = w_bar
564 |
565 | ax.set(xlabel='separation rate', ylabel='reservation wage')
566 | ax.plot(alpha_vals, w_bar_vals, label=r'$\bar w$ as a function of $\alpha$')
567 | ax.legend()
568 |
569 | plt.show()
570 | ```
571 |
572 | ```{solution-end}
573 | ```
574 |
--------------------------------------------------------------------------------
/lectures/career.md:
--------------------------------------------------------------------------------
1 | ---
2 | jupytext:
3 | text_representation:
4 | extension: .md
5 | format_name: myst
6 | kernelspec:
7 | display_name: Python 3
8 | language: python
9 | name: python3
10 | ---
11 |
12 | (career)=
20 |
21 | # Job Search V: Modeling Career Choice
22 |
23 | ```{index} single: Modeling; Career Choice
24 | ```
25 |
26 | In addition to what's in Anaconda, this lecture will need the following libraries:
27 |
28 | ```{code-cell} ipython
29 | ---
30 | tags: [hide-output]
31 | ---
32 | !pip install quantecon
33 | ```
34 |
35 | ## Overview
36 |
37 | Next, we study a computational problem concerning career and job choices.
38 |
39 | The model is originally due to Derek Neal {cite}`Neal1999`.
40 |
41 | This exposition draws on the presentation in {cite}`Ljungqvist2012`, section 6.5.
42 |
43 | We begin with some imports:
44 |
45 | ```{code-cell} ipython
46 | import matplotlib.pyplot as plt
47 | plt.rcParams["figure.figsize"] = (11, 5) #set default figure size
48 | import numpy as np
49 | import quantecon as qe
50 | from numba import njit, prange
51 | from quantecon.distributions import BetaBinomial
52 | from scipy.special import binom, beta
53 | from mpl_toolkits.mplot3d.axes3d import Axes3D
54 | from matplotlib import cm
55 | ```
56 |
57 | ### Model Features
58 |
59 | * Career and job within career both chosen to maximize expected discounted wage flow.
60 | * Infinite horizon dynamic programming with two state variables.
61 |
62 | ## Model
63 |
64 | In what follows we distinguish between a career and a job, where
65 |
66 | * a *career* is understood to be a general field encompassing many possible jobs, and
67 | * a *job* is understood to be a position with a particular firm
68 |
69 | For workers, wages can be decomposed into the contribution of job and career
70 |
71 | * $w_t = \theta_t + \epsilon_t$, where
72 | * $\theta_t$ is the contribution of career at time $t$
73 | * $\epsilon_t$ is the contribution of the job at time $t$
74 |
75 | At the start of time $t$, a worker has the following options
76 |
77 | * retain a current (career, job) pair $(\theta_t, \epsilon_t)$
78 | --- referred to hereafter as "stay put"
79 | * retain a current career $\theta_t$ but redraw a job $\epsilon_t$
80 | --- referred to hereafter as "new job"
81 | * redraw both a career $\theta_t$ and a job $\epsilon_t$
82 | --- referred to hereafter as "new life"
83 |
84 | Draws of $\theta$ and $\epsilon$ are independent of each other and
85 | past values, with
86 |
87 | * $\theta_t \sim F$
88 | * $\epsilon_t \sim G$
89 |
90 | Notice that the worker does not have the option to retain a job but redraw
91 | a career --- starting a new career always requires starting a new job.
92 |
93 | A young worker aims to maximize the expected sum of discounted wages
94 |
95 | ```{math}
96 | :label: exw
97 |
98 | \mathbb{E} \sum_{t=0}^{\infty} \beta^t w_t
99 | ```
100 |
101 | subject to the choice restrictions specified above.
102 |
103 | Let $v(\theta, \epsilon)$ denote the value function, which is the
104 | maximum of {eq}`exw` over all feasible (career, job) policies, given the
105 | initial state $(\theta, \epsilon)$.
106 |
107 | The value function obeys
108 |
109 | $$
110 | v(\theta, \epsilon) = \max\{I, II, III\}
111 | $$
112 |
113 | where
114 |
115 | ```{math}
116 | :label: eyes
117 |
118 | \begin{aligned}
119 | & I = \theta + \epsilon + \beta v(\theta, \epsilon) \\
120 | & II = \theta + \int \epsilon' G(d \epsilon') + \beta \int v(\theta, \epsilon') G(d \epsilon') \nonumber \\
121 | & III = \int \theta' F(d \theta') + \int \epsilon' G(d \epsilon') + \beta \int \int v(\theta', \epsilon') G(d \epsilon') F(d \theta') \nonumber
122 | \end{aligned}
123 | ```
124 |
125 | Evidently $I$, $II$ and $III$ correspond to "stay put", "new job" and "new life", respectively.
126 |
127 | ### Parameterization
128 |
129 | As in {cite}`Ljungqvist2012`, section 6.5, we will focus on a discrete version of the model, parameterized as follows:
130 |
131 | * both $\theta$ and $\epsilon$ take values in the set
132 | `np.linspace(0, B, grid_size)` --- an even grid of points between
133 | $0$ and $B$ inclusive
134 | * `grid_size = 50`
135 | * `B = 5`
136 | * `β = 0.95`
137 |
138 | The distributions $F$ and $G$ are discrete distributions
139 | generating draws from the grid points `np.linspace(0, B, grid_size)`.
140 |
141 | A very useful family of discrete distributions is the Beta-binomial family,
142 | with probability mass function
143 |
144 | $$
145 | p(k \,|\, n, a, b)
146 | = {n \choose k} \frac{B(k + a, n - k + b)}{B(a, b)},
147 | \qquad k = 0, \ldots, n
148 | $$
149 |
    | Here $B(\cdot, \cdot)$ denotes the [beta function](https://en.wikipedia.org/wiki/Beta_function).
    |
150 | Interpretation:
151 |
152 | * draw $q$ from a Beta distribution with shape parameters $(a, b)$
153 | * run $n$ independent binary trials, each with success probability $q$
154 | * $p(k \,|\, n, a, b)$ is the probability of $k$ successes in these $n$ trials
155 |
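    | This two-stage story is easy to simulate directly; here's a minimal sketch
    | (a quick Monte Carlo check, not part of the solution method used below):
    |
    | ```{code-cell} python3
    | rng = np.random.default_rng(1234)
    | q = rng.beta(2, 3, size=100_000)   # stage 1: q ~ Beta(a, b)
    | k = rng.binomial(50, q)            # stage 2: k successes in n = 50 trials
    | # Empirical frequencies of k, approximating p(k | n=50, a=2, b=3)
    | np.bincount(k, minlength=51) / len(k)
    | ```
    |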
156 | Nice properties:
157 |
158 | * very flexible class of distributions, including uniform, symmetric unimodal, etc.
159 | * only three parameters
160 |
161 | Here's a figure showing the effect on the pmf of different shape parameters when $n=50$.
162 |
163 | ```{code-cell} python3
164 | def gen_probs(n, a, b):
165 | probs = np.zeros(n+1)
166 | for k in range(n+1):
167 | probs[k] = binom(n, k) * beta(k + a, n - k + b) / beta(a, b)
168 | return probs
169 |
170 | n = 50
171 | a_vals = [0.5, 1, 100]
172 | b_vals = [0.5, 1, 100]
173 | fig, ax = plt.subplots(figsize=(10, 6))
174 | for a, b in zip(a_vals, b_vals):
175 | ab_label = f'$a = {a:.1f}$, $b = {b:.1f}$'
176 | ax.plot(list(range(0, n+1)), gen_probs(n, a, b), '-o', label=ab_label)
177 | ax.legend()
178 | plt.show()
179 | ```
180 |
181 | ## Implementation
182 |
183 | We will first create a class `CareerWorkerProblem` which will hold the
184 | default parameterizations of the model and an initial guess for the value function.
185 |
186 | ```{code-cell} python3
187 | class CareerWorkerProblem:
188 |
189 | def __init__(self,
190 | B=5.0, # Upper bound
191 | β=0.95, # Discount factor
192 | grid_size=50, # Grid size
193 | F_a=1,
194 | F_b=1,
195 | G_a=1,
196 | G_b=1):
197 |
198 | self.β, self.grid_size, self.B = β, grid_size, B
199 |
200 | self.θ = np.linspace(0, B, grid_size) # Set of θ values
201 | self.ϵ = np.linspace(0, B, grid_size) # Set of ϵ values
202 |
203 | self.F_probs = BetaBinomial(grid_size - 1, F_a, F_b).pdf()
204 | self.G_probs = BetaBinomial(grid_size - 1, G_a, G_b).pdf()
205 | self.F_mean = np.sum(self.θ * self.F_probs)
206 | self.G_mean = np.sum(self.ϵ * self.G_probs)
207 |
208 |         # Store the shape parameters of F and G for reference
209 | self._F_a, self._F_b = F_a, F_b
210 | self._G_a, self._G_b = G_a, G_b
211 | ```
212 |
213 | The following function takes an instance of `CareerWorkerProblem` and returns
214 | the corresponding Bellman operator $T$ and the greedy policy function.
215 |
216 | In this model, $T$ is defined by $Tv(\theta, \epsilon) = \max\{I, II, III\}$, where
217 | $I$, $II$ and $III$ are as given in {eq}`eyes`.
218 |
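    | On the discrete grid, the integrals in {eq}`eyes` reduce to inner products
    | against the probability vectors `F_probs` and `G_probs` (written $f$ and $g$ below):
    |
    | $$
    | \int v(\theta_i, \epsilon') G(d \epsilon') \approx \sum_j v_{ij} g_j
    | \quad \text{and} \quad
    | \int\!\!\int v(\theta', \epsilon') G(d \epsilon') F(d \theta')
    | \approx \sum_k \sum_j f_k v_{kj} g_j
    | $$
    |
    | These correspond to `v[i, :] @ G_probs` and `F_probs @ v @ G_probs` in the code.
    |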
219 | ```{code-cell} python3
220 | def operator_factory(cw, parallel_flag=True):
221 |
222 | """
223 | Returns jitted versions of the Bellman operator and the
224 | greedy policy function
225 |
226 | cw is an instance of ``CareerWorkerProblem``
227 | """
228 |
229 | θ, ϵ, β = cw.θ, cw.ϵ, cw.β
230 | F_probs, G_probs = cw.F_probs, cw.G_probs
231 | F_mean, G_mean = cw.F_mean, cw.G_mean
232 |
233 | @njit(parallel=parallel_flag)
234 | def T(v):
235 | "The Bellman operator"
236 |
237 | v_new = np.empty_like(v)
238 |
239 | for i in prange(len(v)):
240 | for j in prange(len(v)):
241 | v1 = θ[i] + ϵ[j] + β * v[i, j] # Stay put
242 | v2 = θ[i] + G_mean + β * v[i, :] @ G_probs # New job
243 | v3 = G_mean + F_mean + β * F_probs @ v @ G_probs # New life
244 | v_new[i, j] = max(v1, v2, v3)
245 |
246 | return v_new
247 |
248 | @njit
249 | def get_greedy(v):
250 | "Computes the v-greedy policy"
251 |
252 | σ = np.empty(v.shape)
253 |
254 | for i in range(len(v)):
255 | for j in range(len(v)):
256 | v1 = θ[i] + ϵ[j] + β * v[i, j]
257 | v2 = θ[i] + G_mean + β * v[i, :] @ G_probs
258 | v3 = G_mean + F_mean + β * F_probs @ v @ G_probs
259 | if v1 > max(v2, v3):
260 | action = 1
261 | elif v2 > max(v1, v3):
262 | action = 2
263 | else:
264 | action = 3
265 | σ[i, j] = action
266 |
267 | return σ
268 |
269 | return T, get_greedy
270 | ```
271 |
272 | Lastly, `solve_model` will take an instance of `CareerWorkerProblem` and
273 | iterate using the Bellman operator to find the fixed point of the Bellman equation.
274 |
275 | ```{code-cell} python3
276 | def solve_model(cw,
277 | use_parallel=True,
278 | tol=1e-4,
279 | max_iter=1000,
280 | verbose=True,
281 | print_skip=25):
282 |
283 | T, _ = operator_factory(cw, parallel_flag=use_parallel)
284 |
285 | # Set up loop
286 | v = np.full((cw.grid_size, cw.grid_size), 100.) # Initial guess
287 | i = 0
288 | error = tol + 1
289 |
290 | while i < max_iter and error > tol:
291 | v_new = T(v)
292 | error = np.max(np.abs(v - v_new))
293 | i += 1
294 | if verbose and i % print_skip == 0:
295 | print(f"Error at iteration {i} is {error}.")
296 | v = v_new
297 |
298 | if error > tol:
299 | print("Failed to converge!")
300 |
301 | elif verbose:
302 | print(f"\nConverged in {i} iterations.")
303 |
304 | return v_new
305 | ```
306 |
307 | Here's the solution to the model -- an approximate value function
308 |
309 | ```{code-cell} python3
310 | cw = CareerWorkerProblem()
311 | T, get_greedy = operator_factory(cw)
312 | v_star = solve_model(cw, verbose=False)
313 | greedy_star = get_greedy(v_star)
314 |
315 | fig = plt.figure(figsize=(8, 6))
316 | ax = fig.add_subplot(111, projection='3d')
317 | tg, eg = np.meshgrid(cw.θ, cw.ϵ)
318 | ax.plot_surface(tg,
319 | eg,
320 | v_star.T,
321 | cmap=cm.jet,
322 | alpha=0.5,
323 | linewidth=0.25)
324 | ax.set(xlabel='θ', ylabel='ϵ', zlim=(150, 200))
325 | ax.view_init(ax.elev, 225)
326 | plt.show()
327 | ```
328 |
329 | And here is the optimal policy
330 |
331 | ```{code-cell} python3
332 | fig, ax = plt.subplots(figsize=(6, 6))
333 | tg, eg = np.meshgrid(cw.θ, cw.ϵ)
334 | lvls = (0.5, 1.5, 2.5, 3.5)
335 | ax.contourf(tg, eg, greedy_star.T, levels=lvls, cmap=cm.winter, alpha=0.5)
336 | ax.contour(tg, eg, greedy_star.T, colors='k', levels=lvls, linewidths=2)
337 | ax.set(xlabel='θ', ylabel='ϵ')
338 | ax.text(1.8, 2.5, 'new life', fontsize=14)
339 | ax.text(4.5, 2.5, 'new job', fontsize=14, rotation='vertical')
340 | ax.text(4.0, 4.5, 'stay put', fontsize=14)
341 | plt.show()
342 | ```
343 |
344 | Interpretation:
345 |
346 | * If both job and career are poor or mediocre, the worker will experiment with a new job and new career.
347 | * If career is sufficiently good, the worker will hold it and experiment with new jobs until a sufficiently good one is found.
348 | * If both job and career are good, the worker will stay put.
349 |
350 | Notice that the worker will always hold on to a sufficiently good career, but not necessarily hold on to even the best paying job.
351 |
352 | The reason is that high lifetime wages require both variables to be large, and
353 | the worker cannot change careers without changing jobs.
354 |
355 | * Sometimes a good job must be sacrificed in order to change to a better career.
356 |
357 | ## Exercises
358 |
359 | ```{exercise-start}
360 | :label: career_ex1
361 | ```
362 |
363 | Using the default parameterization in the class `CareerWorkerProblem`,
364 | generate and plot typical sample paths for $\theta$ and $\epsilon$
365 | when the worker follows the optimal policy.
366 |
367 | In particular, modulo randomness, reproduce the following figure (where the horizontal axis represents time)
368 |
369 | ```{figure} /_static/lecture_specific/career/career_solutions_ex1_py.png
370 | ```
371 |
372 | ```{hint}
373 | :class: dropdown
374 | To generate the draws from the distributions $F$ and $G$, use `quantecon.random.draw()`.
375 | ```
376 |
377 | ```{exercise-end}
378 | ```
379 |
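    | If `quantecon.random.draw()` is unfamiliar, the basic pattern is: pass in a
    | cumulative distribution stored as an array and get back the index of the draw.
    | Here's a minimal sketch (the probabilities are made up):
    |
    | ```{code-cell} python3
    | cdf = np.cumsum([0.2, 0.5, 0.3])   # cdf of a distribution on {0, 1, 2}
    | qe.random.draw(cdf, 5)             # five independent draws, as indices
    | ```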
380 |
381 | ```{solution-start} career_ex1
382 | :class: dropdown
383 | ```
384 |
385 | Simulate job/career paths.
386 |
387 | In reading the code, recall that `optimal_policy[i, j]` is the policy at state
388 | $(\theta_i, \epsilon_j)$, taking the value 1, 2 or 3 for 'stay put',
389 | 'new job' and 'new life' respectively.
390 |
391 | ```{code-cell} python3
392 | F = np.cumsum(cw.F_probs)
393 | G = np.cumsum(cw.G_probs)
394 | v_star = solve_model(cw, verbose=False)
395 | T, get_greedy = operator_factory(cw)
396 | greedy_star = get_greedy(v_star)
397 |
398 | def gen_path(optimal_policy, F, G, t=20):
399 | i = j = 0
400 | θ_index = []
401 | ϵ_index = []
402 |     for _ in range(t):
403 | if optimal_policy[i, j] == 1: # Stay put
404 | pass
405 |
406 |         elif optimal_policy[i, j] == 2:  # New job
407 | j = qe.random.draw(G)
408 |
409 | else: # New life
410 | i, j = qe.random.draw(F), qe.random.draw(G)
411 | θ_index.append(i)
412 | ϵ_index.append(j)
413 | return cw.θ[θ_index], cw.ϵ[ϵ_index]
414 |
415 |
416 | fig, axes = plt.subplots(2, 1, figsize=(10, 8))
417 | for ax in axes:
418 | θ_path, ϵ_path = gen_path(greedy_star, F, G)
419 | ax.plot(ϵ_path, label='ϵ')
420 | ax.plot(θ_path, label='θ')
421 |     ax.set_ylim(0, 6)
    |     ax.legend()    # Attach a legend to each panel, not just the last
422 |
424 | plt.show()
425 | ```
426 |
427 | ```{solution-end}
428 | ```
429 |
430 | ```{exercise}
431 | :label: career_ex2
432 |
433 | Let's now consider how long it takes for the worker to settle down to a
434 | permanent job, given a starting point of $(\theta, \epsilon) = (0, 0)$.
435 |
436 | In other words, we want to study the distribution of the random variable
437 |
438 | $$
439 | T^* := \text{the first point in time from which the worker's job no longer changes}
440 | $$
441 |
442 | Evidently, the worker's job becomes permanent if and only if $(\theta_t, \epsilon_t)$ enters the
443 | "stay put" region of $(\theta, \epsilon)$ space.
444 |
445 | Letting $S$ denote this region, $T^*$ can be expressed as the
446 | first passage time to $S$ under the optimal policy:
447 |
448 | $$
449 | T^* := \inf\{t \geq 0 \,|\, (\theta_t, \epsilon_t) \in S\}
450 | $$
451 |
452 | Collect 25,000 draws of this random variable and compute the median (which should be about 7).
453 |
454 | Repeat the exercise with $\beta=0.99$ and interpret the change.
455 | ```
456 |
457 | ```{solution-start} career_ex2
458 | :class: dropdown
459 | ```
460 |
461 | The median for the original parameterization can be computed as follows
462 |
463 | ```{code-cell} python3
464 | cw = CareerWorkerProblem()
465 | F = np.cumsum(cw.F_probs)
466 | G = np.cumsum(cw.G_probs)
467 | T, get_greedy = operator_factory(cw)
468 | v_star = solve_model(cw, verbose=False)
469 | greedy_star = get_greedy(v_star)
470 |
471 | @njit
472 | def passage_time(optimal_policy, F, G):
473 | t = 0
474 | i = j = 0
475 | while True:
476 | if optimal_policy[i, j] == 1: # Stay put
477 | return t
478 | elif optimal_policy[i, j] == 2: # New job
479 | j = qe.random.draw(G)
480 | else: # New life
481 | i, j = qe.random.draw(F), qe.random.draw(G)
482 | t += 1
483 |
484 | @njit(parallel=True)
485 | def median_time(optimal_policy, F, G, M=25000):
486 | samples = np.empty(M)
487 | for i in prange(M):
488 | samples[i] = passage_time(optimal_policy, F, G)
489 | return np.median(samples)
490 |
491 | median_time(greedy_star, F, G)
492 | ```
493 |
494 | To compute the median with $\beta=0.99$ instead of the default
495 | value $\beta=0.95$, replace `cw = CareerWorkerProblem()` with
496 | `cw = CareerWorkerProblem(β=0.99)`.
497 |
498 | The medians are subject to randomness but should be about 7 and 14 respectively.
499 |
500 | Not surprisingly, more patient workers will wait longer to settle down to their final job.
501 |
502 | ```{solution-end}
503 | ```
504 |
505 |
506 | ```{exercise}
507 | :label: career_ex3
508 |
509 | Set the parameterization to `G_a = G_b = 100` and generate a new optimal policy
510 | figure -- interpret.
511 | ```
512 |
513 | ```{solution-start} career_ex3
514 | :class: dropdown
515 | ```
516 |
517 | Here is one solution
518 |
519 | ```{code-cell} python3
520 | cw = CareerWorkerProblem(G_a=100, G_b=100)
521 | T, get_greedy = operator_factory(cw)
522 | v_star = solve_model(cw, verbose=False)
523 | greedy_star = get_greedy(v_star)
524 |
525 | fig, ax = plt.subplots(figsize=(6, 6))
526 | tg, eg = np.meshgrid(cw.θ, cw.ϵ)
527 | lvls = (0.5, 1.5, 2.5, 3.5)
528 | ax.contourf(tg, eg, greedy_star.T, levels=lvls, cmap=cm.winter, alpha=0.5)
529 | ax.contour(tg, eg, greedy_star.T, colors='k', levels=lvls, linewidths=2)
530 | ax.set(xlabel='θ', ylabel='ϵ')
531 | ax.text(1.8, 2.5, 'new life', fontsize=14)
532 | ax.text(4.5, 1.5, 'new job', fontsize=14, rotation='vertical')
533 | ax.text(4.0, 4.5, 'stay put', fontsize=14)
534 | plt.show()
535 | ```
536 |
537 | In the new figure, you see that the region for which the worker
538 | stays put has grown because the distribution for $\epsilon$
539 | has become more concentrated around the mean, making very high-paying jobs
540 | unlikely to be drawn.
541 |
542 | ```{solution-end}
543 | ```
544 |
--------------------------------------------------------------------------------
/lectures/jv.md:
--------------------------------------------------------------------------------
1 | ---
2 | jupytext:
3 | text_representation:
4 | extension: .md
5 | format_name: myst
6 | kernelspec:
7 | display_name: Python 3
8 | language: python
9 | name: python3
10 | ---
11 |
12 | (jv)=
20 |
21 | # {index}`Job Search VI: On-the-Job Search`
22 |
23 | ```{index} single: Models; On-the-Job Search
24 | ```
25 |
26 | In addition to what's in Anaconda, this lecture will need the following libraries:
27 |
28 | ```{code-cell} ipython
29 | ---
30 | tags: [hide-output]
31 | ---
32 | !pip install interpolation
33 | ```
34 |
35 | ## Overview
36 |
37 | In this section, we solve a simple on-the-job search model
38 |
39 | * based on {cite}`Ljungqvist2012`, exercise 6.18, and {cite}`Jovanovic1979`
40 |
41 | Let's start with some imports:
42 |
43 | ```{code-cell} ipython
44 | import matplotlib.pyplot as plt
45 | plt.rcParams["figure.figsize"] = (11, 5) #set default figure size
46 | import numpy as np
47 | import scipy.stats as stats
48 | from interpolation import interp
49 | from numba import njit, prange
50 | ```
51 |
52 | ### Model Features
53 |
54 | ```{index} single: On-the-Job Search; Model Features
55 | ```
56 |
57 | * job-specific human capital accumulation combined with on-the-job search
58 | * infinite-horizon dynamic programming with one state variable and two controls
59 |
60 | ## Model
61 |
62 | ```{index} single: On-the-Job Search; Model
63 | ```
64 |
65 | Let $x_t$ denote the time-$t$ job-specific human capital of a worker employed at a given firm and let $w_t$ denote current wages.
66 |
67 | Let $w_t = x_t(1 - s_t - \phi_t)$, where
68 |
69 | * $\phi_t$ is investment in job-specific human capital for the current role and
70 | * $s_t$ is search effort, devoted to obtaining new offers from other firms.
71 |
72 | For as long as the worker remains in the current job, evolution of $\{x_t\}$ is given by $x_{t+1} = g(x_t, \phi_t)$.
73 |
74 | When search effort at $t$ is $s_t$, the worker receives a new job offer with probability $\pi(s_t) \in [0, 1]$.
75 |
76 | The value of the offer, measured in job-specific human capital, is $u_{t+1}$, where $\{u_t\}$ is IID with common distribution $f$.
77 |
78 | The worker can reject the current offer and continue with the existing job.
79 |
80 | Hence $x_{t+1} = u_{t+1}$ if he/she accepts and $x_{t+1} = g(x_t, \phi_t)$ otherwise.
81 |
82 | Let $b_{t+1} \in \{0,1\}$ be a binary random variable, where $b_{t+1} = 1$ indicates that the worker receives an offer at the end of time $t$.
83 |
84 | We can write
85 |
86 | ```{math}
87 | :label: jd
88 |
89 | x_{t+1}
90 | = (1 - b_{t+1}) g(x_t, \phi_t) + b_{t+1}
91 | \max \{ g(x_t, \phi_t), u_{t+1}\}
92 | ```
93 |
94 | Agent's objective: maximize expected discounted sum of wages via controls $\{s_t\}$ and $\{\phi_t\}$.
95 |
96 | Taking the expectation of $v(x_{t+1})$ over the offer indicator $b_{t+1}$ and
    | the offer draw $u_{t+1}$, equation {eq}`jd` gives
    |
    | $$
    | \mathbb{E} \, v(x_{t+1})
    | = (1 - \pi(s_t)) \, v[g(x_t, \phi_t)]
    | + \pi(s_t) \int v[\max\{g(x_t, \phi_t), u\}] f(du)
    | $$
    |
97 | Hence the Bellman equation for this problem can be written as
98 |
99 | ```{math}
100 | :label: jvbell
101 |
102 | v(x)
103 | = \max_{s + \phi \leq 1}
104 | \left\{
105 | x (1 - s - \phi) + \beta (1 - \pi(s)) v[g(x, \phi)] +
106 | \beta \pi(s) \int v[g(x, \phi) \vee u] f(du)
107 | \right\}
108 | ```
109 |
110 | Here nonnegativity of $s$ and $\phi$ is understood, while
111 | $a \vee b := \max\{a, b\}$.
112 |
113 | ### Parameterization
114 |
115 | ```{index} single: On-the-Job Search; Parameterization
116 | ```
117 |
118 | In the implementation below, we will focus on the parameterization
119 |
120 | $$
121 | g(x, \phi) = A (x \phi)^{\alpha},
122 | \quad
123 | \pi(s) = \sqrt s
124 | \quad \text{and} \quad
125 | f = \text{Beta}(2, 2)
126 | $$
127 |
128 | with default parameter values
129 |
130 | * $A = 1.4$
131 | * $\alpha = 0.6$
132 | * $\beta = 0.96$
133 |
134 | The $\text{Beta}(2,2)$ distribution is supported on $(0,1)$ --- it has a unimodal, symmetric density peaked at 0.5.
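    |
    | These properties are easy to confirm numerically --- for instance, with
    | `scipy.stats` (imported above as `stats`):
    |
    | ```{code-cell} python3
    | x = np.linspace(0, 1, 5)
    | stats.beta(2, 2).pdf(x)   # zero at the endpoints, peaked at 0.5
    | ```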
135 |
136 | (jvboecalc)=
137 | ### Back-of-the-Envelope Calculations
138 |
139 | Before we solve the model, let's make some quick calculations that
140 | provide intuition on what the solution should look like.
141 |
142 | To begin, observe that the worker has two instruments to build
143 | capital and hence wages:
144 |
145 | 1. invest in capital specific to the current job via $\phi$
146 | 1. search for a new job with better job-specific capital match via $s$
147 |
148 | Since wages are $x (1 - s - \phi)$, the marginal cost of investment via either $\phi$ or $s$ is identical.
149 |
150 | Our risk-neutral worker should focus on whatever instrument has the highest expected return.
151 |
152 | The relative expected return will depend on $x$.
153 |
154 | For example, suppose first that $x = 0.05$
155 |
156 | * If $s=1$ and $\phi = 0$, then since $g(x,\phi) = 0$,
157 | taking expectations of {eq}`jd` gives expected next period capital equal to $\pi(s) \mathbb{E} u
158 | = \mathbb{E} u = 0.5$.
159 | * If $s=0$ and $\phi=1$, then next period capital is $g(x, \phi) = g(0.05, 1) \approx 0.23$.
160 |
161 | Both rates of return are good, but the return from search is better.
162 |
163 | Next, suppose that $x = 0.4$
164 |
165 | * If $s=1$ and $\phi = 0$, then expected next period capital is again $0.5$
166 | * If $s=0$ and $\phi = 1$, then $g(x, \phi) = g(0.4, 1) \approx 0.8$
167 |
168 | Return from investment via $\phi$ dominates expected return from search.
169 |
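    | These back-of-the-envelope numbers are easily verified (a quick check,
    | using the default values $A = 1.4$ and $\alpha = 0.6$):
    |
    | ```{code-cell} python3
    | A, α = 1.4, 0.6
    | A * (0.05 * 1)**α, A * (0.4 * 1)**α   # ≈ (0.23, 0.81)
    | ```
    |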
170 | Combining these observations gives us two informal predictions:
171 |
172 | 1. At any given state $x$, the two controls $\phi$ and $s$ will
173 | function primarily as substitutes --- worker will focus on whichever instrument has the higher expected return.
174 | 1. For sufficiently small $x$, search will be preferable to investment in
175 | job-specific human capital. For larger $x$, the reverse will be true.
176 |
177 | Now let's turn to implementation, and see if we can match our predictions.
178 |
179 | ## Implementation
180 |
181 | ```{index} single: On-the-Job Search; Programming Implementation
182 | ```
183 |
184 | We will set up a class `JVWorker` that holds the parameters of the model described above
185 |
186 | ```{code-cell} python3
187 | class JVWorker:
188 | r"""
189 | A Jovanovic-type model of employment with on-the-job search.
190 |
191 | """
192 |
193 | def __init__(self,
194 | A=1.4,
195 | α=0.6,
196 | β=0.96, # Discount factor
197 | π=np.sqrt, # Search effort function
198 | a=2, # Parameter of f
199 | b=2, # Parameter of f
200 | grid_size=50,
201 | mc_size=100,
202 | ɛ=1e-4):
203 |
204 | self.A, self.α, self.β, self.π = A, α, β, π
205 | self.mc_size, self.ɛ = mc_size, ɛ
206 |
207 | self.g = njit(lambda x, ϕ: A * (x * ϕ)**α) # Transition function
208 | self.f_rvs = np.random.beta(a, b, mc_size)
209 |
210 | # Max of grid is the max of a large quantile value for f and the
211 | # fixed point y = g(y, 1)
212 | ɛ = 1e-4
213 | grid_max = max(A**(1 / (1 - α)), stats.beta(a, b).ppf(1 - ɛ))
214 |
215 | # Human capital
216 | self.x_grid = np.linspace(ɛ, grid_max, grid_size)
217 | ```
218 |
219 | The function `operator_factory` takes an instance of this class and returns a
220 | jitted version of the Bellman operator `T`, i.e.
221 |
222 | $$
223 | Tv(x)
224 | = \max_{s + \phi \leq 1} w(s, \phi)
225 | $$
226 |
227 | where
228 |
229 | ```{math}
230 | :label: defw
231 |
232 | w(s, \phi)
233 | := x (1 - s - \phi) + \beta (1 - \pi(s)) v[g(x, \phi)] +
234 | \beta \pi(s) \int v[g(x, \phi) \vee u] f(du)
235 | ```
236 |
237 | When we represent $v$, it will be with a NumPy array `v` giving values on grid `x_grid`.
238 |
239 | But to evaluate the right-hand side of {eq}`defw`, we need a function, so
240 | we replace the arrays `v` and `x_grid` with a function `v_func` that gives linear
241 | interpolation of `v` on `x_grid`.
242 |
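    | The integral in {eq}`defw` is approximated by Monte Carlo, averaging over the
    | pre-generated Beta draws stored in `f_rvs`:
    |
    | $$
    | \int v[g(x, \phi) \vee u] f(du)
    | \approx \frac{1}{M} \sum_{m=1}^{M} v[g(x, \phi) \vee u_m]
    | $$
    |
    | where $u_1, \ldots, u_M$ are IID draws from $f$ and $M$ is `mc_size`.
    |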
243 | Inside the `for` loop, for each `x` in the grid over the state space, we
244 | set up the function $z = (s, \phi) \mapsto w(s, \phi)$ defined in {eq}`defw`.
245 |
246 | The function is maximized over all feasible $(s, \phi)$ pairs.
247 |
248 | Another function, `get_greedy` returns the optimal choice of $s$ and $\phi$
249 | at each $x$, given a value function.
250 |
251 | ```{code-cell} python3
252 | def operator_factory(jv, parallel_flag=True):
253 |
254 | """
255 | Returns a jitted version of the Bellman operator T
256 |
257 | jv is an instance of JVWorker
258 |
259 | """
260 |
261 | π, β = jv.π, jv.β
262 | x_grid, ɛ, mc_size = jv.x_grid, jv.ɛ, jv.mc_size
263 | f_rvs, g = jv.f_rvs, jv.g
264 |
265 | @njit
266 | def state_action_values(z, x, v):
267 | s, ϕ = z
268 | v_func = lambda x: interp(x_grid, v, x)
269 |
270 | integral = 0
271 | for m in range(mc_size):
272 | u = f_rvs[m]
273 | integral += v_func(max(g(x, ϕ), u))
274 | integral = integral / mc_size
275 |
276 | q = π(s) * integral + (1 - π(s)) * v_func(g(x, ϕ))
277 | return x * (1 - ϕ - s) + β * q
278 |
279 | @njit(parallel=parallel_flag)
280 | def T(v):
281 | """
282 | The Bellman operator
283 | """
284 |
285 | v_new = np.empty_like(v)
286 | for i in prange(len(x_grid)):
287 | x = x_grid[i]
288 |
289 | # Search on a grid
290 | search_grid = np.linspace(ɛ, 1, 15)
291 | max_val = -1
292 | for s in search_grid:
293 | for ϕ in search_grid:
294 | current_val = state_action_values((s, ϕ), x, v) if s + ϕ <= 1 else -1
295 | if current_val > max_val:
296 | max_val = current_val
297 | v_new[i] = max_val
298 |
299 | return v_new
300 |
301 | @njit
302 | def get_greedy(v):
303 | """
304 | Computes the v-greedy policy of a given function v
305 | """
306 | s_policy, ϕ_policy = np.empty_like(v), np.empty_like(v)
307 |
308 | for i in range(len(x_grid)):
309 | x = x_grid[i]
310 | # Search on a grid
311 | search_grid = np.linspace(ɛ, 1, 15)
312 | max_val = -1
313 | for s in search_grid:
314 | for ϕ in search_grid:
315 | current_val = state_action_values((s, ϕ), x, v) if s + ϕ <= 1 else -1
316 | if current_val > max_val:
317 | max_val = current_val
318 | max_s, max_ϕ = s, ϕ
319 | s_policy[i], ϕ_policy[i] = max_s, max_ϕ
320 | return s_policy, ϕ_policy
321 |
322 | return T, get_greedy
323 | ```
324 |
325 | To solve the model, we will write a function that uses the Bellman operator
326 | and iterates to find a fixed point.
327 |
328 | ```{code-cell} python3
329 | def solve_model(jv,
330 | use_parallel=True,
331 | tol=1e-4,
332 | max_iter=1000,
333 | verbose=True,
334 | print_skip=25):
335 |
336 | """
337 | Solves the model by value function iteration
338 |
339 | * jv is an instance of JVWorker
340 |
341 | """
342 |
343 | T, _ = operator_factory(jv, parallel_flag=use_parallel)
344 |
345 | # Set up loop
346 | v = jv.x_grid * 0.5 # Initial condition
347 | i = 0
348 | error = tol + 1
349 |
350 | while i < max_iter and error > tol:
351 | v_new = T(v)
352 | error = np.max(np.abs(v - v_new))
353 | i += 1
354 | if verbose and i % print_skip == 0:
355 | print(f"Error at iteration {i} is {error}.")
356 | v = v_new
357 |
358 | if error > tol:
359 | print("Failed to converge!")
360 | elif verbose:
361 | print(f"\nConverged in {i} iterations.")
362 |
363 | return v_new
364 | ```
365 |
366 | ## Solving for Policies
367 |
368 | ```{index} single: On-the-Job Search; Solving for Policies
369 | ```
370 |
371 | Let's generate the optimal policies and see what they look like.
372 |
373 | (jv_policies)=
374 | ```{code-cell} python3
375 | jv = JVWorker()
376 | T, get_greedy = operator_factory(jv)
377 | v_star = solve_model(jv)
378 | s_star, ϕ_star = get_greedy(v_star)
379 | ```
380 |
381 | Here are the plots:
382 |
383 | ```{code-cell} python3
384 | plots = [s_star, ϕ_star, v_star]
385 | titles = ["s policy", "ϕ policy", "value function"]
386 |
387 | fig, axes = plt.subplots(3, 1, figsize=(12, 12))
388 |
389 | for ax, plot, title in zip(axes, plots, titles):
390 | ax.plot(jv.x_grid, plot)
391 | ax.set(title=title)
392 | ax.grid()
393 |
394 | axes[-1].set_xlabel("x")
395 | plt.show()
396 | ```
397 |
398 | The horizontal axis is the state $x$, while the vertical axis gives $s(x)$ and $\phi(x)$.
399 |
400 | Overall, the policies match well with our predictions from {ref}`above `
401 |
402 | * Worker switches from one investment strategy to the other depending on relative return.
403 | * For low values of $x$, the best option is to search for a new job.
404 | * Once $x$ is larger, worker does better by investing in human capital specific to the current position.
405 |
406 | ## Exercises
407 |
408 | ```{exercise-start}
409 | :label: jv_ex1
410 | ```
411 |
412 | Let's look at the dynamics for the state process $\{x_t\}$ associated with these policies.
413 |
414 | The dynamics are given by {eq}`jd` when $\phi_t$ and $s_t$ are
415 | chosen according to the optimal policies, and $\mathbb{P}\{b_{t+1} = 1\}
416 | = \pi(s_t)$.
417 |
418 | Since the dynamics are random, analysis is a bit subtle.
419 |
420 | One way to do it is to plot, for each $x$ in a relatively fine grid
421 | called `plot_grid`, a
422 | large number $K$ of realizations of $x_{t+1}$ given $x_t =
423 | x$.
424 |
425 | Plot this with one dot for each realization, in the form of a 45 degree
426 | diagram, setting
427 |
428 | ```{code-block} python3
429 | jv = JVWorker(grid_size=25, mc_size=50)
430 | plot_grid_max, plot_grid_size = 1.2, 100
431 | plot_grid = np.linspace(0, plot_grid_max, plot_grid_size)
432 | fig, ax = plt.subplots()
433 | ax.set_xlim(0, plot_grid_max)
434 | ax.set_ylim(0, plot_grid_max)
435 | ```
436 |
437 | By examining the plot, argue that under the optimal policies, the state
438 | $x_t$ will converge to a constant value $\bar x$ close to unity.
439 |
440 | Argue that at the steady state, $s_t \approx 0$ and $\phi_t \approx 0.6$.
441 |
442 | ```{exercise-end}
443 | ```
444 |
445 | ```{solution-start} jv_ex1
446 | :class: dropdown
447 | ```
448 |
449 | Here’s code to produce the 45 degree diagram
450 |
451 | ```{code-cell} python3
452 | jv = JVWorker(grid_size=25, mc_size=50)
453 | π, g, f_rvs, x_grid = jv.π, jv.g, jv.f_rvs, jv.x_grid
454 | T, get_greedy = operator_factory(jv)
455 | v_star = solve_model(jv, verbose=False)
456 | s_policy, ϕ_policy = get_greedy(v_star)
457 |
458 | # Turn the policy function arrays into actual functions
459 | s = lambda y: interp(x_grid, s_policy, y)
460 | ϕ = lambda y: interp(x_grid, ϕ_policy, y)
461 |
462 | def h(x, b, u):
463 | return (1 - b) * g(x, ϕ(x)) + b * max(g(x, ϕ(x)), u)
464 |
465 |
466 | plot_grid_max, plot_grid_size = 1.2, 100
467 | plot_grid = np.linspace(0, plot_grid_max, plot_grid_size)
468 | fig, ax = plt.subplots(figsize=(8, 8))
469 | ticks = (0.25, 0.5, 0.75, 1.0)
470 | ax.set(xticks=ticks, yticks=ticks,
471 | xlim=(0, plot_grid_max),
472 | ylim=(0, plot_grid_max),
473 | xlabel='$x_t$', ylabel='$x_{t+1}$')
474 |
475 | ax.plot(plot_grid, plot_grid, 'k--', alpha=0.6) # 45 degree line
476 | for x in plot_grid:
477 | for i in range(jv.mc_size):
478 | b = 1 if np.random.uniform(0, 1) < π(s(x)) else 0
479 | u = f_rvs[i]
480 | y = h(x, b, u)
481 | ax.plot(x, y, 'go', alpha=0.25)
482 |
483 | plt.show()
484 | ```
485 |
486 | Looking at the dynamics, we can see that
487 |
488 | - If $x_t$ is below about 0.2 the dynamics are random, but
489 | $x_{t+1} > x_t$ is very likely.
490 | - As $x_t$ increases the dynamics become deterministic, and
491 | $x_t$ converges to a steady state value close to 1.
492 |
493 | Referring back to the figure {ref}`here ` we see that $x_t \approx 1$ means that
494 | $s_t = s(x_t) \approx 0$ and
495 | $\phi_t = \phi(x_t) \approx 0.6$.
496 |
497 | ```{solution-end}
498 | ```
499 |
500 |
501 | ```{exercise}
502 | :label: jv_ex2
503 |
504 | In {ref}`jv_ex1`, we found that $s_t$ converges to zero
505 | and $\phi_t$ converges to about 0.6.
506 |
507 | Since these results were calculated at a value of $\beta$ close to
508 | one, let's compare them to the best choice for an *infinitely* patient worker.
509 |
510 | Intuitively, an infinitely patient worker would like to maximize steady state
511 | wages, which are a function of steady state capital.
512 |
513 | You can take it as given---it's certainly true---that the infinitely patient worker does not
514 | search in the long run (i.e., $s_t = 0$ for large $t$).
515 |
516 | Thus, given $\phi$, steady state capital is the positive fixed point
517 | $x^*(\phi)$ of the map $x \mapsto g(x, \phi)$.
518 |
519 | Steady state wages can be written as $w^*(\phi) = x^*(\phi) (1 - \phi)$.
520 |
521 | Graph $w^*(\phi)$ with respect to $\phi$, and examine the best
522 | choice of $\phi$.
523 |
524 | Can you give a rough interpretation for the value that you see?
525 | ```
526 |
527 | ```{solution-start} jv_ex2
528 | :class: dropdown
529 | ```
530 |
531 | Solving $x = g(x, \phi) = A (x \phi)^{\alpha}$ for the positive fixed point
    | gives $x^*(\phi) = (A \phi^{\alpha})^{1/(1-\alpha)}$, which is implemented
    | as `xbar` in the following code.
    |
    | The figure can be produced as follows
532 |
533 | ```{code-cell} python3
534 | jv = JVWorker()
535 |
536 | def xbar(ϕ):
537 | A, α = jv.A, jv.α
538 | return (A * ϕ**α)**(1 / (1 - α))
539 |
540 | ϕ_grid = np.linspace(0, 1, 100)
541 | fig, ax = plt.subplots(figsize=(9, 7))
542 | ax.set(xlabel=r'$\phi$')
543 | ax.plot(ϕ_grid, [xbar(ϕ) * (1 - ϕ) for ϕ in ϕ_grid], label=r'$w^*(\phi)$')
544 | ax.legend()
545 |
546 | plt.show()
547 | ```
548 |
549 | Observe that the maximizer is around 0.6.
550 |
551 | This is similar to the long-run value for $\phi$ obtained in
552 | {ref}`jv_ex1`.
553 |
554 | Hence the behavior of the infinitely patient worker is similar to that
555 | of the worker with $\beta = 0.96$.
556 |
557 | This seems reasonable and helps us confirm that our dynamic programming
558 | solutions are probably correct.
559 |
560 | ```{solution-end}
561 | ```
562 |
--------------------------------------------------------------------------------
/lectures/cake_eating_problem.md:
--------------------------------------------------------------------------------
1 | ---
2 | jupytext:
3 | text_representation:
4 | extension: .md
5 | format_name: myst
6 | kernelspec:
7 | display_name: Python 3
8 | language: python
9 | name: python3
10 | ---
11 |
12 | # Cake Eating I: Introduction to Optimal Saving
13 |
14 | ## Overview
15 |
16 | In this lecture we introduce a simple "cake eating" problem.
17 |
18 | The intertemporal problem is: how much to enjoy today and how much to leave
19 | for the future?
20 |
21 | Although the topic sounds trivial, this kind of trade-off between current
22 | and future utility is at the heart of many savings and consumption problems.
23 |
24 | Once we master the ideas in this simple environment, we will apply them to
25 | progressively more challenging---and useful---problems.
26 |
27 | The main tool we will use to solve the cake eating problem is dynamic programming.
28 |
29 | Readers might find it helpful to review the following lectures before reading this one:
30 |
31 | * The {doc}`shortest paths lecture `
32 | * The {doc}`basic McCall model `
33 | * The {doc}`McCall model with separation `
34 | * The {doc}`McCall model with separation and a continuous wage distribution `
35 |
36 | In what follows, we require the following imports:
37 |
38 | ```{code-cell} ipython
39 | import matplotlib.pyplot as plt
40 | plt.rcParams["figure.figsize"] = (11, 5) #set default figure size
41 | import numpy as np
42 | ```
43 |
44 | ## The Model
45 |
46 | We consider an infinite time horizon $t=0, 1, 2, 3, \ldots$
47 |
48 | At $t=0$ the agent is given a complete cake with size $\bar x$.
49 |
50 | Let $x_t$ denote the size of the cake at the beginning of each period,
51 | so that, in particular, $x_0=\bar x$.
52 |
53 | We choose how much of the cake to eat in any given period $t$.
54 |
55 | After choosing to consume $c_t$ of the cake in period $t$ there is
56 |
57 | $$
58 | x_{t+1} = x_t - c_t
59 | $$
60 |
61 | left in period $t+1$.
62 |
63 | Consuming quantity $c$ of the cake gives current utility $u(c)$.
64 |
65 | We adopt the CRRA utility function
66 |
67 | ```{math}
68 | :label: crra_utility
69 |
70 | u(c) = \frac{c^{1-\gamma}}{1-\gamma} \qquad (\gamma \gt 0, \, \gamma \neq 1)
71 | ```
72 |
73 | In Python this is
74 |
75 | ```{code-cell} python3
76 | def u(c, γ):
77 |
78 | return c**(1 - γ) / (1 - γ)
79 | ```
80 |
81 | Future cake consumption utility is discounted according to $\beta\in(0, 1)$.
82 |
83 | In particular, consumption of $c$ units $t$ periods hence has present value $\beta^t u(c)$.
84 |
85 | The agent's problem can be written as
86 |
87 | ```{math}
88 | :label: cake_objective
89 |
90 | \max_{\{c_t\}} \sum_{t=0}^\infty \beta^t u(c_t)
91 | ```
92 |
93 | subject to
94 |
95 | ```{math}
96 | :label: cake_feasible
97 |
98 | x_{t+1} = x_t - c_t
99 | \quad \text{and} \quad
100 | 0\leq c_t\leq x_t
101 | ```
102 |
103 | for all $t$.
104 |
105 | A consumption path $\{c_t\}$ satisfying {eq}`cake_feasible` where
106 | $x_0 = \bar x$ is called **feasible**.
107 |
108 | In this problem, the following terminology is standard:
109 |
110 | * $x_t$ is called the **state variable**
111 | * $c_t$ is called the **control variable** or the **action**
112 | * $\beta$ and $\gamma$ are **parameters**
113 |
114 | ### Trade-Off
115 |
116 | The key trade-off in the cake-eating problem is this:
117 |
118 | * Delaying consumption is costly because of the discount factor.
119 | * But delaying some consumption is also attractive because $u$ is concave.
120 |
121 | The concavity of $u$ implies that the consumer gains value from
122 | *consumption smoothing*, which means spreading consumption out over time.
123 |
124 | This is because concavity implies diminishing marginal utility---a progressively smaller gain in utility for each additional spoonful of cake consumed within one period.
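    |
    | As a quick illustration with the CRRA $u$ defined above (and ignoring
    | discounting), splitting two units of cake evenly across two periods beats
    | an uneven split:
    |
    | ```{code-cell} python3
    | γ = 1.2
    | u(1.0, γ) + u(1.0, γ), u(1.5, γ) + u(0.5, γ)   # the even split wins
    | ```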
125 |
126 | ### Intuition
127 |
128 | The reasoning given above suggests that the discount factor $\beta$ and the curvature parameter $\gamma$ will play a key role in determining the rate of consumption.
129 |
130 | Here's an educated guess as to what impact these parameters will have.
131 |
132 | First, higher $\beta$ implies less discounting, and hence the agent is more patient, which should reduce the rate of consumption.
133 |
134 | Second, higher $\gamma$ implies that marginal utility $u'(c) =
135 | c^{-\gamma}$ falls faster with $c$.
136 |
137 | This suggests more smoothing, and hence a lower rate of consumption.
138 |
139 | In summary, we expect the rate of consumption to be *decreasing in both
140 | parameters*.
141 |
142 | Let's see if this is true.
143 |
144 | ## The Value Function
145 |
146 | The first step of our dynamic programming treatment is to obtain the Bellman
147 | equation.
148 |
149 | The next step is to use it to calculate the solution.
150 |
151 | ### The Bellman Equation
152 |
153 | To this end, we let $v(x)$ be maximum lifetime utility attainable from
154 | the current time when $x$ units of cake are left.
155 |
156 | That is,
157 |
158 | ```{math}
159 | :label: value_fun
160 |
161 | v(x) = \max \sum_{t=0}^{\infty} \beta^t u(c_t)
162 | ```
163 |
164 | where the maximization is over all paths $\{ c_t \}$ that are feasible
165 | from $x_0 = x$.
166 |
167 | At this point, we do not have an expression for $v$, but we can still
168 | make inferences about it.
169 |
170 | For example, as was the case with the {doc}`McCall model `, the
171 | value function will satisfy a version of the *Bellman equation*.
172 |
173 | In the present case, this equation states that $v$ satisfies
174 |
175 | ```{math}
176 | :label: bellman-cep
177 |
178 | v(x) = \max_{0\leq c \leq x} \{u(c) + \beta v(x-c)\}
179 | \quad \text{for any given } x \geq 0.
180 | ```
181 |
182 | The intuition here is essentially the same as it was for the McCall model.
183 |
184 | Choosing $c$ optimally means trading off current vs future rewards.
185 |
186 | Current rewards from choice $c$ are just $u(c)$.
187 |
188 | Future rewards given current cake size $x$, measured from next period and
189 | assuming optimal behavior, are $v(x-c)$.
190 |
191 | These are the two terms on the right hand side of {eq}`bellman-cep`, after
192 | suitable discounting.
193 |
194 | If $c$ is chosen optimally using this trade off strategy, then we obtain maximal lifetime rewards from our current state $x$.
195 |
196 | Hence, $v(x)$ equals the right hand side of {eq}`bellman-cep`, as claimed.
197 |
198 | ### An Analytical Solution
199 |
200 | It has been shown that, with $u$ as the CRRA utility function in
201 | {eq}`crra_utility`, the function
202 |
203 | ```{math}
204 | :label: crra_vstar
205 |
206 | v^*(x_t) = \left( 1-\beta^{1/\gamma} \right)^{-\gamma}u(x_t)
207 | ```
208 |
209 | solves the Bellman equation and hence is equal to the value function.
210 |
211 | You are asked to confirm that this is true in the exercises below.
212 |
213 | The solution {eq}`crra_vstar` depends heavily on the CRRA utility function.
214 |
215 | In fact, if we move away from CRRA utility, usually there is no analytical
216 | solution at all.
217 |
218 | In other words, beyond CRRA utility, we know that the value function still
219 | satisfies the Bellman equation, but we do not have a way of writing it
220 | explicitly, as a function of the state variable and the parameters.
221 |
222 | We will deal with that situation numerically when the time comes.
223 |
224 | Here is a Python representation of the value function:
225 |
226 | ```{code-cell} python3
227 | def v_star(x, β, γ):
228 |
229 | return (1 - β**(1 / γ))**(-γ) * u(x, γ)
230 | ```
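    |
    | As a quick numerical spot check (not a proof), we can confirm that maximizing
    | the right hand side of {eq}`bellman-cep` at a sample state reproduces
    | $v^*(x)$, here using SciPy's `minimize_scalar`:
    |
    | ```{code-cell} python3
    | from scipy.optimize import minimize_scalar
    |
    | β, γ, x = 0.95, 1.2, 2.0
    | res = minimize_scalar(lambda c: -(u(c, γ) + β * v_star(x - c, β, γ)),
    |                       bounds=(1e-6, x - 1e-6), method='bounded')
    | -res.fun, v_star(x, β, γ)   # these two numbers should (almost) agree
    | ```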
231 |
232 | And here's a figure showing the function for fixed parameters:
233 |
234 | ```{code-cell} python3
235 | β, γ = 0.95, 1.2
236 | x_grid = np.linspace(0.1, 5, 100)
237 |
238 | fig, ax = plt.subplots()
239 |
240 | ax.plot(x_grid, v_star(x_grid, β, γ), label='value function')
241 |
242 | ax.set_xlabel('$x$', fontsize=12)
243 | ax.legend(fontsize=12)
244 |
245 | plt.show()
246 | ```
247 |
248 | ## The Optimal Policy
249 |
250 | Now that we have the value function, it is straightforward to calculate the
251 | optimal action at each state.
252 |
253 | We should choose consumption to maximize the
254 | right hand side of the Bellman equation {eq}`bellman-cep`.
255 |
256 | $$
257 | c^* = \arg \max_{c} \{u(c) + \beta v(x - c)\}
258 | $$
259 |
260 | We can think of this optimal choice as a function of the state $x$, in
261 | which case we call it the **optimal policy**.
262 |
263 | We denote the optimal policy by $\sigma^*$, so that
264 |
265 | $$
266 | \sigma^*(x) := \arg \max_{c} \{u(c) + \beta v(x - c)\}
267 | \quad \text{for all } x
268 | $$
269 |
270 | If we plug the analytical expression {eq}`crra_vstar` for the value function
271 | into the right hand side and compute the optimum, we find that
272 |
273 | ```{math}
274 | :label: crra_opt_pol
275 |
276 | \sigma^*(x) = \left( 1-\beta^{1/\gamma} \right) x
277 | ```
278 |
279 | Now let's recall our intuition on the impact of parameters.
280 |
281 | We guessed that the consumption rate would be decreasing in both parameters.
282 |
283 | This is in fact the case, as can be seen from {eq}`crra_opt_pol`.
284 |
285 | Here are some plots that illustrate this.
286 |
287 | ```{code-cell} python3
288 | def c_star(x, β, γ):
289 |
290 | return (1 - β ** (1/γ)) * x
291 | ```
292 |
293 | Continuing with the values for $\beta$ and $\gamma$ used above, the
294 | plot is
295 |
296 | ```{code-cell} python3
297 | fig, ax = plt.subplots()
298 | ax.plot(x_grid, c_star(x_grid, β, γ), label='default parameters')
299 | ax.plot(x_grid, c_star(x_grid, β + 0.02, γ), label=r'higher $\beta$')
300 | ax.plot(x_grid, c_star(x_grid, β, γ + 0.2), label=r'higher $\gamma$')
301 | ax.set_ylabel(r'$\sigma(x)$')
302 | ax.set_xlabel('$x$')
303 | ax.legend()
304 |
305 | plt.show()
306 | ```
307 |
308 | ## The Euler Equation
309 |
310 | In the discussion above we have provided a complete solution to the cake
311 | eating problem in the case of CRRA utility.
312 |
313 | There is in fact another way to solve for the optimal policy, based on the
314 | so-called **Euler equation**.
315 |
316 | Although we already have a complete solution, now is a good time to study the
317 | Euler equation.
318 |
319 | This is because, for more difficult problems, this equation
320 | provides key insights that are hard to obtain by other methods.
321 |
322 | ### Statement and Implications
323 |
324 | The Euler equation for the present problem can be stated as
325 |
326 | ```{math}
327 | :label: euler-cep
328 |
329 | u^{\prime} (c^*_{t})=\beta u^{\prime}(c^*_{t+1})
330 | ```
331 |
332 | This is a necessary condition for the optimal path.
333 |
334 | It says that, along the optimal path, marginal rewards are equalized across time, after appropriate discounting.
335 |
336 | This makes sense: optimality is obtained by smoothing consumption up to the
337 | point where no marginal gains remain.
338 |
339 | We can also state the Euler equation in terms of the policy function.
340 |
341 | A **feasible consumption policy** is a map $x \mapsto \sigma(x)$
342 | satisfying $0 \leq \sigma(x) \leq x$.
343 |
344 | The last restriction says that we cannot consume more than the remaining
345 | quantity of cake.
346 |
347 | A feasible consumption policy $\sigma$ is said to **satisfy the Euler equation** if, for
348 | all $x > 0$,
349 |
350 | ```{math}
351 | :label: euler_pol
352 |
353 | u^{\prime}( \sigma(x) )
354 | = \beta u^{\prime} (\sigma(x - \sigma(x)))
355 | ```
356 |
357 | Evidently {eq}`euler_pol` is just the policy equivalent of {eq}`euler-cep`.
358 |
359 | It turns out that a feasible policy is optimal if and
360 | only if it satisfies the Euler equation.
361 |
362 | In the exercises, you are asked to verify that the optimal policy
363 | {eq}`crra_opt_pol` does indeed satisfy this functional equation.
364 |
365 | ```{note}
366 | A **functional equation** is an equation where the unknown object is a function.
367 | ```
368 |
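    | Before attempting that verification, we can at least spot-check {eq}`euler_pol`
    | numerically for the policy {eq}`crra_opt_pol` at a few sample states (a sanity
    | check rather than a substitute for the derivation):
    |
    | ```{code-cell} python3
    | β, γ = 0.95, 1.2
    | σ_star = lambda x: (1 - β**(1 / γ)) * x    # the optimal policy
    | u_prime = lambda c: c**(-γ)                # marginal utility of CRRA u
    | x = np.linspace(0.5, 5, 9)
    | np.allclose(u_prime(σ_star(x)), β * u_prime(σ_star(x - σ_star(x))))
    | ```
    |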
369 | For a proof of sufficiency of the Euler equation in a very general setting,
370 | see proposition 2.2 of {cite}`ma2020income`.
371 |
372 | The following arguments focus on necessity, explaining why an optimal path or
373 | policy should satisfy the Euler equation.
374 |
375 | ### Derivation I: A Perturbation Approach
376 |
377 | Let's write $c$ as a shorthand for consumption path $\{c_t\}_{t=0}^\infty$.
378 |
379 | The overall cake-eating maximization problem can be written as
380 |
381 | $$
382 | \max_{c \in F} U(c)
383 | \quad \text{where } U(c) := \sum_{t=0}^\infty \beta^t u(c_t)
384 | $$
385 |
386 | and $F$ is the set of feasible consumption paths.
387 |
388 | We know that differentiable functions have a zero gradient at a maximizer.
389 |
390 | So the optimal path $c^* := \{c^*_t\}_{t=0}^\infty$ must satisfy
391 | $U'(c^*) = 0$.
392 |
393 | ```{note}
394 | If you want to know exactly how the derivative $U'(c^*)$ is
395 | defined, given that the argument $c^*$ is a vector of infinite
396 | length, you can start by learning about [Gateaux derivatives](https://en.wikipedia.org/wiki/Gateaux_derivative). However, such
397 | knowledge is not assumed in what follows.
398 | ```
399 |
400 | In other words, the rate of change in $U$ must be zero for any
401 | infinitesimally small (and feasible) perturbation away from the optimal path.
402 |
403 | So consider a feasible perturbation that reduces consumption at time $t$ to
404 | $c^*_t - h$
405 | and increases it in the next period to $c^*_{t+1} + h$.
406 |
407 | Consumption does not change in any other period.
408 |
409 | We call this perturbed path $c^h$.
410 |
411 | By the preceding argument about zero gradients, we have
412 |
413 | $$
414 | \lim_{h \to 0} \frac{U(c^h) - U(c^*)}{h} = U'(c^*) = 0
415 | $$
416 |
417 | Recalling that consumption only changes at $t$ and $t+1$, this
418 | becomes
419 |
420 | $$
421 | \lim_{h \to 0}
422 | \frac{\beta^t u(c^*_t - h) + \beta^{t+1} u(c^*_{t+1} + h)
423 | - \beta^t u(c^*_t) - \beta^{t+1} u(c^*_{t+1}) }{h} = 0
424 | $$
425 |
426 | After rearranging, the same expression can be written as
427 |
428 | $$
429 | \lim_{h \to 0}
430 | \frac{u(c^*_t - h) - u(c^*_t) }{h}
431 | + \beta \lim_{h \to 0}
432 | \frac{ u(c^*_{t+1} + h) - u(c^*_{t+1}) }{h} = 0
433 | $$
434 |
435 | or, taking the limit,
436 |
437 | $$
438 | - u'(c^*_t) + \beta u'(c^*_{t+1}) = 0
439 | $$
440 |
441 | This is just the Euler equation.
442 |
443 | ### Derivation II: Using the Bellman Equation
444 |
445 | Another way to derive the Euler equation is to use the Bellman equation {eq}`bellman-cep`.
446 |
447 | Taking the derivative on the right hand side of the Bellman equation with
448 | respect to $c$ and setting it to zero, we get
449 |
450 | ```{math}
451 | :label: bellman_FOC
452 |
453 | u^{\prime}(c)=\beta v^{\prime}(x - c)
454 | ```
455 |
456 | To obtain $v^{\prime}(x - c)$, we set
457 | $g(c,x) = u(c) + \beta v(x - c)$, so that, at the optimal choice of
458 | consumption,
459 |
460 | ```{math}
461 | :label: bellman_equality
462 |
463 | v(x) = g(c,x)
464 | ```
465 |
466 | Differentiating both sides while acknowledging that the maximizing consumption will depend
467 | on $x$, we get
468 |
469 | $$
470 | v' (x) =
471 | \frac{\partial }{\partial c} g(c,x) \frac{\partial c}{\partial x}
472 | + \frac{\partial }{\partial x} g(c,x)
473 | $$
474 |
475 | When $g(c,x)$ is maximized at $c$, we have $\frac{\partial }{\partial c} g(c,x) = 0$.
476 |
477 | Hence the derivative simplifies to
478 |
479 | ```{math}
480 | :label: bellman_envelope
481 |
482 | v' (x) =
483 | \frac{\partial g(c,x)}{\partial x}
484 | = \frac{\partial }{\partial x} \beta v(x - c)
485 | = \beta v^{\prime}(x - c)
486 | ```
487 |
488 | (This argument is an example of the [Envelope Theorem](https://en.wikipedia.org/wiki/Envelope_theorem).)
489 |
490 | But now an application of {eq}`bellman_FOC` gives
491 |
492 | ```{math}
493 | :label: bellman_v_prime
494 |
495 | u^{\prime}(c) = v^{\prime}(x)
496 | ```
497 |
498 | Thus, the derivative of the value function is equal to marginal utility.
499 |
500 | Combining this fact with {eq}`bellman_envelope` recovers the Euler equation.
501 |
502 | ## Exercises
503 |
504 | ```{exercise}
505 | :label: cep_ex1
506 |
507 | How does one obtain the expressions for the value function and optimal policy
508 | given in {eq}`crra_vstar` and {eq}`crra_opt_pol` respectively?
509 |
510 | The first step is to make a guess of the functional form for the consumption
511 | policy.
512 |
513 | So suppose that we do not know the solutions and start with a guess that the
514 | optimal policy is linear.
515 |
516 | In other words, we conjecture that there exists a positive $\theta$ such that setting $c_t^*=\theta x_t$ for all $t$ produces an optimal path.
517 |
518 | Starting from this conjecture, try to obtain the solutions {eq}`crra_vstar` and {eq}`crra_opt_pol`.
519 |
520 | In doing so, you will need to use the definition of the value function and the
521 | Bellman equation.
522 | ```
523 |
524 | ```{solution} cep_ex1
525 | :class: dropdown
526 |
527 | We start with the conjecture $c_t^*=\theta x_t$, which leads to a path
528 | for the state variable (cake size) given by
529 |
530 | $$
531 | x_{t+1}=x_t(1-\theta)
532 | $$
533 |
534 | Then $x_t = x_{0}(1-\theta)^t$ and hence
535 |
536 | $$
537 | \begin{aligned}
538 | v(x_0)
539 | & = \sum_{t=0}^{\infty} \beta^t u(\theta x_t)\\
540 | & = \sum_{t=0}^{\infty} \beta^t u(\theta x_0 (1-\theta)^t ) \\
541 | & = \sum_{t=0}^{\infty} \theta^{1-\gamma} \beta^t (1-\theta)^{t(1-\gamma)} u(x_0) \\
542 | & = \frac{\theta^{1-\gamma}}{1-\beta(1-\theta)^{1-\gamma}}u(x_{0})
543 | \end{aligned}
544 | $$
545 |
546 | From the Bellman equation, then,
547 |
548 | $$
549 | \begin{aligned}
550 | v(x) & = \max_{0\leq c\leq x}
551 | \left\{
552 | u(c) +
553 | \beta\frac{\theta^{1-\gamma}}{1-\beta(1-\theta)^{1-\gamma}}\cdot u(x-c)
554 | \right\} \\
555 | & = \max_{0\leq c\leq x}
556 | \left\{
557 | \frac{c^{1-\gamma}}{1-\gamma} +
558 | \beta\frac{\theta^{1-\gamma}}
559 | {1-\beta(1-\theta)^{1-\gamma}}
560 | \cdot\frac{(x-c)^{1-\gamma}}{1-\gamma}
561 | \right\}
562 | \end{aligned}
563 | $$
564 |
565 | From the first order condition, we obtain
566 |
567 | $$
568 | c^{-\gamma} + \beta\frac{\theta^{1-\gamma}}{1-\beta(1-\theta)^{1-\gamma}}\cdot(x-c)^{-\gamma}(-1) = 0
569 | $$
570 |
571 | or
572 |
573 | $$
574 | c^{-\gamma} = \beta\frac{\theta^{1-\gamma}}{1-\beta(1-\theta)^{1-\gamma}}\cdot(x-c)^{-\gamma}
575 | $$
576 |
577 | With $c = \theta x$ we get
578 |
579 | $$
580 | \left(\theta x\right)^{-\gamma} = \beta\frac{\theta^{1-\gamma}}{1-\beta(1-\theta)^{1-\gamma}}\cdot\left(x(1-\theta)\right)^{-\gamma}
582 | $$
583 |
584 | Some rearrangement produces
585 |
586 | $$
587 | \theta = 1-\beta^{\frac{1}{\gamma}}
588 | $$
589 |
590 | This confirms our earlier expression for the optimal policy:
591 |
592 | $$
593 | c_t^* = \left(1-\beta^{\frac{1}{\gamma}}\right)x_t
594 | $$
595 |
596 | Substituting $\theta$ into the value function above gives
597 |
598 | $$
599 | v^*(x_t) = \frac{\left(1-\beta^{\frac{1}{\gamma}}\right)^{1-\gamma}}
600 | {1-\beta\left(\beta^{\frac{1-\gamma}{\gamma}}\right)} u(x_t)
601 | $$
602 |
603 | Rearranging gives
604 |
605 | $$
606 | v^*(x_t) = \left(1-\beta^\frac{1}{\gamma}\right)^{-\gamma}u(x_t)
607 | $$
608 |
609 | Our claims are now verified.
610 | ```
611 |
--------------------------------------------------------------------------------