├── .gitignore ├── LICENSE.md ├── README.md ├── lecturenotes ├── math.sty ├── notes.sty ├── notes1.pdf ├── notes1.tex ├── notes2.pdf ├── notes2.tex ├── notes3.pdf ├── notes3.tex ├── paper.sty ├── phasediagrams.key └── phasediagrams.pdf └── problemsets ├── math.sty ├── notes.sty ├── paper.sty ├── ps1.pdf ├── ps1.tex ├── ps2.pdf ├── ps2.tex ├── ps3.pdf ├── ps3.tex ├── ps4.pdf └── ps4.tex /.gitignore: -------------------------------------------------------------------------------- 1 | # LaTeX build artifacts 2 | *.aux 3 | *.bbl 4 | *.blg 5 | *.brf 6 | *.fdb_latexmk 7 | *.fls 8 | *.log 9 | *.out 10 | *.synctex.gz 11 | *.toc 12 | *.lof 13 | *.lot 14 | *.nav 15 | *.snm 16 | 17 | 18 | -------------------------------------------------------------------------------- /LICENSE.md: -------------------------------------------------------------------------------- 1 | Attribution 4.0 International 2 | 3 | ======================================================================= 4 | 5 | Creative Commons Corporation ("Creative Commons") is not a law firm and 6 | does not provide legal services or legal advice. Distribution of 7 | Creative Commons public licenses does not create a lawyer-client or 8 | other relationship. Creative Commons makes its licenses and related 9 | information available on an "as-is" basis. Creative Commons gives no 10 | warranties regarding its licenses, any material licensed under their 11 | terms and conditions, or any related information. Creative Commons 12 | disclaims all liability for damages resulting from their use to the 13 | fullest extent possible. 14 | 15 | Using Creative Commons Public Licenses 16 | 17 | Creative Commons public licenses provide a standard set of terms and 18 | conditions that creators and other rights holders may use to share 19 | original works of authorship and other material subject to copyright 20 | and certain other rights specified in the public license below. The 21 | following considerations are for informational purposes only, are not 22 | exhaustive, and do not form part of our licenses. 23 | 24 | Considerations for licensors: Our public licenses are 25 | intended for use by those authorized to give the public 26 | permission to use material in ways otherwise restricted by 27 | copyright and certain other rights. Our licenses are 28 | irrevocable. Licensors should read and understand the terms 29 | and conditions of the license they choose before applying it. 30 | Licensors should also secure all rights necessary before 31 | applying our licenses so that the public can reuse the 32 | material as expected. Licensors should clearly mark any 33 | material not subject to the license. This includes other CC- 34 | licensed material, or material used under an exception or 35 | limitation to copyright. More considerations for licensors: 36 | wiki.creativecommons.org/Considerations_for_licensors 37 | 38 | Considerations for the public: By using one of our public 39 | licenses, a licensor grants the public permission to use the 40 | licensed material under specified terms and conditions. If 41 | the licensor's permission is not necessary for any reason--for 42 | example, because of any applicable exception or limitation to 43 | copyright--then that use is not regulated by the license. Our 44 | licenses grant only permissions under copyright and certain 45 | other rights that a licensor has authority to grant. 
Use of 46 | the licensed material may still be restricted for other 47 | reasons, including because others have copyright or other 48 | rights in the material. A licensor may make special requests, 49 | such as asking that all changes be marked or described. 50 | Although not required by our licenses, you are encouraged to 51 | respect those requests where reasonable. More considerations 52 | for the public: 53 | wiki.creativecommons.org/Considerations_for_licensees 54 | 55 | ======================================================================= 56 | 57 | Creative Commons Attribution 4.0 International Public License 58 | 59 | By exercising the Licensed Rights (defined below), You accept and agree 60 | to be bound by the terms and conditions of this Creative Commons 61 | Attribution 4.0 International Public License ("Public License"). To the 62 | extent this Public License may be interpreted as a contract, You are 63 | granted the Licensed Rights in consideration of Your acceptance of 64 | these terms and conditions, and the Licensor grants You such rights in 65 | consideration of benefits the Licensor receives from making the 66 | Licensed Material available under these terms and conditions. 67 | 68 | 69 | Section 1 -- Definitions. 70 | 71 | a. Adapted Material means material subject to Copyright and Similar 72 | Rights that is derived from or based upon the Licensed Material 73 | and in which the Licensed Material is translated, altered, 74 | arranged, transformed, or otherwise modified in a manner requiring 75 | permission under the Copyright and Similar Rights held by the 76 | Licensor. For purposes of this Public License, where the Licensed 77 | Material is a musical work, performance, or sound recording, 78 | Adapted Material is always produced where the Licensed Material is 79 | synched in timed relation with a moving image. 80 | 81 | b. Adapter's License means the license You apply to Your Copyright 82 | and Similar Rights in Your contributions to Adapted Material in 83 | accordance with the terms and conditions of this Public License. 84 | 85 | c. Copyright and Similar Rights means copyright and/or similar rights 86 | closely related to copyright including, without limitation, 87 | performance, broadcast, sound recording, and Sui Generis Database 88 | Rights, without regard to how the rights are labeled or 89 | categorized. For purposes of this Public License, the rights 90 | specified in Section 2(b)(1)-(2) are not Copyright and Similar 91 | Rights. 92 | 93 | d. Effective Technological Measures means those measures that, in the 94 | absence of proper authority, may not be circumvented under laws 95 | fulfilling obligations under Article 11 of the WIPO Copyright 96 | Treaty adopted on December 20, 1996, and/or similar international 97 | agreements. 98 | 99 | e. Exceptions and Limitations means fair use, fair dealing, and/or 100 | any other exception or limitation to Copyright and Similar Rights 101 | that applies to Your use of the Licensed Material. 102 | 103 | f. Licensed Material means the artistic or literary work, database, 104 | or other material to which the Licensor applied this Public 105 | License. 106 | 107 | g. Licensed Rights means the rights granted to You subject to the 108 | terms and conditions of this Public License, which are limited to 109 | all Copyright and Similar Rights that apply to Your use of the 110 | Licensed Material and that the Licensor has authority to license. 111 | 112 | h. 
Licensor means the individual(s) or entity(ies) granting rights 113 | under this Public License. 114 | 115 | i. Share means to provide material to the public by any means or 116 | process that requires permission under the Licensed Rights, such 117 | as reproduction, public display, public performance, distribution, 118 | dissemination, communication, or importation, and to make material 119 | available to the public including in ways that members of the 120 | public may access the material from a place and at a time 121 | individually chosen by them. 122 | 123 | j. Sui Generis Database Rights means rights other than copyright 124 | resulting from Directive 96/9/EC of the European Parliament and of 125 | the Council of 11 March 1996 on the legal protection of databases, 126 | as amended and/or succeeded, as well as other essentially 127 | equivalent rights anywhere in the world. 128 | 129 | k. You means the individual or entity exercising the Licensed Rights 130 | under this Public License. Your has a corresponding meaning. 131 | 132 | 133 | Section 2 -- Scope. 134 | 135 | a. License grant. 136 | 137 | 1. Subject to the terms and conditions of this Public License, 138 | the Licensor hereby grants You a worldwide, royalty-free, 139 | non-sublicensable, non-exclusive, irrevocable license to 140 | exercise the Licensed Rights in the Licensed Material to: 141 | 142 | a. reproduce and Share the Licensed Material, in whole or 143 | in part; and 144 | 145 | b. produce, reproduce, and Share Adapted Material. 146 | 147 | 2. Exceptions and Limitations. For the avoidance of doubt, where 148 | Exceptions and Limitations apply to Your use, this Public 149 | License does not apply, and You do not need to comply with 150 | its terms and conditions. 151 | 152 | 3. Term. The term of this Public License is specified in Section 153 | 6(a). 154 | 155 | 4. Media and formats; technical modifications allowed. The 156 | Licensor authorizes You to exercise the Licensed Rights in 157 | all media and formats whether now known or hereafter created, 158 | and to make technical modifications necessary to do so. The 159 | Licensor waives and/or agrees not to assert any right or 160 | authority to forbid You from making technical modifications 161 | necessary to exercise the Licensed Rights, including 162 | technical modifications necessary to circumvent Effective 163 | Technological Measures. For purposes of this Public License, 164 | simply making modifications authorized by this Section 2(a) 165 | (4) never produces Adapted Material. 166 | 167 | 5. Downstream recipients. 168 | 169 | a. Offer from the Licensor -- Licensed Material. Every 170 | recipient of the Licensed Material automatically 171 | receives an offer from the Licensor to exercise the 172 | Licensed Rights under the terms and conditions of this 173 | Public License. 174 | 175 | b. No downstream restrictions. You may not offer or impose 176 | any additional or different terms or conditions on, or 177 | apply any Effective Technological Measures to, the 178 | Licensed Material if doing so restricts exercise of the 179 | Licensed Rights by any recipient of the Licensed 180 | Material. 181 | 182 | 6. No endorsement. 
Nothing in this Public License constitutes or 183 | may be construed as permission to assert or imply that You 184 | are, or that Your use of the Licensed Material is, connected 185 | with, or sponsored, endorsed, or granted official status by, 186 | the Licensor or others designated to receive attribution as 187 | provided in Section 3(a)(1)(A)(i). 188 | 189 | b. Other rights. 190 | 191 | 1. Moral rights, such as the right of integrity, are not 192 | licensed under this Public License, nor are publicity, 193 | privacy, and/or other similar personality rights; however, to 194 | the extent possible, the Licensor waives and/or agrees not to 195 | assert any such rights held by the Licensor to the limited 196 | extent necessary to allow You to exercise the Licensed 197 | Rights, but not otherwise. 198 | 199 | 2. Patent and trademark rights are not licensed under this 200 | Public License. 201 | 202 | 3. To the extent possible, the Licensor waives any right to 203 | collect royalties from You for the exercise of the Licensed 204 | Rights, whether directly or through a collecting society 205 | under any voluntary or waivable statutory or compulsory 206 | licensing scheme. In all other cases the Licensor expressly 207 | reserves any right to collect such royalties. 208 | 209 | 210 | Section 3 -- License Conditions. 211 | 212 | Your exercise of the Licensed Rights is expressly made subject to the 213 | following conditions. 214 | 215 | a. Attribution. 216 | 217 | 1. If You Share the Licensed Material (including in modified 218 | form), You must: 219 | 220 | a. retain the following if it is supplied by the Licensor 221 | with the Licensed Material: 222 | 223 | i. identification of the creator(s) of the Licensed 224 | Material and any others designated to receive 225 | attribution, in any reasonable manner requested by 226 | the Licensor (including by pseudonym if 227 | designated); 228 | 229 | ii. a copyright notice; 230 | 231 | iii. a notice that refers to this Public License; 232 | 233 | iv. a notice that refers to the disclaimer of 234 | warranties; 235 | 236 | v. a URI or hyperlink to the Licensed Material to the 237 | extent reasonably practicable; 238 | 239 | b. indicate if You modified the Licensed Material and 240 | retain an indication of any previous modifications; and 241 | 242 | c. indicate the Licensed Material is licensed under this 243 | Public License, and include the text of, or the URI or 244 | hyperlink to, this Public License. 245 | 246 | 2. You may satisfy the conditions in Section 3(a)(1) in any 247 | reasonable manner based on the medium, means, and context in 248 | which You Share the Licensed Material. For example, it may be 249 | reasonable to satisfy the conditions by providing a URI or 250 | hyperlink to a resource that includes the required 251 | information. 252 | 253 | 3. If requested by the Licensor, You must remove any of the 254 | information required by Section 3(a)(1)(A) to the extent 255 | reasonably practicable. 256 | 257 | 4. If You Share Adapted Material You produce, the Adapter's 258 | License You apply must not prevent recipients of the Adapted 259 | Material from complying with this Public License. 260 | 261 | 262 | Section 4 -- Sui Generis Database Rights. 263 | 264 | Where the Licensed Rights include Sui Generis Database Rights that 265 | apply to Your use of the Licensed Material: 266 | 267 | a. 
for the avoidance of doubt, Section 2(a)(1) grants You the right 268 | to extract, reuse, reproduce, and Share all or a substantial 269 | portion of the contents of the database; 270 | 271 | b. if You include all or a substantial portion of the database 272 | contents in a database in which You have Sui Generis Database 273 | Rights, then the database in which You have Sui Generis Database 274 | Rights (but not its individual contents) is Adapted Material; and 275 | 276 | c. You must comply with the conditions in Section 3(a) if You Share 277 | all or a substantial portion of the contents of the database. 278 | 279 | For the avoidance of doubt, this Section 4 supplements and does not 280 | replace Your obligations under this Public License where the Licensed 281 | Rights include other Copyright and Similar Rights. 282 | 283 | 284 | Section 5 -- Disclaimer of Warranties and Limitation of Liability. 285 | 286 | a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE 287 | EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS 288 | AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF 289 | ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS, 290 | IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION, 291 | WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR 292 | PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS, 293 | ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT 294 | KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT 295 | ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU. 296 | 297 | b. TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE 298 | TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION, 299 | NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT, 300 | INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES, 301 | COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR 302 | USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN 303 | ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR 304 | DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR 305 | IN PART, THIS LIMITATION MAY NOT APPLY TO YOU. 306 | 307 | c. The disclaimer of warranties and limitation of liability provided 308 | above shall be interpreted in a manner that, to the extent 309 | possible, most closely approximates an absolute disclaimer and 310 | waiver of all liability. 311 | 312 | 313 | Section 6 -- Term and Termination. 314 | 315 | a. This Public License applies for the term of the Copyright and 316 | Similar Rights licensed here. However, if You fail to comply with 317 | this Public License, then Your rights under this Public License 318 | terminate automatically. 319 | 320 | b. Where Your right to use the Licensed Material has terminated under 321 | Section 6(a), it reinstates: 322 | 323 | 1. automatically as of the date the violation is cured, provided 324 | it is cured within 30 days of Your discovery of the 325 | violation; or 326 | 327 | 2. upon express reinstatement by the Licensor. 328 | 329 | For the avoidance of doubt, this Section 6(b) does not affect any 330 | right the Licensor may have to seek remedies for Your violations 331 | of this Public License. 332 | 333 | c. For the avoidance of doubt, the Licensor may also offer the 334 | Licensed Material under separate terms or conditions or stop 335 | distributing the Licensed Material at any time; however, doing so 336 | will not terminate this Public License. 
337 | 338 | d. Sections 1, 5, 6, 7, and 8 survive termination of this Public 339 | License. 340 | 341 | 342 | Section 7 -- Other Terms and Conditions. 343 | 344 | a. The Licensor shall not be bound by any additional or different 345 | terms or conditions communicated by You unless expressly agreed. 346 | 347 | b. Any arrangements, understandings, or agreements regarding the 348 | Licensed Material not stated herein are separate from and 349 | independent of the terms and conditions of this Public License. 350 | 351 | 352 | Section 8 -- Interpretation. 353 | 354 | a. For the avoidance of doubt, this Public License does not, and 355 | shall not be interpreted to, reduce, limit, restrict, or impose 356 | conditions on any use of the Licensed Material that could lawfully 357 | be made without permission under this Public License. 358 | 359 | b. To the extent possible, if any provision of this Public License is 360 | deemed unenforceable, it shall be automatically reformed to the 361 | minimum extent necessary to make it enforceable. If the provision 362 | cannot be reformed, it shall be severed from this Public License 363 | without affecting the enforceability of the remaining terms and 364 | conditions. 365 | 366 | c. No term or condition of this Public License will be waived and no 367 | failure to comply consented to unless expressly agreed to by the 368 | Licensor. 369 | 370 | d. Nothing in this Public License constitutes or may be interpreted 371 | as a limitation upon, or waiver of, any privileges and immunities 372 | that apply to the Licensor or You, including from the legal 373 | processes of any jurisdiction or authority. 374 | 375 | 376 | ======================================================================= 377 | 378 | Creative Commons is not a party to its public 379 | licenses. Notwithstanding, Creative Commons may elect to apply one of 380 | its public licenses to material it publishes and in those instances 381 | will be considered the “Licensor.” The text of the Creative Commons 382 | public licenses is dedicated to the public domain under the CC0 Public 383 | Domain Dedication. Except for the limited purpose of indicating that 384 | material is shared under a Creative Commons public license or as 385 | otherwise permitted by the Creative Commons policies published at 386 | creativecommons.org/policies, Creative Commons does not authorize the 387 | use of the trademark "Creative Commons" or any other trademark or logo 388 | of Creative Commons without its prior written consent including, 389 | without limitation, in connection with any unauthorized modifications 390 | to any of its public licenses or any other arrangements, 391 | understandings, or agreements concerning use of licensed material. For 392 | the avoidance of doubt, this paragraph does not form part of the 393 | public licenses. 394 | 395 | Creative Commons may be contacted at creativecommons.org. -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Mathematics for Macroeconomics: Source Files 2 | 3 | This repository contains the source files for the course "Mathematics for Macroeconomics", developed by [Pascal Michaillat](https://pascalmichaillat.org/), and taught at the [London School of Economics & Political Science](https://www.lse.ac.uk). 4 | 5 | ## Course webpage 6 | 7 | The course materials are available at https://pascalmichaillat.org/x/. 
8 | 9 | ## Lecture notes 10 | 11 | The source files for the lecture notes are located in the `lecturenotes` folder. The lecture notes are written in LaTeX and compiled to PDF using pdfTeX: 12 | 13 | + `notes1.tex`, `notes1.pdf` - Lecture notes on dynamic programming 14 | + `notes2.tex`, `notes2.pdf` - Lecture notes on optimal control 15 | + `notes3.tex`, `notes3.pdf` - Lecture notes on differential equations (`phasediagrams.key` and `phasediagrams.pdf` contain the phase diagrams used in `notes3.tex`) 16 | 17 | ## Problem sets 18 | 19 | The source files for the problem sets are located in the `problemsets` folder. The problem sets are written in LaTeX and compiled to PDF using pdfTeX: 20 | 21 | + `ps1.tex`, `ps1.pdf` - Problem set on dynamic programming 22 | + `ps2.tex`, `ps2.pdf` - Problem set on optimal control 23 | + `ps3.tex`, `ps3.pdf` - Problem set on differential equations 24 | + `ps4.tex`, `ps4.pdf` - Cumulative problem set covering all three topics 25 | 26 | Solutions to the problem sets are available to instructors [upon request](mailto:pascal.michaillat@gmail.com). 27 | 28 | ## Style files 29 | 30 | The folders also contain the LaTeX style files used to format the lecture notes and problem sets: 31 | 32 | + `paper.sty` - [Formatting for academic papers](https://github.com/pmichaillat/latex-paper) 33 | + `notes.sty` - Formatting adjustments for lecture notes 34 | + `math.sty` - [Commands to write math](https://github.com/pmichaillat/latex-math) 35 | 36 | ## License 37 | 38 | This repository is licensed under the [Creative Commons Attribution 4.0 International License](http://creativecommons.org/licenses/by/4.0/). 39 | -------------------------------------------------------------------------------- /lecturenotes/math.sty: -------------------------------------------------------------------------------- 1 | % ---------- Brackets ---------- 2 | 3 | \newcommand{\bc}[1]{\left\lbrace #1 \right\rbrace} 4 | \newcommand{\bp}[1]{\left( #1 \right)} 5 | \newcommand{\bs}[1]{\left[ #1 \right]} 6 | \newcommand{\of}[1]{{\left( #1 \right)}} % Parentheses without surrounding space, for function arguments 7 | \newcommand{\abs}[1]{\left\lvert #1 \right\rvert} 8 | \newcommand{\norm}[1]{\left\lVert #1 \right\rVert} 9 | \newcommand{\floor}[1]{\left\lfloor #1 \right\rfloor} 10 | 11 | % ---------- Accents ---------- 12 | 13 | \newcommand{\ol}[1]{\overline{#1}} 14 | \newcommand{\ul}[1]{\underline{#1}} 15 | \newcommand{\wh}[1]{\widehat{#1}} 16 | \newcommand{\wt}[1]{\widetilde{#1}} 17 | 18 | % ---------- Operators ---------- 19 | 20 | \usepackage{xparse} 21 | 22 | % Natural log operator: 23 | % * \ln produces ln 24 | % * \ln{x} produces ln(x) 25 | \let\oldln\ln 26 | \RenewDocumentCommand{\ln}{g}{\IfNoValueTF{#1}{\oldln}{\,{\oldln}{\bp{#1}}}} 27 | 28 | % Exponential operator: 29 | % * \exp produces exp 30 | % * \exp{x} produces exp(x) 31 | \let\oldexp\exp 32 | \RenewDocumentCommand{\exp}{g}{\IfNoValueTF{#1}{\oldexp}{\,{\oldexp}{\bp{#1}}}} 33 | 34 | % Max operator: 35 | % * \max produces max 36 | % * \max[x] produces max_x 37 | % * \max{y} produces max{y} 38 | % * \max[x]{y} produces max_x{y} 39 | \let\oldmax\max 40 | \RenewDocumentCommand{\max}{o g}{% 41 | \IfNoValueTF{#2}{\oldmax\IfValueT{#1}{_{#1}}}% 42 | {\,{\oldmax\IfValueT{#1}{_{#1}}}{\bc{#2}}}} 43 | 44 | % Min operator: 45 | % * \min produces min 46 | % * \min[x] produces min_x 47 | % * \min{y} produces min{y} 48 | % * \min[x]{y} produces min_x{y} 49 | \let\oldmin\min 50 | \RenewDocumentCommand{\min}{o g}{% 51 |
\IfNoValueTF{#2}{\oldmin\IfValueT{#1}{_{#1}}}% 52 | {\,{\oldmin\IfValueT{#1}{_{#1}}}{\bc{#2}}}} 53 | 54 | % Expectation operator: 55 | % * \E produces E 56 | % * \E[x] produces E_x 57 | % * \E{Y} produces E(Y) 58 | % * \E[x]{Y} produces E_x(Y) 59 | \NewDocumentCommand{\E}{o g}{% 60 | \IfNoValueTF{#2}{\operatorname{\mathbb{E}}\IfValueT{#1}{_{#1}}}% 61 | {\,\mathbb{E}\IfValueT{#1}{_{#1}}{\bp{#2}}}} 62 | 63 | % Probability operator: 64 | % * \P produces P 65 | % * \P[x] produces P_x 66 | % * \P{Y} produces P(Y) 67 | % * \P[x]{Y} produces P_x(Y) 68 | \RenewDocumentCommand{\P}{o g}{% 69 | \IfNoValueTF{#2}{\operatorname{\mathbb{P}}\IfValueT{#1}{_{#1}}}% 70 | {\,\mathbb{P}\IfValueT{#1}{_{#1}}{\bp{#2}}}} 71 | 72 | % Indicator operator: 73 | % * \ind produces 1 74 | % * \ind{Y} produces 1(Y) 75 | \NewDocumentCommand{\ind}{g}{% 76 | \IfNoValueTF{#1}{\operatorname{\mathbb{1}}}% 77 | {\,\mathbb{1}{\bp{#1}}}} 78 | 79 | % Trace operator: 80 | % * \tr produces tr 81 | % * \tr{Y} produces tr(Y) 82 | \NewDocumentCommand{\tr}{g}{% 83 | \IfNoValueTF{#1}{\operatorname{tr}}% 84 | {\,{\operatorname{tr}}{\bp{#1}}}} 85 | 86 | % Variance operator: 87 | % * \var produces var 88 | % * \var{Y} produces var(Y) 89 | \NewDocumentCommand{\var}{g}{% 90 | \IfNoValueTF{#1}{\operatorname{var}}% 91 | {\,{\operatorname{var}}{\bp{#1}}}} 92 | 93 | % Covariance operator: 94 | % * \cov produces cov 95 | % * \cov{Y} produces cov(Y) 96 | \NewDocumentCommand{\cov}{g}{% 97 | \IfNoValueTF{#1}{\operatorname{cov}}% 98 | {\,{\operatorname{cov}}{\bp{#1}}}} 99 | 100 | % Correlation operator: 101 | % * \corr produces corr 102 | % * \corr{Y} produces corr(Y) 103 | \NewDocumentCommand{\corr}{g}{% 104 | \IfNoValueTF{#1}{\operatorname{corr}}% 105 | {\,{\operatorname{corr}}{\bp{#1}}}} 106 | 107 | % Standard deviation operator: 108 | % * \sd produces sd 109 | % * \sd{Y} produces sd(Y) 110 | \NewDocumentCommand{\sd}{g}{% 111 | \IfNoValueTF{#1}{\operatorname{sd}}% 112 | {\,{\operatorname{sd}}{\bp{#1}}}} 113 | 114 | % Standard error operator: 115 | % * \se produces se 116 | % * \se{Y} produces se(Y) 117 | \NewDocumentCommand{\se}{g}{% 118 | \IfNoValueTF{#1}{\operatorname{se}}% 119 | {\,{\operatorname{se}}{\bp{#1}}}} 120 | 121 | \DeclareMathOperator*{\argmax}{argmax} 122 | \DeclareMathOperator*{\argmin}{argmin} 123 | \DeclareMathOperator*{\ess}{ess} 124 | \renewcommand{\Re}{\operatorname{Re}} 125 | \renewcommand{\Im}{\operatorname{Im}} 126 | \newcommand{\iid}{\mathbin{\overset{iid}{\sim}}} 127 | \newcommand{\as}{\mathbin{\overset{as}{\to}}} 128 | 129 | % ---------- Derivatives ---------- 130 | 131 | \newcommand{\pd}[2]{\frac{\partial #1}{\partial #2}} 132 | \newcommand{\pdx}[2]{\partial #1/\partial #2} 133 | \newcommand{\od}[2]{\frac{d #1}{d #2}} 134 | \newcommand{\odx}[2]{d #1/d #2} 135 | \newcommand{\pdl}[2]{\frac{\partial\ln{#1}}{\partial\ln{#2}}} 136 | \newcommand{\pdlx}[2]{\partial\ln(#1)/\partial\ln(#2)} 137 | \newcommand{\odl}[2]{\frac{d\ln{#1}}{d\ln{#2}}} 138 | \newcommand{\odlx}[2]{d\ln(#1)/d\ln(#2)} 139 | \newcommand{\pdw}[3]{\left.\frac{\partial #1}{\partial #2}\right\vert_{#3}} 140 | \newcommand{\pdwx}[3]{\left.\partial #1/\partial #2\right\vert_{#3}} 141 | 142 | % ---------- Blackboard letters ---------- 143 | 144 | \def\R{\mathbb{R}} 145 | \def\N{\mathbb{N}} 146 | \def\Z{\mathbb{Z}} 147 | \def\Q{\mathbb{Q}} 148 | \def\C{\mathbb{C}} 149 | \def\I{\mathbb{I}} 150 | 151 | % ---------- Greek letters ---------- 152 | 153 | \def\a{\alpha} 154 | \def\b{\beta} 155 | \def\c{\chi} 156 | \def\d{\delta} 157 | \def\D{\Delta} 158 | \def\e{\epsilon} 159 
| \def\f{\phi} 160 | \def\vf{\varphi} 161 | \def\F{\Phi} 162 | \def\g{\gamma} 163 | \def\G{\Gamma} 164 | \def\h{\eta} 165 | \def\k{\kappa} 166 | \def\l{\lambda} 167 | \def\L{\Lambda} 168 | \def\m{\mu} 169 | \def\n{\nu} 170 | \def\o{\omega} 171 | \def\O{\Omega} 172 | \def\vp{\varpi} 173 | \def\p{\psi} 174 | \def\r{\rho} 175 | \def\s{\sigma} 176 | \def\vs{\varsigma} 177 | \def\S{\Sigma} 178 | \def\t{\theta} 179 | \def\T{\Theta} 180 | \def\vt{\vartheta} 181 | \def\x{\xi} 182 | \def\X{\Xi} 183 | \def\z{\zeta} 184 | 185 | % ---------- Caligraphic letters ---------- 186 | 187 | \def\Ac{\mathcal{A}} 188 | \def\Bc{\mathcal{B}} 189 | \def\Cc{\mathcal{C}} 190 | \def\Dc{\mathcal{D}} 191 | \def\Ec{\mathcal{E}} 192 | \def\Fc{\mathcal{F}} 193 | \def\Gc{\mathcal{G}} 194 | \def\Hc{\mathcal{H}} 195 | \def\Ic{\mathcal{I}} 196 | \def\Jc{\mathcal{J}} 197 | \def\Kc{\mathcal{K}} 198 | \def\Lc{\mathcal{L}} 199 | \def\Mc{\mathcal{M}} 200 | \def\Nc{\mathcal{N}} 201 | \def\Oc{\mathcal{O}} 202 | \def\Pc{\mathcal{P}} 203 | \def\Qc{\mathcal{Q}} 204 | \def\Rc{\mathcal{R}} 205 | \def\Sc{\mathcal{S}} 206 | \def\Tc{\mathcal{T}} 207 | \def\Uc{\mathcal{U}} 208 | \def\Vc{\mathcal{V}} 209 | \def\Wc{\mathcal{W}} 210 | \def\Xc{\mathcal{X}} 211 | \def\Yc{\mathcal{Y}} 212 | \def\Zc{\mathcal{Z}} -------------------------------------------------------------------------------- /lecturenotes/notes.sty: -------------------------------------------------------------------------------- 1 | % ---------- General typography ---------- 2 | 3 | \setlist[itemize,1]{leftmargin=0pt,label=\color{gray}{\upshape\textbullet}} 4 | \setlist[itemize,2]{leftmargin=0pt,label=\color{gray}{\upshape\textendash}} 5 | \setlist[enumerate,1]{leftmargin=0pt,label=\upshape\Alph*.} 6 | 7 | % ---------- Title page ---------- 8 | 9 | \usepackage[subfigure]{tocloft} 10 | \renewcommand{\contentsname}{} 11 | \renewcommand{\cftsecfont}{\normalfont} 12 | \renewcommand{\cftsecpagefont}{\normalfont} 13 | \renewcommand{\cftsecaftersnum}{.} 14 | \renewcommand{\cftsubsecaftersnum}{.} 15 | \renewcommand{\cftdotsep}{10} 16 | \setlength{\cftbeforesecskip}{0em} 17 | 18 | % ---------- Headings ---------- 19 | 20 | \newcommand{\sectionbreak}{\clearpage} -------------------------------------------------------------------------------- /lecturenotes/notes1.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pmichaillat/math-for-macro/6d703cadd8da24bb73edcf6f1b6d3a2971bb405d/lecturenotes/notes1.pdf -------------------------------------------------------------------------------- /lecturenotes/notes1.tex: -------------------------------------------------------------------------------- 1 | \documentclass[letterpaper,12pt,leqno]{article} 2 | \usepackage{paper,math,notes} 3 | \available{https://pascalmichaillat.org/x/} 4 | \hypersetup{pdftitle={Dynamic Programming}} 5 | 6 | \begin{document} 7 | 8 | \title{Dynamic Programming} 9 | \author{Pascal Michaillat} 10 | \date{} 11 | 12 | \begin{titlepage} 13 | \maketitle 14 | \tableofcontents 15 | \end{titlepage} 16 | 17 | \section{Simple Deterministic Problem}\label{sec:deterministic} 18 | 19 | This section introduces the key concepts of dynamic programming in a deterministic problem (a problem without randomness). Section~\ref{sec:stochastic} shows how the key concepts apply to a stochastic problem. Section~\ref{sec:theory} proposes a more formal treatment of dynamic programming.
20 | 21 | \subsection{Consumption-Saving Problem}\label{subsec:PROBLEM} 22 | 23 | You start life with wealth $a_0>0$. Each period $t=0,1,\ldots,+\infty$, you consume a quantity $c_t\geq 0$ of your wealth, which provides utility $u(c_t)$. You choose consumption to maximize your lifetime utility 24 | \begin{equation*} 25 | \sum_{t=0}^{+\infty}\b^t \cdot u(c_t), 26 | \end{equation*} 27 | where $\b\in[0,1)$ is the discount factor. Assume that $\lim_{c\to 0} u'(c)=+\infty$, which implies that you choose to consume at least some of your wealth $c_{t}>0$ each period. You save the amount of wealth that you do not consume. The wealth at the beginning of period $t$ is $a_t$. Wealth is invested at a constant interest rate $r$, paid at the beginning of period $t$; hence, wealth evolves according to the law of motion\[a_{t+1}=(1+r)\cdot a_t - c_t.\] 28 | 29 | \subsection{Direct Approach} 30 | 31 | Take wealth $a_0>0$ at $t=0$ as given. The direct approach is to choose two sequences of variables $\bc{c_t}_{t=0}^{+\infty}$ and $\bc{a_t}_{t=0}^{+\infty}$ to maximize 32 | \begin{equation*} 33 | \sum_{t=0}^{+\infty}\b^t \cdot u(c_t) 34 | \end{equation*} 35 | subject to the constraints that for all $t\geq 0$, 36 | \begin{align} 37 | a_{t+1}&=(1+r)\cdot a_{t}-c_{t}\label{eq:constraint1}\\ 38 | a_{t+1}&\geq 0.\label{eq:constraint2} 39 | \end{align} 40 | To find the two sequences $\bc{c_t}_{t=0}^{+\infty}$ and $\bc{a_t}_{t=0}^{+\infty}$, we write down the Lagrangian associated with the problem: 41 | \begin{align*} 42 | \Lc=\sum_{t=0}^{+\infty}\b^t \cdot\bc{u(c_t)-\l_{t}\cdot \bs{a_{t+1}-(1+r)\cdot a_{t}+c_{t}}-\mu_{t}\cdot a_{t+1}}, 43 | \end{align*} 44 | where $\bc{\l_t}_{t=0}^{+\infty}$ and $\bc{\mu_t}_{t=0}^{+\infty}$ are the sequences of Lagrange multipliers associated with the sequences of constraints~\eqref{eq:constraint1} and~\eqref{eq:constraint2}. For all $t\geq 0$, the first-order condition of the problem with respect to $c_{t}$ is 45 | \begin{align*} 46 | \pd{\Lc}{c_{t}}&=\od{u}{c}(c_{t})- \l_{t}=0, 47 | \end{align*} 48 | which yields a first optimality condition: 49 | \[\od{u}{c}(c_{t})=\l_{t}.\] 50 | For all $t\geq 0$, the first-order condition of the problem with respect to $a_{t+1}$ is 51 | \begin{align*} 52 | \pd{\Lc}{a_{t+1}}&=-\l_{t}-\mu_{t}+ \b\cdot(1+r)\cdot\l_{t+1} =0, 53 | \end{align*} 54 | which yields a second optimality condition: 55 | \[\l_{t}+\mu_{t}=(1+r)\cdot \b\cdot\l_{t+1}.\] 56 | In addition, the complementary slackness conditions impose that for all $t\geq 0$, $\mu_{t}\geq 0$ and 57 | \[\mu_{t}\cdot a_{t+1}=0.\] 58 | 59 | These optimality conditions and the complementary slackness conditions are necessary and sufficient if the problem is well behaved. Since $c_{t}\geq 0$, the wealth $a_{t+1}$ never falls to zero because if $a_{T}=0$, then $c_{t}=0$ for all $t\geq T$, which cannot be optimal because $u'(c_{t})=+\infty$ for all $t\geq T$. Hence for all $t\geq 0$, $a_{t+1}>0$ and $\mu_{t}=0$. We infer that 60 | \[\l_t=(1+r)\cdot \b\cdot\l_{t+1}\] 61 | for all $t\geq 0$, so that consumption $c_{t}$ satisfies the following intertemporal condition for all $t\geq 0$: 62 | \begin{align} 63 | \od{u}{c}(c_{t})= \b\cdot(1+r)\cdot \od{u}{c}(c_{t+1}).\label{eq:eulerdp} 64 | \end{align} 65 | This intertemporal condition characterizes the optimal consumption path. It is called the \textit{Euler equation}. For all $t\geq 0$, wealth $a_{t}$ satisfies 66 | \begin{align*} 67 | a_{t+1}=(1+r)\cdot a_{t}-c_{t}.
68 | \end{align*} 69 | 70 | \subsection{Dynamic-Programming Approach} 71 | 72 | Instead of looking for two infinite sequences $\bc{c_t}_{t=0}^{+\infty}$ and $\bc{a_t}_{t=0}^{+\infty}$, dynamic programming is looking for a time-invariant \textit{policy function} $h$ mapping wealth at the beginning of period $t$, $a_t$, into optimal consumption in period $t$, $c_t$, such that the sequence $\bc{c_t}_{t=0}^{+\infty}$ generated by iterating 73 | \begin{align} 74 | c_t&=h(a_t)\label{eq:iter1}\\ 75 | a_{t+1}&=(1+r)\cdot a_t-c_t,\label{eq:iter2} 76 | \end{align} 77 | starting from initial wealth $a_0$ solves the consumption-saving problem. Finding the policy function allows us to determine recursively the optimal sequence of consumption $\bc{c_t}_{t=0}^{+\infty}$. 78 | 79 | Why do we want to find a policy function $h$ instead of an infinite sequence $\bc{c_t}_{t=0}^{+\infty}$? It is unclear that finding a function is easier than finding an infinite sequence. But it turns out that dynamic programming has three desirable properties: 80 | \begin{enumerate} 81 | \item Sometimes, dynamic programming allows us to find closed-form solutions for the policy function $h$. 82 | \item Sometimes, dynamic programming allows us to characterize theoretical properties of the policy function $h$. 83 | \item Various numerical methods are available to solve dynamic programs. 84 | \end{enumerate} 85 | 86 | \subsection{Value Function} 87 | 88 | To determine the policy function $h$ and solve this problem, we first need to solve for an auxiliary function that we call \textit{value function}. The value function $V(a)$ measures the optimal lifetime utility from consumption, starting with an initial wealth $a$. The value function is defined by 89 | \begin{equation} 90 | V(a)\equiv \max[\bc{c_t,a_{t+1}}_{t=0}^{+\infty}] \sum_{t=0}^{+\infty}\b^t \cdot u(c_t)\label{eq:value} 91 | \end{equation} 92 | subject to for all $t\geq 0$ 93 | \begin{align*} 94 | a_{0}&= a\\ 95 | a_{t+1}&=(1+r)\cdot a_t-c_t\\ 96 | a_{t+1}&\geq 0. 97 | \end{align*} 98 | 99 | To determine the value function, we use a theorem that says that the value function $V$ is the solution to a functional equation called the \textit{Bellman equation}: 100 | \begin{align} 101 | V(a)=\max[c\in[0,(1+r)\cdot a]] u(c)+\b \cdot V((1+r)\cdot a-c).\label{eq:recur} 102 | \end{align} 103 | 104 | Not all optimization problems can be represented with a Bellman equation. The theorem applies only if the problem satisfies the Principle of Optimality. If a problem satisfies the Principle of Optimality, we say that it has a recursive structure. Section~\ref{sec:theory} characterizes problems with a recursive structure. For instance, the consumption-saving problem has a recursive structure. 105 | 106 | To understand where the Bellman equation comes from and what it means, we manipulate the value function. The value function can be expressed as 107 | \begin{align*} 108 | V(a_0)=&\max[\{0\leq c_t\leq (1+r)\cdot a_t\}_{t=0}^{+\infty}] \sum_{t=0}^{+\infty}\b^t \cdot u(c_t), 109 | \end{align*} 110 | subject to for all $t\geq 0$ 111 | \begin{align*} 112 | a_{t+1}&=(1+r)\cdot a_t-c_t. 113 | \end{align*} 114 | Eliminating $c_t$, we rewrite the value function as 115 | \begin{align*} 116 | V(a_0)=\max[\{0\leq a_{t+1}\leq (1+r)\cdot a_t\}_{t=0}^{+\infty}] \sum_{t=0}^{+\infty}\b^t \cdot u((1+r)\cdot a_t-a_{t+1}). 
117 | \end{align*} 118 | Separating the first term in the utility function from the rest of the sum, we obtain 119 | \begin{align*} 120 | V(a_0)=\max[\{0\leq a_{t+1}\leq (1+r)\cdot a_t\}_{t=0}^{+\infty}] \bc{u((1+r)\cdot a_0-a_1)+\b \cdot \sum_{t=1}^{+\infty}\b^{t-1} \cdot u((1+r)\cdot a_t-a_{t+1})}. 121 | \end{align*} 122 | We re-index the terms in the sum: 123 | \begin{align*} 124 | V(a_0)=&\max[\{0\leq a_{t+1}\leq (1+r)\cdot a_t\}_{t=0}^{+\infty}] \bc{u((1+r)\cdot a_0-a_1)+\b \cdot \sum_{t=0}^{+\infty}\b^{t} \cdot u((1+r)\cdot a_{t+1}-a_{t+2})}. 125 | \end{align*} 126 | We separate the maximization process in two stages: choose consumption in period 0 given wealth in period 0, and choose all future consumption given wealth in period 1. Thus, we rewrite the value function as 127 | \begin{align*} 128 | V(a_0)=&\max_{0\leq a_{1}\leq (1+r)\cdot a_0}\bc{u((1+r)\cdot a_0-a_1)+\b\max_{\{0\leq a_{t+2}\leq a_{t+1}\}_{t=0}^{+\infty}}\sum_{t=0}^{+\infty}\b^{t} \cdot u((1+r)\cdot a_{t+1}-a_{t+2})}. 129 | \end{align*} 130 | By definition, the second term is exactly the value function $V(a_{1})$ so we can simplify the equation to 131 | \begin{align*} 132 | V(a_0)=&\max_{0\leq a_{1}\leq (1+r)\cdot a_0}\bc{u((1+r)\cdot a_0-a_1)+\b \cdot V(a_1)}. 133 | \end{align*} 134 | Since $c_{0}=(1+r)\cdot a_0-a_1$, we obtain 135 | \begin{align*} 136 | V(a_0)=&\max_{0\leq c_{0}\leq (1+r)\cdot a_0}\bc{u(c_0)+\b \cdot V((1+r)\cdot a_0-c_0)}. 137 | \end{align*} 138 | This last equation is the Bellman equation. For any problem with a recursive structure, we can apply this procedure and obtain a Bellman equation. 139 | 140 | \subsection{Policy Function} 141 | 142 | With the definition proposed in equation \eqref{eq:recur}, we gave a recursive formulation to our optimization problem. Once we have determined the value function $V(a)$ for all $a$, we can easily solve for the optimal consumption level 143 | \begin{equation*} 144 | c^*=\argmax_{c\in[0,(1+r)\cdot a]}u(c)+\b \cdot V((1+r)\cdot a-c). 145 | \end{equation*} 146 | $c^*$ is a function of initial wealth $a$. We define the policy function $h$ by 147 | \begin{equation*} 148 | h(a)\equiv c^*. 149 | \end{equation*} 150 | The policy function provides a mapping from state to actions. It tells us how much should be consumed in the current period if initial wealth is $a$. The policy function allows us to determine the optimal path of consumption by iterating equations~\eqref{eq:iter1} and~\eqref{eq:iter2}. Therefore, it allows us to solve the optimization problem defined in section~\ref{subsec:PROBLEM}. 
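To make this concrete, here is a minimal numerical sketch of the iteration of equations~\eqref{eq:iter1} and~\eqref{eq:iter2}. The sketch is written in Python with illustrative parameter values; it uses the log-utility policy function $h(a)=(1-\b)\cdot(1+r)\cdot a$ derived later in Section~\ref{guess}, and it is meant only to show how a policy function generates the entire consumption path.
\begin{verbatim}
# Simulate the optimal path by iterating c_t = h(a_t) and
# a_{t+1} = (1+r)*a_t - c_t, starting from initial wealth a0.
# Parameter values are illustrative; h is the log-utility policy.
beta, r, a0, T = 0.95, 0.04, 1.0, 10

def h(a):
    return (1 - beta) * (1 + r) * a   # policy function from the closed-form solution

a = a0
for t in range(T):
    c = h(a)                 # consumption prescribed by the policy function
    a = (1 + r) * a - c      # law of motion for wealth
    print(t, round(c, 4), round(a, 4))
\end{verbatim}
Each period, consumption is the fraction $1-\b$ of invested wealth $(1+r)\cdot a$, so wealth evolves as $a_{t+1}=\b\cdot(1+r)\cdot a_t$.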
151 | 152 | To solve the optimization problem defined in section~\ref{subsec:PROBLEM} using dynamic programming, we proceed in five steps: 153 | \begin{enumerate} 154 | \item Write down the Bellman equation 155 | \item Write down the first-order conditions of the optimization program 156 | \item Write down the Benveniste-Scheinkman equation to determine the derivative of the value function with respect to wealth, $a_{t}$ 157 | \item Apply the Benveniste-Scheinkman equation to next period's wealth, $a_{t+1}$, and plug into the first-order 158 | conditions 159 | \item Derive the Euler equation, which summarizes the optimal intertemporal behavior of consumption, $c_{t}$ 160 | \end{enumerate} 161 | 162 | 163 | \subsection{Step 1: Bellman Equation} 164 | 165 | Thanks to the recursive structure of the consumption-saving problem, the value function $V$ satisfies the Bellman equation: 166 | \begin{align} 167 | V(a)=\max_{c\in[0,(1+r)\cdot a]}u(c)+\b \cdot V((1+r)\cdot a-c)\label{eq:bellman}. 168 | \end{align} 169 | 170 | \begin{itemize} 171 | \item Equation \eqref{eq:bellman} is a \textit{functional equation}: the unknown is the function $V$ itself. We assume for now that a solution to this equation does exist. 172 | \item $a$ is a \textit{state variable}. It summarizes completely the information from the past that is needed to solve the forward-looking optimization problem. 173 | \item $c$ is a \textit{control variable}. It is the variable to be chosen in the current period. It determines the value $a'$ of the state variable next period according to the \textit{transition equation} 174 | \begin{align*} 175 | a'=(1+r)\cdot a-c. 176 | \end{align*} 177 | \item Notation: next period's variables are denoted using a ``prime''. So next period's consumption is $c'$ and next period's wealth is $a'$. 178 | \end{itemize} 179 | 180 | \subsection{Step 1': Rewrite the Bellman Equation} 181 | 182 | We rewrite the Bellman equation to get rid of the state variable, $a$, in the term $\b \cdot V((1+r)\cdot a-c)$. To do so, we use tomorrow's state variable, $a'$, instead of today's consumption, $c$, as a control variable. We can substitute $a'$ for $c$ as a control variable because $a'$ and $c$ are directly related by $a'=(1+r)\cdot a-c$; given $a$, choosing $a'$ is equivalent to choosing $c$. 183 | 184 | Substituting $a'$ for $c$, the Bellman equation becomes 185 | \begin{equation} 186 | V(a)=\max_{a'\in[0,(1+r)\cdot a]} u((1+r)\cdot a-a')+\b \cdot V(a').\label{eq:recursive} 187 | \end{equation} 188 | After this manipulation, the term $\b \cdot V((1+r)\cdot a-c)$ has become $\b \cdot V(a')$; it now only depends on the control variable. This property will be convenient when we apply the envelope theorem later on. 189 | 190 | \subsection{Step 2: First-Order Condition} 191 | 192 | For now, let's assume that $V$ does exist and is differentiable. The next step is to take the first-order condition in the optimization program: 193 | \begin{equation*} 194 | \max_{a'\in[0,(1+r)\cdot a]} u((1+r)\cdot a-a')+\b \cdot V(a'). 195 | \end{equation*} 196 | The first-order condition with respect to $a'$ is 197 | \begin{align} 198 | 0=&(-1)\cdot \pd{u}{c}((1+r)\cdot a-a')+\b \cdot \od{V}{a}(a')\nonumber\\ 199 | \od{u}{c}(c)&=\b \cdot \od{V}{a}(a').\label{eq:FOC} 200 | \end{align} 201 | 202 | \subsection{Step 3: Benveniste-Scheinkman Equation} 203 | 204 | In the first-order condition~\eqref{eq:FOC}, we do not know the derivative $\odx{V}{a}$ of the value function.
Hence, the next step is to determine what the derivative $\odx{V}{a}$ of the value function is. To do so, we apply the envelope theorem to the Bellman equation~\eqref{eq:recursive}, which yields 205 | \begin{equation} 206 | \od{V}{a}(a)=(1+r)\cdot \od{u}{c}(c).\label{eq:BS} 207 | \end{equation} 208 | This equation is the \textit{Benveniste-Scheinkman equation}. It holds for any $a$. 209 | 210 | \subsection{Step 4: One Step Forward} 211 | 212 | Equation \eqref{eq:BS} is valid for any state variable $a$. In particular, it is valid for next period's state variable: 213 | \begin{equation} 214 | \od{V}{a}(a')=(1+r)\cdot \od{u}{c}(c').\label{eq:BS2} 215 | \end{equation} 216 | 217 | \subsection{Step 5: Euler Equation} 218 | 219 | If we plug equation \eqref{eq:BS2} into equation \eqref{eq:FOC}, we obtain the Euler equation in terms of current period's consumption $c$ and next period's consumption $c'$: 220 | \begin{equation} 221 | \od{u}{c}(c)=(1+r)\cdot \b \cdot \od{u}{c}(c').\label{Euler} 222 | \end{equation} 223 | Note that this Euler equation is exactly the same as that obtained with the Lagrangian method (see equation \eqref{eq:eulerdp}). Indeed, the Lagrangian method and the dynamic programming method are equivalent. 224 | 225 | Using the Euler equation, we could reduce the optimization problem of Section~\ref{subsec:PROBLEM} to a policy function problem: for all $a$, determine the policy function $h(a)$ such that 226 | \begin{equation*} 227 | \od{u}{c}(h(a))=(1+r)\cdot \b \cdot \od{u}{c}(h((1+r)\cdot a-h(a))). 228 | \end{equation*} 229 | 230 | \subsection{A Closed-Form Solution}\label{guess} 231 | 232 | One of the advantages of using dynamic programming is that we can sometimes find closed-form solutions for the value function and the policy function. Here, we can find a closed-form solution if we assume $u(c)=\ln(c)$. To find a closed-form solution, we use the method of undetermined coefficients. We conjecture that the value function takes the form 233 | \begin{equation} 234 | V(a)=A+B \cdot \ln(a)\label{eq:fun}, 235 | \end{equation} 236 | where $A$ and $B$ are constants. The Bellman equation~\eqref{eq:recursive} becomes 237 | \begin{equation} 238 | A+B \cdot \ln(a)=\max_{a'\in[0,(1+r)\cdot a]}\ln((1+r)\cdot a-a')+\b \cdot \bs{A+B \cdot \ln(a')}\label{eq:AB}. 239 | \end{equation} 240 | We first express the solution $a'$ of the maximization problem as a function of parameters $A,\;B,\;\b$ and the state variable $a$. The first-order condition with respect to $a'$ yields 241 | \begin{align} 242 | \frac{1}{(1+r)\cdot a-a'}&=\frac{\b \cdot B}{a'}\nonumber\\ 243 | a'&=(1+r)\cdot \frac{\b \cdot B}{1+\b \cdot B} \cdot a \label{eq:focAB}. 244 | \end{align} 245 | The optimal level of consumption, $c=(1+r)\cdot a-a'$, satisfies 246 | \[c=(1+r)\cdot \frac{1}{1+\b \cdot B} \cdot a. \] 247 | We plug the expression~\eqref{eq:focAB} for the optimal $a'$ in the functional equation \eqref{eq:AB}. We obtain 248 | \begin{align*} 249 | A+B\cdot\ln(a)=&\ln{\frac{(1+r)\cdot a}{1+\b \cdot B}}+\b\cdot \bs{A+B\cdot \ln{\frac{\b \cdot B \cdot (1+r)\cdot a}{1+\b \cdot B}}}\\ 250 | A+B\cdot\ln(a)=&\bs{ \b \cdot A+ (1+\b \cdot B)\cdot \ln{\frac{1+r}{1+\b \cdot B}}+\b \cdot B \cdot \ln(\b \cdot B)} + \bs{ 1+\b \cdot B}\cdot \ln(a).
251 | \end{align*} 252 | The above equation must hold for all $a$, so we must have 253 | \begin{align} 254 | B&= 1+\b \cdot B\nonumber\\ 255 | B=&\frac{1}{1-\b}\label{eq:eqB}, 256 | \end{align} 257 | and 258 | \begin{align*} 259 | A=&\b \cdot A+(1+\b \cdot B)\cdot \ln{\frac{1+r}{1+\b \cdot B}}+\b \cdot B \cdot \ln(\b \cdot B). 260 | \end{align*} 261 | Using the fact that $1+\b \cdot B=\frac{1}{1-\b}$ and $\b \cdot B=\frac{\b}{1-\b}$, we infer that $A$ satisfies 262 | \begin{align} 263 | \bp{1-\b}\cdot A=&(1+\b \cdot B)\cdot \ln{\frac{1+r}{1+\b \cdot B}}+\b\cdot B\cdot \ln(\b \cdot B)\nonumber\\ 264 | \bp{1-\b}\cdot A=&\frac{1}{1-\b}\cdot \bs{\ln(1-\b)+\ln(1+r)}+\frac{\b}{1-\b}\cdot \bs{\ln(\b)-\ln(1-\b)}\nonumber\\ 265 | A=&\frac{(1-\b)\cdot \ln(1-\b)+\b\cdot \ln(\b)+\ln(1+r)}{(1-\b)^2}\label{eq:eqA}. 266 | \end{align} 267 | Equations \eqref{eq:eqA} and \eqref{eq:eqB} define the parameters of the value function we were solving for. With the values for the parameters $A$ and $B$, the functional form proposed in equation \eqref{eq:fun} actually solves the functional equation \eqref{eq:AB}. Our guess was correct. Since the value function is unique (by theorem), we have found the value function. 268 | 269 | Notice that we did not use the Benveniste-Scheinkman equation. The reason is that we assumed a functional form for the value function $V$, so we could compute the derivative $\odx{V}{a}$ directly, without resorting to the Benveniste-Scheinkman equation. 270 | 271 | Using equation \eqref{eq:eqB}, we can rewrite equation \eqref{eq:focAB} 272 | \begin{equation*} 273 | a'=\b \cdot (1+r)\cdot a. 274 | \end{equation*} 275 | That is, the optimal behavior is to save a constant fraction $\b$ of the invested wealth $(1+r)\cdot a$ and consume what is left. Let $c^{*}$ be the optimal consumption this period and $(a')^*=\b \cdot (1+r)\cdot a $ be the optimal wealth to save for next period. The policy function is 276 | \begin{align*} 277 | h(a)&=c^*=(1+r)\cdot a-(a')^*\\ 278 | h(a)&=(1-\b)\cdot (1+r)\cdot a. 279 | \end{align*} 280 | In this simple problem, we have been able to find a closed-form solution for the policy function. Unfortunately, it is usually not possible to find closed-form solutions to a dynamic program, and we must resort to numerical methods. 281 | 282 | \section{Simple Stochastic Problem}\label{sec:stochastic} 283 | 284 | In this section, we introduce randomness in the example of Section~\ref{sec:deterministic} and show how the techniques that we developed there can be applied. 285 | 286 | \subsection{Taste Shocks} 287 | 288 | We assume that the utility of consumption fluctuates randomly over time. The utility of consuming $c_{t}$ in period $t$ is given by 289 | \begin{equation*} 290 | \e_t \cdot u(c_t), 291 | \end{equation*} 292 | where $\e_t$ is a taste shock in period $t$. The taste shock is determined at the beginning of the period and observed before the consumption decision. The shock can take only two values: $\e_{t}\in\{\e_h,\e_l\}$ with $\e_h>\e_l>0$. The shock follows a Markov process; therefore, the distribution of $\e_t$ only depends on the realization $\e_{t-1}$ of $\e$ in the previous period. 293 | 294 | The problem can be solved as before. The major difference is that the value function is not only a function of current wealth, $a$, but also a function of the current realization of the taste shock, $\e$. In other words, there are two state variables: $a$ and $\e$. 
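To fix ideas, suppose the taste shock follows a two-state Markov chain with transition probabilities $\pi_{ij}\equiv\P{\e_{t+1}=\e_{j}|\e_{t}=\e_{i}}$ for $i,j\in\{h,l\}$ (the notation $\pi_{ij}$ is introduced here only for illustration). Conditional expectations of the form $\E[\e'|\e]{\cdot}$, which appear in the Bellman equation below, are then simple weighted sums. For instance, conditional on $\e=\e_{h}$,
\begin{equation*}
\E[\e'|\e_{h}]{V(a',\e')}=\pi_{hh}\cdot V(a',\e_{h})+\pi_{hl}\cdot V(a',\e_{l}),
\end{equation*}
with $\pi_{hh}+\pi_{hl}=1$.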
295 | 296 | \subsection{Step 1: Bellman Equation} 297 | 298 | The Bellman equation becomes 299 | \begin{equation*} 300 | V(a,\e)=\max_{a'\in[0,(1+r)\cdot a]} \bc{\e \cdot u((1+r)\cdot a-a')+\b\cdot \E[\e'|\e]{ V(a',\e')}}. 301 | \end{equation*} 302 | \subsection{Step 2: First-order condition} 303 | Taking the first-order condition with respect to $a'$ in the Bellman equation yields 304 | \begin{equation} 305 | \e \cdot \od{u}{c}((1+r)\cdot a-a')=\b\cdot\E[\e'|\e]{\pd{V}{a}(a',\e')},\label{eq:foc0} 306 | \end{equation} 307 | where 308 | \[\pd{V}{a}(a',\e')\] 309 | designates the partial derivative of the value function $V(a,\e)$ with respect to the first variable, $a$, evaluated at the pair $(a',\e')$. 310 | 311 | \subsection{Step 3: Benveniste-Scheinkman Equation} 312 | 313 | In the first-order condition~\eqref{eq:foc0}, we do not know the derivative $\pdx{V}{a}$ of the value function. We determine $\pdx{V}{a}$ by applying the Benveniste-Scheinkman equation: 314 | \begin{equation} 315 | \pd{V}{a}(a,\e)= (1+r)\cdot \e \cdot \od{u}{c}((1+r)\cdot a-a').\label{eq:bs1} 316 | \end{equation} 317 | \subsection{Step 4: One step forward} 318 | Equation \eqref{eq:bs1} is valid for any vector $(a,\e)$ of state variables. In particular, it is valid for the vector $(a',\e')$ of next period's state variables. Hence, 319 | \begin{equation} 320 | \pd{V}{a}(a',\e')= (1+r)\cdot \e'\cdot \od{u}{c}((1+r)\cdot a'-a'').\label{eq:onestep0} 321 | \end{equation} 322 | 323 | \subsection{Step 5: Euler Equation} 324 | 325 | Plugging equation \eqref{eq:onestep0} into the first-order condition \eqref{eq:foc0} yields: 326 | \begin{align} 327 | \e \cdot \od{u}{c}((1+r)\cdot a-a')&=(1+r)\cdot \b\cdot\E[\e'|\e]{\e' \cdot \od{u}{c}((1+r)\cdot a'-a'')}\nonumber\\ 328 | \e \cdot \od{u}{c}(c)&=(1+r)\cdot \b\cdot\E[\e'|\e]{\e' \cdot \od{u}{c}(c')}.\label{eq:eulerstoc} 329 | \end{align} 330 | This is the Euler equation. 331 | 332 | The policy function gives the optimal level of consumption in any state of the world. The policy function now depends on the realization $\e$ of the shock in the current period: 333 | \[c^{*}=h(a,\e).\] 334 | The policy function specifies a contingent plan of consumption, which depends on the state variable $a$ and on the realization of the shock variable $\e$. We can rewrite the Euler equation with the policy function: 335 | \begin{equation*} 336 | \e \cdot \od{u}{c}(h(a,\e))=(1+r)\cdot\b \cdot \E[\e'|\e]{ \od{u}{c}(h(a',\e'))}. 337 | \end{equation*} 338 | 339 | \section{Real Business-Cycle Model} 340 | 341 | Sections \ref{sec:deterministic} and \ref{sec:stochastic} apply dynamic programming to very simple optimization problems. However, dynamic programming has a wide range of applications and can be used to solve complex problems in macroeconomics. To illustrate how to use dynamic programming in a macroeconomic model, we solve for the equilibrium of a real business cycle model using dynamic programming. 342 | 343 | \subsection{Model} 344 | 345 | The model is built up from the following elements: 346 | 347 | \begin{itemize} 348 | \item Preferences of the representative household: \[\E[0]{\sum_{t=0}^{\infty}\b^t \cdot u\bp{C_{t},L_{t}} 349 | }\]with \[u\bp{C_{t},L_{t}} =\ln{C_{t}} +\t \cdot \frac{\bp{1-L_{t}}^{1-\g}}{1-\g}.\] 350 | \item Technology: we introduce labor-augmenting technology so that we have a balanced growth path. 351 | \begin{itemize} 352 | \item Production function: $Y_{t}=K_{t}^{\a} \cdot \bp{A_{t} \cdot L_{t}}^{1-\a}$ with $0<\a<1$. 
\item Capital accumulation: $K_{t+1}=\bp{1-\d}\cdot K_{t}+I_{t}$. 354 | \end{itemize} 355 | \item National accounts identity: $Y_{t} =C_{t}+I_{t}$. 356 | \item Technology shock: $\ln A_{t} =\rho_{A}\cdot \ln A_{t-1} +\varepsilon^{A}_{t}$, $|\rho_{A}|<1$, $\varepsilon^{A}_{t}\sim N(0,\s_{A}^{2})$, $\s_{A}>0$. 357 | \end{itemize} 358 | 359 | \subsection{Optimal Allocation} 360 | 361 | \begin{definition} The optimal allocation is the collection of stochastic processes $\bc{C_{t},I_{t},Y_{t},K_{t},A_{t},L_{t}}_{t=0}^{\infty}$ that solves: 362 | \begin{align*} 363 | \max_{\bc{C_{t},L_{t}}_{t=0}^{\infty}}&\E_{0} \sum_{t=0}^{\infty}\b^{t}\cdot u\bp{C_{t},L_{t}} 364 | \end{align*} 365 | subject to 366 | \begin{align} 367 | K_{t+1} &=(1-\d) \cdot K_{t}+I_{t}\label{eq:rbc1}\\ 368 | Y_{t}&=K_{t}^{\a}\cdot \bp{A_{t} \cdot L_{t}}^{1-\a}\label{eq:rbc2}\\ 369 | Y_{t}&=C_{t}+I_{t}\label{eq:rbc3}\\ 370 | \ln A_{t} &=\rho_{A}\cdot \ln A_{t-1} +\varepsilon^{A}_{t},\;\varepsilon^{A}_{t}\sim N(0,\s_{A}^{2}).\nonumber 371 | \end{align} 372 | \end{definition} 373 | The welfare theorems imply that the allocation in the competitive equilibrium coincides with the optimal allocation; thus, we focus on the optimal allocation. 374 | 375 | \subsection{Characterization of the Optimal Allocation} 376 | 377 | We use dynamic programming to characterize the optimal allocation. This stochastic problem admits a recursive structure with 378 | \begin{itemize} 379 | \item control $=[C,L]$ 380 | \item state $=[K]$ 381 | \item shock $=[A]$. 382 | \end{itemize} 383 | Furthermore, the transition equation for the state variable $K$ is obtained by combining equations~\eqref{eq:rbc1},~\eqref{eq:rbc2}, and~\eqref{eq:rbc3}. The three equations can be aggregated into a single transition equation for capital: 384 | \begin{equation} 385 | K'=(1-\d) \cdot K+K^{\a} \cdot (A \cdot L)^{1-\a}-C\label{eq:rbctrans}. 386 | \end{equation} 387 | 388 | \paragraph{Step 1: Bellman Equation} The Bellman equation is 389 | \begin{align*} 390 | V\bp{K,A}=&\max_{C,L}\ \bc{u\bp{C,L} +\b\cdot\E[A'|A]{ V\bp{K',A'} }}. 391 | \end{align*} 392 | We plug the transition equation for capital, given by~\eqref{eq:rbctrans}, into the value function: 393 | \begin{align*} 394 | V\bp{K,A}=\max_{C,L}\ \bc{u\bp{C,L}+\b \cdot \E[A'|A]{V\bp{(1-\d)\cdot K+K^{\a} \cdot \bp{A \cdot L}^{1-\a}-C, A'}}}. 395 | \end{align*} 396 | 397 | \paragraph{Step 2: First-Order Conditions} We derive the first-order conditions with respect to $C$ and $L$ in the Bellman equation: 398 | \begin{align*} 399 | 0&= \pd{u}{C}\bp{C,L} +\b \cdot \E[A'|A]{\pd{V}{K}(K',A')\cdot \pd{ K'}{ C}}\\ 400 | 0&=\pd{u}{L}\bp{C,L} +\b\cdot \E[A'|A]{\pd{V}{K}(K',A')\cdot \pd{ K'}{ L}}. 401 | \end{align*} 402 | Equation~\eqref{eq:rbctrans} implies that 403 | \begin{align*} 404 | \pd{ K'}{ C}&=-1\\ 405 | \pd{ K'}{ L}&=(1-\a)\cdot \frac{Y}{L}. 406 | \end{align*} 407 | The assumption that $u(C,L)=\ln{C}+\t\cdot \frac{(1-L)^{1-\g}}{1-\g}$ implies that 408 | \begin{align*} 409 | \pd{u}{C}&=\frac{1}{C}\\ 410 | -\pd{u}{L}&=\t \cdot (1-L)^{-\g}. 411 | \end{align*} 412 | Thus, we obtain 413 | \begin{align} 414 | \frac{1}{C}&=\b\cdot \E[A'|A]{\pd{V}{K}(K',A')} \label{eq:xx2}\\ 415 | \t \cdot (1-L)^{-\g}&=\b\cdot\bp{1-\a}\cdot \frac{Y}{L}\cdot \E[A'|A]{\pd{V}{K}(K',A')}\label{eq:xx3}.
416 | \end{align} 417 | By taking the ratio of equations~\eqref{eq:xx2} and~\eqref{eq:xx3}, we obtain the following intratemporal optimality condition: 418 | \begin{equation} 419 | \frac{\t \cdot C}{(1-L)^{\g}}=(1-\a) \cdot \frac{Y}{L} \label{eq:intra}. 420 | \end{equation} 421 | This condition simply says that the marginal rate of substitution between leisure and consumption (left-hand side of the equation) equals the marginal product of labor (right-hand side of the equation) in the optimal allocation. 422 | 423 | \paragraph{Step 3: Benveniste-Scheinkman Equation} We use the Benveniste-Scheinkman equation to determine $\pdx{V}{K}$ and make progress on~\eqref{eq:xx2}: 424 | \begin{align*} 425 | \pd{V}{K}\bp{K,A} & =\b\cdot \E[A'|A]{\pd{V}{K}(K',A') \cdot \pd{ K'}{ K}}. 426 | \end{align*} 427 | Equation~\eqref{eq:rbctrans} implies that 428 | \[\pd{ K'}{ K}= \bs{(1-\d)+\a \cdot \frac{Y}{K}}\equiv R.\] 429 | Thus, we obtain 430 | \begin{align} 431 | \pd{V}{K}\bp{K,A}& =\b\cdot R\cdot \E[A'|A]{\pd{V}{K}(K',A')}\label{eq:xx1}. 432 | \end{align} 433 | 434 | \paragraph{Step 4: Euler Equation} Combining the first-order condition~\eqref{eq:xx2} with the Benveniste-Scheinkman equation~\eqref{eq:xx1} yields 435 | \begin{align*} 436 | \pd{V}{K}\bp{K,A} & = R \cdot \frac{1}{C}. 437 | \end{align*} 438 | Moving one period ahead yields 439 | \begin{align} 440 | \pd{V}{K}(K',A') & = R' \cdot \frac{1}{C'}.\label{eq:xx4} 441 | \end{align} 442 | Finally, we combine the first-order condition~\eqref{eq:xx2} with equation~\eqref{eq:xx4} to obtain the Euler equation: 443 | \begin{equation} 444 | \frac{1}{C}=\b \cdot \E[A'|A]{R'\cdot\frac{1}{C'}}.\label{eq:euler} 445 | \end{equation} 446 | We aim to solve explicitly for the stochastic processes of the key variables in this model (consumption, leisure, capital, and so on). There are two approaches in the literature to solve for these stochastic processes: (1) simplify the economic environment; (2) find an approximate analytical solution by log-linearizing the model. Here we follow the first approach. 447 | 448 | \subsection{Simplifying Assumptions} 449 | 450 | The model contains a mixture of linear and log-linear elements, which makes it impossible to find a closed-form solution. To eliminate the linear elements in the model and keep only log-linear elements, we assume full capital depreciation: $\d=1$. This assumption implies that the capital-accumulation equation simplifies to 451 | \begin{align*} 452 | K'=I=Y-C=s\cdot Y, 453 | \end{align*} 454 | where $s$ is the current saving rate. Under this assumption, we simplify the Euler equation as follows: 455 | \begin{align*} 456 | \frac{1}{\bp{1-s}\cdot Y}&=\b\cdot \E[A'|A]{\a\cdot \frac{Y'}{K'}\cdot \frac{1}{\bp{1-s'} \cdot Y'}}\\ 457 | \frac{1}{\bp{1-s}\cdot Y}&=\a\cdot \b\cdot \E[A'|A]{\frac{1}{s\cdot Y}\cdot \frac{1}{1-s'}} \\ 458 | \frac{s}{\bp{1-s} } & =\a\cdot \b\cdot \E[A'|A]{ \frac{1}{1-s'}}. 459 | \end{align*} 460 | 461 | We solve this equation by guessing and verifying. We guess that the saving rate remains constant over time: $s=s'=s^*$. In that case, 462 | \begin{align*} 463 | \frac{s^*}{\bp{1-s^*}}=\a\cdot \b \cdot \E[A'|A]{\frac{1}{\bp{1-s^*}}}=\frac{\a\cdot \b}{\bp{1-s^*} }. 464 | \end{align*} 465 | Thus $s^* =\a\cdot \b$. Using this result in the first-order condition~\eqref{eq:intra} yields 466 | \begin{align*} 467 | \frac{\t \cdot \bp{1-s^*} \cdot Y}{\bp{1-L}^{\g}} =\bp{1-\a}\cdot \frac{Y}{L}\\ 468 | \frac{\t \cdot \bp{1-s^*}}{1-\a} = \frac{\bp{1-L}^{\g}}{L}.
469 | \end{align*} 470 | Therefore employment $L$ is constant over time. The two first-order conditions hold for each point in time, and not just along the balanced growth path. Hence the optimal allocation is characterized by a constant saving rate and a constant employment, even in presence of transitory technology shocks. 471 | 472 | 473 | \section{Theory of Deterministic Problems}\label{sec:theory} 474 | 475 | So far, we have only applied dynamic programming to specific problems. In this section we propose a general treatment of dynamic programming for deterministic problems. The goal is to show you the type of problems that can be solved with dynamic programming. 476 | 477 | Consider the following problem: Given initial condition $a_{0}$, choose $\bc{c_{t}}_{t=0}^{\infty}$ to maximize 478 | \begin{align*} 479 | \sum_{t=0}^{\infty}\b^{t}\cdot u(a_{t},c_{t}) 480 | \end{align*} 481 | subject to the law of motion 482 | \[a_{t+1}=g(a_{t},c_{t}).\] 483 | $u(a_{t},c_{t})$ is a concave function. 484 | 485 | Dynamic programming seeks a time-invariant policy function $h$ mapping the state $a_{t}$ into the control $c_{t}$ such that the sequence 486 | $\{c_{t}\}_{t=0}^{\infty}$ generated by iterating the two functions 487 | \begin{align*} 488 | c_{t} & =h(a_{t}) \\ 489 | a_{t+1} & =g(a_{t},c_{t}) 490 | \end{align*} 491 | solves the original problem. 492 | 493 | \subsection{Step 1: Bellman Equation} 494 | 495 | The Principle of Optimality allows us to write the value function $V$ as the solution of a functional equation 496 | \begin{equation} 497 | V(a)=\max_{c}\bc{u(a,c)+\b\cdot V(g(a,c))}.\label{eq:bellmansymb} 498 | \end{equation} 499 | This functional equation is the Bellman equation.\footnote{The proof of the Principle of Optimality is due to Bellman. The formal derivation and proof of this result, as well as the conditions under which this result holds, are omitted here.} 500 | 501 | The optimal consumption is given by the policy function: $c=h(a)$. Another representation of the Bellman equation is 502 | \begin{equation} 503 | V(a)=u(a,h(a))+\b\cdot V(g(a,h(a))).\label{eq:bellmansymb2} 504 | \end{equation} 505 | 506 | To highlight the recursive structure of the problem, we can write the symbolic representation of 507 | the Bellman equation: 508 | \begin{align*} 509 | V(\text{state(t)})&= \max_{\text{control(t)}}\bc{u(\text{control(t)},\text{state(t)})+\b V(\text{state(t+1)})} 510 | \end{align*} 511 | subject to 512 | \begin{align*} 513 | \text{state(t+1)}&=g(\text{control(t)},\text{state(t)}), 514 | \end{align*} 515 | which is equivalent to 516 | \begin{align*} 517 | V(\text{state(t)})&= \max_{\text{control(t)}}\bc{u(\text{control(t)},\text{state(t)})+\b \cdot V(g(\text{control(t)},\text{state(t)}))}, 518 | \end{align*} 519 | where $\text{control(t)}$ and $\text{state(t)}$ are vectors of control variables and state variables. 
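As an illustration of this notation, the consumption-saving problem of section~\ref{sec:deterministic} fits the template with state $a$, control $a'$ (next period's wealth), per-period payoff $u((1+r)\cdot a-a')$, and transition function $g(a,a')=a'$. The real business-cycle model fits the same template, once the shock $A$ is added to the state vector: the state is $K$, the controls are $C$ and $L$, and the transition function is given by~\eqref{eq:rbctrans}.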
520 | 521 | 522 | \subsection{Step 2: First-Order Condition} 523 | 524 | Taking the first-order condition with respect to $c$ of the optimization problem~\eqref{eq:bellmansymb} yields 525 | \begin{equation} 526 | \pd{u}{c}(a,h(a))+\b \cdot \pd{g}{c}(a,h(a)) \cdot \od{V}{a}(g(a,h(a)))=0,\label{eq:focsymb} 527 | \end{equation} 528 | where 529 | \[\pd{g}{c}(a,h(a))\] 530 | designates the partial derivative of the function $g(a,c)$ with respect to the second variable, $c$, evaluated at the pair $(a,h(a))$; 531 | \[\pd{u}{c}(a,h(a))\] 532 | designates the partial derivative of the function $u(a,c)$ with respect to the second variable, $c$, evaluated at the pair $(a,h(a))$; 533 | and 534 | \[\od{V}{a}(g(a,h(a)))\] 535 | designates the derivative of the function $V(a)$ with respect to the variable $a$ evaluated at $g(a,h(a))$. 536 | 537 | \subsection{Step 3: Benveniste-Scheinkman Equation} 538 | 539 | In the first-order condition~\eqref{eq:focsymb}, we do not know the derivative $\odx{V}{a}$ of the value function (because we do not know the value function). Hence, the next step is to determine what the derivative $\odx{V}{a}$ of the value function is. To do so, we apply the Benveniste-Scheinkman theorem. This theorem says that under some regularity conditions, 540 | \begin{align} 541 | \od{V}{a}(a)=\pd{ u}{a}(a,h(a))+\b\cdot \pd{ g}{a}(a,h(a))\cdot \od{V}{a}(g(a,h(a))).\label{eq:BSET} 542 | \end{align} 543 | The theorem is a version of the envelope theorem applied to the Bellman equation~\eqref{eq:bellmansymb}. 544 | 545 | \subsection{Step 3': A Combination} 546 | 547 | The first-order condition~\eqref{eq:focsymb} yields 548 | \begin{align} 549 | \od{V}{a}(g(a,h(a)))=-\frac{1}{\b}\cdot \frac{\pdx{u(a,h(a))}{c}}{\pdx{g(a,h(a))}{c}}. 550 | \end{align} 551 | Combining this equation with the Benveniste-Scheinkman equation~\eqref{eq:BSET} yields 552 | \begin{align} 553 | \od{V}{a}(a)=\pd{u}{a}(a,h(a))-\pd{u}{c}(a,h(a))\cdot \frac{\pdx{g(a,h(a))}{a}}{\pdx{g(a,h(a))}{c}}.\label{eq:plug} 554 | \end{align} 555 | This step is necessary when \[\pd{g(a,c)}{a}\neq 0.\] In the consumption-saving problem, we picked a control variable---next period's wealth, $a'$---such that the state variable, $a$, does not enter the transition function, $g$. Therefore, we could bypass this step. 556 | 557 | \subsection{Step 4: One Step Forward} 558 | 559 | Equation \eqref{eq:plug} is true for any value of the state variable $a$. In particular, it is true for $a'=g(a,h(a))$. Therefore 560 | \begin{align} 561 | \od{V}{a}(a')=\pd{u}{a}(a',h(a'))-\pd{u}{c}(a',h(a'))\cdot \frac{\pdx{g(a',h(a'))}{a}}{\pdx{g(a',h(a'))}{c}}.\label{eq:onestep} 562 | \end{align} 563 | 564 | \subsection{Step 5: Euler Equation} 565 | 566 | We plug equation~\eqref{eq:onestep} into equation~\eqref{eq:focsymb} to get the Euler equation 567 | \begin{equation*} 568 | \pd{u}{c}(a,h(a))+\b \cdot \pd{g}{c}(a,h(a)) \cdot \bc{\pd{u}{a}(a',h(a'))-\pd{u}{c}(a',h(a')) \cdot \frac{\pdx{g(a',h(a'))}{a}}{\pdx{g(a',h(a'))}{c}}}=0. 569 | \end{equation*} 570 | The equation characterizes the optimal behavior of the control variable $c$. 
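As a quick check of this formula, consider the consumption-saving problem once more, but now take consumption $c$ as the control variable, so that the per-period payoff is $u(c)$ and the transition function is $g(a,c)=(1+r)\cdot a-c$. In that case,
\[\pd{u}{a}=0,\qquad \pd{u}{c}=\od{u}{c}(c),\qquad \pd{g}{a}=1+r,\qquad \pd{g}{c}=-1.\]
Writing $c=h(a)$ and $c'=h(a')$, the Euler equation reduces to
\begin{align*}
\od{u}{c}(c)-\b\cdot \bc{0-\od{u}{c}(c')\cdot \frac{1+r}{-1}}&=0\\
\od{u}{c}(c)&=(1+r)\cdot\b\cdot \od{u}{c}(c'),
\end{align*}
which is the usual consumption Euler equation. Note that with this choice of control variable, the state variable enters the transition function ($\pdx{g}{a}=1+r\neq 0$), so step 3' cannot be bypassed.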
571 | 572 | \end{document} -------------------------------------------------------------------------------- /lecturenotes/notes2.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pmichaillat/math-for-macro/6d703cadd8da24bb73edcf6f1b6d3a2971bb405d/lecturenotes/notes2.pdf -------------------------------------------------------------------------------- /lecturenotes/notes2.tex: -------------------------------------------------------------------------------- 1 | \documentclass[letterpaper,12pt,leqno]{article} 2 | \usepackage{paper,math,notes} 3 | \available{https://pascalmichaillat.org/x/} 4 | \hypersetup{pdftitle={Optimal Control}} 5 | 6 | \begin{document} 7 | 8 | \title{Optimal Control} 9 | \author{Pascal Michaillat} 10 | \date{} 11 | 12 | \begin{titlepage} 13 | \maketitle 14 | \tableofcontents 15 | \end{titlepage} 16 | 17 | \section{Preliminary Results} 18 | 19 | Before presenting the techniques of optimal control, we review two mathematical results: L'Hopital's rule and Leibniz's rule. These results are used in section~\ref{sec:HEURISTIC} for the derivations of some of the results. 20 | 21 | \subsection{L'Hopital's rule} 22 | 23 | L'Hopital's rule states that if 24 | \begin{equation*} 25 | \lim_{x\to x_{0}}f(x) =\lim_{x\to x_{0}}g(x) =0, 26 | \end{equation*} 27 | then 28 | \begin{equation*} 29 | \lim_{x\to x_{0}}\frac{f(x)}{g(x)}=\lim_{x\to x_{0}}\frac{f'(x) }{g'(x)}. 30 | \end{equation*} 31 | 32 | \subsection{Leibniz's rule} 33 | 34 | Leibniz's rule states that if 35 | \begin{equation*} 36 | I(z) \equiv \int_{a(z) }^{b(z)}f\bp{x,z} dx 37 | \end{equation*} 38 | where $x$ is the integration variable, $z$ is a parameter and $f\bp{ x,z}$ is assumed to have a continuous derivative $\pdx{f(x,z)}{z}$ in the interval $\bs{a(z),b(z)}$, then the effect of change in $z$ on the integral is given by 39 | \begin{equation*} 40 | \od{I}{z}(z) =\int_{a(z)}^{b(z)} \pd{f}{z}(x,z) dx+\od{b}{z}(z) \cdot f( b(z),z) -\od{a}{z}(z) \cdot f(a(z) ,z). 41 | \end{equation*} 42 | 43 | \section{Consumption-Saving Problem} 44 | 45 | This section considers the same consumption-saving problem as in the notes on dynamic programming. The only difference is that the problem is now set in continuous time. 46 | 47 | \subsection{Description of the Problem} 48 | 49 | Taking initial wealth $a_{0}$ as given, the problem is to choose the consumption path $\bc{c(t)}_{t\geq 0}$ to maximize the lifetime utility 50 | \begin{align} 51 | \int_{0}^{\infty}e^{-\r \cdot t}\cdot u(c(t)) dt \label{eq:cont}, 52 | \end{align} 53 | subject to the law of motion 54 | \begin{equation} 55 | \dot{a}(t) = r\cdot a(t)-c(t).\label{eq:LAW} 56 | \end{equation} 57 | The parameter $r>0$ is the constant interest rate at which wealth is invested. The parameter $\r>0$ is the discount factor. The notation $\dot{a}$ denotes the derivative of wealth $a$ with respect to time $t$: 58 | \[\dot{a}(t)\equiv \pd{a(t)}{t}.\] 59 | 60 | \subsection{Optimal-Control Approach} 61 | 62 | To solve this problem, it is inconvenient to use the Lagrangian technique or dynamic programming technique because they are designed for discrete-time optimization problems. Instead, we use a technique called \textit{optimal control}. We will see in section~\ref{sec:HEURISTIC} that optimal control is related both to the Lagrangian technique and the dynamic programming technique. But optimal control is designed for continuous-time optimization problems. 
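To relate this continuous-time problem to the discrete-time problem solved with dynamic programming, note that over a short time interval $\D t$, wealth evolves approximately according to $a(t+\D t)\approx a(t)+\bs{r\cdot a(t)-c(t)}\cdot \D t$; dividing by $\D t$ and letting $\D t\to 0$ gives the law of motion~\eqref{eq:LAW}. Similarly, the discounting term $e^{-\r\cdot t}$ in~\eqref{eq:cont} plays the role of $\b^{t}$ in discrete time, with $\b=e^{-\r}$.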
63 | 64 | There are two ways to apply the optimal-control approach: one by forming a \textit{present-value Hamiltonian}, the other by forming a \textit{current-value Hamiltonian}. These two ways are roughly equivalent, but using the current-value Hamiltonian is usually simpler. 65 | 66 | \section{Solution with Present-Value Hamiltonian} 67 | 68 | We start by solving the consumption-saving problem with the present-value Hamiltonian. 69 | 70 | \subsection{State and Control Variables} 71 | 72 | First, we identify the \textit{state variables} and \textit{control variables}. A control variable can be adjusted at any time $t$ whereas the evolution of a state variable follows a law of motion such as~\eqref{eq:LAW}. Here, the control variable is consumption $c(t)$ and the state variable is wealth $a(t)$. 73 | 74 | \subsection{Present-Value Hamiltonian} 75 | 76 | Second, we write down the present-value Hamiltonian 77 | \begin{equation*} 78 | \Hc(t) = e^{-\r \cdot t}\cdot u(c(t)) +\l(t) \cdot \bp{r\cdot a(t) - c(t)}. 79 | \end{equation*} 80 | To form the Hamiltonian, we introduce a new variable $\l(t)$, which we call the \textit{costate variable} associated with the state variable $a(t)$. In general, we introduce as many costate variables as there are state variables. As a consequence, we introduce as many costate variables as there are laws of motion, because each state variable is associated with a law of motion. 81 | 82 | The costate variable is analogous to a Lagrange multiplier, except that it is associated to a law of motion instead of a static constraint. But just like a Lagrange multiplier, the costate variable measures the value from marginally relaxing the constraint at the optimum. 83 | 84 | \subsection{Optimality Conditions} 85 | 86 | Next we write down the two optimality conditions, which are derived from the Maximum Principle: 87 | \begin{align} 88 | \pd{\Hc(t)}{c(t)}=& 0 \label{eq:foc1}\\ 89 | \pd{\Hc(t)}{a(t)}=&-\dot{\l}(t)\label{eq:foc2}. 90 | \end{align} 91 | 92 | The optimality conditions can be rewritten as 93 | \begin{align} 94 | 0 &= e^{-\r \cdot t}\cdot u'(c(t)) -\l(t)\label{eq:focc} \\ 95 | -\dot{\l}(t)&= \l(t)\cdot r \label{eq:focs}. 96 | \end{align} 97 | 98 | \subsection{Transversality Condition} 99 | 100 | Then we impose a \textit{transversality condition}: 101 | \begin{equation} 102 | \lim_{t\to+\infty} \l(t)\cdot a(t)=0.\label{eq:trans1} 103 | \end{equation} 104 | The transversality condition describes what must be satisfied at the end of the time horizon at the optimum. Here it says that at the optimum, at the end of time, either there is no wealth left ($a(t)=0$) or wealth has no value ($\l(t)\cdot a(t)=0$).\footnote{Recall that the costate variable $\l(t)$ measures the marginal value of wealth at the optimum.} This clearly has to be the case at the optimum: if there was some wealth left and wealth had value, then all the wealth that is left over should be consumed. The idea behind the transversality condition is that if there is any flexibility at the end of time, then the marginal benefit from exploiting that flexibility must be zero at the optimum. 105 | 106 | \subsection{Euler Equation} 107 | 108 | We solve explicitly for the optimal consumption path by eliminating the costate variable $\l(t)$ using~\eqref{eq:focc} and~\eqref{eq:focs}. 109 | 110 | To eliminate $\l(t)$, we first take log of \eqref{eq:focc}: 111 | \begin{align*} 112 | -\r \cdot t+ \ln{u'(c(t))} =\ln{\l(t)}. 
113 | \end{align*} 114 | We then take time derivatives in this equation: 115 | \begin{equation*} 116 | \r+\bs{ \frac{-u^{''}(c(t)) \cdot c(t)}{u'(c(t))}}\cdot \bs{\frac{\dot{c}(t)}{c(t)}} =-\frac{\dot{\l}(t)}{\l(t)}. 117 | \end{equation*} 118 | Equation~\eqref{eq:focs} can be rewritten as 119 | \begin{align*} 120 | -\frac{\dot{\l}(t)}{\l(t)}&= r. 121 | \end{align*} 122 | Combining these two equations, we obtain the Euler equation for optimal consumption: 123 | \begin{equation} 124 | \frac{\dot{c}(t)}{c(t)}\cdot \bs{\frac{-u^{''}(c(t)) \cdot c(t)}{u'(c(t)) }} =r-\r. 125 | \label{eq:EULER}\end{equation} 126 | The term \[\frac{-u^{''}(c(t))\cdot c(t)}{u'(c(t))}\] measures relative risk aversion. The coefficient of relative risk aversion also corresponds to the inverse of the intertemporal elasticity of substitution. 127 | 128 | \subsection{CRRA Utility} 129 | 130 | Consider the following utility function: 131 | \begin{equation*} 132 | u(c) =\frac{c^{1-\g }-1}{1-\g }. 133 | \end{equation*} 134 | This utility function is know as Constant Relative Risk Aversion (CRRA ) utility. It is characterized by a constant coefficient of relative risk aversion $\g$ as 135 | \[\frac{-u^{''}(c) \cdot c}{u^{'}(c)}=\g.\] 136 | With CRRA utility, the Euler equation simplifies to 137 | \begin{equation*} 138 | \frac{\dot{c}(t)}{c(t)}=\frac{r-\r}{\g}. 139 | \end{equation*} 140 | 141 | \section{Solution with Current-Value Hamiltonian} 142 | 143 | Next we solve the consumption-saving problem with a current-value Hamiltonian. 144 | 145 | \subsection{Current-Value Hamiltonian} 146 | 147 | The present-value Hamiltonian $\Hc$ depends on $t$ because of the discounting $e^{-\r \cdot t}$, which might create some difficulties in deriving and analyzing solutions to the problem. Multiplying $\Hc$ by $e^{\r \cdot t}$ addresses this problem. 148 | 149 | We denote the resulting Hamiltonian $\Hc^{*}$ as current-value Hamiltonian. The current-value Hamiltonian is given by 150 | \begin{equation*} 151 | \Hc^{*}(t)\equiv e^{\r \cdot t}\cdot \Hc(t)=u(c(t)) +e^{\r \cdot t} \cdot \l(t) \cdot \bp{r\cdot a(t)-c(t)}. 152 | \end{equation*} 153 | We define a new costate variable $q(t)$ as 154 | \begin{equation*} 155 | q(t)\equiv e^{\r \cdot t} \cdot \l(t). 156 | \end{equation*} 157 | The current-value Hamiltonian becomes 158 | \begin{equation*} 159 | \Hc^{*}(t)= u(c(t)) +q(t) \cdot \bp{r\cdot a(t)-c(t)}. 160 | \end{equation*} 161 | This is the expression of the current-value Hamiltonian that we use in practice. 162 | 163 | \subsection{Optimality Conditions} 164 | 165 | The optimality conditions are slightly different. The optimality conditions become 166 | \begin{align*} 167 | \pd{\Hc^{*}(t)}{c(t)}=& 0\\ 168 | \pd{\Hc(t)^{*}}{a(t)}=&\r\cdot q(t)-\dot{q}(t). 169 | \end{align*} 170 | 171 | Compared to the optimality conditions~\eqref{eq:foc1} and~\eqref{eq:foc2} with the present-value Hamiltonian, there is an extra term $+\r\cdot q(t)$ in the second condition. The extra term arises because the costate variable $q(t)$ is defined differently form the costate variable $\l(t)$ that we used in the present-value Hamiltonian. 
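To see where the extra term comes from, differentiate the definition of the costate variable $q(t)=e^{\r\cdot t}\cdot \l(t)$ with respect to time:
\[\dot{q}(t)=\r\cdot e^{\r\cdot t}\cdot \l(t)+e^{\r\cdot t}\cdot \dot{\l}(t)=\r\cdot q(t)+e^{\r\cdot t}\cdot \dot{\l}(t),\]
so that $-e^{\r\cdot t}\cdot \dot{\l}(t)=\r\cdot q(t)-\dot{q}(t)$. Since $\Hc^{*}(t)=e^{\r\cdot t}\cdot \Hc(t)$, multiplying the optimality condition~\eqref{eq:foc2} by $e^{\r\cdot t}$ delivers exactly the second optimality condition above.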
172 | 173 | The optimality conditions can be rewritten as 174 | \begin{align} 175 | 0 &= u'(c(t)) -q(t)\label{eq:auto1}\\ 176 | \r\cdot q(t)-\dot{q}(t)&= q(t)\cdot r .\label{eq:auto2} 177 | \end{align} 178 | 179 | \subsection{Transversality Condition} 180 | 181 | The transversality condition becomes 182 | \[\lim_{t\to+\infty}e^{-\r \cdot t} \cdot q(t)\cdot a(t)=0.\] 183 | Compared to the transversality condition~\eqref{eq:trans1} with the present-value Hamiltonian, there is an extra factor $e^{-\r \cdot t}$ in this condition. The extra factor arises because the costate variable $q(t)$ is defined differently from the costate variable $\l(t)$ that we used in the present-value Hamiltonian. But the interpretation of the transversality condition remains the same: at the optimum, at the end of time, any leftover wealth must be consumed, except if the present-discounted marginal value of wealth is zero at the end of time. 184 | 185 | \subsection{Euler Equation} 186 | 187 | By combining equations~\eqref{eq:auto1} and~\eqref{eq:auto2}, we obtain exactly the same condition~\eqref{eq:EULER} as with the present-value Hamiltonian. We take the log of equation~\eqref{eq:auto1} to obtain 188 | \begin{align*} 189 | \ln{u'(c(t))} = \ln{q(t)}. 190 | \end{align*} 191 | We then take time derivatives in this equation: 192 | \begin{equation*} 193 | \bs{ \frac{-u^{''}(c(t)) \cdot c(t)}{u'(c(t))}}\cdot \bs{\frac{\dot{c}(t)}{c(t)}} =-\frac{\dot{q}(t)}{q(t)}. 194 | \end{equation*} 195 | Equation~\eqref{eq:auto2} can be rewritten 196 | \begin{align*} 197 | -\frac{\dot{q}(t)}{q(t)}&= r-\r. 198 | \end{align*} 199 | Combining these two equations, we obtain the same Euler equation for optimal consumption as the one we obtained with the present-value Hamiltonian: 200 | \begin{equation*} 201 | \frac{\dot{c}(t)}{c(t)}\cdot \bs{\frac{-u^{''}(c(t)) \cdot c(t)}{u'(c(t)) }} =r-\r. 202 | \end{equation*} 203 | 204 | \subsection{Comparison of the Two Solutions} 205 | 206 | The two approaches---the approach with the present-value Hamiltonian and the approach with the current-value Hamiltonian---are equivalent. They lead to the same Euler equation. However, it is often more convenient to work with the current-value Hamiltonian. 207 | 208 | \section{Theory of Optimal Control} 209 | 210 | Optimal control can be used to solve any continuous-time optimization problem. This section provides optimality conditions for a general optimization problem. 211 | 212 | \subsection{General Optimization Problem} 213 | 214 | The general problem is to choose $\bc{c(t)}_{t\geq 0}$ to maximize 215 | \begin{align} 216 | \int_{0}^{\infty}e^{-\r\cdot t}\cdot u\bp{ a(t),c(t)}\label{dpc} 217 | \end{align} 218 | given the constraint that for all $t$ 219 | \begin{align} 220 | \dot{a}(t) = g(a(t),c(t)),\label{eq:thelaw} 221 | \end{align} 222 | and taking $a_{0}$ as given. The parameter $\r >0$ is the discount rate. The functions $u$ and $g$ are concave and twice differentiable. 223 | 224 | \subsection{Present-Value Hamiltonian} 225 | 226 | An important result in optimal control theory is the Maximum Principle. It is due to Pontryagin. In addition to the control and state variables, we introduce a costate variable $\l (t)$ associated with the state variable. The costate variable measures the shadow price of the associated state variable. The costate variable enters the optimal control problem through the present-value Hamiltonian, defined as 227 | \begin{equation} 228 | \Hc(t)=e^{-\r\cdot t}\cdot u(a(t),c(t)) +\l(t)\cdot g(a(t),c(t)). 
229 | \label{eq:HAMILDEF}\end{equation} 230 | 231 | \subsection{Present-Value Optimality Conditions} 232 | 233 | The Maximum Principle gives necessary conditions for optimality. There are three conditions. The first two conditions are 234 | \begin{align} 235 | \pd{\Hc(t)}{c(t)} &=0 \label{eq:hcontrol} \\ 236 | \pd{\Hc(t)}{a(t)} &=-\dot{\l}(t).\label{eq:hstate} 237 | \end{align} 238 | Condition \eqref{eq:hcontrol} implies that the Hamiltonian must be maximized 239 | with respect to the control variable at any point in time. Condition \eqref{eq:hstate} says that the marginal change of the Hamiltonian associated with a unit change of the state variable is equal to minus the rate of change of the costate variable. 240 | 241 | The optimal solution must also satisfy a third condition, which we call transversality condition: 242 | \begin{equation} 243 | \lim_{t\to \infty }\l(t)\cdot a(t)=0. \label{eq:tvc} 244 | \end{equation} 245 | The transversality condition implies the product of costate and state must be converging to zero as time goes to infinity. 246 | 247 | \subsection{Current-Value Hamiltonian} 248 | 249 | We can reformulate the results from the Maximum Principle with the current-value Hamiltonian which is often easier to manipulate. The current-value Hamiltonian is defined as 250 | \begin{equation*} 251 | \Hc^{*}(t)= u\bp{a(t),c(t)} +q(t)\cdot g(a(t),c(t)), 252 | \end{equation*} 253 | where $q(t)$ is the costate variable associated with the state variable $a(t)$. 254 | 255 | \subsection{Current-Value Optimality Conditions} 256 | 257 | With the current-value Hamiltonian, the three necessary conditions~\eqref{eq:hcontrol},~\eqref{eq:hstate}, and ~\eqref{eq:tvc} for optimality become 258 | \begin{align*} 259 | \pd{\Hc^{*}(t)}{c(t)} &=0\\ 260 | \pd{\Hc^{*}(t)}{a(t)} &=\r\cdot q(t)-\dot{q}(t)\\ 261 | \lim_{t\to \infty }e^{-\r\cdot t}\cdot q(t)\cdot a(t)&=0. 262 | \end{align*} 263 | 264 | \section{Heuristic Derivation of the Maximum Principle}\label{sec:HEURISTIC} 265 | 266 | In this section, we provide an heuristic derivation of the necessary conditions for optimality provided by the Maximum Principle. One way to derive the optimality conditions \eqref{eq:hcontrol} and \eqref{eq:hstate} is to apply informally the results from dynamic programming. Formally, many of the claims below are imprecise, but they will serve the purpose of providing intuition for the Maximum Principle. 
267 | 268 | \subsection{Value Function for the Discretized Optimization Problem} 269 | 270 | We begin by defining the value function of the problem, which is the maximized value of the objective function as a function of the 271 | state variable $a(t)$ and time $t$: 272 | \begin{equation*} 273 | V\bp{a(t),t} =\underset{\bc{c(s)}_{s\geq t}}{\max} \int_{t}^{\infty }e^{-\r \cdot \bp{s-t}}\cdot u\bp{a(s),c(s)} ds, 274 | \end{equation*} 275 | where the maximization is subject for all $s\geq t$ to the law of motion of the state variable 276 | \begin{equation} 277 | \dot{a}(s)=g\bp{a(s),c(s)}.\label{eq:lmh} 278 | \end{equation} 279 | 280 | \subsection{Bellman Equation} 281 | 282 | Since the problem has a recursive structure, we can apply the Principle of Optimality and write the value function as the solution to a Bellman equation: 283 | \begin{align*} 284 | V\bp{ a(t),t} =\underset{\bc{c(s)}_{t\leq s\leq t+\D t}}{\max }\bc{\int_{t}^{t+\D t}e^{-\r \cdot \bp{ s-t}} \cdot u\bp{a(s),c(s)} ds+e^{-\r \cdot \D t} \cdot V\bp{ a(t+\D t),t+\D t} }, 285 | \end{align*} 286 | where the maximization is subject for all $t\leq s\leq t+\D t$ to~\eqref{eq:lmh}. 287 | 288 | Subtract $V\bp{a(t),t} $ from both sides and divide by $\D t$: 289 | \begin{align} 290 | 0=\underset{\bc{c(s)}_{t\leq s\leq t+\D t}}{\max }\bs{\frac{\int_{t}^{t+\D t}e^{-\r \cdot \bp{ s-t}} \cdot u\bp{a(s),c(s)} ds}{\D t}+ \frac{e^{-\r \cdot \D t} \cdot V\bp{ a(t+\D t),t+\D t}-V\bp{ a(t),t}}{\D t} },\label{eq:BIG} 291 | \end{align} 292 | where the maximization is subject for all $t\leq s\leq t+\D t$ to~\eqref{eq:lmh}. 293 | 294 | \subsection{Hamilton-Jacobi-Bellman Equation} 295 | 296 | We now take the limit of \eqref{eq:BIG} as $\D t\to 0$ to obtain the Hamilton-Jacobi-Bellman equation. 297 | 298 | \paragraph{Limit of First Term} We start with the first term in the brackets. Since numerator and denominator of the first term approach zero as $\D t\to 0$, we apply L'Hopital's rule. The derivative of the denominator with respect to $\D t$ is $1$. We apply Leibniz's rule to determine the derivative with respect to $\D t$ of the integral in the numerator. Leibniz's rule tells us that the derivative of the integral with respect to $\D t$ is 299 | \[e^{-\r \cdot \D t} \cdot u\bp{a(t+\D t),c(t+\D t)}.\] 300 | Therefore, the limit as $\D t\to 0$ of the first term in the brackets is 301 | \begin{equation} 302 | u\bp{ a(t),c(t)}.\label{eq:BIG1} 303 | \end{equation} 304 | 305 | \paragraph{Limit of Second Term} We move on to the second term. Since 306 | \[\lim_{\D t\to 0} e^{-\r \cdot \D t}=1\] 307 | and 308 | \[\lim_{\D t\to 0} V\bp{ a(t+\D t),t+\D t} =V\bp{a(t),t}, \] 309 | both numerator and denominator approach zero as $\D t\to 0$. Therefore we apply L'Hopital's rule. The derivative of the denominator with respect to $\D t$ is $1$. The derivative of the numerator with respect to $\D t$ is 310 | \begin{align*} 311 | &-\r \cdot e^{-\r\cdot \D t}\cdot V\bp{ a(t+\D t),t+\D t}+e^{-\r\cdot \D t}\cdot \pd{V}{a}\bp{ a(t+\D t),t+\D t} \cdot \dot{a}(t+\D t)\\ 312 | &+e^{-\r\cdot \D t}\cdot \pd{V}{t}\bp{a(t+\D t),t+\D t}.
313 | \end{align*} 314 | We have the following limits: 315 | \begin{align*} 316 | \lim_{\D t\to 0} \r \cdot e^{-\r\cdot \D t}\cdot V\bp{ a(t+\D t),t+\D t}&=\r \cdot V\bp{ a(t),t}\\ 317 | \lim_{\D t\to 0} e^{-\r\cdot \D t}\cdot \pd{V}{t}\bp{a(t+\D t),t+\D t} &= \pd{V}{t}\bp{a(t),t}\\ 318 | \lim_{\D t\to 0} e^{-\r\cdot \D t}\cdot \pd{V}{a}\bp{ a(t+\D t),t+\D t}\cdot\dot{a}(t+\D t)&=\pd{V}{a}\bp{ a(t),t} \cdot \dot{a}(t)=\pd{V}{a}\bp{ a(t),t} \cdot g(a(t),c(t)), 319 | \end{align*} 320 | where the last equality results from the law of motion~\eqref{eq:thelaw} of state variable $a(t)$. 321 | 322 | Therefore, the limit as $\D t\to 0$ of the second term in the brackets is 323 | \begin{equation} 324 | -\r \cdot V\bp{ a(t),t}+\pd{ V}{a}\bp{ a(t),t} \cdot g(a(t),c(t))+\pd{V}{t}\bp{a(t),t}.\label{eq:BIG2} 325 | \end{equation} 326 | 327 | \paragraph{Derivation of the Hamilton-Jacobi-Bellman Equation} Combining equations~\eqref{eq:BIG},~\eqref{eq:BIG1}, and~\eqref{eq:BIG2}, we obtain a version of the Bellman equation for the continuous-time optimization problem. This equation is called the \textit{Hamilton-Jacobi-Bellman equation}. The equation is 328 | \begin{equation} 329 | \r \cdot V\bp{ a(t),t} =\max_{c(t)}\bs{ u\bp{a(t),c(t)} +\pd{V}{a(t)}\bp{a(t),t}\cdot 330 | g(a(t),c(t)) +\pd{V}{t}\bp{a(t),t}}, \label{eq:HJB} 331 | \end{equation} 332 | where $a(t)$ is given. We define \[\l (t)\equiv e^{-\r \cdot t}\cdot \pd{V}{a(t)}\bp{a(t),t}.\] 333 | We can rewrite the Hamilton-Jacobi-Bellman equation as 334 | \begin{equation} 335 | \r \cdot V\bp{ a(t),t} =\max_{c(t)}\bs{ u\bp{a(t),c(t)} +e^{\r \cdot t}\cdot\l (t) \cdot 336 | g(a(t),c(t)) +\pd{V}{t}\bp{a(t),t}}. \label{eq:HJB2} 337 | \end{equation} 338 | 339 | \subsection{Derivation of the Optimality Conditions} 340 | 341 | Taking the first-order condition with respect to $c(t)$ in the Hamilton-Jacobi-Bellman equation~\eqref{eq:HJB2} implies 342 | \begin{equation*} 343 | \pd{u}{c(t)}\bp{a(t),c(t)}+e^{\r \cdot t}\cdot\l (t) \cdot \pd{g}{c(t)}\bp{a(t),c(t)}=0. 344 | \end{equation*} 345 | Furthermore, the envelope theorem implies 346 | \begin{align*} 347 | \r \cdot \pd{V}{a(t)}\bp{a(t),t}&=\pd{u}{a(t)}\bp{a(t),c(t)}+e^{\r \cdot t}\cdot\l (t)\cdot \pd{g}{a(t)}\bp{ a(t),c(t)}+\frac{\partial^{2}V}{\partial t\partial a(t)}\bp{a(t),t}. 348 | \end{align*} 349 | 350 | The last two equations are equivalent to the optimality conditions~\eqref{eq:hcontrol} and~\eqref{eq:hstate}. This is the case because, using the definition of the present-value Hamiltonian, equation~\eqref{eq:hcontrol} can be written 351 | \[\pd{\Hc(t)}{c(t)}\bp{a(t),c(t)} =0\] 352 | and equation~\eqref{eq:hstate} can be written 353 | \[\pd{u}{a(t)}\bp{a(t),c(t)} +e^{\r\cdot t}\cdot \l(t)\cdot \pd{g}{a(t)}\bp{a(t),c(t)}=-e^{\r\cdot t}\cdot \pd{\l(t)}{t},\] 354 | which implies 355 | \[\pd{\Hc(t)}{a(t)}\bp{a(t),c(t)} =-\pd{\l(t)}{t}.\] 356 | Note that we consider that the Hamiltonian is a function of $a(t)$, $t$, and $\l(t)$, and we only take the partial derivative with respect to $a(t)$, thus keeping $\l(t)$ constant. 357 | 358 | \section{Applications of the Hamilton-Jacobi-Bellman Equation} 359 | 360 | The Hamilton-Jacobi-Bellman equation~\eqref{eq:HJB} is an optimality condition that equates flow costs with flow benefits. In practice, we write it down without going through all the algebra relating to $\D t$. 361 | 362 | This equation is commonly used in macroeconomics. For instance, it is frequently used in search-and-matching models of the labor market.
In a search-and-matching model, a vacant job costs $c$ per unit time and becomes occupied according to a Poisson process with arrival rate $q.$ In the labor market, the occupied job yields net returns $p-w,$ where $p$ is real output and $w$ is the cost of labor. The job runs a risk $\l$ of being destroyed. 363 | 364 | Let $V$ be the value of the vacant job and $J$ be the value of occupied job. Let $r$ be the discount factor. In steady state, the Hamilton-Jacobi-Bellman equations are 365 | \begin{align*} 366 | r\cdot V &=-c+q \cdot \bp{J-V},\\ 367 | r\cdot J &=p-w+ \l \cdot \bp{0-J}=p-w-\l \cdot J. 368 | \end{align*} 369 | There is no maximization on the right-hand-side in this particular example. These equations simply describe the relationship between the equilibrium values $V$, $J$, and $w$. 370 | 371 | \end{document} -------------------------------------------------------------------------------- /lecturenotes/notes3.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pmichaillat/math-for-macro/6d703cadd8da24bb73edcf6f1b6d3a2971bb405d/lecturenotes/notes3.pdf -------------------------------------------------------------------------------- /lecturenotes/notes3.tex: -------------------------------------------------------------------------------- 1 | \documentclass[letterpaper,12pt,leqno]{article} 2 | \usepackage{paper,math,notes} 3 | \newcommand{\pdf}{phasediagrams.pdf} 4 | \available{https://pascalmichaillat.org/x/} 5 | \hypersetup{pdftitle={Differential Equations}} 6 | 7 | \begin{document} 8 | 9 | \title{Differential Equations} 10 | \author{Pascal Michaillat} 11 | \date{} 12 | 13 | \begin{titlepage} 14 | \maketitle 15 | \tableofcontents 16 | \end{titlepage} 17 | 18 | \section{Linear First-Order Differential Equations}\label{sec:one} 19 | 20 | This section covers linear first-order differential equations. These are equations that relate a function to its first derivative in a linear way. They are the simplest differential equations. 21 | 22 | \subsection{Constant Growth Rate} 23 | 24 | We consider a function $x(t)$ of time $t\in\R$. We denote the derivative of $x(t)$ with respect to time by \[\dot{x}(t)\equiv dx/dt.\] 25 | 26 | We consider the following differential equation: 27 | \begin{equation} 28 | \dot{x}(t) -\l\cdot x(t) = 0 29 | \label{eq:FODE1}\end{equation} 30 | where $\l\in\R$ is a constant. Equation~\eqref{eq:FODE1} is a first-order differential equation, because it involves $x(t)$ and the first-order derivative of $x(t)$ with respect to time: $\dot{x}(t)$. Equation~\eqref{eq:FODE1} is a functional equation: the unknown is the function $x(t)$ rather than a number or a vector. Solving equation~\eqref{eq:FODE1} means finding the functions $x(t)$ that, together with their derivative $\dot{x}(t)$, satisfy equation~\eqref{eq:FODE1} for all $t \in \R$. 31 | 32 | Equation~\eqref{eq:FODE1} is an especially simple differential equation. It can be rewritten as 33 | \[\frac{\dot{x}(t)}{x(t)}= \l,\] 34 | so it imposes that $x(t)$ has a constant growth rate $\l$ over time. It admits a simple class of functions as solution: 35 | \begin{equation} 36 | x(t) =A\cdot e^{\l \cdot t} \label{eq:FODE1sol}, 37 | \end{equation} 38 | for any constant $A\in \R$. Furthermore, the constant $A$ can be determined by an additional boundary condition because 39 | \[A=x(0)=x(t_{0})\cdot e^{-\l \cdot t_{0}}\] 40 | for any date $t_{0}\in \R$. 41 | 42 | It is clear that functions of the type~\eqref{eq:FODE1sol} satisfy equation~\eqref{eq:FODE1}. 
We now show that if a function $x(t)$ solves equation~\eqref{eq:FODE1}, it is necessarily of the type~\eqref{eq:FODE1sol}. 43 | Observe that 44 | \begin{equation*} 45 | \frac{\dot{x}(t)}{x(t)}=\od{\ln{x(t)}}{t}, 46 | \end{equation*} 47 | which allows us to rewrite differential equation~\eqref{eq:FODE1} as 48 | \begin{equation*} 49 | \frac{d\ln x(t)}{dt}=\l. 50 | \end{equation*} 51 | Let $t_{0}\in\R$. Integrating the equation from $t_{0}$ to $t$, $x(t)$ necessarily satisfies 52 | \begin{align*} 53 | \int_{t_{0}}^{t} d\ln x(t)&=\int_{t_{0}}^{t} \l\cdot dt\\ 54 | \ln{x(t)} -\ln{x(t_{0})}& =\l \bp{t-t_{0}}\\ 55 | x(t)&=x(t_{0})\cdot e^{\l \cdot(t-t_{0})}=\bs{x(t_{0})\cdot e^{-\l \cdot t_{0}}}\cdot e^{\l \cdot t}. 56 | \end{align*} 57 | Therefore, if $x(t)$ solves equation~\eqref{eq:FODE1}, it is necessarily of the type~\eqref{eq:FODE1sol}. 58 | 59 | \subsection{Constant Coefficient} 60 | 61 | We have solved the simplest differential equation, which takes the form of equation~\eqref{eq:FODE1}. We now study a more general differential equation: 62 | \begin{equation} 63 | \dot{x}(t)-\l \cdot x(t) =f(t),\label{eq:FODE2} 64 | \end{equation} 65 | where $f(t)\in \R$. Equation~\eqref{eq:FODE1} is the special case of equation~\eqref{eq:FODE2} with $f(t)=0$ for all $t$. 66 | 67 | Equation~\eqref{eq:FODE2} admits a simple class of functions as solution: 68 | \begin{equation} 69 | x(t)=e^{\l \cdot t} \cdot \bs{A+\int_{0}^{t}f(z) \cdot e^{-\l \cdot z} dz} \label{eq:FODE2sol} 70 | \end{equation} 71 | for any constant $A\in \R$. Furthermore, the constant $A$ can be determined by an additional boundary condition because 72 | \[A=x(0)=x(t_{0})\cdot e^{-\l \cdot t_{0}}-\int_{0}^{t_{0}}f(z) \cdot e^{-\l \cdot z} dz\] 73 | for any date $t_{0}\in \R$. 74 | 75 | It is clear that functions of the type~\eqref{eq:FODE2sol} satisfy equation~\eqref{eq:FODE2}. We now show that if a function $x(t)$ solves equation~\eqref{eq:FODE2}, it is necessarily of the type~\eqref{eq:FODE2sol}. 76 | 77 | To be able to solve the differential equation, we manipulate a certain function $x(t)\cdot \mu(t)$ instead of manipulating $x(t)$ directly. The auxiliary function $\mu(t) $ is called the \textit{integrating factor}. The integrating factor for this problem is 78 | \[\mu(t) =e^{-\l \cdot t}.\] 79 | This integrating factor $\mu(t)$ has the desirable property that $\dot{\mu}(t) =-\l \cdot \mu(t)$. 80 | 81 | We multiply both sides of the differential equation~\eqref{eq:FODE2} by the integrating factor: 82 | \begin{align*} 83 | \dot{x}(t) \cdot \mu(t) -\l \cdot x(t)\cdot \mu(t) & =f(t) \cdot \mu(t)\\ 84 | \dot{x}(t) \cdot\mu(t) +x(t) \cdot\dot{\mu}(t) &=f(t)\cdot \mu(t)\\ 85 | \od{\bs{x(t) \cdot \mu(t)}}{t}& =f(t) \cdot\mu(t). 86 | \end{align*} 87 | Integrating the equation from $t_{0}\in \R$ to $t$ we obtain 88 | \begin{align} 89 | \int_{t_{0}}^{t} d\bs{x(t) \cdot \mu(t)}&=\int_{t_{0}}^{t}f(z) \cdot \mu(z) dz\nonumber\\ 90 | x(t)\cdot \mu(t)-x(t_{0})\cdot \mu\bp{t_{0}}&=\int_{t_{0}}^{t}f(z)\cdot \mu(z) dz\nonumber\\ 91 | x(t)& =\frac{x(t_{0}) \cdot \mu\bp{t_{0}}+\int_{t_{0}}^{t}f(z) \cdot \mu(z) dz}{\mu(t)}.\label{eq:INTER} 92 | \end{align} 93 | Given the definition of the integrating factor $\mu(t)$, 94 | \begin{align*} 95 | x(t)& =e^{\l \cdot t} \cdot \bs{x(t_{0}) \cdot e^{-\l\cdot t_{0}}+\int_{t_{0}}^{t}f(z) \cdot e^{-\l \cdot z} dz}. 96 | \end{align*} 97 | Therefore, there exists $A\in \R$ such that 98 | \begin{align*} 99 | x(t)& =e^{\l \cdot t} \cdot \bs{A+\int_{0}^{t}f(z) \cdot e^{-\l \cdot z} dz}. 
100 | \end{align*} 101 | 102 | \subsection{General Case} 103 | 104 | We now generalize~\eqref{eq:FODE2} to allow the coefficient $\l$ to vary with time $t$. We solve 105 | \begin{equation} 106 | \dot{x}(t) -\l(t) \cdot x(t) =f(t),\label{eq:FODE3} 107 | \end{equation} 108 | with $\l(t)\in \R$ and $f(t)\in \R$. 109 | 110 | Equation~\eqref{eq:FODE3} admits the following class of functions as solution: 111 | \begin{equation} 112 | x(t)=\exp{\int_{0}^{t}\l(s) ds} \cdot \bs{A+\int_{0}^{t}f(z) \cdot \exp{-\int_{0}^{z}\l(s)ds} dz} \label{eq:FODE3sol} 113 | \end{equation} 114 | for any constant $A\in \R$. Furthermore, the constant $A$ can be determined by an additional boundary condition because 115 | \[A=x(0)=x(t_{0})\cdot \exp{-\int_{0}^{t_{0}}\l(s)ds}-\int_{0}^{t_{0}}f(z) \cdot \exp{-\int_{0}^{z}\l(s)ds} dz\] 116 | for any date $t_{0}\in \R$. 117 | 118 | Some algebra shows that functions of the type~\eqref{eq:FODE3sol} satisfy equation~\eqref{eq:FODE3}. We now show that if a function $x(t)$ solves equation~\eqref{eq:FODE3}, it is necessarily of the type~\eqref{eq:FODE3sol}. As above, we introduce an integrating factor. The integrating factor for this problem is 119 | \begin{equation*} 120 | \mu(t) =\exp \bp{-\int_{0}^{t}\l(s) ds}. 121 | \end{equation*} 122 | This integrating factor $\mu(t)$ has the desirable property that 123 | \[\dot{\mu}(t) =-\l(t) \cdot \mu(t).\] 124 | 125 | We multiply both sides of equation~\eqref{eq:FODE3} by the integrating factor: 126 | \begin{align*} 127 | \dot{x}(t) \cdot \mu(t) -\l(t) \cdot \mu(t) \cdot x(t)& =f(t) \cdot \mu(t)\\ 128 | \dot{x}(t) \cdot\mu(t) +x(t) \cdot \dot{\mu}(t) &=f(t)\cdot \mu(t)\\ 129 | \od{\bs{x(t) \cdot \mu(t)}}{t}& =f(t) \cdot\mu(t). 130 | \end{align*} 131 | Integrating the equation from $t_{0}\in \R$ to $t$ we obtain as earlier equation~\eqref{eq:INTER}. Therefore the solution to equation~\eqref{eq:FODE3} is necessarily of the type~\eqref{eq:FODE3sol}. 132 | 133 | \subsection{Initial-Value Problem} 134 | 135 | Often, an initial condition for $x(t) $ is given: 136 | \begin{equation} 137 | x(t_{0}) =x_{0}. \label{eq:ic} 138 | \end{equation} 139 | Equation~\eqref{eq:FODE3} together with equation~\eqref{eq:ic} form an initial-value problem. The constant $A$ in~\eqref{eq:FODE3sol} must satisfy 140 | \[A=x_{0}\cdot \exp{-\int_{0}^{t_{0}}\l(s)ds}-\int_{0}^{t_{0}}f(z) \cdot \exp{-\int_{0}^{z}\l(s)ds} dz.\] 141 | Hence the solution to the initial-value problem is 142 | \begin{equation} 143 | x(t) =x_{0}\cdot \exp{\int_{t_{0}}^{t}\l(s)ds}+\int_{t_{0}}^{t}f(z)\cdot \exp{\int_{z}^{t}\l(s)ds}dz.\label{eq:icsol} 144 | \end{equation} 145 | 146 | \section{Linear Systems of First-Order Differential Equations}\label{sec:two} 147 | 148 | We often encounter dynamical systems with several variables that move together over time. For example the solution to the consumption-saving problem with CRRA utility is characterized by two first-order linear differential equations: 149 | \begin{align*} 150 | \dot{a}(t) &=r\cdot a(t)-c(t), \\ 151 | \dot{c}(t) &=\frac{r-\rho}{\g}\cdot c(t). 152 | \end{align*} 153 | The first differential equation is the asset accumulation equation. The second differential equation is the Euler equation that characterizes optimal consumption over time. To find the optimal consumption path, we need to solve the two differential equations simultaneously. This section presents a method to solve such linear systems of differential equations. 
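For example, stacking wealth and consumption into a vector, the consumption-saving system above can be written in matrix form as
\[\bs{\begin{array}{l} \dot{a}(t)\\ \dot{c}(t) \end{array}}=\bs{\begin{array}{ll} r & -1\\ 0 & \frac{r-\rho}{\g} \end{array}}\bs{\begin{array}{l} a(t)\\ c(t) \end{array}},\]
which is a special case of the general linear system studied in this section.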
154 | 155 | \subsection{Linear System} 156 | 157 | We consider a linear system of $n$ first-order differential equations with constant coefficients: 158 | \begin{align*} 159 | \dot{x}_{1}(t) &=A_{11}\cdot x_{1}(t)+A_{12}\cdot x_{2}(t)+\ldots+A_{1n}\cdot x_{n}(t) +f_{1}(t) \\ 160 | \dot{x}_{2}(t) &=A_{21}\cdot x_{1}(t)+A_{22}\cdot x_{2}(t)+\ldots+A_{2n}\cdot x_{n}(t) +f_{2}(t) \\ 161 | &. \\ 162 | &. \\ 163 | &. \\ 164 | \dot{x}_{n}(t) &=A_{n1}\cdot x_{1}(t)+A_{n2}\cdot x_{2}(t)+\ldots+A_{nn}\cdot x_{n}(t) +f_{n}(t). 165 | \end{align*} 166 | Our goal is to solve for the $n$ functions $x_{1}(t)$, $x_{2}(t),\ldots, x_{n}(t)$. 167 | 168 | An alternative way of expressing the system is to write it in matrix form: 169 | \begin{equation} 170 | \bm{\dot{x}}(t) =\bm{A} \bm{x}(t) +\bm{f}(t), \label{eq:FODEsys} 171 | \end{equation} 172 | where $\bm{\dot{x}}(t) \in \R^{n}$, $\bm{x}(t) \in \R^{n}$, and $\bm{f}(t) \in \R^{n}$ are column vectors with $n$ elements. $\bm{A}\in \R^{n\times n}$ is a constant $n\times n$ matrix. The system of first-order differential equations is linear because it can be written in matrix form: it involves a linear relationship between the vector $\bm{\dot{x}}(t)$ and the vector $\bm{x}(t)$. 173 | 174 | If $\bm{A}$ is diagonal ($A_{ij}=0$ for all $i\neq j$), the system would reduce to a collection of $n$ first-order differential equations---one first-order differential equation for each $x_{i}(t)$---that can be solved independently using the techniques from Section~\ref{sec:one}. If $\bm{A}$ is not diagonal, the different entries in $\bm{x}(t)$ interact and we must solve the system of first-order differential equations simultaneously. 175 | 176 | \subsection{General Solution} 177 | 178 | Assume that $\bm{A}$ is diagonalizable. There exists $\bm{V}\in \R^{n\times n}$ such that 179 | \begin{equation} 180 | \bm{A}=\bm{V}\bm{\Lambda}\bm{V}^{-1},\label{eq:DECO} 181 | \end{equation} 182 | where $\bm{\Lambda}\in \R^{n\times n}$ is a diagonal matrix. The diagonal entries of $\bm{\Lambda}$ are the $n$ eigenvalues $\l_{1},\ldots,\l_{n}$ of $\bm{A},$ and $\bm{V}$ is the matrix whose 183 | columns are the eigenvectors $\bm{z}_{1},\ldots,\bm{z}_{n}$ of $\bm{A}$. 184 | 185 | By definition, $\l_{1},\ldots,\l_{n}$ are the $n$ roots of the polynomial equation 186 | \begin{equation*} 187 | \det(\bm{A}-\l \bm{I}) =0. 188 | \end{equation*} 189 | For any $i=1,\dots,n$, the eigenvector $\bm{z}_{i}$ associated with the eigenvalue $\l_{i}$ satisfies 190 | \begin{equation*} 191 | \bp{\bm{A}-\l_{i} \bm{I}} \bm{z}_{i} =\bm{0}. 192 | \end{equation*} 193 | 194 | Using the decomposition~\eqref{eq:DECO}, we rewrite the system \eqref{eq:FODEsys} as 195 | \begin{align} 196 | \bm{V}^{-1}\bm{\dot{x}}(t) &=\bm{\Lambda} \bm{V}^{-1}\bm{x}(t) +\bm{V}^{-1}\bm{f}(t)\nonumber\\ 197 | \bm{\dot{y}}(t) &=\bm{\Lambda} \bm{y}(t) +\bm{g}(t) , \label{eq:FODEsyst} 198 | \end{align} 199 | where we define 200 | \begin{align*} 201 | \bm{y}(t) &\equiv \bm{V}^{-1}\bm{x}(t)\\ 202 | \bm{g}(t) &\equiv \bm{V}^{-1}\bm{f}(t). 203 | \end{align*} 204 | Since the matrix $\bm{\Lambda}$ is diagonal, the system is reduced to a collection of $n$ independent first-order differential equations---one for each $y_{i}(t)$. Once we have solved for $\bm{y}(t)$, we can recover $\bm{x}(t)$ by 205 | \begin{equation*} 206 | \bm{x}(t) =\bm{V} \bm{y}(t) . 207 | \end{equation*} 208 | The nature of the eigenvalues and corresponding eigenvectors determines the dynamics of the solution. 
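For example, the matrix of the consumption-saving system written at the beginning of this section is upper triangular, so its eigenvalues can be read off its diagonal: $\l_{1}=r$ and $\l_{2}=(r-\rho)/\g$. The transformed variables $y_{1}(t)$ and $y_{2}(t)$ then grow at the constant rates $r$ and $(r-\rho)/\g$; in particular, consumption grows at the constant rate $(r-\rho)/\g$, as required by the Euler equation.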
209 | 210 | \subsection{Homogeneous Systems} 211 | 212 | If $\bm{f}(t) =\bm{0}$, the system \eqref{eq:FODEsys} is homogeneous; otherwise it is nonhomogeneous. 213 | 214 | For homogeneous systems, 215 | \begin{equation} 216 | \bm{\dot{x}}(t) =\bm{A} \bm{x}(t).\label{eq:FODEsysh} 217 | \end{equation} 218 | So the transformed system~\eqref{eq:FODEsyst} becomes 219 | \begin{equation*} 220 | \bm{\dot{y}}(t) =\bm{\Lambda}\bm{y}(t) , 221 | \end{equation*} 222 | which leads to $n$ independent differential equations: 223 | \begin{equation*} 224 | \dot{y}_{i}(t) -\l_{i} \cdot y_{i}(t)=0 225 | \end{equation*} 226 | for $i=1,\ldots,n$. In other words, each $y_{i}(t) $ is growing at constant rate $\l_{i}$. The analysis of Section~\ref{sec:one} shows that the solution to the $i^{th}$ differential equation is 227 | \begin{equation*} 228 | y_{i}(t) =A_{i}\cdot e^{\l_{i}\cdot t} 229 | \end{equation*} 230 | where $A_{i}\in \R$ is a constant. Finally, $x_{1}(t),\ldots,x_{n}(t)$ are given by 231 | \begin{equation*} 232 | \bm{x}(t) =\bm{V} \bm{y}(t) . 233 | \end{equation*} 234 | The columns of $\bm{V}$ are the eigenvectors $\bm{z}_{1},\ldots,\bm{z}_{n}$ corresponding to the 235 | eigenvalues $\l_{1},...\l_{n}$. Hence the solution of the homogeneous system~\eqref{eq:FODEsysh} is 236 | \begin{equation} 237 | \bm{x}(t) =A_{1}\cdot \bm{z}_{1} \cdot e^{\l_{1}\cdot t}+\ldots+A_{n}\cdot \bm{z}_{n}\cdot e^{\l_{n}\cdot t}.\label{eq:SOLEV} 238 | \end{equation} 239 | The nature of the eigenvalues and the corresponding eigenvectors determines 240 | the dynamics of the solution. 241 | 242 | \subsection{Closed-Form Solution to a Two-Variable Homogeneous System} 243 | 244 | As an example, we consider a two-variable homogeneous system: 245 | \begin{align*} 246 | \dot{x}_{1}(t) &=a\cdot x_{1}(t)+b\cdot x_{2}(t) \\ 247 | \dot{x}_{2}(t) &=c\cdot x_{1}(t)+d\cdot x_{2}(t). 248 | \end{align*} 249 | We can write it in matrix form 250 | \begin{equation*} 251 | \bm{\dot{x}}(t) =\bm{A} \bm{x}(t) 252 | \end{equation*} 253 | where the matrix $\bm{A}$ is 254 | \begin{equation*} 255 | \bm{A}=\bs{ 256 | \begin{array}{ll} 257 | a & b \\ 258 | c & d 259 | \end{array}}. 260 | \end{equation*} 261 | Assume $\det(\bm{A}) =a\cdot d-b\cdot c\neq 0. $ 262 | 263 | Equation~\eqref{eq:SOLEV} implies that to determine a closed-form solution of this homogeneous system, we need to find the eigenvalues and eigenvectors of the matrix $\bm{A}$. 264 | 265 | The eigenvalues are solutions to 266 | \begin{align*} 267 | \det( \bp{\bm{A}-\l \bm{I}} &=0\\ 268 | \bp{a-\l} \cdot \bp{d-\l} -b\cdot c &=0 \\ 269 | \l ^{2}-\bp{a+d}\cdot \l +\bp{a\cdot d-b\cdot c} &=0. 270 | \end{align*} 271 | Note that the product of the two eigenvalues is equal to the determinant of $ 272 | \bm{A}$: 273 | \begin{equation} 274 | \l_{1}\cdot \l_{2}=a\cdot d-b\cdot c=\det(\bm{A}).\label{eq:DETL} 275 | \end{equation} 276 | 277 | Let $\bs{ 278 | \begin{array}{l} 279 | \a_{1} \\ 280 | \b_{1} 281 | \end{array} 282 | } $ be the eigenvector correspond to $\l_{1}$ and $\bs{ 283 | \begin{array}{l} 284 | \a_{2} \\ 285 | \b_{2} 286 | \end{array}} $ be the eigenvector correspond to $\l_{2}$. These vectors are solutions to 287 | \begin{equation*} 288 | \bp{\bm{A}-\l_{i}\bm{I}} \bs{ 289 | \begin{array}{l} 290 | \a_{i} \\ 291 | \b_{i} 292 | \end{array}} =0 293 | \end{equation*} 294 | which yields the system 295 | \begin{align*} 296 | \bp{a-\l_{i}}\cdot \a_{i}+b\cdot \b _{i} &=0 \\ 297 | c\cdot \a_{i}+\bp{d-\l_{i}}\cdot \b _{i} &=0. 
298 | \end{align*} 299 | 300 | Consider the case where the eigenvalues are real and distinct. In that case, the general 301 | solution~\eqref{eq:SOLEV} implies 302 | \begin{align*} 303 | x_{1}(t) &=A_{1}\cdot \a_{1}\cdot e^{\l_{1}\cdot t}+A_{2}\cdot \a_{2}\cdot e^{\l_{2}\cdot t}\\ 304 | x_{2}(t) &=A_{1}\cdot \b_{1}\cdot e^{\l_{1}\cdot t}+A_{2}\cdot \b_{2}\cdot e^{\l_{2}\cdot t}, 305 | \end{align*} 306 | where $A_{1}$ and $A_{2}$ are arbitrary constants. 307 | 308 | Note that in the case in which $\l_{1}=\l_{2}=\l$, the system $\bs{x_{1}(t),x_{2}(t)}$ above is still the general solution of the system of differential equations as long as the two eigenvectors $\bs{\a_{1},\b_{1}}$ and $\bs{\a_{2},\b_{2}}$ are linearly independent. 309 | 310 | Also note that any nonhomogeneous system with constant terms $\bs{\k_{1},\k_{2}}$: 311 | \begin{align*} 312 | \dot{x}_{1}(t) &=a\cdot x_{1}(t)+b\cdot x_{2}(t)+\k_{1} \\ 313 | \dot{x}_{2}(t) &=c\cdot x_{1}(t)+d\cdot x_{2}(t)+\k_{2}, 314 | \end{align*} 315 | can be transformed into a homogeneous system. 316 | 317 | \subsection{Stability of a Two-Variable Homogeneous System} 318 | 319 | Now that we have found a closed-form solution to the system, we can analyze its stability. There are three cases. 320 | 321 | \paragraph{Sink: $\l_{1}<0$ and $\l_{2}<0$} As shown by~\eqref{eq:DETL}, since $\l_{1}$ and $\l_{2}$ have the same sign, $\det(\bm{A}) >0$. As $t\to +\infty$, $x_{1}(t)\to 0$ and $x_{2}(t)\to 0$. The system is a \textit{sink}. 322 | 323 | \paragraph{Source: $\l_{1}>0$ and $\l_{2}>0$} As shown by~\eqref{eq:DETL}, since $\l_{1}$ and $\l_{2}$ have the same sign, $\det(\bm{A}) >0$. As $t\to +\infty$, $|x_{1}(t)|\to +\infty$ and $|x_{2}(t)|\to +\infty$. The system is a \textit{source}. 324 | 325 | \paragraph{Saddle: $\l_{1}$ and $\l_{2}$ have opposite sign} As shown by~\eqref{eq:DETL}, since $\l_{1}$ and $\l_{2}$ have opposite sign, $\det(\bm{A}) <0$. One part of the solution is stable (it converges to $0$ as $t\to +\infty$); the other is unstable (it diverges as $t\to +\infty$). The system is a \textit{saddle}. 326 | 327 | 328 | \section{Phase Diagrams}\label{sec:three} 329 | 330 | Without solving for eigenvalues and eigenvectors explicitly, we can study the properties of a linear system of first-order differential equations by drawing its phase diagram. 331 | 332 | \subsection{Nonhomogeneous Linear System} 333 | 334 | Here we construct the phase diagram for the following nonhomogeneous linear system of two first-order differential equations: 335 | \begin{align} 336 | \dot{x}(t) &=a\cdot x(t)+b\cdot y(t)+\k_{1}\label{eq:nonh1}\\ 337 | \dot{y}(t) &=c\cdot x(t)+d\cdot y(t)+\k_{2}\label{eq:nonh2}, 338 | \end{align} 339 | with $a<0$, $b<0$, $c<0$, $d>0$, $\k_{1}>0$, and $\k_{2}>0$. Since $a\cdot d-b\cdot c<0$, the eigenvalues of the system are of opposite sign. Hence the dynamical system is a saddle. 340 | 341 | Drawing the phase diagram of a two-variable system is useful to understand the main features of the dynamical system without solving for $x(t) $ and $y(t)$ explicitly. The phase diagram is represented in figure~\ref{f:phase}.
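Before drawing the diagram, note that the steady state of the system can be computed explicitly: setting $\dot{x}=\dot{y}=0$ in~\eqref{eq:nonh1} and~\eqref{eq:nonh2} and solving the two resulting linear equations gives
\[x^{*}=\frac{b\cdot\k_{2}-d\cdot\k_{1}}{a\cdot d-b\cdot c},\qquad y^{*}=\frac{c\cdot\k_{1}-a\cdot\k_{2}}{a\cdot d-b\cdot c}.\]
The phase diagram recovers this steady state graphically, as the intersection of the two nullclines.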
342 | 343 | \begin{figure}[p] 344 | \subcaptionbox{Nullclines \label{f:phase1}}{\includegraphics[scale=\sfig,page=1]{\pdf}}\hfill 345 | \subcaptionbox{Steady state \label{f:phase2}}{\includegraphics[scale=\sfig,page=2]{\pdf}}\vfig 346 | \subcaptionbox{Directional arrows \label{f:phase3}}{\includegraphics[scale=\sfig,page=3]{\pdf}}\hfill 347 | \subcaptionbox{Trajectories \label{f:phase4}}{\includegraphics[scale=\sfig,page=4]{\pdf}} 348 | \caption{Phase diagram for the dynamical system \eqref{eq:nonh1}--\eqref{eq:nonh2}} 349 | \label{f:phase}\end{figure} 350 | 351 | \subsection{Nullclines} 352 | 353 | We first plot the nullclines, which are the loci $\dot{x}=0$ and $\dot{y}=0$ (figure~\ref{f:phase1}). 354 | 355 | The locus for $\dot{x}=0$ is given by 356 | \begin{align*} 357 | y=-\frac{a}{b}\cdot x-\frac{\k_{1}}{b}. 358 | \end{align*} 359 | The locus is a straight line with a negative slope in the $(x,y)$ plan. 360 | 361 | The locus for $\dot{y}=0$ is given by 362 | \begin{align*} 363 | y=-\frac{c}{d}\cdot x-\frac{\k_{2}}{d}. 364 | \end{align*} 365 | The locus is a straight line with positive slope in the $(x,y)$ plan. 366 | 367 | \subsection{Steady state} 368 | 369 | Next we place the system's steady state(figure~\ref{f:phase2}). The steady state is given by the intersection of the two nullclines. Denote the intersection of the two nullclines as $\bp{x^{*},y^{*}} $. These two nullclines divide the $(x,y)$ plane into four areas.\footnote{In some other fields the steady state of the system is called \textit{critical point} of the system.} 370 | 371 | \subsection{Directional arrows} 372 | 373 | Then we place on the diagram the directional arrows. These arrows determine the direction of the system's trajectories over time anywhere on the phase diagram (figure~\ref{f:phase3}). 374 | 375 | From~\eqref{eq:nonh1}, we see that $\dot{x}$ is decreasing in $y$ because $b<0$. Thus any point above the $\dot{x}=0$ line must have $\dot{x}<0$ and any point below the $\dot{x}=0$ line must have $\dot{x}>0$. We represent these properties by an horizontal arrow pointing west for any point above the $\dot{x}=0$ line and an horizontal arrow pointing east for any point below the $\dot{x}=0$ line. 376 | 377 | Similarly, from \eqref{eq:nonh2}, $\dot{y}$ is increasing in $y$ because $d>0$. Thus any point above the $\dot{y}=0$ line must have $\dot{y}>0$ and any point below the $\dot{y}=0$ line must have $\dot{y}<0$. We represent these properties by a vertical arrow pointing north for any point above the $\dot{y}=0$ line and a vertical arrow pointing south for any point below the $\dot{y}=0$ line. 378 | 379 | 380 | \subsection{Trajectories} 381 | 382 | Using the directional arrows, we can draw trajectories that satisfy the system of differential equations (figure~\ref{f:phase4}). These are solutions to the system. To select a specific solution among all possible solutions, we will need to specify either an initial condition or a final condition. 383 | 384 | Among all the trajectories, we highlight the saddle path for the system. We know that such a saddle path exist because the eigenvalues of the system have opposite sign. The saddle path is the straight line that goes through the steady state.\footnote{The saddle path is also sometimes called a \textit{stable line} of the system. 
There is also an unstable line, which goes through the steady state but moves away from it.} 385 | 386 | 387 | \begin{figure}[p] 388 | \subcaptionbox{Initial phase diagram \label{f:news1}}{\includegraphics[scale=\sfig,page=5]{\pdf}}\hfill 389 | \subcaptionbox{New phase diagram \label{f:news2}}{\includegraphics[scale=\sfig,page=6]{\pdf}}\vfig 390 | \subcaptionbox{Jump upon news \label{f:news3}}{\includegraphics[scale=\sfig,page=7]{\pdf}}\hfill 391 | \subcaptionbox{Movement after the jump \label{f:news4}}{\includegraphics[scale=\sfig,page=8]{\pdf}} 392 | \caption{Response to a shock in a phase diagram with a state variable} 393 | \label{f:news}\end{figure} 394 | 395 | 396 | \subsection{Using Phase Diagram with State Variable and Control Variable} 397 | 398 | Suppose $x$ is a state variable: information revealed at $t$ does not influence its value at $t$. Suppose $y$ is a control variable: information revealed at $t$ may influence its value at $t$. Suppose that we are in the steady state $\bp{x^{*},y^{*}}$ of the previous phase diagram. 399 | 400 | Now assume that there is an exogenous, unanticipated increase in $\k_{2}$. This increase is revelation of news because it is an unanticipated change to one of the parameters or variables of the system. The response to the news in the phase diagram is represented in figure~\ref{f:news}. 401 | 402 | As $\k_{2}$ increases, the $\dot{y}=0$ locus shifts down, so the new steady state $\bp{x^{* *},y^{* *}} $ is to the south-east of the previous steady state: $x^{* *}>x^{*}$ and $y^{* *}0$,and $\d \in \bp{0,1}$ are parameters, the capital stock $k(t)$ is a state variable 418 | with $k_{0}$ given, and the production function $f$ satisfies the Inada conditions: 419 | \begin{align*} 420 | f\bp{0}=0,\; f'>0,\;f''<0,\;\lim_{k\to +\infty}f'(k)=0,\;\lim_{k\to 0}f'(k) =+\infty. 421 | \end{align*} 422 | 423 | We study the properties of this system by drawing its phase diagram in a plane with the state variable $k$ on the x-axis and the control variable $c$ on the y-axis (figure~\ref{f:growth}). 424 | 425 | \begin{figure}[p] 426 | \subcaptionbox{Nullclines \label{f:growth1}}{\includegraphics[scale=\sfig,page=9]{\pdf}}\hfill 427 | \subcaptionbox{Steady state \label{f:growth2}}{\includegraphics[scale=\sfig,page=10]{\pdf}}\vfig 428 | \subcaptionbox{Directional arrows \label{f:growth3}}{\includegraphics[scale=\sfig,page=11]{\pdf}}\hfill 429 | \subcaptionbox{Trajectories\label{f:growth4}}{\includegraphics[scale=\sfig,page=12]{\pdf}} 430 | \caption{Phase diagram of a simple growth model} 431 | \label{f:growth}\end{figure} 432 | 433 | \subsection{Nullclines} 434 | 435 | We first draw the nullclines (figure~\ref{f:growth1}). We draw the $\dot{k}=0$ curve defined by 436 | \begin{align*} 437 | c=f\bp{k} -\d\cdot k, 438 | \end{align*} 439 | and the $\dot{c}=0$ curve defined by 440 | \begin{align*} 441 | f^{\prime}\bp{k} =\d +\rho. 442 | \end{align*} 443 | In the $(k,c)$ plane, the $\dot{k}=0$ curve is concave and the $\dot{c}=0$ curve is a vertical line. 444 | 445 | \subsection{Steady State} 446 | 447 | The intersection of these two loci is the steady state $(k^{*},c^{*})$ of the system (figure~\ref{f:growth2}). 448 | 449 | \subsection{Directional Arrows} 450 | 451 | Next we construct the directional arrows (figure~\ref{f:growth3}). To do that, we partially differentiate equations~\eqref{eq:growth1} and~\eqref{eq:growth2}: 452 | \begin{align*} 453 | \pd{\dot{k}}{c} &=-1<0 \\ 454 | \pd{\dot{c}}{k} &=c\cdot f''(k)<0. 
455 | \end{align*} 456 | Therefore, as $c$ increases, $\dot{k}$ decreases. So, the horizontal arrows point eastward below the $\dot{k}=0$ curve and westward above it. Similarly, as $k$ increases, $\dot{c}$ decreases. So the vertical arrows point northward to the left of the $\dot{c}=0$ curve and southward to the right of it. 457 | 458 | \subsection{Trajectories} 459 | 460 | The directional arrows describe a saddle pattern around the steady state (figure~\ref{f:growth4}). The only way for the economy to converge to the steady state is to travel along the saddle path leading to it. This means that given any initial capital $k_{0}$, initial consumption $c_{0}$ is such that the pair $\bp{k_{0},c_{0}} $ lies on the saddle path. 461 | 462 | \subsection{Linearization} 463 | 464 | The phase diagram indicates that the system is a saddle around the steady state (figure~\ref{f:growth}). We can also obtain this result by linearizing the nonlinear system~\eqref{eq:growth1}--\eqref{eq:growth2} using a first-order Taylor expansion around the steady state: 465 | \begin{align*} 466 | \dot{k} &=\dot{k}^{*} +\bp{k-k^{*}} \cdot \pd{\dot{k}}{k}+\bp{c-c^{*}}\cdot \pd{\dot{k}}{c} \\ 467 | \dot{c} &=\dot{c}^{*} +\bp{k-k^{*}}\cdot \pd{\dot{c}}{k}+\bp{c-c^{*}}\cdot \pd{\dot{c}}{c}. 468 | \end{align*} 469 | Given that $\dot{k}^{*} =\dot{c}^{*} =0,$ we have 470 | \begin{equation*} 471 | \bs{\begin{array}{l} 472 | \dot{k}\\ 473 | \dot{c} 474 | \end{array}} =\bm{J}^{*}\bs{ 475 | \begin{array}{l} 476 | k-k^{*} \\ 477 | c-c^{*} 478 | \end{array}}, 479 | \end{equation*} 480 | where $\bm{J}^{*}$ is the Jacobian matrix evaluated at the steady state: 481 | \begin{equation*} 482 | \bm{J}^{*}=\bs{ 483 | \begin{array}{ll} 484 | \pdw{\dot{k}}{k}{(k^{*},c^{*})} & \pdw{\dot{k}}{c}{(k^{*},c^{*})} \\ 485 | \pdw{\dot{c}}{k}{(k^{*},c^{*})} & \pdw{\dot{c}}{c}{(k^{*},c^{*})} 486 | \end{array}}. 487 | \end{equation*} 488 | This system is a two-variable nonhomogeneous system of first-order differential equations 489 | for \[\bm{x}=\bs{\begin{array}{l} 490 | k\\ 491 | c 492 | \end{array}}.\] 493 | 494 | But it is a two-variable homogeneous system for the transformed variable $\bm{y}$, where 495 | \begin{equation*} 496 | \bm{y}=\bm{x}-\bm{x}^{*}=\bs{ 497 | \begin{array}{l} 498 | k-k^{*}\\ 499 | c-c^{*} 500 | \end{array}}. 501 | \end{equation*} 502 | The constant matrix $A$ of Section~\ref{sec:two} is $\bm{J}^{*}$. The analysis of Section~\ref{sec:two} shows that the properties of the steady state depend on the eigenvalues of $\bm{J}^{*}$. The four partial derivatives are 503 | \begin{align*} 504 | \pdw{\dot{k}}{k}{(k^{*},c^{*})}&=f'(k^{*}) -\d =\rho >0 \\ 505 | \pdw{\dot{k}}{c}{(k^{*},c^{*})}&=-1<0 \\ 506 | \pdw{\dot{c}}{k}{(k^{*},c^{*})}&=c^{*}\cdot f''\bp{k^{*}} <0 \\ 507 | \pdw{\dot{c}}{c}{(k^{*},c^{*})}&=f'(k^{*}) -\bp{\d +\rho} =0. 508 | \end{align*} 509 | It follows that the Jacobian matrix can be written as 510 | \begin{equation*} 511 | \bm{J}^{*}=\bs{ 512 | \begin{array}{ll} 513 | \rho & -1 \\ 514 | c^{*}\cdot f''(k^{*}) & 0 515 | \end{array}}. 516 | \end{equation*} 517 | As shown by~\eqref{eq:DETL}, the product of the two eigenvalues is the determinant of $\bm{J}^{*}$: \[\det(\bm{J}^{*})=c^{*}\cdot f''(k^{*}) <0.\] Therefore, the two eigenvalues have opposite signs. This property confirms that around the steady state the system is a saddle.
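To make the saddle property concrete, the two eigenvalues of $\bm{J}^{*}$ can also be computed explicitly; the computation below uses only the trace $\rho$ and the determinant $c^{*}\cdot f''\bp{k^{*}}$ of $\bm{J}^{*}$ derived above, and $\lambda_{1}$ and $\lambda_{2}$ simply denote the two eigenvalues. The eigenvalues solve the characteristic equation
\begin{align*}
\lambda^{2}-\rho\cdot \lambda +c^{*}\cdot f''\bp{k^{*}} =0,
\end{align*}
so
\begin{align*}
\lambda_{1}=\frac{\rho -\sqrt{\rho^{2}-4\cdot c^{*}\cdot f''\bp{k^{*}}}}{2},\qquad \lambda_{2}=\frac{\rho +\sqrt{\rho^{2}-4\cdot c^{*}\cdot f''\bp{k^{*}}}}{2}.
\end{align*}
Because $c^{*}\cdot f''\bp{k^{*}} <0$, the term under the square root exceeds $\rho^{2}$, so $\lambda_{1}<0<\lambda_{2}$. The saddle path is the trajectory associated with the stable eigenvalue $\lambda_{1}$.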
518 | 519 | \end{document} -------------------------------------------------------------------------------- /lecturenotes/paper.sty: -------------------------------------------------------------------------------- 1 | % --- Fonts --- 2 | 3 | \usepackage{sourceserifpro} 4 | \usepackage[T1]{fontenc} 5 | \usepackage{amsmath,amssymb,amsthm,eucal,bbold,bm} 6 | \usepackage[italic,eulergreek,symbolmisc]{mathastext} 7 | 8 | % URL font 9 | \usepackage[hyphens]{url} 10 | \urlstyle{same} 11 | 12 | % Correct spacing around letters in math 13 | \MTsetmathskips{f}{\thinmuskip}{0mu} 14 | \MTsetmathskips{y}{\thinmuskip}{0mu} 15 | \MTsetmathskips{p}{\thinmuskip}{0mu} 16 | \MTsetmathskips{l}{0mu}{\thinmuskip} 17 | \MTsetmathskips{j}{\thinmuskip}{\thinmuskip} 18 | 19 | % Copiable text in PDF 20 | \input{glyphtounicode} 21 | \pdfgentounicode=1 22 | 23 | % --- Color --- 24 | 25 | \usepackage[x11names]{xcolor} 26 | 27 | % --- Spacing --- 28 | 29 | \usepackage[onehalfspacing]{setspace} 30 | \AtBeginDocument{\frenchspacing} 31 | \usepackage{microtype} 32 | 33 | % --- Title page --- 34 | 35 | % Title font 36 | \usepackage{titling} 37 | \pretitle{\begin{center}\bfseries\huge} 38 | 39 | % No thanks mark 40 | \renewcommand{\tamark}{} 41 | \setlength{\thanksmarkwidth}{0em} 42 | \setlength{\thanksmargin}{0em} 43 | 44 | % No indentation for abstract 45 | \usepackage{etoolbox} 46 | \AtBeginEnvironment{titlepage}{\pagestyle{empty}\setlength{\parindent}{0pt} 47 | } 48 | 49 | % --- Headings --- 50 | 51 | \usepackage{titlesec} 52 | \titleformat*{\section}{\centering\large\bfseries} 53 | \titleformat*{\subsection}{\bfseries} 54 | \titleformat{\paragraph}[runin]{\itshape}{}{}{}[.] 55 | \titlelabel{\thetitle.\quad} 56 | 57 | % --- Theorems & proofs --- 58 | 59 | \newtheoremstyle{paper}{}{}{\itshape}{}{\scshape}{.}{.5em}{} 60 | \theoremstyle{paper} 61 | \newtheorem{theorem}{Theorem} 62 | \newtheorem{proposition}{Proposition} 63 | \newtheorem{lemma}{Lemma} 64 | \newtheorem{corollary}{Corollary} 65 | \newtheorem{definition}{Definition} 66 | \newtheorem{assumption}{Assumption} 67 | \newtheorem{remark}{Remark} 68 | 69 | % Proof label font 70 | \renewcommand{\proofname}{\upshape\scshape Proof} 71 | 72 | % --- Tables & figures --- 73 | 74 | \usepackage{multirow,booktabs,rotating,graphicx} 75 | \renewcommand{\floatpagefraction}{0} 76 | \renewcommand{\arraystretch}{1.1} 77 | \renewcommand{\figurename}{\textsc{Figure}} 78 | \renewcommand{\tablename}{\textsc{Table}} 79 | \BeforeBeginEnvironment{tabular*}{\footnotesize} 80 | \AtBeginDocument{\allowdisplaybreaks[1]} 81 | 82 | % Captions 83 | \usepackage{caption,subcaption} 84 | \captionsetup{labelsep=period} 85 | \captionsetup[sub]{labelformat=simple,labelsep=period,size=footnotesize} 86 | \captionsetup[table]{position=top} 87 | \renewcommand{\thesubfigure}{\Alph{subfigure}} 88 | 89 | % Centered figures & tables 90 | \makeatletter\g@addto@macro\@floatboxreset\centering\makeatother 91 | 92 | % Notes below tables & figures 93 | \newcommand{\note}[2][]{\parbox{\textwidth}{\footnotesize\vspace*{10pt}\textit{#1}#2}} 94 | 95 | % Figure sizes & spaces 96 | \newcommand{\sfig}{0.2} 97 | \newcommand{\mfig}{0.3} 98 | \newcommand{\lfig}{0.4} 99 | \newcommand{\vfig}{\\\vspace*{0.4cm}} 100 | 101 | % --- Lists --- 102 | 103 | \usepackage{enumitem} 104 | \setlist[itemize,1]{leftmargin=\parindent,label=\color{gray}{\upshape\textbullet}} 105 | \setlist[itemize,2]{leftmargin=2\parindent,label=\color{gray}{\upshape\textendash}} 106 | \setlist[enumerate,1]{leftmargin=\parindent,label=\upshape(\alph*)} 107 | 
\setlist[enumerate,2]{leftmargin=2\parindent,label=\upshape(\roman*)} 108 | 109 | % --- Bibliography --- 110 | 111 | \usepackage{natbib} 112 | \setcitestyle{aysep={}} 113 | \renewcommand{\bibfont}{\small} 114 | \setlength{\bibsep}{0pt} 115 | \setlength{\bibhang}{1.5em} 116 | 117 | % Reduced spacing in bibliography 118 | \AtBeginEnvironment{thebibliography}{\setstretch{1.1}} 119 | 120 | % --- Appendix --- 121 | 122 | \AfterEndEnvironment{thebibliography}{ 123 | \setcounter{theorem}{0} 124 | \setcounter{proposition}{0} 125 | \setcounter{lemma}{0} 126 | \setcounter{corollary}{0} 127 | \setcounter{definition}{0} 128 | \setcounter{assumption}{0} 129 | \setcounter{remark}{0} 130 | \setcounter{table}{0} 131 | \setcounter{figure}{0} 132 | \setcounter{equation}{0} 133 | \renewcommand{\thetheorem}{A\arabic{theorem}} 134 | \renewcommand{\theproposition}{A\arabic{proposition}} 135 | \renewcommand{\thelemma}{A\arabic{lemma}} 136 | \renewcommand{\thecorollary}{A\arabic{corollary}} 137 | \renewcommand{\thedefinition}{A\arabic{definition}} 138 | \renewcommand{\theassumption}{A\arabic{assumption}} 139 | \renewcommand{\theremark}{A\arabic{remark}} 140 | \renewcommand{\thetable}{A\arabic{table}} 141 | \renewcommand{\thefigure}{A\arabic{figure}} 142 | \renewcommand{\theequation}{A\arabic{equation}} 143 | \titleformat{\section}{\centering\large\bfseries}{Appendix~\thesection.}{1em}{}} 144 | 145 | % --- Hyperlinks (last package) --- 146 | 147 | \usepackage{fancyhdr} % Before hyperref 148 | \usepackage[pdftex,hidelinks,hypertexnames=false,hyperfootnotes=false,pdfpagemode=UseNone,pdfdisplaydoctitle=true]{hyperref} 149 | 150 | % --- Layout (after hyperref) --- 151 | 152 | \usepackage[margin=1.1in,footskip=0.7in]{geometry} 153 | 154 | % --- Headers & footers (after hyperref & geometry) --- 155 | 156 | % No header line 157 | \renewcommand{\headrule}{} 158 | 159 | % Centered page numbers on regular pages 160 | \pagestyle{fancy} 161 | \fancyhf{} 162 | \fancyfoot[C]{\thepage} 163 | 164 | % Footers for title pages 165 | \usepackage{xparse} 166 | \fancypagestyle{plain}{\fancyhf{}} 167 | \NewDocumentCommand{\available}{o g}{% 168 | \fancypagestyle{plain}{\fancyfoot[C]{\color{gray}\footnotesize% 169 | \IfValueT{#1}{Published in \emph{#1}\;\textbullet\;}% 170 | Available at \url{#2}}}} -------------------------------------------------------------------------------- /lecturenotes/phasediagrams.key: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pmichaillat/math-for-macro/6d703cadd8da24bb73edcf6f1b6d3a2971bb405d/lecturenotes/phasediagrams.key -------------------------------------------------------------------------------- /lecturenotes/phasediagrams.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pmichaillat/math-for-macro/6d703cadd8da24bb73edcf6f1b6d3a2971bb405d/lecturenotes/phasediagrams.pdf -------------------------------------------------------------------------------- /problemsets/math.sty: -------------------------------------------------------------------------------- 1 | % ---------- Brackets ---------- 2 | 3 | \newcommand{\bc}[1]{\left\lbrace #1 \right\rbrace} 4 | \newcommand{\bp}[1]{\left( #1 \right)} 5 | \newcommand{\bs}[1]{\left[ #1 \right]} 6 | \newcommand{\of}[1]{{\left( #1 \right)}} % Parentheses without surrounding space, for function arguments 7 | \newcommand{\abs}[1]{\left\lvert #1 \right\rvert} 8 | \newcommand{\norm}[1]{\left\lVert #1 \right\rVert} 9 | 
\newcommand{\floor}[1]{\left\lfloor #1 \right\rfloor} 10 | 11 | % ---------- Accents ---------- 12 | 13 | \newcommand{\ol}[1]{\overline{#1}} 14 | \newcommand{\ul}[1]{\underline{#1}} 15 | \newcommand{\wh}[1]{\widehat{#1}} 16 | \newcommand{\wt}[1]{\widetilde{#1}} 17 | 18 | % ---------- Operators ---------- 19 | 20 | \usepackage{xparse} 21 | 22 | % Natural log operator: 23 | % * \ln produces ln 24 | % * \ln{x} produces ln(x) 25 | \let\oldln\ln 26 | \RenewDocumentCommand{\ln}{g}{\IfNoValueTF{#1}{\oldln}{\,{\oldln}{\bp{#1}}}} 27 | 28 | % Exponential operator: 29 | % * \exp produces exp 30 | % * \exp{x} produces exp(x) 31 | \let\oldexp\exp 32 | \RenewDocumentCommand{\exp}{g}{\IfNoValueTF{#1}{\oldexp}{\,{\oldexp}{\bp{#1}}}} 33 | 34 | % Max operator: 35 | % * \max produces max 36 | % * \max[x] produces max_x 37 | % * \max{y} produces max{y} 38 | % * \max[x]{y} produces max_x{y} 39 | \let\oldmax\max 40 | \RenewDocumentCommand{\max}{o g}{% 41 | \IfNoValueTF{#2}{\oldmax\IfValueT{#1}{_{#1}}}% 42 | {\,{\oldmax\IfValueT{#1}{_{#1}}}{\bc{#2}}}} 43 | 44 | % Min operator: 45 | % * \min produces min 46 | % * \min[x] produces min_x 47 | % * \min{y} produces min{y} 48 | % * \min[x]{y} produces min_x{y} 49 | \let\oldmin\min 50 | \RenewDocumentCommand{\min}{o g}{% 51 | \IfNoValueTF{#2}{\oldmin\IfValueT{#1}{_{#1}}}% 52 | {\,{\oldmin\IfValueT{#1}{_{#1}}}{\bc{#2}}}} 53 | 54 | % Expectation operator: 55 | % * \E produces E 56 | % * \E[x] produces E_x 57 | % * \E{Y} produces E(Y) 58 | % * \E[x]{Y} produces E_x(Y) 59 | \NewDocumentCommand{\E}{o g}{% 60 | \IfNoValueTF{#2}{\operatorname{\mathbb{E}}\IfValueT{#1}{_{#1}}}% 61 | {\,\mathbb{E}\IfValueT{#1}{_{#1}}{\bp{#2}}}} 62 | 63 | % Probability operator: 64 | % * \P produces P 65 | % * \P[x] produces P_x 66 | % * \P{Y} produces P(Y) 67 | % * \P[x]{Y} produces P_x(Y) 68 | \RenewDocumentCommand{\P}{o g}{% 69 | \IfNoValueTF{#2}{\operatorname{\mathbb{P}}\IfValueT{#1}{_{#1}}}% 70 | {\,\mathbb{P}\IfValueT{#1}{_{#1}}{\bp{#2}}}} 71 | 72 | % Indicator operator: 73 | % * \ind produces 1 74 | % * \ind{Y} produces 1(Y) 75 | \NewDocumentCommand{\ind}{g}{% 76 | \IfNoValueTF{#1}{\operatorname{\mathbb{1}}}% 77 | {\,\mathbb{1}{\bp{#1}}}} 78 | 79 | % Trace operator: 80 | % * \tr produces tr 81 | % * \tr{Y} produces tr(Y) 82 | \NewDocumentCommand{\tr}{g}{% 83 | \IfNoValueTF{#1}{\operatorname{tr}}% 84 | {\,{\operatorname{tr}}{\bp{#1}}}} 85 | 86 | % Variance operator: 87 | % * \var produces var 88 | % * \var{Y} produces var(Y) 89 | \NewDocumentCommand{\var}{g}{% 90 | \IfNoValueTF{#1}{\operatorname{var}}% 91 | {\,{\operatorname{var}}{\bp{#1}}}} 92 | 93 | % Covariance operator: 94 | % * \cov produces cov 95 | % * \cov{Y} produces cov(Y) 96 | \NewDocumentCommand{\cov}{g}{% 97 | \IfNoValueTF{#1}{\operatorname{cov}}% 98 | {\,{\operatorname{cov}}{\bp{#1}}}} 99 | 100 | % Correlation operator: 101 | % * \corr produces corr 102 | % * \corr{Y} produces corr(Y) 103 | \NewDocumentCommand{\corr}{g}{% 104 | \IfNoValueTF{#1}{\operatorname{corr}}% 105 | {\,{\operatorname{corr}}{\bp{#1}}}} 106 | 107 | % Standard deviation operator: 108 | % * \sd produces sd 109 | % * \sd{Y} produces sd(Y) 110 | \NewDocumentCommand{\sd}{g}{% 111 | \IfNoValueTF{#1}{\operatorname{sd}}% 112 | {\,{\operatorname{sd}}{\bp{#1}}}} 113 | 114 | % Standard error operator: 115 | % * \se produces se 116 | % * \se{Y} produces se(Y) 117 | \NewDocumentCommand{\se}{g}{% 118 | \IfNoValueTF{#1}{\operatorname{se}}% 119 | {\,{\operatorname{se}}{\bp{#1}}}} 120 | 121 | \DeclareMathOperator*{\argmax}{argmax} 122 | 
\DeclareMathOperator*{\argmin}{argmin} 123 | \DeclareMathOperator*{\ess}{ess} 124 | \renewcommand{\Re}{\operatorname{Re}} 125 | \renewcommand{\Im}{\operatorname{Im}} 126 | \newcommand{\iid}{\mathbin{\overset{iid}{\sim}}} 127 | \newcommand{\as}{\mathbin{\overset{as}{\to}}} 128 | 129 | % ---------- Derivatives ---------- 130 | 131 | \newcommand{\pd}[2]{\frac{\partial #1}{\partial #2}} 132 | \newcommand{\pdx}[2]{\partial #1/\partial #2} 133 | \newcommand{\od}[2]{\frac{d #1}{d #2}} 134 | \newcommand{\odx}[2]{d #1/d #2} 135 | \newcommand{\pdl}[2]{\frac{\partial\ln{#1}}{\partial\ln{#2}}} 136 | \newcommand{\pdlx}[2]{\partial\ln(#1)/\partial\ln(#2)} 137 | \newcommand{\odl}[2]{\frac{d\ln{#1}}{d\ln{#2}}} 138 | \newcommand{\odlx}[2]{d\ln(#1)/d\ln(#2)} 139 | \newcommand{\pdw}[3]{\left.\frac{\partial #1}{\partial #2}\right\vert_{#3}} 140 | \newcommand{\pdwx}[3]{\left.\partial #1/\partial #2\right\vert_{#3}} 141 | 142 | % ---------- Blackboard letters ---------- 143 | 144 | \def\R{\mathbb{R}} 145 | \def\N{\mathbb{N}} 146 | \def\Z{\mathbb{Z}} 147 | \def\Q{\mathbb{Q}} 148 | \def\C{\mathbb{C}} 149 | \def\I{\mathbb{I}} 150 | 151 | % ---------- Greek letters ---------- 152 | 153 | \def\a{\alpha} 154 | \def\b{\beta} 155 | \def\c{\chi} 156 | \def\d{\delta} 157 | \def\D{\Delta} 158 | \def\e{\epsilon} 159 | \def\f{\phi} 160 | \def\vf{\varphi} 161 | \def\F{\Phi} 162 | \def\g{\gamma} 163 | \def\G{\Gamma} 164 | \def\h{\eta} 165 | \def\k{\kappa} 166 | \def\l{\lambda} 167 | \def\L{\Lambda} 168 | \def\m{\mu} 169 | \def\n{\nu} 170 | \def\o{\omega} 171 | \def\O{\Omega} 172 | \def\vp{\varpi} 173 | \def\p{\psi} 174 | \def\r{\rho} 175 | \def\s{\sigma} 176 | \def\vs{\varsigma} 177 | \def\S{\Sigma} 178 | \def\t{\theta} 179 | \def\T{\Theta} 180 | \def\vt{\vartheta} 181 | \def\x{\xi} 182 | \def\X{\Xi} 183 | \def\z{\zeta} 184 | 185 | % ---------- Caligraphic letters ---------- 186 | 187 | \def\Ac{\mathcal{A}} 188 | \def\Bc{\mathcal{B}} 189 | \def\Cc{\mathcal{C}} 190 | \def\Dc{\mathcal{D}} 191 | \def\Ec{\mathcal{E}} 192 | \def\Fc{\mathcal{F}} 193 | \def\Gc{\mathcal{G}} 194 | \def\Hc{\mathcal{H}} 195 | \def\Ic{\mathcal{I}} 196 | \def\Jc{\mathcal{J}} 197 | \def\Kc{\mathcal{K}} 198 | \def\Lc{\mathcal{L}} 199 | \def\Mc{\mathcal{M}} 200 | \def\Nc{\mathcal{N}} 201 | \def\Oc{\mathcal{O}} 202 | \def\Pc{\mathcal{P}} 203 | \def\Qc{\mathcal{Q}} 204 | \def\Rc{\mathcal{R}} 205 | \def\Sc{\mathcal{S}} 206 | \def\Tc{\mathcal{T}} 207 | \def\Uc{\mathcal{U}} 208 | \def\Vc{\mathcal{V}} 209 | \def\Wc{\mathcal{W}} 210 | \def\Xc{\mathcal{X}} 211 | \def\Yc{\mathcal{Y}} 212 | \def\Zc{\mathcal{Z}} -------------------------------------------------------------------------------- /problemsets/notes.sty: -------------------------------------------------------------------------------- 1 | % ---------- General typography ---------- 2 | 3 | \setlist[itemize,1]{leftmargin=0pt,label=\color{gray}{\upshape\textbullet}} 4 | \setlist[itemize,2]{leftmargin=0pt,label=\color{gray}{\upshape\textendash}} 5 | \setlist[enumerate,1]{leftmargin=0pt,label=\upshape\Alph*)} 6 | 7 | % ---------- Title page ---------- 8 | 9 | \usepackage[subfigure]{tocloft} 10 | \renewcommand{\contentsname}{} 11 | \renewcommand{\cftsecfont}{\normalfont} 12 | \renewcommand{\cftsecpagefont}{\normalfont} 13 | \renewcommand{\cftsecaftersnum}{.} 14 | \renewcommand{\cftsubsecaftersnum}{.} 15 | \renewcommand{\cftdotsep}{10} 16 | \setlength{\cftbeforesecskip}{0em} 17 | 18 | % ---------- Headings ---------- 19 | 20 | \newcommand{\sectionbreak}{\clearpage} 
-------------------------------------------------------------------------------- /problemsets/paper.sty: -------------------------------------------------------------------------------- 1 | % ---------- Fonts ---------- 2 | 3 | \usepackage{sourceserifpro} 4 | \usepackage[T1]{fontenc} 5 | \usepackage{amsmath,amssymb,amsthm,eucal,bbold,bm} 6 | \usepackage[italic,eulergreek]{mathastext} 7 | 8 | \input{glyphtounicode}\pdfgentounicode=1 9 | 10 | % ---------- Spacing ---------- 11 | 12 | \usepackage[onehalfspacing]{setspace} 13 | 14 | \renewcommand{\floatpagefraction}{0} 15 | \AtBeginDocument{\allowdisplaybreaks[1]} 16 | 17 | % ---------- General typography ---------- 18 | 19 | \usepackage{microtype} 20 | \usepackage[x11names]{xcolor} 21 | \usepackage[hyphens]{url} 22 | \urlstyle{same} 23 | 24 | \usepackage{enumitem} 25 | \setlist[itemize,1]{leftmargin=\parindent,label=\color{gray}{\upshape\textbullet}} 26 | \setlist[itemize,2]{leftmargin=\parindent,label=\color{gray}{\upshape\textendash}} 27 | \setlist[enumerate,1]{leftmargin=2\parindent,label=\upshape(\roman*)} 28 | 29 | \AtBeginDocument{\frenchspacing} 30 | \MTsetmathskips{f}{\thinmuskip}{0mu} % Spacing around f in math 31 | \MTsetmathskips{y}{\thinmuskip}{0mu} % Spacing around y in math 32 | \MTsetmathskips{p}{\thinmuskip}{0mu} % Spacing around p in math 33 | \MTsetmathskips{l}{0mu}{\thinmuskip} % Spacing around l in math 34 | 35 | % ---------- Title page ---------- 36 | 37 | \usepackage{titling} 38 | \pretitle{\begin{center}\bfseries\huge} 39 | \renewcommand{\tamark}{} 40 | \setlength{\thanksmarkwidth}{0em} 41 | \setlength{\thanksmargin}{0em} 42 | 43 | \usepackage{etoolbox} 44 | \AtBeginEnvironment{titlepage}{\setlength{\parindent}{0pt}} 45 | 46 | % ---------- Headings ---------- 47 | 48 | \usepackage{titlesec} 49 | \titleformat*{\section}{\centering\large\bfseries} 50 | \titleformat*{\subsection}{\bfseries} 51 | \titleformat{\paragraph}[runin]{\itshape}{}{}{}[.] 
52 | \titlelabel{\thetitle.\quad} 53 | 54 | % ---------- Theorems + proofs ---------- 55 | 56 | \newtheoremstyle{paper}{}{}{\itshape}{}{\scshape}{.}{.5em}{} 57 | \theoremstyle{paper} 58 | \newtheorem{theorem}{Theorem} 59 | \newtheorem{proposition}{Proposition} 60 | \newtheorem{lemma}{Lemma} 61 | \newtheorem{corollary}{Corollary} 62 | \newtheorem{definition}{Definition} 63 | \newtheorem{assumption}{Assumption} 64 | \newtheorem{remark}{Remark} 65 | 66 | \renewcommand{\proofname}{\upshape\scshape Proof} 67 | 68 | % ---------- Tables + figures ---------- 69 | 70 | \usepackage{multirow,booktabs,rotating,graphicx} 71 | 72 | \makeatletter\g@addto@macro\@floatboxreset\centering\makeatother % Centered figures + tables 73 | \renewcommand{\figurename}{\textsc{Figure}} 74 | \renewcommand{\tablename}{\textsc{Table}} 75 | 76 | \BeforeBeginEnvironment{tabular*}{\footnotesize} 77 | \renewcommand{\arraystretch}{1.1} 78 | 79 | \usepackage{caption,subcaption} 80 | \captionsetup{labelsep=period} 81 | \captionsetup[sub]{labelformat=simple,labelsep=period,size=footnotesize} 82 | \captionsetup[table]{position=top} 83 | 84 | \renewcommand{\thesubfigure}{\Alph{subfigure}} 85 | \newcommand{\sfig}{0.2} 86 | \newcommand{\lfig}{0.3} 87 | \newcommand{\vfig}{\\\vspace*{0.4cm}} 88 | 89 | \newcommand{\note}[2][]{\parbox{\textwidth}{\footnotesize\vspace*{10pt}\textit{#1}#2}} 90 | 91 | % ---------- Bibliography ---------- 92 | 93 | \AtBeginEnvironment{thebibliography}{\setstretch{1.1}} 94 | 95 | \usepackage{natbib} 96 | \setcitestyle{aysep={}} 97 | \renewcommand{\bibfont}{\small} 98 | \setlength{\bibsep}{0pt} 99 | \setlength{\bibhang}{1.5em} 100 | 101 | % ---------- Appendix ---------- 102 | 103 | \AfterEndEnvironment{thebibliography}{ 104 | \setcounter{theorem}{0} 105 | \setcounter{proposition}{0} 106 | \setcounter{lemma}{0} 107 | \setcounter{corollary}{0} 108 | \setcounter{definition}{0} 109 | \setcounter{assumption}{0} 110 | \setcounter{remark}{0} 111 | \setcounter{table}{0} 112 | \setcounter{figure}{0} 113 | \setcounter{equation}{0} 114 | \renewcommand{\thetheorem}{A\arabic{theorem}} 115 | \renewcommand{\theproposition}{A\arabic{proposition}} 116 | \renewcommand{\thelemma}{A\arabic{lemma}} 117 | \renewcommand{\thecorollary}{A\arabic{corollary}} 118 | \renewcommand{\thedefinition}{A\arabic{definition}} 119 | \renewcommand{\theassumption}{A\arabic{assumption}} 120 | \renewcommand{\theremark}{A\arabic{remark}} 121 | \renewcommand{\thetable}{A\arabic{table}} 122 | \renewcommand{\thefigure}{A\arabic{figure}} 123 | \renewcommand{\theequation}{A\arabic{equation}} 124 | \titleformat{\section}{\centering\large\bfseries}{Appendix~\thesection.}{1em}{}} 125 | 126 | % ---------- Hyperlinks ---------- 127 | 128 | \usepackage{fancyhdr} % Before hyperref 129 | \usepackage[pdftex,hidelinks,hypertexnames=false,hyperfootnotes=false,pdfpagemode=UseNone,pdfdisplaydoctitle=true]{hyperref} % Last package 130 | 131 | % ---------- Layout ---------- 132 | 133 | \usepackage[margin=1.1in,footskip=0.7in]{geometry} % After hyperref 134 | 135 | % ---------- Headers + footers ---------- 136 | 137 | \renewcommand{\headrule}{} 138 | 139 | \pagestyle{fancy} % After hyperref + geometry 140 | \fancyhf{} 141 | \fancyfoot[C]{\thepage} 142 | 143 | \fancypagestyle{plain}{\fancyhf{}} 144 | \newcommand{\available}[1]{\fancypagestyle{plain}{\fancyfoot[C]{\color{gray}\footnotesize Available at \url{#1}}}} 145 | \newcommand{\published}[2]{\fancypagestyle{plain}{\fancyfoot[C]{\footnotesize Published in \emph{#1}\;\textbullet\;Available at \url{#2}}}} 146 | 147 | 
-------------------------------------------------------------------------------- /problemsets/ps1.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pmichaillat/math-for-macro/6d703cadd8da24bb73edcf6f1b6d3a2971bb405d/problemsets/ps1.pdf -------------------------------------------------------------------------------- /problemsets/ps1.tex: -------------------------------------------------------------------------------- 1 | \documentclass[letterpaper,12pt,leqno]{article} 2 | \usepackage{paper,math,notes} 3 | \available{https://pascalmichaillat.org/x/} 4 | \hypersetup{pdftitle={Problem Set on Dynamic Programming}} 5 | 6 | \begin{document} 7 | 8 | \title{Problem Set on Dynamic Programming} 9 | \author{Pascal Michaillat} 10 | \date{} 11 | 12 | \begin{titlepage} 13 | \maketitle 14 | \end{titlepage} 15 | 16 | \section*{Problem 1} 17 | 18 | Consider the following optimal growth problem: Given initial capital $k_{0}>0$, choose consumption $\bc{c_{t}} _{t =0}^{+\infty}$ to maximize utility 19 | \begin{equation*} 20 | \sum_{t=0}^{\infty}\b ^{t}\cdot \ln{c_{t}} 21 | \end{equation*} 22 | subject to the resource constraint 23 | \begin{equation*} 24 | k_{t+1}=A\cdot k_{t}^{\a}-c_{t}. 25 | \end{equation*} 26 | The parameters satisfy $0<\b<1,\;A>0,\;0<\a <1.$ 27 | 28 | \begin{enumerate} 29 | \item Derive the optimal law of motion of consumption $c_{t}$ using a Lagrangian. 30 | \item Identify the state variable and the control variable. 31 | \item Write down the Bellman equation. 32 | \item Derive the following Euler equation: 33 | \begin{equation*} 34 | c_{t+1}=\b\cdot \a\cdot A\cdot k_{t+1}^{\a -1}\cdot c_{t}. 35 | \end{equation*} 36 | 37 | \item Derive the first two value functions, $V_{1}(k)$ and $V_{2}(k)$, obtained by iteration on the Bellman equation starting with the value function $V_{0}\bp{k} \equiv 0$. 38 | \item The process of determining the value function by iterating on the Bellman equation is commonly used to solve dynamic programs numerically. The algorithm is called \textit{value function iteration}. For this optimal growth problem, one can show using value function iteration that the value function is 39 | \[V\bp{k} =\kappa +\frac{\ln{k^{\a}}}{1-\a\cdot \b},\] 40 | where $\k$ is a constant. Using the Bellman equation, determine the policy function $k'(k)$ associated with this value function. 41 | \item In light of these results, for which reasons would you prefer to use the dynamic-programming approach instead of the Lagrangian approach to solve the optimal growth problem? And for which reasons would you prefer to use the Lagrangian approach instead of the dynamic-programming approach? 42 | \end{enumerate} 43 | 44 | \section*{Problem 2} 45 | 46 | Consider the problem of choosing consumption $\bc{c_{t}}_{t=0}^{+\infty}$ to maximize expected utility 47 | \begin{equation*} 48 | \E_{0}\sum_{t=0}^{+\infty}\b^{t}\cdot u\bp{c_{t}} 49 | \end{equation*} 50 | subject to the budget constraint 51 | \begin{equation*} 52 | c_{t}+p_{t}\cdot s_{t+1}=\bp{d_{t}+p_{t}}\cdot s_{t}. 53 | \end{equation*} 54 | $d_{t}$ is the dividend paid out for one share of the asset, $p_{t}$ is the price of one share of the asset, and $s_{t}$ is the number of shares of the asset held at the beginning of period $t$. In equilibrium, the price $p_{t}$ of one share is solely a function of dividends $d_{t}$.
Dividends can only take two values $d_{l}$ and $d_{h}$, with $0<d_{l}<d_{h}$. Dividends follow a Markov process with persistence: $\P{d_{t+1}=d_{l}\mid d_{t}=d_{l}} =\P{d_{t+1}=d_{h}\mid d_{t}=d_{h}} =\rho >0.5.$ 59 | 60 | \begin{enumerate} 61 | \item Identify the state and control variables. 62 | \item Write down the Bellman equation. 63 | \item Derive the following Euler equation: 64 | \begin{equation*} 65 | p_{t}\cdot u'\bp{c_{t}} =\b\cdot \E{\bp{d_{t+1}+p_{t+1}} \cdot u'\bp{c_{t+1}} \mid d_{t}} . 66 | \end{equation*} 67 | 68 | \item Suppose that $u\bp{c} =c$. Show that the asset price is higher when the current dividend is high. 69 | \end{enumerate} 70 | 71 | \section*{Problem 3} 72 | 73 | Consider the following optimal growth problem: Given initial capital $k_{0}>0$, choose consumption and labor $\bc{c_{t},l_{t}}_{t=0}^{+\infty}$ to maximize utility 74 | \begin{equation*} 75 | \sum_{t=0}^{+\infty}\b^{t}\cdot u\bp{c_{t},l_{t}} 76 | \end{equation*} 77 | subject to the law of motion of capital 78 | \begin{align*} 79 | k_{t+1}&=A_{t}\cdot f\bp{k_{t},l_{t}} -c_{t}. 80 | \end{align*} 81 | In addition, we impose $0\leq l_{t}\leq 1$. The discount factor $\b \in \bp{0,1} $. The function $f$ is increasing and concave in both arguments. The function $u$ is increasing and concave in $c$, decreasing and convex in $l$. 82 | 83 | \paragraph{Deterministic case} First, suppose $A_{t}=1$ for all $t$. 84 | 85 | \begin{enumerate} 86 | \item What are the state and control variables? 87 | \item Write down the Bellman equation. 88 | \item Derive the following optimality conditions: 89 | \begin{align*} 90 | \pd{u\bp{c_{t},l_{t}}}{ c_{t}} &=\b \cdot \pd{u\bp{c_{t+1},l_{t+1}}}{c_{t+1}} \cdot \pd{f\bp{k_{t+1},l_{t+1}}}{ k_{t+1}}\\ 91 | \pd{u\bp{c_{t},l_{t}}}{ c_{t}}\cdot \pd{f\bp{k_{t},l_{t}}}{l_{t}} &=-\pd{u\bp{c_{t},l_{t}}}{l_{t}}. 92 | \end{align*} 93 | \item Suppose that the production function $f\bp{k,l} =k^{\a}\cdot l^{1-\a}$. Determine the ratios $c/k$ and $l/k$ in steady state. 94 | \end{enumerate} 95 | 96 | \paragraph{Stochastic case} Now, suppose $A_{t}$ is a stochastic process that takes values $A_{1}$ and $A_{2}$ with the following transition probabilities: $\P{A_{t+1}=A_{1}\mid A_{t}=A_{1}} =\P{A_{t+1}=A_{2}\mid A_{t}=A_{2}} =\rho.$ 97 | \begin{enumerate}\setcounter{enumi}{4} 98 | \item Write down the Bellman equation. 99 | \item Derive the optimality conditions.
100 | \end{enumerate} 101 | 102 | 103 | \end{document} -------------------------------------------------------------------------------- /problemsets/ps2.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pmichaillat/math-for-macro/6d703cadd8da24bb73edcf6f1b6d3a2971bb405d/problemsets/ps2.pdf -------------------------------------------------------------------------------- /problemsets/ps2.tex: -------------------------------------------------------------------------------- 1 | \documentclass[letterpaper,12pt,leqno]{article} 2 | \usepackage{paper,math,notes} 3 | \available{https://pascalmichaillat.org/x/} 4 | \hypersetup{pdftitle={Problem Set on Optimal Control}} 5 | 6 | \begin{document} 7 | 8 | \title{Problem Set on Optimal Control} 9 | \author{Pascal Michaillat} 10 | \date{} 11 | 12 | \begin{titlepage} 13 | \maketitle 14 | \end{titlepage} 15 | 16 | \section*{Problem 1} 17 | 18 | Consider the following optimal growth problem: Given initial capital $k_{0}>0$, choose a consumption path $\bc{c_{t}}_{t\geq 0}$ to maximize utility 19 | \begin{align*} 20 | \int_{0}^{\infty}e^{-\rho\cdot t} \cdot \ln{c_{t}} dt 21 | \end{align*} 22 | subject to the law of motion of capital 23 | \begin{align*} 24 | \dot{k}_{t} &=f\bp{k_{t}} -c_{t}-\d \cdot k_{t}. 25 | \end{align*} 26 | The discount factor $\rho>0$, and the production function $f$ satisfies 27 | \[f\bp{k} =A\cdot k^{\a},\] 28 | where $\a \in \bp{0,1}$ and $A>0$. 29 | 30 | \begin{enumerate} 31 | \item Write down the present-value Hamiltonian. 32 | \item Show that the Euler equation is 33 | \begin{align*} 34 | \frac{\dot{c}_{t}}{c_{t}} &=\a \cdot A \cdot k_{t}^{\a -1}-\bp{\d +\rho}. 35 | \end{align*} 36 | 37 | \item Solve for the steady state of the system. 38 | \end{enumerate} 39 | 40 | \section*{Problem 2} 41 | 42 | Consider the following investment problem: Given initial capital $k_{0}$, choose the investment path $\bc{i_{t}} _{t \geq 0}$ to maximize profits 43 | \begin{align*} 44 | \int_{0}^{\infty} e^{-r\cdot t}\bs{f\bp{k_{t}} -i_{t}-\frac{\chi}{2}\cdot \bp{\frac{i_{t}^{2}}{k_{t}}}} dt 45 | \end{align*} 46 | subject to the law of motion of capital (we assume no capital depreciation) 47 | \[\dot{k}_{t} =i_{t}.\] 48 | The interest rate $r>0$, the capital adjustment cost $\chi>0$, and the production function $f$ satisfies $f'>0$ and $f''<0$. 49 | 50 | \begin{enumerate} 51 | \item Write down the current-value Hamiltonian. 52 | \item Use the optimality conditions for the current-value Hamiltonian to derive the following differential equations: 53 | \begin{align*} 54 | \dot{k}_{t} &=\bp{\frac{q_{t}-1}{\chi}}\cdot k_{t} \\ 55 | \dot{q}_{t} &=r\cdot q_{t}-f'\bp{k_{t}} -\frac{1}{2\cdot \chi}\bp{q_{t}-1}^{2} 56 | \end{align*} 57 | 58 | \item Solve for the steady state. 
59 | \end{enumerate} 60 | 61 | 62 | \end{document} -------------------------------------------------------------------------------- /problemsets/ps3.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pmichaillat/math-for-macro/6d703cadd8da24bb73edcf6f1b6d3a2971bb405d/problemsets/ps3.pdf -------------------------------------------------------------------------------- /problemsets/ps3.tex: -------------------------------------------------------------------------------- 1 | \documentclass[letterpaper,12pt,leqno]{article} 2 | \usepackage{paper,math,notes} 3 | \available{https://pascalmichaillat.org/x/} 4 | \hypersetup{pdftitle={Problem Set on Differential Equations}} 5 | 6 | \begin{document} 7 | 8 | \title{Problem Set on Differential Equations} 9 | \author{Pascal Michaillat} 10 | \date{} 11 | 12 | \begin{titlepage} 13 | \maketitle 14 | \end{titlepage} 15 | 16 | \section*{Problem 1} 17 | 18 | Find the solution of the initial value problem 19 | \begin{align*} 20 | \dot{a}(t) &=r\cdot a(t) +s \\ 21 | a\bp{0} &=a_{0} 22 | \end{align*} 23 | where both $r$ and $s$ are known constants. 24 | 25 | \section*{Problem 2} 26 | 27 | Find the solution of the initial value problem 28 | \begin{align*} 29 | \dot{a}(t) &=r(t)\cdot a(t) +s(t) \\ 30 | a\bp{0} &=a_{0} 31 | \end{align*} 32 | where both $r(t)$ and $s(t)$ are known functions of $t.$ 33 | 34 | \section*{Problem 3} 35 | 36 | Consider the linear system of differential equations given by 37 | \begin{equation*} 38 | \bm{\dot{x}}(t)=\bs{ 39 | \begin{array}{ll} 40 | 1 & 1 \\ 41 | 4 & 1 42 | \end{array}} \bm{x}(t). 43 | \end{equation*} 44 | \begin{enumerate} 45 | \item Find the general solution of the system. 46 | \item What would you need to find a specific solution of the system? 47 | \item Draw the trajectories of the system. 48 | \end{enumerate} 49 | 50 | \section*{Problem 4} 51 | 52 | Consider the initial value problem 53 | \begin{align*} 54 | \dot{k}(t) &=s\cdot f\bp{k(t)} -\d\cdot k(t) \\ 55 | k\bp{0} &=k_{0} 56 | \end{align*} 57 | where the saving rate $s\in \bp{0,1} $, the capital depreciation rate $\d \in \bp{0,1}$, and the production function $f$ satisfies the \textit{Inada conditions}. That is, $f$ is continuously differentiable and 58 | \begin{align*} 59 | f(0)&=0\\ 60 | f'(x)&>0\\ 61 | f''(x)&<0\\ 62 | \lim_{x\to 0} f'(x)&=+\infty\\ 63 | \lim_{x\to +\infty} f'(x)&=0. 64 | \end{align*} 65 | 66 | \begin{enumerate} 67 | \item Give a production function $f$ that satisfies the Inada conditions. 68 | \item Find the steady state of the system. 69 | \item Draw the dynamic path of $k(t) $ and show that it converges to the steady state. 70 | \end{enumerate} 71 | 72 | \section*{Problem 5} 73 | 74 | The solution of the optimal growth problem studied in Problem 1 of the problem set on optimal control is characterized by a system of two nonlinear first-order differential equations: 75 | \begin{align*} 76 | \dot{k}_{t} &=f\bp{k_{t}} -c_{t}-\d \cdot k_{t}\\ 77 | \frac{\dot{c}_{t}}{c_{t}} &=\a \cdot A \cdot k_{t}^{\a -1}-\bp{\d +\rho}. 78 | \end{align*} 79 | The first differential equation is the law of motion of capital. The second differential equation is the Euler equation, which describes the optimal path of consumption over time. 80 | 81 | 82 | \begin{enumerate} 83 | \item Draw the phase diagram of the system. 84 | \item Linearize the system around its steady state. 85 | \item Show that the steady state is a saddle point locally.
86 | \item Suppose the economy is in steady state at time $t_{0}$ and there is an unanticipated decrease in the discount factor $\rho$. Show on your phase diagram the transition dynamics of the model. 87 | \end{enumerate} 88 | 89 | \section*{Problem 6} 90 | 91 | The solution of the investment problem studied in Problem 2 of the problem set on optimal control is characterized by a system of two nonlinear first-order differential equations: 92 | \begin{align*} 93 | \dot{k}_{t} &=\bp{\frac{q_{t}-1}{\chi}}\cdot k_{t} \\ 94 | \dot{q}_{t} &=r\cdot q_{t}-f'\bp{k_{t}} -\frac{1}{2\cdot \chi}\bp{q_{t}-1}^{2}. 95 | \end{align*} 96 | The first differential equation is the law of motion of capital $k_{t}$. The second differential equation is the law of motion of the co-state variable $q_{t}$. 97 | 98 | \begin{enumerate} 99 | \item Draw the phase diagram. 100 | \item Show that the steady state is a saddle point locally. 101 | \end{enumerate} 102 | 103 | \section*{Problem 7} 104 | 105 | Consider a discrete-time version of the typical growth model: 106 | \begin{align*} 107 | k(t+1) &=f\bp{k(t)} -c(t) +\bp{1-\d}\cdot k(t) \\ 108 | c(t+1) &=\b\cdot \bs{ 1+f'\bp{k(t)} -\d }\cdot c(t) . 109 | \end{align*} 110 | The discount factor $\b \in \bp{0,1}$, the rate of depreciation of capital $\d \in \bp{0,1}$, initial capital $k_{0}$ is given, and the production function $f$ satisfies the Inada conditions. These two equations are a system of first-order difference equations. Whereas a system of first-order differential equations relates $\bm{\dot{x}}(t) $ to $\bm{x}(t)$, a system of first-order difference equations relates $\bm{x}(t+1) $ to $\bm{x}(t)$. 111 | 112 | We will see that we can study a system of first-order difference equations with the tools that we used to study systems of first-order differential equations. In particular, we can use phase diagrams to understand the dynamics of the system. 113 | 114 | \begin{enumerate} 115 | \item Construct a phase diagram for the system. First, define 116 | \begin{align*} 117 | \D k & \equiv k(t+1) -k(t) , \\ 118 | \D c & \equiv c(t+1) -c(t) . 119 | \end{align*} 120 | Second, draw the $\D k=0$ locus and the $\D c=0$ locus on the $(k,c)$ plane. Finally, find the steady state as the intersection of the $\D k=0$ locus and the $ \D c=0$ locus. 121 | \item Show that the steady state is a saddle point in the phase diagram. 122 | \end{enumerate} 123 | 124 | \section*{Problem 8} 125 | 126 | We consider the following optimal growth problem. Given initial human capital $h_{0}$ and initial physical capital $k_{0}$, choose consumption $c(t) $ and labor $l(t) $ to maximize utility 127 | \begin{equation*} 128 | \int_{0}^{\infty}e^{-\rho\cdot t}\cdot \ln{c} dt 129 | \end{equation*} 130 | subject to 131 | \begin{align*} 132 | \dot{k}_{t} &=y_{t}-c_{t}-\d\cdot k_{t} \\ 133 | \dot{h}_{t} &=B\cdot \bp{1-l_{t}}\cdot h_{t}. 134 | \end{align*} 135 | Output $y_{t}$ is defined by 136 | \[y_{t}\equiv A\cdot k_{t}^{\a}\cdot \bp{l_{t}\cdot h_{t}} ^{\b}.\] 137 | We also impose that $0 \leq l_{t}\leq 1$. The discount factor $\rho>0$, the rate of depreciation of physical capital $\d>0$, the constants $A>0$ and $B>0$, and the production function parameters $\a\in \bp{0,1}$ and $\b \in\bp{0,1}$. 138 | 139 | \begin{enumerate} 140 | \item Give the state and control variables. 141 | \item Write down the present-value Hamiltonian for this problem. 142 | \item Derive the optimality conditions. 143 | \item Show that the growth rate of consumption $c(t)$ is 144 | \begin{equation*} 145 | \frac{\dot{c}}{c}=\frac{\a\cdot y}{k}-\bp{\d +\rho} .
146 | \end{equation*} 147 | \item From now on, we assume that $B=0$. Show that $l=1$. 148 | \item Draw the phase diagram in the $(k,c)$ plane. 149 | \item Show on the diagram that the steady state of the system is a saddle point. 150 | \item Derive the Jacobian of the system. 151 | \item Using the Jacobian, show that the steady state of the system is a saddle point. 152 | \end{enumerate} 153 | 154 | 155 | \end{document} -------------------------------------------------------------------------------- /problemsets/ps4.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pmichaillat/math-for-macro/6d703cadd8da24bb73edcf6f1b6d3a2971bb405d/problemsets/ps4.pdf -------------------------------------------------------------------------------- /problemsets/ps4.tex: -------------------------------------------------------------------------------- 1 | \documentclass[letterpaper,12pt,leqno]{article} 2 | \usepackage{paper,math,notes} 3 | \available{https://pascalmichaillat.org/x/} 4 | \hypersetup{pdftitle={Problem Set on Mathematics for Macroeconomics}} 5 | 6 | \begin{document} 7 | 8 | \title{Problem Set on Mathematics for Macroeconomics} 9 | \author{Pascal Michaillat} 10 | \date{} 11 | 12 | \begin{titlepage} 13 | \maketitle 14 | \end{titlepage} 15 | 16 | \section*{Problem 1} 17 | 18 | Let $\a \in (0,1)$, $\d \in (0,1)$, $\rho \in (0,1)$, and $\s>0$. Impose that $\rho+\d<1$. Given $k(0)$, we want to find the function $c(t) $ to maximize 19 | \begin{equation*} 20 | \int_{0}^{+\infty }e^{-\rho\cdot t}\cdot \frac{c(t)^{1-\s}-1}{1-\s} dt, 21 | \end{equation*} 22 | subject to the law of motion 23 | \begin{equation*} 24 | \dot{k}(t) =k(t)^{\a}-c(t)-\d \cdot k(t). 25 | \end{equation*} 26 | 27 | \begin{enumerate} 28 | 29 | \item Which variable do you choose as a state variable? Which variable do you choose as a control variable? Write down the current-value Hamiltonian and derive the optimality conditions. 30 | 31 | \item The Euler equation is the first-order differential equation that characterizes the optimal function $c(t)$. Determine the Euler equation. 32 | 33 | \item Suppose $\a =1$ and $\s =1$. Show that the system describing the optimal functions $\{k(t),c(t)\}$ reduces to a linear, homogeneous system of first-order differential equations. Show that the system is unstable by computing the eigenvalues. 34 | 35 | \item Suppose $\a <1$ and $\s >0$. Show that the system describing the optimal functions $\{k(t),c(t)\}$ reduces to a nonlinear system of first-order differential equations. Use a phase diagram to show that the steady state of the system is a saddle point. Explain how you draw the phase diagram. 36 | 37 | \end{enumerate} 38 | 39 | \section*{Problem 2} 40 | 41 | Let $\b \in (0,1)$ and $r>0$. Given $k_{0}>0$, we want to find a collection of sequences $\{c_{t},k_{t+1}\}_{t=0}^{+\infty}$ to maximize 42 | \begin{equation*} 43 | \sum_{t=0}^{\infty }\b^{t} \cdot \ln(c_{t}), 44 | \end{equation*} 45 | subject to the constraints 46 | \begin{equation*} 47 | k_{t+1}=(1+r)\cdot k_{t}-c_{t} 48 | \end{equation*} 49 | for all $t\geq 0$. 50 | 51 | \paragraph{Lagrangian} We first solve the maximization problem using the Lagrangian method. 52 | 53 | \begin{enumerate} 54 | \item Write down the Lagrangian of the problem. 55 | \item Derive the first-order condition(s) of the maximization problem. 56 | \item Derive the Euler equation. 57 | \end{enumerate} 58 | 59 | \paragraph{Dynamic Programming} Next we solve the maximization problem using the dynamic programming method.
60 | 61 | \begin{enumerate}\setcounter{enumi}{3} 62 | 63 | \item Which variable do you choose as a state variable? Which variable do you choose as a control variable? Write down the Bellman equation. 64 | 65 | \item Derive the first-order condition associated with the Bellman equation. 66 | 67 | \item Derive the Benveniste-Scheinkman equation. 68 | 69 | \item Derive the Euler equation. Compare it with the Euler equation obtained with the Lagrangian method and discuss. 70 | 71 | \item Suppose that the policy function takes the form $h(k)=A\cdot (1+r)\cdot k$ where $A\in (0,1) $. Derive $A$. 72 | 73 | \item Suppose that the value function takes the form $V(k)=B+D\cdot \ln(k),$ where $B$ and $D$ are constants. Using the expression for the policy function that you derived in the previous question, derive $B$ and $D$. 74 | 75 | \end{enumerate} 76 | \end{document} --------------------------------------------------------------------------------