├── .gitignore
├── 00_dx_quickstart.ipynb
├── 01_dx_frame.ipynb
├── 02_dx_models.ipynb
├── 03_dx_valuation_single_risk.ipynb
├── 04_dx_valuation_multi_risk.ipynb
├── 05_dx_portfolio_multi_risk.ipynb
├── 06_dx_portfolio_parallel.ipynb
├── 07_dx_portfolio_risk.ipynb
├── 08_dx_fourier_pricing.ipynb
├── 09_dx_calibration.ipynb
├── 10_dx_interest_rate_swaps.ipynb
├── 11_dx_mean_variance_portfolio.ipynb
├── 12_dx_stochastic_short_rates.ipynb
├── 13_dx_quite_complex_portfolios.ipynb
├── LICENSE.txt
├── README.md
├── data
│   ├── ukblc05.xls
│   ├── ukois09.xls
│   └── vstoxx_march_2014.h5
├── dx
│   ├── __init__.py
│   ├── analytical
│   │   ├── __init__.py
│   │   ├── black_scholes_merton.py
│   │   ├── jump_diffusion.py
│   │   ├── stoch_vol_jump_diffusion.py
│   │   └── stochastic_volatility.py
│   ├── frame.py
│   ├── license_agpl_3_0.txt
│   ├── models
│   │   ├── __init__.py
│   │   ├── geometric_brownian_motion.py
│   │   ├── jump_diffusion.py
│   │   ├── mean_reverting_diffusion.py
│   │   ├── sabr_stochastic_volatility.py
│   │   ├── simulation_class.py
│   │   ├── square_root_diffusion.py
│   │   ├── square_root_jump_diffusion.py
│   │   ├── stoch_vol_jump_diffusion.py
│   │   └── stochastic_volatility.py
│   ├── plot.py
│   ├── portfolio.py
│   ├── rates.py
│   └── valuation
│       ├── __init__.py
│       ├── derivatives_portfolio.py
│       ├── multi_risk.py
│       ├── parallel_valuation.py
│       ├── single_risk.py
│       └── var_portfolio.py
├── dx_analytics.yml
└── setup.py
/.gitignore:
--------------------------------------------------------------------------------
1 | *.pyc
2 | .ipynb_check*
3 | *.sublime*
4 | q/
5 | .DS_Store
6 |
--------------------------------------------------------------------------------
/01_dx_frame.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 |      ""
8 | ]
9 | },
10 | {
11 | "cell_type": "markdown",
12 | "metadata": {},
13 | "source": [
14 | "# Framework Classes and Functions"
15 | ]
16 | },
17 | {
18 | "cell_type": "markdown",
19 | "metadata": {},
20 | "source": [
21 |     "This section explains the usage of some basic framework classes and functions of DX Analytics: mainly some helper functions, the discounting classes, and the market environment class used to store market data and other parameters needed to model, value, and risk manage derivative instruments."
22 | ]
23 | },
24 | {
25 | "cell_type": "code",
26 | "execution_count": 1,
27 | "metadata": {},
28 | "outputs": [],
29 | "source": [
30 | "from dx import *"
31 | ]
32 | },
33 | {
34 | "cell_type": "code",
35 | "execution_count": 2,
36 | "metadata": {},
37 | "outputs": [],
38 | "source": [
39 | "np.set_printoptions(precision=3)"
40 | ]
41 | },
42 | {
43 | "cell_type": "markdown",
44 | "metadata": {},
45 | "source": [
46 | "## Helper Functions"
47 | ]
48 | },
49 | {
50 | "cell_type": "markdown",
51 | "metadata": {},
52 | "source": [
53 |     "There are two helper functions used regularly:\n",
54 |     "\n",
55 |     "* `get_year_deltas`: get a list of year deltas (decimal fractions) relative to the first value in `time_list`\n",
56 |     "* `sn_random_numbers`: get an array of standard normally distributed pseudo-random numbers"
57 | ]
58 | },
59 | {
60 | "cell_type": "markdown",
61 | "metadata": {},
62 | "source": [
63 | "### get_year_deltas"
64 | ]
65 | },
66 | {
67 | "cell_type": "markdown",
68 | "metadata": {},
69 | "source": [
70 | "Suppose we have a `list` object containing a number of `datetime` objects."
71 | ]
72 | },
73 | {
74 | "cell_type": "code",
75 | "execution_count": 3,
76 | "metadata": {},
77 | "outputs": [],
78 | "source": [
79 | "time_list = [dt.datetime(2015, 1, 1),\n",
80 | " dt.datetime(2015, 4, 1),\n",
81 | " dt.datetime(2015, 6, 15),\n",
82 | " dt.datetime(2015, 10, 21)]"
83 | ]
84 | },
85 | {
86 | "cell_type": "markdown",
87 | "metadata": {},
88 | "source": [
89 |     "Passing this object to the `get_year_deltas` function yields an array of year fractions, i.e. the time deltas of the given dates relative to the first date. Such year fractions are needed, for instance, for discounting purposes."
90 | ]
91 | },
92 | {
93 | "cell_type": "code",
94 | "execution_count": 4,
95 | "metadata": {},
96 | "outputs": [
97 | {
98 | "data": {
99 | "text/plain": [
100 | "array([0. , 0.247, 0.452, 0.803])"
101 | ]
102 | },
103 | "execution_count": 4,
104 | "metadata": {},
105 | "output_type": "execute_result"
106 | }
107 | ],
108 | "source": [
109 | "get_year_deltas(time_list)"
110 | ]
111 | },
112 | {
113 | "cell_type": "markdown",
114 | "metadata": {},
115 | "source": [
116 | "### sn_random_numbers"
117 | ]
118 | },
119 | {
120 | "cell_type": "markdown",
121 | "metadata": {},
122 | "source": [
123 |     "Monte Carlo simulation of course relies heavily on the use of random numbers. The function `sn_random_numbers` is a wrapper around the pseudo-random number generator of the `NumPy` library. It implements antithetic variates and moment matching as generic variance reduction techniques. It also allows one to fix the seed value for the random number generator. The `shape` parameter is a `tuple` object of three integers."
124 | ]
125 | },
126 | {
127 | "cell_type": "code",
128 | "execution_count": 5,
129 | "metadata": {},
130 | "outputs": [],
131 | "source": [
132 | "ran = sn_random_numbers((2, 3, 4), antithetic=True,\n",
133 | " moment_matching=True, fixed_seed=False)"
134 | ]
135 | },
136 | {
137 | "cell_type": "code",
138 | "execution_count": 6,
139 | "metadata": {},
140 | "outputs": [
141 | {
142 | "data": {
143 | "text/plain": [
144 | "array([[[-0.866, 0.028, 0.866, -0.028],\n",
145 | " [ 0.891, 0.955, -0.891, -0.955],\n",
146 | " [ 0.352, 1.161, -0.352, -1.161]],\n",
147 | "\n",
148 | " [[ 0.804, -0.206, -0.804, 0.206],\n",
149 | " [ 0.82 , 1.081, -0.82 , -1.081],\n",
150 | " [ 1.87 , -1.429, -1.87 , 1.429]]])"
151 | ]
152 | },
153 | "execution_count": 6,
154 | "metadata": {},
155 | "output_type": "execute_result"
156 | }
157 | ],
158 | "source": [
159 | "ran"
160 | ]
161 | },
162 | {
163 | "cell_type": "markdown",
164 | "metadata": {},
165 | "source": [
166 | "Using moment matching makes sure that the first and second moments match exactly 0 and 1, respectively."
167 | ]
168 | },
169 | {
170 | "cell_type": "code",
171 | "execution_count": 7,
172 | "metadata": {},
173 | "outputs": [
174 | {
175 | "data": {
176 | "text/plain": [
177 | "np.float64(0.0)"
178 | ]
179 | },
180 | "execution_count": 7,
181 | "metadata": {},
182 | "output_type": "execute_result"
183 | }
184 | ],
185 | "source": [
186 | "ran.mean()"
187 | ]
188 | },
189 | {
190 | "cell_type": "code",
191 | "execution_count": 8,
192 | "metadata": {},
193 | "outputs": [
194 | {
195 | "data": {
196 | "text/plain": [
197 | "np.float64(1.0)"
198 | ]
199 | },
200 | "execution_count": 8,
201 | "metadata": {},
202 | "output_type": "execute_result"
203 | }
204 | ],
205 | "source": [
206 | "ran.std()"
207 | ]
208 | },
209 | {
210 | "cell_type": "markdown",
211 | "metadata": {},
212 | "source": [
213 | "Setting the first value of the `shape` parameter to 1 yields a two-dimensional `ndarray` object."
214 | ]
215 | },
216 | {
217 | "cell_type": "code",
218 | "execution_count": 9,
219 | "metadata": {},
220 | "outputs": [],
221 | "source": [
222 | "ran = sn_random_numbers((1, 3, 4), antithetic=True,\n",
223 | " moment_matching=True, fixed_seed=False)"
224 | ]
225 | },
226 | {
227 | "cell_type": "code",
228 | "execution_count": 10,
229 | "metadata": {},
230 | "outputs": [
231 | {
232 | "data": {
233 | "text/plain": [
234 | "array([[ 0.226, -0.093, -0.226, 0.093],\n",
235 | " [ 0.852, -0.229, -0.852, 0.229],\n",
236 | " [ 0.195, 2.264, -0.195, -2.264]])"
237 | ]
238 | },
239 | "execution_count": 10,
240 | "metadata": {},
241 | "output_type": "execute_result"
242 | }
243 | ],
244 | "source": [
245 | "ran"
246 | ]
247 | },
248 | {
249 | "cell_type": "markdown",
250 | "metadata": {},
251 | "source": [
252 | "## Discounting Classes"
253 | ]
254 | },
255 | {
256 | "cell_type": "markdown",
257 | "metadata": {},
258 | "source": [
259 |     "In the risk-neutral valuation of derivative instruments, discounting payoffs is a major task. The following discounting classes are implemented:\n",
260 |     "\n",
261 |     "* `constant_short_rate`: fixed short rate\n",
262 |     "* `deterministic_short_rate`: deterministic yield/term structure"
263 | ]
264 | },
265 | {
266 | "cell_type": "markdown",
267 | "metadata": {},
268 | "source": [
269 | "### constant_short_rate"
270 | ]
271 | },
272 | {
273 | "cell_type": "markdown",
274 | "metadata": {},
275 | "source": [
276 |     "The `constant_short_rate` class represents the simplest case of risk-neutral discounting. A discounting object is defined by instantiating the class and providing a name and a decimal short rate value only."
277 | ]
278 | },
279 | {
280 | "cell_type": "code",
281 | "execution_count": 11,
282 | "metadata": {},
283 | "outputs": [],
284 | "source": [
285 | "r = constant_short_rate('r', 0.05)"
286 | ]
287 | },
288 | {
289 | "cell_type": "code",
290 | "execution_count": 12,
291 | "metadata": {},
292 | "outputs": [
293 | {
294 | "data": {
295 | "text/plain": [
296 | "'r'"
297 | ]
298 | },
299 | "execution_count": 12,
300 | "metadata": {},
301 | "output_type": "execute_result"
302 | }
303 | ],
304 | "source": [
305 | "r.name"
306 | ]
307 | },
308 | {
309 | "cell_type": "code",
310 | "execution_count": 13,
311 | "metadata": {},
312 | "outputs": [
313 | {
314 | "data": {
315 | "text/plain": [
316 | "0.05"
317 | ]
318 | },
319 | "execution_count": 13,
320 | "metadata": {},
321 | "output_type": "execute_result"
322 | }
323 | ],
324 | "source": [
325 | "r.short_rate"
326 | ]
327 | },
328 | {
329 | "cell_type": "markdown",
330 | "metadata": {},
331 | "source": [
332 | "The object has a method `get_forward_rates` to generate forward rates given, for instance, a `list` object of `datetime` objects."
333 | ]
334 | },
335 | {
336 | "cell_type": "code",
337 | "execution_count": 14,
338 | "metadata": {},
339 | "outputs": [
340 | {
341 | "data": {
342 | "text/plain": [
343 | "([datetime.datetime(2015, 1, 1, 0, 0),\n",
344 | " datetime.datetime(2015, 4, 1, 0, 0),\n",
345 | " datetime.datetime(2015, 6, 15, 0, 0),\n",
346 | " datetime.datetime(2015, 10, 21, 0, 0)],\n",
347 | " array([0.05, 0.05, 0.05, 0.05]))"
348 | ]
349 | },
350 | "execution_count": 14,
351 | "metadata": {},
352 | "output_type": "execute_result"
353 | }
354 | ],
355 | "source": [
356 | "r.get_forward_rates(time_list)"
357 | ]
358 | },
359 | {
360 | "cell_type": "markdown",
361 | "metadata": {},
362 | "source": [
363 | "Similarly, the method `get_discount_factors` returns discount factors for such a `list` object."
364 | ]
365 | },
366 | {
367 | "cell_type": "code",
368 | "execution_count": 15,
369 | "metadata": {},
370 | "outputs": [
371 | {
372 | "data": {
373 | "text/plain": [
374 | "([datetime.datetime(2015, 1, 1, 0, 0),\n",
375 | " datetime.datetime(2015, 4, 1, 0, 0),\n",
376 | " datetime.datetime(2015, 6, 15, 0, 0),\n",
377 | " datetime.datetime(2015, 10, 21, 0, 0)],\n",
378 | " array([0.961, 0.978, 0.988, 1. ]))"
379 | ]
380 | },
381 | "execution_count": 15,
382 | "metadata": {},
383 | "output_type": "execute_result"
384 | }
385 | ],
386 | "source": [
387 | "r.get_discount_factors(time_list)"
388 | ]
389 | },
390 | {
391 | "cell_type": "markdown",
392 | "metadata": {},
393 | "source": [
394 |     "You can also pass, for instance, an `ndarray` object containing year fractions."
395 | ]
396 | },
397 | {
398 | "cell_type": "code",
399 | "execution_count": 16,
400 | "metadata": {},
401 | "outputs": [
402 | {
403 | "data": {
404 | "text/plain": [
405 | "(array([0. , 1. , 1.5, 2. ]), array([0.905, 0.928, 0.951, 1. ]))"
406 | ]
407 | },
408 | "execution_count": 16,
409 | "metadata": {},
410 | "output_type": "execute_result"
411 | }
412 | ],
413 | "source": [
414 | "r.get_discount_factors(np.array([0., 1., 1.5, 2.]),\n",
415 | " dtobjects=False)"
416 | ]
417 | },
418 | {
419 | "cell_type": "markdown",
420 | "metadata": {},
421 | "source": [
422 | "### deterministic_short_rate"
423 | ]
424 | },
425 | {
426 | "cell_type": "markdown",
427 | "metadata": {},
428 | "source": [
429 |     "The `deterministic_short_rate` class allows one to model an interest rate term structure. To this end, you need to pass a `list` object of `datetime`/yield pairs to the class."
430 | ]
431 | },
432 | {
433 | "cell_type": "code",
434 | "execution_count": 17,
435 | "metadata": {},
436 | "outputs": [],
437 | "source": [
438 | "yields = [(dt.datetime(2015, 1, 1), 0.02),\n",
439 | " (dt.datetime(2015, 3, 1), 0.03),\n",
440 | " (dt.datetime(2015, 10, 15), 0.035),\n",
441 | " (dt.datetime(2015, 12, 31), 0.04)]"
442 | ]
443 | },
444 | {
445 | "cell_type": "code",
446 | "execution_count": 18,
447 | "metadata": {},
448 | "outputs": [],
449 | "source": [
450 | "y = deterministic_short_rate('y', yields)"
451 | ]
452 | },
453 | {
454 | "cell_type": "code",
455 | "execution_count": 19,
456 | "metadata": {},
457 | "outputs": [
458 | {
459 | "data": {
460 | "text/plain": [
461 | "'y'"
462 | ]
463 | },
464 | "execution_count": 19,
465 | "metadata": {},
466 | "output_type": "execute_result"
467 | }
468 | ],
469 | "source": [
470 | "y.name"
471 | ]
472 | },
473 | {
474 | "cell_type": "code",
475 | "execution_count": 20,
476 | "metadata": {},
477 | "outputs": [
478 | {
479 | "data": {
480 | "text/plain": [
481 | "array([[datetime.datetime(2015, 1, 1, 0, 0), 0.02],\n",
482 | " [datetime.datetime(2015, 3, 1, 0, 0), 0.03],\n",
483 | " [datetime.datetime(2015, 10, 15, 0, 0), 0.035],\n",
484 | " [datetime.datetime(2015, 12, 31, 0, 0), 0.04]], dtype=object)"
485 | ]
486 | },
487 | "execution_count": 20,
488 | "metadata": {},
489 | "output_type": "execute_result"
490 | }
491 | ],
492 | "source": [
493 | "y.yield_list"
494 | ]
495 | },
496 | {
497 | "cell_type": "markdown",
498 | "metadata": {},
499 | "source": [
500 | "The method `get_interpolated_yields` implements an interpolation of the yield data and returns the interpolated yields given a `list` object of `datetime` objects."
501 | ]
502 | },
503 | {
504 | "cell_type": "code",
505 | "execution_count": 21,
506 | "metadata": {},
507 | "outputs": [
508 | {
509 | "data": {
510 | "text/plain": [
511 | "array([[datetime.datetime(2015, 1, 1, 0, 0), 0.01999999999999999,\n",
512 | " 0.08406085977916118],\n",
513 | " [datetime.datetime(2015, 4, 1, 0, 0), 0.03283048934520344,\n",
514 | " 0.025329983618055687],\n",
515 | " [datetime.datetime(2015, 6, 15, 0, 0), 0.03513304971859118,\n",
516 | " 0.0007769642303797052],\n",
517 | " [datetime.datetime(2015, 10, 21, 0, 0), 0.03515012570984609,\n",
518 | " 0.010083939037494678]], dtype=object)"
519 | ]
520 | },
521 | "execution_count": 21,
522 | "metadata": {},
523 | "output_type": "execute_result"
524 | }
525 | ],
526 | "source": [
527 | "y.get_interpolated_yields(time_list)"
528 | ]
529 | },
530 | {
531 | "cell_type": "markdown",
532 | "metadata": {},
533 | "source": [
534 |     "In similar fashion, the methods `get_forward_rates` and `get_discount_factors` return forward rates and discount factors, respectively."
535 | ]
536 | },
537 | {
538 | "cell_type": "code",
539 | "execution_count": 22,
540 | "metadata": {},
541 | "outputs": [
542 | {
543 | "data": {
544 | "text/plain": [
545 | "([datetime.datetime(2015, 1, 1, 0, 0),\n",
546 | " datetime.datetime(2015, 4, 1, 0, 0),\n",
547 | " datetime.datetime(2015, 6, 15, 0, 0),\n",
548 | " datetime.datetime(2015, 10, 21, 0, 0)],\n",
549 | " array([0.01999999999999999, 0.03907623873047744, 0.035484280124105295,\n",
550 | " 0.04324490417008155], dtype=object))"
551 | ]
552 | },
553 | "execution_count": 22,
554 | "metadata": {},
555 | "output_type": "execute_result"
556 | }
557 | ],
558 | "source": [
559 | "y.get_forward_rates(time_list)"
560 | ]
561 | },
562 | {
563 | "cell_type": "code",
564 | "execution_count": 23,
565 | "metadata": {},
566 | "outputs": [
567 | {
568 | "data": {
569 | "text/plain": [
570 | "([datetime.datetime(2015, 1, 1, 0, 0),\n",
571 | " datetime.datetime(2015, 4, 1, 0, 0),\n",
572 | " datetime.datetime(2015, 6, 15, 0, 0),\n",
573 | " datetime.datetime(2015, 10, 21, 0, 0)],\n",
574 | " [np.float64(0.9716610313922761),\n",
575 | " np.float64(0.9787638348236196),\n",
576 | " np.float64(0.9862902768276359),\n",
577 | " np.float64(1.0)])"
578 | ]
579 | },
580 | "execution_count": 23,
581 | "metadata": {},
582 | "output_type": "execute_result"
583 | }
584 | ],
585 | "source": [
586 | "y.get_discount_factors(time_list)"
587 | ]
588 | },
589 | {
590 | "cell_type": "markdown",
591 | "metadata": {},
592 | "source": [
593 | "## Market Environment"
594 | ]
595 | },
596 | {
597 | "cell_type": "markdown",
598 | "metadata": {},
599 | "source": [
600 | "The `market_environment` class is used to collect relevant data for the modeling, valuation and risk management of single derivatives instruments and portfolios composed of such instruments. A `market_environment` object stores:\n",
601 | "\n",
602 | "* `constants`: e.g. maturity date of option\n",
603 | "* `lists`: e.g. list of dates\n",
604 | "* `curves`: e.g. discounting objects\n",
605 | "\n",
606 | "A `market_environment` object is instantiated by providing a name as a `string` object and the pricing date as a `datetime` object."
607 | ]
608 | },
609 | {
610 | "cell_type": "code",
611 | "execution_count": 24,
612 | "metadata": {},
613 | "outputs": [],
614 | "source": [
615 | "me = market_environment(name='me', pricing_date=dt.datetime(2014, 1, 1))"
616 | ]
617 | },
618 | {
619 | "cell_type": "markdown",
620 | "metadata": {},
621 | "source": [
622 |     "Constants are added via the `add_constant` method by providing a key and the value."
623 | ]
624 | },
625 | {
626 | "cell_type": "code",
627 | "execution_count": 25,
628 | "metadata": {},
629 | "outputs": [],
630 | "source": [
631 | "me.add_constant('initial_value', 100.)"
632 | ]
633 | },
634 | {
635 | "cell_type": "code",
636 | "execution_count": 26,
637 | "metadata": {},
638 | "outputs": [],
639 | "source": [
640 | "me.add_constant('volatility', 0.25)"
641 | ]
642 | },
643 | {
644 | "cell_type": "markdown",
645 | "metadata": {},
646 | "source": [
647 | "Lists of data are added via the `add_list` method."
648 | ]
649 | },
650 | {
651 | "cell_type": "code",
652 | "execution_count": 27,
653 | "metadata": {},
654 | "outputs": [],
655 | "source": [
656 | "me.add_list('dates', time_list)"
657 | ]
658 | },
659 | {
660 | "cell_type": "markdown",
661 | "metadata": {},
662 | "source": [
663 | "The `add_curve` method does the same for curves."
664 | ]
665 | },
666 | {
667 | "cell_type": "code",
668 | "execution_count": 28,
669 | "metadata": {},
670 | "outputs": [],
671 | "source": [
672 | "me.add_curve('discount_curve_1', r)"
673 | ]
674 | },
675 | {
676 | "cell_type": "code",
677 | "execution_count": 29,
678 | "metadata": {},
679 | "outputs": [],
680 | "source": [
681 | "me.add_curve('discount_curve_2', y)"
682 | ]
683 | },
684 | {
685 | "cell_type": "markdown",
686 | "metadata": {},
687 | "source": [
688 |     "The individual data objects are stored in separate `dict` objects."
689 | ]
690 | },
691 | {
692 | "cell_type": "code",
693 | "execution_count": 30,
694 | "metadata": {},
695 | "outputs": [
696 | {
697 | "data": {
698 | "text/plain": [
699 | "{'initial_value': 100.0, 'volatility': 0.25}"
700 | ]
701 | },
702 | "execution_count": 30,
703 | "metadata": {},
704 | "output_type": "execute_result"
705 | }
706 | ],
707 | "source": [
708 | "me.constants"
709 | ]
710 | },
711 | {
712 | "cell_type": "code",
713 | "execution_count": 31,
714 | "metadata": {},
715 | "outputs": [
716 | {
717 | "data": {
718 | "text/plain": [
719 | "{'dates': [datetime.datetime(2015, 1, 1, 0, 0),\n",
720 | " datetime.datetime(2015, 4, 1, 0, 0),\n",
721 | " datetime.datetime(2015, 6, 15, 0, 0),\n",
722 | " datetime.datetime(2015, 10, 21, 0, 0)]}"
723 | ]
724 | },
725 | "execution_count": 31,
726 | "metadata": {},
727 | "output_type": "execute_result"
728 | }
729 | ],
730 | "source": [
731 | "me.lists"
732 | ]
733 | },
734 | {
735 | "cell_type": "code",
736 | "execution_count": 32,
737 | "metadata": {},
738 | "outputs": [
739 | {
740 | "data": {
741 | "text/plain": [
742 |        "{'discount_curve_1': <dx.frame.constant_short_rate object at 0x...>,\n",
743 |        " 'discount_curve_2': <dx.frame.deterministic_short_rate object at 0x...>}"
744 | ]
745 | },
746 | "execution_count": 32,
747 | "metadata": {},
748 | "output_type": "execute_result"
749 | }
750 | ],
751 | "source": [
752 | "me.curves"
753 | ]
754 | },
755 | {
756 | "cell_type": "markdown",
757 | "metadata": {},
758 | "source": [
759 |     "Data is retrieved from a `market_environment` object via the `get_constant`, `get_list` and `get_curve` methods by providing the respective key."
760 | ]
761 | },
762 | {
763 | "cell_type": "code",
764 | "execution_count": 33,
765 | "metadata": {},
766 | "outputs": [
767 | {
768 | "data": {
769 | "text/plain": [
770 | "0.25"
771 | ]
772 | },
773 | "execution_count": 33,
774 | "metadata": {},
775 | "output_type": "execute_result"
776 | }
777 | ],
778 | "source": [
779 | "me.get_constant('volatility')"
780 | ]
781 | },
782 | {
783 | "cell_type": "code",
784 | "execution_count": 34,
785 | "metadata": {},
786 | "outputs": [
787 | {
788 | "data": {
789 | "text/plain": [
790 | "[datetime.datetime(2015, 1, 1, 0, 0),\n",
791 | " datetime.datetime(2015, 4, 1, 0, 0),\n",
792 | " datetime.datetime(2015, 6, 15, 0, 0),\n",
793 | " datetime.datetime(2015, 10, 21, 0, 0)]"
794 | ]
795 | },
796 | "execution_count": 34,
797 | "metadata": {},
798 | "output_type": "execute_result"
799 | }
800 | ],
801 | "source": [
802 | "me.get_list('dates')"
803 | ]
804 | },
805 | {
806 | "cell_type": "code",
807 | "execution_count": 35,
808 | "metadata": {},
809 | "outputs": [
810 | {
811 | "data": {
812 | "text/plain": [
813 |         "<dx.frame.constant_short_rate object at 0x...>"
814 | ]
815 | },
816 | "execution_count": 35,
817 | "metadata": {},
818 | "output_type": "execute_result"
819 | }
820 | ],
821 | "source": [
822 | "me.get_curve('discount_curve_1')"
823 | ]
824 | },
825 | {
826 | "cell_type": "markdown",
827 | "metadata": {},
828 | "source": [
829 |     "When retrieving, for instance, a discounting object, you can retrieve it and call one of its methods in a single step."
830 | ]
831 | },
832 | {
833 | "cell_type": "code",
834 | "execution_count": 36,
835 | "metadata": {},
836 | "outputs": [
837 | {
838 | "data": {
839 | "text/plain": [
840 | "([datetime.datetime(2015, 1, 1, 0, 0),\n",
841 | " datetime.datetime(2015, 4, 1, 0, 0),\n",
842 | " datetime.datetime(2015, 6, 15, 0, 0),\n",
843 | " datetime.datetime(2015, 10, 21, 0, 0)],\n",
844 | " [np.float64(0.9716610313922761),\n",
845 | " np.float64(0.9787638348236196),\n",
846 | " np.float64(0.9862902768276359),\n",
847 | " np.float64(1.0)])"
848 | ]
849 | },
850 | "execution_count": 36,
851 | "metadata": {},
852 | "output_type": "execute_result"
853 | }
854 | ],
855 | "source": [
856 | "me.get_curve('discount_curve_2').get_discount_factors(time_list)"
857 | ]
858 | },
859 | {
860 | "cell_type": "markdown",
861 | "metadata": {},
862 | "source": [
863 | "**Copyright, License & Disclaimer**\n",
864 | "\n",
865 | "© Dr. Yves J. Hilpisch | The Python Quants GmbH\n",
866 | "\n",
867 | "DX Analytics (the \"dx library\" or \"dx package\") is licensed under the GNU Affero General\n",
868 | "Public License version 3 or later (see http://www.gnu.org/licenses/).\n",
869 | "\n",
870 | "DX Analytics comes with no representations or warranties, to the extent\n",
871 | "permitted by applicable law.\n",
872 | "\n",
873 | "[Learn More & Stay in Touch](https://linktr.ee/dyjh)\n",
874 | "\n",
875 |     ""
876 | ]
877 | }
878 | ],
879 | "metadata": {
880 | "anaconda-cloud": {},
881 | "kernelspec": {
882 | "display_name": "Python 3 (ipykernel)",
883 | "language": "python",
884 | "name": "python3"
885 | },
886 | "language_info": {
887 | "codemirror_mode": {
888 | "name": "ipython",
889 | "version": 3
890 | },
891 | "file_extension": ".py",
892 | "mimetype": "text/x-python",
893 | "name": "python",
894 | "nbconvert_exporter": "python",
895 | "pygments_lexer": "ipython3",
896 | "version": "3.10.16"
897 | }
898 | },
899 | "nbformat": 4,
900 | "nbformat_minor": 4
901 | }
902 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 |
2 | # DX Analytics
3 |
4 | DX Analytics is a **Python-based financial analytics library** which allows the modeling of rather complex derivatives instruments and portfolios. Make sure to fully understand what you are using this Python package for and how to apply it. Please also read the license text and disclaimer.
5 |
6 | Last update in April 2025.
7 |
8 |
9 | ## Basic Philosophy
10 |
11 | DX Analytics is a Python-based financial analytics library that mainly implements what is sometimes called the **global valuation of (complex portfolios of) derivatives instruments** (cf. http://www.riskcare.com/files/7314/0360/6145/LowResRiskcare_Risk_0510_2.pdf). The major characteristic of this approach is the **non-redundant modeling** of all components needed for the valuation (e.g. risk factors) and the **consistent simulation and valuation** of all relevant portfolio components (e.g. correlated risk factors, multi-risk derivatives and portfolios themselves).
12 |
13 | With DX Analytics you can, for instance, model and risk manage multi-risk derivatives instruments (e.g. American maximum call option) and generate 3-dimensional **present value surfaces** like this one:
14 |
15 | 
16 |
17 | You can also generate **vega surfaces** for single risk factors like this one:
18 |
19 | 
20 |
21 |
22 | In addition, DX Analytics provides a number of other classes and functions useful for financial analytics, like a class for **mean-variance portfolio analysis** or a class to model **interest-rate swaps**. However, the **focus** lies on the modeling and valuation of complex derivatives instruments and portfolios composed thereof by Monte Carlo simulation.
23 |
24 | In a sense, DX Analytics brings **back office risk management modeling and valuation practice** (e.g. used for Value-at-Risk or XVA calculations based on large scale Monte Carlo simulation efforts) to **front office derivatives analytics**.
25 |
26 | ## Books with Background Information
27 |
28 |
29 | This documentation cannot explain all technical details; rather, it explains the API of the library and its individual classes. There are two books available by the author of this library which are perfect companions for those who seriously consider using the DX Analytics library. Together, both books cover all major aspects important for an understanding and application of DX Analytics:
30 |
31 | ### Python for Finance — Mastering Data-Driven Finance
32 |
33 |
34 |
35 | This book, published by O'Reilly in its 2nd edition in 2018 (see http://py4fi.tpq.io), is a general introduction to Python for Finance. The book shows how to set up a proper Python infrastructure, explains basic Python techniques and packages and covers a broader range of topics important in financial data science (such as visualization) and computational finance (such as Monte Carlo simulation).
36 |
37 | The final part of the book explains and implements a sub-set of the classes and functions of **DX Analytics** as a larger case study (about 100 pages).
38 |
39 | This book provides you with the **basic and advanced Python knowledge** needed to do Python for Finance and to apply (and maybe integrate, enhance, or improve) DX Analytics.
40 |
41 | ### Derivatives Analytics with Python
42 |
43 |
44 |
45 |
46 | This book — published by Wiley Finance (see http://dawp.tpq.io) with the sub-title "Data Analysis, Models, Simulation, Calibration, Hedging" — provides an introduction to the **market-based valuation of financial derivatives** and explains which models can be used (e.g. stochastic volatility jump diffusions), how to discretize them and how to simulate paths for such models. It also shows how to calibrate those models parametrically to market-observed option quotes and implied volatilities. In addition, the book covers basic numerical hedging schemes for non-vanilla instruments based on advanced financial models. The approach is a practical one in that all topics are illustrated by a self-contained set of Python scripts.
47 |
48 | This book equips you with the **quantitative and computational finance knowledge** needed to understand the general valuation approach and to apply the financial models provided by DX Analytics. For example, the book intensively discusses the discretization and simulation of such models like the square-root diffusion of Cox-Ingersoll-Ross (1985) or the stochastic volatility model of Heston (1993) as well as their calibration to market data.
49 |
50 | ## Installation & Usage
51 |
52 | DX Analytics has no dependencies apart from a few standard scientific libraries (e.g. ``NumPy``, ``pandas``, ``SciPy``, ``matplotlib``).
53 |
54 | You can create an appropriate Python environment with `conda` as follows:
55 |
56 | conda env create -f dx_analytics.yml
57 |
58 | You can then install the package via
59 |
60 | pip install git+https://github.com/yhilpisch/dx.git
61 |
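As a quick check of the installation, a minimal usage sketch (along the lines of the `01_dx_frame.ipynb` notebook) might look as follows; the dates and parameter values are purely illustrative:

    import datetime as dt
    from dx import *

    # constant short rate object for risk-neutral discounting
    r = constant_short_rate('r', 0.05)

    # market environment collecting constants, lists and curves
    me = market_environment(name='me', pricing_date=dt.datetime(2025, 1, 1))
    me.add_constant('initial_value', 100.)
    me.add_constant('volatility', 0.25)
    me.add_curve('discount_curve', r)

    # discount factors for a list of dates (returns the dates and an array of factors)
    dates = [dt.datetime(2025, 1, 1), dt.datetime(2025, 7, 1), dt.datetime(2026, 1, 1)]
    print(r.get_discount_factors(dates))
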
62 | ## What is missing?
63 |
64 | Although the focus of DX Analytics lies on the simulation and valuation of derivatives instruments and portfolios composed thereof, there is still "so much" missing in this particular area alone (given the breadth of the field) that a comprehensive list of missing pieces is impossible to compile. Some **major missing features** are, for example:
65 |
66 | * support for multi-currency derivatives and portfolios
67 | * several features to model more exotic payoffs (e.g. barriers)
68 | * standardized model calibration classes/functions
69 | * more sophisticated models for the pricing of rate-sensitive instruments
70 |
71 | To put it the other way around, the **strengths of DX Analytics** at the moment lie in the modeling, pricing and risk management of **single-currency equity-based derivatives and portfolios thereof**. In this regard, the library has some features to offer that are hard to find in other libraries (commercial ones included).
72 |
73 | In that sense, the current version of DX Analytics is the beginning of a larger project for developing a full-fledged derivatives analytics suite — hopefully with the support of the **Python Quant Finance community**. If you find something missing that you think would be of benefit for all users, just let us know.
74 |
75 | ## Words of Caution
76 |
77 | Technically speaking, a comprehensive **test suite** (and a general testing approach) for DX Analytics is also missing. This is partly due to the fact that there are infinite possibilities to model derivatives instruments and portfolios with DX Analytics. The ultimate test would be to have a means to judge, for any kind of model and valuation run, whether the results are correct or not. However, with DX Analytics you can model and value "things" for which **no benchmark values** (from the market, from other models, from other libraries, etc.) exist.
78 |
79 | You can think of DX Analytics a bit like a **spreadsheet application**. Such tools allow you to implement rather complex financial models, e.g. for the valuation of a company. However, there is no "guarantee" that your results are in any (economic) way correct — although they might be mathematically sound. The ultimate test for the soundness of a valuation result will always be what an "informed" market player is willing to pay for the instrument under consideration (or a complete company, for that matter). In that sense, **the market is always right**. Models, numerical methods and their results **might be wrong** for quite a large number of reasons.
80 |
81 | With DX Analytics, potential or real **implementation errors** might of course also play a role, especially since the whole approach to modeling and valuation followed by DX Analytics is far from mainstream.
82 |
83 | Fortunately, there are at least some ways to implement **sanity checks**. This is done, for example, by benchmarking valuation results for European call and put options from Monte Carlo simulation against valuation results from another numerical method, in particular the **Fourier-based pricing approach**. This alternative approach provides numerical values for benchmark instruments at least for the most important models used by DX Analytics (e.g. the Heston (1993) stochastic volatility model).
84 |
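A minimal sketch of such a sanity check, using only functions shown in this repository (`market_environment`, `constant_short_rate` and the Fourier-based `H93_call_value`), might look as follows; all parameter values are purely illustrative, and the corresponding Monte Carlo estimate from the simulation and valuation classes would then be compared against this benchmark number:

    import datetime as dt
    from dx import *

    # market environment for a European call under the Heston (1993) model
    me = market_environment('benchmark', dt.datetime(2025, 1, 1))
    me.add_constant('initial_value', 100.)   # initial index level
    me.add_constant('strike', 100.)          # option strike
    me.add_constant('maturity', dt.datetime(2026, 1, 1))
    me.add_constant('kappa', 1.5)            # mean-reversion factor
    me.add_constant('theta', 0.04)           # long-run mean of variance
    me.add_constant('vol_vol', 0.3)          # volatility of variance
    me.add_constant('rho', -0.5)             # correlation variance/index level
    me.add_constant('volatility', 0.2)       # initial volatility (sqrt of variance)
    me.add_curve('discount_curve', constant_short_rate('r', 0.03))

    # Lewis (2001) Fourier-based benchmark value for the European call
    print(H93_call_value(me))
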
85 | ## Questions and Support
86 |
87 | Yves Hilpisch, the author of DX Analytics, is CEO of The Python Quants GmbH (Germany). The group provides professional support for the DX Analytics library. For inquiries in this regard contact dx@tpq.io.
88 |
89 | The Python Quants offers a comprehensive online training program about Python & AI for Finance: https://linktr.ee/dyjh.
90 |
91 | ## Documentation
92 |
93 | You can find the documentation at http://dx-analytics.com.
94 |
95 |
96 | ## Copyright, License & Disclaimer
97 |
98 | © Dr. Yves J. Hilpisch \| The Python Quants GmbH
99 |
100 | DX Analytics (the "dx library" or "dx package") is licensed under the GNU Affero General
101 | Public License version 3 or later (see http://www.gnu.org/licenses/).
102 |
103 | DX Analytics comes with no representations or warranties, to the extent
104 | permitted by applicable law.
105 |
106 | [Learn More & Stay in Touch](https://linktr.ee/dyjh)
107 |
108 |
--------------------------------------------------------------------------------
/data/ukblc05.xls:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/yhilpisch/dx/dbd40f1c3e8eb5f8ddf4fd1f01869949d23caab2/data/ukblc05.xls
--------------------------------------------------------------------------------
/data/ukois09.xls:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/yhilpisch/dx/dbd40f1c3e8eb5f8ddf4fd1f01869949d23caab2/data/ukois09.xls
--------------------------------------------------------------------------------
/data/vstoxx_march_2014.h5:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/yhilpisch/dx/dbd40f1c3e8eb5f8ddf4fd1f01869949d23caab2/data/vstoxx_march_2014.h5
--------------------------------------------------------------------------------
/dx/__init__.py:
--------------------------------------------------------------------------------
1 | #
2 | # DX Analytics
3 | # Financial Analytics Library
4 | #
5 | # DX Analytics is a financial analytics library, mainly for
6 | # derivatives modeling and pricing by Monte Carlo simulation
7 | #
8 | # (c) Dr. Yves J. Hilpisch
9 | # The Python Quants GmbH
10 | #
11 | # This program is free software: you can redistribute it and/or modify
12 | # it under the terms of the GNU Affero General Public License as
13 | # published by the Free Software Foundation, either version 3 of the
14 | # License, or any later version.
15 | #
16 | # This program is distributed in the hope that it will be useful,
17 | # but WITHOUT ANY WARRANTY; without even the implied warranty of
18 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
19 | # GNU Affero General Public License for more details.
20 | #
21 | # You should have received a copy of the GNU Affero General Public License
22 | # along with this program. If not, see http://www.gnu.org/licenses/.
23 | #
24 | from .frame import *
25 | from .models import *
26 | from .valuation import *
27 | from .analytical import *
28 | from .portfolio import *
29 | from .plot import *
30 | from .rates import *
31 |
--------------------------------------------------------------------------------
/dx/analytical/__init__.py:
--------------------------------------------------------------------------------
1 | #
2 | # DX Analytics
3 | # Financial Analytics Library
4 | #
5 | # DX Analytics is a financial analytics library, mainly for
6 | # derivatives modeling and pricing by Monte Carlo simulation
7 | #
8 | # (c) Dr. Yves J. Hilpisch
9 | # The Python Quants GmbH
10 | #
11 | # This program is free software: you can redistribute it and/or modify
12 | # it under the terms of the GNU Affero General Public License as
13 | # published by the Free Software Foundation, either version 3 of the
14 | # License, or any later version.
15 | #
16 | # This program is distributed in the hope that it will be useful,
17 | # but WITHOUT ANY WARRANTY; without even the implied warranty of
18 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
19 | # GNU Affero General Public License for more details.
20 | #
21 | # You should have received a copy of the GNU Affero General Public License
22 | # along with this program. If not, see http://www.gnu.org/licenses/.
23 | from .black_scholes_merton import *
24 | from .jump_diffusion import *
25 | from .stochastic_volatility import *
26 | from .stoch_vol_jump_diffusion import *
27 |
28 | __all__ = ['BSM_european_option', 'M76_call_value', 'M76_put_value',
29 | 'H93_call_value', 'H93_put_value',
30 | 'B96_call_value', 'B96_put_value']
--------------------------------------------------------------------------------
/dx/analytical/black_scholes_merton.py:
--------------------------------------------------------------------------------
1 | #
2 | # DX Analytics
3 | # Analytical Option Pricing
4 | # black_scholes_merton.py
5 | #
6 | # DX Analytics is a financial analytics library, mainly for
7 | # derivatives modeling and pricing by Monte Carlo simulation
8 | #
9 | # (c) Dr. Yves J. Hilpisch
10 | # The Python Quants GmbH
11 | #
12 | # This program is free software: you can redistribute it and/or modify
13 | # it under the terms of the GNU Affero General Public License as
14 | # published by the Free Software Foundation, either version 3 of the
15 | # License, or any later version.
16 | #
17 | # This program is distributed in the hope that it will be useful,
18 | # but WITHOUT ANY WARRANTY; without even the implied warranty of
19 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
20 | # GNU Affero General Public License for more details.
21 | #
22 | # You should have received a copy of the GNU Affero General Public License
23 | # along with this program. If not, see http://www.gnu.org/licenses/.
24 | #
25 | from math import log, exp, sqrt
26 | from scipy import stats
27 | from scipy.optimize import fsolve
28 | from ..frame import market_environment
29 |
30 |
31 | class BSM_european_option(object):
32 | ''' Class for European options in BSM Model.
33 |
34 | Attributes
35 | ==========
36 | initial_value : float
37 | initial stock/index level
38 | strike : float
39 | strike price
40 | pricing_date : datetime/Timestamp object
41 | pricing date
42 | maturity : datetime/Timestamp object
43 | maturity date
44 | short_rate : float
45 | constant risk-free short rate
46 | volatility : float
47 | volatility factor in diffusion term
48 |
49 | Methods
50 | =======
51 | call_value : float
52 | return present value of call option
53 | put_value : float
54 |         return present value of put option
55 | vega : float
56 | return vega of option
57 | imp_vol: float
58 | return implied volatility given option quote
59 | '''
60 |
61 | def __init__(self, name, mar_env):
62 | try:
63 | self.name = name
64 | self.initial_value = mar_env.get_constant('initial_value')
65 | self.strike = mar_env.get_constant('strike')
66 | self.pricing_date = mar_env.pricing_date
67 | self.maturity = mar_env.get_constant('maturity')
68 | self.short_rate = mar_env.get_curve('discount_curve').short_rate
69 | self.volatility = mar_env.get_constant('volatility')
70 | try:
71 | self.dividend_yield = mar_env.get_constant('dividend_yield')
72 | except:
73 | self.dividend_yield = 0.0
74 | self.mar_env = mar_env
75 | except:
76 | print('Error parsing market environment.')
77 |
78 | def update_ttm(self):
79 | ''' Updates time-to-maturity self.ttm. '''
80 | if self.pricing_date > self.maturity:
81 | raise ValueError('Pricing date later than maturity.')
82 | self.ttm = (self.maturity - self.pricing_date).days / 365.
83 |
84 | def d1(self):
85 | ''' Helper function. '''
86 | d1 = ((log(self.initial_value / self.strike) +
87 | (self.short_rate - self.dividend_yield +
88 | 0.5 * self.volatility ** 2) * self.ttm) /
89 | (self.volatility * sqrt(self.ttm)))
90 | return d1
91 |
92 | def d2(self):
93 | ''' Helper function. '''
94 | d2 = ((log(self.initial_value / self.strike) +
95 | (self.short_rate - self.dividend_yield -
96 | 0.5 * self.volatility ** 2) * self.ttm) /
97 | (self.volatility * sqrt(self.ttm)))
98 | return d2
99 |
100 | def call_value(self):
101 | ''' Return call option value. '''
102 | self.update_ttm()
103 | call_value = (
104 | exp(- self.dividend_yield * self.ttm) *
105 | self.initial_value * stats.norm.cdf(self.d1(), 0.0, 1.0) -
106 | exp(-self.short_rate * self.ttm) * self.strike *
107 | stats.norm.cdf(self.d2(), 0.0, 1.0))
108 | return call_value
109 |
110 | def put_value(self):
111 | ''' Return put option value. '''
112 | self.update_ttm()
113 | put_value = (
114 | exp(-self.short_rate * self.ttm) * self.strike *
115 | stats.norm.cdf(-self.d2(), 0.0, 1.0) -
116 | exp(-self.dividend_yield * self.ttm) *
117 | self.initial_value *
118 | stats.norm.cdf(-self.d1(), 0.0, 1.0))
119 | return put_value
120 |
121 | def vega(self):
122 | ''' Return Vega of option. '''
123 | self.update_ttm()
124 | d1 = ((log(self.initial_value / self.strike) +
125 | (self.short_rate + (0.5 * self.volatility ** 2)) * self.ttm) /
126 | (self.volatility * sqrt(self.ttm)))
127 | vega = self.initial_value * stats.norm.pdf(d1, 0.0, 1.0) \
128 | * sqrt(self.ttm)
129 | return vega
130 |
131 | def imp_vol(self, price, otype='call', volatility_est=0.2):
132 | ''' Return implied volatility given option price. '''
133 | me = market_environment('iv', self.pricing_date)
134 | me.add_environment(self.mar_env)
135 | me.add_constant('volatility', volatility_est)
136 | option = BSM_european_option('ivc', me)
137 | option.update_ttm()
138 |
139 | def difference(volatility_est):
140 | option.volatility = volatility_est
141 | if otype == 'call':
142 | return option.call_value() - price
143 | if otype == 'put':
144 | return (option.put_value() - price) ** 2
145 | else:
146 | raise ValueError('No valid option type.')
147 | iv = fsolve(difference, volatility_est)[0]
148 | return iv
149 |
--------------------------------------------------------------------------------
/dx/analytical/jump_diffusion.py:
--------------------------------------------------------------------------------
1 | #
2 | # DX Analytics
3 | # Analytical Option Pricing
4 | # jump_diffusion.py
5 | #
6 | # DX Analytics is a financial analytics library, mainly for
7 | # derivatives modeling and pricing by Monte Carlo simulation
8 | #
9 | # (c) Dr. Yves J. Hilpisch
10 | # The Python Quants GmbH
11 | #
12 | # This program is free software: you can redistribute it and/or modify
13 | # it under the terms of the GNU Affero General Public License as
14 | # published by the Free Software Foundation, either version 3 of the
15 | # License, or any later version.
16 | #
17 | # This program is distributed in the hope that it will be useful,
18 | # but WITHOUT ANY WARRANTY; without even the implied warranty of
19 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
20 | # GNU Affero General Public License for more details.
21 | #
22 | # You should have received a copy of the GNU Affero General Public License
23 | # along with this program. If not, see http://www.gnu.org/licenses/.
24 | #
25 | import math
26 | import numpy as np
27 | from scipy.integrate import quad
28 |
29 |
30 | def M76_call_value(mar_env):
31 | ''' Valuation of European call option in M76 model via Lewis (2001)
32 | Fourier-based approach.
33 |
34 | Parameters
35 | ==========
36 | initial_value : float
37 | initial stock/index level
38 | strike : float
39 | strike price
40 | maturity : datetime object
41 | time-to-maturity (for t=0)
42 | short_rate : float
43 | constant risk-free short rate
44 | volatility : float
45 | volatility factor diffusion term
46 | lamb : float
47 | jump intensity
48 | mu : float
49 | expected jump size
50 | delta : float
51 | standard deviation of jump
52 |
53 | Returns
54 | =======
55 | call_value: float
56 | present value of European call option
57 | '''
58 |
59 | try:
60 | S0 = mar_env.get_constant('initial_value')
61 | K = mar_env.get_constant('strike')
62 | T = (mar_env.get_constant('maturity') -
63 | mar_env.pricing_date).days / 365.
64 | r = mar_env.get_curve('discount_curve').short_rate
65 | lamb = mar_env.get_constant('lambda')
66 | mu = mar_env.get_constant('mu')
67 | delta = mar_env.get_constant('delta')
68 | volatility = mar_env.get_constant('volatility')
69 | except:
70 | print('Error parsing market environment.')
71 |
72 | int_value = quad(lambda u:
73 | M76_int_func_sa(
74 | u, S0, K, T, r, volatility, lamb, mu, delta),
75 | 0, np.inf, limit=250)[0]
76 | call_value = max(0, S0 - np.exp(-r * T) * np.sqrt(S0 * K) /
77 | np.pi * int_value)
78 | return call_value
79 |
80 |
81 | def M76_put_value(mar_env):
82 | ''' Valuation of European put option in M76 model via Lewis (2001)
83 | Fourier-based approach. '''
84 |
85 | try:
86 | S0 = mar_env.get_constant('initial_value')
87 | K = mar_env.get_constant('strike')
88 | T = (mar_env.get_constant('maturity') -
89 | mar_env.pricing_date).days / 365.
90 | r = mar_env.get_curve('discount_curve').short_rate
91 | except:
92 | print('Error parsing market environment.')
93 |
94 | call_value = M76_call_value(mar_env)
95 | put_value = call_value + K * math.exp(-r * T) - S0
96 | return put_value
97 |
98 |
99 | def M76_int_func_sa(u, S0, K, T, r, volatility, lamb, mu, delta):
100 | ''' Valuation of European call option in M76 model via Lewis (2001)
101 | Fourier-based approach: integration function.
102 |
103 | Parameter definitions see function M76_call_value.'''
104 | char_func_value = M76_char_func_sa(u - 0.5 * 1j, T, r, volatility,
105 | lamb, mu, delta)
106 | int_func_value = 1 / (u ** 2 + 0.25) \
107 | * (np.exp(1j * u * np.log(S0 / K)) * char_func_value).real
108 | return int_func_value
109 |
110 |
111 | def M76_char_func_sa(u, T, r, volatility, lamb, mu, delta):
112 | ''' Valuation of European call option in M76 model via Lewis (2001)
113 | Fourier-based approach: characteristic function 'jump component'.
114 |
115 | Parameter definitions see function M76_call_value.'''
116 | omega = r - 0.5 * volatility ** 2 \
117 | - lamb * (np.exp(mu + 0.5 * delta ** 2) - 1)
118 | char_func_value = np.exp((1j * u * omega -
119 | 0.5 * u ** 2 * volatility ** 2 +
120 | lamb * (np.exp(1j * u * mu -
121 | u ** 2 * delta ** 2 * 0.5) - 1)) * T)
122 | return char_func_value
123 |
--------------------------------------------------------------------------------
/dx/analytical/stoch_vol_jump_diffusion.py:
--------------------------------------------------------------------------------
1 | #
2 | # DX Analytics
3 | # Analytical Option Pricing
4 | # stoch_vol_jump_diffusion.py
5 | #
6 | # DX Analytics is a financial analytics library, mainly for
7 | # derivatives modeling and pricing by Monte Carlo simulation
8 | #
9 | # (c) Dr. Yves J. Hilpisch
10 | # The Python Quants GmbH
11 | #
12 | # This program is free software: you can redistribute it and/or modify
13 | # it under the terms of the GNU Affero General Public License as
14 | # published by the Free Software Foundation, either version 3 of the
15 | # License, or any later version.
16 | #
17 | # This program is distributed in the hope that it will be useful,
18 | # but WITHOUT ANY WARRANTY; without even the implied warranty of
19 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
20 | # GNU Affero General Public License for more details.
21 | #
22 | # You should have received a copy of the GNU Affero General Public License
23 | # along with this program. If not, see http://www.gnu.org/licenses/.
24 | #
25 | import math
26 | import numpy as np
27 | from scipy.integrate import quad
28 | from .stochastic_volatility import H93_char_func
29 |
30 |
31 | def B96_call_value(mar_env):
32 | ''' Valuation of European call option in B96 Model via Lewis (2001)
33 | Fourier-based approach.
34 |
35 | Parameters
36 | ==========
37 |     initial_value: float
38 | initial stock/index level
39 | strike: float
40 | strike price
41 | maturity: datetime object
42 | time-to-maturity (for t=0)
43 | short_rate: float
44 | constant risk-free short rate
45 | kappa_v: float
46 | mean-reversion factor
47 | theta_v: float
48 | long-run mean of variance
49 | sigma_v: float
50 | volatility of variance
51 | rho: float
52 | correlation between variance and stock/index level
53 | v0: float
54 | initial level of variance
55 | lamb: float
56 | jump intensity
57 | mu: float
58 | expected jump size
59 | delta: float
60 | standard deviation of jump
61 |
62 | Returns
63 | =======
64 | call_value: float
65 | present value of European call option
66 |
67 | '''
68 |
69 | try:
70 | S0 = mar_env.get_constant('initial_value')
71 | K = mar_env.get_constant('strike')
72 | T = (mar_env.get_constant('maturity') -
73 | mar_env.pricing_date).days / 365.
74 | r = mar_env.get_curve('discount_curve').short_rate
75 | kappa_v = mar_env.get_constant('kappa')
76 | theta_v = mar_env.get_constant('theta')
77 | sigma_v = mar_env.get_constant('vol_vol')
78 | rho = mar_env.get_constant('rho')
79 | v0 = mar_env.get_constant('volatility') ** 2
80 | lamb = mar_env.get_constant('lambda')
81 | mu = mar_env.get_constant('mu')
82 | delta = mar_env.get_constant('delta')
83 | except:
84 | print('Error parsing market environment.')
85 |
86 | int_value = quad(lambda u:
87 | B96_int_func(u, S0, K, T, r, kappa_v, theta_v,
88 | sigma_v, rho, v0, lamb, mu, delta),
89 | 0, np.inf, limit=250)[0]
90 | call_value = max(0, S0 - np.exp(-r * T) * np.sqrt(S0 * K) /
91 | np.pi * int_value)
92 | return call_value
93 |
94 |
95 | def B96_put_value(mar_env):
96 | ''' Valuation of European put option in Bates (1996) model via Lewis (2001)
97 | Fourier-based approach. '''
98 |
99 | try:
100 | S0 = mar_env.get_constant('initial_value')
101 | K = mar_env.get_constant('strike')
102 | T = (mar_env.get_constant('maturity') -
103 | mar_env.pricing_date).days / 365.
104 | r = mar_env.get_curve('discount_curve').short_rate
105 | except:
106 | print('Error parsing market environment.')
107 |
108 | call_value = B96_call_value(mar_env)
109 | put_value = call_value + K * math.exp(-r * T) - S0
110 | return put_value
111 |
112 |
113 | def B96_int_func(u, S0, K, T, r, kappa_v, theta_v, sigma_v, rho, v0,
114 | lamb, mu, delta):
115 | ''' Valuation of European call option in BCC97 model via Lewis (2001)
116 | Fourier-based approach: integration function.
117 |
118 | Parameter definitions see function B96_call_value.'''
119 | char_func_value = B96_char_func(u - 1j * 0.5, T, r, kappa_v, theta_v,
120 | sigma_v, rho, v0, lamb, mu, delta)
121 | int_func_value = 1 / (u ** 2 + 0.25) \
122 | * (np.exp(1j * u * np.log(S0 / K)) * char_func_value).real
123 | return int_func_value
124 |
125 |
126 | def M76_char_func(u, T, lamb, mu, delta):
127 | ''' Valuation of European call option in M76 model via Lewis (2001)
128 | Fourier-based approach: characteristic function.
129 |
130 | Parameter definitions see function M76_call_value.'''
131 | omega = -lamb * (np.exp(mu + 0.5 * delta ** 2) - 1)
132 | char_func_value = np.exp((1j * u * omega + lamb *
133 | (np.exp(1j * u * mu - u ** 2 * delta ** 2 * 0.5) - 1)) * T)
134 | return char_func_value
135 |
136 |
137 | def B96_char_func(u, T, r, kappa_v, theta_v, sigma_v, rho, v0,
138 | lamb, mu, delta):
139 | ''' Valuation of European call option in BCC97 model via Lewis (2001)
140 | Fourier-based approach: characteristic function.
141 |
142 | Parameter definitions see function B96_call_value.'''
143 | BCC1 = H93_char_func(u, T, r, kappa_v, theta_v, sigma_v, rho, v0)
144 | BCC2 = M76_char_func(u, T, lamb, mu, delta)
145 | return BCC1 * BCC2
146 |
--------------------------------------------------------------------------------
/dx/analytical/stochastic_volatility.py:
--------------------------------------------------------------------------------
1 | #
2 | # DX Analytics
3 | # Analytical Option Pricing
4 | # stochastic_volatility.py
5 | #
6 | # DX Analytics is a financial analytics library, mainly for
7 | # derivatives modeling and pricing by Monte Carlo simulation
8 | #
9 | # (c) Dr. Yves J. Hilpisch
10 | # The Python Quants GmbH
11 | #
12 | # This program is free software: you can redistribute it and/or modify
13 | # it under the terms of the GNU Affero General Public License as
14 | # published by the Free Software Foundation, either version 3 of the
15 | # License, or any later version.
16 | #
17 | # This program is distributed in the hope that it will be useful,
18 | # but WITHOUT ANY WARRANTY; without even the implied warranty of
19 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
20 | # GNU Affero General Public License for more details.
21 | #
22 | # You should have received a copy of the GNU Affero General Public License
23 | # along with this program. If not, see http://www.gnu.org/licenses/.
24 | #
25 | import math
26 | import numpy as np
27 | from scipy.integrate import quad
28 |
29 |
30 | def H93_call_value(mar_env):
31 | ''' Valuation of European call option in H93 model via Lewis (2001)
32 | Fourier-based approach.
33 |
34 | Parameters
35 | ==========
36 | initial_value : float
37 | initial stock/index level
38 | strike : float
39 | strike price
40 | maturity : datetime object
41 | time-to-maturity (for t=0)
42 | short_rate : float
43 | constant risk-free short rate
44 | kappa_v : float
45 | mean-reversion factor
46 | theta_v : float
47 | long-run mean of variance
48 | sigma_v : float
49 | volatility of variance
50 | rho : float
51 | correlation between variance and stock/index level
52 | volatility: float
53 | initial level of volatility (square root of variance)
54 |
55 | Returns
56 | =======
57 | call_value: float
58 | present value of European call option
59 |
60 | '''
61 |
62 | try:
63 | S0 = mar_env.get_constant('initial_value')
64 | K = mar_env.get_constant('strike')
65 | T = (mar_env.get_constant('maturity') -
66 | mar_env.pricing_date).days / 365.
67 | r = mar_env.get_curve('discount_curve').short_rate
68 | kappa_v = mar_env.get_constant('kappa')
69 | theta_v = mar_env.get_constant('theta')
70 | sigma_v = mar_env.get_constant('vol_vol')
71 | rho = mar_env.get_constant('rho')
72 | v0 = mar_env.get_constant('volatility') ** 2
73 | except:
74 | print('Error parsing market environment.')
75 |
76 | int_value = quad(lambda u:
77 | H93_int_func(u, S0, K, T, r, kappa_v,
78 | theta_v, sigma_v, rho, v0),
79 | 0, np.inf, limit=250)[0]
80 | call_value = max(0, S0 - np.exp(-r * T) * np.sqrt(S0 * K) /
81 | np.pi * int_value)
82 | return call_value
83 |
84 |
85 | def H93_put_value(mar_env):
86 |     ''' Valuation of European put option in Heston (1993) model via
87 | Lewis (2001) -- Fourier-based approach. '''
88 |
89 | try:
90 | S0 = mar_env.get_constant('initial_value')
91 | K = mar_env.get_constant('strike')
92 | T = (mar_env.get_constant('maturity') -
93 | mar_env.pricing_date).days / 365.
94 | r = mar_env.get_curve('discount_curve').short_rate
95 | except:
96 | print('Error parsing market environment.')
97 |
98 | call_value = H93_call_value(mar_env)
99 | put_value = call_value + K * math.exp(-r * T) - S0
100 | return put_value
101 |
102 |
103 | def H93_int_func(u, S0, K, T, r, kappa_v, theta_v, sigma_v, rho, v0):
104 | ''' Valuation of European call option in H93 model via Lewis (2001)
105 | Fourier-based approach: integration function.
106 |
107 | Parameter definitions see function H93_call_value.'''
108 | char_func_value = H93_char_func(u - 1j * 0.5, T, r, kappa_v,
109 | theta_v, sigma_v, rho, v0)
110 | int_func_value = 1 / (u ** 2 + 0.25) \
111 | * (np.exp(1j * u * np.log(S0 / K)) * char_func_value).real
112 | return int_func_value
113 |
114 |
115 | def H93_char_func(u, T, r, kappa_v, theta_v, sigma_v, rho, v0):
116 | ''' Valuation of European call option in H93 model via Lewis (2001)
117 | Fourier-based approach: characteristic function.
118 |
119 |     Parameter definitions see function H93_call_value.'''
120 | c1 = kappa_v * theta_v
121 | c2 = -np.sqrt((rho * sigma_v * u * 1j - kappa_v) ** 2 -
122 | sigma_v ** 2 * (-u * 1j - u ** 2))
123 | c3 = (kappa_v - rho * sigma_v * u * 1j + c2) \
124 | / (kappa_v - rho * sigma_v * u * 1j - c2)
125 | H1 = (r * u * 1j * T + (c1 / sigma_v ** 2) *
126 | ((kappa_v - rho * sigma_v * u * 1j + c2) * T -
127 | 2 * np.log((1 - c3 * np.exp(c2 * T)) / (1 - c3))))
128 | H2 = ((kappa_v - rho * sigma_v * u * 1j + c2) / sigma_v ** 2 *
129 | ((1 - np.exp(c2 * T)) / (1 - c3 * np.exp(c2 * T))))
130 | char_func_value = np.exp(H1 + H2 * v0)
131 | return char_func_value
132 |
--------------------------------------------------------------------------------
/dx/frame.py:
--------------------------------------------------------------------------------
1 | #
2 | # DX Analytics
3 | # Framework Classes and Functions
4 | # dx_frame.py
5 | #
6 | #
7 | # DX Analytics is a financial analytics library, mainly for
8 | # derivatives modeling and pricing by Monte Carlo simulation
9 | #
10 | # (c) Dr. Yves J. Hilpisch
11 | # The Python Quants GmbH
12 | #
13 | # This program is free software: you can redistribute it and/or modify
14 | # it under the terms of the GNU Affero General Public License as
15 | # published by the Free Software Foundation, either version 3 of the
16 | # License, or any later version.
17 | #
18 | # This program is distributed in the hope that it will be useful,
19 | # but WITHOUT ANY WARRANTY; without even the implied warranty of
20 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
21 | # GNU Affero General Public License for more details.
22 | #
23 | # You should have received a copy of the GNU Affero General Public License
24 | # along with this program. If not, see http://www.gnu.org/licenses/.
25 | #
26 | import math
27 | import numpy as np
28 | import pandas as pd
29 | import datetime as dt
30 | import scipy.interpolate as sci
31 | import scipy.optimize as sco
32 |
33 | # Helper functions
34 |
35 |
36 | def get_year_deltas(time_list, day_count=365.):
37 | ''' Return vector of floats with time deltas in years.
38 | Initial value normalized to zero.
39 |
40 | Parameters
41 | ==========
42 | time_list : list or array
43 | collection of datetime objects
44 | day_count : float
45 | number of days for a year
46 | (to account for different conventions)
47 |
48 | Returns
49 | =======
50 | delta_list : array
51 | year fractions
52 | '''
53 |
54 | delta_list = []
55 | start = min(time_list)
56 | for time in time_list:
57 | days = (time - start).days
58 | delta_list.append(days / day_count)
59 | return np.array(delta_list)
60 |
61 |
62 | def sn_random_numbers(shape, antithetic=True, moment_matching=True,
63 | fixed_seed=False):
64 | ''' Return an array of shape "shape" with (pseudo-) random numbers
65 | which are standard normally distributed.
66 |
67 | Parameters
68 | ==========
69 | shape : tuple (o, n, m)
70 | generation of array with shape (o, n, m)
71 | antithetic : boolean
72 | generation of antithetic variates
73 | moment_matching : boolean
74 | matching of first and second moments
75 | fixed_seed : boolean
76 | flag to fix the seed
77 |
78 | Returns
79 | =======
80 | ran : (o, n, m) array of (pseudo-)random numbers
81 | '''
82 | if fixed_seed is True:
83 | np.random.seed(1000)
84 | if antithetic is True:
85 | ran = np.random.standard_normal(
86 | (shape[0], shape[1], int(shape[2] / 2)))
87 | ran = np.concatenate((ran, -ran), axis=2)
88 | else:
89 | ran = np.random.standard_normal(shape)
90 | if moment_matching is True:
91 | ran = ran - np.mean(ran)
92 | ran = ran / np.std(ran)
93 | if shape[0] == 1:
94 | return ran[0]
95 | else:
96 | return ran
97 |
98 | # Discounting classes
99 |
100 |
101 | class constant_short_rate(object):
102 | ''' Class for constant short rate discounting.
103 |
104 | Attributes
105 | ==========
106 | name : string
107 | name of the object
108 | short_rate : float (positive)
109 | constant rate for discounting
110 |
111 | Methods
112 | =======
113 | get_forward_rates :
114 | get forward rates given a list/array of datetime objects;
115 | here: constant forward rates
116 | get_discount_factors :
117 | get discount factors given a list/array of datetime objects
118 | or year fractions
119 | '''
120 |
121 | def __init__(self, name, short_rate):
122 | self.name = name
123 | self.short_rate = short_rate
124 | if short_rate < 0:
125 | raise ValueError('Short rate negative.')
126 |
127 | def get_forward_rates(self, time_list, paths=None, dtobjects=True):
128 | ''' time_list either list of datetime objects or list of
129 | year deltas as decimal number (dtobjects=False)
130 | '''
131 | forward_rates = np.array(len(time_list) * (self.short_rate,))
132 | return time_list, forward_rates
133 |
134 | def get_discount_factors(self, time_list, paths=None, dtobjects=True):
135 | if dtobjects is True:
136 | dlist = get_year_deltas(time_list)
137 | else:
138 | dlist = np.array(time_list)
139 | discount_factors = np.exp(self.short_rate * np.sort(-dlist))
140 | return time_list, discount_factors
141 |
142 |
143 | class deterministic_short_rate(object):
144 | ''' Class for discounting based on deterministic short rates,
145 | derived from a term structure of zero-coupon bond yields
146 |
147 | Attributes
148 | ==========
149 | name : string
150 | name of the object
151 | yield_list : list/array of (time, yield) tuples
152 | input yields with time attached
153 |
154 | Methods
155 | =======
156 | get_interpolated_yields :
157 | return interpolated yield curve given a time list/array
158 | get_forward_rates :
159 | return forward rates given a time list/array
160 | get_discount_factors :
161 | return discount factors given a time list/array
162 | '''
163 |
164 | def __init__(self, name, yield_list):
165 | self.name = name
166 | self.yield_list = np.array(yield_list)
167 | if np.sum(np.where(self.yield_list[:, 1] < 0, 1, 0)) > 0:
168 | raise ValueError('Negative yield(s).')
169 |
170 | def get_interpolated_yields(self, time_list, dtobjects=True):
171 | ''' time_list either list of datetime objects or list of
172 | year deltas as decimal number (dtobjects=False)
173 | '''
174 | if dtobjects is True:
175 | tlist = get_year_deltas(time_list)
176 | else:
177 | tlist = time_list
178 | dlist = get_year_deltas(self.yield_list[:, 0])
179 | if len(time_list) <= 3:
180 | k = 1
181 | else:
182 | k = 3
183 | yield_spline = sci.splrep(dlist, self.yield_list[:, 1], k=k)
184 | yield_curve = sci.splev(tlist, yield_spline, der=0)
185 | yield_deriv = sci.splev(tlist, yield_spline, der=1)
186 | return np.array([time_list, yield_curve, yield_deriv]).T
187 |
188 | def get_forward_rates(self, time_list, paths=None, dtobjects=True):
189 | yield_curve = self.get_interpolated_yields(time_list, dtobjects)
190 | if dtobjects is True:
191 | tlist = get_year_deltas(time_list)
192 | else:
193 | tlist = time_list
194 | forward_rates = yield_curve[:, 1] + yield_curve[:, 2] * tlist
195 | return time_list, forward_rates
196 |
197 | def get_discount_factors(self, time_list, paths=None, dtobjects=True):
198 | discount_factors = []
199 | if dtobjects is True:
200 | dlist = get_year_deltas(time_list)
201 | else:
202 | dlist = time_list
203 | time_list, forward_rate = self.get_forward_rates(time_list, dtobjects=dtobjects)
204 | for no in range(len(dlist)):
205 | factor = 0.0
206 | for d in range(no, len(dlist) - 1):
207 | factor += ((dlist[d + 1] - dlist[d]) *
208 | (0.5 * (forward_rate[d + 1] + forward_rate[d])))
209 | discount_factors.append(np.exp(-factor))
210 | return time_list, discount_factors
211 |
212 |
213 | # Market environment class
214 |
215 | class market_environment(object):
216 | ''' Class to model a market environment relevant for valuation.
217 |
218 | Attributes
219 | ==========
220 | name: string
221 | name of the market environment
222 | pricing_date : datetime object
223 | date of the market environment
224 |
225 | Methods
226 | =======
227 | add_constant :
228 | adds a constant (e.g. model parameter)
229 | get_constant :
230 | get a constant
231 | add_list :
232 | adds a list (e.g. underlyings)
233 | get_list :
234 | get a list
235 | add_curve :
236 | adds a market curve (e.g. yield curve)
237 | get_curve :
238 | get a market curve
239 | add_environment :
240 | adding and overwriting whole market environments
241 | with constants, lists and curves
242 | '''
243 |
244 | def __init__(self, name, pricing_date):
245 | self.name = name
246 | self.pricing_date = pricing_date
247 | self.constants = {}
248 | self.lists = {}
249 | self.curves = {}
250 |
251 | def add_constant(self, key, constant):
252 | self.constants[key] = constant
253 |
254 | def get_constant(self, key):
255 | return self.constants[key]
256 |
257 | def add_list(self, key, list_object):
258 | self.lists[key] = list_object
259 |
260 | def get_list(self, key):
261 | return self.lists[key]
262 |
263 | def add_curve(self, key, curve):
264 | self.curves[key] = curve
265 |
266 | def get_curve(self, key):
267 | return self.curves[key]
268 |
269 | def add_environment(self, env):
270 | for key in env.curves:
271 | self.curves[key] = env.curves[key]
272 | for key in env.lists:
273 | self.lists[key] = env.lists[key]
274 | for key in env.constants:
275 | self.constants[key] = env.constants[key]
276 |
--------------------------------------------------------------------------------
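A short, illustrative sketch of the frame helpers and classes above in action (values are arbitrary; the names are assumed to be importable from the dx package):

import datetime as dt
from dx import *

dates = [dt.datetime(2016, 1, 1), dt.datetime(2016, 7, 1), dt.datetime(2017, 1, 1)]
print(get_year_deltas(dates))  # year fractions relative to the first date

# (2, 4, 10) array of standard normals, antithetic and moment matched
ran = sn_random_numbers((2, 4, 10), fixed_seed=True)
print(ran.mean(), ran.std())   # close to 0 and 1 by construction

csr = constant_short_rate('csr', 0.05)
print(csr.get_discount_factors(dates)[1])  # exp(-r * t) discount factors

me = market_environment('me_base', dt.datetime(2016, 1, 1))
me.add_constant('initial_value', 100.)
me.add_curve('discount_curve', csr)
print(me.get_constant('initial_value'),
      me.get_curve('discount_curve').short_rate)
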
/dx/models/__init__.py:
--------------------------------------------------------------------------------
1 | #
2 | # DX Analytics
3 | # Base Classes and Model Classes for Simulation
4 | # __init__.py
5 | #
6 | # DX Analytics is a financial analytics library, mainly for
7 | # derivatives modeling and pricing by Monte Carlo simulation
8 | #
9 | # (c) Dr. Yves J. Hilpisch
10 | # The Python Quants GmbH
11 | #
12 | # This program is free software: you can redistribute it and/or modify
13 | # it under the terms of the GNU Affero General Public License as
14 | # published by the Free Software Foundation, either version 3 of the
15 | # License, or any later version.
16 | #
17 | # This program is distributed in the hope that it will be useful,
18 | # but WITHOUT ANY WARRANTY; without even the implied warranty of
19 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
20 | # GNU Affero General Public License for more details.
21 | #
22 | # You should have received a copy of the GNU Affero General Public License
23 | # along with this program. If not, see http://www.gnu.org/licenses/.
24 | #
25 | from .simulation_class import *
26 | from .jump_diffusion import jump_diffusion
27 | from .geometric_brownian_motion import geometric_brownian_motion
28 | from .stochastic_volatility import stochastic_volatility
29 | from .stoch_vol_jump_diffusion import stoch_vol_jump_diffusion
30 | from .square_root_diffusion import *
31 | from .mean_reverting_diffusion import mean_reverting_diffusion
32 | from .square_root_jump_diffusion import *
33 | from .sabr_stochastic_volatility import sabr_stochastic_volatility
34 |
35 | __all__ = ['simulation_class', 'general_underlying',
36 | 'geometric_brownian_motion', 'jump_diffusion',
37 | 'stochastic_volatility', 'stoch_vol_jump_diffusion',
38 | 'square_root_diffusion', 'mean_reverting_diffusion',
39 | 'square_root_jump_diffusion', 'square_root_jump_diffusion_plus',
40 | 'sabr_stochastic_volatility', 'srd_forwards',
41 | 'stochastic_short_rate']
42 |
--------------------------------------------------------------------------------
/dx/models/geometric_brownian_motion.py:
--------------------------------------------------------------------------------
1 | #
2 | # DX Analytics
3 | # Base Classes and Model Classes for Simulation
4 | # geometric_brownian_motion.py
5 | #
6 | # DX Analytics is a financial analytics library, mainly for
7 | # derivatives modeling and pricing by Monte Carlo simulation
8 | #
9 | # (c) Dr. Yves J. Hilpisch
10 | # The Python Quants GmbH
11 | #
12 | # This program is free software: you can redistribute it and/or modify
13 | # it under the terms of the GNU Affero General Public License as
14 | # published by the Free Software Foundation, either version 3 of the
15 | # License, or any later version.
16 | #
17 | # This program is distributed in the hope that it will be useful,
18 | # but WITHOUT ANY WARRANTY; without even the implied warranty of
19 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
20 | # GNU Affero General Public License for more details.
21 | #
22 | # You should have received a copy of the GNU Affero General Public License
23 | # along with this program. If not, see http://www.gnu.org/licenses/.
24 | #
25 | from ..frame import *
26 | from .simulation_class import simulation_class
27 |
28 |
29 | class geometric_brownian_motion(simulation_class):
30 | ''' Class to generate simulated paths based on
31 | the Black-Scholes-Merton geometric Brownian motion model.
32 |
33 | Attributes
34 | ==========
35 | name : string
36 | name of the object
37 | mar_env : instance of market_environment
38 | market environment data for simulation
39 | corr : boolean
40 | True if correlated with other model simulation object
41 |
42 | Methods
43 | =======
44 | update :
45 | updates parameters
46 | generate_paths :
47 | returns Monte Carlo paths given the market environment
48 | '''
49 |
50 | def __init__(self, name, mar_env, corr=False):
51 | super(geometric_brownian_motion, self).__init__(name, mar_env, corr)
52 |
53 | def update(self, pricing_date=None, initial_value=None,
54 | volatility=None, final_date=None):
55 | ''' Updates model parameters. '''
56 | if pricing_date is not None:
57 | self.pricing_date = pricing_date
58 | self.time_grid = None
59 | self.generate_time_grid()
60 | if initial_value is not None:
61 | self.initial_value = initial_value
62 | if volatility is not None:
63 | self.volatility = volatility
64 | if final_date is not None:
65 | self.final_date = final_date
66 | self.instrument_values = None
67 |
68 | def generate_paths(self, fixed_seed=False, day_count=365.):
69 | ''' Generates Monte Carlo paths for the model. '''
70 | if self.time_grid is None:
71 | self.generate_time_grid()
72 | # method from generic model simulation class
73 | # number of dates for time grid
74 | M = len(self.time_grid)
75 | # number of paths
76 | I = self.paths
77 | # array initialization for path simulation
78 | paths = np.zeros((M, I))
79 | # initialize first date with initial_value
80 | paths[0] = self.initial_value
81 | if self.correlated is False:
82 | # if not correlated generate random numbers
83 | rand = sn_random_numbers((1, M, I),
84 | fixed_seed=fixed_seed)
85 | else:
86 | # if correlated use random number object as provided
87 | # in market environment
88 | rand = self.random_numbers
89 |
90 | # forward rates for drift of process
91 | forward_rates = self.discount_curve.get_forward_rates(
92 | self.time_grid, self.paths, dtobjects=True)[1]
93 |
94 | for t in range(1, len(self.time_grid)):
95 | # select the right time slice from the relevant
96 | # random number set
97 | if self.correlated is False:
98 | ran = rand[t]
99 | else:
100 | ran = np.dot(self.cholesky_matrix, rand[:, t, :])
101 | ran = ran[self.rn_set]
102 | dt = (self.time_grid[t] - self.time_grid[t - 1]).days / day_count
103 | # difference between two dates as year fraction
104 | rt = (forward_rates[t - 1] + forward_rates[t]) / 2
105 | paths[t] = paths[t - 1] * np.exp((rt - 0.5 *
106 | self.volatility ** 2) * dt +
107 | self.volatility * np.sqrt(dt) * ran)
108 | # generate simulated values for the respective date
109 | self.instrument_values = paths
110 |
--------------------------------------------------------------------------------
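A minimal simulation sketch for geometric_brownian_motion; the required constants follow from simulation_class further below, and the values are purely illustrative:

import datetime as dt
from dx import *

me_gbm = market_environment('me_gbm', dt.datetime(2016, 1, 1))
me_gbm.add_constant('initial_value', 36.)
me_gbm.add_constant('volatility', 0.2)
me_gbm.add_constant('final_date', dt.datetime(2016, 12, 31))
me_gbm.add_constant('currency', 'EUR')
me_gbm.add_constant('frequency', 'M')      # monthly time grid
me_gbm.add_constant('paths', 10000)
me_gbm.add_curve('discount_curve', constant_short_rate('csr', 0.05))

gbm = geometric_brownian_motion('gbm', me_gbm)
paths = gbm.get_instrument_values()        # array of shape (len(time_grid), paths)
print(paths.shape, paths[0, 0], paths[-1, :3])
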
/dx/models/jump_diffusion.py:
--------------------------------------------------------------------------------
1 | #
2 | # DX Analytics
3 | # Base Classes and Model Classes for Simulation
4 | # jump_diffusion.py
5 | #
6 | # DX Analytics is a financial analytics library, mainly for
7 | # derivatives modeling and pricing by Monte Carlo simulation
8 | #
9 | # (c) Dr. Yves J. Hilpisch
10 | # The Python Quants GmbH
11 | #
12 | # This program is free software: you can redistribute it and/or modify
13 | # it under the terms of the GNU Affero General Public License as
14 | # published by the Free Software Foundation, either version 3 of the
15 | # License, or any later version.
16 | #
17 | # This program is distributed in the hope that it will be useful,
18 | # but WITHOUT ANY WARRANTY; without even the implied warranty of
19 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
20 | # GNU Affero General Public License for more details.
21 | #
22 | # You should have received a copy of the GNU Affero General Public License
23 | # along with this program. If not, see http://www.gnu.org/licenses/.
24 | #
25 | from ..frame import *
26 | from .simulation_class import simulation_class
27 |
28 |
29 | class jump_diffusion(simulation_class):
30 | ''' Class to generate simulated paths based on
31 | the Merton (1976) jump diffusion model.
32 |
33 | Attributes
34 | ==========
35 | name : string
36 | name of the object
37 | mar_env : instance of market_environment
38 | market environment data for simulation
39 | corr : boolean
40 | True if correlated with other model object
41 |
42 | Methods
43 | =======
44 | update :
45 | updates parameters
46 | generate_paths :
47 | returns Monte Carlo paths given the market environment
48 | '''
49 |
50 | def __init__(self, name, mar_env, corr=False):
51 | super(jump_diffusion, self).__init__(name, mar_env, corr)
52 | try:
53 | self.lamb = mar_env.get_constant('lambda')
54 | self.mu = mar_env.get_constant('mu')
55 | self.delt = mar_env.get_constant('delta')
56 | except:
57 | print('Error parsing market environment.')
58 |
59 | def update(self, pricing_date=None, initial_value=None,
60 | volatility=None, lamb=None, mu=None, delta=None,
61 | final_date=None):
62 | if pricing_date is not None:
63 | self.pricing_date = pricing_date
64 | self.time_grid = None
65 | self.generate_time_grid()
66 | if initial_value is not None:
67 | self.initial_value = initial_value
68 | if volatility is not None:
69 | self.volatility = volatility
70 | if lamb is not None:
71 | self.lamb = lamb
72 | if mu is not None:
73 | self.mu = mu
74 | if delta is not None:
75 | self.delt = delta
76 | if final_date is not None:
77 | self.final_date = final_date
78 | self.instrument_values = None
79 |
80 | def generate_paths(self, fixed_seed=False, day_count=365.):
81 | if self.time_grid is None:
82 | self.generate_time_grid()
83 | # method from generic model simulation class
84 | # number of dates for time grid
85 | M = len(self.time_grid)
86 | # number of paths
87 | I = self.paths
88 | # array initialization for path simulation
89 | paths = np.zeros((M, I))
90 | # initialize first date with initial_value
91 | paths[0] = self.initial_value
92 | if self.correlated is False:
93 | # if not correlated generate random numbers
94 | sn1 = sn_random_numbers((1, M, I),
95 | fixed_seed=fixed_seed)
96 | else:
97 | # if correlated use random number object as provided
98 | # in market environment
99 | sn1 = self.random_numbers
100 |
101 | # Standard normally distributed pseudo-random numbers
102 | # for the jump component
103 | sn2 = sn_random_numbers((1, M, I),
104 | fixed_seed=fixed_seed)
105 |
106 | forward_rates = self.discount_curve.get_forward_rates(
107 | self.time_grid, self.paths, dtobjects=True)[1]
108 |
109 | rj = self.lamb * (np.exp(self.mu + 0.5 * self.delt ** 2) - 1)
110 | for t in range(1, len(self.time_grid)):
111 | # select the right time slice from the relevant
112 | # random number set
113 | if self.correlated is False:
114 | ran = sn1[t]
115 | else:
116 | # only with correlation in portfolio context
117 | ran = np.dot(self.cholesky_matrix, sn1[:, t, :])
118 | ran = ran[self.rn_set]
119 | dt = (self.time_grid[t] - self.time_grid[t - 1]).days / day_count
120 | # difference between two dates as year fraction
121 | poi = np.random.poisson(self.lamb * dt, I)
122 | # Poisson distributed pseudo-random numbers for jump component
123 | rt = (forward_rates[t - 1] + forward_rates[t]) / 2
124 | paths[t] = paths[t - 1] * (
125 | np.exp((rt - rj - 0.5 * self.volatility ** 2) * dt +
126 | self.volatility * np.sqrt(dt) * ran) +
127 | (np.exp(self.mu + self.delt * sn2[t]) - 1) * poi)
128 | self.instrument_values = paths
129 |
--------------------------------------------------------------------------------
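The Merton (1976) class takes the same base parameters as geometric_brownian_motion plus the jump constants parsed in __init__. A compact, illustrative sketch:

import datetime as dt
from dx import *

me_jd = market_environment('me_jd', dt.datetime(2016, 1, 1))
for key, value in [('initial_value', 36.), ('volatility', 0.2),
                   ('final_date', dt.datetime(2016, 12, 31)),
                   ('currency', 'EUR'), ('frequency', 'M'), ('paths', 10000),
                   ('lambda', 0.75),   # jump intensity p.a.
                   ('mu', -0.6),       # mean of log jump size
                   ('delta', 0.25)]:   # jump size volatility
    me_jd.add_constant(key, value)
me_jd.add_curve('discount_curve', constant_short_rate('csr', 0.05))

jd = jump_diffusion('jd', me_jd)
paths = jd.get_instrument_values()
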
/dx/models/mean_reverting_diffusion.py:
--------------------------------------------------------------------------------
1 | #
2 | # DX Analytics
3 | # Base Classes and Model Classes for Simulation
4 | # mean_reverting_diffusion.py
5 | #
6 | # DX Analytics is a financial analytics library, mainly for
7 | # derivatives modeling and pricing by Monte Carlo simulation
8 | #
9 | # (c) Dr. Yves J. Hilpisch
10 | # The Python Quants GmbH
11 | #
12 | # This program is free software: you can redistribute it and/or modify
13 | # it under the terms of the GNU Affero General Public License as
14 | # published by the Free Software Foundation, either version 3 of the
15 | # License, or any later version.
16 | #
17 | # This program is distributed in the hope that it will be useful,
18 | # but WITHOUT ANY WARRANTY; without even the implied warranty of
19 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
20 | # GNU Affero General Public License for more details.
21 | #
22 | # You should have received a copy of the GNU Affero General Public License
23 | # along with this program. If not, see http://www.gnu.org/licenses/.
24 | #
25 | from ..frame import *
26 | from .square_root_diffusion import square_root_diffusion
27 |
28 |
29 | class mean_reverting_diffusion(square_root_diffusion):
30 | ''' Class to generate simulated paths based on the
31 | Vasicek (1977) mean-reverting short rate model.
32 |
33 | Attributes
34 | ==========
35 | name : string
36 | name of the object
37 | mar_env : instance of market_environment
38 | market environment data for simulation
39 | corr : boolean
40 | True if correlated with other model object
41 |
42 | Methods
43 | =======
44 | update :
45 | updates parameters
46 | generate_paths :
47 | returns Monte Carlo paths given the market environment
48 | '''
49 |
50 | def __init__(self, name, mar_env, corr=False, truncation=False):
51 | super(mean_reverting_diffusion,
52 | self).__init__(name, mar_env, corr)
53 | self.truncation = truncation
54 |
55 | def generate_paths(self, fixed_seed=True, day_count=365.):
56 | if self.time_grid is None:
57 | self.generate_time_grid()
58 | M = len(self.time_grid)
59 | I = self.paths
60 | paths = np.zeros((M, I))
61 | paths_ = np.zeros_like(paths)
62 | paths[0] = self.initial_value
63 | paths_[0] = self.initial_value
64 | if self.correlated is False:
65 | rand = sn_random_numbers((1, M, I),
66 | fixed_seed=fixed_seed)
67 | else:
68 | rand = self.random_numbers
69 |
70 | for t in range(1, len(self.time_grid)):
71 | dt = (self.time_grid[t] - self.time_grid[t - 1]).days / day_count
72 | if self.correlated is False:
73 | ran = rand[t]
74 | else:
75 | ran = np.dot(self.cholesky_matrix, rand[:, t, :])
76 | ran = ran[self.rn_set]
77 |
78 | # full truncation Euler discretization
79 | if self.truncation is True:
80 | paths_[t] = (paths_[t - 1] + self.kappa *
81 | (self.theta - np.maximum(0, paths_[t - 1])) * dt +
82 | self.volatility * np.sqrt(dt) * ran)
83 | paths[t] = np.maximum(0, paths_[t])
84 | else:
85 | paths[t] = (paths[t - 1] + self.kappa *
86 | (self.theta - paths[t - 1]) * dt +
87 | self.volatility * np.sqrt(dt) * ran)
88 | self.instrument_values = paths
89 |
--------------------------------------------------------------------------------
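A sketch for the mean-reverting (Vasicek-type) class; kappa and theta are parsed by the square_root_diffusion base class, the remaining constants by simulation_class. Values are illustrative:

import datetime as dt
from dx import *

me_mrd = market_environment('me_mrd', dt.datetime(2016, 1, 1))
for key, value in [('initial_value', 0.02), ('volatility', 0.01),
                   ('final_date', dt.datetime(2017, 1, 1)),
                   ('currency', 'EUR'), ('frequency', 'W'), ('paths', 10000),
                   ('kappa', 2.0),    # speed of mean reversion
                   ('theta', 0.05)]:  # long-run mean
    me_mrd.add_constant(key, value)
me_mrd.add_curve('discount_curve', constant_short_rate('csr', 0.02))

mrd = mean_reverting_diffusion('mrd', me_mrd, truncation=True)
paths = mrd.get_instrument_values()
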
/dx/models/sabr_stochastic_volatility.py:
--------------------------------------------------------------------------------
1 | #
2 | # DX Analytics
3 | # Base Classes and Model Classes for Simulation
4 | # sabr_stochastic_volatility.py
5 | #
6 | # DX Analytics is a financial analytics library, mainly for
7 | # derivatives modeling and pricing by Monte Carlo simulation
8 | #
9 | # (c) Dr. Yves J. Hilpisch
10 | # The Python Quants GmbH
11 | #
12 | # This program is free software: you can redistribute it and/or modify
13 | # it under the terms of the GNU Affero General Public License as
14 | # published by the Free Software Foundation, either version 3 of the
15 | # License, or any later version.
16 | #
17 | # This program is distributed in the hope that it will be useful,
18 | # but WITHOUT ANY WARRANTY; without even the implied warranty of
19 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
20 | # GNU Affero General Public License for more details.
21 | #
22 | # You should have received a copy of the GNU Affero General Public License
23 | # along with this program. If not, see http://www.gnu.org/licenses/.
24 | #
25 | from ..frame import *
26 | from .simulation_class import simulation_class
27 |
28 |
29 | class sabr_stochastic_volatility(simulation_class):
30 | ''' Class to generate simulated paths based on the SABR model.
31 |
32 | Attributes
33 | ==========
34 | name : string
35 | name of the object
36 | mar_env : instance of market_environment
37 | market environment data for simulation
38 | corr : boolean
39 | True if correlated with other model object
40 |
41 | Methods
42 | =======
43 | update :
44 | updates parameters
45 | generate_paths :
46 | returns Monte Carlo paths for the market environment
47 | get_volatility_values :
48 | returns array with simulated volatility paths
49 | get_log_normal_implied_vol :
50 | returns the approximation of the lognormal Black implied volatility
51 | as given by Hagan et al. (2002)
52 | '''
53 |
54 | def __init__(self, name, mar_env, corr=False):
55 | super(sabr_stochastic_volatility, self).__init__(name, mar_env, corr)
56 | try:
57 | self.alpha = mar_env.get_constant('alpha') # initial variance
58 | self.volatility = self.alpha ** 0.5 # initial volatility
59 | self.beta = mar_env.get_constant('beta') # exponent of the FWR
60 | self.vol_vol = mar_env.get_constant('vol_vol') # vol of var
61 | self.rho = mar_env.get_constant('rho') # correlation
62 | self.leverage = np.linalg.cholesky(
63 | np.array([[1.0, self.rho], [self.rho, 1.0]]))
64 | self.volatility_values = None
65 | except:
66 | print('Error parsing market environment.')
67 |
68 | def update(self, pricing_date=None, initial_value=None, volatility=None,
69 | alpha=None, beta=None, rho=None, vol_vol=None,
70 | final_date=None):
71 | ''' Updates the attributes of the object. '''
72 | if pricing_date is not None:
73 | self.pricing_date = pricing_date
74 | self.time_grid = None
75 | self.generate_time_grid()
76 | if initial_value is not None:
77 | self.initial_value = initial_value
78 | if volatility is not None:
79 | self.volatility = volatility
80 | self.alpha = volatility ** 2
81 | if alpha is not None:
82 | self.alpha = alpha
83 | if beta is not None:
84 | self.beta = beta
85 | if rho is not None:
86 | self.rho = rho
87 | if vol_vol is not None:
88 | self.vol_vol = vol_vol
89 | if final_date is not None:
90 | self.final_date = final_date
91 | self.time_grid = None
92 | self.instrument_values = None
93 | self.volatility_values = None
94 |
95 | def get_log_normal_implied_vol(self, strike, expiry):
96 | ''' Returns the implied volatility for fixed strike and expiry.
97 | '''
98 | self.check_parameter_set()
99 | one_beta = 1. - self.beta
100 | one_beta_square = one_beta ** 2
101 | if self.initial_value != strike:
102 | fk = self.initial_value * strike
103 | fk_beta = fk ** (one_beta / 2.)
104 | log_fk = math.log(self.initial_value / strike)
105 | z = self.vol_vol / self.alpha * fk_beta * log_fk
106 | x = math.log((math.sqrt(1. - 2 * self.rho * z + z ** 2) +
107 | z - self.rho) / (1 - self.rho))
108 |
109 | sigma_1 = ((self.alpha / fk_beta /
110 | (1 + one_beta_square * log_fk ** 2 / 24) +
111 | (one_beta_square ** 2 * log_fk ** 4 / 1920.)) * z / x)
112 |
113 | sigma_ex = (one_beta_square / 24. * self.alpha ** 2 / fk_beta ** 2 +
114 | 0.25 * self.rho * self.beta * self.vol_vol *
115 | self.alpha / fk_beta +
116 | (2. - 3. * self.rho ** 2) /
117 | 24. * self.vol_vol ** 2)
118 |
119 | sigma = sigma_1 * (1. + sigma_ex * expiry)
120 | else:
121 | f_beta = self.initial_value ** one_beta
122 | f_two_beta = self.initial_value ** (2. - 2 * self.beta)
123 | sigma = ((self.alpha / f_beta) *
124 | (1 + expiry * ((one_beta_square / 24. * self.alpha ** 2 /
125 | f_two_beta) + (0.25 * self.rho * self.beta *
126 | self.vol_vol * self.alpha / f_beta) +
127 | ((2. - 3 * self.rho ** 2) / 24. * self.vol_vol ** 2))))
128 | return sigma
129 |
130 | def calibrate_to_impl_vol(self, implied_vols, maturity, para=list()):
131 | ''' Calibrates the parameters alpha, beta, initial_value and vol_vol
132 | to a set of given implied volatilities.
133 | '''
134 | if len(para) != 4:
135 | para = (self.alpha, self.beta, self.initial_value, self.vol_vol)
136 |
137 | def error_function(para):
138 | self.alpha, self.beta, self.initial_value, self.vol_vol = para
139 | if (self.beta < 0 or self.beta > 1 or self.initial_value <= 0 or
140 | self.vol_vol <= 0):
141 | return 10000
142 | e = 0
143 | for strike in implied_vols.columns:
144 | e += (self.get_log_normal_implied_vol(float(strike), maturity) -
145 | float(implied_vols[strike]) / 100.) ** 2
146 | return e
147 | para = sco.fmin(error_function, para, xtol=0.0000001,
148 | ftol=0.0000001, maxiter=550, maxfun=850)
149 | return para
150 |
151 | def check_parameter_set(self):
152 | ''' Checks if all needed parameters are set.
153 | '''
154 | parameter = ['beta', 'initial_value', 'vol_vol', 'alpha', 'rho']
155 | for p in parameter:
156 | try:
157 | val = getattr(self, p)
158 | except:
159 | val = None
160 | if val is None:
161 | raise ValueError('Model parameter %s is not set.' % p)
162 |
163 | def generate_paths(self, fixed_seed=True, day_count=365.):
164 | ''' Generates Monte Carlo Paths using Euler discretization.
165 | '''
166 | if self.time_grid is None:
167 | self.generate_time_grid()
168 | M = len(self.time_grid)
169 | I = self.paths
170 | paths = np.zeros((M, I))
171 | va = np.zeros_like(paths)
172 | va_ = np.zeros_like(paths)
173 | paths[0] = self.initial_value
174 | va[0] = self.alpha
175 | va_[0] = self.alpha
176 | if self.correlated is False:
177 | sn1 = sn_random_numbers((1, M, I), fixed_seed=fixed_seed)
178 | else:
179 | sn1 = self.random_numbers
180 |
181 | # pseudo-random numbers for the stochastic volatility
182 | sn2 = sn_random_numbers((1, M, I), fixed_seed=fixed_seed)
183 |
184 | for t in range(1, len(self.time_grid)):
185 | dt = (self.time_grid[t] - self.time_grid[t - 1]).days / day_count
186 | square_root_dt = np.sqrt(dt)
187 | if self.correlated is False:
188 | ran = sn1[t]
189 | else:
190 | ran = np.dot(self.cholesky_matrix, sn1[:, t, :])
191 | ran = ran[self.rn_set]
192 | rat = np.array([ran, sn2[t]])
193 | rat = np.dot(self.leverage, rat)
194 |
195 | va_[t] = va_[t - 1] * (1 + self.vol_vol * square_root_dt * rat[1])
196 | va[t] = np.maximum(0, va_[t])
197 |
198 | F_b = np.abs(paths[t - 1]) ** self.beta
199 | p = paths[t - 1] + va_[t] * F_b * square_root_dt * rat[0]
200 | if (self.beta > 0 and self.beta < 1):
201 | paths[t] = np.maximum(0, p)
202 | else:
203 | paths[t] = p
204 |
205 | self.instrument_values = paths
206 | self.volatility_values = np.sqrt(va)
207 |
208 | def get_volatility_values(self):
209 | ''' Returns the volatility values for the model object.
210 | '''
211 | if self.volatility_values is None:
212 | self.generate_paths()
213 | return self.volatility_values
214 |
--------------------------------------------------------------------------------
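An illustrative sketch for the SABR class above, including the Hagan et al. (2002) implied volatility approximation; all parameter values are arbitrary:

import datetime as dt
from dx import *

me_sabr = market_environment('me_sabr', dt.datetime(2016, 1, 1))
for key, value in [('initial_value', 0.025),  # forward rate
                   ('volatility', 0.2),       # required by the base class
                   ('alpha', 0.04), ('beta', 0.5),
                   ('vol_vol', 0.5), ('rho', -0.3),
                   ('final_date', dt.datetime(2017, 1, 1)),
                   ('currency', 'EUR'), ('frequency', 'M'), ('paths', 10000)]:
    me_sabr.add_constant(key, value)
me_sabr.add_curve('discount_curve', constant_short_rate('csr', 0.01))

sabr = sabr_stochastic_volatility('sabr', me_sabr)
iv = sabr.get_log_normal_implied_vol(0.03, 1.0)   # strike, expiry in years
paths = sabr.get_instrument_values()
vols = sabr.get_volatility_values()
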
/dx/models/simulation_class.py:
--------------------------------------------------------------------------------
1 | #
2 | # DX Analytics
3 | # Base Classes and Model Classes for Simulation
4 | # simulation_class.py
5 | #
6 | # DX Analytics is a financial analytics library, mainly for
7 | # derivatives modeling and pricing by Monte Carlo simulation
8 | #
9 | # (c) Dr. Yves J. Hilpisch
10 | # The Python Quants GmbH
11 | #
12 | # This program is free software: you can redistribute it and/or modify
13 | # it under the terms of the GNU Affero General Public License as
14 | # published by the Free Software Foundation, either version 3 of the
15 | # License, or any later version.
16 | #
17 | # This program is distributed in the hope that it will be useful,
18 | # but WITHOUT ANY WARRANTY; without even the implied warranty of
19 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
20 | # GNU Affero General Public License for more details.
21 | #
22 | # You should have received a copy of the GNU Affero General Public License
23 | # along with this program. If not, see http://www.gnu.org/licenses/.
24 | #
25 |
26 | from ..frame import *
27 |
28 |
29 | class simulation_class(object):
30 | ''' Providing base methods for simulation classes.
31 |
32 | Attributes
33 | ==========
34 | name : string
35 | name of the object
36 | mar_env : instance of market_environment
37 | market environment data for simulation
38 | corr : boolean
39 | True if correlated with other model object
40 |
41 | Methods
42 | =======
43 | generate_time_grid :
44 | returns time grid for simulation
45 | get_instrument_values:
46 | returns the current instrument values (array)
47 | '''
48 |
49 | def __init__(self, name, mar_env, corr):
50 | try:
51 | self.name = name
52 | self.pricing_date = mar_env.pricing_date
53 | self.initial_value = mar_env.get_constant('initial_value')
54 | self.volatility = mar_env.get_constant('volatility')
55 | self.final_date = mar_env.get_constant('final_date')
56 | self.currency = mar_env.get_constant('currency')
57 | self.frequency = mar_env.get_constant('frequency')
58 | self.paths = mar_env.get_constant('paths')
59 | self.discount_curve = mar_env.get_curve('discount_curve')
60 | try:
61 | # if time_grid in mar_env take this
62 | # (for portfolio valuation)
63 | self.time_grid = mar_env.get_list('time_grid')
64 | except:
65 | self.time_grid = None
66 | try:
67 | # if there are special dates, then add these
68 | self.special_dates = mar_env.get_list('special_dates')
69 | except:
70 | self.special_dates = []
71 | self.instrument_values = None
72 | self.correlated = corr
73 | if corr is True:
74 | # only needed in a portfolio context when
75 | # risk factors are correlated
76 | self.cholesky_matrix = mar_env.get_list('cholesky_matrix')
77 | self.rn_set = mar_env.get_list('rn_set')[self.name]
78 | self.random_numbers = mar_env.get_list('random_numbers')
79 | except:
80 | print('Error parsing market environment.')
81 |
82 | def generate_time_grid(self):
83 | start = self.pricing_date
84 | end = self.final_date
85 | # pandas date_range function
86 | # freq = e.g. 'B' for Business Day,
87 | # 'W' for Weekly, 'M' for Monthly
88 | time_grid = pd.date_range(start=start, end=end,
89 | freq=self.frequency).to_pydatetime()
90 | time_grid = list(time_grid)
91 | # enhance time_grid by start, end and special_dates
92 | if start not in time_grid:
93 | time_grid.insert(0, start)
94 | # insert start date if not in list
95 | if end not in time_grid:
96 | time_grid.append(end)
97 | # insert end date if not in list
98 | if len(self.special_dates) > 0:
99 | # add all special dates later than self.pricing_date
100 | add_dates = [d for d in self.special_dates
101 | if d > self.pricing_date]
102 | time_grid.extend(add_dates)
103 | # delete duplicates and sort
104 | time_grid = sorted(set(time_grid))
105 | self.time_grid = np.array(time_grid)
106 |
107 | def get_instrument_values(self, fixed_seed=True):
108 | if self.instrument_values is None:
109 | # only initiate simulation if there are no instrument values
110 | self.generate_paths(fixed_seed=fixed_seed, day_count=365.)
111 | elif fixed_seed is False:
112 | # also initiate re-simulation when fixed_seed is False
113 | self.generate_paths(fixed_seed=fixed_seed, day_count=365.)
114 | return self.instrument_values
115 |
116 |
117 | class general_underlying(object):
118 | ''' Needed for VAR-based portfolio modeling and valuation. '''
119 | def __init__(self, name, data, val_env):
120 | self.name = name
121 | self.data = data
122 | self.paths = val_env.get_constant('paths')
123 | self.frequency = 'B'
124 | self.discount_curve = val_env.get_curve('discount_curve')
125 | self.special_dates = []
126 | self.time_grid = val_env.get_list('time_grid')
127 | self.fit_model = None
128 |
129 | def get_instrument_values(self, fixed_seed=False):
130 | return self.data.values
131 |
--------------------------------------------------------------------------------
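The time grid logic of generate_time_grid can be inspected on any model object; a short sketch, here using the geometric_brownian_motion class defined above (values illustrative):

import datetime as dt
from dx import *

me = market_environment('me_grid', dt.datetime(2016, 1, 1))
for key, value in [('initial_value', 100.), ('volatility', 0.2),
                   ('final_date', dt.datetime(2016, 6, 30)),
                   ('currency', 'EUR'), ('frequency', 'W'), ('paths', 1000)]:
    me.add_constant(key, value)
me.add_list('special_dates', [dt.datetime(2016, 3, 15)])
me.add_curve('discount_curve', constant_short_rate('csr', 0.01))

gbm = geometric_brownian_motion('gbm', me)
gbm.generate_time_grid()
# weekly dates plus pricing date, final date and the special date,
# duplicates removed and the result sorted
print(gbm.time_grid[0], gbm.time_grid[-1], len(gbm.time_grid))
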
/dx/models/square_root_diffusion.py:
--------------------------------------------------------------------------------
1 | #
2 | # DX Analytics
3 | # Base Classes and Model Classes for Simulation
4 | # square_root_diffusion.py
5 | #
6 | # DX Analytics is a financial analytics library, mainly for
7 | # derivatives modeling and pricing by Monte Carlo simulation
8 | #
9 | # (c) Dr. Yves J. Hilpisch
10 | # The Python Quants GmbH
11 | #
12 | # This program is free software: you can redistribute it and/or modify
13 | # it under the terms of the GNU Affero General Public License as
14 | # published by the Free Software Foundation, either version 3 of the
15 | # License, or any later version.
16 | #
17 | # This program is distributed in the hope that it will be useful,
18 | # but WITHOUT ANY WARRANTY; without even the implied warranty of
19 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
20 | # GNU Affero General Public License for more details.
21 | #
22 | # You should have received a copy of the GNU Affero General Public License
23 | # along with this program. If not, see http://www.gnu.org/licenses/.
24 | #
25 | from ..frame import *
26 | from .simulation_class import simulation_class
27 |
28 |
29 | class square_root_diffusion(simulation_class):
30 | ''' Class to generate simulated paths based on
31 | the Cox-Ingersoll-Ross (1985) square-root diffusion.
32 |
33 | Attributes
34 | ==========
35 | name : string
36 | name of the object
37 | mar_env : instance of market_environment
38 | market environment data for simulation
39 | corr : boolean
40 | True if correlated with other model object
41 |
42 | Methods
43 | =======
44 | update :
45 | updates parameters
46 | generate_paths :
47 | returns Monte Carlo paths given the market environment
48 | '''
49 |
50 | def __init__(self, name, mar_env, corr=False):
51 | super(square_root_diffusion, self).__init__(name, mar_env, corr)
52 | try:
53 | self.kappa = mar_env.get_constant('kappa')
54 | self.theta = mar_env.get_constant('theta')
55 | except:
56 | print('Error parsing market environment.')
57 |
58 | def update(self, pricing_date=None, initial_value=None, volatility=None,
59 | kappa=None, theta=None, final_date=None):
60 | if pricing_date is not None:
61 | self.pricing_date = pricing_date
62 | self.time_grid = None
63 | self.generate_time_grid()
64 | if initial_value is not None:
65 | self.initial_value = initial_value
66 | if volatility is not None:
67 | self.volatility = volatility
68 | if kappa is not None:
69 | self.kappa = kappa
70 | if theta is not None:
71 | self.theta = theta
72 | if final_date is not None:
73 | self.final_date = final_date
74 | self.instrument_values = None
75 |
76 | def generate_paths(self, fixed_seed=True, day_count=365.):
77 | if self.time_grid is None:
78 | self.generate_time_grid()
79 | M = len(self.time_grid)
80 | I = self.paths
81 | paths = np.zeros((M, I))
82 | paths_ = np.zeros_like(paths)
83 | paths[0] = self.initial_value
84 | paths_[0] = self.initial_value
85 | if self.correlated is False:
86 | rand = sn_random_numbers((1, M, I),
87 | fixed_seed=fixed_seed)
88 | else:
89 | rand = self.random_numbers
90 |
91 | for t in range(1, len(self.time_grid)):
92 | dt = (self.time_grid[t] - self.time_grid[t - 1]).days / day_count
93 | if self.correlated is False:
94 | ran = rand[t]
95 | else:
96 | ran = np.dot(self.cholesky_matrix, rand[:, t, :])
97 | ran = ran[self.rn_set]
98 |
99 | # full truncation Euler discretization
100 | paths_[t] = (paths_[t - 1] + self.kappa *
101 | (self.theta - np.maximum(0, paths_[t - 1])) * dt +
102 | np.sqrt(np.maximum(0, paths_[t - 1])) *
103 | self.volatility * np.sqrt(dt) * ran)
104 | paths[t] = np.maximum(0, paths_[t])
105 | self.instrument_values = paths
106 |
107 |
108 | class stochastic_short_rate(object):
109 | ''' Class for discounting based on stochastic short rates
110 | based on square-root diffusion process.
111 |
112 | Attributes
113 | ==========
114 | name : string
115 | name of the object
116 | mar_env : market_environment object
117 | containing all relevant parameters
118 |
119 | Methods
120 | =======
121 | get_forward_rates :
122 | return forward rates given a time list/array
123 | get_discount_factors :
124 | return discount factors given a time list/array
125 | '''
126 |
127 | def __init__(self, name, mar_env):
128 | self.name = name
129 | try:
130 | try:
131 | mar_env.get_curve('discount_curve')
132 | except:
133 | mar_env.add_curve('discount_curve', 0.0) # dummy
134 | try:
135 | mar_env.get_constant('currency')
136 | except:
137 | mar_env.add_constant('currency', 'CUR') # dummy
138 | self.process = square_root_diffusion('process', mar_env)
139 | self.process.generate_paths()
140 | except:
141 | raise ValueError('Error parsing market environment.')
142 |
143 | def get_forward_rates(self, time_list, paths, dtobjects=True):
144 | if len(self.process.time_grid) != len(time_list) \
145 | or self.process.paths != paths:
146 | self.process.paths = paths
147 | self.process.time_grid = time_list
148 | self.process.instrument_values = None
149 | rates = self.process.get_instrument_values()
150 | return time_list, rates
151 |
152 | def get_discount_factors(self, time_list, paths, dtobjects=True):
153 | discount_factors = []
154 | if dtobjects is True:
155 | dlist = get_year_deltas(time_list)
156 | else:
157 | dlist = time_list
158 | forward_rate = self.get_forward_rates(time_list, paths, dtobjects)[1]
159 | for no in range(len(dlist)):
160 | factor = np.zeros_like(forward_rate[0, :])
161 | for d in range(no, len(dlist) - 1):
162 | factor += ((dlist[d + 1] - dlist[d]) *
163 | (0.5 * (forward_rate[d + 1] + forward_rate[d])))
164 | discount_factors.append(np.exp(-factor))
165 | return time_list, np.array(discount_factors)
166 |
167 |
168 | def srd_forwards(initial_value, kts, time_grid):
169 | ''' Function for forward vols/rates in SRD model.
170 |
171 | Parameters
172 | ==========
173 | initial_value : float
174 | initial value of the process
175 | kts :
176 | (kappa, theta, sigma)
177 | kappa : float
178 | mean-reversion factor
179 | theta : float
180 | long-run mean
181 | sigma : float
182 | volatility factor (vol-vol)
183 | time_grid : list/array of datetime objects
184 | dates to generate forwards for
185 |
186 | Returns
187 | =======
188 | forwards : array
189 | forward vols/rates
190 | '''
191 | kappa, theta, sigma = kts
192 | t = get_year_deltas(time_grid)
193 | g = math.sqrt(kappa ** 2 + 2 * sigma ** 2)
194 | sum1 = ((kappa * theta * (np.exp(g * t) - 1)) /
195 | (2 * g + (kappa + g) * (np.exp(g * t) - 1)))
196 | sum2 = initial_value * ((4 * g ** 2 * np.exp(g * t)) /
197 | (2 * g + (kappa + g) * (np.exp(g * t) - 1)) ** 2)
198 | forwards = sum1 + sum2
199 | return forwards
200 |
--------------------------------------------------------------------------------
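A sketch for the CIR square-root diffusion and the srd_forwards helper above (illustrative values; the names are assumed to be exported by the dx package):

import datetime as dt
from dx import *

me_srd = market_environment('me_srd', dt.datetime(2016, 1, 1))
for key, value in [('initial_value', 0.025), ('volatility', 0.1),
                   ('final_date', dt.datetime(2017, 1, 1)),
                   ('currency', 'EUR'), ('frequency', 'W'), ('paths', 10000),
                   ('kappa', 2.0), ('theta', 0.04)]:
    me_srd.add_constant(key, value)
me_srd.add_curve('discount_curve', constant_short_rate('csr', 0.02))

srd = square_root_diffusion('srd', me_srd)
paths = srd.get_instrument_values()

# model forward rates/vols on the simulation grid for the same parameters
fwds = srd_forwards(0.025, (2.0, 0.04, 0.1), srd.time_grid)
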
/dx/models/square_root_jump_diffusion.py:
--------------------------------------------------------------------------------
1 | #
2 | # DX Analytics
3 | # Base Classes and Model Classes for Simulation
4 | # square_root_jump_diffusion.py
5 | #
6 | # DX Analytics is a financial analytics library, mainly for
7 | # derivatives modeling and pricing by Monte Carlo simulation
8 | #
9 | # (c) Dr. Yves J. Hilpisch
10 | # The Python Quants GmbH
11 | #
12 | # This program is free software: you can redistribute it and/or modify
13 | # it under the terms of the GNU Affero General Public License as
14 | # published by the Free Software Foundation, either version 3 of the
15 | # License, or any later version.
16 | #
17 | # This program is distributed in the hope that it will be useful,
18 | # but WITHOUT ANY WARRANTY; without even the implied warranty of
19 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
20 | # GNU Affero General Public License for more details.
21 | #
22 | # You should have received a copy of the GNU Affero General Public License
23 | # along with this program. If not, see http://www.gnu.org/licenses/.
24 | #
25 | from ..frame import *
26 | from .simulation_class import simulation_class
27 | from .square_root_diffusion import *
28 |
29 |
30 | class square_root_jump_diffusion(simulation_class):
31 | ''' Class to generate simulated paths based on
32 | the square-root jump diffusion model.
33 |
34 | Attributes
35 | ==========
36 | name : string
37 | name of the object
38 | mar_env : instance of market_environment
39 | market environment data for simulation
40 | corr : boolean
41 | True if correlated with other model object
42 |
43 | Methods
44 | =======
45 | update :
46 | updates parameters
47 | generate_paths :
48 | returns Monte Carlo paths for the market environment
49 | '''
50 |
51 | def __init__(self, name, mar_env, corr=False):
52 | super(square_root_jump_diffusion, self).__init__(name, mar_env, corr)
53 | try:
54 | self.kappa = mar_env.get_constant('kappa')
55 | self.theta = mar_env.get_constant('theta')
56 | self.lamb = mar_env.get_constant('lambda')
57 | self.mu = mar_env.get_constant('mu')
58 | self.delt = mar_env.get_constant('delta')
59 | except:
60 | print('Error parsing market environment.')
61 |
62 | def update(self, pricing_date=None, initial_value=None, volatility=None,
63 | kappa=None, theta=None, lamb=None, mu=None, delt=None,
64 | final_date=None):
65 | if pricing_date is not None:
66 | self.pricing_date = pricing_date
67 | self.time_grid = None
68 | self.generate_time_grid()
69 | if initial_value is not None:
70 | self.initial_value = initial_value
71 | if volatility is not None:
72 | self.volatility = volatility
73 | if kappa is not None:
74 | self.kappa = kappa
75 | if theta is not None:
76 | self.theta = theta
77 | if lamb is not None:
78 | self.lamb = lamb
79 | if mu is not None:
80 | self.mu = mu
81 | if delt is not None:
82 | self.delt = delt
83 | if final_date is not None:
84 | self.final_date = final_date
85 | self.instrument_values = None
86 | self.time_grid = None
87 |
88 | def generate_paths(self, fixed_seed=True, day_count=365.):
89 | if self.time_grid is None:
90 | self.generate_time_grid()
91 | M = len(self.time_grid)
92 | I = self.paths
93 | paths = np.zeros((M, I))
94 | paths_ = np.zeros_like(paths)
95 | paths[0] = self.initial_value
96 | paths_[0] = self.initial_value
97 | if self.correlated is False:
98 | rand = sn_random_numbers((1, M, I),
99 | fixed_seed=fixed_seed)
100 | else:
101 | rand = self.random_numbers
102 | snr = sn_random_numbers((1, M, I),
103 | fixed_seed=fixed_seed)
104 | rj = self.lamb * (np.exp(self.mu + 0.5 * self.delt ** 2) - 1)
105 |
106 | for t in range(1, len(self.time_grid)):
107 | dt = (self.time_grid[t] - self.time_grid[t - 1]).days / day_count
108 | if self.correlated is False:
109 | ran = rand[t]
110 | else:
111 | ran = np.dot(self.cholesky_matrix, rand[:, t, :])
112 | ran = ran[self.rn_set]
113 | poi = np.random.poisson(self.lamb * dt, I)
114 | # full truncation Euler discretization
115 | paths_[t, :] = (paths_[t - 1, :] + self.kappa *
116 | (self.theta - np.maximum(0, paths_[t - 1, :])) * dt +
117 | np.sqrt(np.maximum(0, paths_[t - 1, :])) *
118 | self.volatility * np.sqrt(dt) * ran +
119 | ((np.exp(self.mu + self.delt * snr[t]) - 1) * poi) *
120 | np.maximum(0, paths_[t - 1, :]) - rj * dt)
121 | paths[t, :] = np.maximum(0, paths_[t, :])
122 | self.instrument_values = paths
123 |
124 |
125 | class square_root_jump_diffusion_plus(square_root_jump_diffusion):
126 | ''' Class to generate simulated paths based on
127 | the square-root jump diffusion model with term structure.
128 |
129 | Attributes
130 | ==========
131 | name : string
132 | name of the object
133 | mar_env : instance of market_environment
134 | market environment data for simulation
135 | corr : boolean
136 | True if correlated with other model object
137 |
138 | Methods
139 | =======
140 | srd_forward_error :
141 | error function for forward rate/vols calibration
142 | generate_shift_base :
143 | generates a shift base to take term structure into account
144 | update :
145 | updates parameters
146 | update_shift_values :
147 | updates shift values for term structure
148 | generate_paths :
149 | returns Monte Carlo paths for the market environment
150 | update_forward_rates :
151 | updates forward rates (vol, int. rates) for given time grid
152 | '''
153 |
154 | def __init__(self, name, mar_env, corr=False):
155 | super(square_root_jump_diffusion_plus,
156 | self).__init__(name, mar_env, corr)
157 | try:
158 | self.term_structure = mar_env.get_curve('term_structure')
159 | except:
160 | self.term_structure = None
161 | print('Missing Term Structure.')
162 |
163 | self.forward_rates = []
164 | self.shift_base = None
165 | self.shift_values = []
166 |
167 | def srd_forward_error(self, p0):
168 | if p0[0] < 0 or p0[1] < 0 or p0[2] < 0:
169 | return 100
170 | f_model = srd_forwards(self.initial_value, p0,
171 | self.term_structure[:, 0])
172 |
173 | MSE = np.sum((self.term_structure[:, 1] -
174 | f_model) ** 2) / len(f_model)
175 | return MSE
176 |
177 | def generate_shift_base(self, p0):
178 | # calibration
179 | opt = sco.fmin(self.srd_forward_error, p0)
180 | # shift_calculation
181 | f_model = srd_forwards(self.initial_value, opt,
182 | self.term_structure[:, 0])
183 | shifts = self.term_structure[:, 1] - f_model
184 | self.shift_base = np.array((self.term_structure[:, 0], shifts)).T
185 |
186 | def update_shift_values(self, k=1):
187 | if self.shift_base is not None:
188 | t = get_year_deltas(self.shift_base[:, 0])
189 | tck = sci.splrep(t, self.shift_base[:, 1], k=k)
190 | self.generate_time_grid()
191 | st = get_year_deltas(self.time_grid)
192 | self.shift_values = np.array(list(zip(self.time_grid,
193 | sci.splev(st, tck, der=0))))
194 | else:
195 | self.shift_values = np.array(list(zip(self.time_grid,
196 | np.zeros(len(self.time_grid)))))
197 |
198 | def generate_paths(self, fixed_seed=True, day_count=365.):
199 | if self.time_grid is None:
200 | self.generate_time_grid()
201 | self.update_shift_values()
202 | M = len(self.time_grid)
203 | I = self.paths
204 | paths = np.zeros((M, I))
205 | paths_ = np.zeros_like(paths)
206 | paths[0] = self.initial_value
207 | paths_[0] = self.initial_value
208 | if self.correlated is False:
209 | rand = sn_random_numbers((1, M, I),
210 | fixed_seed=fixed_seed)
211 | else:
212 | rand = self.random_numbers
213 | snr = sn_random_numbers((1, M, I),
214 | fixed_seed=fixed_seed)
215 | # forward_rates = self.discount_curve.get_forward_rates(
216 | # self.time_grid, dtobjects=True)
217 | rj = self.lamb * (np.exp(self.mu + 0.5 * self.delt ** 2) - 1)
218 | for t in range(1, len(self.time_grid)):
219 | dt = (self.time_grid[t] - self.time_grid[t - 1]).days / day_count
220 | if self.correlated is False:
221 | ran = rand[t]
222 | else:
223 | ran = np.dot(self.cholesky_matrix, rand[:, t, :])
224 | ran = ran[self.rn_set]
225 | poi = np.random.poisson(self.lamb * dt, I)
226 | # full truncation Euler discretization
227 | paths_[t] = (paths_[t - 1] + self.kappa *
228 | (self.theta - np.maximum(0, paths_[t - 1])) * dt +
229 | np.sqrt(np.maximum(0, paths_[t - 1])) *
230 | self.volatility * np.sqrt(dt) * ran +
231 | ((np.exp(self.mu + self.delt * snr[t]) - 1) * poi) *
232 | np.maximum(0, paths_[t - 1]) - rj * dt)
233 | paths[t] = np.maximum(0, paths_[t]) + self.shift_values[t, 1]
234 | self.instrument_values = paths
235 |
236 | def update_forward_rates(self, time_grid=None):
237 | if time_grid is None:
238 | self.generate_time_grid()
239 | time_grid = self.time_grid
240 | t = get_year_deltas(time_grid)
241 | g = np.sqrt(self.kappa ** 2 + 2 * self.volatility ** 2)
242 | sum1 = ((self.kappa * self.theta * (np.exp(g * t) - 1)) /
243 | (2 * g + (self.kappa + g) * (np.exp(g * t) - 1)))
244 | sum2 = self.initial_value * ((4 * g ** 2 * np.exp(g * t)) /
245 | (2 * g + (self.kappa + g) *
246 | (np.exp(g * t) - 1)) ** 2)
247 | self.forward_rates = np.array(list(zip(time_grid, sum1 + sum2)))
248 |
--------------------------------------------------------------------------------
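The plain square_root_jump_diffusion class combines the CIR dynamics with the Merton-type jump constants; a compact, illustrative sketch (the term-structure variant additionally expects a 'term_structure' curve):

import datetime as dt
from dx import *

me_srjd = market_environment('me_srjd', dt.datetime(2016, 1, 1))
for key, value in [('initial_value', 0.025), ('volatility', 0.1),
                   ('final_date', dt.datetime(2017, 1, 1)),
                   ('currency', 'EUR'), ('frequency', 'W'), ('paths', 10000),
                   ('kappa', 2.0), ('theta', 0.04),
                   ('lambda', 0.5), ('mu', 0.2), ('delta', 0.1)]:
    me_srjd.add_constant(key, value)
me_srjd.add_curve('discount_curve', constant_short_rate('csr', 0.02))

srjd = square_root_jump_diffusion('srjd', me_srjd)
paths = srjd.get_instrument_values()
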
/dx/models/stoch_vol_jump_diffusion.py:
--------------------------------------------------------------------------------
1 | #
2 | # DX Analytics
3 | # Base Classes and Model Classes for Simulation
4 | # stoch_vol_jump_diffusion.py
5 | #
6 | # DX Analytics is a financial analytics library, mainly for
7 | # derivatives modeling and pricing by Monte Carlo simulation
8 | #
9 | # (c) Dr. Yves J. Hilpisch
10 | # The Python Quants GmbH
11 | #
12 | # This program is free software: you can redistribute it and/or modify
13 | # it under the terms of the GNU Affero General Public License as
14 | # published by the Free Software Foundation, either version 3 of the
15 | # License, or any later version.
16 | #
17 | # This program is distributed in the hope that it will be useful,
18 | # but WITHOUT ANY WARRANTY; without even the implied warranty of
19 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
20 | # GNU Affero General Public License for more details.
21 | #
22 | # You should have received a copy of the GNU Affero General Public License
23 | # along with this program. If not, see http://www.gnu.org/licenses/.
24 | #
25 | from ..frame import *
26 | from .simulation_class import simulation_class
27 |
28 |
29 | class stoch_vol_jump_diffusion(simulation_class):
30 | ''' Class to generate simulated paths based on
31 | the Bates (1996) stochastic volatility jump-diffusion model.
32 |
33 | Attributes
34 | ==========
35 | name : string
36 | name of the object
37 | mar_env : instance of market_environment
38 | market environment data for simulation
39 | corr : boolean
40 | True if correlated with other model object
41 |
42 | Methods
43 | =======
44 | update :
45 | updates parameters
46 | generate_paths :
47 | returns Monte Carlo paths for the market environment
48 | get_volatility_values :
49 | returns array with simulated volatility paths
50 | '''
51 |
52 | def __init__(self, name, mar_env, corr=False):
53 | super(stoch_vol_jump_diffusion, self).__init__(name, mar_env, corr)
54 | try:
55 | self.lamb = mar_env.get_constant('lambda')
56 | self.mu = mar_env.get_constant('mu')
57 | self.delt = mar_env.get_constant('delta')
58 |
59 | self.rho = mar_env.get_constant('rho')
60 | self.leverage = np.linalg.cholesky(
61 | np.array([[1.0, self.rho], [self.rho, 1.0]]))
62 |
63 | self.kappa = mar_env.get_constant('kappa')
64 | self.theta = mar_env.get_constant('theta')
65 | self.vol_vol = mar_env.get_constant('vol_vol')
66 |
67 | self.volatility_values = None
68 | except:
69 | print('Error parsing market environment.')
70 |
71 | def update(self, pricing_date=None, initial_value=None, volatility=None,
72 | vol_vol=None, kappa=None, theta=None, rho=None, lamb=None,
73 | mu=None, delta=None, final_date=None):
74 | if pricing_date is not None:
75 | self.pricing_date = pricing_date
76 | self.time_grid = None
77 | self.generate_time_grid()
78 | if initial_value is not None:
79 | self.initial_value = initial_value
80 | if volatility is not None:
81 | self.volatility = volatility
82 | if vol_vol is not None:
83 | self.vol_vol = vol_vol
84 | if kappa is not None:
85 | self.kappa = kappa
86 | if theta is not None:
87 | self.theta = theta
88 | if rho is not None:
89 | self.rho = rho
90 | if lamb is not None:
91 | self.lamb = lamb
92 | if mu is not None:
93 | self.mu = mu
94 | if delta is not None:
95 | self.delt = delta
96 | if final_date is not None:
97 | self.final_date = final_date
98 | self.time_grid = None
99 | self.instrument_values = None
100 | self.volatility_values = None
101 |
102 | def generate_paths(self, fixed_seed=True, day_count=365.):
103 | if self.time_grid is None:
104 | self.generate_time_grid()
105 | M = len(self.time_grid)
106 | I = self.paths
107 | paths = np.zeros((M, I))
108 | va = np.zeros_like(paths)
109 | va_ = np.zeros_like(paths)
110 | paths[0] = self.initial_value
111 | va[0] = self.volatility ** 2
112 | va_[0] = self.volatility ** 2
113 | if self.correlated is False:
114 | sn1 = sn_random_numbers((1, M, I),
115 | fixed_seed=fixed_seed)
116 | else:
117 | sn1 = self.random_numbers
118 |
119 | # Pseudo-random numbers for the jump component
120 | sn2 = sn_random_numbers((1, M, I),
121 | fixed_seed=fixed_seed)
122 | # Pseudo-random numbers for the stochastic volatility
123 | sn3 = sn_random_numbers((1, M, I),
124 | fixed_seed=fixed_seed)
125 |
126 | forward_rates = self.discount_curve.get_forward_rates(
127 | self.time_grid, self.paths, dtobjects=True)[1]
128 |
129 | rj = self.lamb * (np.exp(self.mu + 0.5 * self.delt ** 2) - 1)
130 |
131 | for t in range(1, len(self.time_grid)):
132 | dt = (self.time_grid[t] - self.time_grid[t - 1]).days / day_count
133 | if self.correlated is False:
134 | ran = sn1[t]
135 | else:
136 | ran = np.dot(self.cholesky_matrix, sn1[:, t, :])
137 | ran = ran[self.rn_set]
138 | rat = np.array([ran, sn3[t]])
139 | rat = np.dot(self.leverage, rat)
140 |
141 | va_[t] = (va_[t - 1] + self.kappa *
142 | (self.theta - np.maximum(0, va_[t - 1])) * dt +
143 | np.sqrt(np.maximum(0, va_[t - 1])) *
144 | self.vol_vol * np.sqrt(dt) * rat[1])
145 | va[t] = np.maximum(0, va_[t])
146 |
147 | poi = np.random.poisson(self.lamb * dt, I)
148 |
149 | rt = (forward_rates[t - 1] + forward_rates[t]) / 2
150 | paths[t] = paths[t - 1] * (
151 | np.exp((rt - rj - 0.5 * va[t]) * dt +
152 | np.sqrt(va[t]) * np.sqrt(dt) * rat[0]) +
153 | (np.exp(self.mu + self.delt * sn2[t]) - 1) * poi)
154 |
155 | # moment matching stoch vol part
156 | paths[t] -= np.mean(paths[t - 1] * np.sqrt(va[t]) *
157 | math.sqrt(dt) * rat[0])
158 |
159 | self.instrument_values = paths
160 | self.volatility_values = np.sqrt(va)
161 |
162 | def get_volatility_values(self):
163 | if self.volatility_values is None:
164 |             self.generate_paths(fixed_seed=True)
165 | return self.volatility_values
166 |
--------------------------------------------------------------------------------
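Note on the jump handling in generate_paths above: the term rj = lamb * (np.exp(mu + 0.5 * delt ** 2) - 1) is the martingale compensator of the compound Poisson jumps, i.e. the expected relative jump size per unit of time, which is subtracted from the drift so that the discounted index level keeps its risk-neutral mean. Below is a minimal standalone NumPy sketch of the same jump mechanics with constant (instead of stochastic) volatility; all parameter values are purely illustrative and nothing here relies on the dx classes.

import numpy as np

np.random.seed(1000)
S0, r, sigma = 100.0, 0.05, 0.2              # spot, short rate, diffusion vol
lamb, mu, delta = 0.75, -0.6, 0.25           # jump intensity, mean log jump, jump vol
T, M, I = 1.0, 50, 100000
dt = T / M
rj = lamb * (np.exp(mu + 0.5 * delta ** 2) - 1)   # jump compensator

S = np.zeros((M + 1, I))
S[0] = S0
for t in range(1, M + 1):
    z1 = np.random.standard_normal(I)        # diffusion shocks
    z2 = np.random.standard_normal(I)        # jump size shocks
    poi = np.random.poisson(lamb * dt, I)    # number of jumps in the step
    S[t] = S[t - 1] * (np.exp((r - rj - 0.5 * sigma ** 2) * dt +
                              sigma * np.sqrt(dt) * z1) +
                       (np.exp(mu + delta * z2) - 1) * poi)

# risk-neutral check: the discounted terminal mean should be close to S0
print(np.mean(S[-1]) * np.exp(-r * T))

The class above applies the same update with va[t] from the Heston variance process in place of the constant sigma ** 2, plus a moment-matching correction on the diffusion term.
--------------------------------------------------------------------------------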
/dx/models/stochastic_volatility.py:
--------------------------------------------------------------------------------
1 | #
2 | # DX Analytics
3 | # Base Classes and Model Classes for Simulation
4 | # stochastic_volatility.py
5 | #
6 | # DX Analytics is a financial analytics library, mainly for
7 | # derivatives modeling and pricing by Monte Carlo simulation
8 | #
9 | # (c) Dr. Yves J. Hilpisch
10 | # The Python Quants GmbH
11 | #
12 | # This program is free software: you can redistribute it and/or modify
13 | # it under the terms of the GNU Affero General Public License as
14 | # published by the Free Software Foundation, either version 3 of the
15 | # License, or any later version.
16 | #
17 | # This program is distributed in the hope that it will be useful,
18 | # but WITHOUT ANY WARRANTY; without even the implied warranty of
19 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
20 | # GNU Affero General Public License for more details.
21 | #
22 | # You should have received a copy of the GNU Affero General Public License
23 | # along with this program. If not, see http://www.gnu.org/licenses/.
24 | #
25 | from ..frame import *
26 | from .simulation_class import simulation_class
27 |
28 |
29 | class stochastic_volatility(simulation_class):
30 | ''' Class to generate simulated paths based on
31 | the Heston (1993) stochastic volatility model.
32 |
33 | Attributes
34 | ==========
35 | name : string
36 | name of the object
37 | mar_env : instance of market_environment
38 | market environment data for simulation
39 | corr : boolean
40 | True if correlated with other model object
41 |
42 | Methods
43 | =======
44 | update :
45 | updates parameters
46 | generate_paths :
47 | returns Monte Carlo paths given the market environment
48 | get_volatility_values :
49 | returns array with simulated volatility paths
50 | '''
51 |
52 | def __init__(self, name, mar_env, corr=False):
53 | super(stochastic_volatility, self).__init__(name, mar_env, corr)
54 | try:
55 | self.kappa = mar_env.get_constant('kappa')
56 | self.theta = mar_env.get_constant('theta')
57 | self.vol_vol = mar_env.get_constant('vol_vol')
58 |
59 | self.rho = mar_env.get_constant('rho')
60 | self.leverage = np.linalg.cholesky(
61 | np.array([[1.0, self.rho], [self.rho, 1.0]]))
62 |
63 | self.volatility_values = None
64 | except:
65 | print('Error parsing market environment.')
66 |
67 | def update(self, pricing_date=None, initial_value=None, volatility=None,
68 | vol_vol=None, kappa=None, theta=None, rho=None, final_date=None):
69 | if pricing_date is not None:
70 | self.pricing_date = pricing_date
71 | self.time_grid = None
72 | self.generate_time_grid()
73 | if initial_value is not None:
74 | self.initial_value = initial_value
75 | if volatility is not None:
76 | self.volatility = volatility
77 | if vol_vol is not None:
78 | self.vol_vol = vol_vol
79 | if kappa is not None:
80 | self.kappa = kappa
81 | if theta is not None:
82 | self.theta = theta
83 | if rho is not None:
84 | self.rho = rho
85 | if final_date is not None:
86 | self.final_date = final_date
87 | self.time_grid = None
88 | self.instrument_values = None
89 | self.volatility_values = None
90 |
91 | def generate_paths(self, fixed_seed=True, day_count=365.):
92 | if self.time_grid is None:
93 | self.generate_time_grid()
94 | M = len(self.time_grid)
95 | I = self.paths
96 | paths = np.zeros((M, I))
97 | va = np.zeros_like(paths)
98 | va_ = np.zeros_like(paths)
99 | paths[0] = self.initial_value
100 | va[0] = self.volatility ** 2
101 | va_[0] = self.volatility ** 2
102 | if self.correlated is False:
103 | sn1 = sn_random_numbers((1, M, I),
104 | fixed_seed=fixed_seed)
105 | else:
106 | sn1 = self.random_numbers
107 |
108 | # Pseudo-random numbers for the stochastic volatility
109 | sn2 = sn_random_numbers((1, M, I), fixed_seed=fixed_seed)
110 |
111 | forward_rates = self.discount_curve.get_forward_rates(
112 | self.time_grid, self.paths, dtobjects=True)[1]
113 |
114 | for t in range(1, len(self.time_grid)):
115 | dt = (self.time_grid[t] - self.time_grid[t - 1]).days / day_count
116 | if self.correlated is False:
117 | ran = sn1[t]
118 | else:
119 | ran = np.dot(self.cholesky_matrix, sn1[:, t, :])
120 | ran = ran[self.rn_set]
121 | rat = np.array([ran, sn2[t]])
122 | rat = np.dot(self.leverage, rat)
123 |
124 | va_[t] = (va_[t - 1] + self.kappa *
125 | (self.theta - np.maximum(0, va_[t - 1])) * dt +
126 | np.sqrt(np.maximum(0, va_[t - 1])) *
127 | self.vol_vol * np.sqrt(dt) * rat[1])
128 | va[t] = np.maximum(0, va_[t])
129 |
130 | rt = (forward_rates[t - 1] + forward_rates[t]) / 2
131 | paths[t] = paths[t - 1] * (
132 | np.exp((rt - 0.5 * va[t]) * dt +
133 | np.sqrt(va[t]) * np.sqrt(dt) * rat[0]))
134 |
135 | # moment matching stoch vol part
136 | paths[t] -= np.mean(paths[t - 1] * np.sqrt(va[t]) *
137 | math.sqrt(dt) * rat[0])
138 |
139 | self.instrument_values = paths
140 | self.volatility_values = np.sqrt(va)
141 |
142 | def get_volatility_values(self):
143 | if self.volatility_values is None:
144 |             self.generate_paths(fixed_seed=True)
145 | return self.volatility_values
146 |
--------------------------------------------------------------------------------
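The variance recursion in generate_paths above is a full truncation Euler scheme: the auxiliary process va_ is allowed to become negative, but only its positive part va = max(0, va_) enters the square roots and the index dynamics, which keeps the discretised square-root process well defined. A standalone sketch of that scheme for the variance process alone, with illustrative parameters and independent of the dx classes:

import numpy as np

np.random.seed(1000)
v0, kappa, theta, vol_vol = 0.04, 2.0, 0.04, 0.3   # illustrative Heston parameters
T, M, I = 1.0, 50, 100000
dt = T / M

v_aux = np.zeros((M + 1, I))        # unfloored auxiliary process (va_ above)
v = np.zeros_like(v_aux)            # floored variance actually used (va above)
v_aux[0] = v[0] = v0
for t in range(1, M + 1):
    z = np.random.standard_normal(I)
    vp = np.maximum(0, v_aux[t - 1])                # full truncation
    v_aux[t] = (v_aux[t - 1] + kappa * (theta - vp) * dt +
                vol_vol * np.sqrt(vp) * np.sqrt(dt) * z)
    v[t] = np.maximum(0, v_aux[t])

print(v[-1].mean())   # stays close to theta here since v0 == theta

In the class the volatility shock is additionally mixed with the index shock through the leverage (Cholesky) matrix built from rho, which is what produces the negative spot-volatility correlation typical of equity markets.
--------------------------------------------------------------------------------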
/dx/plot.py:
--------------------------------------------------------------------------------
1 | #
2 | # DX Analytics
3 | # Helper Function for Plotting
4 | # dx_plot.py
5 | #
6 | # DX Analytics is a financial analytics library, mainly for
7 | # derivatives modeling and pricing by Monte Carlo simulation
8 | #
9 | # (c) Dr. Yves J. Hilpisch
10 | # The Python Quants GmbH
11 | #
12 | # This program is free software: you can redistribute it and/or modify
13 | # it under the terms of the GNU Affero General Public License as
14 | # published by the Free Software Foundation, either version 3 of the
15 | # License, or any later version.
16 | #
17 | # This program is distributed in the hope that it will be useful,
18 | # but WITHOUT ANY WARRANTY; without even the implied warranty of
19 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
20 | # GNU Affero General Public License for more details.
21 | #
22 | # You should have received a copy of the GNU Affero General Public License
23 | # along with this program. If not, see http://www.gnu.org/licenses/.
24 | #
25 |
26 | import matplotlib as mpl; mpl.use('agg')
27 | import matplotlib.pyplot as plt
28 | from mpl_toolkits.mplot3d import Axes3D
29 | from pylab import cm
30 |
31 | def plot_option_stats(s_list, pv, de, ve):
32 | ''' Plot option prices, deltas and vegas for a set of
33 | different initial values of the underlying.
34 |
35 | Parameters
36 | ==========
37 | s_list : array or list
38 |         set of initial values of the underlying
39 | pv : array or list
40 | present values
41 | de : array or list
42 | results for deltas
43 | ve : array or list
44 | results for vega
45 | '''
46 | plt.figure(figsize=(9, 7))
47 | sub1 = plt.subplot(311)
48 | plt.plot(s_list, pv, 'ro', label='Present Value')
49 | plt.plot(s_list, pv, 'b')
50 | plt.grid(True)
51 | plt.legend(loc=0)
52 | plt.setp(sub1.get_xticklabels(), visible=False)
53 | sub2 = plt.subplot(312)
54 | plt.plot(s_list, de, 'go', label='Delta')
55 | plt.plot(s_list, de, 'b')
56 | plt.grid(True)
57 | plt.legend(loc=0)
58 | plt.setp(sub2.get_xticklabels(), visible=False)
59 | sub3 = plt.subplot(313)
60 | plt.plot(s_list, ve, 'yo', label='Vega')
61 | plt.plot(s_list, ve, 'b')
62 | plt.xlabel('Strike')
63 | plt.grid(True)
64 | plt.legend(loc=0)
65 |
66 |
67 | def plot_option_stats_full(s_list, pv, de, ve, th, rh, ga):
68 |     ''' Plot option present values and Greeks (delta, gamma, vega, theta,
69 |     rho) for a set of different initial values of the underlying.
70 |
71 | Parameters
72 | ==========
73 | s_list : array or list
74 |         set of initial values of the underlying
75 | pv : array or list
76 | present values
77 | de : array or list
78 | results for deltas
79 | ve : array or list
80 | results for vega
81 | th : array or list
82 | results for theta
83 | rh : array or list
84 | results for rho
85 | ga : array or list
86 | results for gamma
87 | '''
88 | plt.figure(figsize=(10, 14))
89 | sub1 = plt.subplot(611)
90 | plt.plot(s_list, pv, 'ro', label='Present Value')
91 | plt.plot(s_list, pv, 'b')
92 | plt.grid(True)
93 | plt.legend(loc=0)
94 | plt.setp(sub1.get_xticklabels(), visible=False)
95 | sub2 = plt.subplot(612)
96 | plt.plot(s_list, de, 'go', label='Delta')
97 | plt.plot(s_list, de, 'b')
98 | plt.grid(True)
99 | plt.legend(loc=0)
100 | plt.setp(sub2.get_xticklabels(), visible=False)
101 | sub3 = plt.subplot(613)
102 |     plt.plot(s_list, ve, 'yo', label='Vega')
103 | plt.plot(s_list, ve, 'b')
104 | plt.grid(True)
105 | plt.legend(loc=0)
106 | sub4 = plt.subplot(614)
107 |     plt.plot(s_list, th, 'mo', label='Theta')
108 | plt.plot(s_list, th, 'b')
109 | plt.grid(True)
110 | plt.legend(loc=0)
111 | sub5 = plt.subplot(615)
112 |     plt.plot(s_list, rh, 'co', label='Rho')
113 | plt.plot(s_list, rh, 'b')
114 | plt.grid(True)
115 | plt.legend(loc=0)
116 | sub6 = plt.subplot(616)
117 |     plt.plot(s_list, ga, 'ko', label='Gamma')
118 | plt.plot(s_list, ga, 'b')
119 | plt.xlabel('Strike')
120 | plt.grid(True)
121 | plt.legend(loc=0)
122 |
123 |
124 | def plot_greeks_3d(inputs, labels):
125 | ''' Plot Greeks in 3d.
126 |
127 | Parameters
128 | ==========
129 | inputs : list of arrays
130 | x, y, z arrays
131 | labels : list of strings
132 | labels for x, y, z
133 | '''
134 | x, y, z = inputs
135 | xl, yl, zl = labels
136 | fig = plt.figure(figsize=(10, 7))
137 |     ax = fig.add_subplot(projection='3d')  # 3d axes via the projection keyword
138 | surf = ax.plot_surface(x, y, z, rstride=1, cstride=1,
139 | cmap=cm.coolwarm, linewidth=0.5, antialiased=True)
140 | ax.set_xlabel(xl)
141 | ax.set_ylabel(yl)
142 | ax.set_zlabel(zl)
143 | fig.colorbar(surf, shrink=0.5, aspect=5)
144 |
145 |
146 | def plot_calibration_results(cali, relative=False):
147 | ''' Plot calibration results.
148 |
149 | Parameters
150 | ==========
151 | cali : instance of calibration class
152 | instance has to have opt_parameters
153 | relative : boolean
154 | if True, then relative error reporting
155 | if False, absolute error reporting
156 | '''
157 | cali.update_model_values()
158 | mats = set(cali.option_data[:, 0])
159 | mats = np.sort(list(mats))
160 | fig, axarr = plt.subplots(len(mats), 2, sharex=True)
161 | fig.set_size_inches(8, 12)
162 | fig.subplots_adjust(wspace=0.2, hspace=0.2)
163 | z = 0
164 | for T in mats:
165 |         strikes = cali.option_data[cali.option_data[:, 0] == T][:, 1]
166 | market = cali.option_data[cali.option_data[:, 0] == T][:, 2]
167 | model = cali.model_values[cali.model_values[:, 0] == T][:, 2]
168 | axarr[z, 0].set_ylabel('%s' % str(T)[:10])
169 | axarr[z, 0].plot(strikes, market, label='Market Quotes')
170 | axarr[z, 0].plot(strikes, model, 'ro', label='Model Prices')
171 | axarr[z, 0].grid()
172 |         if T == mats[0]:
173 | axarr[z, 0].set_title('Option Quotes')
174 |         if T == mats[-1]:
175 | axarr[z, 0].set_xlabel('Strike')
176 | wi = 2.
177 | if relative is True:
178 | axarr[z, 1].bar(strikes - wi / 2,
179 | (model - market) / market * 100, width=wi)
180 | else:
181 | axarr[z, 1].bar(strikes - wi / 2, model - market, width=wi)
182 | axarr[z, 1].grid()
183 |         if T == mats[0]:
184 | axarr[z, 1].set_title('Differences')
185 |         if T == mats[-1]:
186 | axarr[z, 1].set_xlabel('Strike')
187 | z += 1
188 |
--------------------------------------------------------------------------------
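plot_greeks_3d above expects x, y and z as two-dimensional arrays of identical shape (as produced by np.meshgrid) plus three axis labels; since the module forces the non-interactive 'agg' backend on import, figures have to be saved rather than shown. A hypothetical call with a synthetic surface, purely to illustrate the expected input format (the surface values are made up):

import numpy as np
from dx.plot import plot_greeks_3d   # importing this forces the 'agg' backend
import matplotlib.pyplot as plt

strikes = np.linspace(80., 120., 25)
maturities = np.linspace(0.1, 2.0, 20)
K, T = np.meshgrid(strikes, maturities)
V = np.exp(-((K - 100.) / 20.) ** 2) * np.sqrt(T)   # made-up 'vega' surface

plot_greeks_3d([K, T, V], ['strike', 'maturity', 'vega'])
plt.savefig('vega_surface.png')   # 'agg' backend: save instead of plt.show()
--------------------------------------------------------------------------------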
/dx/portfolio.py:
--------------------------------------------------------------------------------
1 | #
2 | # DX Analytics
3 | # Mean Variance Portfolio
4 | # portfolio.py
5 | #
6 | #
7 | # DX Analytics is a financial analytics library, mainly for
8 | # derivatives modeling and pricing by Monte Carlo simulation
9 | #
10 | # (c) Dr. Yves J. Hilpisch
11 | # The Python Quants GmbH
12 | #
13 | # This program is free software: you can redistribute it and/or modify
14 | # it under the terms of the GNU Affero General Public License as
15 | # published by the Free Software Foundation, either version 3 of the
16 | # License, or any later version.
17 | #
18 | # This program is distributed in the hope that it will be useful,
19 | # but WITHOUT ANY WARRANTY; without even the implied warranty of
20 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
21 | # GNU Affero General Public License for more details.
22 | #
23 | # You should have received a copy of the GNU Affero General Public License
24 | # along with this program. If not, see http://www.gnu.org/licenses/.
25 | #
26 | import math
27 | from .frame import *
28 | import scipy.optimize as sco
29 | import scipy.interpolate as sci
30 |
31 |
32 |
33 | class mean_variance_portfolio(object):
34 | '''
35 | Class to implement the mean variance portfolio theory of Markowitz
36 | '''
37 |
38 | def __init__(self, name, mar_env):
39 | self.name = name
40 | self.data_source = 'http://hilpisch.com/tr_eikon_eod_data.csv'
41 | self.get_raw_data()
42 | try:
43 | self.symbols = mar_env.get_list('symbols')
44 | self.start_date = mar_env.pricing_date
45 | except:
46 | raise ValueError('Error parsing market environment.')
47 |
48 | self.number_of_assets = len(self.symbols)
49 | try:
50 | self.final_date = mar_env.get_constant('final date')
51 | except:
52 | self.final_date = dt.date.today()
53 | try:
54 | self.source = mar_env.get_constant('source')
55 | except:
56 | self.source = 'google'
57 | try:
58 | self.weights = mar_env.get_constant('weights')
59 | except:
60 | self.weights = np.ones(self.number_of_assets, 'float')
61 | self.weights /= self.number_of_assets
62 | try:
63 | weights_sum = sum(self.weights)
64 | except:
65 | msg = 'Weights must be an iterable of numbers.'
66 | raise TypeError(msg)
67 |
68 | if round(weights_sum, 6) != 1:
69 | raise ValueError('Sum of weights must be one.')
70 |
71 | if len(self.weights) != self.number_of_assets:
72 | msg = 'Expected %s weights, got %s'
73 | raise ValueError(msg % (self.number_of_assets,
74 | len(self.weights)))
75 |
76 | self.data = self.raw_data[self.symbols]
77 | self.make_raw_stats()
78 | self.apply_weights()
79 |
80 | def __str__(self):
81 | string = 'Portfolio %s \n' % self.name
82 | string += len(string) * '-' + '\n'
83 | string += 'return %10.3f\n' % self.portfolio_return
84 | string += 'volatility %10.3f\n' % math.sqrt(self.variance)
85 | string += 'Sharpe ratio %10.3f\n' % (self.portfolio_return /
86 | math.sqrt(self.variance))
87 | string += '\n'
88 | string += 'Positions\n'
89 | string += 'symbol | weight | ret. con. \n'
90 | string += '--------------------------- \n'
91 | for i in range(len(self.symbols)):
92 | string += '{:<6} | {:6.3f} | {:9.3f} \n'.format(
93 | self.symbols[i], self.weights[i], self.mean_returns[i])
94 |
95 | return string
96 |
97 | def get_raw_data(self):
98 | self.raw_data = pd.read_csv(self.data_source, index_col=0,
99 | parse_dates=True)
100 | self.available_symbols = self.raw_data.columns
101 |
102 | def get_available_symbols(self):
103 | return self.available_symbols
104 |
105 | # def load_data(self):
106 | # '''
107 | # Loads asset values from the web.
108 | # '''
109 |
110 | # self.data = pd.DataFrame()
111 | # # if self.source == 'yahoo' or self.source == 'google':
112 | # for sym in self.symbols:
113 | # try:
114 | # self.data[sym] = web.DataReader(sym, self.source,
115 | # self.start_date,
116 | # self.final_date)['Close']
117 | # except:
118 | # print('Can not find data for source %s and symbol %s.'
119 | # % (self.source, sym))
120 | # print('Will try other source.')
121 | # try:
122 | # if self.source == 'yahoo':
123 | # source = 'google'
124 | # if self.source == 'google':
125 | # source = 'yahoo'
126 | # self.data[sym] = web.DataReader(sym, source,
127 | # self.start_date,
128 | # self.final_date)['Close']
129 | # except:
130 | # msg = 'Can not find data for source %s and symbol %s'
131 | # raise IOError(msg % (source, sym))
132 | # self.data.columns = self.symbols
133 | # # To do: add more sources
134 |
135 | def make_raw_stats(self):
136 | '''
137 | Computes returns and variances
138 | '''
139 |
140 | self.raw_returns = np.log(self.data / self.data.shift(1))
141 | self.mean_raw_return = self.raw_returns.mean()
142 | self.raw_covariance = self.raw_returns.cov()
143 |
144 | def apply_weights(self):
145 | '''
146 | Applies weights to the raw returns and covariances
147 | '''
148 |
149 | self.returns = self.raw_returns * self.weights
150 | self.mean_returns = self.returns.mean() * 252
151 | self.portfolio_return = np.sum(self.mean_returns)
152 |
153 | self.variance = np.dot(self.weights.T,
154 | np.dot(self.raw_covariance * 252, self.weights))
155 |
156 | def test_weights(self, weights):
157 | '''
158 | Returns the theoretical portfolio return, portfolio volatility
159 | and Sharpe ratio for given weights.
160 |
161 | Please note:
162 |         The method does not set the weights.
163 |
164 | Parameters
165 | ==========
166 |         weights: iterable
167 | the weights of the portfolio content.
168 | '''
169 | weights = np.array(weights)
170 | portfolio_return = np.sum(self.raw_returns.mean() * weights) * 252
171 | portfolio_vol = math.sqrt(
172 | np.dot(weights.T, np.dot(self.raw_covariance * 252, weights)))
173 |
174 | return np.array([portfolio_return, portfolio_vol,
175 | portfolio_return / portfolio_vol])
176 |
177 | def set_weights(self, weights):
178 | '''
179 | Sets new weights
180 |
181 | Parameters
182 | ==========
183 |         weights: iterable
184 | new set of weights
185 | '''
186 |
187 | try:
188 | weights = np.array(weights)
189 | weights_sum = sum(weights).round(3)
190 | except:
191 | msg = 'weights must be an iterable of numbers'
192 | raise TypeError(msg)
193 |
194 | if weights_sum != 1:
195 | raise ValueError('Sum of weights must be one')
196 |
197 | if len(weights) != self.number_of_assets:
198 | msg = 'Expected %s weights, got %s'
199 | raise ValueError(msg % (self.number_of_assets,
200 | len(weights)))
201 | self.weights = weights
202 | self.apply_weights()
203 |
204 | def get_weights(self):
205 | '''
206 | Returns a dictionary with entries symbol:weights
207 | '''
208 |
209 | d = dict()
210 | for i in range(len(self.symbols)):
211 | d[self.symbols[i]] = self.weights[i]
212 | return d
213 |
214 | def get_portfolio_return(self):
215 | '''
216 | Returns the average return of the weighted portfolio
217 | '''
218 |
219 | return self.portfolio_return
220 |
221 | def get_portfolio_variance(self):
222 | '''
223 | Returns the average variance of the weighted portfolio
224 | '''
225 |
226 | return self.variance
227 |
228 | def get_volatility(self):
229 | '''
230 | Returns the average volatility of the portfolio
231 | '''
232 |
233 | return math.sqrt(self.variance)
234 |
235 | def optimize(self, target, constraint=None, constraint_type='Exact'):
236 | '''
237 | Optimize the weights of the portfolio according to the value of the
238 | string 'target'
239 |
240 | Parameters
241 | ==========
242 | target: string
243 | one of:
244 |
245 | Sharpe: maximizes the ratio return/volatility
246 | Vol: minimizes the expected volatility
247 | Return: maximizes the expected return
248 |
249 | constraint: number
250 | only for target options 'Vol' and 'Return'.
251 | For target option 'Return', the function tries to optimize
252 | the expected return given the constraint on the volatility.
253 | For target option 'Vol', the optimization returns the minimum
254 | volatility given the constraint for the expected return.
255 | If constraint is None (default), the optimization is made
256 | without concerning the other value.
257 |
258 | constraint_type: string, one of 'Exact' or 'Bound'
259 | only relevant if constraint is not None.
260 | For 'Exact' (default) the value of the constraint must be hit
261 | (if possible), for 'Bound', constraint is only the upper/lower
262 | bound of the volatility or return resp.
263 | '''
264 | weights = self.get_optimal_weights(target, constraint, constraint_type)
265 | if weights is not False:
266 | self.set_weights(weights)
267 | else:
268 | raise ValueError('Optimization failed.')
269 |
270 | def get_capital_market_line(self, riskless_asset):
271 | '''
272 | Returns the capital market line as a lambda function and
273 |         the coordinates of the intersection between the capital market
274 | line and the efficient frontier
275 |
276 | Parameters
277 | ==========
278 |
279 | riskless_asset: float
280 | the return of the riskless asset
281 | '''
282 | x, y = self.get_efficient_frontier(100)
283 | if len(x) == 1:
284 | raise ValueError('Efficient Frontier seems to be constant.')
285 | f_eff = sci.UnivariateSpline(x, y, s=0)
286 | f_eff_der = f_eff.derivative(1)
287 |
288 | def tangent(x, rl=riskless_asset):
289 | return f_eff_der(x) * x / (f_eff(x) - rl) - 1
290 |
291 | left_start = x[0]
292 | right_start = x[-1]
293 |
294 | left, right = self.search_sign_changing(
295 | left_start, right_start, tangent, right_start - left_start)
296 | if left == 0 and right == 0:
297 | raise ValueError('Can not find tangent.')
298 |
299 | zero_x = sco.brentq(tangent, left, right)
300 |
301 | opt_return = f_eff(zero_x)
302 | cpl = lambda x: f_eff_der(zero_x) * x + riskless_asset
303 | return cpl, zero_x, float(opt_return)
304 |
305 | def get_efficient_frontier(self, n):
306 | '''
307 | Returns the efficient frontier in form of lists containing the x and y
308 | coordinates of points of the frontier.
309 |
310 | Parameters
311 | ==========
312 | n : int >= 3
313 | number of points
314 | '''
315 | if type(n) is not int:
316 | raise TypeError('n must be an int')
317 | if n < 3:
318 | raise ValueError('n must be at least 3')
319 |
320 | min_vol_weights = self.get_optimal_weights('Vol')
321 | min_vol = self.test_weights(min_vol_weights)[1]
322 | min_return_weights = self.get_optimal_weights('Return',
323 | constraint=min_vol)
324 | min_return = self.test_weights(min_return_weights)[0]
325 | max_return_weights = self.get_optimal_weights('Return')
326 | max_return = self.test_weights(max_return_weights)[0]
327 |
328 | delta = (max_return - min_return) / (n - 1)
329 | if delta > 0:
330 | returns = np.arange(min_return, max_return + delta, delta)
331 | vols = list()
332 | rets = list()
333 | for r in returns:
334 | w = self.get_optimal_weights('Vol', constraint=r,
335 | constraint_type='Exact')
336 | if w is not False:
337 | result = self.test_weights(w)[:2]
338 | rets.append(result[0])
339 | vols.append(result[1])
340 | else:
341 | rets = [max_return, ]
342 | vols = [min_vol, ]
343 |
344 | return np.array(vols), np.array(rets)
345 |
346 | def get_optimal_weights(self, target, constraint=None,
347 | constraint_type='Exact'):
348 | if target == 'Sharpe':
349 | def optimize_function(weights):
350 | return -self.test_weights(weights)[2]
351 |
352 | cons = ({'type': 'eq', 'fun': lambda x: np.sum(x) - 1})
353 |
354 | elif target == 'Vol':
355 | def optimize_function(weights):
356 | return self.test_weights(weights)[1]
357 |
358 | cons = [{'type': 'eq', 'fun': lambda x: np.sum(x) - 1}, ]
359 | if constraint is not None:
360 | d = dict()
361 | if constraint_type == 'Exact':
362 | d['type'] = 'eq'
363 | d['fun'] = lambda x: self.test_weights(x)[0] - constraint
364 | cons.append(d)
365 | elif constraint_type == 'Bound':
366 | d['type'] = 'ineq'
367 | d['fun'] = lambda x: self.test_weights(x)[0] - constraint
368 | cons.append(d)
369 | else:
370 | msg = 'Value for constraint_type must be either '
371 | msg += 'Exact or Bound, not %s' % constraint_type
372 | raise ValueError(msg)
373 |
374 | elif target == 'Return':
375 | def optimize_function(weights):
376 | return -self.test_weights(weights)[0]
377 |
378 | cons = [{'type': 'eq', 'fun': lambda x: np.sum(x) - 1}, ]
379 | if constraint is not None:
380 | d = dict()
381 | if constraint_type == 'Exact':
382 | d['type'] = 'eq'
383 | d['fun'] = lambda x: self.test_weights(x)[1] - constraint
384 | cons.append(d)
385 | elif constraint_type == 'Bound':
386 | d['type'] = 'ineq'
387 | d['fun'] = lambda x: constraint - self.test_weights(x)[1]
388 | cons.append(d)
389 | else:
390 | msg = 'Value for constraint_type must be either '
391 | msg += 'Exact or Bound, not %s' % constraint_type
392 | raise ValueError(msg)
393 |
394 | else:
395 | raise ValueError('Unknown target %s' % target)
396 |
397 | bounds = tuple((0, 1) for x in range(self.number_of_assets))
398 | start = self.number_of_assets * [1. / self.number_of_assets, ]
399 | result = sco.minimize(optimize_function, start,
400 | method='SLSQP', bounds=bounds, constraints=cons)
401 |
402 | if bool(result['success']) is True:
403 | new_weights = result['x'].round(6)
404 | return new_weights
405 | else:
406 | return False
407 |
408 | def search_sign_changing(self, l, r, f, d):
409 | if d < 0.000001:
410 | return (0, 0)
411 | for x in np.arange(l, r + d, d):
412 | if f(l) * f(x) < 0:
413 | ret = (x - d, x)
414 | return ret
415 |
416 | ret = self.search_sign_changing(l, r, f, d / 2.)
417 | return ret
418 |
419 |
420 | if __name__ == '__main__':
421 | ma = market_environment('ma', dt.date(2010, 1, 1))
422 |     ma.add_list('symbols', ['AAPL', 'GOOG', 'MSFT', 'DB'])
423 | ma.add_constant('final date', dt.date(2014, 3, 1))
424 | port = mean_variance_portfolio('My Portfolio', ma)
425 |
--------------------------------------------------------------------------------
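get_optimal_weights above hands everything to scipy.optimize.minimize with the SLSQP solver, a full-investment equality constraint and long-only bounds. A standalone sketch of the same maximum-Sharpe setup on synthetic return data, independent of the class and of its remote CSV data source (all numbers are illustrative):

import numpy as np
import scipy.optimize as sco

np.random.seed(1000)
rets = np.random.normal(0.0004, 0.01, (1000, 4))   # synthetic daily log returns
mean_rets = rets.mean(axis=0) * 252                # annualized mean returns
cov = np.cov(rets.T) * 252                         # annualized covariance matrix

def neg_sharpe(w):
    # negative Sharpe ratio (risk-free rate taken as zero)
    port_ret = np.dot(w, mean_rets)
    port_vol = np.sqrt(np.dot(w, np.dot(cov, w)))
    return -port_ret / port_vol

cons = ({'type': 'eq', 'fun': lambda w: np.sum(w) - 1},)   # weights sum to one
bnds = tuple((0, 1) for _ in range(4))                     # long-only positions
start = 4 * [0.25]
res = sco.minimize(neg_sharpe, start, method='SLSQP',
                   bounds=bnds, constraints=cons)
print(res['x'].round(3))                                   # optimal weights
--------------------------------------------------------------------------------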
/dx/rates.py:
--------------------------------------------------------------------------------
1 | #
2 | # DX Analytics
3 | # Rates Instruments Valuation Classes
4 | # dx_rates.py
5 | #
6 | # DX Analytics is a financial analytics library, mainly for
7 | # derivatives modeling and pricing by Monte Carlo simulation
8 | #
9 | # (c) Dr. Yves J. Hilpisch
10 | # The Python Quants GmbH
11 | #
12 | # This program is free software: you can redistribute it and/or modify
13 | # it under the terms of the GNU Affero General Public License as
14 | # published by the Free Software Foundation, either version 3 of the
15 | # License, or any later version.
16 | #
17 | # This program is distributed in the hope that it will be useful,
18 | # but WITHOUT ANY WARRANTY; without even the implied warranty of
19 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
20 | # GNU Affero General Public License for more details.
21 | #
22 | # You should have received a copy of the GNU Affero General Public License
23 | # along with this program. If not, see http://www.gnu.org/licenses/.
24 | #
25 | from .frame import *
26 | from .models import *
27 |
28 |
29 | # Classes for simple interest rates products
30 |
31 | class interest_rate_swap(object):
32 | ''' Basic class for interest rate swap valuation.
33 |
34 | Attributes
35 | ==========
36 | name : string
37 | name of the object
38 | underlying :
39 | instance of simulation class
40 | mar_env : instance of market_environment
41 | market environment data for valuation
42 |
43 | Methods
44 | =======
45 | update:
46 | updates selected valuation parameters
47 | '''
48 |
49 | def __init__(self, name, underlying, mar_env):
50 | try:
51 | self.name = name
52 | self.pricing_date = mar_env.pricing_date
53 |
54 | # interest rate swap parameters
55 | self.fixed_rate = mar_env.get_constant('fixed_rate')
56 | self.trade_date = mar_env.get_constant('trade_date')
57 | self.effective_date = mar_env.get_constant('effective_date')
58 | self.payment_date = mar_env.get_constant('payment_date')
59 | self.payment_day = mar_env.get_constant('payment_day')
60 | self.termination_date = mar_env.get_constant('termination_date')
61 | self.notional = mar_env.get_constant('notional')
62 | self.currency = mar_env.get_constant('currency')
63 | self.tenor = mar_env.get_constant('tenor')
64 | self.counting = mar_env.get_constant('counting')
65 |
66 | # simulation parameters and discount curve from simulation object
67 | self.frequency = underlying.frequency
68 | self.paths = underlying.paths
69 | self.discount_curve = underlying.discount_curve
70 | self.underlying = underlying
71 | self.payoff = None
72 |
73 | # provide selected dates to underlying
74 | self.underlying.special_dates.extend([self.pricing_date,
75 | self.effective_date,
76 | self.payment_date,
77 | self.termination_date])
78 | except:
79 | print('Error parsing market environment.')
80 |
81 | self.payment_dates = pd.date_range(self.payment_date,
82 | self.termination_date,
83 | freq=self.tenor)
84 | self.payment_dates = [d.replace(day=self.payment_day)
85 | for d in self.payment_dates]
86 | self.payment_dates = pd.DatetimeIndex(self.payment_dates)
87 | self.underlying.time_grid = None
88 | self.underlying.instrument_values = None
89 | self.underlying.special_dates.extend(
90 | self.payment_dates.to_pydatetime())
91 |
92 | def generate_payoff(self, fixed_seed=True):
93 |         ''' Generates the IRS payoff for simulated underlying values. '''
94 | paths = self.underlying.get_instrument_values(fixed_seed=fixed_seed)
95 | payoff = paths - self.fixed_rate
96 | payoff = pd.DataFrame(payoff, index=self.underlying.time_grid)
97 | payoff = payoff.loc[self.payment_dates]
98 | return self.notional * payoff
99 |
100 | def present_value(self, fixed_seed=True, full=False):
101 | ''' Calculates the present value of the IRS. '''
102 | if self.payoff is None:
103 | self.payoff = self.generate_payoff(fixed_seed=fixed_seed)
104 | if not fixed_seed:
105 | self.payoff = self.generate_payoff(fixed_seed=fixed_seed)
106 | discount_factors = self.discount_curve.get_discount_factors(
107 | self.payment_dates, dtobjects=True)[1]
108 | present_values = self.payoff.T * discount_factors[:][::-1]
109 | if full:
110 | return present_values.T
111 | else:
112 | return np.sum(np.sum(present_values)) / len(self.payoff.columns)
113 |
114 |
115 | #
116 | # Zero-Coupon Bond Valuation Formula CIR85/SRD Model
117 | #
118 |
119 |
120 | def gamma(kappa_r, sigma_r):
121 |     ''' Helper function. '''
122 | return math.sqrt(kappa_r ** 2 + 2 * sigma_r ** 2)
123 |
124 |
125 | def b1(kappa_r, theta_r, sigma_r, T):
126 |     ''' Helper function. '''
127 | g = gamma(kappa_r, sigma_r)
128 | return (((2 * g * math.exp((kappa_r + g) * T / 2)) /
129 | (2 * g + (kappa_r + g) * (math.exp(g * T) - 1))) **
130 | (2 * kappa_r * theta_r / sigma_r ** 2))
131 |
132 |
133 | def b2(kappa_r, theta_r, sigma_r, T):
134 |     ''' Helper function. '''
135 | g = gamma(kappa_r, sigma_r)
136 | return ((2 * (math.exp(g * T) - 1)) /
137 | (2 * g + (kappa_r + g) * (math.exp(g * T) - 1)))
138 |
139 |
140 | def CIR_zcb_valuation(r0, kappa_r, theta_r, sigma_r, T):
141 | ''' Function to value unit zero-coupon bonds in
142 | Cox-Ingersoll-Ross (1985) model.
143 |
144 | Parameters
145 | ==========
146 | r0: float
147 | initial short rate
148 | kappa_r: float
149 | mean-reversion factor
150 | theta_r: float
151 | long-run mean of short rate
152 | sigma_r: float
153 | volatility of short rate
154 | T: float
155 | time horizon/interval
156 |
157 | Returns
158 | =======
159 | zcb_value: float
160 | zero-coupon bond present value
161 | '''
162 | b_1 = b1(kappa_r, theta_r, sigma_r, T)
163 | b_2 = b2(kappa_r, theta_r, sigma_r, T)
164 | zcb_value = b_1 * math.exp(-b_2 * r0)
165 | return zcb_value
166 |
--------------------------------------------------------------------------------
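A quick sanity check for CIR_zcb_valuation above: with the long-run mean theta_r set equal to r0 and (nearly) vanishing volatility, the short rate is effectively constant, so the bond price should be close to the deterministic discount factor exp(-r0 * T). Illustrative values, assuming the dx package is importable:

import math
from dx.rates import CIR_zcb_valuation

r0, kappa_r, theta_r, sigma_r, T = 0.03, 0.3, 0.05, 0.1, 2.0
print(CIR_zcb_valuation(r0, kappa_r, theta_r, sigma_r, T))

# limiting case: theta_r = r0 and tiny volatility
print(CIR_zcb_valuation(r0, kappa_r, r0, 0.001, T), math.exp(-r0 * T))
--------------------------------------------------------------------------------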
/dx/valuation/__init__.py:
--------------------------------------------------------------------------------
1 | #
2 | # DX Analytics
3 | # Financial Analytics Library
4 | #
5 | # DX Analytics is a financial analytics library, mainly for
6 | # derivatives modeling and pricing by Monte Carlo simulation
7 | #
8 | # (c) Dr. Yves J. Hilpisch
9 | # The Python Quants GmbH
10 | #
11 | # This program is free software: you can redistribute it and/or modify
12 | # it under the terms of the GNU Affero General Public License as
13 | # published by the Free Software Foundation, either version 3 of the
14 | # License, or any later version.
15 | #
16 | # This program is distributed in the hope that it will be useful,
17 | # but WITHOUT ANY WARRANTY; without even the implied warranty of
18 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
19 | # GNU Affero General Public License for more details.
20 | #
21 | # You should have received a copy of the GNU Affero General Public License
22 | # along with this program. If not, see http://www.gnu.org/licenses/.
23 | #
24 | from .single_risk import *
25 | from .multi_risk import *
26 | from .parallel_valuation import *
27 | from .derivatives_portfolio import *
28 | from .var_portfolio import *
29 |
30 | __all__ = ['valuation_class_single', 'valuation_mcs_european_single',
31 | 'valuation_mcs_american_single', 'valuation_class_multi',
32 | 'valuation_mcs_european_multi', 'valuation_mcs_american_multi',
33 | 'derivatives_position', 'derivatives_portfolio', 'var_portfolio',
34 | 'risk_report']
--------------------------------------------------------------------------------
/dx/valuation/derivatives_portfolio.py:
--------------------------------------------------------------------------------
1 | #
2 | # DX Analytics
3 | # Derivatives Instruments and Portfolio Valuation Classes
4 | # derivatives_portfolio.py
5 | #
6 | # DX Analytics is a financial analytics library, mainly for
7 | # derivatives modeling and pricing by Monte Carlo simulation
8 | #
9 | # (c) Dr. Yves J. Hilpisch
10 | # The Python Quants GmbH
11 | #
12 | # This program is free software: you can redistribute it and/or modify
13 | # it under the terms of the GNU Affero General Public License as
14 | # published by the Free Software Foundation, either version 3 of the
15 | # License, or any later version.
16 | #
17 | # This program is distributed in the hope that it will be useful,
18 | # but WITHOUT ANY WARRANTY; without even the implied warranty of
19 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
20 | # GNU Affero General Public License for more details.
21 | #
22 | # You should have received a copy of the GNU Affero General Public License
23 | # along with this program. If not, see http://www.gnu.org/licenses/.
24 | #
25 | from ..frame import *
26 | from ..models import *
27 | from .single_risk import *
28 | from .multi_risk import *
29 | from .parallel_valuation import *
30 | #import os
31 | #print(os.listdir('.'))
32 | #exec(open("./dx/valuation/parallel_valuation.py").read())
33 | #print('***SPAWNING MP***')
34 | #mp.set_start_method('spawn', force=True)
35 |
36 | import xarray as xr # replacement for pd.Panel
37 | import warnings; warnings.simplefilter('ignore')
38 |
39 |
40 | class derivatives_position(object):
41 | ''' Class to model a derivatives position.
42 |
43 | Attributes
44 | ==========
45 |
46 | name : string
47 | name of the object
48 | quantity : float
49 | number of derivatives instruments making up the position
50 | underlyings : list of strings
51 |         names of the risk factors (underlyings) of the derivative
52 | mar_env : instance of market_environment
53 | constants, lists and curves relevant for valuation_class
54 | otype : string
55 | valuation class to use
56 | payoff_func : string
57 | payoff string for the derivative
58 |
59 | Methods
60 | =======
61 | get_info :
62 | prints information about the derivative position
63 | '''
64 |
65 | def __init__(self, name, quantity, underlyings, mar_env,
66 | otype, payoff_func):
67 | self.name = name
68 | self.quantity = quantity
69 | self.underlyings = underlyings
70 | self.mar_env = mar_env
71 | self.otype = otype
72 | self.payoff_func = payoff_func
73 |
74 | def get_info(self):
75 | print('NAME')
76 | print(self.name, '\n')
77 | print('QUANTITY')
78 | print(self.quantity, '\n')
79 | print('UNDERLYINGS')
80 | print(self.underlyings, '\n')
81 | print('MARKET ENVIRONMENT')
82 | print('\n**Constants**')
83 | for key in self.mar_env.constants:
84 | print(key, self.mar_env.constants[key])
85 | print('\n**Lists**')
86 | for key in self.mar_env.lists:
87 | print(key, self.mar_env.lists[key])
88 | print('\n**Curves**')
89 | for key in self.mar_env.curves:
90 | print(key, self.mar_env.curves[key])
91 | print('\nOPTION TYPE')
92 | print(self.otype, '\n')
93 | print('PAYOFF FUNCTION')
94 | print(self.payoff_func)
95 |
96 |
97 | otypes = {'European single': valuation_mcs_european_single,
98 | 'American single': valuation_mcs_american_single,
99 | 'European multi': valuation_mcs_european_multi,
100 | 'American multi': valuation_mcs_american_multi}
101 |
102 |
103 | class derivatives_portfolio(object):
104 | ''' Class for building and valuing portfolios of derivatives positions.
105 |
106 | Attributes
107 | ==========
108 | name : str
109 | name of the object
110 | positions : dict
111 | dictionary of positions (instances of derivatives_position class)
112 | val_env : market_environment
113 | market environment for the valuation
114 | risk_factors : dict
115 | dictionary of market environments for the risk_factors
116 | correlations : list or pd.DataFrame
117 | correlations between risk_factors
118 | fixed_seed : boolean
119 | flag for fixed rng seed
120 |
121 | Methods
122 | =======
123 | get_positions :
124 | prints information about the single portfolio positions
125 | get_values :
126 | estimates and returns positions values
127 | get_present_values :
128 | returns the full distribution of the simulated portfolio values
129 | get_statistics :
130 | returns a pandas DataFrame object with portfolio statistics
131 | get_port_risk :
132 | estimates sensitivities for point-wise parameter shocks
133 | '''
134 |
135 | def __init__(self, name, positions, val_env, risk_factors,
136 | correlations=None, fixed_seed=False, parallel=False):
137 | self.name = name
138 | self.positions = positions
139 | self.val_env = val_env
140 | self.risk_factors = risk_factors
141 | self.underlyings = set()
142 | if correlations is None or correlations is False:
143 | self.correlations = None
144 | else:
145 | self.correlations = correlations
146 | self.time_grid = None
147 | self.underlying_objects = {}
148 | self.valuation_objects = {}
149 | self.fixed_seed = fixed_seed
150 | self.parallel = parallel
151 | self.special_dates = []
152 | for pos in self.positions:
153 | # determine earliest starting_date
154 | self.val_env.constants['starting_date'] = \
155 | min(self.val_env.constants['starting_date'],
156 | positions[pos].mar_env.pricing_date)
157 | # determine latest date of relevance
158 | self.val_env.constants['final_date'] = \
159 | max(self.val_env.constants['final_date'],
160 | positions[pos].mar_env.constants['maturity'])
161 | # collect all underlyings
162 | # add to set; avoids redundancy
163 | for ul in positions[pos].underlyings:
164 | self.underlyings.add(ul)
165 |
166 | # generating general time grid
167 | start = self.val_env.constants['starting_date']
168 | end = self.val_env.constants['final_date']
169 | time_grid = pd.date_range(start=start, end=end,
170 | freq=self.val_env.constants['frequency']
171 | ).to_pydatetime()
172 | time_grid = list(time_grid)
173 | for pos in self.positions:
174 | maturity_date = positions[pos].mar_env.constants['maturity']
175 | if maturity_date not in time_grid:
176 | time_grid.insert(0, maturity_date)
177 | self.special_dates.append(maturity_date)
178 | if start not in time_grid:
179 | time_grid.insert(0, start)
180 | if end not in time_grid:
181 | time_grid.append(end)
182 | # delete duplicate entries
183 | # & sort dates in time_grid
184 | time_grid = sorted(set(time_grid))
185 |
186 | self.time_grid = np.array(time_grid)
187 | self.val_env.add_list('time_grid', self.time_grid)
188 |
189 | # taking care of correlations
190 | ul_list = sorted(self.underlyings)
191 | correlation_matrix = np.zeros((len(ul_list), len(ul_list)))
192 | np.fill_diagonal(correlation_matrix, 1.0)
193 | correlation_matrix = pd.DataFrame(correlation_matrix,
194 | index=ul_list, columns=ul_list)
195 |
196 | if self.correlations is not None:
197 | if isinstance(self.correlations, list):
198 | # if correlations are given as list of list/tuple objects
199 | for corr in self.correlations:
200 | if corr[2] >= 1.0:
201 | corr[2] = 0.999999999999
202 | if corr[2] <= -1.0:
203 | corr[2] = -0.999999999999
204 | # fill correlation matrix
205 | correlation_matrix[corr[0]].loc[corr[1]] = corr[2]
206 | correlation_matrix[corr[1]].loc[corr[0]] = corr[2]
207 | # determine Cholesky matrix
208 | cholesky_matrix = np.linalg.cholesky(np.array(
209 | correlation_matrix))
210 | else:
211 | # if correlation matrix was already given as pd.DataFrame
212 | cholesky_matrix = np.linalg.cholesky(np.array(
213 | self.correlations))
214 | else:
215 | cholesky_matrix = np.linalg.cholesky(np.array(
216 | correlation_matrix))
217 |
218 | # dictionary with index positions for the
219 | # slice of the random number array to be used by
220 | # respective underlying
221 | rn_set = {}
222 | for asset in self.underlyings:
223 | rn_set[asset] = ul_list.index(asset)
224 |
225 | # random numbers array, to be used by
226 | # all underlyings (if correlations exist)
227 | random_numbers = sn_random_numbers(
228 | (len(rn_set),
229 | len(self.time_grid),
230 | self.val_env.constants['paths']),
231 | fixed_seed=self.fixed_seed)
232 |
233 | # adding all to valuation environment which is
234 | # to be shared with every underlying
235 | self.val_env.add_list('correlation_matrix', correlation_matrix)
236 | self.val_env.add_list('cholesky_matrix', cholesky_matrix)
237 | self.val_env.add_list('random_numbers', random_numbers)
238 | self.val_env.add_list('rn_set', rn_set)
239 |
240 | for asset in self.underlyings:
241 | # select market environment of asset
242 | mar_env = self.risk_factors[asset]
243 | # add valuation environment to market environment
244 | mar_env.add_environment(val_env)
245 | # select the right simulation class
246 | model = models[mar_env.constants['model']]
247 | # instantiate simulation object
248 | if self.correlations is not None:
249 | corr = True
250 | else:
251 | corr = False
252 | self.underlying_objects[asset] = model(asset, mar_env,
253 | corr=corr)
254 |
255 | for pos in positions:
256 | # select right valuation class (European, American)
257 | val_class = otypes[positions[pos].otype]
258 | # pick the market environment and add the valuation environment
259 | mar_env = positions[pos].mar_env
260 | mar_env.add_environment(self.val_env)
261 | # instantiate valuation class single risk vs. multi risk
262 | if self.positions[pos].otype[-5:] == 'multi':
263 | underlying_objects = {}
264 | for obj in positions[pos].underlyings:
265 | underlying_objects[obj] = self.underlying_objects[obj]
266 | self.valuation_objects[pos] = \
267 | val_class(name=positions[pos].name,
268 | val_env=mar_env,
269 | risk_factors=underlying_objects,
270 | payoff_func=positions[pos].payoff_func,
271 | portfolio=True)
272 | else:
273 | self.valuation_objects[pos] = \
274 | val_class(name=positions[pos].name,
275 | mar_env=mar_env,
276 | underlying=self.underlying_objects[
277 | positions[pos].underlyings[0]],
278 | payoff_func=positions[pos].payoff_func)
279 |
280 | def get_positions(self):
281 | ''' Convenience method to get information about
282 | all derivatives positions in a portfolio. '''
283 | for pos in self.positions:
284 | bar = '\n' + 50 * '-'
285 | print(bar)
286 | self.positions[pos].get_info()
287 | print(bar)
288 |
289 | def get_values(self, fixed_seed=False):
290 | ''' Providing portfolio position values. '''
291 | res_list = []
292 | if self.parallel is True:
293 | self.underlying_objects = \
294 | simulate_parallel(self.underlying_objects.values())
295 | results = value_parallel(self.valuation_objects.values())
296 | # iterate over all positions in portfolio
297 | for pos in self.valuation_objects:
298 | pos_list = []
299 | if self.parallel is True:
300 | present_value = results[self.valuation_objects[pos].name]
301 | else:
302 | present_value = self.valuation_objects[pos].present_value()
303 | pos_list.append(pos)
304 | pos_list.append(self.positions[pos].name)
305 | pos_list.append(self.positions[pos].quantity)
306 | pos_list.append(self.positions[pos].otype)
307 | pos_list.append(self.positions[pos].underlyings)
308 | # calculate all present values for the single instruments
309 | pos_list.append(present_value)
310 | pos_list.append(self.valuation_objects[pos].currency)
311 | # single instrument value times quantity
312 | pos_list.append(present_value * self.positions[pos].quantity)
313 | res_list.append(pos_list)
314 | res_df = pd.DataFrame(res_list,
315 | columns=['position', 'name', 'quantity',
316 | 'otype', 'risk_facts', 'value',
317 | 'currency', 'pos_value'])
318 | print('Total\n', res_df[['pos_value']].sum())
319 | return res_df
320 |
321 | def get_present_values(self, fixed_seed=False):
322 | ''' Get full distribution of present values. '''
323 | present_values = np.zeros(self.val_env.get_constant('paths'))
324 | if self.parallel is True:
325 | self.underlying_objects = \
326 | simulate_parallel(self.underlying_objects.values())
327 | results = value_parallel(self.valuation_objects.values(),
328 | full=True)
329 | for pos in self.valuation_objects:
330 | present_values += results[self.valuation_objects[pos].name] \
331 | * self.positions[pos].quantity
332 | else:
333 | for pos in self.valuation_objects:
334 | present_values += self.valuation_objects[pos].present_value(
335 | fixed_seed=fixed_seed, full=True)[1] \
336 | * self.positions[pos].quantity
337 | return present_values
338 |
339 | def get_statistics(self, fixed_seed=None):
340 | ''' Providing position statistics. '''
341 | res_list = []
342 | if fixed_seed is None:
343 | fixed_seed = self.fixed_seed
344 | if self.parallel is True:
345 | self.underlying_objects = \
346 | simulate_parallel(self.underlying_objects.values())
347 | results = value_parallel(self.valuation_objects.values(),
348 | fixed_seed=fixed_seed)
349 | delta_list = greeks_parallel(self.valuation_objects.values(),
350 | Greek='Delta')
351 | vega_list = greeks_parallel(self.valuation_objects.values(),
352 | Greek='Vega')
353 | # iterate over all positions in portfolio
354 | for pos in self.valuation_objects:
355 | pos_list = []
356 | if self.parallel is True:
357 | present_value = results[self.valuation_objects[pos].name]
358 | else:
359 | present_value = self.valuation_objects[pos].present_value(
360 | fixed_seed=fixed_seed, accuracy=3)
361 | pos_list.append(pos)
362 | pos_list.append(self.positions[pos].name)
363 | pos_list.append(self.positions[pos].quantity)
364 | pos_list.append(self.positions[pos].otype)
365 | pos_list.append(self.positions[pos].underlyings)
366 | # calculate all present values for the single instruments
367 | pos_list.append(present_value)
368 | pos_list.append(self.valuation_objects[pos].currency)
369 | # single instrument value times quantity
370 | pos_list.append(present_value * self.positions[pos].quantity)
371 | if self.positions[pos].otype[-5:] == 'multi':
372 | # multiple delta and vega values for multi-risk derivatives
373 | delta_dict = {}
374 | vega_dict = {}
375 | for key in self.valuation_objects[pos].underlying_objects.keys():
376 | # delta and vega per position and underlying
377 | delta_dict[key] = round(
378 | self.valuation_objects[pos].delta(key) *
379 | self.positions[pos].quantity, 6)
380 | vega_dict[key] = round(
381 | self.valuation_objects[pos].vega(key) *
382 | self.positions[pos].quantity, 6)
383 | pos_list.append(str(delta_dict))
384 | pos_list.append(str(vega_dict))
385 | else:
386 | if self.parallel is True:
387 | # delta from parallel calculation
388 | pos_list.append(delta_list[pos] *
389 | self.positions[pos].quantity)
390 | # vega from parallel calculation
391 | pos_list.append(vega_list[pos] *
392 | self.positions[pos].quantity)
393 | else:
394 | # delta per position
395 | pos_list.append(self.valuation_objects[pos].delta() *
396 | self.positions[pos].quantity)
397 | # vega per position
398 | pos_list.append(self.valuation_objects[pos].vega() *
399 | self.positions[pos].quantity)
400 | res_list.append(pos_list)
401 | res_df = pd.DataFrame(res_list, columns=['position', 'name',
402 | 'quantity', 'otype',
403 | 'risk_facts', 'value',
404 | 'currency', 'pos_value',
405 | 'pos_delta', 'pos_vega'])
406 | try:
407 | print('Totals\n',
408 | res_df[['pos_value', 'pos_delta', 'pos_vega']].sum())
409 | except:
410 | print(res_df[['pos_value', 'pos_delta', 'pos_vega']])
411 | return res_df
412 |
413 | def get_port_risk(self, Greek='Delta', low=0.8, high=1.2, step=0.1,
414 | fixed_seed=None, risk_factors=None):
415 | ''' Calculating portfolio risk statistics. '''
416 | if risk_factors is None:
417 | risk_factors = self.underlying_objects.keys()
418 | if fixed_seed is None:
419 | fixed_seed = self.fixed_seed
420 | sensitivities = {}
421 | levels = np.arange(low, high + 0.01, step)
422 | if self.parallel is True:
423 | values = value_parallel(self.valuation_objects.values(),
424 | fixed_seed=fixed_seed)
425 | for key in self.valuation_objects:
426 | values[key] *= self.positions[key].quantity
427 | else:
428 | values = {}
429 | for key, obj in self.valuation_objects.items():
430 | values[key] = obj.present_value() \
431 | * self.positions[key].quantity
432 | import copy
433 | for rf in risk_factors:
434 | print('\n' + rf)
435 | in_val = self.underlying_objects[rf].initial_value
436 | in_vol = self.underlying_objects[rf].volatility
437 | results = []
438 | for level in levels:
439 | values_sens = copy.deepcopy(values)
440 | print(round(level, 4))
441 | if level == 1.0:
442 | pass
443 | else:
444 | for key, obj in self.valuation_objects.items():
445 | if rf in self.positions[key].underlyings:
446 |
447 | if self.positions[key].otype[-5:] == 'multi':
448 | if Greek == 'Delta':
449 | obj.underlying_objects[rf].update(
450 | initial_value=level * in_val)
451 | if Greek == 'Vega':
452 | obj.underlying_objects[rf].update(
453 | volatility=level * in_vol)
454 |
455 | else:
456 | if Greek == 'Delta':
457 | obj.underlying.update(
458 | initial_value=level * in_val)
459 | elif Greek == 'Vega':
460 | obj.underlying.update(
461 | volatility=level * in_vol)
462 |
463 | values_sens[key] = obj.present_value(
464 | fixed_seed=fixed_seed) \
465 | * self.positions[key].quantity
466 |
467 | if self.positions[key].otype[-5:] == 'multi':
468 | obj.underlying_objects[rf].update(
469 | initial_value=in_val)
470 | obj.underlying_objects[rf].update(
471 | volatility=in_vol)
472 |
473 | else:
474 | obj.underlying.update(initial_value=in_val)
475 | obj.underlying.update(volatility=in_vol)
476 |
477 | if Greek == 'Delta':
478 | results.append((round(level * in_val, 2),
479 | sum(values_sens.values())))
480 | if Greek == 'Vega':
481 | results.append((round(level * in_vol, 2),
482 | sum(values_sens.values())))
483 |
484 | sensitivities[rf + '_' + Greek] = pd.DataFrame(
485 | np.array(results), index=levels, columns=['factor', 'value'])
486 | print(2 * '\n')
487 | df = xr.Dataset(sensitivities).to_dataframe() # replacing pd.Panel
488 | return df, sum(values.values())
489 |
490 | def get_deltas(self, net=True, low=0.9, high=1.1, step=0.05):
491 | ''' Returns the deltas of the portfolio. Convenience function.'''
492 |         deltas, benchvalue = self.get_port_risk(
493 | Greek='Delta', low=low, high=high, step=step)
494 | if net is True:
495 | deltas.loc[:, :, 'value'] -= benchvalue
496 | return deltas, benchvalue
497 |
498 | def get_vegas(self, net=True, low=0.9, high=1.1, step=0.05):
499 | ''' Returns the vegas of the portfolio. Convenience function.'''
500 |         vegas, benchvalue = self.get_port_risk(
501 | Greek='Vega', low=low, high=high, step=step)
502 | if net is True:
503 | vegas.loc[:, :, 'value'] -= benchvalue
504 | return vegas, benchvalue
505 |
506 |
507 | def risk_report(sensitivities, digits=2, gross=True):
508 | if gross is True:
509 | for key in sensitivities:
510 | print('\n' + key)
511 | print(np.round(sensitivities[key].transpose(), digits))
512 | else:
513 | print(np.round(sensitivities, digits))
514 |
--------------------------------------------------------------------------------
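A minimal sketch of how a derivatives_position is meant to be set up before being handed to derivatives_portfolio. The market_environment keys used here ('maturity', 'currency', 'strike', 'model'), the model key 'gbm' and the package-level imports follow the usage visible in this module and in the accompanying notebooks (derivatives_position is exported in dx/valuation/__init__.py); treat them as assumptions rather than a definitive recipe:

import datetime as dt
from dx import market_environment, derivatives_position

me_call = market_environment('me_call', dt.datetime(2020, 1, 1))
me_call.add_constant('maturity', dt.datetime(2020, 12, 31))
me_call.add_constant('currency', 'EUR')
me_call.add_constant('strike', 100.)
me_call.add_constant('model', 'gbm')    # assumed key of the models dict in single_risk.py

call_pos = derivatives_position(
    name='call_pos',
    quantity=3,
    underlyings=['gbm_asset'],          # key(s) into the portfolio's risk_factors dict
    mar_env=me_call,
    otype='European single',            # key of the otypes dict above
    payoff_func='np.maximum(maturity_value - strike, 0)')

call_pos.get_info()
--------------------------------------------------------------------------------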
/dx/valuation/multi_risk.py:
--------------------------------------------------------------------------------
1 | #
2 | # DX Analytics
3 | # Derivatives Instruments and Portfolio Valuation Classes
4 | # dx_valuation.py
5 | #
6 | # DX Analytics is a financial analytics library, mainly for
7 | # derivatives modeling and pricing by Monte Carlo simulation
8 | #
9 | # (c) Dr. Yves J. Hilpisch
10 | # The Python Quants GmbH
11 | #
12 | # This program is free software: you can redistribute it and/or modify
13 | # it under the terms of the GNU Affero General Public License as
14 | # published by the Free Software Foundation, either version 3 of the
15 | # License, or any later version.
16 | #
17 | # This program is distributed in the hope that it will be useful,
18 | # but WITHOUT ANY WARRANTY; without even the implied warranty of
19 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
20 | # GNU Affero General Public License for more details.
21 | #
22 | # You should have received a copy of the GNU Affero General Public License
23 | # along with this program. If not, see http://www.gnu.org/licenses/.
24 | #
25 | from ..frame import *
26 | from ..models import *
27 | from .single_risk import models
28 | # import statsmodels.api as sm
29 |
30 |
31 | class valuation_class_multi(object):
32 | ''' Basic class for multi-risk factor instrument valuation.
33 |
34 | Attributes
35 | ==========
36 | name : string
37 | name of the object
38 |     val_env : instance of market_environment
39 |         market environment data for valuation
40 |     risk_factors : dictionary
41 |         market environments (or model instances if portfolio is True) for the risk factors
42 | correlations : list
43 | correlations between underlyings
44 | payoff_func : string
45 | derivatives payoff in Python syntax
46 | Example: 'np.maximum(maturity_value[key] - 100, 0)'
47 | where maturity_value[key] is the NumPy vector with
48 | respective values of the underlying 'key' from the
49 | risk_factors dictionary
50 |
51 | Methods
52 | =======
53 | update:
54 | updates selected valuation parameters
55 | delta :
56 | returns the delta of the derivative
57 | vega :
58 | returns the vega of the derivative
59 | '''
60 |
61 | def __init__(self, name, val_env, risk_factors=None, correlations=None,
62 | payoff_func='', fixed_seed=False, portfolio=False):
63 | try:
64 | self.name = name
65 | self.val_env = val_env
66 | self.currency = self.val_env.get_constant('currency')
67 | self.pricing_date = val_env.pricing_date
68 | try:
69 | # strike optional
70 | self.strike = self.val_env.get_constant('strike')
71 | except:
72 | pass
73 | self.maturity = self.val_env.get_constant('maturity')
74 | self.frequency = self.val_env.get_constant('frequency')
75 | self.paths = self.val_env.get_constant('paths')
76 | self.discount_curve = self.val_env.get_curve('discount_curve')
77 | self.risk_factors = risk_factors
78 | self.underlyings = set()
79 | if portfolio is False:
80 | self.underlying_objects = {}
81 | else:
82 | self.underlying_objects = risk_factors
83 | self.correlations = correlations
84 | self.payoff_func = payoff_func
85 | self.fixed_seed = fixed_seed
86 | self.instrument_values = {}
87 | try:
88 | self.time_grid = self.val_env.get_curve('time_grid')
89 | except:
90 | self.time_grid = None
91 | self.correlation_matrix = None
92 | except:
93 | print('Error parsing market environment.')
94 |
95 | # Generating general time grid
96 | if self.time_grid is None:
97 | self.generate_time_grid()
98 |
99 | if portfolio is False:
100 | if self.correlations is not None:
101 | ul_list = sorted(self.risk_factors)
102 | if isinstance(self.correlations, list):
103 | correlation_matrix = np.zeros((len(ul_list), len(ul_list)))
104 | np.fill_diagonal(correlation_matrix, 1.0)
105 | correlation_matrix = pd.DataFrame(
106 | correlation_matrix, index=ul_list, columns=ul_list)
107 | for corr in correlations:
108 | if corr[2] >= 1.0:
109 | corr[2] = 0.999999999999
110 | correlation_matrix[corr[0]].loc[corr[1]] = corr[2]
111 | correlation_matrix[corr[1]].loc[corr[0]] = corr[2]
112 | self.correlation_matrix = correlation_matrix
113 | cholesky_matrix = np.linalg.cholesky(
114 | np.array(correlation_matrix))
115 | else:
116 | # if correlation matrix was already given as pd.DataFrame
117 | cholesky_matrix = np.linalg.cholesky(np.array(
118 | self.correlations))
119 |
120 | # dictionary with index positions
121 | rn_set = {}
122 | for asset in self.risk_factors:
123 | rn_set[asset] = ul_list.index(asset)
124 |
125 | # random numbers array
126 | random_numbers = sn_random_numbers(
127 | (len(rn_set), len(self.time_grid),
128 | self.val_env.constants['paths']),
129 | fixed_seed=self.fixed_seed)
130 |
131 | # adding all to valuation environment
132 | self.val_env.add_list('cholesky_matrix', cholesky_matrix)
133 | self.val_env.add_list('rn_set', rn_set)
134 | self.val_env.add_list('random_numbers', random_numbers)
135 | self.generate_underlying_objects()
136 |
137 | def generate_time_grid(self):
138 | ''' Generates time grid for all relevant objects. '''
139 | start = self.val_env.get_constant('starting_date')
140 | end = self.val_env.get_constant('final_date')
141 | maturity = self.maturity
142 | time_grid = pd.date_range(start=start, end=end,
143 | freq=self.val_env.get_constant('frequency')
144 | ).to_pydatetime()
145 | if start in time_grid and end in time_grid and \
146 | maturity in time_grid:
147 | self.time_grid = time_grid
148 | else:
149 | time_grid = list(time_grid)
150 | if maturity not in time_grid:
151 | time_grid.insert(0, maturity)
152 | if start not in time_grid:
153 | time_grid.insert(0, start)
154 | if end not in time_grid:
155 | time_grid.append(end)
156 | time_grid.sort()
157 | self.time_grid = np.array(time_grid)
158 | self.val_env.add_curve('time_grid', self.time_grid)
159 |
160 | def generate_underlying_objects(self):
161 | for asset in self.risk_factors:
162 | mar_env = self.risk_factors[asset]
163 | mar_env.add_environment(self.val_env)
164 | model = models[mar_env.constants['model']]
165 | if self.correlations is not None:
166 | self.underlying_objects[asset] = model(asset,
167 | mar_env, True)
168 | else:
169 | self.underlying_objects[asset] = model(asset,
170 | mar_env, False)
171 |
172 | def get_instrument_values(self, fixed_seed=True):
173 | for obj in self.underlying_objects.values():
174 | if obj.instrument_values is None:
175 | obj.generate_paths(fixed_seed=fixed_seed)
176 |
177 | def update(self, key=None, pricing_date=None, initial_value=None,
178 | volatility=None, short_rate=None, strike=None, maturity=None):
179 | ''' Updates parameters of the derivative. '''
180 | if key is not None:
181 | underlying = self.underlying_objects[key]
182 | if pricing_date is not None:
183 | self.pricing_date = pricing_date
184 | self.val_env.add_constant('starting_date', pricing_date)
185 | self.generate_time_grid()
186 | self.generate_underlying_objects()
187 | if initial_value is not None:
188 | underlying.update(initial_value=initial_value)
189 | if volatility is not None:
190 | underlying.update(volatility=volatility)
191 | if short_rate is not None:
192 | self.val_env.curves['discount_curve'].short_rate = short_rate
193 | self.discount_curve.short_rate = short_rate
194 | self.generate_underlying_objects()
195 | if strike is not None:
196 | self.strike = strike
197 | if maturity is not None:
198 | self.maturity = maturity
199 |             for underlying in self.underlying_objects.values():
200 | underlying.update(final_date=self.maturity)
201 | self.get_instrument_values()
202 |
203 | def delta(self, key, interval=None, accuracy=4):
204 | ''' Returns the delta for the specified risk factor
205 | for the derivative. '''
206 | if len(self.instrument_values) == 0:
207 | self.get_instrument_values()
208 | asset = self.underlying_objects[key]
209 | if interval is None:
210 | interval = asset.initial_value / 50.
211 | value_left = self.present_value(fixed_seed=True, accuracy=10)
212 | start_value = asset.initial_value
213 | initial_del = start_value + interval
214 | asset.update(initial_value=initial_del)
215 | self.get_instrument_values()
216 | value_right = self.present_value(fixed_seed=True, accuracy=10)
217 | asset.update(initial_value=start_value)
218 | self.instrument_values = {}
219 | delta = (value_right - value_left) / interval
220 | if delta < -1.0:
221 | return -1.0
222 | elif delta > 1.0:
223 | return 1.0
224 | else:
225 | return round(delta, accuracy)
226 |
227 | def gamma(self, key, interval=None, accuracy=4):
228 | ''' Returns the gamma for the specified risk factor
229 | for the derivative. '''
230 | if len(self.instrument_values) == 0:
231 | self.get_instrument_values()
232 | asset = self.underlying_objects[key]
233 | if interval is None:
234 | interval = asset.initial_value / 50.
235 | # forward-difference approximation
236 | # calculate left value for numerical gamma
237 | value_left = self.delta(key=key)
238 | # numerical underlying value for right value
239 | initial_del = asset.initial_value + interval
240 | asset.update(initial_value=initial_del)
241 | # calculate right value for numerical delta
242 | value_right = self.delta(key=key)
243 | # reset the initial_value of the simulation object
244 | asset.update(initial_value=initial_del - interval)
245 | gamma = (value_right - value_left) / interval
246 | return round(gamma, accuracy)
247 |
248 | def vega(self, key, interval=0.01, accuracy=4):
249 | ''' Returns the vega for the specified risk factor. '''
250 | if len(self.instrument_values) == 0:
251 | self.get_instrument_values()
252 | asset = self.underlying_objects[key]
253 | if interval < asset.volatility / 50.:
254 | interval = asset.volatility / 50.
255 | value_left = self.present_value(fixed_seed=True, accuracy=10)
256 | start_vola = asset.volatility
257 | vola_del = start_vola + interval
258 | asset.update(volatility=vola_del)
259 | self.get_instrument_values()
260 | value_right = self.present_value(fixed_seed=True, accuracy=10)
261 | asset.update(volatility=start_vola)
262 | self.instrument_values = {}
263 | vega = (value_right - value_left) / interval
264 | return round(vega, accuracy)
265 |
266 | def theta(self, interval=10, accuracy=4):
267 | ''' Returns the theta for the derivative. '''
268 | if len(self.instrument_values) == 0:
269 | self.get_instrument_values()
270 | # calculate the left value for numerical theta
271 | value_left = self.present_value(fixed_seed=True, accuracy=10)
272 | # determine new pricing date
273 | orig_date = self.pricing_date
274 | new_date = orig_date + dt.timedelta(interval)
275 | # calculate the right value of numerical theta
276 | self.update(pricing_date=new_date)
277 | value_right = self.present_value(fixed_seed=True, accuracy=10)
278 | # reset pricing dates of valuation & underlying objects
279 | self.update(pricing_date=orig_date)
280 | # calculating the negative value by convention
281 | # (i.e. a decrease in time-to-maturity)
282 | theta = (value_right - value_left) / (interval / 365.)
283 | return round(theta, accuracy)
284 |
285 | def rho(self, interval=0.005, accuracy=4):
286 | ''' Returns the rho for the derivative. '''
287 | # calculate the left value for numerical rho
288 | value_left = self.present_value(fixed_seed=True, accuracy=12)
289 | if type(self.discount_curve) == constant_short_rate:
290 | # adjust constant short rate factor
291 | orig_short_rate = self.discount_curve.short_rate
292 | new_short_rate = orig_short_rate + interval
293 | self.update(short_rate=new_short_rate)
294 | # self.discount_curve.short_rate += interval
295 | # delete instrument values (since drift changes)
296 | # for asset in self.underlying_objects.values():
297 | # asset.instrument_values = None
298 | # calculate the right value for numerical rho
299 | value_right = self.present_value(fixed_seed=True, accuracy=12)
300 | # reset constant short rate factor
301 | self.update(short_rate=orig_short_rate)
302 | # self.discount_curve.short_rate -= interval
303 | rho = (value_right - value_left) / interval
304 | return round(rho, accuracy)
305 | else:
306 | raise NotImplementedError(
307 | 'Not yet implemented for this short rate model.')
308 |
309 | def dollar_gamma(self, key, interval=None, accuracy=4):
310 | ''' Returns the dollar gamma for the specified risk factor. '''
311 | dollar_gamma = (0.5 * self.gamma(key, interval=interval) *
312 | self.underlying_objects[key].initial_value ** 2)
313 | return round(dollar_gamma, accuracy)
314 |
315 |
316 | class valuation_mcs_european_multi(valuation_class_multi):
317 | ''' Class to value European options with arbitrary payoff
318 | by multi-risk factor Monte Carlo simulation.
319 |
320 | Methods
321 | =======
322 | generate_payoff :
323 | returns payoffs given the paths and the payoff function
324 | present_value :
325 | returns present value (Monte Carlo estimator)
326 | '''
327 |
328 | def generate_payoff(self, fixed_seed=True):
329 | self.get_instrument_values(fixed_seed=True)
330 | paths = {key: name.instrument_values for key, name
331 | in self.underlying_objects.items()}
332 | time_grid = self.time_grid
333 | try:
334 | time_index = np.where(time_grid == self.maturity)[0]
335 | time_index = int(time_index)
336 | except:
337 | print('Maturity date not in time grid of underlying.')
338 | maturity_value = {}
339 | mean_value = {}
340 | max_value = {}
341 | min_value = {}
342 | for key in paths:
343 | maturity_value[key] = paths[key][time_index]
344 |             mean_value[key] = np.mean(paths[key][:time_index], axis=0)
345 |             max_value[key] = np.amax(paths[key][:time_index], axis=0)
346 |             min_value[key] = np.amin(paths[key][:time_index], axis=0)
347 | try:
348 | payoff = eval(self.payoff_func)
349 | return payoff
350 | except:
351 | print('Error evaluating payoff function.')
352 |
353 | def present_value(self, accuracy=3, fixed_seed=True, full=False):
354 | cash_flow = self.generate_payoff(fixed_seed)
355 |
356 | discount_factor = self.discount_curve.get_discount_factors(
357 | self.time_grid, self.paths)[1][0]
358 |
359 | result = np.sum(discount_factor * cash_flow) / len(cash_flow)
360 | if full:
361 |             return round(result, accuracy), discount_factor * cash_flow
362 | else:
363 | return round(result, accuracy)
364 |
365 |
366 | class valuation_mcs_american_multi(valuation_class_multi):
367 | ''' Class to value American options with arbitrary payoff
368 | by multi-risk factor Monte Carlo simulation.
369 |
370 | Methods
371 | =======
372 | generate_payoff :
373 | returns payoffs given the paths and the payoff function
374 | present_value :
375 | returns present value (Monte Carlo estimator)
376 | '''
377 |
378 | def generate_payoff(self, fixed_seed=True):
379 | self.get_instrument_values(fixed_seed=True)
380 | self.instrument_values = {key: name.instrument_values for key, name
381 | in self.underlying_objects.items()}
382 | try:
383 | time_index_start = int(
384 | np.where(self.time_grid == self.pricing_date)[0])
385 | time_index_end = int(np.where(self.time_grid == self.maturity)[0])
386 | except:
387 | print('Pricing or maturity date not in time grid of underlying.')
388 | instrument_values = {}
389 | for key, obj in self.instrument_values.items():
390 | instrument_values[key] = \
391 | self.instrument_values[key][
392 | time_index_start:time_index_end + 1]
393 | try:
394 | payoff = eval(self.payoff_func)
395 | return instrument_values, payoff, time_index_start, time_index_end
396 | except:
397 | print('Error evaluating payoff function.')
398 |
399 | def present_value(self, accuracy=3, fixed_seed=True, full=False):
400 | instrument_values, inner_values, time_index_start, time_index_end = \
401 | self.generate_payoff(fixed_seed=fixed_seed)
402 | time_list = self.time_grid[time_index_start:time_index_end + 1]
403 |
404 | discount_factors = self.discount_curve.get_discount_factors(
405 | time_list, self.paths, dtobjects=True)[1]
406 |
407 | V = inner_values[-1]
408 | for t in range(len(time_list) - 2, 0, -1):
409 | df = discount_factors[t] / discount_factors[t + 1]
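            # regression basis per path: each asset's simulated value at t
            # plus all pairwise products of the asset values (incl. squares)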
410 | matrix = {}
411 | for asset_1 in instrument_values.keys():
412 | matrix[asset_1] = instrument_values[asset_1][t]
413 | for asset_2 in instrument_values.keys():
414 | matrix[asset_1 + asset_2] = instrument_values[asset_1][t] \
415 | * instrument_values[asset_2][t]
416 | # rg = sm.OLS(V * df, np.array(list(matrix.values())).T).fit()
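            # least-squares regression of the discounted continuation values
            # on the basis functions; C is the estimated continuation value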
417 | rg = np.linalg.lstsq(np.array(list(matrix.values())).T, V * df)[0]
418 | # C = np.sum(rg.params * np.array(list(matrix.values())).T, axis=1)
419 | C = np.sum(rg * np.array(list(matrix.values())).T, axis=1)
420 | V = np.where(inner_values[t] > C, inner_values[t], V * df)
421 | df = discount_factors[0] / discount_factors[1]
422 | result = np.sum(df * V) / len(V)
423 | if full:
424 | return round(result, accuracy), df * V
425 | else:
426 | return round(result, accuracy)
427 |
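#
# Illustrative usage sketch (commented out, not part of the library).
# The object names, parameter values and the 'gbm1'/'gbm2' risk factor
# labels below are assumptions for illustration only; the environment
# keys follow the class docstrings and __init__ methods above.
#
# me = market_environment('me', dt.datetime(2015, 1, 1))
# me.add_constant('model', 'gbm')
# me.add_constant('initial_value', 100.)
# me.add_constant('volatility', 0.2)
# me.add_constant('currency', 'EUR')
# val_env = market_environment('val_env', dt.datetime(2015, 1, 1))
# val_env.add_constant('starting_date', dt.datetime(2015, 1, 1))
# val_env.add_constant('final_date', dt.datetime(2015, 12, 31))
# val_env.add_constant('maturity', dt.datetime(2015, 12, 31))
# val_env.add_constant('currency', 'EUR')
# val_env.add_constant('frequency', 'W')
# val_env.add_constant('paths', 25000)
# val_env.add_curve('discount_curve', constant_short_rate('r', 0.01))
# max_call = valuation_mcs_european_multi(
#     name='max_call', val_env=val_env,
#     risk_factors={'gbm1': me, 'gbm2': me},
#     correlations=[['gbm1', 'gbm2', 0.5]],
#     payoff_func="np.maximum(np.maximum(maturity_value['gbm1'], "
#                 "maturity_value['gbm2']) - 100., 0)")
# max_call.present_value()
# max_call.delta('gbm1')
#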
--------------------------------------------------------------------------------
/dx/valuation/parallel_valuation.py:
--------------------------------------------------------------------------------
1 | #
2 | # DX Analytics
3 | # Derivatives Instruments and Portfolio Valuation Classes
4 | # parallel_valuation.py
5 | #
6 | # DX Analytics is a financial analytics library, mainly for
7 | # derivatives modeling and pricing by Monte Carlo simulation
8 | #
9 | # (c) Dr. Yves J. Hilpisch
10 | # The Python Quants GmbH
11 | #
12 | # This program is free software: you can redistribute it and/or modify
13 | # it under the terms of the GNU Affero General Public License as
14 | # published by the Free Software Foundation, either version 3 of the
15 | # License, or any later version.
16 | #
17 | # This program is distributed in the hope that it will be useful,
18 | # but WITHOUT ANY WARRANTY; without even the implied warranty of
19 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
20 | # GNU Affero General Public License for more details.
21 | #
22 | # You should have received a copy of the GNU Affero General Public License
23 | # along with this program. If not, see http://www.gnu.org/licenses/.
24 | #
25 | import multiprocess as mp # changed import
26 |
27 |
28 | def simulate_parallel(objs, fixed_seed=True):
29 | procs = []
30 | man = mp.Manager()
31 | output = man.Queue()
32 |
33 | def worker(o, output):
34 | o.generate_paths(fixed_seed=fixed_seed)
35 | output.put((o.name, o))
36 |
37 | for o in objs:
38 | procs.append(mp.Process(target=worker, args=(o, output)))
39 | for pr in procs:
40 | pr.start()
41 | for pr in procs:
42 | pr.join()
43 |
44 | results = [output.get() for _ in objs]
45 | underlying_objects = {name: obj for name, obj in results}
46 | return underlying_objects
47 |
48 |
49 | def value_parallel(objs, fixed_seed=True, full=False):
50 | procs = []
51 | man = mp.Manager()
52 | output = man.Queue()
53 |
54 | def worker(o, output):
55 | if full:
56 | _, pvs = o.present_value(fixed_seed=fixed_seed, full=True)
57 | output.put((o.name, pvs))
58 | else:
59 | pv = o.present_value(fixed_seed=fixed_seed)
60 | output.put((o.name, pv))
61 |
62 | for o in objs:
63 | procs.append(mp.Process(target=worker, args=(o, output)))
64 | for pr in procs:
65 | pr.start()
66 | for pr in procs:
67 | pr.join()
68 |
69 | res_list = [output.get() for _ in objs]
70 | return {name: result for name, result in res_list}
71 |
72 |
73 | def greeks_parallel(objs, Greek='Delta'):
74 | procs = []
75 | man = mp.Manager()
76 | output = man.Queue()
77 |
78 | def worker(o, output):
79 | if Greek == 'Delta':
80 | output.put((o.name, o.delta()))
81 | elif Greek == 'Vega':
82 | output.put((o.name, o.vega()))
83 |
84 | for o in objs:
85 | procs.append(mp.Process(target=worker, args=(o, output)))
86 | for pr in procs:
87 | pr.start()
88 | for pr in procs:
89 | pr.join()
90 |
91 | res_list = [output.get() for _ in objs]
92 | return {name: greek for name, greek in res_list}
93 |
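#
# Illustrative usage sketch (commented out, not part of the library).
# `val_objects` is assumed to be a list of already configured valuation
# objects; for greeks_parallel they need to be single-risk objects, since
# delta()/vega() are called without a risk factor key.
#
# pv = value_parallel(val_objects, fixed_seed=True)
# # pv maps each object's name to its Monte Carlo present value estimate
# deltas = greeks_parallel(val_objects, Greek='Delta')
#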
--------------------------------------------------------------------------------
/dx/valuation/single_risk.py:
--------------------------------------------------------------------------------
1 | #
2 | # DX Analytics
3 | # Derivatives Instruments and Portfolio Valuation Classes
4 | # single_risk.py
5 | #
6 | # DX Analytics is a financial analytics library, mainly for
7 | # derivatives modeling and pricing by Monte Carlo simulation
8 | #
9 | # (c) Dr. Yves J. Hilpisch
10 | # The Python Quants GmbH
11 | #
12 | # This program is free software: you can redistribute it and/or modify
13 | # it under the terms of the GNU Affero General Public License as
14 | # published by the Free Software Foundation, either version 3 of the
15 | # License, or any later version.
16 | #
17 | # This program is distributed in the hope that it will be useful,
18 | # but WITHOUT ANY WARRANTY; without even the implied warranty of
19 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
20 | # GNU Affero General Public License for more details.
21 | #
22 | # You should have received a copy of the GNU Affero General Public License
23 | # along with this program. If not, see http://www.gnu.org/licenses/.
24 | #
25 | from ..frame import *
26 | from ..models import *
27 |
28 | models = {'gbm': geometric_brownian_motion,
29 | 'jd': jump_diffusion,
30 | 'sv': stochastic_volatility,
31 | 'svjd': stoch_vol_jump_diffusion,
32 | 'sabr': sabr_stochastic_volatility,
33 | 'srd': square_root_diffusion,
34 | 'mrd': mean_reverting_diffusion,
35 | 'srjd': square_root_jump_diffusion,
36 | 'srjd+': square_root_jump_diffusion_plus}
37 |
38 |
39 | class valuation_class_single(object):
40 | ''' Basic class for single-risk factor instrument valuation.
41 |
42 | Attributes
43 | ==========
44 | name : string
45 | name of the object
46 | underlying :
47 | instance of simulation class
48 | mar_env : instance of market_environment
49 | market environment data for valuation
50 | payoff_func : string
51 | derivatives payoff in Python syntax
52 | Example: 'np.maximum(maturity_value - 100, 0)'
53 | where maturity_value is the NumPy vector with
54 | respective values of the underlying
55 | Example: 'np.maximum(instrument_values - 100, 0)'
56 | where instrument_values is the NumPy matrix with
57 | values of the underlying over the whole time/path grid
58 |
59 | Methods
60 | =======
61 | update:
62 | updates selected valuation parameters
63 | delta :
64 | returns the delta of the derivative
65 | gamma :
66 | returns the gamma of the derivative
67 | vega :
68 | returns the vega of the derivative
69 | theta :
70 | returns the theta of the derivative
71 | rho :
72 | returns the rho of the derivative
73 | '''
74 |
75 | def __init__(self, name, underlying, mar_env, payoff_func=''):
76 | try:
77 | self.name = name
78 | self.pricing_date = mar_env.pricing_date
79 | try:
80 | # strike is optional
81 | self.strike = mar_env.get_constant('strike')
82 | except:
83 | pass
84 | self.maturity = mar_env.get_constant('maturity')
85 | self.currency = mar_env.get_constant('currency')
86 | # simulation parameters and discount curve from simulation object
87 | self.frequency = underlying.frequency
88 | self.paths = underlying.paths
89 | self.discount_curve = underlying.discount_curve
90 | self.payoff_func = payoff_func
91 | self.underlying = underlying
92 | # provide pricing_date and maturity to underlying
93 | self.underlying.special_dates.extend([self.pricing_date,
94 | self.maturity])
95 | except:
96 | print('Error parsing market environment.')
97 |
98 | def update(self, initial_value=None, volatility=None,
99 | strike=None, maturity=None):
100 | ''' Updates single parameters of the derivative. '''
101 | if initial_value is not None:
102 | self.underlying.update(initial_value=initial_value)
103 | if volatility is not None:
104 | self.underlying.update(volatility=volatility)
105 | if strike is not None:
106 | self.strike = strike
107 | if maturity is not None:
108 | self.maturity = maturity
109 | # add new maturity date if not in time_grid
110 | if maturity not in self.underlying.time_grid:
111 | self.underlying.special_dates.append(maturity)
112 | self.underlying.instrument_values = None
113 |
114 | def delta(self, interval=None, accuracy=4):
115 | ''' Returns the delta for the derivative. '''
116 | if interval is None:
117 | interval = self.underlying.initial_value / 50.
118 | # forward-difference approximation
119 | # calculate left value for numerical delta
120 | value_left = self.present_value(fixed_seed=True, accuracy=10)
121 | # numerical underlying value for right value
122 | initial_del = self.underlying.initial_value + interval
123 | self.underlying.update(initial_value=initial_del)
124 | # calculate right value for numerical delta
125 | value_right = self.present_value(fixed_seed=True, accuracy=10)
126 | # reset the initial_value of the simulation object
127 | self.underlying.update(initial_value=initial_del - interval)
128 | delta = (value_right - value_left) / interval
129 | # correct for potential numerical errors
130 | if delta < -1.0:
131 | return -1.0
132 | elif delta > 1.0:
133 | return 1.0
134 | else:
135 | return round(delta, accuracy)
136 |
137 | def gamma(self, interval=None, accuracy=4):
138 | ''' Returns the gamma for the derivative. '''
139 | if interval is None:
140 | interval = self.underlying.initial_value / 50.
141 | # forward-difference approximation
142 | # calculate left value for numerical gamma
143 | value_left = self.delta()
144 | # numerical underlying value for right value
145 | initial_del = self.underlying.initial_value + interval
146 | self.underlying.update(initial_value=initial_del)
147 | # calculate right value for numerical delta
148 | value_right = self.delta()
149 | # reset the initial_value of the simulation object
150 | self.underlying.update(initial_value=initial_del - interval)
151 | gamma = (value_right - value_left) / interval
152 | return round(gamma, accuracy)
153 |
154 | def vega(self, interval=0.01, accuracy=4):
155 | ''' Returns the vega for the derivative. '''
156 | if interval < self.underlying.volatility / 50.:
157 | interval = self.underlying.volatility / 50.
158 | # forward-difference approximation
159 | # calculate the left value for numerical vega
160 | value_left = self.present_value(fixed_seed=True, accuracy=10)
161 | # numerical volatility value for right value
162 | vola_del = self.underlying.volatility + interval
163 | # update the simulation object
164 | self.underlying.update(volatility=vola_del)
165 | # calculate the right value of numerical vega
166 | value_right = self.present_value(fixed_seed=True, accuracy=10)
167 | # reset volatility value of simulation object
168 | self.underlying.update(volatility=vola_del - interval)
169 | vega = (value_right - value_left) / interval
170 | return round(vega, accuracy)
171 |
172 | def theta(self, interval=10, accuracy=4):
173 | ''' Returns the theta for the derivative. '''
174 | # calculate the left value for numerical theta
175 | value_left = self.present_value(fixed_seed=True, accuracy=10)
176 | # determine new pricing date
177 | orig_date = self.pricing_date
178 | new_date = orig_date + dt.timedelta(interval)
179 | # update the simulation object
180 | self.underlying.update(pricing_date=new_date)
181 | # calculate the right value of numerical theta
182 | self.pricing_date = new_date
183 | value_right = self.present_value(fixed_seed=True, accuracy=10)
184 | # reset pricing dates of sim & val objects
185 | self.underlying.update(pricing_date=orig_date)
186 | self.pricing_date = orig_date
187 | # calculating the negative value by convention
188 | # (i.e. a decrease in time-to-maturity)
189 | theta = (value_right - value_left) / (interval / 365.)
190 | return round(theta, accuracy)
191 |
192 | def rho(self, interval=0.005, accuracy=4):
193 | ''' Returns the rho for the derivative. '''
194 | # calculate the left value for numerical rho
195 | value_left = self.present_value(fixed_seed=True, accuracy=12)
196 | if type(self.discount_curve) == constant_short_rate:
197 | # adjust constant short rate factor
198 | self.discount_curve.short_rate += interval
199 | # delete instrument values (since drift changes)
200 |             iv = self.underlying.instrument_values is not None
201 |             if iv is True:
202 | store_underlying_values = self.underlying.instrument_values
203 | self.underlying.instrument_values = None
204 | # calculate the right value for numerical rho
205 | value_right = self.present_value(fixed_seed=True, accuracy=12)
206 | # reset constant short rate factor
207 | self.discount_curve.short_rate -= interval
208 | if iv is True:
209 | self.underlying.instrument_values = store_underlying_values
210 | rho = (value_right - value_left) / interval
211 | return round(rho, accuracy)
212 | else:
213 | raise NotImplementedError(
214 | 'Not yet implemented for this short rate model.')
215 |
216 |     def dollar_gamma(self, interval=None, accuracy=4):
217 |         ''' Returns the dollar gamma for the derivative. '''
218 |         dollar_gamma = (0.5 * self.gamma(interval=interval) *
219 |                         self.underlying.initial_value ** 2)
220 | return round(dollar_gamma, accuracy)
221 |
222 |
223 | class valuation_mcs_european_single(valuation_class_single):
224 | ''' Class to value European options with arbitrary payoff
225 | by single-factor Monte Carlo simulation.
226 |
227 | Methods
228 | =======
229 | generate_payoff :
230 | returns payoffs given the paths and the payoff function
231 | present_value :
232 | returns present value (Monte Carlo estimator)
233 | '''
234 |
235 | def generate_payoff(self, fixed_seed=False):
236 | '''
237 | Attributes
238 | ==========
239 | fixed_seed : boolean
240 |             use same/fixed seed for valuation
241 | '''
242 | try:
243 | # strike defined?
244 | strike = self.strike
245 | except:
246 | pass
247 | paths = self.underlying.get_instrument_values(fixed_seed=fixed_seed)
248 | time_grid = self.underlying.time_grid
249 | try:
250 | time_index = np.where(time_grid == self.maturity)[0]
251 | time_index = int(time_index)
252 | except:
253 | print('Maturity date not in time grid of underlying.')
254 | maturity_value = paths[time_index]
255 | # average value over whole path
256 | mean_value = np.mean(paths[:time_index], axis=0)
257 | # maximum value over whole path
258 | max_value = np.amax(paths[:time_index], axis=0)
259 | # minimum value over whole path
260 | min_value = np.amin(paths[:time_index], axis=0)
261 | try:
262 | payoff = eval(self.payoff_func)
263 | return payoff
264 | except:
265 | print('Error evaluating payoff function.')
266 |
267 | def present_value(self, accuracy=6, fixed_seed=False, full=False):
268 | '''
269 | Attributes
270 | ==========
271 | accuracy : int
272 | number of decimals in returned result
273 |         fixed_seed : boolean
274 |             use same/fixed seed for valuation
275 | '''
276 | cash_flow = self.generate_payoff(fixed_seed=fixed_seed)
277 |
278 | discount_factor = self.discount_curve.get_discount_factors(
279 | self.underlying.time_grid, self.paths)[1][0]
280 |
281 | result = np.sum(discount_factor * cash_flow) / len(cash_flow)
282 |
283 | if full:
284 | return round(result, accuracy), discount_factor * cash_flow
285 | else:
286 | return round(result, accuracy)
287 |
288 |
289 | class valuation_mcs_american_single(valuation_class_single):
290 | ''' Class to value American options with arbitrary payoff
291 | by single-factor Monte Carlo simulation.
292 | Methods
293 | =======
294 | generate_payoff :
295 | returns payoffs given the paths and the payoff function
296 | present_value :
297 | returns present value (LSM Monte Carlo estimator)
298 | according to Longstaff-Schwartz (2001)
299 | '''
300 |
301 | def generate_payoff(self, fixed_seed=False):
302 | '''
303 | Attributes
304 | ==========
305 |         fixed_seed : boolean
306 | use same/fixed seed for valuation
307 | '''
308 | try:
309 | strike = self.strike
310 | except:
311 | pass
312 | paths = self.underlying.get_instrument_values(fixed_seed=fixed_seed)
313 | time_grid = self.underlying.time_grid
314 | try:
315 | time_index_start = int(np.where(time_grid == self.pricing_date)[0])
316 | time_index_end = int(np.where(time_grid == self.maturity)[0])
317 | except:
318 |             print('Pricing or maturity date not in time grid of underlying.')
319 | instrument_values = paths[time_index_start:time_index_end + 1]
320 | try:
321 | payoff = eval(self.payoff_func)
322 | return instrument_values, payoff, time_index_start, time_index_end
323 | except:
324 | print('Error evaluating payoff function.')
325 |
326 | def present_value(self, accuracy=3, fixed_seed=False, bf=5, full=False):
327 | '''
328 | Attributes
329 | ==========
330 | accuracy : int
331 | number of decimals in returned result
332 |         fixed_seed : boolean
333 |             use same/fixed seed for valuation
334 | bf : int
335 | number of basis functions for regression
336 | '''
337 | instrument_values, inner_values, time_index_start, time_index_end = \
338 | self.generate_payoff(fixed_seed=fixed_seed)
339 | time_list = \
340 | self.underlying.time_grid[time_index_start:time_index_end + 1]
341 |
342 | discount_factors = self.discount_curve.get_discount_factors(
343 | time_list, self.paths, dtobjects=True)[1]
344 |
345 | V = inner_values[-1]
346 | for t in range(len(time_list) - 2, 0, -1):
347 | # derive relevant discount factor for given time interval
348 | df = discount_factors[t] / discount_factors[t + 1]
349 | # regression step
350 | rg = np.polyfit(instrument_values[t], V * df, bf)
351 | # calculation of continuation values per path
352 | C = np.polyval(rg, instrument_values[t])
353 | # optimal decision step:
354 | # if condition is satisfied (inner value > regressed cont. value)
355 | # then take inner value; take actual cont. value otherwise
356 | V = np.where(inner_values[t] > C, inner_values[t], V * df)
357 | df = discount_factors[0] / discount_factors[1]
358 | result = np.sum(df * V) / len(V)
359 | if full:
360 | return round(result, accuracy), df * V
361 | else:
362 | return round(result, accuracy)
363 |
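#
# Illustrative usage sketch (commented out, not part of the library).
# Object names and parameter values are assumptions for illustration;
# the market environment keys follow the docstrings above.
#
# me = market_environment('me', dt.datetime(2015, 1, 1))
# me.add_constant('initial_value', 36.)
# me.add_constant('volatility', 0.2)
# me.add_constant('final_date', dt.datetime(2015, 12, 31))
# me.add_constant('currency', 'EUR')
# me.add_constant('frequency', 'W')
# me.add_constant('paths', 25000)
# me.add_curve('discount_curve', constant_short_rate('r', 0.06))
# gbm = geometric_brownian_motion('gbm', me)
# me.add_constant('strike', 40.)
# me.add_constant('maturity', dt.datetime(2015, 12, 31))
# am_put = valuation_mcs_american_single(
#     'am_put', underlying=gbm, mar_env=me,
#     payoff_func='np.maximum(strike - instrument_values, 0)')
# am_put.present_value(fixed_seed=True, bf=5)
# am_put.delta()
#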
--------------------------------------------------------------------------------
/dx/valuation/var_portfolio.py:
--------------------------------------------------------------------------------
1 | #
2 | # DX Analytics
3 | # Derivatives Instruments and Portfolio Valuation Classes
4 | # var_portfolio.py
5 | #
6 | # DX Analytics is a financial analytics library, mainly for
7 | # derivatives modeling and pricing by Monte Carlo simulation
8 | #
9 | # (c) Dr. Yves J. Hilpisch
10 | # The Python Quants GmbH
11 | #
12 | # This program is free software: you can redistribute it and/or modify
13 | # it under the terms of the GNU Affero General Public License as
14 | # published by the Free Software Foundation, either version 3 of the
15 | # License, or any later version.
16 | #
17 | # This program is distributed in the hope that it will be useful,
18 | # but WITHOUT ANY WARRANTY; without even the implied warranty of
19 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
20 | # GNU Affero General Public License for more details.
21 | #
22 | # You should have received a copy of the GNU Affero General Public License
23 | # along with this program. If not, see http://www.gnu.org/licenses/.
24 | #
25 | from .derivatives_portfolio import *
26 |
27 |
28 | class var_derivatives_portfolio(derivatives_portfolio):
29 | ''' Class for building and valuing portfolios of derivatives positions
30 | with risk factors given from fitted VAR model.
31 |
32 | Attributes
33 | ==========
34 | name : str
35 | name of the object
36 | positions : dict
37 | dictionary of positions (instances of derivatives_position class)
38 | val_env : market_environment
39 | market environment for the valuation
40 | var_risk_factors : VAR model
41 | vector autoregressive model for risk factors
42 | fixed_seed : boolean
43 | flag for fixed rng seed
44 |
45 | Methods
46 | =======
47 | get_positions :
48 | prints information about the single portfolio positions
49 | get_values :
50 | estimates and returns positions values
51 | get_present_values :
52 | returns the full distribution of the simulated portfolio values
53 | '''
54 |
55 | def __init__(self, name, positions, val_env, var_risk_factors,
56 | fixed_seed=False, parallel=False):
57 | self.name = name
58 | self.positions = positions
59 | self.val_env = val_env
60 | self.var_risk_factors = var_risk_factors
61 | self.underlyings = set()
62 |
63 | self.time_grid = None
64 | self.underlying_objects = {}
65 | self.valuation_objects = {}
66 | self.fixed_seed = fixed_seed
67 | self.special_dates = []
68 | for pos in self.positions:
69 | # determine earliest starting_date
70 | self.val_env.constants['starting_date'] = \
71 | min(self.val_env.constants['starting_date'],
72 | positions[pos].mar_env.pricing_date)
73 | # determine latest date of relevance
74 | self.val_env.constants['final_date'] = \
75 | max(self.val_env.constants['final_date'],
76 | positions[pos].mar_env.constants['maturity'])
77 | # collect all underlyings
78 | # add to set; avoids redundancy
79 | for ul in positions[pos].underlyings:
80 | self.underlyings.add(ul)
81 |
82 | # generating general time grid
83 | start = self.val_env.constants['starting_date']
84 | end = self.val_env.constants['final_date']
85 | time_grid = pd.date_range(start=start, end=end,
86 |                                   freq='B'  # business days only
87 | ).to_pydatetime()
88 | time_grid = list(time_grid)
89 |
90 | if start not in time_grid:
91 | time_grid.insert(0, start)
92 | if end not in time_grid:
93 | time_grid.append(end)
94 | # delete duplicate entries & sort dates in time_grid
95 | time_grid = sorted(set(time_grid))
96 |
97 | self.time_grid = np.array(time_grid)
98 | self.val_env.add_list('time_grid', self.time_grid)
99 |
100 | #
101 | # generate simulated paths
102 | #
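        # a VAR model is fitted to the historical risk factor data; the
        # fitted model then simulates joint risk factor paths (one DataFrame
        # per Monte Carlo path), starting from the last observed values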
103 | self.fit_model = var_risk_factors.fit(maxlags=5, ic='bic')
104 | sim_paths = self.fit_model.simulate(
105 | paths=self.val_env.get_constant('paths'),
106 | steps=len(self.time_grid),
107 | initial_values=var_risk_factors.y[-1])
108 | symbols = sim_paths[0].columns.values
109 | for sym in symbols:
110 | df = pd.DataFrame()
111 | for i, path in enumerate(sim_paths):
112 | df[i] = path[sym]
113 | self.underlying_objects[sym] = general_underlying(
114 | sym, df, self.val_env)
115 | for pos in positions:
116 | # select right valuation class (European, American)
117 | val_class = otypes[positions[pos].otype]
118 | # pick the market environment and add the valuation environment
119 | mar_env = positions[pos].mar_env
120 | mar_env.add_environment(self.val_env)
121 | # instantiate valuation classes
122 | self.valuation_objects[pos] = \
123 | val_class(name=positions[pos].name,
124 | mar_env=mar_env,
125 | underlying=self.underlying_objects[
126 | positions[pos].underlyings[0]],
127 | payoff_func=positions[pos].payoff_func)
128 |
129 | def get_statistics(self):
130 | raise NotImplementedError
131 |
132 | def get_port_risk(self):
133 | raise NotImplementedError
134 |
--------------------------------------------------------------------------------
/dx_analytics.yml:
--------------------------------------------------------------------------------
1 | name: dx
2 | channels:
3 | - conda-forge
4 | dependencies:
5 | - pandas
6 | - numpy
7 | - python=3.10
8 | - jupyterlab
9 | - matplotlib
10 | - scipy
11 | - xarray
12 | - pytables
13 | - xlrd
14 | - multiprocess
15 | prefix: /Users/yves/Python/envs/dx
16 |
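# To recreate the environment (assuming conda is available):
#   conda env create -f dx_analytics.yml
#   conda activate dx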
--------------------------------------------------------------------------------
/setup.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 |
3 | from setuptools import setup, find_packages
4 |
5 | with open('requirements.txt') as f:
6 | requirements = f.read().splitlines()
7 |
8 | DISTNAME = 'dx'
9 |
10 | setup(name=DISTNAME,
11 | version='0.1.22',
12 | packages=find_packages(include=['dx', 'dx.*']),
13 | description='DX Analytics',
14 | author='Dr. Yves Hilpisch',
15 | author_email='dx@tpq.io',
16 | url='http://dx-analytics.com/',
17 | install_requires=requirements)
18 |
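# Local installation sketch (assumes a requirements.txt file is present
# next to this script, as read above):
#   pip install .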
--------------------------------------------------------------------------------