├── .coveragerc ├── .github └── workflows │ ├── build.yml │ └── update.yml ├── .gitignore ├── .vscode ├── launch.json ├── settings.json └── tasks.json ├── CITATION.cff ├── LICENSE ├── Makefile ├── dev ├── install ├── lint ├── nbconvert ├── nbsync ├── notebook_config.json ├── start-jupyter └── test ├── notebooks ├── CNAME ├── _config.yml ├── _toc.yml ├── api │ ├── data │ │ ├── deribit.rst │ │ ├── fed.rst │ │ ├── fmp.rst │ │ ├── fred.rst │ │ └── index.rst │ ├── index.rst │ ├── options │ │ ├── black.rst │ │ ├── calibration.rst │ │ ├── index.rst │ │ ├── pricer.rst │ │ └── vol_surface.rst │ ├── sp │ │ ├── cir.rst │ │ ├── compound_poisson.rst │ │ ├── heston.rst │ │ ├── index.rst │ │ ├── jump_diffusion.rst │ │ ├── ou.rst │ │ ├── poisson.rst │ │ └── weiner.rst │ ├── ta │ │ ├── index.rst │ │ ├── ohlc.rst │ │ └── paths.rst │ └── utils │ │ ├── bins.rst │ │ ├── distributions.rst │ │ ├── index.rst │ │ └── marginal1d.rst ├── applications │ ├── calibration.md │ ├── hurst.md │ ├── overview.md │ ├── sampling.md │ └── volatility_surface.md ├── assets │ ├── heston.gif │ ├── linkedin-banner.png │ ├── quantflow-light.svg │ ├── quantflow-logo.png │ ├── quantflow-repo.png │ ├── quantflow-repo.svg │ └── quantflow.svg ├── conf.py ├── data │ ├── fed.md │ ├── fiscal_data.md │ ├── fmp.md │ └── timeseries.md ├── examples │ ├── exponential_sampling.md │ ├── gaussian_sampling.md │ ├── heston_vol_surface.md │ ├── overview.md │ └── poisson_sampling.md ├── index.md ├── models │ ├── bns.md │ ├── cir.md │ ├── gousv.md │ ├── heston.md │ ├── heston_jumps.md │ ├── jump_diffusion.md │ ├── ou.md │ ├── overview.md │ ├── poisson.md │ └── weiner.md ├── reference │ ├── biblio.md │ ├── contributing.md │ ├── glossary.md │ └── references.bib └── theory │ ├── characteristic.md │ ├── inversion.md │ ├── levy.md │ ├── option_pricing.md │ └── overview.md ├── poetry.lock ├── pyproject.toml ├── quantflow ├── __init__.py ├── cli │ ├── __init__.py │ ├── app.py │ ├── commands │ │ ├── __init__.py │ │ ├── base.py │ │ ├── 
crypto.py │ │ ├── fred.py │ │ ├── stocks.py │ │ └── vault.py │ ├── script.py │ └── settings.py ├── data │ ├── __init__.py │ ├── deribit.py │ ├── fed.py │ ├── fiscal_data.py │ ├── fmp.py │ ├── fred.py │ └── vault.py ├── options │ ├── __init__.py │ ├── bs.py │ ├── calibration.py │ ├── inputs.py │ ├── pricer.py │ └── surface.py ├── py.typed ├── sp │ ├── __init__.py │ ├── base.py │ ├── bns.py │ ├── cir.py │ ├── copula.py │ ├── dsp.py │ ├── heston.py │ ├── jump_diffusion.py │ ├── ou.py │ ├── poisson.py │ └── weiner.py ├── ta │ ├── __init__.py │ ├── base.py │ ├── ohlc.py │ └── paths.py └── utils │ ├── __init__.py │ ├── bins.py │ ├── dates.py │ ├── distributions.py │ ├── functions.py │ ├── interest_rates.py │ ├── marginal.py │ ├── numbers.py │ ├── plot.py │ ├── transforms.py │ └── types.py ├── quantflow_tests ├── conftest.py ├── test_cir.py ├── test_copula.py ├── test_data.py ├── test_distributions.py ├── test_frft.py ├── test_heston.py ├── test_jump_diffusion.py ├── test_ohlc.py ├── test_options.py ├── test_options_pricer.py ├── test_ou.py ├── test_poisson.py ├── test_utils.py ├── test_weiner.py ├── utils.py └── volsurface.json └── readme.md /.coveragerc: -------------------------------------------------------------------------------- 1 | [run] 2 | source = quantflow 3 | 4 | omit = 5 | quantflow/utils/plot.py 6 | 7 | [html] 8 | directory = build/coverage/html 9 | 10 | [report] 11 | exclude_lines = 12 | pragma: no cover 13 | raise NotImplementedError 14 | if TYPE_CHECKING: 15 | @abstract 16 | 17 | [xml] 18 | output = build/coverage.xml 19 | -------------------------------------------------------------------------------- /.github/workflows/build.yml: -------------------------------------------------------------------------------- 1 | name: build 2 | 3 | on: 4 | push: 5 | branches-ignore: 6 | - deploy 7 | tags-ignore: 8 | - v* 9 | 10 | jobs: 11 | build: 12 | runs-on: ubuntu-latest 13 | env: 14 | PYTHON_ENV: ci 15 | PYPI_TOKEN: ${{ secrets.PYPI_TOKEN }} 16 | FMP_API_KEY: ${{ 
secrets.FMP_API_KEY }} 17 | strategy: 18 | matrix: 19 | python-version: ["3.11", "3.12", "3.13"] 20 | 21 | steps: 22 | - uses: actions/checkout@v4 23 | - name: Set up Python ${{ matrix.python-version }} 24 | uses: actions/setup-python@v5 25 | with: 26 | python-version: ${{ matrix.python-version }} 27 | - name: Install poetry 28 | run: pip install -U pip poetry 29 | - name: Install dependencies no book 30 | run: poetry install --all-extras 31 | - name: run tests no book 32 | run: make tests 33 | - name: Install dependencies 34 | run: make install-dev 35 | - name: run lint 36 | run: make lint-check 37 | - name: run tests 38 | run: make tests 39 | - name: upload coverage reports to codecov 40 | if: matrix.python-version == '3.12' 41 | uses: codecov/codecov-action@v3 42 | with: 43 | token: ${{ secrets.CODECOV_TOKEN }} 44 | files: ./build/coverage.xml 45 | - name: build book 46 | if: ${{ matrix.python-version == '3.12' }} 47 | run: make book 48 | - name: publish book 49 | if: ${{ matrix.python-version == '3.12' }} 50 | run: make publish-book 51 | - name: publish 52 | if: ${{ matrix.python-version == '3.12' && github.event.head_commit.message == 'release' }} 53 | run: make publish 54 | -------------------------------------------------------------------------------- /.github/workflows/update.yml: -------------------------------------------------------------------------------- 1 | name: update 2 | 3 | on: 4 | schedule: 5 | - cron: "0 6 * * 5" 6 | 7 | jobs: 8 | update: 9 | runs-on: ubuntu-latest 10 | 11 | steps: 12 | - name: checkout repo 13 | uses: actions/checkout@v2 14 | - name: Set up Python 15 | uses: actions/setup-python@v2 16 | with: 17 | python-version: "3.10" 18 | - name: install poetry 19 | run: pip install -U pip poetry 20 | - name: update dependencies 21 | run: poetry update 22 | - name: Create Pull Request 23 | uses: peter-evans/create-pull-request@v3 24 | with: 25 | token: ${{ secrets.QMBOT_GITHUB_TOKEN }} 26 | author: qmbot 27 | commit-message: update 
dependencies 28 | title: Automated Dependency Updates 29 | body: This is an auto-generated PR with dependency updates. 30 | branch: ci-poetry-update 31 | labels: ci, automated pr, automerge 32 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | # 2 | # IDEs 3 | .idea 4 | *.sublime-workspace 5 | 6 | # 7 | # Files 8 | .DS_Store 9 | *.log 10 | .env 11 | 12 | # 13 | # Coverage 14 | .coveralls-repo-token 15 | .coverage 16 | htmlcov 17 | 18 | # 19 | # Python 20 | *.pyc 21 | .eggs 22 | *.egg-info 23 | __pycache__ 24 | build 25 | dist 26 | .venv 27 | .mypy_cache 28 | .pytest_cache 29 | .ruff_cache 30 | .python-version 31 | 32 | # Jupyter 33 | *.ipynb 34 | .ipynb_checkpoints 35 | _build 36 | -------------------------------------------------------------------------------- /.vscode/launch.json: -------------------------------------------------------------------------------- 1 | { 2 | "version": "0.2.0", 3 | "configurations": [ 4 | 5 | { 6 | "name": "PyTest", 7 | "type": "debugpy", 8 | "request": "launch", 9 | "cwd": "${workspaceFolder}", 10 | "console": "integratedTerminal", 11 | "module": "pytest", 12 | "env": {}, 13 | "justMyCode": false, 14 | "args": [ 15 | "-x", 16 | "-vvv", 17 | "quantflow_tests/test_jump_diffusion.py", 18 | ] 19 | }, 20 | ] 21 | } 22 | -------------------------------------------------------------------------------- /.vscode/settings.json: -------------------------------------------------------------------------------- 1 | { 2 | "python.defaultInterpreterPath": ".venv/bin/python" 3 | } 4 | -------------------------------------------------------------------------------- /.vscode/tasks.json: -------------------------------------------------------------------------------- 1 | { 2 | "tasks": [ 3 | { 4 | "label": "JupyText Sync", 5 | "type": "shell", 6 | "command": "${command:python.interpreterPath} -m jupytext ${file} -s", 7 | 
"group": { 8 | "kind": "build", 9 | "isDefault": true 10 | }, 11 | "presentation": { 12 | "reveal": "never", 13 | } 14 | } 15 | ] 16 | } 17 | -------------------------------------------------------------------------------- /CITATION.cff: -------------------------------------------------------------------------------- 1 | # This CITATION.cff file was generated with cffinit. 2 | # Visit https://bit.ly/cffinit to generate yours today! 3 | 4 | cff-version: 1.2.0 5 | title: quantflow 6 | message: >- 7 | If you use this software, please cite it using the 8 | metadata from this file. 9 | type: software 10 | authors: 11 | - given-names: Luca 12 | family-names: Sbardella 13 | email: luca@quantmind.com 14 | - name: quantmind 15 | repository-code: 'https://github.com/quantmind/quantflow' 16 | url: 'https://quantmind.github.io/quantflow/' 17 | abstract: Quantitative finance and derivative pricing 18 | keywords: 19 | - finance 20 | - option-pricing 21 | - quantitative-finance 22 | - timeseries 23 | license: BSD-3-Clause 24 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Copyright (c) 2024 Quantmind 2 | 3 | Redistribution and use in source and binary forms, with or without modification, 4 | are permitted provided that the following conditions are met: 5 | 6 | * Redistributions of source code must retain the above copyright notice, 7 | this list of conditions and the following disclaimer. 8 | * Redistributions in binary form must reproduce the above copyright notice, 9 | this list of conditions and the following disclaimer in the documentation 10 | and/or other materials provided with the distribution. 11 | * Neither the name of the author nor the names of its contributors 12 | may be used to endorse or promote products derived from this software without 13 | specific prior written permission. 
14 | 15 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND 16 | ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED 17 | WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 18 | IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, 19 | INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, 20 | BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, 21 | DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF 22 | LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE 23 | OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED 24 | OF THE POSSIBILITY OF SUCH DAMAGE. 25 | -------------------------------------------------------------------------------- /Makefile: -------------------------------------------------------------------------------- 1 | 2 | .PHONY: help 3 | help: 4 | @echo ================================================================================ 5 | @fgrep -h "##" $(MAKEFILE_LIST) | fgrep -v fgrep | sed -e 's/\\$$//' | sed -e 's/##//' 6 | @echo ================================================================================ 7 | 8 | 9 | .PHONY: lint 10 | lint: ## Lint and fix 11 | @poetry run ./dev/lint fix 12 | 13 | 14 | .PHONY: lint-check 15 | lint-check: ## Lint check only 16 | @poetry run ./dev/lint 17 | 18 | 19 | .PHONY: install-dev 20 | install-dev: ## Install development dependencies 21 | @./dev/install 22 | 23 | 24 | .PHONY: notebook 25 | notebook: ## Run Jupyter notebook server 26 | @poetry run ./dev/start-jupyter 9095 27 | 28 | 29 | .PHONY: book 30 | book: ## Build static jupyter {book} 31 | poetry run jupyter-book build notebooks --all 32 | @cp notebooks/CNAME notebooks/_build/html/CNAME 33 | 34 | 35 | .PHONY: nbconvert 36 | nbconvert: ## Convert notebooks to myst markdown 37 | poetry run ./dev/nbconvert 38 
| 39 | .PHONY: nbsync 40 | nbsync: ## Sync python myst notebooks to .ipynb files - needed for vs notebook development 41 | poetry run ./dev/nbsync 42 | 43 | .PHONY: sphinx-config 44 | sphinx-config: ## Build sphinx config 45 | poetry run jupyter-book config sphinx notebooks 46 | 47 | 48 | .PHONY: sphinx 49 | sphinx: 50 | poetry run sphinx-build notebooks path/to/book/_build/html -b html 51 | 52 | 53 | .PHONY: publish 54 | publish: ## Release to pypi 55 | @poetry publish --build -u __token__ -p $(PYPI_TOKEN) 56 | 57 | 58 | .PHONY: publish-book 59 | publish-book: ## publish the book to github pages 60 | poetry run ghp-import -n -p -f notebooks/_build/html 61 | 62 | 63 | .PHONY: tests 64 | tests: ## Unit tests 65 | @./dev/test 66 | 67 | 68 | .PHONY: outdated 69 | outdated: ## Show outdated packages 70 | poetry show -o -a 71 | -------------------------------------------------------------------------------- /dev/install: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | pip install -U pip poetry 4 | poetry install --all-extras --with book 5 | -------------------------------------------------------------------------------- /dev/lint: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | set -e 3 | 4 | ISORT_ARGS="-c" 5 | BLACK_ARG="--check" 6 | RUFF_ARG="" 7 | 8 | if [ "$1" = "fix" ] ; then 9 | ISORT_ARGS="" 10 | BLACK_ARG="" 11 | RUFF_ARG="--fix" 12 | fi 13 | 14 | echo isort 15 | isort quantflow quantflow_tests ${ISORT_ARGS} 16 | echo black 17 | black quantflow quantflow_tests ${BLACK_ARG} 18 | echo ruff 19 | ruff check quantflow quantflow_tests ${RUFF_ARG} 20 | echo mypy 21 | mypy quantflow 22 | echo mypy tests 23 | mypy quantflow_tests --explicit-package-bases 24 | -------------------------------------------------------------------------------- /dev/nbconvert: -------------------------------------------------------------------------------- 1 | 
#!/usr/bin/env bash 2 | set -e 3 | 4 | for file in notebooks/**/*.ipynb 5 | do 6 | jupytext "$file" -s 7 | done 8 | -------------------------------------------------------------------------------- /dev/nbsync: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | set -e 3 | 4 | for file in notebooks/**/*.md 5 | do 6 | jupytext "$file" -s 7 | done 8 | -------------------------------------------------------------------------------- /dev/notebook_config.json: -------------------------------------------------------------------------------- 1 | { 2 | "ServerApp": { 3 | "password": "sha1:f0ac62b893bc:1d3d98395d095914c994e1a84f767e47a6d3b4a4" 4 | }, 5 | "@jupyterlab/apputils-extension:themes": { 6 | "theme": "JupyterLab Dark" 7 | } 8 | } 9 | -------------------------------------------------------------------------------- /dev/start-jupyter: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | set -e 3 | 4 | PORT=$1 5 | 6 | export PYTHONPATH=${PWD}:${PYTHONPATH} 7 | ENV_FILE="${PWD}/.env" 8 | touch ${ENV_FILE} 9 | export $(grep -v '^#' ${ENV_FILE} | xargs) 10 | 11 | jupyter-lab --port=${PORT} 12 | -------------------------------------------------------------------------------- /dev/test: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env sh 2 | 3 | poetry run \ 4 | pytest -x -vv \ 5 | --log-cli-level error \ 6 | --cov --cov-report xml --cov-report html "$@" 7 | 8 | -------------------------------------------------------------------------------- /notebooks/CNAME: -------------------------------------------------------------------------------- 1 | quantflow.quantmind.com 2 | -------------------------------------------------------------------------------- /notebooks/_config.yml: -------------------------------------------------------------------------------- 1 | # Book settings 2 | # Learn more at 
https://jupyterbook.org/customize/config.html 3 | 4 | title: Quantflow library 5 | author: quantmind 6 | copyright: "2014-2025" 7 | logo: assets/quantflow-light.svg 8 | 9 | # Force re-execution of notebooks on each build. 10 | # See https://jupyterbook.org/content/execute.html 11 | execute: 12 | #execute_notebooks: "off" 13 | execute_notebooks: force 14 | 15 | # Define the name of the latex output file for PDF builds 16 | latex: 17 | latex_documents: 18 | targetname: book.tex 19 | 20 | # Add a bibtex file so that we can create citations 21 | bibtex_bibfiles: 22 | - reference/references.bib 23 | 24 | # Information about where the book exists on the web 25 | repository: 26 | url: https://github.com/quantmind/quantflow # Online location of your book 27 | path_to_book: notebooks # Optional path to your book, relative to the repository root 28 | branch: main # Which branch of the repository should be used when creating links (optional) 29 | 30 | # Add GitHub buttons to your book 31 | # See https://jupyterbook.org/customize/config.html#add-a-link-to-your-repository 32 | html: 33 | favicon: assets/quantflow-logo.png 34 | home_page_in_navbar: false 35 | use_edit_page_button: true 36 | use_issues_button: true 37 | use_repository_button: true 38 | analytics: 39 | google_analytics_id: G-CM0DR45HDR 40 | 41 | parse: 42 | myst_enable_extensions: 43 | # don't forget to list any other extensions you want enabled, 44 | # including those that are enabled by default! 
45 | - dollarmath 46 | - amsmath 47 | 48 | sphinx: 49 | recursive_update: true 50 | config: 51 | html_static_path: 52 | - assets 53 | html_js_files: 54 | # required by plotly charts 55 | - https://cdnjs.cloudflare.com/ajax/libs/require.js/2.3.4/require.min.js 56 | mathjax_options: { 57 | "async": "async", 58 | } 59 | extra_extensions: 60 | - "sphinx.ext.autodoc" 61 | - "sphinx.ext.autosummary" 62 | - "sphinx.ext.intersphinx" 63 | - "sphinx_autosummary_accessors" 64 | - "sphinx_copybutton" 65 | - "autodocsumm" 66 | -------------------------------------------------------------------------------- /notebooks/_toc.yml: -------------------------------------------------------------------------------- 1 | # Table of contents 2 | # Learn more at https://jupyterbook.org/customize/toc.html 3 | 4 | format: jb-book 5 | root: index 6 | parts: 7 | - caption: Topic Guides 8 | chapters: 9 | - file: theory/overview 10 | sections: 11 | - file: theory/levy 12 | - file: theory/characteristic 13 | - file: theory/inversion 14 | - file: theory/option_pricing 15 | 16 | - file: models/overview 17 | sections: 18 | - file: models/weiner 19 | - file: models/poisson 20 | - file: models/jump_diffusion 21 | - file: models/cir 22 | - file: models/ou 23 | - file: models/heston 24 | - file: models/heston_jumps 25 | - file: models/bns 26 | 27 | - file: applications/overview 28 | sections: 29 | - file: applications/volatility_surface 30 | - file: applications/hurst 31 | - file: applications/calibration 32 | 33 | - file: examples/overview 34 | sections: 35 | - file: examples/gaussian_sampling 36 | - file: examples/exponential_sampling 37 | - file: examples/poisson_sampling 38 | - file: examples/heston_vol_surface 39 | 40 | - file: api/index.rst 41 | 42 | - caption: Reference 43 | chapters: 44 | - file: reference/contributing 45 | - file: reference/glossary 46 | - file: reference/biblio 47 | -------------------------------------------------------------------------------- /notebooks/api/data/deribit.rst: 
-------------------------------------------------------------------------------- 1 | ================ 2 | Deribit 3 | ================ 4 | 5 | .. currentmodule:: quantflow.data.deribit 6 | 7 | .. autoclass:: Deribit 8 | :members: 9 | :member-order: groupwise 10 | :autosummary: 11 | :autosummary-nosignatures: 12 | -------------------------------------------------------------------------------- /notebooks/api/data/fed.rst: -------------------------------------------------------------------------------- 1 | ================ 2 | Federal Reserve 3 | ================ 4 | 5 | .. currentmodule:: quantflow.data.fed 6 | 7 | .. autoclass:: FederalReserve 8 | :members: 9 | :member-order: groupwise 10 | :autosummary: 11 | :autosummary-nosignatures: 12 | -------------------------------------------------------------------------------- /notebooks/api/data/fmp.rst: -------------------------------------------------------------------------------- 1 | ================ 2 | FMP 3 | ================ 4 | 5 | .. currentmodule:: quantflow.data.fmp 6 | 7 | .. autoclass:: FMP 8 | :members: 9 | :member-order: groupwise 10 | :autosummary: 11 | :autosummary-nosignatures: 12 | -------------------------------------------------------------------------------- /notebooks/api/data/fred.rst: -------------------------------------------------------------------------------- 1 | ================ 2 | Fred 3 | ================ 4 | 5 | .. currentmodule:: quantflow.data.fred 6 | 7 | .. autoclass:: Fred 8 | :members: 9 | :member-order: groupwise 10 | :autosummary: 11 | :autosummary-nosignatures: 12 | -------------------------------------------------------------------------------- /notebooks/api/data/index.rst: -------------------------------------------------------------------------------- 1 | ============== 2 | Data fetching 3 | ============== 4 | 5 | .. currentmodule:: quantflow.data 6 | 7 | The :mod:`quantflow.data` module provides classes and functions for fetching data from various sources. 
8 | To use this module, the package must be installed with the optional ``data`` extra: 9 | .. code-block:: bash 10 | 11 |    pip install quantflow[data] 12 | 13 | .. toctree:: 14 | :maxdepth: 1 15 | 16 | fmp 17 | fed 18 | fred 19 | deribit 20 | -------------------------------------------------------------------------------- /notebooks/api/index.rst: -------------------------------------------------------------------------------- 1 | 2 | API Reference 3 | ============== 4 | 5 | .. grid:: 6 | 7 | .. grid-item-card:: 8 | 9 | .. toctree:: 10 | :maxdepth: 2 11 | 12 | sp/index 13 | 14 | .. grid-item-card:: 15 | 16 | .. toctree:: 17 | :maxdepth: 2 18 | 19 | options/index 20 | .. grid-item-card:: 21 | 22 | .. toctree:: 23 | :maxdepth: 2 24 | 25 | ta/index 26 | .. grid:: 27 | 28 | .. grid-item-card:: 29 | 30 | .. toctree:: 31 | :maxdepth: 2 32 | 33 | data/index 34 | 35 | .. grid-item-card:: 36 | 37 | .. toctree:: 38 | :maxdepth: 2 39 | 40 | utils/index 41 | -------------------------------------------------------------------------------- /notebooks/api/options/black.rst: -------------------------------------------------------------------------------- 1 | ================================== 2 | Black Pricing 3 | ================================== 4 | 5 | .. module:: quantflow.options.bs 6 | 7 | .. autofunction:: black_price 8 | 9 | .. autofunction:: black_delta 10 | 11 | .. autofunction:: black_vega 12 | 13 | .. autofunction:: implied_black_volatility 14 | -------------------------------------------------------------------------------- /notebooks/api/options/calibration.rst: -------------------------------------------------------------------------------- 1 | ================================== 2 | Vol Model Calibration 3 | ================================== 4 | 5 | .. module:: quantflow.options.calibration 6 | 7 | .. autoclass:: VolModelCalibration 8 | :members: 9 | :member-order: groupwise 10 | :autosummary: 11 | :autosummary-nosignatures: 12 | 13 | 14 | .. 
autoclass:: HestonCalibration 15 | :members: 16 | :member-order: groupwise 17 | :autosummary: 18 | :autosummary-nosignatures: 19 | -------------------------------------------------------------------------------- /notebooks/api/options/index.rst: -------------------------------------------------------------------------------- 1 | ================================== 2 | Option Pricing 3 | ================================== 4 | 5 | .. currentmodule:: quantflow.options 6 | 7 | The :mod:`options` module provides classes and functions for pricing and analyzing options. 8 | The main class is the :class:`.VolSurface` class which is used to represent 9 | a volatility surface for a given asset. 10 | A volatility surface is usually created via a :class:`.VolSurfaceLoader` object. 11 | 12 | .. toctree:: 13 | :maxdepth: 1 14 | 15 | vol_surface 16 | black 17 | pricer 18 | calibration 19 | -------------------------------------------------------------------------------- /notebooks/api/options/pricer.rst: -------------------------------------------------------------------------------- 1 | ================================== 2 | Option Pricer 3 | ================================== 4 | 5 | The option pricer module provides classes for pricing options using 6 | different stochastic volatility models. 7 | 8 | 9 | .. module:: quantflow.options.pricer 10 | 11 | .. autoclass:: OptionPricer 12 | :members: 13 | :member-order: groupwise 14 | :autosummary: 15 | :autosummary-nosignatures: 16 | 17 | 18 | .. autoclass:: MaturityPricer 19 | :members: 20 | :member-order: groupwise 21 | :autosummary: 22 | :autosummary-nosignatures: 23 | -------------------------------------------------------------------------------- /notebooks/api/options/vol_surface.rst: -------------------------------------------------------------------------------- 1 | ================================== 2 | Vol Surface 3 | ================================== 4 | 5 | .. module:: quantflow.options.surface 6 | 7 | .. 
autoclass:: VolSurface 8 | :members: 9 | :member-order: groupwise 10 | :autosummary: 11 | :autosummary-nosignatures: 12 | 13 | 14 | .. autoclass:: VolCrossSection 15 | :members: 16 | :member-order: groupwise 17 | :autosummary: 18 | :autosummary-nosignatures: 19 | 20 | .. autoclass:: GenericVolSurfaceLoader 21 | :members: 22 | :member-order: groupwise 23 | :autosummary: 24 | :autosummary-nosignatures: 25 | 26 | 27 | .. autoclass:: VolSurfaceLoader 28 | :members: 29 | :member-order: groupwise 30 | :autosummary: 31 | :autosummary-nosignatures: 32 | 33 | 34 | .. autoclass:: OptionSelection 35 | :members: 36 | -------------------------------------------------------------------------------- /notebooks/api/sp/cir.rst: -------------------------------------------------------------------------------- 1 | ================ 2 | CIR 3 | ================ 4 | 5 | The Cox–Ingersoll–Ross (CIR) model 6 | 7 | .. currentmodule:: quantflow.sp.cir 8 | 9 | .. autoclass:: CIR 10 | :members: 11 | :member-order: groupwise 12 | :autosummary: 13 | :autosummary-nosignatures: 14 | 15 | -------------------------------------------------------------------------------- /notebooks/api/sp/compound_poisson.rst: -------------------------------------------------------------------------------- 1 | =================== 2 | Compound Poisson 3 | =================== 4 | 5 | .. currentmodule:: quantflow.sp.poisson 6 | 7 | .. autoclass:: CompoundPoissonProcess 8 | :members: 9 | :member-order: groupwise 10 | :autosummary: 11 | :autosummary-nosignatures: 12 | 13 | -------------------------------------------------------------------------------- /notebooks/api/sp/heston.rst: -------------------------------------------------------------------------------- 1 | ================ 2 | Heston process 3 | ================ 4 | 5 | .. currentmodule:: quantflow.sp.heston 6 | 7 | .. autoclass:: Heston 8 | :members: 9 | :member-order: groupwise 10 | :autosummary: 11 | :autosummary-nosignatures: 12 | 13 | 14 | .. 
autoclass:: HestonJ 15 | :members: 16 | :member-order: groupwise 17 | :autosummary: 18 | :autosummary-nosignatures: 19 | -------------------------------------------------------------------------------- /notebooks/api/sp/index.rst: -------------------------------------------------------------------------------- 1 | =================== 2 | Stochastic Process 3 | =================== 4 | 5 | This page gives an overview of all Stochastic Processes available in the library. 6 | 7 | .. _sp: 8 | 9 | .. currentmodule:: quantflow.sp.base 10 | 11 | .. autoclass:: StochasticProcess 12 | :members: 13 | :noindex: 14 | :autosummary: 15 | :autosummary-nosignatures: 16 | 17 | .. autoclass:: StochasticProcess1D 18 | :members: 19 | :noindex: 20 | 21 | 22 | .. autoclass:: IntensityProcess 23 | :members: 24 | :noindex: 25 | 26 | .. toctree:: 27 | :maxdepth: 1 28 | 29 | weiner 30 | poisson 31 | compound_poisson 32 | ou 33 | cir 34 | jump_diffusion 35 | heston 36 | -------------------------------------------------------------------------------- /notebooks/api/sp/jump_diffusion.rst: -------------------------------------------------------------------------------- 1 | ================ 2 | Jump diffusions 3 | ================ 4 | 5 | Jump-diffusion models are a class of stochastic processes that combine a diffusion process with a jump process, where the jump component is a Poisson process that generates discontinuous moves in the value of the underlying asset. They generalize the Black-Scholes model by allowing for large, 6 | discontinuous jumps in the asset value. 7 | 8 | The most famous jump-diffusion model is the Merton model, introduced by Robert Merton in 1976. The Merton model assumes that the underlying asset follows a geometric Brownian motion with jumps that are normally distributed. 9 | 10 | .. currentmodule:: quantflow.sp.jump_diffusion 11 | 12 | .. 
autoclass:: JumpDiffusion 13 | :members: 14 | :member-order: groupwise 15 | :autosummary: 16 | :autosummary-nosignatures: 17 | -------------------------------------------------------------------------------- /notebooks/api/sp/ou.rst: -------------------------------------------------------------------------------- 1 | ================ 2 | OU Processes 3 | ================ 4 | 5 | These are the classes that implement gaussian and non-gaussian 6 | `Ornstein-Uhlenbeck `_ process. 7 | 8 | 9 | .. currentmodule:: quantflow.sp.ou 10 | 11 | .. autoclass:: Vasicek 12 | :members: 13 | :member-order: groupwise 14 | :autosummary: 15 | :autosummary-nosignatures: 16 | 17 | 18 | .. autoclass:: GammaOU 19 | :members: 20 | :member-order: groupwise 21 | :autosummary: 22 | :autosummary-nosignatures: 23 | -------------------------------------------------------------------------------- /notebooks/api/sp/poisson.rst: -------------------------------------------------------------------------------- 1 | ================ 2 | Poisson process 3 | ================ 4 | 5 | .. currentmodule:: quantflow.sp.poisson 6 | 7 | .. autoclass:: PoissonProcess 8 | :members: 9 | :member-order: groupwise 10 | :autosummary: 11 | :autosummary-nosignatures: 12 | 13 | -------------------------------------------------------------------------------- /notebooks/api/sp/weiner.rst: -------------------------------------------------------------------------------- 1 | =============== 2 | Weiner process 3 | =============== 4 | 5 | .. module:: quantflow.sp.weiner 6 | 7 | .. autoclass:: WeinerProcess 8 | :members: 9 | :member-order: groupwise 10 | :autosummary: 11 | :autosummary-nosignatures: 12 | 13 | -------------------------------------------------------------------------------- /notebooks/api/ta/index.rst: -------------------------------------------------------------------------------- 1 | ================================== 2 | Timeseries Analysis 3 | ================================== 4 | 5 | .. 
currentmodule:: quantflow.ta 6 | 7 | .. toctree:: 8 | :maxdepth: 1 9 | 10 | ohlc 11 | paths 12 | -------------------------------------------------------------------------------- /notebooks/api/ta/ohlc.rst: -------------------------------------------------------------------------------- 1 | ================ 2 | OHLC 3 | ================ 4 | 5 | .. currentmodule:: quantflow.ta.ohlc 6 | 7 | .. autoclass:: OHLC 8 | :members: 9 | :member-order: groupwise 10 | :autosummary: 11 | :autosummary-nosignatures: 12 | -------------------------------------------------------------------------------- /notebooks/api/ta/paths.rst: -------------------------------------------------------------------------------- 1 | =========== 2 | Paths 3 | =========== 4 | 5 | .. module:: quantflow.ta.paths 6 | 7 | .. autoclass:: Paths 8 | :members: 9 | :member-order: groupwise 10 | :autosummary: 11 | :autosummary-nosignatures: 12 | -------------------------------------------------------------------------------- /notebooks/api/utils/bins.rst: -------------------------------------------------------------------------------- 1 | =========== 2 | Bins 3 | =========== 4 | 5 | .. module:: quantflow.utils.bins 6 | 7 | .. autofunction:: pdf 8 | 9 | .. autofunction:: event_density 10 | -------------------------------------------------------------------------------- /notebooks/api/utils/distributions.rst: -------------------------------------------------------------------------------- 1 | ============== 2 | Distributions 3 | ============== 4 | 5 | .. module:: quantflow.utils.distributions 6 | 7 | .. autoclass:: Distribution1D 8 | :members: 9 | :member-order: groupwise 10 | :autosummary: 11 | :autosummary-nosignatures: 12 | 13 | 14 | .. autoclass:: Exponential 15 | :members: 16 | :member-order: groupwise 17 | :autosummary: 18 | :autosummary-nosignatures: 19 | 20 | 21 | .. autoclass:: DoubleExponential 22 | :members: 23 | :member-order: groupwise 24 | :autosummary: 25 | :autosummary-nosignatures: 26 | 27 | 28 | .. 
autoclass:: Normal 29 | :members: 30 | :member-order: groupwise 31 | :autosummary: 32 | :autosummary-nosignatures: 33 | -------------------------------------------------------------------------------- /notebooks/api/utils/index.rst: -------------------------------------------------------------------------------- 1 | ====== 2 | Utils 3 | ====== 4 | 5 | .. currentmodule:: quantflow.utils 6 | 7 | .. toctree:: 8 | :maxdepth: 1 9 | 10 | marginal1d 11 | distributions 12 | bins 13 | -------------------------------------------------------------------------------- /notebooks/api/utils/marginal1d.rst: -------------------------------------------------------------------------------- 1 | =========== 2 | Marginal1D 3 | =========== 4 | 5 | .. module:: quantflow.utils.marginal 6 | 7 | .. autoclass:: Marginal1D 8 | :members: 9 | :member-order: groupwise 10 | :autosummary: 11 | :autosummary-nosignatures: 12 | -------------------------------------------------------------------------------- /notebooks/applications/calibration.md: -------------------------------------------------------------------------------- 1 | --- 2 | jupytext: 3 | formats: ipynb,md:myst 4 | text_representation: 5 | extension: .md 6 | format_name: myst 7 | format_version: 0.13 8 | jupytext_version: 1.16.6 9 | kernelspec: 10 | display_name: Python 3 (ipykernel) 11 | language: python 12 | name: python3 13 | --- 14 | 15 | # Calibration 16 | 17 | Early pointers 18 | 19 | * https://github.com/rlabbe/filterpy 20 | * [filterpy book](https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python) 21 | 22 | +++ 23 | 24 | ## Calibrating ABC 25 | 26 | For calibration we use {cite:p}`ukf`. 
27 | Let's consider the Heston model as a test case 28 | 29 | ```{code-cell} 30 | from quantflow.sp.heston import Heston 31 | 32 | pr = Heston.create(vol=0.6, kappa=1.3, sigma=0.8, rho=-0.6) 33 | pr.variance_process.is_positive 34 | ``` 35 | 36 | The Heston model is a classical example where the calibration of parameters requires dealing with the estimation of an unobserved random variable, the stochastic variance. The model can be discretized as follows: 37 | 38 | \begin{align} 39 | d \nu_t &= \kappa\left(\theta -\nu_t\right) dt + \sigma \sqrt{\nu_t} d z_t \\ 40 | d s_t &= -\frac{\nu_t}{2}dt + \sqrt{\nu_t} d w_t \\ 41 | {\mathbb E}\left[d w_t d z_t\right] &= \rho dt 42 | \end{align} 43 | 44 | noting that 45 | 46 | \begin{equation} 47 | d z_t = \rho d w_t + \sqrt{1-\rho^2} d b_t 48 | \end{equation} 49 | 50 | which leads to 51 | 52 | \begin{align} 53 | d \nu_t &= \kappa\left(\theta -\nu_t\right) dt + \sigma \sqrt{\nu_t} \rho d w_t + \sigma \sqrt{\nu_t} \sqrt{1-\rho^2} d b_t \\ 54 | d s_t &= -\frac{\nu_t}{2}dt + \sqrt{\nu_t} d w_t \\ 55 | \end{align} 56 | 57 | and finally 58 | 59 | \begin{align} 60 | d \nu_t &= \kappa\left(\theta -\nu_t\right) dt + \sigma \rho \frac{\nu_t}{2} dt + \sigma \sqrt{\nu_t} \sqrt{1-\rho^2} d b_t + \sigma \rho d s_t\\ 61 | d s_t &= -\frac{\nu_t}{2}dt + \sqrt{\nu_t} d w_t \\ 62 | \end{align} 63 | 64 | Our problem is to find the *best* estimate of $\nu_t$, given by this equation, based on the observations $s_t$. 65 | 66 | The Heston model is a dynamic model which can be represented in state-space form: $X_t$ is the state while $Z_t$ is the observable 67 | 68 | \begin{align} 69 | X_{t+1} &= f\left(X_t, \Theta\right) + B^x_t\\ 70 | Z_t &= h\left(X_t, \Theta\right) + B^z_t \\ 71 | B^x_t &\sim {\cal N}\left(0, Q_t\right) \\ 72 | B^z_t &\sim {\cal N}\left(0, R_t\right) \\ 73 | \end{align} 74 | 75 | $f$ is the *state transition equation* while $h$ is the *measurement equation*.
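To make the state-space recursion concrete, here is a minimal filter step written with plain numpy. This is only a sketch under stated assumptions: it uses the Jacobian-based extended Kalman filter (the cited reference uses the unscented variant), and all names here (`ekf_step`, `f_jac`, `h_jac`) are hypothetical rather than part of quantflow.

```python
import numpy as np

def ekf_step(x, P, z, f, f_jac, h, h_jac, Q, R):
    """One extended Kalman filter step for the system
    X_{t+1} = f(X_t) + B^x,  Z_t = h(X_t) + B^z."""
    # prediction: propagate the state and its covariance
    x_pred = f(x)
    F = f_jac(x)
    P_pred = F @ P @ F.T + Q
    # measurement update: correct the prediction with the observation z
    H = h_jac(x_pred)
    y = z - h(x_pred)                    # innovation
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

For the Heston state above, `f` would be the discretized drift of $\left(\nu_t, s_t\right)$ and `h` the log-return measurement; the corresponding Jacobian appears in the `HestonCalibration` class at the end of this notebook.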
76 | 77 | +++ 78 | 79 | The state equation is given by 80 | 81 | \begin{align} 82 | X_{t+1} &= \left[\begin{matrix}\kappa\theta dt \\ 0\end{matrix}\right] + \left[\begin{matrix}1 - \kappa dt & 0 \\ -\frac{dt}{2} & 0\end{matrix}\right] X_t + B^x_t 83 | \end{align} 84 | 85 | ```{code-cell} 86 | [p for p in pr.variance_process.parameters] 87 | ``` 88 | 89 | ```{code-cell} 90 | 91 | ``` 92 | 93 | ## Calibration against historical timeseries 94 | 95 | We calibrate the Heston model against historical time series; in this case the measurement is the log change for a given frequency. 96 | 97 | \begin{align} 98 | F_t &= \left[\begin{matrix}1 - \kappa dt & 0 \\ -\frac{dt}{2} & 0\end{matrix}\right] \\ 99 | Q_t &= \nu_t dt \left[\begin{matrix}\sigma^2 & \sigma\rho \\ \sigma\rho & 1\end{matrix}\right] \\ 100 | z_t &= d s_t 101 | \end{align} 102 | 103 | The augmented state vector is given by 104 | \begin{align} 105 | x_t &= \left[\begin{matrix}\nu_t & w_t & z_t\end{matrix}\right]^T \\ 106 | \bar{x}_t = {\mathbb E}\left[x_t\right] &= \left[\begin{matrix}\nu_t & 0 & 0\end{matrix}\right]^T 107 | \end{align} 108 | 109 | ```{code-cell} 110 | from quantflow.data.fmp import FMP 111 | frequency = "1min" 112 | async with FMP() as cli: 113 | df = await cli.prices("ETHUSD", frequency) 114 | df = df.sort_values("date").reset_index(drop=True) 115 | df 116 | ``` 117 | 118 | ```{code-cell} 119 | import plotly.express as px 120 | fig = px.line(df, x="date", y="close", markers=True) 121 | fig.show() 122 | ``` 123 | 124 | ```{code-cell} 125 | import numpy as np 126 | from quantflow.utils.volatility import parkinson_estimator, GarchEstimator 127 | df["returns"] = np.log(df["close"]) - np.log(df["open"]) 128 | df["pk"] = parkinson_estimator(df["high"], df["low"]) 129 | ds = df.dropna() 130 | dt = cli.historical_frequencies_annulaized()[frequency] 131 | fig = px.line(ds["returns"], markers=True) 132 | fig.show() 133 | ``` 134 | 135 | ```{code-cell} 136 | import plotly.express as px 137 | from quantflow.utils.bins import pdf 138 | df = pdf(ds["returns"], num=20) 139 | fig = px.bar(df, x="x", y="f") 140 | fig.show()
141 | ``` 142 | 143 | ```{code-cell} 144 | g1 = GarchEstimator.returns(ds["returns"], dt) 145 | g2 = GarchEstimator.pk(ds["returns"], ds["pk"], dt) 146 | ``` 147 | 148 | ```{code-cell} 149 | import pandas as pd 150 | yf = pd.DataFrame(dict(returns=g2.y2, pk=g2.p)) 151 | fig = px.line(yf, markers=True) 152 | fig.show() 153 | ``` 154 | 155 | ```{code-cell} 156 | r1 = g1.fit() 157 | r1 158 | ``` 159 | 160 | ```{code-cell} 161 | r2 = g2.fit() 162 | r2 163 | ``` 164 | 165 | ```{code-cell} 166 | sig2 = pd.DataFrame(dict(returns=np.sqrt(g2.filter(r1["params"])), pk=np.sqrt(g2.filter(r2["params"])))) 167 | fig = px.line(sig2, markers=False, title="Stochastic volatility") 168 | fig.show() 169 | ``` 170 | 171 | ```{code-cell} 172 | class HestonCalibration: 173 | 174 | def __init__(self, dt: float, initial_std = 0.5): 175 | self.dt = dt 176 | self.kappa = 1 177 | self.theta = initial_std*initial_std 178 | self.sigma = 0.2 179 | self.x0 = np.array((self.theta, 0)) 180 | 181 | def prediction(self, x): 182 | return np.array((x[0] + self.kappa*(self.theta - x[0])*self.dt, -0.5*x[0]*self.dt)) 183 | 184 | def state_jacobian(self): 185 | """The Jacobian of the state equation""" 186 | return np.array(((1-self.kappa*self.dt, 0),(-0.5*self.dt, 0))) 187 | ``` 188 | 189 | ```{code-cell} 190 | 191 | ``` 192 | 193 | ```{code-cell} 194 | c = HestonCalibration(dt) 195 | c.x0 196 | ``` 197 | 198 | ```{code-cell} 199 | c.prediction(c.x0) 200 | ``` 201 | 202 | ```{code-cell} 203 | c.state_jacobian() 204 | ``` 205 | -------------------------------------------------------------------------------- /notebooks/applications/overview.md: -------------------------------------------------------------------------------- 1 | --- 2 | jupytext: 3 | text_representation: 4 | extension: .md 5 | format_name: myst 6 | format_version: 0.13 7 | jupytext_version: 1.16.6 8 | kernelspec: 9 | display_name: Python 3 (ipykernel) 10 | language: python 11 | name: python3 12 | --- 13 | 14 | # Applications 15 | 16 |
Real-world applications of the library 17 | 18 | ```{tableofcontents} 19 | ``` 20 | 21 | ```{code-cell} 22 | 23 | ``` 24 | -------------------------------------------------------------------------------- /notebooks/applications/sampling.md: -------------------------------------------------------------------------------- 1 | --- 2 | jupytext: 3 | text_representation: 4 | extension: .md 5 | format_name: myst 6 | format_version: 0.13 7 | jupytext_version: 1.16.6 8 | kernelspec: 9 | display_name: Python 3 (ipykernel) 10 | language: python 11 | name: python3 12 | --- 13 | 14 | # Sampling Tools 15 | 16 | The library uses the `Paths` class for managing Monte Carlo paths. 17 | 18 | ```{code-cell} 19 | from quantflow.utils.paths import Paths 20 | 21 | nv = Paths.normal_draws(paths=1000, time_horizon=1, time_steps=1000) 22 | ``` 23 | 24 | ```{code-cell} 25 | nv.var().mean() 26 | ``` 27 | 28 | ```{code-cell} 29 | nv = Paths.normal_draws(paths=1000, time_horizon=1, time_steps=1000, antithetic_variates=False) 30 | nv.var().mean() 31 | ``` 32 | 33 | ```{code-cell} 34 | 35 | ``` 36 | -------------------------------------------------------------------------------- /notebooks/applications/volatility_surface.md: -------------------------------------------------------------------------------- 1 | --- 2 | jupytext: 3 | text_representation: 4 | extension: .md 5 | format_name: myst 6 | format_version: 0.13 7 | jupytext_version: 1.16.6 8 | kernelspec: 9 | display_name: .venv 10 | language: python 11 | name: python3 12 | --- 13 | 14 | # Volatility Surface 15 | 16 | In this notebook we illustrate the use of the Volatility Surface tool in the library. We use [deribit](https://docs.deribit.com/) options on ETHUSD as an example.
17 | 18 | First, fetch the data 19 | 20 | ```{code-cell} ipython3 21 | from quantflow.data.deribit import Deribit 22 | 23 | async with Deribit() as cli: 24 | loader = await cli.volatility_surface_loader("eth") 25 | ``` 26 | 27 | Once we have loaded the data, we create the surface and display the term-structure of forwards 28 | 29 | ```{code-cell} ipython3 30 | vs = loader.surface() 31 | vs.maturities = vs.maturities 32 | vs.term_structure() 33 | ``` 34 | 35 | ```{code-cell} ipython3 36 | vs.spot 37 | ``` 38 | 39 | ## bs method 40 | 41 | This method calculates the implied Black volatility from option prices. By default it uses the best option in the surface for the calculation. 42 | 43 | The `options_df` method allows one to inspect the bid/ask for call options at a given cross section. 44 | Prices of options are normalized by the forward price; in other words, they are given in base currency, in this case ETH. 45 | 46 | Moneyness is defined as 47 | 48 | \begin{equation} 49 | k = \log{\frac{K}{F}} 50 | \end{equation} 51 | 52 | ```{code-cell} ipython3 53 | vs.bs() 54 | df = vs.disable_outliers(0.95).options_df() 55 | df 56 | ``` 57 | 58 | The plot function is enabled only if [plotly](https://plotly.com/python/) is installed 59 | 60 | ```{code-cell} ipython3 61 | from plotly.subplots import make_subplots 62 | 63 | # consider 6 expiries 64 | vs6 = vs.trim(6) 65 | 66 | titles = [] 67 | for row in range(2): 68 | for col in range(3): 69 | index = row * 3 + col 70 | titles.append(f"Expiry {vs6.maturities[index].maturity}") 71 | fig = make_subplots(rows=2, cols=3, subplot_titles=titles).update_layout(height=600, title="ETH Volatility Surface") 72 | for row in range(2): 73 | for col in range(3): 74 | index = row * 3 + col 75 | vs6.plot(index=index, fig=fig, showlegend=False, fig_params=dict(row=row+1, col=col+1)) 76 | fig 77 | ``` 78 | 79 | The `moneyness_ttm` is defined as 80 | 81 | \begin{equation} 82 | \frac{1}{\sqrt{T}} \ln{\frac{K}{F}} 83 | \end{equation} 84 | 85 |
where $T$ is the time-to-maturity. 86 | 87 | ```{code-cell} ipython3 88 | vs6.plot3d().update_layout(height=800, title="ETH Volatility Surface", scene_camera=dict(eye=dict(x=1, y=-2, z=1))) 89 | ``` 90 | 91 | ## Model Calibration 92 | 93 | We can now use the Vol Surface to calibrate the Heston stochastic volatility model. 94 | 95 | ```{code-cell} ipython3 96 | from quantflow.options.calibration import HestonJCalibration, OptionPricer 97 | from quantflow.utils.distributions import DoubleExponential 98 | from quantflow.sp.heston import HestonJ 99 | 100 | model = HestonJ.create(DoubleExponential, vol=0.8, sigma=1.5, kappa=0.5, rho=0.1, jump_intensity=50, jump_fraction=0.3) 101 | pricer = OptionPricer(model=model) 102 | cal = HestonJCalibration(pricer=pricer, vol_surface=vs6, moneyness_weight=-0) 103 | len(cal.options) 104 | ``` 105 | 106 | ```{code-cell} ipython3 107 | cal.model.model_dump() 108 | ``` 109 | 110 | ```{code-cell} ipython3 111 | cal.fit() 112 | ``` 113 | 114 | ```{code-cell} ipython3 115 | pricer.model 116 | ``` 117 | 118 | ```{code-cell} ipython3 119 | cal.plot(index=5, max_moneyness_ttm=1) 120 | ``` 121 | 122 | ## Serialization 123 | 124 | 125 | It is possible to save the vol surface into a json file so that it can be recreated later, for testing or for serialization/deserialization.
126 | 127 | ```{code-cell} ipython3 128 | with open("../tests/volsurface.json", "w") as fp: 129 | fp.write(vs.inputs().model_dump_json()) 130 | ``` 131 | 132 | ```{code-cell} ipython3 133 | from quantflow.options.surface import VolSurfaceInputs, surface_from_inputs 134 | import json 135 | 136 | with open("../tests/volsurface.json", "r") as fp: 137 | inputs = VolSurfaceInputs(**json.load(fp)) 138 | 139 | vs2 = surface_from_inputs(inputs) 140 | ``` 141 | -------------------------------------------------------------------------------- /notebooks/assets/heston.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/quantmind/quantflow/c439352354501e50333319912802fa41d44aac29/notebooks/assets/heston.gif -------------------------------------------------------------------------------- /notebooks/assets/linkedin-banner.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/quantmind/quantflow/c439352354501e50333319912802fa41d44aac29/notebooks/assets/linkedin-banner.png -------------------------------------------------------------------------------- /notebooks/assets/quantflow-light.svg: -------------------------------------------------------------------------------- /notebooks/assets/quantflow-logo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/quantmind/quantflow/c439352354501e50333319912802fa41d44aac29/notebooks/assets/quantflow-logo.png -------------------------------------------------------------------------------- /notebooks/assets/quantflow-repo.png: --------------------------------------------------------------------------------
https://raw.githubusercontent.com/quantmind/quantflow/c439352354501e50333319912802fa41d44aac29/notebooks/assets/quantflow-repo.png -------------------------------------------------------------------------------- /notebooks/conf.py: -------------------------------------------------------------------------------- 1 | ############################################################################### 2 | # Auto-generated by `jupyter-book config` 3 | # If you wish to continue using _config.yml, make edits to that file and 4 | # re-generate this one. 5 | ############################################################################### 6 | author = 'Quantmind Team' 7 | bibtex_bibfiles = ['reference/references.bib'] 8 | comments_config = {'hypothesis': False, 'utterances': False} 9 | copyright = '2024' 10 | exclude_patterns = ['**.ipynb_checkpoints', '.DS_Store', 'Thumbs.db', '_build'] 11 | extensions = ['sphinx_togglebutton', 'sphinx_copybutton', 'myst_nb', 'jupyter_book', 'sphinx_thebe', 'sphinx_comments', 'sphinx_external_toc', 'sphinx.ext.intersphinx', 'sphinx_design', 'sphinx_book_theme', 'sphinx.ext.autodoc', 'sphinx_autodoc_typehints', 'sphinx.ext.autosummary', 'sphinx.ext.linkcode', 'sphinx_autosummary_accessors', 'autodocsumm', 'sphinxcontrib.bibtex', 'sphinx_jupyterbook_latex', 'sphinx_multitoc_numbering'] 12 | external_toc_exclude_missing = False 13 | external_toc_path = '_toc.yml' 14 | html_baseurl = '' 15 | html_favicon = 'assets/quantflow-logo.png' 16 | html_js_files = ['https://cdnjs.cloudflare.com/ajax/libs/require.js/2.3.4/require.min.js'] 17 | html_logo = 'assets/quantflow-light.svg' 18 | html_sourcelink_suffix = '' 19 | html_theme = 'sphinx_book_theme' 20 | html_theme_options = {'search_bar_text': 'Search this book...', 'launch_buttons': {'notebook_interface': 'classic', 'binderhub_url': '', 'jupyterhub_url': '', 'thebe': False, 'colab_url': '', 'deepnote_url': ''}, 'path_to_docs': 'notebooks', 'repository_url': 'https://github.com/quantmind/quantflow', 
'repository_branch': 'main', 'extra_footer': '', 'home_page_in_toc': False, 'announcement': '', 'analytics': {'google_analytics_id': 'G-XBNNWQ560T', 'plausible_analytics_domain': '', 'plausible_analytics_url': 'https://plausible.io/js/script.js'}, 'use_repository_button': True, 'use_edit_page_button': True, 'use_issues_button': True} 21 | html_title = 'Quantflow library' 22 | latex_engine = 'pdflatex' 23 | linkcode_resolve = 'lambda domain, info: "test"\n' 24 | mathjax_options = {'async': 'async'} 25 | myst_enable_extensions = ['dollarmath', 'amsmath'] 26 | myst_url_schemes = ['mailto', 'http', 'https'] 27 | nb_execution_allow_errors = False 28 | nb_execution_cache_path = '' 29 | nb_execution_excludepatterns = [] 30 | nb_execution_in_temp = False 31 | nb_execution_mode = 'off' 32 | nb_execution_timeout = 30 33 | nb_output_stderr = 'show' 34 | numfig = True 35 | pygments_style = 'sphinx' 36 | suppress_warnings = ['myst.domains'] 37 | use_jupyterbook_latex = True 38 | use_multitoc_numbering = True 39 | -------------------------------------------------------------------------------- /notebooks/data/fed.md: -------------------------------------------------------------------------------- 1 | --- 2 | jupytext: 3 | text_representation: 4 | extension: .md 5 | format_name: myst 6 | format_version: 0.13 7 | jupytext_version: 1.16.6 8 | kernelspec: 9 | display_name: Python 3 (ipykernel) 10 | language: python 11 | name: python3 12 | --- 13 | 14 | # Federal Reserve Data 15 | 16 | ```{code-cell} ipython3 17 | from quantflow.data.fed import FederalReserve 18 | ``` 19 | 20 | ```{code-cell} ipython3 21 | async with FederalReserve() as fed: 22 | rates = await fed.ref_rates() 23 | ``` 24 | 25 | ```{code-cell} ipython3 26 | rates 27 | ``` 28 | 29 | ```{code-cell} ipython3 30 | async with FederalReserve() as fed: 31 | curves = await fed.yield_curves() 32 | ``` 33 | 34 | ```{code-cell} ipython3 35 | curves 36 | ``` 37 | 38 | ```{code-cell} ipython3 39 | 40 | ``` 41 | 
-------------------------------------------------------------------------------- /notebooks/data/fiscal_data.md: -------------------------------------------------------------------------------- 1 | --- 2 | jupytext: 3 | text_representation: 4 | extension: .md 5 | format_name: myst 6 | format_version: 0.13 7 | jupytext_version: 1.16.6 8 | kernelspec: 9 | display_name: Python 3 (ipykernel) 10 | language: python 11 | name: python3 12 | --- 13 | 14 | # Fiscal Data 15 | 16 | ```{code-cell} ipython3 17 | from quantflow.data.fiscal_data import FiscalData 18 | ``` 19 | 20 | ```{code-cell} ipython3 21 | fd = FiscalData() 22 | ``` 23 | 24 | ```{code-cell} ipython3 25 | data = await fd.securities() 26 | data 27 | ``` 28 | 29 | ```{code-cell} ipython3 30 | 31 | ``` 32 | -------------------------------------------------------------------------------- /notebooks/data/fmp.md: -------------------------------------------------------------------------------- 1 | --- 2 | jupytext: 3 | formats: ipynb,md:myst 4 | text_representation: 5 | extension: .md 6 | format_name: myst 7 | format_version: 0.13 8 | jupytext_version: 1.16.6 9 | kernelspec: 10 | display_name: Python 3 (ipykernel) 11 | language: python 12 | name: python3 13 | --- 14 | 15 | # Data 16 | 17 | The library provides a Python client for the [Financial Modelling Prep API](https://site.financialmodelingprep.com/developer/docs). To use the client one needs to provide the API key either directly to the client or via the `FMP_API_KEY` environment variable. The API offers 1 minute, 5 minutes, 15 minutes, 30 minutes, 1 hour and daily historical prices.
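One way to supply the key is via the environment, for instance at the top of a notebook; the value below is a placeholder, not a real key:

```python
import os

# Set the key only if it is not already present in the environment;
# replace the placeholder with a real API key.
os.environ.setdefault("FMP_API_KEY", "your-api-key")
```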
18 | 19 | ```{code-cell} ipython3 20 | from quantflow.data.fmp import FMP 21 | import pandas as pd 22 | cli = FMP() 23 | cli.url 24 | ``` 25 | 26 | ## Get path 27 | 28 | ```{code-cell} ipython3 29 | d = await cli.get_path("stock-list") 30 | pd.DataFrame(d) 31 | ``` 32 | 33 | ## Search 34 | 35 | ```{code-cell} ipython3 36 | d = await cli.search("electric") 37 | pd.DataFrame(d) 38 | ``` 39 | 40 | ```{code-cell} ipython3 41 | stock = "KNOS.L" 42 | ``` 43 | 44 | ## Company Profile 45 | 46 | ```{code-cell} ipython3 47 | d = await cli.profile(stock) 48 | d 49 | ``` 50 | 51 | ```{code-cell} ipython3 52 | c = await cli.peers(stock) 53 | c 54 | ``` 55 | 56 | ## Executive trading 57 | 58 | ```{code-cell} ipython3 59 | stock = "AAPL" 60 | ``` 61 | 62 | ```{code-cell} ipython3 63 | await cli.executives(stock) 64 | ``` 65 | 66 | ```{code-cell} ipython3 67 | await cli.insider_trading(stock) 68 | ``` 69 | 70 | ## News 71 | 72 | ```{code-cell} ipython3 73 | c = await cli.news(stock) 74 | c 75 | ``` 76 | 77 | ```{code-cell} ipython3 78 | p = await cli.market_risk_premium() 79 | pd.DataFrame(p) 80 | ``` 81 | -------------------------------------------------------------------------------- /notebooks/data/timeseries.md: -------------------------------------------------------------------------------- 1 | --- 2 | jupytext: 3 | text_representation: 4 | extension: .md 5 | format_name: myst 6 | format_version: 0.13 7 | jupytext_version: 1.16.6 8 | kernelspec: 9 | display_name: Python 3 (ipykernel) 10 | language: python 11 | name: python3 12 | --- 13 | 14 | ## Timeseries 15 | 16 | ```{code-cell} ipython3 17 | from quantflow.data.fmp import FMP 18 | from quantflow.utils.plot import candlestick_plot 19 | cli = FMP() 20 | ``` 21 | 22 | ```{code-cell} ipython3 23 | prices = await cli.prices("ethusd", frequency="") 24 | ``` 25 | 26 | ```{code-cell} ipython3 27 | candlestick_plot(prices).update_layout(height=500) 28 | ``` 29 | 30 | ```{code-cell} ipython3 31 | from quantflow.utils.df import 
DFutils 32 | 33 | df = DFutils(prices).with_rogers_satchel().with_parkinson() 34 | df 35 | ``` 36 | 37 | ```{code-cell} ipython3 38 | 39 | ``` 40 | -------------------------------------------------------------------------------- /notebooks/examples/exponential_sampling.md: -------------------------------------------------------------------------------- 1 | --- 2 | jupytext: 3 | text_representation: 4 | extension: .md 5 | format_name: myst 6 | format_version: 0.13 7 | jupytext_version: 1.16.6 8 | kernelspec: 9 | display_name: .venv 10 | language: python 11 | name: python3 12 | --- 13 | 14 | # Exponential Sampling 15 | 16 | Here we sample the Asymmetric Laplace distribution. We will set the mean to 0 and the variance to 1 so that the distribution is fully determined by the asymmetry parameter $\kappa$. 17 | 18 | ```{admonition} Interactive notebook not enabled in docs - how to run it interactively? 19 | The widget below is not enabled in the documentation. You can run the notebook to see the widget in action, see [contributing](../reference/contributing.md) for instructions on how to run the notebook.
20 | ``` 21 | 22 | ```{code-cell} ipython3 23 | from quantflow.utils.distributions import DoubleExponential 24 | from quantflow.utils import bins 25 | import numpy as np 26 | import ipywidgets as widgets 27 | import plotly.graph_objects as go 28 | 29 | def simulate(): 30 | pr = DoubleExponential.from_moments(kappa=np.exp(asym.value)) 31 | data = pr.sample(samples.value) 32 | pdf = bins.pdf(data, num_bins=50, symmetric=0) 33 | pdf["simulation"] = pdf["pdf"] 34 | pdf["analytical"] = pr.pdf(pdf.index) 35 | cha = pr.pdf_from_characteristic() 36 | return pdf, cha 37 | 38 | def on_change(change): 39 | df, cha = simulate() 40 | fig.data[0].x = df.index 41 | fig.data[0].y = df["simulation"] 42 | fig.data[1].x = df.index 43 | fig.data[1].y = df["analytical"] 44 | fig.data[2].x = cha.x 45 | fig.data[2].y = cha.y 46 | 47 | asym = widgets.FloatSlider(description="asymmetry (log of k)", min=-2, max=2) 48 | samples = widgets.IntSlider(description="paths", min=100, max=10000, step=100) 49 | asym.value = 0 50 | samples.value = 1000 51 | asym.observe(on_change) 52 | samples.observe(on_change) 53 | 54 | df, cha = simulate() 55 | simulation = go.Bar(x=df.index, y=df["simulation"], name="simulation") 56 | analytical = go.Scatter(x=df.index, y=df["analytical"], name="analytical") 57 | cha = go.Scatter(x=cha.x, y=cha.y, name="from characteristic", mode="markers") 58 | fig = go.FigureWidget(data=[simulation, cha, analytical]) 59 | 60 | widgets.VBox([asym, samples, fig]) 61 | ``` 62 | 63 | ```{code-cell} ipython3 64 | 65 | ``` 66 | -------------------------------------------------------------------------------- /notebooks/examples/gaussian_sampling.md: -------------------------------------------------------------------------------- 1 | --- 2 | jupytext: 3 | text_representation: 4 | extension: .md 5 | format_name: myst 6 | format_version: 0.13 7 | jupytext_version: 1.16.6 8 | kernelspec: 9 | display_name: Python 3 (ipykernel) 10 | language: python 11 | name: python3 12 | --- 13 | 14 | # 
Gaussian Sampling 15 | 16 | Here we sample the Gaussian OU process for different mean reversion speeds and numbers of paths. 17 | 18 | ```{admonition} Interactive notebook not enabled in docs - how to run it interactively? 19 | The widget below is not enabled in the documentation. You can run the notebook to see the widget in action, see [contributing](../reference/contributing.md) for instructions on how to run the notebook. 20 | ``` 21 | 22 | ```{code-cell} ipython3 23 | from quantflow.sp.ou import Vasicek 24 | from quantflow.utils import plot 25 | import ipywidgets as widgets 26 | import plotly.graph_objects as go 27 | 28 | def simulate(): 29 | pr = Vasicek(rate=0.5, kappa=kappa.value) 30 | paths = pr.sample(samples.value, 1, 1000) 31 | pdf = paths.pdf(num_bins=50) 32 | pdf["simulation"] = pdf["pdf"] 33 | pdf["analytical"] = pr.marginal(1).pdf(pdf.index) 34 | return pdf 35 | 36 | def on_intensity_change(change): 37 | df = simulate() 38 | fig.data[0].x = df.index 39 | fig.data[0].y = df["simulation"] 40 | fig.data[1].x = df.index 41 | fig.data[1].y = df["analytical"] 42 | 43 | kappa = widgets.FloatSlider(description="mean reversion", min=0.1, max=5) 44 | samples = widgets.IntSlider(description="paths", min=100, max=10000, step=100) 45 | kappa.value = 1 46 | samples.value = 1000 47 | kappa.observe(on_intensity_change) 48 | samples.observe(on_intensity_change) 49 | 50 | df = simulate() 51 | simulation = go.Bar(x=df.index, y=df["simulation"], name="simulation") 52 | analytical = go.Scatter(x=df.index, y=df["analytical"], name="analytical") 53 | fig = go.FigureWidget(data=[simulation, analytical]) 54 | 55 | widgets.VBox([kappa, samples, fig]) 56 | ``` 57 | 58 | ```{code-cell} ipython3 59 | 60 | ``` 61 | -------------------------------------------------------------------------------- /notebooks/examples/heston_vol_surface.md: -------------------------------------------------------------------------------- 1 | --- 2 | jupytext: 3 | text_representation: 4 | extension: .md
5 | format_name: myst 6 | format_version: 0.13 7 | jupytext_version: 1.16.6 8 | kernelspec: 9 | display_name: .venv 10 | language: python 11 | name: python3 12 | --- 13 | 14 | # HestonJ Volatility Surface 15 | 16 | Here we study the implied volatility surface of the Heston model with jumps. 17 | The Heston model is a stochastic volatility model that is widely used in the finance industry to price options. 18 | 19 | ```{code-cell} ipython3 20 | from quantflow.sp.heston import HestonJ 21 | from quantflow.utils.distributions import DoubleExponential 22 | from quantflow.options.pricer import OptionPricer 23 | 24 | pricer = OptionPricer(model=HestonJ.create( 25 | DoubleExponential, 26 | vol=0.5, 27 | kappa=2, 28 | rho=-0.2, 29 | sigma=0.8, 30 | jump_fraction=0.5, 31 | jump_asymmetry=0.2 32 | )) 33 | pricer 34 | ``` 35 | 36 | ```{code-cell} ipython3 37 | fig = None 38 | for ttm in (0.1, 0.5, 1): 39 | fig = pricer.maturity(ttm).plot(fig=fig, name=f"ttm={ttm}") 40 | fig 41 | ``` 42 | 43 | 44 | 45 | ```{code-cell} ipython3 46 | pricer.plot3d(max_moneyness_ttm=1.5, support=31).update_layout( 47 | height=800, 48 | title="Heston volatility surface", 49 | ) 50 | ``` 51 | 52 | ```{code-cell} ipython3 53 | 54 | ``` 55 | -------------------------------------------------------------------------------- /notebooks/examples/overview.md: -------------------------------------------------------------------------------- 1 | --- 2 | jupytext: 3 | text_representation: 4 | extension: .md 5 | format_name: myst 6 | format_version: 0.13 7 | jupytext_version: 1.16.6 8 | kernelspec: 9 | display_name: Python 3 (ipykernel) 10 | language: python 11 | name: python3 12 | --- 13 | 14 | # Examples 15 | 16 | This section is a collection of random examples. 17 | 18 | ```{tableofcontents} 19 | ``` 20 | -------------------------------------------------------------------------------- /notebooks/examples/poisson_sampling.md: -------------------------------------------------------------------------------- 1 |
--- 2 | jupytext: 3 | text_representation: 4 | extension: .md 5 | format_name: myst 6 | format_version: 0.13 7 | jupytext_version: 1.16.6 8 | kernelspec: 9 | display_name: Python 3 (ipykernel) 10 | language: python 11 | name: python3 12 | --- 13 | 14 | # Poisson Sampling 15 | 16 | Evaluate the MC simulation for the Poisson process against the analytical PDF. 17 | 18 | ```{admonition} Interactive notebook not enabled in docs - how to run it interactively? 19 | The widget below is not enabled in the documentation. You can run the notebook to see the widget in action, see [contributing](../reference/contributing.md) for instructions on how to run the notebook. 20 | ``` 21 | 22 | ```{code-cell} 23 | from quantflow.sp.poisson import PoissonProcess 24 | from quantflow.utils import plot 25 | import ipywidgets as widgets 26 | import plotly.graph_objects as go 27 | 28 | def simulate(): 29 | pr = PoissonProcess(intensity=intensity.value) 30 | paths = pr.sample(samples.value, 1, 1000) 31 | pdf = paths.pdf(delta=1) 32 | pdf["simulation"] = pdf["pdf"] 33 | pdf["analytical"] = pr.marginal(1).pdf(pdf.index) 34 | return pdf 35 | 36 | def on_intensity_change(change): 37 | df = simulate() 38 | fig.data[0].x = df.index 39 | fig.data[0].y = df["simulation"] 40 | fig.data[1].x = df.index 41 | fig.data[1].y = df["analytical"] 42 | 43 | intensity = widgets.IntSlider(description="intensity") 44 | samples = widgets.IntSlider(description="paths", min=100, max=10000, step=100) 45 | intensity.value = 50 46 | samples.value = 1000 47 | intensity.observe(on_intensity_change) 48 | samples.observe(on_intensity_change) 49 | 50 | df = simulate() 51 | simulation = go.Bar(x=df.index, y=df["simulation"], name="simulation") 52 | analytical = go.Scatter(x=df.index, y=df["analytical"], name="analytical") 53 | fig = go.FigureWidget(data=[simulation, analytical]) 54 | 55 | widgets.VBox([intensity, samples, fig]) 56 | ``` 57 | 58 | ```{code-cell} 59 | 60 | ``` 61 |
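The same check can be run without widgets; the sketch below draws terminal values directly with numpy and compares them against the scipy PMF, independently of the library's `PoissonProcess`:

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(42)
intensity, n_paths = 50, 10_000

# terminal values N_1 of a Poisson process with the given intensity
samples = rng.poisson(intensity, size=n_paths)

ks = np.arange(samples.min(), samples.max() + 1)
empirical = np.array([(samples == k).mean() for k in ks])
analytical = poisson.pmf(ks, intensity)

# the Monte Carlo frequencies should track the analytical PMF closely
max_error = np.abs(empirical - analytical).max()
```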
-------------------------------------------------------------------------------- /notebooks/index.md: -------------------------------------------------------------------------------- 1 | --- 2 | jupytext: 3 | formats: ipynb,md:myst 4 | text_representation: 5 | extension: .md 6 | format_name: myst 7 | format_version: 0.13 8 | jupytext_version: 1.14.7 9 | kernelspec: 10 | display_name: Python 3 (ipykernel) 11 | language: python 12 | name: python3 13 | --- 14 | 15 | # Quantflow 16 | 17 | A library for quantitative analysis and pricing. 18 | 19 | ```{grid} 20 | ```{grid-item} 21 | ```{image} _static/heston.gif 22 | :alt: Heston volatility surface 23 | :width: 400px 24 | ```{grid-item} 25 | ``` 26 | 27 | 28 | This documentation is organized into a few major sections. 29 | * [Theory](./theory/overview.md) covers some important concepts used throughout the library, for the curious reader 30 | * [Stochastic models](./models/overview.md) covers all the supported stochastic models and their use 31 | * [Applications](./applications/overview.md) showcases real-world use cases 32 | * [Examples](./examples/overview.md) a collection of random examples 33 | * [API Reference](./api/index.rst) the Python API reference 34 | 35 | ## Installation 36 | 37 | To install the library use 38 | ``` 39 | pip install quantflow 40 | ``` 41 | 42 | 43 | ## Optional dependencies 44 | 45 | Quantflow comes with two optional dependencies: 46 | 47 | * `data` for data retrieval, to install it use 48 | ``` 49 | pip install quantflow[data] 50 | ``` 51 | * `cli` for command line interface, to install it use 52 | ``` 53 | pip install quantflow[data,cli] 54 | ``` 55 | -------------------------------------------------------------------------------- /notebooks/models/bns.md: -------------------------------------------------------------------------------- 1 | --- 2 | jupytext: 3 | text_representation: 4 | extension: .md 5 | format_name: myst 6 | format_version: 0.13 7 | jupytext_version: 1.16.6 8 | kernelspec: 9 | display_name:
Python 3 (ipykernel) 10 | language: python 11 | name: python3 12 | --- 13 | 14 | # BNS Model 15 | 16 | The Barndorff-Nielsen–Shephard (BNS) model is a stochastic volatility model where the variance process $\nu_t$, or better, the activity rate process, follows a [non-Gaussian OU process](./ou.md). The leverage effect can be accommodated by correlating the Brownian motion $w_t$ and the background driving Lévy process (BDLP) $z_t$ as the following equations illustrate: 17 | 18 | \begin{align} 19 | y_t &= w_{\tau_t} + \rho z_{\kappa t} \\ 20 | d \nu_t &= -\kappa \nu_t dt + d z_{\kappa t} \\ 21 | \tau_t &= \int_0^t \nu_s ds 22 | \end{align} 23 | 24 | This means that the characteristic function of $y_t$ can be represented as 25 | 26 | \begin{align} 27 | \Phi_{y_t, u} & = {\mathbb E}\left[\exp{\left(i u w_{\tau_t} + i u \rho z_{\kappa t}\right)}\right] \\ 28 | &= {\mathbb E}\left[\exp{\left(-\tau_t \phi_{w, u} + i u \rho z_{\kappa t}\right)}\right] 29 | \end{align} 30 | 31 | $\phi_{w, u}$ is the characteristic exponent of $w_1$. The second equality is a consequence of $w$ and $\tau$ being independent, as discussed in [the time-changed Lévy](../theory/levy.md) process section.
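The independence argument behind the second equality can be illustrated numerically. The sketch below (plain Python with illustrative names, not part of the quantflow API) compares a Monte Carlo estimate of ${\mathbb E}[e^{iuw_{\tau}}]$ with ${\mathbb E}[e^{-\tau \phi_{w,u}}]$ for an independent random clock $\tau$:

```python
import math
import random

def lhs_mc(u: float, sigma: float, taus: list, seed: int = 0) -> complex:
    # E[exp(i*u*w_tau)]: conditional on tau, w_tau ~ N(0, sigma^2 * tau)
    rng = random.Random(seed)
    total = 0j
    for t in taus:
        w = rng.gauss(0.0, sigma * math.sqrt(t))
        total += complex(math.cos(u * w), math.sin(u * w))
    return total / len(taus)

def rhs(u: float, sigma: float, taus: list) -> float:
    # E[exp(-tau * phi_w(u))] with phi_w(u) = sigma^2 * u^2 / 2
    phi = sigma**2 * u**2 / 2
    return sum(math.exp(-t * phi) for t in taus) / len(taus)
```

Feeding both functions the same draws of an independent clock, the two estimates agree up to Monte Carlo noise.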
32 | 33 | ```{code-cell} 34 | from quantflow.sp.bns import BNS 35 | 36 | pr = BNS.create(vol=0.5, decay=10, kappa=10, rho=-1) 37 | pr 38 | ``` 39 | 40 | ```{code-cell} 41 | from quantflow.utils import plot 42 | m = pr.marginal(2) 43 | plot.plot_characteristic(m, max_frequency=10) 44 | ``` 45 | 46 | ## Marginal Distribution 47 | 48 | ```{code-cell} 49 | m.mean(), m.std() 50 | ``` 51 | 52 | ```{code-cell} 53 | plot.plot_marginal_pdf(m, 128, normal=True, analytical=False) 54 | ``` 55 | 56 | ## Appendix 57 | 58 | 59 | Carr et al. {cite:p}`cgmy` show that the joint characteristic function of $\tau_t$ and $z_{\kappa t}$ has a closed-form expression; the derivation is sketched below 60 | 61 | \begin{align} 62 | \zeta_{a, b} &= \ln {\mathbb E} \left[\exp{\left(i a \tau_t + i b z_{\kappa t}\right)}\right] \\ 63 | \zeta_{a, b} &= i c \nu_0 - \int_b^{b+c} \frac{\phi_{z_1, s}}{a+\kappa b - \kappa s} ds = i c \nu_0 + \lambda \left(I_{b+c} - I_{b}\right) \\ 64 | c &= a \frac{1 - e^{-\kappa t}}{\kappa} 65 | \end{align} 66 | 67 | 68 | Noting that (see [non-Gaussian OU process](./ou.md)) 69 | 70 | \begin{align} 71 | i a \tau_t + i b z_{\kappa t} &= i a \epsilon_t \nu_0 + \int_0^t \left(i a \epsilon_{t-s} + i b\right) d z_{\kappa s} \\ 72 | &= i a \epsilon_t \nu_0 + \int_0^{\kappa t} \left(i a \epsilon_{t-s/\kappa} + i b\right) d z_s \\ 73 | \epsilon_t &= \frac{1 - e^{-\kappa t}}{\kappa} 74 | \end{align} 75 | 76 | we obtain 77 | \begin{align} 78 | \zeta_{a, b} &= i a \epsilon_t \nu_0 + \ln {\mathbb E} \left[\exp{\left(\int_0^{\kappa t} \left(i a \epsilon_{t-s/\kappa} + i b\right) d z_s\right)}\right] \\ 79 | &= i a \epsilon_t \nu_0 - \int_0^{\kappa t} \phi_z\left(a \epsilon_{t-s/\kappa} + b\right) d s \\ 80 | &= i a \epsilon_t \nu_0 - \int_L^U \frac{\phi_{z,s}}{a + \kappa b - \kappa s} d s 81 | \end{align} 82 | 83 | Here we use [sympy](https://www.sympy.org/en/index.html) to derive the integral in the characteristic function.
84 | 85 | ```{code-cell} 86 | import sympy as sym 87 | ``` 88 | 89 | ```{code-cell} 90 | k = sym.Symbol("k") 91 | iβ = sym.Symbol("iβ") 92 | γ = sym.Symbol("γ") 93 | s = sym.Symbol("s") 94 | ϕ = s/(s+iβ)/(γ-k*s) 95 | ϕ 96 | ``` 97 | 98 | ```{code-cell} 99 | r = sym.integrate(ϕ, s) 100 | sym.simplify(r) 101 | ``` 102 | 103 | ```{code-cell} 104 | import numpy as np 105 | f = lambda x: x*np.log(x) 106 | f(0.001) 107 | ``` 108 | 109 | ```{code-cell} 110 | 111 | ``` 112 | -------------------------------------------------------------------------------- /notebooks/models/cir.md: -------------------------------------------------------------------------------- 1 | --- 2 | jupytext: 3 | formats: ipynb,md:myst 4 | text_representation: 5 | extension: .md 6 | format_name: myst 7 | format_version: 0.13 8 | jupytext_version: 1.16.6 9 | kernelspec: 10 | display_name: Python 3 (ipykernel) 11 | language: python 12 | name: python3 13 | --- 14 | 15 | # CIR process 16 | 17 | The Cox–Ingersoll–Ross (CIR) model is a standard mean-reverting square-root process used to model interest rates and stochastic variance. It takes the form 18 | 19 | \begin{equation} 20 | dx_t = \kappa\left(\theta - x_t\right) dt + \sigma \sqrt{x_t} d w_t 21 | \end{equation} 22 | 23 | $\kappa$ is the mean reversion speed, $\theta$ the long-term value of $x_t$, $\sigma$ controls the standard deviation via the diffusion term $\sigma\sqrt{x_t}$, and $w_t$ is a Brownian motion.
24 | 25 | Importantly, the process remains positive if the Feller condition is satisfied: 26 | 27 | \begin{equation} 28 | 2 \kappa \theta > \sigma^2 29 | \end{equation} 30 | 31 | In the code, the initial value of the process, ${\bf x}_0$, is given by the `rate` field; for example, a CIR process can be created via 32 | 33 | ```{code-cell} 34 | from quantflow.sp.cir import CIR 35 | pr = CIR(rate=1.0, kappa=2.0, sigma=1.2) 36 | pr 37 | ``` 38 | 39 | ```{code-cell} 40 | pr.is_positive 41 | ``` 42 | 43 | ## Marginal and moments 44 | 45 | The model has a closed-form solution for the mean, the variance, and the [marginal pdf](https://en.wikipedia.org/wiki/Cox%E2%80%93Ingersoll%E2%80%93Ross_model). 46 | 47 | \begin{align} 48 | {\mathbb E}[x_t] &= x_0 e^{-\kappa t} + \theta\left(1 - e^{-\kappa t}\right) \\ 49 | {\mathbb Var}[x_t] &= x_0 \frac{\sigma^2}{\kappa}\left(e^{-\kappa t} - e^{-2 \kappa t}\right) + \frac{\theta \sigma^2}{2\kappa}\left(1 - e^{-\kappa t}\right)^2 \\ 50 | \end{align} 51 | 52 | ```{code-cell} 53 | m = pr.marginal(1) 54 | m.mean(), m.variance() 55 | ``` 56 | 57 | ```{code-cell} 58 | m.mean_from_characteristic(), m.variance_from_characteristic() 59 | ``` 60 | 61 | The code below shows the PDF computed via FRFT alongside the analytical formula above 62 | 63 | ```{code-cell} 64 | from quantflow.utils import plot 65 | import numpy as np 66 | plot.plot_marginal_pdf(m, 128, max_frequency=20) 67 | ``` 68 | 69 | ## Characteristic Function 70 | 71 | For this process, it is possible to obtain analytical formulas for the coefficients $a$ and $b$ of the characteristic function: 72 | 73 | \begin{align} 74 | a &=-\frac{2\kappa\theta}{\sigma^2} \log{\left(\frac{c + d e^{-\gamma \tau}}{c + d}\right)} + \frac{\kappa \theta \tau}{c}\\ 75 | b &= \frac{1-e^{-\gamma \tau}}{c + d e^{-\gamma \tau}} 76 | \end{align} 77 | 78 | with 79 | \begin{align} 80 | \gamma &= \sqrt{\kappa^2 - 2 u \sigma^2} \\ 81 | c &= \frac{\gamma + \kappa}{2 u} \\ 82 | d &= \frac{\gamma - \kappa}{2 u} 83 | \end{align} 84 | 85 | ```{code-cell} 86 | from
quantflow.utils import plot 87 | m = pr.marginal(0.5) 88 | plot.plot_characteristic(m) 89 | ``` 90 | 91 | ## Sampling 92 | 93 | The code offers three sampling algorithms, all designed to handle the behavior of the process near zero, even when the Feller condition above is not satisfied. 94 | 95 | The first sampling algorithm is the explicit Euler *full truncation* algorithm, where the process is allowed to go below zero, at which point the dynamics become deterministic with an upward drift of $\kappa \theta$; see {cite:p}`heston-calibration` and {cite:p}`heston-simulation` for a detailed discussion. 96 | 97 | ```{code-cell} 98 | from quantflow.sp.cir import CIR 99 | pr = CIR(rate=1.0, kappa=1.0, sigma=2.0, sample_algo="euler") 100 | pr 101 | ``` 102 | 103 | ```{code-cell} 104 | pr.is_positive 105 | ``` 106 | 107 | ```{code-cell} 108 | pr.sample(20, time_horizon=1, time_steps=1000).plot().update_traces(line_width=0.5) 109 | ``` 110 | 111 | The second sampling algorithm is the implicit Milstein scheme, a refinement of the Euler scheme obtained by adding an extra term derived from Itô's lemma. 112 | 113 | The third algorithm is a fully implicit one that guarantees positiveness of the process if the Feller condition is met. 114 | 115 | ```{code-cell} 116 | pr = CIR(rate=1.0, kappa=1.0, sigma=0.8) 117 | pr 118 | ``` 119 | 120 | ```{code-cell} 121 | pr.sample(20, time_horizon=1, time_steps=1000).plot().update_traces(line_width=0.5) 122 | ``` 123 | 124 | Sampling with a mean reversion speed 20 times larger 125 | 126 | ```{code-cell} 127 | pr.kappa = 20; pr 128 | ``` 129 | 130 | ```{code-cell} 131 | pr.sample(20, time_horizon=1, time_steps=1000).plot().update_traces(line_width=0.5) 132 | ``` 133 | 134 | ## MC simulations 135 | 136 | In this section we compare the performance of the three sampling algorithms in estimating the mean and standard deviation.
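As a reference for the full-truncation Euler scheme described above, a minimal single-path sketch in plain Python (illustrative only; quantflow's implementation is vectorized over many paths):

```python
import math
import random

def cir_euler_full_truncation(x0, kappa, theta, sigma, dt, n_steps, seed=7):
    # Full-truncation Euler: drift and diffusion use max(x, 0), so the path
    # may dip below zero, where it drifts back up at rate kappa * theta.
    rng = random.Random(seed)
    x, path = x0, [x0]
    sqrt_dt = math.sqrt(dt)
    for _ in range(n_steps):
        x_plus = max(x, 0.0)
        x += kappa * (theta - x_plus) * dt + sigma * math.sqrt(x_plus) * sqrt_dt * rng.gauss(0.0, 1.0)
        path.append(x)
    return path
```

Averaged over many seeds, the terminal values concentrate around the analytical mean given in the moments section above.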
137 | 138 | ```{code-cell} 139 | from quantflow.sp.cir import CIR 140 | 141 | params = dict(rate=0.8, kappa=1.5, sigma=1.2) 142 | pr = CIR(**params) 143 | 144 | prs = [ 145 | CIR(sample_algo="euler", **params), 146 | CIR(sample_algo="milstein", **params), 147 | pr 148 | ] 149 | ``` 150 | 151 | ```{code-cell} 152 | import pandas as pd 153 | from quantflow.utils import plot 154 | from quantflow.utils.paths import Paths 155 | 156 | samples = 1000 157 | time_steps = 100 158 | 159 | draws = Paths.normal_draws(samples, time_horizon=1, time_steps=time_steps) 160 | mean = dict(mean=pr.marginal(draws.time).mean()) 161 | mean.update({pr.sample_algo.name: pr.sample_from_draws(draws).mean() for pr in prs}) 162 | df = pd.DataFrame(mean, index=draws.time) 163 | 164 | plot.plot_lines(df) 165 | ``` 166 | 167 | ```{code-cell} 168 | std = dict(std=pr.marginal(draws.time).std()) 169 | std.update({pr.sample_algo.name: pr.sample_from_draws(draws).std() for pr in prs}) 170 | df = pd.DataFrame(std, index=draws.time) 171 | 172 | plot.plot_lines(df) 173 | ``` 174 | 175 | ## Integrated log-Laplace Transform 176 | 177 | The log-Laplace transform of the integrated CIR process is defined as 178 | 179 | \begin{align} 180 | \iota_{t,u} &= \log {\mathbb E}\left[e^{- u \int_0^t x_s ds}\right]\\ 181 | &= a_{t,u} + x_0 b_{t,u}\\ 182 | a_{t,u} &= \frac{2\kappa\theta}{\sigma^2} \log{\frac{2\gamma_u e^{\left(\kappa+\gamma_u\right)t/2}}{d_{t,u}}}\\ 183 | b_{t,u} &=-\frac{2u\left(e^{\gamma_u t}-1\right)}{d_{t,u}}\\ 184 | d_{t,u} &= 2\gamma_u + \left(\gamma_u+\kappa\right)\left(e^{\gamma_u t}-1\right)\\ 185 | \gamma_u &= \sqrt{\kappa^2+2u\sigma^2}\\ 186 | \end{align} 187 | -------------------------------------------------------------------------------- /notebooks/models/gousv.md: -------------------------------------------------------------------------------- 1 | --- 2 | jupytext: 3 | text_representation: 4 | extension: .md 5 | format_name: myst 6 | format_version: 0.13 7 | jupytext_version: 1.16.6 8 | 
kernelspec: 9 | display_name: Python 3 (ipykernel) 10 | language: python 11 | name: python3 12 | --- 13 | 14 | # Gaussian OU Vol Model 15 | 16 | In this model, the Brownian motion $x_t$ is time-changed with the integrated square of a Gaussian Ornstein–Uhlenbeck process $\eta_t$: 17 | \begin{align} 18 | d x_t &= d w_t \\ 19 | d \eta_t &= \kappa\left(\theta - \eta_t\right) dt + \sigma d b_t \\ 20 | \tau_t &= \int_0^t \eta_s^2 ds \\ 21 | {\mathbb E}\left[d w_t d b_t\right] &= \rho dt 22 | \end{align} 23 | 24 | This means that the characteristic function of $y_t=x_{\tau_t}$ can be represented as 25 | 26 | \begin{align} 27 | \Phi_{y_t, u} & = {\mathbb E}\left[e^{i u y_t}\right] = {\mathbb L}_{\tau_t}^u\left(\frac{u^2}{2}\right) \\ 28 | &= e^{-a_{t,u} - b_{t,u} \nu_0} 29 | \end{align} 30 | 31 | ```{code-cell} 32 | 33 | ``` 34 | 35 | ## Characteristic Function 36 | 37 | \begin{align} 38 | a_t &= \left(\theta - \frac{\sigma^2}{2\kappa^2}\right)\left(b_t -t\right) - \frac{\sigma^2}{4\kappa}b_t^2 \\ 39 | b_t &= \frac{1 - e^{-\kappa t}}{\kappa} \\ 40 | \end{align} 41 | 42 | ```{code-cell} 43 | 44 | ``` 45 | 46 | ```{code-cell} 47 | 48 | ``` 49 | -------------------------------------------------------------------------------- /notebooks/models/heston.md: -------------------------------------------------------------------------------- 1 | --- 2 | jupytext: 3 | formats: ipynb,md:myst 4 | text_representation: 5 | extension: .md 6 | format_name: myst 7 | format_version: 0.13 8 | jupytext_version: 1.16.6 9 | kernelspec: 10 | display_name: .venv 11 | language: python 12 | name: python3 13 | --- 14 | 15 | # Heston Model and Option Pricing 16 | 17 | A very important example of a time-changed Lévy process useful for option pricing is the Heston model. In this model, the Lévy process is a standard Brownian motion, while the activity rate follows a [CIR process](./cir.md).
The leverage effect can be accommodated by correlating the two Brownian motions as the following equations illustrate: 18 | 19 | \begin{align} 20 | y_t &= x_{\tau_t} \\ 21 | \tau_t &= \int_0^t \nu_s ds \\ 22 | d x_t &= d w_t \\ 23 | d \nu_t &= \kappa\left(\theta - \nu_t\right) dt + \sigma\sqrt{\nu_t} d z_t \\ 24 | {\mathbb E}\left[d w_t d z_t\right] &= \rho dt 25 | \end{align} 26 | 27 | This means that the characteristic function of $y_t=x_{\tau_t}$ can be represented as 28 | 29 | \begin{align} 30 | \Phi_{y_t, u} & = {\mathbb E}\left[e^{i u y_t}\right] = {\mathbb L}_{\tau_t}^u\left(\frac{u^2}{2}\right) \\ 31 | &= e^{-a_{t,u} - b_{t,u} \nu_0} 32 | \end{align} 33 | 34 | ```{code-cell} ipython3 35 | from quantflow.sp.heston import Heston 36 | pr = Heston.create(vol=0.6, kappa=2, sigma=1.5, rho=-0.1) 37 | pr 38 | ``` 39 | 40 | ```{code-cell} ipython3 41 | # check that the variance CIR process is positive 42 | pr.variance_process.is_positive, pr.variance_process.marginal(1).std() 43 | ``` 44 | 45 | ## Characteristic Function 46 | 47 | ```{code-cell} ipython3 48 | from quantflow.utils import plot 49 | m = pr.marginal(0.1) 50 | plot.plot_characteristic(m) 51 | ``` 52 | 53 | The imaginary part of the characteristic function is driven by the correlation coefficient: with $\rho=0$ the distribution is symmetric and the characteristic function is real. 54 | 55 | +++ 56 | 57 | ## Marginal Distribution 58 | 59 | Here we compare the marginal distribution at the future time $t=0.1$ with a normal distribution with the same standard deviation.
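For readers who want the closed form spelled out, the textbook (zero-drift) Heston characteristic function can be sketched as below; this is a standalone illustration and not quantflow's internal representation (here `vol=0.6` corresponds to `v0 = theta = 0.36`):

```python
import cmath

def heston_cf(u: float, t: float, v0: float, kappa: float, theta: float,
              sigma: float, rho: float) -> complex:
    # E[exp(i*u*y_t)] for the zero-drift Heston log-return (Gatheral's stable form)
    iu = 1j * u
    beta = kappa - rho * sigma * iu
    d = cmath.sqrt(beta**2 + sigma**2 * (iu + u**2))
    g = (beta - d) / (beta + d)
    e = cmath.exp(-d * t)
    big_d = (beta - d) / sigma**2 * (1 - e) / (1 - g * e)
    big_c = kappa * theta / sigma**2 * ((beta - d) * t - 2 * cmath.log((1 - g * e) / (1 - g)))
    return cmath.exp(big_c + big_d * v0)
```

As a sanity check, the function equals one at $u=0$ and its modulus never exceeds one, as required of any characteristic function.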
60 | 61 | ```{code-cell} ipython3 62 | plot.plot_marginal_pdf(m, 128, normal=True, analytical=False) 63 | ``` 64 | 65 | Using a log scale on the y-axis highlights the probability in the tails much better 66 | 67 | ```{code-cell} ipython3 68 | plot.plot_marginal_pdf(m, 128, normal=True, analytical=False, log_y=True) 69 | ``` 70 | 71 | ## Option pricing 72 | 73 | ```{code-cell} ipython3 74 | from quantflow.options.pricer import OptionPricer 75 | from quantflow.sp.heston import Heston 76 | pricer = OptionPricer(Heston.create(vol=0.6, kappa=2, sigma=0.8, rho=-0.2)) 77 | pricer 78 | ``` 79 | 80 | ```{code-cell} ipython3 81 | import plotly.express as px 82 | import plotly.graph_objects as go 83 | from quantflow.options.bs import black_call 84 | 85 | r = pricer.maturity(0.1) 86 | b = r.black() 87 | fig = px.line(x=r.moneyness_ttm, y=r.time_value, markers=True, title=r.name) 88 | fig.add_trace(go.Scatter(x=r.moneyness_ttm, y=b.time_value, name=b.name, line=dict())) 89 | fig.show() 90 | ``` 91 | 92 | ```{code-cell} ipython3 93 | fig = None 94 | for ttm in (0.05, 0.1, 0.2, 0.4, 0.6, 1): 95 | fig = pricer.maturity(ttm).plot(fig=fig, name=f"t={ttm}") 96 | fig.update_layout(title="Implied black vols", height=500) 97 | ``` 98 | 99 | ## Simulation 100 | 101 | The simulation of the Heston model is heavily dependent on the simulation of the activity rate, mainly on how its behavior near zero is handled.
102 | 103 | The code implements algorithms from {cite:p}`heston-simulation` 104 | 105 | ```{code-cell} ipython3 106 | from quantflow.sp.heston import Heston 107 | pr = Heston.create(vol=0.6, kappa=2, sigma=0.8, rho=-0.4) 108 | pr 109 | ``` 110 | 111 | ```{code-cell} ipython3 112 | pr.sample(20, time_horizon=1, time_steps=1000).plot().update_traces(line_width=0.5) 113 | ``` 114 | 115 | ```{code-cell} ipython3 116 | import pandas as pd 117 | from quantflow.utils import plot 118 | 119 | paths = pr.sample(1000, time_horizon=1, time_steps=1000) 120 | mean = dict(mean=pr.marginal(paths.time).mean(), simulated=paths.mean()) 121 | df = pd.DataFrame(mean, index=paths.time) 122 | plot.plot_lines(df) 123 | ``` 124 | 125 | ```{code-cell} ipython3 126 | std = dict(std=pr.marginal(paths.time).std(), simulated=paths.std()) 127 | df = pd.DataFrame(std, index=paths.time) 128 | plot.plot_lines(df) 129 | ``` 130 | 131 | ```{code-cell} ipython3 132 | 133 | ``` 134 | -------------------------------------------------------------------------------- /notebooks/models/heston_jumps.md: -------------------------------------------------------------------------------- 1 | --- 2 | jupytext: 3 | text_representation: 4 | extension: .md 5 | format_name: myst 6 | format_version: 0.13 7 | jupytext_version: 1.16.6 8 | kernelspec: 9 | display_name: .venv 10 | language: python 11 | name: python3 12 | --- 13 | 14 | # Heston Model with Jumps 15 | 16 | The model complements the standard [Heston](./heston.md) stochastic volatility model with the addition of a double-exponential compound Poisson process. 17 | The compound Poisson process adds a jump component to the Heston diffusion SDEs, which controls the volatility smile and skew for shorter maturities.
18 | 19 | \begin{align} 20 | y_t &= x_{\tau_t} + j_t\\ 21 | \tau_t &= \int_0^t \nu_s ds \\ 22 | d x_t &= d w_t \\ 23 | d \nu_t &= \kappa\left(\theta - \nu_t\right) dt + \sigma\sqrt{\nu_t} d z_t \\ 24 | {\mathbb E}\left[d w_t d z_t\right] &= \rho dt 25 | \end{align} 26 | 27 | where $j_t$ is a double-exponential compound Poisson process which adds three additional parameters to the model 28 | 29 | * the jump intensity, which measures the expected number of jumps in a year 30 | * the jump percentage (fraction) contribution to the total variance 31 | * the jump asymmetry, defined as a parameter greater than 0; a value of 1 means jumps are symmetric 32 | 33 | The jump process is independent of the Brownian motions. See [HestonJ](../api/sp/heston.rst#quantflow.sp.heston.HestonJ) for the Python 34 | API documentation. 35 | 36 | ```{code-cell} ipython3 37 | from quantflow.sp.heston import HestonJ 38 | from quantflow.utils.distributions import DoubleExponential 39 | pr = HestonJ.create( 40 | DoubleExponential, 41 | vol=0.6, 42 | kappa=2, 43 | sigma=0.8, 44 | rho=-0.0, 45 | jump_intensity=50, 46 | jump_fraction=0.2, 47 | jump_asymmetry=0.0 48 | ) 49 | pr 50 | ``` 51 | 52 | ```{code-cell} ipython3 53 | from quantflow.utils import plot 54 | plot.plot_marginal_pdf(pr.marginal(0.5), 128, normal=True, analytical=False) 55 | ``` 56 | 57 | ```{code-cell} ipython3 58 | from quantflow.options.pricer import OptionPricer 59 | pricer = OptionPricer(pr) 60 | pricer 61 | ``` 62 | 63 | ```{code-cell} ipython3 64 | 65 | fig = None 66 | for ttm in (0.05, 0.1, 0.2, 0.4, 0.6, 1): 67 | 68 | fig = pricer.maturity(ttm).plot(fig=fig, name=f"t={ttm}") 69 | fig.update_layout(title="Implied black vols", height=500) 70 | ``` 71 | 72 | ```{code-cell} ipython3 73 | 74 | ``` 75 | 76 | ```{code-cell} ipython3 77 | 78 | ``` 79 | -------------------------------------------------------------------------------- /notebooks/models/jump_diffusion.md:
-------------------------------------------------------------------------------- 1 | --- 2 | jupytext: 3 | text_representation: 4 | extension: .md 5 | format_name: myst 6 | format_version: 0.13 7 | jupytext_version: 1.16.6 8 | kernelspec: 9 | display_name: .venv 10 | language: python 11 | name: python3 12 | --- 13 | 14 | # Jump Diffusion Models 15 | 16 | The library makes it possible to create a vast array of jump-diffusion models. The most famous one is the Merton jump-diffusion model. 17 | 18 | ## Merton Model 19 | 20 | ```{code-cell} ipython3 21 | from quantflow.sp.jump_diffusion import JumpDiffusion 22 | from quantflow.utils.distributions import Normal 23 | 24 | merton = JumpDiffusion.create(Normal, jump_fraction=0.8, jump_intensity=50) 25 | ``` 26 | 27 | ### Marginal Distribution 28 | 29 | ```{code-cell} ipython3 30 | m = merton.marginal(0.02) 31 | m.std(), m.std_from_characteristic() 32 | ``` 33 | 34 | ```{code-cell} ipython3 35 | from quantflow.utils.distributions import DoubleExponential  # compare with the double-exponential variant discussed below 36 | jd = JumpDiffusion.create(DoubleExponential, jump_fraction=0.8, jump_intensity=50); m2 = jd.marginal(0.02); m2.std(), m2.std_from_characteristic() 37 | ``` 38 | 39 | ```{code-cell} ipython3 40 | from quantflow.utils import plot 41 | 42 | plot.plot_marginal_pdf(m, 128, normal=True, analytical=False, log_y=True) 43 | ``` 44 | 45 | ### Characteristic Function 46 | 47 | ```{code-cell} ipython3 48 | plot.plot_characteristic(m) 49 | ``` 50 | 51 | ### Option Pricing 52 | 53 | We can price options using the `OptionPricer` tooling.
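Transform pricing only needs the characteristic exponent of the log price. For the Merton model it has a simple closed form, sketched here with the sign convention $\Phi = e^{-t\phi}$ used elsewhere in these docs (illustrative parameter names, independent of quantflow's API):

```python
import cmath

def merton_char_exponent(u: float, sigma: float, lam: float,
                         mu_j: float, sigma_j: float) -> complex:
    # phi(u) such that E[exp(i*u*x_t)] = exp(-t * phi(u)):
    # Brownian part plus a compound Poisson of Normal(mu_j, sigma_j^2) jumps
    brownian = sigma**2 * u**2 / 2
    jumps = lam * (1 - cmath.exp(1j * u * mu_j - sigma_j**2 * u**2 / 2))
    return brownian + jumps
```

The exponent vanishes at $u=0$ and has positive real part elsewhere, so $|\Phi|\le 1$ as required.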
54 | 55 | ```{code-cell} ipython3 56 | from quantflow.options.pricer import OptionPricer 57 | pricer = OptionPricer(merton) 58 | pricer 59 | ``` 60 | 61 | ```{code-cell} ipython3 62 | fig = None 63 | for ttm in (0.05, 0.1, 0.2, 0.4, 0.6, 1): 64 | fig = pricer.maturity(ttm).plot(fig=fig, name=f"t={ttm}") 65 | fig.update_layout(title="Implied black vols - Merton", height=500) 66 | ``` 67 | 68 | This term structure of volatility demonstrates one of the principal weaknesses of the Merton model, and indeed of all jump-diffusion models based on Lévy processes: the rapid flattening of the volatility surface as time-to-maturity increases. 69 | For very short times to maturity, however, the model has no problem producing steep volatility smiles and skews. 70 | 71 | +++ 72 | 73 | ### MC paths 74 | 75 | ```{code-cell} ipython3 76 | merton.sample(20, time_horizon=1, time_steps=1000).plot().update_traces(line_width=0.5) 77 | ``` 78 | 79 | ## Exponential Jump Diffusion 80 | 81 | This is a variation of the Merton model, where the jump distribution is a double exponential. 82 | The advantage of this model is that it allows for an asymmetric jump distribution, which can be useful in some cases, for example when fitting option prices that exhibit a skew.
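For reference, the asymmetric double-exponential jump density can be sketched as follows (the parameter names `eta_up`, `eta_down`, `p_up` are illustrative and do not match quantflow's `jump_asymmetry` parameterization):

```python
import math

def double_exponential_pdf(x: float, eta_up: float, eta_down: float, p_up: float = 0.5) -> float:
    # With probability p_up the jump is Exp(eta_up) upward,
    # otherwise Exp(eta_down) downward; a mixture density on the real line.
    if x >= 0:
        return p_up * eta_up * math.exp(-eta_up * x)
    return (1 - p_up) * eta_down * math.exp(eta_down * x)
```

Choosing different decay rates (or mixture weights) for the up and down sides is what produces the asymmetry, and hence the skew, in option prices.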
83 | 84 | ```{code-cell} ipython3 85 | from quantflow.utils.distributions import DoubleExponential 86 | 87 | jd = JumpDiffusion.create(DoubleExponential, jump_fraction=0.8, jump_intensity=50, jump_asymmetry=0.2) 88 | pricer = OptionPricer(jd) 89 | pricer 90 | ``` 91 | 92 | ```{code-cell} ipython3 93 | fig = None 94 | for ttm in (0.05, 0.1, 0.2, 0.4, 0.6, 1): 95 | fig = pricer.maturity(ttm).plot(fig=fig, name=f"t={ttm}") 96 | fig.update_layout(title="Implied black vols - Double-exponential Jump Diffusion", height=500) 97 | ``` 98 | 99 | ```{code-cell} ipython3 100 | 101 | ``` 102 | -------------------------------------------------------------------------------- /notebooks/models/overview.md: -------------------------------------------------------------------------------- 1 | --- 2 | jupytext: 3 | text_representation: 4 | extension: .md 5 | format_name: myst 6 | format_version: 0.13 7 | jupytext_version: 1.16.6 8 | kernelspec: 9 | display_name: Python 3 (ipykernel) 10 | language: python 11 | name: python3 12 | --- 13 | 14 | # Stochastic Processes 15 | 16 | These sections describe the various stochastic processes implemented in the library and how to use them. 17 | 18 | ```{tableofcontents} 19 | ``` 20 | -------------------------------------------------------------------------------- /notebooks/models/weiner.md: -------------------------------------------------------------------------------- 1 | --- 2 | jupytext: 3 | text_representation: 4 | extension: .md 5 | format_name: myst 6 | format_version: 0.13 7 | jupytext_version: 1.16.6 8 | kernelspec: 9 | display_name: Python 3 (ipykernel) 10 | language: python 11 | name: python3 12 | --- 13 | 14 | # Weiner Process 15 | 16 | In this document, we use the term Weiner process $w_t$ to indicate a Brownian motion with standard deviation given by the parameter $\sigma$; that is to say, the one-dimensional Weiner process is defined as: 17 | 18 | 1. $w_t$ is a Lévy process 19 | 2.
$d w_t = w_{t+dt}-w_t \sim N\left(0, \sigma^2 dt\right)$ where $N\left(\mu, \sigma^2\right)$ denotes the normal distribution 20 | 21 | The [](characteristic-exponent) of $w$ is 22 | \begin{equation} 23 | \phi_{w, u} = \frac{\sigma^2 u^2}{2} 24 | \end{equation} 25 | 26 | ```{code-cell} 27 | from quantflow.sp.weiner import WeinerProcess 28 | 29 | pr = WeinerProcess(sigma=0.5) 30 | pr 31 | ``` 32 | 33 | ```{code-cell} 34 | from quantflow.utils import plot 35 | # create the marginal at a time in the future 36 | m = pr.marginal(1) 37 | plot.plot_characteristic(m, n=32) 38 | ``` 39 | 40 | ```{code-cell} 41 | from quantflow.utils import plot 42 | import numpy as np 43 | plot.plot_marginal_pdf(m, 128) 44 | ``` 45 | 46 | ## Test Option Pricing 47 | 48 | ```{code-cell} 49 | from quantflow.options.pricer import OptionPricer 50 | from quantflow.sp.weiner import WeinerProcess 51 | pricer = OptionPricer(WeinerProcess(sigma=0.2)) 52 | pricer 53 | ``` 54 | 55 | ```{code-cell} 56 | import plotly.express as px 57 | import plotly.graph_objects as go 58 | from quantflow.options.bs import black_call 59 | pricer.reset() 60 | r = pricer.maturity(0.005) 61 | b = r.black() 62 | fig = px.line(x=r.moneyness_ttm, y=r.time_value, markers=True, title="Time value") 63 | fig.add_trace(go.Scatter(x=b.moneyness_ttm, y=b.time_value, name=b.name, line=dict())) 64 | fig.show() 65 | ``` 66 | 67 | ```{code-cell} 68 | pricer.maturity(0.1).plot() 69 | ``` 70 | 71 | ```{code-cell} 72 | 73 | ``` 74 | -------------------------------------------------------------------------------- /notebooks/reference/biblio.md: -------------------------------------------------------------------------------- 1 | --- 2 | jupytext: 3 | formats: ipynb,md:myst 4 | text_representation: 5 | extension: .md 6 | format_name: myst 7 | format_version: 0.13 8 | jupytext_version: 1.16.6 9 | kernelspec: 10 | display_name: Python 3 (ipykernel) 11 | language: python 12 | name: python3 13 | --- 14 | 15 | # Bibliography 16 | 17 | ```{bibliography} 18 | ``` 19 |
-------------------------------------------------------------------------------- /notebooks/reference/contributing.md: -------------------------------------------------------------------------------- 1 | --- 2 | jupytext: 3 | formats: ipynb,md:myst 4 | text_representation: 5 | extension: .md 6 | format_name: myst 7 | format_version: 0.13 8 | jupytext_version: 1.16.6 9 | kernelspec: 10 | display_name: Python 3 (ipykernel) 11 | language: python 12 | name: python3 13 | --- 14 | 15 | # Contributing 16 | 17 | Welcome to the `quantflow` repository! We are excited you are here and want to contribute. 18 | 19 | ## Getting Started 20 | 21 | To get started with quantflow's codebase, take the following steps: 22 | 23 | * Clone the repo 24 | ``` 25 | git clone git@github.com:quantmind/quantflow.git 26 | ``` 27 | * Install dev dependencies 28 | ``` 29 | make install-dev 30 | ``` 31 | * Run tests 32 | ``` 33 | make tests 34 | ``` 35 | * Run the jupyter notebook server during development 36 | ``` 37 | make notebook 38 | ``` 39 | ## Documentation 40 | 41 | The documentation is built using [Jupyter book](https://jupyterbook.org/en/stable/intro.html) which supports an *extended version of Jupyter Markdown* called "MyST Markdown". 42 | For information about the MyST syntax and how to use it, see 43 | [the MyST-Parser documentation](https://myst-parser.readthedocs.io/en/latest/using/syntax.html). 44 | 45 | To build the documentation website 46 | ``` 47 | make book 48 | ``` 49 | Navigate to the `notebooks/_build/html` directory to find the `index.html` file, which you can open in your browser. 50 | 51 | ## Notebooks 52 | 53 | To run the notebooks you can use the provided `make` command. 54 | 55 | ``` 56 | make notebook 57 | ``` 58 | 59 | This will start a Jupyter notebook server and open the browser with the notebook interface. 60 | You will be able to run the notebooks and see the results interactively (the published book doesn't have interactive widgets).
61 | 62 | ## Developing with VS Code 63 | 64 | If you develop with VS Code, we provide tooling to ease development. 65 | 66 | * **Notebook development**: you can use the provided tasks to synchronize notebooks with markdown `myst` files via `Ctrl+Shift+B`. This allows you to interact with the notebooks in VS Code rather than the Jupyter interface. 67 | 68 | +++ 69 | -------------------------------------------------------------------------------- /notebooks/reference/glossary.md: -------------------------------------------------------------------------------- 1 | --- 2 | jupytext: 3 | text_representation: 4 | extension: .md 5 | format_name: myst 6 | format_version: 0.13 7 | jupytext_version: 1.16.6 8 | kernelspec: 9 | display_name: Python 3 (ipykernel) 10 | language: python 11 | name: python3 12 | --- 13 | 14 | # Glossary 15 | 16 | ## Characteristic Function 17 | 18 | The [characteristic function](../theory/characteristic.md) of a random variable $x$ is the Fourier transform of ${\mathbb P}_x$, 19 | where ${\mathbb P}_x$ is the distribution measure of $x$. 20 | 21 | \begin{equation} 22 | \Phi_{x,u} = {\mathbb E}\left[e^{i u x}\right] = \int e^{i u s} {\mathbb P}_x\left(d s\right) 23 | \end{equation} 24 | 25 | If $x$ is a continuous random variable, then the characteristic function is the Fourier transform of the PDF $f_x$. 26 | 27 | \begin{equation} 28 | \Phi_{x,u} = {\mathbb E}\left[e^{i u x}\right] = \int e^{i u s} f_x\left(s\right) ds 29 | \end{equation} 30 | 31 | 32 | ## Cumulative Distribution Function (CDF) 33 | 34 | The cumulative distribution function (CDF), or just distribution function, 35 | of a real-valued random variable $x$ is the function given by 36 | \begin{equation} 37 | F_x(s) = {\mathbb P}_x(x \leq s) 38 | \end{equation} 39 | 40 | where ${\mathbb P}_x$ is the distribution measure of $x$. 41 | 42 | ## Hurst Exponent 43 | 44 | The Hurst exponent is a measure of the long-term memory of a time series.
It measures the relative tendency of a time series either to regress strongly to the mean or to cluster in a direction. 45 | 46 | Check this study on the [Hurst exponent with OHLC data](../applications/hurst). 47 | 48 | ## Moneyness 49 | 50 | Moneyness is used in the context of option pricing and it is defined as 51 | 52 | \begin{equation} 53 | \ln\frac{K}{F} 54 | \end{equation} 55 | 56 | where $K$ is the strike and $F$ is the forward price. A positive value implies strikes above the forward, which means put options are in the money and call options are out of the money. 57 | 58 | 59 | ## Moneyness Time Adjusted 60 | 61 | The time-adjusted moneyness is used in the context of option pricing in order to compare options with different maturities. It is defined as 62 | 63 | \begin{equation} 64 | \frac{1}{\sqrt{T}}\ln\frac{K}{F} 65 | \end{equation} 66 | 67 | where $K$ is the strike, $F$ is the forward price, and $T$ is the time to maturity. 68 | 69 | The key reason for dividing by the square root of time-to-maturity is related to how volatility and price movement behave over time. 70 | The price of the underlying asset is subject to random fluctuations; if these fluctuations follow a Brownian motion, then the 71 | standard deviation of the price movement increases with the square root of time. 72 | 73 | ## Probability Density Function (PDF) 74 | 75 | The [probability density function](https://en.wikipedia.org/wiki/Probability_density_function) 76 | (PDF), or density, of a continuous random variable, is a function that describes the relative likelihood for this random variable to take on a given value.
It is related to the CDF by the formula 77 | 78 | \begin{equation} 79 | F_x(x) = \int_{-\infty}^x f_x(s) ds 80 | \end{equation} 81 | -------------------------------------------------------------------------------- /notebooks/reference/references.bib: -------------------------------------------------------------------------------- 1 | --- 2 | --- 3 | 4 | @book{bertoin, 5 | title={Lévy Processes}, 6 | author={Bertoin, J.}, 7 | year={1996}, 8 | publisher={Cambridge University Press}, 9 | } 10 | 11 | @article{carr_madan, 12 | title = {Option valuation using the fast Fourier transform}, 13 | journal = {Journal of Computational Finance}, 14 | volume = {3}, 15 | pages = {463-520}, 16 | author = {Carr, P. and Madan, D.}, 17 | year = {1999}, 18 | url = {http://faculty.baruch.cuny.edu/lwu/890/CarrMadan99.pdf}, 19 | } 20 | 21 | @article{lee_option, 22 | title = {Option Pricing by Transform Methods: Extensions, Unification, and Error Control}, 23 | author = {Lee, R. W.}, 24 | year = {2004}, 25 | url = {https://www.math.uchicago.edu/~rl/dft.pdf}, 26 | } 27 | 28 | @article{carr_wu, 29 | title = {Time-changed Lévy processes and option pricing}, 30 | author = {Carr, P. and Wu, L.}, 31 | year = {2002}, 32 | journal = {Journal of Financial Economics}, 33 | volume = {7}, 34 | pages = {113-141}, 35 | url = {https://engineering.nyu.edu/sites/default/files/2019-03/Carr-time-changed-levy-processes-option-pricing.pdf}, 36 | } 37 | 38 | @article{cgmy, 39 | title={Stochastic Volatility for Lévy processes}, 40 | author={Carr P., Geman H., Madan D.B. 
and Yor M.}, 41 | year={2003}, 42 | journal = {Mathematical Finance}, 43 | volume = {13}, 44 | number = {3}, 45 | url = {https://engineering.nyu.edu/sites/default/files/2019-03/Carr-stochastic-volatility-levy-processes.pdf}, 46 | } 47 | 48 | @article{frft, 49 | title={Option Pricing Using the Fractional FFT}, 50 | author={Chourdakis, K.}, 51 | journal={Journal of Computational Finance}, 52 | year={2004}, 53 | volume={8}, 54 | pages={1-18}, 55 | url = {https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=6bdf4696312d37427eda2740137650c09deacda7} 56 | } 57 | 58 | @mastersthesis{saez, 59 | title={Fourier Transform Methods for Option Pricing: An Application to extended Heston-type Models}, 60 | author={Saez, G. K. G.}, 61 | school={Universidad del Pais Vasco}, 62 | year={2014}, 63 | url = {https://www.uv.es/bfc/TFM2014/008-014.pdf} 64 | } 65 | 66 | @article{heston-calibration, 67 | url = {https://doi.org/10.1515/math-2017-0058}, 68 | title = {Calibration and simulation of Heston model}, 69 | author = {Milan Mrázek and Jan Pospíšil}, 70 | pages = {679--704}, 71 | volume = {15}, 72 | number = {1}, 73 | journal = {Open Mathematics}, 74 | doi = {doi:10.1515/math-2017-0058}, 75 | year = {2017}, 76 | lastchecked = {2023-07-07} 77 | } 78 | 79 | @article{heston-simulation, 80 | url = {https://papers.ssrn.com/sol3/papers.cfm?abstract_id=946405}, 81 | title = {Efficient Simulation of the Heston Stochastic Volatility Model}, 82 | author = {Andersen, Leif B.G.}, 83 | volume = {11}, 84 | number = {3}, 85 | journal = {Journal of Computational Finance}, 86 | year = {2008}, 87 | } 88 | 89 | @article{ukf, 90 | title={The Unscented Kalman Filter for Nonlinear Estimation}, 91 | author={Merwe}, 92 | journal={internet}, 93 | year={2014}, 94 | url={https://groups.seas.harvard.edu/courses/cs281/papers/unscented.pdf} 95 | } 96 | 97 | @article{ekf, 98 | title={Parameter Estimations of Heston Model Based on Consistent Extended Kalman Filter}, 99 | author={Wang X., He X., Zhao Y. 
& Zuo Z.}, 100 | journal={internet}, 101 | year={2017}, 102 | url={https://www.sciencedirect.com/science/article/pii/S2405896317324758}, 103 | } 104 | 105 | @article{ou, 106 | title={Non-Gaussian OU based models and some of their uses in financial economics}, 107 | author={Barndorff-Nielsen, O.E. & Shephard, N.}, 108 | journal={Journal of the Royal Statistical Society}, 109 | year={2001}, 110 | volume = {63}, 111 | number = {2}, 112 | url={https://www.sciencedirect.com/science/article/pii/S2405896317324758}, 113 | } 114 | 115 | @article{gamma-ou, 116 | title="Gamma Related Ornstein-Uhlenbeck Processes and their Simulation", 117 | author={Sabino, P. & Petroni, C.}, 118 | journal={Journal of Statistical Computation and Simulation}, 119 | year={2021}, 120 | volume = {91}, 121 | number = {6}, 122 | url={https://doi.org/10.1080/00949655.2020.1842408}, 123 | } 124 | 125 | @article{dspp, 126 | url={https://editorialexpress.com/cgi-bin/conference/download.cgi?db_name=sbe35&paper_id=179}, 127 | title={Doubly Stochastic Poisson Processes with Affine Intensities}, 128 | author={Unknown}, 129 | journal={internet}, 130 | year={2017}, 131 | } 132 | 133 | @mastersthesis{molnar, 134 | url={https://drive.google.com/file/d/1zCU1OZyrKQLpxaypPv9U5UPbReBDXcMf/view}, 135 | title={Volatility modeling and forecasting: utilization of realized volatility, implied volatility and the highest and lowest price of the day}, 136 | author={Peter Molnar}, 137 | school={University of Economics in Prague}, 138 | year={2020}, 139 | } 140 | -------------------------------------------------------------------------------- /notebooks/theory/characteristic.md: -------------------------------------------------------------------------------- 1 | --- 2 | jupytext: 3 | text_representation: 4 | extension: .md 5 | format_name: myst 6 | format_version: 0.13 7 | jupytext_version: 1.16.6 8 | kernelspec: 9 | display_name: Python 3 (ipykernel) 10 | language: python 11 | name: python3 12 | --- 13 | 14 | # 
Characteristic Function 15 | 16 | The library makes heavy use of the [characteristic function](https://en.wikipedia.org/wiki/Characteristic_function_(probability_theory)) 17 | concept, and it is therefore useful to become familiar with it. 18 | 19 | ## Definition 20 | 21 | The characteristic function of a random variable $x$ is the Fourier (inverse) transform of ${\mathbb P}_x$, where ${\mathbb P}_x$ is the distribution measure of $x$ 22 | \begin{equation} 23 | \Phi_{x,u} = {\mathbb E}\left[e^{i u x}\right] = \int e^{i u s} {\mathbb P}_x\left(ds\right) 24 | \end{equation} 25 | 26 | ## Properties 27 | 28 | * $\Phi_{x, 0} = 1$ 29 | * it is bounded, $\left|\Phi_{x, u}\right| \le 1$ 30 | * it is Hermitian, $\Phi_{x, -u} = \overline{\Phi_{x, u}}$ 31 | * it is continuous 32 | * the characteristic function of a symmetric random variable is real-valued and even 33 | * the moments of $x$ are given by 34 | 35 | \begin{equation} 36 | {\mathbb E}\left[x^n\right] = i^{-n} \left.\frac{d^n \Phi_{x, u}}{d u^n}\right|_{u=0} 37 | \end{equation} 38 | 39 | ## Convolution 40 | 41 | The characteristic function is a great tool for working with linear combinations of random variables. 42 | 43 | * if $x$ and $y$ are independent random variables then the characteristic function of the linear combination $a x + b y$ ($a$ and $b$ are constants) is 44 | 45 | \begin{equation} 46 | \Phi_{ax+by,u} = \Phi_{x,a u}\Phi_{y,b u} 47 | \end{equation} 48 | 49 | * which means, if $x$ and $y$ are independent, the characteristic function of $x+y$ is the product 50 | 51 | \begin{equation} 52 | \Phi_{x+y,u} = \Phi_{x,u}\Phi_{y,u} 53 | \end{equation} 54 | 55 | * the characteristic function of $ax+b$ is 56 | 57 | \begin{equation} 58 | \Phi_{ax+b,u} = e^{iub}\Phi_{x,au} 59 | \end{equation} 60 | 61 | ## Inversion 62 | 63 | There is a one-to-one correspondence between cumulative distribution functions and characteristic functions, so it is possible to find one of these functions if we know the other.
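The properties above are easy to verify numerically. The following is a minimal sketch (plain `numpy`, independent of the library; sample sizes and tolerances are arbitrary choices) that checks the closed form of a normal characteristic function and the convolution property via Monte Carlo estimates of ${\mathbb E}\left[e^{iux}\right]$:

```python
# Monte Carlo sanity check of two characteristic function properties.
# Illustrative only; not part of the quantflow API.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x = rng.normal(0.0, 1.0, n)  # x ~ N(0, 1)
y = rng.normal(0.0, 2.0, n)  # y ~ N(0, 4), independent of x

def cf(samples: np.ndarray, u: float) -> complex:
    """Monte Carlo estimate of E[exp(i*u*x)]."""
    return complex(np.exp(1j * u * samples).mean())

u = 0.7
# closed form for N(0, 1): Phi(u) = exp(-u^2/2)
assert abs(cf(x, u) - np.exp(-0.5 * u**2)) < 1e-2
# independence: Phi_{x+y}(u) = Phi_x(u) * Phi_y(u)
assert abs(cf(x + y, u) - cf(x, u) * cf(y, u)) < 1e-2
```

The Monte Carlo error shrinks like $1/\sqrt{n}$, so the tolerances above are comfortably loose for this sample size.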
64 | 65 | ### Continuous distributions 66 | 67 | The inversion formula for these distributions yields the density and is given by 68 | 69 | \begin{equation} 70 | f_x\left(s\right) = \frac{1}{2\pi}\int_{-\infty}^\infty e^{-ius}\Phi_{x, u} du 71 | \end{equation} 72 | 73 | ### Discrete distributions 74 | 75 | In these distributions, the random variable $x$ takes integer values $k$. For example, the Poisson distribution is discrete. 76 | The inversion formula for these distributions is given by 77 | 78 | \begin{equation} 79 | {\mathbb P}_x\left(x=k\right) = \frac{1}{2\pi}\int_{-\pi}^\pi e^{-iuk}\Phi_{x, u} du 80 | \end{equation} 81 | 82 | (characteristic-exponent)= 83 | ## Characteristic Exponent 84 | 85 | The characteristic exponent $\phi_{x,u}$ is defined as 86 | 87 | \begin{equation} 88 | \Phi_{x,u} = e^{-\phi_{x,u}} 89 | \end{equation} 90 | -------------------------------------------------------------------------------- /notebooks/theory/inversion.md: -------------------------------------------------------------------------------- 1 | --- 2 | jupytext: 3 | formats: ipynb,md:myst 4 | text_representation: 5 | extension: .md 6 | format_name: myst 7 | format_version: 0.13 8 | jupytext_version: 1.16.6 9 | kernelspec: 10 | display_name: Python 3 (ipykernel) 11 | language: python 12 | name: python3 13 | --- 14 | 15 | # From Characteristic Function to PDF 16 | 17 | +++ 18 | 19 | One uses the inverse Fourier transform formula to obtain the probability density function (PDF) from a characteristic function. 20 | 21 | \begin{equation} 22 | f(x) = \frac{1}{2\pi}\int_{-\infty}^\infty e^{-iux} \Phi_x\left(u\right) du = \frac{1}{\pi} {\mathcal R} \int_0^\infty e^{-iux} \Phi_x\left(u\right) du 23 | \end{equation} 24 | 25 | The last equality holds because the PDF is real-valued.
26 | 27 | ## Discretization 28 | 29 | The PDF integration can be approximated as: 30 | 31 | \begin{align} 32 | u_m &= \delta_u m \\ 33 | f(x) &\approx \frac{1}{\pi}\sum_{m=0}^{N-1} h_m e^{-i u_m x} \Phi_x\left(u_m\right) \delta_u 34 | \end{align} 35 | 36 | * $\delta_u$ is the discretization in the frequency domain. It must be small enough to provide good accuracy of the integral. 37 | * $N$ is the number of discretization points and must be large enough so that the characteristic function is virtually 0 at $u_{N-1}=\delta_u \left(N-1\right)$. 38 | * $h_m$ is given by the integration methodology, either the trapezoidal or the Simpson rule (the library supports both, with trapezoidal as the default). 39 | 40 | For full details, follow {cite:p}`carr_madan`, {cite:p}`saez`. 41 | 42 | One could evaluate this sum directly; however, that has $O(N^2)$ time complexity. 43 | One alternative, implemented in the library, is the Fast Fourier Transform (FFT), which has $O(N \log N)$ time complexity. 44 | Another, more flexible, alternative is the Fractional FFT as described in {cite:p}`frft`. This is the methodology used by default in the library. 45 | 46 | +++ 47 | 48 | ## FFT Integration 49 | 50 | The FFT is an efficient algorithm for computing the discrete Fourier coefficients $d$ from $f$. Given a power-of-two number of points $N=2^n$, these are given by 51 | 52 | \begin{equation} 53 | d_j = \frac{1}{N}\sum_{m=0}^{N-1} f_m e^{-jm\frac{2\pi}{N} i}\ \ j=0, 1, \dots, N-1 54 | \end{equation} 55 | 56 | Using this formula, the discretization above can be rewritten as 57 | 58 | \begin{align} 59 | x_j &= -b + \delta_x j \\ 60 | \zeta &= \delta_u \delta_x \\ 61 | f_m &= h_m \frac{N}{\pi} e^{i u_m b} \Phi_x\left(u_m\right) \delta_u\\ 62 | f(x_j) &\approx \frac{1}{N} \sum_{m=0}^{N-1} f_m e^{-j m \zeta i} 63 | \end{align} 64 | 65 | The parameter $b$ controls the range of the random variable $x$.
The FFT requires that 66 | 67 | \begin{equation} 68 | \zeta = \frac{2\pi}{N} 69 | \end{equation} 70 | 71 | which means $\delta_u$ and $\delta_x$ cannot be chosen independently. 72 | 73 | As an example, let us invert the characteristic function of the Wiener process, which yields the normal distribution. 74 | 75 | ```{code-cell} 76 | from quantflow.sp.weiner import WeinerProcess 77 | p = WeinerProcess(sigma=0.5) 78 | m = p.marginal(0.2) 79 | m.std() 80 | ``` 81 | 82 | ```{code-cell} 83 | from quantflow.utils import plot 84 | 85 | plot.plot_characteristic(m) 86 | ``` 87 | 88 | ```{code-cell} 89 | from quantflow.utils import plot 90 | import numpy as np 91 | plot.plot_marginal_pdf(m, 128, use_fft=True, max_frequency=20) 92 | ``` 93 | 94 | ```{code-cell} 95 | plot.plot_marginal_pdf(m, 128*8, use_fft=True, max_frequency=8*20) 96 | ``` 97 | 98 | **Note** the number of unnecessary discretization points in the frequency domain (the characteristic function is zero after 15 or so). However, the space domain is poorly represented because of the FFT constraint (we have a relatively small number of points where it matters, around zero), since 99 | 100 | \begin{equation} 101 | \delta_x = \frac{2 \pi}{N \delta_u} 102 | \end{equation} 103 | 104 | +++ 105 | 106 | ## FRFT 107 | Compared to the FFT, this method relaxes the constraint $\zeta=2\pi/N$ so that the frequency and space domains can be discretized independently. We use the methodology from {cite:p}`frft` 108 | 109 | \begin{align} 110 | y &= \left(\left[f_j e^{-i j^2 \zeta/2}\right]_{j=0}^{N-1}, \left[0\right]_{j=0}^{N-1}\right) \\ 111 | z &= \left(\left[e^{i j^2 \zeta/2}\right]_{j=0}^{N-1}, \left[e^{i\left(N-j\right)^2 \zeta/2}\right]_{j=0}^{N-1}\right) 112 | \end{align} 113 | 114 | We can now reduce the number of points needed for the discretization and achieve higher accuracy by selecting the frequency- and space-domain discretizations independently.
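In code, the two sequences $y$ and $z$ above amount to a circular convolution evaluated with three $2N$-point FFTs (the chirp trick). A minimal `numpy` sketch computing $G_k = \sum_j f_j e^{-i j k \zeta}$ for arbitrary $\zeta$ — illustrative only, not the library's implementation:

```python
# Fractional FFT via the chirp trick: three 2N-point FFTs replace the
# direct O(N^2) sum. Illustrative sketch, not the quantflow implementation.
import numpy as np

def frft(f: np.ndarray, zeta: float) -> np.ndarray:
    """Compute G[k] = sum_j f[j] * exp(-1j * zeta * j * k) for k = 0..N-1."""
    n = len(f)
    j = np.arange(n)
    chirp = np.exp(-1j * zeta * j**2 / 2)
    # the two zero-padded sequences y and z from the text
    y = np.concatenate([f * chirp, np.zeros(n, dtype=complex)])
    z = np.concatenate([np.exp(1j * zeta * j**2 / 2),
                        np.exp(1j * zeta * (n - j)**2 / 2)])
    # circular convolution via the convolution theorem
    conv = np.fft.ifft(np.fft.fft(y) * np.fft.fft(z))
    return chirp * conv[:n]

# check against the direct O(N^2) evaluation
rng = np.random.default_rng(1)
f = rng.normal(size=16) + 1j * rng.normal(size=16)
k = np.arange(16)
direct = np.exp(-1j * 0.1 * np.outer(k, k)) @ f
assert np.allclose(frft(f, 0.1), direct)
```

Setting $\zeta = 2\pi/N$ recovers the plain (unnormalized) DFT, which is exactly the constrained case discussed above.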
115 | 116 | ```{code-cell} 117 | plot.plot_marginal_pdf(m, 128) 118 | ``` 119 | 120 | Since one N-point FRFT invokes three 2N-point FFT procedures, the number of operations is approximately $6N\log{N}$ compared to $N\log{N}$ for the FFT. However, we can use fewer points, as demonstrated, and be more robust in delivering results. 121 | 122 | The FRFT is the default transform across the library; the FFT can be selected by passing `use_fft` to the transform functions, but this is not advised. 123 | 124 | +++ 125 | 126 | ## Additional References 127 | 128 | 129 | * [Fourier Transform and Characteristic Functions](https://faculty.baruch.cuny.edu/lwu/890/ADP_Transform.pdf) - useful but lots of typos 130 | -------------------------------------------------------------------------------- /notebooks/theory/levy.md: -------------------------------------------------------------------------------- 1 | --- 2 | jupytext: 3 | encoding: '# -*- coding: utf-8 -*-' 4 | formats: ipynb,md:myst 5 | text_representation: 6 | extension: .md 7 | format_name: myst 8 | format_version: 0.13 9 | jupytext_version: 1.16.6 10 | kernelspec: 11 | display_name: Python 3 (ipykernel) 12 | language: python 13 | name: python3 14 | --- 15 | 16 | # Lévy process 17 | A Lévy process $x_t$ is a stochastic process which satisfies the following properties 18 | 19 | * $x_0 = 0$ 20 | * **independent increments**: $x_t - x_s$ is independent of $x_u; u \le s\ \forall\ s < t$ 21 | * **stationary increments**: $x_{s+t} - x_s$ has the same distribution as $x_t - x_0$ for any $s,t > 0$ 22 | 23 | This means that the shocks to the process are independent, while the stationarity assumption specifies that the distribution of $x_{t+s} - x_s$ may change with $t$ but does not depend upon $s$. 24 | 25 | **Remark**: The properties of stationary and independent increments imply that a Lévy process is a Markov process.
26 | Thanks to almost sure right continuity of paths, one may show in addition that Lévy processes are also 27 | strong Markov processes. See the [Markov property](https://en.wikipedia.org/wiki/Markov_property). 28 | 29 | ## Characteristic function 30 | 31 | The independence and stationarity of the increments of the Lévy process imply that the [characteristic function](./characteristic.md) of $x_t$ has the form 32 | 33 | \begin{equation} 34 | \Phi_{x_t, u} = {\mathbb E}\left[e^{i u x_t}\right] = e^{-\phi_{x_t, u}} = e^{-t \phi_{x_1,u}} 35 | \end{equation} 36 | 37 | where the [](characteristic-exponent) $\phi_{x_1,u}$ is given by the [Lévy–Khintchine formula](https://en.wikipedia.org/wiki/L%C3%A9vy_process). 38 | 39 | There are several Lévy processes in the literature, including the [Poisson process](../models/poisson.md), the compound Poisson process 40 | and the [Brownian motion](../models/weiner.md). 41 | 42 | +++ 43 | 44 | ## Time Changed Lévy Processes 45 | 46 | We follow the paper by Carr and Wu {cite:p}`carr_wu` to define a continuous time-changed Lévy process $y_t$ as 47 | 48 | \begin{align} 49 | y_t &= x_{\tau_t}\\ 50 | \tau_t &= \int_0^t \lambda_s ds 51 | \end{align} 52 | 53 | where $x_s$ is a Lévy process and $\lambda_s$ is a positive and integrable process which we refer to as the **stochastic intensity process**. 54 | While $\tau_t$ is always continuous, $\lambda$ can exhibit jumps. Since the time-changed process is a stochastic process evaluated at a stochastic time, its characteristic function involves expectations over two sources of randomness: 55 | 56 | \begin{equation} 57 | \Phi_{y_t, u} = {\mathbb E}\left[e^{i u x_{\tau_t}}\right] = {\mathbb E}\left[{\mathbb E}\left[\left.e^{i u x_s}\right|\tau_t=s\right]\right] 58 | \end{equation} 59 | 60 | where the inside expectation is taken on $x_{\tau_t}$ conditional on a fixed value of $\tau_t = s$ and the outside expectation is on all possible values of $\tau_t$.
If the random time $\tau_t$ is independent of $x_t$, the randomness due to the Lévy process can be integrated out using the characteristic function of $x_t$: 61 | 62 | \begin{equation} 63 | \Phi_{y_t, u} = {\mathbb E}\left[e^{-\tau_t \phi_{x_1,u}}\right] = {\mathbb L}_{\tau_t}\left(\phi_{x_1,u}\right) 64 | \end{equation} 65 | 66 | **Remark**: Under independence, the characteristic function of a time-changed Lévy process $y_t$ is the **Laplace transform** of the cumulative intensity $\tau_t$ evaluated at the characteristic exponent of $x$. 67 | 68 | Therefore the characteristic function of $y_t$ can be expressed in closed form if 69 | 70 | * the characteristic exponent of the Lévy process $x_t$ is available in closed form 71 | * the Laplace transform of $\tau_t$, the integrated intensity process, is known in closed form 72 | 73 | ## Leverage Effect 74 | 75 | To obtain the Laplace transform of $\tau_t$ in closed form, consider its specification in terms of the intensity process $\lambda_t$: 76 | 77 | \begin{equation} 78 | {\mathbb L}_{\tau_t}\left(u\right) = {\mathbb E}\left[e^{- u \int_0^t \lambda_s ds}\right] 79 | \end{equation} 80 | 81 | This equation is very common in the bond pricing literature if we regard $u\lambda_t$ as the instantaneous interest rate. 82 | In the general case, the intensity process is correlated with the Lévy process driving the increments; this is well 83 | known in the literature as the **leverage effect**. 84 | 85 | Carr and Wu {cite:p}`carr_wu` solve this problem by changing the measure from an economy with leverage effect to one without it.
86 | 87 | \begin{align} 88 | \Phi_{y_t, u} &= {\mathbb E}\left[e^{i u y_t}\right] \\ 89 | &= {\mathbb E}\left[e^{i u y_t + \tau_t \phi_{x_1, u} - \tau_t \phi_{x_1, u}}\right] \\ 90 | &= {\mathbb E}\left[M_{t, u} e^{-\tau_t \phi_{x_1,u}}\right] \\ 91 | &= {\mathbb E}^u\left[e^{-\tau_t \phi_{x_1,u}}\right] \\ 92 | &= {\mathbb L}_{\tau_t}^u\left(\phi_{x_1,u}\right) 93 | \end{align} 94 | 95 | where $E[\cdot]$ and $E^u[\cdot]$ denote the expectations under the probability measures $P$ and $Q^u$, respectively. The two measures are linked via 96 | the complex-valued [Radon–Nikodym derivative](https://en.wikipedia.org/wiki/Radon%E2%80%93Nikodym_theorem#Radon%E2%80%93Nikodym_derivative) 97 | 98 | \begin{equation} 99 | M_{t, u} = \frac{d Q^u}{d P} = \exp{\left(i u y_t + \tau_t \phi_{x_1, u}\right)} = \exp{\left(i u y_t + \phi_{x_1, u}\int_0^t \lambda_s ds\right)} 100 | \end{equation} 101 | 102 | ## Affine definition 103 | 104 | In order to obtain analytically tractable models we need to impose some restriction on the stochastic intensity process. 105 | An affine intensity process takes the general form 106 | 107 | \begin{equation} 108 | \lambda_t = r_0 + r_1 z_t 109 | \end{equation} 110 | 111 | where $r_0$ and $r_1$ are constants and $z_t$ is a Markov process called the **state process**. 112 | When the intensity process is affine, the Laplace transform takes the following form 113 | 114 | \begin{equation} 115 | {\mathbb L}_{\tau_t}\left(z\right) = {\mathbb E}\left[e^{- z \tau_t}\right] = e^{-a_{z, t} - b_{z, t} z_0} 116 | \end{equation} 117 | 118 | where the coefficients $a$ and $b$ satisfy Riccati ODEs, which can be solved numerically and, in some cases, analytically.
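As a sketch of this machinery, assume a CIR state process with $\lambda_t = z_t$ (i.e. $r_0 = 0$, $r_1 = 1$), for which the Riccati ODEs read $b' = 1 - \kappa b - \tfrac{1}{2}\sigma^2 b^2$ and $a' = \kappa\theta b$ with $a_0 = b_0 = 0$. Solving them numerically reproduces the well-known CIR bond-pricing closed form; the parameter values below are arbitrary and the code is independent of the quantflow API:

```python
# Laplace transform L(1) = E[exp(-tau_t)] for a CIR intensity, obtained by
# integrating the Riccati ODEs numerically and compared with the CIR
# closed form. Illustrative sketch only.
import numpy as np
from scipy.integrate import solve_ivp

kappa, theta, sigma, z0, t = 1.5, 0.8, 0.6, 0.5, 1.0

def riccati(_: float, ab: np.ndarray) -> list[float]:
    a, b = ab
    return [kappa * theta * b, 1.0 - kappa * b - 0.5 * sigma**2 * b**2]

sol = solve_ivp(riccati, (0.0, t), [0.0, 0.0], rtol=1e-10, atol=1e-12)
a_t, b_t = sol.y[:, -1]
numeric = np.exp(-a_t - b_t * z0)

# CIR closed form with gamma = sqrt(kappa^2 + 2 sigma^2)
g = np.sqrt(kappa**2 + 2 * sigma**2)
den = (g + kappa) * (np.exp(g * t) - 1.0) + 2.0 * g
b_cf = 2.0 * (np.exp(g * t) - 1.0) / den
a_cf = -(2 * kappa * theta / sigma**2) * np.log(
    2 * g * np.exp((g + kappa) * t / 2) / den
)

assert abs(numeric - np.exp(-a_cf - b_cf * z0)) < 1e-6
```

Being a Laplace transform of a positive random variable evaluated at 1, the result lies in $(0, 1)$, which provides a quick sanity check.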
119 | 120 | -------------------------------------------------------------------------------- /notebooks/theory/option_pricing.md: -------------------------------------------------------------------------------- 1 | --- 2 | jupytext: 3 | formats: ipynb,md:myst 4 | text_representation: 5 | extension: .md 6 | format_name: myst 7 | format_version: 0.13 8 | jupytext_version: 1.16.6 9 | kernelspec: 10 | display_name: Python 3 (ipykernel) 11 | language: python 12 | name: python3 13 | --- 14 | 15 | # Option Pricing 16 | 17 | 18 | We can use the tooling from characteristic function inversion to price European call options on an underlying $S_t = S_0 e^{s_t}$, where $S_0$ is the spot price at time 0. 19 | 20 | ## Convexity Correction 21 | 22 | We assume a zero interest rate, so that the forward price equals the spot price. This assumption leads to the following no-arbitrage condition 23 | 24 | \begin{align} 25 | s_t &= x_t - c_t \\ 26 | {\mathbb E}_0\left[e^{s_t} \right] &= {\mathbb E}_0\left[e^{x_t - c_t} \right] = e^{-c_t} {\mathbb E}_0\left[e^{x_t} \right] = e^{-c_t} e^{-\phi_{x_t, -i}} = 1 27 | \end{align} 28 | 29 | Therefore, $c_t$ represents the so-called convexity correction term, and it is equal to 30 | 31 | \begin{equation} 32 | c_t = -\phi_{x_t, -i} 33 | \end{equation} 34 | 35 | The characteristic function of $s_t$ is given by 36 | 37 | \begin{equation} 38 | \Phi_{s_t}\left(u\right) = \Phi_{x_t}\left(u\right) e^{-i u c_t} 39 | \end{equation} 40 | 41 | The convexity correction increases with the time horizon; let us look at a few examples: 42 | 43 | ### Wiener process 44 | 45 | This is the famous convexity correction which appears in all diffusion-driven SDEs: 46 | 47 | \begin{equation} 48 | c_t = \frac{\sigma^2 t}{2} 49 | \end{equation} 50 | 51 | ```{code-cell} 52 | from quantflow.sp.weiner import WeinerProcess 53 | pr = WeinerProcess(sigma=0.5) 54 | -pr.characteristic_exponent(1, complex(0,-1)) 55 | ``` 56 | 57 | which is
the same as 58 | 59 | ```{code-cell} 60 | pr.convexity_correction(1) 61 | ``` 62 | 63 | ## Call option 64 | 65 | The price $C$ of a call option with strike $K$ is defined as 66 | \begin{align} 67 | C &= S_0 c_k \\ 68 | k &= \ln\frac{K}{S_0}\\ 69 | c_k &= {\mathbb E}\left[\left(e^{s_t} - e^k\right)1_{s_t\ge k}\right] 70 | \end{align} 71 | 72 | 73 | We follow {cite:p}`carr_madan` and write the Fourier transform of the call option as 74 | 75 | \begin{equation} 76 | \Psi_u = \int_{-\infty}^\infty e^{i u k} c_k dk 77 | \end{equation} 78 | 79 | Note that $c_k$ tends to 1 as $k \to -\infty$; therefore the call price function is not square-integrable. In order to obtain integrability, we choose complex values of $u$ of the form 80 | \begin{equation} 81 | u = v - i \alpha 82 | \end{equation} 83 | The value of $\alpha$ is a numerical choice whose suitability we verify below. 84 | 85 | It is possible to obtain the analytical expression of $\Psi_u$ in terms of the characteristic function $\Phi_{s_t}$. Once we have that expression, we can use the Fourier transform tooling presented previously to calculate option prices in this way 86 | 87 | \begin{align} 88 | c_k &= \frac{1}{2\pi}\int_{-\infty-i\alpha}^{\infty-i\alpha} e^{-iuk} \Psi_u du \\ 89 | &= \frac{e^{-\alpha k}}{\pi} {\mathcal R} \int_0^{\infty} e^{-ivk} \Psi_{v-i\alpha} dv 90 | \end{align} 91 | 92 | The analytical expression of $\Psi_u$ is given by 93 | 94 | \begin{equation} 95 | \Psi_u = \frac{\Phi_{s_t}\left(u-i\right)}{iu \left(iu + 1\right)} 96 | \end{equation} 97 | 98 | To integrate, we use the same approach as the PDF integration. 99 | 100 | ### Choice of $\alpha$ 101 | 102 | Positive values of $\alpha$ assist the integrability of the modified call value over the 103 | negative moneyness axis, but aggravate the same condition for the positive moneyness axis.
For the modified call value to be integrable in the positive moneyness 104 | direction, and hence for it to be square-integrable as well, a sufficient condition 105 | is provided by $\Psi_{-i\alpha}$ being finite, which means the characteristic function $\Phi_{s_t}\left(-\left(\alpha+1\right)i\right)$ must be finite. 106 | 107 | 108 | ## Black Formula 109 | 110 | Here we illustrate how to use the characteristic function integration with the classical [Wiener process](https://en.wikipedia.org/wiki/Wiener_process). 111 | 112 | ```{code-cell} 113 | from quantflow.sp.weiner import WeinerProcess 114 | ttm=1 115 | p = WeinerProcess(sigma=0.5) 116 | 117 | # create the marginal density at ttm 118 | m = p.marginal(ttm) 119 | m.std() 120 | ``` 121 | 122 | ```{code-cell} 123 | import plotly.express as px 124 | import plotly.graph_objects as go 125 | from quantflow.options.bs import black_call 126 | N, M = 128, 10 127 | dx = 10/N 128 | r = m.call_option(64, M, dx, alpha=0.3) 129 | b = black_call(r.x, p.sigma, ttm) 130 | fig = px.line(x=r.x, y=r.y, markers=True, labels=dict(x="moneyness", y="call price")) 131 | fig.add_trace(go.Scatter(x=r.x, y=b, name="analytical", line=dict())) 132 | fig.show() 133 | ``` 134 | 135 | -------------------------------------------------------------------------------- /notebooks/theory/overview.md: -------------------------------------------------------------------------------- 1 | --- 2 | jupytext: 3 | text_representation: 4 | extension: .md 5 | format_name: myst 6 | format_version: 0.13 7 | jupytext_version: 1.16.6 8 | kernelspec: 9 | display_name: Python 3 (ipykernel) 10 | language: python 11 | name: python3 12 | --- 13 | 14 | # Theory 15 | 16 | ```{tableofcontents} 17 | ``` 18 | -------------------------------------------------------------------------------- /pyproject.toml: -------------------------------------------------------------------------------- 1 | [tool.poetry] 2 | name = "quantflow" 3 | version = "0.3.3" 4 | description = 
"quantitative analysis" 5 | authors = ["Luca "] 6 | license = "BSD-3-Clause" 7 | readme = "readme.md" 8 | 9 | [tool.poetry.urls] 10 | Homepage = "https://github.com/quantmind/quantflow" 11 | Repository = "https://github.com/quantmind/quantflow" 12 | Documentation = "https://quantmind.github.io/quantflow/" 13 | 14 | [tool.poetry.dependencies] 15 | python = ">=3.11" 16 | scipy = "^1.14.1" 17 | pydantic = "^2.0.2" 18 | ccy = { version = "^1.7.1" } 19 | python-dotenv = "^1.0.1" 20 | polars = { version = "^1.11.0", extras = ["pandas", "pyarrow"] } 21 | asciichartpy = { version = "^1.5.25", optional = true } 22 | prompt-toolkit = { version = "^3.0.43", optional = true } 23 | aio-fluid = { version = "^1.2.1", extras = ["http"], optional = true } 24 | rich = { version = "^13.9.4", optional = true } 25 | click = { version = "^8.1.7", optional = true } 26 | holidays = { version = "^0.63", optional = true } 27 | async-cache = { version = "^1.1.1", optional = true } 28 | 29 | [tool.poetry.group.dev.dependencies] 30 | black = "^25.1.0" 31 | pytest-cov = "^6.0.0" 32 | mypy = "^1.14.1" 33 | ghp-import = "^2.0.2" 34 | ruff = "^0.11.12" 35 | pytest-asyncio = "^0.26.0" 36 | isort = "^6.0.1" 37 | 38 | 39 | [tool.poetry.extras] 40 | data = ["aio-fluid"] 41 | cli = [ 42 | "asciichartpy", 43 | "async-cache", 44 | "prompt-toolkit", 45 | "rich", 46 | "click", 47 | "holidays", 48 | ] 49 | 50 | [tool.poetry.group.book] 51 | optional = true 52 | 53 | [tool.poetry.group.book.dependencies] 54 | jupyter-book = "^1.0.0" 55 | jupytext = "^1.13.8" 56 | plotly = "^5.20.0" 57 | jupyterlab = "^4.0.2" 58 | sympy = "^1.12" 59 | ipywidgets = "^8.0.7" 60 | sphinx-autodoc-typehints = "2.3.0" 61 | sphinx-autosummary-accessors = "^2023.4.0" 62 | sphinx-copybutton = "^0.5.2" 63 | autodocsumm = "^0.2.14" 64 | 65 | [tool.poetry.scripts] 66 | qf = "quantflow.cli.script:main" 67 | 68 | [build-system] 69 | requires = ["poetry-core>=1.0.0"] 70 | build-backend = "poetry.core.masonry.api" 71 | 72 | [tool.jupytext] 
73 | formats = "ipynb,myst" 74 | 75 | [tool.pytest.ini_options] 76 | asyncio_mode = "auto" 77 | testpaths = ["quantflow_tests"] 78 | 79 | [tool.isort] 80 | profile = "black" 81 | 82 | [tool.ruff] 83 | lint.select = ["E", "F"] 84 | line-length = 88 85 | 86 | [tool.hatch.version] 87 | path = "quantflow/__init__.py" 88 | 89 | [tool.mypy] 90 | # strict = true 91 | disallow_untyped_calls = true 92 | disallow_untyped_defs = true 93 | warn_no_return = true 94 | 95 | [[tool.mypy.overrides]] 96 | module = [ 97 | "asciichartpy.*", 98 | "cache.*", 99 | "quantflow_tests.*", 100 | "IPython.*", 101 | "pandas.*", 102 | "plotly.*", 103 | "scipy.*", 104 | ] 105 | ignore_missing_imports = true 106 | disallow_untyped_defs = false 107 | -------------------------------------------------------------------------------- /quantflow/__init__.py: -------------------------------------------------------------------------------- 1 | """Quantitative analysis and pricing""" 2 | 3 | __version__ = "0.3.3" 4 | -------------------------------------------------------------------------------- /quantflow/cli/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/quantmind/quantflow/c439352354501e50333319912802fa41d44aac29/quantflow/cli/__init__.py -------------------------------------------------------------------------------- /quantflow/cli/app.py: -------------------------------------------------------------------------------- 1 | import os 2 | from dataclasses import dataclass, field 3 | from functools import partial 4 | from typing import Any 5 | 6 | import click 7 | from fluid.utils.http_client import HttpResponseError 8 | from prompt_toolkit import PromptSession 9 | from prompt_toolkit.completion import NestedCompleter 10 | from prompt_toolkit.formatted_text import HTML 11 | from prompt_toolkit.history import FileHistory 12 | from rich.console import Console 13 | from rich.text import Text 14 | 15 | from quantflow.data.vault 
import Vault 16 | 17 | from . import settings 18 | from .commands import quantflow 19 | from .commands.base import QuantGroup 20 | 21 | 22 | @dataclass 23 | class QfApp: 24 | console: Console = field(default_factory=Console) 25 | vault: Vault = field(default_factory=partial(Vault, settings.VAULT_FILE_PATH)) 26 | sections: list[QuantGroup] = field(default_factory=lambda: [quantflow]) 27 | 28 | def __call__(self) -> None: 29 | os.makedirs(settings.SETTINGS_DIRECTORY, exist_ok=True) 30 | history = FileHistory(str(settings.HIST_FILE_PATH)) 31 | session: PromptSession = PromptSession(history=history) 32 | 33 | self.print("Welcome to QuantFlow!", style="bold green") 34 | self.handle_command("help") 35 | 36 | try: 37 | while True: 38 | try: 39 | text = session.prompt( 40 | self.prompt_message(), 41 | completer=self.prompt_completer(), 42 | complete_while_typing=True, 43 | bottom_toolbar=self.bottom_toolbar, 44 | ) 45 | except KeyboardInterrupt: 46 | break 47 | else: 48 | self.handle_command(text) 49 | except click.Abort: 50 | self.console.print(Text("Bye!", style="bold magenta")) 51 | 52 | def prompt_message(self) -> str: 53 | name = ":".join([str(section.name) for section in self.sections]) 54 | return f"{name} > " 55 | 56 | def prompt_completer(self) -> NestedCompleter: 57 | return NestedCompleter.from_nested_dict( 58 | {command: None for command in self.sections[-1].commands} 59 | ) 60 | 61 | def set_section(self, section: QuantGroup) -> None: 62 | self.sections.append(section) 63 | 64 | def back(self) -> None: 65 | self.sections.pop() 66 | 67 | def print(self, text_alike: Any, style: str = "") -> None: 68 | if isinstance(text_alike, str): 69 | style = style or "cyan" 70 | text_alike = Text(f"\n{text_alike}\n", style=style) 71 | self.console.print(text_alike) 72 | 73 | def error(self, err: str | Exception) -> None: 74 | self.console.print(Text(f"\n{err}\n", style="bold red")) 75 | 76 | def handle_command(self, text: str) -> None: 77 | if not text: 78 | return 79 |
command = self.sections[-1] 80 | try: 81 | command.main(text.split(), standalone_mode=False, obj=self) 82 | except ( 83 | click.exceptions.MissingParameter, 84 | click.exceptions.NoSuchOption, 85 | click.exceptions.UsageError, 86 | HttpResponseError, 87 | ) as e: 88 | self.error(e) 89 | 90 | def bottom_toolbar(self) -> HTML: 91 | sections = "/".join([str(section.name) for section in self.sections]) 92 | back = ( 93 | (' ' "to exit the current section,") 94 | if len(self.sections) > 1 95 | else "" 96 | ) 97 | return HTML( 98 | f"You are in {sections}, type{back} " 99 | ' to exit' 100 | ) 101 | -------------------------------------------------------------------------------- /quantflow/cli/commands/__init__.py: -------------------------------------------------------------------------------- 1 | from .base import QuantContext, quant_group 2 | from .crypto import crypto 3 | from .fred import fred 4 | from .stocks import stocks 5 | from .vault import vault 6 | 7 | 8 | @quant_group() 9 | def quantflow() -> None: 10 | ctx = QuantContext.current() 11 | if ctx.invoked_subcommand is None: 12 | ctx.qf.print(ctx.get_help()) 13 | 14 | 15 | quantflow.add_command(vault) 16 | quantflow.add_command(crypto) 17 | quantflow.add_command(stocks) 18 | quantflow.add_command(fred) 19 | -------------------------------------------------------------------------------- /quantflow/cli/commands/base.py: -------------------------------------------------------------------------------- 1 | from __future__ import annotations 2 | 3 | import enum 4 | from typing import TYPE_CHECKING, Any, Self, cast 5 | 6 | import click 7 | 8 | from quantflow.data.fmp import FMP 9 | from quantflow.data.fred import Fred 10 | 11 | if TYPE_CHECKING: 12 | from quantflow.cli.app import QfApp 13 | 14 | 15 | FREQUENCIES = tuple(FMP().historical_frequencies()) 16 | 17 | 18 | class HistoricalPeriod(enum.StrEnum): 19 | day = "1d" 20 | week = "1w" 21 | month = "1m" 22 | three_months = "3m" 23 | six_months = "6m" 24 | year = "1y"
25 | 26 | 27 | class QuantContext(click.Context): 28 | 29 | @classmethod 30 | def current(cls) -> Self: 31 | return cast(Self, click.get_current_context()) 32 | 33 | @property 34 | def qf(self) -> QfApp: 35 | return self.obj # type: ignore 36 | 37 | def set_as_section(self) -> None: 38 | group = cast(QuantGroup, self.command) 39 | group.add_command(back) 40 | self.qf.set_section(group) 41 | self.qf.print(self.get_help()) 42 | 43 | def fmp(self) -> FMP: 44 | if key := self.qf.vault.get("fmp"): 45 | return FMP(key=key) 46 | else: 47 | raise click.UsageError("No FMP API key found") 48 | 49 | def fred(self) -> Fred: 50 | if key := self.qf.vault.get("fred"): 51 | return Fred(key=key) 52 | else: 53 | raise click.UsageError("No FRED API key found") 54 | 55 | 56 | class QuantCommand(click.Command): 57 | context_class = QuantContext 58 | 59 | 60 | class QuantGroup(click.Group): 61 | context_class = QuantContext 62 | command_class = QuantCommand 63 | 64 | 65 | @click.command(cls=QuantCommand) 66 | def exit() -> None: 67 | """Exit the program""" 68 | raise click.Abort() 69 | 70 | 71 | @click.command(cls=QuantCommand) 72 | def help() -> None: 73 | """display the commands""" 74 | if ctx := QuantContext.current().parent: 75 | cast(QuantContext, ctx).qf.print(ctx.get_help()) 76 | 77 | 78 | @click.command(cls=QuantCommand) 79 | def back() -> None: 80 | """Exit the current section""" 81 | ctx = QuantContext.current() 82 | ctx.qf.back() 83 | ctx.qf.handle_command("help") 84 | 85 | 86 | def quant_group() -> Any: 87 | return click.group( 88 | cls=QuantGroup, 89 | commands=[exit, help], 90 | invoke_without_command=True, 91 | add_help_option=False, 92 | ) 93 | 94 | 95 | class options: 96 | length = click.option( 97 | "-l", 98 | "--length", 99 | type=int, 100 | default=100, 101 | show_default=True, 102 | help="Number of data points", 103 | ) 104 | height = click.option( 105 | "-h", 106 | "--height", 107 | type=int, 108 | default=20, 109 | show_default=True, 110 | help="Chart height", 111 
| ) 112 | chart = click.option("-c", "--chart", is_flag=True, help="Display chart") 113 | period = click.option( 114 | "-p", 115 | "--period", 116 | type=click.Choice(tuple(p.value for p in HistoricalPeriod)), 117 | default="1d", 118 | show_default=True, 119 | help="Historical period", 120 | ) 121 | index = click.option( 122 | "-i", 123 | "--index", 124 | type=int, 125 | default=-1, 126 | help="maturity index", 127 | ) 128 | frequency = click.option( 129 | "-f", 130 | "--frequency", 131 | type=click.Choice(FREQUENCIES), 132 | default="", 133 | help="Frequency of data - if not provided it is daily", 134 | ) 135 | -------------------------------------------------------------------------------- /quantflow/cli/commands/crypto.py: -------------------------------------------------------------------------------- 1 | from __future__ import annotations 2 | 3 | import asyncio 4 | 5 | import click 6 | import pandas as pd 7 | from asciichartpy import plot 8 | from cache import AsyncTTL 9 | from ccy.cli.console import df_to_rich 10 | 11 | from quantflow.data.deribit import Deribit, InstrumentKind 12 | from quantflow.options.surface import VolSurface 13 | from quantflow.utils.numbers import round_to_step 14 | 15 | from .base import QuantContext, options, quant_group 16 | from .stocks import get_prices 17 | 18 | 19 | @quant_group() 20 | def crypto() -> None: 21 | """Crypto currencies commands""" 22 | ctx = QuantContext.current() 23 | if ctx.invoked_subcommand is None: 24 | ctx.set_as_section() 25 | 26 | 27 | @crypto.command() 28 | @click.argument("currency") 29 | @click.option( 30 | "-k", 31 | "--kind", 32 | type=click.Choice(list(InstrumentKind)), 33 | default=InstrumentKind.spot.value, 34 | ) 35 | def instruments(currency: str, kind: str) -> None: 36 | """Provides information about instruments 37 | 38 | Instruments for given cryptocurrency from Deribit API""" 39 | ctx = QuantContext.current() 40 | data = asyncio.run(get_instruments(ctx, currency, kind)) 41 | df = 
pd.DataFrame(data) 42 | ctx.qf.print(df_to_rich(df)) 43 | 44 | 45 | @crypto.command() 46 | @click.argument("currency") 47 | @options.length 48 | @options.height 49 | @options.chart 50 | def volatility(currency: str, length: int, height: int, chart: bool) -> None: 51 | """Provides information about historical volatility 52 | 53 | Historical volatility for given cryptocurrency from Deribit API 54 | """ 55 | ctx = QuantContext.current() 56 | df = asyncio.run(get_volatility(ctx, currency)) 57 | df["volatility"] = df["volatility"].map(lambda p: round_to_step(p, "0.01")) 58 | if chart: 59 | data = df["volatility"].tolist()[:length] 60 | ctx.qf.print(plot(data, {"height": height})) 61 | else: 62 | ctx.qf.print(df_to_rich(df)) 63 | 64 | 65 | @crypto.command() 66 | @click.argument("currency") 67 | def term_structure(currency: str) -> None: 68 | """Provides information about the term structure for given cryptocurrency""" 69 | ctx = QuantContext.current() 70 | vs = asyncio.run(get_vol_surface(currency)) 71 | ts = vs.term_structure().round({"ttm": 4}) 72 | ts["open_interest"] = ts["open_interest"].map("{:,d}".format) 73 | ts["volume"] = ts["volume"].map("{:,d}".format) 74 | ctx.qf.print(df_to_rich(ts)) 75 | 76 | 77 | @crypto.command() 78 | @click.argument("currency") 79 | @options.index 80 | @options.height 81 | @options.chart 82 | def implied_vol(currency: str, index: int, height: int, chart: bool) -> None: 83 | """Display the Volatility Surface for given cryptocurrency 84 | at a given maturity index 85 | """ 86 | ctx = QuantContext.current() 87 | vs = asyncio.run(get_vol_surface(currency)) 88 | index_or_none = None if index < 0 else index 89 | vs.bs(index=index_or_none) 90 | df = vs.options_df(index=index_or_none) 91 | if chart: 92 | data = (df["implied_vol"] * 100).tolist() 93 | ctx.qf.print(plot(data, {"height": height})) 94 | else: 95 | df[["ttm", "moneyness", "moneyness_ttm"]] = df[ 96 | ["ttm", "moneyness", "moneyness_ttm"] 97 | ].map("{:.4f}".format) 98 | 
df["implied_vol"] = df["implied_vol"].map("{:.2%}".format) 99 | df["price"] = df["price"].map(lambda p: round_to_step(p, vs.tick_size_options)) 100 | df["forward_price"] = df["forward_price"].map( 101 | lambda p: round_to_step(p, vs.tick_size_forwards) 102 | ) 103 | ctx.qf.print(df_to_rich(df)) 104 | 105 | 106 | @crypto.command() 107 | @click.argument("symbol") 108 | @options.height 109 | @options.length 110 | @options.chart 111 | @options.frequency 112 | def prices(symbol: str, height: int, length: int, chart: bool, frequency: str) -> None: 113 | """Fetch OHLC prices for given cryptocurrency""" 114 | ctx = QuantContext.current() 115 | df = asyncio.run(get_prices(ctx, symbol, frequency)) 116 | if df.empty: 117 | raise click.UsageError( 118 | f"No data for {symbol} - are you sure the symbol exists?" 119 | ) 120 | if chart: 121 | data = list(reversed(df["close"].tolist()[:length])) 122 | ctx.qf.print(plot(data, {"height": height})) 123 | else: 124 | ctx.qf.print( 125 | df_to_rich( 126 | df[["date", "open", "high", "low", "close", "volume"]].sort_values( 127 | "date" 128 | ) 129 | ) 130 | ) 131 | 132 | 133 | async def get_instruments(ctx: QuantContext, currency: str, kind: str) -> list[dict]: 134 | async with Deribit() as client: 135 | return await client.get_instruments( 136 | currency=currency, kind=InstrumentKind(kind) 137 | ) 138 | 139 | 140 | async def get_volatility(ctx: QuantContext, currency: str) -> pd.DataFrame: 141 | async with Deribit() as client: 142 | return await client.get_volatility(currency) 143 | 144 | 145 | @AsyncTTL(time_to_live=10) 146 | async def get_vol_surface(currency: str) -> VolSurface: 147 | async with Deribit() as client: 148 | loader = await client.volatility_surface_loader(currency) 149 | return loader.surface() 150 | -------------------------------------------------------------------------------- /quantflow/cli/commands/fred.py: -------------------------------------------------------------------------------- 1 | from __future__ import 
annotations 2 | 3 | import asyncio 4 | 5 | import click 6 | import pandas as pd 7 | from asciichartpy import plot 8 | from ccy.cli.console import df_to_rich 9 | from fluid.utils.data import compact_dict 10 | from fluid.utils.http_client import HttpResponseError 11 | 12 | from quantflow.data.fred import Fred 13 | 14 | from .base import QuantContext, options, quant_group 15 | 16 | FREQUENCIES = tuple(Fred.freq) 17 | 18 | 19 | @quant_group() 20 | def fred() -> None: 21 | """Federal Reserve of St. Louis data commands""" 22 | ctx = QuantContext.current() 23 | if ctx.invoked_subcommand is None: 24 | ctx.set_as_section() 25 | 26 | 27 | @fred.command() 28 | @click.argument("category-id", required=False) 29 | def subcategories(category_id: str | None = None) -> None: 30 | """List subcategories for a Fred category""" 31 | ctx = QuantContext.current() 32 | try: 33 | data = asyncio.run(get_subcategories(ctx, category_id)) 34 | except HttpResponseError as e: 35 | ctx.qf.error(e) 36 | else: 37 | df = pd.DataFrame(data["categories"], columns=["id", "name"]) 38 | ctx.qf.print(df_to_rich(df)) 39 | 40 | 41 | @fred.command() 42 | @click.argument("category-id") 43 | @click.option("-j", "--json", is_flag=True, help="Output as JSON") 44 | def series(category_id: str, json: bool = False) -> None: 45 | """List series for a Fred category""" 46 | ctx = QuantContext.current() 47 | try: 48 | data = asyncio.run(get_series(ctx, category_id)) 49 | except HttpResponseError as e: 50 | ctx.qf.error(e) 51 | else: 52 | if json: 53 | ctx.qf.print(data) 54 | else: 55 | df = pd.DataFrame( 56 | data["seriess"], 57 | columns=[ 58 | "id", 59 | "popularity", 60 | "title", 61 | "frequency", 62 | "observation_start", 63 | "observation_end", 64 | ], 65 | ).sort_values("popularity", ascending=False) 66 | ctx.qf.print(df_to_rich(df)) 67 | 68 | 69 | @fred.command() 70 | @click.argument("series-id") 71 | @options.length 72 | @options.height 73 | @options.chart 74 | @click.option( 75 | "-f", 76 | "--frequency", 77 
| type=click.Choice(FREQUENCIES), 78 | default="d", 79 | show_default=True, 80 | help="Frequency of data", 81 | ) 82 | def data(series_id: str, length: int, height: int, chart: bool, frequency: str) -> None: 83 | """Display data for a series""" 84 | ctx = QuantContext.current() 85 | try: 86 | df = asyncio.run(get_serie_data(ctx, series_id, length, frequency)) 87 | except HttpResponseError as e: 88 | ctx.qf.error(e) 89 | else: 90 | if chart: 91 | data = list(reversed(df["value"].tolist()[:length])) 92 | ctx.qf.print(plot(data, {"height": height})) 93 | else: 94 | ctx.qf.print(df_to_rich(df)) 95 | 96 | 97 | async def get_subcategories(ctx: QuantContext, category_id: str | None) -> dict: 98 | async with ctx.fred() as cli: 99 | return await cli.subcategories(params=compact_dict(category_id=category_id)) 100 | 101 | 102 | async def get_series(ctx: QuantContext, category_id: str) -> dict: 103 | async with ctx.fred() as cli: 104 | return await cli.series(params=compact_dict(category_id=category_id)) 105 | 106 | 107 | async def get_serie_data( 108 | ctx: QuantContext, series_id: str, length: int, frequency: str 109 | ) -> pd.DataFrame: 110 | async with ctx.fred() as cli: 111 | return await cli.serie_data( 112 | params=dict( 113 | series_id=series_id, 114 | limit=length, 115 | frequency=frequency, 116 | sort_order="desc", 117 | ) 118 | ) 119 | -------------------------------------------------------------------------------- /quantflow/cli/commands/stocks.py: -------------------------------------------------------------------------------- 1 | from __future__ import annotations 2 | 3 | import asyncio 4 | from datetime import timedelta 5 | from typing import cast 6 | 7 | import click 8 | import pandas as pd 9 | from asciichartpy import plot 10 | from ccy import period as to_period 11 | from ccy.cli.console import df_to_rich 12 | from ccy.tradingcentres import prevbizday 13 | 14 | from quantflow.utils.dates import utcnow 15 | 16 | from .base import HistoricalPeriod, QuantContext, options, 
quant_group 17 | 18 | 19 | @quant_group() 20 | def stocks() -> None: 21 | """Stocks commands""" 22 | ctx = QuantContext.current() 23 | if ctx.invoked_subcommand is None: 24 | ctx.set_as_section() 25 | 26 | 27 | @stocks.command() 28 | def indices() -> None: 29 | """Display market indices""" 30 | ctx = QuantContext.current() 31 | data = asyncio.run(get_indices(ctx)) 32 | df = pd.DataFrame(data) 33 | ctx.qf.print(df_to_rich(df)) 34 | 35 | 36 | @stocks.command() 37 | @click.argument("symbol") 38 | def profile(symbol: str) -> None: 39 | """Company profile""" 40 | ctx = QuantContext.current() 41 | data = asyncio.run(get_profile(ctx, symbol)) 42 | if not data: 43 | raise click.UsageError(f"Company {symbol} not found - try searching") 44 | else: 45 | d = data[0] 46 | ctx.qf.print(d.pop("description") or "") 47 | df = pd.DataFrame(d.items(), columns=["Key", "Value"]) 48 | ctx.qf.print(df_to_rich(df)) 49 | 50 | 51 | @stocks.command() 52 | @click.argument("text") 53 | def search(text: str) -> None: 54 | """Search companies""" 55 | ctx = QuantContext.current() 56 | data = asyncio.run(search_company(ctx, text)) 57 | df = pd.DataFrame(data, columns=["symbol", "name", "currency", "stockExchange"]) 58 | ctx.qf.print(df_to_rich(df)) 59 | 60 | 61 | @stocks.command() 62 | @click.argument("symbol") 63 | @options.height 64 | @options.length 65 | @options.frequency 66 | def chart(symbol: str, height: int, length: int, frequency: str) -> None: 67 | """Symbol chart""" 68 | ctx = QuantContext.current() 69 | df = asyncio.run(get_prices(ctx, symbol, frequency)) 70 | if df.empty: 71 | raise click.UsageError( 72 | f"No data for {symbol} - are you sure the symbol exists?" 
73 | ) 74 | data = list(reversed(df["close"].tolist()[:length])) 75 | ctx.qf.print(plot(data, {"height": height})) 76 | 77 | 78 | @stocks.command() 79 | @options.period 80 | def sectors(period: str) -> None: 81 | """Sector performance and PE ratios""" 82 | ctx = QuantContext.current() 83 | data = asyncio.run(sector_performance(ctx, HistoricalPeriod(period))) 84 | df = pd.DataFrame(data, columns=["sector", "performance", "pe"]).sort_values( 85 | "performance", ascending=False 86 | ) 87 | ctx.qf.print(df_to_rich(df)) 88 | 89 | 90 | async def get_indices(ctx: QuantContext) -> list[dict]: 91 | async with ctx.fmp() as cli: 92 | return await cli.indices() 93 | 94 | 95 | async def get_prices(ctx: QuantContext, symbol: str, frequency: str) -> pd.DataFrame: 96 | async with ctx.fmp() as cli: 97 | return await cli.prices(symbol, frequency) 98 | 99 | 100 | async def get_profile(ctx: QuantContext, symbol: str) -> list[dict]: 101 | async with ctx.fmp() as cli: 102 | return await cli.profile(symbol) 103 | 104 | 105 | async def search_company(ctx: QuantContext, text: str) -> list[dict]: 106 | async with ctx.fmp() as cli: 107 | return await cli.search(text) 108 | 109 | 110 | async def sector_performance( 111 | ctx: QuantContext, period: HistoricalPeriod 112 | ) -> list[dict]: 113 | async with ctx.fmp() as cli: 114 | to_date = utcnow().date() 115 | if period != HistoricalPeriod.day: 116 | from_date = to_date - timedelta(days=to_period(period.value).totaldays) 117 | sp = await cli.sector_performance( 118 | from_date=prevbizday(from_date, 0).isoformat(), # type: ignore 119 | to_date=prevbizday(to_date, 0).isoformat(), # type: ignore 120 | summary=True, 121 | ) 122 | else: 123 | sp = await cli.sector_performance() 124 | spd = cast(dict, sp) 125 | pe = await cli.sector_pe(params=dict(date=prevbizday(to_date, 0).isoformat())) # type: ignore 126 | pes = {} 127 | for k in pe: 128 | sector = k["sector"] 129 | if sector in spd: 130 | pes[sector] = round(float(k["pe"]), 3) 131 | return [ 132 
| dict(sector=k, performance=float(v), pe=pes.get(k, float("nan"))) 133 | for k, v in spd.items() 134 | ] 135 | -------------------------------------------------------------------------------- /quantflow/cli/commands/vault.py: -------------------------------------------------------------------------------- 1 | import click 2 | 3 | from .base import QuantContext, quant_group 4 | 5 | API_KEYS = ("fmp", "fred") 6 | 7 | 8 | @quant_group() 9 | def vault() -> None: 10 | """Manage vault secrets""" 11 | ctx = QuantContext.current() 12 | if ctx.invoked_subcommand is None: 13 | ctx.set_as_section() 14 | 15 | 16 | @vault.command() 17 | @click.argument("key", type=click.Choice(API_KEYS)) 18 | @click.argument("value") 19 | def add(key: str, value: str) -> None: 20 | """Add an API key to the vault""" 21 | app = QuantContext.current().qf 22 | app.vault.add(key, value) 23 | 24 | 25 | @vault.command() 26 | @click.argument("key") 27 | def delete(key: str) -> None: 28 | """Delete an API key from the vault""" 29 | app = QuantContext.current().qf 30 | if app.vault.delete(key): 31 | app.print(f"Deleted key {key}") 32 | else: 33 | app.error(f"Key {key} not found") 34 | 35 | 36 | @vault.command() 37 | @click.argument("key") 38 | def show(key: str) -> None: 39 | """Show the value of an API key""" 40 | app = QuantContext.current().qf 41 | if value := app.vault.get(key): 42 | app.print(value) 43 | else: 44 | app.error(f"Key {key} not found") 45 | 46 | 47 | @vault.command() 48 | def keys() -> None: 49 | """Show the keys in the vault""" 50 | app = QuantContext.current().qf 51 | for key in app.vault.keys(): 52 | app.print(key) 53 | -------------------------------------------------------------------------------- /quantflow/cli/script.py: -------------------------------------------------------------------------------- 1 | import dotenv 2 | 3 | dotenv.load_dotenv() 4 | 5 | try: 6 | from .app import QfApp 7 | except ImportError as ex: 8 | raise ImportError( 9 | "Cannot run qf command line, " 10 | 
"quantflow needs to be installed with cli & data extras, " 11 | "pip install quantflow[cli, data]" 12 | ) from ex 13 | 14 | main = QfApp() 15 | -------------------------------------------------------------------------------- /quantflow/cli/settings.py: -------------------------------------------------------------------------------- 1 | # IMPORTATION STANDARD 2 | from pathlib import Path 3 | 4 | # Installation related paths 5 | HOME_DIRECTORY = Path.home() 6 | PACKAGE_DIRECTORY = Path(__file__).parent.parent.parent 7 | REPOSITORY_DIRECTORY = PACKAGE_DIRECTORY.parent 8 | 9 | SETTINGS_DIRECTORY = HOME_DIRECTORY / ".quantflow" 10 | SETTINGS_ENV_FILE = SETTINGS_DIRECTORY / ".env" 11 | HIST_FILE_PATH = SETTINGS_DIRECTORY / ".quantflow.his" 12 | VAULT_FILE_PATH = SETTINGS_DIRECTORY / ".vault" 13 | -------------------------------------------------------------------------------- /quantflow/data/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/quantmind/quantflow/c439352354501e50333319912802fa41d44aac29/quantflow/data/__init__.py -------------------------------------------------------------------------------- /quantflow/data/fed.py: -------------------------------------------------------------------------------- 1 | import io 2 | from dataclasses import dataclass, field 3 | from typing import Any 4 | 5 | import numpy as np 6 | import pandas as pd 7 | from fluid.utils.http_client import AioHttpClient 8 | 9 | MATURITIES = ( 10 | "month_1", 11 | "month_3", 12 | "month_6", 13 | "year_1", 14 | "year_2", 15 | "year_3", 16 | "year_5", 17 | "year_7", 18 | "year_10", 19 | "year_20", 20 | "year_30", 21 | ) 22 | 23 | 24 | @dataclass 25 | class FederalReserve(AioHttpClient): 26 | """Federal Reserve API client. 
27 | 28 | This class is used to fetch yield curves from the Federal Reserve at 29 | https://www.federalreserve.gov/datadownload/ 30 | """ 31 | 32 | url: str = "https://www.federalreserve.gov/datadownload/Output.aspx" 33 | default_params: dict[str, Any] = field( 34 | default_factory=lambda: { 35 | "from": "", 36 | "to": "", 37 | "lastobs": "", 38 | "filetype": "csv", 39 | "label": "include", 40 | "layout": "seriescolumn", 41 | "type": "package", 42 | } 43 | ) 44 | 45 | async def yield_curves(self, **params: Any) -> pd.DataFrame: 46 | """Get treasury constant maturities rates""" 47 | params.update(series="bf17364827e38702b42a58cf8eaa3f78", rel="H15") 48 | data = await self._get_text(params) 49 | df = pd.read_csv(data, header=5, index_col=None, parse_dates=True) 50 | df.columns = list(("date",) + MATURITIES) # type: ignore 51 | df = df.set_index("date").replace("ND", np.nan) 52 | return df.dropna(axis=0, how="all").reset_index() 53 | 54 | async def ref_rates(self, **params: Any) -> pd.DataFrame: 55 | """Get policy rates 56 | 57 | Prior to 2021-07-08 it is the rate on excess reserves (IOER rate) 58 | After 2021-07-08 it is the rate on reserve balances (IORB rate) 59 | 60 | The IOER rate was the primary tool used by the Federal Reserve to set 61 | a floor on the federal funds rate. 62 | While the Interest rate on required reserves (IORR rate) existed, 63 | the IOER rate had a more direct impact on market rates, 64 | as banks typically held far more excess reserves than required reserves. 65 | Therefore, the IOER rate was more influential 66 | in the Fed's monetary policy implementation. 
67 | """ 68 | params.update(series="c27939ee810cb2e929a920a6bd77d9f6", rel="PRATES") 69 | data = await self._get_text(params) 70 | df = pd.read_csv(data, header=5, index_col=None, parse_dates=True) 71 | ioer = df["RESBME_N.D"] 72 | iorb = df["RESBM_N.D"] 73 | rate = iorb.combine_first(ioer) 74 | return pd.DataFrame( 75 | { 76 | "date": df["Time Period"], 77 | "rate": rate, 78 | } 79 | ) 80 | 81 | async def _get_text(self, params: dict[str, Any]) -> io.StringIO: 82 | """Get parameters for the request.""" 83 | params = {**self.default_params, **params} 84 | response = await self.get(self.url, params=params, callback=True) 85 | data = await response.text() 86 | return io.StringIO(data) 87 | -------------------------------------------------------------------------------- /quantflow/data/fiscal_data.py: -------------------------------------------------------------------------------- 1 | from dataclasses import dataclass 2 | from datetime import date, timedelta 3 | 4 | import pandas as pd 5 | from fluid.utils.http_client import AioHttpClient 6 | 7 | from quantflow.utils.dates import as_date 8 | 9 | 10 | @dataclass 11 | class FiscalData(AioHttpClient): 12 | """Fiscal Data API client. 
13 | 14 | This class is used to fetch data from the 15 | [fiscal data api](https://fiscaldata.treasury.gov/api-documentation/) 16 | """ 17 | 18 | url: str = "https://api.fiscaldata.treasury.gov/services/api/fiscal_service" 19 | 20 | async def securities(self, record_date: date | None = None) -> pd.DataFrame: 21 | """Get outstanding marketable Treasury securities""" 22 | rd = as_date(record_date) 23 | pm = rd.replace(day=1) - timedelta(days=1) 24 | params = {"filter": f"record_date:eq:{pm.isoformat()}"} 25 | data = await self.get_all("/v1/debt/mspd/mspd_table_3_market", params) 26 | return pd.DataFrame(data) 27 | 28 | async def get_all(self, path: str, params: dict[str, str]) -> list: 29 | """Get all data from the API""" 30 | next_url: str | None = f"{self.url}{path}" 31 | full_data = [] 32 | while next_url: 33 | payload = await self.get(next_url, params=params) 34 | full_data.extend(payload["data"]) 35 | if links := payload.get("links"): 36 | if next_path := links.get("next"): 37 | next_url = f"{self.url}{next_path}" 38 | else: 39 | next_url = None 40 | else: 41 | next_url = None 42 | return full_data 43 | -------------------------------------------------------------------------------- /quantflow/data/fred.py: -------------------------------------------------------------------------------- 1 | import os 2 | from dataclasses import dataclass, field 3 | from enum import StrEnum 4 | from typing import Any, cast 5 | 6 | import pandas as pd 7 | from fluid.utils.http_client import AioHttpClient 8 | 9 | 10 | @dataclass 11 | class Fred(AioHttpClient): 12 | """Federal Reserve Economic Data API client 13 | 14 | Fetch economic data from `FRED`_. 15 | 16 | .. 
_FRED: https://fred.stlouisfed.org/ 17 | """ 18 | 19 | url: str = "https://api.stlouisfed.org/fred" 20 | key: str = field(default_factory=lambda: os.environ.get("FRED_API_KEY", "")) 21 | 22 | class freq(StrEnum): 23 | """Fred historical frequencies""" 24 | 25 | d = "d" 26 | w = "w" 27 | bw = "bw" 28 | m = "m" 29 | q = "q" 30 | sa = "sa" 31 | a = "a" 32 | 33 | async def categiories(self, **kw: Any) -> dict: 34 | """Get categories""" 35 | return await self.get_path("category", **kw) 36 | 37 | async def subcategories(self, **kw: Any) -> dict: 38 | """Get subcategories of a given category""" 39 | return await self.get_path("category/children", **kw) 40 | 41 | async def series(self, **kw: Any) -> dict: 42 | """Get series of a given category""" 43 | return await self.get_path("category/series", **kw) 44 | 45 | async def serie_data(self, *, to_date: bool = False, **kw: Any) -> pd.DataFrame: 46 | """Get series data frame""" 47 | data = await self.get_path("series/observations", **kw) 48 | df = pd.DataFrame(data["observations"]) 49 | df["value"] = pd.to_numeric(df["value"]) 50 | if to_date and "date" in df.columns: 51 | df["date"] = pd.to_datetime(df["date"]) 52 | return df 53 | 54 | # Internals 55 | async def get_path(self, path: str, **kw: Any) -> dict: 56 | result = await self.get(f"{self.url}/{path}", **self.params(**kw)) 57 | return cast(dict, result) 58 | 59 | def params(self, params: dict | None = None, **kw: Any) -> dict: 60 | params = params.copy() if params is not None else {} 61 | params.update(api_key=self.key, file_type="json") 62 | return {"params": params, **kw} 63 | -------------------------------------------------------------------------------- /quantflow/data/vault.py: -------------------------------------------------------------------------------- 1 | from pathlib import Path 2 | 3 | 4 | class Vault: 5 | """Keeps key-value pairs in a file.""" 6 | 7 | def __init__(self, path: str | Path) -> None: 8 | self.path = Path(path) 9 | 
self.path.touch(exist_ok=True) 10 | self.data = self.load() 11 | 12 | def load(self) -> dict[str, str]: 13 | data = {} 14 | with open(self.path) as file: 15 | for line in file: 16 | key, value = line.strip().split("=", 1) 17 | data[key] = value 18 | return data 19 | 20 | def add(self, key: str, value: str) -> None: 21 | """Add a key-value pair to the vault.""" 22 | self.data[key] = value 23 | self.save() 24 | 25 | def delete(self, key: str) -> bool: 26 | """Delete a key-value pair from the vault.""" 27 | if self.data.pop(key, None) is not None: 28 | self.save() 29 | return True 30 | return False 31 | 32 | def get(self, key: str) -> str | None: 33 | """Get the value of a key if available otherwise None.""" 34 | return self.data.get(key) 35 | 36 | def keys(self) -> list[str]: 37 | """Get the keys in the vault.""" 38 | return sorted(self.data) 39 | 40 | def save(self) -> None: 41 | """Save the data to the file.""" 42 | with open(self.path, "w") as file: 43 | for key in sorted(self.data): 44 | value = self.data[key] 45 | file.write(f"{key}={value}\n") 46 | -------------------------------------------------------------------------------- /quantflow/options/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/quantmind/quantflow/c439352354501e50333319912802fa41d44aac29/quantflow/options/__init__.py -------------------------------------------------------------------------------- /quantflow/options/bs.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | from scipy.optimize import RootResults, newton 3 | from scipy.stats import norm 4 | 5 | from ..utils.types import FloatArray, FloatArrayLike 6 | 7 | 8 | def black_call( 9 | k: FloatArrayLike, sigma: FloatArrayLike, ttm: FloatArrayLike 10 | ) -> np.ndarray: 11 | kk = np.asarray(k) 12 | return black_price(kk, np.asarray(sigma), np.asarray(ttm), np.ones(kk.shape)) 13 | 14 | 15 | def black_put( 
k: FloatArrayLike, sigma: FloatArrayLike, ttm: FloatArrayLike 17 | ) -> np.ndarray: 18 | kk = np.asarray(k) 19 | return black_price(kk, np.asarray(sigma), np.asarray(ttm), -np.ones(kk.shape)) 20 | 21 | 22 | def black_price( 23 | k: np.ndarray, 24 | sigma: FloatArrayLike, 25 | ttm: FloatArrayLike, 26 | s: FloatArrayLike, 27 | ) -> np.ndarray: 28 | r"""Calculate the Black call/put option prices in forward terms 29 | from the following params 30 | 31 | .. math:: 32 | c &= \frac{C}{F} = N(d1) - e^k N(d2) 33 | 34 | p &= \frac{P}{F} = -N(-d1) + e^k N(-d2) 35 | 36 | d1 &= \frac{-k + \frac{\sigma^2 t}{2}}{\sigma \sqrt{t}} 37 | 38 | d2 &= d1 - \sigma \sqrt{t} 39 | 40 | :param k: a vector of :math:`\log{\frac{K}{F}}` also known as moneyness 41 | :param sigma: a corresponding vector of implied volatilities (0.2 for 20%) 42 | :param ttm: time to maturity 43 | :param s: the call/put flag, 1 for calls, -1 for puts 44 | 45 | The results are option prices divided by the forward price also known as 46 | option prices in forward terms. 47 | """ 48 | sig2 = sigma * sigma * ttm 49 | sig = np.sqrt(sig2) 50 | d1 = (-k + 0.5 * sig2) / sig 51 | d2 = d1 - sig 52 | return s * norm.cdf(s * d1) - s * np.exp(k) * norm.cdf(s * d2) 53 | 54 | 55 | def black_delta( 56 | k: np.ndarray, 57 | sigma: FloatArrayLike, 58 | ttm: FloatArrayLike, 59 | s: FloatArrayLike, 60 | ) -> np.ndarray: 61 | r"""Calculate the Black call/put option delta from the moneyness, 62 | volatility and time to maturity. 63 | 64 | .. 
math:: 65 | \begin{align} 66 | \delta_c &= \frac{\partial C}{\partial F} = N(d1) \\ 67 | \delta_p &= \frac{\partial P}{\partial F} = N(d1) - 1 68 | \end{align} 69 | 70 | :param k: a vector of moneyness, see above 71 | :param sigma: a corresponding vector of implied volatilities (0.2 for 20%) 72 | :param ttm: time to maturity 73 | :param s: the call/put flag, 1 for calls, -1 for puts 74 | """ 75 | sig2 = sigma * sigma * ttm 76 | sig = np.sqrt(sig2) 77 | d1 = (-k + 0.5 * sig2) / sig 78 | return norm.cdf(d1) - 0.5 * (1 - s) 79 | 80 | 81 | def black_vega(k: np.ndarray, sigma: np.ndarray, ttm: FloatArrayLike) -> np.ndarray: 82 | r"""Calculate the Black option vega from the moneyness, 83 | volatility and time to maturity. 84 | 85 | .. math:: 86 | 87 | \nu = \frac{\partial c}{\partial \sigma} = 88 | \frac{\partial p}{\partial \sigma} = N'(d1) \sqrt{t} 89 | 90 | :param k: a vector of moneyness, see above 91 | :param sigma: a corresponding vector of implied volatilities (0.2 for 20%) 92 | :param ttm: time to maturity 93 | 94 | Same formula for both calls and puts. 
95 | """ 96 | sig2 = sigma * sigma * ttm 97 | sig = np.sqrt(sig2) 98 | d1 = (-k + 0.5 * sig2) / sig 99 | return norm.pdf(d1) * np.sqrt(ttm) 100 | 101 | 102 | def implied_black_volatility( 103 | k: np.ndarray, 104 | price: np.ndarray, 105 | ttm: FloatArrayLike, 106 | initial_sigma: FloatArray, 107 | call_put: FloatArrayLike, 108 | ) -> RootResults: 109 | """Calculate the implied Black volatility via Newton's method 110 | 111 | :param k: a vector of log(strikes/forward) also known as moneyness 112 | :param price: a corresponding vector of option_price/forward 113 | :param ttm: time to maturity 114 | :param initial_sigma: a vector of initial volatility guesses 115 | :param call_put: a vector of call/put flags, 1 for calls, -1 for puts 116 | """ 117 | return newton( 118 | lambda x: black_price(k, x, ttm, call_put) - price, 119 | initial_sigma, 120 | fprime=lambda x: black_vega(k, x, ttm), 121 | full_output=True, 122 | ) 123 | -------------------------------------------------------------------------------- /quantflow/options/inputs.py: -------------------------------------------------------------------------------- 1 | from __future__ import annotations 2 | 3 | import enum 4 | from datetime import datetime 5 | from decimal import Decimal 6 | from typing import Generic, TypeVar 7 | 8 | from pydantic import BaseModel 9 | 10 | P = TypeVar("P") 11 | 12 | 13 | class VolSecurityType(enum.StrEnum): 14 | """Type of security for the volatility surface""" 15 | 16 | spot = enum.auto() 17 | forward = enum.auto() 18 | option = enum.auto() 19 | 20 | def vol_surface_type(self) -> VolSecurityType: 21 | return self 22 | 23 | 24 | class VolSurfaceInput(BaseModel, Generic[P]): 25 | bid: P 26 | ask: P 27 | 28 | 29 | class OptionInput(BaseModel): 30 | price: Decimal 31 | strike: Decimal 32 | maturity: datetime 33 | call: bool 34 | 35 | 36 | class SpotInput(VolSurfaceInput[Decimal]): 37 | security_type: VolSecurityType = VolSecurityType.spot 38 | 39 | 40 | class 
ForwardInput(VolSurfaceInput[Decimal]): 41 | maturity: datetime 42 | security_type: VolSecurityType = VolSecurityType.forward 43 | 44 | 45 | class OptionSidesInput(VolSurfaceInput[OptionInput]): 46 | security_type: VolSecurityType = VolSecurityType.option 47 | 48 | 49 | class VolSurfaceInputs(BaseModel): 50 | ref_date: datetime 51 | inputs: list[ForwardInput | SpotInput | OptionSidesInput] 52 | -------------------------------------------------------------------------------- /quantflow/py.typed: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/quantmind/quantflow/c439352354501e50333319912802fa41d44aac29/quantflow/py.typed -------------------------------------------------------------------------------- /quantflow/sp/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/quantmind/quantflow/c439352354501e50333319912802fa41d44aac29/quantflow/sp/__init__.py -------------------------------------------------------------------------------- /quantflow/sp/bns.py: -------------------------------------------------------------------------------- 1 | from __future__ import annotations 2 | 3 | import numpy as np 4 | from pydantic import Field 5 | from scipy.special import xlogy 6 | 7 | from ..ta.paths import Paths 8 | from ..utils.types import FloatArrayLike, Vector 9 | from .base import Im, StochasticProcess1D 10 | from .ou import GammaOU 11 | 12 | 13 | class BNS(StochasticProcess1D): 14 | """Barndorff-Nielsen--Shephard (BNS) stochastic volatility model""" 15 | 16 | variance_process: GammaOU = Field( 17 | default_factory=GammaOU.create, description="Variance process" 18 | ) 19 | rho: float = Field(default=0, ge=-1, le=1, description="Correlation") 20 | 21 | @classmethod 22 | def create(cls, vol: float, kappa: float, decay: float, rho: float) -> BNS: 23 | return cls( 24 | variance_process=GammaOU.create(rate=vol * vol, kappa=kappa,
decay=decay), 25 | rho=rho, 26 | ) 27 | 28 | def characteristic_exponent(self, t: FloatArrayLike, u: Vector) -> Vector: 29 | return -self._zeta(t, 0.5 * Im * u * u, self.rho * u) 30 | 31 | def sample(self, n: int, time_horizon: float = 1, time_steps: int = 100) -> Paths: 32 | return self.sample_from_draws(Paths.normal_draws(n, time_horizon, time_steps)) 33 | 34 | def sample_from_draws(self, path_dw: Paths, *args: Paths) -> Paths: 35 | if args: 36 | path_dz = args[0] 37 | else: 38 | # generate the background driving process samples if not provided 39 | path_dz = self.variance_process.bdlp.sample( 40 | path_dw.samples, path_dw.t, path_dw.time_steps 41 | ) 42 | dt = path_dw.dt 43 | # sample the activity rate process 44 | v = self.variance_process.sample_from_draws(path_dz) 45 | # create the time-changed Brownian motion 46 | dw = path_dw.data * np.sqrt(v.data * dt) 47 | paths = np.zeros(dw.shape) 48 | paths[1:] = np.cumsum(dw[:-1], axis=0) + path_dz.data 49 | return Paths(t=path_dw.t, data=paths) 50 | 51 | # Internal characteristic function methods (see docs) 52 | 53 | def _zeta(self, t: Vector, a: Vector, b: Vector) -> Vector: 54 | k = self.variance_process.kappa 55 | c = a * (1 - np.exp(-k * t)) / k 56 | g = (a + b) / self.variance_process.beta 57 | return Im * c * self.variance_process.rate - self.variance_process.intensity * ( 58 | self._i(b + c, g) - self._i(b, g) 59 | ) 60 | 61 | def _i(self, x: Vector, g: Vector) -> Vector: 62 | k = self.variance_process.kappa 63 | beta = self.variance_process.beta 64 | l1 = xlogy(k - Im * g, x + Im * beta) 65 | l2 = xlogy(g / (g + Im * k) / k, beta * g / k - x) 66 | return l1 + l2 67 | -------------------------------------------------------------------------------- /quantflow/sp/copula.py: -------------------------------------------------------------------------------- 1 | from abc import ABC, abstractmethod 2 | from decimal import Decimal 3 | from math import isclose 4 | 5 | import numpy as np 6 | from pydantic import BaseModel, Field
7 | 8 | from quantflow.utils.functions import debye 9 | from quantflow.utils.numbers import ZERO 10 | from quantflow.utils.types import FloatArray, FloatArrayLike 11 | 12 | 13 | class Copula(BaseModel, ABC): 14 | """Bivariate copula probability distribution - Abstract class 15 | 16 | Sklar's theorem states that any multivariate joint distribution can be 17 | written in terms of univariate marginal-distribution functions and a 18 | copula which describes the dependence structure between the variables. 19 | """ 20 | 21 | @abstractmethod 22 | def __call__(self, u: FloatArrayLike, v: FloatArrayLike) -> FloatArrayLike: 23 | """ 24 | Computes the copula, given the cdf of two 1D processes 25 | """ 26 | 27 | @abstractmethod 28 | def tau(self) -> float: 29 | """Kendall's tau - rank correlation parameter""" 30 | 31 | @abstractmethod 32 | def rho(self) -> float: 33 | """Spearman's rho - rank correlation parameter""" 34 | 35 | def jacobian(self, u: FloatArrayLike, v: FloatArrayLike) -> FloatArray: 36 | """ 37 | Jacobian with respect to u, v, and the internal 38 | parameters of the copula. 39 | Optional to implement. 40 | """ 41 | raise NotImplementedError 42 | 43 | 44 | class IndependentCopula(Copula): 45 | """ 46 | No-op copula that keeps the distributions independent. 47 | 48 | .. math:: 49 | 50 | C(u,v) = uv 51 | """ 52 | 53 | def __call__(self, u: FloatArrayLike, v: FloatArrayLike) -> FloatArrayLike: 54 | return u * v 55 | 56 | def tau(self) -> float: 57 | return 0.0 58 | 59 | def rho(self) -> float: 60 | return 0.0 61 | 62 | def jacobian(self, u: FloatArrayLike, v: FloatArrayLike) -> FloatArray: 63 | return np.array([v, u]) 64 | 65 | 66 | class FrankCopula(Copula): 67 | r""" 68 | Frank Copula with parameter :math:`\kappa` 69 | 70 | .. 
math:: 71 | 72 | C(u, v) = -\frac{1}{\kappa}\log\left[1+\frac{\left(\exp\left(-\kappa 73 | u\right)-1\right)\left(\exp\left(-\kappa 74 | v\right)-1\right)}{\exp\left(-\kappa\right)-1}\right] 75 | """ 76 | 77 | kappa: Decimal = Field(default=ZERO, description="Frank copula parameter") 78 | 79 | def __call__(self, u: FloatArrayLike, v: FloatArrayLike) -> FloatArrayLike: 80 | k = float(self.kappa) 81 | if isclose(k, 0.0): 82 | return u * v 83 | eu = np.exp(-k * u) 84 | ev = np.exp(-k * v) 85 | e = np.exp(-k) 86 | return -np.log(1 + (eu - 1) * (ev - 1) / (e - 1)) / k 87 | 88 | def tau(self) -> float: 89 | """Kendall's tau""" 90 | k = float(self.kappa) 91 | if isclose(k, 0.0): 92 | return 0 93 | return 1 + 4 * (debye(1, k) - 1) / k 94 | 95 | def rho(self) -> float: 96 | """Spearman's rho""" 97 | k = float(self.kappa) 98 | if isclose(k, 0.0): 99 | return 0 100 | return 1 - 12 * (debye(2, -k) - debye(1, -k)) / k 101 | 102 | def jacobian(self, u: FloatArrayLike, v: FloatArrayLike) -> FloatArray: 103 | k = float(self.kappa) 104 | if isclose(k, 0.0): 105 | return np.array([v, u, v * 0]) 106 | eu = np.exp(-k * u) 107 | ev = np.exp(-k * v) 108 | e = np.exp(-k) 109 | x = (eu - 1) * (ev - 1) / (e - 1) 110 | c = -np.log(1 + x) / k 111 | xx = x / (1 + x) 112 | du = eu * (ev - 1) / (e - 1) / (1 + x) 113 | # du = eu * xx / (eu - 1) 114 | dv = ev * (eu - 1) / (e - 1) / (1 + x) 115 | # dv = ev * xx / (ev - 1) 116 | dk = (u * du + v * dv - e * xx / (e - 1) - c) / k 117 | return np.array([du, dv, dk]) 118 | -------------------------------------------------------------------------------- /quantflow/sp/dsp.py: -------------------------------------------------------------------------------- 1 | import math 2 | 3 | import numpy as np 4 | from pydantic import Field 5 | from scipy.optimize import Bounds 6 | 7 | from ..utils.types import FloatArray, FloatArrayLike, Vector 8 | from .base import StochasticProcess1DMarginal 9 | from .cir import CIR, IntensityProcess 10 | from .poisson import 
MarginalDiscrete1D, PoissonBase, PoissonProcess, poisson_arrivals 11 | 12 | 13 | class DSP(PoissonBase): 14 | r""" 15 | Doubly Stochastic Poisson process. 16 | 17 | It's a process where the inter-arrival time is exponentially distributed 18 | with rate :math:`\lambda_t` 19 | 20 | :param intensity: the stochastic intensity of the Poisson process 21 | """ 22 | 23 | intensity: IntensityProcess = Field(  # type: ignore 24 | default_factory=CIR, description="intensity process" 25 | ) 26 | poisson: PoissonProcess = Field(default_factory=PoissonProcess, exclude=True) 27 | 28 | def marginal(self, t: FloatArrayLike) -> StochasticProcess1DMarginal: 29 | return MarginalDiscrete1D(process=self, t=t) 30 | 31 | def frequency_range(self, std: float, max_frequency: float | None = None) -> Bounds: 32 | """Frequency range of the process""" 33 | return Bounds(0, np.pi) 34 | 35 | def support(self, mean: float, std: float, points: int) -> FloatArray: 36 | return np.linspace(0, points, points + 1) 37 | 38 | def characteristic_exponent(self, t: FloatArrayLike, u: Vector) -> Vector: 39 | phi = self.poisson.characteristic_exponent(t, u) 40 | return -self.intensity.integrated_log_laplace(t, phi) 41 | 42 | def arrivals(self, t: float = 1) -> list[float]: 43 | paths = self.intensity.sample(1, t, math.ceil(100 * t)).integrate() 44 | intensity = paths.data[-1, 0] 45 | return poisson_arrivals(intensity, t) 46 | 47 | def sample_jumps(self, n: int) -> FloatArray: 48 | return self.poisson.sample_jumps(n) 49 | -------------------------------------------------------------------------------- /quantflow/sp/jump_diffusion.py: -------------------------------------------------------------------------------- 1 | from __future__ import annotations 2 | 3 | from typing import Generic 4 | 5 | import numpy as np 6 | from pydantic import Field 7 | 8 | from ..ta.paths import Paths 9 | from ..utils.types import FloatArrayLike, Vector 10 | from .base import StochasticProcess1D 11 | from .poisson import 
CompoundPoissonProcess, D 12 | from .weiner import WeinerProcess 13 | 14 | 15 | class JumpDiffusion(StochasticProcess1D, Generic[D]): 16 | r"""A generic jump-diffusion model 17 | 18 | .. math:: 19 | dx_t = \sigma d w_t + d N_t 20 | 21 | where :math:`w_t` is a Weiner process with standard deviation :math:`\sigma` 22 | and :math:`N_t` is a :class:`.CompoundPoissonProcess` 23 | with intensity :math:`\lambda` and generic jump distribution `D` 24 | """ 25 | 26 | diffusion: WeinerProcess = Field( 27 | default_factory=WeinerProcess, description="diffusion" 28 | ) 29 | """The diffusion process is a standard :class:`.WeinerProcess`""" 30 | jumps: CompoundPoissonProcess[D] = Field(description="jump process") 31 | """The jump process is a generic :class:`.CompoundPoissonProcess`""" 32 | 33 | def characteristic_exponent(self, t: FloatArrayLike, u: Vector) -> Vector: 34 | return self.diffusion.characteristic_exponent( 35 | t, u 36 | ) + self.jumps.characteristic_exponent(t, u) 37 | 38 | def sample(self, n: int, time_horizon: float = 1, time_steps: int = 100) -> Paths: 39 | dw1 = Paths.normal_draws(n, time_horizon, time_steps) 40 | return self.sample_from_draws(dw1) 41 | 42 | def sample_from_draws(self, path_w: Paths, *args: Paths) -> Paths: 43 | if args: 44 | path_j = args[0] 45 | else: 46 | path_j = self.jumps.sample(path_w.samples, path_w.t, path_w.time_steps) 47 | path_w = self.diffusion.sample_from_draws(path_w) 48 | return Paths(t=path_w.t, data=path_w.data + path_j.data) 49 | 50 | def analytical_mean(self, t: FloatArrayLike) -> FloatArrayLike: 51 | return self.diffusion.analytical_mean(t) + self.jumps.analytical_mean(t) 52 | 53 | def analytical_variance(self, t: FloatArrayLike) -> FloatArrayLike: 54 | return self.diffusion.analytical_variance(t) + self.jumps.analytical_variance(t) 55 | 56 | @classmethod 57 | def create( 58 | cls, 59 | jump_distribution: type[D], 60 | vol: float = 0.5, 61 | jump_intensity: float = 100, 62 | jump_fraction: float = 0.5, 63 | jump_asymmetry: 
float = 0.0, 64 | ) -> JumpDiffusion[D]: 65 | """Create a jump-diffusion model with a given jump distribution, volatility 66 | and jump fraction. 67 | 68 | :param jump_distribution: The distribution of jump sizes (currently only 69 | :class:`.Normal` and :class:`.DoubleExponential` are supported) 70 | :param vol: total annualized standard deviation 71 | :param jump_intensity: The average number of jumps per year 72 | :param jump_fraction: The fraction of variance due to jumps (between 0 and 1) 73 | :param jump_asymmetry: The asymmetry of the jump distribution (0 for symmetric, 74 | only used by distributions with asymmetry) 75 | 76 | If the jump distribution is set to the :class:`.Normal` distribution, the 77 | model reduces to a Merton jump-diffusion model. 78 | """ 79 | variance = vol * vol 80 | if jump_fraction >= 1: 81 | raise ValueError("jump_fraction must be less than 1") 82 | elif jump_fraction <= 0: 83 | raise ValueError("jump_fraction must be greater than 0") 84 | else: 85 | jump_variance = variance * jump_fraction 86 | jump_distribution_variance = jump_variance / jump_intensity 87 | jumps = jump_distribution.from_variance_and_asymmetry( 88 | jump_distribution_variance, jump_asymmetry 89 | ) 90 | return cls( 91 | diffusion=WeinerProcess(sigma=np.sqrt(variance * (1 - jump_fraction))), 92 | jumps=CompoundPoissonProcess(intensity=jump_intensity, jumps=jumps), 93 | ) 94 | -------------------------------------------------------------------------------- /quantflow/sp/weiner.py: -------------------------------------------------------------------------------- 1 | from __future__ import annotations 2 | 3 | import numpy as np 4 | from pydantic import Field 5 | from scipy.stats import norm 6 | 7 | from ..ta.paths import Paths 8 | from ..utils.types import FloatArrayLike, Vector 9 | from .base import StochasticProcess1D 10 | 11 | 12 | class WeinerProcess(StochasticProcess1D): 13 | sigma: float = Field(default=1, ge=0, description="volatility") 14 | 15 | @property 16 | 
def sigma2(self) -> float: 17 | return self.sigma * self.sigma 18 | 19 | def characteristic_exponent(self, t: Vector, u: Vector) -> Vector: 20 | su = self.sigma * u 21 | return 0.5 * su * su * t 22 | 23 | def sample(self, n: int, time_horizon: float = 1, time_steps: int = 100) -> Paths: 24 | paths = Paths.normal_draws(n, time_horizon, time_steps) 25 | return self.sample_from_draws(paths) 26 | 27 | def sample_from_draws(self, draws: Paths, *args: Paths) -> Paths: 28 | sdt = self.sigma * np.sqrt(draws.dt) 29 | paths = np.zeros(draws.data.shape) 30 | paths[1:] = np.cumsum(draws.data[:-1], axis=0) 31 | return Paths(t=draws.t, data=sdt * paths) 32 | 33 | def analytical_mean(self, t: FloatArrayLike) -> FloatArrayLike: 34 | return 0 * t 35 | 36 | def analytical_variance(self, t: FloatArrayLike) -> FloatArrayLike: 37 | return t * self.sigma2 38 | 39 | def analytical_pdf(self, t: FloatArrayLike, x: FloatArrayLike) -> FloatArrayLike: 40 | return norm.pdf(x, scale=self.analytical_std(t)) 41 | 42 | def analytical_cdf(self, t: FloatArrayLike, x: FloatArrayLike) -> FloatArrayLike: 43 | return norm.cdf(x, scale=self.analytical_std(t)) 44 | -------------------------------------------------------------------------------- /quantflow/ta/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/quantmind/quantflow/c439352354501e50333319912802fa41d44aac29/quantflow/ta/__init__.py -------------------------------------------------------------------------------- /quantflow/ta/base.py: -------------------------------------------------------------------------------- 1 | from typing import TypeAlias 2 | 3 | import pandas as pd 4 | import polars as pl 5 | 6 | DataFrame: TypeAlias = pl.DataFrame | pd.DataFrame 7 | 8 | 9 | def to_polars(df: DataFrame, *, copy: bool = False) -> pl.DataFrame: 10 | if isinstance(df, pd.DataFrame): 11 | return pl.DataFrame(df) 12 | elif copy: 13 | return df.clone() 14 | return df 15 | 
-------------------------------------------------------------------------------- /quantflow/ta/ohlc.py: -------------------------------------------------------------------------------- 1 | from datetime import timedelta 2 | 3 | import numpy as np 4 | import polars as pl 5 | from pydantic import BaseModel 6 | 7 | from .base import DataFrame, to_polars 8 | 9 | 10 | class OHLC(BaseModel): 11 | """Aggregates OHLC data for a given serie over a given period 12 | 13 | Optionally calculates the range-based variance estimators for the serie. 14 | Range-based estimators are so called because they are calculated from the 15 | difference between the period high and low. 16 | """ 17 | 18 | serie: str 19 | """serie to aggregate""" 20 | period: str | timedelta 21 | """down-sampling period, e.g. 1h, 1d, 1w""" 22 | index_column: str = "index" 23 | """column to group by""" 24 | parkinson_variance: bool = False 25 | """add Parkinson variance column""" 26 | garman_klass_variance: bool = False 27 | """add Garman Klass variance column""" 28 | rogers_satchell_variance: bool = False 29 | """add Rogers Satchell variance column""" 30 | percent_variance: bool = False 31 | """log-transform the variance columns""" 32 | 33 | @property 34 | def open_col(self) -> pl.Expr: 35 | return self.var_column("open") 36 | 37 | @property 38 | def high_col(self) -> pl.Expr: 39 | return self.var_column("high") 40 | 41 | @property 42 | def low_col(self) -> pl.Expr: 43 | return self.var_column("low") 44 | 45 | @property 46 | def close_col(self) -> pl.Expr: 47 | return self.var_column("close") 48 | 49 | def __call__(self, df: DataFrame) -> pl.DataFrame: 50 | """Returns a dataframe with OHLC data sampled over the given period""" 51 | result = ( 52 | to_polars(df, copy=True) 53 | .group_by_dynamic(self.index_column, every=self.period) 54 | .agg( 55 | pl.col(self.serie).first().alias(f"{self.serie}_open"), 56 | pl.col(self.serie).max().alias(f"{self.serie}_high"), 57 | 
pl.col(self.serie).min().alias(f"{self.serie}_low"), 58 | pl.col(self.serie).last().alias(f"{self.serie}_close"), 59 | pl.col(self.serie).mean().alias(f"{self.serie}_mean"), 60 | ) 61 | ) 62 | if self.parkinson_variance: 63 | result = self.parkinson(result) 64 | if self.garman_klass_variance: 65 | result = self.garman_klass(result) 66 | if self.rogers_satchell_variance: 67 | result = self.rogers_satchell(result) 68 | return result 69 | 70 | def parkinson(self, df: DataFrame) -> pl.DataFrame: 71 | """Adds Parkinson variance column to the dataframe 72 | 73 | This requires the serie high and low columns to be present 74 | """ 75 | c = (self.high_col - self.low_col) ** 2 / (4 * np.log(2)) 76 | return to_polars(df).with_columns(c.alias(f"{self.serie}_pk")) 77 | 78 | def garman_klass(self, df: DataFrame) -> pl.DataFrame: 79 | """Adds Garman Klass variance estimator column to the dataframe 80 | 81 | This requires the serie open, high, low and close columns to be present. 82 | """ 83 | open = self.open_col 84 | hh = self.high_col - open 85 | ll = self.low_col - open 86 | cc = self.close_col - open 87 | c = ( 88 | 0.522 * (hh - ll) ** 2 89 | - 0.019 * (cc * (hh + ll) + 2.0 * ll * hh) 90 | - 0.383 * cc**2 91 | ) 92 | return to_polars(df).with_columns(c.alias(f"{self.serie}_gk")) 93 | 94 | def rogers_satchell(self, df: DataFrame) -> pl.DataFrame: 95 | """Adds Rogers Satchell variance estimator column to the dataframe 96 | 97 | This requires the serie open, high, low and close columns to be present. 
98 | """ 99 | open = self.open_col 100 | hh = self.high_col - open 101 | ll = self.low_col - open 102 | cc = self.close_col - open 103 | c = hh * (hh - cc) + ll * (ll - cc) 104 | return to_polars(df).with_columns(c.alias(f"{self.serie}_rs")) 105 | 106 | def var_column(self, suffix: str) -> pl.Expr: 107 | """Returns a polars expression for the OHLC column""" 108 | col = pl.col(f"{self.serie}_{suffix}") 109 | return col.log() if self.percent_variance else col 110 | -------------------------------------------------------------------------------- /quantflow/utils/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/quantmind/quantflow/c439352354501e50333319912802fa41d44aac29/quantflow/utils/__init__.py -------------------------------------------------------------------------------- /quantflow/utils/bins.py: -------------------------------------------------------------------------------- 1 | from typing import Any, Sequence, cast 2 | 3 | import numpy as np 4 | from pandas import DataFrame 5 | 6 | from .types import FloatArray 7 | 8 | 9 | def pdf( 10 | data: FloatArray, 11 | *, 12 | num_bins: int | None = None, 13 | delta: float | None = None, 14 | symmetric: float | None = None, 15 | precision: int = 6, 16 | ) -> DataFrame: 17 | """Extract a probability density function from the data as a DataFrame 18 | with index given by the bin centers and a single column `pdf` with the 19 | estimated probability density function values 20 | 21 | :param data: the data to extract the PDF from 22 | :param num_bins: the number of bins to use in the histogram, 23 | if not provided it is calculated from the `delta` parameter (if provided) 24 | or set to 50 25 | :param delta: the spacing between bins, if not provided it is calculated 26 | from the `num_bins` 27 | :param symmetric: if provided, the bins are centered around this value 28 | :param precision: the precision to use in the calculation 29 | """ 30 | 
max_value = cast(float, np.max(data)) 31 | min_value = cast(float, np.min(data)) 32 | domain: float = max(abs(data)) if symmetric is not None else max_value - min_value # type: ignore 33 | if num_bins is None: 34 | if not delta: 35 | num_bins = 50 36 | delta_ = round(domain / (num_bins - 1), precision) 37 | else: 38 | delta_ = delta 39 | num_bins = round(domain / delta_) 40 | else: 41 | if delta: 42 | raise ValueError("Cannot specify both num_bins and delta") 43 | if num_bins < 2: 44 | raise ValueError("num_bins must be greater than 1") 45 | delta_ = round(domain / (num_bins - 1), precision) 46 | if symmetric is not None: 47 | b = (num_bins + 0.5) * delta_ 48 | min_value = symmetric - b 49 | max_value = symmetric + b 50 | x = np.arange(min_value - delta_, max_value + 2 * delta_, delta_) 51 | bins = (x[:-1] + x[1:]) * 0.5 52 | pdf, _ = np.histogram(data, bins=bins, density=True) 53 | return DataFrame(dict(pdf=pdf), index=x[1:-1]) 54 | 55 | 56 | def event_density( 57 | df: DataFrame, columns: Sequence[str], num: int = 10 58 | ) -> dict[str, Any]: 59 | """Calculate the probability density of the number of events 60 | in the dataframe columns 61 | """ 62 | bins = np.linspace(-0.5, num - 0.5, num + 1) 63 | data = dict(n=np.arange(num)) 64 | for col in columns: 65 | counts, _ = np.histogram(df[col], bins=bins) 66 | counts = counts / np.sum(counts) 67 | data[col] = counts[:num] # type: ignore 68 | return data 69 | -------------------------------------------------------------------------------- /quantflow/utils/dates.py: -------------------------------------------------------------------------------- 1 | from datetime import date, datetime, timezone 2 | 3 | 4 | def utcnow() -> datetime: 5 | return datetime.now(timezone.utc) 6 | 7 | 8 | def as_utc(dt: date | None = None) -> datetime: 9 | if dt is None: 10 | return utcnow() 11 | elif isinstance(dt, datetime): 12 | return dt.astimezone(timezone.utc) 13 | else: 14 | return datetime(dt.year, dt.month, dt.day, 
tzinfo=timezone.utc) 15 | 16 | 17 | def isoformat(date: str | date) -> str: 18 | if isinstance(date, str): 19 | return date 20 | return date.isoformat() 21 | 22 | 23 | def start_of_day(dt: date | None = None) -> datetime: 24 | return as_utc(dt).replace(hour=0, minute=0, second=0, microsecond=0) 25 | 26 | 27 | def as_date(dt: date | None = None) -> date: 28 | if dt is None: 29 | return date.today() 30 | elif isinstance(dt, datetime): 31 | return dt.date() 32 | else: 33 | return dt 34 | -------------------------------------------------------------------------------- /quantflow/utils/functions.py: -------------------------------------------------------------------------------- 1 | import math 2 | 3 | import numpy as np 4 | from scipy.integrate import quad 5 | 6 | _factorial = [math.factorial(k) for k in range(51)] 7 | 8 | 9 | @np.vectorize 10 | def factorial(n: int) -> float: 11 | """Cached factorial function""" 12 | if n < 0: 13 | return np.inf 14 | return _factorial[n] if n < len(_factorial) else math.factorial(n) 15 | 16 | 17 | def debye(n: int, x: float) -> float: 18 | xn = n * x ** (-n) 19 | return xn * quad(_debye, 0, x, args=(n,))[0] 20 | 21 | 22 | def _debye(t: float, n: int) -> float: 23 | return t**n / (np.exp(t) - 1) 24 | -------------------------------------------------------------------------------- /quantflow/utils/interest_rates.py: -------------------------------------------------------------------------------- 1 | from __future__ import annotations 2 | 3 | import math 4 | from datetime import timedelta 5 | from decimal import Decimal 6 | from typing import NamedTuple 7 | 8 | from .numbers import to_decimal 9 | 10 | 11 | class Rate(NamedTuple): 12 | rate: Decimal = Decimal("0") 13 | frequency: int = 0 14 | 15 | @classmethod 16 | def from_number(cls, rate: float, frequency: int = 0) -> Rate: 17 | return cls(rate=round(to_decimal(rate), 7), frequency=frequency) 18 | 19 | @property 20 | def percent(self) -> Decimal: 21 | return round(100 * self.rate, 5) 
22 | 23 | @property 24 | def bps(self) -> Decimal: 25 | return round(10000 * self.rate, 3) 26 | 27 | 28 | def rate_from_spot_and_forward( 29 | spot: Decimal, forward: Decimal, maturity: timedelta, frequency: int = 0 30 | ) -> Rate: 31 | """Calculate the interest rate implied by spot and forward prices 32 | 33 | Args: 34 | spot: the spot price 35 | forward: the forward price at the given maturity 36 | frequency: number of payments per year - 0 for continuous compounding 37 | 38 | Returns: 39 | Rate 40 | """ 41 | # use Act/365 for now 42 | ttm = maturity.days / 365 43 | if ttm <= 0: 44 | return Rate(frequency=frequency) 45 | if frequency == 0: 46 | return Rate.from_number( 47 | rate=math.log(forward / spot) / ttm, frequency=frequency 48 | ) 49 | else: 50 | # TODO: implement this 51 | raise NotImplementedError 52 | -------------------------------------------------------------------------------- /quantflow/utils/numbers.py: -------------------------------------------------------------------------------- 1 | import math 2 | from decimal import Decimal 3 | from enum import IntEnum, auto, unique 4 | 5 | Number = Decimal | float | int | str 6 | ZERO = Decimal(0) 7 | ONE = Decimal(1) 8 | 9 | 10 | @unique 11 | class Rounding(IntEnum): 12 | ZERO = auto() 13 | UP = auto() 14 | DOWN = auto() 15 | 16 | 17 | def to_decimal(value: Number) -> Decimal: 18 | return Decimal(str(value)) if not isinstance(value, Decimal) else value 19 | 20 | 21 | def sigfig(value: Number, sig: int = 5) -> str: 22 | """round a number to the given significant digit""" 23 | return f"%.{sig}g" % to_decimal(value) 24 | 25 | 26 | def normalize_decimal(d: Decimal) -> Decimal: 27 | return d.quantize(ONE) if d == d.to_integral() else d.normalize() 28 | 29 | 30 | def round_to_step( 31 | amount_to_adjust: Number, 32 | rounding_precision: Number, 33 | rounding: Rounding = Rounding.ZERO, 34 | ) -> Decimal: 35 | amount = normalize_decimal(to_decimal(amount_to_adjust)) 36 | precision = normalize_decimal(to_decimal(rounding_precision)) 37 | # Quantize 38 | match 
rounding: 39 | case Rounding.ZERO: 40 | stepped_amount = precision * round(amount / precision) 41 | case Rounding.UP: 42 | stepped_amount = precision * math.ceil(amount / precision) 43 | case Rounding.DOWN: 44 | stepped_amount = precision * math.floor(amount / precision) 45 | return stepped_amount 46 | -------------------------------------------------------------------------------- /quantflow/utils/plot.py: -------------------------------------------------------------------------------- 1 | import os 2 | from typing import Any 3 | 4 | import pandas as pd 5 | from scipy.stats import norm 6 | 7 | from .marginal import Marginal1D 8 | from .types import FloatArray 9 | 10 | PLOTLY_THEME = os.environ.get("PLOTLY_THEME", "plotly_dark") 11 | 12 | try: 13 | import plotly.express as px # type: ignore 14 | import plotly.graph_objects as go 15 | import plotly.io as pio 16 | 17 | pio.templates.default = PLOTLY_THEME 18 | except ImportError: 19 | px = None 20 | 21 | 22 | def check_plotly() -> None: 23 | if px is None: 24 | raise ImportError("plotly is not installed") 25 | 26 | 27 | def plot_lines(data: Any, template: str = PLOTLY_THEME, **kwargs: Any) -> Any: 28 | check_plotly() 29 | return px.line(data, template=template, **kwargs) 30 | 31 | 32 | def plot_marginal_pdf( 33 | m: Marginal1D, 34 | n: int | None = None, 35 | *, 36 | analytical: str | bool = "lines", 37 | normal: bool = False, 38 | marker_size: int = 8, 39 | marker_color: str = "rgba(30, 186, 64, .5)", 40 | label: str = "characteristic PDF", 41 | log_y: bool = False, 42 | fig: Any | None = None, 43 | **kwargs: Any 44 | ) -> Any: 45 | """Plot the marginal pdf on an input support""" 46 | check_plotly() 47 | pdf = m.pdf_from_characteristic(n, **kwargs) 48 | if fig is None: 49 | fig = go.Figure() 50 | if analytical: 51 | fig.add_trace( 52 | go.Scatter( 53 | x=pdf.x, 54 | y=m.pdf(pdf.x), 55 | name="analytical", 56 | mode=analytical, 57 | ) 58 | ) 59 | if normal: 60 | n = norm.pdf(pdf.x, loc=m.mean(), scale=m.std()) 61 | 
fig.add_trace( 62 | go.Scatter( 63 | x=pdf.x, 64 | y=n, 65 | name="normal", 66 | mode="lines", 67 | ) 68 | ) 69 | 70 | fig.add_trace( 71 | go.Scatter( 72 | x=pdf.x, 73 | y=pdf.y, 74 | name=label, 75 | mode="markers", 76 | marker_color=marker_color, 77 | marker_size=marker_size, 78 | ) 79 | ) 80 | if log_y: 81 | fig.update_yaxes(type="log") 82 | return fig 83 | 84 | 85 | def plot_characteristic(m: Marginal1D, n: int | None = None, **kwargs: Any) -> Any: 86 | check_plotly() 87 | df = m.characteristic_df(n=n, **kwargs) 88 | return px.line( 89 | df, 90 | x="frequency", 91 | y="characteristic", 92 | color="name", 93 | markers=True, 94 | ) 95 | 96 | 97 | def plot_vol_surface( 98 | data: pd.DataFrame, 99 | *, 100 | model: pd.DataFrame | None = None, 101 | marker_size: int = 10, 102 | x_series: str = "moneyness_ttm", 103 | series: str = "implied_vol", 104 | color_series: str = "side", 105 | fig: Any | None = None, 106 | fig_params: dict | None = None, 107 | **kwargs: Any 108 | ) -> Any: 109 | check_plotly() 110 | # Define a color map for the categorical values 111 | color_map = {"bid": "blue", "ask": "red"} 112 | colors = data[color_series].map(color_map) 113 | fig_params = fig_params or {} 114 | fig_: go.Figure = fig or go.Figure() 115 | params = dict( 116 | mode="markers", 117 | marker=dict(color=colors), 118 | **kwargs, 119 | ) 120 | fig_.add_trace( 121 | go.Scatter( 122 | x=data[x_series], 123 | y=data[series], 124 | **params, 125 | ), 126 | **fig_params, 127 | ) 128 | if model is not None: 129 | fig_.add_trace( 130 | go.Scatter( 131 | x=model["moneyness_ttm"], 132 | y=model[series], 133 | name="model", 134 | mode="lines", 135 | ), 136 | **fig_params, 137 | ) 138 | fig_.update_traces(marker_size=marker_size) 139 | return fig_ 140 | 141 | 142 | def plot_vol_surface_3d( 143 | df: pd.DataFrame, 144 | *, 145 | marker_size: int = 10, 146 | series: str = "implied_vol", 147 | **kwargs: Any 148 | ) -> Any: 149 | check_plotly() 150 | return px.scatter_3d(df, x="moneyness_ttm", 
y="ttm", z=series, color="side") 151 | 152 | 153 | def plot_vol_cross( 154 | data: pd.DataFrame, 155 | *, 156 | data2: pd.DataFrame | None = None, 157 | series: str = "implied_vol", 158 | marker_size: int = 10, 159 | fig: Any | None = None, 160 | name: str = "model", 161 | **kwargs: Any 162 | ) -> Any: 163 | check_plotly() 164 | fig = fig or go.Figure() 165 | fig.add_trace( 166 | go.Scatter( 167 | x=data["moneyness_ttm"], 168 | y=data[series], 169 | name=name, 170 | mode="lines", 171 | ) 172 | ) 173 | if data2 is not None: 174 | fig.add_trace( 175 | go.Scatter( 176 | x=data2["moneyness_ttm"], 177 | y=data2[series], 178 | name="model", 179 | mode="lines", 180 | ) 181 | ) 182 | return fig.update_layout(xaxis_title="moneyness_ttm", yaxis_title=series) 183 | 184 | 185 | def plot3d( 186 | x: FloatArray, 187 | y: FloatArray, 188 | z: FloatArray, 189 | contours: Any | None, 190 | colorscale: str = "viridis", 191 | **kwargs: Any 192 | ) -> Any: 193 | check_plotly() 194 | fig = go.Figure( 195 | data=[go.Surface(x=x, y=y, z=z, contours=contours, colorscale=colorscale)] 196 | ) 197 | if kwargs: 198 | fig.update_layout(**kwargs) 199 | return fig 200 | 201 | 202 | def candlestick_plot(df: pd.DataFrame, slider: bool = True) -> Any: 203 | fig = go.Figure( 204 | data=go.Candlestick( 205 | x=df["date"], 206 | open=df["open"], 207 | high=df["high"], 208 | low=df["low"], 209 | close=df["close"], 210 | ) 211 | ) 212 | if slider is False: 213 | fig.update_layout(xaxis_rangeslider_visible=False) 214 | return fig 215 | -------------------------------------------------------------------------------- /quantflow/utils/types.py: -------------------------------------------------------------------------------- 1 | from decimal import Decimal 2 | from typing import Any, Optional, Union 3 | 4 | import numpy as np 5 | import numpy.typing as npt 6 | import pandas as pd 7 | 8 | Number = Decimal 9 | Float = float | np.floating[Any] 10 | Numbers = Union[int, Float, np.number] 11 | NumberType = 
Union[float, int, str, Number] 12 | Vector = Union[int, float, complex, np.ndarray, pd.Series] 13 | FloatArray = npt.NDArray[np.floating[Any]] 14 | IntArray = npt.NDArray[np.signedinteger[Any]] 15 | FloatArrayLike = FloatArray | float 16 | 17 | 18 | def as_number(num: Optional[NumberType] = None) -> Number: 19 | return Number(0 if num is None else str(num)) 20 | 21 | 22 | def as_float(num: Optional[NumberType] = None) -> float: 23 | return float(0 if num is None else num) 24 | 25 | 26 | def as_array(n: Vector) -> np.ndarray: 27 | """Convert an input into an array""" 28 | if isinstance(n, int): 29 | return np.arange(n) 30 | else: 31 | return np.asarray(n) 32 | -------------------------------------------------------------------------------- /quantflow_tests/conftest.py: -------------------------------------------------------------------------------- 1 | import dotenv 2 | 3 | dotenv.load_dotenv() 4 | -------------------------------------------------------------------------------- /quantflow_tests/test_cir.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import pytest 3 | 4 | from quantflow.sp.cir import CIR, SamplingAlgorithm 5 | 6 | 7 | @pytest.fixture 8 | def cir_neg() -> CIR: 9 | return CIR(kappa=1, sigma=2, sample_algo=SamplingAlgorithm.euler) 10 | 11 | 12 | @pytest.fixture 13 | def cir() -> CIR: 14 | return CIR(kappa=1, sigma=1.2, sample_algo=SamplingAlgorithm.euler) 15 | 16 | 17 | def test_cir_neg(cir_neg: CIR) -> None: 18 | assert cir_neg.is_positive is False 19 | assert cir_neg.sigma2 == 4 20 | m = cir_neg.marginal(1) 21 | assert m.mean() == 1.0 22 | assert m.mean_from_characteristic() == pytest.approx(1.0, 1e-3) 23 | assert m.variance_from_characteristic() == pytest.approx(m.variance(), 1e-3) 24 | 25 | 26 | def test_cir_neg_sampling(cir_neg: CIR) -> None: 27 | paths = cir_neg.sample(10, time_horizon=1, time_steps=1000) 28 | assert paths.samples == 10 29 | assert paths.time_steps == 1000 30 | 
assert paths.dt == 0.001 31 | assert np.all(paths.data == paths.data) 32 | 33 | 34 | def test_cir_pdf(cir: CIR): 35 | assert cir.is_positive is True 36 | m = cir.marginal(1) 37 | pdf = m.pdf_from_characteristic(128, max_frequency=20) 38 | np.testing.assert_array_almost_equal(pdf.y, m.pdf(pdf.x), 1e-1) 39 | -------------------------------------------------------------------------------- /quantflow_tests/test_copula.py: -------------------------------------------------------------------------------- 1 | from decimal import Decimal 2 | from math import isclose 3 | 4 | import numpy as np 5 | 6 | from quantflow.sp.copula import FrankCopula, IndependentCopula 7 | 8 | 9 | def test_independent_copula(): 10 | c = IndependentCopula() 11 | assert c.tau() == 0 12 | assert c.rho() == 0 13 | assert np.allclose(c.jacobian(0.3, 0.4), np.array([0.4, 0.3])) 14 | 15 | 16 | def test_frank_copula(): 17 | c = FrankCopula(kappa=Decimal("0.3")) 18 | assert c.kappa == Decimal("0.3") 19 | assert c.tau() > 0 20 | assert c.rho() < 0 21 | assert c.jacobian(0.3, 0.4).shape == (3,) 22 | 23 | c.kappa = 0 24 | assert c.tau() == 0 25 | assert c.rho() == 0 26 | assert np.allclose(c.jacobian(0.3, 0.4), np.array([0.4, 0.3, 0.0])) 27 | 28 | c = FrankCopula() 29 | assert isclose(c(11.0, 3.0), 33.0) 30 | assert isclose(c(11.0, 3.0), 33.0) 31 | -------------------------------------------------------------------------------- /quantflow_tests/test_data.py: -------------------------------------------------------------------------------- 1 | from typing import AsyncIterator 2 | 3 | import pytest 4 | from aiohttp.client_exceptions import ClientError 5 | 6 | from quantflow.data.fed import FederalReserve 7 | from quantflow.data.fiscal_data import FiscalData 8 | from quantflow.data.fmp import FMP 9 | 10 | pytestmark = pytest.mark.skipif(not FMP().key, reason="No FMP API key found") 11 | 12 | 13 | @pytest.fixture 14 | async def fmp() -> AsyncIterator[FMP]: 15 | async with FMP() as fmp: 16 | yield fmp 17 | 18 | 19 
| def test_client(fmp: FMP) -> None: 20 | assert fmp.url 21 | assert fmp.key 22 | 23 | 24 | async def test_historical(fmp: FMP) -> None: 25 | df = await fmp.prices("BTCUSD", fmp.freq.one_hour) 26 | assert df["close"] is not None 27 | 28 | 29 | async def test_dividends(fmp: FMP) -> None: 30 | data = await fmp.dividends() 31 | assert data is not None 32 | 33 | 34 | async def test_fed_yc() -> None: 35 | try: 36 | async with FederalReserve() as fed: 37 | df = await fed.yield_curves() 38 | assert df is not None 39 | assert df.shape[0] > 0 40 | assert df.shape[1] == 12 41 | except (ConnectionError, ClientError) as e: 42 | pytest.skip(f"Skipping test_fed due to network issue: {e}") 43 | 44 | 45 | async def test_fed_rates() -> None: 46 | try: 47 | async with FederalReserve() as fed: 48 | df = await fed.ref_rates() 49 | assert df is not None 50 | assert df.shape[0] > 0 51 | assert df.shape[1] == 2 52 | except (ConnectionError, ClientError) as e: 53 | pytest.skip(f"Skipping test_fed due to network issue: {e}") 54 | 55 | 56 | async def __test_fiscal_data() -> None: 57 | try: 58 | async with FiscalData() as fd: 59 | df = await fd.securities() 60 | assert df is not None 61 | assert df.shape[0] > 0 62 | assert df.shape[1] == 2 63 | except (ConnectionError, ClientError) as e: 64 | pytest.skip(f"Skipping test_fed due to network issue: {e}") 65 | -------------------------------------------------------------------------------- /quantflow_tests/test_distributions.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import pytest 3 | 4 | from quantflow.utils.distributions import DoubleExponential 5 | 6 | 7 | def test_double_exponential(): 8 | d = DoubleExponential(decay=0.1) 9 | assert d.mean() == 0 10 | assert d.variance() == 200 11 | assert d.scale == 10 12 | 13 | 14 | def test_double_exponential_samples(): 15 | d = DoubleExponential(decay=0.1, kappa=2) 16 | samples = d.sample(10000) 17 | assert samples.shape == (10000,) 18 
| assert samples.mean() == pytest.approx(d.mean(), rel=0.8) 19 | # 20 | d = DoubleExponential.from_moments(kappa=1) 21 | assert d.decay == pytest.approx(np.sqrt(2)) 22 | assert d.mean() == 0 23 | assert d.variance() == pytest.approx(1) 24 | # 25 | d = DoubleExponential.from_moments(variance=2, kappa=2) 26 | assert d.mean() == 0 27 | assert d.variance() == pytest.approx(2) 28 | # 29 | d = DoubleExponential.from_moments(mean=-1, variance=2, kappa=2) 30 | assert d.mean() == -1 31 | assert d.variance() == pytest.approx(2) 32 | -------------------------------------------------------------------------------- /quantflow_tests/test_frft.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import pytest 3 | from scipy.optimize import Bounds 4 | 5 | from quantflow.utils.transforms import FrFT, Transform 6 | 7 | 8 | @pytest.fixture 9 | def x(): 10 | t = np.linspace(-4 * np.pi, 4 * np.pi, 64) 11 | return ( 12 | np.sin(2 * np.pi * 40 * t) 13 | + np.sin(2 * np.pi * 20 * t) 14 | + np.sin(2 * np.pi * 10 * t) 15 | ) 16 | 17 | 18 | def test_frft(x): 19 | t = FrFT.calculate(x, 0.01) 20 | assert t.n == 64 21 | 22 | 23 | def test_transform_positive_domain(): 24 | n = 10 25 | t = Transform.create(n, domain_range=Bounds(0, np.inf)) 26 | assert t.n == n 27 | x = t.space_domain(1) 28 | assert len(x) == n 29 | np.testing.assert_almost_equal(x, np.linspace(0, n - 1, n)) 30 | -------------------------------------------------------------------------------- /quantflow_tests/test_heston.py: -------------------------------------------------------------------------------- 1 | import pytest 2 | 3 | from quantflow.sp.heston import Heston, HestonJ 4 | from quantflow.utils.distributions import DoubleExponential 5 | from quantflow_tests.utils import characteristic_tests 6 | 7 | 8 | @pytest.fixture 9 | def heston() -> Heston: 10 | return Heston.create(vol=0.5, kappa=1, sigma=0.5, rho=0) 11 | 12 | 13 | @pytest.fixture 14 | def heston_jumps() -> 
HestonJ[DoubleExponential]: 15 | return HestonJ.create( 16 | DoubleExponential, 17 | vol=0.5, 18 | kappa=1, 19 | sigma=0.5, 20 | jump_intensity=50, 21 | jump_fraction=0.3, 22 | ) 23 | 24 | 25 | def test_characteristic(heston: Heston) -> None: 26 | assert heston.variance_process.is_positive is True 27 | assert heston.characteristic(1, 0) == 1 28 | m = heston.marginal(1) 29 | characteristic_tests(m) 30 | assert m.mean() == 0.0 31 | assert pytest.approx(m.std()) == 0.5 32 | 33 | 34 | def test_heston_jumps_characteristic(heston_jumps: HestonJ) -> None: 35 | assert heston_jumps.variance_process.is_positive is True 36 | m = heston_jumps.marginal(1) 37 | characteristic_tests(m) 38 | assert m.mean() == 0.0 39 | assert m.std() == pytest.approx(0.5) 40 | -------------------------------------------------------------------------------- /quantflow_tests/test_jump_diffusion.py: -------------------------------------------------------------------------------- 1 | import pytest 2 | 3 | from quantflow.sp.jump_diffusion import JumpDiffusion 4 | from quantflow.utils.distributions import Normal 5 | 6 | 7 | @pytest.fixture 8 | def merton() -> JumpDiffusion[Normal]: 9 | return JumpDiffusion.create(Normal, jump_fraction=0.8) 10 | 11 | 12 | def test_characteristic(merton: JumpDiffusion[Normal]) -> None: 13 | m = merton.marginal(1) 14 | assert m.mean() == 0 15 | assert pytest.approx(m.std()) == pytest.approx(0.5, 1.0e-3) 16 | pdf = m.pdf_from_characteristic(128) 17 | assert pdf.x[0] < 0 18 | assert pdf.x[-1] > 0 19 | assert -pdf.x[0] != pdf.x[-1] 20 | 21 | 22 | def test_sampling(merton: JumpDiffusion[Normal]) -> None: 23 | paths = merton.sample(1000, time_horizon=1, time_steps=1000) 24 | mean = paths.mean() 25 | assert mean[0] == 0 26 | std = paths.std() 27 | assert std[0] == 0 28 | -------------------------------------------------------------------------------- /quantflow_tests/test_ohlc.py: -------------------------------------------------------------------------------- 1 | from 
quantflow.sp.weiner import WeinerProcess 2 | from quantflow.ta.ohlc import OHLC 3 | 4 | 5 | def test_ohlc() -> None: 6 | ohlc = OHLC( 7 | serie="0", 8 | period="10m", 9 | parkinson_variance=True, 10 | garman_klass_variance=True, 11 | rogers_satchell_variance=True, 12 | ) 13 | assert ohlc.serie == "0" 14 | assert ohlc.period == "10m" 15 | assert ohlc.index_column == "index" 16 | assert ohlc.parkinson_variance is True 17 | assert ohlc.garman_klass_variance is True 18 | assert ohlc.rogers_satchell_variance is True 19 | assert ohlc.percent_variance is False 20 | # create a dataframe 21 | path = WeinerProcess(sigma=0.5).sample(1, 1, 1000) 22 | df = path.as_datetime_df().reset_index() 23 | result = ohlc(df) 24 | assert result.shape == (145, 9) 25 | -------------------------------------------------------------------------------- /quantflow_tests/test_options.py: -------------------------------------------------------------------------------- 1 | import json 2 | import math 3 | 4 | import numpy as np 5 | import pytest 6 | 7 | from quantflow.options import bs 8 | from quantflow.options.calibration import HestonCalibration 9 | from quantflow.options.pricer import OptionPricer 10 | from quantflow.options.surface import ( 11 | OptionPrice, 12 | VolSurface, 13 | VolSurfaceInputs, 14 | surface_from_inputs, 15 | ) 16 | from quantflow.sp.heston import Heston 17 | from quantflow_tests.utils import has_plotly 18 | 19 | a = np.asarray 20 | CROSS_SECTIONS = 8 21 | 22 | 23 | @pytest.fixture 24 | def heston() -> OptionPricer[Heston]: 25 | return OptionPricer(model=Heston.create(vol=0.5, kappa=1, sigma=0.8, rho=0)) 26 | 27 | 28 | @pytest.fixture 29 | def vol_surface() -> VolSurface: 30 | with open("quantflow_tests/volsurface.json") as fp: 31 | return surface_from_inputs(VolSurfaceInputs(**json.load(fp))) 32 | 33 | 34 | @pytest.mark.parametrize("ttm", [0.4, 0.8, 1.4, 2]) 35 | def test_atm_black_pricing(ttm): 36 | price = bs.black_call(0, 0.2, ttm) 37 | result = 
bs.implied_black_volatility(0, price, ttm, 0.5, 1) 38 | assert pytest.approx(result[0]) == 0.2 39 | 40 | 41 | @pytest.mark.parametrize("ttm", [0.4, 0.8, 1.4, 2]) 42 | def test_otm_black_pricing(ttm): 43 | price = bs.black_call(math.log(1.1), 0.25, ttm) 44 | result = bs.implied_black_volatility(math.log(1.1), price, ttm, 0.5, 1) 45 | assert pytest.approx(result[0]) == 0.25 46 | 47 | 48 | @pytest.mark.parametrize("ttm", [0.4, 0.8, 1.4, 2]) 49 | def test_itm_black_pricing(ttm): 50 | price = bs.black_call(math.log(0.9), 0.25, ttm) 51 | result = bs.implied_black_volatility(math.log(0.9), price, ttm, 0.5, 1) 52 | assert pytest.approx(result[0]) == 0.25 53 | 54 | 55 | def test_ditm_black_pricing(): 56 | price = bs.black_call(math.log(0.6), 0.25, 1) 57 | assert pytest.approx(price, 0.01) == 0.4 58 | result = bs.implied_black_volatility(math.log(0.6), price, 1, 0.5, 1) 59 | assert pytest.approx(result[0]) == 0.25 60 | 61 | 62 | def test_vol_surface(vol_surface: VolSurface): 63 | assert vol_surface.ref_date 64 | ts = vol_surface.term_structure() 65 | assert len(ts) == CROSS_SECTIONS 66 | options = vol_surface.options_df() 67 | crosses = [] 68 | for index in range(0, len(vol_surface.maturities)): 69 | crosses.append(vol_surface.options_df(index=index)) 70 | assert len(crosses) == CROSS_SECTIONS 71 | assert len(options) == sum(len(cross) for cross in crosses) 72 | 73 | 74 | def test_same_vol_surface(vol_surface: VolSurface): 75 | inputs = vol_surface.inputs() 76 | vol_surface2 = surface_from_inputs(inputs) 77 | assert vol_surface == vol_surface2 78 | 79 | 80 | def test_black_vol(vol_surface: VolSurface): 81 | options = vol_surface.option_list(index=1) 82 | for option in options: 83 | assert option.price_time > 0 84 | 85 | options = vol_surface.bs(index=1) 86 | converged = [o for o in options if o.converged] 87 | assert converged 88 | # calculate the black price now 89 | prices = vol_surface.calc_bs_prices(index=1) 90 | assert len(converged) == len(prices) 91 | for o, price in 
zip(converged, prices): 92 | assert pytest.approx(float(o.price)) == price 93 | 94 | 95 | def test_call_put_parity(): 96 | option = OptionPrice.create(100).calculate_price() 97 | assert option.moneyness == 0 98 | assert option.price == option.call_price 99 | option2 = OptionPrice.create(100, call=False).calculate_price() 100 | assert option2.price == option2.put_price 101 | assert option2.price == option.put_price 102 | assert option2.call_price == option.price 103 | 104 | 105 | def test_call_put_parity_otm(): 106 | option = OptionPrice.create(105, forward=100).calculate_price() 107 | assert option.moneyness > 0 108 | assert option.price == option.call_price 109 | option2 = OptionPrice.create(105, forward=100, call=False).calculate_price() 110 | assert option2.price == option2.put_price 111 | assert option2.price == pytest.approx(option.put_price) 112 | assert option2.call_price == pytest.approx(option.price) 113 | 114 | 115 | def test_calibration_setup(vol_surface: VolSurface, heston: OptionPricer[Heston]): 116 | cal = HestonCalibration(pricer=heston, vol_surface=vol_surface) 117 | assert cal.ref_date == vol_surface.ref_date 118 | assert cal.options 119 | n = len(cal.options) 120 | vol_range = cal.implied_vol_range() 121 | assert vol_range.lb < vol_range.ub 122 | assert vol_range.lb > 0 123 | assert vol_range.ub < 10 124 | cal2 = cal.remove_implied_above(1.0) 125 | assert len(cal2.options) == n 126 | cal2 = cal.remove_implied_above(0.95) 127 | assert len(cal2.options) < n 128 | 129 | 130 | def test_calibration(vol_surface: VolSurface, heston: OptionPricer[Heston]): 131 | vol_surface.maturities = vol_surface.maturities[1:] 132 | cal = HestonCalibration( 133 | pricer=heston, vol_surface=vol_surface 134 | ).remove_implied_above(0.95) 135 | cal.fit() 136 | if has_plotly: 137 | assert cal.plot(index=2) is not None 138 | -------------------------------------------------------------------------------- /quantflow_tests/test_options_pricer.py: 
-------------------------------------------------------------------------------- 1 | import pytest 2 | 3 | from quantflow.options.pricer import OptionPricer 4 | from quantflow.sp.heston import HestonJ 5 | from quantflow.utils.distributions import DoubleExponential 6 | from quantflow_tests.utils import has_plotly 7 | 8 | 9 | @pytest.fixture 10 | def pricer() -> OptionPricer[HestonJ[DoubleExponential]]: 11 | return OptionPricer( 12 | model=HestonJ.create(DoubleExponential, vol=0.5, kappa=1, sigma=0.8, rho=0) 13 | ) 14 | 15 | 16 | @pytest.mark.skipif(not has_plotly, reason="Plotly not installed") 17 | def test_plot_surface(pricer: OptionPricer): 18 | fig = pricer.plot3d() 19 | surface = fig.data[0] 20 | assert surface.x is not None 21 | assert surface.y is not None 22 | assert surface.z is not None 23 | -------------------------------------------------------------------------------- /quantflow_tests/test_ou.py: -------------------------------------------------------------------------------- 1 | import pytest 2 | 3 | from quantflow.sp.bns import BNS 4 | from quantflow.sp.ou import GammaOU, Vasicek 5 | from quantflow_tests.utils import analytical_tests, characteristic_tests 6 | 7 | 8 | @pytest.fixture 9 | def vasicek() -> Vasicek: 10 | return Vasicek(kappa=5) 11 | 12 | 13 | @pytest.fixture 14 | def gamma_ou() -> GammaOU: 15 | return GammaOU.create(decay=10, kappa=5) 16 | 17 | 18 | @pytest.fixture 19 | def bns() -> BNS: 20 | return BNS.create(vol=0.5, decay=5, kappa=1, rho=0) 21 | 22 | 23 | def test_marginal(gamma_ou: GammaOU) -> None: 24 | m = gamma_ou.marginal(1) 25 | assert m.mean() == 1 26 | 27 | 28 | def test_sample(gamma_ou: GammaOU) -> None: 29 | paths = gamma_ou.sample(10, 1, 100) 30 | assert paths.t == 1 31 | assert paths.dt == 0.01 32 | 33 | 34 | def test_vasicek(vasicek: Vasicek) -> None: 35 | m = vasicek.marginal(10) 36 | characteristic_tests(m) 37 | analytical_tests(vasicek) 38 | assert m.mean() == 1.0 39 | assert m.variance() == pytest.approx(0.1) 40 | 
assert m.mean_from_characteristic() == pytest.approx(1.0, 1e-3) 41 | assert m.std_from_characteristic() == pytest.approx(m.std(), 1e-3) 42 | 43 | 44 | def test_bns(bns: BNS): 45 | m = bns.marginal(1) 46 | assert bns.characteristic(1, 0) == 1 47 | assert m.mean() == 0.0 48 | # assert pytest.approx(m.std(), 1e-3) == 0.5 49 | -------------------------------------------------------------------------------- /quantflow_tests/test_poisson.py: -------------------------------------------------------------------------------- 1 | import math 2 | 3 | import numpy as np 4 | import pytest 5 | 6 | from quantflow.sp.dsp import DSP 7 | from quantflow.sp.poisson import CompoundPoissonProcess, PoissonProcess 8 | from quantflow.utils.distributions import DoubleExponential, Exponential, Normal 9 | from quantflow_tests.utils import analytical_tests, characteristic_tests 10 | 11 | 12 | @pytest.fixture 13 | def poisson() -> PoissonProcess: 14 | return PoissonProcess(intensity=2) 15 | 16 | 17 | @pytest.fixture 18 | def comp() -> CompoundPoissonProcess[Exponential]: 19 | return CompoundPoissonProcess(intensity=2, jumps=Exponential(decay=10)) 20 | 21 | 22 | @pytest.fixture 23 | def dsp() -> DSP: 24 | return DSP() 25 | 26 | 27 | def test_characteristic(poisson: PoissonProcess) -> None: 28 | characteristic_tests(poisson.marginal(1)) 29 | m1 = poisson.marginal(1) 30 | m2 = poisson.marginal(2) 31 | assert m1.mean() == 2 32 | assert pytest.approx(m1.mean_from_characteristic(), 0.001) == 2 33 | assert pytest.approx(m2.mean_from_characteristic(), 0.001) == 4 34 | assert pytest.approx(m1.std()) == math.sqrt(2) 35 | assert pytest.approx(m1.variance_from_characteristic(), 0.001) == 2 36 | assert pytest.approx(m2.variance_from_characteristic(), 0.001) == 4 37 | 38 | 39 | def test_poisson_cdf_from_characteristic(poisson: PoissonProcess) -> None: 40 | m = poisson.marginal(0.1) 41 | # m.pdf(x) 42 | cdf1 = m.cdf(1.0 * np.arange(10)) 43 | cdf2 = m.cdf_from_characteristic(10, frequency_n=128 * 8).y 44 | 
np.testing.assert_almost_equal(cdf1, cdf2, decimal=3) 45 | # TODO: fix this 46 | # np.testing.assert_almost_equal(pdf, c_pdf.y[:10]) 47 | 48 | 49 | def test_poisson_pdf(poisson: PoissonProcess) -> None: 50 | m = poisson.marginal(1) 51 | analytical_tests(poisson) 52 | # m.pdf(x) 53 | c_pdf = m.pdf_from_characteristic(32) 54 | np.testing.assert_almost_equal(np.linspace(0, 31, 32), c_pdf.x) 55 | # TODO: fix this 56 | # np.testing.assert_almost_equal(pdf, c_pdf.y[:10]) 57 | 58 | 59 | def test_poisson_sampling(poisson: PoissonProcess) -> None: 60 | paths = poisson.sample(1000, time_horizon=1, time_steps=1000) 61 | mean = paths.mean() 62 | assert mean[0] == 0 63 | std = paths.std() 64 | assert std[0] == 0 65 | pdf = paths.pdf(delta=1) 66 | assert len(pdf.columns) == 1 67 | assert sum(pdf["pdf"]) == pytest.approx(1) 68 | 69 | 70 | def test_comp_characteristic(comp: CompoundPoissonProcess) -> None: 71 | characteristic_tests(comp.marginal(1)) 72 | analytical_tests(comp) 73 | 74 | 75 | def test_dsp_sample(dsp: DSP): 76 | paths = dsp.sample(1000, time_horizon=1, time_steps=1000) 77 | mean = paths.mean() 78 | assert mean[0] == 0 79 | std = paths.std() 80 | assert std[0] == 0 81 | 82 | 83 | def test_dsp_pdf(dsp: DSP): 84 | m = dsp.marginal(1) 85 | pdf1 = m.pdf_from_characteristic(32).y 86 | pdf2 = m.pdf_from_characteristic(64).y 87 | np.testing.assert_almost_equal(pdf2[:32], pdf1) 88 | 89 | 90 | def test_compound_create_double_exponential(): 91 | poi = CompoundPoissonProcess.create(DoubleExponential, jump_intensity=20, vol=0.5) 92 | assert poi.intensity == 20 93 | assert poi.analytical_mean(0.1) == 0 94 | assert poi.analytical_std(0.1) == 0.5 * np.sqrt(0.1) 95 | 96 | 97 | def test_compound_create(): 98 | poi = CompoundPoissonProcess.create(Normal, jump_intensity=20, vol=0.5) 99 | assert poi.intensity == 20 100 | assert poi.analytical_mean(0.1) == 0 101 | assert poi.analytical_std(0.1) == 0.5 * np.sqrt(0.1) 102 | 
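The `analytical_mean`/`analytical_std` checks above rest on the compound-Poisson moment identities E[X_t] = λt·E[J] and Var[X_t] = λt·E[J²]. A quick Monte Carlo check of those identities with exponential jumps — a numpy sketch with illustrative parameters, independent of the library code:

```python
import numpy as np

# Illustrative parameters (not the library defaults): jump intensity,
# time horizon and exponential jump-size decay.
rng = np.random.default_rng(0)
intensity, t, decay = 20.0, 0.1, 1.0

# X_t is a sum of N ~ Poisson(intensity * t) iid Exponential(decay) jumps.
# Simulate 50,000 independent paths in a vectorised way: draw all jumps at
# once, then sum them back per path with a weighted bincount.
n_jumps = rng.poisson(intensity * t, size=50_000)
jumps = rng.exponential(1.0 / decay, size=int(n_jumps.sum()))
owner = np.repeat(np.arange(n_jumps.size), n_jumps)
x = np.bincount(owner, weights=jumps, minlength=n_jumps.size)

mean_theory = intensity * t / decay            # lambda * t * E[J]
var_theory = 2.0 * intensity * t / decay**2    # lambda * t * E[J^2], E[J^2] = 2 / decay^2
assert abs(x.mean() - mean_theory) / mean_theory < 0.05
assert abs(x.var() - var_theory) / var_theory < 0.10
```

This is the same kind of identity that `analytical_tests` in `quantflow_tests/utils.py` verifies, there against moments recovered from the characteristic function rather than by simulation.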
-------------------------------------------------------------------------------- /quantflow_tests/test_utils.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | 3 | from quantflow.ta.paths import Paths 4 | from quantflow.utils.numbers import round_to_step, to_decimal 5 | 6 | 7 | def test_round_to_step(): 8 | assert str(round_to_step(1.234, 0.1)) == "1.2" 9 | assert str(round_to_step(1.234, 0.01)) == "1.23" 10 | assert str(round_to_step(1.236, 0.01)) == "1.24" 11 | assert str(round_to_step(1.1, 0.01)) == "1.10" 12 | assert str(round_to_step(1.1, 0.001)) == "1.100" 13 | assert str(round_to_step(2, 0.001)) == "2.000" 14 | assert str(round_to_step(to_decimal("2.00000000000"), 0.001)) == "2.000" 15 | 16 | 17 | def test_normal_draws() -> None: 18 | paths = Paths.normal_draws(100, 1, 1000) 19 | assert paths.samples == 100 20 | assert paths.time_steps == 1000 21 | m = paths.mean() 22 | np.testing.assert_array_almost_equal(m, 0) 23 | paths = Paths.normal_draws(100, 1, 1000, antithetic_variates=False) 24 | assert np.abs(paths.mean().mean()) > np.abs(m.mean()) 25 | 26 | 27 | def test_normal_draws1() -> None: 28 | paths = Paths.normal_draws(1, 1, 1000) 29 | assert paths.samples == 1 30 | assert paths.time_steps == 1000 31 | paths = Paths.normal_draws(1, 1, 1000, antithetic_variates=False) 32 | assert paths.samples == 1 33 | assert paths.time_steps == 1000 34 | 35 | 36 | def test_path_stats() -> None: 37 | paths = Paths.normal_draws(paths=2, time_steps=1000) 38 | assert paths.paths_mean().shape == (2,) 39 | assert paths.paths_std(scaled=True).shape == (2,) 40 | assert paths.paths_var(scaled=False).shape == (2,) 41 | -------------------------------------------------------------------------------- /quantflow_tests/test_weiner.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import pytest 3 | 4 | from quantflow.sp.weiner import WeinerProcess 5 | from 
quantflow_tests.utils import characteristic_tests 6 | 7 | 8 | @pytest.fixture 9 | def weiner() -> WeinerProcess: 10 | return WeinerProcess(sigma=0.5) 11 | 12 | 13 | def test_characteristic(weiner: WeinerProcess) -> None: 14 | assert weiner.characteristic(1, 0) == 1 15 | assert weiner.convexity_correction(2) == 0.25 16 | marginal = weiner.marginal(1) 17 | characteristic_tests(marginal) 18 | assert marginal.mean() == 0 19 | assert marginal.mean_from_characteristic() == 0 20 | assert marginal.std() == 0.5 21 | assert marginal.std_from_characteristic() == pytest.approx(0.5) 22 | assert marginal.variance_from_characteristic() == pytest.approx(0.25) 23 | df = marginal.characteristic_df(128) 24 | assert len(df.columns) == 3 25 | 26 | 27 | def test_sampling(weiner: WeinerProcess) -> None: 28 | paths = weiner.sample(1000, time_horizon=1, time_steps=1000) 29 | mean = paths.mean() 30 | assert mean[0] == 0 31 | std = paths.std() 32 | assert std[0] == 0 33 | 34 | 35 | def test_support(weiner: WeinerProcess) -> None: 36 | m = weiner.marginal(0.01) 37 | pdf = m.pdf_from_characteristic(32) 38 | assert len(pdf.x) == 32 39 | 40 | 41 | def test_fft_v_frft(weiner: WeinerProcess) -> None: 42 | m = weiner.marginal(1) 43 | pdf1 = m.pdf_from_characteristic(128, max_frequency=10) 44 | pdf2 = m.pdf_from_characteristic(128, use_fft=True, max_frequency=200) 45 | y = np.interp(pdf1.x[10:-10], pdf2.x, pdf2.y) 46 | assert np.allclose(y, pdf1.y[10:-10], 1e-2) 47 | # 48 | # TODO: simpson rule seems to fail for FFT 49 | # pdf1 = m.pdf_from_characteristic(128, max_frequency=10, simpson_rule=True) 50 | # pdf2 = m.pdf_from_characteristic( 51 | # 128, use_fft=True, max_frequency=200, simpson_rule=True 52 | # ) 53 | # y = np.interp(pdf1.x[10:-10], pdf2.x, pdf2.y) 54 | # assert np.allclose(y, pdf1.y[10:-10], 1e-2) 55 | -------------------------------------------------------------------------------- /quantflow_tests/utils.py: -------------------------------------------------------------------------------- 
1 | from typing import cast 2 | 3 | import numpy as np 4 | 5 | from quantflow.sp.base import StochasticProcess1D 6 | from quantflow.utils.marginal import Marginal1D 7 | from quantflow.utils.plot import check_plotly 8 | 9 | try: 10 | check_plotly() 11 | has_plotly = True 12 | except ImportError: 13 | has_plotly = False 14 | 15 | 16 | def characteristic_tests(m: Marginal1D): 17 | assert m.characteristic(0) == 1 18 | u = np.linspace(0, 10, 1000) 19 | # test boundedness 20 | assert np.all(np.abs(m.characteristic(u)) <= 1) 21 | # hermitian symmetry 22 | np.testing.assert_allclose( 23 | m.characteristic(u), cast(np.ndarray, m.characteristic(-u)).conj() 24 | ) 25 | 26 | 27 | def analytical_tests(pr: StochasticProcess1D, tol: float = 1e-3): 28 | t = np.linspace(0.1, 2, 20) 29 | m = pr.marginal(t) 30 | np.testing.assert_allclose(m.mean(), m.mean_from_characteristic(), tol) 31 | np.testing.assert_allclose(m.std(), m.std_from_characteristic(), tol) 32 | np.testing.assert_allclose(m.variance(), m.variance_from_characteristic(), tol) 33 | -------------------------------------------------------------------------------- /readme.md: -------------------------------------------------------------------------------- 1 | # 2 | 3 | [![PyPI version](https://badge.fury.io/py/quantflow.svg)](https://badge.fury.io/py/quantflow) 4 | [![Python versions](https://img.shields.io/pypi/pyversions/quantflow.svg)](https://pypi.org/project/quantflow) 5 | [![Python downloads](https://img.shields.io/pypi/dd/quantflow.svg)](https://pypi.org/project/quantflow) 6 | [![build](https://github.com/quantmind/quantflow/actions/workflows/build.yml/badge.svg)](https://github.com/quantmind/quantflow/actions/workflows/build.yml) 7 | [![codecov](https://codecov.io/gh/quantmind/quantflow/branch/main/graph/badge.svg?token=wkH9lYKOWP)](https://codecov.io/gh/quantmind/quantflow) 8 | 9 | Quantitative analysis and pricing tools. 
10 | 11 | Documentation is available as a [quantflow jupyter book](https://quantflow.quantmind.com). 12 | 13 | ## Installation 14 | 15 | ```bash 16 | pip install quantflow 17 | ``` 18 | 19 | ![btcvol](https://github.com/quantmind/quantflow/assets/144320/88ed85d1-c3c5-489c-ac07-21b036593214) 20 | 21 | 22 | ## Modules 23 | 24 | * [quantflow.cli](https://github.com/quantmind/quantflow/tree/main/quantflow/cli) command line client (requires `quantflow[cli,data]`) 25 | * [quantflow.data](https://github.com/quantmind/quantflow/tree/main/quantflow/data) data APIs (requires `quantflow[data]`) 26 | * [quantflow.options](https://github.com/quantmind/quantflow/tree/main/quantflow/options) option pricing and calibration 27 | * [quantflow.sp](https://github.com/quantmind/quantflow/tree/main/quantflow/sp) stochastic process primitives 28 | * [quantflow.ta](https://github.com/quantmind/quantflow/tree/main/quantflow/ta) timeseries analysis tools 29 | * [quantflow.utils](https://github.com/quantmind/quantflow/tree/main/quantflow/utils) utilities and helpers 30 | 31 | 32 | 33 | ## Command line tools 34 | 35 | The command line tools become available when installing with the `cli` and `data` extras (quoted below so the brackets survive shell globbing). 36 | 37 | ```bash 38 | pip install "quantflow[cli,data]" 39 | ``` 40 | 41 | The `qf` command line tool can then be used to download data and run pricing and calibration scripts. 42 | --------------------------------------------------------------------------------
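The Black pricing tests in `quantflow_tests/test_options.py` above exercise a price → implied-vol round trip on a unit forward. A minimal self-contained sketch of that round trip — the standard undiscounted Black formula in log-moneyness, with a simple bisection solver standing in for the library's `implied_black_volatility`; illustrative only, not the library implementation:

```python
import math

def norm_cdf(x: float) -> float:
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_call(k: float, sigma: float, ttm: float) -> float:
    # Undiscounted Black call on a unit forward; k = log(K/F) is log-moneyness,
    # matching the convention used by the tests above.
    sq = sigma * math.sqrt(ttm)
    d1 = (-k + 0.5 * sq * sq) / sq
    return norm_cdf(d1) - math.exp(k) * norm_cdf(d1 - sq)

def implied_vol(k: float, price: float, ttm: float,
                lo: float = 0.01, hi: float = 2.0) -> float:
    # Bisection works because the Black price is strictly increasing in sigma
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if black_call(k, mid, ttm) < price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Round trip, as in test_atm_black_pricing: price at 20% vol, recover 20%
price = black_call(0.0, 0.2, 1.0)
assert abs(implied_vol(0.0, price, 1.0) - 0.2) < 1e-6
```

The deep in-the-money case in `test_ditm_black_pricing` checks out against this formula too: `black_call(math.log(0.6), 0.25, 1)` is approximately 0.4.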