├── tests
│   ├── __init__.py
│   ├── test_rhiza
│   │   ├── benchmarks
│   │   │   ├── .gitignore
│   │   │   ├── README.md
│   │   │   └── analyze_benchmarks.py
│   │   ├── test_marimushka_script.py
│   │   ├── test_structure.py
│   │   ├── README.md
│   │   ├── test_updatereadme_script.py
│   │   ├── test_readme.py
│   │   ├── test_docstrings.py
│   │   ├── test_release_script.py
│   │   ├── test_git_repo_fixture.py
│   │   └── test_bump_script.py
│   ├── standard_test_computations.py
│   ├── test_metadata.py
│   ├── test_loman_tree_functions.py
│   ├── Makefile.tests
│   ├── test_converters.py
│   ├── test_value_eq.py
│   ├── test_nodekeys.py
│   ├── test_dill_serialization.py
│   ├── test_computeengine_structure.py
│   ├── test_blocks.py
│   └── test_class_style_definition.py
├── docs
│   ├── requirements.txt
│   ├── _static
│   │   └── example000.png
│   ├── user
│   │   ├── features
│   │   │   ├── manipulating
│   │   │   │   ├── index.md
│   │   │   │   └── repointing_nodes.md
│   │   │   ├── index.md
│   │   │   ├── other
│   │   │   │   ├── index.md
│   │   │   │   ├── serializing_computations.md
│   │   │   │   └── interactive_debugging.md
│   │   │   ├── querying
│   │   │   │   ├── index.md
│   │   │   │   ├── show_as_dataframe.md
│   │   │   │   └── view_inputs_outputs.md
│   │   │   └── creating
│   │   │       ├── index.md
│   │   │       ├── constant_values.md
│   │   │       ├── adding_nodes_using_decorators.md
│   │   │       ├── tagging_nodes.md
│   │   │       ├── non_string_node_names.md
│   │   │       ├── automatically_expanding_named_tuples.md
│   │   │       └── creating_computation_factories.md
│   │   ├── install.md
│   │   ├── intro.md
│   │   └── strategies.md
│   ├── api.rst
│   ├── dev
│   │   └── release.md
│   ├── Makefile
│   ├── make.bat
│   ├── index.md
│   └── conf.py
├── src
│   └── loman
│       ├── serialization
│       │   ├── default.py
│       │   └── __init__.py
│       ├── consts.py
│       ├── compat.py
│       ├── __init__.py
│       ├── exception.py
│       ├── graph_utils.py
│       └── util.py
├── AUTHORS
├── .github
│   ├── rhiza
│   │   ├── template.yml
│   │   ├── scripts
│   │   │   ├── customisations
│   │   │   │   ├── post-release.sh
│   │   │   │   └── build-extras.sh
│   │   │   ├── marimushka.sh
│   │   │   ├── book.sh
│   │   │   └── update-readme-help.sh
│   │   ├── utils
│   │   │   ├── version_max.py
│   │   │   └── version_matrix.py
│   │   └── actions
│   │       └── setup-project
│   │           └── action.yml
│   ├── renovate.json
│   ├── workflows
│   │   ├── rhiza_rhiza.yml
│   │   ├── rhiza_pre-commit.yml
│   │   ├── rhiza_deptry.yml
│   │   ├── rhiza_ci.yml
│   │   ├── structure.yml
│   │   ├── rhiza_book.yml
│   │   ├── rhiza_sync.yml
│   │   ├── rhiza_marimo.yml
│   │   └── rhiza_devcontainer.yml
│   └── dependabot.yml
├── .editorconfig
├── CODE_OF_CONDUCT.md
├── .readthedocs.yaml
├── .devcontainer
│   ├── bootstrap.sh
│   └── devcontainer.json
├── .pre-commit-config.yaml
├── LICENSE
├── .rhiza.history
├── .gitignore
├── pyproject.toml
├── book
│   └── Makefile.book
├── CONTRIBUTING.md
├── presentation
│   └── Makefile.presentation
├── ruff.toml
└── CHANGELOG.md

/tests/__init__.py:
--------------------------------------------------------------------------------

--------------------------------------------------------------------------------
/tests/test_rhiza/benchmarks/.gitignore:
--------------------------------------------------------------------------------
benchmarks.json
benchmarks.svg
benchmarks.html
--------------------------------------------------------------------------------
/docs/requirements.txt:
--------------------------------------------------------------------------------
sphinx
sphinx-rtd-theme
pillow
mock
commonmark
recommonmark
myst-parser
--------------------------------------------------------------------------------
/docs/_static/example000.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/janushendersonassetallocation/loman/HEAD/docs/_static/example000.png
--------------------------------------------------------------------------------
/docs/user/features/manipulating/index.md:
--------------------------------------------------------------------------------
# Manipulating Computation Graphs

```{toctree}
:maxdepth: 1
:titlesonly:

repointing_nodes
```
--------------------------------------------------------------------------------
/docs/user/features/index.md:
--------------------------------------------------------------------------------
# Features

```{toctree}
:maxdepth: 1
:titlesonly:

creating/index
querying/index
manipulating/index
other/index
```
--------------------------------------------------------------------------------
/docs/api.rst:
--------------------------------------------------------------------------------
API Reference
=============

.. module:: loman

.. autoclass:: States
    :members:
    :undoc-members:

.. autoclass:: Computation
    :members:
--------------------------------------------------------------------------------
/docs/user/features/other/index.md:
--------------------------------------------------------------------------------
# Other Computation Graph Operations

```{toctree}
:maxdepth: 1
:titlesonly:

interactive_debugging
serializing_computations
```
--------------------------------------------------------------------------------
/docs/user/features/querying/index.md:
--------------------------------------------------------------------------------
# Querying Computation Graphs

```{toctree}
:maxdepth: 1
:titlesonly:

visualizing_computation_graph
show_as_dataframe
view_inputs_outputs
```
--------------------------------------------------------------------------------
/docs/user/features/creating/index.md:
--------------------------------------------------------------------------------
# Creating Computation Graphs

```{toctree}
:maxdepth: 1
:titlesonly:

creating_computation_factories
adding_nodes_using_decorators
constant_values
tagging_nodes
automatically_expanding_named_tuples
non_string_node_names
```
--------------------------------------------------------------------------------
/src/loman/serialization/default.py:
--------------------------------------------------------------------------------
"""Default transformer configuration for serialization."""

from .transformer import NdArrayTransformer, Transformer


def default_transformer(*args, **kwargs):
    """Create a default transformer with NdArray support."""
    t = Transformer(*args, **kwargs)
    t.register(NdArrayTransformer())
    return t
--------------------------------------------------------------------------------
/AUTHORS:
--------------------------------------------------------------------------------
List of contributors:

Ed Parcell
Mark Richardson
Harry Campion
Weston Platter
Allan Maymin
Eric Przybylinski
Paul Algreen
Chris Lundberg
Thomas Schmelzer

With thanks to:

Gedas Stanzys
John Montgomery
--------------------------------------------------------------------------------
/.github/rhiza/template.yml:
--------------------------------------------------------------------------------
template-repository: "jebel-quant/rhiza"
template-branch: "main"
include:
  - .devcontainer/
  - .github/
  - tests/
  - .editorconfig
  - .pre-commit-config.yaml
  - Makefile
  - presentation
  - book/Makefile.book
exclude:
  - .github/rhiza/scripts/customisations/build-extras.sh
  - .github/workflows/rhiza_docker.yml
  - .github/rhiza/CONFIG.md
  - .github/rhiza/TOKEN_SETUP.md
--------------------------------------------------------------------------------
/docs/dev/release.md:
--------------------------------------------------------------------------------
# Release Checklist

- Check CHANGELOG is up-to-date
- Check [Travis CI](https://travis-ci.org/janusassetallocation/loman) builds are passing
- Check [Read The Docs documentation builds](https://readthedocs.org/projects/loman/) are passing
- Update version string in
  - pyproject.toml
  - docs/conf.py
- Commit updated versions and tag (see the example below)
- Build the tar.gz and wheel: `python -m build`
- Upload the tar.gz and wheel: `twine upload dist/loman-x.y.z*`
- Email the community
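
A minimal sketch of the commit-and-tag step (the version `x.y.z`, remote and branch names are placeholders; adjust to your workflow):

```sh
git commit -am "Bump version to x.y.z"
git tag -a vx.y.z -m "Release x.y.z"
git push origin main --tags
```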
--------------------------------------------------------------------------------
/.github/renovate.json:
--------------------------------------------------------------------------------
{
  "extends": [
    "config:recommended",
    ":enablePreCommit",
    ":automergeMinor",
    ":dependencyDashboard",
    ":maintainLockFilesWeekly",
    ":semanticCommits",
    ":pinDevDependencies"
  ],
  "enabledManagers": [
    "pep621",
    "pre-commit",
    "github-actions",
    "devcontainer",
    "dockerfile"
  ],
  "timezone": "Asia/Dubai",
  "schedule": [
    "before 10am on tuesday"
  ]
}
--------------------------------------------------------------------------------
/.github/workflows/rhiza_rhiza.yml:
--------------------------------------------------------------------------------
name: RHIZA VALIDATE

permissions:
  contents: read

on:
  push:
  pull_request:
    branches: [ main, master ]

jobs:
  validation:
    runs-on: ubuntu-latest
    # don't run this in rhiza itself. Rhiza has no template.yml file.
    if: ${{ github.repository != 'jebel-quant/rhiza' }}
    container:
      image: ghcr.io/astral-sh/uv:0.9.18-python3.12-trixie

    steps:
      - name: Checkout repository
        uses: actions/checkout@v6

      - name: Validate Rhiza config
        shell: bash
        run: |
          uvx rhiza validate .
--------------------------------------------------------------------------------
/docs/Makefile:
--------------------------------------------------------------------------------
# Minimal makefile for Sphinx documentation
#

# You can set these variables from the command line.
SPHINXOPTS    =
SPHINXBUILD   = sphinx-build
SPHINXPROJ    = loman
SOURCEDIR     = .
BUILDDIR      = _build

# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help Makefile

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
--------------------------------------------------------------------------------
/src/loman/serialization/__init__.py:
--------------------------------------------------------------------------------
"""Serialization utilities for Loman computations."""

from .default import default_transformer
from .transformer import (
    CustomTransformer,
    MissingObject,
    NdArrayTransformer,
    Transformable,
    Transformer,
    UnrecognizedTypeError,
    UntransformableTypeError,
)

# Backward compatibility aliases
UnrecognizedTypeException = UnrecognizedTypeError
UntransformableTypeException = UntransformableTypeError

__all__ = [
    "default_transformer",
    "CustomTransformer",
    "MissingObject",
    "NdArrayTransformer",
    "Transformable",
    "Transformer",
    "UnrecognizedTypeException",
    "UntransformableTypeException",
]
--------------------------------------------------------------------------------
/docs/user/features/manipulating/repointing_nodes.md:
--------------------------------------------------------------------------------
# Repointing Nodes

It is possible to repoint existing nodes to a new node. This is useful when you want to make a small change to one node without recreating all its descendant nodes. As an example:

```pycon
>>> from loman import *
>>> comp = Computation()
>>> comp.add_node('a', value=2)
>>> comp.add_node('b', lambda a: a + 1)
>>> comp.add_node('c', lambda a: 10*a)
>>> comp.compute_all()
>>> comp.v.b
3
>>> comp.v.c
20
>>> comp.add_node('modified_a', lambda a: a*a)
>>> comp.compute_all()
>>> comp.v.a
2
>>> comp.v.modified_a
4
>>> comp.v.b
3
>>> comp.v.c
20
>>> comp.repoint('a', 'modified_a')
>>> comp.compute_all()
>>> comp.v.b
5
>>> comp.v.c
40
```
--------------------------------------------------------------------------------
/docs/make.bat:
--------------------------------------------------------------------------------
@ECHO OFF

pushd %~dp0

REM Command file for Sphinx documentation

if "%SPHINXBUILD%" == "" (
	set SPHINXBUILD=sphinx-build
)
set SOURCEDIR=.
set BUILDDIR=_build
set SPHINXPROJ=loman

if "%1" == "" goto help

%SPHINXBUILD% >NUL 2>NUL
if errorlevel 9009 (
	echo.
	echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
	echo.installed, then set the SPHINXBUILD environment variable to point
	echo.to the full path of the 'sphinx-build' executable. Alternatively you
	echo.may add the Sphinx directory to PATH.
	echo.
	echo.If you don't have Sphinx installed, grab it from
	echo.http://sphinx-doc.org/
	exit /b 1
)

%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS%
goto end

:help
%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS%

:end
popd
--------------------------------------------------------------------------------
/.editorconfig:
--------------------------------------------------------------------------------
# This file is part of the jebel-quant/rhiza repository
# (https://github.com/jebel-quant/rhiza).
#
root = true

# Default settings for all files
[*]
end_of_line = lf
trim_trailing_whitespace = true
insert_final_newline = true
charset = utf-8

# Python, reStructuredText, and text files
[*.{py,rst,txt}]
indent_style = space
indent_size = 4

# YAML, JSON, and other config files
[*.{yml,yaml,json}]
indent_style = space
indent_size = 2

# Markdown files
# [*.{md,markdown}]
# trim_trailing_whitespace = false

# Don't apply editorconfig rules to vendor/ resources
# This is a "defensive" rule for the day we may have
# the vendor folder
[vendor/**]
charset = unset
end_of_line = unset
indent_size = unset
indent_style = unset
insert_final_newline = unset
trim_trailing_whitespace = unset
--------------------------------------------------------------------------------
/docs/index.md:
--------------------------------------------------------------------------------
# Loman

Loman is a Python library to deal with complex dependencies between sets of calculations. You can think of it as `make` for calculations. By keeping track of the state of your computations, and the dependencies between them, it makes understanding calculation processes easier and allows on-demand full or partial recalculations. This makes it easy to efficiently implement robust real-time and batch systems, as well as providing a powerful mechanism for interactive work.

## User Guide

```{toctree}
:maxdepth: 2

user/intro
user/install
user/quickstart
user/features/index
user/strategies
```

## API Reference

```{toctree}
:maxdepth: 2

api
```

## Developer Guidelines

```{toctree}
:maxdepth: 2

dev/release
```

# Indices and tables

- {ref}`genindex`
- {ref}`modindex`
- {ref}`search`
--------------------------------------------------------------------------------
/tests/standard_test_computations.py:
--------------------------------------------------------------------------------
from loman import Computation, ComputationFactory, calc_node, input_node


@ComputationFactory
class BasicFourNodeComputation:
    a = input_node()

    @calc_node
    def b(a):
        return a + 1

    @calc_node
    def c(a):
        return 2 * a

    @calc_node
    def d(b, c):
        return b + c


def create_example_block_computation():
    comp_inner = BasicFourNodeComputation()
    comp_inner.insert("a", value=7)
    comp_inner.compute_all()
    comp = Computation()
    comp.add_block("foo", comp_inner, keep_values=False, links={"a": "input_foo"})
    comp.add_block("bar", comp_inner, keep_values=False, links={"a": "input_bar"})
    comp.add_node("output", lambda x, y: x + y, kwds={"x": "foo/d", "y": "bar/d"})
    comp.add_node("input_foo", value=7)
    comp.add_node("input_bar", value=10)
    return comp
--------------------------------------------------------------------------------
/docs/user/features/creating/constant_values.md:
--------------------------------------------------------------------------------
# Constant Values

When you are using a pre-existing function for a node, and one or more of its parameters takes a constant value, one option is to define a lambda that fixes the parameter value.
For example, below we use a lambda to fix the second parameter passed to the `add` function:

```pycon
>>> def add(x, y):
...     return x + y

>>> comp = Computation()
>>> comp.add_node('a', value=1)
>>> comp.add_node('b', lambda a: add(a, 1))
>>> comp.compute_all()
>>> comp.v.b
2
```

However, providing `ConstantValue` objects to the `args` or `kwds` parameters of `add_node` makes this simpler. `C` is an alias for `ConstantValue`, and in the example below we use it to tell node `b` to calculate by taking parameter `x` from node `a`, and `y` as the constant `1`:

```pycon
>>> comp = Computation()
>>> comp.add_node('a', value=1)
>>> comp.add_node('b', add, kwds={"x": "a", "y": C(1)})
>>> comp.compute_all()
>>> comp.v.b
2
```
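
The positional `args` parameter accepts `ConstantValue` objects in the same way. A minimal sketch, assuming the same `add` function as above (here `x` is taken from node `a` and `y` is the constant `1`):

```pycon
>>> comp = Computation()
>>> comp.add_node('a', value=1)
>>> comp.add_node('b', add, args=['a', C(1)])
>>> comp.compute_all()
>>> comp.v.b
2
```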
--------------------------------------------------------------------------------
/CODE_OF_CONDUCT.md:
--------------------------------------------------------------------------------
# Code of Conduct

## Purpose

This code of conduct outlines expectations for behavior when participating in the project with the aim of fostering a constructive and professional engineering environment.

## Expected Behavior

Participants are expected to:

- Be respectful in communication and collaboration.
- Focus on technical topics and project goals.
- Accept constructive feedback gracefully and offer it in kind.
- Assume good intentions from others and engage with a problem-solving mindset.
- Use appropriate language and conduct in all community spaces.

## Scope

This Code of Conduct applies in all project spaces—online and offline—and also applies when individuals are representing the project in public spaces.

## Enforcement

Project maintainers are responsible for enforcing the code and may take proportionate corrective action in response to unacceptable behavior.

## Attribution

Adapted from the [Contributor Covenant](https://www.contributor-covenant.org), with modifications for brevity and neutrality.
--------------------------------------------------------------------------------
/tests/test_metadata.py:
--------------------------------------------------------------------------------
import loman as lm


def test_simple_node_metadata():
    comp = lm.Computation()
    comp.add_node("foo", metadata={"test": "working"})
    assert comp.metadata("foo")["test"] == "working"


def test_simple_computation_metadata():
    comp = lm.Computation(metadata={"test": "working"})
    assert comp.metadata("")["test"] == "working"


def test_setting_node_metadata():
    comp = lm.Computation()
    comp.add_node("foo")
    comp.metadata("foo")["test"] = "working"
    assert comp.metadata("foo")["test"] == "working"


def test_setting_block_metadata():
    comp = lm.Computation()
    comp.add_node("foo/bar")
    comp.metadata("foo")["test"] = "working"
    assert comp.metadata("foo")["test"] == "working"


def test_setting_computation_block_metadata():
    """Test setting metadata on computation blocks."""
    comp_inner = lm.Computation()
    comp_inner.add_node("bar")

    comp = lm.Computation()
    comp.add_block("foo", comp_inner, metadata={"test": "working"})
    assert comp.metadata("foo")["test"] == "working"
--------------------------------------------------------------------------------
/src/loman/consts.py:
--------------------------------------------------------------------------------
"""Constants and enumerations for the loman computation engine."""

from enum import Enum


class States(Enum):
    """Possible states for a computation node."""

    PLACEHOLDER = 0
    UNINITIALIZED = 1
    STALE = 2
    COMPUTABLE = 3
    UPTODATE = 4
    ERROR = 5
    PINNED = 6


class NodeAttributes:
    """Constants for node attribute names in the computation graph."""

    VALUE = "value"
    STATE = "state"
    FUNC = "func"
    GROUP = "group"
    TAG = "tag"
    STYLE = "style"
    ARGS = "args"
    KWDS = "kwds"
    TIMING = "timing"
    EXECUTOR = "executor"
    CONVERTER = "converter"


class EdgeAttributes:
    """Constants for edge attribute names in the computation graph."""

    PARAM = "param"


class SystemTags:
    """System-level tags used internally by loman."""

    SERIALIZE = "__serialize__"
    EXPANSION = "__expansion__"


class NodeTransformations:
    """Node transformation types for visualization."""

    CONTRACT = "contract"
    COLLAPSE = "collapse"
    EXPAND = "expand"
--------------------------------------------------------------------------------
/docs/user/features/creating/adding_nodes_using_decorators.md:
--------------------------------------------------------------------------------
# Adding Nodes Using the `node` Decorator

Loman provides a decorator `@node`, which allows functions to be added to computations. The first parameter is the Computation object to add a node to. By default, it will take the node name from the function, and the names of input nodes from the names of the function's parameters, but any parameters provided are passed through to `add_node`, including `name`:

```pycon
>>> from loman import *
>>> comp = Computation()
>>> comp.add_node('a', value=1)

>>> @node(comp)
... def b(a):
...     return a + 1

>>> @node(comp, 'c', args=['a'])
... def foo(x):
...     return 2 * x

>>> @node(comp, kwds={'x': 'a', 'y': 'b'})
... def d(x, y):
...     return x + y

>>> comp.draw()
```

```{graphviz}
digraph {
    n0 [label=a fillcolor="#15b01a" style=filled]
    n1 [label=b fillcolor="#9dff00" style=filled]
    n2 [label=c fillcolor="#9dff00" style=filled]
    n3 [label=d fillcolor="#0343df" style=filled]
    n0 -> n1
    n0 -> n2
    n1 -> n3
    n2 -> n3
}
```
--------------------------------------------------------------------------------
/src/loman/compat.py:
--------------------------------------------------------------------------------
"""Compatibility utilities for function signature inspection."""

import inspect
from dataclasses import dataclass, field


@dataclass
class _Signature:
    kwd_params: list[str] = field()
    default_params: list[str] = field()
    has_var_args: bool = field()
    has_var_kwds: bool = field()


def get_signature(func):
    """Extract function signature information for compatibility purposes."""
    sig = inspect.signature(func)
    pk = inspect._ParameterKind
    has_var_args = False
    has_var_kwds = False
    all_keyword_params = []
    default_params = []
    for param_name, param in sig.parameters.items():
        if param.kind == pk.VAR_POSITIONAL:
            has_var_args = True
        elif param.kind == pk.VAR_KEYWORD:
            has_var_kwds = True
        elif param.kind in (pk.POSITIONAL_OR_KEYWORD, pk.KEYWORD_ONLY):
            all_keyword_params.append(param_name)
            if param.default != inspect._empty:
                default_params.append(param_name)
        else:
            raise NotImplementedError(f"Unexpected param kind: {param.kind}")
    return _Signature(all_keyword_params, default_params, has_var_args, has_var_kwds)
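

# Illustrative example (comment only, derived from the logic above): given
#     def f(a, b=1, *args, **kwargs): ...
# get_signature(f) returns
#     _Signature(kwd_params=['a', 'b'], default_params=['b'],
#                has_var_args=True, has_var_kwds=True)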
--------------------------------------------------------------------------------
/.readthedocs.yaml:
--------------------------------------------------------------------------------
# Read the Docs configuration file for Sphinx projects
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details

# Required
version: 2

# Set the OS, Python version and other tools you might need
build:
  os: ubuntu-22.04
  tools:
    python: "3.12"
    # You can also specify other tool versions:
    # nodejs: "20"
    # rust: "1.70"
    # golang: "1.20"
  apt_packages:
    - fontconfig
    - fonts-liberation
    - libfreetype6
    - graphviz
    - gsfonts
    - libcairo2

# Build documentation in the "docs/" directory with Sphinx
sphinx:
  configuration: docs/conf.py
  # You can configure Sphinx to use a different builder, for instance use the dirhtml builder for simpler URLs
  # builder: "dirhtml"
  # Fail on all warnings to avoid broken references
  # fail_on_warning: true

# Optionally build your docs in additional formats such as PDF and ePub
# formats:
#   - pdf
#   - epub

# Optional but recommended, declare the Python requirements required
# to build your documentation
# See https://docs.readthedocs.io/en/stable/guides/reproducible-builds.html
python:
  install:
    - requirements: docs/requirements.txt
--------------------------------------------------------------------------------
/tests/test_loman_tree_functions.py:
--------------------------------------------------------------------------------
"""Tests for Loman tree-related functions and node key operations."""

import loman as lm
from loman.nodekey import to_nodekey


def test_list_children():
    comp = lm.Computation()
    comp.add_node("foo1/bar1/baz1/a", value=1)
    comp.add_node("foo1/bar1/baz2/a", value=1)

    assert comp.get_tree_list_children("foo1/bar1") == {"baz1", "baz2"}


def test_has_path_has_node():
    comp = lm.Computation()
    comp.add_node("foo1/bar1/baz1/a", value=1)

    assert comp.has_node("foo1/bar1/baz1/a")
    assert comp.tree_has_path("foo1/bar1/baz1/a")
    assert not comp.has_node("foo1/bar1/baz1")
    assert comp.tree_has_path("foo1/bar1/baz1")
    assert not comp.has_node("foo1/bar1")
    assert comp.tree_has_path("foo1/bar1")


def test_nodekey_ancestors():
    nk = to_nodekey("foo/bar/baz")
    result = {x.name for x in nk.ancestors()}
    assert result == {"foo/bar/baz", "foo/bar", "foo", ""}


def test_tree_descendents():
    comp = lm.Computation()
    comp.add_node("foo/bar/baz")
    comp.add_node("foo/bar2")
    comp.add_node("beef/bar")

    assert comp.get_tree_descendents() == {"foo", "foo/bar", "foo/bar/baz", "foo/bar2", "beef", "beef/bar"}
--------------------------------------------------------------------------------
/src/loman/__init__.py:
--------------------------------------------------------------------------------
"""Loman: A Python library for building computation graphs.

Loman provides tools for creating and managing dependency-aware computation graphs
where nodes represent data or calculations, and edges represent dependencies.
"""

import loman.util as util
import loman.visualization as viz
from loman.computeengine import C, Computation, block, calc_node, computation_factory, input_node, node
from loman.consts import NodeTransformations, States
from loman.exception import (
    CannotInsertToPlaceholderNodeError,
    LoopDetectedError,
    MapError,
    NonExistentNodeError,
)
from loman.nodekey import Name, Names, NodeKey, to_nodekey
from loman.visualization import GraphView

# Backward compatibility alias
ComputationFactory = computation_factory

__all__ = [
    "util",
    "viz",
    "C",
    "Computation",
    "computation_factory",
    "ComputationFactory",  # Backward compatibility
    "block",
    "calc_node",
    "input_node",
    "node",
    "NodeTransformations",
    "States",
    "CannotInsertToPlaceholderNodeError",
    "LoopDetectedError",
    "MapError",
    "NonExistentNodeError",
    "Name",
    "Names",
    "NodeKey",
    "to_nodekey",
    "GraphView",
]
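
# Illustrative usage sketch (comment only; mirrors the examples in docs/):
#     from loman import Computation
#     comp = Computation()
#     comp.add_node("a", value=1)
#     comp.add_node("b", lambda a: a + 1)
#     comp.compute_all()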
--------------------------------------------------------------------------------
/.devcontainer/bootstrap.sh:
--------------------------------------------------------------------------------
#!/bin/bash
set -euo pipefail
IFS=$'\n\t'

# Use UV_INSTALL_DIR from environment or default to local bin
# In devcontainer, this is set to /home/vscode/.local/bin to avoid conflict with host
export UV_INSTALL_DIR="${UV_INSTALL_DIR:-./bin}"
export UV_BIN="${UV_INSTALL_DIR}/uv"
export UVX_BIN="${UV_INSTALL_DIR}/uvx"

# Only remove existing binaries if we are installing to the default ./bin location
# and we want to force re-installation (e.g. if OS changed)
if [ "$UV_INSTALL_DIR" = "./bin" ]; then
    rm -f "$UV_BIN" "$UVX_BIN"
fi

# Set UV environment variables to avoid prompts and warnings
export UV_VENV_CLEAR=1
export UV_LINK_MODE=copy

# Make UV environment variables persistent for all sessions
echo "export UV_LINK_MODE=copy" >> ~/.bashrc
echo "export UV_VENV_CLEAR=1" >> ~/.bashrc
echo "export PATH=\"$UV_INSTALL_DIR:\$PATH\"" >> ~/.bashrc

# Add to current PATH so subsequent commands can find uv
export PATH="$UV_INSTALL_DIR:$PATH"

make install

# Install Marimo tool for notebook editing
"$UV_BIN" tool install marimo

# Initialize pre-commit hooks if configured
if [ -f .pre-commit-config.yaml ]; then
    # uvx runs tools without requiring them in the project deps
    "$UVX_BIN" pre-commit install
fi
--------------------------------------------------------------------------------
/.github/workflows/rhiza_pre-commit.yml:
--------------------------------------------------------------------------------
# This file is part of the jebel-quant/rhiza repository
# (https://github.com/jebel-quant/rhiza).
#
# Workflow: Pre-commit
#
# Purpose: This workflow runs pre-commit checks to ensure code quality
#          and consistency across the codebase. It helps catch issues
#          like formatting errors, linting issues, and other code quality
#          problems before they are merged.
#
# Trigger: This workflow runs on every push and on pull requests to main/master
#          branches (including from forks)
#
# Components:
#   - 🔍 Run pre-commit checks using reusable action

name: "PRE-COMMIT"
permissions:
  contents: read

on:
  push:
  pull_request:
    branches: [ main, master ]

jobs:
  pre-commit:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v6

      - name: Get Python version
        id: get-python
        run: |
          echo "python-version=$(python ./.github/rhiza/utils/version_max.py)" >> "$GITHUB_OUTPUT"

      - uses: ./.github/rhiza/actions/setup-project
        with:
          python-version: ${{ steps.get-python.outputs.python-version }}
          uv-extra-index-url: ${{ secrets.UV_EXTRA_INDEX_URL }}

      - uses: pre-commit/action@v3.0.1
--------------------------------------------------------------------------------
/docs/user/features/querying/show_as_dataframe.md:
--------------------------------------------------------------------------------
# Showing a computation as a DataFrame

`Computation` objects have a method `to_df()` which allows them to be shown as a DataFrame.
This provides a quick summary of the states and values of each node, as well as useful timing information:

```pycon
>>> from loman import *

>>> comp = Computation()
>>> comp.add_node('a', value=1)
>>> comp.add_node('b', lambda a: a + 1)
>>> comp.add_node('c', lambda a: 2 * a)
>>> comp.add_node('d', lambda b, c: b + c)
>>> comp.compute_all()

>>> comp.to_df()
```

|    | state           |   value | start                      | end                        |   duration |
|:---|:----------------|--------:|:---------------------------|:---------------------------|-----------:|
| a  | States.UPTODATE |       1 | NaT                        | NaT                        |        nan |
| b  | States.UPTODATE |       2 | 2024-11-30 18:49:41.626849 | 2024-11-30 18:49:41.626849 |          0 |
| c  | States.UPTODATE |       2 | 2024-11-30 18:49:41.626849 | 2024-11-30 18:49:41.626849 |          0 |
| d  | States.UPTODATE |       4 | 2024-11-30 18:49:41.626849 | 2024-11-30 18:49:41.626849 |          0 |

:::{tip}
If your values are not scalars, it can be useful to drop the value column.

```pycon
>>> comp.to_df().drop(columns='value')
```
:::
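
Because `to_df()` returns an ordinary pandas DataFrame, it can also be filtered like one. For example, to list only the nodes in the error state (a sketch, assuming the `state` column holds `States` values as shown above):

```pycon
>>> df = comp.to_df()
>>> df[df['state'] == States.ERROR]
```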
--------------------------------------------------------------------------------
/docs/user/features/creating/tagging_nodes.md:
--------------------------------------------------------------------------------
# Tagging Nodes

Nodes can be tagged with string tags, either when the node is added, using the `tags` parameter of `add_node`, or later, using the `set_tag` or `set_tags` methods, which can take a single node or a list of nodes:

```pycon
>>> from loman import *
>>> comp = Computation()
>>> comp.add_node('a', value=1, tags=['foo'])
>>> comp.add_node('b', lambda a: a + 1)
>>> comp.set_tag(['a', 'b'], 'bar')
```

:::{note}
Tags beginning and ending with double-underscores ("\_\_\[tag\]\_\_") are reserved for internal use by Loman.
:::

The tags associated with a node can be inspected using the `tags` method, or the `t` attribute-style accessor:

```pycon
>>> comp.tags('a')
{'__serialize__', 'bar', 'foo'}
>>> comp.t.b
{'__serialize__', 'bar'}
```

Tags can also be cleared with the `clear_tag` and `clear_tags` methods:

```pycon
>>> comp.clear_tag(['a', 'b'], 'foo')
>>> comp.t.a
{'__serialize__', 'bar'}
```

By design, no error is thrown if a tag is added to a node that already has that tag, nor if a tag is cleared from a node that does not have that tag.

In future, it is intended that tags will be usable to control graph drawing and calculation (for example, by requesting that only nodes with or without certain tags are rendered or calculated).
--------------------------------------------------------------------------------
/src/loman/exception.py:
--------------------------------------------------------------------------------
"""Exception classes for the loman computation engine."""


class ComputationError(Exception):
    """Base exception for computation-related errors."""

    pass


class MapError(ComputationError):
    """Exception raised during map operations with partial results."""

    def __init__(self, message, results):
        """Initialize MapError with message and partial results."""
        super().__init__(message)
        self.results = results
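

# Illustrative example (comment only): a caller of a map-style operation might
# recover the partial results carried by a MapError, e.g.
#     try:
#         run_map_operation(...)   # hypothetical caller
#     except MapError as err:
#         partial = err.results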


class LoopDetectedError(ComputationError):
    """Exception raised when a dependency loop is detected."""

    pass


class NonExistentNodeError(ComputationError):
    """Exception raised when trying to access a non-existent node."""

    pass


class NodeAlreadyExistsError(ComputationError):
    """Exception raised when trying to create a node that already exists."""

    pass


class CannotInsertToPlaceholderNodeError(ComputationError):
    """Exception raised when trying to insert into a placeholder node."""

    pass


# Backward compatibility aliases
MapException = MapError
LoopDetectedException = LoopDetectedError
NonExistentNodeException = NonExistentNodeError
NodeAlreadyExistsException = NodeAlreadyExistsError
CannotInsertToPlaceholderNodeException = CannotInsertToPlaceholderNodeError
--------------------------------------------------------------------------------
/.github/workflows/rhiza_deptry.yml:
--------------------------------------------------------------------------------
# This file is part of the jebel-quant/rhiza repository
# (https://github.com/jebel-quant/rhiza).
#
# Workflow: Deptry
#
# Purpose: This workflow identifies missing and obsolete dependencies in the project.
#          It helps maintain a clean dependency tree by detecting unused packages and
#          implicit dependencies that should be explicitly declared.
#
# Trigger: This workflow runs on every push and on pull requests to main/master
#          branches (including from forks)

name: "DEPTRY"

# Permissions: Only read access to repository contents is needed
permissions:
  contents: read

on:
  push:
  pull_request:
    branches: [ main, master ]

jobs:
  deptry:
    name: Check dependencies with deptry
    runs-on: ubuntu-latest
    container:
      image: ghcr.io/astral-sh/uv:0.9.18-python3.12-trixie

    steps:
      - uses: actions/checkout@v6

      - name: Run deptry
        run: |
          set -euo pipefail
          if [ -f "pyproject.toml" ]; then
            if [ -d "src" ]; then
              SOURCE_FOLDER="src"
            else
              SOURCE_FOLDER="."
            fi
            uvx deptry "$SOURCE_FOLDER"
          else
            printf "${YELLOW:-}[WARN] No pyproject.toml found, skipping deptry${RESET:-}\n"
          fi
--------------------------------------------------------------------------------
/tests/Makefile.tests:
--------------------------------------------------------------------------------
## Makefile.tests - Testing targets
# This file is included by the main Makefile

# Declare phony targets (they don't produce files)
.PHONY: test benchmark

# Test-specific variables
TESTS_FOLDER := tests

##@ Development and Testing
test: install ## run all tests
	@if [ -d ${SOURCE_FOLDER} ] && [ -d ${TESTS_FOLDER} ]; then \
		mkdir -p _tests/html-coverage _tests/html-report; \
		${UV_BIN} pip install pytest pytest-cov pytest-html; \
		${UV_BIN} run pytest ${TESTS_FOLDER} --ignore=${TESTS_FOLDER}/benchmarks --cov=${SOURCE_FOLDER} --cov-report=term --cov-report=html:_tests/html-coverage --html=_tests/html-report/report.html; \
	else \
		printf "${YELLOW}[WARN] Source folder ${SOURCE_FOLDER} or tests folder ${TESTS_FOLDER} not found, skipping tests${RESET}\n"; \
	fi

benchmark: install ## run performance benchmarks
	@if [ -d "${TESTS_FOLDER}/benchmarks" ]; then \
		printf "${BLUE}[INFO] Running performance benchmarks...${RESET}\n"; \
		${UV_BIN} pip install pytest-benchmark==5.2.3 pygal==3.1.0; \
		${UV_BIN} run pytest "${TESTS_FOLDER}/benchmarks/" \
			--benchmark-only \
			--benchmark-histogram=tests/test_rhiza/benchmarks/benchmarks \
			--benchmark-json=tests/test_rhiza/benchmarks/benchmarks.json; \
		${UV_BIN} run tests/test_rhiza/benchmarks/analyze_benchmarks.py ; \
	else \
		printf "${YELLOW}[WARN] Benchmarks folder not found, skipping benchmarks${RESET}\n"; \
	fi
--------------------------------------------------------------------------------
/.pre-commit-config.yaml:
--------------------------------------------------------------------------------
# This file is part of the jebel-quant/rhiza repository
# (https://github.com/jebel-quant/rhiza).
#
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v6.0.0
    hooks:
      - id: check-toml
      - id: check-yaml

  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: 'v0.14.10'
    hooks:
      - id: ruff
        args: [ --fix, --exit-non-zero-on-fix, --unsafe-fixes ]

      # Run the formatter
      - id: ruff-format

  - repo: https://github.com/igorshubovych/markdownlint-cli
    rev: v0.47.0
    hooks:
      - id: markdownlint
        args: ["--disable", "MD013"]

  - repo: https://github.com/python-jsonschema/check-jsonschema
    rev: 0.36.0
    hooks:
      - id: check-renovate
        args: [ "--verbose" ]

      - id: check-github-workflows
        args: ["--verbose"]

  - repo: https://github.com/rhysd/actionlint
    rev: v1.7.9
    hooks:
      - id: actionlint
        args: [ -ignore, SC ]

  - repo: https://github.com/abravalheri/validate-pyproject
    rev: v0.24.1
    hooks:
      - id: validate-pyproject

  - repo: local
    hooks:
      - id: update-readme-help
        name: Update README with Makefile help output
        entry: /bin/sh .github/rhiza/scripts/update-readme-help.sh
        language: system
        files: '^Makefile$'
        pass_filenames: false
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
BSD 3-Clause License

Copyright (c) 2017, Janus Capital Group
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

* Redistributions of source code must retain the above copyright notice, this
  list of conditions and the following disclaimer.

* Redistributions in binary form must reproduce the above copyright notice,
  this list of conditions and the following disclaimer in the documentation
  and/or other materials provided with the distribution.

* Neither the name of the copyright holder nor the names of its
  contributors may be used to endorse or promote products derived from
  this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
--------------------------------------------------------------------------------
/docs/user/features/creating/non_string_node_names.md:
--------------------------------------------------------------------------------
# Non-string node names

In the previous example, the nodes have all been given strings as keys. This is not a requirement, and in fact any object that could be used as a key in a dictionary can be a key for a node.
As function parameters can only be strings, we have to rely on the `kwds` argument to `add_node` to specify which nodes should be used as inputs for calculation nodes' functions. For a simple but frivolous example, we can represent a finite part of the Fibonacci sequence using tuples of the form `('fib', [int])` as keys:

```pycon
>>> comp = Computation()
>>> comp.add_node(('fib', 1), value=1)
>>> comp.add_node(('fib', 2), value=1)
>>> for i in range(3,7):
...     comp.add_node(('fib', i), lambda x, y: x + y, kwds={'x': ('fib', i - 1), 'y': ('fib', i - 2)})
...
>>> comp.draw()
```

```{graphviz}
digraph {
    n0 [label="('fib', 1)" fillcolor="#15b01a" style=filled]
    n1 [label="('fib', 2)" fillcolor="#15b01a" style=filled]
    n2 [label="('fib', 3)" fillcolor="#9dff00" style=filled]
    n3 [label="('fib', 4)" fillcolor="#0343df" style=filled]
    n4 [label="('fib', 5)" fillcolor="#0343df" style=filled]
    n5 [label="('fib', 6)" fillcolor="#0343df" style=filled]
    n0 -> n2
    n1 -> n2
    n1 -> n3
    n2 -> n3
    n2 -> n4
    n3 -> n4
    n3 -> n5
    n4 -> n5
}
```

```pycon
>>> comp.compute_all()
>>> comp.value(('fib', 6))
8
```
--------------------------------------------------------------------------------
/docs/user/features/creating/automatically_expanding_named_tuples.md:
--------------------------------------------------------------------------------
# Automatically expanding named tuples

Often, a calculation will return more than one result. For example, a numerical solver may return the best solution it found, along with a status indicating whether the solver converged. Python introduced namedtuples in version 2.6. A namedtuple is a tuple-like object where each element can be accessed by name, as well as by position. If a node will always contain a given type of namedtuple, Loman has a convenience method `add_named_tuple_expansion` which will create new nodes for each element of a namedtuple, using the naming convention **parent_node.tuple_element_name**. This can be useful for clarity when different downstream nodes depend on different parts of a computation result:

```pycon
>>> from collections import namedtuple
>>> Coordinate = namedtuple('Coordinate', ['x', 'y'])
>>> comp = Computation()
>>> comp.add_node('a', value=1)
>>> comp.add_node('b', lambda a: Coordinate(a+1, a+2))
>>> comp.add_named_tuple_expansion('b', Coordinate)
>>> comp.add_node('c', lambda *args: sum(args), args=['b.x', 'b.y'])
>>> comp.compute_all()
>>> comp.get_value_dict()
{'a': 1, 'b': Coordinate(x=2, y=3), 'b.x': 2, 'b.y': 3, 'c': 5}
>>> comp.draw()
```

```{graphviz}
digraph {
    n0 [label=a fillcolor="#15b01a" style=filled]
    n1 [label=b fillcolor="#9dff00" style=filled]
    n2 [label="b.x" fillcolor="#0343df" style=filled]
    n3 [label="b.y" fillcolor="#0343df" style=filled]
    n4 [label=c fillcolor="#0343df" style=filled]
    n0 -> n1
    n1 -> n2
    n1 -> n3
    n2 -> n4
    n3 -> n4
}
```
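
Expanded nodes are ordinary nodes, so they can also be read individually. A small sketch, following on from the example above:

```pycon
>>> comp.value('b.x')
2
>>> comp.value('b.y')
3
```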
--------------------------------------------------------------------------------
/.github/dependabot.yml:
--------------------------------------------------------------------------------
# This file is part of the jebel-quant/rhiza repository
# (https://github.com/jebel-quant/rhiza).
#
# Configuration: Dependabot
#
# Purpose: Automate dependency updates for Python packages, GitHub Actions, and Docker images.
#
# Documentation: https://docs.github.com/en/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file

version: 2
updates:
  # Python dependencies (pip/pyproject.toml)
  - package-ecosystem: "pip"
    directory: "/"
    schedule:
      interval: "weekly"
      day: "tuesday"
      time: "09:00"
      timezone: "Asia/Dubai"
    open-pull-requests-limit: 10
    labels:
      - "dependencies"
      - "python"
    commit-message:
      prefix: "chore(deps)"
      prefix-development: "chore(deps-dev)"
      include: "scope"

  # GitHub Actions
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"
      day: "tuesday"
      time: "09:00"
      timezone: "Asia/Dubai"
    open-pull-requests-limit: 10
    labels:
      - "dependencies"
      - "github-actions"
    commit-message:
      prefix: "chore(deps)"
      include: "scope"

  # Docker
  #- package-ecosystem: "docker"
  #  directory: "/docker"
  #  schedule:
  #    interval: "weekly"
  #    day: "tuesday"
  #    time: "09:00"
  #    timezone: "Asia/Dubai"
  #  open-pull-requests-limit: 10
  #  labels:
  #    - "dependencies"
  #    - "docker"
  #  commit-message:
  #    prefix: "chore(deps)"
  #    include: "scope"
--------------------------------------------------------------------------------
/.rhiza.history:
--------------------------------------------------------------------------------
# Rhiza Template History
# This file lists all files managed by the Rhiza template.
# Template repository: jebel-quant/rhiza
# Template branch: main
#
# Files under template control:
.devcontainer/bootstrap.sh
.devcontainer/devcontainer.json
.editorconfig
.github/dependabot.yml
.github/renovate.json
.github/rhiza/actions/setup-project/action.yml
.github/rhiza/scripts/book.sh
.github/rhiza/scripts/bump.sh
.github/rhiza/scripts/customisations/post-release.sh
.github/rhiza/scripts/marimushka.sh
.github/rhiza/scripts/release.sh
.github/rhiza/scripts/update-readme-help.sh
.github/rhiza/utils/version_matrix.py
.github/rhiza/utils/version_max.py
.github/workflows/rhiza_book.yml
.github/workflows/rhiza_ci.yml
.github/workflows/rhiza_deptry.yml
.github/workflows/rhiza_devcontainer.yml
.github/workflows/rhiza_marimo.yml
.github/workflows/rhiza_pre-commit.yml
.github/workflows/rhiza_release.yml
.github/workflows/rhiza_rhiza.yml
.github/workflows/rhiza_sync.yml
.pre-commit-config.yaml
Makefile
book/Makefile.book
presentation/Makefile.presentation
presentation/README.md
tests/Makefile.tests
tests/test_rhiza/README.md
tests/test_rhiza/benchmarks/.gitignore
tests/test_rhiza/benchmarks/README.md
tests/test_rhiza/benchmarks/analyze_benchmarks.py
tests/test_rhiza/conftest.py
tests/test_rhiza/test_bump_script.py
tests/test_rhiza/test_docstrings.py
tests/test_rhiza/test_git_repo_fixture.py
tests/test_rhiza/test_makefile.py
tests/test_rhiza/test_marimushka_script.py
tests/test_rhiza/test_readme.py
tests/test_rhiza/test_release_script.py
tests/test_rhiza/test_structure.py
tests/test_rhiza/test_updatereadme_script.py
--------------------------------------------------------------------------------
/.github/rhiza/scripts/customisations/post-release.sh:
--------------------------------------------------------------------------------
#!/bin/bash
# This file is part of the jebel-quant/rhiza repository
# (https://github.com/jebel-quant/rhiza).
#
# Optional hook script for post-release actions
#
# Purpose: This script is called automatically after a release is created.
#          Use it to perform additional tasks after the release process completes
#          (e.g., notifications, cleanup, triggering deployments).
#
# When it runs:
#   - Called by: make release (after release tag is created and pushed)
#   - Environment: GitHub Actions runner or local development machine
#
# How to use:
#   1. Add your custom post-release commands below
#   2. Make sure the script is executable: chmod +x .github/rhiza/scripts/customisations/post-release.sh
#   3. Commit to your repository
#
# Examples:
#   - Send notifications (Slack, email, etc.)
#   - Trigger deployment workflows
#   - Update external documentation sites
#   - Clean up temporary release artifacts
#
# Note: If you customize this file in your repository, add it to the exclude list
#       in template.yml to prevent it from being overwritten by template updates:
#         exclude: |
#           .github/rhiza/scripts/customisations/post-release.sh
#

set -euo pipefail

echo "Running post-release.sh..."

# Add your custom post-release commands here

# Example: Send notification
# if command -v curl &> /dev/null && [ -n "${SLACK_WEBHOOK_URL:-}" ]; then
#     echo "Sending notification to Slack..."
#     curl -X POST -H 'Content-type: application/json' \
#         --data '{"text":"New release created!"}' \
#         "${SLACK_WEBHOOK_URL}"
# fi

echo "Post-release setup complete."
--------------------------------------------------------------------------------
/.github/rhiza/scripts/customisations/build-extras.sh:
--------------------------------------------------------------------------------
#!/bin/bash
# This file is part of the jebel-quant/rhiza repository
# (https://github.com/jebel-quant/rhiza).
#
# Optional hook script for installing extra dependencies
#
# Purpose: This script is called automatically during the install phase and before
#          building documentation. Use it to install additional system packages or
#          dependencies that your project needs (e.g., graphviz for diagrams).
#
# When it runs:
#   - Called by: make install-extras (during setup phase)
#   - Also runs before: make test, make book, make docs
#   - Environment: GitHub Actions runner or local development machine
#
# How to use:
#   1. Add your custom installation commands below
#   2. Make sure the script is executable: chmod +x .github/rhiza/scripts/customisations/build-extras.sh
#   3. Commit to your repository
#
# Examples:
#   - Install system packages: apt-get, brew, yum, etc.
#   - Install optional dependencies for documentation
#   - Download or build tools needed by your project
#
# Note: If you customize this file in your repository, add it to the exclude list
#       in template.yml to prevent it from being overwritten by template updates:
#         exclude: |
#           .github/rhiza/scripts/customisations/build-extras.sh
#

set -euo pipefail

echo "Running build-extras.sh..."

# Add your custom installation commands here

# Example: graphviz
# Good practice to check if already installed.

if ! command -v dot &> /dev/null; then
    echo "graphviz not found, installing..."
    sudo apt-get update && sudo apt-get install -y graphviz
else
    echo "graphviz is already installed, skipping installation."
fi

echo "Build extras setup complete."
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
.DS_Store
_pdoc
_book
_tests
_marimushka
__marimo__

# temp file used by Junie
.output.txt

# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# folder used for task
bin

# Distribution / packaging
.Python
env/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg

# PyInstaller
#  Usually these files are written by a python script from a template
#  before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*,cover
.hypothesis/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# pyenv
.python-version

# celery beat schedule file
celerybeat-schedule

# SageMath parsed files
*.sage.py

# dotenv
.env

# virtualenv
.venv
venv/
ENV/

# Spyder project settings
.spyderproject

# Rope project settings
.ropeproject

# PyCharm project settings
.idea
.vscode

# Other
/MANIFEST
/README.html
--------------------------------------------------------------------------------
/.github/workflows/rhiza_ci.yml:
--------------------------------------------------------------------------------
# This file is part of the jebel-quant/rhiza repository
# (https://github.com/jebel-quant/rhiza).
#
# Workflow: Continuous Integration
#
# Purpose: Run tests on multiple Python versions to ensure compatibility.
#
# Trigger: On push and pull requests to main/master branches.

name: CI

permissions:
  contents: read

on:
  push:
  pull_request:
    branches: [main, master]

jobs:
  generate-matrix:
    runs-on: ubuntu-latest
    outputs:
      matrix: ${{ steps.versions.outputs.list }}
    steps:
      - uses: actions/checkout@v6

      - id: versions
        run: |
          # Generate Python versions JSON from the script
          JSON=$(python ./.github/rhiza/utils/version_matrix.py)
          echo "list=$JSON" >> "$GITHUB_OUTPUT"
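
      # Note: version_matrix.py is expected to print a JSON array of version
      # strings (for example ["3.11", "3.12"]; illustrative only) so that
      # fromJson below can expand it into the build matrix.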
9 | 10 | name: CI 11 | 12 | permissions: 13 | contents: read 14 | 15 | on: 16 | push: 17 | pull_request: 18 | branches: [main, master] 19 | 20 | jobs: 21 | generate-matrix: 22 | runs-on: ubuntu-latest 23 | outputs: 24 | matrix: ${{ steps.versions.outputs.list }} 25 | steps: 26 | - uses: actions/checkout@v6 27 | 28 | - id: versions 29 | run: | 30 | # Generate Python versions JSON from the script 31 | JSON=$(python ./.github/rhiza/utils/version_matrix.py) 32 | echo "list=$JSON" >> "$GITHUB_OUTPUT" 33 | 34 | - name: Debug matrix 35 | run: | 36 | echo "Python versions: ${{ steps.versions.outputs.list }}" 37 | 38 | test: 39 | needs: generate-matrix 40 | runs-on: ubuntu-latest 41 | strategy: 42 | matrix: 43 | python-version: ${{ fromJson(needs.generate-matrix.outputs.matrix) }} 44 | fail-fast: false 45 | 46 | steps: 47 | - name: Checkout repository 48 | uses: actions/checkout@v6 49 | with: 50 | lfs: true 51 | 52 | - name: Setup the project 53 | uses: ./.github/rhiza/actions/setup-project 54 | with: 55 | python-version: ${{ matrix.python-version }} 56 | uv-extra-index-url: ${{ secrets.UV_EXTRA_INDEX_URL }} 57 | 58 | - name: Install dependencies 59 | run: | 60 | if [ -f "tests/requirements.txt" ]; then 61 | uv pip install -r tests/requirements.txt 62 | fi 63 | uv pip install pytest 64 | 65 | - name: Run tests 66 | run: uv run pytest tests 67 | -------------------------------------------------------------------------------- /.devcontainer/devcontainer.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "Python 3.14 (Rhiza)", 3 | "image": "mcr.microsoft.com/devcontainers/python:3.14", 4 | "hostRequirements": { 5 | "cpus": 4 6 | }, 7 | "features": { 8 | "ghcr.io/devcontainers/features/common-utils:2": {}, 9 | "ghcr.io/devcontainers/features/git:1": {} 10 | }, 11 | "mounts": [ 12 | "source=${localEnv:HOME}/.ssh,target=/home/vscode/.ssh,type=bind,consistency=cached" 13 | ], 14 | "containerEnv": { 15 | "SSH_AUTH_SOCK": "${localEnv:SSH_AUTH_SOCK}", 16 | "UV_INSTALL_DIR": "/home/vscode/.local/bin" 17 | }, 18 | "forwardPorts": [8080], 19 | "customizations": { 20 | "vscode": { 21 | "settings": { 22 | "python.defaultInterpreterPath": ".venv/bin/python", 23 | "python.testing.pytestEnabled": true, 24 | "python.testing.unittestEnabled": false, 25 | "python.testing.pytestArgs": ["."], 26 | "python.terminal.activateEnvInCurrentTerminal": true, 27 | "marimo.pythonPath": ".venv/bin/python", 28 | "marimo.marimoPath": ".venv/bin/marimo" 29 | }, 30 | "extensions": [ 31 | // Python Development 32 | "ms-python.python", 33 | "ms-python.vscode-pylance", 34 | // Marimo/Notebooks 35 | "marimo-team.vscode-marimo", 36 | "marimo-ai.marimo-vscode", 37 | // Linting and Formatting 38 | "charliermarsh.ruff", 39 | "tamasfe.even-better-toml", 40 | // Build Tools 41 | "ms-vscode.makefile-tools", 42 | // AI Assistance 43 | "github.copilot-chat", 44 | "github.copilot" 45 | ] 46 | } 47 | }, 48 | "onCreateCommand": ".devcontainer/bootstrap.sh", 49 | "remoteUser": "vscode" 50 | } 51 | -------------------------------------------------------------------------------- /docs/user/features/other/serializing_computations.md: -------------------------------------------------------------------------------- 1 | # Serializing computations 2 | 3 | Loman can serialize computations to disk using the dill package. 
This is useful when a system needs to store the inputs, intermediates, and results of a scheduled calculation for later inspection: 4 |
5 | ```pycon
6 | >>> comp = Computation()
7 | >>> comp.add_node('a', value=1)
8 | >>> comp.add_node('b', lambda a: a + 1)
9 | >>> comp.compute_all()
10 | >>> comp.draw()
11 | ```
12 |
13 | ```{graphviz}
14 | digraph {
15 | n0 [label=a fillcolor="#15b01a" style=filled]
16 | n1 [label=b fillcolor="#15b01a" style=filled]
17 | n0 -> n1
18 | }
19 | ```
20 |
21 | ```pycon
22 | >>> comp.to_dict()
23 | {'a': 1, 'b': 2}
24 | >>> comp.write_dill('foo.dill')
25 | >>> comp2 = Computation.read_dill('foo.dill')
26 | >>> comp2.draw()
27 | ```
28 |
29 | ```{graphviz}
30 | digraph {
31 | n0 [label=a fillcolor="#15b01a" style=filled]
32 | n1 [label=b fillcolor="#15b01a" style=filled]
33 | n0 -> n1
34 | }
35 | ```
36 |
37 | ```pycon
38 | >>> comp.get_value_dict()
39 | {'a': 1, 'b': 2}
40 | ```
41 |
42 | It is also possible to request that a particular node not be serialized, in which case it will have no value and will be in an uninitialized state when it is deserialized. This can be useful where an object is not serializable, or where data is not licensed to be distributed:
43 |
44 | ```pycon
45 | >>> comp.add_node('a', value=1, serialize=False)
46 | >>> comp.compute_all()
47 | >>> comp.write_dill('foo.dill')
48 | >>> comp2 = Computation.read_dill('foo.dill')
49 | >>> comp2.draw()
50 | ```
51 |
52 | ```{graphviz}
53 | digraph {
54 | n0 [label=a fillcolor="#0343df" style=filled]
55 | n1 [label=b fillcolor="#15b01a" style=filled]
56 | n0 -> n1
57 | }
58 | ```
59 |
60 | :::{note}
61 | The serialization format is not currently stabilized. While it is convenient to be able to inspect the results of previous calculations, this method should *not* be relied on for long-term storage.
62 | :::
63 | -------------------------------------------------------------------------------- /docs/user/features/other/interactive_debugging.md: -------------------------------------------------------------------------------- 1 | # Interactive Debugging
2 |
3 | As shown in the quickstart section "Error-handling", loman makes it easy to see a traceback for any exceptions that are raised while calculating nodes, and also makes it easy to update calculation functions in-place to fix errors. However, it is often desirable to use Python's interactive debugger at the exact time that an error occurs. To support this, the `compute` method takes a parameter `raise_exceptions`. When it is `False` (the default), nodes are set to state ERROR when exceptions occur during their calculation. When it is set to `True`, exceptions are not caught, allowing the user to invoke the interactive debugger:
4 |
5 | ```pycon
6 | >>> comp = Computation()
7 | >>> comp.add_node('numerator', value=1)
8 | >>> comp.add_node('divisor', value=0)
9 | >>> comp.add_node('result', lambda numerator, divisor: numerator / divisor)
10 | >>> comp.compute('result', raise_exceptions=True)
11 | ```
12 |
13 | ```pycon
14 | ---------------------------------------------------------------------------
15 |
16 | ZeroDivisionError Traceback (most recent call last)
17 |
18 | <ipython-input> in <module>()
19 | ----> 1 comp.compute('result', raise_exceptions=True)
20 |
21 |
22 | [... skipped ...]
23 |
24 |
25 | <ipython-input> in <lambda>(numerator, divisor)
26 | 3 comp.add_node('numerator', value=1)
27 | 4 comp.add_node('divisor', value=0)
28 | ----> 5 comp.add_node('result', lambda numerator, divisor: numerator / divisor)
29 |
30 |
31 | ZeroDivisionError: division by zero
32 | ```
33 |
34 | ```pycon
35 | %debug
36 |
37 | > <ipython-input>(5)<lambda>()
38 | 1 from loman import *
39 | 2 comp = Computation()
40 | 3 comp.add_node('numerator', value=1)
41 | 4 comp.add_node('divisor', value=0)
42 | ----> 5 comp.add_node('result', lambda numerator, divisor: numerator / divisor)
43 |
44 | ipdb> p numerator
45 | 1
46 | ipdb> p divisor
47 | 0
48 | ```
49 |
50 | -------------------------------------------------------------------------------- /docs/user/install.md: -------------------------------------------------------------------------------- 1 | # Installation Guide
2 |
3 | ## Using Pip
4 |
5 | To install Loman, run the following command:
6 |
7 | ```bash
8 | $ pip install loman
9 | ```
10 |
11 | If you don't have [pip](https://pip.pypa.io) installed (tsk, tsk!),
12 | [this Python installation guide](http://docs.python-guide.org/en/latest/starting/installation/)
13 | can guide you through the process.
14 |
15 | ## Dependency on graphviz
16 |
17 | Loman uses the [graphviz](http://www.graphviz.org/) tool, and the Python [graphviz library](https://pypi.python.org/pypi/graphviz) to draw dependency graphs. If you are using Continuum's excellent [Anaconda Python](https://www.continuum.io/downloads) distribution (recommended), then you can install them by running these commands:
18 |
19 | ```bash
20 | $ conda install graphviz
21 | $ pip install graphviz
22 | ```
23 |
24 | ### Windows users: Adding the graphviz binary to your PATH
25 |
26 | Under Windows, Anaconda's graphviz package installs the graphviz tool's binaries in a subdirectory under the bin directory, but only the bin directory is on the PATH. So we will need to add the subdirectory to the path. To find out where the bin directory is in your installation, use the `where` command:
27 |
28 | ```
29 | C:\>where dot
30 | C:\ProgramData\Anaconda3\Library\bin\dot.bat
31 | C:\>dir C:\ProgramData\Anaconda3\Library\bin\graphviz\dot.exe
32 | Volume in drive C has no label.
33 | Volume Serial Number is XXXX-XXXX
34 |
35 | Directory of C:\ProgramData\Anaconda3\Library\bin\graphviz
36 |
37 | 01/03/2017 04:16 PM 7,680 dot.exe
38 | 1 File(s) 7,680 bytes
39 | 0 Dir(s) xx bytes free
40 | ```
41 |
42 | You can then add the subdirectory graphviz to your PATH.
You can either do this through the Windows Control Panel, or in an interactive session, by running this code: 43 | 44 | ```python 45 | import sys, os 46 | def ensure_path(path): 47 | paths = os.environ['PATH'].split(';') 48 | if path not in paths: 49 | paths.append(path) 50 | os.environ['PATH'] = ';'.join(paths) 51 | ensure_path(r'C:\ProgramData\Anaconda3\Library\bin\graphviz') 52 | ``` 53 | -------------------------------------------------------------------------------- /tests/test_converters.py: -------------------------------------------------------------------------------- 1 | import pytest 2 | 3 | from loman import Computation, States 4 | 5 | 6 | def test_conversion_on_add_node(): 7 | comp = Computation() 8 | comp.add_node("a", value=1, converter=float) 9 | assert comp.s.a == States.UPTODATE and isinstance(comp.v.a, float) and comp.v.a == 1.0 10 | 11 | 12 | def test_conversion_on_insert(): 13 | comp = Computation() 14 | comp.add_node("a", converter=float) 15 | comp.insert("a", 1) 16 | assert comp.s.a == States.UPTODATE and isinstance(comp.v.a, float) and comp.v.a == 1.0 17 | 18 | 19 | def test_conversion_on_computation(): 20 | comp = Computation() 21 | comp.add_node("a") 22 | comp.add_node("b", lambda a: a + 1, converter=float) 23 | comp.insert("a", 1) 24 | comp.compute_all() 25 | assert comp.s.b == States.UPTODATE and isinstance(comp.v.b, float) and comp.v.b == 2.0 26 | 27 | 28 | def throw_exception(value): 29 | raise ValueError("Error") 30 | 31 | 32 | def test_exception_on_add_node(): 33 | comp = Computation() 34 | with pytest.raises(ValueError): 35 | comp.add_node("a", value=1, converter=throw_exception) 36 | assert comp.s.a == States.ERROR and isinstance(comp.v.a.exception, ValueError) 37 | 38 | 39 | def test_exception_on_insert(): 40 | comp = Computation() 41 | comp.add_node("a", converter=throw_exception) 42 | with pytest.raises(ValueError): 43 | comp.insert("a", 1) 44 | assert comp.s.a == States.ERROR and isinstance(comp.v.a.exception, ValueError) 45 | 46 | 47 | def test_exception_on_computation(): 48 | comp = Computation() 49 | comp.add_node("a") 50 | comp.add_node("b", lambda a: a + 1, converter=throw_exception) 51 | comp.insert("a", 1) 52 | comp.compute_all() 53 | assert comp.s.b == States.ERROR 54 | assert isinstance(comp.v.b.exception, ValueError) 55 | 56 | 57 | def test_exception_in_computation_with_converter(): 58 | comp = Computation() 59 | comp.add_node("a") 60 | comp.add_node("b", lambda a: a / 0, converter=float) 61 | comp.insert("a", 1) 62 | comp.compute_all() 63 | assert comp.s.b == States.ERROR 64 | assert isinstance(comp.v.b.exception, ZeroDivisionError) 65 | -------------------------------------------------------------------------------- /pyproject.toml: -------------------------------------------------------------------------------- 1 | [build-system] 2 | requires = ["hatchling"] 3 | build-backend = "hatchling.build" 4 | 5 | [project] 6 | name = "loman" 7 | version = "0.5.3" 8 | authors = [ 9 | { name="Ed Parcell", email="edparcell@gmail.com" }, 10 | ] 11 | description = "Loman tracks state of computations, and the dependencies between them, allowing full and partial recalculations." 
12 | readme = "README.md"
13 | requires-python = ">=3.11"
14 | classifiers = [
15 | "Programming Language :: Python :: 3",
16 | "Operating System :: OS Independent",
17 | "Development Status :: 4 - Beta",
18 | "Intended Audience :: Developers",
19 | "Topic :: Software Development :: Libraries"
20 | ]
21 | license = "BSD-3-Clause"
22 | license-files = ["LICEN[CS]E*"]
23 | dependencies = [
24 | "decorator",
25 | "matplotlib",
26 | "numpy",
27 | "pydotplus",
28 | "dill >= 0.2.5",
29 | "networkx >= 2.0",
30 | "pandas >= 0.19.2",
31 | ]
32 |
33 | [project.optional-dependencies]
34 | test = [
35 | "attrs",
36 | "pytest"
37 | ]
38 | dev = [
39 | "pytest>=7.0",
40 | "pytest-cov>=4.0",
41 | "ruff>=0.12.0",
42 | "pre-commit>=3.0",
43 | "mypy>=1.0",
44 | "marimo>=0.15",
45 | "scipy>=1.16.3",
46 | #"yahoo-finance>=1.4.0",
47 | #"beautifulsoup4>=4.14.2",
48 | #"requests>=2.32.5"
49 | ]
50 | notebook = [
51 | "scipy>=1.16.3",
52 | "marimo>=0.15",
53 | ]
54 |
55 | [project.urls]
56 | Homepage = "https://github.com/janushendersonassetallocation/loman"
57 |
58 | [tool.hatch.build.targets.wheel]
59 | packages = ["src/loman"]
60 |
61 | [tool.deptry]
62 | known_first_party = ["loman"]
63 | pep621_dev_dependency_groups = ["test", "dev", "notebook"]
64 | per_rule_ignores = {DEP002 = ["pytest", "pytest-cov", "ruff", "pre-commit", "mypy", "marimo", "attrs"], DEP004 = ["attrs"]}
65 |
66 | [tool.deptry.package_module_name_map]
67 | pre-commit = "pre_commit"
68 | pytest-cov = "pytest_cov"
69 | matplotlib = "matplotlib"
70 | pydotplus = "pydotplus"
71 | decorator = "decorator"
72 | networkx = "networkx"
73 | marimo = "marimo"
74 | pandas = "pandas"
75 | pytest = "pytest"
76 | scipy = "scipy"
77 | attrs = "attrs"
78 | numpy = "numpy"
79 | mypy = "mypy"
80 | ruff = "ruff"
81 | dill = "dill"
82 | -------------------------------------------------------------------------------- /.github/rhiza/utils/version_max.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3
2 | """Emit the maximum supported Python version from pyproject.toml.
3 |
4 | This helper is used in GitHub Actions to pick a default interpreter.
5 | """
6 |
7 | import json
8 | import tomllib
9 | from pathlib import Path
10 |
11 | from packaging.specifiers import SpecifierSet
12 | from packaging.version import Version
13 |
14 | PYPROJECT = Path(__file__).resolve().parents[3] / "pyproject.toml"
15 | CANDIDATES = ["3.11", "3.12", "3.13", "3.14"] # extend as needed
16 |
17 |
18 | def max_supported_version() -> str:
19 | """Return the highest supported Python version from pyproject.toml.
20 |
21 | Reads project.requires-python, evaluates candidate versions against the
22 | specifier, and returns the maximum version that satisfies the constraint.
23 |
24 | Returns:
25 | str: The maximum Python version (e.g., "3.13") satisfying the spec.
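    Example:
        A worked sketch: with ``requires-python = ">=3.11"`` and the candidate
        list above, every candidate matches, so this returns "3.14"; with
        ``">=3.11,<3.13"`` it would return "3.12".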
26 | """ 27 | # Load and parse pyproject.toml 28 | with PYPROJECT.open("rb") as f: 29 | data = tomllib.load(f) 30 | 31 | # Extract and validate the requires-python constraint 32 | spec_str = data.get("project", {}).get("requires-python") 33 | if not spec_str: 34 | msg = "pyproject.toml: missing 'project.requires-python'" 35 | raise KeyError(msg) 36 | 37 | # Create a SpecifierSet to check version compatibility 38 | spec = SpecifierSet(spec_str) 39 | max_version = None 40 | 41 | # Iterate through candidates in order (ascending) 42 | # The last matching version will be the maximum 43 | for v in CANDIDATES: 44 | if Version(v) in spec: 45 | max_version = v 46 | 47 | if max_version is None: 48 | msg = "pyproject.toml: no supported Python versions match 'project.requires-python'" 49 | raise ValueError(msg) 50 | 51 | return max_version 52 | 53 | 54 | if __name__ == "__main__": 55 | # Check if pyproject.toml exists at the expected location 56 | # If found, determine max version from requires-python 57 | # Otherwise, default to 3.13 (latest stable as of this code) 58 | if PYPROJECT.exists(): 59 | print(json.dumps(max_supported_version())) 60 | else: 61 | print(json.dumps("3.13")) 62 | -------------------------------------------------------------------------------- /tests/test_rhiza/benchmarks/README.md: -------------------------------------------------------------------------------- 1 | # Benchmarks 2 | 3 | This folder contains benchmark analysis scripts for the project. 4 | It does **not** contain the benchmark tests themselves, 5 | which are expected to be located in `tests/benchmarks/`. 6 | 7 | ## Files 8 | 9 | - `analyze_benchmarks.py` – Script to analyze benchmark results and generate reports. 10 | - `README.md` – This file. 11 | 12 | ## Running Benchmarks 13 | 14 | Benchmarks are executed via the Makefile or `pytest`: 15 | 16 | ```bash 17 | # Using Makefile target 18 | make benchmark 19 | 20 | # Or manually via uv 21 | uv run pytest tests/benchmarks/ \ 22 | --benchmark-only \ 23 | --benchmark-histogram=tests/test_rhiza/benchmarks/benchmarks \ 24 | --benchmark-json=tests/test_rhiza/benchmarks/benchmarks.json 25 | 26 | # Analyze results 27 | uv run tests/test_rhiza/benchmarks/analyze_benchmarks.py 28 | ``` 29 | ## Output 30 | 31 | * benchmarks.json – JSON file containing benchmark results. 32 | * Histogram plots – Generated in the folder specified by --benchmark-histogram (by default tests/test_rhiza/benchmarks/benchmarks). 33 | 34 | ## Notes 35 | 36 | * Ensure pytest-benchmark (v5.2.3) and pygal (v3.1.0) are installed. 37 | * The Makefile target handles this automatically. 38 | * analyze_benchmarks.py reads the JSON output and generates human-readable summaries and plots. 39 | 40 | ## Example benchmark tests 41 | 42 | ```python 43 | import time 44 | 45 | def something(duration=0.001): 46 | """ 47 | Function that needs some serious benchmarking. 48 | """ 49 | time.sleep(duration) 50 | # You may return anything you want, like the result of a computation 51 | return 123 52 | 53 | def test_my_stuff(benchmark): 54 | # benchmark something 55 | result = benchmark(something) 56 | 57 | # Extra code, to verify that the run completed correctly. 58 | # Sometimes you may want to check the result, fast functions 59 | # are no good if they return incorrect results :-) 60 | assert result == 123 61 | ``` 62 | 63 | Please note the usage of the `@pytest.mark.benchmark` fixture 64 | which becomes available after installing pytest-benchmark. 
65 |
66 | See https://pytest-benchmark.readthedocs.io/en/stable/ for more details.
67 |
68 |
69 |
70 |
-------------------------------------------------------------------------------- /docs/user/features/querying/view_inputs_outputs.md: -------------------------------------------------------------------------------- 1 | # Viewing node inputs and outputs
2 |
3 | Loman computations contain methods to see what nodes are inputs to any node, and what nodes a given node is itself an input to. First, let's define a simple Computation:
4 |
5 | ```pycon
6 | >>> comp = Computation()
7 | >>> comp.add_node('a', value=1)
8 | >>> comp.add_node('b', lambda a: a + 1)
9 | >>> comp.add_node('c', lambda a: 2 * a)
10 | >>> comp.add_node('d', lambda b, c: b + c)
11 | >>> comp.compute_all()
12 | >>> comp
13 | ```
14 |
15 | ```{graphviz}
16 | digraph G {
17 | n0 [fillcolor="#15b01a", label=a, style=filled];
18 | n1 [fillcolor="#15b01a", label=b, style=filled];
19 | n2 [fillcolor="#15b01a", label=c, style=filled];
20 | n3 [fillcolor="#15b01a", label=d, style=filled];
21 | n0 -> n1
22 | n0 -> n2
23 | n1 -> n3
24 | n2 -> n3
25 | }
26 | ```
27 |
28 | We can find the inputs of a node using the `get_inputs` method, or the `i` attribute (which works similarly to the `v` and `s` attributes to access value and state):
29 |
30 | ```pycon
31 | >>> comp.get_inputs('b')
32 | ['a']
33 | >>> comp.get_inputs('d')
34 | ['c', 'b']
35 | >>> comp.i.d # Attribute-style access
36 | ['c', 'b']
37 | >>> comp.i['d'] # Dictionary-style access
38 | ['c', 'b']
39 | >>> comp.i[['b', 'd']] # Multiple dictionary-style accesses:
40 | [['a'], ['c', 'b']]
41 | ```
42 |
43 | We can also use `get_original_inputs` to find the inputs of the entire Computation (or a subset of it):
44 |
45 | ```pycon
46 | >>> comp.get_original_inputs()
47 | ['a']
48 | >>> comp.get_original_inputs(['b']) # Just the inputs used to compute b
49 | ['a']
50 | ```
51 |
52 | To find what a node feeds into, there are `get_outputs`, the `o` attribute and `get_final_outputs` (although, as intermediate nodes are often useful, the latter is less useful):
53 |
54 | ```pycon
55 | >>> comp.get_outputs('a')
56 | ['b', 'c']
57 | >>> comp.o.a
58 | ['b', 'c']
59 | >>> comp.o[['b', 'c']]
60 | [['d'], ['d']]
61 | >>> comp.get_final_outputs()
62 | ['d']
63 | ```
64 |
65 | Finally, these can be used with the `v` accessor to quickly see all the input values to a given node:
66 |
67 | ```pycon
68 | >>> {n: comp.v[n] for n in comp.i.d}
69 | {'c': 2, 'b': 2}
70 | ```
71 | -------------------------------------------------------------------------------- /.github/rhiza/utils/version_matrix.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3
2 | """Emit the list of supported Python versions from pyproject.toml.
3 |
4 | This helper is used in GitHub Actions to compute the test matrix.
5 | """
6 |
7 | import json
8 | import tomllib
9 | from pathlib import Path
10 |
11 | from packaging.specifiers import SpecifierSet
12 | from packaging.version import Version
13 |
14 | PYPROJECT = Path(__file__).resolve().parents[3] / "pyproject.toml"
15 | CANDIDATES = ["3.11", "3.12", "3.13", "3.14"] # extend as needed
16 |
17 |
18 | def supported_versions() -> list[str]:
19 | """Return all supported Python versions declared in pyproject.toml.
20 |
21 | Reads project.requires-python, evaluates candidate versions against the
22 | specifier, and returns the subset that satisfies the constraint, in ascending order.
23 | 24 | Returns: 25 | list[str]: The supported versions (e.g., ["3.11", "3.12"]). 26 | """ 27 | # Load pyproject.toml using the tomllib standard library (Python 3.11+) 28 | with PYPROJECT.open("rb") as f: 29 | data = tomllib.load(f) 30 | 31 | # Extract the requires-python field from project metadata 32 | # This specifies the Python version constraint (e.g., ">=3.11") 33 | spec_str = data.get("project", {}).get("requires-python") 34 | if not spec_str: 35 | msg = "pyproject.toml: missing 'project.requires-python'" 36 | raise KeyError(msg) 37 | 38 | # Parse the version specifier (e.g., ">=3.11,<3.14") 39 | spec = SpecifierSet(spec_str) 40 | 41 | # Filter candidate versions to find which ones satisfy the constraint 42 | versions: list[str] = [] 43 | for v in CANDIDATES: 44 | # packaging.version.Version parses the version string 45 | # The 'in' operator checks if the version satisfies the specifier 46 | if Version(v) in spec: 47 | versions.append(v) 48 | 49 | if not versions: 50 | msg = "pyproject.toml: no supported Python versions match 'project.requires-python'" 51 | raise ValueError(msg) 52 | 53 | return versions 54 | 55 | 56 | if __name__ == "__main__": 57 | # Check if pyproject.toml exists in the expected location 58 | # If it exists, use it to determine supported versions 59 | # Otherwise, fall back to returning all candidates (for edge cases) 60 | if PYPROJECT.exists(): 61 | print(json.dumps(supported_versions())) 62 | else: 63 | print(json.dumps(CANDIDATES)) 64 | -------------------------------------------------------------------------------- /src/loman/graph_utils.py: -------------------------------------------------------------------------------- 1 | """Graph utility functions for computation graph operations.""" 2 | 3 | import functools 4 | 5 | import networkx as nx 6 | 7 | from loman.exception import LoopDetectedError 8 | from loman.util import apply_n 9 | 10 | 11 | def contract_node_one(g, n): 12 | """Remove a node from graph and connect its predecessors to its successors.""" 13 | for p in g.predecessors(n): 14 | for s in g.successors(n): 15 | g.add_edge(p, s) 16 | g.remove_node(n) 17 | 18 | 19 | def contract_node(g, ns): 20 | """Remove multiple nodes from graph and connect their predecessors to successors.""" 21 | apply_n(functools.partial(contract_node_one, g), ns) 22 | 23 | 24 | def topological_sort(g): 25 | """Performs a topological sort on a directed acyclic graph (DAG). 26 | 27 | This function attempts to compute the topological order of the nodes in 28 | the given graph `g`. If the graph contains a cycle, it raises a 29 | `LoopDetectedError` with details about the detected cycle, making it 30 | informative for debugging purposes. 31 | 32 | Parameters: 33 | g : networkx.DiGraph 34 | A directed graph to be sorted. Must be provided as an instance of 35 | `networkx.DiGraph`. The function assumes the graph is acyclic unless 36 | a cycle is detected. 37 | 38 | Returns: 39 | list 40 | A list of nodes in topologically sorted order, if the graph has no 41 | cycles. 42 | 43 | Raises: 44 | LoopDetectedError 45 | If the graph contains a cycle, a `LoopDetectedError` is raised with 46 | information about the detected cycle if available. The detected cycle 47 | is presented as a list of directed edges forming the cycle. 48 | 49 | NetworkXUnfeasible 50 | If topological sorting fails due to reasons other than cyclic 51 | dependencies in the graph. 
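    Example:
        A minimal sketch of typical usage (assumes ``networkx`` is available,
        as it is for this module):

        >>> import networkx as nx
        >>> g = nx.DiGraph([("a", "b"), ("b", "c")])
        >>> topological_sort(g)
        ['a', 'b', 'c']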
52 | """ 53 | try: 54 | return list(nx.topological_sort(g)) 55 | except nx.NetworkXUnfeasible as e: 56 | cycle_lst = None 57 | if g is not None: 58 | try: 59 | cycle_lst = nx.find_cycle(g) 60 | except nx.NetworkXNoCycle: 61 | # there must non-cycle reason NetworkXUnfeasible, leave as is 62 | raise e 63 | args = [] 64 | if cycle_lst: 65 | lst = [f"{n_src}->{n_tgt}" for n_src, n_tgt in cycle_lst] 66 | args = [f"DAG cycle: {', '.join(lst)}"] 67 | raise LoopDetectedError(*args) from e 68 | -------------------------------------------------------------------------------- /.github/rhiza/actions/setup-project/action.yml: -------------------------------------------------------------------------------- 1 | # This file is part of the jebel-quant/rhiza repository 2 | # (https://github.com/jebel-quant/rhiza). 3 | # 4 | # Action: Setup Project (composite action) 5 | # 6 | # Purpose: Bootstrap a Python project in GitHub Actions by: 7 | # - Installing Task, uv, and uvx 8 | # - Detecting presence of pyproject.toml and exposing it as an output 9 | # - Creating a virtual environment with uv and syncing dependencies 10 | # 11 | # Inputs: 12 | # - python-version: Python version used for the virtual environment (default: 3.12) 13 | # Can be overridden by setting PYTHON_DEFAULT_VERSION repository variable 14 | # - uv-extra-index-url: Extra index URL for uv (optional) 15 | # 16 | # Outputs: 17 | # - pyproject_exists: "true" if pyproject.toml exists, otherwise "false" 18 | # 19 | # Notes: 20 | # - Safe to run in repositories without pyproject.toml; dependency sync will be skipped. 21 | # - Used by workflows such as CI, Book, Marimo, and Release. 22 | 23 | name: 'Setup Project' 24 | description: 'Setup the project' 25 | 26 | inputs: 27 | python-version: 28 | description: 'Python version to use' 29 | required: true 30 | uv-extra-index-url: 31 | description: 'Extra index URL for uv' 32 | required: false 33 | 34 | outputs: 35 | pyproject_exists: 36 | description: 'Flag indicating whether pyproject.toml exists' 37 | value: ${{ steps.check_pyproject.outputs.exists }} 38 | 39 | runs: 40 | using: 'composite' 41 | steps: 42 | - name: Install uv 43 | uses: astral-sh/setup-uv@v7 44 | with: 45 | version: "0.9.18" 46 | 47 | - name: Check for pyproject.toml 48 | id: check_pyproject 49 | shell: bash 50 | run: | 51 | if [ -f "pyproject.toml" ]; then 52 | echo "exists=true" >> "$GITHUB_OUTPUT" 53 | else 54 | echo "exists=false" >> "$GITHUB_OUTPUT" 55 | fi 56 | 57 | - name: Build the virtual environment 58 | shell: bash 59 | run: uv venv --python ${{ inputs.python-version }} 60 | 61 | - name: "Sync the virtual environment for ${{ github.repository }} if pyproject.toml exists" 62 | shell: bash 63 | run: | 64 | if [[ -n "${{ inputs.uv-extra-index-url }}" ]]; then 65 | export UV_EXTRA_INDEX_URL="${{ inputs.uv-extra-index-url }}" 66 | fi 67 | 68 | if [ -f "pyproject.toml" ]; then 69 | uv sync --all-extras 70 | else 71 | echo "No pyproject.toml found, skipping package installation" 72 | fi 73 | -------------------------------------------------------------------------------- /tests/test_value_eq.py: -------------------------------------------------------------------------------- 1 | """Tests for value equality utility functions in Loman.""" 2 | 3 | import numpy as np 4 | import pandas as pd 5 | 6 | from loman.util import value_eq 7 | 8 | 9 | def test_ints_equal(): 10 | a = 1 11 | b = 1 12 | assert value_eq(a, b) 13 | 14 | 15 | def test_ints_not_equal(): 16 | a = 1 17 | b = 2 18 | assert not value_eq(a, b) 19 | 20 | 21 | def 
test_floats_equal(): 22 | a = 1.0 23 | b = 1.0 24 | assert value_eq(a, b) 25 | 26 | 27 | def test_floats_not_equal(): 28 | a = 1.0 29 | b = 2.0 30 | assert not value_eq(a, b) 31 | 32 | 33 | def test_lists_equal(): 34 | a = [1, 2, 3] 35 | b = [1, 2, 3] 36 | assert value_eq(a, b) 37 | 38 | 39 | def test_lists_not_equal(): 40 | a = [1, 2, 3] 41 | b = ["a", "b", "c"] 42 | assert not value_eq(a, b) 43 | 44 | 45 | def test_dicts_equal(): 46 | a = {"x": 1, "y": 2} 47 | b = {"x": 1, "y": 2} 48 | assert value_eq(a, b) 49 | 50 | 51 | def test_dicts_not_equal(): 52 | a = {"x": 1, "y": 2} 53 | b = {"x": 1, "z": 2} 54 | assert not value_eq(a, b) 55 | b = {"x": 1, "y": 3} 56 | assert not value_eq(a, b) 57 | 58 | 59 | def test_series_equal(): 60 | a = pd.Series([1.0, np.nan]) 61 | b = pd.Series([1.0, np.nan]) 62 | assert value_eq(a, b) 63 | 64 | 65 | def test_series_not_equal(): 66 | a = pd.Series([1.0, 1.0]) 67 | b = pd.Series([1.0, np.nan]) 68 | assert not value_eq(a, b) 69 | a = pd.Series([1.0, np.nan]) 70 | b = pd.Series([1.0, 1.0]) 71 | assert not value_eq(a, b) 72 | 73 | 74 | def test_df_equal(): 75 | a = pd.DataFrame({"a": [1.0, np.nan]}) 76 | b = pd.DataFrame({"a": [1.0, np.nan]}) 77 | assert value_eq(a, b) 78 | 79 | 80 | def test_df_not_equal(): 81 | a = pd.DataFrame({"a": [1.0, 1.0]}) 82 | b = pd.DataFrame({"a": [1.0, np.nan]}) 83 | assert not value_eq(a, b) 84 | a = pd.DataFrame({"a": [1.0, np.nan]}) 85 | b = pd.DataFrame({"a": [1.0, 1.0]}) 86 | assert not value_eq(a, b) 87 | 88 | 89 | def test_numpy_arrays_equal(): 90 | a = np.array([1.0, 2.0, 3.0]) 91 | b = np.array([1.0, 2.0, 3.0]) 92 | assert value_eq(a, b) 93 | 94 | 95 | def test_numpy_arrays_not_equal(): 96 | a = np.array([1.0, 2.0, 3.0]) 97 | b = np.array([1.0, 2.0, 4.0]) 98 | assert not value_eq(a, b) 99 | 100 | 101 | def test_numpy_arrays_with_nan_equal(): 102 | a = np.array([1.0, np.nan, 3.0]) 103 | b = np.array([1.0, np.nan, 3.0]) 104 | assert value_eq(a, b) 105 | 106 | 107 | def test_numpy_arrays_with_nan_not_equal(): 108 | a = np.array([1.0, np.nan, 3.0]) 109 | b = np.array([1.0, 2.0, 3.0]) 110 | assert not value_eq(a, b) 111 | -------------------------------------------------------------------------------- /.github/workflows/structure.yml: -------------------------------------------------------------------------------- 1 | # GitHub Actions workflow: repository structure advisory checks 2 | # Purpose: Soft-check the presence of common scaffolding (src/, tests/, and key dotfiles) 3 | # and print WARN messages if something is missing. This workflow is advisory: 4 | # it never fails the job based on these checks; it only surfaces helpful hints 5 | # for template consumers. 6 | # 7 | # Triggers: on push and pull requests targeting main/master. 8 | # Permissions: read-only; no writes or tokens required. LFS enabled to support 9 | # repositories that track large files. 10 | # Notes: The shell steps use printf with optional color variables (e.g., YELLOW, 11 | # RESET). If these env vars are unset in the runner, the parameter expansion 12 | # defaults ensure plain text output without ANSI colors. 
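#   For instance (illustrative): with YELLOW and RESET unset,
#     printf "${YELLOW:-}[WARN] No src folder found${RESET:-}\n"
#   prints plain "[WARN] No src folder found" with no ANSI escape codes.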
13 | name: "STRUCTURE" 14 | 15 | # Minimal read permission is sufficient; the workflow does not modify the repo 16 | permissions: 17 | contents: read 18 | 19 | # Run on pushes and pull requests to the default branches 20 | on: 21 | push: 22 | pull_request: 23 | branches: [ main, master ] 24 | 25 | jobs: 26 | # Single job that performs presence checks and emits WARN lines (warn-only) 27 | structure_check: 28 | runs-on: "ubuntu-latest" 29 | 30 | permissions: 31 | contents: read 32 | 33 | steps: 34 | # Check out the repository code (with Git LFS files, if any) 35 | - uses: actions/checkout@v6 36 | with: 37 | lfs: true 38 | 39 | # Check there is a src folder 40 | - name: Check src folder exists 41 | run: | 42 | if [ ! -d "src" ]; then 43 | printf "${YELLOW:-}[WARN] No src folder found, skipping structure check${RESET:-}\n" 44 | fi 45 | 46 | # Check there is a tests folder 47 | - name: Check tests folder exists 48 | run: | 49 | if [ ! -d "tests" ]; then 50 | printf "${YELLOW:-}[WARN] No tests folder found, skipping structure check${RESET:-}\n" 51 | fi 52 | 53 | # Check there is a .editorconfig, .gitignore, .pre-commit-config.yaml and Makefile file 54 | - name: Check .editorconfig and .gitignore files exist 55 | run: | 56 | if [ ! -f ".editorconfig" ]; then 57 | printf "${YELLOW:-}[WARN] No .editorconfig file found, skipping structure check${RESET:-}\n" 58 | fi 59 | 60 | if [ ! -f ".gitignore" ]; then 61 | printf "${YELLOW:-}[WARN] No .gitignore file found, skipping structure check${RESET:-}\n" 62 | fi 63 | 64 | if [ ! -f ".pre-commit-config.yaml" ]; then 65 | printf "${YELLOW:-}[WARN] No .pre-commit-config file found, skipping structure check${RESET:-}\n" 66 | fi 67 | 68 | if [ ! -f "Makefile" ]; then 69 | printf "${YELLOW:-}[WARN] No Makefile found, skipping structure check${RESET:-}\n" 70 | fi 71 | -------------------------------------------------------------------------------- /tests/test_rhiza/benchmarks/analyze_benchmarks.py: -------------------------------------------------------------------------------- 1 | # /// script 2 | # dependencies = [ 3 | # "pandas", 4 | # "plotly", 5 | # ] 6 | # /// 7 | 8 | """Analyze pytest-benchmark results and visualize them. 9 | 10 | This script reads a local ``benchmarks.json`` file produced by pytest-benchmark, 11 | prints a reduced table with benchmark name, mean milliseconds, and operations 12 | per second, and renders an interactive Plotly bar chart of mean runtimes. 13 | """ 14 | 15 | # Python script: read JSON, create reduced table, and Plotly chart 16 | import json 17 | import logging 18 | import sys 19 | from pathlib import Path 20 | 21 | import pandas as pd 22 | import plotly.express as px 23 | 24 | # check if the file exists at all 25 | if not Path(__file__).parent.joinpath("benchmarks.json").exists(): 26 | logging.warning("benchmarks.json not found; skipping analysis and exiting successfully.") 27 | sys.exit(0) 28 | 29 | # Load pytest-benchmark JSON 30 | with open(Path(__file__).parent / "benchmarks.json") as f: 31 | # Do not continue if JSON is invalid (e.g. 
empty file)
32 | try:
33 | data = json.load(f)
34 | except json.JSONDecodeError:
35 | logging.warning("benchmarks.json is invalid or empty; skipping analysis and exiting successfully.")
36 | sys.exit(0)
37 |
38 | # Validate structure: require a 'benchmarks' list
39 | if not isinstance(data, dict) or "benchmarks" not in data or not isinstance(data["benchmarks"], list):
40 | logging.warning("benchmarks.json missing valid 'benchmarks' list; skipping analysis and exiting successfully.")
41 | sys.exit(0)
42 |
43 | # Extract relevant info: Benchmark name, Mean (ms), OPS
44 | benchmarks = []
45 | for bench in data["benchmarks"]:
46 | mean_s = bench["stats"]["mean"]
47 | benchmarks.append(
48 | {
49 | "Benchmark": bench["name"],
50 | "Mean_ms": mean_s * 1000, # convert seconds → milliseconds
51 | "OPS": 1 / mean_s,
52 | }
53 | )
54 |
55 | # Create DataFrame and sort fastest → slowest
56 | df = pd.DataFrame(benchmarks)
57 | df = df.sort_values("Mean_ms")
58 |
59 | # Display the reduced table
60 | print(df[["Benchmark", "Mean_ms", "OPS"]].to_string(index=False, float_format="%.3f"))
61 |
62 | # Create an interactive Plotly bar chart
63 | fig = px.bar(
64 | df,
65 | x="Benchmark",
66 | y="Mean_ms",
67 | color="Mean_ms",
68 | color_continuous_scale="Viridis_r",
69 | title="Benchmark Mean Runtime (ms) per Test",
70 | text="Mean_ms",
71 | )
72 |
73 | fig.update_traces(texttemplate="%{text:.2f} ms", textposition="outside")
74 | fig.update_layout(
75 | xaxis_tickangle=-45,
76 | yaxis_title="Mean Runtime (ms)",
77 | coloraxis_colorbar=dict(title="ms"),
78 | height=600,
79 | margin=dict(t=100, b=200),
80 | )
81 |
82 | fig.show()
83 |
84 | # Write the Plotly figure to a standalone HTML file
85 | fig.write_html(Path(__file__).parent / "benchmarks.html")
86 | -------------------------------------------------------------------------------- /tests/test_rhiza/test_marimushka_script.py: -------------------------------------------------------------------------------- 1 | """Tests for the marimushka.sh script using a sandboxed environment.
2 |
3 | This file and its associated tests flow down via a SYNC action from the jebel-quant/rhiza repository
4 | (https://github.com/jebel-quant/rhiza).
5 |
6 | Provides test fixtures for testing git-based workflows and version management.
7 | """
8 |
9 | import os
10 | import subprocess
11 |
12 |
13 | def test_marimushka_script_success(git_repo):
14 | """Test successful execution of the marimushka script."""
15 | script = git_repo / ".github" / "rhiza" / "scripts" / "marimushka.sh"
16 |
17 | # Setup directories in the git repo
18 | marimo_folder = git_repo / "book" / "marimo"
19 | marimo_folder.mkdir(parents=True)
20 | (marimo_folder / "notebook.py").touch()
21 |
22 | output_folder = git_repo / "_marimushka"
23 |
24 | # Run the script
25 | # We need to set env vars. git_repo fixture sets cwd to local_dir.
26 | env = os.environ.copy() 27 | env["MARIMO_FOLDER"] = "book/marimo" 28 | env["MARIMUSHKA_OUTPUT"] = "_marimushka" 29 | # UVX_BIN is defaulted to ./bin/uvx in the script, which matches our mock setup in git_repo 30 | 31 | result = subprocess.run([str(script)], env=env, cwd=git_repo, capture_output=True, text=True) 32 | 33 | assert result.returncode == 0 34 | assert "Exporting notebooks" in result.stdout 35 | assert (output_folder / "index.html").exists() 36 | assert (output_folder / ".nojekyll").exists() 37 | assert (output_folder / "index.html").read_text() == "Mock Export" 38 | 39 | 40 | def test_marimushka_missing_folder(git_repo): 41 | """Test script behavior when MARIMO_FOLDER is missing.""" 42 | script = git_repo / ".github" / "rhiza" / "scripts" / "marimushka.sh" 43 | 44 | env = os.environ.copy() 45 | env["MARIMO_FOLDER"] = "missing" 46 | 47 | result = subprocess.run([str(script)], env=env, cwd=git_repo, capture_output=True, text=True) 48 | 49 | assert result.returncode == 0 50 | assert "does not exist" in result.stdout 51 | 52 | 53 | def test_marimushka_no_python_files(git_repo): 54 | """Test script behavior when MARIMO_FOLDER has no python files.""" 55 | script = git_repo / ".github" / "rhiza" / "scripts" / "marimushka.sh" 56 | 57 | marimo_folder = git_repo / "book" / "marimo" 58 | marimo_folder.mkdir(parents=True) 59 | # No .py files created 60 | 61 | output_folder = git_repo / "_marimushka" 62 | 63 | env = os.environ.copy() 64 | env["MARIMO_FOLDER"] = "book/marimo" 65 | env["MARIMUSHKA_OUTPUT"] = "_marimushka" 66 | 67 | result = subprocess.run([str(script)], env=env, cwd=git_repo, capture_output=True, text=True) 68 | 69 | assert result.returncode == 0 70 | assert "No Python files found" in result.stdout 71 | assert (output_folder / "index.html").exists() 72 | assert "No notebooks found" in (output_folder / "index.html").read_text() 73 | -------------------------------------------------------------------------------- /tests/test_rhiza/test_structure.py: -------------------------------------------------------------------------------- 1 | """Tests for the root pytest fixture that yields the repository root Path. 2 | 3 | This file and its associated tests flow down via a SYNC action from the jebel-quant/rhiza repository 4 | (https://github.com/jebel-quant/rhiza). 5 | 6 | This module ensures the fixture resolves to the true project root and that 7 | expected files/directories exist, enabling other tests to locate resources 8 | reliably. 
9 | """ 10 | 11 | import warnings 12 | from pathlib import Path 13 | 14 | 15 | class TestRootFixture: 16 | """Tests for the root fixture that provides repository root path.""" 17 | 18 | def test_root_returns_pathlib_path(self, root): 19 | """Root fixture should return a pathlib.Path object.""" 20 | assert isinstance(root, Path) 21 | 22 | def test_root_points_to_repository_root(self, root): 23 | """Root fixture should point to the actual repository root.""" 24 | assert (root / ".github").is_dir() 25 | 26 | def test_root_is_absolute_path(self, root): 27 | """Root fixture should return an absolute path.""" 28 | assert root.is_absolute() 29 | 30 | def test_root_resolves_correctly_from_nested_location(self, root): 31 | """Root should correctly resolve to repository root from tests/test_config_templates/.""" 32 | conftest_path = root / "tests" / "test_rhiza" / "conftest.py" 33 | assert conftest_path.exists() 34 | 35 | def test_root_contains_expected_directories(self, root): 36 | """Root should contain all expected project directories.""" 37 | expected_dirs = [".github", "src", "tests", "book"] 38 | for dirname in expected_dirs: 39 | if not (root / dirname).exists(): 40 | warnings.warn(f"Expected directory {dirname} not found", stacklevel=2) 41 | 42 | def test_root_contains_expected_files(self, root): 43 | """Root should contain all expected configuration files.""" 44 | expected_files = [ 45 | "pyproject.toml", 46 | "README.md", 47 | "Makefile", 48 | "ruff.toml", 49 | ".gitignore", 50 | ".editorconfig", 51 | ] 52 | for filename in expected_files: 53 | if not (root / filename).exists(): 54 | warnings.warn(f"Expected file {filename} not found", stacklevel=2) 55 | 56 | def test_root_can_locate_github_scripts(self, root): 57 | """Root should allow locating GitHub scripts.""" 58 | scripts_dir = root / ".github" / "rhiza" / "scripts" 59 | if not scripts_dir.exists(): 60 | warnings.warn("GitHub scripts directory not found", stacklevel=2) 61 | else: 62 | if not (scripts_dir / "release.sh").exists(): 63 | warnings.warn("Expected script release.sh not found", stacklevel=2) 64 | if not (scripts_dir / "bump.sh").exists(): 65 | warnings.warn("Expected script bump.sh not found", stacklevel=2) 66 | -------------------------------------------------------------------------------- /book/Makefile.book: -------------------------------------------------------------------------------- 1 | ## Makefile.book - Documentation and book-building targets 2 | # This file is included by the main Makefile 3 | 4 | # Book-specific variables 5 | BOOK_TITLE := Project Documentation 6 | BOOK_SUBTITLE := Generated by minibook 7 | PDOC_TEMPLATE_DIR := book/pdoc-templates 8 | BOOK_TEMPLATE := book/minibook-templates/custom.html.jinja2 9 | DOCFORMAT := 10 | 11 | # Declare phony targets (they don't produce files) 12 | .PHONY: docs marimushka book 13 | 14 | ##@ Documentation 15 | docs:: install ## create documentation with pdoc 16 | @if [ -d "${SOURCE_FOLDER}" ]; then \ 17 | PKGS=""; for d in "${SOURCE_FOLDER}"/*; do [ -d "$$d" ] && PKGS="$$PKGS $$(basename "$$d")"; done; \ 18 | if [ -z "$$PKGS" ]; then \ 19 | printf "${YELLOW}[WARN] No packages found under ${SOURCE_FOLDER}, skipping docs${RESET}\n"; \ 20 | else \ 21 | TEMPLATE_ARG=""; \ 22 | if [ -d "${PDOC_TEMPLATE_DIR}" ]; then \ 23 | TEMPLATE_ARG="-t ${PDOC_TEMPLATE_DIR}"; \ 24 | printf "${BLUE}[INFO] Using pdoc templates from ${PDOC_TEMPLATE_DIR}${RESET}\n"; \ 25 | fi; \ 26 | DOCFORMAT="$(DOCFORMAT)"; \ 27 | if [ -z "$$DOCFORMAT" ]; then \ 28 | if [ -f "ruff.toml" ]; then \ 29 | 
DOCFORMAT=$$(${UV_BIN} run python -c "import tomllib; print(tomllib.load(open('ruff.toml', 'rb')).get('lint', {}).get('pydocstyle', {}).get('convention', ''))"); \ 30 | fi; \ 31 | if [ -z "$$DOCFORMAT" ]; then \ 32 | DOCFORMAT="google"; \ 33 | fi; \ 34 | printf "${BLUE}[INFO] Detected docformat: $$DOCFORMAT${RESET}\n"; \ 35 | else \ 36 | printf "${BLUE}[INFO] Using provided docformat: $$DOCFORMAT${RESET}\n"; \ 37 | fi; \ 38 | ${UV_BIN} pip install pdoc && \ 39 | PYTHONPATH="${SOURCE_FOLDER}" ${UV_BIN} run pdoc --docformat $$DOCFORMAT --output-dir _pdoc $$TEMPLATE_ARG $$PKGS; \ 40 | fi; \ 41 | else \ 42 | printf "${YELLOW}[WARN] Source folder ${SOURCE_FOLDER} not found, skipping docs${RESET}\n"; \ 43 | fi 44 | 45 | marimushka:: install-uv ## export Marimo notebooks to HTML 46 | @printf "${BLUE}[INFO] Exporting notebooks from ${MARIMO_FOLDER}...${RESET}\n" 47 | @if [ ! -d "${MARIMO_FOLDER}" ]; then \ 48 | printf "${YELLOW}[WARN] Directory '${MARIMO_FOLDER}' does not exist. Skipping marimushka.${RESET}\n"; \ 49 | else \ 50 | MARIMO_FOLDER="${MARIMO_FOLDER}" UV_BIN="${UV_BIN}" UVX_BIN="${UVX_BIN}" /bin/sh "${SCRIPTS_FOLDER}/marimushka.sh"; \ 51 | fi 52 | 53 | book:: test docs marimushka ## compile the companion book 54 | @${UV_BIN} pip install marimo 55 | @/bin/sh "${SCRIPTS_FOLDER}/book.sh" 56 | @TEMPLATE_ARG=""; \ 57 | if [ -f "${BOOK_TEMPLATE}" ]; then \ 58 | TEMPLATE_ARG="--template ${BOOK_TEMPLATE}"; \ 59 | printf "${BLUE}[INFO] Using book template ${BOOK_TEMPLATE}${RESET}\n"; \ 60 | fi; \ 61 | ${UVX_BIN} minibook --title "${BOOK_TITLE}" --subtitle "${BOOK_SUBTITLE}" $$TEMPLATE_ARG --links "$$(python3 -c 'import json,sys; print(json.dumps(json.load(open("_book/links.json"))))')" --output "_book" 62 | @touch "_book/.nojekyll" 63 | -------------------------------------------------------------------------------- /.github/workflows/rhiza_book.yml: -------------------------------------------------------------------------------- 1 | # This file is part of the jebel-quant/rhiza repository 2 | # (https://github.com/jebel-quant/rhiza). 3 | # 4 | # Workflow: Book 5 | # Purpose: This workflow builds and deploys comprehensive documentation for the project. 6 | # It combines API documentation, test coverage reports, test results, and 7 | # interactive notebooks into a single GitHub Pages site. 
8 | #
9 | # Trigger: This workflow runs on every push to the main or master branch
10 | #
11 | # Components:
12 | # - 📓 Process Marimo notebooks
13 | # - 📖 Generate API documentation with pdoc
14 | # - 🧪 Run tests and generate coverage reports
15 | # - 🚀 Deploy combined documentation to GitHub Pages
16 |
17 | name: "BOOK"
18 |
19 | on:
20 | push:
21 | branches:
22 | - main
23 | - master
24 |
25 | jobs:
26 | book:
27 | runs-on: "ubuntu-latest"
28 |
29 | environment:
30 | name: github-pages # required so actions/deploy-pages can target the github-pages environment
31 |
32 | permissions:
33 | contents: read
34 | pages: write # Permission to deploy to Pages
35 | id-token: write # Permission to verify deployment origin
36 |
37 | steps:
38 | # Check out the repository code
39 | - uses: actions/checkout@v6
40 | with:
41 | lfs: true
42 |
43 | # Determine the Python version to use
44 | - name: Get Python version
45 | id: get-python
46 | run: |
47 | echo "python-version=$(python ./.github/rhiza/utils/version_max.py)" >> "$GITHUB_OUTPUT"
48 |
49 | # Use the composite action to set up the project
50 | - name: Setup the project
51 | uses: ./.github/rhiza/actions/setup-project
52 | with:
53 | python-version: ${{ steps.get-python.outputs.python-version }}
54 | uv-extra-index-url: ${{ secrets.UV_EXTRA_INDEX_URL }}
55 |
56 | - name: "Make the book"
57 | run: |
58 | make book
59 |
60 | # Package all artifacts for GitHub Pages deployment
61 | # This prepares the combined outputs for deployment by creating a single artifact
62 | - name: Upload static files as artifact
63 | uses: actions/upload-pages-artifact@v4 # Official GitHub Pages artifact upload action
64 | with:
65 | path: _book/ # Path to the directory containing all artifacts to deploy
66 |
67 | # Deploy the packaged artifacts to GitHub Pages
68 | # This step publishes the content to GitHub Pages
69 | # The deployment is conditional based on whether the repository is a fork and the PUBLISH_COMPANION_BOOK variable is set
70 | # If the repository is a fork, deployment is skipped to avoid unauthorized publishing
71 | # If PUBLISH_COMPANION_BOOK is not set, it defaults to allowing deployment
72 | - name: Deploy to GitHub Pages
73 | if: ${{ !github.event.repository.fork && (vars.PUBLISH_COMPANION_BOOK == 'true' || vars.PUBLISH_COMPANION_BOOK == '') }}
74 | uses: actions/deploy-pages@v4 # Official GitHub Pages deployment action
75 | continue-on-error: true
76 | -------------------------------------------------------------------------------- /.github/workflows/rhiza_sync.yml: -------------------------------------------------------------------------------- 1 | name: RHIZA SYNC
2 | # This workflow synchronizes the repository with its template.
3 | # IMPORTANT: When workflow files (.github/workflows/rhiza_*.yml) are modified,
4 | # a Personal Access Token (PAT) with 'workflow' scope is required.
5 | # The PAT_TOKEN secret must be set in repository secrets.
6 | # See .github/TOKEN_SETUP.md for setup instructions.
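#
# Example (illustrative; assumes the GitHub CLI is installed and authenticated):
#   gh secret set PAT_TOKEN --body "<fine-grained PAT with 'workflow' scope>"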
7 | 8 | permissions: 9 | contents: write 10 | pull-requests: write 11 | 12 | on: 13 | workflow_dispatch: 14 | inputs: 15 | create-pr: 16 | description: "Create a pull request" 17 | type: boolean 18 | default: true 19 | schedule: 20 | - cron: '0 0 * * 1' # Weekly on Monday 21 | 22 | jobs: 23 | sync: 24 | if: ${{ github.repository != 'jebel-quant/rhiza' }} 25 | runs-on: ubuntu-latest 26 | 27 | steps: 28 | - name: Checkout repository 29 | uses: actions/checkout@v6 30 | with: 31 | token: ${{ secrets.PAT_TOKEN || github.token }} 32 | fetch-depth: 0 33 | 34 | - name: Define sync branch name 35 | id: branch 36 | run: | 37 | echo "name=rhiza/${{ github.run_id }}" >> "$GITHUB_OUTPUT" 38 | 39 | - name: Check PAT_TOKEN configuration 40 | shell: bash 41 | env: 42 | PAT_TOKEN: ${{ secrets.PAT_TOKEN }} 43 | run: | 44 | if [ -z "$PAT_TOKEN" ]; then 45 | echo "::warning::PAT_TOKEN secret is not configured." 46 | echo "::warning::If this sync modifies workflow files, the push will fail." 47 | echo "::warning::See .github/TOKEN_SETUP.md for setup instructions." 48 | else 49 | echo "✓ PAT_TOKEN is configured." 50 | fi 51 | 52 | - name: Install uv 53 | uses: astral-sh/setup-uv@v7 54 | 55 | - name: Sync template 56 | id: sync 57 | shell: bash 58 | run: | 59 | set -euo pipefail 60 | 61 | uvx rhiza materialize --force . 62 | 63 | git add -A 64 | 65 | if git diff --cached --quiet; then 66 | echo "changes_detected=false" >> "$GITHUB_OUTPUT" 67 | exit 0 68 | fi 69 | 70 | echo "changes_detected=true" >> "$GITHUB_OUTPUT" 71 | 72 | git config user.name "github-actions[bot]" 73 | git config user.email "41898282+github-actions[bot]@users.noreply.github.com" 74 | 75 | git commit -m "chore: Update via rhiza" 76 | 77 | - name: Create pull request 78 | if: > 79 | (github.event_name == 'schedule' || inputs.create-pr == true) 80 | && steps.sync.outputs.changes_detected == 'true' 81 | uses: peter-evans/create-pull-request@v8 82 | with: 83 | token: ${{ secrets.PAT_TOKEN || github.token }} 84 | base: ${{ github.event.repository.default_branch }} 85 | branch: ${{ steps.branch.outputs.name }} 86 | delete-branch: true 87 | title: "chore: Sync with rhiza" 88 | body: | 89 | This pull request synchronizes the repository with its template. 90 | 91 | Changes were generated automatically using **rhiza**. 92 | -------------------------------------------------------------------------------- /tests/test_rhiza/README.md: -------------------------------------------------------------------------------- 1 | # Rhiza Test Suite 2 | 3 | This directory contains the core test suite that flows down via SYNC action from the [jebel-quant/rhiza](https://github.com/jebel-quant/rhiza) repository. 
4 | 5 | ## Purpose 6 | 7 | These tests validate the foundational infrastructure and workflows that are shared across all Rhiza-synchronized projects: 8 | 9 | - **Git-based workflows**: Version bumping, releasing, and tagging 10 | - **Project structure**: Ensuring required files and directories exist 11 | - **Build automation**: Makefile targets and commands 12 | - **Documentation**: README code examples and docstring validation 13 | - **Synchronization**: Template file exclusion and sync script behavior 14 | - **Development tools**: Mock fixtures for testing in isolation 15 | 16 | ## Test Organization 17 | 18 | - `conftest.py` - Pytest fixtures including the `git_repo` fixture for sandboxed testing 19 | - `test_bump_script.py` - Tests for version bumping workflow 20 | - `test_docstrings.py` - Doctest validation across all modules 21 | - `test_git_repo_fixture.py` - Validation of the mock git repository fixture 22 | - `test_makefile.py` - Makefile target validation using dry-runs 23 | - `test_readme.py` - README code example execution and validation 24 | - `test_release_script.py` - Release and tagging workflow tests 25 | - `test_structure.py` - Project structure and file existence checks 26 | - `test_sync_script.py` - Template synchronization exclusion tests 27 | 28 | ## Exclusion from Sync 29 | 30 | While it is **technically possible** to exclude these tests from synchronization by adding them to the `exclude` section of your `template.yml` file, this is **not recommended**. 31 | 32 | These tests ensure that the shared infrastructure components work correctly in your project. Excluding them means: 33 | 34 | - ❌ No validation of version bumping and release workflows 35 | - ❌ No automated checks for project structure requirements 36 | - ❌ Missing critical integration tests for synced scripts 37 | - ❌ Potential breakage when shared components are updated 38 | 39 | ## When to Exclude 40 | 41 | You should only consider excluding specific tests if: 42 | 43 | 1. Your project has fundamentally different workflow requirements 44 | 2. You've replaced the synced scripts with custom implementations 45 | 3. You have equivalent or better test coverage for the same functionality 46 | 47 | If you must exclude tests, do so selectively rather than excluding the entire `test_rhiza/` directory. 48 | 49 | ## Running the Tests 50 | 51 | ```bash 52 | # Run all Rhiza tests 53 | make test 54 | 55 | # Run specific test files 56 | pytest tests/test_rhiza/test_bump_script.py -v 57 | 58 | # Run tests with detailed output 59 | pytest tests/test_rhiza/ -vv 60 | ``` 61 | 62 | ## Customization 63 | 64 | If you need to customize or extend these tests for your project-specific needs, consider: 65 | 66 | 1. Creating additional test files in `tests/` (outside `test_rhiza/`) 67 | 2. Adding project-specific fixtures to a separate `conftest.py` 68 | 3. Keeping the synced tests intact for baseline validation 69 | 70 | This approach maintains the safety net of standardized tests while accommodating your unique requirements. 71 | -------------------------------------------------------------------------------- /.github/rhiza/scripts/marimushka.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | # Export Marimo notebooks in ${MARIMO_FOLDER} to HTML under _marimushka 3 | # This replicates the previous Makefile logic for maintainability and reuse. 
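#
# Usage sketch (all variables are optional; the defaults below apply otherwise):
#   MARIMO_FOLDER=book/marimo MARIMUSHKA_OUTPUT=_marimushka ./.github/rhiza/scripts/marimushka.sh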
4 |
5 | set -e
6 |
7 | MARIMO_FOLDER=${MARIMO_FOLDER:-book/marimo}
8 | MARIMUSHKA_OUTPUT=${MARIMUSHKA_OUTPUT:-_marimushka}
9 | UV_BIN=${UV_BIN:-./bin/uv}
10 | UVX_BIN=${UVX_BIN:-./bin/uvx}
11 |
12 | BLUE="\033[36m"
13 | YELLOW="\033[33m"
14 | RESET="\033[0m"
15 |
16 | printf "%b[INFO] Exporting notebooks from %s...%b\n" "$BLUE" "$MARIMO_FOLDER" "$RESET"
17 |
18 | if [ ! -d "$MARIMO_FOLDER" ]; then
19 | printf "%b[WARN] Directory '%s' does not exist. Skipping marimushka.%b\n" "$YELLOW" "$MARIMO_FOLDER" "$RESET"
20 | exit 0
21 | fi
22 |
23 | # Ensure output directory exists
24 | mkdir -p "$MARIMUSHKA_OUTPUT"
25 |
26 | # Discover .py files (top-level only) using globbing; handle no-match case
27 | # Using shell globbing to find all .py files in the notebook folder
28 | # The set command expands the glob pattern; if no files match, the pattern itself is returned
29 | set -- "$MARIMO_FOLDER"/*.py
30 | if [ "$1" = "$MARIMO_FOLDER/*.py" ]; then
31 | # No Python files found - the glob pattern didn't match any files
32 | printf "%b[WARN] No Python files found in '%s'.%b\n" "$YELLOW" "$MARIMO_FOLDER" "$RESET"
33 | # Create a minimal index.html indicating no notebooks
34 | printf '<!DOCTYPE html>\n<html>\n<head><title>Marimo Notebooks</title></head>\n<body>\n<h1>Marimo Notebooks</h1>\n<p>No notebooks found.</p>\n</body>\n</html>\n' > "$MARIMUSHKA_OUTPUT/index.html"
35 | exit 0
36 | fi
37 |
38 |
39 | CURRENT_DIR=$(pwd)
40 | OUTPUT_DIR="$CURRENT_DIR/$MARIMUSHKA_OUTPUT"
41 |
42 | # Resolve UVX_BIN to absolute path if it's a relative path (contains / but doesn't start with /)
43 | # This is necessary because we'll change directory later and need absolute paths
44 | # Case 1: Already absolute (starts with /) - no change needed
45 | # Case 2: Relative path with / (e.g., ./bin/uvx) - convert to absolute
46 | # Case 3: Command name only (e.g., uvx) - leave as-is to search in PATH
47 | case "$UVX_BIN" in
48 | /*) ;;
49 | */*) UVX_BIN="$CURRENT_DIR/$UVX_BIN" ;;
50 | *) ;;
51 | esac
52 |
53 | # Resolve UV_BIN to absolute path using the same logic
54 | case "$UV_BIN" in
55 | /*) ;;
56 | */*) UV_BIN="$CURRENT_DIR/$UV_BIN" ;;
57 | *) ;;
58 | esac
59 |
60 | # Derive UV_INSTALL_DIR from UV_BIN
61 | # This directory is passed to marimushka so it can find uv for processing notebooks
62 | UV_INSTALL_DIR=$(dirname "$UV_BIN")
63 |
64 | # Change to the notebook directory to ensure relative paths in notebooks work correctly
65 | # Marimo notebooks may contain relative imports or file references
66 | cd "$MARIMO_FOLDER"
67 |
68 | # Run marimushka export
69 | # - --notebooks: directory containing .py notebooks
70 | # - --output: where to write HTML files
71 | # - --bin-path: where marimushka can find the uv binary for processing
72 | "$UVX_BIN" "marimushka>=0.1.9" export --notebooks "." --output "$OUTPUT_DIR" --bin-path "$UV_INSTALL_DIR"
73 |
74 | # Ensure GitHub Pages does not process with Jekyll
75 | # The : command is a no-op that creates an empty file
76 | # .nojekyll tells GitHub Pages to serve files as-is without Jekyll processing
77 | : > "$OUTPUT_DIR/.nojekyll" -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing
2 |
3 | This document is a guide to contributing to the project.
4 |
5 | We welcome all contributions. You don't need to be an expert
6 | to help out.
7 |
8 | ## Checklist
9 |
10 | Contributions are made through
11 | [pull requests](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/about-pull-requests).
12 | Before sending a pull request, make sure you do the following:
13 |
14 | - Run `make check` to make sure your code adheres to our [coding style](#code-style)
15 | and all tests pass.
16 | - [Write unit tests](#writing-unit-tests) for new functionality added.
17 |
18 | ## Building from source
19 |
20 | You'll need to build the project locally to start editing code.
21 | To install from source, clone the repository from GitHub,
22 | navigate to its root, and run the following command:
23 |
24 | ```bash
25 | make install
26 | ```
27 |
28 | ## Contributing code
29 |
30 | To contribute to the project, send us pull requests.
31 | For those new to contributing, check out GitHub's
32 | [guide](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/about-pull-requests).
33 |
34 | Once you've made your pull request, a member of the
35 | development team will assign themselves to review it.
36 | You might have a few
37 | back-and-forths with your reviewer before it is accepted,
38 | which is completely normal.
39 | Your pull request will trigger continuous integration tests
40 | for many different
41 | Python versions and different platforms.
If these tests start failing, 42 | please 43 | fix your code and send another commit, which will re-trigger the tests. 44 | 45 | If you'd like to add a new feature, please propose your 46 | change in a GitHub issue to make sure 47 | that your priorities align with ours. 48 | 49 | If you'd like to contribute code but don't know where to start, 50 | try one of the 51 | following: 52 | 53 | - Read the source and enhance the documentation, 54 | or address TODOs 55 | - Browse the open issues, 56 | and look for the issues tagged "help wanted". 57 | 58 | ## Code style 59 | 60 | We use ruff to enforce our Python coding style. 61 | Before sending us a pull request, navigate to the project 62 | root and run 63 | 64 | ```bash 65 | make check 66 | ``` 67 | 68 | to make sure that your changes abide by our style conventions. 69 | Please fix any errors that are reported before sending 70 | the pull request. 71 | 72 | ## Writing unit tests 73 | 74 | Most code changes will require new unit tests. 75 | Even bug fixes require unit tests, 76 | since the presence of bugs usually indicates insufficient tests. 77 | When adding tests, try to find a file in which your tests should belong; 78 | if you're testing a new feature, you might want to create a new test file. 79 | 80 | We use the popular Python [pytest](https://docs.pytest.org/en/) framework for our 81 | tests. 82 | 83 | ## Running unit tests 84 | 85 | We use `pytest` to run our unit tests. 86 | To run all unit tests run the following command: 87 | 88 | ```bash 89 | make test 90 | ``` 91 | 92 | Please make sure that your change doesn't cause any 93 | of the unit tests to fail. 94 | -------------------------------------------------------------------------------- /tests/test_rhiza/test_updatereadme_script.py: -------------------------------------------------------------------------------- 1 | """Tests for the update-readme-help.sh script using a sandboxed git environment. 2 | 3 | This file and its associated tests flow down via a SYNC action from the jebel-quant/rhiza repository 4 | (https://github.com/jebel-quant/rhiza). 5 | """ 6 | 7 | import subprocess 8 | 9 | 10 | def test_update_readme_success(git_repo): 11 | """Test successful update of README.md.""" 12 | script = git_repo / ".github" / "rhiza" / "scripts" / "update-readme-help.sh" 13 | readme_path = git_repo / "README.md" 14 | 15 | # Create a README with the target section 16 | initial_content = """# Project 17 | 18 | Some description. 19 | 20 | Run `make help` to see all available targets: 21 | 22 | ```makefile 23 | old help content 24 | ``` 25 | 26 | Footer content. 27 | """ 28 | readme_path.write_text(initial_content) 29 | 30 | # Run the script 31 | result = subprocess.run([str(script)], cwd=git_repo, capture_output=True, text=True) 32 | 33 | assert result.returncode == 0 34 | assert "README.md updated" in result.stdout 35 | 36 | # Verify content 37 | new_content = readme_path.read_text() 38 | assert "Mock Makefile Help" in new_content 39 | assert "old help content" not in new_content 40 | assert "Footer content" in new_content 41 | 42 | 43 | def test_update_readme_no_marker(git_repo): 44 | """Test script behavior when README.md lacks the marker.""" 45 | script = git_repo / ".github" / "rhiza" / "scripts" / "update-readme-help.sh" 46 | readme_path = git_repo / "README.md" 47 | 48 | # Create a README without the target section 49 | initial_content = """# Project 50 | 51 | No help section here. 
52 | """ 53 | readme_path.write_text(initial_content) 54 | 55 | # Run the script 56 | result = subprocess.run([str(script)], cwd=git_repo, capture_output=True, text=True) 57 | 58 | # The script exits with 0 if pattern not found (based on my reading of the script) 59 | # Wait, let's check the script again. 60 | # if (pattern_found == 0) { print ... > "/dev/stderr"; exit 2 } 61 | # ... 62 | # if [ $awk_status -eq 2 ]; then ... exit 0 63 | 64 | assert result.returncode == 0 65 | # It prints to stderr if not found, but then exits 0. 66 | # Note: The script redirects the awk error to stderr. 67 | # But the script itself swallows the exit code 2 and exits 0. 68 | 69 | # Verify content is unchanged 70 | assert readme_path.read_text() == initial_content 71 | 72 | 73 | def test_update_readme_preserves_surrounding_content(git_repo): 74 | """Test that content before and after the help block is preserved.""" 75 | script = git_repo / ".github" / "rhiza" / "scripts" / "update-readme-help.sh" 76 | readme_path = git_repo / "README.md" 77 | 78 | initial_content = """Header 79 | 80 | Run `make help` to see all available targets: 81 | 82 | ```makefile 83 | replace me 84 | ``` 85 | 86 | Footer 87 | """ 88 | readme_path.write_text(initial_content) 89 | 90 | subprocess.run([str(script)], cwd=git_repo, check=True) 91 | 92 | new_content = readme_path.read_text() 93 | assert new_content.startswith("Header\n\nRun `make help`") 94 | assert new_content.endswith("```\n\nFooter\n") 95 | -------------------------------------------------------------------------------- /tests/test_rhiza/test_readme.py: -------------------------------------------------------------------------------- 1 | """Tests for README code examples. 2 | 3 | This file and its associated tests flow down via a SYNC action from the jebel-quant/rhiza repository 4 | (https://github.com/jebel-quant/rhiza). 5 | 6 | This module extracts Python code and expected result blocks from README.md, 7 | executes the code, and verifies the output matches the documented result. 8 | """ 9 | 10 | import re 11 | import subprocess 12 | import sys 13 | 14 | import pytest 15 | 16 | # Regex for Python code blocks 17 | CODE_BLOCK = re.compile(r"```python\n(.*?)```", re.DOTALL) 18 | 19 | RESULT = re.compile(r"```result\n(.*?)```", re.DOTALL) 20 | 21 | 22 | def test_readme_runs(logger, root): 23 | """Execute README code blocks and compare output to documented results.""" 24 | readme = root / "README.md" 25 | logger.info("Reading README from %s", readme) 26 | readme_text = readme.read_text(encoding="utf-8") 27 | code_blocks = CODE_BLOCK.findall(readme_text) 28 | result_blocks = RESULT.findall(readme_text) 29 | logger.info("Found %d code block(s) and %d result block(s) in README", len(code_blocks), len(result_blocks)) 30 | 31 | code = "".join(code_blocks) # merged code 32 | expected = "".join(result_blocks) # merged results 33 | 34 | # Trust boundary: we execute Python snippets sourced from README.md in this repo. 35 | # The README is part of the trusted repository content and reviewed in PRs. 
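    # All ```python blocks are concatenated and executed as one script, so
    # later blocks may rely on names defined in earlier ones; the combined
    # stdout is then compared against the concatenated ```result blocks.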
36 |     logger.debug("Executing README code via %s -c ...", sys.executable)
37 |     result = subprocess.run([sys.executable, "-c", code], capture_output=True, text=True, cwd=root)  # noqa: S603
38 | 
39 |     stdout = result.stdout
40 |     logger.debug("Execution finished with return code %d", result.returncode)
41 |     if result.stderr:
42 |         logger.debug("Stderr from README code:\n%s", result.stderr)
43 |     logger.debug("Stdout from README code:\n%s", stdout)
44 | 
45 |     assert result.returncode == 0, f"README code exited with {result.returncode}. Stderr:\n{result.stderr}"
46 |     logger.info("README code executed successfully; comparing output to expected result")
47 |     assert stdout.strip() == expected.strip()
48 |     logger.info("README code output matches expected result")
49 | 
50 | 
51 | class TestReadmeTestEdgeCases:
52 |     """Edge cases for README code block testing."""
53 | 
54 |     def test_readme_file_exists_at_root(self, root):
55 |         """README.md should exist at repository root."""
56 |         readme = root / "README.md"
57 |         assert readme.exists()
58 |         assert readme.is_file()
59 | 
60 |     def test_readme_is_readable(self, root):
61 |         """README.md should be readable with UTF-8 encoding."""
62 |         readme = root / "README.md"
63 |         content = readme.read_text(encoding="utf-8")
64 |         assert len(content) > 0
65 |         assert isinstance(content, str)
66 | 
67 |     def test_readme_code_is_syntactically_valid(self, root):
68 |         """Python code blocks in README should be syntactically valid."""
69 |         readme = root / "README.md"
70 |         content = readme.read_text(encoding="utf-8")
71 |         code_blocks = re.findall(r"\`\`\`python\n(.*?)\`\`\`", content, re.DOTALL)
72 | 
73 |         for i, code in enumerate(code_blocks):
74 |             try:
75 |                 compile(code, f"<code block {i}>", "exec")
76 |             except SyntaxError as e:
77 |                 pytest.fail(f"Code block {i} has syntax error: {e}")
--------------------------------------------------------------------------------
/.github/rhiza/scripts/book.sh:
--------------------------------------------------------------------------------
 1 | #!/bin/sh
 2 | # Assemble the combined documentation site into _book
 3 | # - Copies API docs (pdoc), coverage, test report, and marimushka exports
 4 | # - Generates a links.json consumed by minibook
 5 | #
 6 | # This script mirrors the logic previously embedded in the Makefile `book` target
 7 | # for maintainability and testability. It is POSIX-sh compatible.
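#
# For reference, when every artifact is present the generated links.json has
# this shape (illustrative; only the entries for artifacts that exist are
# written, in the order they are discovered below):
#   {"API": "./pdoc/index.html", "Coverage": "./tests/html-coverage/index.html",
#    "Test Report": "./tests/html-report/report.html", "Notebooks": "./marimushka/index.html"}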
8 | 9 | set -e 10 | 11 | BLUE="\033[36m" 12 | YELLOW="\033[33m" 13 | RESET="\033[0m" 14 | 15 | printf "%b[INFO] Building combined documentation...%b\n" "$BLUE" "$RESET" 16 | printf "%b[INFO] Assembling book without jq dependency...%b\n" "$BLUE" "$RESET" 17 | 18 | printf "%b[INFO] Delete the _book folder...%b\n" "$BLUE" "$RESET" 19 | rm -rf _book 20 | printf "%b[INFO] Create empty _book folder...%b\n" "$BLUE" "$RESET" 21 | mkdir -p _book 22 | 23 | # Start building links.json content without jq 24 | # We manually construct JSON by concatenating strings 25 | # This avoids the dependency on jq while maintaining valid JSON output 26 | LINKS_ENTRIES="" 27 | 28 | printf "%b[INFO] Copy API docs...%b\n" "$BLUE" "$RESET" 29 | if [ -f _pdoc/index.html ]; then 30 | mkdir -p _book/pdoc 31 | cp -r _pdoc/* _book/pdoc 32 | # Start building JSON entries - first entry doesn't need a comma prefix 33 | LINKS_ENTRIES='"API": "./pdoc/index.html"' 34 | fi 35 | 36 | printf "%b[INFO] Copy coverage report...%b\n" "$BLUE" "$RESET" 37 | if [ -f _tests/html-coverage/index.html ]; then 38 | mkdir -p _book/tests/html-coverage 39 | cp -r _tests/html-coverage/* _book/tests/html-coverage 40 | # Add comma separator if there are existing entries 41 | if [ -n "$LINKS_ENTRIES" ]; then 42 | LINKS_ENTRIES="$LINKS_ENTRIES, \"Coverage\": \"./tests/html-coverage/index.html\"" 43 | else 44 | LINKS_ENTRIES='"Coverage": "./tests/html-coverage/index.html"' 45 | fi 46 | else 47 | printf "%b[WARN] No coverage report found or directory is empty%b\n" "$YELLOW" "$RESET" 48 | fi 49 | 50 | printf "%b[INFO] Copy test report...%b\n" "$BLUE" "$RESET" 51 | if [ -f _tests/html-report/report.html ]; then 52 | mkdir -p _book/tests/html-report 53 | cp -r _tests/html-report/* _book/tests/html-report 54 | if [ -n "$LINKS_ENTRIES" ]; then 55 | LINKS_ENTRIES="$LINKS_ENTRIES, \"Test Report\": \"./tests/html-report/report.html\"" 56 | else 57 | LINKS_ENTRIES='"Test Report": "./tests/html-report/report.html"' 58 | fi 59 | else 60 | printf "%b[WARN] No test report found or directory is empty%b\n" "$YELLOW" "$RESET" 61 | fi 62 | 63 | printf "%b[INFO] Copy notebooks...%b\n" "$BLUE" "$RESET" 64 | if [ -f _marimushka/index.html ]; then 65 | mkdir -p _book/marimushka 66 | cp -r _marimushka/* _book/marimushka 67 | if [ -n "$LINKS_ENTRIES" ]; then 68 | LINKS_ENTRIES="$LINKS_ENTRIES, \"Notebooks\": \"./marimushka/index.html\"" 69 | else 70 | LINKS_ENTRIES='"Notebooks": "./marimushka/index.html"' 71 | fi 72 | printf "%b[INFO] Copied notebooks into _book/marimushka%b\n" "$BLUE" "$RESET" 73 | else 74 | printf "%b[WARN] No notebooks found or directory is empty%b\n" "$YELLOW" "$RESET" 75 | fi 76 | 77 | # Write final links.json 78 | # Wrap the accumulated entries in JSON object syntax 79 | if [ -n "$LINKS_ENTRIES" ]; then 80 | # If we have entries, create a proper JSON object with them 81 | printf '{%s}\n' "$LINKS_ENTRIES" > _book/links.json 82 | else 83 | # If no entries were found, create an empty JSON object 84 | printf '{}\n' > _book/links.json 85 | fi 86 | 87 | printf "%b[INFO] Generated links.json:%b\n" "$BLUE" "$RESET" 88 | cat _book/links.json 89 | -------------------------------------------------------------------------------- /presentation/Makefile.presentation: -------------------------------------------------------------------------------- 1 | ## Makefile.presentation - Presentation targets 2 | # This file is included by the main Makefile 3 | 4 | # Declare phony targets (they don't produce files) 5 | .PHONY: presentation presentation-pdf presentation-serve 6 | 
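# Note: the targets below are declared as double-colon rules (`target::`),
# which lets Makefiles that include this file attach additional recipes to
# the same target names.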
7 | ##@ Presentation 8 | presentation:: ## generate presentation slides from PRESENTATION.md using Marp 9 | @printf "${BLUE}[INFO] Checking for Marp CLI...${RESET}\n" 10 | @if ! command -v marp >/dev/null 2>&1; then \ 11 | if command -v npm >/dev/null 2>&1; then \ 12 | printf "${YELLOW}[WARN] Marp CLI not found. Installing with npm...${RESET}\n"; \ 13 | npm install -g @marp-team/marp-cli || { \ 14 | printf "${RED}[ERROR] Failed to install Marp CLI. Please install manually:${RESET}\n"; \ 15 | printf "${BLUE} npm install -g @marp-team/marp-cli${RESET}\n"; \ 16 | exit 1; \ 17 | }; \ 18 | else \ 19 | printf "${RED}[ERROR] npm not found. Please install Node.js and npm first.${RESET}\n"; \ 20 | printf "${BLUE} See: https://nodejs.org/${RESET}\n"; \ 21 | printf "${BLUE} Then run: npm install -g @marp-team/marp-cli${RESET}\n"; \ 22 | exit 1; \ 23 | fi; \ 24 | fi 25 | @printf "${BLUE}[INFO] Generating HTML presentation...${RESET}\n" 26 | @marp PRESENTATION.md -o presentation.html 27 | @printf "${GREEN}[SUCCESS] Presentation generated: presentation.html${RESET}\n" 28 | @printf "${BLUE}[TIP] Open presentation.html in a browser to view slides${RESET}\n" 29 | 30 | presentation-pdf:: ## generate PDF presentation from PRESENTATION.md using Marp 31 | @printf "${BLUE}[INFO] Checking for Marp CLI...${RESET}\n" 32 | @if ! command -v marp >/dev/null 2>&1; then \ 33 | if command -v npm >/dev/null 2>&1; then \ 34 | printf "${YELLOW}[WARN] Marp CLI not found. Installing with npm...${RESET}\n"; \ 35 | npm install -g @marp-team/marp-cli || { \ 36 | printf "${RED}[ERROR] Failed to install Marp CLI. Please install manually:${RESET}\n"; \ 37 | printf "${BLUE} npm install -g @marp-team/marp-cli${RESET}\n"; \ 38 | exit 1; \ 39 | }; \ 40 | else \ 41 | printf "${RED}[ERROR] npm not found. Please install Node.js and npm first.${RESET}\n"; \ 42 | printf "${BLUE} See: https://nodejs.org/${RESET}\n"; \ 43 | printf "${BLUE} Then run: npm install -g @marp-team/marp-cli${RESET}\n"; \ 44 | exit 1; \ 45 | fi; \ 46 | fi 47 | @printf "${BLUE}[INFO] Generating PDF presentation...${RESET}\n" 48 | @marp PRESENTATION.md -o presentation.pdf --allow-local-files 49 | @printf "${GREEN}[SUCCESS] Presentation generated: presentation.pdf${RESET}\n" 50 | 51 | presentation-serve:: ## serve presentation interactively with Marp 52 | @printf "${BLUE}[INFO] Checking for Marp CLI...${RESET}\n" 53 | @if ! command -v marp >/dev/null 2>&1; then \ 54 | if command -v npm >/dev/null 2>&1; then \ 55 | printf "${YELLOW}[WARN] Marp CLI not found. Installing with npm...${RESET}\n"; \ 56 | npm install -g @marp-team/marp-cli || { \ 57 | printf "${RED}[ERROR] Failed to install Marp CLI. Please install manually:${RESET}\n"; \ 58 | printf "${BLUE} npm install -g @marp-team/marp-cli${RESET}\n"; \ 59 | exit 1; \ 60 | }; \ 61 | else \ 62 | printf "${RED}[ERROR] npm not found. Please install Node.js and npm first.${RESET}\n"; \ 63 | printf "${BLUE} See: https://nodejs.org/${RESET}\n"; \ 64 | printf "${BLUE} Then run: npm install -g @marp-team/marp-cli${RESET}\n"; \ 65 | exit 1; \ 66 | fi; \ 67 | fi 68 | @printf "${BLUE}[INFO] Starting Marp server...${RESET}\n" 69 | @printf "${GREEN}[INFO] Press Ctrl+C to stop the server${RESET}\n" 70 | @marp -s . 
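# Example usage (illustrative; run from the repository root, with a
# PRESENTATION.md present):
#   make presentation        # render PRESENTATION.md to presentation.html
#   make presentation-pdf    # render PRESENTATION.md to presentation.pdf
#   make presentation-serve  # live-preview the slides with Marp's server mode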
71 | 
--------------------------------------------------------------------------------
/docs/user/intro.md:
--------------------------------------------------------------------------------
 1 | # Introduction
 2 | 
 3 | Loman is a Python library for keeping track of dependencies between elements of a large computation, allowing you to recalculate only the parts that are necessary as new input data arrives, or as you change how certain elements are calculated.
 4 | 
 5 | It stems from experience with real-life systems taking data from many independent sources. Often systems are implemented using sets of scheduled tasks. This approach is often pragmatic at first, but suffers several drawbacks as the scale of the system increases:
 6 | - When failures occur, such as a required file or data set not being in place on time, downstream scheduled tasks may execute anyway.
 7 | - When re-runs are required, typically each step must be manually invoked. Often it is not clear which steps must be re-run, and so operators re-run everything until things look right. A large proportion of the operational overhead of many real-world systems comes from needing enough capacity for improvised re-runs when systems fail.
 8 | - As tasks are added, the schedule may become tight. It may not be clear which items can be moved earlier or later to make room for new tasks.
 9 | 
10 | Other problems occur at the scale of single programs, which are often programmed as a sequential set of steps. Typically any reasonably complex computation will require multiple iterations before it is correct. A limiting factor is the speed at which the programmer can perform these iterations - there are only so many minutes in each day. Often repeatedly pulling large data sets or re-performing lengthy calculations whose results will not have changed between iterations ends up substantially slowing progress.
11 | 
12 | Loman aims to provide a solution to both these problems. Computations are represented explicitly as a directed acyclic graph data structure. A graph is a set of nodes, each representing an input or calculated value, and a set of edges (lines) between them, where one value feeds into the calculation of another. This is similar to a flowchart, the calculation tree in Excel, or the dependency graph used in build tools such as make. Loman keeps track of the current state of each node as the user requests certain elements be calculated, inserts new data into input nodes of the graph, or even changes the functions used to perform calculations. This allows analysts, researchers and developers to iterate quickly, making changes to isolated parts of complicated calculations.
13 | 
14 | Loman can serialize the entire contents of a graph to disk. When failures occur in batch systems, a serialized copy of the computation allows for easy inspection of the inputs and intermediates to determine what failed. Once the error is diagnosed, it can be fixed by inserting updated data if available, and only recalculating what was necessary. Alternatively, input or intermediate data can be directly updated by the operator. In either case, diagnosing errors is as easy as it can be, and recovering from errors is efficient.
15 | 
16 | Finally, Loman also provides useful capabilities to real-time systems, where the cadence of inputs can vary widely between input sources, and the computational requirement for different outputs can also be quite different.
In this context, Loman allows updates to fast-calculated outputs for every tick of incoming data, but may limit the rate at which slower calculated outputs are produced. 17 | 18 | Hopefully this gives a flavor of the type of problem Loman is trying to solve, and whether it will be useful to you. Our aim is that if you are performing a computational task, Loman should be able to provide value to you, and should be as frictionless as possible to use. 19 | -------------------------------------------------------------------------------- /.github/rhiza/scripts/update-readme-help.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | # Script to update README.md with the current output from `make help` 3 | # 4 | # This script replaces the hardcoded Makefile help output in README.md 5 | # with the actual output generated by running `make help`. 6 | 7 | set -eu 8 | 9 | # Navigate to repository root (from .github/rhiza/scripts/ up three levels to repo root) 10 | cd "$(dirname "$0")/../../.." 11 | 12 | README_FILE="README.md" 13 | TEMP_FILE=$(mktemp) 14 | HELP_TEMP=$(mktemp) 15 | 16 | # Generate the help output from Makefile 17 | # Strip ANSI color codes and filter out make[1] directory messages 18 | # The sed command removes ANSI escape sequences (color codes) from the output 19 | # The grep commands filter out make's directory change messages 20 | make help 2>/dev/null | \ 21 | sed 's/\x1b\[[0-9;]*m//g' | \ 22 | grep -v "^make\[" | \ 23 | grep -v "Entering directory" | \ 24 | grep -v "Leaving directory" > "$HELP_TEMP" 25 | 26 | # Create the new README with updated help output 27 | # Using a temporary file to avoid awk escaping issues 28 | # Temporarily disable exit-on-error to handle pattern not found gracefully 29 | set +e 30 | # The awk script processes the README.md file to find and replace the help section 31 | # It looks for the marker pattern, preserves the structure, and inserts updated help text 32 | awk -v helpfile="$HELP_TEMP" ' 33 | BEGIN { 34 | in_help_block = 0 35 | pattern_found = 0 36 | } 37 | 38 | # Detect start of help output block 39 | # This marker indicates where the Makefile help output should be inserted 40 | /^Run `make help` to see all available targets:$/ { 41 | print 42 | pattern_found = 1 43 | getline 44 | if ($0 == "") { 45 | print 46 | } 47 | getline 48 | # If we find the code fence, start replacing the content 49 | if ($0 ~ /^```makefile$/) { 50 | print "```makefile" 51 | # Read and print help output from file 52 | while ((getline line < helpfile) > 0) { 53 | print line 54 | } 55 | close(helpfile) 56 | in_help_block = 1 57 | next 58 | } 59 | } 60 | 61 | # Skip lines inside the old help block 62 | # Once we hit the closing code fence, we stop skipping lines 63 | in_help_block == 1 && /^```$/ { 64 | print "```" 65 | in_help_block = 0 66 | next 67 | } 68 | 69 | # Continue skipping lines that are part of the old help output 70 | in_help_block == 1 { 71 | next 72 | } 73 | 74 | # Print all other lines (outside the help block) unchanged 75 | { 76 | print 77 | } 78 | 79 | END { 80 | # If we never found the pattern, notify but don'\''t fail 81 | # This allows the script to work even if README structure changes 82 | if (pattern_found == 0) { 83 | print "INFO: No help section marker found in README.md - skipping update" > "/dev/stderr" 84 | exit 2 85 | } 86 | } 87 | ' "$README_FILE" > "$TEMP_FILE" 88 | 89 | # Check if awk succeeded or pattern was not found 90 | # Exit code 2 means the pattern wasn't found (not an error) 91 | # 
Other non-zero codes indicate genuine errors 92 | awk_status=$? 93 | set -e 94 | if [ $awk_status -eq 2 ]; then 95 | # Pattern not found - clean up and exit gracefully (this is not an error) 96 | rm -f "$TEMP_FILE" "$HELP_TEMP" 97 | exit 0 98 | elif [ $awk_status -ne 0 ]; then 99 | # Other awk error - this is a genuine error that should be reported 100 | rm -f "$TEMP_FILE" "$HELP_TEMP" 101 | echo "ERROR: Failed to update README.md due to awk error" >&2 102 | exit 1 103 | else 104 | # Replace the original README with the updated version 105 | mv "$TEMP_FILE" "$README_FILE" 106 | echo "README.md updated with current 'make help' output" 107 | fi 108 | 109 | # Clean up temporary help file (safety net for successful path) 110 | rm -f "$HELP_TEMP" 111 | -------------------------------------------------------------------------------- /.github/workflows/rhiza_marimo.yml: -------------------------------------------------------------------------------- 1 | # This file is part of the jebel-quant/rhiza repository 2 | # (https://github.com/jebel-quant/rhiza). 3 | # 4 | # Workflow: Marimo Notebooks 5 | # 6 | # Purpose: This workflow discovers and executes all Marimo notebooks in the 7 | # repository. It builds a dynamic matrix to run each notebook in 8 | # parallel to surface errors early and keep notebooks reproducible. 9 | # 10 | # Trigger: This workflow runs on every push and on pull requests to main/master 11 | # branches (including from forks) 12 | # 13 | # Components: 14 | # - 🔎 Discover notebooks in book/marimo 15 | # - 🧪 Run each notebook in parallel using a matrix strategy 16 | # - ✅ Fail-fast disabled to report all failing notebooks 17 | 18 | name: "MARIMO" 19 | 20 | permissions: 21 | contents: read 22 | 23 | on: 24 | push: 25 | pull_request: 26 | branches: [ main, master ] 27 | 28 | jobs: 29 | # Build a matrix of notebooks to test 30 | list-notebooks: 31 | runs-on: ubuntu-latest 32 | outputs: 33 | notebook-list: ${{ steps.notebooks.outputs.matrix }} 34 | steps: 35 | # Check out the repository code 36 | - uses: actions/checkout@v6 37 | 38 | # Find all Python files in the marimo folder and create a matrix for parallel execution 39 | - name: Find notebooks and build matrix 40 | id: notebooks 41 | run: | 42 | NOTEBOOK_DIR="book/marimo" 43 | echo "Searching notebooks in: $NOTEBOOK_DIR" 44 | # Check if directory exists 45 | if [ ! -d "$NOTEBOOK_DIR" ]; then 46 | echo "Directory $NOTEBOOK_DIR does not exist. Setting empty matrix." 47 | echo "matrix=[]" >> "$GITHUB_OUTPUT" 48 | exit 0 49 | fi 50 | 51 | # Find notebooks and handle empty results 52 | if [ -z "$(find "$NOTEBOOK_DIR" -maxdepth 1 -name "*.py" 2>/dev/null)" ]; then 53 | echo "No notebooks found in $NOTEBOOK_DIR. Setting empty matrix." 
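            # An empty matrix causes the test-notebooks job below to be
            # skipped entirely via its `if:` guard.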
54 |             echo "matrix=[]" >> "$GITHUB_OUTPUT"
55 |           else
56 |             notebooks=$(find "$NOTEBOOK_DIR" -maxdepth 1 -name "*.py" -print0 | xargs -0 -n1 echo | jq -R -s -c 'split("\n")[:-1]')
57 |             echo "matrix=$notebooks" >> "$GITHUB_OUTPUT"
58 |           fi
59 |         shell: bash
60 | 
61 |   # Create one job per notebook using the matrix strategy for parallel execution
62 |   test-notebooks:
63 |     if: needs.list-notebooks.outputs.notebook-list != '[]'
64 |     runs-on: ubuntu-latest
65 |     needs: list-notebooks
66 |     strategy:
67 |       matrix:
68 |         notebook: ${{ fromJson(needs.list-notebooks.outputs.notebook-list) }}
69 |       # Don't fail the entire workflow if one notebook fails
70 |       fail-fast: false
71 |     name: Run notebook ${{ matrix.notebook }}
72 |     steps:
73 |       # Check out the repository code
74 |       - uses: actions/checkout@v6
75 |         with:
76 |           lfs: true
77 | 
78 |       # Install uv/uvx
79 |       - name: Install uv
80 |         uses: astral-sh/setup-uv@v7
81 |         with:
82 |           version: "0.9.18"
83 | 
84 |       # Execute the notebook with the appropriate runner based on its content
85 |       - name: Run notebook
86 |         run: |
87 |           uvx uv run "${{ matrix.notebook }}"
88 |           # uvx → creates a fresh ephemeral environment
89 |           # uv run → runs the notebook as a script in that ephemeral env
90 |           # No project packages are pre-installed
91 |           # ✅ This forces the notebook to explicitly handle dependencies (e.g., uv install ., or pip install inside the script).
92 |           # ✅ It’s a true integration smoke test.
93 |           # Benefits of this pattern
94 |           # Confirms the notebook can bootstrap itself in a fresh environment
95 |           # Catches missing uv install or pip steps early
96 |           # Ensures CI/other users can run the notebook without manual setup
97 |         shell: bash
--------------------------------------------------------------------------------
/docs/user/features/creating/creating_computation_factories.md:
--------------------------------------------------------------------------------
 1 | # `@ComputationFactory`: Define computations with classes
 2 | 
 3 | Loman provides a decorator `@ComputationFactory`, which allows complete computations to be specified as a single class. This is often a more convenient way to define computations, providing better encapsulation, and making them easier to read later.
 4 | 
 5 | Within a computation factory class, we can use `input_node` to declare input nodes, and `calc_node` to define functions used to calculate nodes.
 6 | 
 7 | ```pycon
 8 | >>> @ComputationFactory
 9 | ... class ExampleComputation:
10 | ...     a = input_node()
11 | ...
12 | ...     @calc_node
13 | ...     def b(a):
14 | ...         return a + 1
15 | ...
16 | ...     @calc_node
17 | ...     def c(a):
18 | ...         return 2 * a
19 | ...
20 | ...     @calc_node
21 | ...     def d(b, c):
22 | ...         return b + c
23 | ```
24 | 
25 | Once the computation factory is defined, we can use it to instantiate computations, which we can then use just like regular loman Computations.
26 | 
27 | ```pycon
28 | >>> comp = ExampleComputation()
29 | >>> comp.insert('a', 3)
30 | >>> comp.compute_all()
31 | >>> comp.v.d
32 | 10
33 | >>> comp
34 | ```
35 | 
36 | ```{graphviz}
37 | digraph G {
38 |     n0 [fillcolor="#15b01a", label=a, style=filled];
39 |     n1 [fillcolor="#15b01a", label=b, style=filled];
40 |     n2 [fillcolor="#15b01a", label=c, style=filled];
41 |     n3 [fillcolor="#15b01a", label=d, style=filled];
42 |     n0 -> n1;
43 |     n0 -> n2;
44 |     n1 -> n3;
45 |     n2 -> n3;
46 | }
47 | ```
48 | 
49 | ## Using `self`
50 | 
51 | In the example above, we used functions defined exactly as we have in previous articles.
Since many IDEs expect the first parameter of a function defined in a class to be 'self', loman supports this. If the first parameter of a calc_node function defined within a ComputationFactory is 'self', then it will not refer to a 'self' node of the computation, but instead will allow access to non-calc_node methods defined within the class. For example, this class acts exactly the same as the previous example, with the `d` node using the `custom_add` method to perform addition:
52 | 
53 | ```python
54 | @ComputationFactory
55 | class ExampleComputation2:
56 |     a = input_node()
57 | 
58 |     @calc_node
59 |     def b(self, a):
60 |         return a + 1
61 | 
62 |     @calc_node
63 |     def c(self, a):
64 |         return 2 * a
65 | 
66 |     def custom_add(self, x, y):
67 |         return x + y
68 | 
69 |     @calc_node
70 |     def d(self, b, c):
71 |         return self.custom_add(b, c)
72 | ```
73 | 
74 | If this behavior for 'self' is not required, it can be disabled at the class level or for individual nodes by providing the kwarg `ignore_self=False`.
75 | 
76 | ## Providing optional arguments through `@calc_node`
77 | 
78 | Arguments provided to the `@calc_node` decorator are passed through to `add_node`, and can be used to control node creation, argument mapping, styling, etc.
79 | 
80 | ```pycon
81 | @ComputationFactory
82 | class ExampleComputation3:
83 |     a = input_node()
84 | 
85 |     @calc_node(style='dot', tags=['tag'])
86 |     def b(self, a):
87 |         return a + 1
88 | 
89 |     @calc_node(kwds={'x': 'a'}, style='dot')
90 |     def c(self, x):
91 |         return 2 * x
92 | 
93 |     @calc_node(serialize=False)
94 |     def d(self, b, c):
95 |         return b + c
96 | 
97 | >>> comp = ExampleComputation3()
98 | >>> comp.insert('a', 3)
99 | >>> comp.compute_all()
100 | >>> comp
101 | ```
102 | 
103 | ```{graphviz}
104 | digraph G {
105 |     n0 [fillcolor="#15b01a", label=a, style=filled];
106 |     n1 [fillcolor="#15b01a", label=b, peripheries=1, shape=point, style=filled, width=0.1];
107 |     n2 [fillcolor="#15b01a", label=c, peripheries=1, shape=point, style=filled, width=0.1];
108 |     n3 [fillcolor="#15b01a", label=d, style=filled];
109 |     n0 -> n1;
110 |     n0 -> n2;
111 |     n1 -> n3;
112 |     n2 -> n3;
113 | }
114 | ```
--------------------------------------------------------------------------------
/tests/test_nodekeys.py:
--------------------------------------------------------------------------------
 1 | """Tests for node key functionality and pattern matching in Loman."""
 2 | 
 3 | import pytest
 4 | 
 5 | from loman.nodekey import NodeKey, is_pattern, match_pattern, nodekey_join, to_nodekey
 6 | 
 7 | TEST_DATA = [
 8 |     ("/A", NodeKey(("A",))),
 9 |     ("A", NodeKey(("A",))),
10 |     ("/foo/bar", NodeKey(("foo", "bar"))),
11 |     ("foo/bar", NodeKey(("foo", "bar"))),
12 |     ('/foo/"bar"', NodeKey(("foo", "bar"))),
13 |     ('foo/"bar"', NodeKey(("foo", "bar"))),
14 |     ('/foo/"bar"/baz', NodeKey(("foo", "bar", "baz"))),
15 |     ('foo/"bar"/baz', NodeKey(("foo", "bar", "baz"))),
16 | ]
17 | 
18 | 
19 | @pytest.mark.parametrize("test_str,expected_path", TEST_DATA)
20 | def test_simple_nodekey_parser(test_str, expected_path):
21 |     assert to_nodekey(test_str) == expected_path
22 | 
23 | 
24 | TEST_JOIN_DATA = [
25 |     (to_nodekey("/A"), ["B"], to_nodekey("/A/B")),
26 |     (to_nodekey("/A"), ["B", "C"], to_nodekey("/A/B/C")),
27 |     (to_nodekey("/A"), [to_nodekey("B/C")], to_nodekey("/A/B/C")),
28 | ]
29 | 
30 | 
31 | @pytest.mark.parametrize("base_path,join_parts,expected_path", TEST_JOIN_DATA)
32 | def test_join_nodekeys(base_path, join_parts, expected_path):
33 |     result = base_path.join(*join_parts)
34 |     assert result == expected_path
35 | 
36 | 
37 | 
TEST_ADD_DATA = [ 38 | (to_nodekey("/A"), "B", to_nodekey("/A/B")), 39 | (to_nodekey("/A"), to_nodekey("B/C"), to_nodekey("/A/B/C")), 40 | ] 41 | 42 | 43 | @pytest.mark.parametrize("this,other,path_expected", TEST_ADD_DATA) 44 | def test_div_op(this, other, path_expected): 45 | assert this / other == path_expected 46 | 47 | 48 | TEST_JOIN_DATA_2 = [ 49 | (["A", "B"], "A/B"), 50 | (["A", "B", "C"], "A/B/C"), 51 | (["A", "B/C"], "A/B/C"), 52 | (["/A", "B"], "/A/B"), 53 | (["/A", "B", "C"], "/A/B/C"), 54 | (["/A", "B/C"], "/A/B/C"), 55 | (["A", None, "B"], "A/B"), 56 | ] 57 | 58 | 59 | @pytest.mark.parametrize("paths,expected_path", TEST_JOIN_DATA_2) 60 | def test_join_nodekeys_2(paths, expected_path): 61 | result = nodekey_join(*paths) 62 | assert result == to_nodekey(expected_path) 63 | 64 | 65 | TEST_COMMON_PARENT_DATA = [ 66 | ("A", "B", ""), 67 | ("/A", "/B", "/"), 68 | ("/A/X", "/A/Y", "/A"), 69 | ] 70 | 71 | 72 | @pytest.mark.parametrize("path1,path2,expected_path", TEST_COMMON_PARENT_DATA) 73 | def test_common_parent(path1, path2, expected_path): 74 | result = NodeKey.common_parent(path1, path2) 75 | assert result == to_nodekey(expected_path) 76 | 77 | 78 | TEST_PATTERN_MATCH_DATA = [ 79 | (("a", "*", "c"), ("a", "b", "c"), True), 80 | (("a", "**", "d"), ("a", "b", "c", "d"), True), 81 | (("a", "**", "d"), ("a", "d"), True), 82 | (("**",), ("a", "b", "c"), True), 83 | (("a", "**"), ("a",), True), 84 | (("a", "*", "**"), ("a", "x", "y", "z"), True), 85 | (("a", "*", "c"), ("a", "b", "d"), False), 86 | (("a", "**", "d"), ("a", "b", "c"), False), 87 | (("a", "*"), ("a", "b", "c"), False), 88 | ] 89 | 90 | 91 | @pytest.mark.parametrize("pattern,target,expected", TEST_PATTERN_MATCH_DATA) 92 | def test_pattern_matching(pattern, target, expected): 93 | pattern_key = NodeKey(pattern) 94 | target_key = NodeKey(target) 95 | assert match_pattern(pattern_key, target_key) == expected 96 | 97 | 98 | def test_is_pattern(): 99 | # Test single asterisk patterns 100 | assert is_pattern(NodeKey(("*",))) 101 | assert is_pattern(NodeKey(("abc", "*"))) 102 | assert is_pattern(NodeKey(("*", "def"))) 103 | 104 | # Test double asterisk patterns 105 | assert is_pattern(NodeKey(("**",))) 106 | assert is_pattern(NodeKey(("abc", "**"))) 107 | assert is_pattern(NodeKey(("**", "def"))) 108 | 109 | # Test non-patterns 110 | assert not is_pattern(NodeKey(())) 111 | assert not is_pattern(NodeKey(("abc",))) 112 | assert not is_pattern(NodeKey(("abc", "def"))) 113 | 114 | # Test complex patterns 115 | assert is_pattern(NodeKey(("abc", "*", "**", "def"))) 116 | assert is_pattern(NodeKey(("**", "*", "def"))) 117 | assert is_pattern(NodeKey(("abc", "**", "*"))) 118 | -------------------------------------------------------------------------------- /ruff.toml: -------------------------------------------------------------------------------- 1 | # Maximum line length for the entire project 2 | line-length = 120 3 | # Target Python version 4 | target-version = "py311" 5 | 6 | # Exclude directories with Jinja template variables in their names 7 | exclude = ["**/[{][{]*/", "**/*[}][}]*/", "examples/**/*.ipynb"] 8 | 9 | [lint] 10 | # Available rule sets in Ruff: 11 | # A: flake8-builtins - Check for python builtins being used as variables or parameters 12 | # B: flake8-bugbear - Find likely bugs and design problems 13 | # C4: flake8-comprehensions - Helps write better list/set/dict comprehensions 14 | # D: pydocstyle - Check docstring style 15 | # E: pycodestyle errors - PEP 8 style guide 16 | # ERA: eradicate - Find commented out 
code 17 | # F: pyflakes - Detect logical errors 18 | # I: isort - Sort imports 19 | # N: pep8-naming - Check PEP 8 naming conventions 20 | # PT: flake8-pytest-style - Check pytest best practices 21 | # RUF: Ruff-specific rules 22 | # S: flake8-bandit - Find security issues 23 | # SIM: flake8-simplify - Simplify code 24 | # T10: flake8-debugger - Check for debugger imports and calls 25 | # UP: pyupgrade - Upgrade syntax for newer Python 26 | # W: pycodestyle warnings - PEP 8 style guide warnings 27 | # ANN: flake8-annotations - Type annotation checks 28 | # ARG: flake8-unused-arguments - Unused arguments 29 | # BLE: flake8-blind-except - Check for blind except statements 30 | # COM: flake8-commas - Trailing comma enforcement 31 | # DTZ: flake8-datetimez - Ensure timezone-aware datetime objects 32 | # EM: flake8-errmsg - Check error message strings 33 | # FBT: flake8-boolean-trap - Boolean argument checks 34 | # ICN: flake8-import-conventions - Import convention enforcement 35 | # ISC: flake8-implicit-str-concat - Implicit string concatenation 36 | # NPY: NumPy-specific rules 37 | # PD: pandas-specific rules 38 | # PGH: pygrep-hooks - Grep-based checks 39 | # PIE: flake8-pie - Miscellaneous rules 40 | # PL: Pylint rules 41 | # Q: flake8-quotes - Quotation style enforcement 42 | # RSE: flake8-raise - Raise statement checks 43 | # RET: flake8-return - Return statement checks 44 | # SLF: flake8-self - Check for self references 45 | # TCH: flake8-type-checking - Type checking imports 46 | # TID: flake8-tidy-imports - Import tidying 47 | # TRY: flake8-try-except-raise - Try/except/raise checks 48 | # YTT: flake8-2020 - Python 2020+ compatibility 49 | 50 | # Selected rule sets to enforce: 51 | # D: pydocstyle - Check docstring style 52 | # E: pycodestyle errors - PEP 8 style guide 53 | # F: pyflakes - Detect logical errors 54 | # I: isort - Sort imports 55 | # N: pep8-naming - Check PEP 8 naming conventions 56 | # W: pycodestyle warnings - PEP 8 style guide warnings 57 | # UP: pyupgrade - Upgrade syntax for newer Python 58 | select = ["D", "E", "F", "I", "N", "W", "UP"] 59 | 60 | [lint.pydocstyle] 61 | convention = "google" 62 | 63 | ########################################################### 64 | # Formatting configuration 65 | ########################################################### 66 | [format] 67 | # Use double quotes for strings 68 | quote-style = "double" 69 | # Use spaces for indentation 70 | indent-style = "space" 71 | # Automatically detect and use the appropriate line ending 72 | line-ending = "auto" 73 | 74 | ########################################################### 75 | # File-specific exceptions 76 | ########################################################### 77 | [lint.per-file-ignores] 78 | # Allow assert statements and ignore docstring requirements in tests 79 | "tests/**/*.py" = [ 80 | "S101", 81 | "D100", 82 | "D101", 83 | "D102", 84 | "D103", 85 | "D104", 86 | "D105", 87 | "D107", 88 | "F403", 89 | "F405", 90 | "E741" 91 | ] 92 | # Ignore self parameter naming for ComputationFactory methods 93 | "tests/test_class_style_definition.py" = [ 94 | "N805", 95 | ] 96 | "tests/standard_test_computations.py" = [ 97 | "N805", 98 | ] 99 | # Allow non-lowercase variable names and assert statements in marimo files 100 | "book/marimo/*.py" = [ 101 | "N803", 102 | "S101", 103 | ] 104 | # Ignore missing module docstring in docs config 105 | "docs/conf.py" = [ 106 | "D100", 107 | ] 108 | -------------------------------------------------------------------------------- 
/tests/test_rhiza/test_docstrings.py: -------------------------------------------------------------------------------- 1 | """Tests for module docstrings using doctest. 2 | 3 | This file and its associated tests flow down via a SYNC action from the jebel-quant/rhiza repository 4 | (https://github.com/jebel-quant/rhiza). 5 | 6 | Automatically discovers all packages under `src/` and runs doctests for each. 7 | """ 8 | 9 | from __future__ import annotations 10 | 11 | import doctest 12 | import importlib 13 | import warnings 14 | from pathlib import Path 15 | 16 | import pytest 17 | 18 | 19 | def _iter_modules_from_path(logger, package_path: Path): 20 | """Recursively find all Python modules in a directory.""" 21 | for path in package_path.rglob("*.py"): 22 | if path.name == "__init__.py": 23 | module_path = path.parent.relative_to(package_path.parent) 24 | else: 25 | module_path = path.relative_to(package_path.parent).with_suffix("") 26 | 27 | # Convert path to module name in an OS-independent way 28 | module_name = ".".join(module_path.parts) 29 | 30 | try: 31 | yield importlib.import_module(module_name) 32 | except ImportError as e: 33 | warnings.warn(f"Could not import {module_name}: {e}", stacklevel=2) 34 | logger.warning("Could not import module %s: %s", module_name, e) 35 | continue 36 | 37 | 38 | def test_doctests(logger, root, monkeypatch: pytest.MonkeyPatch): 39 | """Run doctests for each package directory under src/.""" 40 | src_path = root / "src" 41 | 42 | logger.info("Starting doctest discovery in: %s", src_path) 43 | if not src_path.exists(): 44 | logger.info("Source directory not found: %s — skipping doctests", src_path) 45 | pytest.skip(f"Source directory not found: {src_path}") 46 | 47 | # Add src to sys.path with automatic cleanup 48 | monkeypatch.syspath_prepend(str(src_path)) 49 | logger.debug("Prepended to sys.path: %s", src_path) 50 | 51 | total_tests = 0 52 | total_failures = 0 53 | failed_modules = [] 54 | 55 | # Find all packages in src 56 | for package_dir in src_path.iterdir(): 57 | if package_dir.is_dir() and (package_dir / "__init__.py").exists(): 58 | # Import the package 59 | package_name = package_dir.name 60 | logger.info("Discovered package: %s", package_name) 61 | try: 62 | modules = list(_iter_modules_from_path(logger, package_dir)) 63 | logger.debug("%d module(s) found in package %s", len(modules), package_name) 64 | 65 | for module in modules: 66 | logger.debug("Running doctests for module: %s", module.__name__) 67 | results = doctest.testmod( 68 | module, 69 | verbose=False, 70 | optionflags=(doctest.ELLIPSIS | doctest.NORMALIZE_WHITESPACE), 71 | ) 72 | total_tests += results.attempted 73 | 74 | if results.failed: 75 | logger.warning( 76 | "Doctests failed for %s: %d/%d failed", 77 | module.__name__, 78 | results.failed, 79 | results.attempted, 80 | ) 81 | total_failures += results.failed 82 | failed_modules.append((module.__name__, results.failed, results.attempted)) 83 | else: 84 | logger.debug("Doctests passed for %s (%d test(s))", module.__name__, results.attempted) 85 | 86 | except ImportError as e: 87 | warnings.warn(f"Could not import package {package_name}: {e}", stacklevel=2) 88 | logger.warning("Could not import package %s: %s", package_name, e) 89 | continue 90 | 91 | if failed_modules: 92 | formatted = "\n".join(f" {name}: {failed}/{attempted} failed" for name, failed, attempted in failed_modules) 93 | msg = ( 94 | f"Doctest summary: {total_tests} tests across {len(failed_modules)} module(s)\n" 95 | f"Failures: {total_failures}\n" 96 | 
f"Failed modules:\n{formatted}" 97 | ) 98 | logger.error("%s", msg) 99 | assert total_failures == 0, msg 100 | else: 101 | logger.info("Doctest summary: %d tests, 0 failures", total_tests) 102 | 103 | if total_tests == 0: 104 | logger.info("No doctests were found in any module — skipping") 105 | pytest.skip("No doctests were found in any module") 106 | -------------------------------------------------------------------------------- /docs/user/strategies.md: -------------------------------------------------------------------------------- 1 | # Strategies for using Loman in the Real World 2 | 3 | ## Fine-grained or Coarse-grained nodes 4 | 5 | When using Loman, we have a choice of whether we make each expression in our program a node (very fine-grained), or have one node which executes all the code in our calculation (very coarse-grained) or somewhere in between. Loman is relatively efficient at executing code, but it can never be as efficient as the Python interpreter sequentially running lines of Python. Accordingly, we recommend that you should create a node for each input, result or intermediate value that you might care to inspect, alter, or calculate in a different way. 6 | 7 | On the other hand, there is no cost to nodes that are not executed [^f1], and execution is effectively lazy if you specify which nodes you wish to calculate. With this in mind, it can make sense to create large numbers of nodes that import data for example, in the anticipation that it will be useful to have that data to hand at some point, but there is no cost if it is not needed. 8 | 9 | [^f1]: This is not quite true. The method for working out computable nodes has not been optimized, so in fact there is linear cost to adding unused nodes, but this limitation is unnecessary and will be removed in due course. 10 | 11 | ## Converting existing codebases to use Loman 12 | 13 | We typically find that production code tends to have a "main" function which loads data from databases and other systems, coordinates running a few calculations, and then loads the results back into other systems. We have had good experiences transferring the data downloading and calculation parts to Loman. 14 | 15 | Typically such a "main" function will have small groups of lines responsible for grabbing particular pieces of input data, or for calling out to specific calculation routine. Each of these groups can be converted to a node in a Loman computation easily. Often, Loman's `kwds` input to `add_node` is useful to martial data into existing functions. 16 | 17 | It is often helpful to put the creation of a Loman computation object with uninitialized values into a separate function. Then it is easy to experiment with the computation in an interactive environment such as Jupyter. 18 | 19 | The final result is that the "main" function will instantiate a computation object, give it objects for database access, and other inputs, such as run date. Exporting calculated results is not within Loman's scope, so the "main" function will coordinate writing results from the computation object to the same systems as before. It is also useful if the "main" function serializes the computation for later inspection if necessary. 20 | 21 | Already, having a concrete visualization of the computation's structure, as well as the ability to access intermediates of the computation through the serialized copy will be great steps ahead of the existing system. 
Experimenting with adding additional parts to the computation will also be easier, as blank or serialized computations can be used as the basis for this work in an interactive environment.
22 | 
23 | Finally, it is not necessary to "big bang" existing systems over to a Loman-based solution. Instead, small discrete parts of an existing implementation can be converted, and additional parts gradually migrated into the Loman computation as desired.
24 | 
25 | ## Accessing databases and other external systems
26 | 
27 | To access databases, we recommend [SQLAlchemy](http://www.sqlalchemy.org/) Core. For each database, we recommend creating two nodes in a computation, one for the engine, and another for the metadata object, and these nodes should not be serialized. Then every data access can use the nodes **engine** and **metadata** as necessary. This is not dissimilar to dependency injection:
28 | 
29 | ```pycon
30 | >>> import sqlalchemy as sa
31 | >>> comp = Computation()
32 | >>> comp.add_node('engine', sa.create_engine(...), serialize=False)
33 | >>> comp.add_node('metadata', lambda engine: sa.MetaData(engine), serialize=False)
34 | >>> def get_some_data(engine, ...):
35 | ...     [...]
36 | ...
37 | >>> comp.add_node('some_data', get_some_data)
38 | ```
39 | 
40 | Other data sources, such as scraped websites or vendor systems, can be accessed similarly. For example, here is code to create a logged-in browser under the control of Selenium to scrape data from a website:
41 | 
42 | ```pycon
43 | >>> from selenium import webdriver
44 | >>> comp = Computation()
45 | >>> def get_logged_in_browser():
46 | ...     browser = webdriver.Chrome()
47 | ...     browser.get('http://somewebsite.com')
48 | ...     elem = browser.find_element_by_id('userid')
49 | ...     elem.send_keys('user@id.com')
50 | ...     elem = browser.find_element_by_id('password')
51 | ...     elem.send_keys('secret')
52 | ...     elem = browser.find_element_by_name('_submit')
53 | ...     elem.click()
54 | ...     return browser
55 | ...
56 | >>> comp.add_node('browser', get_logged_in_browser)
57 | ```
--------------------------------------------------------------------------------
/src/loman/util.py:
--------------------------------------------------------------------------------
 1 | """Utility functions and classes for loman computation graphs."""
 2 | 
 3 | import itertools
 4 | import types
 5 | 
 6 | import numpy as np
 7 | import pandas as pd
 8 | 
 9 | 
10 | def apply1(f, xs, *args, **kwds):
11 |     """Apply function f to xs, handling generators, lists, and single values."""
12 |     if isinstance(xs, types.GeneratorType):
13 |         return (f(x, *args, **kwds) for x in xs)
14 |     if isinstance(xs, list):
15 |         return [f(x, *args, **kwds) for x in xs]
16 |     return f(xs, *args, **kwds)
17 | 
18 | 
19 | def as_iterable(xs):
20 |     """Convert input to iterable form if not already iterable."""
21 |     if isinstance(xs, (types.GeneratorType, list, set)):
22 |         return xs
23 |     return (xs,)
24 | 
25 | 
26 | def apply_n(f, *xs, **kwds):
27 |     """Apply function f to the cartesian product of iterables xs."""
28 |     for p in itertools.product(*[as_iterable(x) for x in xs]):
29 |         f(*p, **kwds)
30 | 
31 | 
32 | class AttributeView:
33 |     """Provides attribute-style access to dynamic collections."""
34 | 
35 |     def __init__(self, get_attribute_list, get_attribute, get_item=None):
36 |         """Initialize with functions to get attribute list and individual attributes.
37 | 38 | Args: 39 | get_attribute_list: Function that returns list of available attributes 40 | get_attribute: Function that takes an attribute name and returns its value 41 | get_item: Optional function for item access, defaults to get_attribute 42 | """ 43 | self.get_attribute_list = get_attribute_list 44 | self.get_attribute = get_attribute 45 | self.get_item = get_item 46 | if self.get_item is None: 47 | self.get_item = get_attribute 48 | 49 | def __dir__(self): 50 | """Return list of available attributes.""" 51 | return self.get_attribute_list() 52 | 53 | def __getattr__(self, attr): 54 | """Get attribute by name, raising AttributeError if not found.""" 55 | try: 56 | return self.get_attribute(attr) 57 | except KeyError: 58 | raise AttributeError(attr) 59 | 60 | def __getitem__(self, key): 61 | """Get item by key.""" 62 | return self.get_item(key) 63 | 64 | def __getstate__(self): 65 | """Prepare object for serialization.""" 66 | return { 67 | "get_attribute_list": self.get_attribute_list, 68 | "get_attribute": self.get_attribute, 69 | "get_item": self.get_item, 70 | } 71 | 72 | def __setstate__(self, state): 73 | """Restore object from serialized state.""" 74 | self.get_attribute_list = state["get_attribute_list"] 75 | self.get_attribute = state["get_attribute"] 76 | self.get_item = state["get_item"] 77 | if self.get_item is None: 78 | self.get_item = self.get_attribute 79 | 80 | @staticmethod 81 | def from_dict(d, use_apply1=True): 82 | """Create an AttributeView from a dictionary.""" 83 | if use_apply1: 84 | 85 | def get_attribute(xs): 86 | return apply1(d.get, xs) 87 | else: 88 | get_attribute = d.get 89 | return AttributeView(d.keys, get_attribute) 90 | 91 | 92 | pandas_types = (pd.Series, pd.DataFrame) 93 | 94 | 95 | def value_eq(a, b): 96 | """Compare two values for equality, handling pandas and numpy objects safely. 
97 | 
 98 |     - Uses .equals for pandas Series/DataFrame
 99 |     - For numpy arrays, returns a single boolean using np.array_equal (treats NaNs as equal)
100 |     - Falls back to == and coerces to bool when possible
101 |     """
102 |     if a is b:
103 |         return True
104 | 
105 |     # pandas objects: use robust equality
106 |     if isinstance(a, pandas_types):
107 |         return a.equals(b)
108 |     if isinstance(b, pandas_types):
109 |         return b.equals(a)
110 | 
111 |     # numpy arrays: prefer np.array_equal, with a manual fallback for dtypes
112 |     # where equal_nan is not supported
113 |     if isinstance(a, np.ndarray) or isinstance(b, np.ndarray):
114 |         try:
115 |             return np.array_equal(a, b, equal_nan=True)
116 |         except TypeError:
117 |             try:
118 |                 a_arr = np.asarray(a)
119 |                 b_arr = np.asarray(b)
120 |                 if a_arr.shape != b_arr.shape:
121 |                     return False
122 |                 eq = a_arr == b_arr
123 |                 # align NaN handling
124 |                 with np.errstate(invalid="ignore"):
125 |                     both_nan = np.isnan(a_arr) & np.isnan(b_arr)
126 |                 return bool(np.all(eq | both_nan))
127 |             except Exception:
128 |                 return False
129 |         except Exception:
130 |             return False
131 | 
132 |     try:
133 |         # Default comparison; ensure a single boolean
134 |         result = a == b
135 |         # If result is an array-like truth value, reduce safely
136 |         if isinstance(result, (np.ndarray,)):
137 |             return bool(np.all(result))
138 |         return bool(result)
139 |     except Exception:
140 |         return False
141 | 
--------------------------------------------------------------------------------
/tests/test_dill_serialization.py:
--------------------------------------------------------------------------------
 1 | """Tests for dill serialization functionality in Loman computations."""
 2 | 
 3 | import io
 4 | 
 5 | import pytest
 6 | 
 7 | from loman import Computation, ComputationFactory, States, calc_node, input_node
 8 | from loman.computeengine import NodeData
 9 | 
10 | 
11 | def test_serialization():
12 |     def b(x):
13 |         return x + 1
14 | 
15 |     def c(x):
16 |         return 2 * x
17 | 
18 |     def d(x, y):
19 |         return x + y
20 | 
21 |     comp = Computation()
22 |     comp.add_node("a")
23 |     comp.add_node("b", b, kwds={"x": "a"})
24 |     comp.add_node("c", c, kwds={"x": "a"})
25 |     comp.add_node("d", d, kwds={"x": "b", "y": "c"})
26 | 
27 |     comp.insert("a", 1)
28 |     comp.compute_all()
29 |     f = io.BytesIO()
30 |     comp.write_dill(f)
31 | 
32 |     f.seek(0)
33 |     foo = Computation.read_dill(f)
34 | 
35 |     assert set(comp.dag.nodes) == set(foo.dag.nodes)
36 |     for n in comp.dag.nodes():
37 |         assert comp.dag.nodes[n].get("state", None) == foo.dag.nodes[n].get("state", None)
38 |         assert comp.dag.nodes[n].get("value", None) == foo.dag.nodes[n].get("value", None)
39 | 
40 | 
41 | def test_serialization_skip_flag():
42 |     comp = Computation()
43 |     comp.add_node("a")
44 |     comp.add_node("b", lambda a: a + 1, serialize=False)
45 |     comp.add_node("c", lambda b: b + 1)
46 | 
47 |     comp.insert("a", 1)
48 |     comp.compute_all()
49 |     f = io.BytesIO()
50 |     comp.write_dill(f)
51 | 
52 |     assert comp.state("a") == States.UPTODATE
53 |     assert comp.state("b") == States.UPTODATE
54 |     assert comp.state("c") == States.UPTODATE
55 |     assert comp.value("a") == 1
56 |     assert comp.value("b") == 2
57 |     assert comp.value("c") == 3
58 | 
59 |     f.seek(0)
60 |     comp2 = Computation.read_dill(f)
61 |     assert comp2.state("a") == States.UPTODATE
62 |     assert comp2.state("b") == States.UNINITIALIZED
63 |     assert comp2.state("c") == States.UPTODATE
64 |     assert comp2.value("a") == 1
65 |     assert comp2.value("c") == 3
66 | 
67 | 
68 | def test_no_serialize_flag():
69 |     comp = Computation()
70 |     comp.add_node("a", serialize=False)
71 | 
comp.add_node("b", lambda a: a + 1) 72 | comp.insert("a", 1) 73 | comp.compute_all() 74 | 75 | f = io.BytesIO() 76 | comp.write_dill(f) 77 | f.seek(0) 78 | comp2 = Computation.read_dill(f) 79 | assert comp2.state("a") == States.UNINITIALIZED 80 | assert comp2["b"] == NodeData(States.UPTODATE, 2) 81 | 82 | 83 | def test_serialize_nested_loman(): 84 | @ComputationFactory 85 | class CompInner: 86 | a = input_node(value=3) 87 | 88 | @calc_node 89 | def b(self, a): 90 | return a + 1 91 | 92 | @ComputationFactory 93 | class CompOuter: 94 | COMP = input_node() 95 | 96 | @calc_node 97 | def out(self, comp): 98 | return comp.x.b + 10 99 | 100 | inner = CompInner() 101 | inner.compute_all() 102 | 103 | outer = CompOuter() 104 | outer.insert("COMP", inner) 105 | outer.compute_all() 106 | 107 | f = io.BytesIO() 108 | outer.write_dill(f) 109 | f.seek(0) 110 | outer2 = Computation.read_dill(f) 111 | 112 | assert outer2.v.COMP.v.b == outer.v.COMP.v.b 113 | assert outer2.v.out == outer.v.out 114 | 115 | 116 | def test_roundtrip_old_dill(): 117 | def b(x): 118 | return x + 1 119 | 120 | def c(x): 121 | return 2 * x 122 | 123 | def d(x, y): 124 | return x + y 125 | 126 | comp = Computation() 127 | comp.add_node("a") 128 | comp.add_node("b", b, kwds={"x": "a"}) 129 | comp.add_node("c", c, kwds={"x": "a"}) 130 | comp.add_node("d", d, kwds={"x": "b", "y": "c"}) 131 | 132 | comp.insert("a", 1) 133 | comp.compute_all() 134 | f = io.BytesIO() 135 | comp.write_dill(f) 136 | 137 | f.seek(0) 138 | foo = Computation.read_dill(f) 139 | 140 | assert set(comp.dag.nodes) == set(foo.dag.nodes) 141 | for n in comp.dag.nodes(): 142 | assert comp.dag.nodes[n].get("state", None) == foo.dag.nodes[n].get("state", None) 143 | assert comp.dag.nodes[n].get("value", None) == foo.dag.nodes[n].get("value", None) 144 | 145 | 146 | class UnserializableObject: 147 | def __init__(self): 148 | self.data = "This is some data" 149 | 150 | def __getstate__(self): 151 | raise TypeError(f"{self.__class__.__name__} is not serializable") 152 | 153 | 154 | def test_serialize_nested_loman_with_unserializable_nodes(): 155 | @ComputationFactory 156 | class CompInner: 157 | a = input_node(value=3) 158 | 159 | @calc_node 160 | def unserializable(self, a): 161 | return UnserializableObject() 162 | 163 | @ComputationFactory 164 | class CompOuter: 165 | COMP = input_node() 166 | 167 | @calc_node 168 | def out(self, comp): 169 | return comp.x.a + 10 170 | 171 | inner = CompInner() 172 | inner.compute_all() 173 | print(inner.v.unserializable) 174 | outer = CompOuter() 175 | outer.insert("COMP", inner) 176 | outer.compute_all() 177 | 178 | with pytest.raises(TypeError): 179 | f = io.BytesIO() 180 | outer.write_dill(f) 181 | 182 | outer.v.COMP.clear_tag("unserializable", "__serialize__") 183 | 184 | f = io.BytesIO() 185 | outer.write_dill(f) 186 | 187 | f.seek(0) 188 | outer2 = Computation.read_dill(f) 189 | 190 | assert outer2.v.COMP.v.a == outer.v.COMP.v.a 191 | assert outer2.v.out == outer.v.out 192 | -------------------------------------------------------------------------------- /tests/test_rhiza/test_release_script.py: -------------------------------------------------------------------------------- 1 | """Tests for the release.sh script using a sandboxed git environment. 2 | 3 | This file and its associated tests flow down via a SYNC action from the jebel-quant/rhiza repository 4 | (https://github.com/jebel-quant/rhiza). 5 | 6 | The script exposes the `release` command (creates and pushes tags). 
7 | Tests call the script from a temporary clone and use a small mock `uv` 8 | to avoid external dependencies. 9 | """ 10 | 11 | import subprocess 12 | 13 | 14 | def test_release_creates_tag(git_repo): 15 | """Release creates a tag.""" 16 | script = git_repo / ".github" / "rhiza" / "scripts" / "release.sh" 17 | 18 | # Run release 19 | # 1. Prompts to create tag -> y 20 | # 2. Prompts to push tag -> y 21 | result = subprocess.run([str(script)], cwd=git_repo, input="y\ny\n", capture_output=True, text=True) 22 | assert result.returncode == 0 23 | assert "Tag 'v0.1.0' created locally" in result.stdout 24 | 25 | # Verify the tag exists 26 | verify_result = subprocess.run( 27 | ["git", "tag", "-l", "v0.1.0"], 28 | cwd=git_repo, 29 | capture_output=True, 30 | text=True, 31 | ) 32 | assert "v0.1.0" in verify_result.stdout 33 | 34 | 35 | def test_release_fails_if_local_tag_exists(git_repo): 36 | """If the target tag already exists locally, release should warn and abort if user says no.""" 37 | script = git_repo / ".github" / "rhiza" / "scripts" / "release.sh" 38 | 39 | # Create a local tag that matches current version 40 | subprocess.run(["git", "tag", "v0.1.0"], cwd=git_repo, check=True) 41 | 42 | # Input 'n' to abort 43 | result = subprocess.run([str(script)], cwd=git_repo, input="n\n", capture_output=True, text=True) 44 | 45 | assert result.returncode == 0 46 | assert "Tag 'v0.1.0' already exists locally" in result.stdout 47 | assert "Aborted by user" in result.stdout 48 | 49 | 50 | def test_release_fails_if_remote_tag_exists(git_repo): 51 | """Release fails if tag exists on remote.""" 52 | script = git_repo / ".github" / "rhiza" / "scripts" / "release.sh" 53 | 54 | # Create tag locally and push to remote 55 | subprocess.run(["git", "tag", "v0.1.0"], cwd=git_repo, check=True) 56 | subprocess.run(["git", "push", "origin", "v0.1.0"], cwd=git_repo, check=True) 57 | 58 | result = subprocess.run([str(script)], cwd=git_repo, input="y\n", capture_output=True, text=True) 59 | 60 | assert result.returncode == 1 61 | assert "already exists on remote" in result.stdout 62 | 63 | 64 | def test_release_uncommitted_changes_failure(git_repo): 65 | """Release fails if there are uncommitted changes (even pyproject.toml).""" 66 | script = git_repo / ".github" / "rhiza" / "scripts" / "release.sh" 67 | 68 | # Modify pyproject.toml (which is allowed in bump but NOT in release) 69 | with open(git_repo / "pyproject.toml", "a") as f: 70 | f.write("\n# comment") 71 | 72 | result = subprocess.run([str(script)], cwd=git_repo, capture_output=True, text=True) 73 | 74 | assert result.returncode == 1 75 | assert "You have uncommitted changes" in result.stdout 76 | 77 | 78 | def test_release_pushes_if_ahead_of_remote(git_repo): 79 | """Release prompts to push if local branch is ahead of remote.""" 80 | script = git_repo / ".github" / "rhiza" / "scripts" / "release.sh" 81 | 82 | # Create a commit locally that isn't on remote 83 | tracked_file = git_repo / "file.txt" 84 | tracked_file.touch() 85 | subprocess.run(["git", "add", "file.txt"], cwd=git_repo, check=True) 86 | subprocess.run(["git", "commit", "-m", "Local commit"], cwd=git_repo, check=True) 87 | 88 | # Run release 89 | # 1. Prompts to push -> y 90 | # 2. Prompts to create tag -> y 91 | # 3. 
Prompts to push tag -> y 92 | result = subprocess.run([str(script)], cwd=git_repo, input="y\ny\ny\n", capture_output=True, text=True) 93 | 94 | assert result.returncode == 0 95 | assert "Your branch is ahead" in result.stdout 96 | assert "Unpushed commits:" in result.stdout 97 | assert "Local commit" in result.stdout 98 | assert "Push changes to remote before releasing?" in result.stdout 99 | 100 | 101 | def test_release_fails_if_behind_remote(git_repo): 102 | """Release fails if local branch is behind remote.""" 103 | script = git_repo / ".github" / "rhiza" / "scripts" / "release.sh" 104 | 105 | # Create a commit on remote that isn't local 106 | # We need to clone another repo to push to remote 107 | other_clone = git_repo.parent / "other_clone" 108 | subprocess.run(["git", "clone", str(git_repo.parent / "remote.git"), str(other_clone)], check=True) 109 | 110 | # Configure git user for other_clone (needed in CI) 111 | subprocess.run(["git", "config", "user.email", "test@example.com"], cwd=other_clone, check=True) 112 | subprocess.run(["git", "config", "user.name", "Test User"], cwd=other_clone, check=True) 113 | 114 | # Commit and push from other clone 115 | with open(other_clone / "other.txt", "w") as f: 116 | f.write("content") 117 | subprocess.run(["git", "add", "other.txt"], cwd=other_clone, check=True) 118 | subprocess.run(["git", "commit", "-m", "Remote commit"], cwd=other_clone, check=True) 119 | subprocess.run(["git", "push"], cwd=other_clone, check=True) 120 | 121 | # Run release (it will fetch and see it's behind) 122 | result = subprocess.run([str(script)], cwd=git_repo, capture_output=True, text=True) 123 | 124 | assert result.returncode == 1 125 | assert "Your branch is behind" in result.stdout 126 | -------------------------------------------------------------------------------- /docs/conf.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # 3 | # loman documentation build configuration file, created by 4 | # sphinx-quickstart on Sat Mar 25 16:18:54 2017. 5 | # 6 | # This file is execfile()d with the current directory set to its 7 | # containing dir. 8 | # 9 | # Note that not all possible configuration values are present in this 10 | # autogenerated file. 11 | # 12 | # All configuration values have a default; values that are commented out 13 | # serve to show the default. 14 | 15 | # If extensions (or modules to document with autodoc) are in another directory, 16 | # add these directories to sys.path here. If the directory is relative to the 17 | # documentation root, use os.path.abspath to make it absolute, like shown here. 18 | # 19 | import os 20 | import sys 21 | 22 | sys.path.insert(0, os.path.abspath(r"..")) 23 | 24 | 25 | # -- General configuration ------------------------------------------------ 26 | 27 | # If your documentation needs a minimal Sphinx version, state it here. 28 | # 29 | # needs_sphinx = '1.0' 30 | 31 | # Add any Sphinx extension module names here, as strings. They can be 32 | # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom 33 | # ones. 34 | extensions = [ 35 | "sphinx.ext.autodoc", 36 | "sphinx.ext.doctest", 37 | "sphinx.ext.intersphinx", 38 | "sphinx.ext.mathjax", 39 | "sphinx.ext.graphviz", 40 | "myst_parser", 41 | ] 42 | 43 | myst_enable_extensions = [ 44 | "colon_fence", 45 | "html_admonition", 46 | ] 47 | 48 | # Add any paths that contain templates here, relative to this directory. 
49 | templates_path = ["_templates"]
50 | 
51 | source_suffix = {
52 |     ".rst": "restructuredtext",
53 |     ".md": "markdown",
54 | }
55 | 
56 | # The master toctree document.
57 | master_doc = "index"
58 | 
59 | # General information about the project.
60 | project = "loman"
61 | copyright = "2017-2025, Ed Parcell"
62 | author = "Ed Parcell"
63 | 
64 | # The version info for the project you're documenting, acts as replacement for
65 | # |version| and |release|, also used in various other places throughout the
66 | # built documents.
67 | #
68 | # The short X.Y version.
69 | version = "0.5.3"
70 | # The full version, including alpha/beta/rc tags.
71 | release = "0.5.3"
72 | 
73 | # The language for content autogenerated by Sphinx. Refer to documentation
74 | # for a list of supported languages.
75 | #
76 | # This is also used if you do content translation via gettext catalogs.
77 | # Usually you set "language" from the command line for these cases.
78 | language = "en"
79 | 
80 | # List of patterns, relative to source directory, that match files and
81 | # directories to ignore when looking for source files.
82 | # These patterns also affect html_static_path and html_extra_path.
83 | exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]
84 | 
85 | # The name of the Pygments (syntax highlighting) style to use.
86 | pygments_style = "sphinx"
87 | 
88 | # If true, `todo` and `todoList` produce output, else they produce nothing.
89 | todo_include_todos = False
90 | 
91 | 
92 | # -- Options for HTML output ----------------------------------------------
93 | 
94 | # The theme to use for HTML and HTML Help pages. See the documentation for
95 | # a list of builtin themes.
96 | #
97 | 
98 | html_theme = "sphinx_rtd_theme"
99 | 
100 | # Theme options are theme-specific and customize the look and feel of a theme
101 | # further. For a list of options available for each theme, see the
102 | # documentation.
103 | #
104 | # html_theme_options = {}
105 | 
106 | # Add any paths that contain custom static files (such as style sheets) here,
107 | # relative to this directory. They are copied after the builtin static files,
108 | # so a file named "default.css" will overwrite the builtin "default.css".
109 | html_static_path = ["_static"]
110 | 
111 | 
112 | # -- Options for HTMLHelp output ------------------------------------------
113 | 
114 | # Output file base name for HTML help builder.
115 | htmlhelp_basename = "lomandoc"
116 | 
117 | 
118 | # -- Options for LaTeX output ---------------------------------------------
119 | 
120 | latex_elements = {
121 |     # The paper size ('letterpaper' or 'a4paper').
122 |     #
123 |     # 'papersize': 'letterpaper',
124 |     # The font size ('10pt', '11pt' or '12pt').
125 |     #
126 |     # 'pointsize': '10pt',
127 |     # Additional stuff for the LaTeX preamble.
128 |     #
129 |     # 'preamble': '',
130 |     # Latex figure (float) alignment
131 |     #
132 |     # 'figure_align': 'htbp',
133 | }
134 | 
135 | # Grouping the document tree into LaTeX files. List of tuples
136 | # (source start file, target name, title,
137 | # author, documentclass [howto, manual, or own class]).
138 | latex_documents = [
139 |     (master_doc, "loman.tex", "loman Documentation", "Ed Parcell", "manual"),
140 | ]
141 | 
142 | 
143 | # -- Options for manual page output ---------------------------------------
144 | 
145 | # One entry per manual page. List of tuples
146 | # (source start file, name, description, authors, manual section).
147 | man_pages = [(master_doc, "loman", "loman Documentation", [author], 1)]
148 | 
149 | 
150 | # -- Options for Texinfo output -------------------------------------------
151 | 
152 | # Grouping the document tree into Texinfo files. List of tuples
153 | # (source start file, target name, title, author,
154 | # dir menu entry, description, category)
155 | texinfo_documents = [
156 |     (master_doc, "loman", "loman Documentation", author, "loman", "One line description of project.", "Miscellaneous"),
157 | ]
158 | 
159 | intersphinx_mapping = {"python": ("https://docs.python.org/3", None)}
160 | 
--------------------------------------------------------------------------------
/tests/test_computeengine_structure.py:
--------------------------------------------------------------------------------
1 | import random
2 | 
3 | from loman import Computation, States
4 | from tests.standard_test_computations import BasicFourNodeComputation
5 | 
6 | 
7 | def test_get_inputs():
8 |     comp = BasicFourNodeComputation()
9 |     assert set(comp.get_inputs("a")) == set()
10 |     assert set(comp.get_inputs("b")) == {"a"}
11 |     assert set(comp.get_inputs("c")) == {"a"}
12 |     assert set(comp.get_inputs("d")) == {"c", "b"}
13 |     assert list(map(set, comp.get_inputs(["a", "b", "c", "d"]))) == [set(), {"a"}, {"a"}, {"b", "c"}]
14 | 
15 | 
16 | def test_attribute_i():
17 |     comp = BasicFourNodeComputation()
18 |     assert set(comp.i.a) == set()
19 |     assert set(comp.i.b) == {"a"}
20 |     assert set(comp.i.c) == {"a"}
21 |     assert set(comp.i.d) == {"c", "b"}
22 |     assert set(comp.i["a"]) == set()
23 |     assert set(comp.i["b"]) == {"a"}
24 |     assert set(comp.i["c"]) == {"a"}
25 |     assert set(comp.i["d"]) == {"c", "b"}
26 |     assert list(map(set, comp.i[["a", "b", "c", "d"]])) == [set(), {"a"}, {"a"}, {"b", "c"}]
27 | 
28 | 
29 | def test_get_inputs_order():
30 |     comp = Computation()
31 |     input_nodes = list(("inp", i) for i in range(100))
32 |     for input_node in input_nodes: comp.add_node(input_node)
33 |     random.shuffle(input_nodes)
34 |     comp.add_node("res", lambda *args: args, args=input_nodes, inspect=False)
35 |     assert comp.i.res == input_nodes
36 | 
37 | 
38 | def test_get_original_inputs():
39 |     comp = BasicFourNodeComputation()
40 |     assert set(comp.get_original_inputs()) == {"a"}
41 |     assert set(comp.get_original_inputs("a")) == {"a"}
42 |     assert set(comp.get_original_inputs("b")) == {"a"}
43 |     assert set(comp.get_original_inputs(["b", "c"])) == {"a"}
44 | 
45 |     comp.add_node("a", lambda: 1)
46 |     assert set(comp.get_original_inputs()) == set()
47 | 
48 | 
49 | def test_get_outputs():
50 |     comp = BasicFourNodeComputation()
51 |     assert set(comp.get_outputs("a")) == {"c", "b"}
52 |     assert set(comp.get_outputs("b")) == {"d"}
53 |     assert set(comp.get_outputs("c")) == {"d"}
54 |     assert set(comp.get_outputs("d")) == set()
55 |     assert list(map(set, comp.get_outputs(["a", "b", "c", "d"]))) == [{"b", "c"}, {"d"}, {"d"}, set()]
56 | 
57 | 
58 | def test_attribute_o():
59 |     comp = BasicFourNodeComputation()
60 |     assert set(comp.o.a) == {"c", "b"}
61 |     assert set(comp.o.b) == {"d"}
62 |     assert set(comp.o.c) == {"d"}
63 |     assert set(comp.o.d) == set()
64 |     assert set(comp.o["a"]) == {"c", "b"}
65 |     assert set(comp.o["b"]) == {"d"}
66 |     assert set(comp.o["c"]) == {"d"}
67 |     assert set(comp.o["d"]) == set()
68 |     assert list(map(set, comp.o[["a", "b", "c", "d"]])) == [{"b", "c"}, {"d"}, {"d"}, set()]
69 | 
70 | 
71 | def test_get_final_outputs():
72 |     comp = BasicFourNodeComputation()
73 |     assert set(comp.get_final_outputs()) == {"d"}
74 |     assert 
set(comp.get_final_outputs("a")) == {"d"} 75 | assert set(comp.get_final_outputs("b")) == {"d"} 76 | assert set(comp.get_final_outputs(["b", "c"])) == {"d"} 77 | 78 | 79 | def test_restrict_1(): 80 | comp = BasicFourNodeComputation() 81 | comp.restrict("c") 82 | assert set(comp.nodes()) == {"a", "c"} 83 | 84 | 85 | def test_restrict_2(): 86 | comp = BasicFourNodeComputation() 87 | comp.restrict(["b", "c"]) 88 | assert set(comp.nodes()) == {"a", "b", "c"} 89 | 90 | 91 | def test_restrict_3(): 92 | comp = BasicFourNodeComputation() 93 | comp.restrict("d", ["b", "c"]) 94 | assert set(comp.nodes()) == {"b", "c", "d"} 95 | 96 | 97 | def test_rename_nodes(): 98 | comp = BasicFourNodeComputation() 99 | comp.insert("a", 10) 100 | comp.compute("b") 101 | 102 | comp.rename_node("a", "alpha") 103 | comp.rename_node("b", "beta") 104 | comp.rename_node("c", "gamma") 105 | comp.rename_node("d", "delta") 106 | assert comp.s[["alpha", "beta", "gamma", "delta"]] == [ 107 | States.UPTODATE, 108 | States.UPTODATE, 109 | States.COMPUTABLE, 110 | States.STALE, 111 | ] 112 | 113 | comp.compute("delta") 114 | assert comp.s[["alpha", "beta", "gamma", "delta"]] == [ 115 | States.UPTODATE, 116 | States.UPTODATE, 117 | States.UPTODATE, 118 | States.UPTODATE, 119 | ] 120 | 121 | 122 | def test_rename_nodes_with_dict(): 123 | comp = BasicFourNodeComputation() 124 | comp.insert("a", 10) 125 | comp.compute("b") 126 | 127 | comp.rename_node({"a": "alpha", "b": "beta", "c": "gamma", "d": "delta"}) 128 | assert comp.s[["alpha", "beta", "gamma", "delta"]] == [ 129 | States.UPTODATE, 130 | States.UPTODATE, 131 | States.COMPUTABLE, 132 | States.STALE, 133 | ] 134 | 135 | comp.compute("delta") 136 | assert comp.s[["alpha", "beta", "gamma", "delta"]] == [ 137 | States.UPTODATE, 138 | States.UPTODATE, 139 | States.UPTODATE, 140 | States.UPTODATE, 141 | ] 142 | 143 | 144 | def test_state_map_updated_with_placeholder(): 145 | comp = Computation() 146 | comp.add_node("b", lambda a: a + 1) 147 | assert comp.s.a == States.PLACEHOLDER 148 | assert "a" in comp._get_names_for_state(States.PLACEHOLDER) 149 | 150 | 151 | def test_state_map_updated_with_placeholder_kwds(): 152 | comp = Computation() 153 | comp.add_node("b", lambda x: x + 1, kwds={"x": "a"}) 154 | assert comp.s.a == States.PLACEHOLDER 155 | assert "a" in comp._get_names_for_state(States.PLACEHOLDER) 156 | 157 | 158 | def test_state_map_updated_with_placeholder_args(): 159 | comp = Computation() 160 | comp.add_node("b", lambda x: x + 1, args=["a"]) 161 | assert comp.s.a == States.PLACEHOLDER 162 | assert "a" in comp._get_names_for_state(States.PLACEHOLDER) 163 | -------------------------------------------------------------------------------- /.github/workflows/rhiza_devcontainer.yml: -------------------------------------------------------------------------------- 1 | # This file is part of the jebel-quant/rhiza repository 2 | # (https://github.com/jebel-quant/rhiza). 3 | # 4 | # Devcontainer CI Workflow 5 | # 6 | # Purpose: 7 | # Validates that the devcontainer image builds successfully when devcontainer-related 8 | # files are changed. This workflow does NOT publish/push the image to any registry. 
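# (Enforcement detail: the devcontainers/ci build step below is configured
# with `push: never`, so the built image never leaves the runner.)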
9 | # 10 | # Trigger Conditions: 11 | # - Push to any branch when files in .devcontainer/ change 12 | # - Pull requests to main/master when files in .devcontainer/ change 13 | # - Changes to this workflow file itself 14 | # 15 | # Image Configuration: 16 | # - Registry: Defaults to ghcr.io (override with DEVCONTAINER_REGISTRY variable) 17 | # - Image name: {registry}/{owner}/{repository}/devcontainer 18 | # - Image tag: {branch}-{commit-sha} (e.g., main-abc123def456) 19 | # - Config file: Always .devcontainer/devcontainer.json 20 | # 21 | # Safeguards: 22 | # - Checks if .devcontainer/devcontainer.json exists before building 23 | # - Skips gracefully with a warning if devcontainer.json is not found 24 | # - Converts repository owner to lowercase for Docker compatibility 25 | # 26 | # Publishing: 27 | # Publishing only happens during releases via rhiza_release.yml when the 28 | # PUBLISH_DEVCONTAINER repository variable is set to "true". 29 | # 30 | # For repos without devcontainers: 31 | # This workflow won't trigger unless .devcontainer/ files exist and are modified, 32 | # or this workflow file itself is changed (in which case it skips gracefully). 33 | 34 | 35 | name: DEVCONTAINER 36 | 37 | on: 38 | push: 39 | branches: 40 | - '**' 41 | paths: 42 | - ".devcontainer/**" 43 | - ".github/workflows/rhiza_devcontainer.yml" 44 | 45 | pull_request: 46 | branches: [ main, master ] 47 | paths: 48 | - ".devcontainer/**" 49 | - ".github/workflows/rhiza_devcontainer.yml" 50 | 51 | permissions: 52 | contents: read 53 | packages: write 54 | 55 | jobs: 56 | build: 57 | name: Build Devcontainer Image 58 | runs-on: ubuntu-latest 59 | steps: 60 | - name: Checkout repository 61 | uses: actions/checkout@v6 62 | 63 | - name: Set registry 64 | id: registry 65 | run: | 66 | REGISTRY="${{ vars.DEVCONTAINER_REGISTRY }}" 67 | if [ -z "$REGISTRY" ]; then 68 | REGISTRY="ghcr.io" 69 | fi 70 | echo "registry=$REGISTRY" >> $GITHUB_OUTPUT 71 | 72 | - name: Login to Container Registry 73 | uses: docker/login-action@v3 74 | with: 75 | registry: ${{ steps.registry.outputs.registry }} 76 | username: ${{ github.repository_owner }} 77 | password: ${{ secrets.GITHUB_TOKEN }} 78 | 79 | # Additional gate to skip build if no devcontainer.json exists 80 | - name: Check devcontainer exists 81 | id: check 82 | run: | 83 | if [ ! 
-f ".devcontainer/devcontainer.json" ]; then 84 | echo "exists=false" >> $GITHUB_OUTPUT 85 | echo "::warning::No .devcontainer/devcontainer.json found, skipping build" 86 | else 87 | echo "exists=true" >> $GITHUB_OUTPUT 88 | fi 89 | # repository owner to lowercase for Docker image naming, as devcontainers/ci does not safeguard 90 | - name: Get lowercase repository owner 91 | id: repo_owner 92 | run: echo "owner_lc=$(echo '${{ github.repository_owner }}' | tr '[:upper:]' '[:lower:]')" >> $GITHUB_OUTPUT 93 | 94 | - name: Get Image Name 95 | if: steps.check.outputs.exists == 'true' 96 | id: image_name 97 | run: | 98 | # Check if custom name is provided, otherwise use default 99 | if [ -z "${{ vars.DEVCONTAINER_IMAGE_NAME }}" ]; then 100 | REPO_NAME_LC=$(echo "${{ github.event.repository.name }}" | tr '[:upper:]' '[:lower:]') 101 | # Sanitize repo name: replace invalid characters with hyphens 102 | # Docker image names must match [a-z0-9]+([._-][a-z0-9]+)* 103 | # Replace leading dots and multiple consecutive separators 104 | REPO_NAME_SANITIZED=$(echo "$REPO_NAME_LC" | sed 's/^[._-]*//; s/[._-][._-]*/-/g') 105 | IMAGE_NAME="$REPO_NAME_SANITIZED/devcontainer" 106 | echo "Using default image name component: $IMAGE_NAME" 107 | else 108 | IMAGE_NAME="${{ vars.DEVCONTAINER_IMAGE_NAME }}" 109 | echo "Using custom image name component: $IMAGE_NAME" 110 | fi 111 | 112 | # Validate the image component matches [a-z0-9]+([._-][a-z0-9]+)* with optional / separators 113 | if ! echo "$IMAGE_NAME" | grep -qE '^[a-z0-9]+([._-][a-z0-9]+)*(/[a-z0-9]+([._-][a-z0-9]+)*)*$'; then 114 | echo "::error::Invalid image name component: $IMAGE_NAME" 115 | echo "::error::Each component must match [a-z0-9]+([._-][a-z0-9]+)* separated by /" 116 | exit 1 117 | fi 118 | 119 | IMAGE_NAME="${{ steps.registry.outputs.registry }}/${{ steps.repo_owner.outputs.owner_lc }}/$IMAGE_NAME" 120 | echo "✅ Final image name: $IMAGE_NAME" 121 | echo "image_name=$IMAGE_NAME" >> $GITHUB_OUTPUT 122 | 123 | - name: Sanitize Image Tag 124 | if: steps.check.outputs.exists == 'true' 125 | id: sanitized_tag 126 | run: | 127 | SANITIZED_TAG=$(echo "${{ github.ref_name }}-${{ github.sha }}" | tr '/' '-') 128 | echo "sanitized_tag=$SANITIZED_TAG" >> $GITHUB_OUTPUT 129 | 130 | - name: Build Devcontainer Image 131 | uses: devcontainers/ci@v0.3 132 | if: steps.check.outputs.exists == 'true' 133 | with: 134 | configFile: .devcontainer/devcontainer.json 135 | push: never 136 | imageName: ${{ steps.image_name.outputs.image_name }} 137 | imageTag: ${{ steps.sanitized_tag.outputs.sanitized_tag }} 138 | -------------------------------------------------------------------------------- /tests/test_rhiza/test_git_repo_fixture.py: -------------------------------------------------------------------------------- 1 | """Tests for the git_repo pytest fixture that creates a mock Git repository. 2 | 3 | This file and its associated tests flow down via a SYNC action from the jebel-quant/rhiza repository 4 | (https://github.com/jebel-quant/rhiza). 5 | 6 | This module validates the temporary repository structure, git initialization, 7 | mocked tool executables, environment variables, and basic git configuration the 8 | fixture is expected to provide for integration-style tests. 
9 | """ 10 | 11 | import os 12 | import subprocess 13 | from pathlib import Path 14 | 15 | 16 | class TestGitRepoFixture: 17 | """Tests for the git_repo fixture that sets up a mock git repository.""" 18 | 19 | def test_git_repo_creates_temporary_directory(self, git_repo): 20 | """Git repo fixture should create a temporary directory.""" 21 | assert git_repo.exists() 22 | assert git_repo.is_dir() 23 | 24 | def test_git_repo_contains_pyproject_toml(self, git_repo): 25 | """Git repo should contain a pyproject.toml file.""" 26 | pyproject = git_repo / "pyproject.toml" 27 | assert pyproject.exists() 28 | content = pyproject.read_text() 29 | assert 'name = "test-project"' in content 30 | assert 'version = "0.1.0"' in content 31 | 32 | def test_git_repo_contains_uv_lock(self, git_repo): 33 | """Git repo should contain a uv.lock file.""" 34 | assert (git_repo / "uv.lock").exists() 35 | 36 | def test_git_repo_has_bin_directory_with_mocks(self, git_repo): 37 | """Git repo should have bin directory with mock tools.""" 38 | bin_dir = git_repo / "bin" 39 | assert bin_dir.exists() 40 | assert (bin_dir / "uv").exists() 41 | 42 | def test_git_repo_mock_tools_are_executable(self, git_repo): 43 | """Mock tools should be executable.""" 44 | for tool in ["uv"]: 45 | tool_path = git_repo / "bin" / tool 46 | assert os.access(tool_path, os.X_OK), f"{tool} is not executable" 47 | 48 | def test_git_repo_has_github_scripts_directory(self, git_repo): 49 | """Git repo should have .github/rhiza/scripts directory.""" 50 | scripts_dir = git_repo / ".github" / "rhiza" / "scripts" 51 | assert scripts_dir.exists() 52 | assert (scripts_dir / "release.sh").exists() 53 | assert (scripts_dir / "bump.sh").exists() 54 | 55 | def test_git_repo_scripts_are_executable(self, git_repo): 56 | """GitHub scripts should be executable.""" 57 | for script in ["release.sh", "bump.sh"]: 58 | script_path = git_repo / ".github" / "rhiza" / "scripts" / script 59 | assert os.access(script_path, os.X_OK), f"{script} is not executable" 60 | 61 | def test_git_repo_is_initialized(self, git_repo): 62 | """Git repo should be properly initialized.""" 63 | result = subprocess.run( 64 | ["git", "rev-parse", "--git-dir"], 65 | cwd=git_repo, 66 | capture_output=True, 67 | text=True, 68 | ) 69 | assert result.returncode == 0 70 | assert ".git" in result.stdout 71 | 72 | def test_git_repo_has_master_branch(self, git_repo): 73 | """Git repo should be on master branch.""" 74 | result = subprocess.run( 75 | ["git", "branch", "--show-current"], 76 | cwd=git_repo, 77 | capture_output=True, 78 | text=True, 79 | ) 80 | assert result.returncode == 0 81 | assert result.stdout.strip() == "master" 82 | 83 | def test_git_repo_has_initial_commit(self, git_repo): 84 | """Git repo should have an initial commit.""" 85 | result = subprocess.run( 86 | ["git", "log", "--oneline"], 87 | cwd=git_repo, 88 | capture_output=True, 89 | text=True, 90 | ) 91 | assert result.returncode == 0 92 | assert "Initial commit" in result.stdout 93 | 94 | def test_git_repo_has_remote_configured(self, git_repo): 95 | """Git repo should have origin remote configured.""" 96 | result = subprocess.run( 97 | ["git", "remote", "-v"], 98 | cwd=git_repo, 99 | capture_output=True, 100 | text=True, 101 | ) 102 | assert result.returncode == 0 103 | assert "origin" in result.stdout 104 | 105 | def test_git_repo_user_config_is_set(self, git_repo): 106 | """Git repo should have user.email and user.name configured.""" 107 | email = subprocess.check_output( 108 | ["git", "config", "user.email"], 109 | 
cwd=git_repo, 110 | text=True, 111 | ).strip() 112 | name = subprocess.check_output( 113 | ["git", "config", "user.name"], 114 | cwd=git_repo, 115 | text=True, 116 | ).strip() 117 | assert email == "test@example.com" 118 | assert name == "Test User" 119 | 120 | def test_git_repo_working_tree_is_clean(self, git_repo): 121 | """Git repo should start with a clean working tree.""" 122 | result = subprocess.run( 123 | ["git", "status", "--porcelain"], 124 | cwd=git_repo, 125 | capture_output=True, 126 | text=True, 127 | ) 128 | assert result.returncode == 0 129 | assert result.stdout.strip() == "" 130 | 131 | def test_git_repo_changes_current_directory(self, git_repo): 132 | """Git repo fixture should change to the temporary directory.""" 133 | current_dir = Path.cwd() 134 | assert current_dir == git_repo 135 | 136 | def test_git_repo_modifies_path_environment(self, git_repo): 137 | """Git repo fixture should prepend bin directory to PATH.""" 138 | path_env = os.environ.get("PATH", "") 139 | bin_dir = str(git_repo / "bin") 140 | assert bin_dir in path_env 141 | assert path_env.startswith(bin_dir) 142 | -------------------------------------------------------------------------------- /tests/test_blocks.py: -------------------------------------------------------------------------------- 1 | import pytest 2 | 3 | from loman import * 4 | from loman.computeengine import Block 5 | 6 | 7 | def test_simple_block(): 8 | comp_inner = Computation() 9 | comp_inner.add_node("a") 10 | comp_inner.add_node("b", lambda a: a + 1) 11 | comp_inner.add_node("c", lambda a: 2 * a) 12 | comp_inner.add_node("d", lambda b, c: b + c) 13 | 14 | comp = Computation() 15 | comp.add_block("foo", comp_inner) 16 | comp.add_block("bar", comp_inner) 17 | comp.add_node("input_foo") 18 | comp.add_node("input_bar") 19 | comp.link("foo/a", "input_foo") 20 | comp.link("bar/a", "input_bar") 21 | comp.add_node("output", lambda x, y: x + y, kwds={"x": "foo/d", "y": "bar/d"}) 22 | 23 | comp.insert("input_foo", value=7) 24 | comp.insert("input_bar", value=10) 25 | 26 | comp.compute_all() 27 | 28 | assert comp.v["foo/d"] == 22 29 | assert comp.v["bar/d"] == 31 30 | assert comp.v.output == 22 + 31 31 | 32 | 33 | def test_add_node_for_block_definition(): 34 | comp = Computation() 35 | comp.add_node("foo/a") 36 | comp.add_node("foo/b", lambda a: a + 1) 37 | comp.add_node("foo/c", lambda a: 2 * a) 38 | comp.add_node("foo/d", lambda b, c: b + c) 39 | comp.insert("foo/a", value=7) 40 | comp.compute_all() 41 | assert comp.v["foo/d"] == 22 42 | 43 | 44 | def test_add_node_for_block_definition_with_kwds(): 45 | comp = Computation() 46 | comp.add_node("foo_a/a") 47 | comp.add_node("foo_b/b", lambda a: a + 1, kwds={"a": "foo_a/a"}) 48 | comp.add_node("foo_c/c", lambda a: 2 * a, kwds={"a": "foo_a/a"}) 49 | comp.add_node("foo_d/d", lambda b, c: b + c, kwds={"b": "foo_b/b", "c": "foo_c/c"}) 50 | comp.insert("foo_a/a", value=7) 51 | comp.compute_all() 52 | assert comp.v["foo_d/d"] == 22 53 | 54 | 55 | def test_add_block_with_links(): 56 | comp_inner = Computation() 57 | comp_inner.add_node("a") 58 | comp_inner.add_node("b", lambda a: a + 1) 59 | comp_inner.add_node("c", lambda a: 2 * a) 60 | comp_inner.add_node("d", lambda b, c: b + c) 61 | 62 | comp = Computation() 63 | comp.add_block("foo", comp_inner, links={"a": "input_foo"}) 64 | comp.add_block("bar", comp_inner, links={"a": "input_bar"}) 65 | comp.add_node("output", lambda x, y: x + y, kwds={"x": "foo/d", "y": "bar/d"}) 66 | 67 | comp.add_node("input_foo", value=7) 68 | comp.add_node("input_bar", 
value=10) 69 | 70 | comp.compute_all() 71 | 72 | assert comp.v["foo/d"] == 22 73 | assert comp.v["bar/d"] == 31 74 | assert comp.v.output == 22 + 31 75 | 76 | 77 | def test_add_block_with_keep_values_false(): 78 | comp_inner = Computation() 79 | comp_inner.add_node("a", value=7) 80 | comp_inner.add_node("b", lambda a: a + 1) 81 | comp_inner.add_node("c", lambda a: 2 * a) 82 | comp_inner.add_node("d", lambda b, c: b + c) 83 | comp_inner.compute_all() 84 | 85 | comp = Computation() 86 | comp.add_block("foo", comp_inner, keep_values=False, links={"a": "input_foo"}) 87 | comp.add_block("bar", comp_inner, keep_values=False, links={"a": "input_bar"}) 88 | comp.add_node("output", lambda x, y: x + y, kwds={"x": "foo/d", "y": "bar/d"}) 89 | 90 | comp.add_node("input_foo", value=7) 91 | comp.add_node("input_bar", value=10) 92 | 93 | comp.compute_all() 94 | 95 | assert comp.v["foo/d"] == 22 96 | assert comp.v["bar/d"] == 31 97 | assert comp.v.output == 22 + 31 98 | 99 | 100 | def test_add_block_with_keep_values_true(): 101 | comp_inner = Computation() 102 | comp_inner.add_node("a", value=7) 103 | comp_inner.add_node("b", lambda a: a + 1) 104 | comp_inner.add_node("c", lambda a: 2 * a) 105 | comp_inner.add_node("d", lambda b, c: b + c) 106 | comp_inner.compute_all() 107 | 108 | comp = Computation() 109 | comp.add_block("foo", comp_inner, keep_values=True) 110 | comp.add_block("bar", comp_inner, keep_values=True, links={"a": "input_bar"}) 111 | comp.add_node("output", lambda x, y: x + y, kwds={"x": "foo/d", "y": "bar/d"}) 112 | 113 | comp.add_node("input_bar", value=10) 114 | 115 | assert comp.v["foo/d"] == 22 116 | assert comp.v["bar/d"] == 22 117 | 118 | comp.compute_all() 119 | 120 | assert comp.v["foo/d"] == 22 121 | assert comp.v["bar/d"] == 31 122 | assert comp.v.output == 22 + 31 123 | 124 | 125 | def test_block_accessors(): 126 | comp = Computation() 127 | comp.add_node("foo1/bar1/baz1/a", value=1) 128 | 129 | assert comp.v.foo1.bar1.baz1.a == 1 130 | assert comp.v["foo1/bar1/baz1/a"] == 1 131 | assert comp.v["foo1"].bar1.baz1.a == 1 132 | assert comp.v.foo1["bar1"].baz1.a == 1 133 | 134 | with pytest.raises(AttributeError): 135 | comp.v.foo1.bar1.baz1.nonexistent 136 | 137 | comp.add_node("foo1/bar1/baz1", value=2) 138 | 139 | assert comp.v.foo1.bar1.baz1 == 2 140 | with pytest.raises(AttributeError): 141 | comp.v.foo1.bar1.nonexistent 142 | 143 | 144 | def test_computation_factory_with_blocks(): 145 | @ComputationFactory 146 | class InnerComputation: 147 | a = input_node() 148 | 149 | @calc_node 150 | def b(self, a): 151 | return a + 1 152 | 153 | @calc_node 154 | def c(self, a): 155 | return 2 * a 156 | 157 | @calc_node 158 | def d(self, b, c): 159 | return b + c 160 | 161 | @ComputationFactory 162 | class OuterComputation: 163 | input_foo = input_node() 164 | input_bar = input_node() 165 | 166 | foo = block(InnerComputation, links={"a": "input_foo"}) 167 | bar = block(InnerComputation, links={"a": "input_bar"}) 168 | 169 | @calc_node(kwds={"a": "foo/d", "b": "bar/d"}) 170 | def output(self, a, b): 171 | return a + b 172 | 173 | comp = OuterComputation() 174 | 175 | comp.insert("input_foo", value=7) 176 | comp.insert("input_bar", value=10) 177 | 178 | comp.compute_all() 179 | 180 | assert comp.v.foo.d == 22 181 | assert comp.v.bar.d == 31 182 | assert comp.v.output == 22 + 31 183 | 184 | 185 | def test_block_add_to_comp(): 186 | inner_comp = Computation() 187 | inner_comp.add_node("a", value=10) 188 | inner_comp.add_node("b", lambda a: a * 2) 189 | outer_comp = Computation() 190 | 
Block(inner_comp).add_to_comp(outer_comp, "blk", None, True) 191 | outer_comp.compute("blk/b") 192 | assert outer_comp.nodes() == ["blk/a", "blk/b"] 193 | assert outer_comp.v["blk/b"] == 20 194 | -------------------------------------------------------------------------------- /tests/test_class_style_definition.py: -------------------------------------------------------------------------------- 1 | from loman import ( 2 | Computation, 3 | ComputationFactory, 4 | States, 5 | calc_node, 6 | input_node, 7 | ) 8 | 9 | 10 | def test_class_style_definition(): 11 | class FooComp: 12 | a = input_node(value=3) 13 | 14 | @calc_node 15 | def b(a): 16 | return a + 1 17 | 18 | @calc_node 19 | def c(a): 20 | return 2 * a 21 | 22 | @calc_node 23 | def d(b, c): 24 | return b + c 25 | 26 | comp = Computation.from_class(FooComp) 27 | comp.compute_all() 28 | 29 | assert comp.v.d == 10 30 | 31 | 32 | def test_class_style_definition_as_decorator(): 33 | @Computation.from_class 34 | class FooComp: 35 | a = input_node(value=3) 36 | 37 | @calc_node 38 | def b(a): 39 | return a + 1 40 | 41 | @calc_node 42 | def c(a): 43 | return 2 * a 44 | 45 | @calc_node 46 | def d(b, c): 47 | return b + c 48 | 49 | FooComp.compute_all() 50 | 51 | assert FooComp.v.d == 10 52 | 53 | 54 | def test_class_style_definition_as_factory_decorator(): 55 | @ComputationFactory 56 | class FooComp: 57 | a = input_node(value=3) 58 | 59 | @calc_node 60 | def b(a): 61 | return a + 1 62 | 63 | @calc_node 64 | def c(a): 65 | return 2 * a 66 | 67 | @calc_node 68 | def d(b, c): 69 | return b + c 70 | 71 | comp = FooComp() 72 | comp.compute_all() 73 | assert comp.s.d == States.UPTODATE and comp.v.d == 10 74 | 75 | 76 | def test_class_style_definition_as_factory_decorator_with_args(): 77 | @ComputationFactory() 78 | class FooComp: 79 | a = input_node(value=3) 80 | 81 | @calc_node 82 | def b(a): 83 | return a + 1 84 | 85 | @calc_node 86 | def c(a): 87 | return 2 * a 88 | 89 | @calc_node 90 | def d(b, c): 91 | return b + c 92 | 93 | comp = FooComp() 94 | comp.compute_all() 95 | assert comp.s.d == States.UPTODATE and comp.v.d == 10 96 | 97 | 98 | def test_computation_factory_methods_ignore_self_by_default(): 99 | @ComputationFactory 100 | class FooComp: 101 | a = input_node(value=3) 102 | 103 | @calc_node 104 | def b(self, a): 105 | return a + 1 106 | 107 | @calc_node 108 | def c(self, a): 109 | return 2 * a 110 | 111 | @calc_node 112 | def d(self, b, c): 113 | return b + c 114 | 115 | comp = FooComp() 116 | comp.compute_all() 117 | assert comp.s.d == States.UPTODATE and comp.v.d == 10 118 | 119 | 120 | def test_computation_factory_methods_explicitly_use_self(): 121 | @ComputationFactory(ignore_self=False) 122 | class FooComp: 123 | a = input_node(value=3) 124 | 125 | @calc_node 126 | def b(self, a): 127 | return a + 1 128 | 129 | @calc_node 130 | def c(self, a): 131 | return 2 * a 132 | 133 | @calc_node 134 | def d(self, b, c): 135 | return b + c 136 | 137 | comp = FooComp() 138 | comp.compute_all() 139 | assert comp.s.d == States.UNINITIALIZED 140 | 141 | comp.add_node("self", value=None) 142 | comp.compute_all() 143 | assert comp.s.d == States.UPTODATE and comp.v.d == 10 144 | 145 | 146 | def test_standard_computation_does_not_ignore_self(): 147 | def b(self, a): 148 | return a + 1 149 | 150 | def c(self, a): 151 | return 2 * a 152 | 153 | def d(self, b, c): 154 | return b + c 155 | 156 | comp = Computation() 157 | comp.add_node("a", value=3) 158 | comp.add_node("b", b) 159 | comp.add_node("c", c) 160 | comp.add_node("d", d) 161 | 162 | 
comp.compute_all() 163 | assert comp.s.d == States.UNINITIALIZED 164 | 165 | comp.add_node("self", value=1) 166 | comp.compute_all() 167 | assert comp.s.d == States.UPTODATE and comp.v.d == 10 168 | 169 | 170 | def test_computation_factory_methods_calc_node_ignore_self(): 171 | @ComputationFactory(ignore_self=False) 172 | class FooComp: 173 | a = input_node(value=3) 174 | 175 | @calc_node 176 | def b(a): 177 | return a + 1 178 | 179 | @calc_node(ignore_self=True) 180 | def c(self, a): 181 | return 2 * a 182 | 183 | @calc_node 184 | def d(b, c): 185 | return b + c 186 | 187 | comp = FooComp() 188 | comp.add_node("self", value=None) # Provide self node as required when ignore_self=False 189 | comp.compute_all() 190 | assert comp.s.d == States.UPTODATE and comp.v.d == 10 191 | 192 | 193 | def test_computation_factory_methods_calling_methods_on_self(): 194 | @ComputationFactory 195 | class FooComp: 196 | a = input_node(value=3) 197 | 198 | def add(self, x, y): 199 | return x + y 200 | 201 | @calc_node 202 | def b(self, a): 203 | return self.add(a, 1) 204 | 205 | @calc_node 206 | def c(self, a): 207 | return 2 * a 208 | 209 | @calc_node 210 | def d(self, b, c): 211 | return self.add(b, c) 212 | 213 | comp = FooComp() 214 | comp.compute_all() 215 | assert comp.s.d == States.UPTODATE and comp.v.d == 10 216 | 217 | 218 | def test_computation_factory_methods_calling_methods_on_self_recursively(): 219 | @ComputationFactory 220 | class FooComp: 221 | a = input_node(value=3) 222 | 223 | def really_add(self, x, y): 224 | return x + y 225 | 226 | def add(self, x, y): 227 | return self.really_add(x, y) 228 | 229 | @calc_node 230 | def b(self, a): 231 | return self.add(a, 1) 232 | 233 | @calc_node 234 | def c(self, a): 235 | return 2 * a 236 | 237 | @calc_node 238 | def d(self, b, c): 239 | return self.add(b, c) 240 | 241 | comp = FooComp() 242 | comp.compute_all() 243 | assert comp.s.d == States.UPTODATE and comp.v.d == 10 244 | 245 | 246 | def test_computation_factory_calc_node_no_args(): 247 | @ComputationFactory 248 | class FooComp: 249 | @calc_node 250 | def a(): 251 | return 3 252 | 253 | comp = FooComp() 254 | comp.compute_all() 255 | assert comp.s.a == States.UPTODATE and comp.v.a == 3 256 | -------------------------------------------------------------------------------- /tests/test_rhiza/test_bump_script.py: -------------------------------------------------------------------------------- 1 | """Tests for the bump.sh script using a sandboxed git environment. 2 | 3 | This file and its associated tests flow down via a SYNC action from the jebel-quant/rhiza repository 4 | (https://github.com/jebel-quant/rhiza). 5 | 6 | Provides test fixtures for testing git-based workflows and version management. 
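The bump script is menu-driven over stdin, so each test feeds it
newline-separated answers. A minimal sketch ("1" selects a patch bump and
"n" declines the commit prompt):

    result = subprocess.run([str(script)], cwd=git_repo, input="1\nn\n", capture_output=True, text=True)
    assert result.returncode == 0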
7 | """ 8 | 9 | import subprocess 10 | 11 | import pytest 12 | 13 | 14 | @pytest.mark.parametrize( 15 | "choice, expected_version", 16 | [ 17 | ("1", "0.1.1"), # patch 18 | ("2", "0.2.0"), # minor 19 | ("3", "1.0.0"), # major 20 | ], 21 | ) 22 | def test_bump_updates_version_no_commit(git_repo, choice, expected_version): 23 | """Running `bump` interactively updates pyproject.toml correctly.""" 24 | script = git_repo / ".github" / "rhiza" / "scripts" / "bump.sh" 25 | 26 | # Input: choice -> n (no commit) 27 | input_str = f"{choice}\nn\n" 28 | 29 | result = subprocess.run([str(script)], cwd=git_repo, input=input_str, capture_output=True, text=True) 30 | 31 | assert result.returncode == 0 32 | assert f"-> {expected_version} in pyproject.toml" in result.stdout 33 | 34 | # Verify pyproject.toml updated 35 | with open(git_repo / "pyproject.toml") as f: 36 | content = f.read() 37 | assert f'version = "{expected_version}"' in content 38 | 39 | # Verify no tag created yet 40 | tags = subprocess.check_output(["git", "tag"], cwd=git_repo, text=True) 41 | assert f"v{expected_version}" not in tags 42 | 43 | 44 | def test_bump_commit_push(git_repo): 45 | """Bump with commit and push.""" 46 | script = git_repo / ".github" / "rhiza" / "scripts" / "bump.sh" 47 | 48 | # Input: 1 (patch) -> y (commit) -> y (push) 49 | input_str = "1\ny\ny\n" 50 | 51 | result = subprocess.run([str(script)], cwd=git_repo, input=input_str, capture_output=True, text=True) 52 | 53 | assert result.returncode == 0 54 | assert "Version committed" in result.stdout 55 | assert "Pushed to origin/master" in result.stdout 56 | 57 | # Verify commit on remote 58 | remote_log = subprocess.check_output(["git", "log", "origin/master", "-1", "--pretty=%B"], cwd=git_repo, text=True) 59 | assert "chore: bump version to 0.1.1" in remote_log 60 | 61 | 62 | def test_uncommitted_changes_failure(git_repo): 63 | """Script fails if there are uncommitted changes.""" 64 | script = git_repo / ".github" / "rhiza" / "scripts" / "bump.sh" 65 | 66 | # Create a tracked file and commit it 67 | tracked_file = git_repo / "tracked_file.txt" 68 | tracked_file.touch() 69 | subprocess.run(["git", "add", "tracked_file.txt"], cwd=git_repo, check=True) 70 | subprocess.run(["git", "commit", "-m", "Add tracked file"], cwd=git_repo, check=True) 71 | 72 | # Modify tracked file to create uncommitted change 73 | with open(tracked_file, "a") as f: 74 | f.write("\n# change") 75 | 76 | # Input: 1 (patch) 77 | result = subprocess.run([str(script)], cwd=git_repo, input="1\n", capture_output=True, text=True) 78 | 79 | assert result.returncode == 1 80 | assert "You have uncommitted changes" in result.stdout 81 | 82 | 83 | @pytest.mark.parametrize( 84 | "input_version, expected_version", 85 | [ 86 | ("1.2.3", "1.2.3"), 87 | ("v1.2.4", "1.2.4"), 88 | ("2.0.0rc1", "2.0.0rc1"), 89 | ("2.0.0a1", "2.0.0a1"), 90 | ("2.0.0.post1", "2.0.0.post1"), 91 | ], 92 | ) 93 | def test_bump_explicit_version(git_repo, input_version, expected_version): 94 | """Bump with explicit version.""" 95 | script = git_repo / ".github" / "rhiza" / "scripts" / "bump.sh" 96 | 97 | # Input: 4 (explicit) -> input_version -> n (no commit) 98 | input_str = f"4\n{input_version}\nn\n" 99 | 100 | result = subprocess.run([str(script)], cwd=git_repo, input=input_str, capture_output=True, text=True) 101 | 102 | assert result.returncode == 0 103 | assert f"-> {expected_version} in pyproject.toml" in result.stdout 104 | with open(git_repo / "pyproject.toml") as f: 105 | content = f.read() 106 | assert f'version = 
"{expected_version}"' in content 107 | 108 | 109 | def test_bump_explicit_version_invalid(git_repo): 110 | """Bump fails with invalid explicit version.""" 111 | script = git_repo / ".github" / "rhiza" / "scripts" / "bump.sh" 112 | version = "not-a-version" 113 | 114 | # Input: 4 (explicit) -> not-a-version 115 | input_str = f"4\n{version}\n" 116 | 117 | result = subprocess.run([str(script)], cwd=git_repo, input=input_str, capture_output=True, text=True) 118 | 119 | assert result.returncode == 1 120 | assert f"Invalid version format: {version}" in result.stdout 121 | 122 | 123 | def test_bump_fails_existing_tag(git_repo): 124 | """Bump fails if tag already exists.""" 125 | script = git_repo / ".github" / "rhiza" / "scripts" / "bump.sh" 126 | 127 | # Create tag v0.1.1 128 | subprocess.run(["git", "tag", "v0.1.1"], cwd=git_repo, check=True) 129 | 130 | # Try to bump to 0.1.1 (patch bump from 0.1.0) 131 | # Input: 1 (patch) 132 | result = subprocess.run([str(script)], cwd=git_repo, input="1\n", capture_output=True, text=True) 133 | 134 | assert result.returncode == 1 135 | assert "Tag 'v0.1.1' already exists locally" in result.stdout 136 | 137 | 138 | def test_warn_on_non_default_branch(git_repo): 139 | """Script warns if not on default branch.""" 140 | script = git_repo / ".github" / "rhiza" / "scripts" / "bump.sh" 141 | 142 | # Create and switch to new branch 143 | subprocess.run(["git", "checkout", "-b", "feature"], cwd=git_repo, check=True) 144 | 145 | # Run bump (input 1 (patch), then 'y' to proceed with non-default branch, then n (no commit)) 146 | input_str = "1\ny\nn\n" 147 | 148 | result = subprocess.run([str(script)], cwd=git_repo, input=input_str, capture_output=True, text=True) 149 | assert result.returncode == 0 150 | assert "You are on branch 'feature' but the default branch is 'master'" in result.stdout 151 | 152 | 153 | def test_bump_fails_if_pyproject_toml_dirty(git_repo): 154 | """Bump fails if pyproject.toml has uncommitted changes.""" 155 | script = git_repo / ".github" / "rhiza" / "scripts" / "bump.sh" 156 | 157 | # Modify pyproject.toml 158 | with open(git_repo / "pyproject.toml", "a") as f: 159 | f.write("\n# dirty") 160 | 161 | # Input: 1 (patch) 162 | result = subprocess.run([str(script)], cwd=git_repo, input="1\n", capture_output=True, text=True) 163 | 164 | assert result.returncode == 1 165 | assert "You have uncommitted changes" in result.stdout 166 | -------------------------------------------------------------------------------- /CHANGELOG.md: -------------------------------------------------------------------------------- 1 | # Change Log 2 | 3 | ## [unreleased] 4 | - Added type hints on ComputationFactory 5 | - BUGFIX: `compute_and_get_value` sets error state on exception 6 | 7 | ## [0.5.3] (2025-06-20) 8 | 9 | - ComputationFactories can have blocks added directly to them 10 | - Add support for viewing node function source code 11 | - Added support for node, block and computation metadata 12 | - Added a custom json serializer for future use 13 | - Various bug fixes 14 | 15 | ## [0.5.2] (2025-05-28) 16 | - Added support for pattern matching in node transformations, including wildcard patterns 17 | - Add nested attribute views, so comp.v.foo.bar.baz is equivalent to comp.v['foo/bar/baz'] 18 | - Set COLLAPSE as default node transformation, and added EXPANDED NodeTransformation type (ancestors of expanded nodes are automatically expanded) 19 | - Added `collapse_all` flag to GraphView to support backward compatibility 20 | - Cleaned up GraphView.refresh 
implementation
21 | - Moved Path functionality to NodeKey
22 | 
23 | ## [0.5.1] (2025-05-21)
24 | 
25 | - Add root parameter to Computation.draw to support viewing sub-blocks
26 | - Add NodeTransformations, including a new COLLAPSE node transformation
27 | - Modify add_node so that argument names of supplied function will look up within same block, rather than root block
28 | - Add links parameter to Computation.add_block
29 | - Add keep_values parameter to Computation.add_block
30 | - Blocks show state if all blocks have same state (or error or stale if any do)
31 | - BUGFIX: Linking a node to itself is a no-op
32 | - BUGFIX: Inserting to a placeholder node raises a specific exception
33 | - BUGFIX: Composite blocks retain sub-block on collapsing
34 | 
35 | ## [0.5.0] (2025-04-10)
36 | 
37 | - Add support for blocks (Computation.add_block)
38 | - Add support for links (Computation.link)
39 | - Nodes keyed using NodeKey with paths to support nested blocks
40 | - Visualization modified to support grouping elements in same block
41 | - BUGFIX: Fix calc nodes with no parameters
42 | - Switched to use Python build front-end
43 | 
44 | ## [0.4.1] (2024-11-29)
45 | 
46 | - If first parameter of a `@calc_node` is called `self`, then it can be used to call non-calc_node methods of the class. (Can be disabled with `@ComputationFactory(ignore_self=False)` or `@calc_node(ignore_self=False)`.)
47 | - Add support for converters to force input and calc node values to a particular type/form
48 | - Add support for serializing nodes that are computations
49 | - create_viz_dag now takes a list of node_formatters, which apply arbitrary formatting to the visualization node based on the loman node (Included NodeFormatters are `ColorByState`, `ColorByTiming`, `ShapeByType`, `StandardLabel`, `StandardGroup`, `StandardStylingOverrides`)
50 | - Fix ReadTheDocs build
51 | - Convert documentation from reStructuredText to MyST Markdown
52 | 
53 | ## [0.4.0] (2024-08-22)
54 | 
55 | - Removed Python 2 support
56 | - Changed test framework from nose to pytest
57 | - Add `compute_and_get_value`, `x` attribute-style access to compute value of a node and get it in one step
58 | - Replace namedtuples with dataclasses
59 | - BUGFIX: Fix equality testing on Computation.insert
60 | - Use DataFrame.equals and Series.equals to test equality in Computation.insert
61 | - BUGFIX: Fix handling of groups in rendering functions
62 | 
63 | ## [0.3.0] (2019-10-24)
64 | 
65 | - Added `get_original_inputs` to see source inputs of entire computation or a given set of nodes
66 | - Added `get_outputs`, `o` attribute-style access to get list of nodes fed by a particular node
67 | - Added `get_final_outputs` to get end nodes of a computation or a given set of nodes
68 | - Added `restrict` method to remove nodes unnecessary to calculate a given set of outputs
69 | - Added `rename_node` method to rename a node, while ensuring that nodes which use it as an input continue to do so
70 | - Added `repoint` method allowing all nodes which use a given node as an input to use an alternative node instead
71 | - Documented `get_inputs` and `i` attribute-style accessor
72 | 
73 | ## [0.2.1] (2017-12-29)
74 | 
75 | - Added class-style definitions of computations
76 | 
77 | ## [0.2.0] (2017-12-05)
78 | 
79 | - Added support for multithreading when calculating nodes
80 | - Update to use networkx 2.0
81 | - Added `print_errors` method
82 | - Added `force` parameter to `insert` method to allow no recalculation if value is not updated
83 | - BUGFIX: Fix behavior when 
calculation node overwritten with value node
84 | 
85 | ## [0.1.3] (2017-07-02)
86 | 
87 | - Methods set_tag and clear_tag support lists or generators of tags. Method nodes_by_tag can retrieve a list of nodes with a specific tag.
88 | - Remove set_tags and clear_tags.
89 | - Add node computation timing data, accessible through tim attribute-style access or get_timing method.
90 | - compute method can accept a list of nodes to compute.
91 | - Loman now uses pydotplus for visualization. Internally, visualization has two steps: converting a Computation to a networkx visualization DAG, and then converting that to a pydotplus Dot object.
92 | - Added view method - creates and opens a temporary pdf visualization.
93 | - draw and view methods can show timing information with colors='timing' option
94 | 
95 | ## [0.1.2] (2017-04-28)
96 | 
97 | - Add @node function decorator
98 | - Add ConstantValue (with alias C) to provide constant values to function parameters without creating a placeholder node for that constant
99 | - BUGFIX: Visualizing computations was broken in v0.1.1!
100 | 
101 | ## [0.1.1] (2017-04-25)
102 | 
103 | - Support for Python 3.4 and 3.5
104 | - Method and attribute-style accessors support lists of nodes
105 | - Added support for node-tagging
106 | - Compute method can optionally throw exceptions, for easier interactive debugging
107 | - `get_inputs` method and `i` attribute-style access to get list of inputs to a node
108 | - `add_node` takes optional inspect parameter to avoid inspection for performance
109 | - `add_node` takes optional group to render graph layout with subgraphs
110 | - `draw_graphviz` renamed to `draw`
111 | - `draw_nx` removed
112 | - `get_df` renamed to `to_df`
113 | - `get_value_dict` renamed to `to_dict`
114 | - BUGFIX: implementation of `_get_calc_nodes` used by compute fixed
115 | - BUGFIX: args parameters do not create spurious nodes
116 | - BUGFIX: default function parameters do not cause placeholder node to be created
117 | - BUGFIX: node states correctly updated when calling add_node with value parameter
118 | 
119 | ## [0.1.0] (2017-04-05)
120 | 
121 | - Added documentation: Introduction, Quickstart and Strategies for Use
122 | - Added docstrings to Computation methods
123 | - Added logging
124 | - Added `v` and `s` fields for attribute-style access to values and states of nodes
125 | - BUGFIX: Detect cycles in `compute_all`
126 | 
127 | ## [0.0.1] (2017-03-24)
128 | 
129 | - Computation object with `add_node`, `insert`, `compute`, `compute_all`, `state`, `value`, `set_stale` methods
130 | - Computation object can be drawn with `draw_graphviz` method
131 | - Nodes can be updated in place
132 | - Computation handles exceptions in node computation, storing exception and traceback
133 | - Can specify mapping between function parameters and input nodes
134 | - Convenience methods: `add_named_tuple_expansion`, `add_map_node`, `get_df`, `get_value_dict`, `insert_from`, `insert_multi`
135 | - Computation objects can be serialized
136 | - Computation objects can be shallow-copied with `copy`
137 | - Unit tests
138 | - Runs under Python 2.7, 3.6
139 | 
--------------------------------------------------------------------------------