├── .github └── workflows │ └── pyClarion-test.yaml ├── .gitignore ├── LICENSE ├── changelog.md ├── contributing.md ├── pyClarion ├── __init__.py ├── base │ ├── __init__.py │ ├── constructs.py │ ├── processes.py │ ├── symbols.py │ └── uris.py ├── components │ ├── __init__.py │ ├── basic.py │ ├── filters.py │ ├── ms.py │ ├── networks.py │ ├── stores.py │ └── wm.py ├── dev.py ├── numdicts │ ├── __init__.py │ ├── basic_ops.py │ ├── dict_ops.py │ ├── gradient_tape.py │ ├── nn_ops.py │ ├── numdict.py │ ├── utils.py │ └── vec_ops.py └── utils │ ├── __init__.py │ ├── inspect.py │ ├── load.py │ ├── pprint.py │ └── visualize.py ├── readme.md ├── setup.py └── tutorial ├── agent.png ├── demo.py ├── frs.ccml ├── notes.md └── out.txt /.github/workflows/pyClarion-test.yaml: -------------------------------------------------------------------------------- 1 | # This workflow will install Python dependencies and run tests with a variety of Python versions 2 | # For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions 3 | 4 | name: pyClarion test 5 | 6 | on: 7 | push: 8 | branches: [ main, dev ] 9 | pull_request: 10 | branches: [ main, dev ] 11 | 12 | jobs: 13 | build: 14 | 15 | runs-on: ubuntu-latest 16 | strategy: 17 | matrix: 18 | python-version: ['3.7', '3.8', '3.9'] 19 | 20 | steps: 21 | - uses: actions/checkout@v2 22 | - name: Set up Python ${{ matrix.python-version }} 23 | uses: actions/setup-python@v2 24 | with: 25 | python-version: ${{ matrix.python-version }} 26 | - name: Install dependencies 27 | run: | 28 | python -m pip install --upgrade pip 29 | - name: Install pyClarion 30 | run: | 31 | python -m pip install -e . 
32 | - name: Test with unittest 33 | run: | 34 | python -m unittest discover 35 | # - name: Run examples 36 | # run: | 37 | # python examples/gradients.py 38 | # python examples/lagged_features.py 39 | # python examples/free_association.py 40 | # python examples/flow_control.py 41 | # python examples/q_learning.py 42 | # python examples/chunk_extraction.py 43 | # python examples/working_memory.py 44 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | # Compiled source # 2 | ################### 3 | *.com 4 | *.class 5 | *.dll 6 | *.exe 7 | *.o 8 | *.so 9 | 10 | # Packages # 11 | ############ 12 | # it's better to unpack these files and commit the raw source 13 | # git has its own built in compression methods 14 | *.7z 15 | *.dmg 16 | *.gz 17 | *.iso 18 | *.jar 19 | *.rar 20 | *.tar 21 | *.zip 22 | 23 | # Logs and databases # 24 | ###################### 25 | *.log 26 | *.sql 27 | *.sqlite 28 | 29 | # OS generated files # 30 | ###################### 31 | .DS_Store 32 | .DS_Store? 
33 | ._* 34 | .Spotlight-V100 35 | .Trashes 36 | ehthumbs.db 37 | Thumbs.db 38 | 39 | # Python # 40 | ########## 41 | 42 | **/__pycache__/ 43 | *.mypy_cache 44 | *.egg-info 45 | 46 | # Misc # 47 | ######## 48 | 49 | .idea 50 | .vscode -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2022 Can Serif Mekik 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /changelog.md: -------------------------------------------------------------------------------- 1 | # Changelog 2 | 3 | All notable changes to this project will be documented in this file. 
4 | 5 | The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), 6 | and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). 7 | 8 | ## [0.18.0] (2022-11-01) 9 | 10 | This is a backwards incompatible rewrite. 11 | 12 | ### Added 13 | 14 | - A new `dimension` symbol. 15 | - Support for multiple outputs in each process. 16 | - A new mini language `ccml` and interpreter for easily specifying initial chunks & rules. 17 | - The `dev` module, collecting tools and resources relevant for developing new components. 18 | - Visualization and inspection tools. 19 | - New processes: `NAM`, `Drives`. 20 | 21 | ### Changed 22 | 23 | - Renamed several classes in base; basic design is preserved. 24 | - Replaced construct symbols with URI strings in a majority of cases. Construct symbols are still used as numdict keys for features, chunks and rules. Even in these cases, the design has been heavily revised. 25 | - Integrated `Domain` and `Interface` functionality into `Process` objects. 26 | - Simplified `Process` objects. Most notably, `call()` now expects a list of numdicts instead of a mapping. 27 | - Rewrote nearly all previous components with simplified implementations. Variable component parameters are now passed through inputs to `call()`. 28 | - Chunk and rule databases are now full-fledged `Process` objects. 29 | - Completely rewrote numdicts. 30 | 31 | ### Removed 32 | 33 | - Native QNet support. 34 | 35 | 36 | ## [0.17.1] (2022-02-09) 37 | 38 | ### Added 39 | 40 | - `Domain.disjoint(*domains)` for checking if domains are mutually disjoint. 41 | - `Chunk.support(self, *domains)` to check if domains support self. 42 | - `Rule.support(self, *cdbs)` to check if cdbs support self. 43 | - `Chunks.enforce_support(self, *domains)` context manager to ensure that chunks 44 | are built out of domain features. 45 | - `Rules.enforce_support(self, *cdbs)` context manager to ensure that rules are 46 | built out of predefined chunks.
47 | - `ActionRules` now has a threshold parameter. 48 | - `GoalStay` communicates previous goal information in dedicated features. 49 | 50 | ### Changed 51 | 52 | - `ChunkExtractor` no longer ensures chunk form uniqueness. 53 | - Adjusted defaults for buffer and goal stay interfaces. 54 | 55 | ### Fixed 56 | 57 | - Goal chunk invocation on resumption in `GoalStay` would change the order in 58 | which goals are executed (away from FILO order). Removed for more consistent 59 | behavior. 60 | - A bug in `Pruned` caused it to exclude the expected construct type instead of 61 | including it. 62 | 63 | ## [v0.17.0] (2021-01-28) 64 | 65 | ### Added 66 | 67 | - `BLAs.prune()`, which removes BLA records below threshold. 68 | - Automatic symbolic address expansion. 69 | 70 | ### Changed 71 | 72 | - `CompositeProcess` renamed to `Composite` 73 | - `WrappedProcess` renamed to `Wrapped` 74 | - `FeatureDomain` and `SimpleDomain` replaced with `Domain` 75 | - `FeatureInterface` and `SimpleInterface` replaced with `Interface` 76 | - `ReinforcementDomain` renamed to `Reinforcements` 77 | - Simplified implementation and control interfaces for `ParamSet`, `Register`, `RegisterArray`, `GoalStay` 78 | - `Process` input types now `Mapping[Any, NumDict]`. This is a compromise. The ideal would be to set keys to be of type `SymbolicAddress`. However, `SymbolicAddress` is a union type and `Mapping` is invariant in its key type, therefore being explicit about the key type is prone to false alarms. 79 | 80 | ### Removed 81 | 82 | - `ControlledExtractor` 83 | 84 | ## [v0.16.0] (2021-01-17) 85 | 86 | ### Added 87 | 88 | - Optional `blas` argument to `Register` and `RegisterArray`, supporting BLA based deletion of stored entries. 89 | - New construct type `updater` for constructs solely dedicated to update processes (e.g., for databases, or due to sequencing requirements).
90 | - Automatic activation sequence generation at assembly time: `Structure` instances now step member constructs in roughly the order they were added to the structure. 91 | - `CompositeProcess` and `WrappedProcess` classes for simplifying compositional component definitions. 92 | - Construct input structure checking and automated input extraction (see `Process.check_inputs()` and `Process.extract_inputs()`). 93 | 94 | ### Changed 95 | 96 | - Constructs now connect directly to each other. 97 | - Replaced `SymbolTries` in inputs and outputs with flat mappings from symbolic addresses to numdicts. 98 | - Renamed `BLADrivenStrengths` to `BLAStrengths` 99 | - Combined `BLAInvocationTracker` and `BLADrivenDeleter` into `BLAMaintainer` 100 | - Renamed `ReniforcementMap` to `ReinforcementDomain`. 101 | - Replaced `Component`, `Emitter`, `Propagator` with `Process`. 102 | - Former `Updater` components recast as `Process` components serving `updater` constructs. 103 | - Realizers now structurally immutable (see removed for list of removed methods.). Behavior can still be modified by replacing emitters, but constructs may not be added or removed after initial assembly. 104 | - Streamlined realizer construction; use of with syntax encouraged exclusively: 105 | - Several `Realizer` methods now protected: `offer()`, `accepts()`, `finalize_assembly()` 106 | - Several `Structure` methods now protected: `add()`, `update_links()`, `offer()`, `finalize_assembly()` 107 | 108 | ### Removed 109 | 110 | - `SymbolTrie` 111 | - `RegisterArrayBLAUpdater`, added BLA support to `Register` and `RegisterArray` instead (see added) 112 | - `Updater` and all child abstractions. 113 | - `Cycle` and all child classes and submodule `components.cycles`. 114 | - Separate update cycle; stepping constructs now issues calls to a single stepping function only (`step()` for basic constructs). 
115 | - Removed realizer mutation methods: 116 | - Several `Realizer` methods: `drop()`, `clear_inputs()` 117 | - Several `Structure` methods: `__delitem__()`, `remove()`, `clear()`, `drop()`, `clear_inputs()`, `clear_links()`, `reweave()` 118 | 119 | ## [0.15.0] (2020-12-24) 120 | 121 | ### Added 122 | 123 | - Runtime checkable generic protocol `SymbolTrie` 124 | - New differentiable op `tanh` in `numdicts.ops`. 125 | - Functions `squeeze`, `with_default`, `val_max`, `val_min`, `all_val`, `any_val` in `numdicts.funcs` 126 | - Compact rule definitions; enabled by nested use of `Chunks.define` and 127 | `Rules.define` (see Changed). 128 | - Delete requests for Chunk, Rule, and BLA databases. 129 | - `GoalStay` propagator for maintaining and coordinating goals. 130 | - `BLADrivenStrengths` propagator for determining chunk (and other construct) strengths based on their BLAs. 131 | - `MSCycle`, experimental activation cycle for motivational subsystem 132 | 133 | ### Changed 134 | 135 | - Type `Inputs` to `SymbolTrie[NumDict]` to be more precise. 136 | - `Chunks.link()` renamed `Chunks.define()` and returns a `chunk`. 137 | - `Rules.link()` renamed `Rules.define()` and returns a `rule`. 138 | - For `Chunks` and `Rules`: `request_update` renamed to `request_add`, `resolve_update_requests` renamed `step` 139 | - `BLAs.update()` renamed `BLAs.step()` 140 | 141 | ### Fixed 142 | 143 | - `Structure` output type. 144 | - `PullFunc` output type. 145 | - `PullFuncs` output type. 146 | - Incorrect filtering behaviour for `MutableNumDict.keep()` and `MutableNumDict.drop()`. 147 | - `Pruned.preprocess()` 148 | 149 | ### Removed 150 | 151 | - `BLAs.request_reset()` 152 | 153 | ## [0.14.0] (2020-12-16) 154 | 155 | ### Added 156 | 157 | - Several additions to `components` subpackage, including: 158 | - `chunks_` submodule defining chunk databases and several related components. 159 | - `rules` submodule defining rule databases and several related components. 
160 | - `qnets` submodule defining `SimpleQNet` and `ReinforcementMap` for building simple Q-learning models. 161 | - `buffers` submodule defining buffer propagators `ParamSet`, `Register`, and `RegisterArray`. 162 | - `blas` defining BLA databases and some basic related updaters. 163 | - Various quality of life improvements including: 164 | - Use of `with` statements to automate adding constructs to containers. 165 | - `assets` attribute for `Structure` objects for storing datastructures shared by multiple components of the parent realizer (e.g., chunk database may be shared by updaters). 166 | - `utils.pprint` submodule which extends stdlib `pprint` to handle some `pyClarion` objects. 167 | - Examples `flow_control.py`, `q_learning.py`, `chunk_extraction.py`, `working_memory.py`. 168 | - New construct types and symbols for rules, feature/chunk pools, and preprocessing flows. 169 | - `numdicts` subpackage, providing dictionaries that support numerical operations and automatic differentiation. 170 | - `base.components` submodule defining basic abstractions for components: 171 | - `Component`, `Emitter`, `Updater` abstractions for specifying components and setting up links. 172 | - `Propagator` and `Cycle` classes for specifying activation propagation procedures for `Construct` and `Structure` instances. 173 | - `Assets`, a simple namespace object for holding structure assets. 174 | - `FeatureDomain`, `FeatureInterface`, `SimpleDomain`, `SimpleInterface` for structuring specification of feature domains and feature-driven control of components. 175 | 176 | ### Changed 177 | 178 | - Reorganized library. The basic design has persisted, but almost everything else has changed. Expect no backwards compatibility. Some notable changes include: 179 | - `ConstructSymbol` replaced with new `Symbol` class. 180 | - Old construct realizer classes simplified and replaced: 181 | - `Structure` class for containers 182 | - `Construct` class for basic constructs.
183 | - Realizers and propagators all modified to emit and operate on numdicts, as defined by `numdicts` submodule. 184 | - Individual chunk and feature nodes no longer explicitly represented, instead use of feature pools is encouraged. 185 | 186 | ### Fixed 187 | 188 | - Circular imports. 189 | 190 | ## [0.13.1] (2019-03-07) 191 | 192 | ### Added 193 | 194 | - `SimpleFilterJunction` for filtering flow/response inputs. 195 | - `ConstructType.NullConstruct` alias for empty flag value. 196 | 197 | ### Changed 198 | 199 | - Python version requirement dropped down to `>=3.6`. 200 | - Reworked `examples/free_association.py` to be more detailed and clearer. 201 | - Replaced all `is` checks on flags and construct symbols with `==` checks. 202 | 203 | ### Fixed 204 | 205 | - `ConstructRealizer` could not be initialized due to failing construct symbol check and botched `__slots__` configuration. 206 | - `BasicConstructRealizer.clear_activations()` may throw attribute errors when realizer has already been cleared or has no output. 207 | - `SimpleNodeJunction` would not recognize a construct symbol of the same form as its stored construct symbol. This caused nodes to fail to output activations. This was due to use of `is` in construct symbol checks (should have used `==`). 208 | 209 | ## [0.13.0] 210 | 211 | ### Added 212 | 213 | - `CategoricalSelector` object for categorical choices (essentially a Boltzmann selector but on log strengths). 214 | - `ConstructRealizer.clear_activations()` for resetting agent/construct output state. 215 | - multiindexing for `ContainerConstructRealizer` members. 216 | - functions `make_realizer`, `make_subsystem`, `make_agent` for initializing empty realizers from symbolic templates. 217 | - `ConstructRealizer.ready()` method signaling whether realizer initialization is complete. 218 | - `ConstructRealizer.missing()` and `ContainerConstructRealizer.missing_recursive()` for identifying missing realizer components.
219 | - `SubsystemRealizer` and `AgentRealizer` automatically connect member realizers 220 | upon insertion (using data present in members' construct symbols) and disconnect 221 | them upon deletion. 222 | - `BufferRealizer.propagate()` docs now contain a warning about potential unexpected/unwanted behavior. 223 | 224 | ### Changed 225 | 226 | - `Microfeature` renamed to `Feature` for brevity and to better reflect theory. 227 | - `ContainerConstructRealizer` properties now return lists instead of iterables for easier interactive inspection. Generators still accessible through iterator methods such as `realizer.iter_ctype()` and `realizer.items_ctype()`. 228 | - Improved `str` and `repr` outputs for construct symbols and realizers. 229 | - `SimpleBoltzmannSelector` renamed `BoltzmannSelector` 230 | - Additional `ConstructRealizer` subclass initialization arguments now optional. 231 | - `Behavior`, `Buffer` and `Response` factories assign `BehaviorID`, 232 | `BufferID`, and `ResponseID` tuples as construct identifiers. 233 | - `Appraisal` construct renamed `Response` to avoid association with appraisal theory. 234 | 235 | ### Fixed 236 | 237 | - Simplified `BasicConstructRealizer` initialization and data model. 238 | - Bug in `SubsystemRealizer` allowing connections between constructs that should 239 | not be linked. 240 | - Bug in `ConstantSource` allowing mutation of output activation packets. 241 | 242 | ### Removed 243 | 244 | - Dependency on `numpy`. 245 | -------------------------------------------------------------------------------- /contributing.md: -------------------------------------------------------------------------------- 1 | # Contributing to `pyClarion` 2 | 3 | Thank you for your interest in contributing to `pyClarion`! 4 | 5 | ## General Instructions 6 | 7 | Please use GitHub issues for all communications, including bug reports, feature requests, and general questions.
8 | 9 | Before working on a new feature, please open an issue to discuss it with project maintainers to make sure it is appropriate. 10 | 11 | When submitting a pull request, please make sure the following criteria are fulfilled. 12 | - Target issues are referenced in the description 13 | - Significant changes are reported in the changelog 14 | 15 | ## Coding Conventions 16 | 17 | - Follow PEP 8 coding style 18 | - Include type annotations 19 | - Use reStructuredText for markup in docstrings (keep it light) 20 | - When practical, explicitly express preconditions, postconditions, invariants etc. using exceptions and assertions 21 | - Avoid dependencies outside of Python stdlib 22 | - Focus testing on functional requirements 23 | -------------------------------------------------------------------------------- /pyClarion/__init__.py: -------------------------------------------------------------------------------- 1 | from .base import dimension, feature, chunk, rule, Module, Structure 2 | from .components import (Repeat, Receptors, Actions, CAM, Shift, 3 | BoltzmannSampler, ActionSampler, BottomUp, TopDown, AssociativeRules, 4 | ActionRules, BLATracker, Store, GoalStore, Flags, Slots, Gates, DimFilter, 5 | NAM, Drives) 6 | from .numdicts import NumDict 7 | from .utils import pprint, pformat, load, inspect 8 | 9 | __all__ = [ 10 | "dimension", "feature", "chunk", "rule", "Module", "Structure", 11 | "Repeat", "Receptors", "Actions", "CAM", "Shift", "BoltzmannSampler", 12 | "ActionSampler", "BottomUp", "TopDown", "AssociativeRules", "ActionRules", 13 | "BLATracker", "Store", "GoalStore", "Flags", "Slots", "Gates", "DimFilter", 14 | "NAM", "Drives", "NumDict", "pprint", "pformat", "load", "inspect" 15 | ] 16 | -------------------------------------------------------------------------------- /pyClarion/base/__init__.py: -------------------------------------------------------------------------------- 1 | """Framework for simulating Clarion constructs.""" 2 | 3 | 4 | from 
.symbols import dimension, feature, chunk, rule 5 | from .processes import Process 6 | from .constructs import Module, Structure 7 | 8 | 9 | __all__ = ["dimension", "feature", "chunk", "rule", "Process", "Module", 10 | "Structure"] 11 | -------------------------------------------------------------------------------- /pyClarion/base/constructs.py: -------------------------------------------------------------------------------- 1 | """Tools for networking simulated constructs.""" 2 | 3 | 4 | from __future__ import annotations 5 | from abc import abstractmethod 6 | from types import MappingProxyType 7 | from contextvars import ContextVar 8 | from typing import (Union, Tuple, Callable, Any, Sequence, Iterator, ClassVar, 9 | List, OrderedDict, Generic, TypeVar) 10 | from functools import partial 11 | import logging 12 | 13 | from .processes import Process 14 | from . import uris 15 | from .. import numdicts as nd 16 | 17 | 18 | __all__ = ["Module", "Structure"] 19 | 20 | 21 | BUILD_CTX: ContextVar = ContextVar("BUILD_CTX", default=uris.SEP) 22 | BUILD_LIST: ContextVar = ContextVar("BUILD_LIST") 23 | 24 | 25 | class Construct: 26 | """Base class for simulated constructs.""" 27 | 28 | _parent: str 29 | _name: str 30 | 31 | def __init__(self, name: str) -> None: 32 | """ 33 | Initialize a new Construct instance. 34 | 35 | :param name: Construct identifier. Must be a valid Python identifier. 
36 | """ 37 | 38 | if not name.isidentifier(): 39 | raise ValueError("Name must be a valid Python identifier.") 40 | self._log_init(name) 41 | self._parent = BUILD_CTX.get() 42 | self._name = name 43 | self._update_add_queue() 44 | 45 | def __repr__(self) -> str: 46 | return f"<{type(self).__qualname__}:{self.path}>" 47 | 48 | @property 49 | def name(self) -> str: 50 | """Construct identifier.""" 51 | return self._name 52 | 53 | @property 54 | def parent(self) -> str: 55 | """Symbolic path to parent structure.""" 56 | return self._parent 57 | 58 | @property 59 | def path(self) -> str: 60 | """Symbolic path to self.""" 61 | return uris.join(self.parent, self.name) 62 | 63 | @abstractmethod 64 | def step(self) -> None: 65 | raise NotImplementedError() 66 | 67 | def _update_add_queue(self) -> None: 68 | try: 69 | lst = BUILD_LIST.get() 70 | except LookupError: 71 | pass 72 | else: 73 | lst.append(self) 74 | 75 | def _log_init(self, name: str) -> None: 76 | tname = type(self).__name__.lower() 77 | context = BUILD_CTX.get().rstrip(uris.SEP) 78 | if not context: 79 | logging.debug(f"Initializing {tname} '{name}'.") 80 | else: 81 | logging.debug(f"Initializing {tname} '{name}' in '{context}'.") 82 | 83 | 84 | P = TypeVar("P", bound=Process) 85 | 86 | class Module(Construct, Generic[P]): 87 | """An elementary module.""" 88 | 89 | _constant: ClassVar[float] = 0.0 90 | 91 | _process: P 92 | _inputs: List[Tuple[str, Callable]] 93 | _i_uris: Tuple[str, ...] 94 | _fs_uris: Tuple[str, ...] 95 | 96 | def __init__( 97 | self, 98 | name: str, 99 | process: P, 100 | i_uris: Sequence[str] = (), 101 | fs_uris: Sequence[str] = () 102 | ) -> None: 103 | """ 104 | Initialize a new module. 105 | 106 | :param name: Construct identifier. Must be a valid Python identifier. 107 | :param process: Module process; issues updates and emits activations. 108 | :param i_uris: Paths to process inputs. 109 | :param f_uris: Paths to external feature spaces. 
110 | """ 111 | 112 | super().__init__(name=name) 113 | self._inputs = [] 114 | self._i_uris = tuple(i_uris) 115 | self._fs_uris = tuple(fs_uris) 116 | self.process = process 117 | 118 | @property 119 | def i_uris(self) -> Tuple[str, ...]: 120 | """Paths to process inputs.""" 121 | return self._i_uris 122 | 123 | @property 124 | def fs_uris(self) -> Tuple[str, ...]: 125 | """Paths to external feature spaces.""" 126 | return self._fs_uris 127 | 128 | @property 129 | def process(self) -> P: 130 | """Module process; issues updates and emits activations.""" 131 | return self._process 132 | 133 | @process.setter 134 | def process(self, process: P) -> None: 135 | self._process = process 136 | process.prefix = uris.split_head(self.path.lstrip(uris.SEP))[1] 137 | 138 | @property 139 | def inputs(self) -> List[Tuple[str, Callable]]: 140 | """Mapping from input constructs to pull funcs.""" 141 | return list(self._inputs) 142 | 143 | def step(self) -> None: 144 | try: 145 | self.output = self.process.call(*self._pull()) 146 | except Exception as e: 147 | raise RuntimeError(f"Error in process " 148 | f"{type(self.process).__name__} of module '{self.path}'") from e 149 | 150 | @property 151 | def output(self) -> Union[nd.NumDict, Tuple[nd.NumDict, ...]]: 152 | return self._output 153 | 154 | @output.setter 155 | def output(self, output: Union[nd.NumDict, Tuple[nd.NumDict, ...]]) -> None: 156 | if isinstance(output, tuple): 157 | if any(d.c != self._constant for d in output): 158 | raise RuntimeError("Unexpected strength constant.") 159 | for d in output: 160 | d.prot = True 161 | else: 162 | if output.c != self._constant: 163 | raise RuntimeError("Unexpected strength constant.") 164 | output.prot = True 165 | self._output = output 166 | 167 | def clear_output(self) -> None: 168 | """Set output to initial state.""" 169 | self.output = self.process.initial # default output 170 | 171 | def _view(self) -> Union[nd.NumDict, Tuple[nd.NumDict, ...]]: 172 | return self.output 173 | 174 
| def _link(self, path: str, callback: Callable) -> None: 175 | logging.debug(f"Connecting '{path}' to '{self.path}'") 176 | self._inputs.append((path, callback)) 177 | 178 | def _pull(self) -> Tuple[nd.NumDict, ...]: 179 | return tuple(ask() for _, ask in self._inputs) 180 | 181 | 182 | class Structure(Construct): 183 | """ 184 | A composite construct. 185 | 186 | Defines constructs that may contain other constructs. 187 | 188 | Complex Structure instances may be assembled using with statements. 189 | 190 | >>> with Structure("agent") as agent: 191 | ... Module("stimulus", Process()) 192 | ... with Structure("acs"): 193 | ... Module("qnet", Process(), i_uris=["../stimulus"]) 194 | 195 | During simulation, each constituent is updated in the order that 196 | it was defined. 197 | """ 198 | 199 | _dict: OrderedDict[str, Construct] 200 | _assets: Any 201 | 202 | def __init__(self, name: str) -> None: 203 | """ 204 | Initialize a new Structure instance. 205 | 206 | :param name: Construct identifier. Must be a valid Python identifier. 207 | """ 208 | 209 | super().__init__(name=name) 210 | self._dict = OrderedDict[str, Construct]() 211 | self._dict_proxy = MappingProxyType(self._dict) 212 | 213 | def __contains__(self, key: str) -> bool: 214 | try: 215 | self.__getitem__(key) 216 | except KeyError: 217 | return False 218 | return True 219 | 220 | def __iter__(self) -> Iterator[str]: 221 | for construct in self._dict: 222 | yield construct 223 | 224 | def __getitem__(self, key: str) -> Any: 225 | if not uris.ispath(key): 226 | raise ValueError(f"Invalid URI '{key}'.") 227 | head, tail = uris.split_head(key) 228 | if tail: 229 | return self[head][tail] 230 | else: 231 | return self._dict[head] 232 | 233 | def __enter__(self): 234 | logging.debug(f"Entering context '{self.path}'") 235 | if 0 < len(self._dict): # This could probably be relaxed.
236 | raise RuntimeError("Structure already populated.") 237 | new_parent = uris.join(BUILD_CTX.get(), f"{self.name}{uris.SEP}") 238 | self._build_ctx_token = BUILD_CTX.set(new_parent) 239 | self._build_list_token = BUILD_LIST.set([]) 240 | return self 241 | 242 | def __exit__(self, exc_type, exc_value, traceback): 243 | if exc_type is None: # Populate structure 244 | add_list = BUILD_LIST.get() 245 | self._add(*add_list) 246 | if self.parent == uris.SEP: 247 | self._weave() 248 | logging.debug(f"Exiting context '{self.path}'") 249 | BUILD_CTX.reset(self._build_ctx_token) 250 | BUILD_LIST.reset(self._build_list_token) 251 | 252 | def step(self) -> None: 253 | """Advance simulation by one time step.""" 254 | for construct in self._dict.values(): 255 | construct.step() 256 | 257 | def modules(self) -> Iterator[Module]: 258 | """Return an iterator over member modules.""" 259 | for construct in self._dict.values(): 260 | if isinstance(construct, Module): 261 | yield construct 262 | else: 263 | assert isinstance(construct, Structure) 264 | for element in construct.modules(): 265 | yield element 266 | 267 | def _add(self, *constructs: Construct) -> None: 268 | for construct in constructs: 269 | logging.debug(f"Adding '{construct.name}' to '{self.path}'") 270 | self._dict[construct.name] = construct 271 | 272 | def _weave(self) -> None: 273 | for module in self.modules(): 274 | self._set_links(module) 275 | self._set_fspaces(module) 276 | module.output = module.process.initial 277 | for module in self.modules(): 278 | self._validate_module(module) 279 | 280 | def _set_links(self, module: Module) -> None: 281 | for ref in module.i_uris: 282 | path, frag = self._parse_ref(module.path, ref) 283 | try: 284 | obj = self[path] 285 | except KeyError as e: 286 | raise KeyError(f"Module '{path}' not found.") from e 287 | else: 288 | if not isinstance(obj, Module): 289 | raise TypeError(f"Expected Module instance at '{path}', " 290 | f"got '{type(obj).__name__}' instead.") 291 |
else: 292 | view = partial(type(obj).output.fget, obj) # type: ignore 293 | if frag: 294 | module._link(ref, lambda v=view, i=int(frag): v()[i]) 295 | else: 296 | module._link(ref, view) 297 | 298 | def _set_fspaces(self, module: Module) -> None: 299 | getters = [] 300 | for ref in module.fs_uris: 301 | path, frag = self._parse_ref(module.path, ref) 302 | if frag not in Process.fspace_names: 303 | raise ValueError(f"Unexpected fs_uri fragment '{frag}'.") 304 | try: 305 | process = self[path].process 306 | except KeyError as e: 307 | raise RuntimeError(f"Construct '{path}' not found.") from e 308 | else: 309 | getters.append(partial(getattr, process, frag)) 310 | module.process.fspaces = tuple(getters) 311 | 312 | def _validate_module(self, module): 313 | try: 314 | module.process.validate() 315 | except Exception as e: 316 | raise RuntimeError(f"Validation failed for module '{module.path}'") from e 317 | for path, view in module.inputs: 318 | if isinstance(view(), tuple): 319 | raise RuntimeError(f"Missing output index in path from " 320 | f"multi-output Module '{path}' to '{module.path}'.") 321 | 322 | def _parse_ref(self, parent, ref): 323 | uri = uris.split(uris.join(parent, ref)) 324 | return uris.relativize(uri.path, self.path), uri.fragment 325 | -------------------------------------------------------------------------------- /pyClarion/base/processes.py: -------------------------------------------------------------------------------- 1 | """Basic abstractions for component processes.""" 2 | 3 | 4 | from __future__ import annotations 5 | from typing import ClassVar, Tuple, Callable, Any 6 | from functools import partial 7 | 8 | from .symbols import feature 9 | 10 | 11 | __all__ = ["Process"] 12 | 13 | 14 | class Process: 15 | """Base class for simulated processes.""" 16 | 17 | fspace_names: ClassVar = ("reprs", "cmds", "params", "flags") 18 | 19 | prefix: str = "" 20 | fspaces: Tuple[partial[Tuple[feature, ...]], ...]
= () 21 | 22 | initial: Any 23 | call: Callable 24 | 25 | def validate(self) -> None: 26 | """Validate process configuration.""" 27 | pass 28 | 29 | @property 30 | def reprs(self) -> Tuple[feature, ...]: 31 | """Feature symbols for state representations.""" 32 | raise NotImplementedError() 33 | 34 | @property 35 | def flags(self) -> Tuple[feature, ...]: 36 | """Feature symbols for process flags.""" 37 | raise NotImplementedError() 38 | 39 | @property 40 | def params(self) -> Tuple[feature, ...]: 41 | """Feature symbols for process parameters.""" 42 | raise NotImplementedError() 43 | 44 | @property 45 | def cmds(self) -> Tuple[feature, ...]: 46 | """Feature symbols for process commands.""" 47 | raise NotImplementedError() 48 | 49 | @property 50 | def nops(self) -> Tuple[feature, ...]: 51 | """Feature symbols for process nop commands.""" 52 | raise NotImplementedError() 53 | -------------------------------------------------------------------------------- /pyClarion/base/symbols.py: -------------------------------------------------------------------------------- 1 | """Basic Clarion datatypes.""" 2 | 3 | 4 | from __future__ import annotations 5 | from typing import Union, NamedTuple 6 | 7 | 8 | __all__ = ["dimension", "feature", "chunk", "rule"] 9 | 10 | 11 | class dimension(NamedTuple): 12 | """ 13 | A dimension symbol. 14 | 15 | :param id: Dimension URI. 16 | :param lag: Time-lag (in simulation steps). Defaults to 0. 17 | """ 18 | 19 | id: str 20 | lag: int = 0 21 | 22 | 23 | class feature(NamedTuple): 24 | """ 25 | A feature symbol. 26 | 27 | :param d: Feature dimension URI. 28 | :param v: Feature value. If str, should be a URI. 29 | :param l: Feature dimension time-lag (in simulation steps). Defaults to 0. 
30 | """ 31 | 32 | d: str 33 | v: Union[str, int, None] = None 34 | l: int = 0 35 | 36 | @property 37 | def dim(self) -> dimension: 38 | """Feature dimension.""" 39 | return dimension(self.d, self.l) 40 | 41 | 42 | class chunk(NamedTuple): 43 | """ 44 | A chunk symbol. 45 | 46 | :param id: Chunk URI. 47 | """ 48 | 49 | id: str 50 | 51 | 52 | class rule(NamedTuple): 53 | """ 54 | A rule symbol. 55 | 56 | :param id: Rule URI. 57 | """ 58 | 59 | id: str 60 | -------------------------------------------------------------------------------- /pyClarion/base/uris.py: -------------------------------------------------------------------------------- 1 | """URI manipulation functions.""" 2 | 3 | from __future__ import annotations 4 | import re 5 | from urllib import parse 6 | from typing import Tuple, List, Dict, TypeVar, Any, overload 7 | 8 | 9 | ID, SEP, FSEP, SUP = re.compile(r"[\w-]+"), "/", "#", ".." # type: ignore 10 | 11 | 12 | join = parse.urljoin 13 | split = parse.urlsplit 14 | 15 | 16 | def ispath(s: str) -> bool: 17 | """Return True if s is a path.""" 18 | nodes = remove_prefix(s, SEP).split(SEP) 19 | nodes = nodes[slice(s.startswith(SEP), len(nodes) - s.endswith(SEP))] 20 | return all(ID.fullmatch(_s) is not None or _s == SUP for _s in nodes) 21 | 22 | 23 | def split_head(path: str) -> Tuple[str, str]: 24 | """Separate the first identifier in path from the rest.""" 25 | head, _, tail = path.partition(SEP) 26 | return head, tail 27 | 28 | 29 | def commonprefix(path1: str, path2: str) -> str: 30 | "Return the prefix common to path1 and path2." 
31 | if not ispath(path1): 32 | raise ValueError(f"Invalid realizer path '{path1}'.") 33 | if not ispath(path2): 34 | raise ValueError(f"Invalid realizer path '{path2}'.") 35 | parts1 = path1.split(SEP) 36 | parts2 = path2.split(SEP) 37 | common_parts = [] 38 | for part1, part2 in zip(parts1, parts2): 39 | if part1 != part2: 40 | break 41 | common_parts.append(part1) 42 | return SEP.join(common_parts) 43 | 44 | 45 | def remove_prefix(path: str, prefix: str) -> str: 46 | """Remove prefix from path, if it is present.""" 47 | if path.startswith(prefix): 48 | return path[len(prefix):] 49 | else: 50 | return path 51 | 52 | 53 | def relativize(target: str, source: str) -> str: 54 | """Return path from source to target, where target is a child of source.""" 55 | common = commonprefix(target, source) 56 | if common == source: 57 | return remove_prefix(target, common).lstrip(SEP) 58 | raise ValueError(f"'{target}' is not subordinate to '{source}'.") 59 | 60 | 61 | T, T2 = TypeVar("T"), TypeVar("T2", str, list, tuple, dict) 62 | 63 | @overload 64 | def prefix(f: str, p: str) -> str: 65 | ... 66 | 67 | @overload 68 | def prefix(f: Dict[str, T], p: str) -> Dict[str, T]: 69 | ... 70 | 71 | @overload 72 | def prefix(f: List[str], p: str) -> List[str]: 73 | ... 74 | 75 | @overload 76 | def prefix(f: Tuple[str, ...], p: str) -> Tuple[str, ...]: 77 | ... 
78 | 
79 | def prefix(f: T2, p: str) -> T2:
80 |     """Prefix fragment string or collection f with path p."""
81 |     if isinstance(f, str):
82 |         return FSEP.join([p, f]).strip(FSEP)
83 |     elif isinstance(f, dict):
84 |         return {FSEP.join([p, k]).strip(FSEP): v for k, v in f.items()}
85 |     elif isinstance(f, list):
86 |         return list(FSEP.join([p, x]).strip(FSEP) for x in f)
87 |     elif isinstance(f, tuple):
88 |         return tuple(FSEP.join([p, x]).strip(FSEP) for x in f)
89 |     else:
90 |         raise TypeError(f"Unexpected type for arg 'f': {type(f)}")
91 | 
--------------------------------------------------------------------------------
/pyClarion/components/__init__.py:
--------------------------------------------------------------------------------
1 | """Provides pre-made components for assembling Clarion agents."""
2 | 
3 | 
4 | from .basic import (Repeat, Receptors, Actions, CAM, Shift, BoltzmannSampler,
5 |     ActionSampler, BottomUp, TopDown, AssociativeRules, ActionRules)
6 | from .stores import BLATracker, Store, GoalStore
7 | from .wm import Flags, Slots
8 | from .filters import Gates, DimFilter
9 | from .networks import NAM
10 | from .ms import Drives
11 | 
12 | 
13 | __all__ = ["Repeat", "Receptors", "Actions", "CAM", "Shift", "BoltzmannSampler",
14 |     "ActionSampler", "BottomUp", "TopDown", "AssociativeRules", "ActionRules",
15 |     "BLATracker", "Store", "GoalStore", "Flags", "Slots", "Gates", "DimFilter",
16 |     "NAM", "Drives"]
--------------------------------------------------------------------------------
/pyClarion/components/basic.py:
--------------------------------------------------------------------------------
1 | """Provides some basic propagators for building pyClarion agents."""
2 | 
3 | 
4 | __all__ = ["Repeat", "Actions", "CAM", "Shift", "BoltzmannSampler",
5 |     "ActionSampler", "BottomUp", "TopDown", "AssociativeRules", "ActionRules"]
6 | 
7 | 
8 | from ..base import dimension, feature, chunk, rule
9 | from .. import numdicts as nd
10 | from ..
import dev as cld 11 | 12 | import re 13 | from typing import (OrderedDict, Tuple, Dict, List, TypeVar, Union, Sequence, 14 | Generator) 15 | from functools import partial 16 | 17 | 18 | T = TypeVar("T") 19 | 20 | 21 | class Repeat(cld.Process): 22 | """Copies signal from a single source.""" 23 | 24 | initial = nd.NumDict() 25 | 26 | def call(self, d: nd.NumDict[T]) -> nd.NumDict[T]: 27 | return d 28 | 29 | 30 | class Receptors(cld.Process): 31 | """Represents a perceptual channel.""" 32 | 33 | initial = nd.NumDict() 34 | 35 | def __init__( 36 | self, 37 | reprs: Union[List[str], Dict[str, List[str]], Dict[str, List[int]]] 38 | ) -> None: 39 | self._reprs = reprs 40 | self._data: nd.NumDict[feature] = nd.NumDict() 41 | 42 | def call(self) -> nd.NumDict[feature]: 43 | return self._data 44 | 45 | def stimulate( 46 | self, data: Union[List[Union[str, Tuple[str, Union[str, int]]]], 47 | Dict[Union[str, Tuple[str, Union[str, int]]], float]] 48 | ) -> None: 49 | """ 50 | Set perceptual stimulus levels for defined perceptual features. 51 | 52 | :param data: Stimulus data. If list, each entry is given an activation 53 | value of 1.0. If dict, each key is set to an activation level equal 54 | to its value. 
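        A plain-`dict` sketch of these semantics (feature names hypothetical;
        the real method builds a NumDict keyed by feature symbols):

        ```python
        # List specs activate every named feature at 1.0; dict specs give
        # explicit activation levels per feature.
        from typing import Dict, List, Union

        def as_strengths(data: Union[List[str], Dict[str, float]]) -> Dict[str, float]:
            if isinstance(data, list):
                return {f: 1.0 for f in data}
            if isinstance(data, dict):
                return dict(data)
            raise ValueError(f"Stimulus spec must be list or dict, got {type(data)}.")

        assert as_strengths(["red", "round"]) == {"red": 1.0, "round": 1.0}
        assert as_strengths({"red": 0.5}) == {"red": 0.5}
        ```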
55 | """ 56 | if isinstance(data, list): 57 | self._data = nd.NumDict({f: 1.0 for f in self._fseq(data)}) 58 | elif isinstance(data, dict): 59 | fspecs, strengths = zip(*data.items()) 60 | fseq = self._fseq(fspecs) # type: ignore 61 | self._data = nd.NumDict({f: v for f, v in zip(fseq, strengths)}) 62 | else: 63 | raise ValueError("Stimulus spec must be list or dict, " 64 | f"got {type(data)}.") 65 | 66 | def _fseq( 67 | self, data: Sequence[Union[str, Tuple[str, Union[str, int]]]] 68 | ) -> Generator[feature, None, None]: 69 | for x in data: 70 | if isinstance(x, tuple): 71 | f = feature(cld.prefix(x[0], self.prefix), x[1]) 72 | else: 73 | f = feature(cld.prefix(x, self.prefix)) 74 | if f in self.reprs: 75 | yield f 76 | else: 77 | raise ValueError(f"Unexpected stimulus feature spec: '{x}'") 78 | 79 | @property 80 | def reprs(self) -> Tuple[feature]: 81 | if isinstance(self._reprs, list): 82 | return tuple(feature(cld.prefix(x, self.prefix)) 83 | for x in self._reprs) 84 | else: 85 | assert isinstance(self._reprs, dict) 86 | return tuple( 87 | feature(cld.prefix(d, self.prefix), v) if len(l) 88 | else feature(cld.prefix(d, self.prefix)) 89 | for d, l in self._reprs.items() for v in l 90 | ) 91 | 92 | 93 | class Actions(cld.Process): 94 | """Represents external actions.""" 95 | 96 | initial = nd.NumDict() 97 | _cmd_pre = "cmd" 98 | 99 | def __init__( 100 | self, 101 | actions: Union[Dict[str, List[str]], Dict[str, List[int]]] 102 | ) -> None: 103 | self.actions = OrderedDict(actions) 104 | 105 | def call(self, c: nd.NumDict[feature]) -> nd.NumDict[feature]: 106 | return (c 107 | .drop(sf=lambda k: k.v is None) 108 | .transform_keys(kf=self._cmd2repr)) 109 | 110 | def parse_actions(self, a: nd.NumDict) -> Dict[str, str]: 111 | """Return a dictionary of selected action values for each dimension""" 112 | 113 | result = {} 114 | for d, vs in self.actions.items(): 115 | for v in vs: 116 | s = a[feature(cld.prefix(d, self.prefix), v)] 117 | if s == 1 and d not in result 
and v is not None: 118 | result[d] = v 119 | elif s == 1 and d in result and v is not None: 120 | raise ValueError(f"Multiple values for action dim '{d}'") 121 | else: 122 | continue 123 | return result 124 | 125 | def _cmd2repr(self, cmd: feature): 126 | _d, v, l = cmd 127 | if l != 0: raise ValueError("Lagged cmd not allowed.") 128 | d = re.sub(f"{cld.FSEP}{self._cmd_pre}-", f"{cld.FSEP}", _d) 129 | return feature(d, v) 130 | 131 | def _action_items(self): 132 | if len(self.actions) > 0: 133 | return list(zip(*self.actions.items())) 134 | else: 135 | return [], [] 136 | 137 | @property 138 | def reprs(self) -> Tuple[feature, ...]: 139 | ds, vls = list(zip(*self.actions.items())) 140 | ds = cld.prefix(ds, self.prefix) # type: ignore 141 | return tuple(feature(d, v) for d, vs in zip(ds, vls) for v in vs) 142 | 143 | @property 144 | def cmds(self) -> Tuple[feature, ...]: 145 | ds, vls = self._action_items() 146 | ds = ["-".join(filter(None, [self._cmd_pre, d])) # type: ignore 147 | for d in ds] 148 | ds = cld.prefix(ds, self.prefix) # type: ignore 149 | vls = [[None] + l for l in vls] # type: ignore 150 | return tuple(feature(d, v) for d, vs in zip(ds, vls) for v in vs) 151 | 152 | @property 153 | def nops(self) -> Tuple[feature, ...]: 154 | ds = ["-".join(filter(None, [self._cmd_pre, d])) # type: ignore 155 | for d in self.actions.keys()] 156 | ds = cld.prefix(ds, self.prefix) # type: ignore 157 | return tuple(feature(d, None) for d in ds) 158 | 159 | 160 | class CAM(cld.Process): 161 | """Computes the combined-add-max activation for each node in a pool.""" 162 | 163 | initial = nd.NumDict() 164 | 165 | def call(self, *inputs: nd.NumDict[T]) -> nd.NumDict[T]: 166 | return nd.NumDict.eltwise_cam(*inputs) 167 | 168 | 169 | class Shift(cld.Process): 170 | """Shifts feature strengths by one time step.""" 171 | 172 | initial = nd.NumDict() 173 | 174 | def __init__( 175 | self, lead: bool = False, max_lag: int = 1, min_lag: int = 0 176 | ) -> None: 177 | """ 178 | 
Initialize a new `Shift` propagator.
179 | 
180 |         :param lead: Whether to lead (or lag) features.
181 |         :param max_lag: Drops features with lags greater than this value.
182 |         :param min_lag: Drops features with lags less than this value.
183 |         """
184 | 
185 |         self.lag = partial(cld.lag, val=1 if not lead else -1)
186 |         self.min_lag = min_lag
187 |         self.max_lag = max_lag
188 | 
189 |     def call(self, d: nd.NumDict[feature]) -> nd.NumDict[feature]:
190 |         return d.transform_keys(kf=self.lag).keep(sf=self._filter)
191 | 
192 |     def _filter(self, f) -> bool:
193 |         return type(f) == feature and self.min_lag <= f.dim.lag <= self.max_lag
194 | 
195 | 
196 | class BoltzmannSampler(cld.Process):
197 |     """Samples a node according to a Boltzmann distribution."""
198 | 
199 |     initial = (nd.NumDict(), nd.NumDict())
200 | 
201 |     def call(
202 |         self, p: nd.NumDict[feature], d: nd.NumDict[T], *_: nd.NumDict
203 |     ) -> Tuple[nd.NumDict[T], nd.NumDict[T]]:
204 |         """
205 |         Select chunks through an activation-based competition.
206 | 
207 |         Selection probabilities vary with chunk strengths according to a
208 |         Boltzmann distribution.
209 | 
210 |         :param p: Incoming parameters (threshold & temperature).
211 |         :param d: Incoming feature activations.
212 |         """
213 | 
214 |         # assuming d.c == 0
215 |         _d = d.keep_greater(ref=p.isolate(key=self.params[0]))
216 |         if len(_d):
217 |             dist = _d.boltzmann(p.isolate(key=self.params[1]))
218 |             return dist.sample().squeeze(), dist
219 |         else:
220 |             return self.initial
221 | 
222 |     @property
223 |     def params(self) -> Tuple[feature, ...]:
224 |         return tuple(feature(dim)
225 |                      for dim in cld.prefix(("th", "temp"), self.prefix))
226 | 
227 | 
228 | class ActionSampler(cld.Process):
229 |     """
230 |     Selects actions and relays action parameters.
231 | 
232 |     Expects to be linked to an external 'cmds' fspace.
233 | 
234 |     Actions are selected for each command dimension according to a Boltzmann
235 |     distribution.
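    A self-contained sketch of this selection scheme, with plain dicts and
    hypothetical command names standing in for NumDicts and feature symbols:

    ```python
    # Boltzmann (softmax-with-temperature) selection over one command
    # dimension; one value is sampled per dimension.
    import math
    import random

    def boltzmann(strengths, temp):
        # normalize exponentiated strengths into a probability distribution
        zs = {k: math.exp(v / temp) for k, v in strengths.items()}
        total = sum(zs.values())
        return {k: z / total for k, z in zs.items()}

    def sample(dist, rng=random.random):
        # draw one key according to its probability
        r, acc = rng(), 0.0
        for k, p in dist.items():
            acc += p
            if r <= acc:
                return k
        return k  # guard against floating-point shortfall

    dist = boltzmann({"cmd-left": 1.0, "cmd-right": 0.0}, temp=0.5)
    assert abs(sum(dist.values()) - 1.0) < 1e-9
    assert dist["cmd-left"] > dist["cmd-right"]
    ```

    Lower temperatures make selection greedier; higher temperatures approach
    uniform sampling.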
236 | """ 237 | 238 | initial = (nd.NumDict(), nd.NumDict()) 239 | 240 | def call( 241 | self, p: nd.NumDict[feature], d: nd.NumDict[feature] 242 | ) -> Tuple[nd.NumDict[feature], nd.NumDict[feature]]: 243 | """ 244 | Select actions for each client command dimension. 245 | 246 | :param p: Selection parameters (temperature). See self.params for 247 | expected parameter keys. 248 | :param d: Action feature strengths. 249 | 250 | :returns: tuple (actions, distributions) 251 | where 252 | actions sends selected actions to 1 and everything else to 0, and 253 | distributions contains the sampling probabilities for each action 254 | """ 255 | 256 | dims = cld.group_by_dims(self.fspaces[0]()) 257 | temp = p.isolate(key=self.params[0]) 258 | _dists, _actions = [], [] 259 | for fs in dims.values(): 260 | dist = d.with_keys(ks=fs).boltzmann(temp) 261 | _dists.append(dist) 262 | _actions.append(dist.sample().squeeze()) 263 | dists = nd.NumDict().merge(*_dists) 264 | actions = nd.NumDict().merge(*_actions) 265 | 266 | return actions, dists 267 | 268 | def validate(self) -> None: 269 | nv = len(self.fspaces) 270 | if not nv: 271 | raise RuntimeError(f"Vocabs must be of length at least 1.") 272 | vt0 = self.fspaces[0].args[1] 273 | if vt0 != "cmds": 274 | raise RuntimeError(f"Expected vocab type 'cmds', got '{vt0}'.") 275 | 276 | @property 277 | def params(self) -> Tuple[feature, ...]: 278 | return (feature(cld.prefix("temp", self.prefix)),) 279 | 280 | 281 | class BottomUp(cld.Process): 282 | """Propagates bottom-up activations.""" 283 | 284 | initial = nd.NumDict() 285 | 286 | def call( 287 | self, 288 | fs: nd.NumDict[Tuple[chunk, feature]], 289 | ws: nd.NumDict[Tuple[chunk, dimension]], 290 | wn: nd.NumDict[chunk], 291 | d: nd.NumDict[feature] 292 | ) -> nd.NumDict[chunk]: 293 | """ 294 | Propagate bottom-up activations. 295 | 296 | :param fs: Chunk-feature associations (binary). 297 | :param ws: Chunk-dimension associations (i.e., top-down weights). 
298 | :param wn: Normalization terms for each chunk. For each chunk, expected 299 | to be equal to g(sum(|w|)), where g is some superlinear function. 300 | :param d: Feature strengths in the bottom level. 301 | """ 302 | 303 | return (fs 304 | .put(d, kf=cld.second, strict=True) 305 | .cam_by(kf=cld.cf2cd) 306 | .mul_from(ws, kf=cld.eye) 307 | .sum_by(kf=cld.first) 308 | .squeeze() 309 | .div_from(wn, kf=cld.eye, strict=True)) 310 | 311 | 312 | class TopDown(cld.Process): 313 | """Propagates top-down activations.""" 314 | 315 | initial = nd.NumDict() 316 | 317 | def call( 318 | self, 319 | fs: nd.NumDict[Tuple[chunk, feature]], 320 | ws: nd.NumDict[Tuple[chunk, dimension]], 321 | d: nd.NumDict[chunk] 322 | ) -> nd.NumDict[feature]: 323 | """ 324 | Propagate top-down activations. 325 | 326 | :param fs: Chunk-feature associations (binary). 327 | :param ws: Chunk-dimension associations (i.e., top-down weights). 328 | :param d: Chunk strengths in the top level. 329 | """ 330 | 331 | return (fs 332 | .mul_from(d, kf=cld.first, strict=True) 333 | .mul_from(ws, kf=cld.cf2cd, strict=True) 334 | .cam_by(kf=cld.second) 335 | .squeeze()) 336 | 337 | 338 | class AssociativeRules(cld.Process): 339 | """Propagates activations according to associative rules.""" 340 | 341 | initial = (nd.NumDict(), nd.NumDict()) 342 | 343 | def call( 344 | self, 345 | cr: nd.NumDict[Tuple[chunk, rule]], 346 | rc: nd.NumDict[Tuple[rule, chunk]], 347 | d: nd.NumDict[chunk] 348 | ) -> Tuple[nd.NumDict[chunk], nd.NumDict[rule]]: 349 | """ 350 | Propagate activations through associative rules. 351 | 352 | :param cr: Chunk-to-rule associations (i.e., condition weights). 353 | :param rc: Rule-to-chunk associations (i.e., conclusion weights; 354 | typically binary). 355 | :param d: Condition chunk strengths. 
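        A plain-`dict` sketch of one reading of this propagation (chunk and
        rule names hypothetical): condition strengths are normalized by each
        condition chunk's total outgoing weight, summed into rule strengths,
        then relayed to conclusion chunks by max:

        ```python
        cr = {("c1", "r1"): 1.0, ("c1", "r2"): 1.0}   # condition weights
        rc = {("r1", "c2"): 1.0, ("r2", "c3"): 1.0}   # conclusion weights
        d = {"c1": 0.8}                                # condition strengths

        norm = {}                                      # total |w| per condition chunk
        for (c, r), w in cr.items():
            norm[c] = norm.get(c, 0.0) + abs(w)

        s_r = {}                                       # rule strengths
        for (c, r), w in cr.items():
            s_r[r] = s_r.get(r, 0.0) + w * d[c] / norm[c]

        s_c = {}                                       # conclusion chunk strengths
        for (r, c), w in rc.items():
            s_c[c] = max(s_c.get(c, 0.0), w * s_r[r])

        assert abs(s_r["r1"] - 0.4) < 1e-9
        assert abs(s_c["c2"] - 0.4) < 1e-9
        ```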
356 |         """
357 | 
358 |         norm = (cr
359 |             .put(cr.abs().sum_by(kf=cld.first), kf=cld.first)
360 |             .set_c(1))
361 |         s_r = (cr
362 |             .mul_from(d, kf=cld.first, strict=True)
363 |             .div(norm)
364 |             .sum_by(kf=cld.second)
365 |             .squeeze())
366 |         s_c = (rc
367 |             .mul_from(s_r, kf=cld.first, strict=True)
368 |             .max_by(kf=cld.second)
369 |             .squeeze())
370 | 
371 |         return s_c, s_r
372 | 
373 | 
374 | class ActionRules(cld.Process):
375 |     """Selects action chunks according to action rules."""
376 | 
377 |     initial = (nd.NumDict(), nd.NumDict(), nd.NumDict())
378 | 
379 |     def call(
380 |         self,
381 |         p: nd.NumDict[feature],
382 |         cr: nd.NumDict[Tuple[chunk, rule]],
383 |         rc: nd.NumDict[Tuple[rule, chunk]],
384 |         d: nd.NumDict[chunk]
385 |     ) -> Tuple[nd.NumDict[chunk], nd.NumDict[rule], nd.NumDict[rule]]:
386 |         """
387 |         Select action chunks through action rules.
388 | 
389 |         :param p: Selection parameters (threshold and temperature). See
390 |             self.params for expected parameter keys.
391 |         :param cr: Chunk-to-rule associations (i.e., condition weights).
392 |         :param rc: Rule-to-chunk associations (i.e., conclusion weights;
393 |             typically binary).
394 |         :param d: Condition chunk strengths.
395 | """ 396 | 397 | # assuming d.c == 0 398 | _d = d.keep_greater(ref=p.isolate(key=self.params[0])) 399 | if len(_d) and len(cr): 400 | dist = (cr 401 | .mul_from(_d, kf=cld.first) 402 | .boltzmann(p.isolate(key=self.params[1])) 403 | .transform_keys(kf=cld.second) 404 | .squeeze()) 405 | r_sampled = rc.put(dist.sample().squeeze(), kf=cld.first) 406 | r_data = r_sampled.transform_keys(kf=cld.first).squeeze() 407 | action = r_sampled.mul(rc).transform_keys(kf=cld.second).squeeze() 408 | return action, r_data, dist 409 | else: 410 | return self.initial 411 | 412 | @property 413 | def params(self) -> Tuple[feature, ...]: 414 | return tuple(feature(dim) 415 | for dim in cld.prefix(("th", "temp"), self.prefix)) 416 | -------------------------------------------------------------------------------- /pyClarion/components/filters.py: -------------------------------------------------------------------------------- 1 | from __future__ import annotations 2 | from typing import Tuple, TypeVar, Sequence 3 | 4 | from ..base import feature 5 | from .wm import Flags 6 | from .. import dev as cld 7 | from .. 
import numdicts as nd 8 | 9 | 10 | __all__ = ["Gates", "DimFilter"] 11 | 12 | 13 | T = TypeVar("T") 14 | 15 | 16 | class Gates(cld.Process): 17 | """Selectively gates inputs.""" 18 | 19 | def __init__(self, fs: Sequence[str]) -> None: 20 | self._flags = Flags(fs=fs, vs=(0, 1)) 21 | 22 | def call( 23 | self, c: nd.NumDict[feature], *inputs: nd.NumDict 24 | ) -> Tuple[nd.NumDict, ...]: 25 | """Gate inputs, then update gate settings according to c.""" 26 | 27 | gs = [self.store.isolate(key=k) for k in self.flags] 28 | self.update(c) 29 | return (self.store, *(x.mul(g) for g, x in zip(gs, inputs))) 30 | 31 | def update(self, c): 32 | self._flags.update(c) 33 | 34 | @property 35 | def prefix(self) -> str: 36 | return self._flags.prefix 37 | 38 | @prefix.setter 39 | def prefix(self, val: str) -> None: 40 | self._flags.prefix = val 41 | 42 | @property 43 | def fs(self) -> Tuple[str, ...]: 44 | return self._flags.fs 45 | 46 | @fs.setter 47 | def fs(self, val: Sequence[str]) -> None: 48 | self._flags.fs = tuple(val) 49 | 50 | @property 51 | def store(self) -> nd.NumDict[feature]: 52 | return self._flags.store 53 | 54 | @property 55 | def initial(self): 56 | return tuple(nd.NumDict() for _ in range(len(self.fs) + 1)) 57 | 58 | @property 59 | def flags(self) -> Tuple[feature, ...]: 60 | return self._flags.flags 61 | 62 | @property 63 | def cmds(self) -> Tuple[feature, ...]: 64 | return self._flags.cmds 65 | 66 | @property 67 | def nops(self) -> Tuple[feature, ...]: 68 | return self._flags.nops 69 | 70 | 71 | class DimFilter(cld.Process): 72 | """Selectively filters dimensions.""" 73 | 74 | initial = (nd.NumDict(), nd.NumDict()) 75 | 76 | def __init__(self) -> None: 77 | self._flags = Flags(fs=(), vs=(0, 1)) 78 | 79 | def call( 80 | self, c: nd.NumDict[feature], d: nd.NumDict[feature] 81 | ) -> Tuple[nd.NumDict[feature], nd.NumDict[feature]]: 82 | 83 | store = self.store 84 | self.update(c) 85 | return store, d.mul_from(store, kf=self._feature2flag) 86 | 87 | def 
_feature2flag(self, f): 88 | return feature(cld.prefix(f.d.replace(cld.FSEP, "."), self.prefix)) 89 | 90 | def update(self, c): 91 | self._flags.fs = tuple(f.d.replace(cld.FSEP, ".") 92 | for fspace in self.fspaces for f in fspace()) 93 | self._flags.update(c) 94 | 95 | def validate(self): 96 | self.update(nd.NumDict()) 97 | self._flags.validate() 98 | 99 | @property 100 | def prefix(self) -> str: 101 | return self._flags.prefix 102 | 103 | @prefix.setter 104 | def prefix(self, val: str) -> None: 105 | self._flags.prefix = val 106 | 107 | @property 108 | def fs(self) -> Tuple[str, ...]: 109 | return self._flags.fs 110 | 111 | @fs.setter 112 | def fs(self, val: Sequence[str]) -> None: 113 | self._flags.fs = tuple(val) 114 | 115 | @property 116 | def store(self) -> nd.NumDict[feature]: 117 | return self._flags.store 118 | 119 | @property 120 | def flags(self) -> Tuple[feature, ...]: 121 | return self._flags.flags 122 | 123 | @property 124 | def cmds(self) -> Tuple[feature, ...]: 125 | return self._flags.cmds 126 | 127 | @property 128 | def nops(self) -> Tuple[feature, ...]: 129 | return self._flags.nops 130 | -------------------------------------------------------------------------------- /pyClarion/components/ms.py: -------------------------------------------------------------------------------- 1 | from typing import Sequence, Tuple 2 | 3 | from .. import dev as cld 4 | from ..base import feature 5 | from ..numdicts import NumDict 6 | 7 | 8 | class Drives(cld.Process): 9 | """ 10 | Maintains drive strengths. 11 | 12 | Houses deficits and baselines. 
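    An arithmetic sketch of the strength computation in `call` (the drive
    name is hypothetical): strength = baseline + deficit * stimulus * gain,
    computed per drive dimension:

    ```python
    baselines = {"hunger": 0.1}
    deficits = {"hunger": 0.5}
    stimuli = {"hunger": 0.8}
    gains = {"hunger": 1.0}

    strengths = {dr: baselines[dr] + deficits[dr] * stimuli[dr] * gains[dr]
                 for dr in baselines}
    assert abs(strengths["hunger"] - 0.5) < 1e-9
    ```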
13 | """ 14 | 15 | initial = NumDict() 16 | 17 | def __init__(self, spec: Sequence[str]) -> None: 18 | for f in spec: 19 | if not cld.ispath(f): 20 | raise ValueError(f"Invalid drive name '{f}'") 21 | self.dspec = spec 22 | self.deficits: NumDict[feature] = NumDict() 23 | self.baselines: NumDict[feature] = NumDict() 24 | 25 | def call( 26 | self, stimuli: NumDict[feature], gains: NumDict[feature] 27 | ) -> NumDict[feature]: 28 | return self.baselines.add( 29 | (self.deficits 30 | .mul_from(stimuli, kf=cld.eye) 31 | .mul_from(gains, kf=cld.eye))) 32 | 33 | @property 34 | def reprs(self) -> Tuple[feature, ...]: 35 | return tuple(feature(cld.prefix(d, self.prefix)) for d in self.dspec) 36 | -------------------------------------------------------------------------------- /pyClarion/components/networks.py: -------------------------------------------------------------------------------- 1 | from dataclasses import dataclass 2 | from typing import List, Tuple, Callable 3 | 4 | from .. import dev as cld 5 | from ..base import feature 6 | from ..numdicts import NumDict 7 | 8 | 9 | class NAM(cld.Process): 10 | """ 11 | A neural associative memory. 12 | 13 | Implements a single fully connected layer. 14 | 15 | For validation, each weight and bias key must belong to a client fspace. 16 | 17 | May be used as a static network or as a base for various associative 18 | learning models such as Hopfield nets. 
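    A plain-`dict` sketch of the layer computed in `call` (all keys
    hypothetical): y[j] = f(sum_i w[(i, j)] * x[i] + b[j]):

    ```python
    w = {("a", "c"): 0.5, ("b", "c"): 1.0}   # weight from input key to output key
    b = {"c": 0.25}                           # bias per output key
    x = {"a": 2.0, "b": 1.0}                  # input activations
    f = lambda v: v                           # identity transfer function

    y = {}
    for (i, j), wij in w.items():             # weighted sum into each output key
        y[j] = y.get(j, 0.0) + wij * x[i]
    for j, bj in b.items():                   # add bias, apply transfer function
        y[j] = f(y.get(j, 0.0) + bj)
    assert abs(y["c"] - 2.25) < 1e-9
    ```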
19 | """ 20 | 21 | initial = NumDict() 22 | 23 | w: NumDict[Tuple[feature, feature]] 24 | b: NumDict[feature] 25 | 26 | def __init__( 27 | self, 28 | f: Callable[[NumDict[feature]], NumDict[feature]] = cld.eye 29 | ) -> None: 30 | self.w = NumDict() 31 | self.b = NumDict() 32 | self.f = f 33 | 34 | def validate(self): 35 | if self.fspaces: 36 | fspace = set(f for fspace in self.fspaces for f in fspace()) 37 | if (any(k1 not in fspace or k2 not in fspace for k1, k2 in self.w) 38 | or any(k not in fspace for k in self.b)): 39 | raise ValueError("Parameter key not a member of set fspaces.") 40 | 41 | def call(self, x: NumDict[feature]) -> NumDict[feature]: 42 | return (self.w 43 | .mul_from(x, kf=cld.first) 44 | .sum_by(kf=cld.second) 45 | .add(self.b) 46 | .pipe(self.f)) 47 | -------------------------------------------------------------------------------- /pyClarion/components/stores.py: -------------------------------------------------------------------------------- 1 | """Tools for instantiating top level knowledge.""" 2 | 3 | 4 | from __future__ import annotations 5 | from typing import (OrderedDict, Tuple, Any, Dict, List, Generic, TypeVar, 6 | Container, Optional, Callable) 7 | from itertools import count 8 | 9 | from ..base import Process, uris, dimension, feature, chunk, rule 10 | from .. import numdicts as nd 11 | from .. import dev as cld 12 | 13 | 14 | __all__ = ["BLATracker", "Store", "GoalStore"] 15 | 16 | 17 | T = TypeVar("T") 18 | 19 | 20 | class BLATracker(Generic[T]): 21 | """BLA tracker for top level stores.""" 22 | 23 | params = ("th", "bln", "amp", "dec") 24 | 25 | lags: nd.NumDict[Tuple[T, int]] 26 | uses: nd.NumDict[T] 27 | lifetimes: nd.NumDict[T] 28 | 29 | def __init__(self, depth: int = 1) -> None: 30 | """ 31 | Initialize a new BLA tracker. 32 | 33 | :param depth: Depth of estimate. 
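        A numeric sketch of the standard base-level activation form this
        tracker approximates (the implementation also adds a correction term
        for uses beyond the stored depth):

        ```python
        # B = bln + amp * sum_k t_k ** (-dec), over the lags of recent uses
        bln, amp, dec = 2.0, 1.0, 0.5
        lags = [1.0, 4.0]            # times since the most recent uses

        bla = bln + amp * sum(t ** -dec for t in lags)
        assert abs(bla - 3.5) < 1e-9
        ```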
34 | """ 35 | 36 | if depth < 0: 37 | raise ValueError("Depth must be non-negative.") 38 | self.depth = depth 39 | 40 | self.lags = nd.NumDict() 41 | self.uses = nd.NumDict() 42 | self.lifetimes = nd.NumDict() 43 | 44 | def call(self, p: nd.NumDict[str]) -> nd.NumDict: 45 | 46 | bln = p.isolate(key=self.params[1]) 47 | amp = p.isolate(key=self.params[2]) 48 | dec = p.isolate(key=self.params[3]) 49 | 50 | selector = self.uses > self.depth 51 | selector_lags = self.lags.put(selector, kf=cld.first) 52 | n = self.uses.keep_if(cond=selector) 53 | t_n = self.lifetimes.keep_if(cond=selector) 54 | t_k = self.lags.keep_if(cond=selector_lags).max_by(kf=self.xi2x) 55 | dec_rs_1 = (1 - dec) 56 | factor = (n - self.depth).set_c(0) / dec_rs_1 57 | t = (t_n ** dec_rs_1 - t_k ** dec_rs_1) / (t_n - t_k).set_c(1) 58 | distant_approx = factor * t 59 | 60 | dec_lags = self.lags.put(dec, kf=cld.first) 61 | sum_t = (self.lags ** -dec_lags).sum_by(kf=self.xi2x) + distant_approx 62 | 63 | return bln + amp * sum_t 64 | 65 | def update(self, p: nd.NumDict[str], d: nd.NumDict[T]) -> None: 66 | 67 | th = p.isolate(key=self.params[0]) 68 | invoked = (d > th).squeeze() 69 | 70 | self.uses += invoked 71 | self.lifetimes += ( 72 | self.lifetimes 73 | .add((d > th).mul(0)) # ensures new items added to lifetimes 74 | .mask()) 75 | 76 | invoked_lags = self.lags.put(invoked, kf=cld.first) 77 | unshifted = self.lags.keep_if(cond=invoked_lags.rsub(1)) 78 | self.lags = (self.lags 79 | .keep_if(invoked_lags) # select entries w/ new invocations 80 | .transform_keys(kf=self.shift) 81 | .drop(sf=self.expired) # drop entries beyond depth limit 82 | .merge(unshifted) # merge in unshifted entries 83 | .add(1) # update existing lags 84 | .merge(invoked.transform_keys(kf=self.x2xi))) 85 | 86 | def drop(self, d: Container[T]) -> None: 87 | self.uses = self.uses.drop(sf=d.__contains__) 88 | self.lifetimes = self.lifetimes.drop(sf=d.__contains__) 89 | self.lags = self.lags.drop(sf=lambda k: k[0] in d) 90 | 91 | def 
expired(self, key: Tuple[Any, int]) -> bool: 92 | return key[1] >= self.depth 93 | 94 | @staticmethod 95 | def shift(key: Tuple[T, int]) -> Tuple[T, int]: 96 | return (key[0], key[1] + 1) 97 | 98 | @staticmethod 99 | def xi2x(key: Tuple[T, int]) -> T: 100 | return key[0] 101 | 102 | @staticmethod 103 | def x2xi(key: T) -> Tuple[T, int]: 104 | return (key, 0) 105 | 106 | 107 | class Store(Process): 108 | """Basic store for top-level knowledge.""" 109 | 110 | initial = ( 111 | nd.NumDict(), nd.NumDict(), nd.NumDict(),# chunk weights 112 | nd.NumDict(), nd.NumDict(), # rule weights 113 | nd.NumDict(), nd.NumDict()) # blas 114 | 115 | cf: nd.NumDict[Tuple[chunk, feature]] # (chunk, feature): 1.0, c=0 116 | cw: nd.NumDict[Tuple[chunk, dimension]] # (chunk, dim): w, c=0 117 | wn: nd.NumDict[chunk] # chunk: sum(|w|), c=0 118 | cr: nd.NumDict[Tuple[chunk, rule]] # (chunk, rule): 1.0, c=0, rank=2 119 | rc: nd.NumDict[Tuple[rule, chunk]] # (rule, chunk): w, c=0, rank=2 120 | cb: Optional[BLATracker[chunk]] 121 | rb: Optional[BLATracker[rule]] 122 | 123 | # parameter prefixes for cb and rb 124 | c_pre = "c" 125 | r_pre = "r" 126 | 127 | def __init__( 128 | self, 129 | g: Callable[[nd.NumDict[chunk]], nd.NumDict[chunk]] = cld.eye, 130 | cbt: Optional[BLATracker[chunk]] = None, 131 | rbt: Optional[BLATracker[rule]] = None 132 | ) -> None: 133 | 134 | self.g = g 135 | self.cf = nd.NumDict() 136 | self.cw = nd.NumDict() 137 | self.wn = nd.NumDict() 138 | self.cr = nd.NumDict() 139 | self.rc = nd.NumDict() 140 | self.cb = cbt 141 | self.rb = rbt 142 | 143 | def call( 144 | self, 145 | p: nd.NumDict[feature], 146 | f: nd.NumDict[feature], 147 | c: nd.NumDict[chunk], 148 | r: nd.NumDict[rule] 149 | ) -> Tuple[ 150 | nd.NumDict[Tuple[chunk, feature]], 151 | nd.NumDict[Tuple[chunk, dimension]], 152 | nd.NumDict[chunk], 153 | nd.NumDict[Tuple[chunk, rule]], 154 | nd.NumDict[Tuple[rule, chunk]], 155 | nd.NumDict[chunk], 156 | nd.NumDict[rule]]: 157 | 158 | cb, rb = self.update_blas(p, 
c, r) 159 | wn = self.g(self.wn).set_c(0) 160 | return self.cf, self.cw, wn, self.cr, self.rc, cb, rb 161 | 162 | def update_blas( 163 | self, p: nd.NumDict[feature], c: nd.NumDict[chunk], r: nd.NumDict[rule] 164 | ) -> Tuple[nd.NumDict[chunk], nd.NumDict[rule]]: 165 | 166 | if self.cb is None: 167 | cb = nd.NumDict() 168 | else: 169 | cp = self._extract_cp(p) 170 | self.cb.update(cp, c) 171 | cb = self.cb.call(cp) 172 | 173 | if self.rb is None: 174 | rb = nd.NumDict() 175 | else: 176 | rp = self._extract_rp(p) 177 | self.rb.update(rp, r) 178 | rb = self.rb.call(rp) 179 | 180 | return cb, rb 181 | 182 | def _extract_cp(self, p: nd.NumDict): 183 | return (p 184 | .keep(sf=self._select_cps) 185 | .transform_keys(kf=self._transform_cps)) 186 | 187 | def _extract_rp(self, p: nd.NumDict): 188 | return (p 189 | .keep(sf=self._select_rps) 190 | .transform_keys(kf=self._transform_rps)) 191 | 192 | def _select_cps(self, k): 193 | if self.cb is not None: 194 | return k in self.params[0:len(self.cb.params)] 195 | else: 196 | raise ValueError("Chunk BLAs not defined") 197 | 198 | def _transform_cps(self, k): 199 | if self.cb is None: 200 | raise ValueError("Chunk BLAs not defined") 201 | else: 202 | params = self.params[0:len(self.cb.params)] 203 | return self.cb.params[params.index(k)] 204 | 205 | def _select_rps(self, k): 206 | if self.rb is None: 207 | raise ValueError("Rule BLAs not defined") 208 | else: 209 | offset = 0 if self.cb is None else len(self.cb.params) 210 | return k in self.params[offset:offset + len(self.rb.params)] 211 | 212 | def _transform_rps(self, k): 213 | if self.rb is None: 214 | raise ValueError("Rule BLAs not defined") 215 | else: 216 | offset = 0 if self.cb is None else len(self.cb.params) 217 | params = self.params[offset:offset + len(self.rb.params)] 218 | return self.rb.params[params.index(k)] 219 | 220 | @property 221 | def params(self): 222 | cps = ["-".join(filter(None, [self.c_pre, p])) 223 | for p in self.cb.params] if self.cb is not None 
else [] 224 | rps = ["-".join(filter(None, [self.r_pre, p])) 225 | for p in self.rb.params] if self.rb is not None else [] 226 | ps = cld.prefix(cps + rps, self.prefix) 227 | return tuple(feature(p) for p in ps) 228 | 229 | 230 | class GoalStore(Store): 231 | 232 | _set_pre = "set" 233 | _eval = "eval" 234 | _eval_vals = ("pass", "fail", "quit", None) 235 | 236 | def __init__(self, 237 | gspec: Dict[str, List[str]], 238 | g: Callable[[nd.NumDict[chunk]], nd.NumDict[chunk]] = cld.eye, 239 | cbt: Optional[BLATracker[chunk]] = None, 240 | ) -> None: 241 | 242 | super().__init__(g=g, cbt=cbt) 243 | self.gspec = OrderedDict(gspec) 244 | self.count = count() 245 | 246 | def call( 247 | self, 248 | p: nd.NumDict[feature], 249 | f: nd.NumDict[feature], 250 | c: nd.NumDict[chunk], 251 | r: nd.NumDict[rule] 252 | ) -> Tuple[ 253 | nd.NumDict[Tuple[chunk, feature]], 254 | nd.NumDict[Tuple[chunk, dimension]], 255 | nd.NumDict[chunk], 256 | nd.NumDict[Tuple[chunk, rule]], 257 | nd.NumDict[Tuple[rule, chunk]], 258 | nd.NumDict[chunk], 259 | nd.NumDict[rule]]: 260 | 261 | cmds = self.cmds 262 | f = f.drop(sf=lambda ftr: ftr.v is None) 263 | 264 | eval_ = f.keep(sf=cmds[-4:].__contains__) 265 | if len(eval_): 266 | self.cf = self.cf.drop(sf=lambda k: k[0] in c) 267 | self.cw = self.cw.drop(sf=lambda k: k[0] in c) 268 | if self.cb is not None: self.cb.drop(c) 269 | 270 | set_ = (f.keep(sf=cmds[:-4].__contains__) 271 | .transform_keys(kf=self._cmd2repr)) 272 | if len(set_): 273 | new = chunk(uris.FSEP.join([self.prefix, str(next(self.count))]) 274 | .strip(uris.FSEP)) 275 | d = nd.NumDict({new: 1.0}) 276 | self.cf = self.cf.merge(d.outer(set_)) 277 | cw = d.outer(set_.transform_keys(kf=feature.dim.fget)) #type: ignore 278 | self.cw = self.cw.merge(cw) 279 | self.wn = self.wn.merge(cw.abs().sum_by(kf=cld.first)) 280 | return super().call(p, nd.NumDict(), d, nd.NumDict()) 281 | else: 282 | return super().call(p, nd.NumDict(), nd.NumDict(), nd.NumDict()) 283 | 284 | def _cmd2repr(self, 
cmd): 285 | return self.reprs[self.cmds.index(cmd) - 1] 286 | 287 | def _goal_items(self): 288 | if len(self.gspec) > 0: 289 | return list(zip(*self.gspec.items())) 290 | else: 291 | return [], [] 292 | 293 | @property 294 | def reprs(self) -> Tuple[feature, ...]: 295 | ds, v_lists = self._goal_items() 296 | ds = cld.prefix(ds, self.prefix) # type: ignore 297 | return tuple(feature(d, v) for d, vs in zip(ds, v_lists) for v in vs) 298 | 299 | @property 300 | def cmds(self) -> Tuple[feature, ...]: 301 | ds, v_lists = self._goal_items() 302 | ds = ["-".join([self._set_pre, d]) for d in ds] # type: ignore 303 | ds = cld.prefix(ds, self.prefix) 304 | v_lists = [[None] + l for l in v_lists] # type: ignore 305 | set_ = tuple(feature(d, v) for d, vs in zip(ds, v_lists) for v in vs) 306 | eval_dim = cld.prefix(self._eval, self.prefix) 307 | eval_ = tuple(feature(eval_dim, v) for v in self._eval_vals) 308 | return set_ + eval_ 309 | 310 | @property 311 | def nops(self) -> Tuple[feature, ...]: 312 | ds = ["-".join(filter(None, [self._set_pre, d])) # type: ignore 313 | for d in self.gspec.keys()] 314 | ds = cld.prefix(ds, self.prefix) 315 | set_ = tuple(feature(d, None) for d in ds) 316 | eval_dim = cld.prefix(self._eval, self.prefix) 317 | return set_ + (feature(eval_dim, None), ) 318 | -------------------------------------------------------------------------------- /pyClarion/components/wm.py: -------------------------------------------------------------------------------- 1 | from typing import Tuple, List, Sequence 2 | import re 3 | 4 | from ..base import feature, chunk 5 | from .. import numdicts as nd 6 | from .. 
import dev as cld 7 | 8 | 9 | __all__ = ["Flags", "Slots"] 10 | 11 | 12 | class Flags(cld.Process): 13 | 14 | initial = nd.NumDict() 15 | set_prefix = "set" 16 | 17 | def __init__(self, fs: Sequence[str], vs: Sequence[int] = (-1, 0, 1)) -> None: 18 | for f in fs: 19 | if not cld.ispath(f): 20 | raise ValueError(f"Flag name '{f}' is not a valid path.") 21 | if f.startswith(f"{self.set_prefix}"): 22 | raise ValueError("Flag name starts with reserved prefix " 23 | f"'{self.set_prefix}'") 24 | 25 | self.fs = tuple(fs) 26 | self.vs = (None, *vs) # type: ignore 27 | self.store = nd.NumDict(c=0) 28 | 29 | def call(self, c: nd.NumDict[feature]) -> nd.NumDict[feature]: 30 | self.update(c) 31 | return self.store 32 | 33 | def update(self, c: nd.NumDict) -> None: 34 | self.store = (self.store 35 | .mul(c 36 | .keep(sf=lambda f: f.v is None) 37 | .transform_keys(kf=self.cmd2flag) 38 | .mask()) 39 | .squeeze() 40 | .merge(c 41 | .keep(sf=lambda f: f.v == 1) 42 | .transform_keys(kf=self.cmd2flag) 43 | .mask()) 44 | .merge(c 45 | .keep(sf=lambda f: f.v == -1) 46 | .transform_keys(kf=self.cmd2flag) 47 | .mask() 48 | .mul(-1))) 49 | 50 | def cmd2flag(self, f_cmd): 51 | l, sep, r = f_cmd.d.partition(cld.FSEP) 52 | d_cmd = l if not sep else r 53 | flag = re.sub("^set-", "", d_cmd) 54 | f = feature(cld.prefix(flag, self.prefix)) 55 | assert f in self.flags, f"regexp sub likely failed: '{f}'" 56 | return f 57 | 58 | @property 59 | def flags(self) -> Tuple[feature, ...]: 60 | return tuple(feature(dim) for dim in cld.prefix(self.fs, self.prefix)) 61 | 62 | @property 63 | def cmds(self): 64 | dims = ["-".join([self.set_prefix, f]) for f in self.fs] 65 | dims = cld.prefix(dims, self.prefix) 66 | return tuple(feature(dim, v) for dim in dims for v in self.vs) 67 | 68 | @property 69 | def nops(self): 70 | dims = ["-".join([self.set_prefix, f]) for f in self.fs] 71 | dims = cld.prefix(dims, self.prefix) 72 | return tuple(feature(dim, None) for dim in dims) 73 | 74 | 75 | class 
Slots(cld.Process): 76 | 77 | initial = (nd.NumDict(), nd.NumDict()) 78 | store: nd.NumDict[Tuple[int, chunk]] 79 | 80 | def __init__(self, slots: int) -> None: 81 | self.slots = slots 82 | self.store = nd.NumDict() # (slot, chunk): 1.0, c=0 83 | 84 | def call( 85 | self, 86 | c: nd.NumDict[feature], 87 | s: nd.NumDict[chunk], 88 | m: nd.NumDict[chunk] 89 | ) -> Tuple[nd.NumDict[chunk], nd.NumDict[feature]]: 90 | """ 91 | c: commands 92 | s: selected chunk 93 | m: match strengths 94 | """ 95 | 96 | self.update(c, s) 97 | 98 | rd = (c # rd indicates for each slot if its contents should be read 99 | .keep(sf=lambda k: k.d not in self._write_dims() and k.v == 1) 100 | .transform_keys(kf=self._cmd2slot)) 101 | chunks = (self.store 102 | .put(rd, kf=cld.first) 103 | .squeeze() 104 | .max_by(kf=cld.second)) 105 | 106 | full = (self.store 107 | .abs() 108 | .sum_by(kf=cld.first) 109 | .greater(0) 110 | .mul(2) 111 | .sub(1) 112 | .with_keys(ks=range(1, self.slots + 1)) 113 | .set_c(0) 114 | .transform_keys(kf=self._full_flag)) 115 | match = (self.store 116 | .put(m, kf=cld.second) 117 | .cam_by(kf=cld.first) 118 | .squeeze() 119 | .transform_keys(kf=self._match_flag)) 120 | flags = full + match 121 | 122 | return chunks, flags 123 | 124 | def update(self, c: nd.NumDict, s: nd.NumDict) -> None: 125 | ud = (c 126 | .keep(sf=lambda k: k.d in self._write_dims() and k.v != 0) 127 | .transform_keys(kf=self._cmd2slot)) 128 | wrt = (c 129 | .keep(sf=lambda k: k.d in self._write_dims() and k.v == 1) 130 | .transform_keys(kf=self._cmd2slot)) 131 | self.store = (wrt 132 | .outer(s) 133 | .merge(self.store 134 | .put(1 - ud, kf=lambda k: k[0]) 135 | .squeeze())) 136 | 137 | def _write_dims(self) -> List[str]: 138 | return cld.prefix([f"write-{i + 1}" for i in range(self.slots)], 139 | self.prefix) 140 | 141 | def _full_flag(self, i: int) -> feature: 142 | return feature(cld.prefix(f"full-{i}", self.prefix)) 143 | 144 | def _match_flag(self, i: int) -> feature: 145 | return 
feature(cld.prefix(f"match-{i}", self.prefix)) 146 | 147 | def _cmd2slot(self, cmd: feature) -> int: 148 | dim = cmd.d 149 | if self.prefix: 150 | dim = cld.split(dim).fragment 151 | return int(dim.split("-")[-1]) 152 | 153 | @property 154 | def flags(self) -> Tuple[feature, ...]: 155 | tup = cld.prefix(("full", "match"), self.prefix) 156 | return tuple(feature(f"{k}-{i + 1}") 157 | for k in tup for i in range(self.slots)) 158 | 159 | @property 160 | def cmds(self) -> Tuple[feature, ...]: 161 | d: dict[str, Tuple[int, ...]] = {} 162 | for i in range(self.slots): 163 | d[f"read-{i + 1}"] = (0, 1) 164 | d[f"write-{i + 1}"] = (-1, 0, +1) 165 | return tuple(feature(k, v) 166 | for k, vs in cld.prefix(d, self.prefix).items() for v in vs) 167 | 168 | @property 169 | def nops(self) -> Tuple[feature, ...]: 170 | d: dict[str, int] = {} 171 | for i in range(self.slots): 172 | d[f"read-{i + 1}"] = 0 173 | d[f"write-{i + 1}"] = 0 174 | return tuple(feature(k, v) 175 | for k, v in cld.prefix(d, self.prefix).items()) 176 | -------------------------------------------------------------------------------- /pyClarion/dev.py: -------------------------------------------------------------------------------- 1 | """Tools for developing new pyClarion components.""" 2 | 3 | 4 | from .base import feature, dimension, chunk, Process 5 | from .base.uris import (FSEP, SEP, SUP, ID, ispath, join, split, split_head, 6 | commonprefix, remove_prefix, relativize, prefix) 7 | from typing import overload, Dict, Any, Tuple, Iterable, Callable, TypeVar, List 8 | from .numdicts import GradientTape 9 | 10 | 11 | __all__ = ["FSEP", "SEP", "SUP", "ID", "Process", "GradientTape", "lag", 12 | "first", "second", "group_by", "group_by_dims", "ispath", "join", "split", 13 | "split_head", "commonprefix", "remove_prefix", "relativize", "prefix"] 14 | 15 | 16 | T = TypeVar("T") 17 | 18 | 19 | def eye(x: T) -> T: 20 | """Return input x (identity function).""" 21 | return x 22 | 23 | 24 | @overload 25 | def lag(x: 
feature, val: int = 1) -> feature: 26 | ... 27 | 28 | @overload 29 | def lag(x: dimension, val: int = 1) -> dimension: 30 | ... 31 | 32 | def lag(x, val = 1): 33 | """Return a copy of x with lag incremented by val.""" 34 | if isinstance(x, dimension): 35 | return dimension(x.id, x.lag + val) 36 | elif isinstance(x, feature): 37 | return feature(x.d, x.v, x.l + val) 38 | else: 39 | raise TypeError(f"Expected 'feature' or 'dimension', got {type(x)}") 40 | 41 | 42 | def first(pair: Tuple[T, Any]) -> T: 43 | """Return the first element in a pair.""" 44 | return pair[0] 45 | 46 | 47 | def second(pair: Tuple[Any, T]) -> T: 48 | """Return the second element in a pair.""" 49 | return pair[1] 50 | 51 | 52 | def cf2cd(key: Tuple[chunk, feature]) -> Tuple[chunk, dimension]: 53 | """Convert a chunk-feature pair to a chunk-dimension pair.""" 54 | return key[0], key[1].dim 55 | 56 | 57 | def group_by(iterable: Iterable, key: Callable) -> Dict[Any, Tuple]: 58 | """Return a dict grouping items in iterable by values of the key func.""" 59 | groups: dict = {} 60 | for item in iterable: 61 | k = key(item) 62 | groups.setdefault(k, []).append(item) 63 | return {k: tuple(v) for k, v in groups.items()} 64 | 65 | 66 | def group_by_dims( 67 | features: Iterable[feature] 68 | ) -> Dict[dimension, Tuple[feature, ...]]: 69 | """ 70 | Construct a dict grouping features by their dimensions. 71 | 72 | Returns a dict where each dim is mapped to a tuple of features of that dim. 73 | Does not check for duplicate features. 74 | 75 | :param features: An iterable of features to be grouped by dimension. 
76 | """ 77 | return group_by(iterable=features, key=feature.dim.fget) 78 | -------------------------------------------------------------------------------- /pyClarion/numdicts/__init__.py: -------------------------------------------------------------------------------- 1 | from .numdict import NumDict 2 | from .gradient_tape import GradientTape 3 | 4 | __all__ = ["NumDict", "GradientTape"] 5 | -------------------------------------------------------------------------------- /pyClarion/numdicts/basic_ops.py: -------------------------------------------------------------------------------- 1 | from __future__ import annotations 2 | 3 | from . import numdict as nd 4 | from . import gradient_tape as gt 5 | from .utils import coerce2, op1, op2 6 | from .utils import sign as _sign 7 | from .utils import isclose as _isclose 8 | from .utils import isnan as _isnan 9 | from .utils import isinf as _isinf 10 | from .utils import isfinite as _isfinite 11 | from .utils import lt as _lt 12 | from .utils import gt as _gt 13 | from .utils import le as _le 14 | from .utils import ge as _ge 15 | 16 | from typing import Tuple, TypeVar, Any 17 | from math import log as _log 18 | from math import exp as _exp 19 | 20 | 21 | __all__ = [ 22 | "isfinite", "isnan", "isinf", "replace_inf", "neg", "sign", "log", "exp", 23 | "isclose", "less", "greater", "less_equal", "greater_equal", "maximum", 24 | "minimum" 25 | ] 26 | 27 | 28 | T = TypeVar("T") 29 | 30 | 31 | ### BASIC UNARY OPS ### 32 | 33 | 34 | @gt.GradientTape.op(no_grad=True) 35 | def isfinite(d: nd.NumDict[T]) -> nd.NumDict[T]: 36 | return op1(_isfinite, d) 37 | 38 | 39 | @gt.GradientTape.op(no_grad=True) 40 | def isnan(d: nd.NumDict[T]) -> nd.NumDict[T]: 41 | return op1(_isnan, d) 42 | 43 | 44 | @gt.GradientTape.op(no_grad=True) 45 | def isinf(d: nd.NumDict[T]) -> nd.NumDict[T]: 46 | return op1(_isinf, d) 47 | 48 | 49 | @gt.GradientTape.op(no_grad=True) 50 | def replace_inf(d: nd.NumDict[T], val: Any) -> nd.NumDict[T]: 51 | return 
nd.NumDict._new( 52 | m={k: v if _isfinite(v) else float(val) for k, v in d.items()}, 53 | c=d._c) 54 | 55 | 56 | @gt.GradientTape.op() 57 | def neg(d: nd.NumDict[T]) -> nd.NumDict[T]: 58 | return op1(float.__neg__, d) 59 | 60 | @gt.GradientTape.grad(neg) 61 | def _grad_neg( 62 | grads: nd.NumDict[T], result: nd.NumDict[T], d: nd.NumDict[T] 63 | ) -> Tuple[nd.NumDict[T]]: 64 | return (-grads,) 65 | 66 | 67 | @gt.GradientTape.op() 68 | def sign(d: nd.NumDict[T]) -> nd.NumDict[T]: 69 | return op1(_sign, d) 70 | 71 | 72 | @gt.GradientTape.op() 73 | def absolute(d: nd.NumDict[T]) -> nd.NumDict[T]: 74 | return op1(abs, d) 75 | 76 | @gt.GradientTape.grad(absolute) 77 | def _grad_absolute( 78 | grads: nd.NumDict[T], result: nd.NumDict[T], d: nd.NumDict[T] 79 | ) -> Tuple[nd.NumDict[T]]: 80 | return (grads * sign(d),) 81 | 82 | 83 | @gt.GradientTape.op() 84 | def log(d: nd.NumDict[T]) -> nd.NumDict[T]: 85 | return op1(_log, d) 86 | 87 | @gt.GradientTape.grad(log) 88 | def _grad_log( 89 | grads: nd.NumDict[T], result: nd.NumDict[T], d: nd.NumDict[T] 90 | ) -> Tuple[nd.NumDict[T]]: 91 | return (grads / d,) 92 | 93 | 94 | @gt.GradientTape.op() 95 | def exp(d: nd.NumDict[T]) -> nd.NumDict[T]: 96 | return op1(_exp, d) 97 | 98 | @gt.GradientTape.grad(exp) 99 | def _grad_exp( 100 | grads: nd.NumDict[T], result: nd.NumDict[T], d: nd.NumDict[T] 101 | ) -> Tuple[nd.NumDict[T]]: 102 | return (grads * result,) 103 | 104 | 105 | ### BINARY ARITHMETIC OPS ### 106 | 107 | 108 | @coerce2 109 | @gt.GradientTape.op() 110 | def add(d1: nd.NumDict[T], d2: nd.NumDict[T]) -> nd.NumDict[T]: 111 | return op2(float.__add__, d1, d2) 112 | 113 | @gt.GradientTape.grad(add) 114 | def _grad_add( 115 | grads: nd.NumDict[T], result: nd.NumDict[T], 116 | d1: nd.NumDict[T], d2: nd.NumDict[T] 117 | ) -> Tuple[nd.NumDict[T], nd.NumDict[T]]: 118 | return (grads, grads) 119 | 120 | 121 | @coerce2 122 | @gt.GradientTape.op() 123 | def mul(d1: nd.NumDict[T], d2: nd.NumDict[T]) -> nd.NumDict[T]: 124 | return 
op2(float.__mul__, d1, d2) 125 | 126 | @gt.GradientTape.grad(mul) 127 | def _grad_mul( 128 | grads: nd.NumDict[T], result: nd.NumDict[T], 129 | d1: nd.NumDict[T], d2: nd.NumDict[T] 130 | ) -> Tuple[nd.NumDict[T], nd.NumDict[T]]: 131 | return (grads * d2, grads * d1) 132 | 133 | 134 | @coerce2 135 | @gt.GradientTape.op() 136 | def sub(d1: nd.NumDict[T], d2: nd.NumDict[T]) -> nd.NumDict[T]: 137 | return op2(float.__sub__, d1, d2) 138 | 139 | @gt.GradientTape.grad(sub) 140 | def _grad_sub( 141 | grads: nd.NumDict[T], result: nd.NumDict[T], 142 | d1: nd.NumDict[T], d2: nd.NumDict[T] 143 | ) -> Tuple[nd.NumDict[T], nd.NumDict[T]]: 144 | return (grads, -grads) 145 | 146 | 147 | @coerce2 148 | @gt.GradientTape.op() 149 | def rsub(d1: nd.NumDict[T], d2: nd.NumDict[T]) -> nd.NumDict[T]: 150 | return op2(float.__rsub__, d1, d2) 151 | 152 | @gt.GradientTape.grad(rsub) 153 | def _grad_rsub( 154 | grads: nd.NumDict[T], result: nd.NumDict[T], 155 | d1: nd.NumDict[T], d2: nd.NumDict[T] 156 | ) -> Tuple[nd.NumDict[T], nd.NumDict[T]]: 157 | return (-grads, grads) 158 | 159 | 160 | @coerce2 161 | @gt.GradientTape.op() 162 | def div(d1: nd.NumDict[T], d2: nd.NumDict[T]) -> nd.NumDict[T]: 163 | return op2(float.__truediv__, d1, d2) 164 | 165 | @gt.GradientTape.grad(div) 166 | def _grad_div( 167 | grads: nd.NumDict[T], result: nd.NumDict[T], 168 | d1: nd.NumDict[T], d2: nd.NumDict[T] 169 | ) -> Tuple[nd.NumDict[T], nd.NumDict[T]]: 170 | return (grads / d2, -(grads * d1) / (d2 * d2)) 171 | 172 | 173 | @coerce2 174 | @gt.GradientTape.op() 175 | def rdiv(d1: nd.NumDict[T], d2: nd.NumDict[T]) -> nd.NumDict[T]: 176 | return op2(float.__rtruediv__, d1, d2) 177 | 178 | @gt.GradientTape.grad(rdiv) 179 | def _grad_rdiv( 180 | grads: nd.NumDict[T], result: nd.NumDict[T], 181 | d1: nd.NumDict[T], d2: nd.NumDict[T] 182 | ) -> Tuple[nd.NumDict[T], nd.NumDict[T]]: 183 | return (-(grads * d2) / (d1 * d1), grads / d1) 184 | 185 | 186 | @coerce2 187 | @gt.GradientTape.op() 188 | def power(d1: 
nd.NumDict[T], d2: nd.NumDict[T]) -> nd.NumDict[T]: 189 | return op2(float.__pow__, d1, d2) 190 | 191 | @gt.GradientTape.grad(power) 192 | def _grad_power( 193 | grads: nd.NumDict[T], result: nd.NumDict[T], 194 | d1: nd.NumDict[T], d2: nd.NumDict[T] 195 | ) -> Tuple[nd.NumDict[T], nd.NumDict[T]]: 196 | return (grads * d2 * d1 ** (d2 - 1), grads * log(d1) * d1 ** d2) 197 | 198 | 199 | @coerce2 200 | @gt.GradientTape.op() 201 | def rpow(d1: nd.NumDict[T], d2: nd.NumDict[T]) -> nd.NumDict[T]: 202 | return op2(float.__rpow__, d1, d2) 203 | 204 | @gt.GradientTape.grad(rpow) 205 | def _grad_rpow( 206 | grads: nd.NumDict[T], result: nd.NumDict[T], 207 | d1: nd.NumDict[T], d2: nd.NumDict[T] 208 | ) -> Tuple[nd.NumDict[T], nd.NumDict[T]]: 209 | return (grads * log(d2) * d2 ** d1, grads * d1 * d2 ** (d1 - 1)) 210 | 211 | 212 | ### BINARY COMPARISON OPS ### 213 | 214 | 215 | @coerce2 216 | @gt.GradientTape.op(no_grad=True) 217 | def isclose(d1: nd.NumDict[T], d2: nd.NumDict[T]) -> nd.NumDict[T]: 218 | return op2(_isclose, d1, d2) 219 | 220 | 221 | @coerce2 222 | @gt.GradientTape.op(no_grad=True) 223 | def less(d1: nd.NumDict[T], d2: nd.NumDict[T]) -> nd.NumDict[T]: 224 | return op2(_lt, d1, d2) 225 | 226 | 227 | @coerce2 228 | @gt.GradientTape.op(no_grad=True) 229 | def greater(d1: nd.NumDict[T], d2: nd.NumDict[T]) -> nd.NumDict[T]: 230 | return op2(_gt, d1, d2) 231 | 232 | 233 | @coerce2 234 | @gt.GradientTape.op(no_grad=True) 235 | def less_equal(d1: nd.NumDict[T], d2: nd.NumDict[T]) -> nd.NumDict[T]: 236 | return op2(_le, d1, d2) 237 | 238 | 239 | @coerce2 240 | @gt.GradientTape.op(no_grad=True) 241 | def greater_equal(d1: nd.NumDict[T], d2: nd.NumDict[T]) -> nd.NumDict[T]: 242 | return op2(_ge, d1, d2) 243 | 244 | 245 | @coerce2 246 | @gt.GradientTape.op() 247 | def maximum(d1: nd.NumDict[T], d2: nd.NumDict[T]) -> nd.NumDict[T]: 248 | return op2(max, d1, d2) 249 | 250 | @gt.GradientTape.grad(maximum) 251 | def _grad_maximum( 252 | grads: nd.NumDict[T], result: 
nd.NumDict[T], 253 | d1: nd.NumDict[T], d2: nd.NumDict[T] 254 | ) -> Tuple[nd.NumDict[T], nd.NumDict[T]]: 255 | return (grads * (d2 <= d1), grads * (d1 <= d2)) 256 | 257 | 258 | @coerce2 259 | @gt.GradientTape.op() 260 | def minimum(d1: nd.NumDict[T], d2: nd.NumDict[T]) -> nd.NumDict[T]: 261 | return op2(min, d1, d2) 262 | 263 | @gt.GradientTape.grad(minimum) 264 | def _grad_minimum( 265 | grads: nd.NumDict[T], result: nd.NumDict[T], 266 | d1: nd.NumDict[T], d2: nd.NumDict[T] 267 | ) -> Tuple[nd.NumDict[T], nd.NumDict[T]]: 268 | return (grads * (d1 <= d2), grads * (d2 <= d1)) 269 | -------------------------------------------------------------------------------- /pyClarion/numdicts/dict_ops.py: -------------------------------------------------------------------------------- 1 | from __future__ import annotations 2 | 3 | from . import numdict as nd 4 | from . import gradient_tape as gt 5 | 6 | from typing import Any, Tuple, Callable, TypeVar, Iterable, overload 7 | 8 | 9 | __all__ = ["mask", "isolate", "keep", "drop", "with_keys", "transform_keys", 10 | "merge"] 11 | 12 | 13 | T = TypeVar("T") 14 | T1, T2 = TypeVar("T1"), TypeVar("T2") 15 | 16 | 17 | @gt.GradientTape.op(no_grad=True) 18 | def mask(d: nd.NumDict[T]) -> nd.NumDict[T]: 19 | return nd.NumDict._new(m={k: 1.0 for k in d}, c=0.0) 20 | 21 | 22 | @gt.GradientTape.op() 23 | def set_c(d: nd.NumDict[T], c: Any) -> nd.NumDict[T]: 24 | return nd.NumDict._new(m={k: v for k, v in d.items()}, c=float(c)) 25 | 26 | @gt.GradientTape.grad(set_c) 27 | def _grad_set_c( 28 | grads: nd.NumDict[T], result: nd.NumDict[T], d: nd.NumDict[T], *, c: Any 29 | ) -> Tuple[nd.NumDict[T]]: 30 | return (mask(grads) * grads,) 31 | 32 | 33 | @overload 34 | def isolate(d: nd.NumDict[Any]) -> nd.NumDict[Any]: 35 | ... 36 | 37 | @overload 38 | def isolate(d: nd.NumDict[T], *, key: T) -> nd.NumDict[Any]: 39 | ... 
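The `isolate` op declared by the overloads above collapses a NumDict down to a constant. As a rough sketch of that contract, here is a toy stand-in (the `ToyNumDict` class and `toy_isolate` function are hypothetical illustrations, not the real `NumDict` API, which is far richer):

```python
# Hedged sketch of isolate()'s contract using a hypothetical stand-in.
from dataclasses import dataclass, field
from typing import Dict, Optional


@dataclass
class ToyNumDict:
    m: Dict[str, float] = field(default_factory=dict)  # explicit entries
    c: float = 0.0                                     # default constant

    def __getitem__(self, key: str) -> float:
        # Unknown keys fall back to the constant, mimicking NumDict's _c.
        return self.m.get(key, self.c)


def toy_isolate(d: ToyNumDict, key: Optional[str] = None) -> ToyNumDict:
    # No key: keep only d's constant. With a key: promote that entry's
    # value to the constant of a new, keyless dict.
    return ToyNumDict(c=d.c if key is None else d[key])


d = ToyNumDict({"a": 1.0, "b": 2.0}, c=0.5)
print(toy_isolate(d).c)       # the constant of d
print(toy_isolate(d, "a").c)  # the value stored under "a"
```

Either way, the result carries no keys of its own, which is why the return type in both overloads is a NumDict whose only payload is the constant.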
40 | 41 | @gt.GradientTape.op() 42 | def isolate(d, *, key=None): 43 | """ 44 | Return a constant NumDict isolating a value from d. 45 | 46 | If key is None, the constant is set to d.c. Otherwise, it is set to d[key]. 47 | """ 48 | if key is None: 49 | return nd.NumDict._new(c=d._c) 50 | else: 51 | return nd.NumDict._new(c=d[key]) 52 | 53 | @gt.GradientTape.grad(isolate) 54 | def _grad_isolate(grads, result, d, *, key): 55 | raise NotImplementedError() 56 | 57 | 58 | @gt.GradientTape.op() 59 | def keep(d: nd.NumDict[T], *, sf: Callable[[T], bool]) -> nd.NumDict[T]: 60 | return nd.NumDict._new(m={k: v for k, v in d.items() if sf(k)}, c=d._c) 61 | 62 | @gt.GradientTape.grad(keep) 63 | def _grad_keep( 64 | grads: nd.NumDict[T], result: nd.NumDict[T], d: nd.NumDict[T], *, 65 | sf: Callable[[T], bool] 66 | ) -> Tuple[nd.NumDict[T]]: 67 | raise NotImplementedError() 68 | 69 | 70 | @gt.GradientTape.op() 71 | def drop(d: nd.NumDict[T], *, sf: Callable[[T], bool]) -> nd.NumDict[T]: 72 | return nd.NumDict._new(m={k: v for k, v in d.items() if not sf(k)}, c=d._c) 73 | 74 | @gt.GradientTape.grad(drop) 75 | def _grad_drop( 76 | grads: nd.NumDict[T], result: nd.NumDict[T], d: nd.NumDict[T], *, 77 | sf: Callable[[T], bool] 78 | ) -> Tuple[nd.NumDict[T]]: 79 | raise NotImplementedError() 80 | 81 | 82 | @gt.GradientTape.op() 83 | def keep_less(d: nd.NumDict[T], ref: nd.NumDict[T]) -> nd.NumDict[T]: 84 | return nd.NumDict._new( 85 | m={k: v for k, v in d.items() if v < ref[k]}, 86 | c=d._c) 87 | 88 | @gt.GradientTape.grad(keep_less) 89 | def _grad_keep_less( 90 | grads: nd.NumDict[T], result: nd.NumDict[T], d: nd.NumDict[T], 91 | ref: nd.NumDict[T] 92 | ) -> Tuple[nd.NumDict[T]]: 93 | raise NotImplementedError() 94 | 95 | 96 | @gt.GradientTape.op() 97 | def keep_greater(d: nd.NumDict[T], ref: nd.NumDict[T]) -> nd.NumDict[T]: 98 | return nd.NumDict._new( 99 | m={k: v for k, v in d.items() if v > ref[k]}, 100 | c=d._c) 101 | 102 | @gt.GradientTape.grad(keep_greater) 103 | def 
_grad_keep_greater(
104 |     grads: nd.NumDict[T], result: nd.NumDict[T], d: nd.NumDict[T],
105 |     ref: nd.NumDict[T]
106 | ) -> Tuple[nd.NumDict[T]]:
107 |     raise NotImplementedError()
108 | 
109 | 
110 | @gt.GradientTape.op()
111 | def keep_if(d: nd.NumDict[T], cond: nd.NumDict[T]) -> nd.NumDict[T]:
112 |     return nd.NumDict._new(
113 |         m={k: v for k, v in d.items() if cond[k] != 0.0},
114 |         c=d._c)
115 | 
116 | @gt.GradientTape.grad(keep_if)
117 | def _grad_keep_if(
118 |     grads: nd.NumDict[T], result: nd.NumDict[T], d: nd.NumDict[T],
119 |     cond: nd.NumDict[T]
120 | ) -> Tuple[nd.NumDict[T]]:
121 |     raise NotImplementedError()
122 | 
123 | 
124 | @gt.GradientTape.op()
125 | def squeeze(d: nd.NumDict[T]) -> nd.NumDict[T]:
126 |     return nd.NumDict._new(m={k: v for k, v in d.items() if v != d._c}, c=d._c)
127 | 
128 | @gt.GradientTape.grad(squeeze)
129 | def _grad_squeeze(
130 |     grads: nd.NumDict[T], result: nd.NumDict[T], d: nd.NumDict[T]
131 | ) -> Tuple[nd.NumDict[T]]:
132 |     raise NotImplementedError()
133 | 
134 | 
135 | @gt.GradientTape.op()
136 | def with_keys(d: nd.NumDict[T], *, ks: Iterable[T]) -> nd.NumDict[T]:
137 |     return nd.NumDict._new(m={k: d[k] for k in ks}, c=d._c)
138 | 
139 | @gt.GradientTape.grad(with_keys)
140 | def _grad_with_keys(
141 |     grads: nd.NumDict[T], result: nd.NumDict[T], d: nd.NumDict[T],
142 |     *, ks: Iterable[T]
143 | ) -> Tuple[nd.NumDict[T]]:
144 |     raise NotImplementedError()
145 | 
146 | 
147 | @gt.GradientTape.op()
148 | def transform_keys(
149 |     d: nd.NumDict[T1], *, kf: Callable[[T1], T2]
150 | ) -> nd.NumDict[T2]:
151 |     """
152 |     Transform the keys of d using kf and return the result.
153 | 
154 |     :param d: The NumDict to be transformed.
155 |     :param kf: A function mapping keys of d to a new key space.
156 | """ 157 | new = nd.NumDict._new(m={kf(k): d[k] for k in d}, c=d._c) 158 | if len(d) != len(new): 159 | raise ValueError("Function must be one-to-one on keys of arg d.") 160 | return new 161 | 162 | @gt.GradientTape.grad(transform_keys) 163 | def _grad_transform_keys( 164 | grads: nd.NumDict[T2], result: nd.NumDict[T2], d: nd.NumDict[T1], *, 165 | kf: Callable[[T1], T2] 166 | ) -> Tuple[nd.NumDict[T1]]: 167 | return (transform_keys(grads, kf={kf(k): k for k in d}.__getitem__),) 168 | 169 | 170 | @gt.GradientTape.op() 171 | def merge(*ds: nd.NumDict[T]) -> nd.NumDict[T]: 172 | if len(ds) == 0: 173 | raise ValueError("Merge must be provided with at least one argument.") 174 | d = nd.NumDict[T]._new(c=0.0) 175 | for _d in ds: 176 | d.update({k: v for k, v in _d.items()}, strict=True) 177 | return d 178 | 179 | @gt.GradientTape.grad(merge) 180 | def _grad_merge( 181 | grads: nd.NumDict[T], result: nd.NumDict[T], *ds: nd.NumDict[T] 182 | ) -> Tuple[nd.NumDict[T], ...]: 183 | return tuple(grads * mask(d) for d in ds) 184 | -------------------------------------------------------------------------------- /pyClarion/numdicts/gradient_tape.py: -------------------------------------------------------------------------------- 1 | from __future__ import annotations 2 | 3 | from . import numdict as nd 4 | 5 | from typing import (Tuple, List, Dict, Any, Set, Union, TypeVar, 6 | Hashable, Callable, overload, ClassVar, Optional) 7 | from typing_extensions import ParamSpec 8 | from functools import wraps 9 | from contextlib import contextmanager 10 | from contextvars import ContextVar 11 | from dataclasses import dataclass, field 12 | 13 | 14 | __all__ = ["GradientTape"] 15 | 16 | 17 | T = TypeVar("T", bound=Hashable) 18 | P = ParamSpec("P") 19 | C = TypeVar("C", bound=Callable) 20 | 21 | 22 | class TapeError(RuntimeError): 23 | """Raised when an inappropriate GradientTape event occurs.""" 24 | pass 25 | 26 | 27 | # Needs to have a nice repr, awful to read w/ large numdicts. 
28 | @dataclass
29 | class TapeCell:
30 |     """A gradient tape entry."""
31 |     value: nd.NumDict
32 |     op: str = ""
33 |     operands: Tuple[int, ...] = ()
34 |     kwds: dict = field(default_factory=dict)
35 | 
36 | 
37 | class GradientTape:
38 |     """
39 |     A gradient tape.
40 | 
41 |     Tracks diffable ops and computes forward and backward passes. Does not
42 |     support nesting.
43 | 
44 |     Tracked NumDicts are protected until the first call to self.gradients().
45 |     Afterwards, protection settings are restored to their original values.
46 |     """
47 | 
48 |     __slots__ = ("_cells", "_index", "_token", "_rec", "_prot", "_block")
49 | 
50 |     TAPE: ClassVar[ContextVar] = ContextVar("TAPE")
51 | 
52 |     OPS: ClassVar = {}
53 |     GRADS: ClassVar = {}
54 | 
55 |     _cells: List[TapeCell]
56 |     _prot: List[bool]
57 |     _index: Dict[int, int]
58 |     _block: Set[int]
59 |     _token: Any
60 |     _rec: bool
61 | 
62 |     def __init__(self) -> None:
63 |         self._cells = []
64 |         self._prot = []
65 |         self._index = {}
66 |         self._block = set()
67 |         self._rec = False
68 |         self._token = None
69 | 
70 |     def __repr__(self):
71 |         name = type(self).__name__
72 |         rec = self._rec
73 |         length = len(self._cells)
74 |         return f"<{name} length: {length} rec: {rec}>"
75 | 
76 |     def __enter__(self):
77 |         try:
78 |             self.TAPE.get()
79 |         except LookupError:
80 |             self._rec = True
81 |             self._token = self.TAPE.set(self)
82 |         else:
83 |             raise TapeError("Cannot stack gradient tapes.")
84 |         return self
85 | 
86 |     def __exit__(self, exc_type, exc_value, traceback):
87 |         self.TAPE.reset(self._token)
88 |         self._token = None
89 |         self._rec = False
90 | 
91 |     def _register(
92 |         self,
93 |         value: nd.NumDict,
94 |         op: str = "",
95 |         inputs: Tuple[nd.NumDict, ...] = (),
96 |         kwds: Optional[dict] = None
97 |     ) -> None:
98 |         # Add a new tape cell containing the given information.
99 | 
100 |         if not self._rec:
101 |             raise TapeError("Register NumDict only when recording.")
102 | 
103 |         for d in inputs:  # register any new operands
104 |             if id(d) not in self._index:
105 |                 self._register(d)
106 |         operands = tuple(self._index[id(d)] for d in inputs)
107 |         kwds = {} if kwds is None else kwds
108 |         new_cell = TapeCell(value, op, operands, kwds)
109 | 
110 |         self._index[id(value)] = len(self._cells)
111 |         self._cells.append(new_cell)
112 | 
113 |         # Temporarily protect value
114 |         self._prot.append(value.prot)
115 |         value.prot = True
116 | 
117 |     def _get_index(self, d: nd.NumDict) -> int:
118 |         # Return the tape index at which d is registered.
119 |         return self._index[id(d)]
120 | 
121 |     def _backward(
122 |         self, seed: int, indices: Set[int], seed_c: float
123 |     ) -> Dict[int, nd.NumDict]:
124 |         """
125 |         Perform a backward pass over the current tape.
126 | 
127 |         :param seed: Tape index seeding the backward pass.
128 |         :param indices: A set of tape indices to be treated as variables in
129 |             the backward pass. Gradients will be calculated only for these
130 |             variables.
131 |         :param seed_c: The seed value.
132 |         """
133 | 
134 |         if self._rec: raise TapeError("Stop recording before backward pass.")
135 |         delta = {seed: nd.NumDict(c=seed_c)}
136 |         for i, cell in reversed(list(enumerate(self._cells))):
137 |             delta.setdefault(i, nd.NumDict(c=0.0))
138 |             if cell.op:
139 |                 grad_op = self.GRADS[cell.op]
140 |                 if grad_op is None or id(cell.value) in self._block:
141 |                     pass
142 |                 else:
143 |                     inputs = (self._cells[k].value for k in cell.operands)
144 |                     grads = grad_op(delta[i], cell.value, *inputs, **cell.kwds)
145 |                     for j, k in enumerate(cell.operands):
146 |                         if k in indices or self._cells[k].op != "":
147 |                             delta.setdefault(k, nd.NumDict(c=0.0))
148 |                             delta[k] += grads[j]
149 |         return delta
150 | 
151 |     def reset(self) -> None:
152 |         """Reset tape."""
153 |         if self._rec:
154 |             raise TapeError("Cannot reset while recording.")
155 |         else:
156 |             # Restore original protection settings
157 |             for cell, prot in zip(self._cells, self._prot):
158 |                 cell.value.prot = prot
159 |             self._prot.clear()
160 |             self._cells.clear()
161 |             self._index.clear()
162 | 
163 |     def block(self, d: nd.NumDict) -> nd.NumDict:
164 |         """
165 |         Block gradient accumulation through d.
166 | 
167 |         Returns d as is for in-line use.
168 | 
169 |         >>> with GradientTape() as t:
170 |         ...     d1 = nd.NumDict(c=2)
171 |         ...     d2 = nd.NumDict({1: 1, 2: 2})
172 |         ...     d3 = d2.reduce_sum()
173 |         ...     result = d1 * t.block(d3)
174 |         >>> t.gradients(result, (d1, d2, d3))
175 |         (nd.NumDict(c=6), (nd.NumDict(c=3), nd.NumDict(c=0), nd.NumDict(c=2)))
176 |         """
177 | 
178 |         self._block.add(id(d))
179 |         return d
180 | 
181 |     @classmethod
182 |     @contextmanager
183 |     def pause(cls):
184 |         """
185 |         Suspend recording of autodiff ops.
186 | 
187 |         >>> with GradientTape() as t:
188 |         ...     d1 = nd.NumDict(c=3)
189 |         ...     with GradientTape.pause():
190 |         ...         d2 = nd.NumDict(c=4)
191 |         ...         d3 = d2 / 5
192 |         ...     d4 = d1 * d3
193 |         >>> t.gradients(d4, d2)
194 |         Traceback (most recent call last):
195 |         ...
196 | KeyError:
197 |         """
198 | 
199 |         try:
200 |             tape = cls.TAPE.get()
201 |         except LookupError:
202 |             yield
203 |         else:
204 |             rec = tape._rec
205 |             tape._rec = False
206 |             yield
207 |             tape._rec = rec
208 | 
209 |     @overload
210 |     def gradients(
211 |         self, output: nd.NumDict, variables: nd.NumDict
212 |     ) -> Tuple[nd.NumDict, nd.NumDict]:
213 |         ...
214 | 
215 |     @overload
216 |     def gradients(
217 |         self, output: nd.NumDict, variables: Tuple[nd.NumDict, ...]
218 |     ) -> Tuple[nd.NumDict, Tuple[nd.NumDict, ...]]:
219 |         ...
220 | 
221 |     def gradients(
222 |         self,
223 |         output: nd.NumDict,
224 |         variables: Union[nd.NumDict, Tuple[nd.NumDict, ...]]
225 |     ) -> Tuple[nd.NumDict, Union[nd.NumDict, Tuple[nd.NumDict, ...]]]:
226 |         """
227 |         Compute gradients of variables against output.
228 | 
229 |         Accepts a single NumDict or a tuple of NumDicts as variables.
230 | 
231 |         If variables is a single NumDict, its gradient is returned as a
232 |         single value. Otherwise a tuple is returned, with each element
233 |         matching the corresponding variables entry.
234 | 
235 |         :param output: Value against which to take gradients.
236 |         :param variables: A NumDict or tuple of NumDicts containing variable
237 |             values for the backward pass. Gradients will be calculated only
238 |             for these values.
239 | """ 240 | 241 | if self._rec: raise TapeError("Stop recording to compute gradients.") 242 | seed = self._get_index(output) 243 | if isinstance(variables, tuple): 244 | indices = set(self._get_index(var) for var in variables) 245 | _grads = self._backward(seed, indices, 1.0) 246 | grads = tuple(_grads[self._get_index(var)] for var in variables) 247 | self.reset() 248 | return output, grads 249 | else: 250 | indices = {self._get_index(variables)} 251 | _grads = self._backward(seed, indices, 1.0) 252 | grads = _grads[self._get_index(variables)] 253 | self.reset() 254 | return output, grads 255 | 256 | @classmethod 257 | def op(cls, no_grad=False) -> Callable[[C], C]: 258 | """Register a new op.""" 259 | 260 | def wrapper(f: Callable[P, nd.NumDict]) -> Callable[P, nd.NumDict]: 261 | 262 | name = f.__qualname__ 263 | 264 | @wraps(f) 265 | def op_wrapper(*args: P.args, **kwargs: P.kwargs) -> nd.NumDict: 266 | d = f(*args, **kwargs) 267 | try: 268 | tape = cls.TAPE.get() 269 | except LookupError: 270 | pass 271 | else: 272 | tape._register(d, name, args, kwargs) 273 | return d 274 | 275 | if name in cls.OPS: 276 | raise ValueError(f"Op name '{name}' already in registry.") 277 | cls.OPS[name] = op_wrapper 278 | if no_grad: 279 | cls.GRADS[name] = None 280 | 281 | return op_wrapper 282 | 283 | return wrapper # type: ignore 284 | 285 | @classmethod 286 | def grad(cls, op: Callable) -> Callable[[C], C]: 287 | """Register gradient function for op.""" 288 | 289 | name = op.__qualname__ 290 | if name not in cls.OPS: 291 | raise ValueError(f"Unregistered op '{name}' passed to grad.") 292 | 293 | def wrapper(func: C) -> C: 294 | cls.GRADS[name] = func 295 | return func 296 | 297 | return wrapper 298 | -------------------------------------------------------------------------------- /pyClarion/numdicts/nn_ops.py: -------------------------------------------------------------------------------- 1 | from __future__ import annotations 2 | 3 | from . import numdict as nd 4 | from . 
import gradient_tape as gt 5 | from . import basic_ops as bops 6 | from . import dict_ops as dops 7 | from . import vec_ops as vops 8 | 9 | from .utils import eltwise, op1, by 10 | from .utils import sigmoid as _sigmoid 11 | from .utils import tanh as _tanh 12 | 13 | from math import exp as _exp 14 | 15 | from typing import Tuple, Callable, Iterable, TypeVar, cast 16 | from random import choices 17 | 18 | 19 | __all__ = ["sigmoid", "tanh", "boltzmann", "sample", "cam_by", "eltwise_cam"] 20 | 21 | 22 | T = TypeVar("T") 23 | 24 | 25 | @gt.GradientTape.op() 26 | def sigmoid(d: nd.NumDict) -> nd.NumDict: 27 | return op1(_sigmoid, d) 28 | 29 | @gt.GradientTape.grad(sigmoid) 30 | def _grad_sigmoid( 31 | grads: nd.NumDict, result: nd.NumDict, d: nd.NumDict 32 | ) -> Tuple[nd.NumDict]: 33 | return (grads * result * (1 - result),) 34 | 35 | 36 | @gt.GradientTape.op() 37 | def tanh(d: nd.NumDict) -> nd.NumDict: 38 | """Apply the tanh function elementwise to d.""" 39 | return op1(_tanh, d) 40 | 41 | @gt.GradientTape.grad(tanh) 42 | def _grad_tanh( 43 | grads: nd.NumDict, result: nd.NumDict, d: nd.NumDict 44 | ) -> Tuple[nd.NumDict]: 45 | return (grads * (1 - result * result),) 46 | 47 | 48 | @gt.GradientTape.op() 49 | def boltzmann(d: nd.NumDict, t: nd.NumDict) -> nd.NumDict: 50 | """Construct a Boltzmann distribution from d with temperature t.""" 51 | if not len(d): 52 | raise ValueError("Arg d should not be empty.") 53 | if len(t): 54 | raise ValueError("Temperature should be a scalar.") 55 | ks, vs = zip(*d.items()) 56 | vmax, _t = max(vs), t._c 57 | # v - vmax is a stability trick; softmax(x) = softmax(x + c) 58 | exp_v = [_exp((v - vmax) / _t) for v in vs] 59 | assert len(exp_v) 60 | sum_exp_v = sum(exp_v) 61 | vals = [v / sum_exp_v for v in exp_v] 62 | return nd.NumDict._new(m={k: v for k, v in zip(ks, vals)}) 63 | 64 | @gt.GradientTape.grad(boltzmann) 65 | def _grad_boltzmann( 66 | grads: nd.NumDict, result: nd.NumDict, d: nd.NumDict, t: nd.NumDict 67 | ) -> 
Tuple[nd.NumDict]: 68 | raise NotImplementedError() 69 | 70 | 71 | @gt.GradientTape.op() 72 | def sample(d: nd.NumDict) -> nd.NumDict: 73 | """Sample a key from d according to its strength.""" 74 | if not len(d): 75 | raise ValueError("NumDict must be non-empty.") 76 | cs, ws = tuple(zip(*d.items())) 77 | s = choices(cs, weights=cast(Tuple[float], ws)) 78 | return nd.NumDict._new(m={k: 1.0 if k in s else 0.0 for k in d}) 79 | 80 | @gt.GradientTape.grad(sample) 81 | def _grad_sample( 82 | grads: nd.NumDict, result: nd.NumDict, d: nd.NumDict 83 | ) -> Tuple[nd.NumDict]: 84 | return (grads * result,) 85 | 86 | 87 | @gt.GradientTape.op() 88 | def cam_by(d: nd.NumDict, *, kf: Callable) -> nd.NumDict: 89 | return by(d, f=_cam, kf=kf) 90 | 91 | @gt.GradientTape.grad(cam_by) 92 | def _grad_cam_by( 93 | grads: nd.NumDict, result: nd.NumDict, d: nd.NumDict 94 | ) -> Tuple[nd.NumDict]: 95 | raise NotImplementedError() 96 | 97 | def _cam(xs: Iterable[float]) -> float: 98 | _xs = [0.0]; _xs.extend(xs) 99 | return max(_xs) + min(_xs) 100 | 101 | 102 | @gt.GradientTape.op() 103 | def eltwise_cam(*ds: nd.NumDict) -> nd.NumDict: 104 | return eltwise(*ds, f=_cam) 105 | 106 | @gt.GradientTape.grad(eltwise_cam) 107 | def _grad_eltwise_cam( 108 | grads: nd.NumDict[T], result: nd.NumDict[T], *ds: nd.NumDict[T] 109 | ) -> Tuple[nd.NumDict[T], ...]: 110 | raise NotImplementedError() 111 | -------------------------------------------------------------------------------- /pyClarion/numdicts/numdict.py: -------------------------------------------------------------------------------- 1 | 2 | from __future__ import annotations 3 | 4 | __all__ = ["NumDict"] 5 | 6 | from . import basic_ops as bops 7 | from . import dict_ops as dops 8 | from . import vec_ops as vops 9 | from . 
import nn_ops 10 | 11 | from typing import Callable, Dict, Mapping, TypeVar, Iterator, Any, Optional 12 | from typing_extensions import Concatenate, ParamSpec 13 | from functools import wraps 14 | from math import isnan, isinf 15 | 16 | 17 | P = ParamSpec("P") 18 | R = TypeVar("R") 19 | T = TypeVar("T") 20 | T1, T2 = TypeVar("T1"), TypeVar("T2") 21 | 22 | 23 | def inplace( 24 | f: Callable[Concatenate["NumDict[T]", P], R] 25 | ) -> Callable[Concatenate["NumDict[T]", P], R]: 26 | 27 | @wraps(f) 28 | def wrapper(d: "NumDict[T]", *args: P.args, **kwargs: P.kwargs) -> R: 29 | if d.prot: raise RuntimeError("Cannot mutate protected NumDict data.") 30 | return f(d, *args, **kwargs) 31 | 32 | return wrapper 33 | 34 | 35 | class NumDict(Mapping[T, float]): 36 | """ 37 | A numerical dictionary. 38 | 39 | Represents a mapping from symbolic keys to (real) numerical values with 40 | support for mathematical operations, op chaining, and automatic 41 | differentiation. 42 | 43 | :param m: Symbol-value associations represented by the NumDict. 44 | :param c: Optional constant, default value for all keys not in the map given 45 | by m. 46 | :param prot: Bool indicating whether the NumDict is protected. When True, 47 | in-place operations are disabled. 
48 | """ 49 | 50 | __slots__ = ("_m", "_c", "_prot") 51 | 52 | _m: Dict[T, float] 53 | _c: float 54 | _prot: bool 55 | 56 | def __init__( 57 | self, 58 | m: Optional[Mapping[T, Any]] = None, 59 | c: Any = 0.0, 60 | prot: bool = False 61 | ) -> None: 62 | self._m = {k: float(v) for k, v in m.items()} if m else {} 63 | self._c = float(c) 64 | self._prot = prot 65 | 66 | @classmethod 67 | def _new( 68 | cls, 69 | m: Optional[Dict[T, float]] = None, 70 | c: float = 0.0, 71 | prot: bool = False 72 | ) -> "NumDict[T]": 73 | # Fast instance constructor (omits checks); for use in op defs 74 | new = cls.__new__(cls) 75 | new._m = m if m is not None else {} 76 | new._c = c 77 | new._prot = prot 78 | return new 79 | 80 | @property 81 | def m(self) -> Dict[T, float]: 82 | return self._m.copy() 83 | 84 | @property 85 | def c(self) -> float: 86 | """The NumDict constant.""" 87 | return self._c 88 | 89 | @c.setter 90 | @inplace 91 | def c(self, val: float) -> None: 92 | self._c = float(val) 93 | 94 | @property 95 | def prot(self) -> bool: 96 | """Bool indicating whether self is protected.""" 97 | return self._prot 98 | 99 | @prot.setter 100 | def prot(self, val: bool) -> None: 101 | self._prot = bool(val) 102 | 103 | def __eq__(self, other: Any) -> bool: 104 | if isinstance(other, NumDict): 105 | return self._m == other._m and self.c == other.c 106 | else: 107 | return NotImplemented 108 | 109 | def __repr__(self) -> str: 110 | m = f"{repr(self._m)}" if self._m else "" 111 | c = f"c={self.c}" 112 | prot = f"prot={self.prot}" if self.prot else "" 113 | args = ", ".join(arg for arg in (m, c, prot) if arg) 114 | return f"{type(self).__name__}({args})" 115 | 116 | def __len__(self) -> int: 117 | return len(self._m) 118 | 119 | def __iter__(self) -> Iterator[T]: 120 | yield from iter(self._m) 121 | 122 | def __contains__(self, key: Any) -> bool: 123 | return key in self._m 124 | 125 | def __getitem__(self, key: Any) -> float: 126 | try: 127 | return self._m[key] # type: ignore 128 | 
except KeyError: 129 | return self._c 130 | 131 | @inplace 132 | def __setitem__(self, key: Any, val: float | int | bool) -> None: 133 | self._m[key] = float(val) # type: ignore 134 | 135 | @inplace 136 | def __delitem__(self, key: Any) -> None: 137 | del self._m[key] 138 | 139 | @inplace 140 | def clear(self) -> None: 141 | """Clear all key-value associations in self.""" 142 | self._m = {} 143 | 144 | @inplace 145 | def update( 146 | self, m: Mapping[T, Any], clear: bool = False, strict: bool = False 147 | ) -> None: 148 | """ 149 | Update self with new key-value pairs. 150 | 151 | Behaves like dict.update(). Will overwrite existing values. 152 | 153 | :param m: Mapping containing new key-value pairs. 154 | :param clear: If True, clear self prior to running the update. 155 | :param strict: If True, will throw a ValueError if existing values are 156 | overwritten. 157 | """ 158 | if clear: self.clear() 159 | n_old = len(self) 160 | self._m.update({k: float(v) for k, v in m.items()}) 161 | if strict and len(self._m) < n_old + len(m): 162 | raise ValueError("Arg m not disjoint with self") 163 | 164 | def has_inf(self) -> bool: 165 | """Return True iff any key is mapped to inf or the constant is inf.""" 166 | return any(map(isinf, self.values())) or isinf(self.c) 167 | 168 | def has_nan(self) -> bool: 169 | """Return True iff any key is mapped to nan or the constant is nan.""" 170 | return any(map(isnan, self.values())) or isnan(self.c) 171 | 172 | def copy(self: "NumDict[T]") -> "NumDict[T]": 173 | """Return an unprotected copy of self.""" 174 | return type(self)(m=self, c=self.c) 175 | 176 | def pipe( 177 | self, 178 | f: Callable[Concatenate["NumDict[T]", P], "NumDict[T2]"], 179 | *args: P.args, 180 | **kwdargs: P.kwargs 181 | ) -> "NumDict[T2]": 182 | """Call a custom function as part of a NumDict method chain.""" 183 | return f(self, *args, **kwdargs) 184 | 185 | ### Mathematical Dunder Methods ### 186 | 187 | __neg__ = bops.neg 188 | __abs__ = bops.absolute 189 
| 190 | __lt__ = bops.less 191 | __gt__ = bops.greater 192 | __le__ = bops.less_equal 193 | __ge__ = bops.greater_equal 194 | 195 | __add__ = bops.add 196 | __radd__ = bops.add 197 | __mul__ = bops.mul 198 | __rmul__ = bops.mul 199 | __sub__ = bops.sub 200 | __rsub__ = bops.rsub 201 | __truediv__ = bops.div 202 | __rtruediv__ = bops.rdiv 203 | __pow__ = bops.power 204 | __rpow__ = bops.rpow 205 | 206 | __or__ = bops.maximum 207 | __ror__ = bops.maximum 208 | __and__ = bops.minimum 209 | __rand__ = bops.minimum 210 | 211 | __matmul__ = vops.matmul 212 | __rmatmul__ = vops.matmul 213 | 214 | ### Mathematical Methods ### 215 | 216 | isfinite = bops.isfinite 217 | isnan = bops.isnan 218 | isinf = bops.isinf 219 | replace_inf = bops.replace_inf 220 | neg = bops.neg 221 | abs = bops.absolute 222 | sign = bops.sign 223 | log = bops.log 224 | exp = bops.exp 225 | 226 | isclose = bops.isclose 227 | less = bops.less 228 | greater = bops.greater 229 | less_equal = bops.less_equal 230 | greater_equal = bops.greater_equal 231 | 232 | add = bops.add 233 | mul = bops.mul 234 | sub = bops.sub 235 | rsub = bops.rsub 236 | div = bops.div 237 | rdiv = bops.rdiv 238 | pow = bops.power 239 | rpow = bops.rpow 240 | 241 | max = bops.maximum 242 | min = bops.minimum 243 | 244 | ### Dict Ops ### 245 | 246 | mask = dops.mask 247 | set_c = dops.set_c 248 | isolate = dops.isolate 249 | keep = dops.keep 250 | drop = dops.drop 251 | keep_less = dops.keep_less 252 | keep_greater = dops.keep_greater 253 | keep_if = dops.keep_if 254 | squeeze = dops.squeeze 255 | with_keys = dops.with_keys 256 | transform_keys = dops.transform_keys 257 | merge = dops.merge 258 | 259 | ### Aggregation Ops ### 260 | 261 | reduce_sum = vops.reduce_sum 262 | matmul = vops.matmul 263 | reduce_max = vops.reduce_max 264 | reduce_min = vops.reduce_min 265 | put = vops.put 266 | mul_from = vops.mul_from 267 | div_from = vops.div_from 268 | sum_by = vops.sum_by 269 | max_by = vops.max_by 270 | min_by = vops.min_by 271 | 
eltwise_max = vops.eltwise_max 272 | eltwise_min = vops.eltwise_min 273 | outer = vops.outer 274 | 275 | ### NN Ops ### 276 | 277 | sigmoid = nn_ops.sigmoid 278 | tanh = nn_ops.tanh 279 | boltzmann = nn_ops.boltzmann 280 | sample = nn_ops.sample 281 | cam_by = nn_ops.cam_by 282 | eltwise_cam = nn_ops.eltwise_cam 283 | -------------------------------------------------------------------------------- /pyClarion/numdicts/utils.py: -------------------------------------------------------------------------------- 1 | from __future__ import annotations 2 | 3 | from . import numdict as nd 4 | 5 | from typing import (Callable, Union, Iterable, TypeVar, Any, Set, Dict, List, 6 | overload, Optional) 7 | from math import copysign, exp 8 | from math import isclose as _isclose 9 | from math import isinf as _isinf 10 | from math import isnan as _isnan 11 | from math import isfinite as _isfinite 12 | 13 | T, T1, T2 = TypeVar("T"), TypeVar("T1"), TypeVar("T2") 14 | NDLike = Union["nd.NumDict[T]", float, int, bool] 15 | 16 | 17 | def coerce2( 18 | f: Callable[[nd.NumDict[T], nd.NumDict[T]], nd.NumDict[T]] 19 | ) -> Callable[[nd.NumDict[T], NDLike[T]], nd.NumDict[T]]: 20 | 21 | def wrapper(d1: nd.NumDict[T], d2: NDLike[T]) -> nd.NumDict[T]: 22 | d2 = nd.NumDict[T](c=d2) if isinstance(d2, (float, int, bool)) else d2 23 | return f(d1, d2) 24 | 25 | wrapper.__name__ = f.__name__ 26 | wrapper.__qualname__ = f.__qualname__ 27 | 28 | return wrapper 29 | 30 | 31 | def op1(f: Callable[[float], float], d: nd.NumDict[T]) -> nd.NumDict[T]: 32 | return nd.NumDict._new(m={k: f(v) for k, v in d.items()}, c=f(d.c)) 33 | 34 | 35 | def op2( 36 | f: Callable[[float, float], float], d1: nd.NumDict[T], d2: nd.NumDict[T] 37 | ) -> nd.NumDict[T]: 38 | keys = set(d1) | set(d2) 39 | return nd.NumDict._new( 40 | m={k: f(d1[k], d2[k]) for k in keys}, 41 | c=f(d1.c, d2.c)) 42 | 43 | 44 | ### ABSTRACT AGGREGATION FUNCTIONS ### 45 | 46 | 47 | def reduce( 48 | d: nd.NumDict[Any], 49 | *, 50 | f: 
Callable[[Iterable[float]], float], 51 | initial: float, 52 | key: Optional[T] = None 53 | ) -> nd.NumDict[T]: 54 | values = [initial]; values.extend(d.values()) 55 | result = f(values) 56 | if key is None: 57 | return nd.NumDict[T]._new(m={}, c=result) 58 | else: 59 | return nd.NumDict[T]._new(m={key: result}) 60 | 61 | 62 | def by( 63 | d: nd.NumDict[T1], *, 64 | f: Callable[[Iterable[float]], float], 65 | kf: Callable[[T1], T2], 66 | ) -> nd.NumDict[T2]: 67 | groups: Dict[T2, List[float]] = {} 68 | for k, v in d.items(): groups.setdefault(kf(k), []).append(v) 69 | return nd.NumDict._new(m={k: f(v) for k, v in groups.items()}) 70 | 71 | 72 | def eltwise( 73 | *ds: nd.NumDict[T], f: Callable[[Iterable[float]], float] 74 | ) -> nd.NumDict[T]: 75 | if len(ds) < 1: raise ValueError("At least one input is necessary.") 76 | ks: Set[T] = set(); ks = ks.union(*ds) 77 | return nd.NumDict._new( 78 | m={k: f([d[k] for d in ds]) for k in ks}, 79 | c=f([d.c for d in ds])) 80 | 81 | 82 | # ARITHMETIC HELPER FUNCTIONS # 83 | 84 | 85 | def isinf(x: float) -> float: 86 | return float(_isinf(x)) 87 | 88 | 89 | def isnan(x: float) -> float: 90 | return float(_isnan(x)) 91 | 92 | 93 | def isfinite(x: float) -> float: 94 | return float(_isfinite(x)) 95 | 96 | 97 | def isclose(x: float, y: float) -> float: 98 | return float(_isclose(x, y)) 99 | 100 | 101 | def sign(x: float) -> float: 102 | return copysign(1.0, x) if x != 0.0 else 0.0 103 | 104 | 105 | def sigmoid(x: float) -> float: 106 | return 1 / (1 + exp(-x)) if x >= 0 else exp(x) / (1 + exp(x)) 107 | 108 | 109 | def tanh(x: float) -> float: 110 | return 2 * sigmoid(2 * x) - 1 111 | 112 | 113 | def lt(x: float, y: float) -> float: 114 | return float(x < y) 115 | 116 | 117 | def gt(x: float, y: float) -> float: 118 | return float(x > y) 119 | 120 | 121 | def le(x: float, y: float) -> float: 122 | return float(x <= y) 123 | 124 | 125 | def ge(x: float, y: float) -> float: 126 | return float(x >= y) 127 | 
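A note on the arithmetic helpers above: `sigmoid` uses a piecewise form so that the exponential is only ever evaluated at a non-positive argument, and `tanh` is derived from it via an identity. The following is a minimal standalone sketch (the two helpers reproduced outside the package, so it runs without pyClarion installed) illustrating why the branch matters numerically:

```python
from math import exp

def sigmoid(x: float) -> float:
    # Same piecewise form as utils.sigmoid: for x < 0 evaluate exp(x),
    # which merely underflows to 0.0 for very negative x, instead of
    # exp(-x), which would raise OverflowError for large |x|.
    return 1 / (1 + exp(-x)) if x >= 0 else exp(x) / (1 + exp(x))

def tanh(x: float) -> float:
    # tanh via the identity tanh(x) = 2 * sigmoid(2x) - 1
    return 2 * sigmoid(2 * x) - 1

print(sigmoid(0.0))      # 0.5
print(sigmoid(-1000.0))  # 0.0 (graceful underflow, no OverflowError)
print(tanh(0.0))         # 0.0
```

By contrast, the naive form `1 / (1 + exp(-x))` raises OverflowError at `x = -1000.0`, since `exp(1000)` exceeds the float range.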
-------------------------------------------------------------------------------- /pyClarion/numdicts/vec_ops.py: -------------------------------------------------------------------------------- 1 | from __future__ import annotations 2 | 3 | from . import numdict as nd 4 | from . import gradient_tape as gt 5 | 6 | from . import basic_ops as bops 7 | from . import dict_ops as dops 8 | from .utils import reduce, by, eltwise 9 | 10 | from itertools import product 11 | from typing import TypeVar, Tuple, Callable, overload, Any 12 | 13 | 14 | __all__ = [ 15 | "reduce_sum", "reduce_max", "reduce_min", "put", "sum_by", "max_by", 16 | "min_by" 17 | ] 18 | 19 | T = TypeVar("T") 20 | T1, T2 = TypeVar("T1"), TypeVar("T2") 21 | 22 | 23 | @overload 24 | def reduce_sum(d: nd.NumDict[Any]) -> nd.NumDict[Any]: 25 | ... 26 | 27 | @overload 28 | def reduce_sum(d: nd.NumDict[Any], *, key: T) -> nd.NumDict[T]: 29 | ... 30 | 31 | @gt.GradientTape.op() 32 | def reduce_sum(d, *, key=None): 33 | return reduce(d, f=sum, initial=0.0, key=key) 34 | 35 | @overload 36 | def _grad_reduce_sum( 37 | grads: nd.NumDict[T2], result: nd.NumDict[T2], d: nd.NumDict[T1], *, 38 | key: T2 39 | ) -> Tuple[nd.NumDict[T1]]: 40 | ... 41 | 42 | @overload 43 | def _grad_reduce_sum( 44 | grads: nd.NumDict[Any], result: nd.NumDict[Any], d: nd.NumDict[T1], *, 45 | key: None 46 | ) -> Tuple[nd.NumDict[T1]]: 47 | ... 48 | 49 | @gt.GradientTape.grad(reduce_sum) 50 | def _grad_reduce_sum(grads, result, d, *, key): 51 | return (dops.isolate(grads, key=key) * dops.mask(d),) 52 | 53 | 54 | def matmul(d1: nd.NumDict[T], d2: nd.NumDict[T]) -> nd.NumDict[Any]: 55 | return reduce_sum(d1 * d2) 56 | 57 | 58 | @overload 59 | def reduce_max(d: nd.NumDict[Any]) -> nd.NumDict[Any]: 60 | ... 61 | 62 | @overload 63 | def reduce_max(d: nd.NumDict[Any], *, key: T) -> nd.NumDict[T]: 64 | ... 
65 | 66 | @gt.GradientTape.op() 67 | def reduce_max(d, *, key=None): 68 | return reduce(d, f=max, initial=float("-inf"), key=key) 69 | 70 | @overload 71 | def _grad_reduce_max( 72 | grads: nd.NumDict[T2], result: nd.NumDict[T2], d: nd.NumDict[T1], *, 73 | key: T2 74 | ) -> Tuple[nd.NumDict[T1]]: 75 | ... 76 | 77 | @overload 78 | def _grad_reduce_max( 79 | grads: nd.NumDict[Any], result: nd.NumDict[Any], d: nd.NumDict[T1], *, 80 | key: None 81 | ) -> Tuple[nd.NumDict[T1]]: 82 | ... 83 | 84 | @gt.GradientTape.grad(reduce_max) 85 | def _grad_reduce_max(grads, result, d, *, key): 86 | return (dops.isolate(grads, key=key) * bops.isclose(d, result) * dops.mask(d),) 87 | 88 | 89 | @overload 90 | def reduce_min(d: nd.NumDict[Any]) -> nd.NumDict[Any]: 91 | ... 92 | 93 | @overload 94 | def reduce_min(d: nd.NumDict[Any], *, key: T) -> nd.NumDict[T]: 95 | ... 96 | 97 | @gt.GradientTape.op() 98 | def reduce_min(d, *, key=None): 99 | return reduce(d, f=min, initial=float("+inf"), key=key) 100 | 101 | @overload 102 | def _grad_reduce_min( 103 | grads: nd.NumDict[T2], result: nd.NumDict[T2], d: nd.NumDict[T1], *, 104 | key: T2 105 | ) -> Tuple[nd.NumDict[T1]]: 106 | ... 107 | 108 | @overload 109 | def _grad_reduce_min( 110 | grads: nd.NumDict[Any], result: nd.NumDict[Any], d: nd.NumDict[T1], *, 111 | key: None 112 | ) -> Tuple[nd.NumDict[T1]]: 113 | ... 114 | 115 | @gt.GradientTape.grad(reduce_min) 116 | def _grad_reduce_min(grads, result, d, *, key): 117 | return (dops.isolate(grads, key=key) * bops.isclose(d, result) * dops.mask(d),) 118 | 119 | 120 | @gt.GradientTape.op() 121 | def put( 122 | d: nd.NumDict[T1], source: nd.NumDict[T2], *, 123 | kf: Callable[[T1], T2], strict: bool = False 124 | ) -> nd.NumDict[T1]: 125 | """ 126 | Map keys of d to values from source according to kf. 
127 | 128 | Constructs a new mapping such that its members are as follows: 129 | {k: source[kf(k)] for k in d} 130 | """ 131 | return nd.NumDict._new( 132 | m={k: source[kf(k)] for k in d if not strict or kf(k) in source}) 133 | 134 | @gt.GradientTape.grad(put) 135 | def _grad_put( 136 | grads: nd.NumDict[T1], 137 | result: nd.NumDict[T1], 138 | d: nd.NumDict[T1], 139 | source: nd.NumDict[T2], 140 | *, 141 | kf: Callable[[T1], T2], 142 | strict: bool 143 | ) -> Tuple[nd.NumDict[T1], nd.NumDict[T2]]: 144 | 145 | return (nd.NumDict(c=0), sum_by(grads * dops.mask(result), kf=kf)) 146 | 147 | 148 | @gt.GradientTape.op() 149 | def mul_from( 150 | d: nd.NumDict[T1], source: nd.NumDict[T2], *, 151 | kf: Callable[[T1], T2], strict: bool = False 152 | ) -> nd.NumDict[T1]: 153 | """ 154 | Map keys of d to values from source according to kf. 155 | 156 | Constructs a new mapping such that its members are as follows: 157 | {k: v * source[kf(k)] for k, v in d.items()} 158 | """ 159 | return nd.NumDict._new( 160 | m={k: v * source[kf(k)] for k, v in d.items() 161 | if not strict or kf(k) in source}) 162 | 163 | @gt.GradientTape.grad(mul_from) 164 | def _grad_mul_from( 165 | grads: nd.NumDict[T1], 166 | result: nd.NumDict[T1], 167 | d: nd.NumDict[T1], 168 | source: nd.NumDict[T2], 169 | *, 170 | kf: Callable[[T1], T2], 171 | strict: bool 172 | ) -> Tuple[nd.NumDict[T1], nd.NumDict[T2]]: 173 | raise NotImplementedError() 174 | 175 | 176 | @gt.GradientTape.op() 177 | def div_from( 178 | d: nd.NumDict[T1], source: nd.NumDict[T2], *, 179 | kf: Callable[[T1], T2], strict: bool = False 180 | ) -> nd.NumDict[T1]: 181 | """ 182 | Map keys of d to values from source according to kf. 
183 | 184 | Constructs a new mapping such that its members are as follows: 185 | {k: v / source[kf(k)] for k, v in d.items()} 186 | """ 187 | return nd.NumDict._new( 188 | m={k: v / source[kf(k)] for k, v in d.items() 189 | if not strict or kf(k) in source}) 190 | 191 | @gt.GradientTape.grad(div_from) 192 | def _grad_div_from( 193 | grads: nd.NumDict[T1], 194 | result: nd.NumDict[T1], 195 | d: nd.NumDict[T1], 196 | source: nd.NumDict[T2], 197 | *, 198 | kf: Callable[[T1], T2], 199 | strict: bool 200 | ) -> Tuple[nd.NumDict[T1], nd.NumDict[T2]]: 201 | raise NotImplementedError() 202 | 203 | 204 | @gt.GradientTape.op() 205 | def sum_by(d: nd.NumDict[T1], *, kf: Callable[[T1], T2]) -> nd.NumDict[T2]: 206 | """ 207 | Sum the values of d grouped by kf. 208 | 209 | The resulting numdict contains a mapping of the following form. 210 | {k_out: sum(d[k] for k in d if kf(k) == k_out)} 211 | """ 212 | return by(d, f=sum, kf=kf) 213 | 214 | @gt.GradientTape.grad(sum_by) 215 | def _grad_sum_by( 216 | grads: nd.NumDict[T2], result: nd.NumDict[T2], d: nd.NumDict[T1], *, 217 | kf: Callable[[T1], T2] 218 | ) -> Tuple[nd.NumDict]: 219 | return (put(d, grads, kf=kf),) 220 | 221 | 222 | @gt.GradientTape.op() 223 | def max_by(d: nd.NumDict[T1], *, kf: Callable[[T1], T2]) -> nd.NumDict[T2]: 224 | """ 225 | Take the maximum of the values of d grouped by kf. 226 | 227 | The resulting numdict contains a mapping of the following form. 228 | {k_out: max(d[k] for k in d if kf(k) == k_out)} 229 | """ 230 | return by(d, f=max, kf=kf) 231 | 232 | @gt.GradientTape.grad(max_by) 233 | def _grad_max_by( 234 | grads: nd.NumDict[T2], result: nd.NumDict[T2], d: nd.NumDict[T1], *, 235 | kf: Callable[[T1], T2] 236 | ) -> Tuple[nd.NumDict]: 237 | return (put(d, grads, kf=kf) * bops.isclose(d, put(d, result, kf=kf)),) 238 | 239 | 240 | @gt.GradientTape.op() 241 | def min_by(d: nd.NumDict[T1], *, kf: Callable[[T1], T2]) -> nd.NumDict[T2]: 242 | """ 243 | Take the minimum of the values of d grouped by kf. 
244 | 245 | The resulting numdict contains a mapping of the following form. 246 | {k_out: min(d[k] for k in d if kf(k) == k_out)} 247 | """ 248 | return by(d, f=min, kf=kf) 249 | 250 | @gt.GradientTape.grad(min_by) 251 | def _grad_min_by( 252 | grads: nd.NumDict[T2], result: nd.NumDict[T2], d: nd.NumDict[T1], *, 253 | kf: Callable[[T1], T2] 254 | ) -> Tuple[nd.NumDict]: 255 | return (put(d, grads, kf=kf) * bops.isclose(d, put(d, result, kf=kf)),) 256 | 257 | 258 | @gt.GradientTape.op() 259 | def eltwise_max(*ds: nd.NumDict[T]) -> nd.NumDict[T]: 260 | return eltwise(*ds, f=max) 261 | 262 | @gt.GradientTape.grad(eltwise_max) 263 | def _grad_eltwise_max( 264 | grads: nd.NumDict[T], result: nd.NumDict[T], *ds: nd.NumDict[T] 265 | ) -> Tuple[nd.NumDict[T], ...]: 266 | raise NotImplementedError() 267 | 268 | 269 | @gt.GradientTape.op() 270 | def eltwise_min(*ds: nd.NumDict[T]) -> nd.NumDict[T]: 271 | return eltwise(*ds, f=min) 272 | 273 | @gt.GradientTape.grad(eltwise_min) 274 | def _grad_eltwise_min( 275 | grads: nd.NumDict[T], result: nd.NumDict[T], *ds: nd.NumDict[T] 276 | ) -> Tuple[nd.NumDict[T], ...]: 277 | raise NotImplementedError() 278 | 279 | 280 | @gt.GradientTape.op() 281 | def outer(d1: nd.NumDict[T1], d2: nd.NumDict[T2]) -> nd.NumDict[Tuple[T1, T2]]: 282 | return nd.NumDict._new( 283 | m={(k1, k2): d1[k1] * d2[k2] for k1, k2 in product(d1, d2)}) 284 | 285 | @gt.GradientTape.grad(outer) 286 | def _grad_outer( 287 | grads: nd.NumDict[Tuple[T1, T2]], result: nd.NumDict[Tuple[T1, T2]], 288 | d1: nd.NumDict[T1], d2: nd.NumDict[T2] 289 | ) -> Tuple[nd.NumDict[T1], nd.NumDict[T2]]: 290 | raise NotImplementedError() 291 | 292 | -------------------------------------------------------------------------------- /pyClarion/utils/__init__.py: -------------------------------------------------------------------------------- 1 | from .pprint import pprint, pformat 2 | from .load import load 3 | from . 
import inspect 4 | 5 | 6 | __all__ = ["pprint", "pformat", "load", "inspect"] 7 | -------------------------------------------------------------------------------- /pyClarion/utils/inspect.py: -------------------------------------------------------------------------------- 1 | from __future__ import annotations 2 | from typing import Tuple, List, Callable 3 | 4 | from ..base import Structure, feature, uris 5 | 6 | 7 | def links(s: Structure) -> List[Tuple[str, str]]: 8 | return [(module.path, uris.join(module.path, input[0])) # type: ignore 9 | for module in s.modules() for input in module.inputs] 10 | 11 | 12 | def _fspace_key(f: feature): 13 | vkey = ((type(None), int, str).index(type(f.v)), f.v) 14 | return (f.d, f.l, vkey) 15 | 16 | 17 | def _get_fspace(s: Structure, getter: Callable) -> List[feature]: 18 | fspace = [] 19 | for module in s.modules(): 20 | try: 21 | _fspace = getter(module.process) 22 | except NotImplementedError: 23 | pass 24 | else: 25 | for f in _fspace: fspace.append(f) 26 | return sorted(fspace, key=_fspace_key) 27 | 28 | 29 | def fspace(s: Structure) -> List[feature]: 30 | return reprs(s) + flags(s) + params(s) + cmds(s) 31 | 32 | 33 | def reprs(s: Structure) -> List[feature]: 34 | return _get_fspace(s, lambda p: p.reprs) 35 | 36 | 37 | def flags(s: Structure) -> List[feature]: 38 | return _get_fspace(s, lambda p: p.flags) 39 | 40 | 41 | def params(s: Structure) -> List[feature]: 42 | return _get_fspace(s, lambda p: p.params) 43 | 44 | 45 | def cmds(s: Structure) -> List[feature]: 46 | return _get_fspace(s, lambda p: p.cmds) 47 | 48 | 49 | def nops(s: Structure) -> List[feature]: 50 | return _get_fspace(s, lambda p: p.nops) 51 | -------------------------------------------------------------------------------- /pyClarion/utils/load.py: -------------------------------------------------------------------------------- 1 | """Interpreter for the pyClarion Explicit Knowledge Language.""" 2 | 3 | 4 | from contextlib import contextmanager 5 | from 
dataclasses import dataclass, field 6 | import re 7 | from typing import Dict, List, Tuple, Iterable, Optional, Set, IO, Generator 8 | from collections import OrderedDict, ChainMap, deque 9 | from itertools import combinations 10 | 11 | from ..base import dimension, feature, chunk, rule, Structure, Module 12 | from .. import numdicts as nd 13 | from .. import dev as cld 14 | from .. import components as comps 15 | from . import inspect 16 | 17 | 18 | sign = r"\+|\-" 19 | uint = r"\d+" 20 | int_ = r"(?:\+|\-)?\d+" 21 | float_ = r"(?:\+|\-)?(?:\d+|\.\d+|\d+\.\d+)(?:e(?:\+|\-)?\d+)?" 22 | term = r"\w+" 23 | ref = r"{\*?\w+(?:#(?:\+|\-)?\d+)?}" 24 | key = r"\w+=" 25 | pdelim, fdelim, dd = r"\/", r"#", r"\.\." 26 | ellipsis = r"\.\.\." 27 | name = fr"(?:{term}|{ref}|{sign})(?:\.?(?:{term}|{ref}|{sign}))*" 28 | path = fr"(?:{name})(?:{pdelim}{name})*" 29 | purepath = fr"(?:{term})(?:{pdelim}{term})*" 30 | uri = fr"(?:{dd}{pdelim})?{path}(?:{fdelim}{path})?" 31 | param = fr"{key}({float_}|{ref})" 32 | literal = fr"(?:{uri}|{float_}|{param})" 33 | indent = r"'INDENT" 34 | dedent = r"'DEDENT" 35 | 36 | 37 | class CCMLError(RuntimeError): 38 | pass 39 | 40 | 41 | @dataclass 42 | class Token: 43 | t: str = "" 44 | i: str = "" 45 | l: int = 1 46 | d: Dict[str, str] = field(default_factory=dict) 47 | elts: List["Token"] = field(default_factory=list) 48 | 49 | 50 | class IndentAnalyzer: 51 | def __init__(self): 52 | self.level = 0 53 | self.lineno = None 54 | 55 | def __call__(self, mo: re.Match) -> str: 56 | spaces, exprs = mo.groups() 57 | num = len(spaces) 58 | if num % 4: 59 | raise CCMLError(f"Line {self.lineno}: Indent must be a multiple " 60 | "of 4 spaces") 61 | delta = (num // 4) - self.level 62 | self.level += delta 63 | if delta == 0: 64 | return exprs 65 | if delta == 1: 66 | return "\n".join([indent, exprs]) 67 | if delta < 0: 68 | return "\n".join([dedent] * -delta + [exprs]) 69 | else: 70 | raise CCMLError(f"Line {self.lineno}: Indent too deep") 71 | 72 | 
@contextmanager 73 | def at_line(self, lineno: int) -> Generator[None, None, None]: 74 | self.lineno = lineno 75 | yield 76 | self.lineno = None 77 | 78 | def close(self): 79 | while self.level: 80 | self.level -= 1 81 | yield dedent 82 | 83 | 84 | class Tokenizer: 85 | # preprocessor patterns 86 | p_comment = r"(?:^|\s+)#(.*)" 87 | p_cmd_expr = ": (?! *\n)" 88 | p_cmd_trailing_space = r":\s*\Z" 89 | p_composite = "(.*:)(\n)(.+)" 90 | p_indent = r"^( *)(\S.*)" 91 | p_empty = r"^\s*" 92 | 93 | tokens = OrderedDict([ 94 | ("INDENT", indent), 95 | ("DEDENT", dedent), 96 | ("ELLIPSIS", ellipsis), 97 | ("DATA", fr"(?P{literal}(?: +{literal})*)"), 98 | ("CHUNK", fr"chunk(?: +(?P{term}|{ref}))?:"), 99 | ("CTX", fr"ctx:"), 100 | ("SIG", fr"sig:"), 101 | ("FOR", fr"(?:for +(?Peach|rotations|combinations(?= +k={uint}))(?:(?<=combinations) +k=(?P{uint}))?):"), 102 | ("VAR", fr"var(?: +(?P{term}))?:"), 103 | ("RULESET", fr"ruleset(?: +(?P{term}|{ref}))?:"), 104 | ("RULE", fr"rule(?: +(?P{term}|{ref}))?:"), 105 | ("CONC", fr"conc(?: +(?P{term}|{ref}))?:"), 106 | ("COND", fr"cond(?: +(?P{term}|{ref}))?:"), 107 | ("STORE", fr"store +(?P{purepath}):") 108 | ]) 109 | tok_re = re.compile(r"|".join(fr"(?P<{s}>{p})" for s, p in tokens.items())) 110 | 111 | def __call__(self, stream: IO) -> Generator[Token, None, None]: 112 | for lineno, logical_line in self.preprocess(stream): 113 | mo = self.tok_re.fullmatch(logical_line) 114 | if mo: 115 | t = mo.lastgroup 116 | args = re.fullmatch(self.tokens[t], mo[t]).groupdict() # type: ignore 117 | yield Token(t=f"'{t}", d=args, l=lineno) # type: ignore 118 | else: 119 | raise CCMLError(f"Line {lineno}: Invalid expression") 120 | 121 | def preprocess(self, stream: IO) -> Generator[Tuple[int, str], None, None]: 122 | indenter = IndentAnalyzer() 123 | for lineno, line in enumerate(stream, start=1): 124 | line = re.sub(self.p_comment, "", line) # type: ignore 125 | line = re.sub(self.p_cmd_expr, ":\n", line) 126 | line = 
re.sub(self.p_cmd_trailing_space, ":", line) 127 | line = re.sub(self.p_composite, self.delimit_composite_line, line) 128 | with indenter.at_line(lineno): 129 | line = re.sub(self.p_indent, indenter, line) 130 | line = re.sub(self.p_empty, "", line) 131 | for logical_line in line.splitlines(): 132 | yield lineno, logical_line.strip() 133 | else: 134 | for dedent in indenter.close(): 135 | yield lineno, dedent # type: ignore 136 | 137 | def delimit_composite_line(self, mo: re.Match) -> str: 138 | pre, delim, post = mo.groups() 139 | repr(post) 140 | if delim: 141 | res = "\n".join([fr"{pre}", r"'INDENT", post, r"'DEDENT"]) 142 | return res 143 | else: 144 | return pre 145 | 146 | 147 | class Parser: 148 | terminals = ["'DATA", "'ELLIPSIS"] 149 | grammar = { 150 | r"'ROOT": r"(?:'STORE|'VAR)*", 151 | r"'STORE": r"(?:'CHUNK|'RULE|'RULESET|'CTX|'SIG|'VAR|'FOR)*", 152 | r"'CHUNK": r"(?:(?:'DATA|'FOR)+|'ELLIPSIS)", 153 | r"'RULE": r"'CONC(?:'COND|'FOR)+", 154 | r"'CONC": r"(?:(?:'DATA|'FOR)+|'ELLIPSIS)", 155 | r"'COND": r"(?:(?:'DATA|'FOR)+|'ELLIPSIS)", 156 | r"'RULESET": r"(?:'RULE|'VAR|'CTX|'FOR|'SIG)+", 157 | r"'SIG": r"(?:'DATA|'FOR)+", 158 | r"'VAR": r"(?:'DATA|'FOR)+", 159 | (r"'CTX", r"∅"): r"(?:'CHUNK|'RULE|'RULESET|'CTX|'SIG|'VAR|'FOR)+", 160 | (r"'CTX", r"s"): r"(?:'RULE|'CTX|'SIG|'VAR|'FOR)+", 161 | (r"'FOR", r"∅"): r"(?:'VAR)+(?:'CHUNK|'RULE|'RULESET|'CTX|'SIG|'FOR)+", 162 | (r"'FOR", r"s"): r"(?:'VAR)+(?:'RULE|'CTX|'SIG|'FOR)+", 163 | (r"'FOR", r"r"): r"(?:'VAR)+(?:'COND|'FOR)+", 164 | (r"'FOR", r"c"): r"(?:'VAR)+(?:'DATA|'FOR)+", 165 | (r"'FOR", r"l"): r"(?:'VAR)+(?:'DATA|'FOR)+", 166 | } 167 | index_updates = { 168 | r"'CHUNK": r"c", 169 | r"'CONC": r"c", 170 | r"'COND": r"c", 171 | r"'RULE": r"r", 172 | r"'RULESET": r"s", 173 | r"'SIG": r"c", 174 | r"'VAR": r"l" 175 | } 176 | 177 | def __call__(self, stream: Iterable[Token]) -> Token: 178 | stack, current = [], Token(t="'ROOT", i="∅") 179 | for token in stream: 180 | current = self.build_tree(stack, 
current, token) 181 | assert current.t == "'ROOT" 182 | self.check_grammar(current) 183 | return current 184 | 185 | def build_tree( 186 | self, stack: List[Token], current: Token, token: Token 187 | ) -> Token: 188 | t = token.t 189 | if t == "'INDENT": 190 | stack.append(current) 191 | return current.elts[-1] 192 | elif t == "'DEDENT": 193 | self.check_grammar(current) 194 | assert stack 195 | return stack.pop() 196 | else: 197 | token.i = self.index_updates.get(current.t, current.i) 198 | current.elts.append(token) 199 | return current 200 | 201 | def check_grammar(self, tok: Token) -> None: 202 | seq = "".join(elt.t for elt in tok.elts) 203 | pat = (self.grammar[tok.t] if tok.t in self.grammar 204 | else self.grammar[(tok.t, tok.i)]) # type: ignore 205 | if not re.fullmatch(pat, seq): 206 | raise CCMLError(f"Line {tok.l}: Syntax error " 207 | f"in {tok.t[1:]} block") 208 | for elt in tok.elts: 209 | if elt.t not in self.terminals: 210 | self.check_grammar(elt) 211 | 212 | 213 | @dataclass 214 | class Load: 215 | address: str 216 | cs: List[chunk] = field(default_factory=list) 217 | rs: List[rule] = field(default_factory=list) 218 | fs: nd.NumDict[Tuple[chunk, feature]] = field(default_factory=nd.NumDict) 219 | ws: nd.NumDict[Tuple[chunk, dimension]] = field(default_factory=nd.NumDict) 220 | cr: nd.NumDict[Tuple[chunk, rule]] = field(default_factory=nd.NumDict) 221 | rc: nd.NumDict[Tuple[rule, chunk]] = field(default_factory=nd.NumDict) 222 | 223 | @property 224 | def wn(self) -> nd.NumDict[chunk]: 225 | return self.ws.abs().sum_by(kf=cld.first) 226 | 227 | 228 | @dataclass 229 | class Context: 230 | loaded: List[Load] = field(default_factory=list) 231 | load: Optional[Load] = None 232 | lstack: List[str] = field(default_factory=list) 233 | fstack: List[Tuple[feature, Optional[float]]] = field(default_factory=list) 234 | fdelims: List[int] = field(default_factory=list) 235 | fspace: Optional[Set[feature]] = None 236 | frames: ChainMap = 
field(default_factory=ChainMap) 237 | vstack: List[str] = field(default_factory=list) 238 | for_vars: list = field(default_factory=list) 239 | for_index: List[str] = field(default_factory=list) 240 | var_id: str = "" 241 | lineno: List[str] = field(default_factory=list) 242 | _ref: str = r"{(?P\*)?(?P\w+)(?:#(?P(?:\+|\-)?\d+))?}" 243 | 244 | @property 245 | def vdata(self): 246 | assert self.var_id in self.frames 247 | return self.frames[self.var_id] 248 | 249 | @property 250 | def l(self): 251 | return self.lineno[-1].lstrip("0") 252 | 253 | @contextmanager 254 | def fspace_scope(self, fspace: Optional[Set[feature]]) -> Generator[None, None, None]: 255 | self.fspace = fspace 256 | yield 257 | self.fspace = None 258 | 259 | @contextmanager 260 | def feature_scope(self) -> Generator[None, None, None]: 261 | self.fdelims.append(len(self.fstack)) 262 | yield 263 | self.fstack = self.fstack[:self.fdelims.pop()] 264 | 265 | @contextmanager 266 | def label_scope(self, lineno: int, lbl: str) -> Generator[None, None, None]: 267 | with self.at_line(lineno): 268 | self.lstack.append(self.deref(lineno, lbl)) 269 | yield 270 | self.lstack.pop() 271 | 272 | @contextmanager 273 | def ruleset_scope(self, lineno: int, lbl: str) -> Generator[None, None, None]: 274 | with self.label_scope(lineno, lbl): 275 | with self.var_frame(): 276 | with self.feature_scope(): 277 | yield 278 | 279 | @contextmanager 280 | def store_scope(self, store_id: str) -> Generator[None, None, None]: 281 | self.load = Load(store_id) 282 | with self.var_frame(): 283 | with self.feature_scope(): 284 | yield 285 | self.loaded.append(self.load) 286 | self.load = None 287 | 288 | @contextmanager 289 | def var_scope(self, lineno: int, var_id: str) -> Generator[None, None, None]: 290 | if var_id in self.frames.maps[0]: 291 | raise CCMLError(f"Line {lineno}: Var '{var_id}' already defined " 292 | f"in current scope") 293 | self.vstack.append(self.var_id) 294 | self.frames.maps[0][var_id] = [] 295 | self.var_id = 
var_id 296 | yield 297 | self.var_id = self.vstack.pop() 298 | 299 | @contextmanager 300 | def var_frame(self) -> Generator[None, None, None]: 301 | self.frames = self.frames.new_child() 302 | yield 303 | self.frames = self.frames.parents 304 | 305 | @contextmanager 306 | def for_scope(self): 307 | self.for_index.append("") 308 | with self.var_frame(): 309 | yield 310 | self.for_index.pop() 311 | 312 | @contextmanager 313 | def at_line(self, lineno: int) -> Generator[None, None, None]: 314 | self.lineno.append(str(lineno).zfill(4)) 315 | yield 316 | self.lineno.pop() 317 | 318 | def gen_uri(self) -> str: 319 | assert self.load 320 | coords = "-".join([self.lineno[-1], *self.for_index]) 321 | fragment = "-".join(filter(None, [coords, *self.lstack])) 322 | return "#".join(filter(None, [self.load.address, fragment])) 323 | 324 | def deref(self, lineno: int, literal: str) -> str: 325 | with self.at_line(lineno): 326 | return re.sub(self._ref, self._deref, literal) 327 | 328 | def _deref(self, mo: re.Match) -> str: 329 | try: 330 | data = self.frames[mo["id"]] 331 | except KeyError as e: 332 | raise CCMLError(f"Line {self.l}: Undefined reference") from e 333 | if mo["index"]: 334 | try: 335 | data = data[int(mo["index"])] 336 | except IndexError as e: 337 | raise CCMLError(f"Line {self.l}: Index out of bounds") from e 338 | if mo["level"]: 339 | if isinstance(data, str): 340 | mo2 = re.fullmatch(self._ref, f"{{{data}}}") 341 | if mo2: 342 | return self._deref(mo2) 343 | else: 344 | raise CCMLError(f"Line {self.l}: Invalid reference") 345 | else: 346 | raise CCMLError(f"Line {self.l}: Invalid list reference") 347 | if isinstance(data, list): 348 | data = " ".join(data) 349 | return data 350 | 351 | 352 | class Interpreter: 353 | data_patterns = { 354 | r"c": (fr"(?P{uri})(?: +(?P{uri}))?(?: +l=(?P{int_}|{ref}))?" 
355 | fr"(?: +w=(?P{float_}|{ref}))?"), 356 | r"l": fr"(?P{literal}(?: +{literal})*)", 357 | } 358 | 359 | def __init__(self, structure: Optional[Structure] = None): 360 | self.structure = structure 361 | self.dispatcher = { 362 | (r"'DATA", r"c"): self.feature, 363 | (r"'DATA", r"l"): self.list_, 364 | r"'ELLIPSIS": self.ellipsis, 365 | r"'CHUNK": self.chunk, 366 | r"'CONC": self.conc, 367 | r"'COND": self.cond, 368 | r"'RULE": self.rule, 369 | r"'RULESET": self.ruleset, 370 | r"'STORE": self.store, 371 | r"'CTX": self.ctx_, 372 | r"'SIG": self.sig, 373 | r"'VAR": self.var, 374 | r"'FOR": self.for_, 375 | } 376 | 377 | def __call__(self, ast): 378 | ctx = Context() 379 | fspace = (set(inspect.fspace(self.structure)) 380 | if self.structure is not None else None) 381 | with ctx.fspace_scope(fspace): 382 | self.dispatch(ast.elts, ctx) 383 | return ctx.loaded 384 | 385 | def dispatch(self, stream, ctx): 386 | for node in stream: 387 | func = (self.dispatcher[node.t] if node.t in self.dispatcher 388 | else self.dispatcher[(node.t, node.i)]) 389 | func(node, ctx) 390 | 391 | def feature(self, tok: Token, ctx: Context) -> None: 392 | args = self.parse_data(tok) 393 | d, v, l, w = [ctx.deref(tok.l, args[k] or "") for k in "dvlw"] 394 | assert d 395 | if v == "": 396 | v = None 397 | else: 398 | try: v = int(v) 399 | except ValueError: pass 400 | l = int(l) if l else 0 401 | w = float(w) if w != "" else None 402 | f = feature(d, v, l) 403 | if ctx.fspace is not None and f not in ctx.fspace: 404 | raise CCMLError(f"Line {tok.l}: {f} not a member of working " 405 | "feature space") 406 | ctx.fstack.append((f, w)) 407 | 408 | def parse_data(self, tok: Token): 409 | mo = re.fullmatch(self.data_patterns[tok.i], tok.d["data"]) 410 | if mo: 411 | args = mo.groupdict() 412 | else: 413 | raise CCMLError(f"Line {tok.l}: Invalid feature expression") 414 | return args 415 | 416 | def ellipsis(self, tok: Token, ctx: Context) -> None: 417 | pass 418 | 419 | def chunk(self, tok: Token, 
ctx: Context) -> None: 420 | with ctx.label_scope(tok.l, tok.d["chunk_id"] or ""): 421 | with ctx.feature_scope(): 422 | self.dispatch(tok.elts, ctx) 423 | self.load_chunk(tok, ctx) 424 | 425 | @staticmethod 426 | def load_chunk(tok: Token, ctx: Context, noctx=False) -> None: 427 | assert ctx.load 428 | c = chunk(ctx.gen_uri()) 429 | ctx.load.cs.append(c) 430 | fdata, dims, ws = ctx.fstack, [], {} 431 | if noctx: fdata = fdata[ctx.fdelims[-1]:] 432 | for f, w in fdata: 433 | ctx.load.fs[(c, f)] = 1.0 434 | dims.append(f.dim) 435 | if w is not None: 436 | if f.dim in ws: 437 | raise CCMLError(f"Line {tok.l}: Ambiguous weight " 438 | f"specification in {tok.t} block") 439 | else: 440 | ws[f.dim] = w 441 | for dim in dims: 442 | ctx.load.ws[(c, dim)] = ws.get(dim, 1.0) 443 | 444 | def conc(self, tok: Token, ctx: Context) -> None: 445 | with ctx.label_scope(tok.l, tok.d["conc_id"] or ""): 446 | with ctx.feature_scope(): 447 | self.dispatch(tok.elts, ctx) 448 | self.load_chunk(tok, ctx, noctx=True) 449 | assert ctx.load 450 | ctx.load.rc[(ctx.load.rs[-1], ctx.load.cs[-1])] = 1.0 451 | 452 | def cond(self, tok: Token, ctx: Context) -> None: 453 | with ctx.label_scope(tok.l, tok.d["cond_id"] or ""): 454 | with ctx.feature_scope(): 455 | self.dispatch(tok.elts, ctx) 456 | self.load_chunk(tok, ctx, noctx=False) 457 | assert ctx.load 458 | ctx.load.cr[(ctx.load.cs[-1], ctx.load.rs[-1])] = 1.0 459 | 460 | def rule(self, tok: Token, ctx: Context) -> None: 461 | assert ctx.load 462 | with ctx.label_scope(tok.l, tok.d["rule_id"] or ""): 463 | ctx.load.rs.append(rule(ctx.gen_uri())) 464 | self.dispatch(tok.elts, ctx) 465 | 466 | def ruleset(self, tok: Token, ctx: Context) -> None: 467 | with ctx.ruleset_scope(tok.l, tok.d["ruleset_id"] or ""): 468 | self.dispatch(tok.elts, ctx) 469 | 470 | def store(self, tok: Token, ctx: Context) -> None: 471 | store_id = tok.d["store_id"] 472 | if self.structure is not None: 473 | try: 474 | store = self.structure[store_id] 475 | except 
KeyError as e: 476 | raise CCMLError(f"Line {tok.l}: No module at '{store_id}' " 477 | "in working structure") from e 478 | else: 479 | if not isinstance(store, Module): 480 | raise CCMLError(f"Line {tok.l}: Expected Module " 481 | f"instance at '{store_id}', found " 482 | f"{store.__class__.__name__} instead") 483 | elif not isinstance(store.process, comps.Store): 484 | raise CCMLError(f"Line {tok.l}: Expected process of " 485 | f"type Store at '{store_id}', found " 486 | f"{store.__class__.__name__} instead") 487 | with ctx.store_scope(store_id): 488 | self.dispatch(tok.elts, ctx) 489 | 490 | def ctx_(self, tok: Token, ctx: Context) -> None: 491 | with ctx.feature_scope(): 492 | self.dispatch(tok.elts, ctx) 493 | 494 | def sig(self, tok: Token, ctx: Context) -> None: 495 | self.dispatch(tok.elts, ctx) 496 | 497 | def list_(self, tok: Token, ctx: Context) -> None: 498 | args = self.parse_data(tok) 499 | _ldata = ctx.deref(tok.l, args["data"]) 500 | _ldata = re.sub(" +", " ", _ldata.strip()) 501 | ldata = _ldata.split(" ") 502 | ctx.vdata.extend(ldata) 503 | 504 | def var(self, tok: Token, ctx: Context) -> None: 505 | var_id = tok.d["var_id"] 506 | with ctx.var_scope(tok.l, var_id): 507 | self.dispatch(tok.elts, ctx) 508 | 509 | def for_(self, tok: Token, ctx: Context) -> None: 510 | index = 0 511 | for elt in tok.elts: 512 | if elt.t == r"'VAR": index += 1 513 | else: break 514 | with ctx.for_scope(): 515 | self.dispatch(tok.elts[:index], ctx) 516 | for _ in self._iter(tok, ctx): 517 | if tok.i in ["∅", "s"]: 518 | with ctx.feature_scope(): 519 | self.dispatch(tok.elts[index:], ctx) 520 | else: 521 | assert tok.i in ["r", "c", "l"] 522 | self.dispatch(tok.elts[index:], ctx) 523 | 524 | def _iter(self, tok: Token, ctx: Context): 525 | vars_, seqs = list(zip(*ctx.frames.maps[0].items())) 526 | lens = set(len(seq) for seq in seqs) 527 | if len(lens) != 1: 528 | raise CCMLError(f"Line {tok.l}: Iterated var lists must all " 529 | f"have the same length") 530 | else: 
531 | n, = lens 532 | with ctx.var_frame(): 533 | if tok.d["mode"] == r"each": 534 | for i in range(n): 535 | for j, k in enumerate(vars_): 536 | ctx.frames[k] = seqs[j][i] 537 | ctx.for_index[-1] = str(i).zfill(3) 538 | yield 539 | elif tok.d["mode"] == r"rotations": 540 | deq = deque(range(n)) 541 | for i in range(n): 542 | for j, k in enumerate(vars_): 543 | ctx.frames[k] = [seqs[j][_i] for _i in deq] 544 | ctx.for_index[-1] = str(i).zfill(3) 545 | deq.rotate(-1) 546 | yield 547 | else: 548 | assert tok.d["mode"] == r"combinations" 549 | k = int(tok.d["k"]) 550 | for i, (i1, i2) in enumerate(combinations(range(n), k)): 551 | for j, k in enumerate(vars_): 552 | ctx.frames[k] = [seqs[j][i1], seqs[j][i2]] 553 | ctx.for_index[-1] = str(i).zfill(3) 554 | yield 555 | 556 | 557 | def load(f: IO, structure: Structure) -> None: 558 | t, p, i = Tokenizer(), Parser(), Interpreter(structure) 559 | for _load in i(p(t(f))): 560 | assert _load.address in structure 561 | module = structure[_load.address] 562 | store = module.process 563 | assert isinstance(store, comps.Store) 564 | p = module.inputs[0][1]() # pull parameters from parameter module 565 | # Populate store 566 | store.cf = _load.fs 567 | store.cw = _load.ws 568 | store.wn = _load.wn 569 | store.cr = _load.cr 570 | store.rc = _load.rc 571 | if store.cb is not None: 572 | store.cb.update(p, nd.NumDict({c: 1 for c in _load.cs})) 573 | if store.rb is not None: 574 | store.rb.update(p, nd.NumDict({r: 1 for r in _load.rs})) 575 | -------------------------------------------------------------------------------- /pyClarion/utils/pprint.py: -------------------------------------------------------------------------------- 1 | """A copy of stdlib pprint augmented to support pyClarion objects.""" 2 | 3 | 4 | __all__ = ["PrettyPrinter", "pprint", "pformat"] 5 | 6 | 7 | from ..numdicts import NumDict 8 | 9 | from typing import ClassVar 10 | import pprint as _pprint 11 | 12 | 13 | class PrettyPrinter(_pprint.PrettyPrinter): 14 | 15 
| _dispatch: ClassVar[dict] = _pprint.PrettyPrinter._dispatch # type: ignore 16 | 17 | def _pprint_numdict( 18 | self, object, stream, indent, allowance, context, level 19 | ): 20 | 21 | write = stream.write 22 | name = type(object).__name__ 23 | indent += len(name) + 1 24 | end = [ 25 | " " * indent, 26 | 'c=', _pprint.saferepr(object.c), 27 | ')'] 28 | 29 | stream.write(name + '(') 30 | self._pprint_dict(object, stream, indent, allowance, context, level) # type: ignore 31 | stream.write(',\n') 32 | stream.write("".join(end)) 33 | 34 | _dispatch[NumDict.__repr__] = _pprint_numdict 35 | 36 | def pprint(object, stream=None, indent=1, width=80, depth=None, *, 37 | compact=False): 38 | """Pretty-print a Python object to a stream [default is sys.stdout].""" 39 | 40 | printer = PrettyPrinter( 41 | stream=stream, indent=indent, width=width, depth=depth, compact=compact 42 | ) 43 | printer.pprint(object) 44 | 45 | 46 | def pformat(object, indent=1, width=80, depth=None, *, compact=False): 47 | """Format a Python object into a pretty-printed representation.""" 48 | 49 | printer = PrettyPrinter( 50 | indent=indent, width=width, depth=depth, compact=compact 51 | ) 52 | 53 | return printer.pformat(object) 54 | -------------------------------------------------------------------------------- /pyClarion/utils/visualize.py: -------------------------------------------------------------------------------- 1 | from __future__ import annotations 2 | from typing import Sequence 3 | import warnings 4 | 5 | from ..base import Structure, uris 6 | from . 
import inspect 7 | 8 | try: 9 | import matplotlib as mpl 10 | import matplotlib.pyplot as plt 11 | except ImportError: 12 | warnings.warn("Install matplotlib to enable visualization tools.") 13 | else: 14 | def adjacency_matrix( 15 | ax: plt.Axes, s: Structure, exclude: Sequence[str] = () 16 | ) -> plt.Axes: 17 | ms = [m.path for m in s.modules() if m.path not in exclude] 18 | ms_rev = list(reversed(ms)) 19 | links = sorted(set((tgt, src.partition(uris.FSEP)[0]) 20 | for tgt, src in inspect.links(s))) 21 | lbls = [uris.relativize(path, s.path) for path in ms] 22 | coords = [(ms.index(src), ms_rev.index(tgt)) 23 | for tgt, src in links if tgt not in exclude and src not in exclude] 24 | x, y = list(zip(*coords)) 25 | ax.set_aspect("equal") 26 | ax.set( 27 | xlabel="Inputs", 28 | xticks=list(range(len(lbls))), 29 | yticks=list(range(len(lbls))), 30 | yticklabels=reversed(lbls),) 31 | ax.set_xlabel('Sources') 32 | ax.xaxis.tick_top() 33 | ax.xaxis.set_label_position('top') 34 | ax.set_xticklabels(lbls, rotation=90) 35 | ax.set_ylabel('Targets', rotation=90) 36 | ax.yaxis.tick_right() 37 | ax.yaxis.set_label_position('right') 38 | ax.scatter(x, y, c="black") 39 | ax.set_axisbelow(True) 40 | ax.grid() 41 | if len(exclude) > 0: 42 | exc = sorted( 43 | f"\'{uris.relativize(path, s.path)}\'" for path in exclude) 44 | ax.annotate(f"Excluded: {', '.join(exc)}", 45 | (0, - mpl.rcParams["font.size"] - 2), xycoords="axes points") 46 | return ax 47 | 48 | 49 | -------------------------------------------------------------------------------- /readme.md: -------------------------------------------------------------------------------- 1 | `pyClarion` is a python package for implementing agents in the Clarion cognitive architecture. 2 | 3 | It is highly experimental and aims to be easy to learn, read, use and extend. 
4 | 5 | The primary resource for the implementation is Ron Sun's [*Anatomy of the Mind*](https://oxford.universitypressscholarship.com/view/10.1093/acprof:oso/9780199794553.001.0001/acprof-9780199794553). For a short overview of the architecture, see the chapter on Clarion in the [*Oxford Handbook of Cognitive Science*](https://www.oxfordhandbooks.com/view/10.1093/oxfordhb/9780199842193.001.0001/oxfordhb-9780199842193). 6 | 7 | # Key Features 8 | 9 | - Convenient and modular agent assembly 10 | - A simple language for initializing explicit knowledge 11 | - Numerical dictionaries with autodiff support 12 | 13 | See the tutorial for a demonstration of most of these features. 14 | 15 | # Installation 16 | 17 | In a terminal, navigate to the pyClarion folder then: 18 | 19 | - To install in developer mode (recommended), run 20 | ```pip install -e .``` 21 | - To install as a regular library, run 22 | ```pip install .``` 23 | 24 | WARNING: Be sure to include the '`.`' in the install commands. Otherwise, your installation will most likely fail. 25 | 26 | Developer mode is currently recommended due to the evolving nature of the source code. Prior to installing in this mode, please ensure that the pyClarion folder is located at a convenient long-term location. Installing in developer mode means that changes made to the pyClarion folder will be reflected in the pyClarion package (moving the folder will break the package). 
27 | -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | import setuptools 2 | 3 | with open("readme.md", "r") as fh: 4 | long_description = fh.read() 5 | 6 | description = ( 7 | "Experimental Python Implementation of the Clarion Cognitive Architecture" 8 | ) 9 | 10 | setuptools.setup( 11 | name="pyClarion", 12 | version="0.18.0", 13 | author="Can Serif Mekik", 14 | author_email="can.mekik@gmail.com", 15 | description=description, 16 | long_description=long_description, 17 | long_description_content_type="text/markdown", 18 | url="https://github.com/cmekik/pyClarion", 19 | packages=setuptools.find_packages(exclude=["tests", "tests.*"]), 20 | classifiers=[ 21 | "Programming Language :: Python :: 3", 22 | "License :: OSI Approved :: MIT License", 23 | "Operating System :: OS Independent", 24 | ], 25 | python_requires='>=3.7', 26 | install_requires=["typing_extensions"] 27 | ) -------------------------------------------------------------------------------- /tutorial/agent.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/cmekik/pyClarion/84e89bfbdacef2ec1bf64388c10415ed8ae19d6a/tutorial/agent.png -------------------------------------------------------------------------------- /tutorial/demo.py: -------------------------------------------------------------------------------- 1 | """ 2 | This demo simulates a simple Braitenberg vehicle in pyClarion. 3 | 4 | The world is two dimensional. The agent can move left, right, up or down. 5 | The agent also has four virtual light sensors, one for each cardinal direction. 6 | 7 | The agent is configured to move away from light. The behavior is programmed as 8 | fixed rules in frs.ccml. 9 | """ 10 | 11 | 12 | import pyClarion as cl 13 | import random 14 | 15 | 16 | def build(scfg, acfg, path): 17 | """ 18 | Build a simple pyClarion agent. 
19 | 20 | The agent has one stimulus module for inputs, one action module for 21 | external actions. Action selection is driven by explicit fixed rules. 22 | 23 | :param scfg: Stimulus config, passed to a `Receptors` constructor. 24 | :param acfg: Action config, passed to an `Actions` constructor. 25 | :param path: Path to a ccml file. Contents loaded to a `Store` instance. 26 | """ 27 | 28 | # construct agent 29 | with cl.Structure("agent") as agent: 30 | cl.Module("vis", cl.Receptors(scfg)) 31 | params = cl.Module("params", cl.Repeat(), ["params"]) 32 | cl.Module("null", cl.Repeat(), ["null"]) 33 | with cl.Structure("acs"): 34 | cl.Module("bi", cl.CAM(), ["../vis"]) 35 | cl.Module("bu", cl.BottomUp(), 36 | ["fr_store#0", "fr_store#1", "fr_store#2", "bi"]) 37 | cl.Module("fr", cl.ActionRules(), 38 | ["../params", "fr_store#3", "fr_store#4", "bu"]) 39 | cl.Module("td", cl.TopDown(), ["fr_store#0", "fr_store#1", "fr#0"]) 40 | cl.Module("bo", cl.CAM(), ["td"]) 41 | cl.Module("mov", cl.ActionSampler(), ["../params", "bo"], 42 | ["../mov#cmds"]) 43 | cl.Module("fr_store", cl.Store(), 44 | ["../params", "../null", "../null", "../null"]) 45 | cl.Module("mov", cl.Actions(acfg), ["acs/mov#0"]) 46 | 47 | # set temperature parameters for rule & action selection 48 | params.output = cl.NumDict({ 49 | cl.feature("acs/fr#temp"): 1e-2, 50 | cl.feature("acs/mov#temp"): 1e-2, 51 | }) 52 | 53 | # load fixed rules 54 | with open(path) as f: 55 | cl.load(f, agent) 56 | 57 | return agent 58 | 59 | 60 | def main(): 61 | # Stimulus config 62 | scfg = ["lum-L", "lum-R", "lum-U", "lum-D"] 63 | acfg = {"move": ["L", "R", "U", "D"]} 64 | 65 | # Build agent 66 | agent = build(scfg, acfg, "frs.ccml") 67 | vis = agent["vis"] # retrieve visual module 68 | mov = agent["mov"] # retrieve movement module 69 | 70 | # Pretty print all features defined in agent 71 | print("DEFINED FEATURES", end="\n\n") 72 | cl.pprint(cl.inspect.fspace(agent)) 73 | print() # leave a blank line 74 | 75 | # Visualize 
agent structure (if matplotlib is installed) 76 | try: 77 | import matplotlib.pyplot as plt 78 | except ImportError: 79 | pass 80 | else: 81 | import pyClarion.utils.visualize as clv 82 | fig, ax = plt.subplots() 83 | ax = clv.adjacency_matrix(ax, agent) 84 | fig.set_size_inches(6, 6) 85 | fig.tight_layout() 86 | plt.show() 87 | 88 | # Run simulation for 20 steps 89 | print("SIMULATION", end="\n\n") 90 | for i in range(20): 91 | vis.process.stimulate([random.choice(scfg)]) # Stimulate random sensor 92 | agent.step() 93 | 94 | display = [ 95 | f"Stimulus: {cl.pformat(vis.output)}", 96 | f"Response: {cl.pformat(mov.output)}" 97 | ] 98 | if i: print() 99 | print(f"Step {i}:") 100 | for s in display: 101 | print(" ", s, sep="") 102 | 103 | 104 | if __name__ == "__main__": 105 | main() 106 | -------------------------------------------------------------------------------- /tutorial/frs.ccml: -------------------------------------------------------------------------------- 1 | store acs/fr_store: 2 | ruleset bbv: 3 | for each: 4 | var direction: L R U D 5 | var opposite: R L D U 6 | rule: 7 | conc: 8 | mov#cmd-move {opposite} 9 | cond: 10 | vis#lum-{direction} 11 | -------------------------------------------------------------------------------- /tutorial/notes.md: -------------------------------------------------------------------------------- 1 | # Understanding `pyClarion` 2 | 3 | ## Overview 4 | 5 | The basic representational unit of Clarion theory is the connectionist node. Clarion encodes even explicit knowledge, which consists of chunks and rules, using localist connectionist nodes. Clarion may therefore be viewed as a modular neural network architecture. This is the fundamental insight informing the design of `pyClarion`. 6 | 7 | `PyClarion` agents are structured as collections of networked components. These networks are constructed using `Module` and `Structure` objects. 
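To make the networked-module picture concrete, here is a minimal stand-alone sketch. This is an illustrative stand-in, not the pyClarion API: `ToyModule` and `ToyStructure` are hypothetical names, and real `pyClarion` modules compute over `NumDict` objects rather than bare floats, but the wiring idea is the same — each module names its inputs by path, and stepping the structure propagates outputs through the network.

```python
# Illustrative stand-in, NOT the pyClarion API: a toy structure whose modules
# pull from named input modules on each step.
from typing import Callable, Dict, List

class ToyModule:
    def __init__(self, name: str, func: Callable[[List[float]], float],
                 inputs: List[str] = ()):
        self.name, self.func, self.inputs = name, func, list(inputs)
        self.output = 0.0  # single scalar output site, for simplicity

class ToyStructure:
    def __init__(self):
        self._modules: Dict[str, ToyModule] = {}
    def add(self, module: ToyModule) -> None:
        self._modules[module.name] = module
    def __getitem__(self, name: str) -> ToyModule:
        # Mirrors retrieval by path, e.g. agent["vis"]
        return self._modules[name]
    def step(self) -> None:
        # Each module computes from its input modules' previous outputs.
        prev = {n: m.output for n, m in self._modules.items()}
        for m in self._modules.values():
            m.output = m.func([prev[i] for i in m.inputs])

agent = ToyStructure()
agent.add(ToyModule("vis", lambda xs: 1.0))               # stimulus source
agent.add(ToyModule("mov", lambda xs: sum(xs), ["vis"]))  # downstream module
agent.step()  # mov still sees vis's previous output (0.0)
agent.step()  # now mov sees 1.0
```

The two-phase update in `step` (snapshot, then recompute) is one simple way to make update order irrelevant; it is a design choice of this sketch, not a claim about pyClarion's internal scheduling.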
8 | 
9 | The behavior of individual components is defined by `Process` objects, which reside in `Module` instances. `Process` objects compute activations and update parameters locally. They may have multiple output sites, and they may expose a number of feature spaces for other components to use.
10 | 
11 | All computations are carried out over pyClarion's native `NumDict` objects. To a first approximation, a `NumDict` is a vector with named dimensions. Typically, `NumDict` keys are symbolic identifiers for elementary representational nodes.
12 | 
13 | ### Node Symbols
14 | 
15 | Four main connectionist node types can be distinguished in Clarion theory: dimensions, features, chunks, and rules. `pyClarion` provides node symbols for identifying each type of node.
16 | 
17 | In many cases, node symbols contain identifier strings. These strings are expected, in idiomatic usage, to be valid `pyClarion` URIs (explained below).
18 | 
19 | Dimension nodes are represented by `dimension` symbols, which consist of an identifier (`str`) and a lag value (`int`, optional).
20 | 
21 | ```python
22 | cl.dimension("dim-identifier", 0)
23 | ```
24 | 
25 | Feature nodes are represented by `feature` symbols, which consist of a dimension identifier (`str`), a value identifier (`str` or `int`, optional), and a lag value (`int`, optional).
26 | 
27 | ```python
28 | cl.feature("dim-identifier", "val-identifier", 0)
29 | cl.feature("dim-identifier", 1, 0)
30 | ```
31 | 
32 | The `dimension` symbol associated with a given feature can be accessed through the `dim` property.
33 | 
34 | ```python
35 | assert cl.feature("dim", "val", 1).dim == cl.dimension("dim", 1)
36 | ```
37 | 
38 | Chunk nodes are represented by `chunk` symbols, which consist of a chunk identifier (`str`).
39 | 
40 | ```python
41 | cl.chunk("chunk-identifier")
42 | ```
43 | 
44 | Rule nodes are represented by `rule` symbols, which consist of a rule identifier (`str`).
45 | 46 | ```python 47 | cl.rule("rule-identifier") 48 | ``` 49 | 50 | ### An Example 51 | 52 | To illustrate the concepts above, consider the task of simulating an agent in the style of Braitenberg vehicles. The following python script, included in the tutorial as `demo.py`, simulates such an agent that navigates a 2D environment. 53 | 54 | ```python 55 | """ 56 | This demo simulates a simple Braitenberg vehicle in pyClarion. 57 | 58 | The world is two dimensional. The agent can move left, right, up or down. 59 | The agent also has four virtual light sensors, one for each cardinal direction. 60 | 61 | The agent is configured to move away from light. The behavior is programmed as 62 | fixed rules in frs.ccml. 63 | """ 64 | 65 | 66 | import pyClarion as cl 67 | import random 68 | 69 | 70 | def build(scfg, acfg, path): 71 | """ 72 | Build a simple pyClarion agent. 73 | 74 | The agent has one stimulus module for inputs, one action module for 75 | external actions. Action selection is driven by explicit fixed rules. 76 | 77 | :param scfg: Stimulus config, passed to a `Receptors` constructor. 78 | :param acfg: Action config, passed to an `Actions` constructor. 79 | :param path: Path to a ccml file. Contents loaded to a `Store` instance. 
80 | """ 81 | 82 | # construct agent 83 | with cl.Structure("agent") as agent: 84 | cl.Module("vis", cl.Receptors(scfg)) 85 | params = cl.Module("params", cl.Repeat(), ["params"]) 86 | cl.Module("null", cl.Repeat(), ["null"]) 87 | with cl.Structure("acs"): 88 | cl.Module("bi", cl.CAM(), ["../vis"]) 89 | cl.Module("bu", cl.BottomUp(), 90 | ["fr_store#0", "fr_store#1", "fr_store#2", "bi"]) 91 | cl.Module("fr", cl.ActionRules(), 92 | ["../params", "fr_store#3", "fr_store#4", "bu"]) 93 | cl.Module("td", cl.TopDown(), ["fr_store#0", "fr_store#1", "fr#0"]) 94 | cl.Module("bo", cl.CAM(), ["td"]) 95 | cl.Module("mov", cl.ActionSampler(), ["../params", "bo"], 96 | ["../mov#cmds"]) 97 | cl.Module("fr_store", cl.Store(), 98 | ["../params", "../null", "../null", "../null"]) 99 | cl.Module("mov", cl.Actions(acfg), ["acs/mov#0"]) 100 | 101 | # set temperature parameters for rule & action selection 102 | params.output = cl.NumDict({ 103 | cl.feature("acs/fr#temp"): 1e-2, 104 | cl.feature("acs/mov#temp"): 1e-2, 105 | }) 106 | 107 | # load fixed rules 108 | with open(path) as f: 109 | cl.load(f, agent) 110 | 111 | return agent 112 | 113 | 114 | def main(): 115 | # Stimulus config 116 | scfg = ["lum-L", "lum-R", "lum-U", "lum-D"] 117 | acfg = {"move": ["L", "R", "U", "D"]} 118 | 119 | # Build agent 120 | agent = build(scfg, acfg, "frs.ccml") 121 | vis = agent["vis"] # retrieve visual module 122 | mov = agent["mov"] # retrieve movement module 123 | 124 | # Pretty print all features defined in agent 125 | print("DEFINED FEATURES", end="\n\n") 126 | cl.pprint(cl.inspect.fspace(agent)) 127 | print() # leave a blank line 128 | 129 | # Visualize agent structure (if matplotlib is installed) 130 | try: 131 | import matplotlib.pyplot as plt 132 | except ImportError: 133 | pass 134 | else: 135 | import pyClarion.utils.visualize as clv 136 | fig, ax = plt.subplots() 137 | ax = clv.adjacency_matrix(ax, agent) 138 | fig.set_size_inches(6, 6) 139 | fig.tight_layout() 140 | plt.show() 141 | 142 | 
# Run simulation for 20 steps
143 |     print("SIMULATION", end="\n\n")
144 |     for i in range(20):
145 |         vis.process.stimulate([random.choice(scfg)])  # Stimulate random sensor
146 |         agent.step()
147 | 
148 |         display = [
149 |             f"Stimulus: {cl.pformat(vis.output)}",
150 |             f"Response: {cl.pformat(mov.output)}"
151 |         ]
152 |         if i: print()
153 |         print(f"Step {i}:")
154 |         for s in display:
155 |             print(" ", s, sep="")
156 | 
157 | 
158 | if __name__ == "__main__":
159 |     main()
160 | 
161 | ```
162 | 
163 | For now, let us focus on a single representative line in the above definition.
164 | 
165 | Here is the definition for the action selection module, with added keywords for clarity.
166 | 
167 | ```python
168 | cl.Module(
169 |     name="mov",
170 |     process=cl.ActionSampler(),
171 |     i_uris=["../params", "bo"],
172 |     fs_uris=["../mov#cmds"])
173 | ```
174 | 
175 | Here, `cl.ActionSampler()` is a `Process` instance. The rest of the parameters are `pyClarion` URIs (or URI segments) which specify how this module networks with the rest of the agent.
176 | 
177 | The `name` argument is the name of the module, and therefore the last entry in the module's path. Locators for module inputs are passed to the module through the `i_uris` argument. Finally, the `fs_uris` argument points the module to action feature spaces. In this case, the module is given a locator for the command features exposed by the top-level `'mov'` module.
178 | 
179 | `pyClarion` provides some tools for us to inspect and visualize the agents that we construct.
180 | 
181 | Here is a list of all features defined by the agent. These are the most basic representational units to which the agent has access.
182 | 183 | ```python 184 | [feature(d='mov#move', v='D', l=0), 185 | feature(d='mov#move', v='L', l=0), 186 | feature(d='mov#move', v='R', l=0), 187 | feature(d='mov#move', v='U', l=0), 188 | feature(d='vis#lum-D', v=None, l=0), 189 | feature(d='vis#lum-L', v=None, l=0), 190 | feature(d='vis#lum-R', v=None, l=0), 191 | feature(d='vis#lum-U', v=None, l=0), 192 | feature(d='acs/fr#temp', v=None, l=0), 193 | feature(d='acs/fr#th', v=None, l=0), 194 | feature(d='acs/mov#temp', v=None, l=0), 195 | feature(d='mov#cmd-move', v=None, l=0), 196 | feature(d='mov#cmd-move', v='D', l=0), 197 | feature(d='mov#cmd-move', v='L', l=0), 198 | feature(d='mov#cmd-move', v='R', l=0), 199 | feature(d='mov#cmd-move', v='U', l=0)] 200 | ``` 201 | 202 | This list is generated by the following function call in the script. 203 | 204 | ```python 205 | cl.inspect.fspace(agent) 206 | ``` 207 | 208 | Here is a visualization of the structure of the agent defined in the script above. 209 | 210 | ![agent.png](agent.png) 211 | 212 | This image is generated by the following passage in the script. 213 | 214 | ```python 215 | fig, ax = plt.subplots() 216 | ax = clv.adjacency_matrix(ax, agent) 217 | fig.set_size_inches(6, 6) 218 | fig.tight_layout() 219 | plt.show() 220 | ``` 221 | 222 | ## URIs 223 | 224 | URIs are used in pyClarion mainly for the following purposes: 225 | 226 | - Identifying and locating components within agents 227 | - Locating component outputs and feature spaces 228 | - Naming dimensions, features, chunks, and rules 229 | 230 | ### URI Structure 231 | 232 | A pyClarion URI consists of a relative or absolute path and an optional URI fragment. 233 | 234 | ### Identifying and Locating Components 235 | 236 | Within a `pyClarion` structure, every component has a unique path. Components may be retrieved by subscripting structures with a path segment. 237 | 238 | #### Example 239 | 240 | The absolute path for the action selector in the sample agent is given below. 
241 | 
242 | ```python
243 | "/agent/acs/mov"
244 | ```
245 | 
246 | We can retrieve the component by subscripting the `agent` object as follows.
247 | 
248 | ```python
249 | agent["acs/mov"]
250 | ```
251 | 
252 | ### Locating Component Outputs and Feature Spaces
253 | 
254 | When a component has a single output, its path is sufficient to locate its output. When a component has multiple outputs, they may each be located by appending the output index to the component path as a URI fragment.
255 | 
256 | Feature spaces can be located in a similar manner through the use of URI fragments. To locate a feature space, simply append its name as a URI fragment to its owner's path.
257 | 
258 | #### Example
259 | 
260 | The action selector in the example has two outputs: one representing the selected commands and one representing the sampling distribution used to carry out the selection.
261 | 
262 | A component sitting at the same level as `acs` may locate the first of these outputs with the following URI.
263 | 
264 | ```python
265 | "acs/mov#0"
266 | ```
267 | 
268 | The action selector knows what features to construct sampling distributions over because the action module exposes a set of command features. The selector is able to access these features thanks to the following reference.
269 | 
270 | ```python
271 | "../mov#cmds"
272 | ```
273 | 
274 | ### Naming Dimensions, Features, Chunks, and Rules
275 | 
276 | URIs are also used to identify elementary nodes. Such URIs typically consist of a path from the root to the owning component and a fragment identifying the specific item.
277 | 
278 | #### Example
279 | 
280 | The action selector has a parameter called `temp` for controlling the temperature of the Boltzmann distribution used in action selection. This parameter gets its value from a feature identified by the following symbol.
281 | 
282 | ```python
283 | cl.feature("acs/mov#temp")
284 | ```
285 | 
286 | ### Composite URIs
287 | 
288 | In some cases, it is necessary to have composite URIs in order to prevent name clashes. Such cases may arise, for instance, when automatically generating features to control selective filtering of feature dimensions.
289 | 
290 | Composite URIs are constructed by replacing the `"#"` symbol of the original URI with `"."` and prepending the result with a path to the new owning construct. This construction can be applied iteratively to accommodate more complex scenarios (e.g., inhibiting filtering signals).
291 | 
292 | #### Example
293 | 
294 | A composite URI derived from the URI for the temperature parameter may be as follows.
295 | 
296 | ```python
297 | cl.feature("compositor#acs/mov.temp")
298 | ```
299 | 
300 | This construction can be iterated without ambiguity.
301 | 
302 | ```python
303 | cl.feature("compositor2#compositor.acs/mov.temp")
304 | ```
305 | 
306 | ### Naming Conventions
307 | 
308 | Words within identifiers should be delimited by `'-'`. Using `'/'` as a delimiter should be viewed as a signal that a path is being referenced, as with the composite features discussed above. Underscores can also be used; in that case, the whole underscore-joined sequence should be interpreted as a single token. Dots should be reserved for composite URIs.
309 | 
310 | `Process` objects defined in the library that expose their own feature spaces follow this convention.
311 | 
312 | ## Controlling Agent Behavior
313 | 
314 | The two main ways to control agent behavior are to inject activations (i.e., stimuli) into the agent and to control the flow of activations (e.g., through fixed rules).
315 | 
316 | ### Activation-setting
317 | 
318 | Some modules, like the `Receptors` module, may provide interfaces dedicated to receiving external input.
319 | 
320 | It is also possible to directly set module outputs through the `Module.output` property.
321 | 
322 | ```python
323 | params.output = cl.nd.NumDict({
324 |     cl.feature("acs/fr#temp"): 1e-2,
325 |     cl.feature("acs/mov#temp"): 1e-2,
326 | })
327 | ```
328 | 
329 | Direct setting of activations is a quick way to get activations into an agent. It is most useful for setting constant parameters, but may also serve during initial experimentation with activation flows.
330 | 
331 | ### Declaring Explicit Knowledge with `ccml`
332 | 
333 | We often want an agent to have some pre-programmed behavior and knowledge.
334 | 
335 | Pre-programmed behavior is typically defined in the form of fixed rules, and pre-programmed knowledge in the form of initial chunks and associative rules. `pyClarion` provides a programming language, called `ccml`, to facilitate the specification of such initial agent knowledge.
336 | 
337 | A `ccml` script implementing fixed rules for Braitenberg-style behavior, called `"frs.ccml"`, is given below.
338 | 
339 | ```
340 | store acs/fr_store:
341 |     ruleset bbv:
342 |         for each:
343 |             var direction: L R U D
344 |             var opposite: R L D U
345 |             rule:
346 |                 conc:
347 |                     mov#cmd-move {opposite}
348 |                 cond:
349 |                     vis#lum-{direction}
350 | ```
351 | 
352 | 
353 | 
354 | The contents of a ccml file can be loaded into an agent using the `cl.load()` function.
355 | 
356 | ```python
357 | # load fixed rules
358 | with open(path) as f:
359 |     cl.load(f, agent)
360 | ```
361 | 
362 | Rules defined in a ccml file are named according to their parent store and ruleset. They are also indexed by line number and iteration index for easy reference. Thus, the `ccml` script defined above will create four rules with the following identifiers.
363 | 
364 | ```python
365 | cl.rule("acs/fr_store#0006-000-bbv")  # go right if light left
366 | cl.rule("acs/fr_store#0006-001-bbv")  # go left if light right
367 | cl.rule("acs/fr_store#0006-002-bbv")  # go down if light up
368 | cl.rule("acs/fr_store#0006-003-bbv")  # go up if light down
369 | ```
370 | 
371 | ## Custom `Process` classes
372 | 
373 | In some cases, it may be necessary to extend `pyClarion` with new or customized components. This can be done by subclassing `Process` or one of its existing subclasses.
374 | 
375 | As an example, consider the following snippet, adapted from `networks.py` in the `components` subpackage.
376 | 
377 | ```python
378 | from typing import List, Tuple, Callable
379 | 
380 | import pyClarion as cl
381 | import pyClarion.dev as cld
382 | 
383 | 
384 | class NAM(cld.Process):
385 |     """
386 |     A neural associative memory.
387 | 
388 |     Implements a single fully connected layer.
389 | 
390 |     For validation, each weight and bias key must belong to a client fspace.
391 | 
392 |     May be used as a static network or as a base for various associative
393 |     learning models such as Hopfield nets.
394 | """ 395 | 396 | initial = cl.NumDict() 397 | 398 | w: cl.NumDict[Tuple[cl.feature, cl.feature]] 399 | b: cl.NumDict[cl.feature] 400 | 401 | def __init__( 402 | self, 403 | f: Callable[[cl.NumDict[cl.feature]], cl.NumDict[cl.feature]] = cld.eye 404 | ) -> None: 405 | self.w = cl.NumDict() 406 | self.b = cl.NumDict() 407 | self.f = f 408 | 409 | def validate(self): 410 | fspace = set(f for fspace in self.fspaces for f in fspace()) 411 | if (any(k1 not in fspace or k2 not in fspace for k1, k2 in self.w) 412 | or any(k not in fspace for k in self.b)): 413 | raise ValueError("Parameter key not a member of set fspaces.") 414 | 415 | def call(self, x: cl.NumDict[cl.feature]) -> cl.NumDict[cl.feature]: 416 | return (self.w 417 | .mul_from(x, kf=cld.first) 418 | .sum_by(kf=cld.second) 419 | .add(self.b) 420 | .pipe(self.f)) 421 | ``` 422 | 423 | The module `pyClarion.dev` provides some essential tools for implementing new `Process` subclasses, including exposing the `Process` class itself. 424 | 425 | A `Process` subclass minimally requires the implementation of the `initial` attribute and the `call()` method. The `initial` attribute defines the initial output and the `call()` method computes outputs in all subequent steps. 426 | 427 | The `validate()` method is a hook for checking whether a `Process` object has been correctly initialized. It is called during agent construction, after all components have been linked. 428 | 429 | A `Process` object may define its own feature spaces using the `reprs`, `flags`, `params`, `cmds`, and `nops` attributes, or it may use external feature spaces as in the case of the present `NAM` class or the `acs/mov` module in the initial example. 430 | 431 | External feature spaces are available as a sequence of callables through the `fspaces` property. This propety is populated during agent construction and each of its members returns the current state of a client feature space. 
432 | -------------------------------------------------------------------------------- /tutorial/out.txt: -------------------------------------------------------------------------------- 1 | DEFINED FEATURES 2 | 3 | [feature(d='mov#move', v='D', l=0), 4 | feature(d='mov#move', v='L', l=0), 5 | feature(d='mov#move', v='R', l=0), 6 | feature(d='mov#move', v='U', l=0), 7 | feature(d='vis#lum-D', v=None, l=0), 8 | feature(d='vis#lum-L', v=None, l=0), 9 | feature(d='vis#lum-R', v=None, l=0), 10 | feature(d='vis#lum-U', v=None, l=0), 11 | feature(d='acs/fr#temp', v=None, l=0), 12 | feature(d='acs/fr#th', v=None, l=0), 13 | feature(d='acs/mov#temp', v=None, l=0), 14 | feature(d='mov#cmd-move', v=None, l=0), 15 | feature(d='mov#cmd-move', v='D', l=0), 16 | feature(d='mov#cmd-move', v='L', l=0), 17 | feature(d='mov#cmd-move', v='R', l=0), 18 | feature(d='mov#cmd-move', v='U', l=0)] 19 | 20 | SIMULATION 21 | 22 | Step 0: 23 | Stimulus: NumDict({feature(d='vis#lum-L', v=None, l=0): 1.0}, c=0.0, prot=True) 24 | Response: NumDict({feature(d='mov#move', v='L', l=0): 1.0}, c=0.0, prot=True) 25 | 26 | Step 1: 27 | Stimulus: NumDict({feature(d='vis#lum-D', v=None, l=0): 1.0}, c=0.0, prot=True) 28 | Response: NumDict({feature(d='mov#move', v='U', l=0): 1.0}, c=0.0, prot=True) 29 | 30 | Step 2: 31 | Stimulus: NumDict({feature(d='vis#lum-L', v=None, l=0): 1.0}, c=0.0, prot=True) 32 | Response: NumDict({feature(d='mov#move', v='R', l=0): 1.0}, c=0.0, prot=True) 33 | 34 | Step 3: 35 | Stimulus: NumDict({feature(d='vis#lum-R', v=None, l=0): 1.0}, c=0.0, prot=True) 36 | Response: NumDict({feature(d='mov#move', v='L', l=0): 1.0}, c=0.0, prot=True) 37 | 38 | Step 4: 39 | Stimulus: NumDict({feature(d='vis#lum-D', v=None, l=0): 1.0}, c=0.0, prot=True) 40 | Response: NumDict({feature(d='mov#move', v='U', l=0): 1.0}, c=0.0, prot=True) 41 | 42 | Step 5: 43 | Stimulus: NumDict({feature(d='vis#lum-D', v=None, l=0): 1.0}, c=0.0, prot=True) 44 | Response: NumDict({feature(d='mov#move', v='U', l=0): 
1.0}, c=0.0, prot=True) 45 | 46 | Step 6: 47 | Stimulus: NumDict({feature(d='vis#lum-L', v=None, l=0): 1.0}, c=0.0, prot=True) 48 | Response: NumDict({feature(d='mov#move', v='R', l=0): 1.0}, c=0.0, prot=True) 49 | 50 | Step 7: 51 | Stimulus: NumDict({feature(d='vis#lum-R', v=None, l=0): 1.0}, c=0.0, prot=True) 52 | Response: NumDict({feature(d='mov#move', v='L', l=0): 1.0}, c=0.0, prot=True) 53 | 54 | Step 8: 55 | Stimulus: NumDict({feature(d='vis#lum-U', v=None, l=0): 1.0}, c=0.0, prot=True) 56 | Response: NumDict({feature(d='mov#move', v='D', l=0): 1.0}, c=0.0, prot=True) 57 | 58 | Step 9: 59 | Stimulus: NumDict({feature(d='vis#lum-D', v=None, l=0): 1.0}, c=0.0, prot=True) 60 | Response: NumDict({feature(d='mov#move', v='U', l=0): 1.0}, c=0.0, prot=True) 61 | 62 | Step 10: 63 | Stimulus: NumDict({feature(d='vis#lum-R', v=None, l=0): 1.0}, c=0.0, prot=True) 64 | Response: NumDict({feature(d='mov#move', v='L', l=0): 1.0}, c=0.0, prot=True) 65 | 66 | Step 11: 67 | Stimulus: NumDict({feature(d='vis#lum-L', v=None, l=0): 1.0}, c=0.0, prot=True) 68 | Response: NumDict({feature(d='mov#move', v='R', l=0): 1.0}, c=0.0, prot=True) 69 | 70 | Step 12: 71 | Stimulus: NumDict({feature(d='vis#lum-R', v=None, l=0): 1.0}, c=0.0, prot=True) 72 | Response: NumDict({feature(d='mov#move', v='L', l=0): 1.0}, c=0.0, prot=True) 73 | 74 | Step 13: 75 | Stimulus: NumDict({feature(d='vis#lum-D', v=None, l=0): 1.0}, c=0.0, prot=True) 76 | Response: NumDict({feature(d='mov#move', v='U', l=0): 1.0}, c=0.0, prot=True) 77 | 78 | Step 14: 79 | Stimulus: NumDict({feature(d='vis#lum-U', v=None, l=0): 1.0}, c=0.0, prot=True) 80 | Response: NumDict({feature(d='mov#move', v='D', l=0): 1.0}, c=0.0, prot=True) 81 | 82 | Step 15: 83 | Stimulus: NumDict({feature(d='vis#lum-R', v=None, l=0): 1.0}, c=0.0, prot=True) 84 | Response: NumDict({feature(d='mov#move', v='L', l=0): 1.0}, c=0.0, prot=True) 85 | 86 | Step 16: 87 | Stimulus: NumDict({feature(d='vis#lum-D', v=None, l=0): 1.0}, c=0.0, prot=True) 88 | 
Response: NumDict({feature(d='mov#move', v='U', l=0): 1.0}, c=0.0, prot=True) 89 | 90 | Step 17: 91 | Stimulus: NumDict({feature(d='vis#lum-R', v=None, l=0): 1.0}, c=0.0, prot=True) 92 | Response: NumDict({feature(d='mov#move', v='L', l=0): 1.0}, c=0.0, prot=True) 93 | 94 | Step 18: 95 | Stimulus: NumDict({feature(d='vis#lum-R', v=None, l=0): 1.0}, c=0.0, prot=True) 96 | Response: NumDict({feature(d='mov#move', v='L', l=0): 1.0}, c=0.0, prot=True) 97 | 98 | Step 19: 99 | Stimulus: NumDict({feature(d='vis#lum-U', v=None, l=0): 1.0}, c=0.0, prot=True) 100 | Response: NumDict({feature(d='mov#move', v='D', l=0): 1.0}, c=0.0, prot=True) 101 | --------------------------------------------------------------------------------