├── .gitignore ├── LICENSE ├── MANIFEST.in ├── README.md ├── docs ├── Getting started I.html ├── Getting started I.ipynb ├── Getting started II.html ├── Getting started II.ipynb ├── actrparameters.pdf ├── actrparameters.tex └── book_manual.pdf ├── pyactr ├── __init__.py ├── buffers.py ├── chunks.py ├── declarative.py ├── environment.py ├── goals.py ├── model.py ├── motor.py ├── productions.py ├── simulation.py ├── tests │ ├── __init__.py │ ├── modeltests.py │ └── tests.py ├── utilities.py └── vision.py ├── setup.py └── tutorials ├── forbook └── code │ ├── ch2_agreement.py │ ├── ch2_context_free_grammar.py │ ├── ch2_count.py │ ├── ch2_regular_grammar.py │ ├── ch3_topdown_parser.py │ ├── ch4_leftcorner_parser.py │ ├── ch4_leftcorner_parser_with_relative_clauses.py │ ├── ch4_lexical_decision.py │ ├── ch7_lexical_decision_pyactr_no_imaginal.py │ ├── ch7_lexical_decision_pyactr_with_imaginal.py │ └── ch7_lexical_decision_pyactr_with_imaginal_delay_0.py ├── plot_u8_estimating_using_pymc3.png ├── u1_addition.py ├── u1_count.py ├── u1_semantic.py ├── u2_demo.py ├── u3_multiple_objects.py ├── u4_paired.py ├── u5_fan.py ├── u5_grouped.py ├── u6_simple.py ├── u7_simplecompilation.py └── u8_estimating_using_pymc3.py /.gitignore: -------------------------------------------------------------------------------- 1 | # Compiled python modules. 2 | pyactr/__pycache__ 3 | *.pyc 4 | 5 | # Setuptools distribution folder. 6 | /build/ 7 | /distOLD/ 8 | /dist/ 9 | 10 | # Python egg metadata, regenerated from source files by setuptools. 
11 | /*.egg-info 12 | /*.egg 13 | 14 | #ipynb 15 | /docs/.ipynb_checkpoints 16 | 17 | #gitignore 18 | .gitignore 19 | 20 | #uploading instructions 21 | .uploading 22 | 23 | #old tutorials 24 | /tutorials/old/ 25 | 26 | #todo 27 | /todo/ 28 | 29 | -------------------------------------------------------------------------------- /MANIFEST.in: -------------------------------------------------------------------------------- 1 | include tutorials/* 2 | include docs/*.pdf 3 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # pyactr 2 | 3 | Python package to create and run ACT-R cognitive models. 4 | 5 | The package supports symbolic and subsymbolic processes in ACT-R and covers all basic cases of ACT-R modeling, including features that are rarely implemented outside of the official Lisp ACT-R software. 6 | 7 | The package should allow you to run any ACT-R model. If you need an ACT-R feature that's missing in the package, please [open an issue](https://github.com/jakdot/pyactr/issues). 8 | 9 | Significant changes might still occur in the near future. 10 | 11 | ## Installing pyactr 12 | 13 | The best way to install pyactr is to run pip: 14 | 15 | ``` 16 | pip3 install pyactr 17 | ``` 18 | 19 | You can also clone this package and, in the root folder, run: 20 | 21 | ``` 22 | python setup.py install 23 | ``` 24 | 25 | ## Requirements 26 | 27 | pyactr requires Python 3 (>=3.3), numpy, simpy, and pyparsing. 28 | 29 | You might also consider getting tkinter if you want to see visual output of how ACT-R models interact with the environment, but this is not necessary to run any models. 30 | 31 | ## A note on Python 3.3 32 | 33 | pyactr works with Python 3.3, but some packages that it depends on have dropped support for Python 3.3. If you want to use pyactr with Python 3.3, you must install numpy version 1.11.3 or lower.
simpy is also planning to drop support for Python 3.3 in future versions (as of January 2019). 34 | 35 | ## Getting started 36 | 37 | A short introduction to ACT-R and pyactr may be found in [the wiki](https://github.com/jakdot/pyactr/wiki). 38 | 39 | ## Learning more 40 | 41 | A book recently published by Springer uses pyactr. The book is geared towards (psycho)linguists, but it includes a lot of code that can be useful to cognitive scientists outside of psycholinguistics. It explains how models can be created and run in pyactr, from simple counting models up to complex psychology models (fan effects, interpretation of complex sentences). 42 | 43 | **Computational Cognitive Modeling and Linguistic Theory** is open access and available [here](https://link.springer.com/book/10.1007/978-3-030-31846-8). 44 | 45 | ## Even more? 46 | 47 | Some more documents may be found in the [GitHub repository](https://github.com/jakdot/pyactr). In particular, check the folder tutorials for many examples of ACT-R models. Most of those models are translated from Lisp ACT-R, so if you are familiar with Lisp ACT-R, they should be fairly easy to understand. 48 | 49 | ## Modifying pyactr 50 | 51 | To ensure that modifications do not break the current code, run the unit tests in pyactr/tests/.
52 | 53 | ``` 54 | python -m unittest 55 | ``` -------------------------------------------------------------------------------- /docs/actrparameters.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/jakdot/pyactr/7f9fb305208ae7cf65b360aed42ae245d9a63567/docs/actrparameters.pdf -------------------------------------------------------------------------------- /docs/actrparameters.tex: -------------------------------------------------------------------------------- 1 | \documentclass{article} 2 | 3 | \title{ACT-R -- subsymbolic parameters} 4 | \date{} 5 | \begin{document} 6 | \maketitle 7 | \section*{Base-level learning} 8 | \noindent Switched on by subsymbolic=True.\\ 9 | The equation describing learning of base-level activation for a chunk $i$ is: 10 | $$B_i = \ln\Big(\sum_{j=1}^{n}t_j^{-d}\Big) + \eta$$ 11 | \begin{itemize} 12 | \item $n$: The number of presentations of chunk $i$ 13 | \item $t_j$: The time since the $j$th presentation 14 | \item $d$: The decay parameter (set by decay) 15 | \item $\eta$: The instantaneous noise 16 | \end{itemize} 17 | The (instantaneous) noise $\eta$ has variance: 18 | $$\sigma^2=s^2\pi^2/3$$ 19 | \begin{itemize} 20 | \item $s$: The noise parameter (set by instantaneous\_noise) 21 | \end{itemize} 22 | Retrieval latency: 23 | $$T=Fe^{-A}$$ 24 | \begin{itemize} 25 | \item $A$: Activation of the chunk retrieved 26 | \item $F$: The latency factor (set by latency\_factor) 27 | \end{itemize} 28 | Retrieval latency when retrieval fails: 29 | $$T=Fe^{-\tau}$$ 30 | \begin{itemize} 31 | \item $\tau$: The retrieval threshold (set by retrieval\_threshold) 32 | \item $F$: The latency factor (set by latency\_factor) 33 | \end{itemize} 34 | For an example see u4\_paired in \textbf{tutorials}.
35 | \section*{Source and activation} 36 | \noindent Switched on by subsymbolic=True and specifying buffer\_spreading\_activation (see below).\\ 37 | $$A_i = B_i + \sum_{k}\sum_{j}W_{kj}S_{ji}$$ 38 | \begin{itemize} 39 | \item $A_i$: activation of the chunk $i$ 40 | \item $B_i$: base-level activation, see above 41 | \item $W_{kj}$: the amount of activation from source $j$ in buffer $k$ 42 | \item $S_{ji}$: the strength of association from source $j$ to chunk $i$ 43 | \end{itemize} 44 | $W_{kj}$ is set by buffer\_spreading\_activation. The value of this parameter is a dictionary whose keys specify which buffers should be used for spreading activation and whose values specify the amount of activation in these buffers. 45 | $$S_{ji}=S - \ln(fan_j)$$ 46 | \begin{itemize} 47 | \item $S$: the maximum associative strength (set by strength\_of\_association) 48 | \item $fan_j$: the number of chunks in declarative memory in which $j$ is the value of a slot, plus one for chunk $j$ being associated with itself 49 | \end{itemize} 50 | For an example see u5\_fan in \textbf{tutorials}. 51 | \section*{Adding partial matching} 52 | Switched on by subsymbolic=True and partial\_matching=True. 53 | $$A_i = B_i + \sum_{k}\sum_{j}W_{kj}S_{ji} + \sum_{l}M_{li}$$ 54 | \begin{itemize} 55 | \item $M_{li}$: The similarity between the value $l$ in the retrieval specification and the value in the corresponding slot of chunk $i$ 56 | \end{itemize} 57 | The similarity currently only uses default values - a maximum similarity (0) and a maximum difference (-1). To be added: letting the modeler set these values. For an example see u5\_grouped in \textbf{tutorials}. 58 | \section*{Utility in production rules} 59 | \noindent Switched on by partial\_matching=True.
60 | The (utility) noise has variance: 61 | $$\sigma^2=s^2\pi^2/3$$ 62 | \begin{itemize} 63 | \item $s$: The noise parameter (set by utility\_noise) 64 | \end{itemize} 65 | Each rule can specify its own utility (by having the parameter utility=n, where n is a number). Each rule can also specify the reward it creates for utility learning (by having the parameter reward=n, where n is a number). Utility learning is switched on by utility\_learning=True. The learning rate for utility learning is set by utility\_alpha. For an example see u6\_simple in \textbf{tutorials}. 66 | 67 | 68 | \end{document} 69 | -------------------------------------------------------------------------------- /docs/book_manual.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/jakdot/pyactr/7f9fb305208ae7cf65b360aed42ae245d9a63567/docs/book_manual.pdf -------------------------------------------------------------------------------- /pyactr/__init__.py: -------------------------------------------------------------------------------- 1 | __version__ = "0.3.2" 2 | 3 | from pyactr.model import ACTRModel 4 | from pyactr.environment import Environment 5 | from pyactr.chunks import chunktype, makechunk, chunkstring 6 | -------------------------------------------------------------------------------- /pyactr/buffers.py: -------------------------------------------------------------------------------- 1 | """ 2 | General class for buffers. 3 | """ 4 | 5 | import collections.abc 6 | 7 | from pyactr import chunks, utilities 8 | 9 | class Buffer(collections.abc.MutableSet): 10 | """ 11 | Buffer module.
12 | """ 13 | 14 | _BUSY = utilities._BUSY 15 | _FREE = utilities._FREE 16 | _ERROR = utilities._ERROR 17 | 18 | def __init__(self, dm=None, data=None): 19 | self.dm = dm 20 | self.state = self._FREE #set here but state of buffer instances controlled in productions 21 | if data == None: 22 | self._data = set([]) 23 | else: 24 | self._data = data 25 | assert len(self) <= 1, "Buffer can carry at most one element" 26 | 27 | @property 28 | def dm(self): 29 | """ 30 | Default harvest of goal buffer. 31 | """ 32 | return self.__dm 33 | 34 | @dm.setter 35 | def dm(self, value): 36 | if isinstance(value, collections.abc.MutableMapping) or not value: 37 | self.__dm = value 38 | else: 39 | raise ValueError('The attempted dm value cannot be set; it is not a possible declarative memory') 40 | 41 | def __contains__(self, elem): 42 | return elem in self._data 43 | 44 | def __iter__(self): 45 | for elem in self._data: 46 | yield elem 47 | 48 | def __len__(self): 49 | return len(self._data) 50 | 51 | def __repr__(self): 52 | return repr(self._data) 53 | 54 | def add(self, elem): 55 | """ 56 | Add a chunk into the buffer. 57 | 58 | elem must be a chunk. 59 | """ 60 | self._data = set() 61 | 62 | if isinstance(elem, chunks.Chunk): 63 | self._data.add(elem) 64 | else: 65 | raise TypeError("Only chunks can be added to Buffer") 66 | 67 | def discard(self, elem): 68 | """ 69 | Discard an element without clearing it into a memory. 70 | """ 71 | self._data.discard(elem) 72 | 73 | def show(self, attr): 74 | """ 75 | Print the content of the buffer. 76 | """ 77 | if self._data: 78 | chunk = self._data.copy().pop() 79 | else: 80 | chunk = None 81 | try: 82 | print(" ".join([str(attr), str(getattr(chunk, attr))])) 83 | except AttributeError: 84 | print(attr) 85 | 86 | def test_buffer(self, inquiry): 87 | """ 88 | Is buffer full/empty? 
89 | """ 90 | if self._data: 91 | if inquiry == "full": return True 92 | else: 93 | if inquiry == "empty": return True 94 | return False 95 | 96 | def modify(self, otherchunk, actrvariables=None): 97 | """ 98 | Modify the chunk in Buffer according to the info in otherchunk. 99 | """ 100 | if actrvariables == None: 101 | actrvariables = {} 102 | elem = self._data.pop() 103 | try: 104 | mod_attr_val = {x[0]: utilities.check_bound_vars(actrvariables, x[1]) for x in otherchunk.removeunused()} #creates dict of attr-val pairs according to otherchunk 105 | except utilities.ACTRError as arg: 106 | raise utilities.ACTRError(f"The modification by the chunk '{otherchunk} is impossible; {arg}") 107 | elem_attr_val = {x[0]: x[1] for x in elem} 108 | elem_attr_val.update(mod_attr_val) #updates original chunk with attr-val from otherchunk 109 | mod_chunk = chunks.Chunk(otherchunk.typename, **elem_attr_val) #creates new chunk 110 | 111 | self._data.add(mod_chunk) #put chunk directly into buffer 112 | -------------------------------------------------------------------------------- /pyactr/declarative.py: -------------------------------------------------------------------------------- 1 | """ 2 | Declarative memory. Consists of the actual declarative memory, and its associated buffer. 3 | """ 4 | 5 | import collections 6 | import collections.abc 7 | import math 8 | 9 | import numpy as np 10 | 11 | from pyactr import buffers, chunks, utilities 12 | 13 | class DecMem(collections.abc.MutableMapping): 14 | """ 15 | Declarative memory module. 
16 | """ 17 | 18 | def __init__(self, data=None): 19 | self._data = {} 20 | self.restricted_number_chunks = collections.Counter() #counter for pairs of slot - value, used to store strength association 21 | self.unrestricted_number_chunks = collections.Counter() # counter for chunks, used to store strength association 22 | self.activations = {} 23 | if data is not None: 24 | try: 25 | self.update(data) 26 | except ValueError: 27 | self.update({x:0 for x in data}) 28 | 29 | def __contains__(self, elem): 30 | return elem in self._data 31 | 32 | def __delitem__(self, key): 33 | del self._data[key] 34 | 35 | def __iter__(self): 36 | for elem in self._data: 37 | yield elem 38 | 39 | def __getitem__(self, key): 40 | return self._data[key] 41 | 42 | def __len__(self): 43 | return len(self._data) 44 | 45 | def __repr__(self): 46 | return repr(self._data) 47 | 48 | def __setitem__(self, key, time): 49 | if self.unrestricted_number_chunks and key not in self: 50 | for x in key: 51 | if utilities.splitting(x[1]).values and utilities.splitting(x[1]).values in self.unrestricted_number_chunks: 52 | self.unrestricted_number_chunks.update([utilities.splitting(x[1]).values]) 53 | if self.restricted_number_chunks and key not in self: 54 | for x in key: 55 | if utilities.splitting(x[1]).values and (x[0], utilities.splitting(x[1]).values) in self.restricted_number_chunks: 56 | self.restricted_number_chunks.update([(x[0], utilities.splitting(x[1]).values)]) 57 | if isinstance(key, chunks.Chunk): 58 | if isinstance(time, np.ndarray): 59 | self._data[key] = time 60 | else: 61 | try: 62 | self._data[key] = np.array([round(float(time), 4)]) 63 | except TypeError: 64 | self._data[key] = np.array(time) 65 | else: 66 | raise utilities.ACTRError(f"Only chunks can be added as attributes to Declarative Memory; '{key}' is not a chunk") 67 | 68 | def add_activation(self, element, activation): 69 | """ 70 | Add activation of an element. 
71 | 72 | This raises an error if the element is not in the declarative memory. 73 | """ 74 | if element in self: 75 | self.activations[element] = activation 76 | else: 77 | raise AttributeError(f"The chunk {element} is not in the declarative memory.") 78 | 79 | def add(self, element, time=0): 80 | """ 81 | Add an element to decl. mem., or add a presentation time to an already existing element. 82 | 83 | element can be either one chunk, or an iterable of chunks. 84 | """ 85 | if isinstance(time, collections.abc.Iterable): 86 | try: 87 | new = np.concatenate((self.setdefault(element, np.array([])), np.array(time))) 88 | self[element] = new 89 | except TypeError: 90 | for x in element: 91 | new = np.concatenate((self.setdefault(x, np.array([])), np.array(time))) 92 | self[x] = new 93 | else: 94 | try: 95 | new = np.append(self.setdefault(element, np.array([])), round(float(time), 4)) 96 | self[element] = new 97 | except TypeError: 98 | for x in element: 99 | new = np.append(self.setdefault(x, np.array([])), round(float(time), 4)) 100 | self[x] = new 101 | 102 | def copy(self): 103 | """ 104 | Copy declarative memory. 105 | """ 106 | dm = DecMem(self._data.copy()) 107 | dm.activations = self.activations.copy() 108 | dm.restricted_number_chunks = self.restricted_number_chunks.copy() 109 | dm.unrestricted_number_chunks = self.unrestricted_number_chunks.copy() 110 | return dm 111 | 112 | class DecMemBuffer(buffers.Buffer): 113 | """ 114 | Declarative memory buffer. 115 | """ 116 | 117 | def __init__(self, decmem=None, data=None, finst=0): 118 | buffers.Buffer.__init__(self, decmem, data) 119 | self.recent = collections.deque() 120 | self.__finst = finst 121 | self.activation = None #activation of the last retrieved element 122 | 123 | #parameters 124 | self.model_parameters = {} 125 | 126 | @property 127 | def finst(self): 128 | """ 129 | Finst - how many chunks are 'remembered' in declarative memory buffer.
130 | """ 131 | return self.__finst 132 | 133 | @finst.setter 134 | def finst(self, value): 135 | if value >= 0: 136 | self.__finst = value 137 | else: 138 | raise ValueError('Finst in the dm buffer must be >= 0') 139 | 140 | @property 141 | def decmem(self): 142 | """ 143 | Default harvest of retrieval buffer. 144 | """ 145 | return self.dm 146 | 147 | @decmem.setter 148 | def decmem(self, value): 149 | try: 150 | self.dm = value 151 | except ValueError: 152 | raise utilities.ACTRError('The default harvest set in the retrieval buffer is not a possible declarative memory') 153 | 154 | def add(self, elem, time=0): 155 | """ 156 | Clear current buffer and adds a new chunk. 157 | """ 158 | self.clear(time) 159 | super().add(elem) 160 | 161 | def clear(self, time=0): 162 | """ 163 | Clear buffer, add cleared chunk into memory. 164 | """ 165 | if self._data: 166 | self.dm.add(self._data.pop(), time) 167 | 168 | def copy(self, dm=None): 169 | """ 170 | Copy buffer, along with its declarative memory, unless dm is specified. You need to specify new dm if 2 buffers share the same dm - only one of them should copy dm then. 171 | """ 172 | if dm == None: 173 | dm = self.dm 174 | copy_buffer = DecMemBuffer(dm, self._data.copy()) 175 | return copy_buffer 176 | 177 | def test(self, state, inquiry): 178 | """ 179 | Is current state busy/free/error? 180 | """ 181 | return getattr(self, state) == inquiry 182 | 183 | def retrieve(self, time, otherchunk, actrvariables, buffers, extra_tests, model_parameters): 184 | """ 185 | Retrieve a chunk from declarative memory that matches otherchunk. 
186 | """ 187 | model_parameters = model_parameters.copy() 188 | model_parameters.update(self.model_parameters) 189 | 190 | if actrvariables == None: 191 | actrvariables = {} 192 | try: 193 | mod_attr_val = {x[0]: utilities.check_bound_vars(actrvariables, x[1], negative_impossible=False) for x in otherchunk.removeunused()} 194 | except utilities.ACTRError as arg: 195 | raise utilities.ACTRError(f"Retrieving the chunk '{otherchunk}' is impossible; {arg}") 196 | chunk_tobe_matched = chunks.Chunk(otherchunk.typename, **mod_attr_val) 197 | 198 | max_A = float("-inf") 199 | 200 | retrieved = None 201 | for chunk in self.dm: 202 | try: 203 | if extra_tests["recently_retrieved"] == False or extra_tests["recently_retrieved"] == 'False': 204 | if self.__finst and chunk in self.recent: 205 | continue 206 | 207 | else: 208 | if self.__finst and chunk not in self.recent: 209 | continue 210 | except KeyError: 211 | pass 212 | 213 | if model_parameters["subsymbolic"]: #if subsymbolic, check activation 214 | A_pm = 0 215 | if model_parameters["partial_matching"]: 216 | A_pm = chunk_tobe_matched.match(chunk, partialmatching=True, mismatch_penalty=model_parameters["mismatch_penalty"]) 217 | else: 218 | if not chunk_tobe_matched <= chunk: 219 | continue 220 | 221 | try: 222 | A_bll = utilities.baselevel_learning(time, self.dm[chunk], model_parameters["baselevel_learning"], model_parameters["decay"], self.dm.activations.get(chunk), optimized_learning=model_parameters["optimized_learning"]) #bll 223 | except UnboundLocalError: 224 | continue 225 | if math.isnan(A_bll): 226 | raise utilities.ACTRError(f"The following chunk cannot receive base activation: {chunk}. 
The reason is that one of its traces did not appear in a past moment.") 227 | A_sa = utilities.spreading_activation(chunk, buffers, self.dm, model_parameters["buffer_spreading_activation"], model_parameters["strength_of_association"], model_parameters["spreading_activation_restricted"], model_parameters["association_only_from_chunks"]) 228 | inst_noise = utilities.calculate_instantaneous_noise(model_parameters["instantaneous_noise"]) 229 | A = A_bll + A_sa + A_pm + inst_noise #chunk.activation is the manually specified activation, potentially used by the modeller 230 | 231 | if utilities.retrieval_success(A, model_parameters["retrieval_threshold"]) and max_A < A: 232 | max_A = A 233 | self.activation = max_A 234 | retrieved = chunk 235 | extra_time = utilities.retrieval_latency(A, model_parameters["latency_factor"], model_parameters["latency_exponent"]) 236 | 237 | if model_parameters["activation_trace"]: 238 | print("(Partially) matching chunk:", chunk) 239 | print("Base level learning:", A_bll) 240 | print("Spreading activation:", A_sa) 241 | print("Partial matching:", A_pm) 242 | print("Noise:", inst_noise) 243 | print("Total activation:", A) 244 | print("Time to retrieve:", extra_time) 245 | else: #otherwise, retrieval is instantaneous 246 | if chunk_tobe_matched <= chunk and self.dm[chunk][0] != time: #the second condition ensures that a chunk created at the current time is not retrieved immediately 247 | retrieved = chunk 248 | extra_time = 0 249 | 250 | if not retrieved: 251 | if model_parameters["subsymbolic"]: 252 | extra_time = utilities.retrieval_latency(model_parameters["retrieval_threshold"], model_parameters["latency_factor"], model_parameters["latency_exponent"]) 253 | else: 254 | extra_time = model_parameters["rule_firing"] 255 | if self.__finst: 256 | self.recent.append(retrieved) 257 | if self.__finst < len(self.recent): 258 | self.recent.popleft() 259 | return retrieved, extra_time 260 | 261 |
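The retrieval loop in declarative.py combines base-level learning, spreading activation, partial matching, and noise into a single activation value. As a minimal, self-contained sketch of just the base-level term, B = ln(sum_j t_j^-d), from docs/actrparameters.tex (a hypothetical standalone helper for illustration, not pyactr's own utilities.baselevel_learning, which also handles optimized learning and manually set activations):

```python
import math

def baselevel_activation(presentation_times, current_time, decay=0.5):
    """Sketch of the base-level learning equation B = ln(sum_j t_j ** -d),
    where each t_j is the time elapsed since the j-th presentation of the
    chunk and d is the decay parameter (pyactr default: 0.5).
    Hypothetical helper, not part of pyactr's API."""
    lags = [current_time - t for t in presentation_times]
    if any(lag <= 0 for lag in lags):
        raise ValueError("every presentation must precede current_time")
    return math.log(sum(lag ** -decay for lag in lags))

# A chunk presented at 0 s and 2 s, queried at 4 s with the default decay:
# B = ln(4**-0.5 + 2**-0.5), roughly 0.188
```

More recent and more frequent presentations yield a higher base-level term, which (via T = F * e^-A) shortens retrieval latency.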
-------------------------------------------------------------------------------- /pyactr/environment.py: -------------------------------------------------------------------------------- 1 | """ 2 | Environment used for ACT-R model. 3 | """ 4 | 5 | import collections.abc 6 | 7 | from pyactr import utilities 8 | 9 | class Environment: 10 | """ 11 | Environment module for ACT-R. This shows whatever is seen on screen at the moment, allows interaction with ACT-R model (vision and motor modules). 12 | """ 13 | 14 | Event = utilities.Event 15 | _ENV = utilities._ENV 16 | 17 | def __init__(self, size=(640, 360), simulated_display_resolution=(1366, 768), simulated_screen_size=(50, 28), viewing_distance=50, focus_position=None): 18 | self.gui = True 19 | self.size = size 20 | try: 21 | if focus_position and len(focus_position) != 2: 22 | raise utilities.ACTRError("Focus position of the environment must be an iterable with 2 values.") 23 | except TypeError: 24 | raise utilities.ACTRError("Focus position of the environment must be an iterable with 2 values.") 25 | if not focus_position: 26 | focus_position = (size[0]/2, size[1]/2) 27 | 28 | self.__current_focus = list(focus_position) 29 | 30 | self.stimuli = None 31 | self.triggers = None 32 | self.times = None 33 | 34 | #below - used for interaction with vision and motor 35 | self.stimulus = None 36 | self.trigger = None 37 | self.simulated_display_resolution = simulated_display_resolution 38 | self.simulated_screen_size = simulated_screen_size 39 | self.viewing_distance = viewing_distance 40 | 41 | self.initial_time = 0 42 | 43 | @property 44 | def current_focus(self): 45 | """ 46 | Current focus of the vision module in the environment. 
47 | """ 48 | return self.__current_focus 49 | 50 | @current_focus.setter 51 | def current_focus(self, value): 52 | if isinstance(value, collections.abc.Iterable) and len(value) == 2: 53 | self.__current_focus = list(value) 54 | else: 55 | raise ValueError('Current focus in the environment not defined properly. It must be a tuple.') 56 | 57 | def roundtime(self, time): 58 | """ 59 | Time (in seconds), rounded to tenths of milliseconds. 60 | """ 61 | return utilities.roundtime(time) 62 | 63 | def environment_process(self, stimuli=None, triggers=None, times=1, start_time=0): 64 | """ 65 | Example of environment process. Text appears, changes/disappers after run_time runs out. 66 | 67 | This does not do anything on its own, it has to be embedded in the simulation of an ACT-R Model. 68 | stimuli: list of stimuli 69 | triggers: list of triggers. 70 | times: how much time (in seconds) it takes before the screen is flushed and a new environment (next screen) appears 71 | start_time: starting point of the first stimulus. 72 | 73 | The length of triggers has to match the length of stimuli or one of them has to be of length 1. 
74 | 75 | Arbitrary visual attributes can be added as keys within each stimulus dictionary 76 | You must also specify extended types using keys 'visual_typename' and/or 'visual_location_typename' as needed 77 | Additional key 'externally_visible' can pass a list of keys which will be visible to the visual location buffer 78 | (Use this key with caution: visual_location searches cannot return an object if it contains 79 | an externally-visible attribute that is not present in the search) 80 | """ 81 | #subtract start_time from initial_time 82 | start_time = self.initial_time - start_time 83 | #make all arguments iterables if they are not yet 84 | if isinstance(stimuli, str) or isinstance(stimuli, collections.abc.Mapping) or not isinstance(stimuli, collections.abc.Iterable): 85 | stimuli = [stimuli] 86 | for idx in range(len(stimuli)): 87 | if isinstance(stimuli[idx], collections.abc.Mapping): 88 | for each in stimuli[idx]: 89 | if not isinstance(stimuli[idx][each], collections.abc.Mapping): #stimuli[idx][each] encodes position etc. 90 | raise utilities.ACTRError("Stimuli must be a list of dictionaries, e.g.,: [{'stimulus1-0time': {'text': 'hi', 'position': (0, 0)}, 'stimulus2-0time': {'text': 'you', 'position': (10, 10)}}, {'stimulus3-latertime': {'text': 'new', 'position': (0, 0)}}] etc. 
Currently, you have this: '%s'" %stimuli[idx]) 91 | else: 92 | stimuli[idx] = {stimuli[idx]: {'position': (320, 180)}} #default position - 320, 180 93 | if isinstance(triggers, str) or not isinstance(triggers, collections.abc.Iterable): 94 | triggers = [triggers] 95 | if isinstance(times, str) or not isinstance(times, collections.abc.Iterable): 96 | times = [times] 97 | #sanity checks - each arg must match in length, or an argument must be of length 1 (2 for positions) 98 | if len(stimuli) != len(triggers): 99 | if len(stimuli) == 1: 100 | stimuli = stimuli * len(triggers) 101 | elif len(triggers) == 1: 102 | triggers = triggers * len(stimuli) 103 | else: 104 | raise utilities.ACTRError("In environment, stimuli must be the same length as triggers or one of the two must be of length 1") 105 | if len(stimuli) != len(times): 106 | if len(times) == 1: 107 | times = times * len(stimuli) 108 | else: 109 | raise utilities.ACTRError("In environment, times must be the same length as stimuli or times must be of length 1") 110 | self.stimuli = stimuli 111 | try: 112 | self.triggers = [] 113 | for trigger in triggers: 114 | if isinstance(trigger, str) and trigger.upper() == "SPACE": 115 | self.triggers.append(set(["SPACE"])) 116 | else: 117 | self.triggers.append(set(x.upper() for x in trigger)) 118 | except (TypeError, AttributeError): 119 | raise utilities.ACTRError("Triggers must be strings, a list of strings or a list of iterables of strings.") 120 | self.times = times 121 | time = start_time 122 | yield self.Event(self.roundtime(time), self._ENV, "STARTING ENVIRONMENT") #yield Event; Event has three positions - time, process, in this case, ENVIRONMENT (specified in self._ENV) and description of action 123 | for idx, stimulus in enumerate(self.stimuli): #run through elems, print them, yield a corresponding event 124 | self.run_time = self.times[idx] #current run_time 125 | time = time + self.run_time 126 | self.trigger = self.triggers[idx] #current trigger 127 | 
self.output(stimulus) #output on environment 128 | yield self.Event(self.roundtime(time), self._ENV, "PRINTED NEW STIMULUS") 129 | 130 | def output(self, stimulus): 131 | """ 132 | Output obj in environment. 133 | """ 134 | self.stimulus = stimulus 135 | #this part is visual re-encoding - encode new info in your current focus 136 | #TODO - check that the new stimulus is different from the last one; do stuffing visuallocation 137 | 138 | if not self.gui: 139 | printed_stimulus = self.stimulus.copy() 140 | try: 141 | printed_stimulus.pop('frequency') 142 | except KeyError: 143 | pass 144 | print("****Environment:", printed_stimulus) 145 | 146 | -------------------------------------------------------------------------------- /pyactr/goals.py: -------------------------------------------------------------------------------- 1 | """ 2 | Goals. 3 | """ 4 | 5 | from pyactr import buffers, chunks, utilities 6 | from pyactr.utilities import ACTRError 7 | 8 | class Goal(buffers.Buffer): 9 | """ 10 | Goal buffer module. 11 | """ 12 | 13 | def __init__(self, data=None, default_harvest=None, delay=0): 14 | buffers.Buffer.__init__(self, default_harvest, data) 15 | self.delay = delay 16 | 17 | @property 18 | def delay(self): 19 | """ 20 | Delay (in s) to create chunks in the goal buffer. 21 | """ 22 | return self.__delay 23 | 24 | @delay.setter 25 | def delay(self, value): 26 | if value >= 0: 27 | self.__delay = value 28 | else: 29 | raise ValueError('Delay in the goal buffer must be >= 0') 30 | 31 | @property 32 | def default_harvest(self): 33 | """ 34 | Default harvest of goal buffer. 
35 | """ 36 | return self.dm 37 | 38 | @default_harvest.setter 39 | def default_harvest(self, value): 40 | try: 41 | self.dm = value 42 | except ValueError: 43 | raise ACTRError('The default harvest set in the goal buffer is not a possible declarative memory') 44 | 45 | def add(self, elem, time=0, harvest=None): 46 | """ 47 | If the buffer has a chunk, it clears current buffer (into the memory associated with the goal buffer). It adds a new chunk, specified as elem. Decl. memory is either specified as default_harvest, when Goal is initialized, or it can be specified as harvest. 48 | 49 | Neither time nor harvest currently affect the behavior of the goal buffer. 50 | """ 51 | super().add(elem) 52 | 53 | def clear(self, time=0, harvest=None): 54 | """ 55 | Clear buffer, add the cleared chunk into decl. memory. Decl. memory is either specified as default_harvest, when Goal is initialized, or it can be specified as harvest here. 56 | """ 57 | if harvest != None: 58 | if self._data: 59 | harvest.add(self._data.pop(), time) 60 | else: 61 | if self._data: 62 | self.dm.add(self._data.pop(), time) 63 | 64 | 65 | def copy(self, harvest=None): 66 | """ 67 | Copy the buffer. Unlike other buffers, this one does not copy the memory that is used for its harvest. This is because goal buffer will always share the memory to which it harvests with another retrieval buffer. You have to specify harvest (that is, which declarative memory should harvest the buffer) if you want clearing to work in the copied buffer. 68 | """ 69 | if harvest == None: 70 | harvest = self.dm 71 | copy_goal = Goal(self._data.copy(), harvest) 72 | return copy_goal 73 | 74 | def test(self, state, inquiry): 75 | """ 76 | Is current state busy/free/error? 77 | """ 78 | return getattr(self, state) == inquiry 79 | 80 | def retrieve(self, otherchunk, actrvariables=None): 81 | """ 82 | Retrieve a chunk. This is not possible in goal buffer, so an error is raised. 
83 | """ 84 | raise utilities.ACTRError(f"An attempt to retrieve from goal in the chunk '{otherchunk}'; retrieving from goal is not possible") 85 | 86 | 87 | def create(self, otherchunk, harvest=None, actrvariables=None): 88 | """ 89 | Create (aka set) a chunk in goal buffer. 90 | """ 91 | try: 92 | mod_attr_val = {x[0]: utilities.check_bound_vars(actrvariables, x[1]) for x in otherchunk.removeunused()} #creates dict of attr-val pairs according to otherchunk 93 | except utilities.ACTRError as arg: 94 | raise utilities.ACTRError(f"Setting the buffer using the chunk '{otherchunk}' is impossible; {arg}") 95 | 96 | new_chunk = chunks.Chunk(otherchunk.typename, **mod_attr_val) #creates new chunk 97 | 98 | self.add(new_chunk, 0, harvest) #put chunk using add 99 | -------------------------------------------------------------------------------- /pyactr/model.py: -------------------------------------------------------------------------------- 1 | """ 2 | ACT-R Model. 3 | """ 4 | 5 | import pyparsing 6 | 7 | from pyactr import chunks, declarative, goals, motor, productions, simulation, utilities, vision 8 | 9 | class ACTRModel: 10 | """ 11 | ACT-R model, running ACT-R simulations. 
12 | 13 | model_parameters and their default values are: 14 | {"subsymbolic": False, 15 | "rule_firing": 0.05, 16 | "latency_factor": 0.1, 17 | "latency_exponent": 1.0, 18 | "decay": 0.5, 19 | "baselevel_learning": True, 20 | "optimized_learning": False, 21 | "instantaneous_noise" : 0, 22 | "retrieval_threshold" : 0, 23 | "buffer_spreading_activation" : {}, 24 | "spreading_activation_restricted" : False, 25 | "strength_of_association": 0, 26 | "association_only_from_chunks": True, 27 | "partial_matching": False, 28 | "mismatch_penalty": 1, 29 | "activation_trace": False, 30 | "utility_noise": 0, 31 | "utility_learning": False, 32 | "utility_alpha": 0.2, 33 | "motor_prepared": False, 34 | "strict_harvesting": False, 35 | "production_compilation": False, 36 | "automatic_visual_search": True, 37 | "automatic_buffering": True, 38 | "emma": True, 39 | "emma_noise": True, 40 | "emma_landing_site_noise": False, 41 | "eye_mvt_angle_parameter": 1, 42 | "eye_mvt_scaling_parameter": 0.01 43 | } 44 | 45 | environment has to be an instantiation of the class Environment. 
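        An illustrative way to override a few parameters when creating a model (the particular values are only an example, not recommended settings):

        >>> model = ACTRModel(subsymbolic=True, rule_firing=0.1)

        Any keyword argument outside the list above raises an ACTRError when the model is initialized.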
46 | """ 47 | 48 | MODEL_PARAMETERS = {"subsymbolic": False, 49 | "rule_firing": 0.05, 50 | "latency_factor": 0.1, 51 | "latency_exponent": 1.0, 52 | "decay": 0.5, 53 | "baselevel_learning": True, 54 | "optimized_learning": False, 55 | "instantaneous_noise" : 0, 56 | "retrieval_threshold" : 0, 57 | "buffer_spreading_activation" : {}, 58 | "spreading_activation_restricted" : False, 59 | "strength_of_association": 0, 60 | "association_only_from_chunks": True, 61 | "partial_matching": False, 62 | "mismatch_penalty": 1, 63 | "activation_trace": False, 64 | "utility_noise": 0, 65 | "utility_learning": False, 66 | "utility_alpha": 0.2, 67 | "motor_prepared": False, 68 | "strict_harvesting": False, 69 | "production_compilation": False, 70 | "automatic_visual_search": True, 71 | "automatic_buffering": True, 72 | "emma": True, 73 | "emma_noise": True, 74 | "emma_landing_site_noise": False, 75 | "eye_mvt_angle_parameter": 1, #in LispACT-R: 1 76 | "eye_mvt_scaling_parameter": 0.01, #in LispACT-R: 0.01, but dft rule firing -- 0.01 77 | } 78 | 79 | def __init__(self, environment=None, **model_parameters): 80 | 81 | self.chunktype = chunks.chunktype 82 | self.chunkstring = chunks.chunkstring 83 | 84 | self.visbuffers = {} 85 | 86 | start_goal = goals.Goal() 87 | self.goals = {"g": start_goal} 88 | 89 | self.__buffers = {"g": start_goal} 90 | 91 | start_retrieval = declarative.DecMemBuffer() 92 | self.retrievals = {"retrieval": start_retrieval} 93 | 94 | self.__buffers["retrieval"] = start_retrieval 95 | 96 | start_dm = declarative.DecMem() 97 | self.decmems = {"decmem": start_dm} 98 | 99 | self.productions = productions.Productions() 100 | self.__similarities = {} 101 | 102 | self.model_parameters = self.MODEL_PARAMETERS.copy() 103 | 104 | try: 105 | if not set(model_parameters.keys()).issubset(set(self.MODEL_PARAMETERS.keys())): 106 | params = set(model_parameters.keys()).difference(set(self.MODEL_PARAMETERS.keys())) 107 | allowed_params = set(self.MODEL_PARAMETERS.keys()) 108 
| raise utilities.ACTRError(f"Incorrect model parameter(s) {params}. The only possible model parameters are: '{allowed_params}'") 109 | self.model_parameters.update(model_parameters) 110 | except TypeError: 111 | pass 112 | 113 | self.__env = environment 114 | 115 | @property 116 | def retrieval(self): 117 | """ 118 | Retrieval in the model. 119 | """ 120 | if len(self.retrievals) == 1: 121 | return list(self.retrievals.values())[0] 122 | 123 | raise ValueError("Zero or more than 1 retrieval specified, unclear which one should be shown. Use ACTRModel.retrievals instead.") 124 | 125 | @retrieval.setter 126 | def retrieval(self, name): 127 | self.set_retrieval(name) 128 | 129 | @property 130 | def decmem(self): 131 | """ 132 | Declarative memory in the model. 133 | """ 134 | if len(self.decmems) == 1: 135 | return list(self.decmems.values())[0] 136 | 137 | raise ValueError("Zero or more than 1 declarative memory specified, unclear which one should be shown. Use ACTRModel.decmems instead.") 138 | 139 | @decmem.setter 140 | def decmem(self, data): 141 | self.set_decmem(data) 142 | 143 | def set_decmem(self, data=None): 144 | """ 145 | Set declarative memory. 146 | """ 147 | dm = declarative.DecMem(data) 148 | if len(self.decmems) > 1: 149 | self.decmems["".join(["decmem", str(len(self.decmems))])] = dm 150 | else: 151 | self.decmems["decmem"] = dm 152 | return dm 153 | 154 | @property 155 | def goal(self): 156 | """ 157 | Goal buffer in the model. 158 | """ 159 | if len(self.goals) == 1: 160 | return list(self.goals.values())[0] 161 | else: 162 | raise ValueError("Zero or more than 1 goal specified, unclear which one should be shown. Use ACTRModel.goals instead.") 163 | 164 | @goal.setter 165 | def goal(self, name): 166 | self.set_goal(name, 0) 167 | 168 | def set_retrieval(self, name): 169 | """ 170 | Set retrieval. 171 | 172 | name: the name by which the retrieval buffer is referred to in production rules. 
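        For example, a second retrieval buffer could be added as follows (the buffer name 'retrieval2' is arbitrary):

        >>> retrieval2 = model.set_retrieval("retrieval2")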
173 |         """
174 |         if not isinstance(name, str):
175 |             raise ValueError("Retrieval buffer can only be set with a string, the name of the retrieval buffer.")
176 |         dmb = declarative.DecMemBuffer()
177 |         self.__buffers[name] = dmb
178 |         self.retrievals[name] = dmb
179 |         return dmb
180 | 
181 |     def set_goal(self, name, delay=0):
182 |         """
183 |         Set goal buffer. delay specifies the delay of setting a chunk in the buffer.
184 | 
185 |         name: the name by which the goal buffer is referred to in production rules.
186 |         """
187 |         if not isinstance(name, str):
188 |             raise ValueError("Goal buffer can only be set with a string, the name of the goal buffer.")
189 |         g = goals.Goal(delay=delay)
190 |         self.__buffers[name] = g
191 |         self.goals[name] = g
192 |         return g
193 | 
194 |     def visualBuffer(self, name_visual, name_visual_location, default_harvest=None, finst=4):
195 |         """
196 |         Create visual buffers for ACTRModel. Two buffers are present in vision: the visual What buffer, called just the visual buffer (it encodes seen objects), and the visual Where buffer, called the visual_location buffer (it encodes positions). Both are created and returned. Finst is relevant only for the visual_location buffer.
197 | 
198 |         name_visual: the name by which the visual buffer is referred to in production rules.
199 |         name_visual_location: the name by which the visual_location buffer is referred to in production rules.
200 | 
201 |         """
202 |         v1 = vision.Visual(self.__env, default_harvest)
203 |         v2 = vision.VisualLocation(self.__env, default_harvest, finst)
204 |         self.visbuffers[name_visual] = v1
205 |         self.visbuffers[name_visual_location] = v2
206 |         return v1, v2
207 | 
208 |     def set_productions(self, *rules):
209 |         """
210 |         Create production rules out of functions. One or more functions can be passed in.
211 |         """
212 |         self.productions = productions.Productions(*rules)
213 |         return self.productions
214 | 
215 |     def productionstring(self, name='', string='', utility=0, reward=None):
216 |         """
217 |         Create a production rule when given a string. The string is specified in the following form: LHS ==> RHS
218 | 
219 |         name: name of the production rule
220 |         string: string specifying the production rule
221 |         utility: utility of the rule (default: 0)
222 |         reward: reward of the rule (default: None)
223 | 
224 |         The following example is a rule that checks the buffer 'g'; if the buffer has the value one, the rule resets it to two:
225 |         >>> ACTRModel().productionstring(name='example0', string='=g>\
226 |         isa example\
227 |         value one\
228 |         ==>\
229 |         =g>\
230 |         isa example\
231 |         value two')
232 |         {'=g': example(value= one)}
233 |         ==>
234 |         {'=g': example(value= two)}
235 |         """
236 |         if not name:
237 |             name = "unnamedrule" + str(productions.Productions._undefinedrulecounter) #the counter is an int, so it must be converted to str before concatenation
238 |             productions.Productions._undefinedrulecounter += 1
239 |         temp_dictRHS = {v: k for k, v in utilities._RHSCONVENTIONS.items()}
240 |         temp_dictLHS = {v: k for k, v in utilities._LHSCONVENTIONS.items()}
241 |         rule_reader = utilities.getrule()
242 |         try:
243 |             rule = rule_reader.parse_string(string, parse_all=True)
244 |         except pyparsing.ParseException as e:
245 |             raise utilities.ACTRError(f"The rule '{name}' could not be parsed. 
The following error was observed: {e}") 246 | lhs, rhs = {}, {} 247 | def func(): 248 | for each in rule[0]: 249 | if each[0] == temp_dictLHS["query"]: 250 | lhs[each[0]+each[1]] = {x[0]:x[1] for x in each[3]} 251 | else: 252 | try: 253 | type_chunk, chunk_dict = chunks.createchunkdict(each[3]) 254 | except utilities.ACTRError as e: 255 | raise utilities.ACTRError(f"The rule string {name} is not defined correctly; {e}") 256 | lhs[each[0]+each[1]] = chunks.makechunk("", type_chunk, **chunk_dict) 257 | yield lhs 258 | for each in rule[2]: 259 | if each[0] == temp_dictRHS["extra_test"]: 260 | rhs[each[0]+each[1]] = {x[0]:x[1] for x in each[3]} 261 | elif each[0] == temp_dictRHS["clear"]: 262 | rhs[each[0]+each[1]] = None 263 | elif each[0] == temp_dictRHS["execute"]: 264 | rhs[each[0]+each[1]] = each[3] 265 | else: 266 | try: 267 | type_chunk, chunk_dict = chunks.createchunkdict(each[3]) 268 | except utilities.ACTRError as e: 269 | raise utilities.ACTRError(f"The rule string {name} is not defined correctly; {e}") 270 | rhs[each[0]+each[1]] = chunks.makechunk("", type_chunk, **chunk_dict) 271 | yield rhs 272 | self.productions.update({name: {"rule": func, "utility": utility, "reward": reward}}) 273 | return self.productions[name] 274 | 275 | def set_similarities(self, chunk, otherchunk, value): 276 | """ 277 | Set similarities between chunks. By default, different chunks have the value of -1. 278 | 279 | chunk and otherchunk are two chunks whose similarities are set. value must be a non-positive number. 
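        For example, assuming ch1 and ch2 are chunks created elsewhere, the following would make them partially confusable under partial matching:

        >>> model.set_similarities(ch1, ch2, -0.5)

        The similarity is stored symmetrically, so the order of the two chunks does not matter.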
280 | """ 281 | if value > 0: 282 | raise utilities.ACTRError("Values in similarities must be 0 or smaller than 0") 283 | self.__similarities[tuple((chunk, otherchunk))] = value 284 | self.__similarities[tuple((otherchunk, chunk))] = value 285 | 286 | def simulation(self, realtime=False, trace=True, gui=True, initial_time=0, environment_process=None, **kwargs): 287 | """ 288 | Prepare simulation of the model 289 | 290 | This does not run the simulation, it only returns the simulation object. The object can then be run using run(max_time) command. 291 | 292 | realtime: should the simulation be run in real time or not? 293 | trace: should the trace of the simulation be printed? 294 | gui: should the environment appear on a separate screen? (This requires tkinter.) 295 | initial_time: what is the starting time point of the simulation? 296 | environment_process: what environment process should the simulation use? 297 | The environment_process argument should be supplied with the method environment_process of the environment used in the model. 298 | kwargs are arguments that environment_process will be supplied with. 
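        A typical call for a model without an environment might look like this (max_time in run is in seconds of simulated time):

        >>> sim = model.simulation(realtime=False, gui=False)
        >>> sim.run(max_time=1)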
299 | """ 300 | 301 | if len(self.decmems) == 1: 302 | for key in self.__buffers: 303 | self.__buffers[key].dm = self.decmem #if only one dm, let all buffers use it 304 | elif len([x for x in self.decmems.values() if x]) == 1: 305 | for key in self.__buffers: 306 | if not self.__buffers[key].dm: 307 | self.__buffers[key].dm = self.decmem #if only one non-trivial dm, let buffers use it that do not have a dm specified 308 | 309 | decmem = {name: self.__buffers[name].dm for name in self.__buffers\ 310 | if self.__buffers[name].dm != None} #dict of declarative memories used; more than 1 decmem might appear here 311 | 312 | self.__buffers["manual"] = motor.Motor() #adding motor buffer 313 | 314 | if self.__env: 315 | self.__env.initial_time = initial_time #set the initial time of the environment to be the same as simulation 316 | if self.visbuffers: 317 | self.__buffers.update(self.visbuffers) 318 | else: 319 | dm = list(decmem.values())[0] 320 | self.__buffers["visual"] = vision.Visual(self.__env, dm) #adding vision buffers 321 | self.__buffers["visual_location"] = vision.VisualLocation(self.__env, dm) #adding vision buffers 322 | 323 | self.productions.used_rulenames = {} # remove any previously stored rules for utility learning 324 | 325 | used_productions = productions.ProductionRules(self.productions, self.__buffers, decmem, self.model_parameters) 326 | 327 | chunks.Chunk._similarities = self.__similarities 328 | 329 | return simulation.Simulation(self.__env, realtime, trace, gui, self.__buffers, used_productions, initial_time, environment_process, **kwargs) 330 | -------------------------------------------------------------------------------- /pyactr/motor.py: -------------------------------------------------------------------------------- 1 | """ 2 | Motor module. Carries out key presses. 3 | """ 4 | 5 | from pyactr import buffers, chunks, utilities 6 | from pyactr.utilities import ACTRError 7 | 8 | class Motor(buffers.Buffer): 9 | """ 10 | Motor buffer. 
Only pressing keys possible. 11 | """ 12 | 13 | LEFT_HAND = ("1", "2", "3", "4", "5", "Q", "W", "E", "R", "T", "A", "S", "D", "F", "G", "Z", "X", "C", "V", "B", "SPACE") 14 | RIGHT_HAND = ("6", "7", "8", "9", "0", "Y", "U", "I", "O", "P", "H", "J", "K", "L", "N", "M", "SPACE") 15 | PRESSING = ("A", "S", "D", "F", "J", "K", "L", "SPACE") 16 | SLOWEST = ("5", "6") 17 | OTHERS = () 18 | _MANUAL = utilities.MANUAL 19 | 20 | TIME_PRESSES = {PRESSING: (0.15, 0.05, 0.01, 0.09), SLOWEST: (0.25, 0.05, 0.11, 0.16), OTHERS: (0.25, 0.05, 0.1, 0.15)} #numbers taken from the motor module of Lisp ACT-R models for all the standard keyboard keys; the numbers are: preparation, initiation, action, finishing movement 21 | 22 | def __init__(self): 23 | buffers.Buffer.__init__(self, None, None) 24 | self.preparation = self._FREE 25 | self.processor = self._FREE 26 | self.execution = self._FREE 27 | 28 | self.last_key = [None, 0] #the number says what the last key was and when the last press will be finished, so that the preparation of the next move can speed up if it is a similar key, and execution waits for the previous mvt (two mvts cannot be carried out at the same time, according to ACT-R motor module) 29 | 30 | def test(self, state, inquiry): 31 | """ 32 | Is current state/preparation etc. busy or free? 33 | """ 34 | return getattr(self, state) == inquiry 35 | 36 | def add(self, elem): 37 | """ 38 | Adding a chunk. This is illegal for motor buffer. 39 | """ 40 | raise AttributeError("Attempt to add an element to motor buffer. This is not possible.") 41 | 42 | def create(self, otherchunk, actrvariables=None): 43 | """ 44 | Create (aka set) a chunk for manual control. The chunk is returned (and could be used by device or external environment). 
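        In production rules, a key press is typically requested on the right-hand side with a chunk along these lines (a sketch; the surrounding rule is omitted):

        +manual>
        isa _manual
        cmd press_key
        key A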
45 | """ 46 | if actrvariables == None: 47 | actrvariables = {} 48 | try: 49 | mod_attr_val = {x[0]: utilities.check_bound_vars(actrvariables, x[1]) for x in otherchunk.removeunused()} #creates dict of attr-val pairs according to otherchunk 50 | except ACTRError as arg: 51 | raise ACTRError(f"Setting the chunk '{otherchunk}' in the manual buffer is impossible; {arg}") 52 | 53 | new_chunk = chunks.Chunk(self._MANUAL, **mod_attr_val) #creates new chunk 54 | 55 | if new_chunk.cmd.values not in utilities.CMDMANUAL: 56 | raise ACTRError(f"Motor module received an invalid command: '{new_chunk.cmd.values}'. The valid commands are: '{utilities.CMDMANUAL}'") 57 | 58 | if new_chunk.cmd.values == utilities.CMDPRESSKEY: 59 | pressed_key = new_chunk.key.values.upper() #change key into upper case 60 | mod_attr_val["key"] = pressed_key 61 | new_chunk = chunks.Chunk(self._MANUAL, **mod_attr_val) #creates new chunk 62 | if pressed_key not in self.LEFT_HAND and new_chunk.key.values not in self.RIGHT_HAND: 63 | raise ACTRError(f"Motor module received an invalid key: {pressed_key}") 64 | 65 | return new_chunk 66 | -------------------------------------------------------------------------------- /pyactr/simulation.py: -------------------------------------------------------------------------------- 1 | """ 2 | ACT-R simulations. 3 | """ 4 | 5 | import warnings 6 | import simpy 7 | 8 | from pyactr import utilities, vision 9 | 10 | try: 11 | import tkinter as tk 12 | except ImportError: 13 | GUI = False 14 | else: 15 | GUI = True 16 | 17 | if GUI: 18 | import threading 19 | import queue 20 | import time 21 | 22 | Event = utilities.Event 23 | 24 | class Simulation: 25 | """ 26 | ACT-R simulations. 
27 | """ 28 | 29 | _UNKNOWN = utilities._UNKNOWN 30 | 31 | def __init__(self, environment, realtime, trace, gui, buffers, used_productions, initial_time=0, environment_process=None, **kwargs): 32 | 33 | if gui: 34 | if not environment: 35 | # GUI requires an environment 36 | warnings.warn("Simulation GUI requested but no environment was set.") 37 | warnings.warn("Simulation GUI is set to False.") 38 | if not GUI: 39 | # tkinter import error 40 | warnings.warn("Simulation cannot start a new window because tkinter is not installed. This does not affect ACT-R models in any way, but you will see no separate window for environment. If you want to change that, install tkinter.") 41 | warnings.warn("Simulation GUI is set to False.") 42 | 43 | self.gui = environment and gui and GUI 44 | 45 | self.__simulation = simpy.Environment(initial_time=round(initial_time, 4)) 46 | 47 | self.__env = environment 48 | if self.__env: 49 | self.__env.gui = gui and GUI #set the GUI of the environment in the same way as this one; it is used so that Environment prints its output directly in simpy simulation 50 | 51 | self.__realtime = realtime 52 | 53 | if not self.gui and realtime: 54 | self.__simulation = simpy.RealtimeEnvironment() 55 | 56 | self.__trace = trace 57 | 58 | self.__dict_extra_proc = {key: None for key in buffers} 59 | 60 | self.__buffers = buffers 61 | 62 | self.__pr = used_productions 63 | 64 | self.__simulation.process(self.__procprocessGenerator__()) 65 | 66 | self.__interruptibles = {} #interruptible processes 67 | 68 | self.__dict_extra_proc_activate = {} 69 | 70 | for each in self.__dict_extra_proc: 71 | if each != self.__pr._PROCEDURAL: 72 | self.__dict_extra_proc[each] = self.__simulation.process(self.__extraprocessGenerator__(each)) #create simulation processes for all buffers, store them in dict_extra_proc 73 | self.__dict_extra_proc_activate[each] = self.__simulation.event() #create simulation events for all buffers that control simulation flow (they work as 
locks) 74 | 75 | self.__proc_activate = self.__simulation.event() #special event (lock) for procedural module 76 | 77 | self.__procs_started = [] #list of processes that are started as a result of production rules 78 | 79 | #activate environment process, if environment present 80 | if self.__env: 81 | 82 | self.__proc_environment = self.__simulation.process(self.__envGenerator__(ep=environment_process, **kwargs)) 83 | self.__environment_activate = self.__simulation.event() 84 | 85 | self.__last_event = None #used when stepping thru simulation 86 | 87 | #here below -- simulation values, accessible by user 88 | self.current_event = None 89 | self.now = self.__simulation.now 90 | 91 | def __activate__(self, event): 92 | """ 93 | Triggers proc_activate, needed to activate procedural process. 94 | """ 95 | if event.action != self.__pr._UNKNOWN and event.proc != self.__pr._PROCEDURAL: 96 | if not self.__proc_activate.triggered: 97 | self.__proc_activate.succeed() 98 | 99 | def __envGenerator__(self, ep, **kwargs): 100 | """ 101 | Creates simulation process for process in environment. 
102 | """ 103 | generator = ep(**kwargs) 104 | event = next(generator) 105 | while True: 106 | pro = self.__simulation.process(self.__envprocess__(event)) 107 | yield pro | self.__environment_activate 108 | if self.__environment_activate.triggered: 109 | self.__environment_activate = self.__simulation.event() 110 | pro.interrupt() 111 | try: 112 | event = next(generator) 113 | except StopIteration: 114 | break 115 | #this part below ensures automatic buffering which proceeds independently of PROCEDURAL 116 | for name in self.__buffers: 117 | if isinstance(self.__buffers[name], vision.VisualLocation) and self.__buffers[name].environment == self.__env: 118 | proc = (name, self.__pr.automatic_search(name, self.__buffers[name], list(self.__env.stimulus.values()), self.__simulation.now)) 119 | self.__procs_started.append(proc) 120 | elif isinstance(self.__buffers[name], vision.Visual) and self.__buffers[name].environment == self.__env and self.__buffers[name].attend_automatic: 121 | try: 122 | cf = tuple(self.__buffers[name].current_focus) 123 | except AttributeError: 124 | pass 125 | else: 126 | proc = (name, self.__pr.automatic_buffering(name, self.__buffers[name], list(self.__env.stimulus.values()), self.__simulation.now)) 127 | self.__procs_started.append(proc) 128 | else: 129 | continue 130 | if not self.__dict_extra_proc_activate[proc[0]].triggered: 131 | self.__interruptibles[proc[0]] = proc[1] #add new process interruptibles if the process can be interrupted according to ACT-R 132 | self.__dict_extra_proc_activate[proc[0]].succeed() #activate modules that are used if not active 133 | else: 134 | if proc[1] != self.__interruptibles[proc[0]]: 135 | self.__interruptibles[proc[0]] = proc[1] 136 | self.__dict_extra_proc[proc[0]].interrupt() #otherwise, interrupt them 137 | 138 | def __envprocess__(self, event): 139 | """ 140 | Runs local environment process. 
141 | """ 142 | try: 143 | yield self.__simulation.timeout(event.time-self.__simulation.now) 144 | except simpy.Interrupt: 145 | pass 146 | else: 147 | self.__printenv__(event) 148 | finally: 149 | self.__activate__(event) 150 | 151 | def __extraprocessGenerator__(self, name): 152 | """ 153 | Creates simulation process for other rules. 154 | """ 155 | while True: 156 | try: 157 | _, proc = next(filter(lambda x: x[0] == name, self.__procs_started)) 158 | except StopIteration: 159 | if name in self.__interruptibles: 160 | self.__interruptibles.pop(name) #remove this process from interruptibles since it's finished 161 | yield self.__dict_extra_proc_activate[name] 162 | self.__dict_extra_proc_activate[name] = self.__simulation.event() 163 | else: 164 | self.__procs_started.remove((name, proc)) 165 | if not self.__dict_extra_proc_activate[name].triggered: 166 | self.__dict_extra_proc_activate[name].succeed() #activate modules that were used 167 | pro = self.__simulation.process(self.__localprocess__(name, proc)) 168 | 169 | try: 170 | cont = yield pro 171 | except simpy.Interrupt: 172 | if not pro.triggered: 173 | warnings.warn(f"Process in {name} interrupted") 174 | pro.interrupt() #interrupt process 175 | 176 | #if first extra process is followed by another process (returned as cont), do what follows; used only for motor and visual 177 | else: 178 | if cont: 179 | pro = self.__simulation.process(self.__localprocess__(name, cont)) 180 | try: 181 | yield pro 182 | except simpy.Interrupt: 183 | pass 184 | 185 | def __localprocess__(self, name, generator): 186 | """ 187 | Triggers local process. name is the name of module. generator must only yield Events. 
188 | """ 189 | while True: 190 | try: 191 | event = next(generator) 192 | except StopIteration: 193 | return 194 | if not isinstance(event, Event): 195 | return event 196 | try: 197 | yield self.__simulation.timeout(event.time-round(self.__simulation.now, 4)) #a hack -- rounded because otherwise there was a very tiny negative delay in some cases 198 | except simpy.Interrupt: 199 | break 200 | else: 201 | self.__printevent__(event) 202 | self.__activate__(event) 203 | try: 204 | if self.__env.trigger and self.__pr.env_interaction.intersection(self.__env.trigger): 205 | self.__environment_activate.succeed(value=(self.__env.trigger, self.__pr.env_interaction)) 206 | self.__pr.env_interaction = set() 207 | except AttributeError: 208 | pass 209 | 210 | def __printevent__(self, event): 211 | """ 212 | Stores current event in self.current_event and prints event. 213 | """ 214 | if event.action != self.__pr._UNKNOWN: 215 | self.current_event = event 216 | if self.__trace and not self.gui: 217 | print(event[0:3]) 218 | 219 | def __printenv__(self, event): 220 | """ 221 | Prints environment event. 222 | """ 223 | if event.action != self.__pr._UNKNOWN: 224 | self.current_event = event 225 | 226 | def __procprocessGenerator__(self): 227 | """ 228 | Creates simulation process for procedural rules. 
229 | """ 230 | pro = self.__simulation.process(self.__localprocess__(self.__pr._PROCEDURAL, self.__pr.procedural_process(self.__simulation.now))) #create procedural process 231 | self.__procs_started = yield pro #run the process, keep its return value 232 | while True: 233 | try: 234 | self.__procs_started.remove(self.__pr._PROCEDURAL) 235 | except ValueError: 236 | yield self.__proc_activate #wait for proc_activate 237 | else: 238 | for proc in self.__procs_started: 239 | name = proc[0] 240 | if not self.__dict_extra_proc_activate[name].triggered: 241 | if proc[1].__name__ in self.__pr._INTERRUPTIBLE: 242 | self.__interruptibles[name] = proc[1] #add new process interruptibles if the process can be interrupted according to ACT-R 243 | self.__dict_extra_proc_activate[name].succeed() #activate modules that were used if not active 244 | else: 245 | if name in self.__interruptibles and proc[1] != self.__interruptibles[name]: 246 | self.__interruptibles[name] = proc[1] 247 | self.__dict_extra_proc[name].interrupt() #otherwise, interrupt them 248 | for _ in range(5): 249 | yield self.__simulation.timeout(0) #move procedural process to the bottom; right now, this is a hack - it yields 0 timeout five times, so other processes get enough cycles to start etc. 250 | pro = self.__simulation.process(self.__localprocess__(self.__pr._PROCEDURAL, self.__pr.procedural_process(self.__simulation.now))) 251 | self.__procs_started = yield pro 252 | self.__proc_activate = self.__simulation.event() #start the event 253 | 254 | def run(self, max_time=1): 255 | """ 256 | Run simulation for the number of seconds specified in max_time. 257 | """ 258 | if not self.gui: 259 | self.__simulation.run(max_time) 260 | else: 261 | self.__runGUI__() 262 | if self.__simulation.peek() == float("inf"): 263 | self.__pr.compile_rules() #at the end of the simulation, run compilation (the last two rules are not yet compiled) 264 | 265 | def show_time(self): 266 | """ 267 | Show current time in simulation. 
268 |         """
269 |         try:
270 |             t = self.__simulation.now
271 |         except AttributeError:
272 |             raise AttributeError("No simulation is running")
273 |         else:
274 |             return t
275 | 
276 |     def step(self):
277 |         """
278 |         Make one step through simulation.
279 |         """
280 |         while True:
281 |             self.__simulation.step()
282 |             if self.current_event and self.current_event.action != self._UNKNOWN and self.current_event != self.__last_event:
283 |                 self.__last_event = self.current_event
284 |                 break
285 |         if self.__simulation.peek() == float("inf"):
286 |             self.__pr.compile_rules() #at the end of the simulation, run compilation (the last two rules are not yet compiled)
287 | 
288 | 
289 |     def steps(self, count):
290 |         """
291 |         Make one or more steps through simulation. The number of steps is given in count.
292 |         """
293 |         count = int(count)
294 |         assert count > 0, "the 'count' argument in 'steps' must be a positive number"
295 |         for _ in range(count):
296 |             self.step()
297 | 
298 |     def __runGUI__(self):
299 |         """
300 |         Simulation run using GUI for environment.
301 |         """
302 | 
303 |         self.__root = tk.Tk()
304 |         self.__root.wm_title("Environment")
305 | 
306 |         # Create the queue
307 |         self.__queue = queue.Queue( )
308 | 
309 |         # Set up the GUI part
310 |         self.__environmentGUI = GuiPart(self.__env, self.__root, self.__queue, self.__endGui__)
311 | 
312 |         # Set up the thread to do asynchronous I/O -- taken from Python cookbook
313 |         self.__running = True
314 |         self.__thread1 = threading.Thread(target=self.__workerThread1__)
315 |         self.__thread1.start( )
316 | 
317 |         # Start the periodic call in the GUI to check if the queue contains
318 |         # anything
319 |         self.__periodicCall__( )
320 |         self.__root.mainloop( )
321 |         self.__running = False
322 | 
323 |     def __periodicCall__(self):
324 |         """
325 |         Check every 10 ms if there is something new in the queue.
326 |         """
327 |         self.__environmentGUI.processIncoming( )
328 |         if not self.__running:
329 |             # This is a brutal stop of the system. 
Should more cleanup take place? 330 | import sys 331 | sys.exit(1) 332 | self.__root.after(10, self.__periodicCall__) 333 | 334 | def __workerThread1__(self): 335 | """ 336 | Function handling the asynchronous trace output. 337 | """ 338 | self.__last_event = None 339 | old_time = 0 340 | old_stimulus = None 341 | old_focus = None 342 | while self.__running: 343 | try: 344 | self.__simulation.step() 345 | except simpy.core.EmptySchedule: 346 | self.__endGui__() 347 | if self.current_event and self.__last_event != self.current_event: 348 | self.__last_event = self.current_event 349 | if self.__realtime: 350 | time.sleep(self.current_event.time - old_time) 351 | old_time = self.current_event.time 352 | if self.__trace: 353 | if self.current_event.proc != utilities._ENV: 354 | print(self.current_event[0:3]) 355 | if self.__env.stimulus and old_stimulus != self.__env.stimulus: 356 | old_stimulus = self.__env.stimulus 357 | self.__queue.put({"stim": self.__env.stimulus}) 358 | if self.__env.current_focus and old_focus != self.__env.current_focus: 359 | old_focus = tuple(self.__env.current_focus) 360 | self.__queue.put({"focus": self.__env.current_focus}) 361 | 362 | 363 | def __endGui__(self): 364 | self.__running = False 365 | 366 | 367 | class GuiPart: 368 | """ 369 | GUI part is used to run GUI on top of ACT-R model. It is used for environment simulations. 370 | """ 371 | 372 | def __init__(self, env, master, queue, endCommand): 373 | self.queue = queue 374 | 375 | self.canvas_id = [] 376 | # Set up the GUI 377 | self.canvas = tk.Canvas(master, width=env.size[0], height=env.size[1], bg="white") 378 | self.canvas.pack() 379 | 380 | def processIncoming(self): 381 | """ 382 | Handle all messages currently in the queue, if any. 
383 | """ 384 | while self.queue.qsize( ): 385 | try: 386 | stimulus = None 387 | focus = None 388 | element = self.queue.get(0) 389 | try: 390 | stimulus = element["stim"] 391 | except KeyError: 392 | focus = element["focus"] 393 | if stimulus: 394 | for elem in self.canvas.find_withtag("t"): 395 | self.canvas.delete(elem) 396 | for each in stimulus: 397 | try: 398 | position = stimulus[each]['position'] 399 | except KeyError: 400 | raise utilities.ACTRError("One of your stimuli for environment does not have a defined position; the element cannot be printed in environment; stimuli should look like this: [{'stimulus1-0time': {'text': 'hi', 'position': (0, 0)}, 'stimulus2-0time': {'text': 'you', 'position': (10, 10)}}, {'stimulus3-latertime': {'text': 'new', 'position': (0, 0)}}]") 401 | canvas_id = self.canvas.create_text(position, **{key: stimulus[each][key] for key in stimulus[each] if key != 'position' and key != 'vis_delay'}) 402 | try: 403 | self.canvas.itemconfig(canvas_id, text=stimulus[each]['text'], tags="t") 404 | except KeyError: 405 | raise utilities.ACTRError("One of your stimuli for environment does not have a text; the element cannot be printed in environment; stimuli should look like this: [{'stimulus1-0time': {'text': 'hi', 'position': (0, 0)}, 'stimulus2-0time': {'text': 'you', 'position': (10, 10)}}, {'stimulus3-latertime': {'text': 'new', 'position': (0, 0)}}]") 406 | if focus: 407 | for elem in self.canvas.find_withtag("foc"): 408 | self.canvas.delete(elem) 409 | self.canvas.create_oval(focus[0]-4, focus[1]-4, focus[0]+4, focus[1]+4, outline="red", tags="foc") 410 | 411 | except queue.Empty: 412 | # just on general principles, although we don't 413 | # expect this branch to be taken in this case 414 | pass 415 | -------------------------------------------------------------------------------- /pyactr/tests/__init__.py: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/jakdot/pyactr/7f9fb305208ae7cf65b360aed42ae245d9a63567/pyactr/tests/__init__.py -------------------------------------------------------------------------------- /pyactr/vision.py: -------------------------------------------------------------------------------- 1 | """ 2 | Vision module. Basic visual search and visual attention. 3 | """ 4 | 5 | import collections 6 | 7 | from pyactr import buffers, chunks, utilities 8 | from pyactr.utilities import ACTRError 9 | 10 | #TODO 11 | #setting ERROR/FREE should work even in buffers that can be interrupted -- check it does 12 | #encoding visual info should proceed silently, but currently we need to go through a loop in production rules to wait for a change in environment 13 | 14 | class VisualLocation(buffers.Buffer): 15 | """ 16 | Visual location buffer. This buffer sees positions of objects in the environment. 17 | """ 18 | 19 | def __init__(self, environment, default_harvest=None, finst=4): 20 | buffers.Buffer.__init__(self, default_harvest, None) 21 | self.environment = environment 22 | self.recent = collections.deque() 23 | self.finst = finst 24 | 25 | @property 26 | def finst(self): 27 | """ 28 | Finst - how many recently attended stimuli are 'remembered' by the buffer and can be excluded from visual search. 29 | """ 30 | return self.__finst 31 | 32 | @finst.setter 33 | def finst(self, value): 34 | if value >= 0: 35 | self.__finst = value 36 | else: 37 | raise ValueError('Finst in the visuallocation buffer must be >= 0') 38 | 39 | @property 40 | def default_harvest(self): 41 | """ 42 | Default harvest of visuallocation buffer. 43 | """ 44 | return self.dm 45 | 46 | @default_harvest.setter 47 | def default_harvest(self, value): 48 | try: 49 | self.dm = value 50 | except ValueError: 51 | raise ACTRError('The default harvest set in the visuallocation buffer is not a possible declarative memory') 52 | 53 | def add(self, elem, found_stim, time=0, harvest=None): 54 | """ 55 | Clears the current buffer (into a memory) and adds a new chunk. Decl. 
memory is either specified as default_harvest, when VisualLocation is initialized, or it can be specified as the harvest argument. 56 | """ 57 | self.clear(time, harvest) 58 | 59 | super().add(elem) 60 | 61 | if self.finst and found_stim: 62 | self.recent.append(found_stim) 63 | if self.finst < len(self.recent): 64 | self.recent.popleft() 65 | 66 | def modify(self, otherchunk, found_stim, actrvariables=None): 67 | """ 68 | Modifies the chunk in the VisualLocation buffer according to otherchunk. found_stim keeps information about the actual stimulus and is used to update the queue of recently visited chunks. 69 | """ 70 | 71 | super().modify(otherchunk, actrvariables) 72 | 73 | if self.finst and found_stim: 74 | self.recent.append(found_stim) 75 | if self.finst < len(self.recent): 76 | self.recent.popleft() 77 | 78 | 79 | def clear(self, time=0, harvest=None): 80 | """ 81 | Clears the buffer and adds the cleared chunk to decl. memory. Decl. memory is either specified as default_harvest, when VisualLocation is initialized, or it can be specified here as harvest. 82 | """ 83 | if harvest is not None: 84 | if self._data: 85 | harvest.add(self._data.pop(), time) 86 | else: 87 | if self._data: 88 | self.dm.add(self._data.pop(), time) 89 | 90 | def find(self, otherchunk, actrvariables=None, extra_tests=None): 91 | """ 92 | Search the screen for a chunk matching otherchunk; return the found chunk together with the corresponding stimulus. 
93 | """ 94 | if extra_tests == None: 95 | extra_tests = {} 96 | if actrvariables == None: 97 | actrvariables = {} 98 | try: 99 | mod_attr_val = {x[0]: utilities.check_bound_vars(actrvariables, x[1], negative_impossible=False) for x in otherchunk.removeunused()} 100 | except utilities.ACTRError as arg: 101 | raise utilities.ACTRError(f"The chunk '{otherchunk}' is not defined correctly; {arg}") 102 | chunk_used_for_search = chunks.Chunk(utilities.VISUALLOCATION, **mod_attr_val) 103 | 104 | found = None 105 | found_stim = None 106 | closest = float("inf") 107 | x_closest = float("inf") 108 | y_closest = float("inf") 109 | current_x = None 110 | current_y = None 111 | for each in self.environment.stimulus: 112 | 113 | #extra test applied first 114 | try: 115 | if extra_tests["attended"] == False or extra_tests["attended"] == 'False': 116 | if self.finst and self.environment.stimulus[each] in self.recent: 117 | continue 118 | 119 | else: 120 | if self.finst and self.environment.stimulus[each] not in self.recent: 121 | continue 122 | except KeyError: 123 | pass 124 | 125 | #check value in text; in principle, you can search based on any value, so this is more powerful than actual visual search 126 | if chunk_used_for_search.value != chunk_used_for_search.EmptyValue() and chunk_used_for_search.value.values != self.environment.stimulus[each]["text"]: 127 | continue 128 | 129 | position = (int(self.environment.stimulus[each]['position'][0]), int(self.environment.stimulus[each]['position'][1])) 130 | 131 | #check absolute position; exception on AttributeError is to avoid the case in which the slot has empty value (in that case, the attribute "values" is undefined) 132 | try: 133 | if chunk_used_for_search.screen_x.values and int(chunk_used_for_search.screen_x.values) != position[0]: 134 | continue 135 | except (TypeError, ValueError, AttributeError): 136 | pass 137 | try: 138 | if chunk_used_for_search.screen_y.values and int(chunk_used_for_search.screen_y.values) != 
position[1]: 139 | continue 140 | except (TypeError, ValueError, AttributeError): 141 | pass 142 | 143 | #check on x and y relative positions 144 | try: 145 | if chunk_used_for_search.screen_x.values[0] == utilities.VISIONSMALLER and int(chunk_used_for_search.screen_x.values[1:]) <= position[0]: 146 | continue 147 | elif chunk_used_for_search.screen_x.values[0] == utilities.VISIONGREATER and int(chunk_used_for_search.screen_x.values[1:]) >= position[0]: 148 | continue 149 | except (TypeError, IndexError, AttributeError): 150 | pass 151 | 152 | try: 153 | if chunk_used_for_search.screen_y.values[0] == utilities.VISIONSMALLER and int(chunk_used_for_search.screen_y.values[1:]) <= position[1]: 154 | continue 155 | elif chunk_used_for_search.screen_y.values[0] == utilities.VISIONGREATER and int(chunk_used_for_search.screen_y.values[1:]) >= position[1]: 156 | continue 157 | except (TypeError, IndexError, AttributeError): 158 | pass 159 | 160 | #check on x and y absolute positions 161 | try: 162 | if chunk_used_for_search.screen_x.values == utilities.VISIONLOWEST and current_x != None and position[0] > current_x: 163 | continue 164 | elif chunk_used_for_search.screen_x.values == utilities.VISIONHIGHEST and current_x != None and position[0] < current_x: 165 | continue 166 | except (TypeError, AttributeError): 167 | pass 168 | 169 | try: 170 | if chunk_used_for_search.screen_y.values == utilities.VISIONLOWEST and current_y != None and position[1] > current_y: 171 | continue 172 | elif chunk_used_for_search.screen_y.values == utilities.VISIONHIGHEST and current_y != None and position[1] < current_y: 173 | continue 174 | except (TypeError, AttributeError): 175 | pass 176 | 177 | #check on closest 178 | try: 179 | if (chunk_used_for_search.screen_x.values == utilities.VISIONCLOSEST or chunk_used_for_search.screen_y.values == utilities.VISIONCLOSEST) and utilities.calculate_pythagorean_distance(self.environment.current_focus, position) > closest: 180 | continue 181 | except 
(TypeError, AttributeError): 182 | pass 183 | 184 | #check on onewayclosest 185 | try: 186 | if (chunk_used_for_search.screen_x.values == utilities.VISIONONEWAYCLOSEST) and utilities.calculate_onedimensional_distance(self.environment.current_focus, position, horizontal=True) > x_closest: 187 | continue 188 | except (TypeError, AttributeError): 189 | pass 190 | 191 | try: 192 | if (chunk_used_for_search.screen_y.values == utilities.VISIONONEWAYCLOSEST) and utilities.calculate_onedimensional_distance(self.environment.current_focus, position, horizontal=False) > y_closest: 193 | continue 194 | except (TypeError, AttributeError): 195 | pass 196 | 197 | found_stim = self.environment.stimulus[each] 198 | visible_chunk = chunk_from_stimulus(found_stim, "visual_location", position=False) 199 | 200 | if visible_chunk <= chunk_used_for_search: 201 | found = chunk_from_stimulus(found_stim, "visual_location", position=True) 202 | current_x = position[0] 203 | current_y = position[1] 204 | closest = utilities.calculate_pythagorean_distance(self.environment.current_focus, position) 205 | x_closest = utilities.calculate_onedimensional_distance(self.environment.current_focus, position, horizontal=True) 206 | y_closest = utilities.calculate_onedimensional_distance(self.environment.current_focus, position, horizontal=False) 207 | 208 | return found, found_stim 209 | 210 | def automatic_search(self, stim): 211 | """ 212 | Automatically search for a new stim in environment. 
213 | """ 214 | new_chunk = None 215 | found = None 216 | closest = float("inf") 217 | for st in stim: 218 | if st not in self.recent: 219 | position = st['position'] 220 | #check on closest 221 | try: 222 | if utilities.calculate_pythagorean_distance(self.environment.current_focus, position) > closest: 223 | continue 224 | except TypeError: 225 | pass 226 | 227 | closest = utilities.calculate_pythagorean_distance(self.environment.current_focus, position) 228 | 229 | new_chunk = chunk_from_stimulus(st, "visual_location") 230 | found = st 231 | 232 | return new_chunk, found 233 | 234 | def test(self, state, inquiry): 235 | """ 236 | Is current state busy/free/error? 237 | """ 238 | return getattr(self, state) == inquiry 239 | 240 | class Visual(buffers.Buffer): 241 | """ 242 | Visual buffer. This sees objects in the environment. 243 | """ 244 | 245 | def __init__(self, environment, default_harvest=None): 246 | self.environment = environment 247 | buffers.Buffer.__init__(self, default_harvest, None) 248 | self.current_focus = self.environment.current_focus 249 | self.state = self._FREE 250 | self.preparation = self._FREE 251 | self.processor = self._FREE 252 | self.execution = self._FREE 253 | self.autoattending = self._FREE 254 | self.attend_automatic = True #the current focus automatically attends 255 | self.last_mvt = 0 256 | 257 | #parameters 258 | self.model_parameters = {} 259 | 260 | @property 261 | def default_harvest(self): 262 | """ 263 | Default harvest of visual buffer. 264 | """ 265 | return self.dm 266 | 267 | @default_harvest.setter 268 | def default_harvest(self, value): 269 | try: 270 | self.dm = value 271 | except ValueError: 272 | raise ACTRError('The default harvest set in the visual buffer is not a possible declarative memory') 273 | 274 | 275 | def add(self, elem, time=0, harvest=None): 276 | """ 277 | Clear current buffer (into a memory) and adds a new chunk. Decl. 
memory is either specified as default_harvest, when Visual is initialized, or it can be specified as the argument of harvest. 278 | """ 279 | self.clear(time, harvest) 280 | super().add(elem) 281 | 282 | def clear(self, time=0, harvest=None): 283 | """ 284 | Clear buffer, adds cleared chunk into decl. memory. Decl. memory is either specified as default_harvest, when Visual is initialized, or it can be specified here as harvest. 285 | """ 286 | if harvest != None: 287 | if self._data: 288 | harvest.add(self._data.pop(), time) 289 | else: 290 | if self._data: 291 | self.dm.add(self._data.pop(), time) 292 | 293 | def automatic_buffering(self, stim, model_parameters): 294 | """ 295 | Buffer visual object automatically. 296 | """ 297 | model_parameters = model_parameters.copy() 298 | model_parameters.update(self.model_parameters) 299 | 300 | new_chunk = chunk_from_stimulus(stim, "visual", position=False) 301 | 302 | if new_chunk: 303 | angle_distance = 2*utilities.calculate_visual_angle(self.environment.current_focus, (stim['position'][0], stim['position'][1]), self.environment.size, self.environment.simulated_screen_size, self.environment.viewing_distance) #the stimulus has to be within 2 degrees from the focus (foveal region) 304 | encoding_time = utilities.calculate_delay_visual_attention(angle_distance=angle_distance, K=model_parameters["eye_mvt_scaling_parameter"], k=model_parameters['eye_mvt_angle_parameter'], emma_noise=model_parameters['emma_noise'], vis_delay=stim.get('vis_delay')) 305 | return new_chunk, encoding_time 306 | 307 | def modify(self, otherchunk, actrvariables=None): 308 | """ 309 | Modify the chunk in visual buffer according to the info in otherchunk. 310 | """ 311 | super().modify(otherchunk, actrvariables) 312 | 313 | def stop_automatic_buffering(self): 314 | """ 315 | Stop automatic buffering of the visual buffer. 
316 | """ 317 | self.attend_automatic = False 318 | 319 | def shift(self, otherchunk, harvest=None, actrvariables=None, model_parameters=None): 320 | """ 321 | Return a chunk, time needed to attend and shift eye focus to the chunk, and the landing site of eye mvt. 322 | """ 323 | if model_parameters == None: 324 | model_parameters = {} 325 | model_parameters = model_parameters.copy() 326 | model_parameters.update(self.model_parameters) 327 | 328 | if actrvariables == None: 329 | actrvariables = {} 330 | try: 331 | mod_attr_val = {x[0]: utilities.check_bound_vars(actrvariables, x[1]) for x in otherchunk.removeunused()} 332 | except ACTRError as arg: 333 | raise ACTRError(f"Shifting towards the chunk '{otherchunk}' is impossible; {arg}") 334 | 335 | vis_delay = None 336 | 337 | for each in self.environment.stimulus: 338 | try: 339 | if self.environment.stimulus[each]['position'] == (float(mod_attr_val['screen_pos'].values.screen_x.values), float(mod_attr_val['screen_pos'].values.screen_y.values)): 340 | vis_delay = self.environment.stimulus[each].get('vis_delay') 341 | stim = self.environment.stimulus[each].copy() 342 | stim.update({'cmd': mod_attr_val['cmd']}) 343 | except (AttributeError, KeyError): 344 | raise ACTRError("The chunk in the visual buffer is not defined correctly. 
It is not possible to move attention.") 345 | 346 | new_chunk = chunk_from_stimulus(stim, "visual", position=False) #creates new chunk 347 | 348 | if model_parameters['emma']: 349 | angle_distance = utilities.calculate_visual_angle(self.environment.current_focus, [float(new_chunk.screen_pos.values.screen_x.values), float(new_chunk.screen_pos.values.screen_y.values)], self.environment.size, self.environment.simulated_screen_size, self.environment.viewing_distance) 350 | encoding_time = utilities.calculate_delay_visual_attention(angle_distance=angle_distance, K=model_parameters["eye_mvt_scaling_parameter"], k=model_parameters['eye_mvt_angle_parameter'], emma_noise=model_parameters['emma_noise'], vis_delay=vis_delay) 351 | preparation_time = utilities.calculate_preparation_time(emma_noise=model_parameters['emma_noise']) 352 | execution_time = utilities.calculate_execution_time(angle_distance, emma_noise=model_parameters['emma_noise']) 353 | landing_site = utilities.calculate_landing_site([float(new_chunk.screen_pos.values.screen_x.values), float(new_chunk.screen_pos.values.screen_y.values)], angle_distance, emma_landing_site_noise=model_parameters['emma_landing_site_noise']) 354 | elif not model_parameters['emma']: 355 | encoding_time = 0.085 356 | preparation_time = 0 357 | execution_time = 0.085 358 | landing_site = (float(new_chunk.screen_pos.values.screen_x.values), float(new_chunk.screen_pos.values.screen_y.values)) 359 | return new_chunk, (encoding_time, preparation_time, execution_time), landing_site 360 | 361 | def move_eye(self, position): 362 | """ 363 | Move eyes in environment to a new position. 364 | """ 365 | self.environment.current_focus = [int(position[0]), int(position[1])] 366 | self.current_focus = self.environment.current_focus 367 | 368 | def test(self, state, inquiry): 369 | """ 370 | Is current state busy/free/error? 
371 | """ 372 | return getattr(self, state) == inquiry 373 | 374 | def chunk_from_stimulus(stimulus, buffer_name, position=True): 375 | """ 376 | Given a stimulus dict from the environment, a buffer name, and whether to encode position, returns a chunk to be used in that buffer. 377 | """ 378 | # extract a possible extended chunk type from the stimulus 379 | # defaults to utilities.VISUALLOCATION/.VISUAL 380 | if buffer_name == "visual_location": 381 | stim_typename = stimulus.get(buffer_name + "_typename", utilities.VISUALLOCATION) 382 | elif buffer_name == "visual": 383 | stim_typename = stimulus.get(buffer_name + "_typename", utilities.VISUAL) 384 | else: 385 | raise ValueError("buffer_name must be either 'visual_location' or 'visual'") 386 | 387 | # a list of reserved values for control parameters, never encoded into the chunk 388 | stim_control = ['text', 'position', 'vis_delay', 'visual_location_typename', 'visual_typename', 'externally_visible'] 389 | 390 | # determining the values that will be visible in the chunk 391 | # by default, for visual location chunks this is just 'screen_x' and 'screen_y', but others are merged from stimulus['externally_visible'] 392 | # in contrast, for visual chunks, all non-control keys will be encoded regardless (with 'text' as 'value', see below) 393 | # be careful when adding to stimulus['externally_visible']: visual search checks if the stimulus chunk subsumes the search chunk... 394 | # ... 
so ALL externally visible features of a stimulus must be included in a search in order to fixate it 395 | 396 | if buffer_name == "visual_location": 397 | visible_features = [] 398 | try: 399 | visible_features += stimulus.get('externally_visible', []) 400 | except AttributeError: 401 | raise ValueError("stimulus['externally_visible'] should be a list of strings") 402 | else: 403 | visible_features = [key for key in stimulus if key not in stim_control] 404 | 405 | temp_dict = {key: stimulus[key] for key in stimulus if key in visible_features} 406 | if position: 407 | temp_dict.update({'screen_x': int(stimulus['position'][0]), 408 | 'screen_y': int(stimulus['position'][1])}) 409 | if buffer_name == "visual": 410 | location = chunk_from_stimulus(stimulus, "visual_location") 411 | temp_dict.update({'screen_pos': location}) 412 | temp_dict.update({'value': stimulus.get('text', '')}) 413 | visible_chunk = chunks.Chunk(stim_typename, **temp_dict) 414 | 415 | return visible_chunk 416 | -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | from setuptools import setup 2 | 3 | VERSION = '0.3.2' 4 | 5 | setup(name='pyactr', 6 | version=VERSION, 7 | description='ACT-R in Python', 8 | url='https://github.com/jakdot/pyactr', 9 | author='jakdot', 10 | author_email='j.dotlacil@gmail.com', 11 | packages=['pyactr', 'pyactr/tests'], 12 | license='GPL', 13 | install_requires=['numpy', 'simpy', 'pyparsing'], 14 | classifiers=['Programming Language :: Python :: 3', 'License :: OSI Approved :: GNU General Public License v3 (GPLv3)', 'Operating System :: OS Independent', 'Development Status :: 3 - Alpha', 'Topic :: Scientific/Engineering'], 15 | zip_safe=False) 16 | -------------------------------------------------------------------------------- /tutorials/forbook/code/ch2_agreement.py: -------------------------------------------------------------------------------- 
1 | """ 2 | A basic model that simulates subject-verb agreement. 3 | We abstract away from syntactic parsing, among other things. 4 | """ 5 | 6 | import pyactr as actr 7 | 8 | actr.chunktype("word", "phonology, meaning, category, number, synfunction") 9 | actr.chunktype("goal_lexeme", "task, category, number") 10 | 11 | carLexeme = actr.makechunk( 12 | nameofchunk="car", 13 | typename="word", 14 | phonology="/kar/", 15 | meaning="[[car]]", 16 | category="noun", 17 | number="sg", 18 | synfunction="subject") 19 | 20 | agreement = actr.ACTRModel() 21 | 22 | dm = agreement.decmem 23 | dm.add(carLexeme) 24 | 25 | agreement.goal.add(actr.chunkstring(string=""" 26 | isa goal_lexeme 27 | task agree 28 | category 'verb'""")) 29 | 30 | agreement.productionstring(name="retrieve", string=""" 31 | =g> 32 | isa goal_lexeme 33 | category 'verb' 34 | task agree 35 | ?retrieval> 36 | buffer empty 37 | ==> 38 | =g> 39 | isa goal_lexeme 40 | task trigger_agreement 41 | category 'verb' 42 | +retrieval> 43 | isa word 44 | category 'noun' 45 | synfunction 'subject' 46 | """) 47 | 48 | agreement.productionstring(name="agree", string=""" 49 | =g> 50 | isa goal_lexeme 51 | task trigger_agreement 52 | category 'verb' 53 | =retrieval> 54 | isa word 55 | category 'noun' 56 | synfunction 'subject' 57 | number =x 58 | ==> 59 | =g> 60 | isa goal_lexeme 61 | category 'verb' 62 | number =x 63 | task done 64 | """) 65 | 66 | agreement.productionstring(name="done", string=""" 67 | =g> 68 | isa goal_lexeme 69 | task done 70 | ==> 71 | ~g>""") 72 | 73 | if __name__ == "__main__": 74 | agreement_sim = agreement.simulation() 75 | agreement_sim.run() 76 | print("\nDeclarative memory at the end of the simulation:") 77 | print(dm) 78 | -------------------------------------------------------------------------------- /tutorials/forbook/code/ch2_context_free_grammar.py: -------------------------------------------------------------------------------- 1 | """ 2 | CFG for the a^n b^n language. 
3 | """ 4 | 5 | import pyactr as actr 6 | 7 | cfg = actr.ACTRModel() 8 | 9 | actr.chunktype("countOrder", "first, second") 10 | actr.chunktype("countFrom", ("start", "end", "count", "terminal")) 11 | 12 | dm = cfg.decmem 13 | dm.add(actr.chunkstring(string=""" 14 | isa countOrder 15 | first 1 16 | second 2 17 | """)) 18 | dm.add(actr.chunkstring(string=""" 19 | isa countOrder 20 | first 2 21 | second 3 22 | """)) 23 | dm.add(actr.chunkstring(string=""" 24 | isa countOrder 25 | first 3 26 | second 4 27 | """)) 28 | dm.add(actr.chunkstring(string=""" 29 | isa countOrder 30 | first 4 31 | second 5 32 | """)) 33 | 34 | cfg.goal.add(actr.chunkstring(string=""" 35 | isa countFrom 36 | start 1 37 | end 3 38 | terminal 'a' 39 | """)) 40 | 41 | cfg.productionstring(name="start", string=""" 42 | =g> 43 | isa countFrom 44 | start =x 45 | count None 46 | ==> 47 | =g> 48 | isa countFrom 49 | count =x 50 | +retrieval> 51 | isa countOrder 52 | first =x 53 | """) 54 | 55 | cfg.productionstring(name="increment", string=""" 56 | =g> 57 | isa countFrom 58 | count =x 59 | end ~=x 60 | =retrieval> 61 | isa countOrder 62 | first =x 63 | second =y 64 | ==> 65 | !g> 66 | show terminal 67 | =g> 68 | isa countFrom 69 | count =y 70 | +retrieval> 71 | isa countOrder 72 | first =y 73 | """) 74 | 75 | cfg.productionstring(name="restart counting", string=""" 76 | =g> 77 | isa countFrom 78 | count =x 79 | end =x 80 | terminal 'a' 81 | ==> 82 | +g> 83 | isa countFrom 84 | start 1 85 | end =x 86 | terminal 'b' 87 | """) 88 | 89 | if __name__ == "__main__": 90 | cfg_sim = cfg.simulation(trace=False) 91 | cfg_sim.run() 92 | -------------------------------------------------------------------------------- /tutorials/forbook/code/ch2_count.py: -------------------------------------------------------------------------------- 1 | """ 2 | An example of a model using retrieval and goal buffers. It corresponds to 3 | the simplest model in ACT-R tutorials, Unit 1, 'count'. 
4 | """ 5 | 6 | import pyactr as actr 7 | 8 | counting = actr.ACTRModel() 9 | 10 | #Each chunk type should be defined first. 11 | actr.chunktype("countOrder", ("first", "second")) 12 | #Chunk type is defined as (name, attributes) 13 | 14 | #Attributes are written as an iterable (above) or as a string (comma-separated): 15 | actr.chunktype("countOrder", "first, second") 16 | 17 | actr.chunktype("countFrom", ("start", "end", "count")) 18 | 19 | dm = counting.decmem #creates variable for declarative memory (easier to access) 20 | dm.add(actr.chunkstring(string=""" 21 | isa countOrder 22 | first 1 23 | second 2 24 | """)) 25 | dm.add(actr.chunkstring(string=""" 26 | isa countOrder 27 | first 2 28 | second 3 29 | """)) 30 | dm.add(actr.chunkstring(string=""" 31 | isa countOrder 32 | first 3 33 | second 4 34 | """)) 35 | dm.add(actr.chunkstring(string=""" 36 | isa countOrder 37 | first 4 38 | second 5 39 | """)) 40 | 41 | #creating goal buffer 42 | counting.goal.add(actr.chunkstring(string=""" 43 | isa countFrom 44 | start 2 45 | end 4 46 | """)) 47 | 48 | #production rules follow; using productionstring, they are similar to Lisp ACT-R 49 | 50 | counting.productionstring(name="start", string=""" 51 | =g> 52 | isa countFrom 53 | start =x 54 | count None 55 | ==> 56 | =g> 57 | isa countFrom 58 | count =x 59 | +retrieval> 60 | isa countOrder 61 | first =x 62 | """) 63 | 64 | counting.productionstring(name="increment", string=""" 65 | =g> 66 | isa countFrom 67 | count =x 68 | end ~=x 69 | =retrieval> 70 | isa countOrder 71 | first =x 72 | second =y 73 | ==> 74 | =g> 75 | isa countFrom 76 | count =y 77 | +retrieval> 78 | isa countOrder 79 | first =y 80 | """) 81 | 82 | counting.productionstring(name="stop", string=""" 83 | =g> 84 | isa countFrom 85 | count =x 86 | end =x 87 | ==> 88 | ~g> 89 | """) 90 | 91 | if __name__ == "__main__": 92 | counting_sim = counting.simulation() 93 | counting_sim.run() 94 | 95 | 
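The start/increment/stop production cycle in ch2_count.py above amounts to repeated successor lookups in declarative memory. The following plain-Python paraphrase is illustrative only (`count_order` and `count_from` are not part of the pyactr API); it makes the control flow of the three productions explicit:

```python
# Plain-Python paraphrase of the counting model's production cycle.
# count_order stands in for the countOrder chunks in declarative memory;
# each dictionary lookup corresponds to one retrieval request (+retrieval>).
count_order = {1: 2, 2: 3, 3: 4, 4: 5}

def count_from(start, end):
    """Return the numbers 'counted' from start to end, inclusive."""
    counted = []
    current = start                       # "start": the count slot is set to start
    while current != end:                 # "increment" fires while count != end
        counted.append(current)
        current = count_order[current]    # retrieve the successor chunk
    counted.append(current)               # "stop": count == end, goal is cleared
    return counted

print(count_from(2, 4))  # -> [2, 3, 4]
```

With the goal chunk used in the model (start 2, end 4), this yields [2, 3, 4], matching the sequence of retrievals visible in the simulation trace.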
-------------------------------------------------------------------------------- /tutorials/forbook/code/ch2_regular_grammar.py: -------------------------------------------------------------------------------- 1 | """ 2 | A basic model of grammar. 3 | """ 4 | 5 | import pyactr as actr 6 | 7 | regular_grammar = actr.ACTRModel() 8 | 9 | actr.chunktype("goal_chunk", "mother daughter1 daughter2 state") 10 | 11 | dm = regular_grammar.decmem 12 | 13 | regular_grammar.goal.add(actr.chunkstring(string=""" 14 | isa goal_chunk 15 | mother 'NP' 16 | state rule 17 | """)) 18 | 19 | regular_grammar.productionstring(name="NP ==> N NP", string=""" 20 | =g> 21 | isa goal_chunk 22 | mother 'NP' 23 | daughter1 None 24 | daughter2 None 25 | state rule 26 | ==> 27 | =g> 28 | isa goal_chunk 29 | daughter1 'N' 30 | daughter2 'NP' 31 | state show 32 | """) 33 | 34 | regular_grammar.productionstring(name="print N", string=""" 35 | =g> 36 | isa goal_chunk 37 | state show 38 | ==> 39 | !g> 40 | show daughter1 41 | =g> 42 | isa goal_chunk 43 | state rule 44 | """) 45 | 46 | regular_grammar.productionstring(name="get new mother", string=""" 47 | =g> 48 | isa goal_chunk 49 | daughter2 =x 50 | daughter2 ~None 51 | state rule 52 | ==> 53 | =g> 54 | isa goal_chunk 55 | mother =x 56 | daughter1 None 57 | daughter2 None 58 | """) 59 | 60 | regular_grammar_sim = regular_grammar.simulation(trace=False) 61 | regular_grammar_sim.run(2) 62 | -------------------------------------------------------------------------------- /tutorials/forbook/code/ch3_topdown_parser.py: -------------------------------------------------------------------------------- 1 | """ 2 | A simple top-down parser. 
3 | """ 4 | 5 | import pyactr as actr 6 | 7 | actr.chunktype("parsing_goal", "stack_top stack_bottom parsed_word task") 8 | actr.chunktype("sentence", "word1 word2 word3") 9 | actr.chunktype("word", "form, cat") 10 | 11 | parser = actr.ACTRModel() 12 | dm = parser.decmem 13 | g = parser.goal 14 | imaginal = parser.set_goal(name="imaginal", delay=0.2) 15 | 16 | dm.add(actr.chunkstring(string=""" 17 | isa word 18 | form 'Mary' 19 | cat 'ProperN' 20 | """)) 21 | dm.add(actr.chunkstring(string=""" 22 | isa word 23 | form 'Bill' 24 | cat 'ProperN' 25 | """)) 26 | dm.add(actr.chunkstring(string=""" 27 | isa word 28 | form 'likes' 29 | cat 'V' 30 | """)) 31 | 32 | g.add(actr.chunkstring(string=""" 33 | isa parsing_goal 34 | task parsing 35 | stack_top 'S' 36 | """)) 37 | imaginal.add(actr.chunkstring(string=""" 38 | isa sentence 39 | word1 'Mary' 40 | word2 'likes' 41 | word3 'Bill' 42 | """)) 43 | 44 | parser.productionstring(name="expand: S ==> NP VP", string=""" 45 | =g> 46 | isa parsing_goal 47 | task parsing 48 | stack_top 'S' 49 | ==> 50 | =g> 51 | isa parsing_goal 52 | stack_top 'NP' 53 | stack_bottom 'VP' 54 | """) 55 | 56 | parser.productionstring(name="expand: NP ==> ProperN", string=""" 57 | =g> 58 | isa parsing_goal 59 | task parsing 60 | stack_top 'NP' 61 | ==> 62 | =g> 63 | isa parsing_goal 64 | stack_top 'ProperN' 65 | """) 66 | 67 | parser.productionstring(name="expand: VP ==> V NP", string=""" 68 | =g> 69 | isa parsing_goal 70 | task parsing 71 | stack_top 'VP' 72 | ==> 73 | =g> 74 | isa parsing_goal 75 | stack_top 'V' 76 | stack_bottom 'NP' 77 | """) 78 | 79 | parser.productionstring(name="retrieve: ProperN", string=""" 80 | =g> 81 | isa parsing_goal 82 | task parsing 83 | stack_top 'ProperN' 84 | =imaginal> 85 | isa sentence 86 | word1 =w1 87 | ==> 88 | =g> 89 | isa parsing_goal 90 | task retrieving 91 | +retrieval> 92 | isa word 93 | form =w1 94 | """) 95 | 96 | parser.productionstring(name="retrieve: V", string=""" 97 | =g> 98 | isa parsing_goal 99 | 
task parsing 100 | stack_top 'V' 101 | =imaginal> 102 | isa sentence 103 | word1 =w1 104 | ==> 105 | =g> 106 | isa parsing_goal 107 | task retrieving 108 | +retrieval> 109 | isa word 110 | form =w1 111 | """) 112 | 113 | parser.productionstring(name="scan: word", string=""" 114 | =g> 115 | isa parsing_goal 116 | task retrieving 117 | stack_top =y 118 | stack_bottom =x 119 | =retrieval> 120 | isa word 121 | form =w1 122 | cat =y 123 | =imaginal> 124 | isa sentence 125 | word1 =w1 126 | word2 =w2 127 | word3 =w3 128 | ==> 129 | =g> 130 | isa parsing_goal 131 | task printing 132 | stack_top =x 133 | stack_bottom empty 134 | parsed_word =w1 135 | =imaginal> 136 | isa sentence 137 | word1 =w2 138 | word2 =w3 139 | word3 empty 140 | ~retrieval> 141 | """) 142 | 143 | parser.productionstring(name="print parsed word", string=""" 144 | =g> 145 | isa parsing_goal 146 | task printing 147 | =imaginal> 148 | isa sentence 149 | word1 ~empty 150 | ==> 151 | !g> 152 | show parsed_word 153 | =g> 154 | isa parsing_goal 155 | task parsing 156 | parsed_word None 157 | """) 158 | 159 | parser.productionstring(name="done", string=""" 160 | =g> 161 | isa parsing_goal 162 | task printing 163 | =imaginal> 164 | isa sentence 165 | word1 empty 166 | ==> 167 | =g> 168 | isa parsing_goal 169 | task done 170 | !g> 171 | show parsed_word 172 | ~imaginal> 173 | ~g> 174 | """) 175 | 176 | 177 | if __name__ == "__main__": 178 | parser_sim = parser.simulation() 179 | parser_sim.run() 180 | print("\nDeclarative memory at the end of the simulation:") 181 | print(dm) 182 | -------------------------------------------------------------------------------- /tutorials/forbook/code/ch4_leftcorner_parser.py: -------------------------------------------------------------------------------- 1 | """ 2 | A left-corner parser. 
3 | """ 4 | 5 | import pyactr as actr 6 | from ete3 import Tree 7 | 8 | environment = actr.Environment(focus_position=(320, 180)) 9 | 10 | actr.chunktype("parsing_goal", 11 | "task stack_top stack_bottom parsed_word right_frontier") 12 | actr.chunktype("parse_state", 13 | "node_cat mother daughter1 daughter2 lex_head") 14 | actr.chunktype("word", "form cat") 15 | 16 | parser = actr.ACTRModel(environment) 17 | dm = parser.decmem 18 | g = parser.goal 19 | imaginal = parser.set_goal(name="imaginal", delay=0) 20 | 21 | dm.add(actr.chunkstring(string=""" 22 | isa word 23 | form 'Mary' 24 | cat 'ProperN' 25 | """)) 26 | dm.add(actr.chunkstring(string=""" 27 | isa word 28 | form 'Bill' 29 | cat 'ProperN' 30 | """)) 31 | dm.add(actr.chunkstring(string=""" 32 | isa word 33 | form 'likes' 34 | cat 'V' 35 | """)) 36 | g.add(actr.chunkstring(string=""" 37 | isa parsing_goal 38 | task read_word 39 | stack_top 'S' 40 | right_frontier 'S' 41 | """)) 42 | 43 | 44 | parser.productionstring(name="press spacebar", string=""" 45 | =g> 46 | isa parsing_goal 47 | task read_word 48 | stack_top ~None 49 | ?manual> 50 | state free 51 | ==> 52 | +manual> 53 | isa _manual 54 | cmd 'press_key' 55 | key 'space' 56 | """) 57 | 58 | parser.productionstring(name="encode word", string=""" 59 | =g> 60 | isa parsing_goal 61 | task read_word 62 | =visual> 63 | isa _visual 64 | value =val 65 | ==> 66 | =g> 67 | isa parsing_goal 68 | task get_word_cat 69 | parsed_word =val 70 | ~visual> 71 | """) 72 | 73 | parser.productionstring(name="retrieve category", string=""" 74 | =g> 75 | isa parsing_goal 76 | task get_word_cat 77 | parsed_word =w 78 | ==> 79 | +retrieval> 80 | isa word 81 | form =w 82 | =g> 83 | isa parsing_goal 84 | task retrieving_word 85 | """) 86 | 87 | parser.productionstring(name="shift and project word", string=""" 88 | =g> 89 | isa parsing_goal 90 | task retrieving_word 91 | stack_top =t 92 | stack_bottom None 93 | =retrieval> 94 | isa word 95 | form =w 96 | cat =c 97 | ==> 98 | =g> 99 
| isa parsing_goal 100 | task parsing 101 | stack_top =c 102 | stack_bottom =t 103 | +imaginal> 104 | isa parse_state 105 | node_cat =c 106 | daughter1 =w 107 | ~retrieval> 108 | """) 109 | 110 | parser.productionstring(name="project: NP ==> ProperN", string=""" 111 | =g> 112 | isa parsing_goal 113 | stack_top 'ProperN' 114 | stack_bottom ~'NP' 115 | right_frontier =rf 116 | parsed_word =w 117 | ==> 118 | =g> 119 | isa parsing_goal 120 | stack_top 'NP' 121 | +imaginal> 122 | isa parse_state 123 | node_cat 'NP' 124 | daughter1 'ProperN' 125 | mother =rf 126 | lex_head =w 127 | """) 128 | 129 | parser.productionstring(name="project and complete: NP ==> ProperN", string=""" 130 | =g> 131 | isa parsing_goal 132 | stack_top 'ProperN' 133 | stack_bottom 'NP' 134 | right_frontier =rf 135 | parsed_word =w 136 | ==> 137 | =g> 138 | isa parsing_goal 139 | task read_word 140 | stack_top None 141 | stack_bottom None 142 | +imaginal> 143 | isa parse_state 144 | node_cat 'NP' 145 | daughter1 'ProperN' 146 | mother =rf 147 | lex_head =w 148 | """) 149 | 150 | parser.productionstring(name="project and complete: S ==> NP VP", string=""" 151 | =g> 152 | isa parsing_goal 153 | stack_top 'NP' 154 | stack_bottom 'S' 155 | ==> 156 | =g> 157 | isa parsing_goal 158 | task read_word 159 | stack_top 'VP' 160 | stack_bottom None 161 | right_frontier 'VP' 162 | +imaginal> 163 | isa parse_state 164 | node_cat 'S' 165 | daughter1 'NP' 166 | daughter2 'VP' 167 | """) 168 | 169 | parser.productionstring(name="project and complete: VP ==> V NP", string=""" 170 | =g> 171 | isa parsing_goal 172 | task parsing 173 | stack_top 'V' 174 | stack_bottom 'VP' 175 | ==> 176 | =g> 177 | isa parsing_goal 178 | task read_word 179 | stack_top 'NP' 180 | stack_bottom None 181 | +imaginal> 182 | isa parse_state 183 | node_cat 'VP' 184 | daughter1 'V' 185 | daughter2 'NP' 186 | """) 187 | 188 | parser.productionstring(name="finished", string=""" 189 | =g> 190 | isa parsing_goal 191 | task read_word 192 | stack_top 
None 193 | ==> 194 | ~g> 195 | ~imaginal> 196 | """) 197 | 198 | if __name__ == "__main__": 199 | stimuli = [{1: {'text': 'Mary', 'position': (320, 180)}}, 200 | {1: {'text': 'likes', 'position': (320, 180)}}, 201 | {1: {'text': 'Bill', 'position': (320, 180)}}] 202 | parser_sim = parser.simulation( 203 | realtime=True, 204 | gui=False, 205 | environment_process=environment.environment_process, 206 | stimuli=stimuli, 207 | triggers='space') 208 | parser_sim.run(1.1) 209 | 210 | sortedDM = sorted(([item[0], time] for item in dm.items()\ 211 | for time in item[1]),\ 212 | key=lambda item: item[1]) 213 | print("\nParse states in declarative memory at the end of the simulation", 214 | "\nordered by time of (re)activation:") 215 | for chunk in sortedDM: 216 | if chunk[0].typename == "parse_state": 217 | print(chunk[1], "\t", chunk[0]) 218 | print("\nWords in declarative memory at the end of the simulation", 219 | "\nordered by time of (re)activation:") 220 | for chunk in sortedDM: 221 | if chunk[0].typename == "word": 222 | print(chunk[1], "\t", chunk[0]) 223 | 224 | def final_tree(sortedDM): 225 | tree_list = [] 226 | parse_states = [chunk for chunk in sortedDM\ 227 | if chunk[0].typename == "parse_state" and\ 228 | chunk[0].daughter1 != None] 229 | words = set(str(chunk[0].form) for chunk in sortedDM\ 230 | if chunk[0].typename == "word") 231 | nodes = [chunk for chunk in parse_states 232 | if chunk[0].node_cat == "S"] 233 | while nodes: 234 | current_chunk = nodes.pop(0) 235 | current_node = str(current_chunk[0].node_cat) + " " +\ 236 | str(current_chunk[1]) 237 | current_tree = Tree(name=current_node) 238 | if current_chunk[0].daughter2 != None: 239 | child_categs = [current_chunk[0].daughter1,\ 240 | current_chunk[0].daughter2] 241 | else: 242 | child_categs = [current_chunk[0].daughter1] 243 | children = [] 244 | for cat in child_categs: 245 | if cat == 'NP': 246 | chunkFromCat = [chunk for chunk in parse_states\ 247 | if chunk[0].node_cat == cat and\ 248 | 
chunk[0].mother ==\ 249 | current_chunk[0].node_cat] 250 | if chunkFromCat: 251 | children += chunkFromCat 252 | current_child = str(chunkFromCat[-1][0].node_cat)\ 253 | + " " + str(chunkFromCat[-1][1]) 254 | current_tree.add_child(name=current_child) 255 | elif cat == 'ProperN': 256 | chunkFromCat = [chunk for chunk in parse_states if\ 257 | chunk[0].node_cat == cat and\ 258 | chunk[0].daughter1 ==\ 259 | current_chunk[0].lex_head] 260 | if chunkFromCat: 261 | children += chunkFromCat 262 | current_child = str(chunkFromCat[-1][0].node_cat)\ 263 | + " " + str(chunkFromCat[-1][1]) 264 | current_tree.add_child(name=current_child) 265 | elif cat in words: 266 | last_act_time = [chunk[1][-1] 267 | for chunk in dm.items()\ 268 | if chunk[0].typename == "word"\ 269 | and str(chunk[0].form) == cat] 270 | current_child = cat + " " + str(last_act_time[0]) 271 | current_tree.add_child(name=current_child) 272 | else: 273 | chunkFromCat = [chunk for chunk in parse_states\ 274 | if chunk[0].node_cat == cat] 275 | if chunkFromCat: 276 | children += chunkFromCat 277 | current_child = str(chunkFromCat[-1][0].node_cat)\ 278 | + " " + str(chunkFromCat[-1][1]) 279 | current_tree.add_child(name=current_child) 280 | tree_list.append(current_tree) 281 | nodes += children 282 | final_tree = tree_list[0] 283 | tree_list.remove(final_tree) 284 | while tree_list: 285 | leaves = final_tree.get_leaves() 286 | for leaf in leaves: 287 | subtree_list = [tree for tree in tree_list\ 288 | if tree.name == leaf.name] 289 | if subtree_list: 290 | subtree = subtree_list[0] 291 | tree_list.remove(subtree) 292 | leaf.add_sister(subtree) 293 | leaf.detach() 294 | return final_tree 295 | print("\nFinal tree:") 296 | print(final_tree(sortedDM).get_ascii(compact=False)) 297 | -------------------------------------------------------------------------------- /tutorials/forbook/code/ch4_leftcorner_parser_with_relative_clauses.py: -------------------------------------------------------------------------------- 1 
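Before moving on, it may help to see the shape of the `sortedDM` computation that closes the parser file above in isolation. The sketch below is a minimal, pyactr-free illustration: declarative memory maps each chunk to the tuple of times at which it was (re)activated, and the comprehension flattens that mapping into `(chunk, time)` pairs ordered by activation time. The `toy_dm` dictionary is a hypothetical stand-in for `parser.decmem`; the real keys are pyactr chunk objects, not strings.

```python
# Hypothetical stand-in for parser.decmem: chunk -> tuple of (re)activation times.
toy_dm = {
    "word(form=Mary)": (0.05, 0.35),
    "parse_state(node_cat=NP)": (0.25,),
    "word(form=likes)": (0.45,),
}

# Flatten to (chunk, time) pairs, one pair per activation, sorted by time --
# the same pattern as the sortedDM comprehension in the parser file above.
sorted_dm = sorted(
    ([chunk, time] for chunk, times in toy_dm.items() for time in times),
    key=lambda item: item[1],
)
```

A chunk that was reactivated appears once per activation time, which is why the tree-building code above can take `chunkFromCat[-1]` to get a node's most recent activation.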
| """ 2 | A left-corner parser. 3 | """ 4 | 5 | import pyactr as actr 6 | from ete3 import Tree 7 | import simpy 8 | import re 9 | 10 | environment = actr.Environment(focus_position=(320, 180)) 11 | 12 | actr.chunktype("parsing_goal", 13 | "task stack1 stack2 stack3 stack4 parsed_word right_frontier found gapped") 14 | actr.chunktype("parse_state", 15 | "node_cat mother daughter1 daughter2 lex_head") 16 | actr.chunktype("word", "form cat") 17 | 18 | parser = actr.ACTRModel(environment, subsymbolic=True, retrieval_threshold=-5, latency_factor=0.1, latency_exponent=0.13) 19 | dm = parser.decmem 20 | g = parser.goal 21 | imaginal = parser.set_goal(name="imaginal", delay=0) 22 | 23 | dm.add(actr.chunkstring(string=""" 24 | isa word 25 | form 'Mary' 26 | cat 'ProperN' 27 | """)) 28 | 29 | dm.add(actr.chunkstring(string=""" 30 | isa word 31 | form 'The' 32 | cat 'Det' 33 | """)) 34 | 35 | dm.add(actr.chunkstring(string=""" 36 | isa word 37 | form 'the' 38 | cat 'Det' 39 | """)) 40 | dm.add(actr.chunkstring(string=""" 41 | isa word 42 | form 'boy' 43 | cat 'N' 44 | """)) 45 | dm.add(actr.chunkstring(string=""" 46 | isa word 47 | form 'who' 48 | cat 'wh' 49 | """)) 50 | dm.add(actr.chunkstring(string=""" 51 | isa word 52 | form 'Bill' 53 | cat 'ProperN' 54 | """)) 55 | dm.add(actr.chunkstring(string=""" 56 | isa word 57 | form 'likes' 58 | cat 'V' 59 | """)) 60 | dm.add(actr.chunkstring(string=""" 61 | isa word 62 | form 'saw' 63 | cat 'V' 64 | """)) 65 | g.add(actr.chunkstring(string=""" 66 | isa parsing_goal 67 | task reading_word 68 | stack1 'S' 69 | right_frontier 'S' 70 | gapped False 71 | """)) 72 | 73 | parser.productionstring(name="encode word", string=""" 74 | =g> 75 | isa parsing_goal 76 | task reading_word 77 | =visual> 78 | isa _visual 79 | value =val 80 | ==> 81 | =g> 82 | isa parsing_goal 83 | task get_word_cat 84 | parsed_word =val 85 | ~visual> 86 | ~retrieval> 87 | """) 88 | 89 | parser.productionstring(name="retrieve category", string=""" 90 | =g> 91 | 
isa parsing_goal 92 | task get_word_cat 93 | parsed_word =w 94 | ==> 95 | +retrieval> 96 | isa word 97 | form =w 98 | =g> 99 | isa parsing_goal 100 | task retrieving_word 101 | """) 102 | 103 | parser.productionstring(name="shift and project word", string=""" 104 | =g> 105 | isa parsing_goal 106 | task retrieving_word 107 | =retrieval> 108 | isa word 109 | form =w 110 | cat =c 111 | ==> 112 | =g> 113 | isa parsing_goal 114 | task parsing 115 | found =c 116 | +imaginal> 117 | isa parse_state 118 | node_cat =c 119 | daughter1 =w 120 | ~retrieval> 121 | """) 122 | 123 | parser.productionstring(name="reanalyse: subject wh", string=""" 124 | =g> 125 | isa parsing_goal 126 | task parsing 127 | stack1 'VP' 128 | stack2 'VP' 129 | gapped False 130 | found ~'V' 131 | found ~None 132 | ==> 133 | =g> 134 | isa parsing_goal 135 | stack1 'S' 136 | stack2 'VP' 137 | stack3 'VP' 138 | right_frontier 'S' 139 | gapped True 140 | """) 141 | 142 | parser.productionstring(name="project: NP ==> ProperN", string=""" 143 | =g> 144 | isa parsing_goal 145 | task parsing 146 | stack1 'S' 147 | stack2 =s2 148 | stack3 =s3 149 | right_frontier =rf 150 | parsed_word =w 151 | found 'ProperN' 152 | ==> 153 | =g> 154 | isa parsing_goal 155 | stack1 'NP' 156 | stack2 'S' 157 | stack3 =s2 158 | stack4 =s3 159 | found None 160 | +imaginal> 161 | isa parse_state 162 | node_cat 'NP' 163 | daughter1 'ProperN' 164 | mother =rf 165 | lex_head =w 166 | """) 167 | 168 | parser.productionstring(name="project: NP ==> Det N", string=""" 169 | =g> 170 | isa parsing_goal 171 | task parsing 172 | stack1 'S' 173 | stack1 =s1 174 | stack2 =s2 175 | right_frontier =rf 176 | parsed_word =w 177 | found 'Det' 178 | ==> 179 | =g> 180 | isa parsing_goal 181 | stack1 'N' 182 | stack2 'NP' 183 | stack3 =s1 184 | stack4 =s2 185 | found None 186 | +imaginal> 187 | isa parse_state 188 | node_cat 'NP' 189 | daughter1 'Det' 190 | mother =rf 191 | """) 192 | 193 | parser.productionstring(name="project and complete: NP ==> Det 
N", string=""" 194 | =g> 195 | isa parsing_goal 196 | task parsing 197 | stack1 'NP' 198 | right_frontier =rf 199 | parsed_word =w 200 | found 'Det' 201 | ==> 202 | =g> 203 | isa parsing_goal 204 | stack1 'N' 205 | found None 206 | +imaginal> 207 | isa parse_state 208 | node_cat 'NP' 209 | daughter1 'Det' 210 | mother =rf 211 | """) 212 | 213 | parser.productionstring(name="project and complete: N", string=""" 214 | =g> 215 | isa parsing_goal 216 | task parsing 217 | stack1 'N' 218 | stack2 =s2 219 | stack3 =s3 220 | stack4 =s4 221 | right_frontier =rf 222 | parsed_word =w 223 | found 'N' 224 | ==> 225 | =g> 226 | isa parsing_goal 227 | stack1 =s2 228 | stack2 =s3 229 | stack3 =s4 230 | stack4 None 231 | found None 232 | +imaginal> 233 | isa parse_state 234 | node_cat 'NP' 235 | daughter1 'Det' 236 | daughter2 'N' 237 | lex_head =w 238 | mother 'NP' 239 | """) 240 | 241 | parser.productionstring(name="project and complete: wh", string=""" 242 | =g> 243 | isa parsing_goal 244 | task parsing 245 | stack1 'VP' 246 | stack2 =s2 247 | stack3 =s3 248 | right_frontier =rf 249 | parsed_word =w 250 | found 'wh' 251 | gapped ~filling 252 | ==> 253 | =g> 254 | isa parsing_goal 255 | stack1 'VP' 256 | stack2 'VP' 257 | stack3 =s2 258 | stack4 =s3 259 | gapped filling 260 | found None 261 | +imaginal> 262 | isa parse_state 263 | node_cat 'CP' 264 | daughter1 'wh-DP' 265 | daughter2 'S' 266 | lex_head =w 267 | mother 'NP' 268 | """) 269 | 270 | parser.productionstring(name="project and complete: subj wh-gap", string=""" 271 | =g> 272 | isa parsing_goal 273 | task parsing 274 | stack1 'VP' 275 | stack2 'VP' 276 | gapped filling 277 | parsed_word =w 278 | ==> 279 | =g> 280 | isa parsing_goal 281 | gapped False 282 | found None 283 | +imaginal> 284 | isa parse_state 285 | node_cat 'NP' 286 | daughter1 'gap' 287 | lex_head =w 288 | mother 'S' 289 | """) 290 | 291 | parser.productionstring(name="project and complete: NP ==> ProperN", string=""" 292 | =g> 293 | isa parsing_goal 294 | 
task parsing 295 | stack1 'NP' 296 | stack2 =s2 297 | stack3 =s3 298 | stack4 =s4 299 | right_frontier =rf 300 | parsed_word =w 301 | found 'ProperN' 302 | ==> 303 | =g> 304 | isa parsing_goal 305 | stack1 =s2 306 | stack2 =s3 307 | stack3 =s4 308 | found None 309 | +imaginal> 310 | isa parse_state 311 | node_cat 'NP' 312 | daughter1 'ProperN' 313 | mother =rf 314 | lex_head =w 315 | """) 316 | 317 | parser.productionstring(name="project and complete: S ==> NP VP", string=""" 318 | =g> 319 | isa parsing_goal 320 | task parsing 321 | stack1 'NP' 322 | stack2 'S' 323 | stack3 =s3 324 | stack4 =s4 325 | ==> 326 | =g> 327 | isa parsing_goal 328 | stack1 'VP' 329 | stack2 =s3 330 | stack3 =s4 331 | right_frontier 'VP' 332 | found None 333 | +imaginal> 334 | isa parse_state 335 | node_cat 'S' 336 | daughter1 'NP' 337 | daughter2 'VP' 338 | """) 339 | 340 | parser.productionstring(name="project and complete: VP ==> V NP", string=""" 341 | =g> 342 | isa parsing_goal 343 | task parsing 344 | stack1 'VP' 345 | found 'V' 346 | gapped ~True 347 | ==> 348 | =g> 349 | isa parsing_goal 350 | stack1 'NP' 351 | found None 352 | +imaginal> 353 | isa parse_state 354 | mother 'S' 355 | node_cat 'VP' 356 | daughter1 'V' 357 | daughter2 'NP' 358 | """) 359 | 360 | parser.productionstring(name="project and complete: VP ==> V NP gapped", string=""" 361 | =g> 362 | isa parsing_goal 363 | task parsing 364 | stack1 'VP' 365 | stack2 =s2 366 | stack3 =s3 367 | stack4 =s4 368 | found 'V' 369 | gapped True 370 | ==> 371 | +retrieval> 372 | isa parse_state 373 | node_cat 'wh' 374 | =g> 375 | isa parsing_goal 376 | stack1 =s2 377 | stack2 =s3 378 | stack3 =s4 379 | found None 380 | gapped False 381 | +imaginal> 382 | isa parse_state 383 | mother 'S' 384 | node_cat 'VP' 385 | daughter1 'V' 386 | daughter2 'NP' 387 | """) 388 | 389 | parser.productionstring(name="press spacebar", string=""" 390 | =g> 391 | isa parsing_goal 392 | task ~reading_word 393 | ?manual> 394 | state free 395 | ?retrieval> 
396 | state free 397 | ==> 398 | =g> 399 | isa parsing_goal 400 | task reading_word 401 | +manual> 402 | isa _manual 403 | cmd 'press_key' 404 | key 'space' 405 | """, utility=-5) 406 | 407 | 408 | parser.productionstring(name="finished", string=""" 409 | =g> 410 | isa parsing_goal 411 | task reading_word 412 | stack1 None 413 | ==> 414 | ~g> 415 | ~imaginal> 416 | """) 417 | 418 | if __name__ == "__main__": 419 | 420 | sentence = "The boy who likes the boy saw Mary" 421 | sentence = sentence.split() 422 | 423 | stimuli = [] 424 | for word in sentence: 425 | stimuli.append({1: {'text': word, 'position': (320, 180)}}) 426 | parser_sim = parser.simulation( 427 | realtime=False, 428 | gui=False, 429 | environment_process=environment.environment_process, 430 | stimuli=stimuli, 431 | triggers='space') 432 | 433 | recorded = ["likes", "saw"] 434 | dict_recorded = {key: [0, 0] for key in recorded} 435 | recorded_word = None 436 | while True: 437 | try: 438 | parser_sim.step() 439 | except simpy.core.EmptySchedule: 440 | break 441 | if re.search("^RULE FIRED: press spacebar", str(parser_sim.current_event.action)): 442 | print(parser.goals["g"]) 443 | input() 444 | 445 | if recorded_word in dict_recorded and not re.search(recorded_word, str(environment.stimulus)) and dict_recorded[recorded_word][0] and not dict_recorded[recorded_word][1]: 446 | dict_recorded[recorded_word][1] = parser_sim.show_time() 447 | #print(parser.goals["g"]) 448 | #print(dict_recorded) 449 | #input() 450 | elif re.search("|".join(["(".join([x, ")"]) for x in recorded]), str(environment.stimulus)): 451 | recorded_word = str(environment.stimulus[1]['text']) 452 | if recorded_word in dict_recorded and not dict_recorded[recorded_word][0]: 453 | dict_recorded[recorded_word][0] = parser_sim.show_time() 454 | 455 | dict_recorded = {key: dict_recorded[key][1]-dict_recorded[key][0] for key in dict_recorded} 456 | print(dict_recorded) 457 | input() 458 | 459 | sortedDM = sorted(([item[0], time] for item in 
dm.items() for time in item[1]), 460 | key=lambda item: item[1]) 461 | print("\nParse states in declarative memory at the end of the simulation", 462 | "\nordered by time of (re)activation:") 463 | for chunk in sortedDM: 464 | if chunk[0].typename == "parse_state": 465 | print(chunk[1], "\t", chunk[0]) 466 | print("\nWords in declarative memory at the end of the simulation", 467 | "\nordered by time of (re)activation:") 468 | for chunk in sortedDM: 469 | if chunk[0].typename == "word": 470 | print(chunk[1], "\t", chunk[0]) 471 | 472 | -------------------------------------------------------------------------------- /tutorials/forbook/code/ch4_lexical_decision.py: -------------------------------------------------------------------------------- 1 | """ 2 | A simple model of lexical decision. 3 | """ 4 | 5 | import pyactr as actr 6 | 7 | environment = actr.Environment(focus_position=(0,0)) 8 | lex_decision = actr.ACTRModel(environment=environment, 9 | automatic_visual_search=False) 10 | 11 | actr.chunktype("goal", "state") 12 | actr.chunktype("word", "form") 13 | 14 | dm = lex_decision.decmem 15 | for string in {"elephant", "dog", "crocodile"}: 16 | dm.add(actr.makechunk(typename="word", form=string)) 17 | 18 | g = lex_decision.goal 19 | g.add(actr.makechunk(nameofchunk="start", 20 | typename="goal", 21 | state="start")) 22 | 23 | lex_decision.productionstring(name="find word", string=""" 24 | =g> 25 | isa goal 26 | state 'start' 27 | ?visual_location> 28 | buffer empty 29 | ==> 30 | =g> 31 | isa goal 32 | state 'attend' 33 | +visual_location> 34 | isa _visuallocation 35 | screen_x closest 36 | """) 37 | 38 | lex_decision.productionstring(name="attend word", string=""" 39 | =g> 40 | isa goal 41 | state 'attend' 42 | =visual_location> 43 | isa _visuallocation 44 | ?visual> 45 | state free 46 | ==> 47 | =g> 48 | isa goal 49 | state 'retrieving' 50 | +visual> 51 | isa _visual 52 | cmd move_attention 53 | screen_pos =visual_location 54 | ~visual_location> 55 | """) 56 | 57 
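The productions in this lexical-decision model implement a simple finite-state control flow on the goal buffer's `state` slot. The following pyactr-free sketch shows just that control skeleton under the assumption that each production fires exactly once; in the real model, visual, retrieval, and motor events are interleaved between these transitions, and the final step branches on whether retrieval succeeds (`'J'`) or fails (`'F'`).

```python
# Each entry maps the goal state tested on a rule's left-hand side to the
# state set on its right-hand side (rule names from the model in comments).
TRANSITIONS = {
    "start": "attend",               # "find word"
    "attend": "retrieving",          # "attend word"
    "retrieving": "retrieval_done",  # "retrieving"
    "retrieval_done": "done",        # "lexeme retrieved" / "no lexeme found"
}

def run_states(state="start"):
    """Follow the goal-state transitions until no production matches."""
    trace = [state]
    while state in TRANSITIONS:
        state = TRANSITIONS[state]
        trace.append(state)
    return trace
```

`run_states()` yields the five goal states in order, which is the scaffolding the buffer tests (`?visual>`, `?retrieval>`) hang off in the model itself.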
| lex_decision.productionstring(name="retrieving", string=""" 58 | =g> 59 | isa goal 60 | state 'retrieving' 61 | =visual> 62 | isa _visual 63 | value =val 64 | ==> 65 | =g> 66 | isa goal 67 | state 'retrieval_done' 68 | +retrieval> 69 | isa word 70 | form =val 71 | """) 72 | 73 | lex_decision.productionstring(name="lexeme retrieved", string=""" 74 | =g> 75 | isa goal 76 | state 'retrieval_done' 77 | ?retrieval> 78 | buffer full 79 | state free 80 | ==> 81 | =g> 82 | isa goal 83 | state 'done' 84 | +manual> 85 | isa _manual 86 | cmd press_key 87 | key 'J' 88 | """) 89 | 90 | lex_decision.productionstring(name="no lexeme found", string=""" 91 | =g> 92 | isa goal 93 | state 'retrieval_done' 94 | ?retrieval> 95 | buffer empty 96 | state error 97 | ==> 98 | =g> 99 | isa goal 100 | state 'done' 101 | +manual> 102 | isa _manual 103 | cmd press_key 104 | key 'F' 105 | """) 106 | 107 | word = {1: {'text': 'elephant', 'position': (320, 180)}} 108 | 109 | if __name__ == "__main__": 110 | lex_dec_sim = lex_decision.simulation(realtime=True, gui=False, 111 | environment_process=environment.environment_process, 112 | stimuli=word, triggers='', times=1) 113 | lex_dec_sim.run() 114 | -------------------------------------------------------------------------------- /tutorials/forbook/code/ch7_lexical_decision_pyactr_no_imaginal.py: -------------------------------------------------------------------------------- 1 | """ 2 | A model of lexical decision: Bayes+ACT-R, no imaginal buffer 3 | """ 4 | 5 | import warnings 6 | import sys 7 | 8 | import matplotlib as mpl 9 | mpl.use("pgf") 10 | pgf_with_pdflatex = {"text.usetex": True, "pgf.texsystem": "pdflatex", 11 | "pgf.preamble": [r"\usepackage{mathpazo}", 12 | r"\usepackage[utf8x]{inputenc}", 13 | r"\usepackage[T1]{fontenc}", 14 | r"\usepackage{amsmath}"], 15 | "axes.labelsize": 8, 16 | "font.family": "serif", 17 | "font.serif":["Palatino"], 18 | "font.size": 8, 19 | "legend.fontsize": 8, 20 | "xtick.labelsize": 8, 21 | 
"ytick.labelsize": 8} 22 | mpl.rcParams.update(pgf_with_pdflatex) 23 | import matplotlib.pyplot as plt 24 | plt.style.use('seaborn') 25 | import seaborn as sns 26 | sns.set_style({"font.family":"serif", "font.serif":["Palatino"]}) 27 | 28 | import pandas as pd 29 | import pyactr as actr 30 | import math 31 | from simpy.core import EmptySchedule 32 | import numpy as np 33 | import re 34 | import scipy.stats as stats 35 | import scipy 36 | 37 | import pymc3 as pm 38 | from pymc3 import Gamma, Normal, HalfNormal, Deterministic, Uniform, find_MAP,\ 39 | Slice, sample, summary, Metropolis, traceplot, gelman_rubin 40 | from pymc3.backends.base import merge_traces 41 | from pymc3.backends import SQLite 42 | from pymc3.backends.sqlite import load 43 | import theano 44 | import theano.tensor as tt 45 | from theano.compile.ops import as_op 46 | 47 | warnings.filterwarnings("ignore") 48 | 49 | FREQ = np.array([242, 92.8, 57.7, 40.5, 30.6, 23.4, 19,\ 50 | 16, 13.4, 11.5, 10, 9, 7, 5, 3, 1]) 51 | RT = np.array([542, 555, 566, 562, 570, 569, 577, 587,\ 52 | 592, 605, 603, 575, 620, 607, 622, 674]) 53 | ACCURACY = np.array([97.22, 95.56, 95.56, 96.3, 96.11, 94.26,\ 54 | 95, 92.41, 91.67, 93.52, 91.85, 93.52,\ 55 | 91.48, 90.93, 84.44, 74.63])/100 56 | 57 | environment = actr.Environment(focus_position=(320, 180)) 58 | lex_decision = actr.ACTRModel(environment=environment,\ 59 | subsymbolic=True,\ 60 | automatic_visual_search=True,\ 61 | activation_trace=False,\ 62 | retrieval_threshold=-80,\ 63 | motor_prepared=True, 64 | eye_mvt_scaling_parameter=0.18,\ 65 | emma_noise=False) 66 | 67 | actr.chunktype("goal", "state") 68 | actr.chunktype("word", "form") 69 | 70 | # on average, 15 years of exposure is 112.5 million words 71 | 72 | SEC_IN_YEAR = 365*24*3600 73 | SEC_IN_TIME = 15*SEC_IN_YEAR 74 | 75 | FREQ_DICT = {} 76 | FREQ_DICT['guy'] = 242*112.5 77 | FREQ_DICT['somebody'] = 92*112.5 78 | FREQ_DICT['extend'] = 58*112.5 79 | FREQ_DICT['dance'] = 40.5*112.5 80 | FREQ_DICT['shape'] 
= 30.6*112.5 81 | FREQ_DICT['besides'] = 23.4*112.5 82 | FREQ_DICT['fit'] = 19*112.5 83 | FREQ_DICT['dedicate'] = 16*112.5 84 | FREQ_DICT['robot'] = 13.4*112.5 85 | FREQ_DICT['tile'] = 11.5*112.5 86 | FREQ_DICT['between'] = 10*112.5 87 | FREQ_DICT['precedent'] = 9*112.5 88 | FREQ_DICT['wrestle'] = 7*112.5 89 | FREQ_DICT['resonate'] = 5*112.5 90 | FREQ_DICT['seated'] = 3*112.5 91 | FREQ_DICT['habitually'] = 1*112.5 92 | 93 | ORDERED_FREQ = sorted(list(FREQ_DICT), key=lambda x:FREQ_DICT[x], reverse=True) 94 | 95 | def time_freq(freq): 96 | rehearsals = np.zeros((np.int(np.max(freq) * 113), len(freq))) 97 | for i in np.arange(len(freq)): 98 | temp = np.arange(np.int((freq[i]*112.5))) 99 | temp = temp * np.int(SEC_IN_TIME/(freq[i]*112.5)) 100 | rehearsals[:len(temp),i] = temp 101 | return(rehearsals.T) 102 | 103 | time = theano.shared(time_freq(FREQ), 'time') 104 | 105 | LEMMA_CHUNKS = [(actr.makechunk("", typename="word", form=word)) 106 | for word in ORDERED_FREQ] 107 | lex_decision.set_decmem({x: np.array([]) for x in LEMMA_CHUNKS}) 108 | 109 | lex_decision.goals = {} 110 | lex_decision.set_goal("g") 111 | 112 | lex_decision.productionstring(name="attend word", string=""" 113 | =g> 114 | isa goal 115 | state 'attend' 116 | =visual_location> 117 | isa _visuallocation 118 | ?visual> 119 | state free 120 | ==> 121 | =g> 122 | isa goal 123 | state 'retrieving' 124 | +visual> 125 | isa _visual 126 | cmd move_attention 127 | screen_pos =visual_location 128 | ~visual_location> 129 | """) 130 | 131 | lex_decision.productionstring(name="retrieving", string=""" 132 | =g> 133 | isa goal 134 | state 'retrieving' 135 | =visual> 136 | isa _visual 137 | value =val 138 | ==> 139 | =g> 140 | isa goal 141 | state 'retrieval_done' 142 | +retrieval> 143 | isa word 144 | form =val 145 | """) 146 | 147 | lex_decision.productionstring(name="lexeme retrieved", string=""" 148 | =g> 149 | isa goal 150 | state 'retrieval_done' 151 | ?retrieval> 152 | buffer full 153 | state free 154 | ==> 155 
| =g> 156 | isa goal 157 | state 'done' 158 | +manual> 159 | isa _manual 160 | cmd press_key 161 | key 'J' 162 | """) 163 | 164 | lex_decision.productionstring(name="no lexeme found", string=""" 165 | =g> 166 | isa goal 167 | state 'retrieval_done' 168 | ?retrieval> 169 | buffer empty 170 | state error 171 | ==> 172 | =g> 173 | isa goal 174 | state 'done' 175 | +manual> 176 | isa _manual 177 | cmd press_key 178 | key 'F' 179 | """) 180 | 181 | def run_stimulus(word): 182 | """ 183 | Function running one instance of lexical decision for a word. 184 | """ 185 | # reset model state to initial state for a new simulation 186 | # (flush buffers without moving their contents to dec mem) 187 | try: 188 | lex_decision.retrieval.pop() 189 | except KeyError: 190 | pass 191 | try: 192 | lex_decision.goals["g"].pop() 193 | except KeyError: 194 | pass 195 | 196 | # reinitialize model 197 | stim = {1: {'text': word, 'position': (320, 180)}} 198 | lex_decision.goals["g"].add(actr.makechunk(nameofchunk='start', 199 | typename="goal", 200 | state='attend')) 201 | environment.current_focus = [320,180] 202 | lex_decision.model_parameters['motor_prepared'] = True 203 | 204 | # run new simulation 205 | lex_dec_sim = lex_decision.simulation(realtime=False, gui=False, trace=False, 206 | environment_process=environment.environment_process, 207 | stimuli=stim, triggers='', times=10) 208 | while True: 209 | lex_dec_sim.step() 210 | if lex_dec_sim.current_event.action == "KEY PRESSED: J": 211 | estimated_time = lex_dec_sim.show_time() 212 | break 213 | if lex_dec_sim.current_event.action == "KEY PRESSED: F": 214 | estimated_time = -1 215 | break 216 | return estimated_time 217 | 218 | def run_lex_decision_task(): 219 | """ 220 | Function running a full lexical decision task: 221 | it calls run_stimulus(word) for words from all 16 freq bands. 
222 | """ 223 | sample = [] 224 | for word in ORDERED_FREQ: 225 | sample.append(1000*run_stimulus(word)) 226 | return sample 227 | 228 | @as_op(itypes=[tt.dscalar, tt.dscalar, tt.dscalar, tt.dvector], 229 | otypes=[tt.dvector]) 230 | def actrmodel_latency(lf, le, decay, activation_from_time): 231 | """ 232 | Function running the entire lexical decision task for specific 233 | values of the latency factor, latency exponent and decay parameters. 234 | The activation computed with the specific value of the decay 235 | parameter is also inherited as a separate argument to save expensive 236 | computation time. 237 | The function is wrapped inside the theano @as_op decorator so that 238 | pymc3 / theano can use it as part of the RT likelihood function in the 239 | Bayesian model below. 240 | """ 241 | lex_decision.model_parameters["latency_factor"] = lf 242 | lex_decision.model_parameters["latency_exponent"] = le 243 | lex_decision.model_parameters["decay"] = decay 244 | activation_dict = {x[0]: x[1] 245 | for x in zip(LEMMA_CHUNKS, activation_from_time)} 246 | lex_decision.decmem.activations.update(activation_dict) 247 | sample = run_lex_decision_task() 248 | return np.array(sample) 249 | 250 | lex_decision_with_bayes = pm.Model() 251 | with lex_decision_with_bayes: 252 | # prior for activation 253 | decay = Uniform('decay', lower=0, upper=1) 254 | # priors for accuracy 255 | noise = Uniform('noise', lower=0, upper=5) 256 | threshold = Normal('threshold', mu=0, sd=10) 257 | # priors for latency 258 | lf = HalfNormal('lf', sd=1) 259 | le = HalfNormal('le', sd=1) 260 | # compute activation 261 | scaled_time = time ** (-decay) 262 | def compute_activation(scaled_time_vector): 263 | compare = tt.isinf(scaled_time_vector) 264 | subvector = scaled_time_vector[(1-compare).nonzero()] 265 | activation_from_time = tt.log(subvector.sum()) 266 | return activation_from_time 267 | activation_from_time, _ = theano.scan(fn=compute_activation,\ 268 | sequences=scaled_time) 269 | # 
latency likelihood -- this is where pyactr is used 270 | pyactr_rt = actrmodel_latency(lf, le, decay, activation_from_time) 271 | mu_rt = Deterministic('mu_rt', pyactr_rt) 272 | rt_observed = Normal('rt_observed', mu=mu_rt, sd=0.01, observed=RT) 273 | # accuracy likelihood 274 | odds_reciprocal = tt.exp(-(activation_from_time - threshold)/noise) 275 | mu_prob = Deterministic('mu_prob', 1/(1 + odds_reciprocal)) 276 | prob_observed = Normal('prob_observed', mu=mu_prob, sd=0.01,\ 277 | observed=ACCURACY) 278 | # we start the sampling 279 | #step = Metropolis() 280 | #db = SQLite('lex_dec_pyactr_chain_no_imaginal.sqlite') 281 | #trace = sample(draws=60000, trace=db, njobs=1, step=step, init='auto') 282 | 283 | with lex_decision_with_bayes: 284 | trace = load('./data/lex_dec_pyactr_chain_no_imaginal.sqlite') 285 | trace = trace[10500:] 286 | 287 | mu_rt = pd.DataFrame(trace['mu_rt']) 288 | yerr_rt = [(mu_rt.mean()-mu_rt.quantile(0.025)),\ 289 | (mu_rt.quantile(0.975)-mu_rt.mean())] 290 | 291 | mu_prob = pd.DataFrame(trace['mu_prob']) 292 | yerr_prob = [(mu_prob.mean()-mu_prob.quantile(0.025)),\ 293 | (mu_prob.quantile(0.975)-mu_prob.mean())] 294 | 295 | def generate_lex_dec_pyactr_no_imaginal_figure(): 296 | fig, (ax1, ax2) = plt.subplots(ncols=1, nrows=2) 297 | fig.set_size_inches(6.0, 8.5) 298 | # plot 1: RTs 299 | ax1.errorbar(RT, mu_rt.mean(), yerr=yerr_rt, marker='o', linestyle='') 300 | ax1.plot(np.linspace(500, 800, 10), np.linspace(500, 800, 10),\ 301 | color='red', linestyle=':') 302 | ax1.set_title('Lex. dec. model (pyactr, no imaginal): RTs') 303 | ax1.set_xlabel('Observed RTs (ms)') 304 | ax1.set_ylabel('Predicted RTs (ms)') 305 | ax1.grid(b=True, which='minor', color='w', linewidth=1.0) 306 | # plot 2: probabilities 307 | ax2.errorbar(ACCURACY, mu_prob.mean(), yerr=yerr_prob, marker='o',\ 308 | linestyle='') 309 | ax2.plot(np.linspace(50, 100, 10)/100,\ 310 | np.linspace(50, 100, 10)/100,\ 311 | color='red', linestyle=':') 312 | ax2.set_title('Lex. dec. 
model (pyactr, no imaginal): Prob.s') 313 | ax2.set_xlabel('Observed probabilities') 314 | ax2.set_ylabel('Predicted probabilities') 315 | ax2.grid(b=True, which='minor', color='w', linewidth=1.0) 316 | # clean up and save 317 | plt.tight_layout(pad=0.5, w_pad=0.2, h_pad=0.7) 318 | plt.savefig('./figures/lex_dec_model_pyactr_no_imaginal.pgf') 319 | plt.savefig('./figures/lex_dec_model_pyactr_no_imaginal.pdf') 320 | 321 | generate_lex_dec_pyactr_no_imaginal_figure() 322 | -------------------------------------------------------------------------------- /tutorials/forbook/code/ch7_lexical_decision_pyactr_with_imaginal.py: -------------------------------------------------------------------------------- 1 | """ 2 | A model of lexical decision: Bayes+ACT-R, with imaginal buffer; 3 | default delay for the imaginal buffer (200 ms) 4 | """ 5 | 6 | import warnings 7 | import sys 8 | 9 | import matplotlib as mpl 10 | mpl.use("pgf") 11 | pgf_with_pdflatex = {"text.usetex": True, "pgf.texsystem": "pdflatex", 12 | "pgf.preamble": [r"\usepackage{mathpazo}", 13 | r"\usepackage[utf8x]{inputenc}", 14 | r"\usepackage[T1]{fontenc}", 15 | r"\usepackage{amsmath}"], 16 | "axes.labelsize": 8, 17 | "font.family": "serif", 18 | "font.serif":["Palatino"], 19 | "font.size": 8, 20 | "legend.fontsize": 8, 21 | "xtick.labelsize": 8, 22 | "ytick.labelsize": 8} 23 | mpl.rcParams.update(pgf_with_pdflatex) 24 | import matplotlib.pyplot as plt 25 | plt.style.use('seaborn') 26 | import seaborn as sns 27 | sns.set_style({"font.family":"serif", "font.serif":["Palatino"]}) 28 | 29 | import pandas as pd 30 | import pyactr as actr 31 | import math 32 | from simpy.core import EmptySchedule 33 | import numpy as np 34 | import re 35 | import scipy.stats as stats 36 | import scipy 37 | 38 | import pymc3 as pm 39 | from pymc3 import Gamma, Normal, HalfNormal, Deterministic, Uniform, find_MAP,\ 40 | Slice, sample, summary, Metropolis, traceplot, gelman_rubin 41 | from pymc3.backends.base import merge_traces 42 | 
from pymc3.backends import SQLite 43 | from pymc3.backends.sqlite import load 44 | import theano 45 | import theano.tensor as tt 46 | from theano.compile.ops import as_op 47 | 48 | warnings.filterwarnings("ignore") 49 | 50 | FREQ = np.array([242, 92.8, 57.7, 40.5, 30.6, 23.4, 19,\ 51 | 16, 13.4, 11.5, 10, 9, 7, 5, 3, 1]) 52 | RT = np.array([542, 555, 566, 562, 570, 569, 577, 587,\ 53 | 592, 605, 603, 575, 620, 607, 622, 674]) 54 | ACCURACY = np.array([97.22, 95.56, 95.56, 96.3, 96.11, 94.26,\ 55 | 95, 92.41, 91.67, 93.52, 91.85, 93.52,\ 56 | 91.48, 90.93, 84.44, 74.63])/100 57 | 58 | environment = actr.Environment(focus_position=(320, 180)) 59 | lex_decision = actr.ACTRModel(environment=environment,\ 60 | subsymbolic=True,\ 61 | automatic_visual_search=True,\ 62 | activation_trace=False,\ 63 | retrieval_threshold=-80,\ 64 | motor_prepared=True, 65 | eye_mvt_scaling_parameter=0.18,\ 66 | emma_noise=False) 67 | 68 | actr.chunktype("goal", "state") 69 | actr.chunktype("word", "form") 70 | 71 | # on average, 15 years of exposure is 112.5 million words 72 | 73 | SEC_IN_YEAR = 365*24*3600 74 | SEC_IN_TIME = 15*SEC_IN_YEAR 75 | 76 | FREQ_DICT = {} 77 | FREQ_DICT['guy'] = 242*112.5 78 | FREQ_DICT['somebody'] = 92*112.5 79 | FREQ_DICT['extend'] = 58*112.5 80 | FREQ_DICT['dance'] = 40.5*112.5 81 | FREQ_DICT['shape'] = 30.6*112.5 82 | FREQ_DICT['besides'] = 23.4*112.5 83 | FREQ_DICT['fit'] = 19*112.5 84 | FREQ_DICT['dedicate'] = 16*112.5 85 | FREQ_DICT['robot'] = 13.4*112.5 86 | FREQ_DICT['tile'] = 11.5*112.5 87 | FREQ_DICT['between'] = 10*112.5 88 | FREQ_DICT['precedent'] = 9*112.5 89 | FREQ_DICT['wrestle'] = 7*112.5 90 | FREQ_DICT['resonate'] = 5*112.5 91 | FREQ_DICT['seated'] = 3*112.5 92 | FREQ_DICT['habitually'] = 1*112.5 93 | 94 | ORDERED_FREQ = sorted(list(FREQ_DICT), key=lambda x:FREQ_DICT[x], reverse=True) 95 | 96 | def time_freq(freq): 97 | rehearsals = np.zeros((np.int(np.max(freq) * 113), len(freq))) 98 | for i in np.arange(len(freq)): 99 | temp = 
np.arange(np.int((freq[i]*112.5))) 100 | temp = temp * np.int(SEC_IN_TIME/(freq[i]*112.5)) 101 | rehearsals[:len(temp),i] = temp 102 | return(rehearsals.T) 103 | 104 | time = theano.shared(time_freq(FREQ), 'time') 105 | 106 | LEMMA_CHUNKS = [(actr.makechunk("", typename="word", form=word)) 107 | for word in ORDERED_FREQ] 108 | lex_decision.set_decmem({x: np.array([]) for x in LEMMA_CHUNKS}) 109 | 110 | lex_decision.goals = {} 111 | lex_decision.set_goal("g") 112 | lex_decision.set_goal("imaginal") 113 | 114 | lex_decision.productionstring(name="attend word", string=""" 115 | =g> 116 | isa goal 117 | state 'attend' 118 | =visual_location> 119 | isa _visuallocation 120 | ?visual> 121 | state free 122 | ==> 123 | =g> 124 | isa goal 125 | state 'encoding' 126 | +visual> 127 | isa _visual 128 | cmd move_attention 129 | screen_pos =visual_location 130 | ~visual_location> 131 | """) 132 | 133 | lex_decision.productionstring(name="encoding word", string=""" 134 | =g> 135 | isa goal 136 | state 'encoding' 137 | =visual> 138 | isa _visual 139 | value =val 140 | ==> 141 | =g> 142 | isa goal 143 | state 'retrieving' 144 | +imaginal> 145 | isa word 146 | form =val 147 | """) 148 | 149 | lex_decision.productionstring(name="retrieving", string=""" 150 | =g> 151 | isa goal 152 | state 'retrieving' 153 | =imaginal> 154 | isa word 155 | form =val 156 | ==> 157 | =g> 158 | isa goal 159 | state 'retrieval_done' 160 | +retrieval> 161 | isa word 162 | form =val 163 | """) 164 | 165 | lex_decision.productionstring(name="lexeme retrieved", string=""" 166 | =g> 167 | isa goal 168 | state 'retrieval_done' 169 | ?retrieval> 170 | buffer full 171 | state free 172 | ==> 173 | =g> 174 | isa goal 175 | state 'done' 176 | +manual> 177 | isa _manual 178 | cmd press_key 179 | key 'J' 180 | """) 181 | 182 | lex_decision.productionstring(name="no lexeme found", string=""" 183 | =g> 184 | isa goal 185 | state 'retrieval_done' 186 | ?retrieval> 187 | buffer empty 188 | state error 189 | ==> 190 | =g> 
191 | isa goal 192 | state 'done' 193 | +manual> 194 | isa _manual 195 | cmd press_key 196 | key 'F' 197 | """) 198 | 199 | def run_stimulus(word): 200 | """ 201 | Function running one instance of lexical decision for a word. 202 | """ 203 | # reset model state to initial state for a new simulation 204 | # (flush buffers without moving their contents to dec mem) 205 | try: 206 | lex_decision.retrieval.pop() 207 | except KeyError: 208 | pass 209 | try: 210 | lex_decision.goals["g"].pop() 211 | except KeyError: 212 | pass 213 | try: 214 | lex_decision.goals["imaginal"].pop() 215 | except KeyError: 216 | pass 217 | 218 | # reinitialize model 219 | stim = {1: {'text': word, 'position': (320, 180)}} 220 | lex_decision.goals["g"].add(actr.makechunk(nameofchunk='start', 221 | typename="goal", 222 | state='attend')) 223 | lex_decision.goals["imaginal"].add(actr.makechunk(nameofchunk='start', 224 | typename="word")) 225 | lex_decision.goals["imaginal"].delay = 0.2 226 | environment.current_focus = [320,180] 227 | lex_decision.model_parameters['motor_prepared'] = True 228 | 229 | # run new simulation 230 | lex_dec_sim = lex_decision.simulation(realtime=False, gui=False, trace=False, 231 | environment_process=environment.environment_process, 232 | stimuli=stim, triggers='', times=10) 233 | while True: 234 | lex_dec_sim.step() 235 | if lex_dec_sim.current_event.action == "KEY PRESSED: J": 236 | estimated_time = lex_dec_sim.show_time() 237 | break 238 | if lex_dec_sim.current_event.action == "KEY PRESSED: F": 239 | estimated_time = -1 240 | break 241 | return estimated_time 242 | 243 | def run_lex_decision_task(): 244 | """ 245 | Function running a full lexical decision task: 246 | it calls run_stimulus(word) for words from all 16 freq bands. 
247 | """ 248 | sample = [] 249 | for word in ORDERED_FREQ: 250 | sample.append(1000*run_stimulus(word)) 251 | return sample 252 | 253 | @as_op(itypes=[tt.dscalar, tt.dscalar, tt.dscalar, tt.dvector], 254 | otypes=[tt.dvector]) 255 | def actrmodel_latency(lf, le, decay, activation_from_time): 256 | """ 257 | Function running the entire lexical decision task for specific 258 | values of the latency factor, latency exponent and decay parameters. 259 | The activation computed with the specific value of the decay 260 | parameter is also inherited as a separate argument to save expensive 261 | computation time. 262 | The function is wrapped inside the theano @as_op decorator so that 263 | pymc3 / theano can use it as part of the RT likelihood function in the 264 | Bayesian model below. 265 | """ 266 | lex_decision.model_parameters["latency_factor"] = lf 267 | lex_decision.model_parameters["latency_exponent"] = le 268 | lex_decision.model_parameters["decay"] = decay 269 | activation_dict = {x[0]: x[1] 270 | for x in zip(LEMMA_CHUNKS, activation_from_time)} 271 | lex_decision.decmem.activations.update(activation_dict) 272 | sample = run_lex_decision_task() 273 | return np.array(sample) 274 | 275 | lex_decision_with_bayes = pm.Model() 276 | with lex_decision_with_bayes: 277 | # prior for activation 278 | decay = Uniform('decay', lower=0, upper=1) 279 | # priors for accuracy 280 | noise = Uniform('noise', lower=0, upper=5) 281 | threshold = Normal('threshold', mu=0, sd=10) 282 | # priors for latency 283 | lf = HalfNormal('lf', sd=1) 284 | le = HalfNormal('le', sd=1) 285 | # compute activation 286 | scaled_time = time ** (-decay) 287 | def compute_activation(scaled_time_vector): 288 | compare = tt.isinf(scaled_time_vector) 289 | subvector = scaled_time_vector[(1-compare).nonzero()] 290 | activation_from_time = tt.log(subvector.sum()) 291 | return activation_from_time 292 | activation_from_time, _ = theano.scan(fn=compute_activation,\ 293 | sequences=scaled_time) 294 | # 
latency likelihood -- this is where pyactr is used 295 | pyactr_rt = actrmodel_latency(lf, le, decay, activation_from_time) 296 | mu_rt = Deterministic('mu_rt', pyactr_rt) 297 | rt_observed = Normal('rt_observed', mu=mu_rt, sd=0.01, observed=RT) 298 | # accuracy likelihood 299 | odds_reciprocal = tt.exp(-(activation_from_time - threshold)/noise) 300 | mu_prob = Deterministic('mu_prob', 1/(1 + odds_reciprocal)) 301 | prob_observed = Normal('prob_observed', mu=mu_prob, sd=0.01,\ 302 | observed=ACCURACY) 303 | # we start the sampling 304 | #step = Metropolis() 305 | #db = SQLite('lex_dec_pyactr_chain_with_imaginal.sqlite') 306 | #trace = sample(draws=60000, trace=db, njobs=1, step=step, init='auto') 307 | 308 | with lex_decision_with_bayes: 309 | trace = load('./data/lex_dec_pyactr_chain_with_imaginal.sqlite') 310 | trace = trace[10500:] 311 | 312 | mu_rt = pd.DataFrame(trace['mu_rt']) 313 | yerr_rt = [(mu_rt.mean()-mu_rt.quantile(0.025)),\ 314 | (mu_rt.quantile(0.975)-mu_rt.mean())] 315 | 316 | mu_prob = pd.DataFrame(trace['mu_prob']) 317 | yerr_prob = [(mu_prob.mean()-mu_prob.quantile(0.025)),\ 318 | (mu_prob.quantile(0.975)-mu_prob.mean())] 319 | 320 | def generate_lex_dec_pyactr_with_imaginal_figure(): 321 | fig, (ax1, ax2) = plt.subplots(ncols=1, nrows=2) 322 | fig.set_size_inches(6.0, 8.5) 323 | # plot 1: RTs 324 | ax1.errorbar(RT, mu_rt.mean(), yerr=yerr_rt, marker='o', linestyle='') 325 | ax1.plot(np.linspace(500, 800, 10), np.linspace(500, 800, 10),\ 326 | color='red', linestyle=':') 327 | ax1.set_title('Lex. dec. 
model (pyactr, with imaginal, delay 200): RTs') 328 | ax1.set_xlabel('Observed RTs (ms)') 329 | ax1.set_ylabel('Predicted RTs (ms)') 330 | ax1.grid(b=True, which='minor', color='w', linewidth=1.0) 331 | # plot 2: probabilities 332 | ax2.errorbar(ACCURACY, mu_prob.mean(), yerr=yerr_prob, marker='o',\ 333 | linestyle='') 334 | ax2.plot(np.linspace(50, 100, 10)/100,\ 335 | np.linspace(50, 100, 10)/100,\ 336 | color='red', linestyle=':') 337 | ax2.set_title('Lex. dec. model (pyactr, with imaginal, delay 200): Prob.s') 338 | ax2.set_xlabel('Observed probabilities') 339 | ax2.set_ylabel('Predicted probabilities') 340 | ax2.grid(b=True, which='minor', color='w', linewidth=1.0) 341 | # clean up and save 342 | plt.tight_layout(pad=0.5, w_pad=0.2, h_pad=0.7) 343 | plt.savefig('./figures/lex_dec_model_pyactr_with_imaginal.pgf') 344 | plt.savefig('./figures/lex_dec_model_pyactr_with_imaginal.pdf') 345 | 346 | generate_lex_dec_pyactr_with_imaginal_figure() 347 | -------------------------------------------------------------------------------- /tutorials/forbook/code/ch7_lexical_decision_pyactr_with_imaginal_delay_0.py: -------------------------------------------------------------------------------- 1 | """ 2 | A model of lexical decision: Bayes+ACT-R, with imaginal buffer; 3 | no delay for the imaginal buffer (delay of 0 ms) 4 | """ 5 | 6 | import warnings 7 | import sys 8 | 9 | import matplotlib as mpl 10 | mpl.use("pgf") 11 | pgf_with_pdflatex = {"text.usetex": True, "pgf.texsystem": "pdflatex", 12 | "pgf.preamble": [r"\usepackage{mathpazo}", 13 | r"\usepackage[utf8x]{inputenc}", 14 | r"\usepackage[T1]{fontenc}", 15 | r"\usepackage{amsmath}"], 16 | "axes.labelsize": 8, 17 | "font.family": "serif", 18 | "font.serif":["Palatino"], 19 | "font.size": 8, 20 | "legend.fontsize": 8, 21 | "xtick.labelsize": 8, 22 | "ytick.labelsize": 8} 23 | mpl.rcParams.update(pgf_with_pdflatex) 24 | import matplotlib.pyplot as plt 25 | plt.style.use('seaborn') 26 | import seaborn as sns 27 | 
sns.set_style({"font.family":"serif", "font.serif":["Palatino"]}) 28 | 29 | import pandas as pd 30 | import pyactr as actr 31 | import math 32 | from simpy.core import EmptySchedule 33 | import numpy as np 34 | import re 35 | import scipy.stats as stats 36 | import scipy 37 | 38 | import pymc3 as pm 39 | from pymc3 import Gamma, Normal, HalfNormal, Deterministic, Uniform, find_MAP,\ 40 | Slice, sample, summary, Metropolis, traceplot, gelman_rubin 41 | from pymc3.backends.base import merge_traces 42 | from pymc3.backends import SQLite 43 | from pymc3.backends.sqlite import load 44 | import theano 45 | import theano.tensor as tt 46 | from theano.compile.ops import as_op 47 | 48 | warnings.filterwarnings("ignore") 49 | 50 | FREQ = np.array([242, 92.8, 57.7, 40.5, 30.6, 23.4, 19,\ 51 | 16, 13.4, 11.5, 10, 9, 7, 5, 3, 1]) 52 | RT = np.array([542, 555, 566, 562, 570, 569, 577, 587,\ 53 | 592, 605, 603, 575, 620, 607, 622, 674]) 54 | ACCURACY = np.array([97.22, 95.56, 95.56, 96.3, 96.11, 94.26,\ 55 | 95, 92.41, 91.67, 93.52, 91.85, 93.52,\ 56 | 91.48, 90.93, 84.44, 74.63])/100 57 | 58 | environment = actr.Environment(focus_position=(320, 180)) 59 | lex_decision = actr.ACTRModel(environment=environment,\ 60 | subsymbolic=True,\ 61 | automatic_visual_search=True,\ 62 | activation_trace=False,\ 63 | retrieval_threshold=-80,\ 64 | motor_prepared=True, 65 | eye_mvt_scaling_parameter=0.18,\ 66 | emma_noise=False) 67 | 68 | actr.chunktype("goal", "state") 69 | actr.chunktype("word", "form") 70 | 71 | # on average, 15 years of exposure is 112.5 million words 72 | 73 | SEC_IN_YEAR = 365*24*3600 74 | SEC_IN_TIME = 15*SEC_IN_YEAR 75 | 76 | FREQ_DICT = {} 77 | FREQ_DICT['guy'] = 242*112.5 78 | FREQ_DICT['somebody'] = 92*112.5 79 | FREQ_DICT['extend'] = 58*112.5 80 | FREQ_DICT['dance'] = 40.5*112.5 81 | FREQ_DICT['shape'] = 30.6*112.5 82 | FREQ_DICT['besides'] = 23.4*112.5 83 | FREQ_DICT['fit'] = 19*112.5 84 | FREQ_DICT['dedicate'] = 16*112.5 85 | FREQ_DICT['robot'] = 13.4*112.5 86 | 
FREQ_DICT['tile'] = 11.5*112.5 87 | FREQ_DICT['between'] = 10*112.5 88 | FREQ_DICT['precedent'] = 9*112.5 89 | FREQ_DICT['wrestle'] = 7*112.5 90 | FREQ_DICT['resonate'] = 5*112.5 91 | FREQ_DICT['seated'] = 3*112.5 92 | FREQ_DICT['habitually'] = 1*112.5 93 | 94 | ORDERED_FREQ = sorted(list(FREQ_DICT), key=lambda x:FREQ_DICT[x], reverse=True) 95 | 96 | def time_freq(freq): 97 | rehearsals = np.zeros((np.int(np.max(freq) * 113), len(freq))) 98 | for i in np.arange(len(freq)): 99 | temp = np.arange(np.int((freq[i]*112.5))) 100 | temp = temp * np.int(SEC_IN_TIME/(freq[i]*112.5)) 101 | rehearsals[:len(temp),i] = temp 102 | return(rehearsals.T) 103 | 104 | time = theano.shared(time_freq(FREQ), 'time') 105 | 106 | LEMMA_CHUNKS = [(actr.makechunk("", typename="word", form=word)) 107 | for word in ORDERED_FREQ] 108 | lex_decision.set_decmem({x: np.array([]) for x in LEMMA_CHUNKS}) 109 | 110 | lex_decision.goals = {} 111 | lex_decision.set_goal("g") 112 | lex_decision.set_goal("imaginal") 113 | 114 | lex_decision.productionstring(name="attend word", string=""" 115 | =g> 116 | isa goal 117 | state 'attend' 118 | =visual_location> 119 | isa _visuallocation 120 | ?visual> 121 | state free 122 | ==> 123 | =g> 124 | isa goal 125 | state 'encoding' 126 | +visual> 127 | isa _visual 128 | cmd move_attention 129 | screen_pos =visual_location 130 | ~visual_location> 131 | """) 132 | 133 | lex_decision.productionstring(name="encoding word", string=""" 134 | =g> 135 | isa goal 136 | state 'encoding' 137 | =visual> 138 | isa _visual 139 | value =val 140 | ==> 141 | =g> 142 | isa goal 143 | state 'retrieving' 144 | +imaginal> 145 | isa word 146 | form =val 147 | """) 148 | 149 | lex_decision.productionstring(name="retrieving", string=""" 150 | =g> 151 | isa goal 152 | state 'retrieving' 153 | =imaginal> 154 | isa word 155 | form =val 156 | ==> 157 | =g> 158 | isa goal 159 | state 'retrieval_done' 160 | +retrieval> 161 | isa word 162 | form =val 163 | """) 164 | 165 | 
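The `compute_activation` step further down in this file uses theano to implement ACT-R's base-level activation equation, B = ln(Σ_j t_j^(−d)), summing over the rehearsal times produced by `time_freq` above. As a reference point, here is a minimal pure-Python sketch of the same equation (illustrative only — the function name and example times are ours, not part of the tutorial code):

```python
import math

def base_level_activation(rehearsal_times, now, decay=0.5):
    """Base-level activation B = ln(sum_j (now - t_j) ** -decay).

    rehearsal_times are past presentation times in seconds (all < now);
    decay=0.5 is the conventional ACT-R default.
    """
    return math.log(sum((now - t) ** -decay for t in rehearsal_times))

# A word rehearsed more often (equivalently, a higher-frequency word)
# ends up with higher activation:
frequent = base_level_activation([10, 100, 1000, 5000], now=10000)
rare = base_level_activation([10, 5000], now=10000)
assert frequent > rare
```

The theano version in this file does the same computation vectorized over all 16 frequency bands, dropping the `inf` entries that pad the shorter rehearsal schedules.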
lex_decision.productionstring(name="lexeme retrieved", string=""" 166 | =g> 167 | isa goal 168 | state 'retrieval_done' 169 | ?retrieval> 170 | buffer full 171 | state free 172 | ==> 173 | =g> 174 | isa goal 175 | state 'done' 176 | +manual> 177 | isa _manual 178 | cmd press_key 179 | key 'J' 180 | """) 181 | 182 | lex_decision.productionstring(name="no lexeme found", string=""" 183 | =g> 184 | isa goal 185 | state 'retrieval_done' 186 | ?retrieval> 187 | buffer empty 188 | state error 189 | ==> 190 | =g> 191 | isa goal 192 | state 'done' 193 | +manual> 194 | isa _manual 195 | cmd press_key 196 | key 'F' 197 | """) 198 | 199 | def run_stimulus(word): 200 | """ 201 | Function running one instance of lexical decision for a word. 202 | """ 203 | # reset model state to initial state for a new simulation 204 | # (flush buffers without moving their contents to dec mem) 205 | try: 206 | lex_decision.retrieval.pop() 207 | except KeyError: 208 | pass 209 | try: 210 | lex_decision.goals["g"].pop() 211 | except KeyError: 212 | pass 213 | try: 214 | lex_decision.goals["imaginal"].pop() 215 | except KeyError: 216 | pass 217 | 218 | # reinitialize model 219 | stim = {1: {'text': word, 'position': (320, 180)}} 220 | lex_decision.goals["g"].add(actr.makechunk(nameofchunk='start', 221 | typename="goal", 222 | state='attend')) 223 | lex_decision.goals["imaginal"].add(actr.makechunk(nameofchunk='start', 224 | typename="word")) 225 | lex_decision.goals["imaginal"].delay = 0 226 | environment.current_focus = [320,180] 227 | lex_decision.model_parameters['motor_prepared'] = True 228 | 229 | # run new simulation 230 | lex_dec_sim = lex_decision.simulation(realtime=False, gui=False, trace=False, 231 | environment_process=environment.environment_process, 232 | stimuli=stim, triggers='', times=10) 233 | while True: 234 | lex_dec_sim.step() 235 | if lex_dec_sim.current_event.action == "KEY PRESSED: J": 236 | estimated_time = lex_dec_sim.show_time() 237 | break 238 | if 
lex_dec_sim.current_event.action == "KEY PRESSED: F": 239 | estimated_time = -1 240 | break 241 | return estimated_time 242 | 243 | def run_lex_decision_task(): 244 | """ 245 | Function running a full lexical decision task: 246 | it calls run_stimulus(word) for words from all 16 freq bands. 247 | """ 248 | sample = [] 249 | for word in ORDERED_FREQ: 250 | sample.append(1000*run_stimulus(word)) 251 | return sample 252 | 253 | @as_op(itypes=[tt.dscalar, tt.dscalar, tt.dscalar, tt.dvector], 254 | otypes=[tt.dvector]) 255 | def actrmodel_latency(lf, le, decay, activation_from_time): 256 | """ 257 | Function running the entire lexical decision task for specific 258 | values of the latency factor, latency exponent and decay parameters. 259 | The activation computed with the specific value of the decay 260 | parameter is also inherited as a separate argument to save expensive 261 | computation time. 262 | The function is wrapped inside the theano @as_op decorator so that 263 | pymc3 / theano can use it as part of the RT likelihood function in the 264 | Bayesian model below. 
265 | """ 266 | lex_decision.model_parameters["latency_factor"] = lf 267 | lex_decision.model_parameters["latency_exponent"] = le 268 | lex_decision.model_parameters["decay"] = decay 269 | activation_dict = {x[0]: x[1] 270 | for x in zip(LEMMA_CHUNKS, activation_from_time)} 271 | lex_decision.decmem.activations.update(activation_dict) 272 | sample = run_lex_decision_task() 273 | return np.array(sample) 274 | 275 | lex_decision_with_bayes = pm.Model() 276 | with lex_decision_with_bayes: 277 | # prior for activation 278 | decay = Uniform('decay', lower=0, upper=1) 279 | # priors for accuracy 280 | noise = Uniform('noise', lower=0, upper=5) 281 | threshold = Normal('threshold', mu=0, sd=10) 282 | # priors for latency 283 | lf = HalfNormal('lf', sd=1) 284 | le = HalfNormal('le', sd=1) 285 | # compute activation 286 | scaled_time = time ** (-decay) 287 | def compute_activation(scaled_time_vector): 288 | compare = tt.isinf(scaled_time_vector) 289 | subvector = scaled_time_vector[(1-compare).nonzero()] 290 | activation_from_time = tt.log(subvector.sum()) 291 | return activation_from_time 292 | activation_from_time, _ = theano.scan(fn=compute_activation,\ 293 | sequences=scaled_time) 294 | # latency likelihood -- this is where pyactr is used 295 | pyactr_rt = actrmodel_latency(lf, le, decay, activation_from_time) 296 | mu_rt = Deterministic('mu_rt', pyactr_rt) 297 | rt_observed = Normal('rt_observed', mu=mu_rt, sd=0.01, observed=RT) 298 | # accuracy likelihood 299 | odds_reciprocal = tt.exp(-(activation_from_time - threshold)/noise) 300 | mu_prob = Deterministic('mu_prob', 1/(1 + odds_reciprocal)) 301 | prob_observed = Normal('prob_observed', mu=mu_prob, sd=0.01,\ 302 | observed=ACCURACY) 303 | # we start the sampling 304 | #step = Metropolis() 305 | #db = SQLite('lex_dec_pyactr_chain_with_imaginal_delay_0.sqlite') 306 | #trace = sample(draws=60000, trace=db, njobs=1, step=step, init='auto') 307 | 308 | with lex_decision_with_bayes: 309 | trace = 
load('./data/lex_dec_pyactr_chain_with_imaginal_delay_0.sqlite') 310 | trace = trace[10500:] 311 | 312 | mu_rt = pd.DataFrame(trace['mu_rt']) 313 | yerr_rt = [(mu_rt.mean()-mu_rt.quantile(0.025)),\ 314 | (mu_rt.quantile(0.975)-mu_rt.mean())] 315 | 316 | mu_prob = pd.DataFrame(trace['mu_prob']) 317 | yerr_prob = [(mu_prob.mean()-mu_prob.quantile(0.025)),\ 318 | (mu_prob.quantile(0.975)-mu_prob.mean())] 319 | 320 | def generate_lex_dec_pyactr_with_imaginal_delay_0_figure(): 321 | fig, (ax1, ax2) = plt.subplots(ncols=1, nrows=2) 322 | fig.set_size_inches(6.0, 8.5) 323 | # plot 1: RTs 324 | ax1.errorbar(RT, mu_rt.mean(), yerr=yerr_rt, marker='o', linestyle='') 325 | ax1.plot(np.linspace(500, 800, 10), np.linspace(500, 800, 10),\ 326 | color='red', linestyle=':') 327 | ax1.set_title('Lex. dec. model (pyactr, with imaginal, delay 0): RTs') 328 | ax1.set_xlabel('Observed RTs (ms)') 329 | ax1.set_ylabel('Predicted RTs (ms)') 330 | ax1.grid(b=True, which='minor', color='w', linewidth=1.0) 331 | # plot 2: probabilities 332 | ax2.errorbar(ACCURACY, mu_prob.mean(), yerr=yerr_prob, marker='o',\ 333 | linestyle='') 334 | ax2.plot(np.linspace(50, 100, 10)/100,\ 335 | np.linspace(50, 100, 10)/100,\ 336 | color='red', linestyle=':') 337 | ax2.set_title('Lex. dec. 
model (pyactr, with imaginal, delay 0): Prob.s') 338 | ax2.set_xlabel('Observed probabilities') 339 | ax2.set_ylabel('Predicted probabilities') 340 | ax2.grid(b=True, which='minor', color='w', linewidth=1.0) 341 | # clean up and save 342 | plt.tight_layout(pad=0.5, w_pad=0.2, h_pad=0.7) 343 | plt.savefig('./figures/lex_dec_model_pyactr_with_imaginal_delay_0.pgf') 344 | plt.savefig('./figures/lex_dec_model_pyactr_with_imaginal_delay_0.pdf') 345 | 346 | generate_lex_dec_pyactr_with_imaginal_delay_0_figure() 347 | -------------------------------------------------------------------------------- /tutorials/plot_u8_estimating_using_pymc3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/jakdot/pyactr/7f9fb305208ae7cf65b360aed42ae245d9a63567/tutorials/plot_u8_estimating_using_pymc3.png -------------------------------------------------------------------------------- /tutorials/u1_addition.py: -------------------------------------------------------------------------------- 1 | """ 2 | An example of a model using goal and retrieval. It corresponds to 'addition' in ACT-R tutorials, Unit 1. 
3 | """ 4 | 5 | import pyactr as actr 6 | 7 | addition = actr.ACTRModel() 8 | 9 | actr.chunktype("countOrder", ("first", "second")) 10 | 11 | actr.chunktype("add", ("arg1", "arg2", "sum", "count")) 12 | 13 | dm = addition.decmem 14 | 15 | for i in range(0, 11): 16 | dm.add(actr.makechunk("chunk"+str(i), "countOrder", first=i, second=i+1)) 17 | 18 | addition.goal.add(actr.makechunk("", "add", arg1=5, arg2=2)) 19 | 20 | addition.productionstring(name="init_addition", string=""" 21 | =g> 22 | isa add 23 | arg1 =num1 24 | arg2 =num2 25 | sum None 26 | ==> 27 | =g> 28 | isa add 29 | sum =num1 30 | count 0 31 | +retrieval> 32 | isa countOrder 33 | first =num1""") 34 | 35 | addition.productionstring(name="terminate_addition", string=""" 36 | =g> 37 | isa add 38 | count =num 39 | arg2 =num 40 | sum =answer 41 | ==> 42 | ~g>""") 43 | 44 | addition.productionstring(name="increment_count", string=""" 45 | =g> 46 | isa add 47 | count =count 48 | sum =sum 49 | =retrieval> 50 | isa countOrder 51 | first =count 52 | second =newcount 53 | ==> 54 | =g> 55 | isa add 56 | count =newcount 57 | +retrieval> 58 | isa countOrder 59 | first =sum""") 60 | 61 | addition.productionstring(name="increment_sum", string=""" 62 | =g> 63 | isa add 64 | count =count 65 | arg2 ~=count 66 | sum =sum 67 | =retrieval> 68 | isa countOrder 69 | first =sum 70 | second =newsum 71 | ==> 72 | =g> 73 | isa add 74 | sum =newsum 75 | +retrieval> 76 | isa countOrder 77 | first =count""") 78 | 79 | if __name__ == "__main__": 80 | x = addition.simulation() 81 | x.run() 82 | -------------------------------------------------------------------------------- /tutorials/u1_count.py: -------------------------------------------------------------------------------- 1 | """ 2 | An example of a model using retrieval and goal buffers. It corresponds to the simplest model in ACT-R tutorials, Unit 1, 'count'. 
3 | """ 4 | 5 | import pyactr as actr 6 | 7 | counting = actr.ACTRModel() 8 | 9 | #Each chunk type should be defined first. 10 | actr.chunktype("countOrder", ("first", "second")) 11 | #Chunk type is defined as (name, attributes) 12 | 13 | #Attributes are written as an iterable (above) or as a string, separated by comma: 14 | actr.chunktype("countOrder", "first, second") 15 | 16 | dm = counting.decmem 17 | #this creates declarative memory 18 | 19 | dm.add(actr.chunkstring(string="\ 20 | isa countOrder\ 21 | first 1\ 22 | second 2")) 23 | dm.add(actr.chunkstring(string="\ 24 | isa countOrder\ 25 | first 2\ 26 | second 3")) 27 | dm.add(actr.chunkstring(string="\ 28 | isa countOrder\ 29 | first 3\ 30 | second 4")) 31 | dm.add(actr.chunkstring(string="\ 32 | isa countOrder\ 33 | first 4\ 34 | second 5")) 35 | 36 | #creating goal buffer 37 | actr.chunktype("countFrom", ("start", "end", "count")) 38 | 39 | #production rules follow; using productionstring, they are similar to Lisp ACT-R 40 | 41 | counting.productionstring(name="start", string=""" 42 | =g> 43 | isa countFrom 44 | start =x 45 | count None 46 | ==> 47 | =g> 48 | isa countFrom 49 | count =x 50 | +retrieval> 51 | isa countOrder 52 | first =x""") 53 | 54 | counting.productionstring(name="increment", string=""" 55 | =g> 56 | isa countFrom 57 | count =x 58 | end ~=x 59 | =retrieval> 60 | isa countOrder 61 | first =x 62 | second =y 63 | ==> 64 | =g> 65 | isa countFrom 66 | count =y 67 | +retrieval> 68 | isa countOrder 69 | first =y""") 70 | 71 | counting.productionstring(name="stop", string=""" 72 | =g> 73 | isa countFrom 74 | count =x 75 | end =x 76 | ==> 77 | ~g>""") 78 | 79 | #adding stuff to goal buffer 80 | counting.goal.add(actr.chunkstring(string="isa countFrom start 2 end 4")) 81 | 82 | if __name__ == "__main__": 83 | x = counting.simulation() 84 | x.run() 85 | 86 | -------------------------------------------------------------------------------- /tutorials/u1_semantic.py: 
-------------------------------------------------------------------------------- 1 | """ 2 | The most complex model in unit 1 of ACT-R tutorials, 'semantic'. 3 | """ 4 | 5 | import pyactr as actr 6 | 7 | semantic = actr.ACTRModel() 8 | 9 | actr.chunktype("property", ("object", "attribute", "value")) 10 | 11 | actr.chunktype("isMember", ("object", "category", "judgment")) 12 | 13 | chunk_dict = {} 14 | chunk_dict['shark'] = actr.makechunk(nameofchunk='shark', typename="elem", elem="shark") 15 | chunk_dict['dangerous'] = actr.makechunk(nameofchunk='dangerous', typename="elem", elem="dangerous") 16 | chunk_dict['locomotion'] = actr.makechunk(nameofchunk='locomotion', typename="elem", elem="locomotion") 17 | chunk_dict['swimming'] = actr.makechunk(nameofchunk='swimming', typename="elem", elem="swimming") 18 | chunk_dict['fish'] = actr.makechunk(nameofchunk='fish', typename="elem", elem="fish") 19 | chunk_dict['salmon'] = actr.makechunk(nameofchunk='salmon', typename="elem", elem="salmon") 20 | chunk_dict['edible'] = actr.makechunk(nameofchunk='edible', typename="elem", elem="edible") 21 | chunk_dict['breathe'] = actr.makechunk(nameofchunk='breathe', typename="elem", elem="breathe") 22 | chunk_dict['gills'] = actr.makechunk(nameofchunk='gills', typename="elem", elem="gills") 23 | chunk_dict['animal'] = actr.makechunk(nameofchunk='animal', typename="elem", elem="animal") 24 | chunk_dict['moves'] = actr.makechunk(nameofchunk='moves', typename="elem", elem="moves") 25 | chunk_dict['skin'] = actr.makechunk(nameofchunk='skin', typename="elem", elem="skin") 26 | chunk_dict['canary'] = actr.makechunk(nameofchunk='canary', typename="elem", elem="canary") 27 | chunk_dict['color'] = actr.makechunk(nameofchunk='color', typename="elem", elem="color") 28 | chunk_dict['sings'] = actr.makechunk(nameofchunk='sings', typename="elem", elem="sings") 29 | chunk_dict['bird'] = actr.makechunk(nameofchunk='bird', typename="elem", elem="bird") 30 | chunk_dict['ostrich'] = 
actr.makechunk(nameofchunk='ostrich', typename="elem", elem="ostrich") 31 | chunk_dict['flies'] = actr.makechunk(nameofchunk='flies', typename="elem", elem="flies") 32 | chunk_dict['category'] = actr.makechunk(nameofchunk='category', typename="elem", elem="category") 33 | chunk_dict['height'] = actr.makechunk(nameofchunk='height', typename="elem", elem="height") 34 | chunk_dict['tall'] = actr.makechunk(nameofchunk='tall', typename="elem", elem="tall") 35 | chunk_dict['wings'] = actr.makechunk(nameofchunk='wings', typename="elem", elem="wings") 36 | chunk_dict['flying'] = actr.makechunk(nameofchunk='flying', typename="elem", elem="flying") 37 | chunk_dict['yellow'] = actr.makechunk(nameofchunk='yellow', typename="elem", elem="yellow") 38 | chunk_dict['true'] = actr.makechunk(nameofchunk='true', typename="tv", value="true") 39 | chunk_dict['false'] = actr.makechunk(nameofchunk='false', typename="tv", value="false") 40 | 41 | dm = semantic.decmem 42 | 43 | dm.add(set(chunk_dict.values())) 44 | 45 | dm.add(actr.makechunk(typename="property", object=chunk_dict['shark'], attribute=chunk_dict['dangerous'], value=chunk_dict['true'])) 46 | dm.add(actr.makechunk(typename="property", object=chunk_dict['shark'], attribute=chunk_dict['locomotion'], value=chunk_dict['swimming'])) 47 | dm.add(actr.makechunk(typename="property", object=chunk_dict['shark'], attribute=chunk_dict['category'], value=chunk_dict['fish'])) 48 | dm.add(actr.makechunk(typename="property", object=chunk_dict['salmon'], attribute=chunk_dict['edible'], value=chunk_dict['true'])) 49 | dm.add(actr.makechunk(typename="property", object=chunk_dict['salmon'], attribute=chunk_dict['locomotion'], value=chunk_dict['swimming'])) 50 | dm.add(actr.makechunk(typename="property", object=chunk_dict['salmon'], attribute=chunk_dict['category'], value=chunk_dict['fish'])) 51 | dm.add(actr.makechunk(typename="property", object=chunk_dict['fish'], attribute=chunk_dict['breathe'], value=chunk_dict['gills'])) 52 | 
dm.add(actr.makechunk(typename="property", object=chunk_dict['fish'], attribute=chunk_dict['locomotion'], value=chunk_dict['swimming'])) 53 | dm.add(actr.makechunk(typename="property", object=chunk_dict['fish'], attribute=chunk_dict['category'], value=chunk_dict['animal'])) 54 | dm.add(actr.makechunk(typename="property", object=chunk_dict['animal'], attribute=chunk_dict['moves'], value=chunk_dict['true'])) 55 | dm.add(actr.makechunk(typename="property", object=chunk_dict['animal'], attribute=chunk_dict['skin'], value=chunk_dict['true'])) 56 | dm.add(actr.makechunk(typename="property", object=chunk_dict['canary'], attribute=chunk_dict['color'], value=chunk_dict['yellow'])) 57 | dm.add(actr.makechunk(typename="property", object=chunk_dict['canary'], attribute=chunk_dict['sings'], value=chunk_dict['true'])) 58 | dm.add(actr.makechunk(typename="property", object=chunk_dict['canary'], attribute=chunk_dict['category'], value=chunk_dict['bird'])) 59 | dm.add(actr.makechunk(typename="property", object=chunk_dict['ostrich'], attribute=chunk_dict['flies'], value=chunk_dict['false'])) 60 | dm.add(actr.makechunk(typename="property", object=chunk_dict['ostrich'], attribute=chunk_dict['height'], value=chunk_dict['tall'])) 61 | dm.add(actr.makechunk(typename="property", object=chunk_dict['ostrich'], attribute=chunk_dict['category'], value=chunk_dict['bird'])) 62 | dm.add(actr.makechunk(typename="property", object=chunk_dict['bird'], attribute=chunk_dict['wings'], value=chunk_dict['true'])) 63 | dm.add(actr.makechunk(typename="property", object=chunk_dict['bird'], attribute=chunk_dict['locomotion'], value=chunk_dict['flying'])) 64 | dm.add(actr.makechunk(typename="property", object=chunk_dict['bird'], attribute=chunk_dict['category'], value=chunk_dict['animal'])) 65 | 66 | actr.chunktype("isMember", ("object", "category", "judgment")) 67 | 68 | #you can vary what will appear in goal buffer 69 | 70 | #semantic.goal.add(actr.makechunk(typename="isMember", 
object=chunk_dict['canary'], category=chunk_dict['bird'])) 71 | semantic.goal.add(actr.makechunk(typename="isMember", object=chunk_dict['canary'], category=chunk_dict['animal'])) 72 | #semantic.goal.add(actr.makechunk(typename="isMember", object=chunk_dict['canary'], category=chunk_dict['fish'])) 73 | 74 | semantic.productionstring(name="initialRetrieve", string=""" 75 | =g> 76 | isa isMember 77 | object =obj 78 | category =cat 79 | judgment None 80 | ==> 81 | =g> 82 | isa isMember 83 | judgment 'pending' 84 | +retrieval> 85 | isa property 86 | object =obj 87 | attribute category""") 88 | 89 | semantic.productionstring(name="directVerify", string=""" 90 | =g> 91 | isa isMember 92 | object =obj 93 | category =cat 94 | judgment 'pending' 95 | =retrieval> 96 | isa property 97 | object =obj 98 | attribute category 99 | value =cat 100 | ==> 101 | =g> 102 | isa isMember 103 | judgment 'yes'""") 104 | 105 | semantic.productionstring(name="chainCategory", string=""" 106 | =g> 107 | isa isMember 108 | object =obj1 109 | category =cat 110 | judgment 'pending' 111 | =retrieval> 112 | isa property 113 | object =obj1 114 | attribute category 115 | value =obj2 116 | value ~=cat 117 | ==> 118 | =g> 119 | isa isMember 120 | object =obj2 121 | +retrieval> 122 | isa property 123 | object =obj2 124 | attribute category""") 125 | 126 | semantic.productionstring(name="fail", string=""" 127 | =g> 128 | isa isMember 129 | object =obj1 130 | category =cat 131 | judgment 'pending' 132 | ?retrieval> 133 | state error 134 | ==> 135 | =g> 136 | isa isMember 137 | judgment 'no'""") 138 | 139 | if __name__ == "__main__": 140 | x = semantic.simulation() 141 | x.run(1) 142 | print(semantic.goal.pop()) 143 | 144 | -------------------------------------------------------------------------------- /tutorials/u2_demo.py: -------------------------------------------------------------------------------- 1 | """ 2 | Demo - pressing a key by ACT-R model. It corresponds to 'demo2' in Lisp ACT-R, unit 2. 
3 | """ 4 | 5 | import string 6 | import random 7 | 8 | import pyactr as actr 9 | 10 | stimulus = random.sample(string.ascii_uppercase, 1)[0] 11 | text = {1: {'text': stimulus, 'position': (100,100)}} 12 | environ = actr.Environment(focus_position=(100,100)) 13 | 14 | m = actr.ACTRModel(environment=environ, motor_prepared=True) 15 | 16 | actr.chunktype("chunk", "value") 17 | actr.chunktype("read", "state") 18 | actr.chunktype("image", "img") 19 | actr.makechunk(nameofchunk="start", typename="chunk", value="start") 20 | actr.makechunk(nameofchunk="start", typename="chunk", value="start") 21 | actr.makechunk(nameofchunk="attend_let", typename="chunk", value="attend_let") 22 | actr.makechunk(nameofchunk="response", typename="chunk", value="response") 23 | actr.makechunk(nameofchunk="done", typename="chunk", value="done") 24 | m.goal.add(actr.chunkstring(name="reading", string=""" 25 | isa read 26 | state start""")) 27 | g2 = m.set_goal("g2") 28 | g2.delay = 0.2 29 | 30 | t2 = m.productionstring(name="encode_letter", string=""" 31 | =g> 32 | isa read 33 | state start 34 | =visual> 35 | isa _visual 36 | value =letter 37 | ==> 38 | =g> 39 | isa read 40 | state response 41 | +g2> 42 | isa image 43 | img =letter""") 44 | 45 | m.productionstring(name="respond", string=""" 46 | =g> 47 | isa read 48 | state response 49 | =g2> 50 | isa image 51 | img =letter 52 | ?manual> 53 | state free 54 | ==> 55 | =g> 56 | isa read 57 | state done 58 | +manual> 59 | isa _manual 60 | cmd 'press_key' 61 | key =letter""") 62 | 63 | if __name__ == "__main__": 64 | 65 | sim = m.simulation(realtime=True, environment_process=environ.environment_process, stimuli=text, triggers=stimulus, times=1) 66 | sim.run(1) 67 | -------------------------------------------------------------------------------- /tutorials/u3_multiple_objects.py: -------------------------------------------------------------------------------- 1 | """ 2 | A model for extended visual interface: multiple objects at the same time, 
vision checks them all and stores them. 3 | """ 4 | 5 | import string 6 | import random 7 | 8 | import pyactr as actr 9 | 10 | 11 | class Model: 12 | """ 13 | Model searching and attending to various stimuli. 14 | """ 15 | 16 | def __init__(self, env, **kwargs): 17 | self.m = actr.ACTRModel(environment=env, **kwargs) 18 | 19 | actr.chunktype("pair", "probe answer") 20 | 21 | actr.chunktype("goal", "state") 22 | 23 | self.dm = self.m.decmem 24 | 25 | self.m.visualBuffer("visual", "visual_location", self.dm, finst=30) 26 | 27 | start = actr.makechunk(nameofchunk="start", typename="chunk", value="start") 28 | actr.makechunk(nameofchunk="attending", typename="chunk", value="attending") 29 | actr.makechunk(nameofchunk="done", typename="chunk", value="done") 30 | self.m.goal.add(actr.makechunk(typename="read", state=start)) 31 | self.m.set_goal("g2") 32 | self.m.goals["g2"].delay=0.2 33 | 34 | self.m.productionstring(name="find_probe", string=""" 35 | =g> 36 | isa goal 37 | state start 38 | ?visual_location> 39 | buffer empty 40 | ==> 41 | =g> 42 | isa goal 43 | state attend 44 | ?visual_location> 45 | attended False 46 | +visual_location> 47 | isa _visuallocation 48 | screen_x closest""") #this rule is used if automatic visual search does not put anything in the buffer 49 | 50 | self.m.productionstring(name="check_probe", string=""" 51 | =g> 52 | isa goal 53 | state start 54 | ?visual_location> 55 | buffer full 56 | ==> 57 | =g> 58 | isa goal 59 | state attend""") #this rule is used if automatic visual search is enabled and it puts something in the buffer 60 | 61 | self.m.productionstring(name="attend_probe", string=""" 62 | =g> 63 | isa goal 64 | state attend 65 | =visual_location> 66 | isa _visuallocation 67 | ?visual> 68 | state free 69 | ==> 70 | =g> 71 | isa goal 72 | state reading 73 | +visual> 74 | isa _visual 75 | cmd move_attention 76 | screen_pos =visual_location 77 | ~visual_location>""") 78 | 79 | 80 | 
self.m.productionstring(name="encode_probe_and_find_new_location", string=""" 81 | =g> 82 | isa goal 83 | state reading 84 | =visual> 85 | isa _visual 86 | value =val 87 | ?visual_location> 88 | buffer empty 89 | ==> 90 | =g> 91 | isa goal 92 | state attend 93 | ~visual> 94 | ?visual_location> 95 | attended False 96 | +visual_location> 97 | isa _visuallocation 98 | screen_x closest""") 99 | 100 | 101 | if __name__ == "__main__": 102 | stim_d = {key: {'text': x, 'position': (random.randint(10,630), random.randint(10, 310)), 'vis_delay': 10} for key, x in enumerate(string.ascii_uppercase)} 103 | #stim_d = {key: {'text': x, 'position': (random.randint(10,630), random.randint(10, 310))} for key, x in enumerate(string.ascii_uppercase)} 104 | print(stim_d) 105 | #text = [{1: {'text': 'X', 'position': (10, 10)}, 2: {'text': 'Y', 'position': (10, 20)}, 3:{'text': 'Z', 'position': (10, 30)}},{1: {'text': 'A', 'position': (50, 10)}, 2: {'text': 'B', 'position': (50, 180)}, 3:{'text': 'C', 'position': (400, 180)}}] 106 | environ = actr.Environment(focus_position=(0,0)) 107 | m = Model(environ, subsymbolic=True, latency_factor=0.4, decay=0.5, retrieval_threshold=-2, instantaneous_noise=0, automatic_visual_search=True, eye_mvt_scaling_parameter=0.05, eye_mvt_angle_parameter=10) #If you don't want to use the EMMA model, specify emma=False in here 108 | sim = m.m.simulation(realtime=True, trace=True, gui=True, environment_process=environ.environment_process, stimuli=stim_d, triggers='X', times=50) 109 | sim.run(10) 110 | check = 0 111 | for key in m.dm: 112 | if key.typename == '_visual': 113 | print(key, m.dm[key]) 114 | check += 1 115 | print(check) 116 | print(len(stim_d)) 117 | -------------------------------------------------------------------------------- /tutorials/u4_paired.py: -------------------------------------------------------------------------------- 1 | """ 2 | Pairing a word to a number, can be run repeatedly. It corresponds to 'paired' in Lisp ACT-R, unit 4. 
3 | """ 4 | 5 | import random 6 | 7 | import pyactr as actr 8 | 9 | 10 | class Model: 11 | """ 12 | Model pressing the right key. 13 | """ 14 | 15 | def __init__(self, env, **kwargs): 16 | self.m = actr.ACTRModel(environment=env, **kwargs) 17 | 18 | actr.chunktype("pair", "probe answer") 19 | 20 | actr.chunktype("goal", "state") 21 | 22 | self.dm = self.m.decmem 23 | 24 | start = actr.makechunk(nameofchunk="start", typename="chunk", value="start") 25 | actr.makechunk(nameofchunk="attending", typename="chunk", value="attending") 26 | actr.makechunk(nameofchunk="testing", typename="chunk", value="testing") 27 | actr.makechunk(nameofchunk="response", typename="chunk", value="response") 28 | actr.makechunk(nameofchunk="study", typename="chunk", value="study") 29 | actr.makechunk(nameofchunk="attending_target", typename="chunk", value="attending_target") 30 | actr.makechunk(nameofchunk="done", typename="chunk", value="done") 31 | self.m.goal.add(actr.makechunk(typename="read", state=start)) 32 | self.m.set_goal("g2") 33 | self.m.goals["g2"].delay=0.2 34 | 35 | self.m.productionstring(name="find_probe", string=""" 36 | =g> 37 | isa goal 38 | state start 39 | ?visual_location> 40 | buffer empty 41 | ==> 42 | =g> 43 | isa goal 44 | state attend 45 | ?visual_location> 46 | attended False 47 | +visual_location> 48 | isa _visuallocation 49 | screen_x 320""") 50 | 51 | self.m.productionstring(name="attend_probe", string=""" 52 | =g> 53 | isa goal 54 | state attend 55 | =visual_location> 56 | isa _visuallocation 57 | ?visual> 58 | state free 59 | ==> 60 | =g> 61 | isa goal 62 | state reading 63 | =visual_location> 64 | isa _visuallocation 65 | +visual> 66 | cmd move_attention 67 | screen_pos =visual_location""") 68 | 69 | self.m.productionstring(name="read_probe", string=""" 70 | =g> 71 | isa goal 72 | state reading 73 | =visual> 74 | isa _visual 75 | value =word 76 | ==> 77 | =g> 78 | isa goal 79 | state testing 80 | +g2> 81 | isa pair 82 | probe =word 83 | =visual> 84 | isa 
_visual 85 | +retrieval> 86 | isa pair 87 | probe =word""") 88 | 89 | self.m.productionstring(name="recall", string=""" 90 | =g> 91 | isa goal 92 | state testing 93 | =retrieval> 94 | isa pair 95 | answer =ans 96 | ?manual> 97 | state free 98 | ?visual> 99 | state free 100 | ==> 101 | +manual> 102 | isa _manual 103 | cmd 'press_key' 104 | key =ans 105 | =g> 106 | isa goal 107 | state study 108 | ~visual>""") 109 | 110 | self.m.productionstring(name="cannot_recall", string=""" 111 | =g> 112 | isa goal 113 | state testing 114 | ?retrieval> 115 | state error 116 | ?visual> 117 | state free 118 | ==> 119 | =g> 120 | isa goal 121 | state attending_target 122 | ~visual>""") 123 | 124 | self.m.productionstring(name="associate", string=""" 125 | =g> 126 | isa goal 127 | state attending_target 128 | =visual> 129 | isa _visual 130 | value =val 131 | =g2> 132 | isa pair 133 | probe =word 134 | ?visual> 135 | state free 136 | ==> 137 | =g> 138 | isa goal 139 | state reading 140 | ~visual> 141 | =g2> 142 | isa pair 143 | answer =val 144 | ~g2>""") 145 | 146 | if __name__ == "__main__": 147 | text = {"bank": "0", "card": "1", "dart": "2", "face": "3", "game": "4", 148 | "hand": "5", "jack": "6", "king": "7", "lamb": "8", "mask": "9", 149 | "neck": "0", "pipe": "1", "quip": "2", "rope": "3", "sock": "4", 150 | "tent": "5", "vent": "6", "wall": "7", "xray": "8", "zinc": "9"} 151 | 152 | used_stim = {key: text[key] for key in random.sample(list(text), 1)} 153 | text = [] 154 | for x in zip(used_stim.keys(), used_stim.values()): 155 | text.append({1: {'text': x[0], 'position': (320, 180)}}) 156 | text.append({1: {'text': x[1], 'position': (320, 180)}}) 157 | print(text) 158 | trigger = list(used_stim.values()) 159 | environ = actr.Environment( focus_position=(0, 0)) 160 | m = Model(environ, subsymbolic=True, latency_factor=0.4, decay=0.5, retrieval_threshold=-2, instantaneous_noise=0, strict_harvesting=True, automatic_visual_search=False) 161 | sim = m.m.simulation(realtime=True,
trace=True, gui=True, environment_process=environ.environment_process, stimuli=2*text, triggers=4*trigger,times=5) 162 | sim.run(12) 163 | print(m.dm) 164 | 165 | -------------------------------------------------------------------------------- /tutorials/u5_fan.py: -------------------------------------------------------------------------------- 1 | """ 2 | The fan experiment from unit 5 of Lisp ACT-R. 3 | """ 4 | 5 | import warnings 6 | 7 | import pyactr as actr 8 | 9 | class Model: 10 | """ 11 | Model for fan experiment. We will abstract away from environment, key presses and visual module (the same is done in the abstract model of Lisp ACT-R). 12 | """ 13 | 14 | def __init__(self, person, location, **kwargs): 15 | self.model = actr.ACTRModel(environment=None, **kwargs) 16 | 17 | actr.chunktype("comprehend", "relation arg1 arg2") 18 | actr.chunktype("meaning", "word") 19 | 20 | dict_dm = {} 21 | words = "hippie bank fireman lawyer guard beach castle dungeon earl forest giant park church captain cave debutante store in".split() 22 | 23 | for word in words: 24 | dict_dm[word] = actr.makechunk(nameofchunk=word, typename="meaning", word=word) 25 | 26 | for idx, word in enumerate("park church bank".split(), start=1): 27 | dict_dm[idx] = actr.makechunk(nameofchunk=idx, typename="comprehend", relation=dict_dm["in"], arg1=dict_dm["hippie"], arg2=dict_dm[word]) 28 | 29 | for idx, word in enumerate("park cave".split(), start=4): 30 | dict_dm[idx] = actr.makechunk(nameofchunk=idx, typename="comprehend", relation=dict_dm["in"], arg1=dict_dm["captain"], arg2=dict_dm[word]) 31 | 32 | dict_dm[6] = actr.makechunk(nameofchunk=6, typename="comprehend", relation=dict_dm["in"], arg1=dict_dm["debutante"], arg2=dict_dm["bank"]) 33 | dict_dm[7] = actr.makechunk(nameofchunk=7, typename="comprehend", relation=dict_dm["in"], arg1=dict_dm["fireman"], arg2=dict_dm["park"]) 34 | 35 | for idx, word in enumerate("beach castle dungeon".split(), start=8): 36 | dict_dm[idx] = 
actr.makechunk(nameofchunk=idx, typename="comprehend", relation=dict_dm["in"], arg1=dict_dm["giant"], arg2=dict_dm[word]) 37 | 38 | for idx, word in enumerate("castle forest".split(), start=11): 39 | dict_dm[idx] = actr.makechunk(nameofchunk=idx, typename="comprehend", relation=dict_dm["in"], arg1=dict_dm["earl"], arg2=dict_dm[word]) 40 | dict_dm[13] = actr.makechunk(nameofchunk=13, typename="comprehend", relation=dict_dm["in"], arg1=dict_dm["lawyer"], arg2=dict_dm["store"]) 41 | 42 | self.model.set_decmem(set(dict_dm.values())) 43 | self.dm = self.model.decmem 44 | 45 | self.harvest_person = actr.makechunk(nameofchunk="harvest_person", typename="chunk", value="harvest_person") 46 | self.harvest_location = actr.makechunk(nameofchunk="harvest_location", typename="chunk", value="harvest_location") 47 | self.test = actr.makechunk(nameofchunk="test", typename="chunk", value="test") 48 | self.get_retrieval = actr.makechunk(nameofchunk="get_retrieval", typename="chunk", value="get_retrieval") 49 | 50 | actr.chunktype("sentence_goal", "arg1 arg2 state") 51 | self.model.goal.add(actr.makechunk(typename="sentence_goal", arg1=person, arg2=location, state=self.test)) 52 | 53 | self.model.productionstring(name="start", string=""" 54 | =g> 55 | isa sentence_goal 56 | arg1 =person 57 | state test 58 | ==> 59 | =g> 60 | isa sentence_goal 61 | state harvest_person 62 | +retrieval> 63 | isa meaning 64 | word =person""") 65 | 66 | self.model.productionstring(name="harvesting_person", string=""" 67 | =g> 68 | isa sentence_goal 69 | arg2 =location 70 | state harvest_person 71 | =retrieval> 72 | ==> 73 | =g> 74 | isa sentence_goal 75 | state harvest_location 76 | arg1 =retrieval 77 | +retrieval> 78 | isa meaning 79 | word =location""") 80 | 81 | self.model.productionstring(name="harvesting_location", string=""" 82 | =g> 83 | isa sentence_goal 84 | state harvest_location 85 | =retrieval> 86 | ?retrieval> 87 | state free 88 | ==> 89 | =g> 90 | isa sentence_goal 91 | state get_retrieval
92 | arg2 =retrieval""") 93 | 94 | self.model.productionstring(name="retrieve_from_person", string=""" 95 | =g> 96 | isa sentence_goal 97 | state get_retrieval 98 | arg1 =person 99 | ==> 100 | =g> 101 | isa sentence_goal 102 | state None 103 | +retrieval> 104 | isa comprehend 105 | arg1 =person""") 106 | 107 | self.model.productionstring(name="retrieve_from_location", string=""" 108 | =g> 109 | isa sentence_goal 110 | state get_retrieval 111 | arg2 =location 112 | ==> 113 | =g> 114 | isa sentence_goal 115 | state None 116 | +retrieval> 117 | isa comprehend 118 | arg2 =location""") 119 | 120 | self.model.productionstring(name="respond_yes", string=""" 121 | =g> 122 | isa sentence_goal 123 | state None 124 | arg1 =person 125 | arg2 =location 126 | =retrieval> 127 | isa comprehend 128 | arg1 =person 129 | arg2 =location 130 | ==> 131 | =g> 132 | isa sentence_goal 133 | state 'k'""") 134 | 135 | self.model.productionstring(name="mismatch_person_no", string=""" 136 | =g> 137 | isa sentence_goal 138 | state None 139 | arg1 =person 140 | arg2 =location 141 | =retrieval> 142 | isa comprehend 143 | arg1 ~=person 144 | ==> 145 | =g> 146 | isa sentence_goal 147 | state 'd'""") 148 | 149 | t3= self.model.productionstring(name="mismatch_location_no", string=""" 150 | =g> 151 | isa sentence_goal 152 | state None 153 | arg1 =person 154 | arg2 =location 155 | =retrieval> 156 | isa comprehend 157 | arg2 ~=location 158 | ==> 159 | =g> 160 | isa sentence_goal 161 | state 'd'""") 162 | 163 | if __name__ == "__main__": 164 | warnings.simplefilter("ignore") 165 | m = Model("hippie", "bank", subsymbolic=True, latency_factor=0.63, strength_of_association=1.6, buffer_spreading_activation={"g":1}, activation_trace=True, strict_harvesting=True) 166 | sim = m.model.simulation(realtime=True) 167 | sim.run(2) 168 | -------------------------------------------------------------------------------- /tutorials/u5_grouped.py: 
-------------------------------------------------------------------------------- 1 | """ 2 | Recalling numbers. It corresponds to 'grouped' in Lisp ACT-R, unit 5. 3 | """ 4 | 5 | import warnings 6 | 7 | import pyactr as actr 8 | 9 | class Model: 10 | """ 11 | Model 'grouped'. 12 | """ 13 | 14 | def __init__(self, **kwargs): 15 | self.model = actr.ACTRModel(**kwargs) 16 | 17 | actr.chunktype("recall_list", "group element list group_position") 18 | 19 | actr.chunktype("group", "id parent position") 20 | 21 | actr.chunktype("item", "name group position") 22 | 23 | li = actr.makechunk(typename="chunk", value="list") 24 | 25 | self.dictchunks = {1: actr.makechunk(nameofchunk="p1", typename="chunk", value="first"), 2: actr.makechunk(nameofchunk="p2", typename="chunk", value="second"), 3: actr.makechunk(nameofchunk="p3", typename="chunk", value="third"), 4: actr.makechunk(nameofchunk="p4", typename="chunk", value="fourth")} 26 | group1 = actr.makechunk(nameofchunk="group1", typename="group", parent=li, position=self.dictchunks[1], id="group1") 27 | group2 = actr.makechunk(nameofchunk="group2", typename="group", parent=li, position=self.dictchunks[2], id="group2") 28 | group3 = actr.makechunk(nameofchunk="group3", typename="group", parent=li, position=self.dictchunks[3], id="group3") 29 | 30 | self.model.set_decmem(set(self.dictchunks.values())) 31 | self.dm = self.model.decmem 32 | self.dm.add(set([group1, group2, group3])) 33 | self.dm.add(li) 34 | 35 | self.model.set_similarities(self.dictchunks[1], self.dictchunks[2], -0.5) 36 | self.model.set_similarities(self.dictchunks[2], self.dictchunks[3], -0.5) 37 | 38 | for n in range(1,4): 39 | self.dm.add(actr.makechunk(typename="item", name=n, group=group1, position=self.dictchunks[n])) 40 | 41 | for n in range(4,7): 42 | self.dm.add(actr.makechunk(typename="item", name=n, group=group2, position=self.dictchunks[(n+1)%4])) 43 | 44 | for n in range(7,10): 45 | self.dm.add(actr.makechunk(typename="item", name=n, group=group3, 
position=self.dictchunks[(n+1)%7])) 46 | 47 | self.model.retrieval.finst = 15 48 | 49 | self.model.goal.add(actr.makechunk(typename="recall_list", list=li)) 50 | 51 | self.model.productionstring(name="recall_first_group", string=""" 52 | =g> 53 | isa recall_list 54 | list =l 55 | ?retrieval> 56 | buffer empty 57 | state free 58 | ==> 59 | =g> 60 | isa recall_list 61 | group_position p1 62 | +retrieval> 63 | isa group 64 | parent =l 65 | position p1""") 66 | 67 | self.model.productionstring(name="start_recall_of_group", string=""" 68 | =g> 69 | isa recall_list 70 | list =l 71 | =retrieval> 72 | isa group 73 | id =sth 74 | ?retrieval> 75 | state free 76 | ==> 77 | =g> 78 | isa recall_list 79 | group =retrieval 80 | element p1 81 | ?retrieval> 82 | recently_retrieved False 83 | +retrieval> 84 | isa item 85 | group =retrieval 86 | position p1""") 87 | 88 | self.model.productionstring(name="harvest_first_item", string=""" 89 | =g> 90 | isa recall_list 91 | element p1 92 | group =group 93 | =retrieval> 94 | isa item 95 | name =name 96 | ?retrieval> 97 | state free 98 | ==> 99 | =g> 100 | isa recall_list 101 | element p2 102 | ?retrieval> 103 | recently_retrieved False 104 | +retrieval> 105 | isa item 106 | group =group 107 | position p2""") 108 | 109 | self.model.productionstring(name="harvest_second_item", string=""" 110 | =g> 111 | isa recall_list 112 | element p2 113 | group =group 114 | =retrieval> 115 | isa item 116 | name =name 117 | ?retrieval> 118 | state free 119 | ==> 120 | =g> 121 | isa recall_list 122 | element p3 123 | ?retrieval> 124 | recently_retrieved False 125 | +retrieval> 126 | isa item 127 | group =group 128 | position p3""") 129 | 130 | self.model.productionstring(name="harvest_third_item", string=""" 131 | =g> 132 | isa recall_list 133 | element p3 134 | group =group 135 | =retrieval> 136 | isa item 137 | name =name 138 | ?retrieval> 139 | state free 140 | ==> 141 | =g> 142 | isa recall_list 143 | element p4 144 | ?retrieval> 145 | 
recently_retrieved False 146 | +retrieval> 147 | isa item 148 | group =group 149 | position p4""") 150 | 151 | self.model.productionstring(name="recall_second_group", string=""" 152 | =g> 153 | isa recall_list 154 | group_position p1 155 | list =l 156 | ?retrieval> 157 | state error 158 | ==> 159 | =g> 160 | isa recall_list 161 | group_position p2 162 | +retrieval> 163 | isa group 164 | parent =l 165 | position p2""") 166 | 167 | self.model.productionstring(name="recall_third_group", string=""" 168 | =g> 169 | isa recall_list 170 | group_position p2 171 | list =l 172 | ?retrieval> 173 | state error 174 | ==> 175 | =g> 176 | isa recall_list 177 | group_position p3 178 | +retrieval> 179 | isa group 180 | parent =l 181 | position p3""") 182 | 183 | if __name__ == "__main__": 184 | warnings.simplefilter("ignore") 185 | m = Model(subsymbolic=True, instantaneous_noise=0.15, retrieval_threshold=-10, partial_matching=True, activation_trace=True, strict_harvesting=True) 186 | sim = m.model.simulation(realtime=False) 187 | sim.run(3) 188 | 189 | -------------------------------------------------------------------------------- /tutorials/u6_simple.py: -------------------------------------------------------------------------------- 1 | """ 2 | Testing utilities of production rules and changes in utilities. 
3 | """ 4 | 5 | import warnings 6 | 7 | import pyactr as actr 8 | 9 | class Model: 10 | 11 | def __init__(self, **kwargs): 12 | self.m = actr.ACTRModel(**kwargs) 13 | 14 | self.m.goal.add(actr.makechunk(typename="start", state="start")) 15 | 16 | self.m.productionstring(name="one", string=""" 17 | =g> 18 | isa start 19 | state 'start' 20 | ==> 21 | =g> 22 | isa change 23 | state 'change'""", utility=1) 24 | 25 | self.m.productionstring(name="two", string=""" 26 | =g> 27 | isa start 28 | state 'start' 29 | ==> 30 | =g> 31 | isa dontchange 32 | state 'start'""", utility=5) 33 | 34 | self.m.productionstring(name="three", string=""" 35 | =g> 36 | isa change 37 | state 'change' 38 | ==> 39 | ~g>""", reward=10) 40 | 41 | 42 | if __name__ == "__main__": 43 | warnings.simplefilter("ignore") 44 | m = Model(subsymbolic=True, utility_noise=10, utility_learning=True, strict_harvesting=True) 45 | sim = m.m.simulation(realtime=True) 46 | print(m.m.productions) 47 | sim.run(1) 48 | print(m.m.productions) 49 | print(m.m.decmem) 50 | 51 | -------------------------------------------------------------------------------- /tutorials/u7_simplecompilation.py: -------------------------------------------------------------------------------- 1 | """ 2 | Testing a simple case of production compilation. The compilation also allows for utility learning, shown in the model below, as well. 3 | """ 4 | 5 | import warnings 6 | 7 | import pyactr as actr 8 | 9 | class Compilation1: 10 | """ 11 | Model testing compilation -- basic cases. 
12 | """ 13 | 14 | def __init__(self, **kwargs): 15 | actr.chunktype("state", "starting ending") 16 | self.m = actr.ACTRModel(**kwargs) 17 | 18 | self.m.goal.add(actr.makechunk(nameofchunk="start", typename="state", starting=1)) 19 | 20 | self.m.productionstring(name="one", string=""" 21 | =g> 22 | isa state 23 | starting =x 24 | ending ~=x 25 | ==> 26 | =g> 27 | isa state 28 | ending =x""", utility=2) 29 | 30 | self.m.productionstring(name="two", string=""" 31 | =g> 32 | isa state 33 | starting =x 34 | ending =x 35 | ==> 36 | =g> 37 | isa state 38 | starting =x 39 | ending 4""") 40 | 41 | if __name__ == "__main__": 42 | warnings.simplefilter("ignore") 43 | mm = Compilation1(production_compilation=True, utility_learning=True) 44 | 45 | model = mm.m 46 | 47 | sim = model.simulation(realtime=True) 48 | sim.run(0.5) 49 | print(model.productions["one and two"]) 50 | -------------------------------------------------------------------------------- /tutorials/u8_estimating_using_pymc3.py: -------------------------------------------------------------------------------- 1 | """ 2 | This example shows the workings of pyMC3 in pyactr. pyMC3 allows Bayesian inference on user-defined probabilistic models. Combining this with pyactr will allow you to get the posterior on free parameters assumed in pyactr. 3 | 4 | You will need to install pymc3 and packages it depends on to make this work. 5 | """ 6 | 7 | import warnings 8 | import numpy as np 9 | import simpy 10 | import pyactr as actr 11 | from pymc3 import Model, Normal, Gamma, Uniform, sample, summary, Metropolis, traceplot 12 | import theano.tensor as T 13 | from theano.compile.ops import as_op 14 | import matplotlib.pyplot as pp 15 | 16 | warnings.simplefilter("ignore") 17 | 18 | #Each chunk type should be defined first. 
19 | actr.chunktype("countOrder", ("first", "second")) 20 | 21 | #creating goal buffer 22 | actr.chunktype("countFrom", ("start", "end", "count")) 23 | 24 | def counting_model(sub=True): 25 | """ 26 | sub: is subsymbolic switched on? 27 | """ 28 | counting = actr.ACTRModel(subsymbolic=sub) 29 | 30 | 31 | #production rules follow; using productionstring, they are similar to Lisp ACT-R 32 | 33 | counting.productionstring(name="start", string=""" 34 | =g> 35 | isa countFrom 36 | start =x 37 | count None 38 | ==> 39 | =g> 40 | isa countFrom 41 | count =x 42 | +retrieval> 43 | isa countOrder 44 | first =x""") 45 | 46 | counting.productionstring(name="increment", string=""" 47 | =g> 48 | isa countFrom 49 | count =x 50 | end ~=x 51 | =retrieval> 52 | isa countOrder 53 | first =x 54 | second =y 55 | ==> 56 | =g> 57 | isa countFrom 58 | count =y 59 | +retrieval> 60 | isa countOrder 61 | first =y""") 62 | 63 | counting.productionstring(name="stop", string=""" 64 | =g> 65 | isa countFrom 66 | count =x 67 | end =x 68 | ==> 69 | ~g>""") 70 | 71 | return counting 72 | 73 | dd = {actr.chunkstring(string="\ 74 | isa countOrder\ 75 | first 1\ 76 | second 2"): [0], actr.chunkstring(string="\ 77 | isa countOrder\ 78 | first 2\ 79 | second 3"): [0], 80 | actr.chunkstring(string="\ 81 | isa countOrder\ 82 | first 3\ 83 | second 4"): [0], 84 | actr.chunkstring(string="\ 85 | isa countOrder\ 86 | first 4\ 87 | second 5"): [0]} #we have to store memory chunks separately, see below 88 | 89 | # Simulating data 90 | 91 | counting = counting_model(True) 92 | 93 | counting.decmems = {} 94 | counting.set_decmem(dd) 95 | 96 | counting.goal.add(actr.chunkstring(string="isa countFrom start 2 end 4")) 97 | #counting.model_parameters["latency_factor"] = 0.2 98 | sim = counting.simulation(trace=True) 99 | # an example of one run of the simulation 100 | sim.run() 101 | 102 | size = 5000 103 | 104 | Y = np.random.normal(loc=257.4, scale=10, size=size) #suppose these are data on counting, that is, how 
fast people are in counting from 2 to 4; we simulate them as a normal distribution with mean 257.4 (milliseconds) and st.d. 10 (milliseconds) 105 | print("Simulated data") 106 | print(Y) 107 | # We would get a mean of 257.4 milliseconds if the latency factor were equal to 0.1. 108 | 109 | # We now want to know what the posterior distribution of the latency factor should be. 110 | # We find this using pymc3, combining a Bayesian model and our ACT-R counting model. 111 | # Ideally, we should get close to 0.1 for lf 112 | 113 | # The part below runs the ACT-R model; it is not run on its own but called from inside the Bayesian model 114 | @as_op(itypes=[T.dscalar], otypes=[T.dvector]) 115 | def model(lf): 116 | """ 117 | Run the ACT-R counting model with the proposed latency factor and return the simulated response time. 118 | """ 119 | #adding stuff to goal buffer 120 | counting.decmems = {} #we have to clean all the memories first, because each loop adds chunks into a memory and we want to ignore these 121 | counting.set_decmem(dd) #we then add only memory chunks that are present at the beginning 122 | counting.goal.add(actr.chunkstring(string="isa countFrom start 2 end 4")) # starting goal 123 | counting.model_parameters["latency_factor"] = lf 124 | sim = counting.simulation(trace=False) 125 | last_time = 0 126 | while True: 127 | if last_time > 10: #if the value is unreasonably high, which might happen with weird proposed estimates, break 128 | last_time = 10.0 129 | break 130 | try: 131 | sim.step() # run one step ahead in simulation 132 | last_time = sim.show_time() 133 | except simpy.core.EmptySchedule: #if you run out of actions, break 134 | last_time = 10.0 #some high value so it is clear that this is not the right way to end 135 | break 136 | if not counting.goal: #if the goal was cleared (as should happen when you finish the task correctly and reach "stop"), break 137 | break 138 | 139 | return np.repeat(np.array(1000*last_time), size) # we return time in ms 140 | 
141 | basic_model = Model() 142 | 143 | with basic_model: 144 | 145 | # Priors for unknown model parameters 146 | lf = Gamma('lf', alpha=2, beta=4) 147 | 148 | sigma = Uniform('sigma', lower=0.1,upper=50) 149 | 150 | #you can print searched values from every draw 151 | #lf_print = T.printing.Print('lf')(lf) 152 | 153 | #Deterministic value (RT in ms) established by the ACT-R model 154 | mu = model(lf) 155 | 156 | # Likelihood (sampling distribution) of observations 157 | Normal('Y_obs', mu=mu, sd=sigma, observed=Y) 158 | 159 | #Metropolis algorithm for steps in simulation 160 | step = Metropolis(basic_model.vars) 161 | 162 | trace = sample(1000, tune=1000, step=step, init='auto') 163 | 164 | print(summary(trace)) 165 | traceplot(trace) 166 | pp.savefig("plot_u8_estimating_using_pymc3.png") 167 | print(trace['lf'], trace['sigma']) 168 | print("Latency factor: mean ", np.mean( trace['lf'] )) 169 | print("This value should be close to 0.1") 170 | print("Sigma estimate: mean ",np.mean( trace['sigma'] ) ) 171 | print("This value should be close to 10") 172 | 173 | # Of course, many more things can be explored this way: 174 | # more parameters could be studied; different priors could be used etc. 175 | --------------------------------------------------------------------------------
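A note on the estimation script above: the latency factor it recovers enters ACT-R's standard retrieval-time equation, T = F * exp(-A), where A is the chunk's activation (under base-level learning, A = ln Σ_j (now - t_j)^-d). The sketch below is a stdlib-only illustration of these two textbook equations, independent of pyactr's internals; the function names here are our own, not pyactr's API.

```python
import math

def base_level_activation(presentations, now, decay=0.5):
    """ACT-R base-level learning: B = ln(sum_j (now - t_j) ** -d)."""
    return math.log(sum((now - t) ** (-decay) for t in presentations))

def retrieval_latency(activation, latency_factor=0.1):
    """Standard ACT-R retrieval latency: T = F * exp(-A), in seconds."""
    return latency_factor * math.exp(-activation)

# A chunk presented at t=0s and t=2s, retrieved at t=5s:
activation = base_level_activation([0.0, 2.0], now=5.0)
print(round(retrieval_latency(activation), 4))
```

Because T scales linearly with F, doubling the latency factor doubles every retrieval time, which is why the posterior over F in the pymc3 model above is identifiable from mean response times.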