├── CONTRIBUTING.md
├── LICENSE
├── README.md
├── aima3
│   ├── __init__.py
│   ├── agents.py
│   ├── csp.py
│   ├── games.py
│   ├── ipyviews.py
│   ├── knowledge.py
│   ├── learning.py
│   ├── logic.py
│   ├── mcts.py
│   ├── mdp.py
│   ├── nlp.py
│   ├── notebook.py
│   ├── planning.py
│   ├── probability.py
│   ├── rl.py
│   ├── search.py
│   ├── tests
│   │   ├── __init__.py
│   │   ├── pytest.ini
│   │   ├── test_agents.py
│   │   ├── test_csp.py
│   │   ├── test_games.py
│   │   ├── test_knowledge.py
│   │   ├── test_learning.py
│   │   ├── test_logic.py
│   │   ├── test_mdp.py
│   │   ├── test_nlp.py
│   │   ├── test_planning.py
│   │   ├── test_probability.py
│   │   ├── test_rl.py
│   │   ├── test_search.py
│   │   ├── test_text.py
│   │   └── test_utils.py
│   ├── text.py
│   └── utils.py
├── notebooks
│   ├── agents.ipynb
│   ├── connect_four.ipynb
│   ├── csp.ipynb
│   ├── games.ipynb
│   ├── images
│   │   ├── IMAGE-CREDITS
│   │   ├── aima3e_big.jpg
│   │   ├── aima_logo.png
│   │   ├── bayesnet.png
│   │   ├── decisiontree_fruit.jpg
│   │   ├── dirt.svg
│   │   ├── dirt05-icon.jpg
│   │   ├── fig_5_2.png
│   │   ├── knn_plot.png
│   │   ├── makefile
│   │   ├── mdp-a.png
│   │   ├── mdp.png
│   │   ├── neural_net.png
│   │   ├── parse_tree.png
│   │   ├── perceptron.png
│   │   ├── pluralityLearner_plot.png
│   │   ├── point_crossover.png
│   │   ├── restaurant.png
│   │   ├── romania_map.png
│   │   ├── search_animal.svg
│   │   ├── sprinklernet.jpg
│   │   ├── uniform_crossover.png
│   │   ├── vacuum-icon.jpg
│   │   ├── vacuum.svg
│   │   └── wall-icon.jpg
│   ├── index.ipynb
│   ├── intro.ipynb
│   ├── knowledge.ipynb
│   ├── learning.ipynb
│   ├── learning_apps.ipynb
│   ├── logic.ipynb
│   ├── mdp.ipynb
│   ├── monte_carlo_tree_search.ipynb
│   ├── neural_nets.ipynb
│   ├── nlp.ipynb
│   ├── nlp_apps.ipynb
│   ├── planning.ipynb
│   ├── probability-4e.ipynb
│   ├── probability.ipynb
│   ├── rl.ipynb
│   ├── search-4e.ipynb
│   ├── search.ipynb
│   └── text.ipynb
├── requirements.txt
└── setup.py
/CONTRIBUTING.md:
--------------------------------------------------------------------------------
1 | How to Contribute to aima-python
2 | ==========================
3 |
4 | Thanks for considering contributing to `aima-python`! Whether you are an aspiring [Google Summer of Code](https://summerofcode.withgoogle.com/organizations/5663121491361792/) student, or an independent contributor, here is a guide on how you can help.
5 |
6 | First of all, you can read these write-ups from past GSoC students to get an idea on what you can do for the project. [Chipe1](https://github.com/aimacode/aima-python/issues/641) - [MrDupin](https://github.com/aimacode/aima-python/issues/632)
7 |
8 | In general, the main ways you can contribute to the repository are the following:
9 |
10 | 1. Implement algorithms from the [list of algorithms](https://github.com/aimacode/aima-python/blob/master/README.md#index-of-algorithms).
11 | 1. Add tests for algorithms that are missing them (you can also add more tests to algorithms that already have some).
12 | 1. Take care of [issues](https://github.com/aimacode/aima-python/issues).
13 | 1. Write on the notebooks (`.ipynb` files).
14 | 1. Add and edit documentation (the docstrings in `.py` files).
15 |
16 | In more detail:
17 |
18 | ## Read the Code and Start on an Issue
19 |
20 | - First, read and understand the code to get a feel for the extent and the style.
21 | - Look at the [issues](https://github.com/aimacode/aima-python/issues) and pick one to work on.
22 | - A standing issue is that some algorithms in the [list of algorithms](https://github.com/aimacode/aima-python/blob/master/README.md#index-of-algorithms) have no implementation yet, and some implementations have no tests.
23 |
24 | ## Port to Python 3; Pythonic Idioms; py.test
25 |
26 | - Check for common problems in [porting to Python 3](http://python3porting.com/problems.html), such as: `print` is now a function; `range` and `map` and other functions no longer produce `list`s; objects of different types can no longer be compared with `<`; strings are now Unicode; it would be nice to move `%` string formatting to `.format`; there is a new `next` function for generators; `/` on integers now performs true division and returns a float (use `//` for integer division); we can now use set literals.
27 | - Replace old Lisp-based idioms with proper Python idioms. For example, we have many functions that were taken directly from Common Lisp, such as the `every` function: `every(callable, items)` returns true if calling `callable` on every element of `items` returns true. This is good Lisp style, but good Python style is to use `all` and a generator expression: `all(callable(f) for f in items)`; see the sketch after this list. Eventually, fix all calls to these legacy Lisp functions and then remove the functions.
28 | - Add more tests in `test_*.py` files. Strive for terseness; it is ok to group multiple asserts into one `def test_something():` function. Move most tests to `test_*.py`, but it is fine to have a single `doctest` example in the docstring of a function in the `.py` file, if the purpose of the doctest is to explain how to use the function, rather than test the implementation.
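For instance, here is a minimal sketch (with a made-up `is_even` predicate) of replacing the legacy idiom, together with a terse test in the style described above:

    def is_even(n):
        return n % 2 == 0

    # Legacy Lisp-style idiom: every(is_even, items)
    # Idiomatic Python 3: all() with a generator expression
    assert all(is_even(n) for n in [2, 4, 6])

    def test_is_even():
        # several asserts grouped into one terse test function
        assert is_even(0)
        assert is_even(2)
        assert not is_even(3)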
29 |
30 | ## New and Improved Algorithms
31 |
32 | - Implement functions that were in the third edition of the book but were not yet implemented in the code. Check the [list of pseudocode algorithms (pdf)](https://github.com/aimacode/pseudocode/blob/master/aima3e-algorithms.pdf) to see what's missing.
33 | - As we finish chapters for the new fourth edition, we will share the new pseudocode in the [`aima-pseudocode`](https://github.com/aimacode/aima-pseudocode) repository, and describe what changes are necessary.
34 | We hope to have an `algorithm-name.md` file for each algorithm, eventually; it would be great if contributors could add some for the existing algorithms.
35 | - Give examples of how to use the code in the `.ipynb` files.
36 |
37 | We still support a legacy branch, `aima3python2` (for the third edition of the textbook and for Python 2 code).
38 |
39 | ## Jupyter Notebooks
40 |
41 | In this project we use Jupyter/IPython Notebooks to showcase the algorithms in the book. They serve as short tutorials on what the algorithms do, how they are implemented and how one can use them. To install Jupyter, you can follow the instructions [here](https://jupyter.org/install.html). These are some ways you can contribute to the notebooks:
42 |
43 | - Proofread the notebooks for grammar mistakes, typos, or general errors.
44 | - Move visualization code, and other code unrelated to the algorithms, from the notebooks to `notebook.py` (a file used to store code for the notebooks, like visualization and other miscellaneous stuff). Make sure the notebooks still work and have their outputs showing!
45 | - Replace the `%psource` magic notebook command with the function `psource` from `notebook.py` where needed. Examples where this is useful are a) when we want to show code for algorithm implementation and b) when we have consecutive cells with the magic keyword (in this case, if the code is large, it's best to leave the output hidden).
46 | - Add the function `pseudocode(algorithm_name)` in algorithm sections. The function prints the pseudocode of the algorithm. You can see some example usage in [`knowledge.ipynb`](https://github.com/aimacode/aima-python/blob/master/knowledge.ipynb), and a short sketch after this list.
47 | - Edit existing sections for algorithms to add more information and/or examples.
48 | - Add visualizations for algorithms. The visualization code should go in `notebook.py` to keep things clean.
49 | - Add new sections for algorithms not yet covered. The general format we use in the notebooks is the following: First start with an overview of the algorithm, printing the pseudocode and explaining how it works. Then, add some implementation details, including showing the code (using `psource`). Finally, add examples for the implementations, showing how the algorithms work. Don't fret with adding complex, real-world examples; the project is meant for educational purposes. You can of course choose another format if something better suits an algorithm.
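A minimal sketch of calling these helpers from a notebook cell (assuming `psource` and `pseudocode` live in `notebook.py` as described above; the algorithm name here is illustrative):

    from aima3.notebook import psource, pseudocode
    from aima3.knowledge import current_best_learning

    psource(current_best_learning)        # show the implementation source
    pseudocode("Current-Best-Learning")   # render the book's pseudocode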
50 |
51 | Apart from the notebooks explaining how the algorithms work, we also have notebooks showcasing some indicative applications of the algorithms. These notebooks are in the `*_apps.ipynb` format. We aim to have an `apps` notebook for each module, so if you don't see one for the module you would like to contribute to, feel free to create it from scratch! In these notebooks we are looking for applications showing what the algorithms can do. The general format of these sections is this: Add a description of the problem you are trying to solve, then explain how you are going to solve it and finally provide your solution with examples. Note that any code you write should not require any external libraries apart from the ones already provided (like `matplotlib`).
52 |
53 | # Style Guide
54 |
55 | There are a few style rules that are unique to this project:
56 |
57 | - The first rule is that the code should correspond directly to the pseudocode in the book. When possible this will be almost one-to-one, just allowing for the syntactic differences between Python and pseudocode, and for different library functions.
58 | - Don't make a function more complicated than the pseudocode in the book, even if the complication would add a nice feature, or give an efficiency gain. Instead, remain faithful to the pseudocode, and if you must, add a new function (not in the book) with the added feature.
59 | - I use functional programming (functions with no side effects) in many cases, but not exclusively (sometimes classes and/or functions with side effects are used). Let the book's pseudocode be the guide; see the sketch after this list.
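For example (an illustrative sketch, not code from the repository), keep the book-faithful function minimal and put any added feature in a separate function:

    def linear_search(items, target):
        """Book-faithful version: mirrors the pseudocode line for line."""
        for item in items:
            if item == target:
                return item
        return None

    def linear_search_with_count(items, target):
        """Not in the book: same search, but also reports comparisons made."""
        count = 0
        for item in items:
            count += 1
            if item == target:
                return item, count
        return None, count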
60 |
61 | Beyond the above rules, we use [Pep 8](https://www.python.org/dev/peps/pep-0008), with a few minor exceptions:
62 |
63 | - I have set `--max-line-length 100`, not 79.
64 | - You don't need two spaces after a sentence-ending period.
65 | - Strunk and White is [not a good guide for English](http://chronicle.com/article/50-Years-of-Stupid-Grammar/25497).
66 | - I prefer more concise docstrings; I don't follow [Pep 257](https://www.python.org/dev/peps/pep-0257/). In most cases,
67 | a one-line docstring suffices. It is rarely necessary to list what each argument does; the name of the argument usually is enough.
68 | - Not all constants have to be UPPERCASE.
69 | - At some point I may add [Pep 484](https://www.python.org/dev/peps/pep-0484/) type annotations, but I think I'll hold off for now;
70 | I want to get more experience with them, and some people may still be in Python 3.4.
71 |
72 |
73 | Contributing a Patch
74 | ====================
75 |
76 | 1. Submit an issue describing your proposed change to the repo in question (or work on an existing issue).
77 | 1. The repo owner will respond to your issue promptly.
78 | 1. Fork the desired repo, develop and test your code changes.
79 | 1. Submit a pull request.
80 |
81 | Reporting Issues
82 | ================
83 |
84 | - Under which versions of Python does this happen?
85 |
86 | - Provide an example of the issue occurring.
87 |
88 | - Is anybody working on this?
89 |
90 | Patch Rules
91 | ===========
92 |
93 | - Ensure that the patch is Python 3.4 compliant.
94 |
95 | - Include tests if your patch is supposed to solve a bug, and explain
96 | clearly under which circumstances the bug happens. Make sure the test fails
97 | without your patch.
98 |
99 | - Follow the style guidelines described above.
100 |
101 | Running the Test-Suite
102 | =====================
103 |
104 | The minimal requirement for running the test suite is `pytest`. You can
105 | install it with:
106 |
107 |     pip install pytest
108 |
109 | Clone this repository:
110 |
111 |     git clone https://github.com/aimacode/aima-python.git
112 |
113 | Fetch the aima-data submodule:
114 |
115 |     cd aima-python
116 |     git submodule init
117 |     git submodule update
118 |
119 | Then you can run the testsuite from the `aima-python` or `tests` directory with:
120 |
121 |     py.test
122 |
123 | # Choice of Programming Languages
124 |
125 | Are we right to concentrate on Java and Python versions of the code? I think so; both languages are popular; Java is
126 | fast enough for our purposes, and has reasonable type declarations (but can be verbose); Python has a very direct mapping to the pseudocode in the book (but lacks type declarations and can be slow). The [TIOBE Index](http://www.tiobe.com/tiobe_index) says the top seven most popular languages, in order, are:
127 |
128 | Java, C, C++, C#, Python, PHP, Javascript
129 |
130 | So it might be reasonable to also support C++/C# at some point in the future. It might also be reasonable to support a language that combines the terse readability of Python with the type safety and speed of Java; perhaps Go or Julia. I see no reason to support PHP. Javascript is the language of the browser; it would be nice to have code that runs in the browser without need for any downloads; this would be in Javascript or a variant such as Typescript.
131 |
132 | There is also an `aima-lisp` project; in 1995 when we wrote the first edition of the book, Lisp was the right choice, but today it is less popular (currently #31 on the TIOBE index).
133 |
134 | What languages are instructors recommending for their AI class? To get an approximate idea, I gave the query [\[norvig russell "Modern Approach"\]](https://www.google.com/webhp#q=russell%20norvig%20%22modern%20approach%22%20java) along with the names of various languages and looked at the estimated counts of results on
135 | various dates. However, I don't have much confidence in these figures...
136 |
137 | |Language |2004 |2005 |2007 |2010 |2016 |
138 | |-------- |----: |----: |----: |----: |----: |
139 | |[none](http://www.google.com/search?q=norvig+russell+%22Modern+Approach%22)|8,080|20,100|75,200|150,000|132,000|
140 | |[java](http://www.google.com/search?q=java+norvig+russell+%22Modern+Approach%22)|1,990|4,930|44,200|37,000|50,000|
141 | |[c++](http://www.google.com/search?q=c%2B%2B+norvig+russell+%22Modern+Approach%22)|875|1,820|35,300|105,000|35,000|
142 | |[lisp](http://www.google.com/search?q=lisp+norvig+russell+%22Modern+Approach%22)|844|974|30,100|19,000|14,000|
143 | |[prolog](http://www.google.com/search?q=prolog+norvig+russell+%22Modern+Approach%22)|789|2,010|23,200|17,000|16,000|
144 | |[python](http://www.google.com/search?q=python+norvig+russell+%22Modern+Approach%22)|785|1,240|18,400|11,000|12,000|
145 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | The MIT License (MIT)
2 |
3 | Copyright (c) 2016 aima-python contributors
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
6 |
7 | The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
8 |
9 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
10 |
--------------------------------------------------------------------------------
/aima3/__init__.py:
--------------------------------------------------------------------------------
1 |
2 |
3 | __version__ = "1.0.11"
4 |
--------------------------------------------------------------------------------
/aima3/ipyviews.py:
--------------------------------------------------------------------------------
1 | from IPython.display import HTML, display, clear_output
2 | from collections import defaultdict
3 | from .agents import PolygonObstacle
4 | import time
5 | import json
6 | import copy
7 | import __main__
8 |
9 |
10 | # ______________________________________________________________________________
11 | # Continuous environment
12 |
13 |
14 | _CONTINUOUS_WORLD_HTML = '''
15 |
16 |
17 |
18 |
19 |
23 | ''' # noqa
24 |
25 | with open('js/continuousworld.js', 'r') as js_file:
26 | _JS_CONTINUOUS_WORLD = js_file.read()
27 |
28 |
29 | class ContinuousWorldView:
30 | """ View for continuousworld Implementation in agents.py """
31 |
32 | def __init__(self, world, fill="#AAA"):
33 | self.time = time.time()
34 | self.world = world
35 | self.width = world.width
36 | self.height = world.height
37 |
38 | def object_name(self):
39 | globals_in_main = {x: getattr(__main__, x) for x in dir(__main__)}
40 | for x in globals_in_main:
41 | if isinstance(globals_in_main[x], type(self)):
42 | if globals_in_main[x].time == self.time:
43 | return x
44 |
45 | def handle_add_obstacle(self, vertices):
46 | """ Vertices must be a nestedtuple. This method
47 | is called from kernel.execute on completion of
48 | a polygon. """
49 | self.world.add_obstacle(vertices)
50 | self.show()
51 |
52 | def handle_remove_obstacle(self):
53 | raise NotImplementedError
54 |
55 | def get_polygon_obstacles_coordinates(self):
56 | obstacle_coordinates = []
57 | for thing in self.world.things:
58 | if isinstance(thing, PolygonObstacle):
59 | obstacle_coordinates.append(thing.coordinates)
60 | return obstacle_coordinates
61 |
62 | def show(self):
63 | clear_output()
64 | total_html = _CONTINUOUS_WORLD_HTML.format(self.width, self.height, self.object_name(),
65 | str(self.get_polygon_obstacles_coordinates()),
66 | _JS_CONTINUOUS_WORLD)
67 | display(HTML(total_html))
68 |
69 |
70 | # ______________________________________________________________________________
71 | # Grid environment
72 |
73 | _GRID_WORLD_HTML = '''
74 |
75 |
76 |
77 |
78 |
79 |
80 |
84 | '''
85 |
86 | with open('js/gridworld.js', 'r') as js_file:
87 | _JS_GRID_WORLD = js_file.read()
88 |
89 |
90 | class GridWorldView:
91 | """ View for grid world. Uses XYEnviornment in agents.py as model.
92 | world: an instance of XYEnviornment.
93 | block_size: size of individual blocks in pixes.
94 | default_fill: color of blocks. A hex value or name should be passed.
95 | """
96 |
97 | def __init__(self, world, block_size=30, default_fill="white"):
98 | self.time = time.time()
99 | self.world = world
100 | self.labels = defaultdict(str) # locations as keys
101 | self.representation = {"default": {"type": "color", "source": default_fill}}
102 | self.block_size = block_size
103 |
104 | def object_name(self):
105 | globals_in_main = {x: getattr(__main__, x) for x in dir(__main__)}
106 | for x in globals_in_main:
107 | if isinstance(globals_in_main[x], type(self)):
108 | if globals_in_main[x].time == self.time:
109 | return x
110 |
111 | def set_label(self, coordinates, label):
112 | """ Add lables to a particular block of grid.
113 | coordinates: a tuple of (row, column).
114 | rows and columns are 0 indexed.
115 | """
116 | self.labels[coordinates] = label
117 |
118 | def set_representation(self, thing, repr_type, source):
119 | """ Set the representation of different things in the
120 | environment.
121 | thing: a thing object.
122 | repr_type : type of representation can be either "color" or "img"
123 | source: Hex value in case of color. Image path in case of image.
124 | """
125 | thing_class_name = thing.__class__.__name__
126 | if repr_type not in ("img", "color"):
127 | raise ValueError('Invalid repr_type passed. Possible types are img/color')
128 | self.representation[thing_class_name] = {"type": repr_type, "source": source}
129 |
130 | def handle_click(self, coordinates):
131 | """ This method needs to be overidden. Make sure to include a
132 | self.show() call at the end. """
133 | self.show()
134 |
135 | def map_to_render(self):
136 | default_representation = {"val": "default", "tooltip": ""}
137 | world_map = [[copy.deepcopy(default_representation) for _ in range(self.world.width)]
138 | for _ in range(self.world.height)]
139 |
140 | for thing in self.world.things:
141 | row, column = thing.location
142 | thing_class_name = thing.__class__.__name__
143 | if thing_class_name not in self.representation:
144 | raise KeyError('Representation not found for {}'.format(thing_class_name))
145 | world_map[row][column]["val"] = thing.__class__.__name__
146 |
147 | for location, label in self.labels.items():
148 | row, column = location
149 | world_map[row][column]["tooltip"] = label
150 |
151 | return json.dumps(world_map)
152 |
153 | def show(self):
154 | clear_output()
155 | total_html = _GRID_WORLD_HTML.format(
156 | self.object_name(), self.map_to_render(),
157 | self.block_size, json.dumps(self.representation), _JS_GRID_WORLD)
158 | display(HTML(total_html))
159 |
--------------------------------------------------------------------------------
/aima3/knowledge.py:
--------------------------------------------------------------------------------
1 | """Knowledge in learning, Chapter 19"""
2 |
3 | from random import shuffle
4 | from math import log
5 | from .utils import powerset
6 | from collections import defaultdict
7 | from itertools import combinations, product
8 | from .logic import (FolKB, constant_symbols, predicate_symbols, standardize_variables,
9 | variables, is_definite_clause, subst, expr, Expr)
10 |
11 | # ______________________________________________________________________________
12 |
13 |
14 | def current_best_learning(examples, h, examples_so_far=[]):
15 | """ [Figure 19.2]
16 | The hypothesis is a list of dictionaries, with each dictionary representing
17 | a disjunction."""
18 | if not examples:
19 | return h
20 |
21 | e = examples[0]
22 | if is_consistent(e, h):
23 | return current_best_learning(examples[1:], h, examples_so_far + [e])
24 | elif false_positive(e, h):
25 | for h2 in specializations(examples_so_far + [e], h):
26 | h3 = current_best_learning(examples[1:], h2, examples_so_far + [e])
27 | if h3 != 'FAIL':
28 | return h3
29 | elif false_negative(e, h):
30 | for h2 in generalizations(examples_so_far + [e], h):
31 | h3 = current_best_learning(examples[1:], h2, examples_so_far + [e])
32 | if h3 != 'FAIL':
33 | return h3
34 |
35 | return 'FAIL'
36 |
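# ---------------------------------------------------------------------------
# Usage sketch (not part of the original file; the attribute names are made
# up). Each example is a dict of attribute values plus a boolean 'GOAL'; a
# hypothesis is a list of dicts, each dict one disjunct (a conjunction of
# attribute constraints, with '!' marking negation):
#
#     examples = [
#         {'Size': 'Big',   'Color': 'Red', 'GOAL': True},
#         {'Size': 'Small', 'Color': 'Red', 'GOAL': False},
#     ]
#     h = current_best_learning(examples, [{'Size': 'Big'}])
#     assert guess_value(examples[0], h)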
37 |
38 | def specializations(examples_so_far, h):
39 | """Specialize the hypothesis by adding AND operations to the disjunctions"""
40 | hypotheses = []
41 |
42 | for i, disj in enumerate(h):
43 | for e in examples_so_far:
44 | for k, v in e.items():
45 | if k in disj or k == 'GOAL':
46 | continue
47 |
48 | h2 = h[i].copy()
49 | h2[k] = '!' + v
50 | h3 = h.copy()
51 | h3[i] = h2
52 | if check_all_consistency(examples_so_far, h3):
53 | hypotheses.append(h3)
54 |
55 | shuffle(hypotheses)
56 | return hypotheses
57 |
58 |
59 | def generalizations(examples_so_far, h):
60 | """Generalize the hypothesis. First delete operations
61 | (including disjunctions) from the hypothesis. Then, add OR operations."""
62 | hypotheses = []
63 |
64 | # Delete disjunctions
65 | disj_powerset = powerset(range(len(h)))
66 | for disjs in disj_powerset:
67 | h2 = h.copy()
68 | for d in reversed(list(disjs)):
69 | del h2[d]
70 |
71 | if check_all_consistency(examples_so_far, h2):
72 | hypotheses += h2
73 |
74 | # Delete AND operations in disjunctions
75 | for i, disj in enumerate(h):
76 | a_powerset = powerset(disj.keys())
77 | for attrs in a_powerset:
78 | h2 = h[i].copy()
79 | for a in attrs:
80 | del h2[a]
81 |
82 | if check_all_consistency(examples_so_far, [h2]):
83 | h3 = h.copy()
84 | h3[i] = h2.copy()
85 | hypotheses += h3
86 |
87 | # Add OR operations
88 | if hypotheses == [] or hypotheses == [{}]:
89 | hypotheses = add_or(examples_so_far, h)
90 | else:
91 | hypotheses.extend(add_or(examples_so_far, h))
92 |
93 | shuffle(hypotheses)
94 | return hypotheses
95 |
96 |
97 | def add_or(examples_so_far, h):
98 | """Adds an OR operation to the hypothesis. The AND operations in the disjunction
99 | are generated by the last example (which is the problematic one)."""
100 | ors = []
101 | e = examples_so_far[-1]
102 |
103 | attrs = {k: v for k, v in e.items() if k != 'GOAL'}
104 | a_powerset = powerset(attrs.keys())
105 |
106 | for c in a_powerset:
107 | h2 = {}
108 | for k in c:
109 | h2[k] = attrs[k]
110 |
111 | if check_negative_consistency(examples_so_far, h2):
112 | h3 = h.copy()
113 | h3.append(h2)
114 | ors.append(h3)
115 |
116 | return ors
117 |
118 | # ______________________________________________________________________________
119 |
120 |
121 | def version_space_learning(examples):
122 | """ [Figure 19.3]
123 | The version space is a list of hypotheses, which in turn are a list
124 | of dictionaries/disjunctions."""
125 | V = all_hypotheses(examples)
126 | for e in examples:
127 | if V:
128 | V = version_space_update(V, e)
129 |
130 | return V
131 |
132 |
133 | def version_space_update(V, e):
134 | return [h for h in V if is_consistent(e, h)]
135 |
136 |
137 | def all_hypotheses(examples):
138 | """Builds a list of all the possible hypotheses"""
139 | values = values_table(examples)
140 | h_powerset = powerset(values.keys())
141 | hypotheses = []
142 | for s in h_powerset:
143 | hypotheses.extend(build_attr_combinations(s, values))
144 |
145 | hypotheses.extend(build_h_combinations(hypotheses))
146 |
147 | return hypotheses
148 |
149 |
150 | def values_table(examples):
151 | """Builds a table with all the possible values for each attribute.
152 | Returns a dictionary with keys the attribute names and values a list
153 | with the possible values for the corresponding attribute."""
154 | values = defaultdict(lambda: [])
155 | for e in examples:
156 | for k, v in e.items():
157 | if k == 'GOAL':
158 | continue
159 |
160 | mod = '!'
161 | if e['GOAL']:
162 | mod = ''
163 |
164 | if mod + v not in values[k]:
165 | values[k].append(mod + v)
166 |
167 | values = dict(values)
168 | return values
169 |
170 |
171 | def build_attr_combinations(s, values):
172 | """Given a set of attributes, builds all the combinations of values.
173 | If the set holds more than one attribute, recursively builds the
174 | combinations."""
175 | if len(s) == 1:
176 | # s holds just one attribute, return its list of values
177 | k = values[s[0]]
178 | h = [[{s[0]: v}] for v in values[s[0]]]
179 | return h
180 |
181 | h = []
182 | for i, a in enumerate(s):
183 | rest = build_attr_combinations(s[i+1:], values)
184 | for v in values[a]:
185 | o = {a: v}
186 | for r in rest:
187 | t = o.copy()
188 | for d in r:
189 | t.update(d)
190 | h.append([t])
191 |
192 | return h
193 |
194 |
195 | def build_h_combinations(hypotheses):
196 | """Given a set of hypotheses, builds and returns all the combinations of the
197 | hypotheses."""
198 | h = []
199 | h_powerset = powerset(range(len(hypotheses)))
200 |
201 | for s in h_powerset:
202 | t = []
203 | for i in s:
204 | t.extend(hypotheses[i])
205 | h.append(t)
206 |
207 | return h
208 |
209 | # ______________________________________________________________________________
210 |
211 |
212 | def minimal_consistent_det(E, A):
213 | """Returns a minimal set of attributes which give consistent determination"""
214 | n = len(A)
215 |
216 | for i in range(n + 1):
217 | for A_i in combinations(A, i):
218 | if consistent_det(A_i, E):
219 | return set(A_i)
220 |
221 |
222 | def consistent_det(A, E):
223 | """Checks if the attributes(A) is consistent with the examples(E)"""
224 | H = {}
225 |
226 | for e in E:
227 | attr_values = tuple(e[attr] for attr in A)
228 | if attr_values in H and H[attr_values] != e['GOAL']:
229 | return False
230 | H[attr_values] = e['GOAL']
231 |
232 | return True
233 |
234 | # ______________________________________________________________________________
235 |
236 |
237 | class FOIL_container(FolKB):
238 | """Holds the kb and other necessary elements required by FOIL"""
239 |
240 | def __init__(self, clauses=[]):
241 | self.const_syms = set()
242 | self.pred_syms = set()
243 | FolKB.__init__(self, clauses)
244 |
245 | def tell(self, sentence):
246 | if is_definite_clause(sentence):
247 | self.clauses.append(sentence)
248 | self.const_syms.update(constant_symbols(sentence))
249 | self.pred_syms.update(predicate_symbols(sentence))
250 | else:
251 | raise Exception("Not a definite clause: {}".format(sentence))
252 |
253 | def foil(self, examples, target):
254 | """Learns a list of first-order horn clauses
255 | 'examples' is a tuple: (positive_examples, negative_examples).
256 | positive_examples and negative_examples are both lists which contain substitutions."""
257 | clauses = []
258 |
259 | pos_examples = examples[0]
260 | neg_examples = examples[1]
261 |
262 | while pos_examples:
263 | clause, extended_pos_examples = self.new_clause((pos_examples, neg_examples), target)
264 | # remove positive examples covered by clause
265 | pos_examples = self.update_examples(target, pos_examples, extended_pos_examples)
266 | clauses.append(clause)
267 |
268 | return clauses
269 |
270 | def new_clause(self, examples, target):
271 | """Finds a horn clause which satisfies part of the positive
272 | examples but none of the negative examples.
273 | The horn clause is specified as [consequent, list of antecedents]
274 | Return value is the tuple (horn_clause, extended_positive_examples)"""
275 | clause = [target, []]
276 | # [positive_examples, negative_examples]
277 | extended_examples = examples
278 | while extended_examples[1]:
279 | l = self.choose_literal(self.new_literals(clause), extended_examples)
280 | clause[1].append(l)
281 | extended_examples = [sum([list(self.extend_example(example, l)) for example in
282 | extended_examples[i]], []) for i in range(2)]
283 |
284 | return (clause, extended_examples[0])
285 |
286 | def extend_example(self, example, literal):
287 | """Generates extended examples which satisfy the literal"""
288 | # find all substitutions that satisfy literal
289 | for s in self.ask_generator(subst(example, literal)):
290 | s.update(example)
291 | yield s
292 |
293 | def new_literals(self, clause):
294 | """Generates new literals based on known predicate symbols.
295 | A generated literal must share at least one variable with the clause"""
296 | share_vars = variables(clause[0])
297 | for l in clause[1]:
298 | share_vars.update(variables(l))
299 |
300 | for pred, arity in self.pred_syms:
301 | new_vars = {standardize_variables(expr('x')) for _ in range(arity - 1)}
302 | for args in product(share_vars.union(new_vars), repeat=arity):
303 | if any(var in share_vars for var in args):
304 | yield Expr(pred, *[var for var in args])
305 |
306 | def choose_literal(self, literals, examples):
307 | """Chooses the best literal based on the information gain"""
308 | def gain(l):
309 | pre_pos = len(examples[0])
310 | pre_neg = len(examples[1])
311 | extended_examples = [sum([list(self.extend_example(example, l)) for example in
312 | examples[i]], []) for i in range(2)]
313 | post_pos = len(extended_examples[0])
314 | post_neg = len(extended_examples[1])
315 | if pre_pos + pre_neg == 0 or post_pos + post_neg == 0:
316 | return -1
317 |
318 | # number of positive examples that are represented in extended_examples
319 | T = 0
320 | for example in examples[0]:
321 | def represents(d):
322 | return all(d[x] == example[x] for x in example)
323 | if any(represents(l_) for l_ in extended_examples[0]):
324 | T += 1
325 |
326 | return T * log((post_pos*(pre_pos + pre_neg) + 1e-4) / ((post_pos + post_neg)*pre_pos))
327 |
328 | return max(literals, key=gain)
329 |
330 | def update_examples(self, target, examples, extended_examples):
331 | """Adds to the kb those examples what are represented in extended_examples
332 | List of omitted examples is returned"""
333 | uncovered = []
334 | for example in examples:
335 | def represents(d):
336 | return all(d[x] == example[x] for x in example)
337 | if any(represents(l) for l in extended_examples):
338 | self.tell(subst(example, target))
339 | else:
340 | uncovered.append(example)
341 |
342 | return uncovered
343 |
344 |
345 | # ______________________________________________________________________________
346 |
347 |
348 | def check_all_consistency(examples, h):
349 | """Check for the consistency of all examples under h"""
350 | for e in examples:
351 | if not is_consistent(e, h):
352 | return False
353 |
354 | return True
355 |
356 |
357 | def check_negative_consistency(examples, h):
358 | """Check if the negative examples are consistent under h"""
359 | for e in examples:
360 | if e['GOAL']:
361 | continue
362 |
363 | if not is_consistent(e, [h]):
364 | return False
365 |
366 | return True
367 |
368 |
369 | def disjunction_value(e, d):
370 | """The value of example e under disjunction d"""
371 | for k, v in d.items():
372 | if v[0] == '!':
373 | # v is a NOT expression
374 | # e[k], thus, should not be equal to v
375 | if e[k] == v[1:]:
376 | return False
377 | elif e[k] != v:
378 | return False
379 |
380 | return True
381 |
382 |
383 | def guess_value(e, h):
384 | """Guess value of example e under hypothesis h"""
385 | for d in h:
386 | if disjunction_value(e, d):
387 | return True
388 |
389 | return False
390 |
391 |
392 | def is_consistent(e, h):
393 | return e["GOAL"] == guess_value(e, h)
394 |
395 |
396 | def false_positive(e, h):
397 | if e["GOAL"] == False:
398 | if guess_value(e, h):
399 | return True
400 |
401 | return False
402 |
403 |
404 | def false_negative(e, h):
405 | if e["GOAL"] == True:
406 | if not guess_value(e, h):
407 | return True
408 |
409 | return False
410 |
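# ---------------------------------------------------------------------------
# FOIL usage sketch (not part of the original file; the tiny kinship KB and
# the examples are made up). Examples are substitution dicts over the
# variables of the target literal:
#
#     x, y = expr('x'), expr('y')
#     A, B, C = expr('A'), expr('B'), expr('C')
#     kb = FOIL_container([expr('Parent(A, B)'), expr('Parent(B, C)')])
#     target = expr('Ancestor(x, y)')
#     positives = [{x: A, y: B}, {x: B, y: C}]
#     negatives = [{x: C, y: A}]
#     clauses = kb.foil((positives, negatives), target)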
--------------------------------------------------------------------------------
/aima3/mcts.py:
--------------------------------------------------------------------------------
1 | """
2 |
3 | Based on various code examples, including:
4 |
5 | https://github.com/junxiaosong/AlphaZero_Gomoku/blob/master/mcts_alphaZero.py
6 |
7 | """
8 |
9 | import numpy as np
10 | import copy
11 |
12 | def softmax(x):
13 | probs = np.exp(x - np.max(x))
14 | probs /= np.sum(probs)
15 | return probs
16 |
17 | class Node(object):
18 | """
19 | A node in the MCTS tree. Each node keeps track of its own value Q, prior probability P, and
20 | its visit-count-adjusted prior score u.
21 | """
22 | def __init__(self, parent, P):
23 | self.parent = parent
24 | self.children = {} # a map from action to Node
25 | self.n_visits = 0
26 | self.P = P
27 | self.Q = 0
28 | self.u = 0
29 |
30 | def depth(self):
31 | """
32 | Find depth of tree.
33 | """
34 | children = list(self.children.values())
35 | if children == []:
36 | return 0
37 | else:
38 | return max([child.depth() for child in children]) + 1
39 |
40 | def visit(self, function):
41 | """
42 | Visit the nodes in a tree, layer by layer.
43 | """
44 | return (function(self), [child.visit(function) for child in self.children.values()])
45 |
46 | def expand(self, action_priors):
47 | """
48 | Expand tree by creating new children.
49 |
50 | action_priors -- output from policy function - a list of tuples of actions
51 | and their prior probability according to the policy function.
52 | """
53 | for action, prob in action_priors:
54 | if action not in self.children:
55 | self.children[action] = Node(self, prob)
56 |
57 | def select(self, c_puct):
58 | """
59 | Select action among children that gives maximum action value, Q plus bonus u(P).
60 |
61 | Returns:
62 | A tuple of (action, next_node)
63 | """
64 | return max(self.children.items(), key=lambda act_node: act_node[1].get_value(c_puct))
65 |
66 | def update(self, leaf_value):
67 | """
68 | Update node values from leaf evaluation.
69 |
70 | Arguments:
71 | leaf_value -- the value of subtree evaluation from the current player's perspective.
72 | """
73 | # Count visit.
74 | self.n_visits += 1
75 | # Update Q, a running average of values for all visits.
76 | self.Q += 1.0 * (leaf_value - self.Q) / self.n_visits
77 |
78 | def update_recursive(self, leaf_value):
79 | """
80 | Like a call to update(), but applied recursively for all ancestors.
81 | """
82 | # If it is not root, this node's parent should be updated first.
83 | if self.parent:
84 | self.parent.update_recursive(-leaf_value)
85 | self.update(leaf_value)
86 |
87 | def get_value(self, c_puct):
88 | """
89 | Calculate and return the value for this node: a combination of leaf evaluations, Q, and
90 | this node's prior adjusted for its visit count, u
91 |
92 | c_puct -- a number in (0, inf) controlling the relative impact of values, Q, and
93 | prior probability, P, on this node's score.
94 | """
95 | self.u = c_puct * self.P * np.sqrt(self.parent.n_visits) / (1 + self.n_visits)
96 | return self.Q + self.u
97 |
98 | def is_leaf(self):
99 | """
100 | Check if leaf node (i.e. no nodes below this have been expanded).
101 | """
102 | return self.children == {}
103 |
104 | def is_root(self):
105 | return self.parent is None
106 |
107 | class MCTS(object):
108 | """
109 | A simple implementation of Monte Carlo Tree Search.
110 | """
111 |
112 | def __init__(self, game, policy_value_fn, c_puct=5, n_playout=10000, temp=0.5):
113 | """
114 | Arguments:
115 | policy_value_fn -- a function that takes in a game and board state and outputs a list of (action, probability)
116 | tuples and also a score in [-1, 1] (i.e. the expected value of the end game score from
117 | the current player's perspective) for the current player.
118 | c_puct -- a number in (0, inf) that controls how quickly exploration converges to the
119 | maximum-value policy, where a higher value means relying on the prior more
120 | temp -- temperature parameter in (0, 1] that controls the level of exploration;
121 | very small almost always picks the best path; 1.0 is random chance.
122 | """
123 | self.game = game
124 | self.root = Node(None, 1.0)
125 | self.policy = policy_value_fn
126 | self.c_puct = c_puct
127 | self.n_playout = n_playout
128 | self.temp = temp
129 |
130 | def playout(self, state):
131 | """Run a single playout from the root to the leaf, getting a value at the leaf and
132 | propagating it back through its parents. State is modified in-place, so a copy must be
133 | provided.
134 | Arguments:
135 | state -- a copy of the state.
136 | """
137 | node = self.root
138 | while True:
139 | if node.is_leaf():
140 | break
141 | # Greedily select next move. Depth first
142 | action, node = node.select(self.c_puct)
143 | state = self.game.result(state, action)
144 | # Evaluate the leaf using a network which outputs a list of (action, probability)
145 | # tuples p and also a score v in [-1, 1] for the current player.
146 |
147 | # Check for end of game.
148 | end = self.game.terminal_test(state)
149 | if not end:
150 | action_probs, leaf_value = self.policy(self.game, state)
151 | node.expand(action_probs)
152 | else:
153 | # for end state,return the "true" leaf_value
154 | leaf_value = self.game.utility(state, self.game.to_move(state))
155 | # Update value and visit count of nodes in this traversal.
156 | node.update_recursive(-leaf_value)
157 |
158 | def get_move_probs(self, state):
159 | """
160 | Runs all playouts sequentially and returns the available actions and their corresponding probabilities
161 | Arguments:
162 | state -- the current state, including both game state and the current player.
163 | Returns:
164 | the available actions and the corresponding probabilities
165 | """
166 | allowed_actions = self.game.actions(state)
167 | for n in range(self.n_playout):
168 | state_copy = copy.deepcopy(state)
169 | self.playout(state_copy)
170 | # calc the move probabilities based on the visit counts at the root node
171 | act_visits = [(act, node.n_visits) for act, node in self.root.children.items()
172 | if act in allowed_actions]
173 | if len(act_visits) == 0: ## How can there be children, but no valid actions?
174 | if len(allowed_actions) == 0:
175 | acts, act_probs = [], [] ## No possible moves!
176 | else:
177 | acts = allowed_actions
178 | act_probs = np.array([1/len(allowed_actions) for i in range(len(allowed_actions))])
179 | else:
180 | acts, visits = zip(*act_visits)
181 | act_probs = softmax(1.0/self.temp * np.log(np.array(visits) + 1e-10))
182 | return acts, act_probs
183 |
184 | def update_with_move(self, last_move):
185 | """
186 | Step forward in the tree, keeping everything we already know about the subtree.
187 | """
188 | if last_move in self.root.children:
189 | self.root = self.root.children[last_move]
190 | self.root.parent = None
191 | else:
192 | self.root = Node(None, 1.0)
193 |
194 | def __str__(self):
195 | return "MCTS"
196 |
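# ---------------------------------------------------------------------------
# Usage sketch (not part of the original file). Assumes a game object with the
# aima3.games.Game interface (actions, result, terminal_test, utility, to_move)
# and a uniform-random policy-value function:
#
#     from aima3.games import TicTacToe
#
#     def uniform_policy(game, state):
#         actions = game.actions(state)
#         priors = [(a, 1.0 / len(actions)) for a in actions]
#         return priors, 0.0   # no value estimate; rely on the search itself
#
#     game = TicTacToe()
#     mcts = MCTS(game, uniform_policy, n_playout=200)
#     acts, probs = mcts.get_move_probs(game.initial)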
--------------------------------------------------------------------------------
/aima3/mdp.py:
--------------------------------------------------------------------------------
1 | """Markov Decision Processes (Chapter 17)
2 |
3 | First we define an MDP, and the special case of a GridMDP, in which
4 | states are laid out in a 2-dimensional grid. We also represent a policy
5 | as a dictionary of {state:action} pairs, and a Utility function as a
6 | dictionary of {state:number} pairs. We then define the value_iteration
7 | and policy_iteration algorithms."""
8 |
9 | from .utils import argmax, vector_add, orientations, turn_right, turn_left
10 |
11 | import random
12 |
13 |
14 | class MDP:
15 |
16 | """A Markov Decision Process, defined by an initial state, transition model,
17 | and reward function. We also keep track of a gamma value, for use by
18 | algorithms. The transition model is represented somewhat differently from
19 | the text. Instead of P(s' | s, a) being a probability number for each
20 | state/state/action triplet, we instead have T(s, a) return a
21 | list of (p, s') pairs. We also keep track of the possible states,
22 | terminal states, and actions for each state. [page 646]"""
23 |
24 | def __init__(self, init, actlist, terminals, transitions={}, states=None, gamma=.9):
25 | if not (0 < gamma <= 1):
26 | raise ValueError("An MDP must have 0 < gamma <= 1")
27 |
28 | if states:
29 | self.states = states
30 | else:
31 | self.states = set()
32 | self.init = init
33 | self.actlist = actlist
34 | self.terminals = terminals
35 | self.transitions = transitions
36 | self.gamma = gamma
37 | self.reward = {}
38 |
39 | def R(self, state):
40 | """Return a numeric reward for this state."""
41 | return self.reward[state]
42 |
43 | def T(self, state, action):
44 | """Transition model. From a state and an action, return a list
45 | of (probability, result-state) pairs."""
46 | if not self.transitions:
47 | raise ValueError("Transition model is missing")
48 | else:
49 | return self.transitions[state][action]
50 |
51 | def actions(self, state):
52 | """Set of actions that can be performed in this state. By default, a
53 | fixed list of actions, except for terminal states. Override this
54 | method if you need to specialize by state."""
55 | if state in self.terminals:
56 | return [None]
57 | else:
58 | return self.actlist
59 |
60 |
61 | class GridMDP(MDP):
62 |
63 | """A two-dimensional grid MDP, as in [Figure 17.1]. All you have to do is
64 | specify the grid as a list of lists of rewards; use None for an obstacle
65 | (unreachable state). Also, you should specify the terminal states.
66 | An action is an (x, y) unit vector; e.g. (1, 0) means move east."""
67 |
68 | def __init__(self, grid, terminals, init=(0, 0), gamma=.9):
69 | grid.reverse() # because we want row 0 on bottom, not on top
70 | MDP.__init__(self, init, actlist=orientations,
71 | terminals=terminals, gamma=gamma)
72 | self.grid = grid
73 | self.rows = len(grid)
74 | self.cols = len(grid[0])
75 | for x in range(self.cols):
76 | for y in range(self.rows):
77 | self.reward[x, y] = grid[y][x]
78 | if grid[y][x] is not None:
79 | self.states.add((x, y))
80 |
81 | def T(self, state, action):
82 | if action is None:
83 | return [(0.0, state)]
84 | else:
85 | return [(0.8, self.go(state, action)),
86 | (0.1, self.go(state, turn_right(action))),
87 | (0.1, self.go(state, turn_left(action)))]
88 |
89 | def go(self, state, direction):
90 | """Return the state that results from going in this direction."""
91 | state1 = vector_add(state, direction)
92 | return state1 if state1 in self.states else state
93 |
94 | def to_grid(self, mapping):
95 | """Convert a mapping from (x, y) to v into a [[..., v, ...]] grid."""
96 | return list(reversed([[mapping.get((x, y), None)
97 | for x in range(self.cols)]
98 | for y in range(self.rows)]))
99 |
100 | def to_arrows(self, policy):
101 | chars = {
102 | (1, 0): '>', (0, 1): '^', (-1, 0): '<', (0, -1): 'v', None: '.'}
103 | return self.to_grid({s: chars[a] for (s, a) in policy.items()})
104 |
105 | # ______________________________________________________________________________
106 |
107 |
108 | """ [Figure 17.1]
109 | A 4x3 grid environment that presents the agent with a sequential decision problem.
110 | """
111 |
112 | sequential_decision_environment = GridMDP([[-0.04, -0.04, -0.04, +1],
113 | [-0.04, None, -0.04, -1],
114 | [-0.04, -0.04, -0.04, -0.04]],
115 | terminals=[(3, 2), (3, 1)])
116 |
117 | # ______________________________________________________________________________
118 |
119 |
120 | def value_iteration(mdp, epsilon=0.001):
121 | """Solving an MDP by value iteration. [Figure 17.4]"""
122 | U1 = {s: 0 for s in mdp.states}
123 | R, T, gamma = mdp.R, mdp.T, mdp.gamma
124 | while True:
125 | U = U1.copy()
126 | delta = 0
127 | for s in mdp.states:
128 | U1[s] = R(s) + gamma * max([sum([p * U[s1] for (p, s1) in T(s, a)])
129 | for a in mdp.actions(s)])
130 | delta = max(delta, abs(U1[s] - U[s]))
131 | if delta < epsilon * (1 - gamma) / gamma:
132 | return U
133 |
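# ---------------------------------------------------------------------------
# Note (not part of the original file): value_iteration above implements the
# Bellman update
#     U1[s] <- R(s) + gamma * max_a  sum over (p, s') in T(s, a) of  p * U[s']
# and stops once delta < epsilon * (1 - gamma) / gamma, the standard bound
# from the book under which the returned utilities are within epsilon of the
# true values.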
134 |
135 | def best_policy(mdp, U):
136 | """Given an MDP and a utility function U, determine the best policy,
137 | as a mapping from state to action. (Equation 17.4)"""
138 | pi = {}
139 | for s in mdp.states:
140 | pi[s] = argmax(mdp.actions(s), key=lambda a: expected_utility(a, s, U, mdp))
141 | return pi
142 |
143 |
144 | def expected_utility(a, s, U, mdp):
145 | """The expected utility of doing a in state s, according to the MDP and U."""
146 | return sum([p * U[s1] for (p, s1) in mdp.T(s, a)])
147 |
148 | # ______________________________________________________________________________
149 |
150 |
151 | def policy_iteration(mdp):
152 | """Solve an MDP by policy iteration [Figure 17.7]"""
153 | U = {s: 0 for s in mdp.states}
154 | pi = {s: random.choice(mdp.actions(s)) for s in mdp.states}
155 | while True:
156 | U = policy_evaluation(pi, U, mdp)
157 | unchanged = True
158 | for s in mdp.states:
159 | a = argmax(mdp.actions(s), key=lambda a: expected_utility(a, s, U, mdp))
160 | if a != pi[s]:
161 | pi[s] = a
162 | unchanged = False
163 | if unchanged:
164 | return pi
165 |
166 |
167 | def policy_evaluation(pi, U, mdp, k=20):
168 | """Return an updated utility mapping U from each state in the MDP to its
169 | utility, using an approximation (modified policy iteration)."""
170 | R, T, gamma = mdp.R, mdp.T, mdp.gamma
171 | for i in range(k):
172 | for s in mdp.states:
173 | U[s] = R(s) + gamma * sum([p * U[s1] for (p, s1) in T(s, pi[s])])
174 | return U
175 |
176 |
177 | __doc__ += """
178 | >>> pi = best_policy(sequential_decision_environment, value_iteration(sequential_decision_environment, .01))
179 |
180 | >>> sequential_decision_environment.to_arrows(pi)
181 | [['>', '>', '>', '.'], ['^', None, '^', '.'], ['^', '>', '^', '<']]
182 |
183 | >>> from aima3.utils import print_table
184 |
185 | >>> print_table(sequential_decision_environment.to_arrows(pi))
186 | > > > .
187 | ^ None ^ .
188 | ^ > ^ <
189 |
190 | >>> print_table(sequential_decision_environment.to_arrows(policy_iteration(sequential_decision_environment)))
191 | > > > .
192 | ^ None ^ .
193 | ^ > ^ <
194 | """ # noqa
195 |
--------------------------------------------------------------------------------
/aima3/rl.py:
--------------------------------------------------------------------------------
1 | """Reinforcement Learning (Chapter 21)"""
2 |
3 | from collections import defaultdict
4 | from .utils import argmax
5 | from .mdp import MDP, policy_evaluation
6 |
7 | import random
8 |
9 |
10 | class PassiveADPAgent:
11 |
12 | """Passive (non-learning) agent that uses adaptive dynamic programming
13 | on a given MDP and policy. [Figure 21.2]"""
14 |
15 | class ModelMDP(MDP):
16 | """ Class for implementing modifed Version of input MDP with
17 | an editable transition model P and a custom function T. """
18 | def __init__(self, init, actlist, terminals, gamma, states):
19 | super().__init__(init, actlist, terminals, gamma)
20 | nested_dict = lambda: defaultdict(nested_dict)
21 | # StackOverflow:whats-the-best-way-to-initialize-a-dict-of-dicts-in-python
22 | self.P = nested_dict()
23 |
24 | def T(self, s, a):
25 | """Returns a list of tuples with probabilities for states
26 | based on the learnt model P."""
27 | return [(prob, res) for (res, prob) in self.P[(s, a)].items()]
28 |
29 | def __init__(self, pi, mdp):
30 | self.pi = pi
31 | self.mdp = PassiveADPAgent.ModelMDP(mdp.init, mdp.actlist,
32 | mdp.terminals, mdp.gamma, mdp.states)
33 | self.U = {}
34 | self.Nsa = defaultdict(int)
35 | self.Ns1_sa = defaultdict(int)
36 | self.s = None
37 | self.a = None
38 |
39 | def __call__(self, percept):
40 | s1, r1 = percept
41 | self.mdp.states.add(s1) # Model keeps track of visited states.
42 | R, P, mdp, pi = self.mdp.reward, self.mdp.P, self.mdp, self.pi
43 | s, a, Nsa, Ns1_sa, U = self.s, self.a, self.Nsa, self.Ns1_sa, self.U
44 |
45 | if s1 not in R: # Reward is only available for visited states.
46 | U[s1] = R[s1] = r1
47 | if s is not None:
48 | Nsa[(s, a)] += 1
49 | Ns1_sa[(s1, s, a)] += 1
50 | # for each t such that Ns′|sa [t, s, a] is nonzero
51 | for t in [res for (res, state, act), freq in Ns1_sa.items()
52 | if (state, act) == (s, a) and freq != 0]:
53 | P[(s, a)][t] = Ns1_sa[(t, s, a)] / Nsa[(s, a)]
54 |
55 | U = policy_evaluation(pi, U, mdp)
56 | if s1 in mdp.terminals:
57 | self.s = self.a = None
58 | else:
59 | self.s, self.a = s1, self.pi[s1]
60 | return self.a
61 |
62 | def update_state(self, percept):
63 | '''To be overridden in most cases. The default case
64 | assumes the percept to be of type (state, reward)'''
65 | return percept
66 |
67 |
68 | class PassiveTDAgent:
69 | """The abstract class for a Passive (non-learning) agent that uses
70 | temporal differences to learn utility estimates. Override update_state
71 | method to convert percept to state and reward. The mdp being provided
72 | should be an instance of a subclass of the MDP Class. [Figure 21.4]
73 | """
74 |
75 | def __init__(self, pi, mdp, alpha=None):
76 |
77 | self.pi = pi
78 | self.U = {s: 0. for s in mdp.states}
79 | self.Ns = {s: 0 for s in mdp.states}
80 | self.s = None
81 | self.a = None
82 | self.r = None
83 | self.gamma = mdp.gamma
84 | self.terminals = mdp.terminals
85 |
86 | if alpha:
87 | self.alpha = alpha
88 | else:
89 | self.alpha = lambda n: 1./(1+n) # udacity video
90 |
91 | def __call__(self, percept):
92 | s1, r1 = self.update_state(percept)
93 | pi, U, Ns, s, r = self.pi, self.U, self.Ns, self.s, self.r
94 | alpha, gamma, terminals = self.alpha, self.gamma, self.terminals
95 | if not Ns[s1]:
96 | U[s1] = r1
97 | if s is not None:
98 | Ns[s] += 1
99 | U[s] += alpha(Ns[s]) * (r + gamma * U[s1] - U[s])
100 | if s1 in terminals:
101 | self.s = self.a = self.r = None
102 | else:
103 | self.s, self.a, self.r = s1, pi[s1], r1
104 | return self.a
105 |
106 | def update_state(self, percept):
107 | ''' To be overridden in most cases. The default case
108 | assumes the percept to be of type (state, reward)'''
109 | return percept
110 |
111 |
112 | class QLearningAgent:
113 | """ An exploratory Q-learning agent. It avoids having to learn the transition
114 | model because the Q-value of a state can be related directly to those of
115 | its neighbors. [Figure 21.8]
116 | """
117 | def __init__(self, mdp, Ne, Rplus, alpha=None):
118 |
119 | self.gamma = mdp.gamma
120 | self.terminals = mdp.terminals
121 | self.all_act = mdp.actlist
122 | self.Ne = Ne # iteration limit in exploration function
123 | self.Rplus = Rplus # large value to assign before iteration limit
124 | self.Q = defaultdict(float)
125 | self.Nsa = defaultdict(float)
126 | self.s = None
127 | self.a = None
128 | self.r = None
129 |
130 | if alpha:
131 | self.alpha = alpha
132 | else:
133 | self.alpha = lambda n: 1./(1+n) # udacity video
134 |
135 | def f(self, u, n):
136 | """ Exploration function. Returns fixed Rplus until
137 | agent has visited state, action a Ne number of times.
138 | Same as ADP agent in book."""
139 | if n < self.Ne:
140 | return self.Rplus
141 | else:
142 | return u
143 |
144 | def actions_in_state(self, state):
145 | """ Returns actions possible in given state.
146 | Useful for max and argmax. """
147 | if state in self.terminals:
148 | return [None]
149 | else:
150 | return self.all_act
151 |
152 | def __call__(self, percept):
153 | s1, r1 = self.update_state(percept)
154 | Q, Nsa, s, a, r = self.Q, self.Nsa, self.s, self.a, self.r
155 | alpha, gamma, terminals = self.alpha, self.gamma, self.terminals
156 | actions_in_state = self.actions_in_state
157 |
158 | if s in terminals:
159 | Q[s, None] = r1
160 | if s is not None:
161 | Nsa[s, a] += 1
162 | Q[s, a] += alpha(Nsa[s, a]) * (r + gamma * max(Q[s1, a1]
163 | for a1 in actions_in_state(s1)) - Q[s, a])
164 | if s in terminals:
165 | self.s = self.a = self.r = None
166 | else:
167 | self.s, self.r = s1, r1
168 | self.a = argmax(actions_in_state(s1), key=lambda a1: self.f(Q[s1, a1], Nsa[s1, a1]))
169 | return self.a
170 |
171 | def update_state(self, percept):
172 | ''' To be overridden in most cases. The default case
173 | assumes the percept to be of type (state, reward)'''
174 | return percept
175 |
176 |
177 | def run_single_trial(agent_program, mdp):
178 | ''' Execute trial for given agent_program
179 | and mdp. mdp should be an instance of subclass
180 | of mdp.MDP '''
181 |
182 | def take_single_action(mdp, s, a):
183 | '''
184 | Selects outcome of taking action a
185 | in state s. Weighted Sampling.
186 | '''
187 | x = random.uniform(0, 1)
188 | cumulative_probability = 0.0
189 | for probability_state in mdp.T(s, a):
190 | probability, state = probability_state
191 | cumulative_probability += probability
192 | if x < cumulative_probability:
193 | break
194 | return state
195 |
196 | current_state = mdp.init
197 | while True:
198 | current_reward = mdp.R(current_state)
199 | percept = (current_state, current_reward)
200 | next_action = agent_program(percept)
201 | if next_action is None:
202 | break
203 | current_state = take_single_action(mdp, current_state, next_action)
204 |
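# ---------------------------------------------------------------------------
# Usage sketch (not part of the original file). Assumes the 4x3 grid world
# from aima3.mdp and a fixed policy mapping each state to an action; the
# policy below is illustrative, not a computed optimum:
#
#     from aima3.mdp import sequential_decision_environment
#     north, south, west, east = (0, 1), (0, -1), (-1, 0), (1, 0)
#     policy = {
#         (0, 2): east,  (1, 2): east, (2, 2): east,  (3, 2): None,
#         (0, 1): north,               (2, 1): north, (3, 1): None,
#         (0, 0): north, (1, 0): west, (2, 0): west,  (3, 0): west,
#     }
#     agent = PassiveTDAgent(policy, sequential_decision_environment,
#                            alpha=lambda n: 60. / (59 + n))
#     for _ in range(200):
#         run_single_trial(agent, sequential_decision_environment)
#     print(agent.U)   # learned utility estimates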
--------------------------------------------------------------------------------
/aima3/tests/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ArtificialIntelligenceToolkit/aima3/4e6dc9b467d28015630274ff183b75b2ff4ab6eb/aima3/tests/__init__.py
--------------------------------------------------------------------------------
/aima3/tests/pytest.ini:
--------------------------------------------------------------------------------
1 | [pytest]
2 | filterwarnings =
3 | ignore::ResourceWarning
--------------------------------------------------------------------------------
/aima3/tests/test_agents.py:
--------------------------------------------------------------------------------
1 | import random
2 | from aima3.agents import Direction
3 | from aima3.agents import Agent
4 | from aima3.agents import (ReflexVacuumAgent, ModelBasedVacuumAgent, TrivialVacuumEnvironment, compare_agents,
5 | RandomVacuumAgent)
6 |
7 |
8 | random.seed("aima-python")
9 |
10 |
11 | def test_move_forward():
12 | d = Direction("up")
13 | l1 = d.move_forward((0, 0))
14 | assert l1 == (0, -1)
15 |
16 | d = Direction(Direction.R)
17 | l1 = d.move_forward((0, 0))
18 | assert l1 == (1, 0)
19 |
20 | d = Direction(Direction.D)
21 | l1 = d.move_forward((0, 0))
22 | assert l1 == (0, 1)
23 |
24 | d = Direction("left")
25 | l1 = d.move_forward((0, 0))
26 | assert l1 == (-1, 0)
27 |
28 | l2 = d.move_forward((1, 0))
29 | assert l2 == (0, 0)
30 |
31 |
32 | def test_add():
33 | d = Direction(Direction.U)
34 | l1 = d + "right"
35 | l2 = d + "left"
36 | assert l1.direction == Direction.R
37 | assert l2.direction == Direction.L
38 |
39 | d = Direction("right")
40 | l1 = d.__add__(Direction.L)
41 | l2 = d.__add__(Direction.R)
42 | assert l1.direction == "up"
43 | assert l2.direction == "down"
44 |
45 | d = Direction("down")
46 | l1 = d.__add__("right")
47 | l2 = d.__add__("left")
48 | assert l1.direction == Direction.L
49 | assert l2.direction == Direction.R
50 |
51 | d = Direction(Direction.L)
52 | l1 = d + Direction.R
53 | l2 = d + Direction.L
54 | assert l1.direction == Direction.U
55 | assert l2.direction == Direction.D
56 |
57 |
58 | def test_RandomVacuumAgent():
59 | # create an object of the RandomVacuumAgent
60 | agent = RandomVacuumAgent()
61 | # create an object of TrivialVacuumEnvironment
62 | environment = TrivialVacuumEnvironment()
63 | # add agent to the environment
64 | environment.add_thing(agent)
65 | # run the environment
66 | environment.run()
67 | # check final status of the environment
68 |     assert environment.status == {(1, 0): 'Clean', (0, 0): 'Clean'}
69 |
70 |
71 | def test_ReflexVacuumAgent():
72 | # create an object of the ReflexVacuumAgent
73 | agent = ReflexVacuumAgent()
74 | # create an object of TrivialVacuumEnvironment
75 | environment = TrivialVacuumEnvironment()
76 | # add agent to the environment
77 | environment.add_thing(agent)
78 | # run the environment
79 | environment.run()
80 | # check final status of the environment
81 |     assert environment.status == {(1, 0): 'Clean', (0, 0): 'Clean'}
82 |
83 |
84 | def test_ModelBasedVacuumAgent():
85 | # create an object of the ModelBasedVacuumAgent
86 | agent = ModelBasedVacuumAgent()
87 | # create an object of TrivialVacuumEnvironment
88 | environment = TrivialVacuumEnvironment()
89 | # add agent to the environment
90 | environment.add_thing(agent)
91 | # run the environment
92 | environment.run()
93 | # check final status of the environment
94 |     assert environment.status == {(1, 0): 'Clean', (0, 0): 'Clean'}
95 |
96 |
97 | def test_compare_agents():
98 | environment = TrivialVacuumEnvironment
99 | agents = [ModelBasedVacuumAgent, ReflexVacuumAgent]
100 |
101 | result = compare_agents(environment, agents)
102 |     performance_ModelBasedVacuumAgent = result[0][1]
103 |     performance_ReflexVacuumAgent = result[1][1]
104 |
105 | # The performance of ModelBasedVacuumAgent will be at least as good as that of
106 | # ReflexVacuumAgent, since ModelBasedVacuumAgent can identify when it has
107 | # reached the terminal state (both locations being clean) and will perform
108 | # NoOp leading to 0 performance change, whereas ReflexVacuumAgent cannot
109 | # identify the terminal state and thus will keep moving, leading to worse
110 | # performance compared to ModelBasedVacuumAgent.
111 |     assert performance_ReflexVacuumAgent <= performance_ModelBasedVacuumAgent
112 |
113 |
114 | def test_Agent():
115 | def constant_prog(percept):
116 | return percept
117 | agent = Agent(constant_prog)
118 | result = agent.program(5)
119 | assert result == 5
120 |
--------------------------------------------------------------------------------
/aima3/tests/test_csp.py:
--------------------------------------------------------------------------------
1 | import pytest
2 | from aima3.utils import failure_test
3 | from aima3.csp import *
4 | import random
5 |
6 |
7 | random.seed("aima-python")
8 |
9 |
10 | def test_csp_assign():
11 | var = 10
12 | val = 5
13 | assignment = {}
14 | australia.assign(var, val, assignment)
15 |
16 | assert australia.nassigns == 1
17 | assert assignment[var] == val
18 |
19 |
20 | def test_csp_unassign():
21 | var = 10
22 | assignment = {var: 5}
23 | australia.unassign(var, assignment)
24 |
25 | assert var not in assignment
26 |
27 |
28 | def test_csp_nconflicts():
29 | map_coloring_test = MapColoringCSP(list('RGB'), 'A: B C; B: C; C: ')
30 | assignment = {'A': 'R', 'B': 'G'}
31 | var = 'C'
32 | val = 'R'
33 | assert map_coloring_test.nconflicts(var, val, assignment) == 1
34 |
35 | val = 'B'
36 | assert map_coloring_test.nconflicts(var, val, assignment) == 0
37 |
38 |
39 | def test_csp_actions():
40 | map_coloring_test = MapColoringCSP(list('123'), 'A: B C; B: C; C: ')
41 |
42 | state = {'A': '1', 'B': '2', 'C': '3'}
43 | assert map_coloring_test.actions(state) == []
44 |
45 | state = {'A': '1', 'B': '3'}
46 | assert map_coloring_test.actions(state) == [('C', '2')]
47 |
48 | state = {'A': '1', 'C': '2'}
49 | assert map_coloring_test.actions(state) == [('B', '3')]
50 |
51 | state = (('A', '1'), ('B', '3'))
52 | assert map_coloring_test.actions(state) == [('C', '2')]
53 |
54 | state = {'A': '1'}
55 | assert (map_coloring_test.actions(state) == [('C', '2'), ('C', '3')] or
56 | map_coloring_test.actions(state) == [('B', '2'), ('B', '3')])
57 |
58 |
59 | def test_csp_result():
60 | map_coloring_test = MapColoringCSP(list('123'), 'A: B C; B: C; C: ')
61 |
62 | state = (('A', '1'), ('B', '3'))
63 | action = ('C', '2')
64 |
65 | assert map_coloring_test.result(state, action) == (('A', '1'), ('B', '3'), ('C', '2'))
66 |
67 |
68 | def test_csp_goal_test():
69 | map_coloring_test = MapColoringCSP(list('123'), 'A: B C; B: C; C: ')
70 | state = (('A', '1'), ('B', '3'), ('C', '2'))
71 | assert map_coloring_test.goal_test(state) is True
72 |
73 | state = (('A', '1'), ('C', '2'))
74 | assert map_coloring_test.goal_test(state) is False
75 |
76 |
77 | def test_csp_support_pruning():
78 | map_coloring_test = MapColoringCSP(list('123'), 'A: B C; B: C; C: ')
79 | map_coloring_test.support_pruning()
80 | assert map_coloring_test.curr_domains == {'A': ['1', '2', '3'], 'B': ['1', '2', '3'],
81 | 'C': ['1', '2', '3']}
82 |
83 |
84 | def test_csp_suppose():
85 | map_coloring_test = MapColoringCSP(list('123'), 'A: B C; B: C; C: ')
86 | var = 'A'
87 | value = '1'
88 |
89 | removals = map_coloring_test.suppose(var, value)
90 |
91 | assert removals == [('A', '2'), ('A', '3')]
92 | assert map_coloring_test.curr_domains == {'A': ['1'], 'B': ['1', '2', '3'],
93 | 'C': ['1', '2', '3']}
94 |
95 |
96 | def test_csp_prune():
97 | map_coloring_test = MapColoringCSP(list('123'), 'A: B C; B: C; C: ')
98 | removals = None
99 | var = 'A'
100 | value = '3'
101 |
102 | map_coloring_test.support_pruning()
103 | map_coloring_test.prune(var, value, removals)
104 | assert map_coloring_test.curr_domains == {'A': ['1', '2'], 'B': ['1', '2', '3'],
105 | 'C': ['1', '2', '3']}
106 | assert removals is None
107 |
108 | map_coloring_test = MapColoringCSP(list('123'), 'A: B C; B: C; C: ')
109 | removals = [('A', '2')]
110 | map_coloring_test.support_pruning()
111 | map_coloring_test.prune(var, value, removals)
112 | assert map_coloring_test.curr_domains == {'A': ['1', '2'], 'B': ['1', '2', '3'],
113 | 'C': ['1', '2', '3']}
114 | assert removals == [('A', '2'), ('A', '3')]
115 |
116 |
117 | def test_csp_choices():
118 | map_coloring_test = MapColoringCSP(list('123'), 'A: B C; B: C; C: ')
119 | var = 'A'
120 | assert map_coloring_test.choices(var) == ['1', '2', '3']
121 |
122 | map_coloring_test.support_pruning()
123 | removals = None
124 | value = '3'
125 | map_coloring_test.prune(var, value, removals)
126 | assert map_coloring_test.choices(var) == ['1', '2']
127 |
128 |
129 | def test_csp_infer_assignment():
130 |     map_coloring_test = MapColoringCSP(list('123'), 'A: B C; B: C; C: ')
131 |     assert map_coloring_test.infer_assignment() == {}
132 | 
133 |     var = 'A'
134 |     value = '3'
135 |     map_coloring_test.prune(var, value, None)
136 |     value = '1'
137 |     map_coloring_test.prune(var, value, None)
138 | 
139 |     assert map_coloring_test.infer_assignment() == {'A': '2'}
140 |
141 |
142 | def test_csp_restore():
143 | map_coloring_test = MapColoringCSP(list('123'), 'A: B C; B: C; C: ')
144 | map_coloring_test.curr_domains = {'A': ['2', '3'], 'B': ['1'], 'C': ['2', '3']}
145 | removals = [('A', '1'), ('B', '2'), ('B', '3')]
146 |
147 | map_coloring_test.restore(removals)
148 |
149 | assert map_coloring_test.curr_domains == {'A': ['2', '3', '1'], 'B': ['1', '2', '3'],
150 | 'C': ['2', '3']}
151 |
152 |
153 | def test_csp_conflicted_vars():
154 | map_coloring_test = MapColoringCSP(list('123'), 'A: B C; B: C; C: ')
155 |
156 | current = {}
157 | var = 'A'
158 | val = '1'
159 | map_coloring_test.assign(var, val, current)
160 |
161 | var = 'B'
162 | val = '3'
163 | map_coloring_test.assign(var, val, current)
164 |
165 | var = 'C'
166 | val = '3'
167 | map_coloring_test.assign(var, val, current)
168 |
169 | conflicted_vars = map_coloring_test.conflicted_vars(current)
170 |
171 | assert (conflicted_vars == ['B', 'C'] or conflicted_vars == ['C', 'B'])
172 |
173 |
174 | def test_revise():
175 | neighbors = parse_neighbors('A: B; B: ')
176 | domains = {'A': [0], 'B': [4]}
177 | constraints = lambda X, x, Y, y: x % 2 == 0 and (x+y) == 4
178 |
179 | csp = CSP(variables=None, domains=domains, neighbors=neighbors, constraints=constraints)
180 | csp.support_pruning()
181 | Xi = 'A'
182 | Xj = 'B'
183 | removals = []
184 |
185 | assert revise(csp, Xi, Xj, removals) is False
186 | assert len(removals) == 0
187 |
188 | domains = {'A': [0, 1, 2, 3, 4], 'B': [0, 1, 2, 3, 4]}
189 | csp = CSP(variables=None, domains=domains, neighbors=neighbors, constraints=constraints)
190 | csp.support_pruning()
191 |
192 | assert revise(csp, Xi, Xj, removals) is True
193 | assert removals == [('A', 1), ('A', 3)]
194 |
195 |
196 | def test_AC3():
197 | neighbors = parse_neighbors('A: B; B: ')
198 | domains = {'A': [0, 1, 2, 3, 4], 'B': [0, 1, 2, 3, 4]}
199 | constraints = lambda X, x, Y, y: x % 2 == 0 and (x+y) == 4 and y % 2 != 0
200 | removals = []
201 |
202 | csp = CSP(variables=None, domains=domains, neighbors=neighbors, constraints=constraints)
203 |
204 | assert AC3(csp, removals=removals) is False
205 |
206 | constraints = lambda X, x, Y, y: (x % 2) == 0 and (x+y) == 4
207 | removals = []
208 | csp = CSP(variables=None, domains=domains, neighbors=neighbors, constraints=constraints)
209 |
210 | assert AC3(csp, removals=removals) is True
211 | assert (removals == [('A', 1), ('A', 3), ('B', 1), ('B', 3)] or
212 | removals == [('B', 1), ('B', 3), ('A', 1), ('A', 3)])
213 |
214 |
215 | def test_first_unassigned_variable():
216 | map_coloring_test = MapColoringCSP(list('123'), 'A: B C; B: C; C: ')
217 | assignment = {'A': '1', 'B': '2'}
218 | assert first_unassigned_variable(assignment, map_coloring_test) == 'C'
219 |
220 | assignment = {'B': '1'}
221 | assert (first_unassigned_variable(assignment, map_coloring_test) == 'A' or
222 | first_unassigned_variable(assignment, map_coloring_test) == 'C')
223 |
224 |
225 | def test_num_legal_values():
226 | map_coloring_test = MapColoringCSP(list('123'), 'A: B C; B: C; C: ')
227 | map_coloring_test.support_pruning()
228 | var = 'A'
229 | assignment = {}
230 |
231 | assert num_legal_values(map_coloring_test, var, assignment) == 3
232 |
233 | map_coloring_test = MapColoringCSP(list('RGB'), 'A: B C; B: C; C: ')
234 | assignment = {'A': 'R', 'B': 'G'}
235 | var = 'C'
236 |
237 | assert num_legal_values(map_coloring_test, var, assignment) == 1
238 |
239 |
240 | def test_mrv():
241 | neighbors = parse_neighbors('A: B; B: C; C: ')
242 | domains = {'A': [0, 1, 2, 3, 4], 'B': [4], 'C': [0, 1, 2, 3, 4]}
243 | constraints = lambda X, x, Y, y: x % 2 == 0 and (x+y) == 4
244 | csp = CSP(variables=None, domains=domains, neighbors=neighbors, constraints=constraints)
245 | assignment = {'A': 0}
246 |
247 | assert mrv(assignment, csp) == 'B'
248 |
249 | domains = {'A': [0, 1, 2, 3, 4], 'B': [0, 1, 2, 3, 4], 'C': [0, 1, 2, 3, 4]}
250 | csp = CSP(variables=None, domains=domains, neighbors=neighbors, constraints=constraints)
251 |
252 | assert (mrv(assignment, csp) == 'B' or
253 | mrv(assignment, csp) == 'C')
254 |
255 | domains = {'A': [0, 1, 2, 3, 4], 'B': [0, 1, 2, 3, 4, 5, 6], 'C': [0, 1, 2, 3, 4]}
256 | csp = CSP(variables=None, domains=domains, neighbors=neighbors, constraints=constraints)
257 | csp.support_pruning()
258 |
259 | assert mrv(assignment, csp) == 'C'
260 |
261 |
262 | def test_unordered_domain_values():
263 | map_coloring_test = MapColoringCSP(list('123'), 'A: B C; B: C; C: ')
264 | assignment = None
265 | assert unordered_domain_values('A', assignment, map_coloring_test) == ['1', '2', '3']
266 |
267 |
268 | def test_lcv():
269 | neighbors = parse_neighbors('A: B; B: C; C: ')
270 | domains = {'A': [0, 1, 2, 3, 4], 'B': [0, 1, 2, 3, 4, 5], 'C': [0, 1, 2, 3, 4]}
271 | constraints = lambda X, x, Y, y: x % 2 == 0 and (x+y) == 4
272 | csp = CSP(variables=None, domains=domains, neighbors=neighbors, constraints=constraints)
273 | assignment = {'A': 0}
274 |
275 | var = 'B'
276 |
277 | assert lcv(var, assignment, csp) == [4, 0, 1, 2, 3, 5]
278 | assignment = {'A': 1, 'C': 3}
279 |
280 | constraints = lambda X, x, Y, y: (x + y) % 2 == 0 and (x + y) < 5
281 | csp = CSP(variables=None, domains=domains, neighbors=neighbors, constraints=constraints)
282 |
283 | assert lcv(var, assignment, csp) == [1, 3, 0, 2, 4, 5]
284 |
285 |
286 | def test_forward_checking():
287 | neighbors = parse_neighbors('A: B; B: C; C: ')
288 | domains = {'A': [0, 1, 2, 3, 4], 'B': [0, 1, 2, 3, 4, 5], 'C': [0, 1, 2, 3, 4]}
289 | constraints = lambda X, x, Y, y: (x + y) % 2 == 0 and (x + y) < 8
290 | csp = CSP(variables=None, domains=domains, neighbors=neighbors, constraints=constraints)
291 |
292 | csp.support_pruning()
293 | A_curr_domains = csp.curr_domains['A']
294 | C_curr_domains = csp.curr_domains['C']
295 |
296 | var = 'B'
297 | value = 3
298 |     assignment = {'A': 1, 'C': 3}
299 |     assert forward_checking(csp, var, value, assignment, None) is True
300 | assert csp.curr_domains['A'] == A_curr_domains
301 | assert csp.curr_domains['C'] == C_curr_domains
302 |
303 | assignment = {'C': 3}
304 |
305 |     assert forward_checking(csp, var, value, assignment, None) is True
306 | assert csp.curr_domains['A'] == [1, 3]
307 |
308 | csp = CSP(variables=None, domains=domains, neighbors=neighbors, constraints=constraints)
309 | csp.support_pruning()
310 |
311 | assignment = {}
312 |     assert forward_checking(csp, var, value, assignment, None) is True
313 | assert csp.curr_domains['A'] == [1, 3]
314 | assert csp.curr_domains['C'] == [1, 3]
315 |
316 | csp = CSP(variables=None, domains=domains, neighbors=neighbors, constraints=constraints)
317 | domains = {'A': [0, 1, 2, 3, 4], 'B': [0, 1, 2, 3, 4, 7], 'C': [0, 1, 2, 3, 4]}
318 | csp.support_pruning()
319 |
320 | value = 7
321 | assignment = {}
322 |     assert forward_checking(csp, var, value, assignment, None) is False
323 | assert (csp.curr_domains['A'] == [] or csp.curr_domains['C'] == [])
324 |
325 |
326 | def test_backtracking_search():
327 | assert backtracking_search(australia)
328 | assert backtracking_search(australia, select_unassigned_variable=mrv)
329 | assert backtracking_search(australia, order_domain_values=lcv)
330 | assert backtracking_search(australia, select_unassigned_variable=mrv,
331 | order_domain_values=lcv)
332 | assert backtracking_search(australia, inference=forward_checking)
333 | assert backtracking_search(australia, inference=mac)
334 | assert backtracking_search(usa, select_unassigned_variable=mrv,
335 | order_domain_values=lcv, inference=mac)
336 |
337 |
338 | def test_min_conflicts():
339 | assert min_conflicts(australia)
340 | assert min_conflicts(france)
341 |
342 | tests = [(usa, None)] * 3
343 | assert failure_test(min_conflicts, tests) >= 1/3
344 |
345 | australia_impossible = MapColoringCSP(list('RG'), 'SA: WA NT Q NSW V; NT: WA Q; NSW: Q V; T: ')
346 | assert min_conflicts(australia_impossible, 1000) is None
347 |
348 |
349 | def test_universal_dict():
350 | d = UniversalDict(42)
351 | assert d['life'] == 42
352 |
353 |
354 | def test_parse_neighbors():
355 | assert parse_neighbors('X: Y Z; Y: Z') == {'Y': ['X', 'Z'], 'X': ['Y', 'Z'], 'Z': ['X', 'Y']}
356 |
357 |
358 | def test_topological_sort():
359 | root = 'NT'
360 |     Sort, Parents = topological_sort(australia, root)
361 | 
362 |     assert Sort == ['NT', 'SA', 'Q', 'NSW', 'V', 'WA']
363 |     assert Parents['NT'] is None
364 | assert Parents['SA'] == 'NT'
365 | assert Parents['Q'] == 'SA'
366 | assert Parents['NSW'] == 'Q'
367 | assert Parents['V'] == 'NSW'
368 | assert Parents['WA'] == 'SA'
369 |
370 |
371 | def test_tree_csp_solver():
372 | australia_small = MapColoringCSP(list('RB'),
373 | 'NT: WA Q; NSW: Q V')
374 | tcs = tree_csp_solver(australia_small)
375 | assert (tcs['NT'] == 'R' and tcs['WA'] == 'B' and tcs['Q'] == 'B' and tcs['NSW'] == 'R' and tcs['V'] == 'B') or \
376 | (tcs['NT'] == 'B' and tcs['WA'] == 'R' and tcs['Q'] == 'R' and tcs['NSW'] == 'B' and tcs['V'] == 'R')
377 |
378 |
379 | if __name__ == "__main__":
380 | pytest.main()
381 |
--------------------------------------------------------------------------------
/aima3/tests/test_games.py:
--------------------------------------------------------------------------------
1 | from aima3.games import *
2 |
3 | # Creating the game instances
4 | f52 = Fig52Game()
5 | ttt = TicTacToe()
6 |
7 |
8 | def gen_state(to_move='X', x_positions=[], o_positions=[], h=3, v=3, k=3):
9 | """Given whose turn it is to move, the positions of X's on the board, the
10 | positions of O's on the board, and, (optionally) number of rows, columns
11 | and how many consecutive X's or O's required to win, return the corresponding
12 | game state"""
13 |
14 | moves = set([(x, y) for x in range(1, h + 1) for y in range(1, v + 1)]) \
15 | - set(x_positions) - set(o_positions)
16 | moves = list(moves)
17 | board = {}
18 | for pos in x_positions:
19 | board[pos] = 'X'
20 | for pos in o_positions:
21 | board[pos] = 'O'
22 | return GameState(to_move=to_move, utility=0, board=board, moves=moves)
23 |
24 |
25 | def test_minimax_decision():
26 | assert minimax_decision('A', f52) == 'a1'
27 | assert minimax_decision('B', f52) == 'b1'
28 | assert minimax_decision('C', f52) == 'c1'
29 | assert minimax_decision('D', f52) == 'd3'
30 |
31 |
32 | def test_alphabeta_search():
33 | assert alphabeta_search('A', f52) == 'a1'
34 | assert alphabeta_search('B', f52) == 'b1'
35 | assert alphabeta_search('C', f52) == 'c1'
36 | assert alphabeta_search('D', f52) == 'd3'
37 |
38 | state = gen_state(to_move='X', x_positions=[(1, 1), (3, 3)],
39 | o_positions=[(1, 2), (3, 2)])
40 | assert alphabeta_search(state, ttt) == (2, 2)
41 |
42 | state = gen_state(to_move='O', x_positions=[(1, 1), (3, 1), (3, 3)],
43 | o_positions=[(1, 2), (3, 2)])
44 | assert alphabeta_search(state, ttt) == (2, 2)
45 |
46 | state = gen_state(to_move='O', x_positions=[(1, 1)],
47 | o_positions=[])
48 | assert alphabeta_search(state, ttt) == (2, 2)
49 |
50 | state = gen_state(to_move='X', x_positions=[(1, 1), (3, 1)],
51 | o_positions=[(2, 2), (3, 1)])
52 | assert alphabeta_search(state, ttt) == (1, 3)
53 |
54 |
55 | def test_random_tests():
56 | assert Fig52Game().play_game(alphabeta_player, alphabeta_player) == 3
57 |
58 | # The player 'X' (one who plays first) in TicTacToe never loses:
59 | assert ttt.play_game(alphabeta_player, alphabeta_player) >= 0
60 |
61 | # The player 'X' (one who plays first) in TicTacToe never loses:
62 | assert ttt.play_game(alphabeta_player, random_player) >= 0
63 |
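64 | # Illustrative sketch (added; not part of the original tests): gen_state just
65 | # places the given marks on the board, as its implementation above shows, e.g.
66 | #     s = gen_state(to_move='X', x_positions=[(1, 1)], o_positions=[(2, 2)])
67 | #     assert s.board == {(1, 1): 'X', (2, 2): 'O'} and s.to_move == 'X'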
--------------------------------------------------------------------------------
/aima3/tests/test_learning.py:
--------------------------------------------------------------------------------
1 | import pytest
2 | import math
3 | import random
4 | from aima3.utils import open_data
5 | from aima3.learning import *
6 |
7 |
8 | random.seed("aima-python")
9 |
10 |
11 | def test_euclidean():
12 | distance = euclidean_distance([1, 2], [3, 4])
13 | assert round(distance, 2) == 2.83
14 |
15 | distance = euclidean_distance([1, 2, 3], [4, 5, 6])
16 | assert round(distance, 2) == 5.2
17 |
18 | distance = euclidean_distance([0, 0, 0], [0, 0, 0])
19 | assert distance == 0
20 |
21 |
22 | def test_rms_error():
23 | assert rms_error([2, 2], [2, 2]) == 0
24 | assert rms_error((0, 0), (0, 1)) == math.sqrt(0.5)
25 | assert rms_error((1, 0), (0, 1)) == 1
26 | assert rms_error((0, 0), (0, -1)) == math.sqrt(0.5)
27 | assert rms_error((0, 0.5), (0, -0.5)) == math.sqrt(0.5)
28 |
29 |
30 | def test_manhattan_distance():
31 | assert manhattan_distance([2, 2], [2, 2]) == 0
32 | assert manhattan_distance([0, 0], [0, 1]) == 1
33 | assert manhattan_distance([1, 0], [0, 1]) == 2
34 | assert manhattan_distance([0, 0], [0, -1]) == 1
35 | assert manhattan_distance([0, 0.5], [0, -0.5]) == 1
36 |
37 |
38 | def test_mean_boolean_error():
39 | assert mean_boolean_error([1, 1], [0, 0]) == 1
40 | assert mean_boolean_error([0, 1], [1, 0]) == 1
41 | assert mean_boolean_error([1, 1], [0, 1]) == 0.5
42 | assert mean_boolean_error([0, 0], [0, 0]) == 0
43 | assert mean_boolean_error([1, 1], [1, 1]) == 0
44 |
45 |
46 | def test_mean_error():
47 | assert mean_error([2, 2], [2, 2]) == 0
48 | assert mean_error([0, 0], [0, 1]) == 0.5
49 | assert mean_error([1, 0], [0, 1]) == 1
50 | assert mean_error([0, 0], [0, -1]) == 0.5
51 | assert mean_error([0, 0.5], [0, -0.5]) == 0.5
52 |
53 |
54 | def test_exclude():
55 | iris = DataSet(name='iris', exclude=[3])
56 | assert iris.inputs == [0, 1, 2]
57 |
58 |
59 | def test_parse_csv():
60 | Iris = open_data('iris.csv').read()
61 | assert parse_csv(Iris)[0] == [5.1, 3.5, 1.4, 0.2, 'setosa']
62 |
63 |
64 | def test_weighted_mode():
65 | assert weighted_mode('abbaa', [1, 2, 3, 1, 2]) == 'b'
66 |
67 |
68 | def test_weighted_replicate():
69 | assert weighted_replicate('ABC', [1, 2, 1], 4) == ['A', 'B', 'B', 'C']
70 |
71 |
72 | def test_means_and_deviation():
73 | iris = DataSet(name="iris")
74 |
75 | means, deviations = iris.find_means_and_deviations()
76 |
77 | assert round(means["setosa"][0], 3) == 5.006
78 | assert round(means["versicolor"][0], 3) == 5.936
79 | assert round(means["virginica"][0], 3) == 6.588
80 |
81 | assert round(deviations["setosa"][0], 3) == 0.352
82 | assert round(deviations["versicolor"][0], 3) == 0.516
83 | assert round(deviations["virginica"][0], 3) == 0.636
84 |
85 |
86 | def test_plurality_learner():
87 | zoo = DataSet(name="zoo")
88 |
89 | pL = PluralityLearner(zoo)
90 | assert pL([1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 4, 1, 0, 1]) == "mammal"
91 |
92 |
93 | def test_naive_bayes():
94 | iris = DataSet(name="iris")
95 |
96 | # Discrete
97 | nBD = NaiveBayesLearner(iris, continuous=False)
98 | assert nBD([5, 3, 1, 0.1]) == "setosa"
99 | assert nBD([6, 3, 4, 1.1]) == "versicolor"
100 | assert nBD([7.7, 3, 6, 2]) == "virginica"
101 |
102 | # Continuous
103 | nBC = NaiveBayesLearner(iris, continuous=True)
104 | assert nBC([5, 3, 1, 0.1]) == "setosa"
105 | assert nBC([6, 5, 3, 1.5]) == "versicolor"
106 | assert nBC([7, 3, 6.5, 2]) == "virginica"
107 |
108 | # Simple
109 | data1 = 'a'*50 + 'b'*30 + 'c'*15
110 | dist1 = CountingProbDist(data1)
111 | data2 = 'a'*30 + 'b'*45 + 'c'*20
112 | dist2 = CountingProbDist(data2)
113 | data3 = 'a'*20 + 'b'*20 + 'c'*35
114 | dist3 = CountingProbDist(data3)
115 |
116 | dist = {('First', 0.5): dist1, ('Second', 0.3): dist2, ('Third', 0.2): dist3}
117 | nBS = NaiveBayesLearner(dist, simple=True)
118 | assert nBS('aab') == 'First'
119 | assert nBS(['b', 'b']) == 'Second'
120 | assert nBS('ccbcc') == 'Third'
121 |
122 |
123 | def test_k_nearest_neighbors():
124 | iris = DataSet(name="iris")
125 | kNN = NearestNeighborLearner(iris, k=3)
126 | assert kNN([5, 3, 1, 0.1]) == "setosa"
127 | assert kNN([5, 3, 1, 0.1]) == "setosa"
128 | assert kNN([6, 5, 3, 1.5]) == "versicolor"
129 | assert kNN([7.5, 4, 6, 2]) == "virginica"
130 |
131 |
132 | def test_truncated_svd():
133 | test_mat = [[17, 0],
134 | [0, 11]]
135 | _, _, eival = truncated_svd(test_mat)
136 | assert isclose(abs(eival[0]), 17)
137 | assert isclose(abs(eival[1]), 11)
138 |
139 | test_mat = [[17, 0],
140 | [0, -34]]
141 | _, _, eival = truncated_svd(test_mat)
142 | assert isclose(abs(eival[0]), 34)
143 | assert isclose(abs(eival[1]), 17)
144 |
145 | test_mat = [[1, 0, 0, 0, 2],
146 | [0, 0, 3, 0, 0],
147 | [0, 0, 0, 0, 0],
148 | [0, 2, 0, 0, 0]]
149 | _, _, eival = truncated_svd(test_mat)
150 | assert isclose(abs(eival[0]), 3)
151 | assert isclose(abs(eival[1]), 5**0.5)
152 |
153 | test_mat = [[3, 2, 2],
154 | [2, 3, -2]]
155 | _, _, eival = truncated_svd(test_mat)
156 | assert isclose(abs(eival[0]), 5)
157 | assert isclose(abs(eival[1]), 3)
158 |
159 |
160 | def test_decision_tree_learner():
161 | iris = DataSet(name="iris")
162 | dTL = DecisionTreeLearner(iris)
163 | assert dTL([5, 3, 1, 0.1]) == "setosa"
164 | assert dTL([6, 5, 3, 1.5]) == "versicolor"
165 | assert dTL([7.5, 4, 6, 2]) == "virginica"
166 |
167 |
168 | def test_random_forest():
169 | iris = DataSet(name="iris")
170 | rF = RandomForest(iris)
171 | tests = [([5.0, 3.0, 1.0, 0.1], "setosa"),
172 | ([5.1, 3.3, 1.1, 0.1], "setosa"),
173 | ([6.0, 5.0, 3.0, 1.0], "versicolor"),
174 | ([6.1, 2.2, 3.5, 1.0], "versicolor"),
175 | ([7.5, 4.1, 6.2, 2.3], "virginica"),
176 | ([7.3, 3.7, 6.1, 2.5], "virginica")]
177 | assert grade_learner(rF, tests) >= 1/3
178 |
179 |
180 | def test_neural_network_learner():
181 | iris = DataSet(name="iris")
182 | classes = ["setosa", "versicolor", "virginica"]
183 | iris.classes_to_numbers(classes)
184 | nNL = NeuralNetLearner(iris, [5], 0.15, 75)
185 | tests = [([5.0, 3.1, 0.9, 0.1], 0),
186 | ([5.1, 3.5, 1.0, 0.0], 0),
187 | ([4.9, 3.3, 1.1, 0.1], 0),
188 | ([6.0, 3.0, 4.0, 1.1], 1),
189 | ([6.1, 2.2, 3.5, 1.0], 1),
190 | ([5.9, 2.5, 3.3, 1.1], 1),
191 | ([7.5, 4.1, 6.2, 2.3], 2),
192 | ([7.3, 4.0, 6.1, 2.4], 2),
193 | ([7.0, 3.3, 6.1, 2.5], 2)]
194 | assert grade_learner(nNL, tests) >= 1/3
195 | assert err_ratio(nNL, iris) < 0.2
196 |
197 |
198 | def test_perceptron():
199 | iris = DataSet(name="iris")
200 | iris.classes_to_numbers()
201 | classes_number = len(iris.values[iris.target])
202 | perceptron = PerceptronLearner(iris)
203 | tests = [([5, 3, 1, 0.1], 0),
204 | ([5, 3.5, 1, 0], 0),
205 | ([6, 3, 4, 1.1], 1),
206 | ([6, 2, 3.5, 1], 1),
207 | ([7.5, 4, 6, 2], 2),
208 | ([7, 3, 6, 2.5], 2)]
209 | assert grade_learner(perceptron, tests) > 1/2
210 | assert err_ratio(perceptron, iris) < 0.4
211 |
212 |
213 | def test_random_weights():
214 | min_value = -0.5
215 | max_value = 0.5
216 | num_weights = 10
217 | test_weights = random_weights(min_value, max_value, num_weights)
218 | assert len(test_weights) == num_weights
219 | for weight in test_weights:
220 | assert weight >= min_value and weight <= max_value
221 |
--------------------------------------------------------------------------------
/aima3/tests/test_logic.py:
--------------------------------------------------------------------------------
1 | import pytest
2 | from aima3.logic import *
3 | from aima3.utils import expr_handle_infix_ops, count, Symbol
4 |
5 |
6 | def test_is_symbol():
7 | assert is_symbol('x')
8 | assert is_symbol('X')
9 | assert is_symbol('N245')
10 | assert not is_symbol('')
11 | assert not is_symbol('1L')
12 | assert not is_symbol([1, 2, 3])
13 |
14 |
15 | def test_is_var_symbol():
16 | assert is_var_symbol('xt')
17 | assert not is_var_symbol('Txt')
18 | assert not is_var_symbol('')
19 | assert not is_var_symbol('52')
20 |
21 |
22 | def test_is_prop_symbol():
23 | assert not is_prop_symbol('xt')
24 | assert is_prop_symbol('Txt')
25 | assert not is_prop_symbol('')
26 | assert not is_prop_symbol('52')
27 |
28 |
29 | def test_variables():
30 | assert variables(expr('F(x, x) & G(x, y) & H(y, z) & R(A, z, 2)')) == {x, y, z}
31 | assert variables(expr('(x ==> y) & B(x, y) & A')) == {x, y}
32 |
33 |
34 | def test_expr():
35 | assert repr(expr('P <=> Q(1)')) == '(P <=> Q(1))'
36 | assert repr(expr('P & Q | ~R(x, F(x))')) == '((P & Q) | ~R(x, F(x)))'
37 | assert (expr_handle_infix_ops('P & Q ==> R & ~S')
38 | == "P & Q |'==>'| R & ~S")
39 |
40 |
41 | def test_extend():
42 | assert extend({x: 1}, y, 2) == {x: 1, y: 2}
43 |
44 |
45 | def test_subst():
46 | assert subst({x: 42, y:0}, F(x) + y) == (F(42) + 0)
47 |
48 |
49 | def test_PropKB():
50 | kb = PropKB()
51 |     assert count(kb.ask(expr) for expr in [A, C, D, E, Q]) == 0
52 | kb.tell(A & E)
53 | assert kb.ask(A) == kb.ask(E) == {}
54 | kb.tell(E |'==>'| C)
55 | assert kb.ask(C) == {}
56 | kb.retract(E)
57 | assert kb.ask(E) is False
58 | assert kb.ask(C) is False
59 |
60 |
61 | def test_wumpus_kb():
62 | # Statement: There is no pit in [1,1].
63 | assert wumpus_kb.ask(~P11) == {}
64 |
65 | # Statement: There is no pit in [1,2].
66 | assert wumpus_kb.ask(~P12) == {}
67 |
68 | # Statement: There is a pit in [2,2].
69 | assert wumpus_kb.ask(P22) is False
70 |
71 | # Statement: There is a pit in [3,1].
72 | assert wumpus_kb.ask(P31) is False
73 |
74 | # Statement: Neither [1,2] nor [2,1] contains a pit.
75 | assert wumpus_kb.ask(~P12 & ~P21) == {}
76 |
77 | # Statement: There is a pit in either [2,2] or [3,1].
78 | assert wumpus_kb.ask(P22 | P31) == {}
79 |
80 |
81 | def test_is_definite_clause():
82 | assert is_definite_clause(expr('A & B & C & D ==> E'))
83 | assert is_definite_clause(expr('Farmer(Mac)'))
84 | assert not is_definite_clause(expr('~Farmer(Mac)'))
85 | assert is_definite_clause(expr('(Farmer(f) & Rabbit(r)) ==> Hates(f, r)'))
86 | assert not is_definite_clause(expr('(Farmer(f) & ~Rabbit(r)) ==> Hates(f, r)'))
87 | assert not is_definite_clause(expr('(Farmer(f) | Rabbit(r)) ==> Hates(f, r)'))
88 |
89 |
90 | def test_parse_definite_clause():
91 | assert parse_definite_clause(expr('A & B & C & D ==> E')) == ([A, B, C, D], E)
92 | assert parse_definite_clause(expr('Farmer(Mac)')) == ([], expr('Farmer(Mac)'))
93 | assert parse_definite_clause(expr('(Farmer(f) & Rabbit(r)) ==> Hates(f, r)')) == ([expr('Farmer(f)'), expr('Rabbit(r)')], expr('Hates(f, r)'))
94 |
95 |
96 | def test_pl_true():
97 | assert pl_true(P, {}) is None
98 | assert pl_true(P, {P: False}) is False
99 | assert pl_true(P | Q, {P: True}) is True
100 | assert pl_true((A | B) & (C | D), {A: False, B: True, D: True}) is True
101 | assert pl_true((A & B) & (C | D), {A: False, B: True, D: True}) is False
102 | assert pl_true((A & B) | (A & C), {A: False, B: True, C: True}) is False
103 | assert pl_true((A | B) & (C | D), {A: True, D: False}) is None
104 | assert pl_true(P | P, {}) is None
105 |
106 |
107 | def test_tt_true():
108 | assert tt_true(P | ~P)
109 | assert tt_true('~~P <=> P')
110 | assert not tt_true((P | ~Q) & (~P | Q))
111 | assert not tt_true(P & ~P)
112 | assert not tt_true(P & Q)
113 | assert tt_true((P | ~Q) | (~P | Q))
114 | assert tt_true('(A & B) ==> (A | B)')
115 | assert tt_true('((A & B) & C) <=> (A & (B & C))')
116 | assert tt_true('((A | B) | C) <=> (A | (B | C))')
117 | assert tt_true('(A ==> B) <=> (~B ==> ~A)')
118 | assert tt_true('(A ==> B) <=> (~A | B)')
119 | assert tt_true('(A <=> B) <=> ((A ==> B) & (B ==> A))')
120 | assert tt_true('~(A & B) <=> (~A | ~B)')
121 | assert tt_true('~(A | B) <=> (~A & ~B)')
122 | assert tt_true('(A & (B | C)) <=> ((A & B) | (A & C))')
123 | assert tt_true('(A | (B & C)) <=> ((A | B) & (A | C))')
124 |
125 |
126 | def test_dpll():
127 | assert (dpll_satisfiable(A & ~B & C & (A | ~D) & (~E | ~D) & (C | ~D) & (~A | ~F) & (E | ~F)
128 | & (~D | ~F) & (B | ~C | D) & (A | ~E | F) & (~A | E | D))
129 | == {B: False, C: True, A: True, F: False, D: True, E: False})
130 | assert dpll_satisfiable(A & ~B) == {A: True, B: False}
131 | assert dpll_satisfiable(P & ~P) is False
132 |
133 |
134 | def test_find_pure_symbol():
135 | assert find_pure_symbol([A, B, C], [A|~B,~B|~C,C|A]) == (A, True)
136 | assert find_pure_symbol([A, B, C], [~A|~B,~B|~C,C|A]) == (B, False)
137 | assert find_pure_symbol([A, B, C], [~A|B,~B|~C,C|A]) == (None, None)
138 |
139 |
140 | def test_unit_clause_assign():
141 | assert unit_clause_assign(A|B|C, {A:True}) == (None, None)
142 | assert unit_clause_assign(B|C, {A:True}) == (None, None)
143 | assert unit_clause_assign(B|~A, {A:True}) == (B, True)
144 |
145 |
146 | def test_find_unit_clause():
147 | assert find_unit_clause([A|B|C, B|~C, ~A|~B], {A:True}) == (B, False)
148 |
149 |
150 | def test_unify():
151 | assert unify(x, x, {}) == {}
152 | assert unify(x, 3, {}) == {x: 3}
153 |
154 |
155 | def test_pl_fc_entails():
156 | assert pl_fc_entails(horn_clauses_KB, expr('Q'))
157 | assert not pl_fc_entails(horn_clauses_KB, expr('SomethingSilly'))
158 |
159 |
160 | def test_tt_entails():
161 | assert tt_entails(P & Q, Q)
162 | assert not tt_entails(P | Q, Q)
163 | assert tt_entails(A & (B | C) & E & F & ~(P | Q), A & E & F & ~P & ~Q)
164 |
165 |
166 | def test_prop_symbols():
167 | assert prop_symbols(expr('x & y & z | A')) == {A}
168 | assert prop_symbols(expr('(x & B(z)) ==> Farmer(y) | A')) == {A, expr('Farmer(y)'), expr('B(z)')}
169 |
170 |
171 | def test_constant_symbols():
172 | assert constant_symbols(expr('x & y & z | A')) == {A}
173 | assert constant_symbols(expr('(x & B(z)) & Father(John) ==> Farmer(y) | A')) == {A, expr('John')}
174 |
175 |
176 | def test_predicate_symbols():
177 | assert predicate_symbols(expr('x & y & z | A')) == set()
178 | assert predicate_symbols(expr('(x & B(z)) & Father(John) ==> Farmer(y) | A')) == {
179 | ('B', 1),
180 | ('Father', 1),
181 | ('Farmer', 1)}
182 | assert predicate_symbols(expr('(x & B(x, y, z)) & F(G(x, y), x) ==> P(Q(R(x, y)), x, y, z)')) == {
183 | ('B', 3),
184 | ('F', 2),
185 | ('G', 2),
186 | ('P', 4),
187 | ('Q', 1),
188 | ('R', 2)}
189 |
190 |
191 | def test_eliminate_implications():
192 | assert repr(eliminate_implications('A ==> (~B <== C)')) == '((~B | ~C) | ~A)'
193 | assert repr(eliminate_implications(A ^ B)) == '((A & ~B) | (~A & B))'
194 | assert repr(eliminate_implications(A & B | C & ~D)) == '((A & B) | (C & ~D))'
195 |
196 |
197 | def test_dissociate():
198 | assert dissociate('&', [A & B]) == [A, B]
199 | assert dissociate('|', [A, B, C & D, P | Q]) == [A, B, C & D, P, Q]
200 | assert dissociate('&', [A, B, C & D, P | Q]) == [A, B, C, D, P | Q]
201 |
202 |
203 | def test_associate():
204 | assert (repr(associate('&', [(A & B), (B | C), (B & C)]))
205 | == '(A & B & (B | C) & B & C)')
206 | assert (repr(associate('|', [A | (B | (C | (A & B)))]))
207 | == '(A | B | C | (A & B))')
208 |
209 |
210 | def test_move_not_inwards():
211 | assert repr(move_not_inwards(~(A | B))) == '(~A & ~B)'
212 | assert repr(move_not_inwards(~(A & B))) == '(~A | ~B)'
213 | assert repr(move_not_inwards(~(~(A | ~B) | ~~C))) == '((A | ~B) & ~C)'
214 |
215 |
216 | def test_distribute_and_over_or():
217 |     def test_entailment(s, has_and=False):
218 |         result = distribute_and_over_or(s)
219 |         if has_and:
220 |             assert result.op == '&'
221 |         assert tt_entails(s, result)
222 |         assert tt_entails(result, s)
223 |     test_entailment((A & B) | C, True)
224 |     test_entailment((A | B) & C, True)
225 |     test_entailment((A | B) | C, False)
226 |     test_entailment((A & B) | (C | D), True)
227 |
228 | def test_to_cnf():
229 | assert (repr(to_cnf(wumpus_world_inference & ~expr('~P12'))) ==
230 | "((~P12 | B11) & (~P21 | B11) & (P12 | P21 | ~B11) & ~B11 & P12)")
231 | assert repr(to_cnf((P & Q) | (~P & ~Q))) == '((~P | P) & (~Q | P) & (~P | Q) & (~Q | Q))'
232 | assert repr(to_cnf("B <=> (P1 | P2)")) == '((~P1 | B) & (~P2 | B) & (P1 | P2 | ~B))'
233 | assert repr(to_cnf("a | (b & c) | d")) == '((b | a | d) & (c | a | d))'
234 | assert repr(to_cnf("A & (B | (D & E))")) == '(A & (D | B) & (E | B))'
235 | assert repr(to_cnf("A | (B | (C | (D & E)))")) == '((D | A | B | C) & (E | A | B | C))'
236 |
237 |
238 | def test_pl_resolution():
239 | # TODO: Add fast test cases
240 | assert pl_resolution(wumpus_kb, ~P11)
241 |
242 |
243 | def test_standardize_variables():
244 | e = expr('F(a, b, c) & G(c, A, 23)')
245 | assert len(variables(standardize_variables(e))) == 3
246 | # assert variables(e).intersection(variables(standardize_variables(e))) == {}
247 | assert is_variable(standardize_variables(expr('x')))
248 |
249 |
250 | def test_fol_bc_ask():
251 | def test_ask(query, kb=None):
252 | q = expr(query)
253 | test_variables = variables(q)
254 | answers = fol_bc_ask(kb or test_kb, q)
255 | return sorted(
256 | [dict((x, v) for x, v in list(a.items()) if x in test_variables)
257 | for a in answers], key=repr)
258 | assert repr(test_ask('Farmer(x)')) == '[{x: Mac}]'
259 | assert repr(test_ask('Human(x)')) == '[{x: Mac}, {x: MrsMac}]'
260 | assert repr(test_ask('Rabbit(x)')) == '[{x: MrsRabbit}, {x: Pete}]'
261 | assert repr(test_ask('Criminal(x)', crime_kb)) == '[{x: West}]'
262 |
263 |
264 | def test_fol_fc_ask():
265 | def test_ask(query, kb=None):
266 | q = expr(query)
267 | test_variables = variables(q)
268 | answers = fol_fc_ask(kb or test_kb, q)
269 | return sorted(
270 | [dict((x, v) for x, v in list(a.items()) if x in test_variables)
271 | for a in answers], key=repr)
272 | assert repr(test_ask('Criminal(x)', crime_kb)) == '[{x: West}]'
273 | assert repr(test_ask('Enemy(x, America)', crime_kb)) == '[{x: Nono}]'
274 | assert repr(test_ask('Farmer(x)')) == '[{x: Mac}]'
275 | assert repr(test_ask('Human(x)')) == '[{x: Mac}, {x: MrsMac}]'
276 | assert repr(test_ask('Rabbit(x)')) == '[{x: MrsRabbit}, {x: Pete}]'
277 |
278 |
279 | def test_d():
280 | assert d(x * x - x, x) == 2 * x - 1
281 |
282 |
283 | def test_WalkSAT():
284 | def check_SAT(clauses, single_solution={}):
285 | # Make sure the solution is correct if it is returned by WalkSat
286 | # Sometimes WalkSat may run out of flips before finding a solution
287 | soln = WalkSAT(clauses)
288 | if soln:
289 | assert all(pl_true(x, soln) for x in clauses)
290 | if single_solution: # Cross check the solution if only one exists
291 | assert all(pl_true(x, single_solution) for x in clauses)
292 | assert soln == single_solution
293 | # Test WalkSat for problems with solution
294 | check_SAT([A & B, A & C])
295 | check_SAT([A | B, P & Q, P & B])
296 | check_SAT([A & B, C | D, ~(D | P)], {A: True, B: True, C: True, D: False, P: False})
297 | # Test WalkSat for problems without solution
298 | assert WalkSAT([A & ~A], 0.5, 100) is None
299 | assert WalkSAT([A | B, ~A, ~(B | C), C | D, P | Q], 0.5, 100) is None
300 | assert WalkSAT([A | B, B & C, C | D, D & A, P, ~P], 0.5, 100) is None
301 |
302 |
303 | def test_SAT_plan():
304 | transition = {'A': {'Left': 'A', 'Right': 'B'},
305 | 'B': {'Left': 'A', 'Right': 'C'},
306 | 'C': {'Left': 'B', 'Right': 'C'}}
307 | assert SAT_plan('A', transition, 'C', 2) is None
308 | assert SAT_plan('A', transition, 'B', 3) == ['Right']
309 | assert SAT_plan('C', transition, 'A', 3) == ['Left', 'Left']
310 |
311 | transition = {(0, 0): {'Right': (0, 1), 'Down': (1, 0)},
312 | (0, 1): {'Left': (1, 0), 'Down': (1, 1)},
313 | (1, 0): {'Right': (1, 0), 'Up': (1, 0), 'Left': (1, 0), 'Down': (1, 0)},
314 | (1, 1): {'Left': (1, 0), 'Up': (0, 1)}}
315 | assert SAT_plan((0, 0), transition, (1, 1), 4) == ['Right', 'Down']
316 |
317 |
318 | if __name__ == '__main__':
319 | pytest.main()
320 |
--------------------------------------------------------------------------------
/aima3/tests/test_mdp.py:
--------------------------------------------------------------------------------
1 | from aima3.mdp import *
2 |
3 |
4 | def test_value_iteration():
5 | assert value_iteration(sequential_decision_environment, .01) == {
6 | (3, 2): 1.0, (3, 1): -1.0,
7 | (3, 0): 0.12958868267972745, (0, 1): 0.39810203830605462,
8 | (0, 2): 0.50928545646220924, (1, 0): 0.25348746162470537,
9 | (0, 0): 0.29543540628363629, (1, 2): 0.64958064617168676,
10 | (2, 0): 0.34461306281476806, (2, 1): 0.48643676237737926,
11 | (2, 2): 0.79536093684710951}
12 |
13 |
14 | def test_policy_iteration():
15 | assert policy_iteration(sequential_decision_environment) == {
16 | (0, 0): (0, 1), (0, 1): (0, 1), (0, 2): (1, 0),
17 | (1, 0): (1, 0), (1, 2): (1, 0), (2, 0): (0, 1),
18 | (2, 1): (0, 1), (2, 2): (1, 0), (3, 0): (-1, 0),
19 | (3, 1): None, (3, 2): None}
20 |
21 |
22 | def test_best_policy():
23 | pi = best_policy(sequential_decision_environment,
24 | value_iteration(sequential_decision_environment, .01))
25 | assert sequential_decision_environment.to_arrows(pi) == [['>', '>', '>', '.'],
26 | ['^', None, '^', '.'],
27 | ['^', '>', '^', '<']]
28 |
29 |
30 | def test_transition_model():
31 | transition_model = {
32 | "A": {"a1": (0.3, "B"), "a2": (0.7, "C")},
33 | "B": {"a1": (0.5, "B"), "a2": (0.5, "A")},
34 | "C": {"a1": (0.9, "A"), "a2": (0.1, "B")},
35 | }
36 |
37 |     mdp = MDP(init="A", actlist={"a1", "a2"}, terminals={"C"}, states={"A", "B", "C"}, transitions=transition_model)
38 | 
39 |     assert mdp.T("A", "a1") == (0.3, "B")
40 |     assert mdp.T("B", "a2") == (0.5, "A")
41 |     assert mdp.T("C", "a1") == (0.9, "A")
42 |
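43 | # Note (added for illustration): since this MDP was built with an explicit
44 | # `transitions` dict, T(s, a) is simply the lookup transition_model[s][a];
45 | # each entry here is a single (probability, state) pair, which is exactly
46 | # what the asserts above check.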
--------------------------------------------------------------------------------
/aima3/tests/test_nlp.py:
--------------------------------------------------------------------------------
1 | import pytest
2 | import aima3.nlp as nlp
3 |
4 | from aima3.nlp import loadPageHTML, stripRawHTML, findOutlinks, onlyWikipediaURLS
5 | from aima3.nlp import expand_pages, relevant_pages, normalize, ConvergenceDetector, getInlinks
6 | from aima3.nlp import getOutlinks, Page, determineInlinks, HITS
7 | from aima3.nlp import Rules, Lexicon, Grammar, ProbRules, ProbLexicon, ProbGrammar
8 | from aima3.nlp import Chart, CYK_parse
9 | # Clumsy imports: we also import the nlp module itself, because we need to set
10 | # certain nlp.py globals explicitly, as they are accessed by functions within nlp.py.
11 |
12 | from unittest.mock import patch
13 | from io import BytesIO
14 |
15 |
16 | def test_rules():
17 | check = {'A': [['B', 'C'], ['D', 'E']], 'B': [['E'], ['a'], ['b', 'c']]}
18 | assert Rules(A="B C | D E", B="E | a | b c") == check
19 |
20 |
21 | def test_lexicon():
22 | check = {'Article': ['the', 'a', 'an'], 'Pronoun': ['i', 'you', 'he']}
23 | lexicon = Lexicon(Article="the | a | an", Pronoun="i | you | he")
24 | assert lexicon == check
25 |
26 |
27 | def test_grammar():
28 | rules = Rules(A="B C | D E", B="E | a | b c")
29 | lexicon = Lexicon(Article="the | a | an", Pronoun="i | you | he")
30 | grammar = Grammar("Simplegram", rules, lexicon)
31 |
32 | assert grammar.rewrites_for('A') == [['B', 'C'], ['D', 'E']]
33 | assert grammar.isa('the', 'Article')
34 |
35 | grammar = nlp.E_Chomsky
36 | for rule in grammar.cnf_rules():
37 | assert len(rule) == 3
38 |
39 |
40 | def test_generation():
41 | lexicon = Lexicon(Article="the | a | an",
42 | Pronoun="i | you | he")
43 |
44 | rules = Rules(
45 | S="Article | More | Pronoun",
46 | More="Article Pronoun | Pronoun Pronoun"
47 | )
48 |
49 | grammar = Grammar("Simplegram", rules, lexicon)
50 |
51 | sentence = grammar.generate_random('S')
52 | for token in sentence.split():
53 | found = False
54 | for non_terminal, terminals in grammar.lexicon.items():
55 | if token in terminals:
56 | found = True
57 | assert found
58 |
59 |
60 | def test_prob_rules():
61 | check = {'A': [(['B', 'C'], 0.3), (['D', 'E'], 0.7)],
62 | 'B': [(['E'], 0.1), (['a'], 0.2), (['b', 'c'], 0.7)]}
63 | rules = ProbRules(A="B C [0.3] | D E [0.7]", B="E [0.1] | a [0.2] | b c [0.7]")
64 | assert rules == check
65 |
66 |
67 | def test_prob_lexicon():
68 | check = {'Article': [('the', 0.5), ('a', 0.25), ('an', 0.25)],
69 | 'Pronoun': [('i', 0.4), ('you', 0.3), ('he', 0.3)]}
70 | lexicon = ProbLexicon(Article="the [0.5] | a [0.25] | an [0.25]",
71 | Pronoun="i [0.4] | you [0.3] | he [0.3]")
72 | assert lexicon == check
73 |
74 |
75 | def test_prob_grammar():
76 | rules = ProbRules(A="B C [0.3] | D E [0.7]", B="E [0.1] | a [0.2] | b c [0.7]")
77 | lexicon = ProbLexicon(Article="the [0.5] | a [0.25] | an [0.25]",
78 | Pronoun="i [0.4] | you [0.3] | he [0.3]")
79 | grammar = ProbGrammar("Simplegram", rules, lexicon)
80 |
81 | assert grammar.rewrites_for('A') == [(['B', 'C'], 0.3), (['D', 'E'], 0.7)]
82 | assert grammar.isa('the', 'Article')
83 |
84 | grammar = nlp.E_Prob_Chomsky
85 | for rule in grammar.cnf_rules():
86 | assert len(rule) == 4
87 |
88 |
89 | def test_prob_generation():
90 | lexicon = ProbLexicon(Verb="am [0.5] | are [0.25] | is [0.25]",
91 | Pronoun="i [0.4] | you [0.3] | he [0.3]")
92 |
93 | rules = ProbRules(
94 | S="Verb [0.5] | More [0.3] | Pronoun [0.1] | nobody is here [0.1]",
95 | More="Pronoun Verb [0.7] | Pronoun Pronoun [0.3]"
96 | )
97 |
98 | grammar = ProbGrammar("Simplegram", rules, lexicon)
99 |
100 | sentence = grammar.generate_random('S')
101 | assert len(sentence) == 2
102 |
103 |
104 | def test_chart_parsing():
105 | chart = Chart(nlp.E0)
106 | parses = chart.parses('the stench is in 2 2')
107 | assert len(parses) == 1
108 |
109 |
110 | def test_CYK_parse():
111 | grammar = nlp.E_Prob_Chomsky
112 | words = ['the', 'robot', 'is', 'good']
113 | P = CYK_parse(words, grammar)
114 | assert len(P) == 52
115 |
116 |
117 | # ______________________________________________________________________________
118 | # Data Setup
119 |
120 | testHTML = """Keyword String 1: A man is a male human.
121 | Keyword String 2: Like most other male mammals, a man inherits an
122 | X from his mom and a Y from his dad.
123 | Links:
124 | href="https://google.com.au"
125 | < href="/wiki/TestThing" > href="/wiki/TestBoy"
126 | href="/wiki/TestLiving" href="/wiki/TestMan" >"""
127 | testHTML2 = "a mom and a dad"
128 | testHTML3 = """
129 | <!DOCTYPE html>
130 | <html>
131 | <head>
132 | <title>Page Title</title>
133 | </head>
134 | <body>
135 | 
136 | <p>AIMA book</p>
137 | 
138 | </body>
139 | </html>
140 | """
141 |
142 | pA = Page("A", ["B", "C", "E"], ["D"], 1, 6)
143 | pB = Page("B", ["E"], ["A", "C", "D"], 2, 5)
144 | pC = Page("C", ["B", "E"], ["A", "D"], 3, 4)
145 | pD = Page("D", ["A", "B", "C", "E"], [], 4, 3)
146 | pE = Page("E", [], ["A", "B", "C", "D", "F"], 5, 2)
147 | pF = Page("F", ["E"], [], 6, 1)
148 | pageDict = {pA.address: pA, pB.address: pB, pC.address: pC,
149 | pD.address: pD, pE.address: pE, pF.address: pF}
150 | nlp.pagesIndex = pageDict
151 | nlp.pagesContent = {pA.address: testHTML, pB.address: testHTML2,
152 | pC.address: testHTML, pD.address: testHTML2,
153 | pE.address: testHTML, pF.address: testHTML2}
154 |
155 | # This test takes a long time (> 60 secs)
156 | # def test_loadPageHTML():
157 | # # first format all the relative URLs with the base URL
158 | # addresses = [examplePagesSet[0] + x for x in examplePagesSet[1:]]
159 | # loadedPages = loadPageHTML(addresses)
160 | # relURLs = ['Ancient_Greek','Ethics','Plato','Theology']
161 | # fullURLs = ["https://en.wikipedia.org/wiki/"+x for x in relURLs]
162 | # assert all(x in loadedPages for x in fullURLs)
163 | # assert all(loadedPages.get(key,"") != "" for key in addresses)
164 |
165 |
166 | @patch('urllib.request.urlopen', return_value=BytesIO(testHTML3.encode()))
167 | def test_stripRawHTML(html_mock):
168 | addr = "https://en.wikipedia.org/wiki/Ethics"
169 | aPage = loadPageHTML([addr])
170 | someHTML = aPage[addr]
171 | strippedHTML = stripRawHTML(someHTML)
172 |     assert "<head>" not in strippedHTML and "</head>" not in strippedHTML
173 | assert "AIMA book" in someHTML and "AIMA book" in strippedHTML
174 |
175 |
176 | def test_determineInlinks():
177 | assert set(determineInlinks(pA)) == set(['B', 'C', 'E'])
178 | assert set(determineInlinks(pE)) == set([])
179 | assert set(determineInlinks(pF)) == set(['E'])
180 |
181 | def test_findOutlinks_wiki():
182 | testPage = pageDict[pA.address]
183 | outlinks = findOutlinks(testPage, handleURLs=onlyWikipediaURLS)
184 | assert "https://en.wikipedia.org/wiki/TestThing" in outlinks
185 | assert "https://en.wikipedia.org/wiki/TestThing" in outlinks
186 | assert "https://google.com.au" not in outlinks
187 | # ______________________________________________________________________________
188 | # HITS Helper Functions
189 |
190 |
191 | def test_expand_pages():
192 |     pages = {k: pageDict[k] for k in ('F',)}
193 | pagesTwo = {k: pageDict[k] for k in ('A', 'E')}
194 | expanded_pages = expand_pages(pages)
195 | assert all(x in expanded_pages for x in ['F', 'E'])
196 | assert all(x not in expanded_pages for x in ['A', 'B', 'C', 'D'])
197 | expanded_pages = expand_pages(pagesTwo)
198 | print(expanded_pages)
199 | assert all(x in expanded_pages for x in ['A', 'B', 'C', 'D', 'E', 'F'])
200 |
201 |
202 | def test_relevant_pages():
203 | pages = relevant_pages("his dad")
204 | assert all((x in pages) for x in ['A', 'C', 'E'])
205 | assert all((x not in pages) for x in ['B', 'D', 'F'])
206 | pages = relevant_pages("mom and dad")
207 | assert all((x in pages) for x in ['A', 'B', 'C', 'D', 'E', 'F'])
208 | pages = relevant_pages("philosophy")
209 | assert all((x not in pages) for x in ['A', 'B', 'C', 'D', 'E', 'F'])
210 |
211 |
212 | def test_normalize():
213 | normalize(pageDict)
214 |     print([page.hub for addr, page in nlp.pagesIndex.items()])
215 | expected_hub = [1/91**0.5, 2/91**0.5, 3/91**0.5, 4/91**0.5, 5/91**0.5, 6/91**0.5] # Works only for sample data above
216 | expected_auth = list(reversed(expected_hub))
217 | assert len(expected_hub) == len(expected_auth) == len(nlp.pagesIndex)
218 | assert expected_hub == [page.hub for addr, page in sorted(nlp.pagesIndex.items())]
219 | assert expected_auth == [page.authority for addr, page in sorted(nlp.pagesIndex.items())]
220 |
221 |
222 | def test_detectConvergence():
223 | # run detectConvergence once to initialise history
224 | convergence = ConvergenceDetector()
225 | convergence()
226 | assert convergence() # values haven't changed so should return True
227 | # make tiny increase/decrease to all values
228 | for _, page in nlp.pagesIndex.items():
229 | page.hub += 0.0003
230 | page.authority += 0.0004
231 | # retest function with values. Should still return True
232 | assert convergence()
233 | for _, page in nlp.pagesIndex.items():
234 | page.hub += 3000000
235 | page.authority += 3000000
236 | # retest function with values. Should now return false
237 | assert not convergence()
238 |
239 |
240 | def test_getInlinks():
241 | inlnks = getInlinks(pageDict['A'])
242 | assert sorted(inlnks) == pageDict['A'].inlinks
243 |
244 |
245 | def test_getOutlinks():
246 | outlnks = getOutlinks(pageDict['A'])
247 | assert sorted(outlnks) == pageDict['A'].outlinks
248 |
249 |
250 | def test_HITS():
251 | HITS('inherit')
252 | auth_list = [pA.authority, pB.authority, pC.authority, pD.authority, pE.authority, pF.authority]
253 | hub_list = [pA.hub, pB.hub, pC.hub, pD.hub, pE.hub, pF.hub]
254 | assert max(auth_list) == pD.authority
255 | assert max(hub_list) == pE.hub
256 |
257 |
258 | if __name__ == '__main__':
259 | pytest.main()
260 |
--------------------------------------------------------------------------------
/aima3/tests/test_planning.py:
--------------------------------------------------------------------------------
1 | from aima3.planning import *
2 | from aima3.utils import expr
3 | from aima3.logic import FolKB
4 |
5 |
6 | def test_action():
7 | precond = [[expr("P(x)"), expr("Q(y, z)")], [expr("Q(x)")]]
8 | effect = [[expr("Q(x)")], [expr("P(x)")]]
9 |     a = Action(expr("A(x,y,z)"), precond, effect)
10 | args = [expr("A"), expr("B"), expr("C")]
11 | assert a.substitute(expr("P(x, z, y)"), args) == expr("P(A, C, B)")
12 | test_kb = FolKB([expr("P(A)"), expr("Q(B, C)"), expr("R(D)")])
13 | assert a.check_precond(test_kb, args)
14 | a.act(test_kb, args)
15 | assert test_kb.ask(expr("P(A)")) is False
16 | assert test_kb.ask(expr("Q(A)")) is not False
17 | assert test_kb.ask(expr("Q(B, C)")) is not False
18 | assert not a.check_precond(test_kb, args)
19 |
20 |
21 | def test_air_cargo_1():
22 | p = air_cargo()
23 | assert p.goal_test() is False
24 |     solution_1 = [expr("Load(C1, P1, SFO)"),
25 | expr("Fly(P1, SFO, JFK)"),
26 | expr("Unload(C1, P1, JFK)"),
27 | expr("Load(C2, P2, JFK)"),
28 | expr("Fly(P2, JFK, SFO)"),
29 |                   expr("Unload(C2, P2, SFO)")]
30 |
31 | for action in solution_1:
32 | p.act(action)
33 |
34 | assert p.goal_test()
35 |
36 |
37 | def test_air_cargo_2():
38 | p = air_cargo()
39 | assert p.goal_test() is False
40 | solution_2 = [expr("Load(C2, P2, JFK)"),
41 | expr("Fly(P2, JFK, SFO)"),
42 |                   expr("Unload(C2, P2, SFO)"),
43 |                   expr("Load(C1, P1, SFO)"),
44 | expr("Fly(P1, SFO, JFK)"),
45 | expr("Unload(C1, P1, JFK)")]
46 |
47 | for action in solution_2:
48 | p.act(action)
49 |
50 | assert p.goal_test()
51 |
52 |
53 | def test_spare_tire():
54 | p = spare_tire()
55 | assert p.goal_test() is False
56 | solution = [expr("Remove(Flat, Axle)"),
57 | expr("Remove(Spare, Trunk)"),
58 | expr("PutOn(Spare, Axle)")]
59 |
60 | for action in solution:
61 | p.act(action)
62 |
63 | assert p.goal_test()
64 |
65 |
66 | def test_three_block_tower():
67 | p = three_block_tower()
68 | assert p.goal_test() is False
69 | solution = [expr("MoveToTable(C, A)"),
70 | expr("Move(B, Table, C)"),
71 | expr("Move(A, Table, B)")]
72 |
73 | for action in solution:
74 | p.act(action)
75 |
76 | assert p.goal_test()
77 |
78 |
79 | def test_have_cake_and_eat_cake_too():
80 | p = have_cake_and_eat_cake_too()
81 | assert p.goal_test() is False
82 | solution = [expr("Eat(Cake)"),
83 | expr("Bake(Cake)")]
84 |
85 | for action in solution:
86 | p.act(action)
87 |
88 | assert p.goal_test()
89 |
90 |
91 | def test_graph_call():
92 | pddl = spare_tire()
93 | negkb = FolKB([expr('At(Flat, Trunk)')])
94 | graph = Graph(pddl, negkb)
95 |
96 | levels_size = len(graph.levels)
97 | graph()
98 |
99 | assert levels_size == len(graph.levels) - 1
100 |
101 |
102 | def test_job_shop_problem():
103 | p = job_shop_problem()
104 | assert p.goal_test() is False
105 |
106 | solution = [p.jobs[1][0],
107 | p.jobs[0][0],
108 | p.jobs[0][1],
109 | p.jobs[0][2],
110 | p.jobs[1][1],
111 | p.jobs[1][2]]
112 |
113 | for action in solution:
114 | p.act(action)
115 |
116 | assert p.goal_test()
117 |
118 | def test_refinements():
119 | init = [expr('At(Home)')]
120 | def goal_test(kb):
121 | return kb.ask(expr('At(SFO)'))
122 |
123 |     library = {"HLA": ["Go(Home,SFO)", "Taxi(Home, SFO)"],
124 |                "steps": [["Taxi(Home, SFO)"], []],
125 |                "precond_pos": [["At(Home)"], ["At(Home)"]],
126 |                "precond_neg": [[], []],
127 |                "effect_pos": [["At(SFO)"], ["At(SFO)"]],
128 |                "effect_neg": [["At(Home)"], ["At(Home)"]]}
129 | # Go SFO
130 | precond_pos = [expr("At(Home)")]
131 | precond_neg = []
132 | effect_add = [expr("At(SFO)")]
133 | effect_rem = [expr("At(Home)")]
134 | go_SFO = HLA(expr("Go(Home,SFO)"),
135 | [precond_pos, precond_neg], [effect_add, effect_rem])
136 | # Taxi SFO
137 | precond_pos = [expr("At(Home)")]
138 | precond_neg = []
139 | effect_add = [expr("At(SFO)")]
140 | effect_rem = [expr("At(Home)")]
141 | taxi_SFO = HLA(expr("Go(Home,SFO)"),
142 | [precond_pos, precond_neg], [effect_add, effect_rem])
143 | prob = Problem(init, [go_SFO, taxi_SFO], goal_test)
144 | result = [i for i in Problem.refinements(go_SFO, prob, library)]
145 |     assert len(result) == 1
146 |     assert result[0].name == "Taxi"
147 |     assert result[0].args == (expr("Home"), expr("SFO"))
148 |
--------------------------------------------------------------------------------
/aima3/tests/test_probability.py:
--------------------------------------------------------------------------------
1 | import random
2 | from aima3.probability import *
3 | from aima3.utils import rounder
4 |
5 |
6 | def tests():
7 | cpt = burglary.variable_node('Alarm')
8 | event = {'Burglary': True, 'Earthquake': True}
9 | assert cpt.p(True, event) == 0.95
10 | event = {'Burglary': False, 'Earthquake': True}
11 | assert cpt.p(False, event) == 0.71
12 | # #enumeration_ask('Earthquake', {}, burglary)
13 |
14 | s = {'A': True, 'B': False, 'C': True, 'D': False}
15 | assert consistent_with(s, {})
16 | assert consistent_with(s, s)
17 | assert not consistent_with(s, {'A': False})
18 | assert not consistent_with(s, {'D': True})
19 |
20 | random.seed(21)
21 | p = rejection_sampling('Earthquake', {}, burglary, 1000)
22 |     assert (p[True], p[False]) == (0.001, 0.999)
23 |
24 | random.seed(71)
25 | p = likelihood_weighting('Earthquake', {}, burglary, 1000)
26 |     assert (p[True], p[False]) == (0.002, 0.998)
27 |
28 |
29 | def test_probdist_basic():
30 | P = ProbDist('Flip')
31 | P['H'], P['T'] = 0.25, 0.75
32 | assert P['H'] == 0.25
33 |
34 |
35 | def test_probdist_frequency():
36 | P = ProbDist('X', {'lo': 125, 'med': 375, 'hi': 500})
37 | assert (P['lo'], P['med'], P['hi']) == (0.125, 0.375, 0.5)
38 |
39 |
40 | def test_probdist_normalize():
41 | P = ProbDist('Flip')
42 | P['H'], P['T'] = 35, 65
43 | P = P.normalize()
44 | assert (P.prob['H'], P.prob['T']) == (0.350, 0.650)
45 |
46 |
47 | def test_jointprob():
48 | P = JointProbDist(['X', 'Y'])
49 | P[1, 1] = 0.25
50 | assert P[1, 1] == 0.25
51 | P[dict(X=0, Y=1)] = 0.5
52 | assert P[dict(X=0, Y=1)] == 0.5
53 |
54 |
55 | def test_event_values():
56 | assert event_values({'A': 10, 'B': 9, 'C': 8}, ['C', 'A']) == (8, 10)
57 | assert event_values((1, 2), ['C', 'A']) == (1, 2)
58 |
59 |
60 | def test_enumerate_joint():
61 | P = JointProbDist(['X', 'Y'])
62 | P[0, 0] = 0.25
63 | P[0, 1] = 0.5
64 | P[1, 1] = P[2, 1] = 0.125
65 | assert enumerate_joint(['Y'], dict(X=0), P) == 0.75
66 | assert enumerate_joint(['X'], dict(Y=2), P) == 0
67 | assert enumerate_joint(['X'], dict(Y=1), P) == 0.75
68 |
69 |
70 | def test_enumerate_joint_ask():
71 | P = JointProbDist(['X', 'Y'])
72 | P[0, 0] = 0.25
73 | P[0, 1] = 0.5
74 | P[1, 1] = P[2, 1] = 0.125
75 | assert enumerate_joint_ask(
76 | 'X', dict(Y=1), P).show_approx() == '0: 0.667, 1: 0.167, 2: 0.167'
77 |
78 |
79 | def test_bayesnode_p():
80 | bn = BayesNode('X', 'Burglary', {T: 0.2, F: 0.625})
81 | assert bn.p(False, {'Burglary': False, 'Earthquake': True}) == 0.375
82 | assert BayesNode('W', '', 0.75).p(False, {'Random': True}) == 0.25
83 |
84 |
85 | def test_bayesnode_sample():
86 | X = BayesNode('X', 'Burglary', {T: 0.2, F: 0.625})
87 | assert X.sample({'Burglary': False, 'Earthquake': True}) in [True, False]
88 | Z = BayesNode('Z', 'P Q', {(True, True): 0.2, (True, False): 0.3,
89 | (False, True): 0.5, (False, False): 0.7})
90 | assert Z.sample({'P': True, 'Q': False}) in [True, False]
91 |
92 |
93 | def test_enumeration_ask():
94 | assert enumeration_ask(
95 | 'Burglary', dict(JohnCalls=T, MaryCalls=T),
96 | burglary).show_approx() == 'False: 0.716, True: 0.284'
97 |
98 |
99 | def test_elimination_ask():
100 |     assert elimination_ask(
101 |         'Burglary', dict(JohnCalls=T, MaryCalls=T),
102 |         burglary).show_approx() == 'False: 0.716, True: 0.284'
103 |
104 |
105 | def test_rejection_sampling():
106 | random.seed(47)
107 |     assert rejection_sampling(
108 | 'Burglary', dict(JohnCalls=T, MaryCalls=T),
109 | burglary, 10000).show_approx() == 'False: 0.7, True: 0.3'
110 |
111 |
112 | def test_likelihood_weighting():
113 | random.seed(1017)
114 | assert likelihood_weighting(
115 | 'Burglary', dict(JohnCalls=T, MaryCalls=T),
116 | burglary, 10000).show_approx() == 'False: 0.702, True: 0.298'
117 |
118 |
119 | def test_forward_backward():
120 | umbrella_prior = [0.5, 0.5]
121 | umbrella_transition = [[0.7, 0.3], [0.3, 0.7]]
122 | umbrella_sensor = [[0.9, 0.2], [0.1, 0.8]]
123 | umbrellaHMM = HiddenMarkovModel(umbrella_transition, umbrella_sensor)
124 |
125 | umbrella_evidence = [T, T, F, T, T]
126 | assert (rounder(forward_backward(umbrellaHMM, umbrella_evidence, umbrella_prior)) ==
127 | [[0.6469, 0.3531], [0.8673, 0.1327], [0.8204, 0.1796], [0.3075, 0.6925],
128 | [0.8204, 0.1796], [0.8673, 0.1327]])
129 |
130 | umbrella_evidence = [T, F, T, F, T]
131 | assert rounder(forward_backward(umbrellaHMM, umbrella_evidence, umbrella_prior)) == [
132 | [0.5871, 0.4129], [0.7177, 0.2823], [0.2324, 0.7676], [0.6072, 0.3928],
133 | [0.2324, 0.7676], [0.7177, 0.2823]]
134 |
135 |
136 | def test_fixed_lag_smoothing():
137 | umbrella_evidence = [T, F, T, F, T]
138 | e_t = F
139 | t = 4
140 | umbrella_transition = [[0.7, 0.3], [0.3, 0.7]]
141 | umbrella_sensor = [[0.9, 0.2], [0.1, 0.8]]
142 | umbrellaHMM = HiddenMarkovModel(umbrella_transition, umbrella_sensor)
143 |
144 | d = 2
145 | assert rounder(fixed_lag_smoothing(e_t, umbrellaHMM, d,
146 | umbrella_evidence, t)) == [0.1111, 0.8889]
147 | d = 5
148 | assert fixed_lag_smoothing(e_t, umbrellaHMM, d, umbrella_evidence, t) is None
149 |
150 | umbrella_evidence = [T, T, F, T, T]
151 | # t = 4
152 | e_t = T
153 |
154 | d = 1
155 | assert rounder(fixed_lag_smoothing(e_t, umbrellaHMM,
156 | d, umbrella_evidence, t)) == [0.9939, 0.0061]
157 |
158 |
159 | def test_particle_filtering():
160 | N = 10
161 | umbrella_evidence = T
162 | umbrella_transition = [[0.7, 0.3], [0.3, 0.7]]
163 | umbrella_sensor = [[0.9, 0.2], [0.1, 0.8]]
164 | umbrellaHMM = HiddenMarkovModel(umbrella_transition, umbrella_sensor)
165 | s = particle_filtering(umbrella_evidence, N, umbrellaHMM)
166 | assert len(s) == N
167 | assert all(state in 'AB' for state in s)
168 | # XXX 'A' and 'B' are really arbitrary names, but I'm letting it stand for now
169 |
170 |
171 | def test_monte_carlo_localization():
172 | ## TODO: Add tests for random motion/inaccurate sensors
173 | random.seed('aima-python')
174 | m = MCLmap([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0],
175 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0],
176 | [1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0],
177 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0],
178 | [0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0],
179 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0],
180 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0],
181 | [0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0],
182 | [0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0],
183 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0],
184 | [0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0]])
185 |
186 | def P_motion_sample(kin_state, v, w):
187 | """Sample from possible kinematic states.
188 |         Returns from a single element distribution (no uncertainty in motion)"""
189 | pos = kin_state[:2]
190 | orient = kin_state[2]
191 |
192 | # for simplicity the robot first rotates and then moves
193 |         orient = (orient + w) % 4
194 | for _ in range(orient):
195 | v = (v[1], -v[0])
196 | pos = vector_add(pos, v)
197 | return pos + (orient,)
198 |
199 | def P_sensor(x, y):
200 | """Conditional probability for sensor reading"""
201 | # Need not be exact probability. Can use a scaled value.
202 | if x == y:
203 | return 0.8
204 | elif abs(x - y) <= 2:
205 | return 0.05
206 | else:
207 | return 0
208 |
209 | from aima3.utils import print_table
210 | a = {'v': (0, 0), 'w': 0}
211 | z = (2, 4, 1, 6)
212 | S = monte_carlo_localization(a, z, 1000, P_motion_sample, P_sensor, m)
213 | grid = [[0]*17 for _ in range(11)]
214 | for x, y, _ in S:
215 | if 0 <= x < 11 and 0 <= y < 17:
216 | grid[x][y] += 1
217 | print("GRID:")
218 | print_table(grid)
219 |
220 | a = {'v': (0, 1), 'w': 0}
221 | z = (2, 3, 5, 7)
222 | S = monte_carlo_localization(a, z, 1000, P_motion_sample, P_sensor, m, S)
223 | grid = [[0]*17 for _ in range(11)]
224 | for x, y, _ in S:
225 | if 0 <= x < 11 and 0 <= y < 17:
226 | grid[x][y] += 1
227 | print("GRID:")
228 | print_table(grid)
229 |
230 | assert grid[6][7] > 700
231 |
232 |
233 | def test_gibbs_ask():
234 | possible_solutions = ['False: 0.16, True: 0.84', 'False: 0.17, True: 0.83',
235 | 'False: 0.15, True: 0.85']
236 | g_solution = gibbs_ask('Cloudy', dict(Rain=True), sprinkler, 200).show_approx()
237 | assert g_solution in possible_solutions
238 |
239 |
240 | # The following should probably go in .ipynb:
241 |
242 | """
243 | # We can build up a probability distribution like this (p. 469):
244 | >>> P = ProbDist()
245 | >>> P['sunny'] = 0.7
246 | >>> P['rain'] = 0.2
247 | >>> P['cloudy'] = 0.08
248 | >>> P['snow'] = 0.02
249 |
250 | # and query it like this: (Never mind this ELLIPSIS option
251 | # added to make the doctest portable.)
252 | >>> P['rain'] #doctest:+ELLIPSIS
253 | 0.2...
254 |
255 | # A Joint Probability Distribution is dealt with like this [Figure 13.3]:
256 | >>> P = JointProbDist(['Toothache', 'Cavity', 'Catch'])
257 | >>> T, F = True, False
258 | >>> P[T, T, T] = 0.108; P[T, T, F] = 0.012; P[F, T, T] = 0.072; P[F, T, F] = 0.008
259 | >>> P[T, F, T] = 0.016; P[T, F, F] = 0.064; P[F, F, T] = 0.144; P[F, F, F] = 0.576
260 |
261 | >>> P[T, T, T]
262 | 0.108
263 |
264 | # Ask for P(Cavity|Toothache=T)
265 | >>> PC = enumerate_joint_ask('Cavity', {'Toothache': T}, P)
266 | >>> PC.show_approx()
267 | 'False: 0.4, True: 0.6'
268 | >>> epsilon = 0.001
269 | >>> 0.6-epsilon < PC[T] < 0.6+epsilon
270 | True
271 |
272 | >>> 0.4-epsilon < PC[F] < 0.4+epsilon
273 | True
274 | """
275 |
276 | if __name__ == '__main__':
277 | pytest.main()
278 |
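279 | # For reference: the burglary network above is Figure 14.2 in the textbook, and
280 | # the exact posterior P(Burglary | JohnCalls=T, MaryCalls=T) = <0.716, 0.284>
281 | # computed by enumeration_ask and elimination_ask is the reference value that
282 | # the rejection-sampling and likelihood-weighting tests approximate.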
--------------------------------------------------------------------------------
/aima3/tests/test_rl.py:
--------------------------------------------------------------------------------
1 | import pytest
2 |
3 | from aima3.rl import *
4 | from aima3.mdp import sequential_decision_environment
5 |
6 |
7 | north = (0, 1)
8 | south = (0,-1)
9 | west = (-1, 0)
10 | east = (1, 0)
11 |
12 | policy = {
13 | (0, 2): east, (1, 2): east, (2, 2): east, (3, 2): None,
14 | (0, 1): north, (2, 1): north, (3, 1): None,
15 | (0, 0): north, (1, 0): west, (2, 0): west, (3, 0): west,
16 | }
17 |
18 |
19 |
20 | def test_PassiveADPAgent():
21 | agent = PassiveADPAgent(policy, sequential_decision_environment)
22 | for i in range(75):
23 |         run_single_trial(agent, sequential_decision_environment)
24 |
25 | # Agent does not always produce same results.
26 | # Check if results are good enough.
27 | assert agent.U[(0, 0)] > 0.15 # In reality around 0.3
28 | assert agent.U[(0, 1)] > 0.15 # In reality around 0.4
29 | assert agent.U[(1, 0)] > 0 # In reality around 0.2
30 |
31 |
32 |
33 | def test_PassiveTDAgent():
34 | agent = PassiveTDAgent(policy, sequential_decision_environment, alpha=lambda n: 60./(59+n))
35 | for i in range(200):
36 |         run_single_trial(agent, sequential_decision_environment)
37 |
38 | # Agent does not always produce same results.
39 | # Check if results are good enough.
40 | assert agent.U[(0, 0)] > 0.15 # In reality around 0.3
41 | assert agent.U[(0, 1)] > 0.15 # In reality around 0.35
42 | assert agent.U[(1, 0)] > 0.15 # In reality around 0.25
43 |
44 |
45 | def test_QLearning():
46 | q_agent = QLearningAgent(sequential_decision_environment, Ne=5, Rplus=2,
47 | alpha=lambda n: 60./(59+n))
48 |
49 | for i in range(200):
50 |         run_single_trial(q_agent, sequential_decision_environment)
51 |
52 | # Agent does not always produce same results.
53 | # Check if results are good enough.
54 | assert q_agent.Q[((0, 1), (0, 1))] >= -0.5 # In reality around 0.1
55 | assert q_agent.Q[((1, 0), (0, -1))] <= 0.5 # In reality around -0.1
56 |
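57 | 
58 | # A minimal sketch (not exercised by the tests above) of how a greedy policy
59 | # can be read off the learned Q-table, whose keys are (state, action) pairs
60 | # as in the assertions of test_QLearning:
61 | def greedy_policy(Q):
62 |     """Map each state to the action with the largest learned Q-value."""
63 |     best = {}
64 |     for (state, action), value in Q.items():
65 |         if state not in best or value > Q[(state, best[state])]:
66 |             best[state] = action
67 |     return best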
--------------------------------------------------------------------------------
/aima3/tests/test_search.py:
--------------------------------------------------------------------------------
1 | import pytest
2 | from aima3.search import *
3 |
4 |
5 | romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)
6 | vacumm_world = GraphProblemStochastic('State_1', ['State_7', 'State_8'], vacumm_world)
7 | LRTA_problem = OnlineSearchProblem('State_3', 'State_5', one_dim_state_space)
8 |
9 | def test_find_min_edge():
10 | assert romania_problem.find_min_edge() == 70
11 |
12 |
13 | def test_breadth_first_tree_search():
14 | assert breadth_first_tree_search(
15 | romania_problem).solution() == ['Sibiu', 'Fagaras', 'Bucharest']
16 |
17 |
18 | def test_breadth_first_search():
19 | assert breadth_first_search(romania_problem).solution() == ['Sibiu', 'Fagaras', 'Bucharest']
20 |
21 |
22 | def test_best_first_graph_search():
23 | # uniform_cost_search and astar_search test it indirectly
24 | assert best_first_graph_search(
25 | romania_problem,
26 | lambda node: node.state).solution() == ['Sibiu', 'Fagaras', 'Bucharest']
27 | assert best_first_graph_search(
28 | romania_problem,
29 | lambda node: node.state[::-1]).solution() == ['Timisoara',
30 | 'Lugoj',
31 | 'Mehadia',
32 | 'Drobeta',
33 | 'Craiova',
34 | 'Pitesti',
35 | 'Bucharest']
36 |
37 |
38 | def test_uniform_cost_search():
39 | assert uniform_cost_search(
40 | romania_problem).solution() == ['Sibiu', 'Rimnicu', 'Pitesti', 'Bucharest']
41 |
42 |
43 | def test_depth_first_graph_search():
44 | solution = depth_first_graph_search(romania_problem).solution()
45 | assert solution[-1] == 'Bucharest'
46 |
47 |
48 | def test_iterative_deepening_search():
49 | assert iterative_deepening_search(
50 | romania_problem).solution() == ['Sibiu', 'Fagaras', 'Bucharest']
51 |
52 |
53 | def test_depth_limited_search():
54 | solution_3 = depth_limited_search(romania_problem, 3).solution()
55 | assert solution_3[-1] == 'Bucharest'
56 | assert depth_limited_search(romania_problem, 2) == 'cutoff'
57 | solution_50 = depth_limited_search(romania_problem).solution()
58 | assert solution_50[-1] == 'Bucharest'
59 |
60 |
61 | def test_bidirectional_search():
62 | assert bidirectional_search(romania_problem) == 418
63 |
64 |
65 | def test_astar_search():
66 | assert astar_search(romania_problem).solution() == ['Sibiu', 'Rimnicu', 'Pitesti', 'Bucharest']
67 |
68 |
69 | def test_recursive_best_first_search():
70 | assert recursive_best_first_search(
71 | romania_problem).solution() == ['Sibiu', 'Rimnicu', 'Pitesti', 'Bucharest']
72 |
73 |
74 | def test_hill_climbing():
75 | prob = PeakFindingProblem((0, 0), [[0, 5, 10, 20],
76 | [-3, 7, 11, 5]])
77 | assert hill_climbing(prob) == (0, 3)
78 | prob = PeakFindingProblem((0, 0), [[0, 5, 10, 8],
79 | [-3, 7, 9, 999],
80 | [1, 2, 5, 11]])
81 | assert hill_climbing(prob) == (0, 2)
82 | prob = PeakFindingProblem((2, 0), [[0, 5, 10, 8],
83 | [-3, 7, 9, 999],
84 | [1, 2, 5, 11]])
85 | assert hill_climbing(prob) == (1, 3)
86 |
87 |
88 | def test_simulated_annealing():
89 | random.seed("aima-python")
90 | prob = PeakFindingProblem((0, 0), [[0, 5, 10, 20],
91 | [-3, 7, 11, 5]], directions4)
92 | sols = {prob.value(simulated_annealing(prob)) for i in range(100)}
93 | assert max(sols) == 20
94 | prob = PeakFindingProblem((0, 0), [[0, 5, 10, 8],
95 | [-3, 7, 9, 999],
96 | [1, 2, 5, 11]], directions8)
97 | sols = {prob.value(simulated_annealing(prob)) for i in range(100)}
98 | assert max(sols) == 999
99 |
100 |
101 | def test_BoggleFinder():
102 | board = list('SARTELNID')
103 | """
104 | >>> print_boggle(board)
105 | S A R
106 | T E L
107 | N I D
108 | """
109 | f = BoggleFinder(board)
110 | assert len(f) == 206
111 |
112 |
113 | def test_and_or_graph_search():
114 | def run_plan(state, problem, plan):
115 | if problem.goal_test(state):
116 | return True
117 |         if len(plan) != 2:
118 | return False
119 | predicate = lambda x: run_plan(x, problem, plan[1][x])
120 | return all(predicate(r) for r in problem.result(state, plan[0]))
121 | plan = and_or_graph_search(vacumm_world)
122 | assert run_plan('State_1', vacumm_world, plan)
123 |
124 |
125 | def test_LRTAStarAgent():
126 | my_agent = LRTAStarAgent(LRTA_problem)
127 | assert my_agent('State_3') == 'Right'
128 | assert my_agent('State_4') == 'Left'
129 | assert my_agent('State_3') == 'Right'
130 | assert my_agent('State_4') == 'Right'
131 | assert my_agent('State_5') is None
132 |
133 | my_agent = LRTAStarAgent(LRTA_problem)
134 | assert my_agent('State_4') == 'Left'
135 |
136 | my_agent = LRTAStarAgent(LRTA_problem)
137 | assert my_agent('State_5') is None
138 |
139 |
140 | def test_genetic_algorithm():
141 | # Graph coloring
142 | edges = {
143 | 'A': [0, 1],
144 | 'B': [0, 3],
145 | 'C': [1, 2],
146 | 'D': [2, 3]
147 | }
148 |
149 | def fitness(c):
150 | return sum(c[n1] != c[n2] for (n1, n2) in edges.values())
151 |
152 | solution_chars = GA_GraphColoringChars(edges, fitness)
153 | assert solution_chars == ['R', 'G', 'R', 'G'] or solution_chars == ['G', 'R', 'G', 'R']
154 |
155 | solution_bools = GA_GraphColoringBools(edges, fitness)
156 | assert solution_bools == [True, False, True, False] or solution_bools == [False, True, False, True]
157 |
158 | solution_ints = GA_GraphColoringInts(edges, fitness)
159 | assert solution_ints == [0, 1, 0, 1] or solution_ints == [1, 0, 1, 0]
160 |
161 | # Queens Problem
162 | gene_pool = range(8)
163 | population = init_population(100, gene_pool, 8)
164 |
165 | def fitness(q):
166 | non_attacking = 0
167 | for row1 in range(len(q)):
168 | for row2 in range(row1+1, len(q)):
169 | col1 = int(q[row1])
170 | col2 = int(q[row2])
171 | row_diff = row1 - row2
172 | col_diff = col1 - col2
173 |
174 | if col1 != col2 and row_diff != col_diff and row_diff != -col_diff:
175 | non_attacking += 1
176 |
177 | return non_attacking
178 |
179 |
180 | solution = genetic_algorithm(population, fitness, gene_pool=gene_pool, f_thres=25)
181 | assert fitness(solution) >= 25
182 |
183 |
184 | def GA_GraphColoringChars(edges, fitness):
185 | gene_pool = ['R', 'G']
186 | population = init_population(8, gene_pool, 4)
187 |
188 | return genetic_algorithm(population, fitness, gene_pool=gene_pool)
189 |
190 |
191 | def GA_GraphColoringBools(edges, fitness):
192 | gene_pool = [True, False]
193 | population = init_population(8, gene_pool, 4)
194 |
195 | return genetic_algorithm(population, fitness, gene_pool=gene_pool)
196 |
197 |
198 | def GA_GraphColoringInts(edges, fitness):
199 | population = init_population(8, [0, 1], 4)
200 |
201 | return genetic_algorithm(population, fitness)
202 |
203 |
204 |
205 | # TODO: for .ipynb:
206 | """
207 | >>> compare_graph_searchers()
208 | Searcher romania_map(A, B) romania_map(O, N) australia_map
209 | breadth_first_tree_search < 21/ 22/ 59/B> <1158/1159/3288/N> < 7/ 8/ 22/WA>
210 | breadth_first_search < 7/ 11/ 18/B> < 19/ 20/ 45/N> < 2/ 6/ 8/WA>
211 | depth_first_graph_search < 8/ 9/ 20/B> < 16/ 17/ 38/N> < 4/ 5/ 11/WA>
212 | iterative_deepening_search < 11/ 33/ 31/B> < 656/1815/1812/N> < 3/ 11/ 11/WA>
213 | depth_limited_search < 54/ 65/ 185/B> < 387/1012/1125/N> < 50/ 54/ 200/WA>
214 | recursive_best_first_search < 5/ 6/ 15/B> <5887/5888/16532/N> < 11/12/ 43/WA>
215 |
216 | >>> ' '.join(f.words())
217 | 'LID LARES DEAL LIE DIETS LIN LINT TIL TIN RATED ERAS LATEN DEAR TIE LINE INTER
218 | STEAL LATED LAST TAR SAL DITES RALES SAE RETS TAE RAT RAS SAT IDLE TILDES LEAST
219 | IDEAS LITE SATED TINED LEST LIT RASE RENTS TINEA EDIT EDITS NITES ALES LATE
220 | LETS RELIT TINES LEI LAT ELINT LATI SENT TARED DINE STAR SEAR NEST LITAS TIED
221 | SEAT SERAL RATE DINT DEL DEN SEAL TIER TIES NET SALINE DILATE EAST TIDES LINTER
222 | NEAR LITS ELINTS DENI RASED SERA TILE NEAT DERAT IDLEST NIDE LIEN STARED LIER
223 | LIES SETA NITS TINE DITAS ALINE SATIN TAS ASTER LEAS TSAR LAR NITE RALE LAS
224 | REAL NITER ATE RES RATEL IDEA RET IDEAL REI RATS STALE DENT RED IDES ALIEN SET
225 | TEL SER TEN TEA TED SALE TALE STILE ARES SEA TILDE SEN SEL ALINES SEI LASE
226 | DINES ILEA LINES ELD TIDE RENT DIEL STELA TAEL STALED EARL LEA TILES TILER LED
227 | ETA TALI ALE LASED TELA LET IDLER REIN ALIT ITS NIDES DIN DIE DENTS STIED LINER
228 | LASTED RATINE ERA IDLES DIT RENTAL DINER SENTI TINEAL DEIL TEAR LITER LINTS
229 | TEAL DIES EAR EAT ARLES SATE STARE DITS DELI DENTAL REST DITE DENTIL DINTS DITA
230 | DIET LENT NETS NIL NIT SETAL LATS TARE ARE SATI'
231 |
232 | >>> boggle_hill_climbing(list('ABCDEFGHI'), verbose=False)
233 | (['E', 'P', 'R', 'D', 'O', 'A', 'G', 'S', 'T'], 123)
234 | """
235 |
236 | if __name__ == '__main__':
237 | pytest.main()
238 |
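239 | # Note on the f_thres=25 used in test_genetic_algorithm: 8 queens form
240 | # C(8, 2) = 28 pairs in total, so a conflict-free board scores the maximum
241 | # of 28 non-attacking pairs and f_thres=25 accepts near-optimal individuals.
242 | # For example, the known solution [0, 4, 7, 5, 2, 6, 1, 3] scores 28.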
--------------------------------------------------------------------------------
/aima3/tests/test_text.py:
--------------------------------------------------------------------------------
1 | import pytest
2 | import os
3 | import random
4 |
5 | from aima3.text import *
6 | from aima3.utils import isclose, open_data
7 |
8 |
9 |
10 | def test_text_models():
11 | flatland = open_data("EN-text/flatland.txt").read()
12 | wordseq = words(flatland)
13 | P1 = UnigramWordModel(wordseq)
14 | P2 = NgramWordModel(2, wordseq)
15 | P3 = NgramWordModel(3, wordseq)
16 |
17 | # Test top
18 | assert P1.top(5) == [(2081, 'the'), (1479, 'of'),
19 | (1021, 'and'), (1008, 'to'),
20 | (850, 'a')]
21 |
22 | assert P2.top(5) == [(368, ('of', 'the')), (152, ('to', 'the')),
23 | (152, ('in', 'the')), (86, ('of', 'a')),
24 | (80, ('it', 'is'))]
25 |
26 | assert P3.top(5) == [(30, ('a', 'straight', 'line')),
27 | (19, ('of', 'three', 'dimensions')),
28 | (16, ('the', 'sense', 'of')),
29 | (13, ('by', 'the', 'sense')),
30 | (13, ('as', 'well', 'as'))]
31 |
32 | # Test isclose
33 | assert isclose(P1['the'], 0.0611, rel_tol=0.001)
34 | assert isclose(P2['of', 'the'], 0.0108, rel_tol=0.01)
35 | assert isclose(P3['so', 'as', 'to'], 0.000323, rel_tol=0.001)
36 |
37 | # Test cond_prob.get
38 | assert P2.cond_prob.get(('went',)) is None
39 | assert P3.cond_prob['in', 'order'].dictionary == {'to': 6}
40 |
41 | # Test dictionary
42 | test_string = 'unigram'
43 | wordseq = words(test_string)
44 | P1 = UnigramWordModel(wordseq)
45 |     assert P1.dictionary == {'unigram': 1}
46 |
47 | test_string = 'bigram text'
48 | wordseq = words(test_string)
49 | P2 = NgramWordModel(2, wordseq)
50 | assert P2.dictionary == {('bigram', 'text'): 1}
51 |
52 | test_string = 'test trigram text here'
53 | wordseq = words(test_string)
54 | P3 = NgramWordModel(3, wordseq)
55 | assert ('test', 'trigram', 'text') in P3.dictionary
56 | assert ('trigram', 'text', 'here') in P3.dictionary
57 |
58 |
59 | def test_char_models():
60 | test_string = 'test unigram'
61 | wordseq = words(test_string)
62 | P1 = UnigramCharModel(wordseq)
63 |
64 | expected_unigrams = {'n': 1, 's': 1, 'e': 1, 'i': 1, 'm': 1, 'g': 1, 'r': 1, 'a': 1, 't': 2, 'u': 1}
65 | assert len(P1.dictionary) == len(expected_unigrams)
66 | for char in test_string.replace(' ', ''):
67 | assert char in P1.dictionary
68 |
69 | test_string = 'alpha beta'
70 | wordseq = words(test_string)
71 | P1 = NgramCharModel(1, wordseq)
72 |
73 | assert len(P1.dictionary) == len(set(test_string))
74 | for char in set(test_string):
75 | assert tuple(char) in P1.dictionary
76 |
77 | test_string = 'bigram'
78 | wordseq = words(test_string)
79 | P2 = NgramCharModel(2, wordseq)
80 |
81 | expected_bigrams = {(' ', 'b'): 1, ('b', 'i'): 1, ('i', 'g'): 1, ('g', 'r'): 1, ('r', 'a'): 1, ('a', 'm'): 1}
82 |
83 | assert len(P2.dictionary) == len(expected_bigrams)
84 | for bigram, count in expected_bigrams.items():
85 | assert bigram in P2.dictionary
86 | assert P2.dictionary[bigram] == count
87 |
88 | test_string = 'bigram bigram'
89 | wordseq = words(test_string)
90 | P2 = NgramCharModel(2, wordseq)
91 |
92 | expected_bigrams = {(' ', 'b'): 2, ('b', 'i'): 2, ('i', 'g'): 2, ('g', 'r'): 2, ('r', 'a'): 2, ('a', 'm'): 2}
93 |
94 | assert len(P2.dictionary) == len(expected_bigrams)
95 | for bigram, count in expected_bigrams.items():
96 | assert bigram in P2.dictionary
97 | assert P2.dictionary[bigram] == count
98 |
99 | test_string = 'trigram'
100 | wordseq = words(test_string)
101 | P3 = NgramCharModel(3, wordseq)
102 | expected_trigrams = {(' ', 't', 'r'): 1, ('t', 'r', 'i'): 1,
103 | ('r', 'i', 'g'): 1, ('i', 'g', 'r'): 1,
104 | ('g', 'r', 'a'): 1, ('r', 'a', 'm'): 1}
105 |
106 | assert len(P3.dictionary) == len(expected_trigrams)
107 |     for trigram, count in expected_trigrams.items():
108 |         assert trigram in P3.dictionary
109 |         assert P3.dictionary[trigram] == count
110 |
111 | test_string = 'trigram trigram trigram'
112 | wordseq = words(test_string)
113 | P3 = NgramCharModel(3, wordseq)
114 | expected_trigrams = {(' ', 't', 'r'): 3, ('t', 'r', 'i'): 3,
115 | ('r', 'i', 'g'): 3, ('i', 'g', 'r'): 3,
116 | ('g', 'r', 'a'): 3, ('r', 'a', 'm'): 3}
117 |
118 | assert len(P3.dictionary) == len(expected_trigrams)
119 |     for trigram, count in expected_trigrams.items():
120 |         assert trigram in P3.dictionary
121 |         assert P3.dictionary[trigram] == count
122 |
123 |
124 | def test_samples():
125 | story = open_data("EN-text/flatland.txt").read()
126 | story += open_data("gutenberg.txt").read()
127 | wordseq = words(story)
128 | P1 = UnigramWordModel(wordseq)
129 | P2 = NgramWordModel(2, wordseq)
130 | P3 = NgramWordModel(3, wordseq)
131 |
132 | s1 = P1.samples(10)
133 |     s2 = P2.samples(10)
134 | s3 = P3.samples(10)
135 |
136 | assert len(s1.split(' ')) == 10
137 | assert len(s2.split(' ')) == 10
138 | assert len(s3.split(' ')) == 10
139 |
140 |
141 | def test_viterbi_segmentation():
142 | flatland = open_data("EN-text/flatland.txt").read()
143 | wordseq = words(flatland)
144 | P = UnigramWordModel(wordseq)
145 | text = "itiseasytoreadwordswithoutspaces"
146 |
147 | s, p = viterbi_segment(text, P)
148 | assert s == [
149 | 'it', 'is', 'easy', 'to', 'read', 'words', 'without', 'spaces']
150 |
151 |
152 | def test_shift_encoding():
153 | code = shift_encode("This is a secret message.", 17)
154 |
155 | assert code == 'Kyzj zj r jvtivk dvjjrxv.'
156 |
157 |
158 | def test_shift_decoding():
159 | flatland = open_data("EN-text/flatland.txt").read()
160 | ring = ShiftDecoder(flatland)
161 | msg = ring.decode('Kyzj zj r jvtivk dvjjrxv.')
162 |
163 | assert msg == 'This is a secret message.'
164 |
165 |
166 | def test_permutation_decoder():
167 | gutenberg = open_data("gutenberg.txt").read()
168 | flatland = open_data("EN-text/flatland.txt").read()
169 |
170 | pd = PermutationDecoder(canonicalize(gutenberg))
171 | assert pd.decode('aba') in ('ece', 'ete', 'tat', 'tit', 'txt')
172 |
173 | pd = PermutationDecoder(canonicalize(flatland))
174 | assert pd.decode('aba') in ('ded', 'did', 'ece', 'ele', 'eme', 'ere', 'eve', 'eye', 'iti', 'mom', 'ses', 'tat', 'tit')
175 |
176 |
177 | def test_rot13_encoding():
178 | code = rot13('Hello, world!')
179 |
180 | assert code == 'Uryyb, jbeyq!'
181 |
182 |
183 | def test_rot13_decoding():
184 | flatland = open_data("EN-text/flatland.txt").read()
185 | ring = ShiftDecoder(flatland)
186 | msg = ring.decode(rot13('Hello, world!'))
187 |
188 | assert msg == 'Hello, world!'
189 |
190 |
191 | def test_counting_probability_distribution():
192 | D = CountingProbDist()
193 |
194 | for i in range(10000):
195 | D.add(random.choice('123456'))
196 |
197 | ps = [D[n] for n in '123456']
198 |
199 | assert 1 / 7 <= min(ps) <= max(ps) <= 1 / 5
200 |
201 |
202 | def test_ir_system():
203 | from collections import namedtuple
204 | Results = namedtuple('IRResults', ['score', 'url'])
205 |
206 | uc = UnixConsultant()
207 |
208 | def verify_query(query, expected):
209 | assert len(expected) == len(query)
210 |
211 | for expected, (score, d) in zip(expected, query):
212 | doc = uc.documents[d]
213 | assert "{0:.2f}".format(
214 | expected.score) == "{0:.2f}".format(score * 100)
215 | assert os.path.basename(expected.url) == os.path.basename(doc.url)
216 |
217 | return True
218 |
219 | q1 = uc.query("how do I remove a file")
220 | assert verify_query(q1, [
221 | Results(76.83, "aima-data/MAN/rm.txt"),
222 | Results(67.83, "aima-data/MAN/tar.txt"),
223 | Results(67.79, "aima-data/MAN/cp.txt"),
224 | Results(66.58, "aima-data/MAN/zip.txt"),
225 | Results(64.58, "aima-data/MAN/gzip.txt"),
226 | Results(63.74, "aima-data/MAN/pine.txt"),
227 | Results(62.95, "aima-data/MAN/shred.txt"),
228 | Results(57.46, "aima-data/MAN/pico.txt"),
229 | Results(43.38, "aima-data/MAN/login.txt"),
230 | Results(41.93, "aima-data/MAN/ln.txt"),
231 | ])
232 |
233 | q2 = uc.query("how do I delete a file")
234 | assert verify_query(q2, [
235 | Results(75.47, "aima-data/MAN/diff.txt"),
236 | Results(69.12, "aima-data/MAN/pine.txt"),
237 | Results(63.56, "aima-data/MAN/tar.txt"),
238 | Results(60.63, "aima-data/MAN/zip.txt"),
239 | Results(57.46, "aima-data/MAN/pico.txt"),
240 | Results(51.28, "aima-data/MAN/shred.txt"),
241 | Results(26.72, "aima-data/MAN/tr.txt"),
242 | ])
243 |
244 | q3 = uc.query("email")
245 | assert verify_query(q3, [
246 | Results(18.39, "aima-data/MAN/pine.txt"),
247 | Results(12.01, "aima-data/MAN/info.txt"),
248 | Results(9.89, "aima-data/MAN/pico.txt"),
249 | Results(8.73, "aima-data/MAN/grep.txt"),
250 | Results(8.07, "aima-data/MAN/zip.txt"),
251 | ])
252 |
253 | q4 = uc.query("word count for files")
254 | assert verify_query(q4, [
255 | Results(128.15, "aima-data/MAN/grep.txt"),
256 | Results(94.20, "aima-data/MAN/find.txt"),
257 | Results(81.71, "aima-data/MAN/du.txt"),
258 | Results(55.45, "aima-data/MAN/ps.txt"),
259 | Results(53.42, "aima-data/MAN/more.txt"),
260 | Results(42.00, "aima-data/MAN/dd.txt"),
261 | Results(12.85, "aima-data/MAN/who.txt"),
262 | ])
263 |
264 | q5 = uc.query("learn: date")
265 | assert verify_query(q5, [])
266 |
267 | q6 = uc.query("2003")
268 | assert verify_query(q6, [
269 | Results(14.58, "aima-data/MAN/pine.txt"),
270 | Results(11.62, "aima-data/MAN/jar.txt"),
271 | ])
272 |
273 |
274 | def test_words():
275 | assert words("``EGAD!'' Edgar cried.") == ['egad', 'edgar', 'cried']
276 |
277 |
278 | def test_canonicalize():
279 | assert canonicalize("``EGAD!'' Edgar cried.") == 'egad edgar cried'
280 |
281 |
282 | def test_translate():
283 | text = 'orange apple lemon '
284 |     func = lambda x: ('s ' + x) if x == ' ' else x
285 |
286 | assert translate(text, func) == 'oranges apples lemons '
287 |
288 |
289 | def test_bigrams():
290 | assert bigrams('this') == ['th', 'hi', 'is']
291 | assert bigrams(['this', 'is', 'a', 'test']) == [['this', 'is'], ['is', 'a'], ['a', 'test']]
292 |
293 |
294 |
295 | if __name__ == '__main__':
296 | pytest.main()
297 |
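298 | # rot13 is a Caesar shift by 13, so applying it twice is the identity on
299 | # letters and it coincides with shift_encode(text, 13); as a sketch:
300 | #
301 | #     assert rot13(rot13('Hello, world!')) == 'Hello, world!'
302 | #     assert rot13('Hello, world!') == shift_encode('Hello, world!', 13)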
--------------------------------------------------------------------------------
/aima3/tests/test_utils.py:
--------------------------------------------------------------------------------
1 | import pytest
2 | from aima3.utils import *
3 | import random
4 |
5 |
6 | def test_removeall_list():
7 | assert removeall(4, []) == []
8 | assert removeall(4, [1, 2, 3, 4]) == [1, 2, 3]
9 | assert removeall(4, [4, 1, 4, 2, 3, 4, 4]) == [1, 2, 3]
10 |
11 |
12 | def test_removeall_string():
13 | assert removeall('s', '') == ''
14 | assert removeall('s', 'This is a test. Was a test.') == 'Thi i a tet. Wa a tet.'
15 |
16 |
17 | def test_unique():
18 | assert unique([1, 2, 3, 2, 1]) == [1, 2, 3]
19 | assert unique([1, 5, 6, 7, 6, 5]) == [1, 5, 6, 7]
20 |
21 |
22 | def test_count():
23 | assert count([1, 2, 3, 4, 2, 3, 4]) == 7
24 | assert count("aldpeofmhngvia") == 14
25 | assert count([True, False, True, True, False]) == 3
26 | assert count([5 > 1, len("abc") == 3, 3+1 == 5]) == 2
27 |
28 |
29 | def test_product():
30 | assert product([1, 2, 3, 4]) == 24
31 | assert product(list(range(1, 11))) == 3628800
32 |
33 |
34 | def test_first():
35 | assert first('word') == 'w'
36 | assert first('') is None
37 | assert first('', 'empty') == 'empty'
38 | assert first(range(10)) == 0
39 | assert first(x for x in range(10) if x > 3) == 4
40 | assert first(x for x in range(10) if x > 100) is None
41 |
42 |
43 | def test_is_in():
44 | e = []
45 | assert is_in(e, [1, e, 3]) is True
46 | assert is_in(e, [1, [], 3]) is False
47 |
48 |
49 | def test_mode():
50 | assert mode([12, 32, 2, 1, 2, 3, 2, 3, 2, 3, 44, 3, 12, 4, 9, 0, 3, 45, 3]) == 3
51 | assert mode("absndkwoajfkalwpdlsdlfllalsflfdslgflal") == 'l'
52 |
53 |
54 | def test_powerset():
55 | assert powerset([1, 2, 3]) == [(1,), (2,), (3,), (1, 2), (1, 3), (2, 3), (1, 2, 3)]
56 |
57 |
58 | def test_argminmax():
59 | assert argmin([-2, 1], key=abs) == 1
60 | assert argmax([-2, 1], key=abs) == -2
61 | assert argmax(['one', 'to', 'three'], key=len) == 'three'
62 |
63 |
64 | def test_histogram():
65 | assert histogram([1, 2, 4, 2, 4, 5, 7, 9, 2, 1]) == [(1, 2), (2, 3),
66 | (4, 2), (5, 1),
67 | (7, 1), (9, 1)]
68 | assert histogram([1, 2, 4, 2, 4, 5, 7, 9, 2, 1], 0, lambda x: x*x) == [(1, 2), (4, 3),
69 | (16, 2), (25, 1),
70 | (49, 1), (81, 1)]
71 | assert histogram([1, 2, 4, 2, 4, 5, 7, 9, 2, 1], 1) == [(2, 3), (4, 2),
72 | (1, 2), (9, 1),
73 | (7, 1), (5, 1)]
74 |
75 |
76 | def test_dotproduct():
77 | assert dotproduct([1, 2, 3], [1000, 100, 10]) == 1230
78 |
79 |
80 | def test_element_wise_product():
81 | assert element_wise_product([1, 2, 5], [7, 10, 0]) == [7, 20, 0]
82 | assert element_wise_product([1, 6, 3, 0], [9, 12, 0, 0]) == [9, 72, 0, 0]
83 |
84 |
85 | def test_matrix_multiplication():
86 | assert matrix_multiplication([[1, 2, 3],
87 | [2, 3, 4]],
88 | [[3, 4],
89 | [1, 2],
90 | [1, 0]]) == [[8, 8], [13, 14]]
91 |
92 | assert matrix_multiplication([[1, 2, 3],
93 | [2, 3, 4]],
94 | [[3, 4, 8, 1],
95 | [1, 2, 5, 0],
96 | [1, 0, 0, 3]],
97 | [[1, 2],
98 | [3, 4],
99 | [5, 6],
100 | [1, 2]]) == [[132, 176], [224, 296]]
101 |
102 |
103 | def test_vector_to_diagonal():
104 | assert vector_to_diagonal([1, 2, 3]) == [[1, 0, 0], [0, 2, 0], [0, 0, 3]]
105 | assert vector_to_diagonal([0, 3, 6]) == [[0, 0, 0], [0, 3, 0], [0, 0, 6]]
106 |
107 |
108 | def test_vector_add():
109 | assert vector_add((0, 1), (8, 9)) == (8, 10)
110 |
111 |
112 | def test_scalar_vector_product():
113 | assert scalar_vector_product(2, [1, 2, 3]) == [2, 4, 6]
114 |
115 |
116 | def test_scalar_matrix_product():
117 | assert rounder(scalar_matrix_product(-5, [[1, 2], [3, 4], [0, 6]])) == [[-5, -10], [-15, -20],
118 | [0, -30]]
119 | assert rounder(scalar_matrix_product(0.2, [[1, 2], [2, 3]])) == [[0.2, 0.4], [0.4, 0.6]]
120 |
121 |
122 | def test_inverse_matrix():
123 | assert rounder(inverse_matrix([[1, 0], [0, 1]])) == [[1, 0], [0, 1]]
124 | assert rounder(inverse_matrix([[2, 1], [4, 3]])) == [[1.5, -0.5], [-2.0, 1.0]]
125 | assert rounder(inverse_matrix([[4, 7], [2, 6]])) == [[0.6, -0.7], [-0.2, 0.4]]
126 |
127 |
128 | def test_rounder():
129 | assert rounder(5.3330000300330) == 5.3330
130 | assert rounder(10.234566) == 10.2346
131 | assert rounder([1.234566, 0.555555, 6.010101]) == [1.2346, 0.5556, 6.0101]
132 | assert rounder([[1.234566, 0.555555, 6.010101],
133 | [10.505050, 12.121212, 6.030303]]) == [[1.2346, 0.5556, 6.0101],
134 | [10.5051, 12.1212, 6.0303]]
135 |
136 |
137 | def test_num_or_str():
138 | assert num_or_str('42') == 42
139 | assert num_or_str(' 42x ') == '42x'
140 |
141 |
142 | def test_normalize():
143 | assert normalize([1, 2, 1]) == [0.25, 0.5, 0.25]
144 |
145 |
146 | def test_norm():
147 | assert isclose(norm([1, 2, 1], 1), 4)
148 | assert isclose(norm([3, 4], 2), 5)
149 | assert isclose(norm([-1, 1, 2], 4), 18**0.25)
150 |
151 |
152 | def test_clip():
153 | assert [clip(x, 0, 1) for x in [-1, 0.5, 10]] == [0, 0.5, 1]
154 |
155 |
156 | def test_sigmoid():
157 | assert isclose(0.5, sigmoid(0))
158 | assert isclose(0.7310585786300049, sigmoid(1))
159 | assert isclose(0.2689414213699951, sigmoid(-1))
160 |
161 |
162 | def test_gaussian():
163 | assert gaussian(1,0.5,0.7) == 0.6664492057835993
164 | assert gaussian(5,2,4.5) == 0.19333405840142462
165 | assert gaussian(3,1,3) == 0.3989422804014327
166 |
167 |
168 | def test_sigmoid_derivative():
169 | value = 1
170 | assert sigmoid_derivative(value) == 0
171 |
172 | value = 3
173 | assert sigmoid_derivative(value) == -6
174 |
175 |
176 | def test_weighted_choice():
177 | choices = [('a', 0.5), ('b', 0.3), ('c', 0.2)]
178 | choice = weighted_choice(choices)
179 | assert choice in choices
180 |
181 |
182 | def compare_list(x, y):
183 | return all([elm_x == y[i] for i, elm_x in enumerate(x)])
184 |
185 |
186 | def test_distance():
187 | assert distance((1, 2), (5, 5)) == 5.0
188 |
189 |
190 | def test_distance_squared():
191 | assert distance_squared((1, 2), (5, 5)) == 25.0
192 |
193 |
194 | def test_vector_clip():
195 | assert vector_clip((-1, 10), (0, 0), (9, 9)) == (0, 9)
196 |
197 |
198 | def test_turn_heading():
199 | assert turn_heading((0, 1), 1) == (-1, 0)
200 | assert turn_heading((0, 1), -1) == (1, 0)
201 | assert turn_heading((1, 0), 1) == (0, 1)
202 | assert turn_heading((1, 0), -1) == (0, -1)
203 | assert turn_heading((0, -1), 1) == (1, 0)
204 | assert turn_heading((0, -1), -1) == (-1, 0)
205 | assert turn_heading((-1, 0), 1) == (0, -1)
206 | assert turn_heading((-1, 0), -1) == (0, 1)
207 |
208 |
209 | def test_turn_left():
210 | assert turn_left((0, 1)) == (-1, 0)
211 |
212 |
213 | def test_turn_right():
214 | assert turn_right((0, 1)) == (1, 0)
215 |
216 |
217 | def test_step():
218 | assert step(1) == step(0.5) == 1
219 | assert step(0) == 1
220 | assert step(-1) == step(-0.5) == 0
221 |
222 |
223 | def test_Expr():
224 | A, B, C = symbols('A, B, C')
225 | assert symbols('A, B, C') == (Symbol('A'), Symbol('B'), Symbol('C'))
226 | assert A.op == repr(A) == 'A'
227 | assert arity(A) == 0 and A.args == ()
228 |
229 | b = Expr('+', A, 1)
230 | assert arity(b) == 2 and b.op == '+' and b.args == (A, 1)
231 |
232 | u = Expr('-', b)
233 | assert arity(u) == 1 and u.op == '-' and u.args == (b,)
234 |
235 | assert (b ** u) == (b ** u)
236 | assert (b ** u) != (u ** b)
237 |
238 | assert A + b * C ** 2 == A + (b * (C ** 2))
239 |
240 | ex = C + 1 / (A % 1)
241 | assert list(subexpressions(ex)) == [(C + (1 / (A % 1))), C, (1 / (A % 1)), 1, (A % 1), A, 1]
242 | assert A in subexpressions(ex)
243 | assert B not in subexpressions(ex)
244 |
245 |
246 | def test_expr():
247 | P, Q, x, y, z, GP = symbols('P, Q, x, y, z, GP')
248 | assert (expr(y + 2 * x)
249 | == expr('y + 2 * x')
250 | == Expr('+', y, Expr('*', 2, x)))
251 | assert expr('P & Q ==> P') == Expr('==>', P & Q, P)
252 | assert expr('P & Q <=> Q & P') == Expr('<=>', (P & Q), (Q & P))
253 | assert expr('P(x) | P(y) & Q(z)') == (P(x) | (P(y) & Q(z)))
254 | # x is grandparent of z if x is parent of y and y is parent of z:
255 | assert (expr('GP(x, z) <== P(x, y) & P(y, z)')
256 | == Expr('<==', GP(x, z), P(x, y) & P(y, z)))
257 |
258 | def test_FIFOQueue():
259 |     # Create an empty queue
260 |     queue = FIFOQueue()
261 |     # Generate an array of numbers to be used for testing
262 |     test_data = [random.choice(range(100)) for _ in range(100)]
263 |     # Index of the next element to be added to the queue
264 |     front_head = 0
265 |     # Index of the next element to be removed from the queue
266 |     back_head = 0
267 |     while front_head < 100 or back_head < 100:
268 |         if front_head == 100:  # only popping is possible
269 |             # check the pop and append methods
270 |             assert queue.pop() == test_data[back_head]
271 |             back_head += 1
272 |         elif back_head == front_head:  # only pushing is possible
273 |             queue.append(test_data[front_head])
274 |             front_head += 1
275 |         # otherwise push or pop at random
276 |         elif random.random() < 0.5:
277 |             assert queue.pop() == test_data[back_head]
278 |             back_head += 1
279 |         else:
280 |             queue.append(test_data[front_head])
281 |             front_head += 1
282 |         # check the __len__ method
283 |         assert len(queue) == front_head - back_head
284 |         # check the __contains__ method
285 |         if front_head - back_head > 0:
286 |             assert random.choice(test_data[back_head:front_head]) in queue
287 | 
288 |     # check the extend method
289 |     test_data1 = [random.choice(range(100)) for _ in range(50)]
290 |     test_data2 = [random.choice(range(100)) for _ in range(50)]
291 |     # append the elements of test_data1
292 |     queue.extend(test_data1)
293 |     # then append the elements of test_data2
294 |     queue.extend(test_data2)
295 |     # reset front_head and drain the queue in order
296 |     front_head = 0
297 | 
298 |     while front_head < 50:
299 |         assert test_data1[front_head] == queue.pop()
300 |         front_head += 1
301 | 
302 |     while front_head < 100:
303 |         assert test_data2[front_head - 50] == queue.pop()
304 |         front_head += 1
305 |
306 | if __name__ == '__main__':
307 | pytest.main()
308 |
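309 | # Note: weighted_choice returns a (choice, weight) pair drawn from the list,
310 | # which is why test_weighted_choice checks membership in choices rather than
311 | # in the letters alone; e.g. it may return ('a', 0.5).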
--------------------------------------------------------------------------------
/notebooks/connect_four.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "# ConnectFour\n",
8 | "\n",
9 | "This notebook explores the game ConnectFour in the aima3 collection.\n",
10 | "\n",
11 | "* You drop pieces from the top into one of the columns, and they land in the next available open spot.\n",
12 | "* You must get 4 pieces in a row in this grid, either horizontally, vertically, or diagonally.\n",
13 | "\n",
14 | "\n",
15 | " - | 1 | 2 | 3 | 4 | 5 | 6 | 7\n",
16 | "--------|---|---|---|---|---|---|---\n",
17 | "**6** |( 1, 6) | ( 2, 6) | ( 3, 6) | (4,6) |(5,6) |(6,6) |(7,6)\n",
18 | "**5** |( 1, 5) | ( 2, 5) | ( 3, 5) | (4,5) |(5,5) |(6,5) |(7,5)\n",
19 | "**4** |( 1, 4) | ( 2, 4) | ( 3, 4) | (4,4) |(5,4) |(6,4) |(7,4)\n",
20 | "**3** |( 1, 3) | ( 2, 3) | ( 3, 3) | (4,3) |(5,3) |(6,3) |(7,3)\n",
21 | "**2** |( 1, 2) | ( 2, 2) | ( 3, 2) | (4,2) |(5,2) |(6,2) |(7,2)\n",
22 | "**1** |( 1, 1) | ( 2, 1) | ( 3, 1) | (4,1) |(5,1) |(6,1) |(7,1)"
23 | ]
24 | },
25 | {
26 | "cell_type": "markdown",
27 | "metadata": {},
28 | "source": [
29 | "There are a few pre-defined AI agents that can play these games. QueryPlayer is for humans, and MCTSPlayer uses Monte Carlo Tree Search (MCTS). The others are well-known search-based algorithms."
30 | ]
31 | },
32 | {
33 | "cell_type": "code",
34 | "execution_count": 1,
35 | "metadata": {},
36 | "outputs": [],
37 | "source": [
38 | "from aima3.games import (ConnectFour, RandomPlayer, QueryPlayer, players,\n",
39 | " MCTSPlayer, MiniMaxPlayer, AlphaBetaCutoffPlayer,\n",
40 | " AlphaBetaPlayer)"
41 | ]
42 | },
43 | {
44 | "cell_type": "markdown",
45 | "metadata": {},
46 | "source": [
47 | "Let's play a game:"
48 | ]
49 | },
50 | {
51 | "cell_type": "code",
52 | "execution_count": 2,
53 | "metadata": {},
54 | "outputs": [
55 | {
56 | "name": "stdout",
57 | "output_type": "stream",
58 | "text": [
59 | "Rando is thinking...\n",
60 | "Rando makes action (3, 1):\n",
61 | ". . . . . . . \n",
62 | ". . . . . . . \n",
63 | ". . . . . . . \n",
64 | ". . . . . . . \n",
65 | ". . . . . . . \n",
66 | ". . X . . . . \n",
67 | "Alphie is thinking...\n",
68 | "Alphie makes action (1, 1):\n",
69 | ". . . . . . . \n",
70 | ". . . . . . . \n",
71 | ". . . . . . . \n",
72 | ". . . . . . . \n",
73 | ". . . . . . . \n",
74 | "O . X . . . . \n",
75 | "Rando is thinking...\n",
76 | "Rando makes action (2, 1):\n",
77 | ". . . . . . . \n",
78 | ". . . . . . . \n",
79 | ". . . . . . . \n",
80 | ". . . . . . . \n",
81 | ". . . . . . . \n",
82 | "O X X . . . . \n",
83 | "Alphie is thinking...\n",
84 | "Alphie makes action (1, 2):\n",
85 | ". . . . . . . \n",
86 | ". . . . . . . \n",
87 | ". . . . . . . \n",
88 | ". . . . . . . \n",
89 | "O . . . . . . \n",
90 | "O X X . . . . \n",
91 | "Rando is thinking...\n",
92 | "Rando makes action (3, 2):\n",
93 | ". . . . . . . \n",
94 | ". . . . . . . \n",
95 | ". . . . . . . \n",
96 | ". . . . . . . \n",
97 | "O . X . . . . \n",
98 | "O X X . . . . \n",
99 | "Alphie is thinking...\n",
100 | "Alphie makes action (1, 3):\n",
101 | ". . . . . . . \n",
102 | ". . . . . . . \n",
103 | ". . . . . . . \n",
104 | "O . . . . . . \n",
105 | "O . X . . . . \n",
106 | "O X X . . . . \n",
107 | "Rando is thinking...\n",
108 | "Rando makes action (6, 1):\n",
109 | ". . . . . . . \n",
110 | ". . . . . . . \n",
111 | ". . . . . . . \n",
112 | "O . . . . . . \n",
113 | "O . X . . . . \n",
114 | "O X X . . X . \n",
115 | "Alphie is thinking...\n",
116 | "Alphie makes action (1, 4):\n",
117 | ". . . . . . . \n",
118 | ". . . . . . . \n",
119 | "O . . . . . . \n",
120 | "O . . . . . . \n",
121 | "O . X . . . . \n",
122 | "O X X . . X . \n",
123 | "***** Alphie wins!\n"
124 | ]
125 | },
126 | {
127 | "data": {
128 | "text/plain": [
129 | "['Alphie']"
130 | ]
131 | },
132 | "execution_count": 2,
133 | "metadata": {},
134 | "output_type": "execute_result"
135 | }
136 | ],
137 | "source": [
138 | "p1 = RandomPlayer(\"Rando\")\n",
139 | "p2 = AlphaBetaCutoffPlayer(\"Alphie\")\n",
140 | "game = ConnectFour()\n",
141 | "game.play_game(p1, p2)"
142 | ]
143 | },
144 | {
145 | "cell_type": "code",
146 | "execution_count": 3,
147 | "metadata": {},
148 | "outputs": [
149 | {
150 | "name": "stdout",
151 | "output_type": "stream",
152 | "text": [
153 | ". . . . . . . \n",
154 | ". . . . . . . \n",
155 | ". . . . . . . \n",
156 | ". . . . . . . \n",
157 | ". . . . . . . \n",
158 | ". . . . . . . \n"
159 | ]
160 | }
161 | ],
162 | "source": [
163 | "game.display(game.initial)"
164 | ]
165 | },
166 | {
167 | "cell_type": "code",
168 | "execution_count": 4,
169 | "metadata": {},
170 | "outputs": [
171 | {
172 | "data": {
173 | "text/plain": [
174 | "(2, 1)"
175 | ]
176 | },
177 | "execution_count": 4,
178 | "metadata": {},
179 | "output_type": "execute_result"
180 | }
181 | ],
182 | "source": [
183 | "p1.get_action(game.initial, turn=1)"
184 | ]
185 | },
186 | {
187 | "cell_type": "code",
188 | "execution_count": 5,
189 | "metadata": {},
190 | "outputs": [
191 | {
192 | "data": {
193 | "text/plain": [
194 | "(1, 1)"
195 | ]
196 | },
197 | "execution_count": 5,
198 | "metadata": {},
199 | "output_type": "execute_result"
200 | }
201 | ],
202 | "source": [
203 | "p2.get_action(game.initial, turn=1)"
204 | ]
205 | },
206 | {
207 | "cell_type": "markdown",
208 | "metadata": {},
209 | "source": [
210 | "## One player playing both sides:"
211 | ]
212 | },
213 | {
214 | "cell_type": "code",
215 | "execution_count": 6,
216 | "metadata": {},
217 | "outputs": [
218 | {
219 | "name": "stdout",
220 | "output_type": "stream",
221 | "text": [
222 | "Current state:\n",
223 | ". . . . . . . \n",
224 | ". . . . . . . \n",
225 | ". . . . . . . \n",
226 | ". . . . . . . \n",
227 | ". . . . . . . \n",
228 | ". . . . . . . \n",
229 | "Made the action: (7, 1)\n",
230 | "Current state:\n",
231 | ". . . . . . . \n",
232 | ". . . . . . . \n",
233 | ". . . . . . . \n",
234 | ". . . . . . . \n",
235 | ". . . . . . . \n",
236 | ". . . . . . X \n",
237 | "Made the action: (6, 1)\n",
238 | "Current state:\n",
239 | ". . . . . . . \n",
240 | ". . . . . . . \n",
241 | ". . . . . . . \n",
242 | ". . . . . . . \n",
243 | ". . . . . . . \n",
244 | ". . . . . O X \n",
245 | "Made the action: (1, 1)\n",
246 | ". . . . . . . \n",
247 | ". . . . . . . \n",
248 | ". . . . . . . \n",
249 | ". . . . . . . \n",
250 | ". . . . . . . \n",
251 | "X . . . . O X \n"
252 | ]
253 | }
254 | ],
255 | "source": [
256 | "state = game.initial\n",
257 | "turn = 1\n",
258 | "for i in range(3):\n",
259 | " print(\"Current state:\")\n",
260 | " game.display(state)\n",
261 | " action = p1.get_action(state, round(turn))\n",
262 | " state = game.result(state, action)\n",
263 | " print(\"Made the action: %s\" % (action, ))\n",
264 | " turn += .5\n",
265 | "game.display(state)"
266 | ]
267 | },
268 | {
269 | "cell_type": "code",
270 | "execution_count": 7,
271 | "metadata": {},
272 | "outputs": [
273 | {
274 | "name": "stdout",
275 | "output_type": "stream",
276 | "text": [
277 | "0 RandomPlayer-0\n",
278 | "1 AlphaBetaPlayer-0\n",
279 | "2 MiniMaxPlayer-0\n",
280 | "3 AlphaBetaCutoffPlayer-0\n",
281 | "4 MCTSPlayer-0\n"
282 | ]
283 | }
284 | ],
285 | "source": [
286 | "for i,p in enumerate(players):\n",
287 | " print(i, p.name)"
288 | ]
289 | },
290 | {
291 | "cell_type": "code",
292 | "execution_count": 8,
293 | "metadata": {},
294 | "outputs": [
295 | {
296 | "name": "stderr",
297 | "output_type": "stream",
298 | "text": [
299 | "100%|██████████| 10/10 [00:04<00:00, 2.02it/s]\n"
300 | ]
301 | },
302 | {
303 | "data": {
304 | "text/plain": [
305 | "{'AlphaBetaCutoffPlayer-0': 10, 'DRAW': 0, 'MCTSPlayer-0': 0}"
306 | ]
307 | },
308 | "execution_count": 8,
309 | "metadata": {},
310 | "output_type": "execute_result"
311 | }
312 | ],
313 | "source": [
314 | "game.play_matches(10, players[4], players[3])"
315 | ]
316 | },
317 | {
318 | "cell_type": "code",
319 | "execution_count": 9,
320 | "metadata": {},
321 | "outputs": [
322 | {
323 | "name": "stderr",
324 | "output_type": "stream",
325 | "text": [
326 | "100%|██████████| 60/60 [00:14<00:00, 11.60it/s]\n"
327 | ]
328 | },
329 | {
330 | "data": {
331 | "text/plain": [
332 | "{'AlphaBetaCutoffPlayer-0': 38,\n",
333 | " 'DRAW': 0,\n",
334 | " 'MCTSPlayer-0': 20,\n",
335 | " 'RandomPlayer-0': 2}"
336 | ]
337 | },
338 | "execution_count": 9,
339 | "metadata": {},
340 | "output_type": "execute_result"
341 | }
342 | ],
343 | "source": [
344 | "game.play_tournament(10, players[4], players[3], players[0])"
345 | ]
346 | }
347 | ],
348 | "metadata": {
349 | "kernelspec": {
350 | "display_name": "Python 3",
351 | "language": "python",
352 | "name": "python3"
353 | },
354 | "language_info": {
355 | "codemirror_mode": {
356 | "name": "ipython",
357 | "version": 3
358 | },
359 | "file_extension": ".py",
360 | "mimetype": "text/x-python",
361 | "name": "python",
362 | "nbconvert_exporter": "python",
363 | "pygments_lexer": "ipython3",
364 | "version": "3.6.3"
365 | }
366 | },
367 | "nbformat": 4,
368 | "nbformat_minor": 2
369 | }
370 |
--------------------------------------------------------------------------------
/notebooks/images/IMAGE-CREDITS:
--------------------------------------------------------------------------------
1 | PHOTO CREDITS
2 |
3 | Image After http://www.imageafter.com/
4 |
5 | b15woods003.jpg
6 | (Cropped to 764x764 and scaled to 50x50 to make wall-icon.jpg
7 | by Gregory Weber)
8 |
9 | Noctua Graphics, http://www.noctua-graphics.de/english/fraset_e.htm
10 |
11 | dirt05.jpg 512x512
12 | (Scaled to 50x50 to make dirt05-icon.jpg by Gregory Weber)
13 |
14 | Gregory Weber
15 |
16 | dirt.svg, dirt.png
17 | vacuum.svg, vacuum.png
18 |
--------------------------------------------------------------------------------
/notebooks/images/aima3e_big.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ArtificialIntelligenceToolkit/aima3/4e6dc9b467d28015630274ff183b75b2ff4ab6eb/notebooks/images/aima3e_big.jpg
--------------------------------------------------------------------------------
/notebooks/images/aima_logo.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ArtificialIntelligenceToolkit/aima3/4e6dc9b467d28015630274ff183b75b2ff4ab6eb/notebooks/images/aima_logo.png
--------------------------------------------------------------------------------
/notebooks/images/bayesnet.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ArtificialIntelligenceToolkit/aima3/4e6dc9b467d28015630274ff183b75b2ff4ab6eb/notebooks/images/bayesnet.png
--------------------------------------------------------------------------------
/notebooks/images/decisiontree_fruit.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ArtificialIntelligenceToolkit/aima3/4e6dc9b467d28015630274ff183b75b2ff4ab6eb/notebooks/images/decisiontree_fruit.jpg
--------------------------------------------------------------------------------
/notebooks/images/dirt05-icon.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ArtificialIntelligenceToolkit/aima3/4e6dc9b467d28015630274ff183b75b2ff4ab6eb/notebooks/images/dirt05-icon.jpg
--------------------------------------------------------------------------------
/notebooks/images/fig_5_2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ArtificialIntelligenceToolkit/aima3/4e6dc9b467d28015630274ff183b75b2ff4ab6eb/notebooks/images/fig_5_2.png
--------------------------------------------------------------------------------
/notebooks/images/knn_plot.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ArtificialIntelligenceToolkit/aima3/4e6dc9b467d28015630274ff183b75b2ff4ab6eb/notebooks/images/knn_plot.png
--------------------------------------------------------------------------------
/notebooks/images/makefile:
--------------------------------------------------------------------------------
1 | # makefile for images
2 |
3 | Sources = dirt.svg vacuum.svg
4 |
5 | Targets = $(Sources:.svg=.png)
6 |
7 | ImageScale = 50x50
8 |
9 | Temporary = tmp.jpg
10 |
11 | .PHONY: all
12 |
13 | all: $(Targets)
14 |
15 | .PHONY: clean
16 |
17 | clean:
18 | rm -f $(Targets) $(Temporary)
19 |
20 | %.png: %.svg
21 | convert -scale $(ImageScale) $< $@
22 |
23 | %-icon.jpg: %.svg
24 | convert -scale $(ImageScale) $< $@
25 |
26 | %-icon.jpg: %.jpg
27 | convert -scale $(ImageScale) $< $@
28 |
29 | wall-icon.jpg: b15woods003.jpg
30 | convert -crop 764x764+0+0 $< tmp.jpg
31 | convert -resize 50x50+0+0 tmp.jpg $@
32 |
33 | vacuum-icon.jpg: vacuum.svg
34 | convert -scale $(ImageScale) -transparent white $< $@
35 |
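36 | # Note: the rules above rely on ImageMagick's `convert`; for example,
37 | # `make vacuum-icon.jpg` scales vacuum.svg to 50x50 with white made transparent.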
--------------------------------------------------------------------------------
/notebooks/images/mdp-a.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ArtificialIntelligenceToolkit/aima3/4e6dc9b467d28015630274ff183b75b2ff4ab6eb/notebooks/images/mdp-a.png
--------------------------------------------------------------------------------
/notebooks/images/mdp.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ArtificialIntelligenceToolkit/aima3/4e6dc9b467d28015630274ff183b75b2ff4ab6eb/notebooks/images/mdp.png
--------------------------------------------------------------------------------
/notebooks/images/neural_net.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ArtificialIntelligenceToolkit/aima3/4e6dc9b467d28015630274ff183b75b2ff4ab6eb/notebooks/images/neural_net.png
--------------------------------------------------------------------------------
/notebooks/images/parse_tree.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ArtificialIntelligenceToolkit/aima3/4e6dc9b467d28015630274ff183b75b2ff4ab6eb/notebooks/images/parse_tree.png
--------------------------------------------------------------------------------
/notebooks/images/perceptron.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ArtificialIntelligenceToolkit/aima3/4e6dc9b467d28015630274ff183b75b2ff4ab6eb/notebooks/images/perceptron.png
--------------------------------------------------------------------------------
/notebooks/images/pluralityLearner_plot.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ArtificialIntelligenceToolkit/aima3/4e6dc9b467d28015630274ff183b75b2ff4ab6eb/notebooks/images/pluralityLearner_plot.png
--------------------------------------------------------------------------------
/notebooks/images/point_crossover.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ArtificialIntelligenceToolkit/aima3/4e6dc9b467d28015630274ff183b75b2ff4ab6eb/notebooks/images/point_crossover.png
--------------------------------------------------------------------------------
/notebooks/images/restaurant.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ArtificialIntelligenceToolkit/aima3/4e6dc9b467d28015630274ff183b75b2ff4ab6eb/notebooks/images/restaurant.png
--------------------------------------------------------------------------------
/notebooks/images/romania_map.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ArtificialIntelligenceToolkit/aima3/4e6dc9b467d28015630274ff183b75b2ff4ab6eb/notebooks/images/romania_map.png
--------------------------------------------------------------------------------
/notebooks/images/sprinklernet.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ArtificialIntelligenceToolkit/aima3/4e6dc9b467d28015630274ff183b75b2ff4ab6eb/notebooks/images/sprinklernet.jpg
--------------------------------------------------------------------------------
/notebooks/images/uniform_crossover.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ArtificialIntelligenceToolkit/aima3/4e6dc9b467d28015630274ff183b75b2ff4ab6eb/notebooks/images/uniform_crossover.png
--------------------------------------------------------------------------------
/notebooks/images/vacuum-icon.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ArtificialIntelligenceToolkit/aima3/4e6dc9b467d28015630274ff183b75b2ff4ab6eb/notebooks/images/vacuum-icon.jpg
--------------------------------------------------------------------------------
/notebooks/images/vacuum.svg:
--------------------------------------------------------------------------------
1 |
2 |
3 |
151 |
--------------------------------------------------------------------------------
/notebooks/images/wall-icon.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ArtificialIntelligenceToolkit/aima3/4e6dc9b467d28015630274ff183b75b2ff4ab6eb/notebooks/images/wall-icon.jpg
--------------------------------------------------------------------------------
/notebooks/index.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "# AIMA Python Binder Index\n",
8 | "\n",
9 | "Welcome to the AIMA Python Code Repository. You should be seeing this index notebook if you clicked on the **Launch Binder** button on the [repository](https://github.com/aimacode/aima-python). If you are viewing this notebook directly on GitHub, we suggest that you use the **Launch Binder** button instead. Binder allows you to experiment with all the code in the browser itself, without needing to install anything on your local machine. Below is an index that should assist you in navigating the notebooks available. \n",
10 | "\n",
11 | "If you are completely new to AIMA Python or Jupyter Notebooks, we suggest that you start with the Introduction Notebook.\n",
12 | "\n",
13 | "# List of Notebooks\n",
14 | "\n",
15 | "1. [**Introduction**](./intro.ipynb)\n",
16 | "\n",
17 | "2. [**Agents**](./agents.ipynb)\n",
18 | "\n",
19 | "3. [**Search**](./search.ipynb)\n",
20 | "\n",
21 | "4. [**Search - 4th edition**](./search-4e.ipynb)\n",
22 | "\n",
23 | "5. [**Games**](./games.ipynb)\n",
24 | "\n",
25 | "6. [**Constraint Satisfaction Problems**](./csp.ipynb)\n",
26 | "\n",
27 | "7. [**Logic**](./logic.ipynb)\n",
28 | "\n",
29 | "8. [**Planning**](./planning.ipynb)\n",
30 | "\n",
31 | "9. [**Probability**](./probability.ipynb)\n",
32 | "\n",
33 | "10. [**Markov Decision Processes**](./mdp.ipynb)\n",
34 | "\n",
35 | "11. [**Learning**](./learning.ipynb)\n",
36 | "\n",
37 | "12. [**Reinforcement Learning**](./rl.ipynb)\n",
38 | "\n",
39 | "13. [**Statistical Language Processing Tools**](./text.ipynb)\n",
40 | "\n",
41 | "14. [**Natural Language Processing**](./nlp.ipynb)\n",
42 | "\n",
43 | "Besides the notebooks, it is also possible to modify the Python/JS code directly. To view or modify the complete set of files, [click here](.) to browse the directory structure."
44 | ]
45 | }
46 | ],
47 | "metadata": {
48 | "kernelspec": {
49 | "display_name": "Python 3",
50 | "language": "python",
51 | "name": "python3"
52 | },
53 | "language_info": {
54 | "codemirror_mode": {
55 | "name": "ipython",
56 | "version": 3
57 | },
58 | "file_extension": ".py",
59 | "mimetype": "text/x-python",
60 | "name": "python",
61 | "nbconvert_exporter": "python",
62 | "pygments_lexer": "ipython3",
63 | "version": "3.5.1"
64 | }
65 | },
66 | "nbformat": 4,
67 | "nbformat_minor": 0
68 | }
69 |
--------------------------------------------------------------------------------
/notebooks/intro.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "# An Introduction To `aima-python` \n",
8 | " \n",
9 | "The [aima-python](https://github.com/aimacode/aima-python) repository implements, in Python code, the algorithms in the textbook *[Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu)*. A typical module in the repository has the code for a single chapter in the book, but some modules combine several chapters. See [the index](https://github.com/aimacode/aima-python#index-of-code) if you can't find the algorithm you want. The code in this repository attempts to mirror the pseudocode in the textbook as closely as possible and to stress readability foremost; if you are looking for high-performance code with advanced features, there are other repositories for you. For each module, there are three/four files, for example:\n",
10 | "\n",
11 | "- [**`nlp.py`**](https://github.com/aimacode/aima-python/blob/master/nlp.py): Source code with data types and algorithms for natural language processing; functions have docstrings explaining their use.\n",
12 | "- [**`nlp.ipynb`**](https://github.com/aimacode/aima-python/blob/master/nlp.ipynb): A notebook like this one; gives more detailed examples and explanations of use.\n",
13 | "- [**`nlp_apps.ipynb`**](https://github.com/aimacode/aima-python/blob/master/nlp_apps.ipynb): A Jupyter notebook that gives example applications of the code.\n",
14 | "- [**`tests/test_nlp.py`**](https://github.com/aimacode/aima-python/blob/master/tests/test_nlp.py): Test cases, used to verify the code is correct, and also useful to see examples of use.\n",
15 | "\n",
16 | "There is also an [aima-java](https://github.com/aimacode/aima-java) repository, if you prefer Java.\n",
17 | " \n",
18 | "## What version of Python?\n",
19 | " \n",
20 | "The code is tested in Python [3.4](https://www.python.org/download/releases/3.4.3/) and [3.5](https://www.python.org/downloads/release/python-351/). If you try a different version of Python 3 and find a problem, please report it as an [Issue](https://github.com/aimacode/aima-python/issues). There is an incomplete [legacy branch](https://github.com/aimacode/aima-python/tree/aima3python2) for those who must run in Python 2. \n",
21 | " \n",
22 | "We recommend the [Anaconda](https://www.continuum.io/downloads) distribution of Python 3.5. It comes with additional tools like the powerful IPython interpreter, the Jupyter Notebook and many helpful packages for scientific computing. After installing Anaconda, you will be good to go to run all the code and all the IPython notebooks. \n",
23 | "\n",
24 | "## IPython notebooks \n",
25 | " \n",
26 | "The IPython notebooks in this repository explain how to use the modules, and give examples of usage. \n",
27 | "You can use them in three ways: \n",
28 | "\n",
29 | "1. View static HTML pages. (Just browse to the [repository](https://github.com/aimacode/aima-python) and click on a `.ipynb` file link.)\n",
30 | "2. Run, modify, and re-run code, live. (Download the repository (by [zip file](https://github.com/aimacode/aima-python/archive/master.zip) or by `git` commands), start a Jupyter notebook server with the shell command \"`jupyter notebook`\" (issued from the directory where the files are), and click on the notebook you want to interact with.)\n",
31 | "3. Binder - Click on the binder badge on the [repository](https://github.com/aimacode/aima-python) main page to open the notebooks in an executable environment, online. This method does not require any extra installation. The code can be executed and modified from the browser itself. Note that this is an unstable option; there is a chance the notebooks will never load.\n",
32 | "\n",
33 | " \n",
34 | "You can [read about notebooks](https://jupyter-notebook-beginner-guide.readthedocs.org/en/latest/) and then [get started](https://nbviewer.jupyter.org/github/jupyter/notebook/blob/master/docs/source/examples/Notebook/Running%20Code.ipynb)."
35 | ]
36 | },
37 | {
38 | "cell_type": "markdown",
39 | "metadata": {
40 | "collapsed": true
41 | },
42 | "source": [
43 | "# Helpful Tips\n",
44 | "\n",
45 | "Most of these notebooks start by importing all the symbols in a module:"
46 | ]
47 | },
48 | {
49 | "cell_type": "code",
50 | "execution_count": 1,
51 | "metadata": {
52 | "collapsed": true
53 | },
54 | "outputs": [],
55 | "source": [
56 | "from logic import *"
57 | ]
58 | },
59 | {
60 | "cell_type": "markdown",
61 | "metadata": {},
62 | "source": [
63 | "From there, the notebook alternates explanations with examples of use. You can run the examples as they are, and you can modify the code cells (or add new cells) and run your own examples. If you have some really good examples to add, you can make a github pull request.\n",
64 | "\n",
65 | "If you want to see the source code of a function, you can open a browser or editor and see it in another window, or from within the notebook you can use the IPython magic function `%psource` (for \"print source\") or the function `psource` from `notebook.py`. Also, if the algorithm has pseudocode, you can read it by calling the `pseudocode` function with input the name of the algorithm."
66 | ]
67 | },
68 | {
69 | "cell_type": "code",
70 | "execution_count": 2,
71 | "metadata": {
72 | "collapsed": true
73 | },
74 | "outputs": [],
75 | "source": [
76 | "%psource WalkSAT"
77 | ]
78 | },
79 | {
80 | "cell_type": "code",
81 | "execution_count": null,
82 | "metadata": {},
83 | "outputs": [],
84 | "source": [
85 | "from notebook import psource, pseudocode\n",
86 | "\n",
87 | "psource(WalkSAT)\n",
88 | "pseudocode(\"WalkSAT\")"
89 | ]
90 | },
91 | {
92 | "cell_type": "markdown",
93 | "metadata": {},
94 | "source": [
95 | "Or see an abbreviated description of an object with a trailing question mark:"
96 | ]
97 | },
98 | {
99 | "cell_type": "code",
100 | "execution_count": 3,
101 | "metadata": {
102 | "collapsed": true
103 | },
104 | "outputs": [],
105 | "source": [
106 | "WalkSAT?"
107 | ]
108 | },
109 | {
110 | "cell_type": "markdown",
111 | "metadata": {},
112 | "source": [
113 | "# Authors\n",
114 | "\n",
115 | "This notebook is written by [Chirag Vertak](https://github.com/chiragvartak) and [Peter Norvig](https://github.com/norvig)."
116 | ]
117 | }
118 | ],
119 | "metadata": {
120 | "kernelspec": {
121 | "display_name": "Python 3",
122 | "language": "python",
123 | "name": "python3"
124 | },
125 | "language_info": {
126 | "codemirror_mode": {
127 | "name": "ipython",
128 | "version": 3
129 | },
130 | "file_extension": ".py",
131 | "mimetype": "text/x-python",
132 | "name": "python",
133 | "nbconvert_exporter": "python",
134 | "pygments_lexer": "ipython3",
135 | "version": "3.5.3"
136 | }
137 | },
138 | "nbformat": 4,
139 | "nbformat_minor": 1
140 | }
141 |
--------------------------------------------------------------------------------
/notebooks/neural_nets.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "# NEURAL NETWORKS\n",
8 | "\n",
9 | "This notebook covers the neural network algorithms from chapter 18 of the book *Artificial Intelligence: A Modern Approach*, by Stuart Russel and Peter Norvig. The code in the notebook can be found in [learning.py](https://github.com/aimacode/aima-python/blob/master/learning.py).\n",
10 | "\n",
11 | "Execute the below cell to get started:"
12 | ]
13 | },
14 | {
15 | "cell_type": "code",
16 | "execution_count": 1,
17 | "metadata": {
18 | "collapsed": true
19 | },
20 | "outputs": [],
21 | "source": [
22 | "from learning import *\n",
23 | "\n",
24 | "from notebook import psource, pseudocode"
25 | ]
26 | },
27 | {
28 | "cell_type": "markdown",
29 | "metadata": {},
30 | "source": [
31 | "## NEURAL NETWORK ALGORITHM\n",
32 | "\n",
33 | "### Overview\n",
34 | "\n",
35 | "Although the Perceptron may seem like a good way to make classifications, it is a linear classifier (which, roughly, means it can only draw straight lines to divide spaces) and therefore it can be stumped by more complex problems. We can extend Perceptron to solve this issue, by employing multiple layers of its functionality. The construct we are left with is called a Neural Network, or a Multi-Layer Perceptron, and it is a non-linear classifier. It achieves that by combining the results of linear functions on each layer of the network.\n",
36 | "\n",
37 | "Similar to the Perceptron, this network also has an input and output layer. However it can also have a number of hidden layers. These hidden layers are responsible for the non-linearity of the network. The layers are comprised of nodes. Each node in a layer (excluding the input one), holds some values, called *weights*, and takes as input the output values of the previous layer. The node then calculates the dot product of its inputs and its weights and then activates it with an *activation function* (sometimes a sigmoid). Its output is fed to the nodes of the next layer. Note that sometimes the output layer does not use an activation function, or uses a different one from the rest of the network. The process of passing the outputs down the layer is called *feed-forward*.\n",
38 | "\n",
39 | "After the input values are fed-forward into the network, the resulting output can be used for classification. The problem at hand now is how to train the network (ie. adjust the weights in the nodes). To accomplish that we utilize the *Backpropagation* algorithm. In short, it does the opposite of what we were doing up to now. Instead of feeding the input forward, it will feed the error backwards. So, after we make a classification, we check whether it is correct or not, and how far off we were. We then take this error and propagate it backwards in the network, adjusting the weights of the nodes accordingly. We will run the algorithm on the given input/dataset for a fixed amount of time, or until we are satisfied with the results. The number of times we will iterate over the dataset is called *epochs*. In a later section we take a detailed look at how this algorithm works.\n",
40 | "\n",
41 | "NOTE: Sometimes we add to the input of each layer another node, called *bias*. This is a constant value that will be fed to the next layer, usually set to 1. The bias generally helps us \"shift\" the computed function to the left or right."
42 | ]
43 | },
44 | {
45 | "cell_type": "markdown",
46 | "metadata": {},
47 | "source": [
    48 |     "![neural network](images/neural_net.png)"
49 | ]
50 | },
51 | {
52 | "cell_type": "markdown",
53 | "metadata": {},
54 | "source": [
55 | "### Implementation\n",
56 | "\n",
57 | "The `NeuralNetLearner` function takes as input a dataset to train upon, the learning rate (in (0, 1]), the number of epochs and finally the size of the hidden layers. This last argument is a list, with each element corresponding to one hidden layer.\n",
58 | "\n",
59 | "After that we will create our neural network in the `network` function. This function will make the necessary connections between the input layer, hidden layer and output layer. With the network ready, we will use the `BackPropagationLearner` to train the weights of our network for the examples provided in the dataset.\n",
60 | "\n",
61 | "The NeuralNetLearner returns the `predict` function which, in short, can receive an example and feed-forward it into our network to generate a prediction.\n",
62 | "\n",
63 | "In more detail, the example values are first passed to the input layer and then they are passed through the rest of the layers. Each node calculates the dot product of its inputs and its weights, activates it and pushes it to the next layer. The final prediction is the node with the maximum value from the output layer."
64 | ]
65 | },
66 | {
67 | "cell_type": "code",
68 | "execution_count": null,
69 | "metadata": {},
70 | "outputs": [],
71 | "source": [
72 | "psource(NeuralNetLearner)"
73 | ]
74 | },
75 | {
76 | "cell_type": "markdown",
77 | "metadata": {},
78 | "source": [
79 | "## BACKPROPAGATION\n",
80 | "\n",
81 | "### Overview\n",
82 | "\n",
83 | "In both the Perceptron and the Neural Network, we are using the Backpropagation algorithm to train our weights. Basically it achieves that by propagating the errors from our last layer into our first layer, this is why it is called Backpropagation. In order to use Backpropagation, we need a cost function. This function is responsible for indicating how good our neural network is for a given example. One common cost function is the *Mean Squared Error* (MSE). This cost function has the following format:\n",
84 | "\n",
85 | "$$MSE=\\frac{1}{2} \\sum_{i=1}^{n}(y - \\hat{y})^{2}$$\n",
86 | "\n",
87 | "Where `n` is the number of training examples, $\\hat{y}$ is our prediction and $y$ is the correct prediction for the example.\n",
88 | "\n",
89 | "The algorithm combines the concept of partial derivatives and the chain rule to generate the gradient for each weight in the network based on the cost function.\n",
90 | "\n",
91 | "For example, if we are using a Neural Network with three layers, the sigmoid function as our activation function and the MSE cost function, we want to find the gradient for the a given weight $w_{j}$, we can compute it like this:\n",
92 | "\n",
93 | "$$\\frac{\\partial MSE(\\hat{y}, y)}{\\partial w_{j}} = \\frac{\\partial MSE(\\hat{y}, y)}{\\partial \\hat{y}}\\times\\frac{\\partial\\hat{y}(in_{j})}{\\partial in_{j}}\\times\\frac{\\partial in_{j}}{\\partial w_{j}}$$\n",
94 | "\n",
95 | "Solving this equation, we have:\n",
96 | "\n",
97 | "$$\\frac{\\partial MSE(\\hat{y}, y)}{\\partial w_{j}} = (\\hat{y} - y)\\times{\\hat{y}}'(in_{j})\\times a_{j}$$\n",
98 | "\n",
99 | "Remember that $\\hat{y}$ is the activation function applied to a neuron in our hidden layer, therefore $$\\hat{y} = sigmoid(\\sum_{i=1}^{num\\_neurons}w_{i}\\times a_{i})$$\n",
100 | "\n",
101 | "Also $a$ is the input generated by feeding the input layer variables into the hidden layer.\n",
102 | "\n",
103 | "We can use the same technique for the weights in the input layer as well. After we have the gradients for both weights, we use gradient descent to update the weights of the network."
104 | ]
105 | },
106 | {
107 | "cell_type": "markdown",
108 | "metadata": {},
109 | "source": [
110 | "### Pseudocode"
111 | ]
112 | },
113 | {
114 | "cell_type": "code",
115 | "execution_count": 3,
116 | "metadata": {},
117 | "outputs": [
118 | {
119 | "data": {
120 | "text/markdown": [
121 | "### AIMA3e\n",
122 | "__function__ BACK-PROP-LEARNING(_examples_, _network_) __returns__ a neural network \n",
123 | " __inputs__ _examples_, a set of examples, each with input vector __x__ and output vector __y__ \n",
124 | " _network_, a multilayer network with _L_ layers, weights _wi,j_, activation function _g_ \n",
125 | " __local variables__: Δ, a vector of errors, indexed by network node \n",
126 | "\n",
127 | " __repeat__ \n",
128 | " __for each__ weight _wi,j_ in _network_ __do__ \n",
129 | " _wi,j_ ← a small random number \n",
130 | " __for each__ example (__x__, __y__) __in__ _examples_ __do__ \n",
131 | " /\\* _Propagate the inputs forward to compute the outputs_ \\*/ \n",
132 | " __for each__ node _i_ in the input layer __do__ \n",
133 | " _ai_ ← _xi_ \n",
134 | " __for__ _l_ = 2 __to__ _L_ __do__ \n",
135 | " __for each__ node _j_ in layer _l_ __do__ \n",
136 | " _inj_ ← Σ_i_ _wi,j_ _ai_ \n",
137 | " _aj_ ← _g_(_inj_) \n",
138 | " /\\* _Propagate deltas backward from output layer to input layer_ \\*/ \n",
139 | " __for each__ node _j_ in the output layer __do__ \n",
140 | " Δ\\[_j_\\] ← _g_′(_inj_) × (_yi_ − _aj_) \n",
141 | " __for__ _l_ = _L_ − 1 __to__ 1 __do__ \n",
142 | " __for each__ node _i_ in layer _l_ __do__ \n",
143 | " Δ\\[_i_\\] ← _g_′(_ini_) Σ_j_ _wi,j_ Δ\\[_j_\\] \n",
144 | " /\\* _Update every weight in network using deltas_ \\*/ \n",
145 | " __for each__ weight _wi,j_ in _network_ __do__ \n",
146 | " _wi,j_ ← _wi,j_ + _α_ × _ai_ × Δ\\[_j_\\] \n",
147 | " __until__ some stopping criterion is satisfied \n",
148 | " __return__ _network_ \n",
149 | "\n",
150 | "---\n",
151 | "__Figure ??__ The back\\-propagation algorithm for learning in multilayer networks."
152 | ],
153 | "text/plain": [
    154 |      "<IPython.core.display.Markdown object>"
155 | ]
156 | },
157 | "execution_count": 3,
158 | "metadata": {},
159 | "output_type": "execute_result"
160 | }
161 | ],
162 | "source": [
163 | "pseudocode('Back-Prop-Learning')"
164 | ]
165 | },
166 | {
167 | "cell_type": "markdown",
168 | "metadata": {},
169 | "source": [
170 | "### Implementation\n",
171 | "\n",
172 | "First, we feed-forward the examples in our neural network. After that, we calculate the gradient for each layer weights. Once that is complete, we update all the weights using gradient descent. After running these for a given number of epochs, the function returns the trained Neural Network."
173 | ]
174 | },
175 | {
176 | "cell_type": "code",
177 | "execution_count": null,
178 | "metadata": {},
179 | "outputs": [],
180 | "source": [
181 | "psource(BackPropagationLearner)"
182 | ]
183 | },
184 | {
185 | "cell_type": "code",
186 | "execution_count": 4,
187 | "metadata": {},
188 | "outputs": [
189 | {
190 | "name": "stdout",
191 | "output_type": "stream",
192 | "text": [
193 | "0\n"
194 | ]
195 | }
196 | ],
197 | "source": [
198 | "iris = DataSet(name=\"iris\")\n",
199 | "iris.classes_to_numbers()\n",
200 | "\n",
201 | "nNL = NeuralNetLearner(iris)\n",
202 | "print(nNL([5, 3, 1, 0.1]))"
203 | ]
204 | },
205 | {
206 | "cell_type": "markdown",
207 | "metadata": {},
208 | "source": [
209 | "The output should be 0, which means the item should get classified in the first class, \"setosa\". Note that since the algorithm is non-deterministic (because of the random initial weights) the classification might be wrong. Usually though it should be correct.\n",
210 | "\n",
211 | "To increase accuracy, you can (most of the time) add more layers and nodes. Unfortunately the more layers and nodes you have, the greater the computation cost."
212 | ]
213 | }
214 | ],
215 | "metadata": {
216 | "kernelspec": {
217 | "display_name": "Python 3",
218 | "language": "python",
219 | "name": "python3"
220 | },
221 | "language_info": {
222 | "codemirror_mode": {
223 | "name": "ipython",
224 | "version": 3
225 | },
226 | "file_extension": ".py",
227 | "mimetype": "text/x-python",
228 | "name": "python",
229 | "nbconvert_exporter": "python",
230 | "pygments_lexer": "ipython3",
231 | "version": "3.5.3"
232 | }
233 | },
234 | "nbformat": 4,
235 | "nbformat_minor": 2
236 | }
237 |
--------------------------------------------------------------------------------
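A minimal, self-contained sketch of the gradient step derived in `neural_nets.ipynb` above, assuming a sigmoid activation and the MSE cost on a single training example; `backprop_step` and its parameters are illustrative names for this sketch, not part of `learning.py`:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def backprop_step(weights, activations, y, alpha=0.1):
    """One gradient-descent update of a single node's incoming weights,
    using the chain rule from the notebook:
    dMSE/dw_j = (y_hat - y) * sigmoid'(in_j) * a_j."""
    in_j = sum(w * a for w, a in zip(weights, activations))  # feed-forward
    y_hat = sigmoid(in_j)
    d_act = y_hat * (1 - y_hat)  # sigmoid'(in_j), expressed via its output
    return [w - alpha * (y_hat - y) * d_act * a
            for w, a in zip(weights, activations)]

# Toy usage: two inputs with target 1; repeated steps push y_hat toward 1.
w = [0.5, -0.5]
for _ in range(100):
    w = backprop_step(w, [1.0, 0.8], y=1)
```

--------------------------------------------------------------------------------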
/notebooks/nlp_apps.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "# NATURAL LANGUAGE PROCESSING APPLICATIONS\n",
8 | "\n",
9 | "In this notebook we will take a look at some indicative applications of natural language processing. We will cover content from [`nlp.py`](https://github.com/aimacode/aima-python/blob/master/nlp.py) and [`text.py`](https://github.com/aimacode/aima-python/blob/master/text.py), for chapters 22 and 23 of Stuart Russel's and Peter Norvig's book [*Artificial Intelligence: A Modern Approach*](http://aima.cs.berkeley.edu/)."
10 | ]
11 | },
12 | {
13 | "cell_type": "markdown",
14 | "metadata": {},
15 | "source": [
16 | "## CONTENTS\n",
17 | "\n",
18 | "* Language Recognition"
19 | ]
20 | },
21 | {
22 | "cell_type": "markdown",
23 | "metadata": {},
24 | "source": [
25 | "## LANGUAGE RECOGNITION\n",
26 | "\n",
27 | "A very useful application of text models (you can read more on them on the [`text notebook`](https://github.com/aimacode/aima-python/blob/master/text.ipynb)) is categorizing text into a language. In fact, with enough data we can categorize correctly mostly any text. That is because different languages have certain characteristics that set them apart. For example, in German it is very usual for 'c' to be followed by 'h' while in English we see 't' followed by 'h' a lot.\n",
28 | "\n",
29 | "Here we will build an application to categorize sentences in either English or German.\n",
30 | "\n",
31 | "First we need to build our dataset. We will take as input text in English and in German and we will extract n-gram character models (in this case, *bigrams* for n=2). For English, we will use *Flatland* by Edwin Abbott and for German *Faust* by Goethe.\n",
32 | "\n",
33 | "Let's build our text models for each language, which will hold the probability of each bigram occuring in the text."
34 | ]
35 | },
36 | {
37 | "cell_type": "code",
38 | "execution_count": 1,
39 | "metadata": {
40 | "collapsed": true
41 | },
42 | "outputs": [],
43 | "source": [
44 | "from utils import open_data\n",
45 | "from text import *\n",
46 | "\n",
47 | "flatland = open_data(\"EN-text/flatland.txt\").read()\n",
48 | "wordseq = words(flatland)\n",
49 | "\n",
50 | "P_flatland = NgramCharModel(2, wordseq)\n",
51 | "\n",
52 | "faust = open_data(\"GE-text/faust.txt\").read()\n",
53 | "wordseq = words(faust)\n",
54 | "\n",
55 | "P_faust = NgramCharModel(2, wordseq)"
56 | ]
57 | },
58 | {
59 | "cell_type": "markdown",
60 | "metadata": {},
61 | "source": [
62 | "We can use this information to build a *Naive Bayes Classifier* that will be used to categorize sentences (you can read more on Naive Bayes on the [`learning notebook`](https://github.com/aimacode/aima-python/blob/master/learning.ipynb)). The classifier will take as input the probability distribution of bigrams and given a list of bigrams (extracted from the sentence to be classified), it will calculate the probability of the example/sentence coming from each language and pick the maximum.\n",
63 | "\n",
64 | "Let's build our classifier, with the assumption that English is as probable as German (the input is a dictionary with values the text models and keys the tuple `language, probability`):"
65 | ]
66 | },
67 | {
68 | "cell_type": "code",
69 | "execution_count": 2,
70 | "metadata": {
71 | "collapsed": true
72 | },
73 | "outputs": [],
74 | "source": [
75 | "from learning import NaiveBayesLearner\n",
76 | "\n",
77 | "dist = {('English', 1): P_flatland, ('German', 1): P_faust}\n",
78 | "\n",
79 | "nBS = NaiveBayesLearner(dist, simple=True)"
80 | ]
81 | },
82 | {
83 | "cell_type": "markdown",
84 | "metadata": {},
85 | "source": [
86 | "Now we need to write a function that takes as input a sentence, breaks it into a list of bigrams and classifies it with the naive bayes classifier from above.\n",
87 | "\n",
88 | "Once we get the text model for the sentence, we need to unravel it. The text models show the probability of each bigram, but the classifier can't handle that extra data. It requires a simple *list* of bigrams. So, if the text model shows that a bigram appears three times, we need to add it three times in the list. Since the text model stores the n-gram information in a dictionary (with the key being the n-gram and the value the number of times the n-gram appears) we need to iterate through the items of the dictionary and manually add them to the list of n-grams."
89 | ]
90 | },
91 | {
92 | "cell_type": "code",
93 | "execution_count": 3,
94 | "metadata": {
95 | "collapsed": true
96 | },
97 | "outputs": [],
98 | "source": [
99 | "def recognize(sentence, nBS, n):\n",
100 | " sentence = sentence.lower()\n",
101 | " wordseq = words(sentence)\n",
102 | " \n",
103 | " P_sentence = NgramCharModel(n, wordseq)\n",
104 | " \n",
105 | " ngrams = []\n",
106 | " for b, p in P_sentence.dictionary.items():\n",
107 | " ngrams += [b]*p\n",
108 | " \n",
109 | " return nBS(ngrams)"
110 | ]
111 | },
112 | {
113 | "cell_type": "markdown",
114 | "metadata": {},
115 | "source": [
116 | "Now we can start categorizing sentences."
117 | ]
118 | },
119 | {
120 | "cell_type": "code",
121 | "execution_count": 4,
122 | "metadata": {},
123 | "outputs": [
124 | {
125 | "data": {
126 | "text/plain": [
127 | "'German'"
128 | ]
129 | },
130 | "execution_count": 4,
131 | "metadata": {},
132 | "output_type": "execute_result"
133 | }
134 | ],
135 | "source": [
136 | "recognize(\"Ich bin ein platz\", nBS, 2)"
137 | ]
138 | },
139 | {
140 | "cell_type": "code",
141 | "execution_count": 5,
142 | "metadata": {},
143 | "outputs": [
144 | {
145 | "data": {
146 | "text/plain": [
147 | "'English'"
148 | ]
149 | },
150 | "execution_count": 5,
151 | "metadata": {},
152 | "output_type": "execute_result"
153 | }
154 | ],
155 | "source": [
156 | "recognize(\"Turtles fly high\", nBS, 2)"
157 | ]
158 | },
159 | {
160 | "cell_type": "code",
161 | "execution_count": 6,
162 | "metadata": {},
163 | "outputs": [
164 | {
165 | "data": {
166 | "text/plain": [
167 | "'German'"
168 | ]
169 | },
170 | "execution_count": 6,
171 | "metadata": {},
172 | "output_type": "execute_result"
173 | }
174 | ],
175 | "source": [
176 | "recognize(\"Der pelikan ist hier\", nBS, 2)"
177 | ]
178 | },
179 | {
180 | "cell_type": "code",
181 | "execution_count": 7,
182 | "metadata": {},
183 | "outputs": [
184 | {
185 | "data": {
186 | "text/plain": [
187 | "'English'"
188 | ]
189 | },
190 | "execution_count": 7,
191 | "metadata": {},
192 | "output_type": "execute_result"
193 | }
194 | ],
195 | "source": [
196 | "recognize(\"And thus the wizard spoke\", nBS, 2)"
197 | ]
198 | },
199 | {
200 | "cell_type": "markdown",
201 | "metadata": {},
202 | "source": [
203 | "You can add more languages if you want, the algorithm works for as many as you like! Also, you can play around with *n*. Here we used 2, but other numbers work too (even though 2 suffices). The algorithm is not perfect, but it has high accuracy even for small samples like the ones we used. That is because English and German are very different languages. The closer together languages are (for example, Norwegian and Swedish share a lot of common ground) the lower the accuracy of the classifier."
204 | ]
205 | }
206 | ],
207 | "metadata": {
208 | "kernelspec": {
209 | "display_name": "Python 3",
210 | "language": "python",
211 | "name": "python3"
212 | },
213 | "language_info": {
214 | "codemirror_mode": {
215 | "name": "ipython",
216 | "version": 3
217 | },
218 | "file_extension": ".py",
219 | "mimetype": "text/x-python",
220 | "name": "python",
221 | "nbconvert_exporter": "python",
222 | "pygments_lexer": "ipython3",
223 | "version": "3.5.3"
224 | }
225 | },
226 | "nbformat": 4,
227 | "nbformat_minor": 2
228 | }
229 |
--------------------------------------------------------------------------------
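For readers who want the decision rule of `nlp_apps.ipynb` spelled out: a toy stand-in for the simple Naive Bayes scoring described above, assuming log-space scores and a small floor probability for unseen bigrams. `classify`, `models`, `priors` and the probability tables below are hypothetical illustrations, not the repository's `NaiveBayesLearner`:

```python
import math

def classify(ngrams, models, priors, eps=1e-9):
    """Pick the language maximizing log P(lang) + sum(log P(bigram | lang))."""
    def score(lang):
        model = models[lang]
        # Unseen bigrams get the floor probability eps instead of zero.
        return math.log(priors[lang]) + sum(math.log(model.get(g, eps))
                                            for g in ngrams)
    return max(models, key=score)

# Toy bigram-probability tables, for illustration only.
models = {'English': {('t', 'h'): 0.04, ('h', 'e'): 0.03},
          'German':  {('c', 'h'): 0.05, ('e', 'n'): 0.04}}
priors = {'English': 0.5, 'German': 0.5}
print(classify([('t', 'h'), ('h', 'e')], models, priors))  # -> 'English'
```

--------------------------------------------------------------------------------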
/notebooks/planning.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {
6 | "collapsed": true
7 | },
8 | "source": [
9 | "# Planning: planning.py; chapters 10-11"
10 | ]
11 | },
12 | {
13 | "cell_type": "markdown",
14 | "metadata": {},
15 | "source": [
16 | "This notebook describes the [planning.py](https://github.com/aimacode/aima-python/blob/master/planning.py) module, which covers Chapters 10 (Classical Planning) and 11 (Planning and Acting in the Real World) of *[Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu)*. See the [intro notebook](https://github.com/aimacode/aima-python/blob/master/intro.ipynb) for instructions.\n",
17 | "\n",
18 | "We'll start by looking at `PDDL` and `Action` data types for defining problems and actions. Then, we will see how to use them by trying to plan a trip from *Sibiu* to *Bucharest* across the familiar map of Romania, from [search.ipynb](https://github.com/aimacode/aima-python/blob/master/search.ipynb). Finally, we will look at the implementation of the GraphPlan algorithm.\n",
19 | "\n",
20 | "The first step is to load the code:"
21 | ]
22 | },
23 | {
24 | "cell_type": "code",
25 | "execution_count": 1,
26 | "metadata": {
27 | "collapsed": false
28 | },
29 | "outputs": [],
30 | "source": [
31 | "from planning import *"
32 | ]
33 | },
34 | {
35 | "cell_type": "markdown",
36 | "metadata": {},
37 | "source": [
38 | "To be able to model a planning problem properly, it is essential to be able to represent an Action. Each action we model requires at least three things:\n",
39 | "* preconditions that the action must meet\n",
40 | "* the effects of executing the action\n",
41 | "* some expression that represents the action"
42 | ]
43 | },
44 | {
45 | "cell_type": "markdown",
46 | "metadata": {},
47 | "source": [
48 | "Planning actions have been modelled using the `Action` class. Let's look at the source to see how the internal details of an action are implemented in Python."
49 | ]
50 | },
51 | {
52 | "cell_type": "code",
53 | "execution_count": 2,
54 | "metadata": {
55 | "collapsed": false
56 | },
57 | "outputs": [],
58 | "source": [
59 | "%psource Action"
60 | ]
61 | },
62 | {
63 | "cell_type": "markdown",
64 | "metadata": {},
65 | "source": [
66 | "It is interesting to see the way preconditions and effects are represented here. Instead of just being a list of expressions each, they consist of two lists - `precond_pos` and `precond_neg`. This is to work around the fact that PDDL doesn't allow for negations. Thus, for each precondition, we maintain a seperate list of those preconditions that must hold true, and those whose negations must hold true. Similarly, instead of having a single list of expressions that are the result of executing an action, we have two. The first (`effect_add`) contains all the expressions that will evaluate to true if the action is executed, and the the second (`effect_neg`) contains all those expressions that would be false if the action is executed (ie. their negations would be true).\n",
67 | "\n",
68 | "The constructor parameters, however combine the two precondition lists into a single `precond` parameter, and the effect lists into a single `effect` parameter."
69 | ]
70 | },
71 | {
72 | "cell_type": "markdown",
73 | "metadata": {},
74 | "source": [
75 | "The `PDDL` class is used to represent planning problems in this module. The following attributes are essential to be able to define a problem:\n",
76 | "* a goal test\n",
77 | "* an initial state\n",
78 | "* a set of viable actions that can be executed in the search space of the problem\n",
79 | "\n",
80 | "View the source to see how the Python code tries to realise these."
81 | ]
82 | },
83 | {
84 | "cell_type": "code",
85 | "execution_count": 3,
86 | "metadata": {
87 | "collapsed": false
88 | },
89 | "outputs": [],
90 | "source": [
91 | "%psource PDDL"
92 | ]
93 | },
94 | {
95 | "cell_type": "markdown",
96 | "metadata": {},
97 | "source": [
98 | "The `initial_state` attribute is a list of `Expr` expressions that forms the initial knowledge base for the problem. Next, `actions` contains a list of `Action` objects that may be executed in the search space of the problem. Lastly, we pass a `goal_test` function as a parameter - this typically takes a knowledge base as a parameter, and returns whether or not the goal has been reached."
99 | ]
100 | },
101 | {
102 | "cell_type": "markdown",
103 | "metadata": {},
104 | "source": [
105 | "Now lets try to define a planning problem using these tools. Since we already know about the map of Romania, lets see if we can plan a trip across a simplified map of Romania.\n",
106 | "\n",
107 | "Here is our simplified map definition:"
108 | ]
109 | },
110 | {
111 | "cell_type": "code",
112 | "execution_count": 4,
113 | "metadata": {
114 | "collapsed": false
115 | },
116 | "outputs": [],
117 | "source": [
118 | "from utils import *\n",
119 | "# this imports the required expr so we can create our knowledge base\n",
120 | "\n",
121 | "knowledge_base = [\n",
122 | " expr(\"Connected(Bucharest,Pitesti)\"),\n",
123 | " expr(\"Connected(Pitesti,Rimnicu)\"),\n",
124 | " expr(\"Connected(Rimnicu,Sibiu)\"),\n",
125 | " expr(\"Connected(Sibiu,Fagaras)\"),\n",
126 | " expr(\"Connected(Fagaras,Bucharest)\"),\n",
127 | " expr(\"Connected(Pitesti,Craiova)\"),\n",
128 | " expr(\"Connected(Craiova,Rimnicu)\")\n",
129 | " ]"
130 | ]
131 | },
132 | {
133 | "cell_type": "markdown",
134 | "metadata": {},
135 | "source": [
136 | "Let us add some logic propositions to complete our knowledge about travelling around the map. These are the typical symmetry and transitivity properties of connections on a map. We can now be sure that our `knowledge_base` understands what it truly means for two locations to be connected in the sense usually meant by humans when we use the term.\n",
137 | "\n",
138 | "Let's also add our starting location - *Sibiu* to the map."
139 | ]
140 | },
141 | {
142 | "cell_type": "code",
143 | "execution_count": 5,
144 | "metadata": {
145 | "collapsed": true
146 | },
147 | "outputs": [],
148 | "source": [
149 | "knowledge_base.extend([\n",
150 | " expr(\"Connected(x,y) ==> Connected(y,x)\"),\n",
151 | " expr(\"Connected(x,y) & Connected(y,z) ==> Connected(x,z)\"),\n",
152 | " expr(\"At(Sibiu)\")\n",
153 | " ])"
154 | ]
155 | },
156 | {
157 | "cell_type": "markdown",
158 | "metadata": {},
159 | "source": [
160 | "We now have a complete knowledge base, which can be seen like this:"
161 | ]
162 | },
163 | {
164 | "cell_type": "code",
165 | "execution_count": 6,
166 | "metadata": {
167 | "collapsed": false
168 | },
169 | "outputs": [
170 | {
171 | "data": {
172 | "text/plain": [
173 | "[Connected(Bucharest, Pitesti),\n",
174 | " Connected(Pitesti, Rimnicu),\n",
175 | " Connected(Rimnicu, Sibiu),\n",
176 | " Connected(Sibiu, Fagaras),\n",
177 | " Connected(Fagaras, Bucharest),\n",
178 | " Connected(Pitesti, Craiova),\n",
179 | " Connected(Craiova, Rimnicu),\n",
180 | " (Connected(x, y) ==> Connected(y, x)),\n",
181 | " ((Connected(x, y) & Connected(y, z)) ==> Connected(x, z)),\n",
182 | " At(Sibiu)]"
183 | ]
184 | },
185 | "execution_count": 6,
186 | "metadata": {},
187 | "output_type": "execute_result"
188 | }
189 | ],
190 | "source": [
191 | "knowledge_base"
192 | ]
193 | },
194 | {
195 | "cell_type": "markdown",
196 | "metadata": {},
197 | "source": [
198 | "We now define possible actions to our problem. We know that we can drive between any connected places. But, as is evident from [this](https://en.wikipedia.org/wiki/List_of_airports_in_Romania) list of Romanian airports, we can also fly directly between Sibiu, Bucharest, and Craiova.\n",
199 | "\n",
200 | "We can define these flight actions like this:"
201 | ]
202 | },
203 | {
204 | "cell_type": "code",
205 | "execution_count": 7,
206 | "metadata": {
207 | "collapsed": false
208 | },
209 | "outputs": [],
210 | "source": [
211 | "#Sibiu to Bucharest\n",
212 | "precond_pos = [expr('At(Sibiu)')]\n",
213 | "precond_neg = []\n",
214 | "effect_add = [expr('At(Bucharest)')]\n",
215 | "effect_rem = [expr('At(Sibiu)')]\n",
216 | "fly_s_b = Action(expr('Fly(Sibiu, Bucharest)'), [precond_pos, precond_neg], [effect_add, effect_rem])\n",
217 | "\n",
218 | "#Bucharest to Sibiu\n",
219 | "precond_pos = [expr('At(Bucharest)')]\n",
220 | "precond_neg = []\n",
221 | "effect_add = [expr('At(Sibiu)')]\n",
222 | "effect_rem = [expr('At(Bucharest)')]\n",
223 | "fly_b_s = Action(expr('Fly(Bucharest, Sibiu)'), [precond_pos, precond_neg], [effect_add, effect_rem])\n",
224 | "\n",
225 | "#Sibiu to Craiova\n",
226 | "precond_pos = [expr('At(Sibiu)')]\n",
227 | "precond_neg = []\n",
228 | "effect_add = [expr('At(Craiova)')]\n",
229 | "effect_rem = [expr('At(Sibiu)')]\n",
230 | "fly_s_c = Action(expr('Fly(Sibiu, Craiova)'), [precond_pos, precond_neg], [effect_add, effect_rem])\n",
231 | "\n",
232 | "#Craiova to Sibiu\n",
233 | "precond_pos = [expr('At(Craiova)')]\n",
234 | "precond_neg = []\n",
235 | "effect_add = [expr('At(Sibiu)')]\n",
236 | "effect_rem = [expr('At(Craiova)')]\n",
237 | "fly_c_s = Action(expr('Fly(Craiova, Sibiu)'), [precond_pos, precond_neg], [effect_add, effect_rem])\n",
238 | "\n",
239 | "#Bucharest to Craiova\n",
240 | "precond_pos = [expr('At(Bucharest)')]\n",
241 | "precond_neg = []\n",
242 | "effect_add = [expr('At(Craiova)')]\n",
243 | "effect_rem = [expr('At(Bucharest)')]\n",
244 | "fly_b_c = Action(expr('Fly(Bucharest, Craiova)'), [precond_pos, precond_neg], [effect_add, effect_rem])\n",
245 | "\n",
246 | "#Craiova to Bucharest\n",
247 | "precond_pos = [expr('At(Craiova)')]\n",
248 | "precond_neg = []\n",
249 | "effect_add = [expr('At(Bucharest)')]\n",
250 | "effect_rem = [expr('At(Craiova)')]\n",
251 | "fly_c_b = Action(expr('Fly(Craiova, Bucharest)'), [precond_pos, precond_neg], [effect_add, effect_rem])"
252 | ]
253 | },
254 | {
255 | "cell_type": "markdown",
256 | "metadata": {},
257 | "source": [
258 | "And the drive actions like this."
259 | ]
260 | },
261 | {
262 | "cell_type": "code",
263 | "execution_count": 8,
264 | "metadata": {
265 | "collapsed": true
266 | },
267 | "outputs": [],
268 | "source": [
269 | "#Drive\n",
270 | "precond_pos = [expr('At(x)')]\n",
271 | "precond_neg = []\n",
272 | "effect_add = [expr('At(y)')]\n",
273 | "effect_rem = [expr('At(x)')]\n",
274 | "drive = Action(expr('Drive(x, y)'), [precond_pos, precond_neg], [effect_add, effect_rem])"
275 | ]
276 | },
277 | {
278 | "cell_type": "markdown",
279 | "metadata": {},
280 | "source": [
281 | "Finally, we can define a a function that will tell us when we have reached our destination, Bucharest."
282 | ]
283 | },
284 | {
285 | "cell_type": "code",
286 | "execution_count": 9,
287 | "metadata": {
288 | "collapsed": true
289 | },
290 | "outputs": [],
291 | "source": [
292 | "def goal_test(kb):\n",
293 | " return kb.ask(expr(\"At(Bucharest)\"))"
294 | ]
295 | },
296 | {
297 | "cell_type": "markdown",
298 | "metadata": {},
299 | "source": [
300 | "Thus, with all the components in place, we can define the planning problem."
301 | ]
302 | },
303 | {
304 | "cell_type": "code",
305 | "execution_count": 10,
306 | "metadata": {
307 | "collapsed": false
308 | },
309 | "outputs": [],
310 | "source": [
311 | "prob = PDDL(knowledge_base, [fly_s_b, fly_b_s, fly_s_c, fly_c_s, fly_b_c, fly_c_b, drive], goal_test)"
312 | ]
313 | },
314 | {
315 | "cell_type": "code",
316 | "execution_count": null,
317 | "metadata": {
318 | "collapsed": false
319 | },
320 | "outputs": [],
321 | "source": []
322 | },
323 | {
324 | "cell_type": "code",
325 | "execution_count": null,
326 | "metadata": {
327 | "collapsed": true
328 | },
329 | "outputs": [],
330 | "source": []
331 | }
332 | ],
333 | "metadata": {
334 | "kernelspec": {
335 | "display_name": "Python 3",
336 | "language": "python",
337 | "name": "python3"
338 | },
339 | "language_info": {
340 | "codemirror_mode": {
341 | "name": "ipython",
342 | "version": 3
343 | },
344 | "file_extension": ".py",
345 | "mimetype": "text/x-python",
346 | "name": "python",
347 | "nbconvert_exporter": "python",
348 | "pygments_lexer": "ipython3",
349 | "version": "3.4.3"
350 | }
351 | },
352 | "nbformat": 4,
353 | "nbformat_minor": 0
354 | }
355 |
--------------------------------------------------------------------------------
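The planning notebook above defines `prob` but stops before running a planner on it. As a rough illustration of what a forward state-space planner would do with those flight actions, here is a self-contained breadth-first sketch over ground facts. It deliberately avoids the repository's `PDDL`/`Action` API; `plan` and the action tuples below are illustrative only, restating the flight actions defined in the notebook:

```python
from collections import deque

def plan(initial, goal, actions):
    """Breadth-first forward search. States are frozensets of true facts;
    actions are (name, preconditions, additions, deletions) tuples of sets."""
    start = frozenset(initial)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if goal <= state:          # all goal facts hold in this state
            return path
        for name, pre, add, delete in actions:
            if pre <= state:       # action is applicable
                successor = frozenset((state - delete) | add)
                if successor not in seen:
                    seen.add(successor)
                    frontier.append((successor, path + [name]))
    return None

flights = [('Fly(Sibiu, Bucharest)', {'At(Sibiu)'}, {'At(Bucharest)'}, {'At(Sibiu)'}),
           ('Fly(Sibiu, Craiova)', {'At(Sibiu)'}, {'At(Craiova)'}, {'At(Sibiu)'}),
           ('Fly(Craiova, Bucharest)', {'At(Craiova)'}, {'At(Bucharest)'}, {'At(Craiova)'})]
print(plan({'At(Sibiu)'}, {'At(Bucharest)'}, flights))  # -> ['Fly(Sibiu, Bucharest)']
```

--------------------------------------------------------------------------------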
/requirements.txt:
--------------------------------------------------------------------------------
1 | networkx==1.11
2 | jupyter
3 | tqdm
4 |
--------------------------------------------------------------------------------
/setup.py:
--------------------------------------------------------------------------------
1 | import io
2 | import sys
3 | try:
4 | import pypandoc
 5 | except ImportError:
6 | pypandoc = None
7 |
8 | from setuptools import find_packages, setup
9 |
10 | with io.open('aima3/__init__.py', encoding='utf-8') as fid:
11 | for line in fid:
12 | if line.startswith('__version__'):
13 | version = line.strip().split()[-1][1:-1]
14 | break
15 |
16 | with io.open('README.md', encoding='utf-8') as fp:
17 | long_desc = fp.read()
18 | if pypandoc is not None:
19 | try:
20 | long_desc = pypandoc.convert(long_desc, "rst", "markdown_github")
21 |         except Exception:
22 | pass
23 |
24 | setup(name='aima3',
25 | version=version,
26 | description='Artificial Intelligence: A Modern Approach, in Python3',
27 | long_description=long_desc,
28 | author='Douglas Blank',
29 | author_email='doug.blank@gmail.com',
30 | url='https://github.com/Calysto/aima3',
31 | install_requires=['networkx==1.11', 'jupyter', 'tqdm'],
32 | packages=find_packages(include=['aima3', 'aima3.*']),
33 | classifiers=[
34 | 'Framework :: IPython',
35 | 'License :: OSI Approved :: BSD License',
36 | 'Programming Language :: Python :: 3',
37 | ]
38 | )
39 |
--------------------------------------------------------------------------------