├── .gitignore
├── README.md
├── backtranslate.py
├── codex.py
├── consts
│   ├── __init__.py
│   ├── lean_consts.py
│   ├── py_consts.py
│   └── vh_consts.py
├── execute_virtual_home.py
├── fn.py
├── graph.py
├── parsel.ipynb
├── parsel.py
├── parsify.py
├── programs
│   ├── and_commute.lean
│   ├── and_commute.ss
│   ├── collatz_simplest_full.py
│   ├── collatz_simplest_full.ss
│   ├── collatz_simplest_no_tests.py
│   ├── collatz_simplest_no_tests.ss
│   ├── game_of_life_inverse_expand.py
│   ├── game_of_life_inverse_expand.ss
│   ├── game_of_life_inverse_fill.py
│   ├── game_of_life_inverse_fill.ss
│   ├── game_of_life_inverse_no_args.py
│   ├── game_of_life_inverse_no_args.ss
│   ├── lisp.py
│   ├── lisp.ss
│   ├── problem_solving.py
│   ├── problem_solving.ss
│   ├── problem_solving_no_args_no_tests.py
│   ├── problem_solving_no_args_no_tests.ss
│   ├── problem_solving_no_tests.py
│   ├── problem_solving_no_tests.ss
│   ├── virtualhome.py
│   └── virtualhome.ss
└── requirements.txt

/.gitignore:
--------------------------------------------------------------------------------
APPS
__pycache__
keys/*
.DS_Store
cache.json
old/*
generated/*
!programs/lisp.ss

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# 🐍 Parsel

**Parsel** is a natural language framework for writing programs in any target language using code language models. Parsel considers multiple implementations for each function, searching sets of implementations to find programs that pass unit tests (more generally, program constraints). It can be used for many kinds of algorithmic tasks, e.g. code synthesis, robotic planning, and theorem proving.

- 🕸️ [Website](http://zelikman.me/parselpaper/)
- 📜 [Preprint](https://arxiv.org/abs/2212.10561)
- 🐦 Twitter threads: [Current paper version](https://twitter.com/ericzelikman/status/1618426056163356675), [automatic test generation](https://twitter.com/ericzelikman/status/1622605951835705344), and [automatic function naming](https://twitter.com/ericzelikman/status/1625593237946912768).

## Installation
To use this repo, it should be enough to:

```
git clone https://github.com/ezelikman/parsel.git
pip install openai
```

## Notebook
We provide an [intro notebook](https://github.com/ezelikman/parsel/blob/main/parsel.ipynb) showing how to interact with Parsel.

## Examples
Here are some example commands for running this project:
- `python parsel.py programs/problem_solving.ss` will run and transpile the program in the file `programs/problem_solving.ss`.
- `python parsel.py programs/collatz_recursion.ss` shows an example of a recursive function.
- `python parsel.py programs/problem_solving_no_tests.ss -g` shows how you can use automatic test generation to write programs without tests.
- `python parsel.py programs/game_of_life_inverse_no_args.ss -n` shows how you can use natural language to write programs without argument names.
- `python parsel.py programs/problem_solving_no_args_no_tests.ss -g -n` shows how you can use automatic test generation to write programs without tests or argument names, i.e. in entirely natural language.
- To run `python parsel.py programs/and_commute.ss`, change the mode in `consts/__init__.py` to `lean`; assuming you have Lean installed, it will generate a Lean file in the same directory as the input file.
- To run `game_of_life_inverse_expand.ss`, you will need the `-e` flag to automatically expand / decompose it.
- To run `game_of_life_inverse_fill.ss`, you will need the `-a` flag to autofill the program with functions that are called but not defined.
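If you're new to the format: a Parsel file is a tree of natural-language function descriptions, with constraint lines (`input -> output`) placed directly below the function they test and children indented beneath their parent. Here is a small sketch based on the Collatz example used in the prompts in `consts/py_consts.py`; the exact `.ss` files shipped in `programs/` may differ slightly:

```
collatz_recursion(num, cur_list=None): Calls base_case if 1, otherwise recursion_rule
1 -> [1]
  base_case(num, cur_list): Returns the list with the number appended to it
  recursion_rule(num, cur_list): Add num to list, collatz with 3n + 1 if odd or n / 2 if even
    collatz_recursion
```

The bare `collatz_recursion` line is a reference rather than a definition: it lets `recursion_rule` call back into the already-defined top-level function, making the recursion explicit.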
In general, to configure Parsel for a new target programming language, you'll need to create a new file in `consts/` and add it to `consts/__init__.py`. In addition, to use the OpenAI models, you'll need to create a `keys/codex_key.txt` file in the format `organization_id:api_key`.

## Citation

If you find this repo or the paper useful in your research, please feel free to cite [our paper](https://arxiv.org/abs/2212.10561):
```
@misc{zelikman2022parsel,
    url = {https://arxiv.org/abs/2212.10561},
    author = {Zelikman, Eric and Huang, Qian and Poesia, Gabriel and Goodman, Noah D and Haber, Nick},
    keywords = {Computation and Language (cs.CL); Artificial Intelligence (cs.AI); Machine Learning (cs.LG)},
    title = {Parsel 🐍: A (De-)compositional Framework for Algorithmic Reasoning with Language Models},
    publisher = {arXiv},
    year = {2022},
    copyright = {arXiv.org perpetual, non-exclusive license}
}
```

--------------------------------------------------------------------------------
/backtranslate.py:
--------------------------------------------------------------------------------
from parsel import parsel
from parsify import to_parsel
from consts import CONSTS
import os

def backtranslate(solution, root, asserts, save_file, codegen):
    defined_fns = to_parsel(solution)
    defined_fns[root].names_to_fns(defined_fns)
    tree_str = defined_fns[root].to_parsel_str()
    if len(tree_str.strip().split('\n')) > 1:
        for fn in defined_fns:
            defined_fns[fn].describe(codegen, names_to_avoid=list(defined_fns.keys()))
        # Load input_output.json
        defined_fns[root].asserts += asserts
        defined_fns[root].rearrange()
        parsel_str = defined_fns[root].to_parsel_str()
        # Write to file
        if not os.path.exists(save_file.replace(".ss", CONSTS["extension"])):
            with open(save_file, 'w') as f:
                f.write(parsel_str)
            print(f"Writing: {save_file}")
            print(solution)
            try:
                parsel(codegen, save_file)
            except KeyboardInterrupt:
                raise KeyboardInterrupt
            except:
                pass

--------------------------------------------------------------------------------
/codex.py:
--------------------------------------------------------------------------------
import json
import openai
import os
import time
from consts import CONSTS
import random

class CodeGen():
    def __init__(self, cache="cache.json", key="keys/codex_key.txt"):
        self.cache_file = cache
        self.exponential_backoff = 1
        # Load the cache JSON file, if cache file exists.
Else, cache is {} 13 | if os.path.exists(cache): 14 | while os.path.exists(self.cache_file + ".tmp") or os.path.exists(self.cache_file + ".lock"): 15 | time.sleep(0.1) 16 | with open(cache, "r") as f: 17 | self.cache = json.load(f) 18 | else: 19 | self.cache = {} 20 | 21 | # Load codex key from file 22 | with open(key, "r") as f: 23 | codex_key = f.read().strip() 24 | openai.organization, openai.api_key = codex_key.split(":") 25 | 26 | def generate(self, 27 | codex_in, num_completions=8, max_tokens=500, temperature=0.5, presence_penalty=0.0, 28 | stop=["\ndef"], indented=True, indented_after_first_line=False, require=None, cache_key=None, 29 | rate_limit_tokens=4000, verbose=False, logit_bias=None, model_name=None 30 | ): 31 | if model_name is None: 32 | model_name = "code-davinci-002" 33 | if verbose: 34 | print(codex_in) 35 | print("-----") 36 | assert isinstance(codex_in, str) 37 | cache_key_base = codex_in if cache_key is None else cache_key 38 | cache_key_list = (cache_key_base, max_tokens, temperature, stop, indented, indented_after_first_line, require) 39 | if presence_penalty != 0.0: 40 | cache_key_list = cache_key_list + (presence_penalty,) 41 | if model_name != "code-davinci-002": 42 | cache_key_list = cache_key_list + (model_name,) 43 | cache_key = str(cache_key_list) 44 | if cache_key in self.cache: 45 | if len(self.cache[cache_key]) < num_completions: 46 | num_completions -= len(self.cache[cache_key]) 47 | results = self.cache[cache_key] 48 | else: 49 | cur_implementations = self.cache[cache_key].copy() 50 | if "shuffle_implementations" in CONSTS and CONSTS["shuffle_implementations"]: 51 | random.shuffle(cur_implementations) 52 | return cur_implementations[:num_completions] 53 | else: 54 | results = [] 55 | 56 | if model_name != "code-davinci-002": 57 | print("WARNING, using davinci text model") 58 | 59 | print("Calling Codex!") 60 | # raise Exception("Codex is not available") 61 | total_tokens = num_completions * max_tokens 62 | completions_per_call = rate_limit_tokens // max_tokens 63 | while total_tokens > 0: 64 | num_completions = min(total_tokens // max_tokens, completions_per_call) 65 | print(num_completions, "completions", max_tokens, "tokens each") 66 | while True: 67 | try: 68 | time.sleep(8) 69 | if logit_bias is None: 70 | completions = openai.Completion.create( 71 | model=model_name, 72 | prompt=codex_in, 73 | max_tokens=max_tokens, 74 | temperature=temperature, 75 | presence_penalty=presence_penalty, 76 | stop=stop, 77 | n=num_completions, 78 | )['choices'] 79 | else: 80 | completions = openai.Completion.create( 81 | model=model_name, 82 | prompt=codex_in, 83 | max_tokens=max_tokens, 84 | temperature=temperature, 85 | presence_penalty=presence_penalty, 86 | stop=stop, 87 | n=num_completions, 88 | logit_bias=logit_bias 89 | )['choices'] 90 | self.exponential_backoff = 1 91 | break 92 | except openai.error.RateLimitError: 93 | print("Rate limit reached. 
Waiting before retrying...")
                    time.sleep(16 * self.exponential_backoff)
                    self.exponential_backoff *= 2
            for completion in completions:
                result = []
                for line_idx, line in enumerate(completion.text.split("\n")):
                    if (indented or (indented_after_first_line and line_idx > 0)) and line.lstrip() == line and line.strip() != "":
                        break
                    if require is not None and line.strip() != "" and require not in line:
                        break
                    result += [line]
                results.append(result)

            # Save updated cache - reopen in case multiple processes are running.
            # Save to a temp file first, then rename.
            # Check if a temp file exists, and if so, wait for it to be deleted.
            while os.path.exists(self.cache_file + ".tmp") or os.path.exists(self.cache_file + ".lock"):
                time.sleep(0.1)
            # Create an empty file to indicate that we are writing to the cache
            with open(self.cache_file + ".lock", "w") as f:
                pass
            if os.path.exists(self.cache_file):
                with open(self.cache_file, "r") as f:
                    self.cache = json.load(f)
            self.cache[cache_key] = results
            with open(self.cache_file + ".tmp", "w") as f:
                json.dump(self.cache, f)
            os.rename(self.cache_file + ".tmp", self.cache_file)
            os.remove(self.cache_file + ".lock")
            total_tokens -= num_completions * max_tokens
        return results

--------------------------------------------------------------------------------
/consts/__init__.py:
--------------------------------------------------------------------------------
mode = 'py'
if mode == 'py':
    from .py_consts import CONSTS
elif mode == 'lean':
    from .lean_consts import CONSTS
elif mode == 'vh':
    from .vh_consts import CONSTS

def assert_mode(target_mode):
    if mode == target_mode:
        return
    # If the mode is not the target mode,
    # ask the user if they want to change the mode
    print(f'The current mode is {mode}, but the target mode is {target_mode}.')
    choice = input('Do you want to change the mode? [y/N]')
    if choice == 'y':
        # Edit the file
        with open(__file__, 'r') as f:
            lines = f.readlines()
        for i, line in enumerate(lines):
            if line.startswith('mode ') or line.startswith('mode='):
                lines[i] = f"mode = '{target_mode}'\n"
                break
        with open(__file__, 'w') as f:
            f.writelines(lines)
        raise RuntimeError(f'Mode changed to {target_mode}. Please restart the program.')
    else:
        raise RuntimeError('Incorrect mode.')
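As a quick illustration of how `CodeGen` above is meant to be driven — a minimal sketch; the constructor arguments and the `generate` signature come from `codex.py`, but the prompt string and surrounding script are invented for this example:

```python
from codex import CodeGen

# Assumes keys/codex_key.txt exists and holds "organization_id:api_key"
codegen = CodeGen(cache="cache.json", key="keys/codex_key.txt")

# Each result is a list of lines; identical calls are served from cache.json,
# so re-running the same prompt makes no further API calls.
completions = codegen.generate(
    codex_in="def add_one(x):\n",  # hypothetical prompt
    num_completions=4,
    max_tokens=100,
    temperature=0.5,
    stop=["\ndef"],
    indented=True,
)
for lines in completions:
    print("\n".join(lines))
```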
--------------------------------------------------------------------------------
/consts/lean_consts.py:
--------------------------------------------------------------------------------
import tempfile
import subprocess
import os

def lean_exec(code):
    # Create a temporary file to store the code
    assert "begin" in code
    assert "sorry" not in code
    with tempfile.NamedTemporaryFile(mode='w', suffix='.lean', delete=False) as f:
        f.write(code)
        f.flush()
    # Execute the code
    try:
        subprocess.run(['lean', f.name], check=True)
    except subprocess.CalledProcessError:
        print("Error executing code: ", code)
        raise
    finally:
        os.remove(f.name)

CONSTS = {
    "rate_limit_tokens_text": 16000,
    "exec_pre": "",
    "needs_indent": False,
    "fn_init": "def ",
    "exclude_init": ["from ", "import "],
    "fn_end": "return",
    "gen_stop": ["\nlemma", "\ntheorem", "\nexample", "\nimport"],
    "import": "import {name} helpers\n",
    "header_str": lambda name, arg_list: f"lemma {name}{' '.join(f'({arg})' for arg in arg_list if arg.strip())}",
    "sig_helper": "-- Signature: {sig}\n",
    "desc_helper": "-- Description: {desc}\n",
    "ret_helper": "-- Returns: {ret}\n",
    "use_helper": "-- Applies: {uses}\n",
    "impl_helper": "{header}:\n{asserts}\n{impls}",
    "assert_helper": lambda assertion: " {assertion} :=".format(
        assertion=assertion.replace('show', '').strip()),
    "assert_check": lambda line: line.strip().startswith('show'),
    "assert_break": lambda cur_assert: (cur_assert, None),
    "assert_format": "-- {assert_in}\n",
    "explain_helper": "-- Reviewer:\n"
                      "-- Please explain the above function in one sentence with as much detail as possible.\n"
                      "-- In your one-sentence description, specify the range and domain of your function precisely.\n"
                      "-- Your description should be clear enough that someone could reimplement the function from it.\n"
                      "-- Author:\n"
                      "-- Sounds good, here's my one-sentence explanation of {name}:\n"
                      "-- {name}",
    "decompose_helper": "-- Let's decompose this lemma into two lemmas:\n"
                        "-- Lemma to decompose:\n"
                        "-- - {parsel_str}\n"
                        "-- Sublemmas in the same format of 'lemma_name(hypotheses): description':\n",
    "example_helper": "-- {example}\n",
    "missing_gen_helper": "-- Helper function for {parent_name}\n"
                          "-- Usage examples:\n"
                          "-- {examples_str}\n"
                          "-- def {missing_fn_name}(",
    "decompose_example_prefix": " - ",
    'extension': '.lean',
    "output_fn": "-- ({output_str})\n",
    "full_fn_str": "-- {desc}\n{fn_impl}\n",
    "get_assert_in": lambda assert_str: assert_str.split('==')[0].replace('assert', '').strip(),
    "exist_asserts": lambda _: True,
    "exec": lean_exec,
    "impl_filter": lambda impl: "begin" in impl and "sorry" not in impl,
    "implicit_assert": True,
}
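For orientation, this is the shape of proof that `lean_exec` and `impl_filter` accept: it must contain `begin` and must not contain `sorry`. Below is a hedged sketch, in Lean 3, of the commutativity lemma that `programs/and_commute.ss` targets; the lemma Parsel actually generates may differ:

```lean
lemma and_commute (a b : Prop) (h : a ∧ b) : b ∧ a :=
begin
  cases h with ha hb,
  exact ⟨hb, ha⟩,
end
```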
--------------------------------------------------------------------------------
/consts/py_consts.py:
--------------------------------------------------------------------------------
from contextlib import redirect_stdout

vis = False

exec_imports = (
    "import sys\nimport time\nimport itertools\nfrom itertools import accumulate, product, permutations, "
    "combinations\nimport collections\nfrom collections import Counter, OrderedDict, deque, defaultdict, "
    "ChainMap\nfrom functools import lru_cache\nimport math\nfrom math import sqrt, sin, cos, tan, ceil, "
    "fabs, floor, gcd, exp, log, log2\nimport fractions\nfrom typing import List, Tuple\nimport numpy as "
    "np\nimport random\nimport heapq\n"
)

def add_name_and_args(parsel_text):
    return """Parses the input and number of lines and returns the number of operations required.
Takes the input line and first splits on newlines. Ignores the first line, and parses each of the remaining lines as a list of one character and one number, which give a list L of lists. Returns L.
Repeatedly calls do_operation on the list until it returns false. Returns how many calls were made.
Receives a list of pairs where the first element is the color and the second is a rank. For each index i, looks for any previous element 0 <= j < i that has the same color and with a larger rank. If no such element exists, keeps going. Otherwise, swaps elements i and i - 1 and returns true. If it never found any element at the end, returns false.
------>
num_ops(input_string, n_lines)
parse_input(input_string)
minimum_operations(l)
do_operation(l)



Parses the input and returns the minimum area of the input.
Takes the input line and first splits on newline. Ignores the first line, and parses each of the remaining lines as a tuple of two numbers, which give a list L of tuples. Returns L.
Splits l on space, converts each element to int, and returns the result of converting the result to a list.
Returns all subsets of L with sizes ranging from 0 to k, inclusive.
Recursively enumerates the subsets of size k of the list L. Base cases: if k = 0, returns a list containing the empty list. If k > len(L), returns the empty list. Otherwise, first construct the subsets that contain the first element, then those that do not, and return their concatenation.
First, calls enumerate_subsets_at_most_k passing whs and half the length of whs rounded down. Returns the minimum result of calling compute_area on the list given by apply_inversions with whs and the subset.
Returns all subsets of L with sizes ranging from 0 to k, inclusive.
Takes a list of pairs (width, height). Computes the sum of the widths and the maximum of the heights. Returns the product of those two numbers.
Takes a list of pairs of form (w, h) and a subset of indices to invert. Returns a list where the elements of whs whose index is in the subset are inverted to (h, w), and the others appear as given.
35 | ------> 36 | min_input_area(input_string) 37 | parse_input(input_string) 38 | parse_line(l) 39 | enumerate_subsets_at_most_k(L, k) 40 | enumerate_subsets(L, k) 41 | minimum_area(whs) 42 | enumerate_subsets_at_most_k 43 | compute_area(whs) 44 | apply_inversions(whs, subset) 45 | 46 | 47 | 48 | Calls base_case if 1, otherwise recursion_rule 49 | Returns the list with the number appended to it 50 | Add num to list, collatz with 3n + 1 if odd or n / 2 if even 51 | Calls base_case if 1, otherwise recursion_rule 52 | ------> 53 | collatz_recursion(num, cur_list=None) 54 | base_case(num, cur_list) 55 | recursion_rule(num, cur_list) 56 | collatz_recursion 57 | 58 | 59 | 60 | {parsel_text} 61 | ------> 62 | """.format(parsel_text=parsel_text) 63 | 64 | if vis: 65 | exec_imports += "import clip\nfrom PIL import Image\n" 66 | exec_imports += "import torch\nfrom torch import nn\nfrom torch.nn import functional as F\n" 67 | 68 | def eval_fn(fn_str): 69 | if vis: 70 | import clip 71 | from PIL import Image 72 | import torch 73 | from torch import nn 74 | from torch.nn import functional as F 75 | import io, contextlib 76 | import sys 77 | import time 78 | import resource 79 | import itertools 80 | from itertools import accumulate, product, permutations, combinations 81 | import collections 82 | from collections import Counter, OrderedDict, deque, defaultdict, ChainMap 83 | from functools import lru_cache 84 | import math 85 | from math import sqrt, sin, cos, tan, ceil, fabs, floor, gcd, exp, log, log2 86 | import fractions 87 | from typing import List, Tuple 88 | import numpy as np 89 | import random 90 | import heapq 91 | f = io.StringIO() 92 | with redirect_stdout(f): 93 | exec(exec_imports + fn_str, locals()) 94 | 95 | def prepend_hash(lines_str): 96 | return "\n".join(["#" + line for line in lines_str.split("\n")]) 97 | 98 | def find_str(line, target): 99 | # Find the first : not in parentheses 100 | paren_count = 0 101 | bracket_count = 0 102 | curly_count = 0 103 | in_string = None 104 | for i, c in enumerate(line): 105 | if c == "(": 106 | paren_count += 1 107 | elif c == ")": 108 | paren_count -= 1 109 | elif c == "[": 110 | bracket_count += 1 111 | elif c == "]": 112 | bracket_count -= 1 113 | elif c == "{": 114 | curly_count += 1 115 | elif c == "}": 116 | curly_count -= 1 117 | elif c == "\"" or c == "'": 118 | if in_string == c: 119 | in_string = None 120 | else: 121 | in_string = c 122 | elif c == target and paren_count == 0 and bracket_count == 0 and curly_count == 0 and in_string is None: 123 | return i 124 | return -1 125 | 126 | def assert_check(line): 127 | line = line.strip() 128 | return find_str(line, ":") == -1 and "->" in line and (find_str(line, "-") == find_str(line, ">") - 1) 129 | 130 | def assert_break(cur_assert): 131 | if "->" not in cur_assert: 132 | return cur_assert, None 133 | return (cur_assert.split("->")[0].strip(), cur_assert.split("->")[1].strip()) 134 | 135 | def unwrap_parens(line): 136 | if line[0] == "(" and line[-1] == ")": 137 | # Check that the parens wrap the whole line 138 | # Do this by checking if "=" is in the line 139 | equals = find_str(line, "=") 140 | if equals == -1: 141 | return line[1:-1] 142 | return line 143 | 144 | def assert_format(name, assert_in, assert_out): 145 | if assert_out is not None: 146 | return f"assert repr(str({name}({assert_in}))) == repr(str({assert_out})) or ({name}({assert_in}) == {assert_out})\n" 147 | return f"assert {unwrap_parens(assert_in)}\n" 148 | 149 | # Simplify an assert generated by the language model 150 | 
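Taken together, `find_str`, `assert_check`, `assert_break`, and `assert_format` above turn a Parsel constraint line into a Python assert. A hedged sketch of that round trip (the function names are real; the sample constraint is invented):

```python
# A Parsel constraint takes the form "args -> expected_output".
line = "1 -> [1]"                  # hypothetical constraint for collatz_recursion
assert_check(line)                 # True: contains a top-level '->' and no ':'
a_in, a_out = assert_break(line)   # ('1', '[1]')
print(assert_format("collatz_recursion", a_in, a_out))
# assert repr(str(collatz_recursion(1))) == repr(str([1])) or (collatz_recursion(1) == [1])
```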
def simplify_assert(assert_passed): 151 | assert_passed = assert_passed.replace("assert", "", 1).strip() 152 | from consts.py_consts import unwrap_parens 153 | assert_passed = unwrap_parens(assert_passed) 154 | if find_str(assert_passed, "#") != -1: 155 | assert_passed = assert_passed[:find_str(assert_passed, "#")].strip() 156 | return assert_passed 157 | 158 | 159 | CONSTS = { 160 | "rate_limit_tokens_text": 16000, 161 | "num_completions": 64, 162 | "min_completions": 8, 163 | "num_completions_eval": 64, 164 | "text_model_name": None, 165 | "timeout": 0.5, 166 | "shuffle_always": True, 167 | "num_text_completions": 8, 168 | "max_text_completions": 8, 169 | "exec_pre": exec_imports, 170 | 'strict_mode': False, 171 | "needs_indent": True, 172 | "eval_mode": False, 173 | "fn_init": "def ", 174 | "exclude_init": ["from ", "import "], 175 | "fn_end": "return", 176 | "gen_stop": ["\ndef"], 177 | "import": "from helpers import {name}\n", 178 | "header_str": lambda name, args: f"def {name}({', '.join(args)})", 179 | "sig_helper": "# Signature: {sig}\n", 180 | "desc_helper": lambda desc: prepend_hash(f" Description: {desc}") + "\n", 181 | "ret_helper": "# Returns: {ret}\n", 182 | "use_helper": "# Uses: {uses}\n", 183 | "impl_helper": "{header}:\n{impls}", 184 | "assert_helper": lambda _: "", 185 | "assert_check": assert_check, 186 | "assert_break": assert_break, 187 | "assert_format": assert_format, 188 | "simplify_assert": simplify_assert, 189 | "add_name_and_args": add_name_and_args, 190 | "explain_helper": "# Reviewer:\n" 191 | "# Please explain the above function in one sentence with as much detail as possible.\n" 192 | "# In your one-sentence description, specify the range and domain of your function precisely.\n" 193 | "# Your description should be clear enough that someone could reimplement the function from it.\n" 194 | "# Author:\n" 195 | "# Sounds good, here's my one-sentence explanation of {name}:\n" 196 | "# {name}", 197 | "decompose_helper": "# Let's decompose this function into at most three functions:\n" 198 | "# Function to decompose:\n" 199 | "# - {parsel_str}\n" 200 | "# Necessary helper functions in the same format of 'fn_name(inputs): description':\n", 201 | "example_helper": "# {example}\n", 202 | "missing_gen_helper": "# Helper function for {parent_name}\n" 203 | "# Usage examples:\n" 204 | "# {examples_str}\n" 205 | "# def {missing_fn_name}(", 206 | "decompose_example_prefix": " - ", 207 | 'extension': '.py', 208 | "output_fn": "print({output_str})\n", 209 | "full_fn_str": "# {desc}\n{fn_impl}\n", 210 | "get_assert_in": lambda assert_str: assert_str.split('==')[0].replace('assert', '').strip(), 211 | "exist_asserts": lambda assert_str: 'assert' in assert_str, 212 | "exec": eval_fn, 213 | "impl_filter": lambda _: True, 214 | "implicit_assert": False, 215 | } 216 | -------------------------------------------------------------------------------- /consts/vh_consts.py: -------------------------------------------------------------------------------- 1 | exec_imports = ( 2 | "import sys\nimport time\nimport itertools\nfrom itertools import accumulate, product, permutations, " 3 | "combinations\nimport collections\nfrom collections import Counter, OrderedDict, deque, defaultdict, " 4 | "ChainMap\nfrom functools import lru_cache\nimport math\nfrom math import sqrt, sin, cos, tan, ceil, " 5 | "fabs, floor, gcd, exp, log, log2\nimport fractions\nfrom typing import List, Tuple\nimport numpy as " 6 | "np\nimport random\nimport heapq\n" 7 | ) 8 | 9 | def eval_fn(fn_str): 10 | import io, 
contextlib 11 | import sys 12 | import time 13 | import resource 14 | import itertools 15 | from itertools import accumulate, product, permutations, combinations 16 | import collections 17 | from collections import Counter, OrderedDict, deque, defaultdict, ChainMap 18 | from functools import lru_cache 19 | import math 20 | from math import sqrt, sin, cos, tan, ceil, fabs, floor, gcd, exp, log, log2 21 | import fractions 22 | from typing import List, Tuple 23 | import numpy as np 24 | import random 25 | import heapq 26 | exec(exec_imports + fn_str, locals()) 27 | 28 | def prepend_hash(lines_str): 29 | return "\n".join(["#" + line for line in lines_str.split("\n")]) 30 | 31 | def find_str(line, target): 32 | # Find the first : not in parentheses 33 | paren_count = 0 34 | bracket_count = 0 35 | curly_count = 0 36 | in_string = None 37 | for i, c in enumerate(line): 38 | if c == "(": 39 | paren_count += 1 40 | elif c == ")": 41 | paren_count -= 1 42 | elif c == "[": 43 | bracket_count += 1 44 | elif c == "]": 45 | bracket_count -= 1 46 | elif c == "{": 47 | curly_count += 1 48 | elif c == "}": 49 | curly_count -= 1 50 | elif c == "\"" or c == "'": 51 | if in_string == c: 52 | in_string = None 53 | else: 54 | in_string = c 55 | elif c == target and paren_count == 0 and bracket_count == 0 and curly_count == 0 and in_string is None: 56 | return i 57 | return -1 58 | 59 | def assert_check(line): 60 | line = line.strip() 61 | return find_str(line, ":") == -1 and "->" in line and (find_str(line, "-") == find_str(line, ">") - 1) 62 | 63 | CONSTS = { 64 | "rate_limit_tokens_text": 16000, 65 | "num_completions": 16, 66 | "num_completions_eval": 8, 67 | "text_model_name": None, 68 | "num_text_completions": 8, 69 | "max_text_completions": 8, 70 | "exec_pre": exec_imports, 71 | 'strict_mode': False, 72 | "needs_indent": True, 73 | "eval_mode": False, 74 | "fn_init": "def ", 75 | "exclude_init": ["from ", "import "], 76 | "fn_end": "return", 77 | "gen_stop": ["\ndef"], 78 | "default_prefix": "\"\"\"An action plan is a list of strings that describes a sequence of steps to accomplish a task, To be correctly parsed, an action plan must be syntactically correct and contain only allowed actions and recognizable simple objects. Allowed actions: 'close' , 'cut' , 'drink' , 'drop' , 'eat' , 'find' , 'grab' , 'greet' , 'lie on' , 'look at' , 'open' , 'plug in' , 'plug out' , 'point at' , 'pour' 'into' , 'pull' , 'push' , 'put' 'on' , 'put' 'in' , 'put back' , 'take off' , 'put on' , 'read' , 'release', 'rinse' , 'run to' , 'scrub' , 'sit on' , 'sleep', 'squeeze' , 'stand up', 'switch off' , 'switch on' , 'touch' , 'turn to' , 'type on' , 'wake up', 'walk to' , 'wash' , 'watch' , 'wipe' . To satisfy the common-sense constraints, each action step in this action plan must not violate the set of its pre-conditions (e.g. the agent cannot grab milk from the fridge before opening it) and post-conditions (e.g. 
the state of the fridge changes from \“closed\” to \“open\” after the agent opens it).\"\"\"\n", 79 | "import": "from helpers import {name}\n", 80 | "header_str": lambda name, args: f"def {name}({', '.join(args)})", 81 | "sig_helper": "# Signature: {sig}\n", 82 | "desc_helper": lambda desc: prepend_hash(f" Description: {desc}") + "\n", 83 | "ret_helper": "# Returns: {ret}\n", 84 | "use_helper": "# Uses: {uses}\n", 85 | "impl_helper": "{header}:\n{impls}", 86 | "assert_helper": lambda _: "", 87 | "assert_check": assert_check, 88 | "assert_break": lambda cur_assert: (cur_assert.split("->")[0].strip(), cur_assert.split("->")[1].strip()), 89 | "assert_format": "from execute_virtual_home import test_script;assert test_script({name}({assert_in}))\n", 90 | "explain_helper": "# Reviewer:\n" 91 | "# Please explain the above function in one sentence with as much detail as possible.\n" 92 | "# In your one-sentence description, specify the range and domain of your function precisely.\n" 93 | "# Your description should be clear enough that someone could reimplement the function from it.\n" 94 | "# Author:\n" 95 | "# Sounds good, here's my one-sentence explanation of {name}:\n" 96 | "# {name}", 97 | "decompose_helper": "# Let's decompose this function into at most three functions:\n" 98 | "# Function to decompose:\n" 99 | "# - {parsel_str}\n" 100 | "# Necessary helper functions in the same format of 'fn_name(inputs): description':\n", 101 | "example_helper": "# {example}\n", 102 | "missing_gen_helper": "# Helper function for {parent_name}\n" 103 | "# Usage examples:\n" 104 | "# {examples_str}\n" 105 | "# def {missing_fn_name}(", 106 | "decompose_example_prefix": " - ", 107 | 'extension': '.py', 108 | "output_fn": "print({output_str})\n", 109 | "full_fn_str": "# {desc}\n{fn_impl}\n", 110 | "get_assert_in": lambda assert_str: assert_str.split('==')[0].replace('assert', '').strip(), 111 | "exist_asserts": lambda assert_str: 'assert' in assert_str, 112 | "exec": eval_fn, 113 | "impl_filter": lambda _: True, 114 | "implicit_assert": False, 115 | } -------------------------------------------------------------------------------- /execute_virtual_home.py: -------------------------------------------------------------------------------- 1 | import glob 2 | from sys import platform 3 | import sys 4 | from PIL import Image 5 | import matplotlib.pyplot as plt 6 | from tqdm import tqdm 7 | import os 8 | import virtualhome 9 | from unity_simulator.comm_unity import UnityCommunication 10 | from unity_simulator import utils_viz 11 | import json 12 | from virtualhome.simulation.evolving_graph.check_programs import ScriptParseException 13 | from virtualhome.simulation.evolving_graph.scripts import read_script_from_string 14 | from virtualhome.simulation.evolving_graph.execution import ScriptExecutor 15 | from virtualhome.simulation.evolving_graph.environment import EnvironmentGraph 16 | from virtualhome.simulation.evolving_graph import utils 17 | 18 | def add_node(graph, n): 19 | graph['nodes'].append(n) 20 | 21 | def add_edge(graph, fr_id, rel, to_id): 22 | graph['edges'].append({'from_id': fr_id, 'relation_type': rel, 'to_id': to_id}) 23 | 24 | 25 | def find_nodes(graph, **kwargs): 26 | if len(kwargs) == 0: 27 | return None 28 | else: 29 | k, v = next(iter(kwargs.items())) 30 | return [n for n in graph['nodes'] if n[k] == v] 31 | 32 | def setup(): 33 | mode = 'auto' # auto / manual 34 | if mode == 'auto': 35 | if platform == 'darwin': 36 | exec_file = 'macos_exec.v2.3.0*' 37 | else: 38 | exec_file = 'linux_exec*.x86_64' 
39 | file_names = glob.glob(exec_file) 40 | if len(file_names) > 0: 41 | file_name = file_names[0] 42 | comm = UnityCommunication(file_name=file_name, port="8082", x_display="0") 43 | else: 44 | print("Error: executable path not found.") 45 | else: 46 | comm = UnityCommunication() 47 | return comm 48 | 49 | def init_graph(comm, graph): 50 | 51 | sofa = find_nodes(graph, class_name='sofa')[-2] 52 | 53 | add_node(graph, {'class_name': 'bread', 54 | 'id': 1000, 55 | 'properties': [], 56 | 'states': []}) 57 | 58 | floor = find_nodes(graph, class_name='floor')[1] 59 | 60 | add_node(graph, {'class_name': 'table', 61 | 'id': 1001, 62 | 'properties': [], 63 | 'states': []}) ## somehow this just refuses to be added ... 64 | 65 | add_edge(graph, 1000, 'ON', sofa['id']) 66 | add_edge(graph, 1001, 'ON', floor['id']) 67 | 68 | success, message = comm.expand_scene(graph) 69 | assert success 70 | comm.add_character('chars/Female2', initial_room='kitchen') 71 | 72 | s, g = comm.environment_graph() 73 | return g 74 | 75 | import re 76 | 77 | NL_ACTIONS = [['close'], ['cut'], ['drink'], ['drop'], ['eat'], ['find'], ['grab'], ['greet'], ['lie on'], ['look at'], ['move'], ['open'], ['plug in'], ['plug out'], ['point at'], ['pour', 'into'], ['pull'], ['push'], ['put', 'on'], ['put', 'in'], ['put back'], ['take off'], ['put on'], ['read'], ["release"], ['rinse'], ['run to'], ['scrub'], ['sit on'], ['sleep'], ['squeeze'], ['stand up'], ['switch off'], ['switch on'], ['touch'], ['turn to'], ['type on'], ['wake up'], ['walk to'], ['wash'], ['watch'], ['wipe']] 78 | M_ACTIONS = ['[CLOSE]', '[CUT]', '[DRINK]', '[DROP]', '[EAT]', '[FIND]', '[GRAB]', '[GREET]', '[LIE]', '[LOOKAT]', '[MOVE]', '[OPEN]', '[PLUGIN]', '[PLUGOUT]', '[POINTAT]', '[POUR]', '[PULL]', '[PUSH]', '[PUTBACK]', '[PUTIN]', '[PUTOBJBACK]', '[PUTOFF]', '[PUTON]', '[READ]', '[RELEASE]', '[RINSE]', '[RUN]', '[SCRUB]', '[SIT]', '[SLEEP]', '[SQUEEZE]', '[STANDUP]', '[SWITCHOFF]', '[SWITCHON]', '[TOUCH]', '[TURNTO]', '[TYPE]', '[WAKEUP]', '[WALK]', '[WASH]', '[WATCH]', '[WIPE]'] 79 | 80 | def get_obj_id(obj, graph): 81 | print(obj) 82 | return [node['id'] for node in graph['nodes'] if node['class_name'] == obj][0] 83 | 84 | def formalize_script(gen_script, graph): 85 | script = [] 86 | 87 | for l in gen_script: 88 | for acts_idx, acts in enumerate(NL_ACTIONS): 89 | if all([a in l for a in acts]): 90 | for a in acts: 91 | l = l.replace(a + " ", "|") 92 | objects = l.split("|")[1:] 93 | l = M_ACTIONS[acts_idx] + " " + " ".join(["<{}> ({})".format( obj.strip(), get_obj_id(obj.strip(), graph)) for obj in objects]) 94 | break 95 | script.append(l) 96 | return script 97 | 98 | 99 | def check_executability(string, graph_dict): 100 | able_to_be_parsed = False 101 | able_to_be_executed = False 102 | 103 | try: 104 | script = read_script_from_string(string) 105 | able_to_be_parsed = True 106 | except ScriptParseException: 107 | return able_to_be_parsed, able_to_be_executed, None 108 | 109 | graph = EnvironmentGraph(graph_dict) 110 | name_equivalence = utils.load_name_equivalence() 111 | executor = ScriptExecutor(graph, name_equivalence) 112 | 113 | try: 114 | state_enum = executor.find_solutions(script) 115 | executable = state_enum is not None 116 | except AttributeError: 117 | print("Attribute error") 118 | print("Program:") 119 | programs = string.split(', ') 120 | for p in programs: 121 | print(p) 122 | return able_to_be_parsed, able_to_be_executed, None 123 | except: 124 | print("Unexpected error:", sys.exc_info()[0]) 125 | print("Program:") 126 | programs = 
string.split(', ') 127 | for p in programs: 128 | print(p) 129 | return able_to_be_parsed, able_to_be_executed, None 130 | 131 | if executable: 132 | able_to_be_executed = True 133 | return able_to_be_parsed, True, state_enum 134 | else: 135 | print(executor.info.get_error_string()) 136 | return able_to_be_parsed, able_to_be_executed, None 137 | 138 | def test_script(gen_script, strict = False): 139 | cache_file = "virtual_home_test_graph.json" 140 | if os.path.exists(cache_file) and not strict: 141 | with open(cache_file, 'r') as f: 142 | graph = json.load(f) 143 | else: 144 | comm = setup() 145 | comm.reset(4) 146 | success, graph = comm.environment_graph() 147 | 148 | graph = init_graph(comm, graph) 149 | with open(cache_file, 'w') as f: 150 | json.dump(graph, f) 151 | 152 | 153 | print(gen_script) 154 | # gen_script = ['grab cup', 'put cup on table', 'grab bread', 'put bread on desk'] 155 | script = formalize_script(gen_script, graph) 156 | 157 | print(script) 158 | ### check soft execution via executor.find_solutions; mainly check whether stuff are possible to parse and execute in any way 159 | able_to_be_parsed, able_to_be_executed, final_state = check_executability( ", ".join(script), graph) 160 | 161 | 162 | print(able_to_be_parsed, able_to_be_executed) 163 | # assert able_to_be_parsed 164 | # assert able_to_be_executed 165 | if not (able_to_be_parsed and able_to_be_executed): 166 | return "not executable" 167 | if strict: 168 | ### execution check; too expensive for on the fly 169 | script = [" " + s for s in script] 170 | print(script) 171 | success, message = comm.render_script(script=script, 172 | processing_time_limit=60, 173 | find_solution=False, 174 | image_width=320, 175 | image_height=240, 176 | skip_animation=True, 177 | recording=False, 178 | save_pose_data=False, 179 | file_name_prefix='relax') 180 | print(message) 181 | if not success: 182 | return "not executable" 183 | return "executable" 184 | 185 | if __name__ == "__main__": 186 | from programs.saycan import task_plan 187 | gen_script = task_plan() 188 | # gen_script = ['grab mug'] 189 | gen_script = gen_script 190 | test_script(gen_script, strict = True) -------------------------------------------------------------------------------- /fn.py: -------------------------------------------------------------------------------- 1 | from consts import CONSTS 2 | import random 3 | 4 | # Parsel representation of a function 5 | class Function: 6 | def __init__(self, name, args, ret, desc, parent, asserts, prefix=''): 7 | self.name = name 8 | self.args = args 9 | self.ret = ret 10 | self.desc = desc 11 | if "default_prefix" in CONSTS: 12 | prefix = prefix + CONSTS["default_prefix"] 13 | self.prefix = prefix 14 | if parent is None: 15 | self.parents = [] 16 | else: 17 | self.parents = [parent] 18 | self.asserts = asserts 19 | self.children = [] 20 | self.implementations = [] 21 | self.fixed_implementation = None 22 | 23 | # i.e. 
returns the function signature 24 | def header(self): 25 | return CONSTS["header_str"](self.name, self.args) 26 | 27 | # Constructs prompt for code generation 28 | def get_codex_input(self): 29 | base_str = "" 30 | base_str += self.prefix 31 | already_listed = [self.name] 32 | for child in self.children: 33 | if child.name in already_listed: 34 | continue 35 | already_listed.append(child.name) 36 | ret_str = (" -> " + ", ".join(child.ret)) if child.ret else "" 37 | if isinstance(CONSTS["desc_helper"], str): 38 | base_str += CONSTS["desc_helper"].format(desc=child.desc) 39 | else: 40 | base_str += CONSTS["desc_helper"](child.desc) 41 | base_str += CONSTS["sig_helper"].format( 42 | sig=f"{child.name}({', '.join(child.args)}){ret_str}") 43 | base_str += CONSTS["import"].format(name=child.name) 44 | base_str += f"\n" 45 | if self.desc: 46 | if isinstance(CONSTS["desc_helper"], str): 47 | base_str += CONSTS["desc_helper"].format(desc=self.desc) 48 | else: 49 | base_str += CONSTS["desc_helper"](self.desc) 50 | if self.ret and ', '.join(self.ret): 51 | base_str += CONSTS["ret_helper"].format(ret=', '.join(self.ret)) 52 | other_children = [child for child in self.children if child.name != self.name] 53 | if other_children: 54 | base_str += CONSTS["use_helper"].format( 55 | uses=', '.join([child.name for child in other_children])) 56 | base_str += f"{self.header()}:\n" 57 | if self.asserts: 58 | for cur_assert in self.asserts: 59 | base_str += CONSTS["assert_helper"](cur_assert) 60 | return base_str 61 | 62 | # Constructs prompt for code generation 63 | def get_codex_test_input(self): 64 | base_str = self.get_codex_input() 65 | base_str += f""" pass 66 | 67 | # check the correctness of {self.name} 68 | assert""" 69 | return base_str 70 | 71 | # Convert Parsel-style asserts to asserts in the target language 72 | def get_assert_str(self): 73 | assert_str = "" 74 | for cur_assert in self.asserts: 75 | assert_in, assert_out = CONSTS["assert_break"](cur_assert) 76 | if isinstance(CONSTS["assert_format"], str): 77 | assert_str += CONSTS["assert_format"].format( 78 | name=self.name, assert_in=assert_in, assert_out=assert_out) 79 | else: 80 | assert_str += CONSTS["assert_format"](self.name, assert_in, assert_out) 81 | return assert_str 82 | 83 | # Get the string representation of all implementations of this function 84 | def get_implementation_strs(self): 85 | def join_str(strs): 86 | return "\n".join(strs) 87 | return [CONSTS["impl_helper"].format( 88 | header=self.header(), 89 | impls=join_str(impl), 90 | asserts=join_str([ 91 | CONSTS["assert_helper"](cur_assert) for cur_assert in self.asserts]), 92 | ) for impl in self.implementations] 93 | 94 | # Call code model and optionally filter the results 95 | # Generate implementations for the function 96 | def implement(self, codex, num_completions=None): 97 | if 'max_tokens' in CONSTS: 98 | max_tokens = CONSTS['max_tokens'] 99 | else: 100 | max_tokens = 500 101 | if num_completions is None: 102 | num_completions = CONSTS['num_completions'] 103 | self.implementations = codex.generate( 104 | codex_in=self.get_codex_input(), 105 | num_completions=num_completions, 106 | max_tokens=max_tokens, 107 | temperature=0.6, 108 | stop=CONSTS["gen_stop"], 109 | indented=CONSTS['needs_indent'], 110 | indented_after_first_line=False, 111 | require=None, 112 | cache_key=None, 113 | ) 114 | self.implementations = list(filter(CONSTS["impl_filter"], self.implementations)) 115 | if "shuffle_implementations" in CONSTS and CONSTS["shuffle_implementations"]: 116 | 
random.shuffle(self.implementations) 117 | self.implementations = self.implementations[:CONSTS['num_completions_eval']] 118 | 119 | # Generate tests for this function 120 | def generate_tests(self, codex, num_completions=None): 121 | if num_completions is None: 122 | num_completions = CONSTS['num_completions'] 123 | tests = codex.generate( 124 | codex_in=self.get_codex_test_input(), 125 | num_completions=num_completions * 5, 126 | max_tokens=100, 127 | temperature=0.6, 128 | stop="\n", 129 | indented=CONSTS['needs_indent'], 130 | indented_after_first_line=False, 131 | require=None, 132 | cache_key=None, 133 | ) 134 | tests = set([test[0] for test in tests if test]) 135 | self.asserts = tests 136 | return tests 137 | 138 | # Converts any parent names to references to the actual parent functions 139 | # Same for children 140 | # This is because sometimes we need to create references to functions before they are defined 141 | # Even when they are technically in scope 142 | # e.g. consider the following valid Parsel program 143 | # """ 144 | # a: description1 145 | # b 146 | # b: description2 147 | # """ 148 | def names_to_fns(self, defined_fns): 149 | for i, child_name in enumerate(self.children): 150 | if isinstance(child_name, str): 151 | self.children[i] = defined_fns[child_name] 152 | defined_fns[child_name].names_to_fns(defined_fns) 153 | for i, parent_name in enumerate(self.parents): 154 | if isinstance(parent_name, str): 155 | self.parents[i] = defined_fns[parent_name] 156 | defined_fns[parent_name].names_to_fns(defined_fns) 157 | 158 | # Describe what a function does using code model 159 | # This is for backtranslation / decompilation 160 | def describe(self, codex, names_to_avoid=None): 161 | if names_to_avoid is None: 162 | names_to_avoid = [] 163 | body_str = f"{self.fixed_implementation}\n" 164 | body_str += CONSTS["explain_helper"].format(name=self.name) 165 | gen_desc = codex.generate(body_str, num_completions=1, temperature=0., indented=False)[0] 166 | new_desc = [] 167 | first_line = True 168 | for line in gen_desc[:5]: 169 | line = line.lstrip("#").rstrip() 170 | if first_line: 171 | new_desc.append(self.name + line) 172 | if line.strip().endswith('.'): 173 | break 174 | else: 175 | first_line = False 176 | continue 177 | if not line.strip(): 178 | break 179 | if line.strip().endswith('.'): 180 | new_desc.append(line.strip()) 181 | break 182 | new_desc.append(line.strip()) 183 | self.desc = " ".join(new_desc) 184 | 185 | # If the function has no children, we generate children using codex 186 | def expand(self, codex): 187 | if self.children: 188 | return 189 | body_str = CONSTS["decompose_helper"].format( 190 | parsel_str=self.to_parsel_str(include_asserts=False).rstrip()) 191 | completions = codex.generate(body_str, num_completions=3, temperature=0.01, indented=False, max_tokens=250, stop=["\n#\n"]) 192 | for completion in completions: 193 | gen_fns = [gen_fn.split( 194 | CONSTS["decompose_example_prefix"], 1)[-1] for gen_fn in completion] 195 | if len(gen_fns) > 3: 196 | continue 197 | defined_fns = {self.name: self} 198 | for gen_fn in gen_fns: 199 | parse_to_fn(gen_fn, self, defined_fns) 200 | return defined_fns 201 | 202 | # Add a child to this function 203 | def add_child(self, child): 204 | self.children.append(child) 205 | child.parents.append(self) 206 | 207 | # Set the implementation of this function as fixed to a particular string 208 | def fix_implementation(self, impl_str): 209 | self.fixed_implementation = impl_str 210 | 211 | # We need to be careful about 
infinite recursion here 212 | # Get all functions that are descendants of this function 213 | def get_descendants(self, visited=None): 214 | if visited is None: 215 | visited = {self.name: self} 216 | for child in self.children: 217 | if child.name not in visited: 218 | visited[child.name] = child 219 | child.get_descendants(visited) 220 | return visited 221 | 222 | # Get all functions that are ancestors of this function 223 | def get_ancestors(self, visited=None): 224 | if visited is None: 225 | visited = {self.name: self} 226 | for parent in self.parents: 227 | if parent.name not in visited: 228 | visited[parent.name] = parent 229 | parent.get_ancestors(visited) 230 | return visited 231 | 232 | # Check if this function uses another function (even indirectly) 233 | def uses(self, fn_name): 234 | return fn_name in self.get_descendants() 235 | 236 | # Creates a copy of this function 237 | def copy(self): 238 | new_function = Function( 239 | name=self.name, 240 | args=self.args.copy(), 241 | ret=self.ret.copy(), 242 | desc=self.desc, 243 | parent=None, 244 | asserts=self.asserts.copy(), 245 | ) 246 | new_function.children = self.children.copy() 247 | new_function.implementations = self.implementations.copy() 248 | new_function.fixed_implementation = self.fixed_implementation 249 | new_function.parents = self.parents.copy() 250 | return new_function 251 | 252 | # Identify functions that are defined by multiple children 253 | # and move the definition to self 254 | def rearrange(self, already_defined=None): 255 | if already_defined is None: 256 | already_defined = set() 257 | else: 258 | already_defined = already_defined.copy() 259 | 260 | # Note that we have defined self and its children 261 | already_defined.add(self.name) 262 | for child in self.children: 263 | already_defined.add(child.name) 264 | 265 | descendant_fns = self.get_descendants() 266 | # Find all functions that are defined in multiple children 267 | n_uses = {} 268 | for fn in descendant_fns.values(): 269 | if fn.name not in already_defined: 270 | for child in self.children: 271 | if child.uses(fn.name): 272 | n_uses[fn.name] = n_uses.get(fn.name, 0) + 1 273 | # Move the definition of each function to self 274 | # if it is defined in multiple children 275 | for fn_name, n in n_uses.items(): 276 | if n > 1: 277 | fn = descendant_fns[fn_name] 278 | self.children.insert(0, fn) 279 | fn.parents.append(self) 280 | already_defined.add(fn.name) 281 | for child in self.children: 282 | # If the child has descendants that are not yet defined, 283 | # recurse on the child 284 | child_descendants_set = set(child.get_descendants().keys()) 285 | if not child_descendants_set.issubset(already_defined): 286 | child.rearrange(already_defined) 287 | 288 | # Convert the function and its children to a Parsel string 289 | def to_parsel_str(self, already_defined=None, override_names=True, include_children=True, include_asserts=True): 290 | if already_defined is None: 291 | already_defined = {self.name} 292 | else: 293 | already_defined = already_defined.copy() 294 | outputs = self.ret 295 | desc = self.desc 296 | cur_str = f"{self.name}({', '.join(self.args)})" 297 | if outputs: 298 | if override_names: 299 | if len(outputs) == 1: 300 | outputs = "res" 301 | else: 302 | outputs = ", ".join(["res" + str(i) for i, _ in enumerate(outputs)]) 303 | else: 304 | outputs = ", ".join(outputs) 305 | output_str = f' -> {outputs}' 306 | else: 307 | output_str = "" 308 | cur_str += output_str 309 | if desc: 310 | cur_str += ": " + desc 311 | cur_str += '\n' 312 
| if include_asserts and self.asserts: 313 | cur_str += '\n'.join(self.asserts) + '\n' 314 | if not include_children: 315 | return cur_str 316 | 317 | to_def = [] 318 | to_ref = [] 319 | for child in self.children: 320 | if child.name not in already_defined: 321 | already_defined.add(child.name) 322 | to_def.append(child) 323 | else: 324 | to_ref.append(child) 325 | 326 | for child in to_def: 327 | cur_str += indent_str(child.to_parsel_str(already_defined)) 328 | for child in to_ref: 329 | if child.name != self.name: 330 | cur_str += indent_str(child.name) 331 | return cur_str 332 | 333 | def __repr__(self): 334 | parent_names = [parent.name for parent in self.parents] 335 | child_name = [child.name for child in self.children] 336 | ret_str = f" -> {self.ret}" if self.ret else "" 337 | return f"Function({self.name}({self.args}){ret_str}); parents: {parent_names}; children: {child_name})" 338 | 339 | def indent_str(s, n=2): 340 | indented_str = "" 341 | for line in s.splitlines(): 342 | indented_str += ' ' * n + line + '\n' 343 | return indented_str 344 | 345 | def get_function_from_examples(missing_fn_name, examples, parent, codex, include_rets=False): 346 | examples_str = "\n".join(CONSTS['example_helper'].format( 347 | example=example) for example in examples) 348 | generation_str = CONSTS['missing_gen_helper'].format( 349 | parent_name=parent.name, 350 | examples_str=examples_str, 351 | missing_fn_name=missing_fn_name) 352 | implementations = codex.generate( 353 | codex_in=generation_str, 354 | num_completions=CONSTS['num_completions'], 355 | max_tokens=250, 356 | temperature=0.2, 357 | indented_after_first_line=True, 358 | indented=False) 359 | sig_lines = [impl[0] for impl in implementations] 360 | implementations = [impl[1:] for impl in implementations] 361 | def strip_list(l): 362 | return [s.strip() for s in l] 363 | args = [strip_list(sig_line.split(")")[0].split(",")) for sig_line in sig_lines] 364 | rets = [] 365 | def get_ret(impl): 366 | for line in impl: 367 | if CONSTS['fn_end'] in line: 368 | ret = line.split(CONSTS['fn_end'])[1].strip().split(",") 369 | ret = strip_list(ret) 370 | return ret 371 | if include_rets: 372 | rets = [get_ret(impl) for impl in implementations] 373 | else: 374 | rets = [[] for _ in implementations] 375 | missing_fns = [ 376 | Function(missing_fn_name, arg, ret, "", parent, []) 377 | for arg, ret in zip(args, rets) 378 | ] 379 | 380 | for missing_fn, impl in zip(missing_fns, implementations): 381 | missing_fn.implementations = [impl] 382 | impl_str = missing_fn.get_implementation_strs()[0] 383 | missing_fn.fix_implementation(impl_str) 384 | return missing_fns 385 | 386 | def find_str(line, target): 387 | # Find the first : not in parentheses 388 | paren_count = 0 389 | bracket_count = 0 390 | curly_count = 0 391 | in_string = None 392 | for i, c in enumerate(line): 393 | if c == "(": 394 | paren_count += 1 395 | elif c == ")": 396 | paren_count -= 1 397 | elif c == "[": 398 | bracket_count += 1 399 | elif c == "]": 400 | bracket_count -= 1 401 | elif c == "{": 402 | curly_count += 1 403 | elif c == "}": 404 | curly_count -= 1 405 | elif c == "\"" or c == "'": 406 | if in_string == c: 407 | in_string = None 408 | else: 409 | in_string = c 410 | elif c == target and paren_count == 0 and bracket_count == 0 and curly_count == 0 and in_string is None: 411 | return i 412 | return -1 413 | 414 | def parse_line(line): 415 | # Parse a function definition 416 | colon_idx = find_str(line, ":") 417 | if colon_idx == -1: 418 | return line, None, None, None 419 
    fn_sig, desc = line[:colon_idx], line[colon_idx + 1:]
    desc = desc.strip()
    if len(fn_sig.split("(", 1)) == 1:
        raise ValueError(f"Invalid function signature: {fn_sig}")
    fn_name, fn_args = fn_sig.split("(", 1)
    if "->" in fn_args:
        fn_args, fn_ret = fn_args.split("->", 1)
        fn_args, fn_ret = fn_args.strip(), fn_ret.strip()
        fn_ret = fn_ret.split(",")
        fn_ret = [ret.strip() for ret in fn_ret]
    else:
        fn_ret = []
    assert fn_args.endswith(")")
    fn_args = fn_args[:-1].split(",")

    fn_args = [arg.strip() for arg in fn_args]
    return fn_name, fn_args, fn_ret, desc

def parse_to_fn(line, parent, defined_fns, scope=None, loose_ref=False, loose_def=False):
    if scope is None:
        scope = defined_fns
    fn_name, fn_args, fn_ret, desc = parse_line(line.strip())
    # print(f"Parsing {fn_name}({fn_args}) -> {fn_ret}")
    # print("Line:", line)
    def failure_output(msg, loose):
        # Print a warning in loose mode; otherwise raise
        if loose:
            print(msg)
        else:
            raise RuntimeError(msg)
    if fn_name in scope:
        if fn_args is not None:
            failure_output(f"Warning: Function {fn_name} already defined", loose_ref)
        new_fn = defined_fns[fn_name]
        if parent is not None:
            if parent not in new_fn.parents:
                new_fn.parents.append(parent)
            if new_fn not in parent.children:
                parent.children.append(new_fn)
        return new_fn
    else:
        if fn_args is not None:
            new_fn = Function(
                name=fn_name,
                args=fn_args,
                ret=fn_ret,
                desc=desc,
                parent=parent,
                asserts=[],
            )
            if parent is not None:
                parent.children.append(new_fn)
            defined_fns[fn_name] = new_fn
            new_fn.prefix = parent.prefix
            return new_fn
        else:
            failure_output(f"Function {fn_name} not defined; skipped", loose_def)

--------------------------------------------------------------------------------
/graph.py:
--------------------------------------------------------------------------------
from fn import Function, parse_to_fn
from consts import CONSTS

def initial_node(line, cur_node):
    new_node = {
        'name': line.split("(")[0].strip(),
        'line': line,
        'children': [],
        'parent': cur_node,
        'asserts': [],
    }
    if cur_node is not None:
        cur_node['children'].append(new_node)
    return new_node

def fill_graph(node, node_equiv, defined_fns=None, scope=None):
    if defined_fns is None:
        defined_fns = {}
    if scope is None:
        scope = set()
    else:
        scope = scope.copy()
    scope.add(node['name'])
    child_equivs = []
    for child in node['children']:
        asserts = child['asserts']
        child_node = parse_to_fn(child['line'], node_equiv, defined_fns, scope)
        defined_fns[child_node.name].asserts += asserts
        scope.add(child_node.name)
        child_equivs.append(child_node)
    for child, child_equiv in zip(node['children'], child_equivs):
        fill_graph(child, child_equiv, defined_fns, scope)
    return defined_fns
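`parse_line` above accepts Parsel headers of the form `name(args) -> rets: description`. A hedged sketch of its behavior (the header strings are invented, but follow that grammar):

```python
name, args, rets, desc = parse_line("minimum_area(whs) -> res: Returns the minimum area")
# name == 'minimum_area', args == ['whs'], rets == ['res'],
# desc == 'Returns the minimum area'

# A line without a top-level ':' is treated as a bare reference to an
# already-defined function:
parse_line("collatz_recursion")  # ('collatz_recursion', None, None, None)
```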
# Inspired by https://stackoverflow.com/questions/45964731/how-to-parse-hierarchy-based-on-indents-with-python
def get_graph(program):
    root = initial_node("root", None)
    cur_node = root
    indentation = [-1]
    depth = -1
    buffer_line = ""
    for cur_line in program:
        # Handle line continuations; endswith is safe on empty lines
        if cur_line.endswith("\\"):
            buffer_line += cur_line[:-1] + "\n"
            continue
        line = buffer_line + cur_line
        buffer_line = ""

        indent = len(line) - len(line.lstrip())
        if not line.strip():
            continue
        if indent > indentation[-1]:
            new_node = initial_node(line, cur_node)
            cur_node = new_node
            depth += 1
            indentation.append(indent)
            continue

        if indent < indentation[-1]:
            while indent < indentation[-1]:
                depth -= 1
                indentation.pop()
                cur_node = cur_node['parent']

        if indent != indentation[-1]:
            raise RuntimeError("Bad formatting")

        if indent == indentation[-1]:
            if CONSTS['assert_check'](line):
                cur_node['asserts'].append(line.strip())
            else:
                new_node = initial_node(line, cur_node['parent'])
                cur_node = new_node

    temp_root = Function(name="root", args=[], desc="Main function", ret=[], parent=None, asserts=[])
    defined_fns = {'root': temp_root}
    fill_graph(root, temp_root, defined_fns=defined_fns, scope={'root'})
    del defined_fns['root']
    assert len(temp_root.children) == 1, "There should only be one root function"
    root_fn_graph = temp_root.children[0]
    return root_fn_graph, defined_fns
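To make the traversal concrete, here is a hedged sketch of `get_graph` on a tiny Parsel program in `py` mode, where `assert_check` recognizes `input -> output` lines placed at the same indent as the function they constrain:

```python
program = [
    "collatz_recursion(num, cur_list=None): Calls base_case if 1, otherwise recursion_rule",
    "1 -> [1]",
    "  base_case(num, cur_list): Returns the list with the number appended to it",
    "  recursion_rule(num, cur_list): Add num to list, collatz with 3n + 1 if odd or n / 2 if even",
    "    collatz_recursion",
]
root_fn, defined_fns = get_graph(program)
# root_fn.name == 'collatz_recursion' and root_fn.asserts == ['1 -> [1]'];
# defined_fns maps all three function names to Function objects, and the bare
# 'collatz_recursion' reference wires up the recursive call.
```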
continue 137 | if list(scc_2)[0] in reachable[list(scc_1)[0]]: 138 | scc_1_edges += [scc_2_idx] 139 | scc_edges.append(scc_1_edges) 140 | return sccs, scc_edges 141 | 142 | def get_root(defined_fns): 143 | # Identify a function which is the parent of all other functions 144 | # We allow for cycles, so we can't use just parents 145 | shared_ancestors = None 146 | for fn in defined_fns.values(): 147 | if shared_ancestors is None: 148 | shared_ancestors = set(fn.get_ancestors()) | {fn.name} 149 | else: 150 | shared_ancestors.intersection_update(fn.get_ancestors()) 151 | shared_defined = shared_ancestors & set(defined_fns.keys()) 152 | return shared_defined.pop() -------------------------------------------------------------------------------- /parsel.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "nbformat": 4, 3 | "nbformat_minor": 0, 4 | "metadata": { 5 | "colab": { 6 | "provenance": [], 7 | "authorship_tag": "ABX9TyPZho3H70kp8ggLwqpIcCES", 8 | "include_colab_link": true 9 | }, 10 | "kernelspec": { 11 | "name": "python3", 12 | "display_name": "Python 3" 13 | }, 14 | "language_info": { 15 | "name": "python" 16 | } 17 | }, 18 | "cells": [ 19 | { 20 | "cell_type": "markdown", 21 | "metadata": { 22 | "id": "view-in-github", 23 | "colab_type": "text" 24 | }, 25 | "source": [ 26 | "\"Open" 27 | ] 28 | }, 29 | { 30 | "cell_type": "markdown", 31 | "source": [ 32 | "# Parsel\n", 33 | "\n", 34 | "This notebook is meant to provide a high-level intro to using Parsel. First we're going to clone the Parsel repo and `pip install openai` so we can use Codex." 35 | ], 36 | "metadata": { 37 | "id": "KTUWfYeAcQ31" 38 | } 39 | }, 40 | { 41 | "cell_type": "code", 42 | "execution_count": null, 43 | "metadata": { 44 | "id": "b8KwW8WKsOJ-" 45 | }, 46 | "outputs": [], 47 | "source": [ 48 | "%cd /content\n", 49 | "# Get Parsel\n", 50 | "!git clone https://github.com/ezelikman/parsel\n", 51 | "%cd parsel\n", 52 | "# Get OpenAI API (for Codex/GPT3)\n", 53 | "!pip install openai -q" 54 | ] 55 | }, 56 | { 57 | "cell_type": "markdown", 58 | "source": [ 59 | "You need to authenticate to use Codex, so here's a small helper for that" 60 | ], 61 | "metadata": { 62 | "id": "n5yXQt-Aci2i" 63 | } 64 | }, 65 | { 66 | "cell_type": "code", 67 | "source": [ 68 | "import openai\n", 69 | "import os\n", 70 | "from getpass import getpass\n", 71 | "\n", 72 | "organization = getpass(\"What is your OpenAI organization? You can find it here: https://beta.openai.com/account/org-settings\")\n", 73 | "api_key = getpass(\"What is your OpenAI API key? You can create one here: https://beta.openai.com/account/api-keys\")\n", 74 | "try:\n", 75 | " openai.organization = organization\n", 76 | " openai.api_key = api_key\n", 77 | " openai.Model.list()\n", 78 | " print(\"Success! You're logged in\")\n", 79 | " os.makedirs('keys', exist_ok=True)\n", 80 | " with open('keys/codex_key.txt', 'w') as f:\n", 81 | " f.write(f\"{organization}:{api_key}\")\n", 82 | "except Exception as e:\n", 83 | " print(e)\n", 84 | " print(\"Something is wrong with the organization or key you entered!\")" 85 | ], 86 | "metadata": { 87 | "colab": { 88 | "base_uri": "https://localhost:8080/" 89 | }, 90 | "id": "65kUPqFxuWkx", 91 | "outputId": "74992b0f-fc55-47d4-bceb-ae7431e49573" 92 | }, 93 | "execution_count": null, 94 | "outputs": [ 95 | { 96 | "output_type": "stream", 97 | "name": "stdout", 98 | "text": [ 99 | "What is your OpenAI organization? 
You can find it here: https://beta.openai.com/account/org-settings··········\n", 100 | "What is your OpenAI API key? You can create one here: https://beta.openai.com/account/api-keys··········\n", 101 | "Success! You're logged in\n" 102 | ] 103 | } 104 | ] 105 | }, 106 | { 107 | "cell_type": "markdown", 108 | "source": [ 109 | "Let's implement the problem solving example from the paper!" 110 | ], 111 | "metadata": { 112 | "id": "9v5WFGnocvJu" 113 | } 114 | }, 115 | { 116 | "cell_type": "code", 117 | "source": [ 118 | "!python3 parsel.py programs/problem_solving.ss" 119 | ], 120 | "metadata": { 121 | "colab": { 122 | "base_uri": "https://localhost:8080/" 123 | }, 124 | "id": "EtZbs8WJv2EH", 125 | "outputId": "d4374be8-4443-441a-bdc7-c5053cf7e270" 126 | }, 127 | "execution_count": null, 128 | "outputs": [ 129 | { 130 | "output_type": "stream", 131 | "name": "stdout", 132 | "text": [ 133 | "Implementing SCC 0 {'select_airport_cities'}\n", 134 | "Implementing SCC 1 {'sky_city_cost'}\n", 135 | "Trying 8 completions\n", 136 | "Calling Codex!\n", 137 | "8 completions 500 tokens each\n", 138 | "Attempting to implement {'sky_city_cost'}\n", 139 | " 38% 3/8 [00:01<00:01, 2.89it/s]Killing subprocesses\n", 140 | " 88% 7/8 [00:01<00:00, 5.82it/s]\n", 141 | "Successfully implemented {'sky_city_cost'}\n", 142 | "Implementing SCC 2 {'minimum_spanning_tree'}\n", 143 | "Trying 8 completions\n", 144 | "Calling Codex!\n", 145 | "8 completions 500 tokens each\n", 146 | "Attempting to implement {'minimum_spanning_tree'}\n", 147 | " 25% 2/8 [00:01<00:03, 1.97it/s]Killing subprocesses\n", 148 | " 38% 3/8 [00:01<00:01, 2.67it/s]\n", 149 | "Successfully implemented {'minimum_spanning_tree'}\n", 150 | "Implementing SCC 3 {'final_node_connectors'}\n", 151 | "Trying 8 completions\n", 152 | "Calling Codex!\n", 153 | "8 completions 500 tokens each\n", 154 | "Attempting to implement {'final_node_connectors'}\n", 155 | " 25% 2/8 [00:01<00:03, 1.97it/s]Killing subprocesses\n", 156 | " 25% 2/8 [00:01<00:03, 1.75it/s]\n", 157 | "Successfully implemented {'final_node_connectors'}\n", 158 | "Trying 8 completions\n", 159 | "Calling Codex!\n", 160 | "8 completions 500 tokens each\n", 161 | "Attempting to implement {'select_airport_cities'}\n", 162 | " 25% 2/8 [00:01<00:03, 1.96it/s]Killing subprocesses\n", 163 | " 25% 2/8 [00:01<00:03, 1.73it/s]\n", 164 | "Successfully implemented {'select_airport_cities'}\n", 165 | "Implementing SCC 1 {'sky_city_cost'}\n", 166 | "Implementing SCC 2 {'minimum_spanning_tree'}\n", 167 | "Implementing SCC 3 {'final_node_connectors'}\n", 168 | "Writing to programs/problem_solving.py\n", 169 | "Done writing to programs/problem_solving.py\n" 170 | ] 171 | } 172 | ] 173 | }, 174 | { 175 | "cell_type": "markdown", 176 | "source": [ 177 | "And here's the lisp interpreter written in Parsel, with the asserts and testing-specific functions removed to be concise - you can see the full code in `programs/lisp.ss`.\n", 178 | "\n", 179 | "\n", 180 | "\n", 181 | "```\n", 182 | "An env is a dictionary of {'var':val} pairs, with a link to its outer environment in env['_outer'].\n", 183 | "A procedure is a lambda expression, with parms, body, and env which calls eval_exp on the body.\n", 184 | " #*#*#\n", 185 | "evaluate_program(program): Initialize a standard environment. 
Parse and evaluate a list of expressions, returning the final result.\n", 186 | " get_env(parms, args, env=None): Return a new env inside env with parms mapped to their corresponding args, and env as the new env's outer env.\n", 187 | " standard_env(includes=['math','ops','simple_math']): An environment with some Scheme standard procedures. Start with an environment and update it with standard functions.\n", 188 | " get_math(): Get a dictionary mapping math library function names to their functions.\n", 189 | " get_ops(): Get a dictionary mapping operator symbols to their functions: +, -, *, /, >, <, >=, <=, =.\n", 190 | " get_simple_math(): Get a dictionary mapping 'abs', 'min', 'max', 'not', 'round' to their functions.\n", 191 | " parse_and_update(expression, env): Parse an expression, return the result.\n", 192 | " eval_exp(x, env): Evaluate an expression in an environment and return the result. Check if x is a list, a string, or neither, and call the corresponding function.\n", 193 | " find(env, var): Find the value of var in the innermost env where var appears.\n", 194 | " string_case(x, env): Return find(env, x).\n", 195 | " find\n", 196 | " list_case(x, env): Handle the function specified by the first value of x. Handle the first value of x being quote, if, define, set!, lambda, or otherwise. Return the result.\n", 197 | " get_procedure(parms, body, env): Return a procedure which evaluates body in a new environment with parms bound to the args passed to the procedure (in the same order as parms).\n", 198 | " eval_procedure(parms, body, env, args): Gets a procedure and returns the result of evaluating proc(*args) in env. Should not be called directly.\n", 199 | " get_procedure\n", 200 | " get_env\n", 201 | " eval_exp\n", 202 | " otherwise_case(x, env): Get the procedure by evaluating the first value of x. Then, evaluate the arguments and apply the procedure to them. Return the result.\n", 203 | " eval_exp\n", 204 | " eval_exp\n", 205 | " not_list_case(x, env): Return x\n", 206 | " parse(program): Read a Scheme expression from a string.\n", 207 | " tokenize(s): Convert a string into a list of tokens, including parens.\n", 208 | " read_from_tokens(tokens): Translate tokens to their corresponding atoms, using parentheses for nesting lists.\n", 209 | " atom(token): Numbers become numbers; every other token is a string.\n", 210 | " nested_list_to_str(exp): Convert a nested list into a string with nesting represented by parentheses.\n", 211 | "```\n", 212 | "\n", 213 | "Let's compile it!" 
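,
        "\n",
        "\n",
        "(A quick syntax note, our gloss rather than anything official: each Parsel line has the shape `name(args): description`, an indented line beneath a function defines a child function it may call, a line of the form `input -> output` attaches a unit test to the function above it, and a bare function name references an already-defined function instead of redefining it. A hypothetical two-function program:)\n",
        "\n",
        "```\n",
        "double_all(nums): double every number in the list\n",
        "[1, 2] -> [2, 4]\n",
        " double(num): return twice num\n",
        " 3 -> 6\n",
        "```"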
214 | ], 215 | "metadata": { 216 | "id": "U9QXfBXXcyUo" 217 | } 218 | }, 219 | { 220 | "cell_type": "code", 221 | "source": [ 222 | "!python3 parsel.py programs/lisp.ss" 223 | ], 224 | "metadata": { 225 | "colab": { 226 | "base_uri": "https://localhost:8080/" 227 | }, 228 | "id": "gKMtXngT5bk7", 229 | "outputId": "8285c2f4-14d6-4ff7-ce87-eeaa82faabea" 230 | }, 231 | "execution_count": null, 232 | "outputs": [ 233 | { 234 | "output_type": "stream", 235 | "name": "stdout", 236 | "text": [ 237 | "Implementing SCC 0 {'evaluate_program'}\n", 238 | "Implementing SCC 1 {'get_env'}\n", 239 | "Trying 8 completions\n", 240 | "Calling Codex!\n", 241 | "8 completions 500 tokens each\n", 242 | "Attempting to implement {'get_env'}\n", 243 | " 25% 2/8 [00:01<00:03, 1.94it/s]Killing subprocesses\n", 244 | " 25% 2/8 [00:01<00:03, 1.62it/s]\n", 245 | "Successfully implemented {'get_env'}\n", 246 | "Implementing SCC 2 {'get_math', 'apply_fn_dict_key', 'get_simple_math', 'get_ops', 'standard_env'}\n", 247 | "Trying 8 completions\n", 248 | "Calling Codex!\n", 249 | "8 completions 500 tokens each\n", 250 | "Calling Codex!\n", 251 | "8 completions 500 tokens each\n", 252 | "Calling Codex!\n", 253 | "8 completions 500 tokens each\n", 254 | "Calling Codex!\n", 255 | "8 completions 500 tokens each\n", 256 | "Calling Codex!\n", 257 | "8 completions 500 tokens each\n", 258 | "Rate limit reached. Waiting before retrying...\n", 259 | "Attempting to implement {'get_math', 'apply_fn_dict_key', 'get_simple_math', 'get_ops', 'standard_env'}\n", 260 | " 3% 485/14336 [00:02<00:38, 355.89it/s]Killing subprocesses\n", 261 | " 4% 510/14336 [00:02<01:09, 198.09it/s]\n", 262 | "Successfully implemented {'get_math', 'apply_fn_dict_key', 'get_simple_math', 'get_ops', 'standard_env'}\n", 263 | "Implementing SCC 3 {'parse_and_update'}\n", 264 | "Implementing SCC 1 {'get_env'}\n", 265 | "Implementing SCC 4 {'list_case', 'otherwise_case', 'eval_procedure', 'eval_exp', 'get_procedure'}\n", 266 | "Implementing SCC 1 {'get_env'}\n", 267 | "Implementing SCC 7 {'find'}\n", 268 | "Trying 8 completions\n", 269 | "Calling Codex!\n", 270 | "8 completions 500 tokens each\n", 271 | "Attempting to implement {'find'}\n", 272 | " 25% 2/8 [00:01<00:03, 1.97it/s]Killing subprocesses\n", 273 | " 25% 2/8 [00:01<00:03, 1.75it/s]\n", 274 | "Successfully implemented {'find'}\n", 275 | "Implementing SCC 8 {'string_case'}\n", 276 | "Implementing SCC 7 {'find'}\n", 277 | "Trying 8 completions\n", 278 | "Calling Codex!\n", 279 | "8 completions 500 tokens each\n", 280 | "Attempting to implement {'string_case'}\n", 281 | "100% 2/2 [00:01<00:00, 1.97it/s]Reattempting {'string_case'}\n", 282 | "Reattempt error: 'NoneType' object is not iterable\n", 283 | "Killing subprocesses\n", 284 | "100% 2/2 [00:01<00:00, 1.77it/s]\n", 285 | "Failed implementing {'string_case'}, best attempt: 0 / 1\n", 286 | "Error No implementation found for {'string_case'}\n", 287 | "Trying 16 completions\n", 288 | "Calling Codex!\n", 289 | "8 completions 500 tokens each\n", 290 | "Attempting to implement {'string_case'}\n", 291 | "100% 2/2 [00:01<00:00, 1.97it/s]Reattempting {'string_case'}\n", 292 | "Reattempt error: 'NoneType' object is not iterable\n", 293 | "Killing subprocesses\n", 294 | "100% 2/2 [00:01<00:00, 1.86it/s]\n", 295 | "Failed implementing {'string_case'}, best attempt: 0 / 1\n", 296 | "Error No implementation found for {'string_case'}\n", 297 | "Trying 32 completions\n", 298 | "Calling Codex!\n", 299 | "8 completions 500 tokens each\n", 300 | "8 completions 500 tokens 
each\n", 301 | "Attempting to implement {'string_case'}\n", 302 | " 38% 3/8 [00:01<00:01, 2.99it/s]Killing subprocesses\n", 303 | " 38% 3/8 [00:01<00:01, 2.57it/s]\n", 304 | "Successfully implemented {'string_case'}\n", 305 | "Implementing SCC 9 {'not_list_case'}\n", 306 | "Trying 8 completions\n", 307 | "Calling Codex!\n", 308 | "8 completions 500 tokens each\n", 309 | "Attempting to implement {'not_list_case'}\n", 310 | " 33% 2/6 [00:01<00:02, 1.97it/s]Killing subprocesses\n", 311 | " 50% 3/6 [00:01<00:01, 2.68it/s]\n", 312 | "Successfully implemented {'not_list_case'}\n", 313 | "Trying 8 completions\n", 314 | "Calling Codex!\n", 315 | "8 completions 500 tokens each\n", 316 | "Rate limit reached. Waiting before retrying...\n", 317 | "Calling Codex!\n", 318 | "8 completions 500 tokens each\n", 319 | "Calling Codex!\n", 320 | "8 completions 500 tokens each\n", 321 | "Calling Codex!\n", 322 | "8 completions 500 tokens each\n", 323 | "Calling Codex!\n", 324 | "8 completions 500 tokens each\n", 325 | "Rate limit reached. Waiting before retrying...\n", 326 | "Rate limit reached. Waiting before retrying...\n", 327 | "Attempting to implement {'list_case', 'otherwise_case', 'eval_procedure', 'eval_exp', 'get_procedure'}\n", 328 | " 2% 87/4608 [00:01<00:46, 97.59it/s]Killing subprocesses\n", 329 | " 2% 96/4608 [00:02<01:36, 46.54it/s]\n", 330 | "Successfully implemented {'list_case', 'otherwise_case', 'eval_procedure', 'eval_exp', 'get_procedure'}\n", 331 | "Implementing SCC 5 {'parse'}\n", 332 | "Implementing SCC 10 {'tokenize'}\n", 333 | "Trying 8 completions\n", 334 | "Calling Codex!\n", 335 | "8 completions 500 tokens each\n", 336 | "Attempting to implement {'tokenize'}\n", 337 | " 33% 2/6 [00:01<00:02, 1.98it/s]Killing subprocesses\n", 338 | " 33% 2/6 [00:01<00:02, 1.78it/s]\n", 339 | "Successfully implemented {'tokenize'}\n", 340 | "Implementing SCC 11 {'read_from_tokens'}\n", 341 | "Implementing SCC 12 {'atom'}\n", 342 | "Trying 8 completions\n", 343 | "Calling Codex!\n", 344 | "8 completions 500 tokens each\n", 345 | "Rate limit reached. Waiting before retrying...\n", 346 | "Attempting to implement {'atom'}\n", 347 | " 29% 2/7 [00:01<00:02, 1.98it/s]Killing subprocesses\n", 348 | " 29% 2/7 [00:01<00:02, 1.77it/s]\n", 349 | "Successfully implemented {'atom'}\n", 350 | "Trying 8 completions\n", 351 | "Calling Codex!\n", 352 | "8 completions 500 tokens each\n", 353 | "Rate limit reached. Waiting before retrying...\n", 354 | "Attempting to implement {'read_from_tokens'}\n", 355 | "100% 2/2 [00:01<00:00, 1.97it/s]Reattempting {'read_from_tokens'}\n", 356 | "Reattempt error: 'NoneType' object is not iterable\n", 357 | "Killing subprocesses\n", 358 | "100% 2/2 [00:01<00:00, 1.75it/s]\n", 359 | "Failed implementing {'read_from_tokens'}, best attempt: 0 / 1\n", 360 | "Error No implementation found for {'read_from_tokens'}\n", 361 | "Trying 16 completions\n", 362 | "Calling Codex!\n", 363 | "8 completions 500 tokens each\n", 364 | "Rate limit reached. 
Waiting before retrying...\n", 365 | "Attempting to implement {'read_from_tokens'}\n", 366 | " 50% 2/4 [00:01<00:01, 1.97it/s]Killing subprocesses\n", 367 | " 50% 2/4 [00:01<00:01, 1.76it/s]\n", 368 | "Successfully implemented {'read_from_tokens'}\n", 369 | "Implementing SCC 12 {'atom'}\n", 370 | "Trying 8 completions\n", 371 | "Calling Codex!\n", 372 | "8 completions 500 tokens each\n", 373 | "Attempting to implement {'parse'}\n", 374 | "100% 2/2 [00:01<00:00, 1.96it/s]Reattempting {'parse'}\n", 375 | "Reattempt error: 'NoneType' object is not iterable\n", 376 | "Killing subprocesses\n", 377 | "100% 2/2 [00:01<00:00, 1.65it/s]\n", 378 | "Failed implementing {'parse'}, best attempt: 0 / 1\n", 379 | "Error No implementation found for {'parse'}\n", 380 | "Trying 16 completions\n", 381 | "Calling Codex!\n", 382 | "8 completions 500 tokens each\n", 383 | "Attempting to implement {'parse'}\n", 384 | " 50% 2/4 [00:01<00:01, 1.97it/s]Killing subprocesses\n", 385 | " 50% 2/4 [00:01<00:01, 1.71it/s]\n", 386 | "Successfully implemented {'parse'}\n", 387 | "Implementing SCC 6 {'nested_list_to_str'}\n", 388 | "Trying 8 completions\n", 389 | "Calling Codex!\n", 390 | "8 completions 500 tokens each\n", 391 | "Attempting to implement {'nested_list_to_str'}\n", 392 | " 25% 2/8 [00:01<00:03, 1.97it/s]Killing subprocesses\n", 393 | " 25% 2/8 [00:01<00:03, 1.79it/s]\n", 394 | "Successfully implemented {'nested_list_to_str'}\n", 395 | "Implementing SCC 7 {'find'}\n", 396 | "Implementing SCC 8 {'string_case'}\n", 397 | "Implementing SCC 9 {'not_list_case'}\n", 398 | "Implementing SCC 10 {'tokenize'}\n", 399 | "Implementing SCC 11 {'read_from_tokens'}\n", 400 | "Implementing SCC 12 {'atom'}\n", 401 | "Trying 8 completions\n", 402 | "Calling Codex!\n", 403 | "8 completions 500 tokens each\n", 404 | "Attempting to implement {'parse_and_update'}\n", 405 | " 29% 2/7 [00:01<00:02, 1.91it/s]Killing subprocesses\n", 406 | "7\n", 407 | " 29% 2/7 [00:01<00:03, 1.64it/s]\n", 408 | "Successfully implemented {'parse_and_update'}\n", 409 | "Implementing SCC 4 {'list_case', 'otherwise_case', 'eval_procedure', 'eval_exp', 'get_procedure'}\n", 410 | "Implementing SCC 5 {'parse'}\n", 411 | "Implementing SCC 6 {'nested_list_to_str'}\n", 412 | "Implementing SCC 7 {'find'}\n", 413 | "Implementing SCC 8 {'string_case'}\n", 414 | "Implementing SCC 9 {'not_list_case'}\n", 415 | "Implementing SCC 10 {'tokenize'}\n", 416 | "Implementing SCC 11 {'read_from_tokens'}\n", 417 | "Implementing SCC 12 {'atom'}\n", 418 | "Trying 8 completions\n", 419 | "Calling Codex!\n", 420 | "8 completions 500 tokens each\n", 421 | "Rate limit reached. Waiting before retrying...\n", 422 | "Rate limit reached. 
Waiting before retrying...\n", 423 | "Attempting to implement {'evaluate_program'}\n", 424 | " 25% 2/8 [00:01<00:03, 1.81it/s]Killing subprocesses\n", 425 | " 25% 2/8 [00:01<00:03, 1.51it/s]\n", 426 | "Successfully implemented {'evaluate_program'}\n", 427 | "Implementing SCC 1 {'get_env'}\n", 428 | "Implementing SCC 2 {'get_math', 'apply_fn_dict_key', 'get_simple_math', 'get_ops', 'standard_env'}\n", 429 | "Implementing SCC 3 {'parse_and_update'}\n", 430 | "Implementing SCC 4 {'list_case', 'otherwise_case', 'eval_procedure', 'eval_exp', 'get_procedure'}\n", 431 | "Implementing SCC 5 {'parse'}\n", 432 | "Implementing SCC 6 {'nested_list_to_str'}\n", 433 | "Implementing SCC 7 {'find'}\n", 434 | "Implementing SCC 8 {'string_case'}\n", 435 | "Implementing SCC 9 {'not_list_case'}\n", 436 | "Implementing SCC 10 {'tokenize'}\n", 437 | "Implementing SCC 11 {'read_from_tokens'}\n", 438 | "Implementing SCC 12 {'atom'}\n", 439 | "Writing to programs/lisp.py\n", 440 | "Done writing to programs/lisp.py\n" 441 | ] 442 | } 443 | ] 444 | }, 445 | { 446 | "cell_type": "markdown", 447 | "source": [ 448 | "What if we want to apply the generated lisp interpreter using Python? Here's an example - in the above Parsel code, we named our top level lisp interpreter function `evaluate_program`, which takes a list of lisp commands:" 449 | ], 450 | "metadata": { 451 | "id": "DST8UJB1c3SS" 452 | } 453 | }, 454 | { 455 | "cell_type": "code", 456 | "source": [ 457 | "from programs.lisp import evaluate_program\n", 458 | "\n", 459 | "# Print 3^2\n", 460 | "three_squared = evaluate_program(\n", 461 | " [\n", 462 | " '(define square (lambda (r) (* r r)))',\n", 463 | " '(square 3)'\n", 464 | " ]\n", 465 | ")\n", 466 | "print(f\"3 ** 2 = {three_squared}\")\n", 467 | "\n", 468 | "# Print 10!\n", 469 | "fact_ten = evaluate_program(\n", 470 | " [\n", 471 | " '(define fact (lambda (n) (if (<= n 1) 1 (* n (fact (- n 1))))))',\n", 472 | " '(fact 10)'\n", 473 | " ]\n", 474 | ")\n", 475 | "print(f\"10! = {fact_ten}\")" 476 | ], 477 | "metadata": { 478 | "colab": { 479 | "base_uri": "https://localhost:8080/" 480 | }, 481 | "id": "tUAv4zrCb4it", 482 | "outputId": "269f73a3-273d-4f4d-90b8-311193565e29" 483 | }, 484 | "execution_count": 8, 485 | "outputs": [ 486 | { 487 | "output_type": "stream", 488 | "name": "stdout", 489 | "text": [ 490 | "3 ** 2 = 9\n", 491 | "10! = 3628800\n" 492 | ] 493 | } 494 | ] 495 | }, 496 | { 497 | "cell_type": "markdown", 498 | "source": [ 499 | "And that's most of what there is to it! If you want to call the Parsel compiler directly on a string, you can get the graph by using `get_graph` in `parsel.py` and then compiling the graph by applying `parsel_graph`." 
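,
        " Note that `get_graph` expects the program as a list of lines (e.g. from `splitlines()`), which is how the next cell prepares `parsel_code`."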
500 | ], 501 | "metadata": { 502 | "id": "B0jLur5RgNpV" 503 | } 504 | }, 505 | { 506 | "cell_type": "code", 507 | "source": [ 508 | "from parsel import get_graph, parsel_graph\n", 509 | "from codex import CodeGen\n", 510 | "\n", 511 | "parsel_code = (\n", 512 | "\"\"\"\n", 513 | "collatz_recursion(num, cur_list=list()): Calls base_case if 1, otherwise recursion_rule\n", 514 | "19 -> [19, 58, 29, 88, 44, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1]\n", 515 | " base_case(num, cur_list): Returns the list with the number appended to it\n", 516 | " 1, [1, 2, 3] -> [1, 2, 3, 1]\n", 517 | " recursion_rule(num, cur_list): Add num to list, collatz with 3n + 1 if odd or n / 2 if even\n", 518 | " 2, [1, 2, 3] -> [1, 2, 3, 2, 1]\n", 519 | " collatz_recursion\n", 520 | "\"\"\"\n", 521 | ").strip().splitlines()\n", 522 | "\n", 523 | "codegen = CodeGen(cache='cache.json', key='keys/codex_key.txt')\n", 524 | "_, defined_fns = get_graph(parsel_code)\n", 525 | "compiled_fns = parsel_graph(defined_fns, codegen)" 526 | ], 527 | "metadata": { 528 | "colab": { 529 | "base_uri": "https://localhost:8080/" 530 | }, 531 | "id": "gEbFGeV6gwNQ", 532 | "outputId": "c4a04ad2-f9b2-4e83-8654-922da095afed" 533 | }, 534 | "execution_count": 14, 535 | "outputs": [ 536 | { 537 | "output_type": "stream", 538 | "name": "stdout", 539 | "text": [ 540 | "Implementing SCC 0 {'recursion_rule', 'collatz_recursion'}\n", 541 | "Implementing SCC 1 {'base_case'}\n", 542 | "Trying 8 completions\n", 543 | "Attempting to implement {'base_case'}\n" 544 | ] 545 | }, 546 | { 547 | "output_type": "stream", 548 | "name": "stderr", 549 | "text": [ 550 | " 29%|██▊ | 2/7 [00:01<00:03, 1.54it/s]\n" 551 | ] 552 | }, 553 | { 554 | "output_type": "stream", 555 | "name": "stdout", 556 | "text": [ 557 | "Killing subprocesses\n", 558 | "Successfully implemented {'base_case'}\n", 559 | "Trying 8 completions\n", 560 | "Attempting to implement {'recursion_rule', 'collatz_recursion'}\n" 561 | ] 562 | }, 563 | { 564 | "output_type": "stream", 565 | "name": "stderr", 566 | "text": [ 567 | " 10%|▉ | 2/21 [00:01<00:10, 1.81it/s]" 568 | ] 569 | }, 570 | { 571 | "output_type": "stream", 572 | "name": "stdout", 573 | "text": [ 574 | "Killing subprocesses\n", 575 | "Successfully implemented {'recursion_rule', 'collatz_recursion'}\n", 576 | "Implementing SCC 1 {'base_case'}\n" 577 | ] 578 | }, 579 | { 580 | "output_type": "stream", 581 | "name": "stderr", 582 | "text": [ 583 | "\n" 584 | ] 585 | } 586 | ] 587 | }, 588 | { 589 | "cell_type": "markdown", 590 | "source": [ 591 | "And if we want to convert the function list to code, you can use `fns_to_str`" 592 | ], 593 | "metadata": { 594 | "id": "PE3qMHVliw5S" 595 | } 596 | }, 597 | { 598 | "cell_type": "code", 599 | "source": [ 600 | "from parsel import fns_to_str\n", 601 | "from graph import get_root\n", 602 | "\n", 603 | "# in this case, we know the root is 'collatz_recursion', but more generally:\n", 604 | "root = get_root(defined_fns)\n", 605 | "\n", 606 | "print(\"# CODE:\")\n", 607 | "print(fns_to_str(defined_fns[root], set()))\n", 608 | "\n", 609 | "print(\"# ASSERTS:\")\n", 610 | "print(\"\\n\".join(fn.get_assert_str() for fn in defined_fns.values()))" 611 | ], 612 | "metadata": { 613 | "colab": { 614 | "base_uri": "https://localhost:8080/" 615 | }, 616 | "id": "SaOJm6vhiwbv", 617 | "outputId": "f12de4e0-a7fc-495f-887d-b99b174e0c38" 618 | }, 619 | "execution_count": 17, 620 | "outputs": [ 621 | { 622 | "output_type": "stream", 623 | "name": "stdout", 624 | "text": [ 625 | "# CODE:\n", 626 | "# 
Returns the list with the number appended to it\n", 627 | "def base_case(num, cur_list):\n", 628 | "\tcur_list.append(num)\n", 629 | "\treturn cur_list\n", 630 | "\n", 631 | "# Calls base_case if 1, otherwise recursion_rule\n", 632 | "def collatz_recursion(num, cur_list=list()):\n", 633 | " \"\"\"\n", 634 | " This function recursively calculates the collatz sequence\n", 635 | " \"\"\"\n", 636 | " # Base case\n", 637 | " if (num == 1):\n", 638 | " return base_case(num, cur_list)\n", 639 | " # Recursive case\n", 640 | " else:\n", 641 | " return recursion_rule(num, cur_list)\n", 642 | "\n", 643 | "# Add num to list, collatz with 3n + 1 if odd or n / 2 if even\n", 644 | "def recursion_rule(num, cur_list):\n", 645 | " cur_list.append(num)\n", 646 | " if num % 2 == 0:\n", 647 | " return collatz_recursion(int(num / 2), cur_list)\n", 648 | " else:\n", 649 | " return collatz_recursion(3 * num + 1, cur_list)\n", 650 | "\n", 651 | "\n", 652 | "# ASSERTS:\n", 653 | "assert repr(str(collatz_recursion(19))) == repr(str([19, 58, 29, 88, 44, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1])) or (collatz_recursion(19) == [19, 58, 29, 88, 44, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1])\n", 654 | "\n", 655 | "assert repr(str(base_case(1, [1, 2, 3]))) == repr(str([1, 2, 3, 1])) or (base_case(1, [1, 2, 3]) == [1, 2, 3, 1])\n", 656 | "\n", 657 | "assert repr(str(recursion_rule(2, [1, 2, 3]))) == repr(str([1, 2, 3, 2, 1])) or (recursion_rule(2, [1, 2, 3]) == [1, 2, 3, 2, 1])\n", 658 | "\n" 659 | ] 660 | } 661 | ] 662 | } 663 | ] 664 | } -------------------------------------------------------------------------------- /parsel.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | from codex import CodeGen 3 | from graph import get_graph, strongly_connected_components, get_root 4 | from parsify import add_fn_name_and_args 5 | import itertools 6 | from fn import get_function_from_examples, find_str 7 | import concurrent.futures 8 | from multiprocessing import get_context 9 | from tqdm import tqdm 10 | import random 11 | import time 12 | from consts import CONSTS, mode 13 | import os 14 | import ast 15 | 16 | 17 | # When a function is called but not defined, we can try to 18 | # infer its implementation from the other functions in the 19 | # same SCC. 
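# For example, if the sampled implementations call a helper such as
# `is_sorted(lst)` (a hypothetical name) that was never declared in the
# Parsel source, the lines calling it are collected below as usage examples,
# and candidate implementations of the missing function are synthesized from
# them; this is the autofill behavior behind the -a flag from the README.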
20 | def fill_in_missing_fn(missing_fn_name, scc, defined_fns, implementation_set, implementation_attempt, codegen): 21 | usage_examples = set() 22 | using_fns = set() 23 | for fn_name, implementation in zip(scc, implementation_set): 24 | for line in implementation.splitlines(): 25 | if missing_fn_name in line: 26 | usage_examples.add(line.strip()) 27 | using_fns.add(fn_name) 28 | parent = defined_fns[list(using_fns)[0]] 29 | missing_fn_attempts = get_function_from_examples(missing_fn_name, usage_examples, parent, codegen) 30 | for missing_fn_attempt in missing_fn_attempts: 31 | try: 32 | impl_str = missing_fn_attempt.get_implementation_strs()[0] 33 | new_implementation_attempt = impl_str + "\n" + implementation_attempt 34 | CONSTS['exec'](new_implementation_attempt) 35 | defined_fns[missing_fn_name] = missing_fn_attempt 36 | # Let all functions using the missing function know about their new child 37 | missing_fn_attempt.parent = [] 38 | for fn_name in using_fns: 39 | defined_fns[fn_name].add_child(missing_fn_attempt) 40 | for fn_name, implementation in zip(scc, implementation_set): 41 | fn = defined_fns[fn_name] 42 | if fn.fixed_implementation is None: 43 | fn.fix_implementation(implementation) 44 | return new_implementation_attempt 45 | except: 46 | continue 47 | 48 | 49 | # Join a set of function implementations to a string, along with 50 | # the already-implemented dependencies of the functions 51 | def to_implementation_str(implementation_set, dependencies_str): 52 | implementation_attempt = dependencies_str 53 | for fn_implementation in implementation_set: 54 | implementation_attempt += fn_implementation + "\n" 55 | return implementation_attempt 56 | 57 | 58 | # Try to fill in the implementation of a set of functions in an SCC 59 | def eval_implementation(implementation_set, dependencies_str, asserts_str, verbose=True, best_attempt=0): 60 | implementation_attempt = to_implementation_str( 61 | implementation_set, dependencies_str) 62 | asserts_passed = [] 63 | failure = None 64 | attempted = 0 65 | # Try all of the asserts one at a time 66 | # This is perhaps less efficient for Python, but it 67 | # gives us a lot more information about what went wrong 68 | # and although not currently explicitly supported, 69 | # it may be necessary for other languages 70 | # where multiple constraints can't be applied at once 71 | for assert_str in asserts_str.splitlines(): 72 | # We do still give up if we've already done better than this attempt 73 | if (len(asserts_str.splitlines()) - attempted < best_attempt) or ( 74 | CONSTS['strict_mode'] and (len(asserts_passed) != attempted)): 75 | failure = Exception("Already beat this attempt") 76 | break 77 | 78 | # Construct the code to execute 79 | exec_implementation_attempt = implementation_attempt 80 | if verbose: 81 | assert_in = CONSTS["get_assert_in"](assert_str) 82 | exec_implementation_attempt += CONSTS["output_fn"].format(output_str=assert_in) 83 | 84 | # Add the assert to the code to execute 85 | exec_implementation_attempt += assert_str 86 | 87 | if verbose: 88 | print("--------") 89 | print(CONSTS["exec_pre"]) 90 | print(exec_implementation_attempt) 91 | try: 92 | # Try to execute the code 93 | CONSTS['exec'](exec_implementation_attempt) 94 | except Exception as e: 95 | if failure is None: 96 | failure = e 97 | continue 98 | 99 | # If we get here, the assert passed 100 | asserts_passed += [assert_str] 101 | attempted += 1 102 | 103 | # If we get here, we've tried all of the asserts 104 | # If we failed, return the failure as the third return 
value 105 | if failure is not None: 106 | if isinstance(failure, NameError): 107 | return implementation_attempt + asserts_str, implementation_set, failure, asserts_passed 108 | else: 109 | return None, implementation_set, failure, asserts_passed 110 | 111 | # Otherwise, let's keep track of the asserts so that the final 112 | # Implementation has access to them 113 | implementation_attempt += asserts_str 114 | return implementation_attempt, implementation_set, None, asserts_passed 115 | 116 | 117 | # Kill any futures that haven't finished yet 118 | # I don't know why this is necessary, and it really feels like overkill 119 | # but sometimes I can't seem to kill the subprocesses otherwise 120 | def kill_remaining_futures(executor, futures): 121 | # Attempt 1 122 | try: 123 | executor.shutdown(wait=False, cancel_futures=True) 124 | except: 125 | pass 126 | 127 | # Attempt 2 128 | for future in futures: 129 | try: 130 | if not future.done(): 131 | future.result(timeout=0) 132 | except: 133 | pass 134 | 135 | # Attempt 3 136 | try: 137 | for work_item in executor._pending_work_items.values(): 138 | work_item.future.cancel() 139 | except: 140 | pass 141 | 142 | # Last resort. This kind of sucks because it means you can't run multiple compilers in parallel 143 | # Would love to find a better solution 144 | print("Killing subprocesses") 145 | os.system("pkill -f multiprocessing.spawn") 146 | 147 | 148 | # Wrap up the results of a function implementation attempt 149 | def collect_result(scc, dependencies_str, defined_fns, asserts_str, pbar, executor, futures, best_attempt): 150 | implementation_set = best_attempt[1] 151 | asserts_passed = best_attempt[2] 152 | error = best_attempt[3] 153 | print("Asserts passed:", len(asserts_passed)) 154 | for assert_passed in asserts_passed: 155 | print(" ", assert_passed) 156 | if error is not None and not generate_tests: 157 | if len(asserts_passed) > best_attempt[0]: 158 | best_attempt = ( 159 | len(asserts_passed), implementation_set, asserts_passed) 160 | raise error 161 | try: 162 | kill_remaining_futures(executor, futures) 163 | except: 164 | pass 165 | try: 166 | pbar.close() 167 | except: 168 | pass 169 | print("Successfully implemented", scc) 170 | if generate_tests: 171 | # Clear out the asserts in the SCC 172 | for fn_name in scc: 173 | fn = defined_fns[fn_name] 174 | fn.asserts = [] 175 | # Write the passed asserts to the root 176 | new_fns = {dict_key: dict_value for dict_key, dict_value in defined_fns.items() if dict_key in scc} 177 | root = get_root(new_fns) 178 | defined_fns[root].asserts = [assert_passed.replace("assert", "", 1).strip() for assert_passed in asserts_passed] 179 | 180 | if CONSTS['eval_mode']: 181 | with open(CONSTS['eval_filename'], "a+") as f: 182 | f.write(f", {len(asserts_str.splitlines())} / {len(asserts_str.splitlines())}") 183 | for fn_name, implementation in zip(scc, implementation_set): 184 | fn = defined_fns[fn_name] 185 | if fn.fixed_implementation is None: 186 | fn.fix_implementation(implementation) 187 | implementation_attempt = to_implementation_str(implementation_set, dependencies_str) + "\n" + "\n".join(asserts_passed) 188 | return implementation_attempt, best_attempt 189 | 190 | 191 | # This is a helper function to keep track of the best attempts 192 | def update_best_attempt(scc, all_attempts, implementation_set, asserts_passed, error): 193 | asserts_passed_hash = hash(tuple(sorted(asserts_passed))) 194 | all_found = {fn: [] for fn in scc} 195 | for fn in scc: 196 | for assert_passed in asserts_passed: 197 | if 
fn in assert_passed: 198 | if generate_tests: 199 | assert_passed = CONSTS['simplify_assert'](assert_passed) 200 | assert_target = assert_passed.split("==")[-1].strip() 201 | comma_idx = find_str(assert_target, ",") 202 | if comma_idx != -1: 203 | assert_target = assert_target[:comma_idx].strip() 204 | all_found[fn] += [assert_target] 205 | if generate_tests: 206 | # If we're generating tests, we need to make sure that we've found at least two different values for each function 207 | found_successful_generation = len(all_found) == len(scc) and all(len(set(found)) >= 2 for found in all_found.values()) 208 | else: 209 | # If we're not generating tests and we get here, we've found a successful implementation 210 | found_successful_generation = True 211 | min_found = min(len(found) for found in all_found.values()) 212 | if found_successful_generation: 213 | if generate_tests: 214 | score = min_found 215 | if asserts_passed_hash in all_attempts: 216 | # Inspired by the CodeT approach, we evaluate based on the product of |implementations| and |asserts| 217 | # We do this for the least-tested function in the SCC 218 | score += all_attempts[asserts_passed_hash][0] 219 | implementation_set = all_attempts[asserts_passed_hash][1] 220 | asserts_passed = all_attempts[asserts_passed_hash][2] 221 | error = all_attempts[asserts_passed_hash][3] 222 | else: 223 | score = len(asserts_passed) 224 | all_attempts[asserts_passed_hash] = ( 225 | score, 226 | implementation_set, 227 | asserts_passed, 228 | error 229 | ) 230 | 231 | 232 | # Process the result of a single implementation attempt 233 | def eval_result(scc, defined_fns, asserts_str, implementation_set_keys, all_attempts, pbar, executor, futures, result): 234 | implementation_attempt, implementation_set, error, asserts_passed = result 235 | 236 | # If we got an error, check if we have a new best attempt 237 | if error is not None: 238 | update_best_attempt(scc, all_attempts, implementation_set, asserts_passed, error) 239 | raise error 240 | 241 | kill_remaining_futures(executor, futures) 242 | pbar.close() 243 | 244 | # We succeeded, so we can return the implementation 245 | print("Successfully implemented", scc) 246 | if CONSTS['eval_mode']: 247 | # We manually edit a csv file to store the results 248 | # Right now evaluation on datasets isn't the main focus, so this is fine 249 | # There's almost certainly a more elegant way to do this 250 | with open(CONSTS['eval_filename'], "a+") as f: 251 | f.write(f", {len(asserts_str.splitlines())} / {len(asserts_str.splitlines())}") 252 | 253 | # Since we found a working solution, we can consider the implementation fixed 254 | for fn_name, implementation in zip(implementation_set_keys, implementation_set): 255 | fn = defined_fns[fn_name] 256 | if fn.fixed_implementation is None: 257 | fn.fix_implementation(implementation) 258 | return implementation_set,implementation_attempt 259 | 260 | 261 | # If we fail to implement an SCC, we need to kill all the remaining futures and clean up 262 | def handle_failure(scc, asserts_str, pbar, executor, futures, best_attempt): 263 | kill_remaining_futures(executor, futures) 264 | pbar.close() 265 | print(f"Failed implementing {scc}, best attempt: {best_attempt[0]} / {len(asserts_str.splitlines())}") 266 | # open "performance.csv" and write the score in the current line 267 | if CONSTS['eval_mode']: 268 | with open(CONSTS['eval_filename'], "a+") as f: 269 | f.write(f", {best_attempt[0]} / {len(asserts_str.splitlines())}") 270 | 271 | 272 | # Use multiprocessing to try to fill 
in the implementation of an SCC 273 | # This function could definitely use some refactoring 274 | # Maybe in the future Parsel will do that for me 275 | def multiprocess_fill(scc, dependencies_str, defined_fns, all_implementations, asserts_str, timeout, num_workers=None, min_attempts=500, max_attempts=100000, min_time=120, max_time=240, debug=False, seed=42): 276 | if num_workers is None: 277 | cpu_count = os.cpu_count() 278 | num_workers = cpu_count if cpu_count is not None else 1 279 | if 'max_attempts' in CONSTS: 280 | max_attempts = CONSTS['max_attempts'] 281 | if debug and debug != "best": 282 | num_workers = 1 283 | verbose = True 284 | else: 285 | verbose = False 286 | implementation_set_keys = all_implementations.keys() 287 | random.seed(seed) 288 | all_implementation_sets = [list(set(impls)) for impls in all_implementations.values()] 289 | # We save memory by only storing the index of the implementation in all_implementation_sets 290 | implementation_sets = list(itertools.product(*[list(range(len(impls))) for impls in all_implementation_sets])) 291 | 292 | # We can't try more than the number of implementations we have 293 | n_to_try = min(max_attempts, len(implementation_sets)) 294 | 295 | # We're assuming at least 10% of the implementations are valid 296 | max_to_try = min(max_attempts * 10, len(implementation_sets)) 297 | if CONSTS['eval_mode'] or ("shuffle_always" in CONSTS and CONSTS['shuffle_always']): 298 | # We should only shuffle if we are going to evaluate everything 299 | # Otherwise we can save some time by sampling 300 | if max_to_try == len(implementation_sets): 301 | random.shuffle(implementation_sets) 302 | else: 303 | implementation_sets = random.sample(implementation_sets, max_to_try) 304 | all_attempts = {} 305 | start_time = time.time() 306 | 307 | # We use a ProcessPoolExecutor to parallelize the work 308 | # This helps us handle variance in the runtime of each implementation 309 | # As well as broadly make better use of compute 310 | pbar = tqdm(total=n_to_try) 311 | with concurrent.futures.ProcessPoolExecutor(max_workers=num_workers, mp_context=get_context('spawn')) as executor: 312 | futures = [] 313 | submitted = 0 314 | 315 | # Loop over as many implementation sets as we can 316 | for implementation_set_indices in implementation_sets: 317 | # We need to convert the implementation set indices to the actual implementation_set 318 | implementation_set = [all_implementation_sets[i][impl_id] for i, impl_id in enumerate(implementation_set_indices)] 319 | if (not (time.time() - start_time > min_time and submitted > min_attempts)) and ((time.time() - start_time) < max_time): 320 | try: 321 | # Check for syntax errors 322 | # CONSTS['exec'](to_implementation_str(implementation_set, dependencies_str)) 323 | ast.parse(to_implementation_str(implementation_set, dependencies_str)) 324 | futures.append(executor.submit( 325 | eval_implementation, implementation_set, dependencies_str, asserts_str, verbose)) 326 | submitted += 1 327 | except: 328 | pass 329 | 330 | # Once we have a full batch of futures, avoid submitting more 331 | # Check if any of the futures have finished and succeeded 332 | if submitted % num_workers == 0 or submitted == n_to_try: 333 | for future in futures: 334 | # If we are debugging in the full debug mode, we want to stop at every implementation 335 | if debug and debug != "best": 336 | breakpoint() 337 | # Check if the future has succeeded 338 | try: 339 | result = future.result(timeout=timeout) 340 | if result is not None: 341 | 
implementation_set, implementation_attempt = eval_result(scc, defined_fns, asserts_str, implementation_set_keys, all_attempts, pbar, executor, futures, result) 342 | return implementation_attempt 343 | else: 344 | pbar.update(1) 345 | except KeyboardInterrupt: 346 | kill_remaining_futures(executor, futures) 347 | raise KeyboardInterrupt 348 | except: 349 | pbar.update(1) 350 | continue 351 | futures = [] 352 | # If we have submitted all the futures we want to submit, break 353 | if submitted == n_to_try: 354 | break 355 | 356 | # When we have exhausted all the implementation sets, we can try reattempting the best attempt 357 | # If we are debugging, even in 'best' mode, we want to stop at this point 358 | if len(all_attempts) == 0: 359 | kill_remaining_futures(executor, futures) 360 | raise Exception("No implementations found") 361 | best_attempt = max(all_attempts.values()) 362 | if debug: 363 | print(CONSTS['exec_pre']) 364 | print(best_attempt[0]) 365 | print(dependencies_str) 366 | if best_attempt[1] is not None: 367 | print("\n".join(best_attempt[1])) 368 | else: 369 | # Print the most recent attempt 370 | print("\n".join(implementation_set)) 371 | pass 372 | print(asserts_str) 373 | breakpoint() 374 | 375 | # This is pretty similar to eval_result, so could probably be refactored 376 | try: 377 | implementation_attempt, best_attempt = collect_result( 378 | scc, dependencies_str, defined_fns, asserts_str, pbar, executor, futures, best_attempt) 379 | return implementation_attempt 380 | except Exception as e: 381 | print("Error:", e) 382 | 383 | # Note that the repetitiveness of kill_remaining_futures is intentional 384 | # If we do it outside the loop, we have the unfortunate situation where 385 | # sometimes we get stuck in the executor context, and the futures never get cancelled. 
386 | # I have no idea why this happens, but this is a workaround 387 | handle_failure(scc, asserts_str, pbar, executor, futures, best_attempt) 388 | 389 | 390 | # Fill in functions which are called by the generated code but not defined 391 | def autofill(scc, dependencies_str, defined_fns, all_implementations, asserts_str, codegen, remaining_attempts=10): 392 | all_implementation_sets = [list(set(impls)) for impls in all_implementations.values()] 393 | implementation_sets = list(itertools.product(*all_implementation_sets)) 394 | random.shuffle(implementation_sets) 395 | for implementation_set in implementation_sets: 396 | if remaining_attempts > 0: 397 | implementation_attempt, implementation_set, e, _ = eval_implementation( 398 | implementation_set, dependencies_str, asserts_str) 399 | if implementation_attempt is None: 400 | continue 401 | remaining_attempts -= 1 402 | missing_fn_name = e.args[0].split("'")[1] 403 | # Find lines that call the missing function 404 | new_implementation_attempt = fill_in_missing_fn( 405 | missing_fn_name, scc, defined_fns, implementation_set, implementation_attempt, codegen) 406 | if new_implementation_attempt is not None: 407 | return new_implementation_attempt 408 | 409 | 410 | # If we're in VirtualHome mode, we keep track of different kinds 411 | # of recursion depth to avoid infinite loops 412 | if mode == 'vh': 413 | force_expand_counter = 0 414 | max_expand_counter = 2 415 | 416 | 417 | def attempt_implementations(scc, dependencies_str, defined_fns, all_implementations, asserts_str, codegen, should_fill_in_missing=False, should_expand=False, remaining_attempts=5, timeout=0.5, debug=False, seed=42, backtrack=False): 418 | if 'timeout' in CONSTS: 419 | timeout = CONSTS['timeout'] 420 | print("Attempting to implement", scc) 421 | if mode == 'vh': 422 | # One alternative would be to make this all into a class, but for now we don't use this except in VH mode 423 | global force_expand_counter, max_expand_counter 424 | else: 425 | force_expand_counter = 1 426 | max_expand_counter = 1 427 | if force_expand_counter: 428 | implementation_attempt = multiprocess_fill( 429 | scc, dependencies_str, defined_fns, all_implementations, asserts_str, timeout, debug=debug, seed=seed) 430 | if implementation_attempt is not None: 431 | return implementation_attempt 432 | else: 433 | force_expand_counter -= 1 434 | 435 | 436 | if should_fill_in_missing: 437 | implementation_attempt = autofill( 438 | scc, dependencies_str, defined_fns, all_implementations, asserts_str, codegen, remaining_attempts) 439 | if implementation_attempt is not None: 440 | print("Successfully implemented", scc, "with autofill") 441 | return implementation_attempt 442 | 443 | if should_expand and max_expand_counter > 0: 444 | max_expand_counter -= 1 445 | print("Attempting to expand", scc) 446 | new_scc = set() 447 | # Copy the old implementations 448 | new_implementations = all_implementations.copy() 449 | for fn_name in scc: 450 | fn = defined_fns[fn_name] 451 | if fn.fixed_implementation is None: 452 | new_fn_defs = fn.expand(codegen) 453 | if new_fn_defs is None: 454 | continue 455 | for new_fn_name, new_fn in new_fn_defs.items(): 456 | new_fn.implement(codegen) 457 | new_implementations[new_fn_name] = new_fn.get_implementation_strs() 458 | defined_fns.update(new_fn_defs) 459 | new_scc.update(new_fn_defs.keys()) 460 | print("Expanded", scc, "to", new_scc) 461 | 462 | return attempt_implementations( 463 | new_scc, dependencies_str, defined_fns, new_implementations, asserts_str, codegen, 
should_fill_in_missing=False, should_expand=False, remaining_attempts=remaining_attempts, timeout=timeout, debug=debug, seed=seed, backtrack=backtrack)
464 |     raise RuntimeError(f"No implementation found for {scc}")
465 | 
466 | 
467 | # Evaluate all the combinations of possible
468 | # implementations of the functions in the SCC
469 | def eval_scc(scc, dependencies_str, defined_fns, codegen, allow_autofill=False, should_expand=False, debug=False, seed=42, backtrack=False):
470 |     all_implementations = {}
471 |     asserts_str = ""
472 |     for fn_name in scc:
473 |         fn = defined_fns[fn_name]
474 |         all_implementations[fn_name] = fn.get_implementation_strs()
475 |         asserts_str += fn.get_assert_str()
476 |     return attempt_implementations(
477 |         scc, dependencies_str, defined_fns, all_implementations, asserts_str, codegen, should_fill_in_missing=allow_autofill, should_expand=should_expand, debug=debug, seed=seed, backtrack=backtrack)
478 | 
479 | 
480 | # Clear the implementations of all the functions in the SCC
481 | def clear_scc(scc_idx, sccs, implemented_sccs, scc_edges, defined_fns, codegen, allow_autofill=False, should_expand=False, debug=False):
482 |     for edge in scc_edges[scc_idx]:
483 |         clear_scc(edge, sccs, implemented_sccs, scc_edges, defined_fns, codegen, allow_autofill, should_expand, debug)
484 |     for fn_name in sccs[scc_idx]:
485 |         fn = defined_fns[fn_name]
486 |         fn.fixed_implementation = None
487 | 
488 | 
489 | # Implement the SCC and return the string
490 | def implement_scc(scc_idx, sccs, implemented_sccs, scc_edges, defined_fns, codegen, allow_autofill=False, should_expand=False, debug=False, sample_only=False, seed=42, backtrack=False):
491 |     print("Implementing SCC", scc_idx, sccs[scc_idx])
492 |     if scc_idx in implemented_sccs:
493 |         return implemented_sccs[scc_idx]
494 |     dependencies_str = ""
495 |     for edge in scc_edges[scc_idx]:
496 |         dependencies_str += implement_scc(edge, sccs, implemented_sccs, scc_edges, defined_fns, codegen, allow_autofill, should_expand, debug)
497 | 
498 |     num_completions = CONSTS["min_completions"] if "min_completions" in CONSTS else CONSTS["num_completions"]
499 |     error = None
500 |     # We exponentially increase the number of completions until we reach the max, "num_completions"
501 |     while num_completions <= CONSTS["num_completions"]:
502 |         print(f"Trying {num_completions} completions")
503 |         try:
504 |             for fn_name in sccs[scc_idx]:
505 |                 fn = defined_fns[fn_name]
506 |                 fn.implement(codegen, num_completions=num_completions)
507 |                 if generate_tests:
508 |                     fn.generate_tests(codegen, num_completions=num_completions * 2)
509 | 
510 |             # We support a "sample only" mode, where we don't actually
511 |             # implement the SCC, but just try to run inference.
512 |             # This lets us parallelize inference and implementation.
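            # (Our reading, not something enforced here: fn.implement() above
            # still requests completions, which CodeGen caches, so a later run
            # without sample_only can reuse the cached samples when searching
            # for a passing combination.)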
513 | if not sample_only: 514 | new_str = dependencies_str + eval_scc( 515 | sccs[scc_idx], dependencies_str, defined_fns, codegen, allow_autofill, should_expand, debug, seed=seed, backtrack=False) 516 | else: 517 | new_str = dependencies_str 518 | implemented_sccs[scc_idx] = new_str 519 | return new_str 520 | except KeyboardInterrupt: 521 | raise KeyboardInterrupt 522 | except Exception as e: 523 | error = e 524 | print("Error", e) 525 | num_completions *= 2 526 | if backtrack: 527 | # Backtracking allows us to try new implementations 528 | # of the dependencies if we fail to implement the SCC 529 | print("Backtracking due to error", error) 530 | clear_scc(scc_idx, sccs, implemented_sccs, scc_edges, defined_fns, codegen, allow_autofill, should_expand, debug) 531 | for implemented_scc in list(implemented_sccs.keys()): 532 | del implemented_sccs[implemented_scc] 533 | new_str = implement_scc( 534 | scc_idx, sccs, implemented_sccs, scc_edges, defined_fns, codegen, allow_autofill, should_expand, debug, seed=seed + 1, backtrack=True) 535 | implemented_sccs[scc_idx] = new_str 536 | return new_str 537 | raise error 538 | 539 | 540 | # Convert a function to its string representation 541 | # Including all its children 542 | # Note that this assumes that functions need to be defined 543 | # Before they are used, which is true for some languages 544 | # But not all. To my knowledge, no standard language 545 | # Requires the reverse, that functions must be defined 546 | # After they are used. 547 | def fns_to_str(fn, written): 548 | if fn.name in written: 549 | return "" 550 | written.add(fn.name) 551 | total_str = "" 552 | for child in fn.children: 553 | total_str += fns_to_str(child, written) 554 | return total_str + CONSTS['full_fn_str'].format( 555 | desc=fn.desc, fn_impl=fn.fixed_implementation) 556 | 557 | 558 | # Figure out which function is the root of the graph 559 | # And then write a file with all the functions, 560 | # Generated from the graph 561 | def write_to_file(filename, defined_fns): 562 | fn_defs = "" 563 | root = get_root(defined_fns) 564 | fn_defs = fns_to_str(defined_fns[root], set()) 565 | asserts = "\n".join(fn.get_assert_str() for fn in defined_fns.values()) 566 | if generate_tests: 567 | # Remove duplicate asserts but keep the order 568 | asserts_dict = {} 569 | for assert_fn in asserts.split("\n"): 570 | if assert_fn.strip() == "": 571 | continue 572 | asserts_dict[assert_fn] = True 573 | asserts = "\n".join(list(asserts_dict.keys())) 574 | assert CONSTS['exist_asserts'](asserts) 575 | exec_pre = CONSTS['exec_pre'] 576 | contents = f"{exec_pre}{fn_defs}\n{asserts}" 577 | with open(filename, "w") as f: 578 | print("Writing to " + str(filename)) 579 | f.write(contents) 580 | print("Done writing to " + str(filename)) 581 | 582 | 583 | # The key function of the program, which takes a function graph 584 | # Decomposes them to their strongly connected components 585 | # And then implements each SCC in turn 586 | def parsel_graph(defined_fns, codegen, allow_autofill=False, should_expand=False, debug=False, sample_only=False): 587 | sccs, scc_edges = strongly_connected_components(defined_fns)#, consider_asserts=not generate_tests) 588 | implemented_sccs = {} 589 | for scc_idx, _ in enumerate(sccs): 590 | implement_scc(scc_idx, sccs, implemented_sccs, scc_edges, defined_fns, codegen, allow_autofill, should_expand, debug, sample_only) 591 | return defined_fns 592 | 593 | 594 | # Used to parse a Parsel file to a target language 595 | def parsel(codegen, source_file, target_file=None, 
allow_autofill=False, should_expand=False, debug=False, add_name_and_args=False):
596 |     assert source_file.split(".")[-1] == 'ss'
597 |     if target_file is None:
598 |         target_file = source_file.split(".")[0] + CONSTS['extension']
599 |     # Load the program to be parsed
600 |     with open(source_file, "r") as f:
601 |         program = f.readlines()
602 | 
603 |     # Extract out the header, if it exists
604 |     if "#*#*#\n" in program:
605 |         header = program[:program.index("#*#*#\n")]
606 |         program = program[program.index("#*#*#\n") + 1:]
607 |     else:
608 |         header = []
609 | 
610 |     if add_name_and_args:
611 |         if "\\" in program[0]:
612 |             print("Warning: multiline function descriptions are not fully supported with add_name_and_args")
613 |         program = add_fn_name_and_args(program, codegen)
614 | 
615 |     # Parse the program into a graph of functions
616 |     # And add the header to each function
617 |     _, defined_fns = get_graph(program)
618 |     for fn in defined_fns.values():
619 |         fn.prefix = "\n".join(header)
620 | 
621 |     # Compile the graph into a target language
622 |     defined_fns = parsel_graph(defined_fns, codegen, allow_autofill, should_expand, debug)
623 | 
624 |     # Write the compiled program to a file
625 |     write_to_file(target_file, defined_fns)
626 | 
627 | 
628 | if __name__ == "__main__":
629 |     argparser = argparse.ArgumentParser()
630 |     argparser.add_argument("source_file", help="The program to parse")
631 |     argparser.add_argument("-v", "--verbose", help="increase output verbosity", action="store_true")
632 |     argparser.add_argument("-c", "--cache", help="Where to store the cache", default="cache.json")
633 |     argparser.add_argument("-k", "--key", help="Codex API key file", default="keys/codex_key.txt")
634 |     argparser.add_argument("-i", "--allow_imports", help="Allow imports", action="store_true")
635 |     argparser.add_argument("-a", "--allow_autofill", help="Allow autofill", action="store_true")
636 |     argparser.add_argument("-e", "--allow_expand", help="Allow automatic expansion / decomposition", action="store_true")
637 |     argparser.add_argument("-d", "--debug", help="Debug", action="store_true")
638 |     argparser.add_argument("-b", "--best", help="Debug, stopping only at the best attempt", action="store_true")
639 |     argparser.add_argument("-g", "--generate_tests", help="Generate tests", action="store_true")
640 |     argparser.add_argument("-n", "--add_name_and_args", help="Add name and args", action="store_true")
641 |     args = argparser.parse_args()
642 | 
643 |     assert args.source_file.split(".")[-1] == 'ss'
644 |     codegen = CodeGen(args.cache, args.key)
645 |     if args.best:
646 |         debug = 'best'
647 |     else:
648 |         debug = args.debug
649 |     generate_tests = args.generate_tests
650 |     parsel(codegen, args.source_file, allow_autofill=args.allow_autofill, should_expand=args.allow_expand, debug=debug, add_name_and_args=args.add_name_and_args)
651 | else:
652 |     generate_tests = False
653 | 
--------------------------------------------------------------------------------
/parsify.py:
--------------------------------------------------------------------------------
1 | """Used to convert an existing program in text to a parsable format."""
2 | 
3 | from fn import Function
4 | from consts import CONSTS
5 | 
6 | # Used for backtranslation / decompilation
7 | # Get the names of all the functions defined in the solution
8 | def get_defs(solution):
9 |     defined_functions = []
10 |     for line in solution.split('\n'):
11 |         if line.startswith(CONSTS["fn_init"]):
12 |             defined_functions.append(line.split('(')[0].split(' ')[1])
13 |     return defined_functions
14 | 
15 | # Used for backtranslation / decompilation
16 | # 
Heuristically get the names of the inputs and outputs of each function 17 | # Ideally, this should be done by parsing the AST instead 18 | def get_fns(solution, defs, get_rets=False): 19 | fns = {fn: { 20 | 'name': fn, 21 | 'args': [], 22 | 'ret': [], 23 | 'parent': set(), 24 | 'children': set(), 25 | 'implementations': [], 26 | } for fn in defs} 27 | cur_fn = None 28 | for line in solution.split('\n'): 29 | if len(line) == 0: continue 30 | if line.startswith(CONSTS["fn_init"]): 31 | cur_fn = line.split('(')[0].split(' ')[1] 32 | # Get inputs 33 | inputs = line.split('(')[1].split(')')[0].split(',') 34 | fns[cur_fn]['args'] = [inp.strip() for inp in inputs] 35 | for fn in defs: 36 | if fn + "(" in line.split(':', 1)[1]: 37 | fns[fn]['parent'].add(cur_fn) 38 | fns[cur_fn]['children'].add(fn) 39 | fns[cur_fn]['implementations'] = [line] 40 | elif any(line.startswith(exclude) for exclude in CONSTS["exclude_init"]): 41 | cur_fn = None 42 | elif len(line) == len(line.lstrip()): 43 | cur_fn = None 44 | elif cur_fn is not None: 45 | fns[cur_fn]['implementations'].append(line) 46 | if CONSTS['fn_end'] in line and get_rets: 47 | # Calculate the number of commas in the line which are not in parentheses or brackets 48 | num_commas = 0 49 | in_paren = 0 50 | in_bracket = 0 51 | for char in line: 52 | if char == '(': 53 | in_paren += 1 54 | elif char == ')': 55 | in_paren -= 1 56 | elif char == '[': 57 | in_bracket += 1 58 | elif char == ']': 59 | in_bracket -= 1 60 | elif char == ',' and in_paren == 0 and in_bracket == 0: 61 | num_commas += 1 62 | rets = list(range(num_commas + 1)) 63 | if not rets: 64 | fns[cur_fn]['ret'] = [] 65 | elif len(rets) == 1: 66 | fns[cur_fn]['ret'] = ["res"] 67 | else: 68 | fns[cur_fn]['ret'] = ["res" + str(i) for i in range(len(rets))] 69 | 70 | for fn in defs: 71 | if fn + "(" in line: 72 | fns[fn]['parent'].add(cur_fn) 73 | fns[cur_fn]['children'].add(fn) 74 | 75 | fn_objs = {} 76 | for fn_name, fn_dict in fns.items(): 77 | # Add empty asserts list 78 | fn_dict['asserts'] = [] 79 | fn_objs[fn_name] = Function( 80 | name=fn_dict['name'], 81 | args=fn_dict['args'], 82 | ret=fn_dict['ret'], 83 | desc='', 84 | parent=None, 85 | asserts=[] 86 | ) 87 | # Convert parents and children to a list 88 | fn_objs[fn_name].parents = list(fn_dict['parent']) 89 | fn_objs[fn_name].children = list(fn_dict['children']) 90 | # Convert implementation to a string 91 | fn_objs[fn_name].implementations = [fn_dict['implementations']] 92 | # Set fixed_implementations to the value of implementations 93 | fn_objs[fn_name].fixed_implementation = '\n'.join(fn_dict['implementations']) 94 | return fn_objs 95 | 96 | def to_parsel(solution): 97 | defined_functions = get_defs(solution) 98 | basic_graph = get_fns(solution, defined_functions) 99 | return basic_graph 100 | 101 | def add_fn_name_and_args(parsel_text, codegen, max_tokens=500): 102 | parsel_lines = [line.rstrip() for line in parsel_text if line.strip() != ""] 103 | is_assert_lines = [CONSTS['assert_check'](line) for line in parsel_lines] 104 | assert_lines = [line for line, is_assert in zip(parsel_lines, is_assert_lines) if is_assert] 105 | parsel_lines = [parsel_line for parsel_line, assert_line in zip(parsel_lines, is_assert_lines) if not assert_line] 106 | parsel_text = '\n'.join(parsel_lines) 107 | prompt = CONSTS["add_name_and_args"](parsel_text) 108 | added_args = codegen.generate( 109 | codex_in=prompt, 110 | num_completions=8, 111 | max_tokens=max_tokens, 112 | temperature=0.2, 113 | stop=["\n\n"], 114 | indented=False, 115 | 
indented_after_first_line=True, 116 | require=None, 117 | cache_key=None, 118 | ) 119 | for added_arg in added_args: 120 | # get the non-empty lines 121 | added_arg = [line.strip() for line in added_arg if line.strip() != ""] 122 | print("Added args:", added_arg) 123 | # zip the lines together 124 | new_parsel = [] 125 | if len(added_arg) != len(parsel_lines): 126 | continue 127 | for name_and_args, line in zip(added_arg, parsel_lines): 128 | indentation = len(line) - len(line.lstrip()) 129 | new_parsel.append(" " * indentation + name_and_args + ": " + line.strip()) 130 | final_parsel = [] 131 | for is_assert in is_assert_lines: 132 | if is_assert: 133 | final_parsel.append(assert_lines.pop(0)) 134 | else: 135 | final_parsel.append(new_parsel.pop(0)) 136 | return final_parsel -------------------------------------------------------------------------------- /programs/and_commute.lean: -------------------------------------------------------------------------------- 1 | -- if p ∧ q, then q ∧ p 2 | lemma p_q_implies_q_p(p q: Prop): 3 | 4 | p ∧ q → q ∧ p := 5 | begin 6 | intro h, 7 | cases h with hp hq, 8 | split, 9 | exact hq, 10 | exact hp, 11 | end 12 | 13 | -- Description: if p ∨ q, then q ∨ p 14 | -- if q ∧ p, then p ∧ q 15 | lemma q_p_implies_p_q(p q: Prop): 16 | 17 | (q ∧ p) → (p ∧ q) := 18 | begin 19 | intro h, 20 | split, 21 | exact h.right, 22 | exact h.left, 23 | end 24 | 25 | /- 26 | Theorem: 27 | If q ∧ p, then p ∧ q 28 | -/ 29 | -- the and operator is commutative 30 | lemma and_commute(p q: Prop): 31 | (p ∧ q → q ∧ p) ∧ (q ∧ p → p ∧ q) := 32 | 33 | begin 34 | apply and.intro, 35 | { apply p_q_implies_q_p }, 36 | { apply q_p_implies_p_q } 37 | end 38 | 39 | -- Description: if p ∧ q, then p 40 | -- Signature: p_and_q_implies_p(p q: Prop) 41 | 42 | -- show (p ∧ q → q ∧ p) ∧ (q ∧ p → p ∧ q) 43 | 44 | 45 | -------------------------------------------------------------------------------- /programs/and_commute.ss: -------------------------------------------------------------------------------- 1 | and_commute(p q: Prop): the and operator is commutative 2 | show (p ∧ q → q ∧ p) ∧ (q ∧ p → p ∧ q) 3 | p_q_implies_q_p(p q: Prop): if p ∧ q, then q ∧ p 4 | q_p_implies_p_q(p q: Prop): if q ∧ p, then p ∧ q -------------------------------------------------------------------------------- /programs/collatz_simplest_full.py: -------------------------------------------------------------------------------- 1 | # Calls base_case if 1, otherwise recursion_rule 2 | def collatz_recursion(num, cur_list=list()): 3 | if num == 1: 4 | return base_case(num, cur_list) 5 | else: 6 | return recursion_rule(num, cur_list) 7 | 8 | # Returns the list with the number appended to it 9 | def base_case(num, cur_list): 10 | cur_list.append(num) 11 | return cur_list 12 | 13 | 14 | # Add num to list, collatz with 3n + 1 if odd or n / 2 if even 15 | def recursion_rule(num, cur_list): 16 | cur_list.append(num) 17 | if num % 2 == 0: 18 | return collatz_recursion(int(num / 2), cur_list) 19 | else: 20 | return collatz_recursion(int(3 * num + 1), cur_list) 21 | 22 | 23 | assert collatz_recursion(19) == [19, 58, 29, 88, 44, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1] 24 | 25 | assert base_case(1, [1, 2, 3]) == [1, 2, 3, 1] 26 | 27 | assert recursion_rule(2, [1, 2, 3]) == [1, 2, 3, 2, 1] 28 | -------------------------------------------------------------------------------- /programs/collatz_simplest_full.ss: -------------------------------------------------------------------------------- 1 | collatz_recursion(num, 
cur_list=list()): Calls base_case if 1, otherwise recursion_rule 2 | 19 -> [19, 58, 29, 88, 44, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1] 3 | base_case(num, cur_list): Returns the list with the number appended to it 4 | 1, [1, 2, 3] -> [1, 2, 3, 1] 5 | recursion_rule(num, cur_list): Add num to list, collatz with 3n + 1 if odd or n / 2 if even 6 | 2, [1, 2, 3] -> [1, 2, 3, 2, 1] 7 | collatz_recursion -------------------------------------------------------------------------------- /programs/collatz_simplest_no_tests.py: -------------------------------------------------------------------------------- 1 | import sys 2 | import time 3 | import itertools 4 | from itertools import accumulate, product, permutations, combinations 5 | import collections 6 | from collections import Counter, OrderedDict, deque, defaultdict, ChainMap 7 | from functools import lru_cache 8 | import math 9 | from math import sqrt, sin, cos, tan, ceil, fabs, floor, gcd, exp, log, log2 10 | import fractions 11 | from typing import List, Tuple 12 | import numpy as np 13 | import random 14 | import heapq 15 | # Returns the list with the number appended to it 16 | def base_case(num, cur_list): 17 | cur_list.append(num) 18 | return cur_list 19 | 20 | # Add num to list, collatz with 3n + 1 if odd or n / 2 if even 21 | def recursion_rule(num, cur_list): 22 | cur_list.append(num) 23 | if num % 2 == 0: 24 | return collatz_recursion(num / 2, cur_list) 25 | else: 26 | return collatz_recursion(3 * num + 1, cur_list) 27 | 28 | # Calls base_case if 1, otherwise recursion_rule 29 | def collatz_recursion(num, cur_list=None): 30 | if cur_list is None: 31 | return collatz_recursion(num, []) 32 | else: 33 | return base_case(num, cur_list) if num == 1 else recursion_rule(num, cur_list) 34 | 35 | 36 | assert base_case(2, []) == [2] 37 | assert base_case(1, []) == [1] 38 | assert base_case(0, []) == [0], "base_case incorrect" 39 | assert base_case(2, [1]) == [1, 2] 40 | assert base_case(10, [1, 2, 3]) == [1, 2, 3, 10] 41 | assert base_case(1, []) == [1], "The base_case is incorrect" 42 | assert base_case(4, []) == [4] 43 | assert base_case(5, [2,2,2]) == [2,2,2,5] 44 | assert base_case(1, [0]) == [0, 1] 45 | assert base_case(0, []) == [0], 'base_case(0, []) should return [0]' 46 | assert base_case(1, [1]) == [1, 1] 47 | assert base_case(2, [1]) == [1, 2], "base_case is not correct" 48 | assert base_case(2, []) == [2], 'error1' 49 | assert base_case(1, [1, 2, 3]) == [1, 2, 3, 1] 50 | assert base_case(3, []) == [3] 51 | assert base_case(0, []) == [0] 52 | assert base_case(5, []) == [5] 53 | assert base_case(1, []) == [1], 'base_case is incorrect' 54 | assert base_case(5, [1,2,3,4]) == [1,2,3,4,5], 'base_case should return [1,2,3,4,5]' 55 | assert collatz_recursion(3) == [3, 10, 5, 16, 8, 4, 2, 1] 56 | assert collatz_recursion(1) == [1], "collatz_recursion is incorrect" 57 | assert collatz_recursion(1, [1]) == [1, 1] 58 | assert collatz_recursion(1) == [1], f"collatz_recursion(1) should return [1]" 59 | assert collatz_recursion(5) == [5, 16, 8, 4, 2, 1] 60 | assert collatz_recursion(2) == [2, 1] 61 | assert collatz_recursion(9) == [9, 28, 14, 7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1] 62 | assert [1] == collatz_recursion(1) 63 | assert collatz_recursion(1) == [1] 64 | assert collatz_recursion(1, []) == [1] 65 | assert recursion_rule(5, []) == [5, 16, 8, 4, 2, 1] 66 | assert recursion_rule(5, []) == [5, 16, 8, 4, 2, 1], "recursion_rule incorrect" 67 | assert recursion_rule(3, [1]) == [1, 3, 10, 5, 16, 8, 4, 2, 1] 
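# Editor's aside (not part of the original file): note the difference in default
# arguments between the two collatz programs. collatz_simplest_full.py above uses
# `cur_list=list()`, a default Python evaluates once at definition time, so a second
# top-level call would keep appending to the list left over from the first; its
# single assert passes, but the function is not safely re-callable. This file's
# `cur_list=None` guard avoids that pitfall. A minimal illustration (hypothetical
# names, runnable on its own):
#
# def bad(num, acc=list()):
#     acc.append(num)
#     return acc
#
# bad(1)  # [1]
# bad(2)  # [1, 2] -- the same list object persists across calls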
68 | assert recursion_rule(5, []) == [5, 16, 8, 4, 2, 1], 'incorrect recursion rule' 69 | assert recursion_rule(4, []) == [4, 2, 1] 70 | assert recursion_rule(2, []) == [2, 1], 'recursion_rule(2, []) should return [2, 1]' 71 | assert recursion_rule(3, []) == [3, 10, 5, 16, 8, 4, 2, 1] 72 | assert recursion_rule(3, []) == [3, 10, 5, 16, 8, 4, 2, 1] # odd 73 | assert recursion_rule(7, []) == [7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1] 74 | assert recursion_rule(2, []) == [2, 1] 75 | assert recursion_rule(5, []) == [5, 16, 8, 4, 2, 1], 'recursion_rule does not work correctly' 76 | assert recursion_rule(6, []) == [6, 3, 10, 5, 16, 8, 4, 2, 1] 77 | assert recursion_rule(3, [1, 2]) == [1, 2, 3, 10, 5, 16, 8, 4, 2, 1] 78 | assert recursion_rule(2, [1]) == [1, 2, 1] 79 | assert recursion_rule(11, []) == [11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1] -------------------------------------------------------------------------------- /programs/collatz_simplest_no_tests.ss: -------------------------------------------------------------------------------- 1 | collatz_recursion(num, cur_list=None): Calls base_case if 1, otherwise recursion_rule 2 | base_case(num, cur_list): Returns the list with the number appended to it 3 | recursion_rule(num, cur_list): Add num to list, collatz with 3n + 1 if odd or n / 2 if even 4 | collatz_recursion -------------------------------------------------------------------------------- /programs/game_of_life_inverse_expand.py: -------------------------------------------------------------------------------- 1 | # Takes a board and returns the next iteration of the game of life, but with all values flipped 2 | def game_of_life_inversion_iteration(array_at_time_t): 3 | # Your code here 4 | #return game_of_life_iteration(invert_array(array_at_time_t)) 5 | return invert_array(game_of_life_iteration(array_at_time_t)) 6 | 7 | # Takes a board and returns the next iteration of the game of life 8 | def invert_array(array_at_time_t): 9 | return [list(map(lambda x: 1-x, row)) for row in array_at_time_t] 10 | 11 | # Takes a board and returns the board with all values flipped 12 | def game_of_life_iteration(array_at_time_t): 13 | # The array that will be returned 14 | array_at_time_t_plus_1 = [] 15 | 16 | # Iterate through the rows of the array 17 | for i in range(0, len(array_at_time_t)): 18 | # The array that will contain the next row 19 | next_row = [] 20 | 21 | # Iterate through the columns of the array 22 | for j in range(0, len(array_at_time_t[i])): 23 | # The number of neighbors 24 | num_neighbors = 0 25 | 26 | # Iterate through the neighbors of the cell 27 | for k in range(-1, 2): 28 | for l in range(-1, 2): 29 | # Don't count the cell itself 30 | if k == 0 and l == 0: 31 | continue 32 | 33 | # Check if the neighbor is valid 34 | if i + k >= 0 and i + k < len(array_at_time_t) and j + l >= 0 and j + l < len(array_at_time_t[i]): 35 | # If the neighbor is alive, increment the number of neighbors 36 | if array_at_time_t[i + k][j + l] == 1: 37 | num_neighbors += 1 38 | 39 | # If the cell is alive, check if it should die 40 | if array_at_time_t[i][j] == 1: 41 | if num_neighbors < 2 or num_neighbors > 3: 42 | next_row.append(0) 43 | else: 44 | next_row.append(1) 45 | # If the cell is dead, check if it should become alive 46 | else: 47 | if num_neighbors == 3: 48 | next_row.append(1) 49 | else: 50 | next_row.append(0) 51 | 52 | # Add the next row to the array 53 | array_at_time_t_plus_1.append(next_row) 54 | 55 | # Return the next array 56 | return 
array_at_time_t_plus_1 57 | 58 | 59 | 60 | assert game_of_life_inversion_iteration([[0, 0, 1], [1, 0, 0], [1, 0, 0]]) == [[1, 1, 1], [1, 0, 1], [1, 1, 1]] 61 | assert game_of_life_inversion_iteration([[0, 1, 0, 0], [1, 0, 1, 0], [1, 0, 0, 1], [0, 1, 1, 0]]) == [[1, 0, 1, 1], [0, 1, 0, 1], [0, 1, 1, 0], [1, 0, 0, 1]] 62 | 63 | 64 | -------------------------------------------------------------------------------- /programs/game_of_life_inverse_expand.ss: -------------------------------------------------------------------------------- 1 | game_of_life_inversion_iteration(array_at_time_t): Takes a board and returns the next iteration of the game of life, but with all values flipped 2 | [[0, 0, 1], [1, 0, 0], [1, 0, 0]] -> [[1, 1, 1], [1, 0, 1], [1, 1, 1]] 3 | [[0, 1, 0, 0], [1, 0, 1, 0], [1, 0, 0, 1], [0, 1, 1, 0]] -> [[1, 0, 1, 1], [0, 1, 0, 1], [0, 1, 1, 0], [1, 0, 0, 1]] -------------------------------------------------------------------------------- /programs/game_of_life_inverse_fill.py: -------------------------------------------------------------------------------- 1 | # Takes a board and returns the next iteration of the game of life, but with all values flipped 2 | def game_of_life_inversion_iteration(array_at_time_t): 3 | next_array = game_of_life_iteration(array_at_time_t) 4 | return array_inversion(next_array) 5 | 6 | # Takes a board with active and inactive cells as a list of lists and returns the next iteration of the game of life 7 | def game_of_life_iteration(array_at_time_t): 8 | # Create an empty array to store the next iteration of the game of life 9 | array_at_time_t_plus_1 = [] 10 | # Iterate over the rows of the board 11 | for row in range(len(array_at_time_t)): 12 | # Create an empty row to store the next iteration of the game of life 13 | array_at_time_t_plus_1_row = [] 14 | # Iterate over the columns of the board 15 | for column in range(len(array_at_time_t[0])): 16 | # Get the number of active cells around the current cell 17 | active_cells_around_current_cell = get_number_of_active_cells_around_cell(row, column, array_at_time_t) 18 | # Check whether the current cell is active or inactive 19 | if array_at_time_t[row][column] == 1: 20 | # Check whether the current cell has two or three active cells around it 21 | if active_cells_around_current_cell == 2 or active_cells_around_current_cell == 3: 22 | # Set the current cell to active for the next iteration 23 | array_at_time_t_plus_1_row.append(1) 24 | else: 25 | # Set the current cell to inactive for the next iteration 26 | array_at_time_t_plus_1_row.append(0) 27 | else: 28 | # Check whether the current cell has three active cells around it 29 | if active_cells_around_current_cell == 3: 30 | # Set the current cell to active for the next iteration 31 | array_at_time_t_plus_1_row.append(1) 32 | else: 33 | # Set the current cell to inactive for the next iteration 34 | array_at_time_t_plus_1_row.append(0) 35 | # Add the row to the board for the next iteration 36 | array_at_time_t_plus_1.append(array_at_time_t_plus_1_row) 37 | # Return the next iteration of the game of life 38 | return array_at_time_t_plus_1 39 | 40 | 41 | # Invert a square array by flipping 0's and 1's 42 | def array_inversion(array): 43 | for i in range(len(array)): 44 | for j in range(len(array[i])): 45 | if array[i][j] == 1: 46 | array[i][j] = 0 47 | else: 48 | array[i][j] = 1 49 | return array 50 | 51 | # 52 | def get_number_of_active_cells_around_cell(row, column, array_at_time_t): 53 | active_cells_around_current_cell = 0 54 | for i in range(row - 1, 
row + 2): 55 | for j in range(column - 1, column + 2): 56 | if i >= 0 and j >= 0 and i < len(array_at_time_t) and j < len(array_at_time_t[0]): 57 | if array_at_time_t[i][j] == 1: 58 | active_cells_around_current_cell += 1 59 | if array_at_time_t[row][column] == 1: 60 | active_cells_around_current_cell -= 1 61 | return active_cells_around_current_cell 62 | 63 | 64 | 65 | assert game_of_life_inversion_iteration([[0, 0, 1], [1, 0, 0], [1, 0, 0]]) == [[1, 1, 1], [1, 0, 1], [1, 1, 1]] 66 | assert game_of_life_inversion_iteration([[0, 1, 0, 0], [1, 0, 1, 0], [1, 0, 0, 1], [0, 1, 1, 0]]) == [[1, 0, 1, 1], [0, 1, 0, 1], [0, 1, 1, 0], [1, 0, 0, 1]] 67 | 68 | 69 | 70 | -------------------------------------------------------------------------------- /programs/game_of_life_inverse_fill.ss: -------------------------------------------------------------------------------- 1 | game_of_life_inversion_iteration(array_at_time_t): Takes a board and returns the next iteration of the game of life, but with all values flipped 2 | [[0, 0, 1], [1, 0, 0], [1, 0, 0]] -> [[1, 1, 1], [1, 0, 1], [1, 1, 1]] 3 | [[0, 1, 0, 0], [1, 0, 1, 0], [1, 0, 0, 1], [0, 1, 1, 0]] -> [[1, 0, 1, 1], [0, 1, 0, 1], [0, 1, 1, 0], [1, 0, 0, 1]] 4 | game_of_life_iteration(array_at_time_t): Takes a board with active and inactive cells as a list of lists and returns the next iteration of the game of life 5 | array_inversion(array): Invert a square array by flipping 0's and 1's -------------------------------------------------------------------------------- /programs/game_of_life_inverse_no_args.py: -------------------------------------------------------------------------------- 1 | import sys 2 | import time 3 | import itertools 4 | from itertools import accumulate, product, permutations, combinations 5 | import collections 6 | from collections import Counter, OrderedDict, deque, defaultdict, ChainMap 7 | from functools import lru_cache 8 | import math 9 | from math import sqrt, sin, cos, tan, ceil, fabs, floor, gcd, exp, log, log2 10 | import fractions 11 | from typing import List, Tuple 12 | import numpy as np 13 | import random 14 | import heapq 15 | # Takes a board with active and inactive cells as a list of lists and returns the next iteration of the game of life 16 | def next_board(board): 17 | # 1) Create a new, empty board that is the same size as the old one 18 | new_board = [] 19 | for i in range(len(board)): 20 | new_board.append([]) 21 | for j in range(len(board[i])): 22 | new_board[i].append(0) 23 | # 2) Iterate over the old board and determine the new state of each cell using the rules of the game of life 24 | for i in range(len(board)): 25 | for j in range(len(board[i])): 26 | # Determine the number of active cells in the neighbourhood 27 | live_neighbours = 0 28 | for x in range(i-1, i+2): 29 | for y in range(j-1, j+2): 30 | # Check that the indices are valid 31 | if x>=0 and y>=0 and x<len(board) and y<len(board[i]): -------------------------------------------------------------------------------- /programs/game_of_life_inverse_no_args.ss: -------------------------------------------------------------------------------- 1 | Takes a board and returns the next iteration of the game of life, but with all values flipped 2 | [[0, 0, 1], [1, 0, 0], [1, 0, 0]] -> [[1, 1, 1], [1, 0, 1], [1, 1, 1]] 3 | [[0, 1, 0, 0], [1, 0, 1, 0], [1, 0, 0, 1], [0, 1, 1, 0]] -> [[1, 0, 1, 1], [0, 1, 0, 1], [0, 1, 1, 0], [1, 0, 0, 1]] 4 | Takes a board with active and inactive cells as a list of lists and returns the next iteration of the game of life 5 | Invert a square array by flipping 0's and 1's -------------------------------------------------------------------------------- /programs/lisp.py: -------------------------------------------------------------------------------- 1 | import io, contextlib 2 | import sys 3 | import time 4 | import resource 5 | import itertools 6 | from itertools import accumulate, product,
permutations, combinations 7 | import collections 8 | from collections import Counter, OrderedDict, deque, defaultdict, ChainMap 9 | from functools import lru_cache 10 | import math 11 | from math import sqrt, sin, cos, tan, ceil, fabs, floor, gcd, exp, log, log2 12 | import fractions 13 | from typing import List, Tuple 14 | import numpy as np 15 | import random 16 | import heapq 17 | 18 | # Return a new env inside env with parms mapped to their corresponding args, and env as the new env's outer env. 19 | def get_env(parms, args, env=None): 20 | new_env = {'_outer':env} 21 | for (parm, arg) in zip(parms, args): 22 | new_env[parm] = arg 23 | return new_env 24 | 25 | # Get a dictionary mapping math library function names to their functions. 26 | def get_math(): 27 | d = {} 28 | for name in dir(math): 29 | if name[:2] != '__': 30 | d[name] = getattr(math, name) 31 | return d 32 | 33 | # Get a dictionary mapping operator symbols to their functions: +, -, *, /, >, <, >=, <=, =. 34 | def get_ops(): 35 | return { 36 | "+" : (lambda x,y: x+y), 37 | "-" : (lambda x,y: x-y), 38 | "*" : (lambda x,y: x*y), 39 | "/" : (lambda x,y: x/y), 40 | ">" : (lambda x,y: x>y), 41 | "<" : (lambda x,y: x<y), 42 | ">=": (lambda x,y: x>=y), 43 | "<=": (lambda x,y: x<=y), 44 | "=": (lambda x,y: x==y) 45 | } 46 | # Get a dictionary mapping 'abs', 'min', 'max', 'not', 'round' to their functions. 47 | def get_simple_math(): 48 | return {'abs':abs, 'min':min, 'max':max, 'not':lambda x: not x, 'round':round} 49 | 50 | # Return the value of fn_dict_generator()[key](*args_list) in standard_env. 51 | def apply_fn_dict_key(fn_dict_generator, key, args_list): 52 | fn_dict = fn_dict_generator() 53 | return fn_dict[key](*args_list) 54 | 55 | # An environment with some Scheme standard procedures. Start with an environment and update it with standard functions. 56 | def standard_env(includes=['math', 'ops', 'simple_math']): 57 | env = {'_outer': None} 58 | if 'math' in includes: 59 | env.update(get_math()) 60 | if 'ops' in includes: 61 | env.update(get_ops()) 62 | if 'simple_math' in includes: 63 | env.update(get_simple_math()) 64 | return env 65 | 66 | # Find the value of var in the innermost env where var appears. 67 | def find(env, var): 68 | if var in env: 69 | return env[var] 70 | else: 71 | return find(env['_outer'], var) 72 | 73 | # Return find(env, x). 74 | def string_case(x, env): 75 | return find(env, x) 76 | 77 | # Gets a procedure and returns the result of evaluating proc(*args) in env. Should not be called directly. 78 | def eval_procedure(parms, body, env, args): 79 | proc = get_procedure(parms, body, env) 80 | new_env = get_env(parms, args, env) 81 | return eval_exp(body, new_env) 82 | 83 | # Return a procedure which evaluates body in a new environment with parms bound to the args passed to the procedure (in the same order as parms). 84 | def get_procedure(parms, body, env): 85 | return lambda *args: eval_procedure(parms, body, env, args) 86 | 87 | 88 | # Get the procedure by evaluating the first value of x. Then, evaluate the arguments and apply the procedure to them. Return the result. 89 | def otherwise_case(x, env): 90 | p = eval_exp(x[0], env) 91 | args = [eval_exp(arg, env) for arg in x[1:]] 92 | return p(*args) 93 | 94 | # Handle the function specified by the first value of x. Handle the first value of x being quote, if, define, set!, lambda, or otherwise. Return the result.
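# Editor's aside (not part of the original lisp.py): the procedure machinery just
# defined composes as follows. get_procedure returns a plain Python closure; calling
# it has eval_procedure build a child environment with get_env, so lexical scoping
# falls out of dicts chained through '_outer'. A hypothetical trace, assuming only
# the definitions in this file:
#
# square = get_procedure(['r'], ['*', 'r', 'r'],
#                        {'*': (lambda a, b: a * b), '_outer': None})
# square(3)  # evaluates ['*', 'r', 'r'] in {'r': 3, '_outer': <defining env>} -> 9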
95 | def list_case(x, env): 96 | if x[0] == 'quote': 97 | return x[1] 98 | elif x[0] == 'if': 99 | if eval_exp(x[1], env): 100 | return eval_exp(x[2], env) 101 | elif len(x) == 4: 102 | return eval_exp(x[3], env) 103 | elif x[0] == 'define': 104 | env[x[1]] = eval_exp(x[2], env) 105 | elif x[0] == 'set!': 106 | val = eval_exp(x[2], env)  # dicts have no .find; walk outward to the env that defines the var 107 | while x[1] not in env: env = env['_outer'] 108 | env[x[1]] = val 109 | elif x[0] == 'lambda': 110 | return get_procedure(x[1], x[2], env) 111 | else: 112 | proc = eval_exp(x[0], env) 113 | args = [ eval_exp(arg, env) for arg in x[1:] ] 114 | return proc(*args) 115 | # Return x 116 | def not_list_case(x, env): 117 | if isinstance(x, list): 118 | return None 119 | return x 120 | # Evaluate an expression in an environment and return the result. Check if x is a list, a string, or neither, and call the corresponding function. 121 | def eval_exp(x, env): 122 | if isinstance(x, list): 123 | return list_case(x, env) 124 | elif isinstance(x, str): 125 | return string_case(x, env) 126 | else: 127 | return not_list_case(x, env) 128 | # Convert a string into a list of tokens, including parens. 129 | def tokenize(s): 130 | "Convert a string into a list of tokens, including parens." 131 | return s.replace('(', ' ( ').replace(')', ' ) ').split() 132 | 133 | # Numbers become numbers; every other token is a string. 134 | def atom(token): 135 | try: return int(token) 136 | except ValueError: 137 | try: return float(token) 138 | except ValueError: 139 | return token 140 | # Translate tokens to their corresponding atoms, using parentheses for nesting lists. 141 | def read_from_tokens(tokens): 142 | if len(tokens) == 0: 143 | raise SyntaxError('unexpected EOF while reading') 144 | token = tokens.pop(0) 145 | if token == '(': 146 | L = [] 147 | while tokens[0] != ')': 148 | L.append(read_from_tokens(tokens)) 149 | tokens.pop(0) # pop off ')' 150 | return L 151 | elif token == ')': 152 | raise SyntaxError('unexpected )') 153 | else: 154 | return atom(token) 155 | 156 | # Read a Scheme expression from a string. 157 | def parse(program): 158 | return read_from_tokens(tokenize(program)) 159 | 160 | # Convert a nested list into a string with nesting represented by parentheses. 161 | def nested_list_to_str(exp): 162 | if isinstance(exp, list): 163 | return '(' + ' '.join(map(nested_list_to_str, exp)) + ')' 164 | else: 165 | return str(exp) 166 | # Parse an expression, return the result. 167 | def parse_and_update(expression, env): 168 | exp = parse(expression) 169 | result = eval_exp(exp, env) 170 | return nested_list_to_str(result) 171 | 172 | # Initialize a standard environment. Parse and evaluate a list of expressions, returning the final result.
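# Editor's aside (not part of the original lisp.py): the read-eval pipeline that
# evaluate_program (below) drives, traced on one expression using only functions
# defined above:
#
# tokenize("(+ 1 2)")                    # ['(', '+', '1', '2', ')']
# parse("(+ 1 2)")                       # ['+', 1, 2]
# eval_exp(['+', 1, 2], standard_env())  # looks '+' up via string_case/find -> 3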
173 | def evaluate_program(program): 174 | env = standard_env() 175 | last = None 176 | for expression in program: 177 | last = parse_and_update(expression, env) 178 | return last 179 | 180 | 181 | assert repr(str(evaluate_program(['(define square (lambda (r) (* r r)))', '(square 3)']))) == repr(str(9)) or (evaluate_program(['(define square (lambda (r) (* r r)))', '(square 3)']) == 9) 182 | 183 | assert repr(str(get_env([], []))) == repr(str({'_outer': None})) or (get_env([], []) == {'_outer': None}) 184 | assert repr(str(get_env(['a'], [1]))) == repr(str({'a': 1, '_outer': None})) or (get_env(['a'], [1]) == {'a': 1, '_outer': None}) 185 | 186 | assert repr(str(standard_env([]))) == repr(str({'_outer': None})) or (standard_env([]) == {'_outer': None}) 187 | 188 | assert repr(str(parse_and_update("(+ 1 (* 2 3))", {'+': (lambda x, y: x + y), '*': (lambda x, y: x * y), '_outer': None}))) == repr(str(7)) or (parse_and_update("(+ 1 (* 2 3))", {'+': (lambda x, y: x + y), '*': (lambda x, y: x * y), '_outer': None}) == 7) 189 | 190 | 191 | 192 | 193 | assert repr(str(apply_fn_dict_key(get_math, 'sqrt', [4]))) == repr(str(2.0)) or (apply_fn_dict_key(get_math, 'sqrt', [4]) == 2.0) 194 | assert repr(str(apply_fn_dict_key(get_ops, '+', [1, 2]))) == repr(str(3)) or (apply_fn_dict_key(get_ops, '+', [1, 2]) == 3) 195 | assert repr(str(apply_fn_dict_key(get_simple_math, 'abs', [-1]))) == repr(str(1)) or (apply_fn_dict_key(get_simple_math, 'abs', [-1]) == 1) 196 | 197 | assert repr(str(eval_exp(1, {'_outer': None}))) == repr(str(1)) or (eval_exp(1, {'_outer': None}) == 1) 198 | 199 | assert repr(str(parse('(1 + (2 * 3))'))) == repr(str([1, '+', [2, '*', 3]])) or (parse('(1 + (2 * 3))') == [1, '+', [2, '*', 3]]) 200 | 201 | assert repr(str(nested_list_to_str(1))) == repr(str("1")) or (nested_list_to_str(1) == "1") 202 | assert repr(str(nested_list_to_str([1, '+', [2, '*', 3]]))) == repr(str("(1 + (2 * 3))")) or (nested_list_to_str([1, '+', [2, '*', 3]]) == "(1 + (2 * 3))") 203 | 204 | assert repr(str(find({'a':4, '_outer':None}, 'a'))) == repr(str(4)) or (find({'a':4, '_outer':None}, 'a') == 4) 205 | assert repr(str(find({'_outer':{'a':4, '_outer':None}}, 'a'))) == repr(str(4)) or (find({'_outer':{'a':4, '_outer':None}}, 'a') == 4) 206 | assert repr(str(find({'a':3, '_outer':{'a':4, '_outer':None}}, 'a'))) == repr(str(3)) or (find({'a':3, '_outer':{'a':4, '_outer':None}}, 'a') == 3) 207 | 208 | assert repr(str(string_case('a', {'a':4, '_outer':None}))) == repr(str(4)) or (string_case('a', {'a':4, '_outer':None}) == 4) 209 | 210 | assert repr(str(list_case(['quote', 'a'], {'_outer': None}))) == repr(str('a')) or (list_case(['quote', 'a'], {'_outer': None}) == 'a') 211 | assert repr(str(list_case(['if', True, 1, 2], {'_outer': None}))) == repr(str(1)) or (list_case(['if', True, 1, 2], {'_outer': None}) == 1) 212 | assert repr(str(list_case(['define', 'a', 1], {'_outer': None}))) == repr(str(None)) or (list_case(['define', 'a', 1], {'_outer': None}) == None) 213 | 214 | assert repr(str(not_list_case(1, {}))) == repr(str(1)) or (not_list_case(1, {}) == 1) 215 | 216 | 217 | assert repr(str(otherwise_case(['+', 1, 2], {'+': (lambda x, y: x + y), '_outer': None}))) == repr(str(3)) or (otherwise_case(['+', 1, 2], {'+': (lambda x, y: x + y), '_outer': None}) == 3) 218 | 219 | assert repr(str(eval_procedure(['r'], ['*', 'pi', ['*', 'r', 'r']], {'*': (lambda x, y: x * y), 'pi': 3, '_outer': None}, [1]))) == repr(str(3)) or (eval_procedure(['r'], ['*', 'pi', ['*', 'r', 'r']], {'*': (lambda x, y: x * y), 'pi': 3, 
'_outer': None}, [1]) == 3) 220 | 221 | assert repr(str(tokenize("1 + 2"))) == repr(str(['1', '+', '2'])) or (tokenize("1 + 2") == ['1', '+', '2']) 222 | assert repr(str(tokenize("1 + (2 * 3)"))) == repr(str(['1', '+', '(', '2', '*', '3', ')'])) or (tokenize("1 + (2 * 3)") == ['1', '+', '(', '2', '*', '3', ')']) 223 | 224 | assert repr(str(read_from_tokens(['(', '1', '+', '(', '2', '*', '3', ')', ')']))) == repr(str([1, '+', [2, '*', 3]])) or (read_from_tokens(['(', '1', '+', '(', '2', '*', '3', ')', ')']) == [1, '+', [2, '*', 3]]) 225 | 226 | assert repr(str(atom("1"))) == repr(str(1)) or (atom("1") == 1) 227 | assert repr(str(atom("a"))) == repr(str("a")) or (atom("a") == "a") 228 | assert repr(str(atom("1.2"))) == repr(str(1.2)) or (atom("1.2") == 1.2) 229 | -------------------------------------------------------------------------------- /programs/lisp.ss: -------------------------------------------------------------------------------- 1 | An env is a dictionary of {'var':val} pairs, with a link to its outer environment in env['_outer']. 2 | A procedure is a lambda expression, with parms, body, and env which calls eval_exp on the body. 3 | #*#*# 4 | evaluate_program(program): Initialize a standard environment. Parse and evaluate a list of expressions, returning the final result. 5 | ['(define square (lambda (r) (* r r)))', '(square 3)'] -> 9 6 | get_env(parms, args, env=None): Return a new env inside env with parms mapped to their corresponding args, and env as the new env's outer env. 7 | [], [] -> {'_outer': None} 8 | ['a'], [1] -> {'a': 1, '_outer': None} 9 | standard_env(includes=['math','ops','simple_math']): An environment with some Scheme standard procedures. Start with an environment and update it with standard functions. 10 | [] -> {'_outer': None} 11 | get_math(): Get a dictionary mapping math library function names to their functions. 12 | get_ops(): Get a dictionary mapping operator symbols to their functions: +, -, *, /, >, <, >=, <=, =. 13 | get_simple_math(): Get a dictionary mapping 'abs', 'min', 'max', 'not', 'round' to their functions. 14 | apply_fn_dict_key(fn_dict_generator, key, args_list): Return the value of fn_dict_generator()[key](*args_list) in standard_env. 15 | get_math, 'sqrt', [4] -> 2.0 16 | get_ops, '+', [1, 2] -> 3 17 | get_simple_math, 'abs', [-1] -> 1 18 | get_math 19 | get_ops 20 | get_simple_math 21 | parse_and_update(expression, env): Parse an expression, return the result. 22 | "(+ 1 (* 2 3))", {'+': (lambda x, y: x + y), '*': (lambda x, y: x * y), '_outer': None} -> 7 23 | eval_exp(x, env): Evaluate an expression in an environment and return the result. Check if x is a list, a string, or neither, and call the corresponding function. 24 | 1, {'_outer': None} -> 1 25 | find(env, var): Find the value of var in the innermost env where var appears. 26 | {'a':4, '_outer':None}, 'a' -> 4 27 | {'_outer':{'a':4, '_outer':None}}, 'a' -> 4 28 | {'a':3, '_outer':{'a':4, '_outer':None}}, 'a' -> 3 29 | string_case(x, env): Return find(env, x). 30 | 'a', {'a':4, '_outer':None} -> 4 31 | find 32 | list_case(x, env): Handle the function specified by the first value of x. Handle the first value of x being quote, if, define, set!, lambda, or otherwise. Return the result. 
33 | ['quote', 'a'], {'_outer': None} -> 'a' 34 | ['if', True, 1, 2], {'_outer': None} -> 1 35 | ['define', 'a', 1], {'_outer': None} -> None 36 | get_procedure(parms, body, env): Return a procedure which evaluates body in a new environment with parms bound to the args passed to the procedure (in the same order as parms). 37 | eval_procedure(parms, body, env, args): Gets a procedure and returns the result of evaluating proc(*args) in env. Should not be called directly. 38 | ['r'], ['*', 'pi', ['*', 'r', 'r']], {'*': (lambda x, y: x * y), 'pi': 3, '_outer': None}, [1] -> 3 39 | get_procedure 40 | get_env 41 | eval_exp 42 | otherwise_case(x, env): Get the procedure by evaluating the first value of x. Then, evaluate the arguments and apply the procedure to them. Return the result. 43 | ['+', 1, 2], {'+': (lambda x, y: x + y), '_outer': None} -> 3 44 | eval_exp 45 | eval_exp 46 | not_list_case(x, env): Return x 47 | 1, {} -> 1 48 | parse(program): Read a Scheme expression from a string. 49 | '(1 + (2 * 3))' -> [1, '+', [2, '*', 3]] 50 | tokenize(s): Convert a string into a list of tokens, including parens. 51 | "1 + 2" -> ['1', '+', '2'] 52 | "1 + (2 * 3)" -> ['1', '+', '(', '2', '*', '3', ')'] 53 | read_from_tokens(tokens): Translate tokens to their corresponding atoms, using parentheses for nesting lists. 54 | ['(', '1', '+', '(', '2', '*', '3', ')', ')'] -> [1, '+', [2, '*', 3]] 55 | atom(token): Numbers become numbers; every other token is a string. 56 | "1" -> 1 57 | "a" -> "a" 58 | "1.2" -> 1.2 59 | nested_list_to_str(exp): Convert a nested list into a string with nesting represented by parentheses. 60 | 1 -> "1" 61 | [1, '+', [2, '*', 3]] -> "(1 + (2 * 3))" 62 | -------------------------------------------------------------------------------- /programs/problem_solving.py: -------------------------------------------------------------------------------- 1 | # given a list of lists representing the cost of building a road between any two cities, and a list representing the cost of building an airport in a city, return a new cost matrix with a new node corresponding to the sky. 2 | def sky_city_cost(city_road_cost, city_airport_cost): 3 | """ 4 | :param city_road_cost: list of lists representing cost of road between any two cities 5 | :param city_airport_cost: list representing cost of an airport in a city 6 | :return: new cost matrix with a new node corresponding to the sky 7 | """ 8 | # add new node for sky to cost matrix 9 | num_cities = len(city_road_cost) 10 | sky_city_cost = [[0 for _ in range(num_cities + 1)] for _ in range(num_cities + 1)] 11 | for i in range(num_cities): 12 | for j in range(num_cities): 13 | sky_city_cost[i][j] = city_road_cost[i][j] 14 | for i in range(num_cities): 15 | sky_city_cost[i][-1] = city_airport_cost[i] 16 | sky_city_cost[-1][i] = city_airport_cost[i] 17 | return sky_city_cost 18 | 19 | 20 | # given a list of lists representing the cost of each edge, return an adjacency matrix corresponding to the minimum spanning tree.
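# Editor's aside (not part of the original file): the function below is Prim's
# algorithm with a direct O(V^3) scan over (visited, unvisited) pairs. On the matrix
# [[0, 1, 3, 4], [1, 0, 2, 100], [3, 2, 0, 5], [4, 100, 5, 0]] it grows the tree from
# vertex 0, picking edges (0, 1) at cost 1, (1, 2) at cost 2, then (0, 3) at cost 4,
# which is exactly the adjacency matrix asserted at the bottom of this file. A
# heap-based variant (illustrative sketch only; `prim_heap` is not a repo function)
# finds the same tree on that example, where the unique edge costs force a unique
# MST, in O(E log E) instead:
#
# import heapq
# def prim_heap(cost):
#     seen, edges = {0}, []
#     heap = [(c, 0, j) for j, c in enumerate(cost[0]) if j != 0]
#     heapq.heapify(heap)
#     while heap and len(seen) < len(cost):
#         c, i, j = heapq.heappop(heap)
#         if j in seen:
#             continue
#         seen.add(j)
#         edges.append((i, j))
#         for k, ck in enumerate(cost[j]):
#             if k not in seen:
#                 heapq.heappush(heap, (ck, j, k))
#     return edges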
21 | def minimum_spanning_tree(cost_matrix): 22 | # This is a list of the vertices that have been added to the MST 23 | visited = [0] 24 | # This is a list of the vertices that have not been added to the MST 25 | unvisited = [i for i in range(1, len(cost_matrix))] 26 | # This is a list of edges that are part of the MST 27 | edges = [] 28 | # This is the adjacency matrix corresponding to the MST 29 | adjacency_matrix = [[0 for i in range(len(cost_matrix))] for j in range(len(cost_matrix))] 30 | while len(unvisited) > 0: 31 | # Get the index of the minimum edge 32 | min_edge_index = -1 33 | min_edge_value = float('inf') 34 | for i in range(len(visited)): 35 | for j in range(len(unvisited)): 36 | if cost_matrix[visited[i]][unvisited[j]] < min_edge_value: 37 | min_edge_index = (visited[i], unvisited[j]) 38 | min_edge_value = cost_matrix[visited[i]][unvisited[j]] 39 | # Add the minimum edge to our MST 40 | edges.append(min_edge_index) 41 | # Add the unvisited vertex to the list of visited vertices 42 | visited.append(min_edge_index[1]) 43 | # Remove the unvisited vertex from the list of unvisited vertices 44 | unvisited.remove(min_edge_index[1]) 45 | # Add edges to the adjacency matrix 46 | for edge in edges: 47 | adjacency_matrix[edge[0]][edge[1]] = 1 48 | adjacency_matrix[edge[1]][edge[0]] = 1 49 | return adjacency_matrix 50 | 51 | # given a list of lists representing an adjacency matrix, return a list of the nodes connected to the final node. However, if only one node is connected to the final node, return an empty list. 52 | def final_node_connectors(adjacency_matrix): 53 | final_node = len(adjacency_matrix) - 1 54 | final_node_connectors = [] 55 | for i in range(len(adjacency_matrix) - 1): 56 | if adjacency_matrix[i][final_node] == 1: 57 | final_node_connectors.append(i) 58 | if len(final_node_connectors) == 1: 59 | return [] 60 | return final_node_connectors 61 | 62 | # given a matrix representing the cost of building a road between any two cities, and a list representing the cost of building an airport in a city (where any two cities with airports are connected), return a list of the cities that should have airports built in them to minimize the total cost of building roads and airports such that all cities are connected. The list should be sorted in ascending order. 
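# Editor's aside (not part of the original file): why this reduction works. Adding a
# "sky" node whose edge to city i costs city_airport_cost[i] encodes "any two cities
# with airports are connected": a spanning tree that touches the sky through two or
# more cities is exactly a plan that builds those airports. Since a spanning tree of
# the augmented graph must attach the sky node somehow, it always contains at least
# one sky edge; a single sky edge means no useful airport pair, which is why
# final_node_connectors returns [] in that case (see the assert below where every
# airport costs 10 against roads of cost 3). Traced on another assert from this file:
#
# select_airport_cities([[0, 10, 3], [10, 0, 11], [3, 11, 0]], [1, 4, 5])
# # MST over {0, 1, 2, sky}: (0, sky) cost 1, (0, 2) cost 3, (sky, 1) cost 4
# # sky touches cities 0 and 1 -> [0, 1]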
63 | def select_airport_cities(city_road_cost, city_airport_cost): 64 | cost_matrix = sky_city_cost(city_road_cost, city_airport_cost) 65 | adjacency_matrix = minimum_spanning_tree(cost_matrix) 66 | return final_node_connectors(adjacency_matrix) 67 | 68 | 69 | assert repr(str(select_airport_cities([[0, 3, 3], [3, 0, 3], [3, 3, 0]], [0, 0, 0]))) == repr(str([0, 1, 2])) 70 | assert repr(str(select_airport_cities([[0, 3, 3], [3, 0, 3], [3, 3, 0]], [10, 10, 10]))) == repr(str([])) 71 | assert repr(str(select_airport_cities([[0, 10, 3], [10, 0, 11], [3, 11, 0]], [1, 4, 5]))) == repr(str([0, 1])) 72 | 73 | assert repr(str(sky_city_cost([[1, 2, 3], [1, 2, 3], [1, 2, 3]], [4, 5, 6]))) == repr(str([[1, 2, 3, 4], [1, 2, 3, 5], [1, 2, 3, 6], [4, 5, 6, 0]])) 74 | 75 | assert repr(str(minimum_spanning_tree([[0, 1, 3, 4], [1, 0, 2, 100], [3, 2, 0, 5], [4, 100, 5, 0]]))) == repr(str([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 0], [1, 0, 0, 0]])) 76 | 77 | assert repr(str(final_node_connectors([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 0], [1, 0, 0, 0]]))) == repr(str([])) 78 | assert repr(str(final_node_connectors([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]))) == repr(str([0, 2])) 79 | -------------------------------------------------------------------------------- /programs/problem_solving.ss: -------------------------------------------------------------------------------- 1 | select_airport_cities(city_road_cost, city_airport_cost): given a matrix representing the cost of building a road between any two cities, and a list representing the cost of building an airport in a city (where any two cities with airports are connected), return a list of the cities that should have airports built in them to minimize the total cost of building roads and airports such that all cities are connected. The list should be sorted in ascending order. 2 | [[0,3,3],[3,0,3],[3,3,0]],[0,0,0] -> [0,1,2] 3 | [[0,3,3],[3,0,3],[3,3,0]],[10,10,10] -> [] 4 | [[0,10,3],[10,0,11],[3,11,0]],[1,4,5] -> [0,1] 5 | sky_city_cost(city_road_cost, city_airport_cost): given a list of lists representing the cost of building a road between any two cities, and a list representing the cost of building an airport in a city, return a new cost matrix with a new node corresponding to the sky. 6 | [[1,2,3],[1,2,3],[1,2,3]],[4,5,6] -> [[1,2,3,4],[1,2,3,5],[1,2,3,6],[4,5,6,0]] 7 | minimum_spanning_tree(cost_matrix): given a list of lists representing the cost of each edge, return an adjacency matrix corresponding to the minimum spanning tree. 8 | [[0,1,3,4],[1,0,2,100],[3,2,0,5],[4,100,5,0]] -> [[0,1,0,1],[1,0,1,0],[0,1,0,0],[1,0,0,0]] 9 | final_node_connectors(adjacency_matrix): given a list of lists representing an adjacency matrix, return a list of the nodes connected to the final node. However, if only one node is connected to the final node, return an empty list.
10 | [[0,1,0,1],[1,0,1,0],[0,1,0,0],[1,0,0,0]] -> [] 11 | [[0,1,0,1],[1,0,1,0],[0,1,0,1],[1,0,1,0]] -> [0,2] -------------------------------------------------------------------------------- /programs/problem_solving_no_args_no_tests.py: -------------------------------------------------------------------------------- 1 | import sys 2 | import time 3 | import itertools 4 | from itertools import accumulate, product, permutations, combinations 5 | import collections 6 | from collections import Counter, OrderedDict, deque, defaultdict, ChainMap 7 | from functools import lru_cache 8 | import math 9 | from math import sqrt, sin, cos, tan, ceil, fabs, floor, gcd, exp, log, log2 10 | import fractions 11 | from typing import List, Tuple 12 | import numpy as np 13 | import random 14 | import heapq 15 | # given a list of lists representing the cost of building a road between any two cities, and a list representing the cost of building an airport in a city, return a new cost matrix with a new node corresponding to the sky. 16 | def add_sky_node(cost_matrix, airport_costs): 17 | # First, add the new row and column representing the sky node 18 | for row in cost_matrix: 19 | row.append(0) 20 | cost_matrix.append([0]*len(cost_matrix[0])) 21 | # Now add the costs of the airports to the new row and column 22 | for i in range(len(airport_costs)): 23 | cost_matrix[i][-1] = airport_costs[i] 24 | cost_matrix[-1][i] = airport_costs[i] 25 | # Return the new cost matrix 26 | return cost_matrix 27 | 28 | # given a list of lists representing the cost of each edge, return an adjacency matrix corresponding to the minimum spanning true. all entries in the adjacency matrix should be 0 or 1. 29 | def min_spanning_tree(cost_matrix): 30 | n = len(cost_matrix) 31 | if n == 0: 32 | return [] 33 | m = len(cost_matrix[0]) 34 | if n != m: 35 | raise Exception("The cost matrix is not symmetric") 36 | adjacency_matrix = [[0] * n for i in range(n)] 37 | # Start with the first vertex. 38 | visited = [0] 39 | while len(visited) < n: 40 | min_cost = sys.maxsize 41 | min_edge = None 42 | for vertex in visited: 43 | for i in range(n): 44 | if i in visited: 45 | continue 46 | if cost_matrix[vertex][i] < min_cost: 47 | min_cost = cost_matrix[vertex][i] 48 | min_edge = (vertex, i) 49 | adjacency_matrix[min_edge[0]][min_edge[1]] = 1 50 | adjacency_matrix[min_edge[1]][min_edge[0]] = 1 51 | visited.append(min_edge[1]) 52 | return adjacency_matrix 53 | 54 | # given a list of lists representing an adjacency matrix without self-loops, return a list of the nodes connected to the final node. However, if only one node is connected to the final node, return an empty list. 55 | def find_connected_cities(adj_matrix): 56 | # YOUR CODE HERE 57 | finalNode = len(adj_matrix) 58 | lastRow = adj_matrix[finalNode-1] 59 | count = 0 60 | connectedCities = [] 61 | for i in range(0, len(lastRow)): 62 | if lastRow[i] == 1: 63 | connectedCities.append(i) 64 | count += 1 65 | if count == 1: 66 | return [] 67 | else: 68 | return connectedCities 69 | 70 | # given a matrix representing the cost of building a road between any two cities, and a list representing the cost of building an airport in a city (where any two cities with airports are connected), return a list of the cities that should have airports built in them to minimize the total cost of building roads and airports such that all cities are connected. The list should be sorted in ascending order. 
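# Editor's aside (not part of the original file): a worked trace of the pipeline
# below, using an input from the generated asserts further down. For
# min_cost_airports([[0, 2, 2], [2, 0, 2], [2, 2, 0]], [1, 2, 1]):
#
# add_sky_node(...)       # -> [[0, 2, 2, 1], [2, 0, 2, 2], [2, 2, 0, 1], [1, 2, 1, 0]]
# min_spanning_tree(...)  # Prim from 0 picks (0, sky) cost 1, (sky, 2) cost 1,
#                         # then road (0, 1) cost 2
# find_connected_cities(...)  # last row has 1s at columns 0 and 2 -> [0, 2]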
71 | def min_cost_airports(cost_matrix, airport_costs): 72 | cost_matrix = add_sky_node(cost_matrix, airport_costs) 73 | adj_matrix = min_spanning_tree(cost_matrix) 74 | connected_cities = find_connected_cities(adj_matrix) 75 | return connected_cities 76 | 77 | 78 | assert find_connected_cities([[0,1,0,0],[0,0,1,0],[0,0,0,1],[0,0,0,0]]) == [] 79 | assert find_connected_cities([[0, 0, 0, 1], [0, 0, 1, 1], [0, 1, 0, 1], [1, 1, 1, 0]]) == [0, 1, 2] 80 | assert find_connected_cities([[0,1],[0,1],[1,0],[1,0],[0,0]]) == [] 81 | assert find_connected_cities([[0, 1, 0], [0, 0, 1], [1, 0, 0]]) == [] 82 | assert find_connected_cities([[0,1,1],[1,0,1],[1,1,0]]) == [0, 1] 83 | assert find_connected_cities([[0,1],[1,0]]) == [] 84 | assert find_connected_cities([[0, 0, 1], [0, 0, 1], [1, 1, 0]]) == [0, 1] 85 | assert find_connected_cities([[False, True, True], [True, False, False], [True, False, False]]) == [] 86 | assert find_connected_cities([[0,0,0,1],[1,0,0,0],[0,1,0,0],[0,0,0,0]]) == [], "Test #1" 87 | assert find_connected_cities([[False, True, False, True], [True, False, True, False], [False, True, False, True], [True, False, True, False]]) == [0, 2] 88 | assert find_connected_cities([[0, 1, 1], [0, 0, 1], [1, 1, 0]]) == [0, 1] 89 | assert find_connected_cities([[0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0]]) == [] 90 | assert find_connected_cities([[0,1,1,0],[1,0,0,1],[1,0,0,1],[0,1,1,0]]) == [1,2] 91 | assert find_connected_cities([[0, 1, 1],[1, 0, 1],[1, 1, 0]]) == [0, 1] 92 | assert find_connected_cities([[0,1,1,0], [1,0,0,1], [1,0,0,1], [0,1,1,0]]) == [1,2] 93 | assert min_spanning_tree([[0, 2, 3], [2, 0, 6], [3, 6, 0]]) == [[0, 1, 1], [1, 0, 0], [1, 0, 0]] 94 | assert min_spanning_tree([[0, 1, 2], [1, 0, 1], [2, 1, 0]]) == [[0, 1, 0], [1, 0, 1], [0, 1, 0]] 95 | assert min_spanning_tree([[0, 1, 3, 1000], [1, 0, 1, 4], [3, 1, 0, 2], [1000, 4, 2, 0]]) == [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]] 96 | assert min_spanning_tree([[0, 1, 3], [1, 0, 1], [3, 1, 0]]) == [[0, 1, 0], [1, 0, 1], [0, 1, 0]] 97 | assert min_spanning_tree([[0, 1, 3, 5], [1, 0, 2, 4], [3, 2, 0, 1], [5, 4, 1, 0]]) == [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]] 98 | assert min_spanning_tree([[0, 2, 5], [2, 0, 1], [5, 1, 0]]) == [[0, 1, 0], [1, 0, 1], [0, 1, 0]] 99 | assert min_spanning_tree([[0, 2, 5], [2, 0, 3], [5, 3, 0]]) == [[0, 1, 0], [1, 0, 1], [0, 1, 0]] 100 | assert min_cost_airports([[0, 1, 1, 1], [1, 0, 1, 1], [1, 1, 0, 1], [1, 1, 1, 0]], [1, 1, 1, 1]) == [] 101 | assert min_cost_airports([[0, 1, 2, 4], [1, 0, 1, 3], [2, 1, 0, 2], [4, 3, 2, 0]], [3, 1, 2, 0]) == [1, 3] 102 | assert min_cost_airports([[0, 1, 1], [1, 0, 1], [1, 1, 0]], [0, 1, 1]) == [] 103 | assert min_cost_airports([[0, 2, 2], [2, 0, 2], [2, 2, 0]], [1, 2, 1]) == [0, 2] 104 | assert min_cost_airports([[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]], [0, 0, 0, 0]) == [] 105 | assert add_sky_node([[0, 1, 1], [1, 0, 1], [1, 1, 0]], [1, 1, 1]) == [[0, 1, 1, 1], [1, 0, 1, 1], [1, 1, 0, 1], [1, 1, 1, 0]] 106 | assert add_sky_node([[0, 1, 2], [3, 0, 4], [5, 6, 0]], [1, 1, 1]) == [[0, 1, 2, 1], [3, 0, 4, 1], [5, 6, 0, 1], [1, 1, 1, 0]] 107 | assert add_sky_node([[0, 1, 2], [1, 0, 3], [2, 3, 0]], [5, 1, 2]) == [[0, 1, 2, 5], [1, 0, 3, 1], [2, 3, 0, 2], [5, 1, 2, 0]] 108 | assert add_sky_node([[0, 1, 2], [2, 0, 3], [4, 3, 0]], [1, 1, 1]) == [[0, 1, 2, 1], [2, 0, 3, 1], [4, 3, 0, 1], [1, 1, 1, 0]] 109 | assert add_sky_node([[0, 1, 3], [1, 0, 4], [3, 4, 
0]], [5, 6, 7]) == [[0, 1, 3, 5], [1, 0, 4, 6], [3, 4, 0, 7], [5, 6, 7, 0]] 110 | assert add_sky_node([[0, 1, 2], [1, 0, 3], [2, 3, 0]], [0, 1, 2]) == [[0, 1, 2, 0], [1, 0, 3, 1], [2, 3, 0, 2], [0, 1, 2, 0]] 111 | assert add_sky_node([[0,1,2],[1,0,1],[2,1,0]], [0,1,2]) == [[0,1,2,0],[1,0,1,1],[2,1,0,2],[0,1,2,0]] 112 | assert add_sky_node([[0, 10, 20], [10, 0, 30], [20, 30, 0]], [100, 200, 300]) == [[0, 10, 20, 100], [10, 0, 30, 200], [20, 30, 0, 300], [100, 200, 300, 0]] 113 | assert add_sky_node([[0, 10], [10, 0]], [5, 5]) == [[0, 10, 5], [10, 0, 5], [5, 5, 0]] 114 | assert add_sky_node([[0, 2, 5], [2, 0, 3], [5, 3, 0]], [1, 3, 2]) == [[0, 2, 5, 1], [2, 0, 3, 3], [5, 3, 0, 2], [1, 3, 2, 0]] -------------------------------------------------------------------------------- /programs/problem_solving_no_args_no_tests.ss: -------------------------------------------------------------------------------- 1 | given a matrix representing the cost of building a road between any two cities, and a list representing the cost of building an airport in a city (where any two cities with airports are connected), return a list of the cities that should have airports built in them to minimize the total cost of building roads and airports such that all cities are connected. The list should be sorted in ascending order. 2 | given a list of lists representing the cost of building a road between any two cities, and a list representing the cost of building an airport in a city, return a new cost matrix with a new node corresponding to the sky. 3 | given a list of lists representing the cost of each edge, return an adjacency matrix corresponding to the minimum spanning tree. all entries in the adjacency matrix should be 0 or 1. 4 | given a list of lists representing an adjacency matrix without self-loops, return a list of the nodes connected to the final node. However, if only one node is connected to the final node, return an empty list. -------------------------------------------------------------------------------- /programs/problem_solving_no_tests.py: -------------------------------------------------------------------------------- 1 | import sys 2 | import time 3 | import itertools 4 | from itertools import accumulate, product, permutations, combinations 5 | import collections 6 | from collections import Counter, OrderedDict, deque, defaultdict, ChainMap 7 | from functools import lru_cache 8 | import math 9 | from math import sqrt, sin, cos, tan, ceil, fabs, floor, gcd, exp, log, log2 10 | import fractions 11 | from typing import List, Tuple 12 | import numpy as np 13 | import random 14 | import heapq 15 | # given a list of lists representing the cost of building a road between any two cities, and a list representing the cost of building an airport in a city, return a new cost matrix with a new node corresponding to the sky. 16 | def sky_city_cost(city_road_cost, city_airport_cost): 17 | assert len(city_road_cost) == len(city_airport_cost) 18 | num_cities = len(city_airport_cost) 19 | sky_city_cost = [[0] * (num_cities + 1) for _ in range(num_cities + 1)] 20 | for i in range(num_cities): 21 | for j in range(num_cities): 22 | sky_city_cost[i][j] = city_road_cost[i][j] 23 | sky_city_cost[i][num_cities] = city_airport_cost[i] 24 | sky_city_cost[num_cities][i] = city_airport_cost[i] 25 | return sky_city_cost 26 | 27 | # given a list of lists representing the cost of each edge, return an adjacency matrix corresponding to the minimum spanning tree. all entries in the adjacency matrix should be 0 or 1.
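# Editor's aside (not part of the original file): a property checker one could run
# against either minimum_spanning_tree implementation in this repo (illustrative
# sketch; `is_spanning_tree` is our name, not a repo function). A valid output is a
# symmetric 0/1 matrix with exactly n - 1 edges that connects all n vertices:
#
# def is_spanning_tree(adj):
#     n = len(adj)
#     if n == 0:
#         return True
#     if any(adj[i][j] != adj[j][i] for i in range(n) for j in range(n)):
#         return False
#     if sum(map(sum, adj)) != 2 * (n - 1):
#         return False
#     seen, stack = {0}, [0]
#     while stack:
#         i = stack.pop()
#         for j in range(n):
#             if adj[i][j] and j not in seen:
#                 seen.add(j)
#                 stack.append(j)
#     return len(seen) == n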
28 | def minimum_spanning_tree(cost_matrix): 29 | ''' 30 | You will have to implement this method. 31 | ''' 32 | # First we need to initialize the graph 33 | num_vertices = len(cost_matrix) 34 | graph = [[0 for i in range(num_vertices)] for j in range(num_vertices)] 35 | for i in range(num_vertices): 36 | for j in range(i + 1, num_vertices): 37 | graph[i][j] = cost_matrix[i][j] 38 | graph[j][i] = cost_matrix[i][j] 39 | 40 | # Now we need to use Prim's algorithm to find the minimum spanning tree 41 | # We will start with vertex 0 42 | visited = [0] 43 | edges = [] 44 | # We will keep track of the edges and the total cost 45 | total_cost = 0 46 | # We need to find the next vertex to visit 47 | while len(visited) < num_vertices: 48 | # We need to find the lowest cost edge 49 | lowest_cost = float("inf") 50 | lowest_vertex = None 51 | lowest_edge = None 52 | for vertex in visited: 53 | for i in range(num_vertices): 54 | if i not in visited and graph[vertex][i] < lowest_cost: 55 | lowest_cost = graph[vertex][i] 56 | lowest_vertex = i 57 | lowest_edge = (vertex, i) 58 | # Now we can add the lowest cost edge to the tree 59 | visited.append(lowest_vertex) 60 | edges.append(lowest_edge) 61 | total_cost += lowest_cost 62 | 63 | # Now we need to return the adjacency matrix of the minimum spanning tree 64 | adjacency_matrix = [[0 for i in range(num_vertices)] for j in range(num_vertices)] 65 | for edge in edges: 66 | adjacency_matrix[edge[0]][edge[1]] = 1 67 | adjacency_matrix[edge[1]][edge[0]] = 1 68 | 69 | return adjacency_matrix 70 | 71 | # given a list of lists representing an adjacency matrix, return a list of the nodes connected to the final node. However, if only one node is connected to the final node, return an empty list. 72 | def final_node_connectors(adjacency_matrix): 73 | final_node_connectors = [] 74 | for i in range(0, len(adjacency_matrix)): 75 | if adjacency_matrix[i][-1] == 1: 76 | final_node_connectors.append(i) 77 | if len(final_node_connectors) == 1: 78 | final_node_connectors = [] 79 | return final_node_connectors 80 | 81 | # given a matrix representing the cost of building a road between any two cities, and a list representing the cost of building an airport in a city (where any two cities with airports are connected), return a list of the cities that should have airports built in them to minimize the total cost of building roads and airports such that all cities are connected. The list should be sorted in ascending order. 
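# Editor's aside (not part of the original file): when several edges tie for the
# minimum, the Prim implementation above keeps the first one found, scanning visited
# vertices in insertion order and candidates in ascending index. The generated
# asserts below bake in that deterministic tie-break: for example,
# minimum_spanning_tree([[0, 1, 1], [1, 0, 1], [1, 1, 0]]) is expected to be the
# star [[0, 1, 1], [1, 0, 0], [1, 0, 0]], although the path through vertex 1 is an
# equally cheap spanning tree. A different but still correct MST routine could fail
# these tests, which is worth keeping in mind when swapping implementations.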
82 | def select_airport_cities(city_road_cost, city_airport_cost): 83 | # YOUR CODE GOES HERE 84 | new_cost_matrix = sky_city_cost(city_road_cost, city_airport_cost) 85 | adjacency_matrix = minimum_spanning_tree(new_cost_matrix) 86 | airport_cities = final_node_connectors(adjacency_matrix) 87 | airport_cities.sort() 88 | 89 | return airport_cities 90 | 91 | 92 | assert minimum_spanning_tree([[0, 2, 1], [2, 0, 4], [1, 4, 0]]) == [[0, 1, 1], [1, 0, 0], [1, 0, 0]] 93 | assert minimum_spanning_tree([[0,1],[1,0]]) == [[0,1],[1,0]] 94 | assert minimum_spanning_tree([[0, 1, 2], [1, 0, 1], [2, 1, 0]]) == [[0, 1, 0], [1, 0, 1], [0, 1, 0]] 95 | assert minimum_spanning_tree([[0,2,4],[2,0,2],[4,2,0]]) == [[0,1,0],[1,0,1],[0,1,0]] 96 | assert minimum_spanning_tree([[0,2,1],[2,0,3],[1,3,0]]) == [[0,1,1],[1,0,0],[1,0,0]] 97 | assert minimum_spanning_tree([[0, 2, 5], [2, 0, 3], [5, 3, 0]]) == [[0, 1, 0], [1, 0, 1], [0, 1, 0]] 98 | assert minimum_spanning_tree([[0, 2, 1], [2, 0, 2], [1, 2, 0]]) == [[0, 1, 1], [1, 0, 0], [1, 0, 0]] 99 | assert minimum_spanning_tree([[0, 1, 3, 100], [1, 0, 1, 3], [3, 1, 0, 1], [100, 3, 1, 0]]) == [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]] 100 | assert minimum_spanning_tree([[0, 1, 3, 5], [1, 0, 1, 4], [3, 1, 0, 2], [5, 4, 2, 0]]) == [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]] 101 | assert minimum_spanning_tree([[0, 1, 3], [1, 0, 2], [3, 2, 0]]) == [[0, 1, 0], [1, 0, 1], [0, 1, 0]] 102 | assert isinstance(minimum_spanning_tree([[0, 2, 1], [2, 0, 1], [1, 1, 0]]), list) 103 | assert minimum_spanning_tree([[0, 2, 5], [2, 0, 1], [5, 1, 0]]) == [[0, 1, 0], [1, 0, 1], [0, 1, 0]] 104 | assert minimum_spanning_tree([[0, 2, 3], [2, 0, 1], [3, 1, 0]]) == [[0, 1, 0], [1, 0, 1], [0, 1, 0]] 105 | assert minimum_spanning_tree([[0, 2, 1, 4], [2, 0, 4, 1], [1, 4, 0, 2], [4, 1, 2, 0]]) == [[0, 1, 1, 0], [1, 0, 0, 1], [1, 0, 0, 0], [0, 1, 0, 0]] 106 | assert minimum_spanning_tree([[0, 1, 1], [1, 0, 1], [1, 1, 0]]) == [[0, 1, 1], [1, 0, 0], [1, 0, 0]] 107 | assert minimum_spanning_tree([[0,2,3],[2,0,1],[3,1,0]]) == [[0,1,0],[1,0,1],[0,1,0]] 108 | assert minimum_spanning_tree([[0, 1, 3], [1, 0, 1], [3, 1, 0]]) == [[0, 1, 0], [1, 0, 1], [0, 1, 0]] 109 | assert minimum_spanning_tree([[0, 2, 1, 3], [2, 0, 3, 1], [1, 3, 0, 2], [3, 1, 2, 0]]) == [[0, 1, 1, 0], [1, 0, 0, 1], [1, 0, 0, 0], [0, 1, 0, 0]] 110 | assert minimum_spanning_tree([[0, 1, 3, 1000], [1, 0, 1, 3], [3, 1, 0, 1], [1000, 3, 1, 0]]) == [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], 'incorrect answer' 111 | assert minimum_spanning_tree([[0, 2, 5, 1], [2, 0, 3, 2], [5, 3, 0, 4], [1, 2, 4, 0]]) == [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 0], [1, 0, 0, 0]] 112 | assert minimum_spanning_tree([[0, 1, 2], [1, 0, 2], [2, 2, 0]]) == [[0, 1, 1], [1, 0, 0], [1, 0, 0]] 113 | assert minimum_spanning_tree([]) == [] 114 | assert minimum_spanning_tree([[0, 1, 3, 100], [1, 0, 1, 2], [3, 1, 0, 1], [100, 2, 1, 0]]) == [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]] 115 | assert minimum_spanning_tree([[0, 1, 3, 100], [1, 0, 1, 100], [3, 1, 0, 1], [100, 100, 1, 0]]) == [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]] 116 | assert minimum_spanning_tree([[0, 1, 3, 4], [1, 0, 1, 2], [3, 1, 0, 1], [4, 2, 1, 0]]) == [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]] 117 | assert minimum_spanning_tree([[0,1,3],[1,0,2],[3,2,0]]) == [[0,1,0],[1,0,1],[0,1,0]] 118 | assert minimum_spanning_tree([[0, 1, 3, 100], [1, 0, 1, 5], [3, 1, 0, 2], [100, 5, 2, 0]]) == [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 
1, 0]] 119 | assert minimum_spanning_tree([[0, 2, 1], [2, 0, 3], [1, 3, 0]]) == [[0, 1, 1], [1, 0, 0], [1, 0, 0]] 120 | assert sky_city_cost([[0, 1], [1, 0]], [2, 3]) == [[0, 1, 2], [1, 0, 3], [2, 3, 0]] 121 | assert sky_city_cost([[0, 1, 2], [1, 0, 2], [2, 2, 0]], [100, 200, 300]) == [[0, 1, 2, 100], [1, 0, 2, 200], [2, 2, 0, 300], [100, 200, 300, 0]] 122 | assert sky_city_cost([[0, 1, 2],[1, 0, 2], [2, 2, 0]], [3, 4, 5]) == [[0, 1, 2, 3], [1, 0, 2, 4], [2, 2, 0, 5], [3, 4, 5, 0]] 123 | assert sky_city_cost([[0,1,5], [1,0,2], [5,2,0]], [3,1,2]) == [[0, 1, 5, 3], [1, 0, 2, 1], [5, 2, 0, 2], [3, 1, 2, 0]] 124 | assert sky_city_cost([[0,1,1], [1,0,1], [1,1,0]], [1,2,3]) == [[0,1,1,1], [1,0,1,2], [1,1,0,3], [1,2,3,0]] 125 | assert sky_city_cost([[0, 1, 2], [1, 0, 3], [2, 3, 0]], [10, 20, 30]) == [[0, 1, 2, 10], [1, 0, 3, 20], [2, 3, 0, 30], [10, 20, 30, 0]] 126 | assert sky_city_cost([[0, 1, 2], [1, 0, 1], [2, 1, 0]], [1, 2, 3]) == [[0, 1, 2, 1], [1, 0, 1, 2], [2, 1, 0, 3], [1, 2, 3, 0]] 127 | assert sky_city_cost([[0, 2, 3],[3, 0, 2],[3, 2, 0]], [1, 2, 3]) == [[0, 2, 3, 1], [3, 0, 2, 2], [3, 2, 0, 3], [1, 2, 3, 0]] 128 | assert sky_city_cost([[1,4,5],[4,2,6],[5,6,3]], [10,5,7]) == [[1,4,5,10],[4,2,6,5],[5,6,3,7],[10,5,7,0]] 129 | assert sky_city_cost([[0, 1, 2], [1, 0, 1], [2, 1, 0]], [3, 2, 1]) == [[0, 1, 2, 3], [1, 0, 1, 2], [2, 1, 0, 1], [3, 2, 1, 0]] 130 | assert sky_city_cost([[0,2,3],[2,0,1],[3,1,0]], [0,1,2]) == [[0,2,3,0], [2,0,1,1], [3,1,0,2], [0,1,2,0]] 131 | assert sky_city_cost([[0, 1, 2], [1, 0, 3], [2, 3, 0]], [5, 6, 7]) == [[0, 1, 2, 5], [1, 0, 3, 6], [2, 3, 0, 7], [5, 6, 7, 0]] 132 | assert sky_city_cost([[0, 1, 1], [1, 0, 1], [1, 1, 0]], [1, 1, 1]) == [[0, 1, 1, 1], [1, 0, 1, 1], [1, 1, 0, 1], [1, 1, 1, 0]] 133 | assert sky_city_cost([[0, 1, 2], [1, 0, 3], [2, 3, 0]], [4, 5, 6]) == [[0, 1, 2, 4], [1, 0, 3, 5], [2, 3, 0, 6], [4, 5, 6, 0]] 134 | assert sky_city_cost([[0, 1, 2], [2, 0, 3], [4, 3, 0]], [1, 2, 3]) == [[0, 1, 2, 1], [2, 0, 3, 2], [4, 3, 0, 3], [1, 2, 3, 0]] 135 | assert sky_city_cost([[0, 10, 10], [10, 0, 10], [10, 10, 0]], [50, 50, 50]) == [[0, 10, 10, 50], [10, 0, 10, 50], [10, 10, 0, 50], [50, 50, 50, 0]] 136 | assert sky_city_cost([[0, 2, 1], [1, 0, 2], [2, 1, 0]], [0, 3, 2]) == [[0, 2, 1, 0], [1, 0, 2, 3], [2, 1, 0, 2], [0, 3, 2, 0]] 137 | assert sky_city_cost([[0,5,5],[5,0,5],[5,5,0]], [10,10,10]) == [[0,5,5,10],[5,0,5,10],[5,5,0,10],[10,10,10,0]] 138 | assert sky_city_cost([[0, 1, 1], [1, 0, 1], [1, 1, 0]], [1, 2, 1]) == [[0, 1, 1, 1], [1, 0, 1, 2], [1, 1, 0, 1], [1, 2, 1, 0]] 139 | assert sky_city_cost([[0,1,1],[1,0,1],[1,1,0]], [1,1,1]) == [[0,1,1,1],[1,0,1,1],[1,1,0,1],[1,1,1,0]] 140 | assert sky_city_cost([[0, 1, 2], [1, 0, 3], [2, 3, 0]], [2, 3, 4]) == [[0, 1, 2, 2], [1, 0, 3, 3], [2, 3, 0, 4], [2, 3, 4, 0]] 141 | assert sky_city_cost([[0, 1, 2], [1, 0, 2], [2, 2, 0]], [1, 2, 3]) == [[0, 1, 2, 1], [1, 0, 2, 2], [2, 2, 0, 3], [1, 2, 3, 0]] 142 | assert sky_city_cost([[0,1], [1,0]], [2,3]) == [[0, 1, 2], [1, 0, 3], [2, 3, 0]] 143 | assert sky_city_cost([[0, 1, 2], [1, 0, 2], [2, 2, 0]], [3, 3, 3]) == [[0, 1, 2, 3], [1, 0, 2, 3], [2, 2, 0, 3], [3, 3, 3, 0]] 144 | assert sky_city_cost([[0, 5, 4], [5, 0, 3], [4, 3, 0]], [5, 6, 4]) == [[0, 5, 4, 5], [5, 0, 3, 6], [4, 3, 0, 4], [5, 6, 4, 0]] 145 | assert sky_city_cost([[0, 1, 2], [1, 0, 1], [2, 1, 0]], [10, 10, 10]) == [[0, 1, 2, 10], [1, 0, 1, 10], [2, 1, 0, 10], [10, 10, 10, 0]] 146 | assert sky_city_cost([[0, 1, 10], [1, 0, 10], [10, 10, 0]], [10, 20, 30]) == [[0, 1, 10, 10], [1, 0, 10, 20], [10, 10, 0, 
30], [10, 20, 30, 0]] 147 | assert sky_city_cost([[0,1,2],[1,0,3],[2,3,0]], [4,5,6]) == [[0,1,2,4],[1,0,3,5],[2,3,0,6],[4,5,6,0]] 148 | assert sky_city_cost([[0, 1, 2], [1, 0, 3], [2, 3, 0]], [1, 3, 3]) == [[0, 1, 2, 1], [1, 0, 3, 3], [2, 3, 0, 3], [1, 3, 3, 0]] 149 | assert sky_city_cost([[0, 10, 20], [10, 0, 15], [20, 15, 0]], [25, 30, 20]) == [[0, 10, 20, 25], [10, 0, 15, 30], [20, 15, 0, 20], [25, 30, 20, 0]] 150 | assert sky_city_cost([[1,2,3], [2,3,4], [3,4,5]], [1,2,3]) == [[1,2,3,1],[2,3,4,2],[3,4,5,3],[1,2,3,0]] 151 | assert sky_city_cost([[2, 3], [4, 5]], [1, 2]) == [[2, 3, 1], [4, 5, 2], [1, 2, 0]] 152 | assert sky_city_cost([[0, 1, 1], [1, 0, 1], [1, 1, 0]], [0, 1, 2]) == [[0, 1, 1, 0], [1, 0, 1, 1], [1, 1, 0, 2], [0, 1, 2, 0]] 153 | assert sky_city_cost([[0,1,2],[1,0,2],[2,2,0]], [3,4,5]) == [[0,1,2,3],[1,0,2,4],[2,2,0,5],[3,4,5,0]] 154 | assert sky_city_cost([[0, 3, 5], [3, 0, 1], [5, 1, 0]], [6, 2, 4]) == [[0, 3, 5, 6], [3, 0, 1, 2], [5, 1, 0, 4], [6, 2, 4, 0]] 155 | assert sky_city_cost([[1, 2], [3, 4]], [5, 6]) == [[1, 2, 5], [3, 4, 6], [5, 6, 0]] 156 | assert sky_city_cost([[0, 1, 1], [1, 0, 1], [1, 1, 0]], [0, 1, 1]) == [[0, 1, 1, 0], [1, 0, 1, 1], [1, 1, 0, 1], [0, 1, 1, 0]] 157 | assert sky_city_cost([[0,1,2],[1,0,0],[2,0,0]], [4,4,4]) == [[0,1,2,4],[1,0,0,4],[2,0,0,4],[4,4,4,0]] 158 | assert sky_city_cost([[0,1,3],[1,0,5],[3,5,0]], [2,2,2]) == [[0,1,3,2],[1,0,5,2],[3,5,0,2],[2,2,2,0]] 159 | assert select_airport_cities([[1, 2, 3], [4, 5, 6], [7, 8, 9]], [10, 11, 12]) == [] 160 | assert select_airport_cities([[0, 1, 2, 3], [1, 0, 2, 3], [2, 2, 0, 3], [3, 3, 3, 0]], [1, 2, 3, 4]) == [] 161 | assert select_airport_cities([[0, 1, 2], [1, 0, 1], [2, 1, 0]], [1, 1, 1]) == [] 162 | assert select_airport_cities([[0, 1], [1, 0]], [1, 1]) == [] 163 | assert select_airport_cities([[0, 1, 5], [1, 0, 2], [5, 2, 0]], [2, 3, 1]) == [0, 2] 164 | assert select_airport_cities([[0, 1, 1], [1, 0, 1], [1, 1, 0]], [0, 1, 1]) == [], "should be an empty list" 165 | assert select_airport_cities([[0, 1, 2], [1, 0, 3], [2, 3, 0]], [1, 2, 3]) == [] 166 | assert select_airport_cities([[0, 5, 4], [5, 0, 3], [4, 3, 0]], [7, 8, 5]) == [] 167 | assert select_airport_cities([[0, 1, 1, 1], [1, 0, 1, 1], [1, 1, 0, 1], [1, 1, 1, 0]], [4, 5, 6, 7]) == [] 168 | assert select_airport_cities([[0, 1, 2, 1], [1, 0, 2, 1], [2, 2, 0, 1], [1, 1, 1, 0]], [2, 1, 0, 3]) == [1, 2] 169 | assert select_airport_cities([[0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0]], [0, 0, 0, 0, 0]) == [] 170 | assert select_airport_cities([[0, 1, 1, 1], [1, 0, 1, 1], [1, 1, 0, 1], [1, 1, 1, 0]], [1, 1, 1, 1]) == [] 171 | assert select_airport_cities([[0, 1, 2, 3], [1, 0, 2, 3], [2, 2, 0, 3], [3, 3, 3, 0]], [4, 5, 6, 7]) == [] 172 | assert select_airport_cities([[0,1,1,1,1],[1,0,1,1,1],[1,1,0,1,1],[1,1,1,0,1],[1,1,1,1,0]],[2,2,2,2,2]) == [] 173 | assert select_airport_cities([[0, 1, 5], [1, 0, 2], [5, 2, 0]], [3, 3, 3]) == [] 174 | assert select_airport_cities([[0, 1, 2, 3], [1, 0, 2, 3], [2, 2, 0, 3], [3, 3, 3, 0]], [0, 0, 0, 0]) == [0, 1, 2, 3] 175 | assert select_airport_cities([[0,1,1,1,1],[1,0,1,1,1],[1,1,0,1,1],[1,1,1,0,1],[1,1,1,1,0]], [5,5,5,5,5]) == [] 176 | assert select_airport_cities([[1, 2, 3], [4, 5, 6], [7, 8, 9]], [1, 2, 3]) == [] 177 | assert select_airport_cities([[0, 1, 2], [1, 0, 2], [2, 2, 0]], [2, 3, 1]) == [] 178 | assert select_airport_cities([[0, 1, 5, 9], [1, 0, 2, 6], [5, 2, 0, 4], [9, 6, 4, 0]], [10, 5, 3, 1]) == [2, 3] 179 | assert select_airport_cities([[0, 0, 1], [0, 
0, 1], [1, 1, 0]], [1, 1, 1]) == [] 180 | assert select_airport_cities([[0, 1, 1], [1, 0, 1], [1, 1, 0]], [0, 1, 1]) == [] 181 | assert select_airport_cities([[0, 1, 2], [1, 0, 3], [2, 3, 0]], [3, 2, 1]) == [] 182 | assert select_airport_cities([[0, 100, 100, 100], [100, 0, 100, 100], [100, 100, 0, 100], [100, 100, 100, 0]], [100, 100, 100, 100]) == [] 183 | assert select_airport_cities([[0,2,2,2,1],[2,0,2,2,1],[2,2,0,2,1],[2,2,2,0,1],[1,1,1,1,0]], [5,5,5,5,5]) == [] 184 | assert select_airport_cities([[0,1,1,100,100,100], [1,0,100,1,100,100], [1,100,0,100,1,100], [100,1,100,0,100,1], [100,100,1,100,0,100], [100,100,100,1,100,0]], [0,0,0,0,0,0]) 185 | assert select_airport_cities([[0, 1, 1, 1], [1, 0, 1, 1], [1, 1, 0, 1], [1, 1, 1, 0]], [.25, .25, .25, .25]) == [0, 1, 2, 3] 186 | assert select_airport_cities([[0, 3, 3], [3, 0, 3], [3, 3, 0]], [0, 1, 2]) == [0, 1, 2] 187 | assert select_airport_cities([[0, 1, 3], [1, 0, 1], [3, 1, 0]], [2, 1, 0]) == [] 188 | assert select_airport_cities([[0, 1, 1], [1, 0, 1], [1, 1, 0]], [1, 1, 1]) == [] 189 | assert select_airport_cities([[0, 1, 1, 1], [1, 0, 1, 1], [1, 1, 0, 1], [1, 1, 1, 0]], [0, 1, 1, 1]) == [] 190 | assert select_airport_cities([[0,1,1,1,1],[1,0,1,1,1],[1,1,0,1,1],[1,1,1,0,1],[1,1,1,1,0]], [0,1,1,1,1]) == [] 191 | assert select_airport_cities([[0,1,1,1],[1,0,1,1],[1,1,0,1],[1,1,1,0]], [1,1,1,1]) == [] 192 | assert final_node_connectors([[0, 1, 1, 0, 0, 0], [1, 0, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0]]) == [] 193 | assert final_node_connectors([[0,0,0,0,0],[0,0,0,0,0],[0,0,0,0,0],[0,0,0,0,0],[0,0,0,0,0]]) == [] 194 | assert final_node_connectors([[0, 1, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 1], [0, 0, 0, 0, 1, 0]]) == [] 195 | assert final_node_connectors([[1, 2], [2, 3], [3, 4], [4, 5], [5, 6], [6, 7], [7, 8], [8, 9], [9, 10], [10, 11]]) == [] 196 | assert final_node_connectors([[0, 1, 1, 0], [0, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]]) == [] 197 | assert final_node_connectors([[0, 1, 1, 0], [1, 0, 1, 1], [1, 1, 0, 1], [0, 1, 1, 0]]) == [1, 2] 198 | assert final_node_connectors([[0,0],[1,0]]) == [] 199 | assert final_node_connectors([[1,0,0,0],[0,1,0,0],[0,0,1,0],[0,0,0,1]]) == [] 200 | assert final_node_connectors([[0],[1],[0],[0]]) == [], "expected []" 201 | assert final_node_connectors([[0, 1, 1, 0], [0, 0, 0, 1], [0, 0, 0, 1], [0, 0, 0, 0]]) == [1, 2] 202 | assert final_node_connectors([[0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 0, 1], [0, 0, 0, 0]]) == [1, 2] 203 | assert final_node_connectors([[0,0,0,1,0],[0,0,0,0,0],[0,0,0,1,0],[0,0,0,0,0],[0,0,0,0,0]]) == [] 204 | assert final_node_connectors([[0,1,1,1,0],[0,0,0,0,0],[0,0,0,0,0],[0,0,0,0,0],[0,0,0,0,0]]) == [] 205 | assert final_node_connectors([[0,1],[1,0]]) == [] 206 | assert final_node_connectors([[0,1,0,0],[0,0,1,0],[0,0,0,0],[0,0,1,0]]) == [] 207 | assert final_node_connectors([[0,0,0],[1,0,0],[1,1,0]]) == [] 208 | assert final_node_connectors([[0,0,0,0],[0,0,1,1],[0,1,0,0],[0,1,0,0]]) == [] 209 | assert final_node_connectors([[0, 0, 1, 0], [0, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]]) == [], "Test 1 failed" 210 | assert final_node_connectors([[0, 1, 0, 0, 0, 0], [1, 0, 1, 0, 0, 0], [0, 1, 0, 1, 0, 0], [0, 0, 1, 0, 1, 0], [0, 0, 0, 1, 0, 1], [0, 0, 0, 0, 1, 0]]) == [] 211 | assert final_node_connectors([[0, 1, 1, 0], [1, 0, 0, 1], [1, 0, 0, 1], [0, 1, 1, 0]]) == [1, 2] 212 | assert final_node_connectors([[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]) == [] 213 | assert 
final_node_connectors([[0, 1], [1, 0]]) == [], 'test 1' 214 | assert final_node_connectors([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]) == [0, 2] 215 | assert final_node_connectors([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]]) == [] # only one node connected to the final node, so the answer is an empty list 216 | assert final_node_connectors([[1,0,0,0],[0,1,0,0],[0,0,0,0],[0,0,1,0]]) == [] 217 | assert final_node_connectors([[0, 1, 1], [0, 0, 1], [0, 0, 0]]) == [0, 1] 218 | assert final_node_connectors([[0, 1], [1, 0]]) == [] 219 | assert final_node_connectors([[0,1,1,1],[1,0,1,0],[1,1,0,1],[1,0,1,0]]) == [0,2] 220 | assert final_node_connectors([[0, 0, 1, 0, 0], [0, 0, 1, 1, 0], [0, 0, 0, 1, 1], [0, 1, 0, 0, 1], [0, 0, 0, 1, 0]]) == [2, 3] 221 | assert final_node_connectors([[0,0,0,0],[0,0,0,1],[0,0,0,1],[0,0,0,0]]) == [1,2] 222 | assert final_node_connectors([[0, 1, 0, 0, 0], [0, 0, 1, 0, 0], [0, 0, 0, 1, 0], [0, 0, 0, 0, 1], [0, 0, 0, 0, 0]]) == [] 223 | assert final_node_connectors([[0, 1, 0], [0, 0, 1], [0, 0, 0]]) == [] 224 | assert final_node_connectors([[0, 0, 1, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]) == [] 225 | assert final_node_connectors([[0, 1, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0]]) == [] 226 | assert final_node_connectors([[0,0,0,0], [0,0,0,0], [0,0,0,0], [0,0,0,0]]) == [] 227 | assert final_node_connectors([[0, 1, 0, 0, 0, 0], [0, 0, 1, 0, 0, 1], [0, 0, 0, 1, 0, 0], [0, 0, 0, 0, 1, 0], [0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0]]) == [], 'incorrect' 228 | assert final_node_connectors([[0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0]]) == [] 229 | assert final_node_connectors([[0, 1, 0], [0, 0, 1], [1, 0, 0]]) == [] 230 | assert final_node_connectors([[0,0,0],[0,0,0],[0,0,0]]) == [] 231 | assert final_node_connectors([[0, 0, 0, 1], [1, 0, 0, 1], [0, 0, 0, 1], [0, 0, 0, 0]]) == [0, 1, 2] 232 | assert final_node_connectors([[0,1,1],[1,0,1],[1,1,0]]) == [0,1] 233 | assert final_node_connectors([[1, 2], [2]]) == [] 234 | assert final_node_connectors([[0,1,1,1,1], [1,0,0,0,0], [1,0,0,0,0], [1,0,0,0,0], [1,0,0,0,0]]) == [] 235 | assert final_node_connectors([[0,0,0,0,1],[1,0,0,0,0],[0,1,0,0,0],[0,0,1,0,0],[0,0,0,1,0]]) == [] 236 | assert final_node_connectors([[0,0,0,0],[0,0,0,0],[0,0,0,0],[0,0,0,0]]) == [] 237 | assert final_node_connectors([[0,1],[0,0]]) == [], 'incorrect' 238 | assert final_node_connectors([[0, 1, 1], [1, 0, 0], [1, 0, 0]]) == [] 239 | assert final_node_connectors([[0,1,0,0,0,0], [1,0,1,0,0,0], [0,1,0,1,0,0], [0,0,1,0,1,1], [0,0,0,1,0,1], [0,0,0,1,1,0]]) == [3, 4], 'incorrect' 240 | assert final_node_connectors([[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 0], [0, 0, 0, 0]]) == [], "incorrect output" 241 | assert final_node_connectors([[0,0,0],[0,0,1],[0,0,0]]) == [] 242 | -------------------------------------------------------------------------------- /programs/problem_solving_no_tests.ss: -------------------------------------------------------------------------------- 1 | select_airport_cities(city_road_cost, city_airport_cost): given a matrix representing the cost of building a road between any two cities, and a list representing the cost of building an airport in a city (where any two cities with airports are connected), return a list of the cities that should have airports built in them to minimize the total cost of building roads and airports such that all cities are connected. The list should be sorted in ascending order. 
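The three helper specs that follow (sky_city_cost, minimum_spanning_tree, final_node_connectors) decompose this task: augment the road-cost matrix with a virtual "sky" node whose edge to each city costs that city's airport, take a minimum spanning tree of the augmented graph, and read off which cities the tree connects to the sky. As a minimal Python sketch of that composition — an illustration assuming a Prim's-style MST, not the repo's generated solution, and with tie-breaking that may differ from some of the auto-generated asserts above:

```python
def sky_city_cost(city_road_cost, city_airport_cost):
    # Augment the road-cost matrix with one extra "sky" node; the edge
    # between city i and the sky costs city_airport_cost[i].
    cost = [row[:] + [city_airport_cost[i]] for i, row in enumerate(city_road_cost)]
    cost.append(list(city_airport_cost) + [0])
    return cost

def minimum_spanning_tree(cost_matrix):
    # Prim's algorithm over a dense cost matrix; returns a 0/1 adjacency matrix.
    n = len(cost_matrix)
    adj = [[0] * n for _ in range(n)]
    in_tree = {0} if n else set()
    while len(in_tree) < n:
        # Cheapest edge leaving the tree (ties broken by node index).
        _, i, j = min(
            (cost_matrix[i][j], i, j)
            for i in in_tree
            for j in range(n)
            if j not in in_tree
        )
        adj[i][j] = adj[j][i] = 1
        in_tree.add(j)
    return adj

def final_node_connectors(adjacency_matrix):
    # Nodes with an edge to the final (sky) node; a single connector means
    # an airport buys no extra connectivity, so return [] in that case.
    if not adjacency_matrix:
        return []
    last = len(adjacency_matrix) - 1
    connectors = [i for i in range(last) if adjacency_matrix[i][last]]
    return connectors if len(connectors) > 1 else []

def select_airport_cities(city_road_cost, city_airport_cost):
    cost = sky_city_cost(city_road_cost, city_airport_cost)
    mst = minimum_spanning_tree(cost)
    return sorted(final_node_connectors(mst))
```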
2 | sky_city_cost(city_road_cost, city_airport_cost): given a list of lists representing the cost of building a road between any two cities, and a list representing the cost of building an airport in a city, return a new cost matrix with a new node corresponding to the sky. 3 | minimum_spanning_tree(cost_matrix): given a list of lists representing the cost of each edge, return an adjacency matrix corresponding to the minimum spanning tree. all entries in the adjacency matrix should be 0 or 1. 4 | final_node_connectors(adjacency_matrix): given a list of lists representing an adjacency matrix, return a list of the nodes connected to the final node. However, if only one node is connected to the final node, return an empty list. -------------------------------------------------------------------------------- /programs/virtualhome.py: -------------------------------------------------------------------------------- 1 | # return a list of strings that represents an action plan to put a mug on the stall and bread on the desk. 2 | def task_plan(): 3 | return put_object_on("mug", "stall") + put_object_on("bread", "desk") 4 | 5 | # return a list of strings that represents an action plan to put an object in a place. 6 | def put_object_on(object, place): 7 | return [ 8 | 'find ' + object, 9 | 'grab ' + object, 10 | 'walk to ' + place, 11 | 'put ' + object + ' on ' + place 12 | ] 13 | 14 | 15 | from execute_virtual_home import test_script;assert test_script(task_plan()) 16 | 17 | from execute_virtual_home import test_script;assert test_script(put_object_on("mug", "stall")) 18 | -------------------------------------------------------------------------------- /programs/virtualhome.ss: -------------------------------------------------------------------------------- 1 | task_plan(): return a list of strings that represents an action plan to put a mug on the stall and bread on the desk. 2 | -> executable 3 | put_object_on(object, place): return a list of strings that represents an action plan to put an object in a place. 
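In this `.ss` spec, `-> executable` is a constraint rather than a unit test: it is what produces the `from execute_virtual_home import test_script;assert test_script(...)` lines in the generated virtualhome.py above, where `test_script` presumably runs the plan in the VirtualHome simulator. As a hypothetical offline stand-in for that check (the verb vocabulary here is an assumption, not taken from the repo):

```python
# Hypothetical stand-in for execute_virtual_home.test_script; the real one
# executes the plan in the VirtualHome environment. This sketch only checks
# the plan's shape: a non-empty list of action strings starting with a verb
# from an assumed vocabulary.
KNOWN_VERBS = {"find", "grab", "walk", "put"}

def test_script(plan):
    if not isinstance(plan, list) or not plan:
        return False
    return all(
        isinstance(step, str) and step.split() and step.split()[0] in KNOWN_VERBS
        for step in plan
    )

assert test_script(["find mug", "grab mug", "walk to stall", "put mug on stall"])
assert not test_script(["teleport to desk"])  # unknown verb fails the check
```

The constraint line that follows, `"mug", "stall" -> executable`, supplies sample arguments, so `put_object_on("mug", "stall")` is checked the same way as `task_plan()`.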
4 | "mug", "stall" -> executable -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | absl-py==1.3.0 2 | aiohttp==3.8.3 3 | aiosignal==1.3.1 4 | alabaster @ file:///home/ktietz/src/ci/alabaster_1611921544520/work 5 | anaconda-client @ file:///opt/concourse/worker/volumes/live/866d4dd0-ff5b-4d0b-718d-0267a3b10e06/volume/anaconda-client_1635342573767/work 6 | anaconda-navigator==2.1.1 7 | anaconda-project @ file:///tmp/build/80754af9/anaconda-project_1626085644852/work 8 | anyascii==0.3.0 9 | anyio @ file:///opt/concourse/worker/volumes/live/96440bbe-d2f1-4a9e-5edf-600248ff38bd/volume/anyio_1617783321037/work/dist 10 | appdirs==1.4.4 11 | applaunchservices @ file:///Users/ktietz/demo/mc3/conda-bld/applaunchservices_1630511705208/work 12 | appnope @ file:///opt/concourse/worker/volumes/live/6ca6f098-d773-4461-5c91-a24a17435bda/volume/appnope_1606859448531/work 13 | appscript @ file:///opt/concourse/worker/volumes/live/00049ed6-6263-4a6e-72b9-9d990f6e2f07/volume/appscript_1611427000595/work 14 | argh==0.26.2 15 | argon2-cffi @ file:///opt/concourse/worker/volumes/live/38e8fb2b-1295-4bdf-4adf-b20acbe4d91b/volume/argon2-cffi_1607022498041/work 16 | arrow @ file:///opt/concourse/worker/volumes/live/1c202787-83f7-4b70-6d98-b40769f597f4/volume/arrow_1617737667847/work 17 | asn1crypto @ file:///tmp/build/80754af9/asn1crypto_1596577642040/work 18 | astroid @ file:///opt/concourse/worker/volumes/live/5aff3c6b-d8ac-4e74-4846-0f446794397d/volume/astroid_1628063157520/work 19 | astropy @ file:///opt/concourse/worker/volumes/live/dac790a5-ee97-4520-5b55-f2cc50d275e6/volume/astropy_1629829220593/work 20 | async-generator @ file:///home/ktietz/src/ci/async_generator_1611927993394/work 21 | async-timeout==4.0.2 22 | atomicwrites==1.4.0 23 | attrs @ file:///tmp/build/80754af9/attrs_1620827162558/work 24 | audioread==2.1.9 25 | autopep8 @ file:///tmp/build/80754af9/autopep8_1620866417880/work 26 | Babel @ file:///tmp/build/80754af9/babel_1620871417480/work 27 | backcall @ file:///home/ktietz/src/ci/backcall_1611930011877/work 28 | backports.functools-lru-cache @ file:///tmp/build/80754af9/backports.functools_lru_cache_1618170165463/work 29 | backports.shutil-get-terminal-size @ file:///tmp/build/80754af9/backports.shutil_get_terminal_size_1608222128777/work 30 | backports.tempfile @ file:///home/linux1/recipes/ci/backports.tempfile_1610991236607/work 31 | backports.weakref==1.0.post1 32 | beautifulsoup4 @ file:///tmp/build/80754af9/beautifulsoup4_1631874778482/work 33 | bert-score==0.3.12 34 | binaryornot @ file:///tmp/build/80754af9/binaryornot_1617751525010/work 35 | bitarray @ file:///opt/concourse/worker/volumes/live/8f2b2fa2-f7cd-4343-4d8f-3cc38b632e33/volume/bitarray_1629132852828/work 36 | bkcharts==0.2 37 | black==22.10.0 38 | blanc==0.2.7 39 | bleach @ file:///tmp/build/80754af9/bleach_1628110601003/work 40 | blis==0.7.9 41 | bokeh @ file:///opt/concourse/worker/volumes/live/278130e0-6bd5-4375-72c8-87158eba9408/volume/bokeh_1635324480391/work 42 | boto==2.49.0 43 | boto3==1.26.8 44 | botocore==1.29.8 45 | bottle==0.12.23 46 | Bottleneck @ file:///opt/concourse/worker/volumes/live/ac8c8ef3-2ed0-42e9-6ec0-5bb05ad938f6/volume/bottleneck_1607575111469/work 47 | brotlipy==0.7.0 48 | cached-property @ file:///tmp/build/80754af9/cached-property_1600785575025/work 49 | cachetools==5.2.0 50 | catalogue==2.0.8 51 | certifi==2021.10.8 52 | cffi @ 
file:///opt/concourse/worker/volumes/live/976f8942-f51d-4f0e-7352-2a10f0820d0e/volume/cffi_1625814703974/work 53 | cfgv==3.3.1 54 | chardet @ file:///opt/concourse/worker/volumes/live/7e1102c4-8702-40f2-63d6-f260ce5f85e4/volume/chardet_1607706831384/work 55 | charset-normalizer @ file:///tmp/build/80754af9/charset-normalizer_1630003229654/work 56 | click==8.0.3 57 | cloudpickle @ file:///tmp/build/80754af9/cloudpickle_1632508026186/work 58 | clyent==1.2.2 59 | colorama @ file:///tmp/build/80754af9/colorama_1607707115595/work 60 | conda==4.10.3 61 | conda-build==3.21.5 62 | conda-content-trust @ file:///tmp/build/80754af9/conda-content-trust_1617045594566/work 63 | conda-pack @ file:///tmp/build/80754af9/conda-pack_1611163042455/work 64 | conda-package-handling @ file:///opt/concourse/worker/volumes/live/8fb3e065-760b-4a9d-4cd9-aca7fc8baf53/volume/conda-package-handling_1618262145611/work 65 | conda-repo-cli @ file:///tmp/build/80754af9/conda-repo-cli_1620168426516/work 66 | conda-token @ file:///tmp/build/80754af9/conda-token_1620076980546/work 67 | conda-verify==3.4.2 68 | confection==0.0.3 69 | contextlib2 @ file:///Users/ktietz/demo/mc3/conda-bld/contextlib2_1630668244042/work 70 | cookiecutter @ file:///tmp/build/80754af9/cookiecutter_1617748928239/work 71 | coqpit==0.0.15 72 | coqui-trainer==0.0.5 73 | crfm-helm==0.1.0 74 | cryptography @ file:///opt/concourse/worker/volumes/live/3143e751-d0f4-457e-7dc5-b7eaa48a56a8/volume/cryptography_1633520383659/work 75 | cycler==0.10.0 76 | cymem==2.0.7 77 | Cython @ file:///opt/concourse/worker/volumes/live/090b3344-25bd-4e30-5dde-5f77abae4b7a/volume/cython_1636035875931/work 78 | cytoolz==0.11.0 79 | daal4py==2021.3.0 80 | dacite==1.6.0 81 | dask==2021.10.0 82 | datasets==2.5.2 83 | dateparser==1.0.0 84 | debugpy @ file:///opt/concourse/worker/volumes/live/1a15daf1-2a67-4a51-6aa1-c3bca01f7577/volume/debugpy_1629222706040/work 85 | decorator @ file:///tmp/build/80754af9/decorator_1632776554403/work 86 | defusedxml @ file:///tmp/build/80754af9/defusedxml_1615228127516/work 87 | diff-match-patch @ file:///Users/ktietz/demo/mc3/conda-bld/diff-match-patch_1630511840874/work 88 | dill==0.3.5.1 89 | distlib==0.3.4 90 | distributed @ file:///opt/concourse/worker/volumes/live/34c9adba-e95e-473d-7a32-ecdf958a8844/volume/distributed_1635968220957/work 91 | docopt==0.6.2 92 | docutils @ file:///opt/concourse/worker/volumes/live/c87e10af-317d-45cb-7142-3d02053858ee/volume/docutils_1620827978243/work 93 | editdistance==0.6.0 94 | emoji==2.2.0 95 | en-core-web-sm @ https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.2.0/en_core_web_sm-3.2.0-py3-none-any.whl 96 | entrypoints==0.3 97 | et-xmlfile==1.1.0 98 | fastcache @ file:///opt/concourse/worker/volumes/live/8356601f-d9bc-4d13-4017-1a58ebea6849/volume/fastcache_1607571270986/work 99 | ffmpeg-python==0.2.0 100 | filelock @ file:///tmp/build/80754af9/filelock_1635402558181/work 101 | flake8==5.0.4 102 | Flask @ file:///home/ktietz/src/ci/flask_1611932660458/work 103 | fonttools==4.25.0 104 | frozenlist==1.3.3 105 | fsspec==2022.11.0 106 | future @ file:///opt/concourse/worker/volumes/live/f456638c-86a7-4060-7f5f-d499a051219b/volume/future_1607571337593/work 107 | gdown==4.4.0 108 | gevent @ file:///opt/concourse/worker/volumes/live/014e1366-b455-480b-61a8-9a69e4909791/volume/gevent_1628273687151/work 109 | gin-config==0.5.0 110 | glob2 @ file:///home/linux1/recipes/ci/glob2_1610991677669/work 111 | gmpy2==2.0.8 112 | google-api-core==2.10.1 113 | google-api-python-client==2.64.0 114 | 
google-auth==2.14.1 115 | google-auth-httplib2==0.1.0 116 | googleapis-common-protos==1.56.4 117 | greenlet @ file:///opt/concourse/worker/volumes/live/b27b4e9e-4697-4d57-403b-f82d36a391ca/volume/greenlet_1628888146890/work 118 | gruut==2.0.4 119 | gruut-ipa==0.10.1 120 | gruut-lang-cs==2.0.0 121 | gruut-lang-de==2.0.0 122 | gruut-lang-es==2.0.0 123 | gruut-lang-fr==2.0.0 124 | gruut-lang-it==2.0.0 125 | gruut-lang-nl==2.0.0 126 | gruut-lang-pt==2.0.0 127 | gruut-lang-ru==2.0.0 128 | gruut-lang-sv==2.0.0 129 | gunicorn==20.1.0 130 | gym==0.21.0 131 | h5py @ file:///opt/concourse/worker/volumes/live/e7503571-c7b1-45ac-4bb7-37b5178cb0df/volume/h5py_1622088436205/work 132 | HeapDict @ file:///Users/ktietz/demo/mc3/conda-bld/heapdict_1630598515714/work 133 | html5lib @ file:///Users/ktietz/demo/mc3/conda-bld/html5lib_1629144453894/work 134 | httplib2==0.21.0 135 | huggingface-hub==0.10.1 136 | icetk==0.0.4 137 | identify==2.5.8 138 | idna @ file:///tmp/build/80754af9/idna_1622654382723/work 139 | imagecodecs @ file:///opt/concourse/worker/volumes/live/ab9fe69e-f7d4-471a-6199-cb95a745fb88/volume/imagecodecs_1635529117386/work 140 | imageio @ file:///tmp/build/80754af9/imageio_1617700267927/work 141 | imagesize @ file:///Users/ktietz/demo/mc3/conda-bld/imagesize_1628863108022/work 142 | importlib-metadata @ file:///opt/concourse/worker/volumes/live/b1c84e32-1519-4554-50d3-980dc7c220d9/volume/importlib-metadata_1631916711680/work 143 | importlib-resources==5.10.0 144 | inflect==5.4.0 145 | inflection==0.5.1 146 | iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work 147 | intervaltree @ file:///Users/ktietz/demo/mc3/conda-bld/intervaltree_1630511889664/work 148 | ipykernel @ file:///opt/concourse/worker/volumes/live/6f6caad4-5c02-4b4a-5243-1eece346c27b/volume/ipykernel_1633545433252/work/dist/ipykernel-6.4.1-py3-none-any.whl 149 | ipython @ file:///opt/concourse/worker/volumes/live/c0526798-0817-46b3-68c1-bb9ffefe344a/volume/ipython_1635944197798/work 150 | ipython-genutils @ file:///tmp/build/80754af9/ipython_genutils_1606773439826/work 151 | ipywidgets @ file:///tmp/build/80754af9/ipywidgets_1634143127070/work 152 | isort @ file:///tmp/build/80754af9/isort_1628603791788/work 153 | itsdangerous @ file:///tmp/build/80754af9/itsdangerous_1621432558163/work 154 | jdcal @ file:///Users/ktietz/demo/mc3/conda-bld/jdcal_1630584345063/work 155 | jedi @ file:///opt/concourse/worker/volumes/live/4b03e428-2d0c-4635-502c-16df884971e8/volume/jedi_1611333763457/work 156 | jieba==0.42.1 157 | Jinja2 @ file:///tmp/build/80754af9/jinja2_1612213139570/work 158 | jinja2-time @ file:///tmp/build/80754af9/jinja2-time_1617751524098/work 159 | jmespath==1.0.1 160 | joblib @ file:///tmp/build/80754af9/joblib_1635411271373/work 161 | json5 @ file:///tmp/build/80754af9/json5_1624432770122/work 162 | jsonlines==3.1.0 163 | jsonschema @ file:///Users/ktietz/demo/mc3/conda-bld/jsonschema_1630511932244/work 164 | jupyter @ file:///opt/concourse/worker/volumes/live/5d55c245-fcac-42f9-4a6c-7e147c07785b/volume/jupyter_1607700866889/work 165 | jupyter-client @ file:///tmp/build/80754af9/jupyter_client_1616770841739/work 166 | jupyter-console @ file:///tmp/build/80754af9/jupyter_console_1616615302928/work 167 | jupyter-core @ file:///opt/concourse/worker/volumes/live/ade0d1fb-3680-4893-7a79-579c9d86c1c1/volume/jupyter_core_1633420119353/work 168 | jupyter-server @ file:///opt/concourse/worker/volumes/live/c0b6c5cd-8b5f-482c-6789-42a64b3d2acc/volume/jupyter_server_1616084049292/work 169 | jupyterlab @ 
file:///tmp/build/80754af9/jupyterlab_1635799997693/work 170 | jupyterlab-pygments @ file:///tmp/build/80754af9/jupyterlab_pygments_1601490720602/work 171 | jupyterlab-server @ file:///tmp/build/80754af9/jupyterlab_server_1633419203660/work 172 | jupyterlab-widgets @ file:///tmp/build/80754af9/jupyterlab_widgets_1609884341231/work 173 | keyring @ file:///opt/concourse/worker/volumes/live/effa77ff-08c0-456d-548e-ecd9681ace1e/volume/keyring_1629321568005/work 174 | kiwisolver @ file:///opt/concourse/worker/volumes/live/f8867fdb-2fa2-4145-73d9-4d6f6dad5f7c/volume/kiwisolver_1612282424136/work 175 | langcodes==3.3.0 176 | lazy-object-proxy @ file:///opt/concourse/worker/volumes/live/a169db40-97bf-4f51-6893-dc751f705b7b/volume/lazy-object-proxy_1616529067444/work 177 | libarchive-c @ file:///tmp/build/80754af9/python-libarchive-c_1617780486945/work 178 | librosa==0.8.0 179 | llvmlite==0.39.1 180 | locket==0.2.1 181 | lxml @ file:///opt/concourse/worker/volumes/live/e9a191ad-c8f1-4bb7-7955-c333b8ab6c55/volume/lxml_1616443232489/work 182 | Mako==1.2.3 183 | MarkupSafe @ file:///opt/concourse/worker/volumes/live/fac44ebc-60de-4182-7f89-29bb8796c554/volume/markupsafe_1607027351541/work 184 | matplotlib @ file:///opt/concourse/worker/volumes/live/f670ab63-e220-495e-450b-14c9591f195b/volume/matplotlib-suite_1634667037960/work 185 | matplotlib-inline @ file:///tmp/build/80754af9/matplotlib-inline_1628242447089/work 186 | mccabe==0.7.0 187 | mecab-python3==1.0.3 188 | mistune @ file:///opt/concourse/worker/volumes/live/4217afd5-dad1-438d-6f79-e4992ccda0e5/volume/mistune_1607364880245/work 189 | mkl-fft==1.3.1 190 | mkl-random @ file:///opt/concourse/worker/volumes/live/0cda23d8-7460-44b2-7e5d-3c76a8a0ca7e/volume/mkl_random_1626186083266/work 191 | mkl-service==2.4.0 192 | mock @ file:///tmp/build/80754af9/mock_1607622725907/work 193 | more-itertools @ file:///tmp/build/80754af9/more-itertools_1635423142362/work 194 | moverscore==1.0.3 195 | mpmath==1.2.1 196 | msgpack @ file:///opt/concourse/worker/volumes/live/ccdd6ca0-523a-4fde-5a76-cdd4f47c445e/volume/msgpack-python_1612287158191/work 197 | multidict==6.0.2 198 | multipledispatch @ file:///opt/concourse/worker/volumes/live/ae29ad0f-3a64-4ff5-7393-0aa95f2c9f85/volume/multipledispatch_1607574242710/work 199 | multiprocess==0.70.13 200 | munkres==1.1.4 201 | murmurhash==1.0.9 202 | mypy==0.990 203 | mypy-extensions==0.4.3 204 | navigator-updater==0.2.1 205 | nbclassic @ file:///tmp/build/80754af9/nbclassic_1616085367084/work 206 | nbclient @ file:///tmp/build/80754af9/nbclient_1614364831625/work 207 | nbconvert @ file:///opt/concourse/worker/volumes/live/ad745f9a-647c-4095-6b46-a8b45a735914/volume/nbconvert_1624479072790/work 208 | nbformat @ file:///tmp/build/80754af9/nbformat_1617383369282/work 209 | nest-asyncio @ file:///tmp/build/80754af9/nest-asyncio_1613680548246/work 210 | networkx @ file:///tmp/build/80754af9/networkx_1633639043937/work 211 | nltk==3.7 212 | nodeenv==1.7.0 213 | nose @ file:///tmp/build/80754af9/nose_1606773131901/work 214 | notebook @ file:///opt/concourse/worker/volumes/live/600e39a3-70b3-4ed1-4986-4cc3d1a9be7c/volume/notebook_1635411664246/work 215 | num2words==0.5.10 216 | numba==0.56.4 217 | numexpr @ file:///opt/concourse/worker/volumes/live/8a490d79-0e07-4fed-40fc-a78f896e4811/volume/numexpr_1618856522733/work 218 | numpy==1.23.5 219 | numpydoc @ file:///tmp/build/80754af9/numpydoc_1605117425582/work 220 | olefile @ file:///Users/ktietz/demo/mc3/conda-bld/olefile_1629805411829/work 221 | openai==0.23.1 222 | openpyxl @ 
file:///tmp/build/80754af9/openpyxl_1632777717936/work 223 | packaging @ file:///tmp/build/80754af9/packaging_1625611678980/work 224 | pandas==1.3.4 225 | pandas-stubs==1.5.0.221012 226 | pandocfilters @ file:///opt/concourse/worker/volumes/live/d8ef4635-066d-4ffe-5341-12ebf01bd094/volume/pandocfilters_1605120459573/work 227 | parameterized==0.8.1 228 | parso @ file:///tmp/build/80754af9/parso_1617223946239/work 229 | partd @ file:///tmp/build/80754af9/partd_1618000087440/work 230 | path @ file:///opt/concourse/worker/volumes/live/ada87e48-bd34-49ac-6489-f51a7259db05/volume/path_1623603892072/work 231 | pathlib2 @ file:///opt/concourse/worker/volumes/live/9cb7a8e6-768d-4b83-4c58-1af1d6394b3e/volume/pathlib2_1625585698767/work 232 | pathspec==0.10.2 233 | pathy==0.6.2 234 | patsy==0.5.2 235 | pep8==1.7.1 236 | pexpect @ file:///tmp/build/80754af9/pexpect_1605563209008/work 237 | pickleshare @ file:///tmp/build/80754af9/pickleshare_1606932040724/work 238 | Pillow==8.4.0 239 | pkginfo==1.7.1 240 | platformdirs==2.4.1 241 | pluggy @ file:///opt/concourse/worker/volumes/live/a2630e20-8422-46fc-7347-f73294368851/volume/pluggy_1615976601840/work 242 | ply==3.11 243 | pooch==1.6.0 244 | portalocker==2.6.0 245 | poyo @ file:///tmp/build/80754af9/poyo_1617751526755/work 246 | pre-commit==2.20.0 247 | preshed==3.0.8 248 | prometheus-client @ file:///tmp/build/80754af9/prometheus_client_1623189609245/work 249 | prompt-toolkit @ file:///tmp/build/80754af9/prompt-toolkit_1633440160888/work 250 | protobuf==3.20.1 251 | psutil @ file:///opt/concourse/worker/volumes/live/da41f1b1-060b-47fa-4c17-557e069ead1d/volume/psutil_1612298011002/work 252 | ptyprocess @ file:///tmp/build/80754af9/ptyprocess_1609355006118/work/dist/ptyprocess-0.7.0-py2.py3-none-any.whl 253 | py @ file:///tmp/build/80754af9/py_1607971587848/work 254 | pyarrow==10.0.0 255 | pyasn1==0.4.8 256 | pyasn1-modules==0.2.8 257 | pycodestyle==2.9.1 258 | pycosat==0.6.3 259 | pycparser @ file:///tmp/build/80754af9/pycparser_1594388511720/work 260 | pycurl==7.44.1 261 | pydantic==1.8.2 262 | pydocstyle @ file:///tmp/build/80754af9/pydocstyle_1621600989141/work 263 | pyemd==0.5.1 264 | pyerfa @ file:///opt/concourse/worker/volumes/live/fef1f482-1dec-42b9-4439-b8031d24ea69/volume/pyerfa_1621560786048/work 265 | pyext==0.7 266 | pyflakes==2.5.0 267 | Pygments @ file:///tmp/build/80754af9/pygments_1629234116488/work 268 | pyhocon==0.3.59 269 | PyJWT @ file:///opt/concourse/worker/volumes/live/bd094316-1935-4bf0-5028-b889d4e7967c/volume/pyjwt_1619682501859/work 270 | pylint @ file:///opt/concourse/worker/volumes/live/110f293a-445f-477b-4041-2e00ae784083/volume/pylint_1627536795404/work 271 | pyls-spyder==0.4.0 272 | pymongo==4.2.0 273 | pynndescent==0.5.6 274 | pyodbc===4.0.0-unsupported 275 | pyOpenSSL @ file:///tmp/build/80754af9/pyopenssl_1635333100036/work 276 | pypandoc==1.8.1 277 | pyparsing==2.4.7 278 | pypinyin==0.46.0 279 | pyrsistent @ file:///opt/concourse/worker/volumes/live/76cffa60-bd33-4155-4e83-ea03c38b1294/volume/pyrsistent_1636111020441/work 280 | pysbd==0.3.4 281 | PySocks @ file:///opt/concourse/worker/volumes/live/112288ac-9cb0-4e73-768b-13baf4ca6419/volume/pysocks_1605305820043/work 282 | pytest==7.1.3 283 | python-crfsuite==0.9.7 284 | python-dateutil @ file:///tmp/build/80754af9/python-dateutil_1626374649649/work 285 | python-lsp-black @ file:///tmp/build/80754af9/python-lsp-black_1634232156041/work 286 | python-lsp-jsonrpc==1.0.0 287 | python-lsp-server==1.2.4 288 | python-slugify @ 
file:///tmp/build/80754af9/python-slugify_1620405669636/work 289 | pytorch-pretrained-bert==0.6.2 290 | pytrec-eval==0.5 291 | pytz==2021.3 292 | pytz-deprecation-shim==0.1.0.post0 293 | PyWavelets @ file:///opt/concourse/worker/volumes/live/3cbb6155-7383-45e0-55bb-8641a92939f6/volume/pywavelets_1607645526758/work 294 | pyworld==0.3.0 295 | PyYAML==6.0 296 | pyzmq @ file:///opt/concourse/worker/volumes/live/a6b770e2-97ff-4d20-790a-688be9a9004f/volume/pyzmq_1628276022869/work 297 | QDarkStyle @ file:///tmp/build/80754af9/qdarkstyle_1617386714626/work 298 | qstylizer @ file:///tmp/build/80754af9/qstylizer_1617713584600/work/dist/qstylizer-0.1.10-py2.py3-none-any.whl 299 | QtAwesome @ file:///tmp/build/80754af9/qtawesome_1615991616277/work 300 | qtconsole @ file:///tmp/build/80754af9/qtconsole_1632739723211/work 301 | QtPy @ file:///tmp/build/80754af9/qtpy_1629397026935/work 302 | regex @ file:///opt/concourse/worker/volumes/live/6d0c2188-eef5-45ba-7cde-604a70bbd9f2/volume/regex_1628063357190/work 303 | requests @ file:///tmp/build/80754af9/requests_1629994808627/work 304 | resampy==0.2.2 305 | responses==0.18.0 306 | retrying==1.3.3 307 | rope @ file:///tmp/build/80754af9/rope_1623703006312/work 308 | rouge-score==0.1.2 309 | rsa==4.9 310 | Rtree @ file:///opt/concourse/worker/volumes/live/18283f9b-719e-45b1-7f3a-937f49358be4/volume/rtree_1618420836397/work 311 | ruamel-yaml-conda @ file:///opt/concourse/worker/volumes/live/e81cf0fe-611a-498e-6e69-a7320057c1ac/volume/ruamel_yaml_1616016689696/work 312 | s3transfer==0.6.0 313 | sacrebleu==2.2.1 314 | sacremoses==0.0.53 315 | scikit-image==0.18.3 316 | scikit-learn==1.1.3 317 | scikit-learn-intelex==2021.20210714.100439 318 | scipy==1.9.3 319 | seaborn @ file:///tmp/build/80754af9/seaborn_1629307859561/work 320 | Send2Trash @ file:///tmp/build/80754af9/send2trash_1632406701022/work 321 | sentencepiece==0.1.97 322 | simplegeneric==0.8.1 323 | singledispatch @ file:///tmp/build/80754af9/singledispatch_1629321204894/work 324 | sip==4.19.13 325 | six @ file:///tmp/build/80754af9/six_1623709665295/work 326 | smart-open==5.2.1 327 | sniffio @ file:///opt/concourse/worker/volumes/live/38ca9e9e-09d1-4d43-5a0f-b546422e7807/volume/sniffio_1614030472707/work 328 | snowballstemmer @ file:///tmp/build/80754af9/snowballstemmer_1611258885636/work 329 | sortedcollections @ file:///tmp/build/80754af9/sortedcollections_1611172717284/work 330 | sortedcontainers @ file:///tmp/build/80754af9/sortedcontainers_1623949099177/work 331 | SoundFile==0.10.3.post1 332 | soupsieve @ file:///tmp/build/80754af9/soupsieve_1616183228191/work 333 | spacy==3.2.5 334 | spacy-legacy==3.0.10 335 | spacy-loggers==1.0.3 336 | Sphinx==4.2.0 337 | sphinxcontrib-applehelp @ file:///home/ktietz/src/ci/sphinxcontrib-applehelp_1611920841464/work 338 | sphinxcontrib-devhelp @ file:///home/ktietz/src/ci/sphinxcontrib-devhelp_1611920923094/work 339 | sphinxcontrib-htmlhelp @ file:///tmp/build/80754af9/sphinxcontrib-htmlhelp_1623945626792/work 340 | sphinxcontrib-jsmath @ file:///home/ktietz/src/ci/sphinxcontrib-jsmath_1611920942228/work 341 | sphinxcontrib-qthelp @ file:///home/ktietz/src/ci/sphinxcontrib-qthelp_1611921055322/work 342 | sphinxcontrib-serializinghtml @ file:///tmp/build/80754af9/sphinxcontrib-serializinghtml_1624451540180/work 343 | sphinxcontrib-websupport @ file:///tmp/build/80754af9/sphinxcontrib-websupport_1597081412696/work 344 | spyder @ file:///opt/concourse/worker/volumes/live/5ce630f9-babe-4f53-755a-cd7324c300d9/volume/spyder_1636480249527/work 345 | spyder-kernels @ 
file:///opt/concourse/worker/volumes/live/72f0c5be-8b9e-43d2-67e5-f038915937d7/volume/spyder-kernels_1634236950410/work 346 | SQLAlchemy @ file:///opt/concourse/worker/volumes/live/454d700c-0b8b-469e-7add-023d0df6ee3d/volume/sqlalchemy_1626948440911/work 347 | sqlitedict==1.7.0 348 | srsly==2.4.5 349 | stanza==1.4.2 350 | statsmodels @ file:///opt/concourse/worker/volumes/live/237cf070-cf87-4ff8-6149-8213b5eb40a5/volume/statsmodels_1614023802257/work 351 | summ-eval==0.892 352 | sympy==1.11.1 353 | tables @ file:///opt/concourse/worker/volumes/live/daa73f70-754b-4f28-73ce-6f96c40f4b9d/volume/pytables_1607975400838/work 354 | tabulate==0.9.0 355 | TBB==0.2 356 | tblib @ file:///Users/ktietz/demo/mc3/conda-bld/tblib_1629402031467/work 357 | tensorboardX==2.5 358 | terminado==0.9.4 359 | testpath @ file:///tmp/build/80754af9/testpath_1624638946665/work 360 | text-unidecode @ file:///Users/ktietz/demo/mc3/conda-bld/text-unidecode_1629401354553/work 361 | textdistance @ file:///tmp/build/80754af9/textdistance_1612461398012/work 362 | thinc==8.0.17 363 | threadpoolctl @ file:///Users/ktietz/demo/mc3/conda-bld/threadpoolctl_1629802263681/work 364 | three-merge @ file:///tmp/build/80754af9/three-merge_1607553261110/work 365 | tifffile @ file:///tmp/build/80754af9/tifffile_1627275862826/work 366 | tinycss @ file:///tmp/build/80754af9/tinycss_1617713798712/work 367 | tokenizers==0.12.1 368 | toml @ file:///tmp/build/80754af9/toml_1616166611790/work 369 | tomli==2.0.1 370 | toolz @ file:///home/linux1/recipes/ci/toolz_1610987900194/work 371 | torch==1.12.1 372 | torchaudio==0.11.0 373 | torchvision==0.13.1 374 | tornado @ file:///opt/concourse/worker/volumes/live/2c1a63a2-006b-48ee-56b9-0cfe8b4927f9/volume/tornado_1606942321278/work 375 | tqdm==4.64.1 376 | traitlets @ file:///tmp/build/80754af9/traitlets_1632522747050/work 377 | transformers==4.22.2 378 | TTS==0.6.1 379 | typed-ast @ file:///opt/concourse/worker/volumes/live/c2210c24-bb2d-4474-77f5-1bc9bf1cb79f/volume/typed-ast_1624953683856/work 380 | typer==0.4.2 381 | types-pytz==2022.4.0.0 382 | typing==3.7.4.3 383 | typing_extensions==4.4.0 384 | tzdata==2022.1 385 | tzlocal==4.1 386 | ujson @ file:///opt/concourse/worker/volumes/live/258fa87e-ec76-445c-5f9c-fc9523993cd7/volume/ujson_1611259511951/work 387 | umap-learn==0.5.1 388 | uncertainty-calibration==0.1.4 389 | unicodecsv==0.14.1 390 | Unidecode @ file:///tmp/build/80754af9/unidecode_1614712377438/work 391 | unidic-lite==1.0.8 392 | uritemplate==4.1.1 393 | urllib3==1.26.7 394 | virtualenv==20.13.0 395 | wasabi==0.10.1 396 | watchdog @ file:///opt/concourse/worker/volumes/live/3aa0e7d9-c795-4854-41fd-f0b2492c886a/volume/watchdog_1624955010200/work 397 | wcwidth @ file:///Users/ktietz/demo/mc3/conda-bld/wcwidth_1629357192024/work 398 | webencodings==0.5.1 399 | webrtcvad==2.0.10 400 | websocket-client==1.3.3 401 | Werkzeug @ file:///tmp/build/80754af9/werkzeug_1635505089296/work 402 | whichcraft @ file:///tmp/build/80754af9/whichcraft_1617751293875/work 403 | whisper @ git+https://github.com/openai/whisper.git@9f70a352f9f8630ab3aa0d06af5cb9532bd8c21d 404 | widgetsnbextension @ file:///opt/concourse/worker/volumes/live/42762d5a-6520-4b06-68a2-619dfed6a007/volume/widgetsnbextension_1607531500933/work 405 | wrapt @ file:///opt/concourse/worker/volumes/live/1381ca3b-f984-4f5c-4f46-a67ca4eeffa5/volume/wrapt_1607574524589/work 406 | wurlitzer @ file:///opt/concourse/worker/volumes/live/f0d9880a-118b-4ba3-7b92-3c5bb0f6c00b/volume/wurlitzer_1626947802781/work 407 | xlrd @ 
file:///tmp/build/80754af9/xlrd_1608072521494/work 408 | XlsxWriter @ file:///tmp/build/80754af9/xlsxwriter_1628603415431/work 409 | xlwings==0.24.9 410 | xlwt==1.3.0 411 | xmltodict @ file:///Users/ktietz/demo/mc3/conda-bld/xmltodict_1629301980723/work 412 | xxhash==3.1.0 413 | yapf @ file:///tmp/build/80754af9/yapf_1615749224965/work 414 | yarl==1.8.1 415 | zict==2.0.0 416 | zipp @ file:///tmp/build/80754af9/zipp_1633618647012/work 417 | zope.event==4.5.0 418 | zope.interface @ file:///opt/concourse/worker/volumes/live/8c2d4bd1-6406-47f0-45ce-890992bafbf6/volume/zope.interface_1625036159007/work 419 | zstandard==0.18.0 --------------------------------------------------------------------------------