))
20 | assert len(unittest_result.failures) == 0, stream.getvalue()
21 | ```
22 |
23 | 1. **Ensure Test Code Reusability**:
24 | - Construct your test cases within reusable blocks of code, like functions or classes, to promote reusability and efficient testing across iterations.
25 |    - Example: Instead of writing test cases one-by-one in isolation, define a function or class that runs all test cases together and can simply be rerun after any modification to the source code (see the sketch after this list).
26 | 2. **One-Go Testing**: Aim to write and execute all test cases in a single iteration whenever possible, reducing back-and-forth adjustments and giving comprehensive feedback for any subsequent code changes.
27 |
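A minimal sketch of such a reusable runner, assuming a plain `unittest` setup (the `run_tests` helper name is illustrative, not a project API):

```python
import io
import unittest


def run_tests(test_case_class):
    # Collect every test case from the class and run them together, so the
    # whole suite can be rerun after each modification to the source code.
    stream = io.StringIO()
    suite = unittest.TestLoader().loadTestsFromTestCase(test_case_class)
    result = unittest.TextTestRunner(stream=stream).run(suite)
    assert len(result.failures) == 0, stream.getvalue()
    return result
```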
28 |
29 | ### 🌟 **You're Equipped for Excellence!**
30 | With your skill set, you are prepared to tackle any testing challenge that comes your way. Remember, your test plans should be both succinct and comprehensive, ensuring each step is informed and every test case is valuable.
31 |
--------------------------------------------------------------------------------
/creator/prompts/testsummary_function_schema.json:
--------------------------------------------------------------------------------
1 | {
2 | "name": "test_summary",
3 | "description": "A method to be invoked once all test cases have been successfully completed. This function provides a comprehensive summary of each test case, detailing their input, execution command, expected results, actual results, and pass status.",
4 | "parameters": {
5 | "$defs": {
6 | "TestCase": {
7 | "properties": {
8 | "test_input": {
9 | "description": "The input data or conditions used for the test.",
10 | "type": "string"
11 | },
12 | "run_command": {
13 | "description": "The command or function that was executed for the test.",
14 | "type": "string"
15 | },
16 | "expected_result": {
17 | "description": "The expected outcome or result of the test.",
18 | "type": "string"
19 | },
20 | "actual_result": {
21 | "description": "The actual outcome or result observed after the test was executed.",
22 | "type": "string"
23 | },
24 | "is_passed": {
25 | "description": "A boolean indicating whether the test passed or failed.",
26 | "type": "boolean"
27 | }
28 | },
29 | "required": [
30 | "test_input",
31 | "run_command",
32 | "expected_result",
33 | "actual_result",
34 | "is_passed"
35 | ],
36 | "type": "object"
37 | }
38 | },
39 | "properties": {
40 | "test_cases": {
41 | "description": "Extract a list of test cases that were run.",
42 | "items": {
43 | "$ref": "#/$defs/TestCase"
44 | },
45 | "type": "array"
46 | }
47 | },
48 | "required": [
49 | "test_cases"
50 | ],
51 | "type": "object"
52 | }
53 | }
--------------------------------------------------------------------------------
/creator/prompts/tips_for_debugging_prompt.md:
--------------------------------------------------------------------------------
1 | === Tips
2 | ### Debugging Procedure
3 | 1. When encountering an error, **be humble and proactive**. Acknowledge that, as a model, your output may contain illusory phenomena (hallucinations).
4 | 2. Brainstorm **1-3 alternative causes** for the error and related solutions.
5 | 3. Evaluate the proposed solutions and **select the most viable one**.
6 | 4. Implement the chosen solution and **validate its effectiveness**.
7 |
--------------------------------------------------------------------------------
/creator/prompts/tips_for_testing_prompt.md:
--------------------------------------------------------------------------------
1 | === Tips
2 | ### 🔄 **Iterative Testing Approach:**
3 | 1. **Adopt Humility Towards Expectations**: Recognize the potential presence of illusory phenomena, since you are a Large Language Model. If some test cases fail again and again, accept that your expected results may be incorrect. When constructing an expected result, it is better to validate it with code than to generate it yourself directly (see the sketch after this list).
4 | 2. **Propose Diverse Solutions**: When discrepancies arise, do not blindly adjust either the code or the expectations. Brainstorm several (2-4) solutions to accurately diagnose the root cause of the inconsistency.
5 | 3. **Evaluate and Choose Solution**: Critically assess the proposed solutions and select the most viable one that aligns with the observed discrepancies. If validated discrepancies are attributed to the source code, make necessary modifications to resolve them. If expectations are misaligned, validate them using executable code when possible. If not viable, adjust the expectations with due diligence and validation.
6 | 4. **Implement and Validate Solution**: Execute the chosen solution and verify its effectiveness in resolving the inconsistency, ensuring the alignment of code and expectations.
7 |
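A minimal sketch of point 1, using a hypothetical `count_primes` function under test (names and values here are illustrative, not taken from the project):

```python
def count_primes(n):  # hypothetical function under test
    sieve = [True] * (n + 1)
    sieve[:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return sum(sieve)


# Derive the expected result with code (an independent brute-force check)
# instead of writing it down from memory, then compare with the actual output.
expected = sum(1 for k in range(2, 202) if all(k % d for d in range(2, int(k ** 0.5) + 1)))
actual = count_primes(201)
assert actual == expected, f"expected {expected}, got {actual}"
```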
--------------------------------------------------------------------------------
/creator/prompts/tips_for_veryfy_prompt.md:
--------------------------------------------------------------------------------
1 | === Tips
2 | Go on to the next step if there is one; otherwise, end.
3 |
--------------------------------------------------------------------------------
/creator/retrivever/__init__.py:
--------------------------------------------------------------------------------
1 | from .base import BaseVectorStore
2 |
3 |
4 | __all__ = ["BaseVectorStore"]
5 |
--------------------------------------------------------------------------------
/creator/retrivever/base.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | from typing import List
3 | import json
4 | import os
5 |
6 | from creator.llm import create_embedding
7 | from creator.config.library import config
8 |
9 | from .score_functions import cosine_similarity
10 |
11 |
12 | class BaseVectorStore:
13 |
14 | def __init__(self, skill_library_path: str = ""):
15 |
16 | self.vectordb_path: str = config.local_skill_library_vectordb_path
17 | self.skill_library_path = config.local_skill_library_path
18 | self.vector_store = {}
19 | self.embeddings = None
20 | self.embedding_model = create_embedding()
21 | self.sorted_keys = []
22 | self.query_cache = {}
23 |
24 | if skill_library_path and os.path.exists(skill_library_path):
25 | self.skill_library_path = skill_library_path
26 |
27 | if os.path.isdir(self.skill_library_path):
28 | self.query_cache_path = self.vectordb_path + "/query_cache.json"
29 | self.vectordb_path = self.vectordb_path + "/vector_db.json"
30 | if os.path.exists(self.query_cache_path):
31 | with open(self.query_cache_path, mode="r", encoding="utf-8") as f:
32 | self.query_cache = json.load(f)
33 |
34 | if os.path.exists(self.vectordb_path):
35 | # load vectordb
36 | with open(self.vectordb_path, mode="r", encoding="utf-8") as f:
37 | self.vector_store = json.load(f)
38 |
39 | self.update_index()
40 |
41 | def update_index(self):
42 |         # walk skill_library_path to find `embedding_text.txt` files
43 | embeddings = []
44 |
45 | for root, dirs, files in os.walk(self.skill_library_path):
46 | for file in files:
47 | if root not in self.vector_store and file == "embedding_text.txt":
48 | embedding_text_path = os.path.join(root, file)
49 | with open(embedding_text_path, mode="r", encoding="utf-8") as f:
50 | embedding_text = f.read()
51 |
52 | skill_path = os.path.join(root, "skill.json")
53 | with open(skill_path, encoding="utf-8") as f:
54 | skill_json = json.load(f)
55 | skill_json["skill_id"] = root
56 | skill_json["embedding_text"] = embedding_text
57 | self.vector_store[root] = skill_json
58 |
59 | # index embedding_texts
60 | no_embedding_obj = {key:value for key, value in self.vector_store.items() if "embedding" not in value}
61 | if len(no_embedding_obj) > 0:
62 | no_embedding_texts = []
63 | sorted_keys = sorted(no_embedding_obj)
64 | for key in sorted_keys:
65 | no_embedding_texts.append(no_embedding_obj[key]["embedding_text"])
66 |
67 | embeddings = self.embedding_model.embed_documents(no_embedding_texts)
68 | for i, key in enumerate(sorted_keys):
69 | self.vector_store[key]["embedding"] = embeddings[i]
70 |
71 | self.sorted_keys = sorted(self.vector_store)
72 | embeddings = []
73 | for key in self.sorted_keys:
74 | embeddings.append(self.vector_store[key]["embedding"])
75 | self.embeddings = np.array(embeddings)
76 | # save to vectordb
77 | with open(self.vectordb_path, "w", encoding="utf-8") as f:
78 | json.dump(self.vector_store, f)
79 |
80 | def save_query_cache(self):
81 | with open(self.query_cache_path, "w", encoding="utf-8") as f:
82 | json.dump(self.query_cache, f)
83 |
84 | def search(self, query: str, top_k: int = 3, threshold=0.8) -> List[dict]:
85 | key = str((query, top_k, threshold))
86 | if key in self.query_cache:
87 | return self.query_cache[key]
88 |
89 | self.update_index()
90 |
91 | query_embedding = self.embedding_model.embed_query(query)
92 | query_embedding = np.array(query_embedding)
93 | indexes, scores = cosine_similarity(docs_matrix=self.embeddings, query_vec=query_embedding, k=top_k)
94 | results = []
95 | for i, index in enumerate(indexes):
96 | if scores[i] < threshold:
97 | break
98 | result = self.vector_store[self.sorted_keys[index]]
99 | result = result.copy()
100 | result.pop("embedding")
101 | result["score"] = scores[i]
102 | results.append(result)
103 | self.query_cache[key] = results
104 | self.save_query_cache()
105 | return results
106 |
107 |
--------------------------------------------------------------------------------
/creator/retrivever/score_functions.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 |
3 |
4 | def cosine_similarity(docs_matrix, query_vec, k=3):
5 | similarities = np.dot(docs_matrix, query_vec) / (np.linalg.norm(docs_matrix, axis=1) * np.linalg.norm(query_vec))
6 | top_k_indices = np.argsort(similarities)[-k:][::-1]
7 | return top_k_indices, similarities[top_k_indices]
8 |
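# Illustrative usage (a sketch, not part of the original module):
#   docs = np.random.rand(10, 5)   # 10 document embeddings of dimension 5
#   query = np.random.rand(5)
#   idx, scores = cosine_similarity(docs, query, k=3)
# `idx` holds the row indices of the 3 most similar documents, best first;
# `scores` holds their cosine similarities in the same order.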
--------------------------------------------------------------------------------
/creator/skill_library/open-creator/create/conversation_history.json:
--------------------------------------------------------------------------------
1 | [
2 | {
3 | "role": "user",
4 | "content": "# file name: create.py\nimport creator\nfrom creator.schema.skill import CodeSkill\nfrom typing import Optional, List\n\n\ndef create(\n request: Optional[str] = None,\n messages: Optional[List[dict]] = None,\n messages_json_path: Optional[str] = None,\n skill_path: Optional[str] = None,\n skill_json_path: Optional[str] = None,\n file_content: Optional[str] = None,\n file_path: Optional[str] = None,\n huggingface_repo_id: Optional[str] = None,\n huggingface_skill_path: Optional[str] = None,\n) -> CodeSkill:\n \"\"\"Create a skill from various sources.\n\n Args:\n request (Optional[str], optional): Request string. Defaults to None.\n messages (Optional[List[dict]], optional): Messages in list of dict format. Defaults to None.\n messages_json_path (Optional[str], optional): Path to messages JSON file. Defaults to None.\n skill_path (Optional[str], optional): Path to skill directory. Defaults to None.\n skill_json_path (Optional[str], optional): Path to skill JSON file. Defaults to None.\n file_content (Optional[str], optional): File content. Defaults to None.\n file_path (Optional[str], optional): Path to file. Defaults to None.\n huggingface_repo_id (Optional[str], optional): Huggingface repo ID. Defaults to None.\n huggingface_skill_path (Optional[str], optional): Huggingface skill path. Defaults to None.\n\n Returns:\n CodeSkill: Created skill\n Example:\n >>> skill = creator.create(request=\"filter how many prime numbers are in 201\")\n >>> skill = creator.create(messages=[{\"role\": \"user\",\"content\": \"write a program to list all the python functions and their docstrings in a directory\"},{\"role\": \"assistant\",\"content\": \"Sure, I can help with that. Here's the plan:\\n\\n1. First, we need to get a list of all Python files in the specified directory. We can do this by using the `os` and `glob` modules in Python.\\n2. Then, for each Python file, we will parse the file to find all function definitions. We can do this by using the `ast` module in Python, which can parse Python source code into an abstract syntax tree (AST).\\n3. For each function definition, we will extract the function's name and its docstring. The `ast` module can also help us with this.\\n4. 
Finally, we will print out the function names and their docstrings.\\n\\nLet's start with step 1: getting a list of all Python files in the specified directory.\",\"function_call\": {\"name\": \"run_code\",\"arguments\": \"{\\n \\\"language\\\": \\\"python\\\",\\n \\\"code\\\": \\\"import os\\\\nimport glob\\\\n\\\\n# Get the current working directory\\\\ncwd = os.getcwd()\\\\n\\\\n# Get a list of all Python files in the directory\\\\npython_files = glob.glob(os.path.join(cwd, '*.py'))\\\\n\\\\npython_files\\\"\\n}\"}}])\n >>> skill = creator.create(messages_json_path=\"./messages_example.json\")\n >>> skill = creator.create(file_path=\"../creator/utils/ask_human.py\")\n >>> skill = creator.create(huggingface_repo_id=\"Sayoyo/skill-library\", huggingface_skill_path=\"extract_pdf_section\")\n >>> skill = creator.create(skill_json_path=os.path.expanduser(\"~\") + \"/.cache/open_creator/skill_library/create/skill.json\")\n \"\"\"\n if request is not None:\n skill = creator.create(request=request)\n elif messages is not None:\n skill = creator.create(messages=messages)\n elif messages_json_path is not None:\n skill = creator.create(messages_json_path=messages_json_path)\n elif skill_path is not None:\n skill = creator.create(skill_path=skill_path)\n elif skill_json_path is not None:\n skill = creator.create(skill_json_path=skill_json_path)\n elif file_content is not None:\n skill = creator.create(file_content=file_content)\n elif file_path is not None:\n skill = creator.create(file_path=file_path)\n elif huggingface_repo_id is not None and huggingface_skill_path is not None:\n skill = creator.create(\n huggingface_repo_id=huggingface_repo_id, huggingface_skill_path=huggingface_skill_path\n )\n else:\n raise ValueError(\"At least one argument must be provided.\")\n\n return skill\n"
5 | }
6 | ]
--------------------------------------------------------------------------------
/creator/skill_library/open-creator/create/embedding_text.txt:
--------------------------------------------------------------------------------
1 | create
2 | Create a skill from various sources.
3 | Args:
4 | request (Optional[str], optional): Request string. Defaults to None.
5 | messages (Optional[List[dict]], optional): Messages in list of dict format. Defaults to None.
6 | messages_json_path (Optional[str], optional): Path to messages JSON file. Defaults to None.
7 | skill_path (Optional[str], optional): Path to skill directory. Defaults to None.
8 | skill_json_path (Optional[str], optional): Path to skill JSON file. Defaults to None.
9 | file_content (Optional[str], optional): File content. Defaults to None.
10 | file_path (Optional[str], optional): Path to file. Defaults to None.
11 | huggingface_repo_id (Optional[str], optional): Huggingface repo ID. Defaults to None.
12 | huggingface_skill_path (Optional[str], optional): Huggingface skill path. Defaults to None.
13 |
14 | Returns:
15 | CodeSkill: Created skill
16 | Example:
17 | >>> skill = create(request="filter how many prime numbers are in 201")
18 | >>> skill = create(messages=[{"role": "user","content": "write a program to list all the python functions and their docstrings in a directory"},{"role": "assistant","content": "Sure, I can help with that. Here's the plan:\n\n1. First, we need to get a list of all Python files in the specified directory. We can do this by using the `os` and `glob` modules in Python.\n2. Then, for each Python file, we will parse the file to find all function definitions. We can do this by using the `ast` module in Python, which can parse Python source code into an abstract syntax tree (AST).\n3. For each function definition, we will extract the function's name and its docstring. The `ast` module can also help us with this.\n4. Finally, we will print out the function names and their docstrings.\n\nLet's start with step 1: getting a list of all Python files in the specified directory.","function_call": {"name": "run_code","arguments": "{\n \"language\": \"python\",\n \"code\": \"import os\\nimport glob\\n\\n# Get the current working directory\\ncwd = os.getcwd()\\n\\n# Get a list of all Python files in the directory\\npython_files = glob.glob(os.path.join(cwd, '*.py'))\\n\\npython_files\"\n}"}}])
19 | >>> skill = create(messages_json_path="./messages_example.json")
20 | >>> skill = create(file_path="../creator/utils/ask_human.py")
21 | >>> skill = create(huggingface_repo_id="Sayoyo/skill-library", huggingface_skill_path="extract_pdf_section")
22 | >>> skill = create(skill_json_path=os.path.expanduser("~") + "/.cache/open_creator/skill_library/create/skill.json")
--------------------------------------------------------------------------------
/creator/skill_library/open-creator/create/function_call.json:
--------------------------------------------------------------------------------
1 | {
2 | "name": "create",
3 | "description": "Create a skill from various sources.\n\nskill = creator.create(request=\"filter how many prime numbers are in 201\")",
4 | "parameters": {
5 | "type": "object",
6 | "properties": {
7 | "request": {
8 | "type": "string",
9 | "description": "Request string."
10 | },
11 | "messages": {
12 | "type": "array",
13 | "description": "Messages in list of dict format."
14 | },
15 | "messages_json_path": {
16 | "type": "string",
17 | "description": "Path to messages JSON file."
18 | },
19 | "skill_path": {
20 | "type": "string",
21 | "description": "Path to skill directory."
22 | },
23 | "skill_json_path": {
24 | "type": "string",
25 | "description": "Path to skill JSON file."
26 | },
27 | "file_content": {
28 | "type": "string",
29 | "description": "File content."
30 | },
31 | "file_path": {
32 | "type": "string",
33 | "description": "Path to file."
34 | },
35 | "huggingface_repo_id": {
36 | "type": "string",
37 | "description": "Huggingface repo ID."
38 | },
39 | "huggingface_skill_path": {
40 | "type": "string",
41 | "description": "Huggingface skill path."
42 | }
43 | },
44 | "required": []
45 | }
46 | }
--------------------------------------------------------------------------------
/creator/skill_library/open-creator/create/install_dependencies.sh:
--------------------------------------------------------------------------------
1 | pip install -U "open-creator"
2 |
--------------------------------------------------------------------------------
/creator/skill_library/open-creator/create/skill_code.py:
--------------------------------------------------------------------------------
1 | from creator.core import creator
2 | from creator.core.skill import CodeSkill
3 | from typing import Optional, List
4 |
5 |
6 | def create(
7 | request: Optional[str] = None,
8 | messages: Optional[List[dict]] = None,
9 | messages_json_path: Optional[str] = None,
10 | skill_path: Optional[str] = None,
11 | skill_json_path: Optional[str] = None,
12 | file_content: Optional[str] = None,
13 | file_path: Optional[str] = None,
14 | huggingface_repo_id: Optional[str] = None,
15 | huggingface_skill_path: Optional[str] = None,
16 | ) -> CodeSkill:
17 | """Create a skill from various sources.
18 |
19 | Args:
20 | request (Optional[str], optional): Request string. Defaults to None.
21 | messages (Optional[List[dict]], optional): Messages in list of dict format. Defaults to None.
22 | messages_json_path (Optional[str], optional): Path to messages JSON file. Defaults to None.
23 | skill_path (Optional[str], optional): Path to skill directory. Defaults to None.
24 | skill_json_path (Optional[str], optional): Path to skill JSON file. Defaults to None.
25 | file_content (Optional[str], optional): File content. Defaults to None.
26 | file_path (Optional[str], optional): Path to file. Defaults to None.
27 | huggingface_repo_id (Optional[str], optional): Huggingface repo ID. Defaults to None.
28 | huggingface_skill_path (Optional[str], optional): Huggingface skill path. Defaults to None.
29 |
30 | Returns:
31 | CodeSkill: Created skill
32 | Example:
33 | >>> skill = create(request="filter how many prime numbers are in 201")
34 | >>> skill = create(messages=[{"role": "user","content": "write a program to list all the python functions and their docstrings in a directory"},{"role": "assistant","content": "Sure, I can help with that. Here's the plan:\n\n1. First, we need to get a list of all Python files in the specified directory. We can do this by using the `os` and `glob` modules in Python.\n2. Then, for each Python file, we will parse the file to find all function definitions. We can do this by using the `ast` module in Python, which can parse Python source code into an abstract syntax tree (AST).\n3. For each function definition, we will extract the function's name and its docstring. The `ast` module can also help us with this.\n4. Finally, we will print out the function names and their docstrings.\n\nLet's start with step 1: getting a list of all Python files in the specified directory.","function_call": {"name": "run_code","arguments": "{\n \"language\": \"python\",\n \"code\": \"import os\\nimport glob\\n\\n# Get the current working directory\\ncwd = os.getcwd()\\n\\n# Get a list of all Python files in the directory\\npython_files = glob.glob(os.path.join(cwd, '*.py'))\\n\\npython_files\"\n}"}}])
35 | >>> skill = create(messages_json_path="./messages_example.json")
36 | >>> skill = create(file_path="../creator/utils/ask_human.py")
37 | >>> skill = create(huggingface_repo_id="Sayoyo/skill-library", huggingface_skill_path="extract_pdf_section")
38 | >>> skill = create(skill_json_path=os.path.expanduser("~") + "/.cache/open_creator/skill_library/create/skill.json")
39 | """
40 | if request is not None:
41 | skill = creator.create(request=request)
42 | elif messages is not None:
43 | skill = creator.create(messages=messages)
44 | elif messages_json_path is not None:
45 | skill = creator.create(messages_json_path=messages_json_path)
46 | elif skill_path is not None:
47 | skill = creator.create(skill_path=skill_path)
48 | elif skill_json_path is not None:
49 | skill = creator.create(skill_json_path=skill_json_path)
50 | elif file_content is not None:
51 | skill = creator.create(file_content=file_content)
52 | elif file_path is not None:
53 | skill = creator.create(file_path=file_path)
54 | elif huggingface_repo_id is not None and huggingface_skill_path is not None:
55 | skill = creator.create(
56 | huggingface_repo_id=huggingface_repo_id, huggingface_skill_path=huggingface_skill_path
57 | )
58 | else:
59 | raise ValueError("At least one argument must be provided.")
60 |
61 | return skill
62 |
--------------------------------------------------------------------------------
/creator/skill_library/open-creator/create/skill_doc.md:
--------------------------------------------------------------------------------
1 | ## Skill Details:
2 | - **Name**: create
3 | - **Description**: Create a skill from various sources.
4 | - **Version**: 1.0.0
5 | - **Usage**:
6 | ```python
7 | skill = create(request="filter how many prime numbers are in 201")
8 | skill = create(messages=[{"role": "user","content": "write a program to list all the python functions and their docstrings in a directory"},{"role": "assistant","content": "Sure, I can help with that. Here's the plan:\n\n1. First, we need to get a list of all Python files in the specified directory. We can do this by using the `os` and `glob` modules in Python.\n2. Then, for each Python file, we will parse the file to find all function definitions. We can do this by using the `ast` module in Python, which can parse Python source code into an abstract syntax tree (AST).\n3. For each function definition, we will extract the function's name and its docstring. The `ast` module can also help us with this.\n4. Finally, we will print out the function names and their docstrings.\n\nLet's start with step 1: getting a list of all Python files in the specified directory.","function_call": {"name": "run_code","arguments": "{\n \"language\": \"python\",\n \"code\": \"import os\\nimport glob\\n\\n# Get the current working directory\\ncwd = os.getcwd()\\n\\n# Get a list of all Python files in the directory\\npython_files = glob.glob(os.path.join(cwd, '*.py'))\\n\\npython_files\"\n}"}}])
9 | skill = create(messages_json_path="./messages_example.json")
10 | skill = create(file_path="../creator/utils/ask_human.py")
11 | skill = create(huggingface_repo_id="Sayoyo/skill-library", huggingface_skill_path="extract_pdf_section")
12 | skill = create(skill_json_path=os.path.expanduser("~") + "/.cache/open_creator/skill_library/create/skill.json")
13 | ```
14 | - **Parameters**:
15 | - **request** (string): Request string.
16 | - **messages** (array): Messages in list of dict format.
17 | - **messages_json_path** (string): Path to messages JSON file.
18 | - **skill_path** (string): Path to skill directory.
19 | - **skill_json_path** (string): Path to skill JSON file.
20 | - **file_content** (string): File content.
21 | - **file_path** (string): Path to file.
22 | - **huggingface_repo_id** (string): Huggingface repo ID.
23 | - **huggingface_skill_path** (string): Huggingface skill path.
24 |
25 | - **Returns**:
26 | - **CodeSkill** (object): Created skill
--------------------------------------------------------------------------------
/creator/skill_library/open-creator/save/conversation_history.json:
--------------------------------------------------------------------------------
1 | [
2 | {
3 | "role": "user",
4 | "content": "# file name: save.py\nimport creator\nfrom creator.schema.skill import CodeSkill\n\n\ndef save(skill: CodeSkill, huggingface_repo_id: str = None, skill_path: str = None):\n \"\"\"\n Save a skill to a local path or a huggingface repo.\n \n Parameters:\n skill: CodeSkill object, the skill to be saved.\n huggingface_repo_id: str, optional, the ID of the huggingface repo. If provided, the skill will be saved to this repo.\n skill_path: str, optional, the local path. If provided, the skill will be saved to this path.\n \n Returns:\n None\n \n Usage examples:\n ```python\n >>> import creator\n >>> import os\n >>> skill_json_path = os.path.expanduser(\"~\") + \"/.cache/open_creator/skill_library/ask_run_code_confirm/skill.json\"\n >>> skill = creator.create(skill_json_path=skill_json_path)\n >>> creator.save(skill=skill, huggingface_repo_id=\"ChuxiJ/skill_library\")\n ```\n or\n ```python\n >>> import creator\n >>> import os\n >>> skill_json_path = os.path.expanduser(\"~\") + \"/.cache/open_creator/skill_library/ask_run_code_confirm/skill.json\"\n >>> skill = creator.create(skill_json_path=skill_json_path)\n >>> creator.save(skill=skill, skill_path=\"/path/to/save\")\n ```\n \"\"\"\n if huggingface_repo_id is not None:\n creator.save_to_hub(skill=skill, huggingface_repo_id=huggingface_repo_id)\n elif skill_path is not None:\n creator.save_to_skill_path(skill=skill, skill_path=skill_path)\n else:\n raise ValueError(\"Either huggingface_repo_id or skill_path must be provided.\")\n \n"
5 | }
6 | ]
--------------------------------------------------------------------------------
/creator/skill_library/open-creator/save/embedding_text.txt:
--------------------------------------------------------------------------------
1 | save
2 | Save a skill to a local path or a huggingface repo.
3 | Usage examples:
4 | save(skill=skill) or save(skill=skill, huggingface_repo_id='xxxx/skill_library') or save(skill=skill, skill_path='/path/to/save')
5 | ['save', 'skill', 'huggingface', 'local path']
--------------------------------------------------------------------------------
/creator/skill_library/open-creator/save/function_call.json:
--------------------------------------------------------------------------------
1 | {
2 | "name": "save",
3 | "description": "Save a skill to a local path or a huggingface repo.\n\nsave(skill=skill, huggingface_repo_id='xxxx/skill_library') or save(skill=skill, skill_path='/path/to/save')",
4 | "parameters": {
5 | "type": "object",
6 | "properties": {
7 | "skill": {
8 | "type": "object",
9 | "description": "CodeSkill object, the skill to be saved."
10 | },
11 | "huggingface_repo_id": {
12 | "type": "string",
13 | "description": "optional, the ID of the huggingface repo. If provided, the skill will be saved to this repo."
14 | },
15 | "skill_path": {
16 | "type": "string",
17 | "description": "optional, the local path. If provided, the skill will be saved to this path."
18 | }
19 | },
20 | "required": [
21 | "skill"
22 | ]
23 | }
24 | }
--------------------------------------------------------------------------------
/creator/skill_library/open-creator/save/install_dependencies.sh:
--------------------------------------------------------------------------------
1 | pip install -U "open-creator"
--------------------------------------------------------------------------------
/creator/skill_library/open-creator/save/skill.json:
--------------------------------------------------------------------------------
1 | {
2 | "skill_name": "save",
3 | "skill_description": "Save a skill to a local path or a huggingface repo.",
4 | "skill_metadata": {
5 | "created_at": "2023-10-04 09:54:43",
6 | "author": "gongjunmin",
7 | "updated_at": "2023-10-04 09:54:43",
8 | "usage_count": 0,
9 | "version": "1.0.0",
10 | "additional_kwargs": {}
11 | },
12 | "skill_tags": [
13 | "save",
14 | "skill",
15 | "huggingface",
16 | "local path"
17 | ],
18 | "skill_usage_example": "save(skill=skill, huggingface_repo_id='ChuxiJ/skill_library') or save(skill=skill, skill_path='/path/to/save')",
19 | "skill_program_language": "python",
20 | "skill_code": "from creator.core import creator\nfrom creator.core.skill import CodeSkill\n\n\ndef save(skill: CodeSkill, huggingface_repo_id: str = None, skill_path: str = None):\n \"\"\"\n Save a skill to a local path or a huggingface repo.\n \n Parameters:\n skill: CodeSkill object, the skill to be saved.\n huggingface_repo_id: str, optional, the ID of the huggingface repo. If provided, the skill will be saved to this repo.\n skill_path: str, optional, the local path. If provided, the skill will be saved to this path.\n \n Returns:\n None\n \n Example:\n >>> import creator\n >>> import os\n >>> skill_json_path = os.path.expanduser(\"~\") + \"/.cache/open_creator/skill_library/ask_run_code_confirm/skill.json\"\n >>> skill = creator.create(skill_json_path=skill_json_path)\n >>> save(skill=skill, huggingface_repo_id=\"ChuxiJ/skill_library\") # save to remote\n >>> save(skill=skill, skill_path=\"/path/to/save\") # save to local\n \"\"\"\n if huggingface_repo_id is not None:\n creator.save(skill=skill, huggingface_repo_id=huggingface_repo_id)\n elif skill_path is not None:\n creator.save(skill=skill, skill_path=skill_path)\n else:\n creator.save(skill=skill)",
21 | "skill_parameters": [
22 | {
23 | "param_name": "skill",
24 | "param_type": "object",
25 | "param_description": "CodeSkill object, the skill to be saved.",
26 | "param_required": true,
27 | "param_default": null
28 | },
29 | {
30 | "param_name": "huggingface_repo_id",
31 | "param_type": "string",
32 | "param_description": "optional, the ID of the huggingface repo. If provided, the skill will be saved to this repo.",
33 | "param_required": false,
34 | "param_default": null
35 | },
36 | {
37 | "param_name": "skill_path",
38 | "param_type": "string",
39 | "param_description": "optional, the local path. If provided, the skill will be saved to this path.",
40 | "param_required": false,
41 | "param_default": null
42 | }
43 | ],
44 | "skill_return": null,
45 | "skill_dependencies": [
46 | {
47 | "dependency_name": "open-creator",
48 | "dependency_version": "latest",
49 | "dependency_type": "package"
50 | }
51 | ],
52 | "conversation_history": [
53 | {
54 | "role": "user",
55 | "content": "# file name: save.py\nimport creator\nfrom creator.schema.skill import CodeSkill\n\n\ndef save(skill: CodeSkill, huggingface_repo_id: str = None, skill_path: str = None):\n \"\"\"\n Save a skill to a local path or a huggingface repo.\n \n Parameters:\n skill: CodeSkill object, the skill to be saved.\n huggingface_repo_id: str, optional, the ID of the huggingface repo. If provided, the skill will be saved to this repo.\n skill_path: str, optional, the local path. If provided, the skill will be saved to this path.\n \n Returns:\n None\n \n Usage examples:\n ```python\n >>> import creator\n >>> import os\n >>> skill_json_path = os.path.expanduser(\"~\") + \"/.cache/open_creator/skill_library/ask_run_code_confirm/skill.json\"\n >>> skill = creator.create(skill_json_path=skill_json_path)\n >>> creator.save(skill=skill, huggingface_repo_id=\"ChuxiJ/skill_library\")\n ```\n or\n ```python\n >>> import creator\n >>> import os\n >>> skill_json_path = os.path.expanduser(\"~\") + \"/.cache/open_creator/skill_library/ask_run_code_confirm/skill.json\"\n >>> skill = creator.create(skill_json_path=skill_json_path)\n >>> creator.save(skill=skill, skill_path=\"/path/to/save\")\n ```\n \"\"\"\n if huggingface_repo_id is not None:\n creator.save_to_hub(skill=skill, huggingface_repo_id=huggingface_repo_id)\n elif skill_path is not None:\n creator.save_to_skill_path(skill=skill, skill_path=skill_path)\n else:\n raise ValueError(\"Either huggingface_repo_id or skill_path must be provided.\")\n \n"
56 | }
57 | ],
58 | "test_summary": null
59 | }
--------------------------------------------------------------------------------
/creator/skill_library/open-creator/save/skill_code.py:
--------------------------------------------------------------------------------
1 | from creator.core import creator
2 | from creator.core.skill import CodeSkill
3 |
4 |
5 | def save(skill: CodeSkill, huggingface_repo_id: str = None, skill_path: str = None):
6 | """
7 | Save a skill to a local path or a huggingface repo.
8 |
9 | Parameters:
10 | skill: CodeSkill object, the skill to be saved.
11 | huggingface_repo_id: str, optional, the ID of the huggingface repo. If provided, the skill will be saved to this repo.
12 | skill_path: str, optional, the local path. If provided, the skill will be saved to this path.
13 |
14 | Returns:
15 | None
16 |
17 | Example:
18 | >>> import creator
19 | >>> import os
20 | >>> skill_json_path = os.path.expanduser("~") + "/.cache/open_creator/skill_library/ask_run_code_confirm/skill.json"
21 | >>> skill = creator.create(skill_json_path=skill_json_path)
22 | >>> save(skill=skill, huggingface_repo_id="ChuxiJ/skill_library") # save to remote
23 | >>> save(skill=skill, skill_path="/path/to/save") # save to local
24 | """
25 | if huggingface_repo_id is not None:
26 | creator.save(skill=skill, huggingface_repo_id=huggingface_repo_id)
27 | elif skill_path is not None:
28 | creator.save(skill=skill, skill_path=skill_path)
29 | else:
30 | creator.save(skill=skill)
--------------------------------------------------------------------------------
/creator/skill_library/open-creator/save/skill_doc.md:
--------------------------------------------------------------------------------
1 | ## Skill Details:
2 | - **Name**: save
3 | - **Description**: Save a skill to a local path or a huggingface repo.
4 | - **Version**: 1.0.0
5 | - **Usage**:
6 | You need to create a skill first
7 | ```python
8 | import creator
9 | import os
10 | skill_json_path = os.path.expanduser("~") + "/.cache/open_creator/skill_library/ask_run_code_confirm/skill.json"
11 | skill = creator.create(skill_json_path=skill_json_path)
12 | ```
13 | ```python
14 | save(skill=skill, huggingface_repo_id="ChuxiJ/skill_library")
15 | ```
16 | or
17 | ```python
18 | save(skill=skill, skill_path="/path/to/save")
19 | ```
20 | - **Parameters**:
21 | - **skill** (object): CodeSkill object, the skill to be saved.
22 | - Required: True
23 | - **huggingface_repo_id** (string): optional, the ID of the huggingface repo. If provided, the skill will be saved to this repo.
24 | - **skill_path** (string): optional, the local path. If provided, the skill will be saved to this path.
25 |
26 | - **Returns**:
--------------------------------------------------------------------------------
/creator/skill_library/open-creator/search/conversation_history.json:
--------------------------------------------------------------------------------
1 | [
2 | {
3 | "role": "user",
4 | "content": "# file name: search.py\nimport creator\nfrom creator.schema.skill import CodeSkill\n\n\ndef search(query: str, top_k=1, threshold=0.8) -> list[CodeSkill]:\n \"\"\"\n Search skills by query.\n \n Parameters:\n query: str, the query.\n top_k: int, optional, the maximum number of skills to return.\n threshold: float, optional, the minimum similarity score to return a skill.\n Returns:\n a list of CodeSkill objects.\n\n Example:\n >>> import creator\n >>> skills = search(\"I want to extract some pages from a pdf\")\n \"\"\"\n\n return creator.search(query=query, top_k=top_k, threshold=threshold)\n\n"
5 | }
6 | ]
--------------------------------------------------------------------------------
/creator/skill_library/open-creator/search/embedding_text.txt:
--------------------------------------------------------------------------------
1 | search
2 | This skill allows users to search for skills by query.
3 | skills = search('I want to extract some pages from a pdf')
4 | ['search', 'query', 'CodeSkill']
--------------------------------------------------------------------------------
/creator/skill_library/open-creator/search/function_call.json:
--------------------------------------------------------------------------------
1 | {
2 | "name": "search",
3 | "description": "This skill allows users to search for skills by query.\n\nskills = search('I want to extract some pages from a pdf')",
4 | "parameters": {
5 | "type": "object",
6 | "properties": {
7 | "query": {
8 | "type": "string",
9 | "description": "The query to search for skills."
10 | },
11 | "top_k": {
12 | "type": "integer",
13 | "description": "The maximum number of skills to return.",
14 | "default": 1
15 | },
16 | "threshold": {
17 | "type": "float",
18 | "description": "The minimum similarity score to return a skill.",
19 | "default": 0.8
20 | }
21 | },
22 | "required": [
23 | "query"
24 | ]
25 | }
26 | }
--------------------------------------------------------------------------------
/creator/skill_library/open-creator/search/install_dependencies.sh:
--------------------------------------------------------------------------------
1 | pip install -U "open-creator"
--------------------------------------------------------------------------------
/creator/skill_library/open-creator/search/search.py:
--------------------------------------------------------------------------------
1 | from creator.core import creator
2 | from creator.core.skill import CodeSkill
3 |
4 |
5 | def search(query: str, top_k=1, threshold=0.8) -> list[CodeSkill]:
6 | """
7 | Search skills by query.
8 |
9 | Parameters:
10 | query: str, the query.
11 | top_k: int, optional, the maximum number of skills to return.
12 | threshold: float, optional, the minimum similarity score to return a skill.
13 | Returns:
14 | a list of CodeSkill objects.
15 |
16 | Example:
17 | >>> import creator
18 | >>> skills = search("I want to extract some pages from a pdf")
19 | """
20 |
21 | return creator.search(query=query, top_k=top_k, threshold=threshold)
22 |
23 |
--------------------------------------------------------------------------------
/creator/skill_library/open-creator/search/skill.json:
--------------------------------------------------------------------------------
1 | {
2 | "skill_name": "search",
3 | "skill_description": "This skill allows users to search for skills by query.",
4 | "skill_metadata": {
5 | "created_at": "2023-10-04 14:51:53",
6 | "author": "gongjunmin",
7 | "updated_at": "2023-10-04 14:51:53",
8 | "usage_count": 0,
9 | "version": "1.0.0",
10 | "additional_kwargs": {}
11 | },
12 | "skill_tags": [
13 | "search",
14 | "query",
15 | "CodeSkill"
16 | ],
17 | "skill_usage_example": "skills = search('I want to extract some pages from a pdf')",
18 | "skill_program_language": "python",
19 | "skill_code": "from creator.core import creator\nfrom creator.core.skill import CodeSkill\n\ndef search(query: str, top_k=1, threshold=0.8) -> list[CodeSkill]:\n '''\n Search skills by query.\n \n Parameters:\n query: str, the query.\n top_k: int, optional, the maximum number of skills to return.\n threshold: float, optional, the minimum similarity score to return a skill.\n Returns:\n a list of CodeSkill objects.\n\n Example:\n >>> import creator\n >>> skills = search('I want to extract some pages from a pdf')\n '''\n\n return creator.search(query=query, top_k=top_k, threshold=threshold)",
20 | "skill_parameters": [
21 | {
22 | "param_name": "query",
23 | "param_type": "string",
24 | "param_description": "The query to search for skills.",
25 | "param_required": true,
26 | "param_default": null
27 | },
28 | {
29 | "param_name": "top_k",
30 | "param_type": "integer",
31 | "param_description": "The maximum number of skills to return.",
32 | "param_required": false,
33 | "param_default": 1
34 | },
35 | {
36 | "param_name": "threshold",
37 | "param_type": "float",
38 | "param_description": "The minimum similarity score to return a skill.",
39 | "param_required": false,
40 | "param_default": 0.8
41 | }
42 | ],
43 | "skill_return": {
44 | "param_name": "skills",
45 | "param_type": "array",
46 | "param_description": "A list of CodeSkill objects.",
47 | "param_required": true,
48 | "param_default": null
49 | },
50 | "skill_dependencies": [
51 | {
52 | "dependency_name": "open-creator",
53 | "dependency_version": "latest",
54 | "dependency_type": "package"
55 | }
56 | ],
57 | "conversation_history": [
58 | {
59 | "role": "user",
60 | "content": "# file name: search.py\nimport creator\nfrom creator.schema.skill import CodeSkill\n\n\ndef search(query: str, top_k=1, threshold=0.8) -> list[CodeSkill]:\n \"\"\"\n Search skills by query.\n \n Parameters:\n query: str, the query.\n top_k: int, optional, the maximum number of skills to return.\n threshold: float, optional, the minimum similarity score to return a skill.\n Returns:\n a list of CodeSkill objects.\n\n Example:\n >>> import creator\n >>> skills = search(\"I want to extract some pages from a pdf\")\n \"\"\"\n\n return creator.search(query=query, top_k=top_k, threshold=threshold)\n\n"
61 | }
62 | ],
63 | "test_summary": null
64 | }
--------------------------------------------------------------------------------
/creator/skill_library/open-creator/search/skill_code.py:
--------------------------------------------------------------------------------
1 | import creator
2 | from creator.core.skill import CodeSkill
3 |
4 | def search(query: str, top_k=1, threshold=0.8) -> list[CodeSkill]:
5 | '''
6 | Search skills by query.
7 |
8 | Parameters:
9 | query: str, the query.
10 | top_k: int, optional, the maximum number of skills to return.
11 | threshold: float, optional, the minimum similarity score to return a skill.
12 | Returns:
13 | a list of CodeSkill objects.
14 |
15 | Example:
16 | >>> import creator
17 | >>> skills = search('I want to extract some pages from a pdf')
18 | '''
19 |
20 | return creator.search(query=query, top_k=top_k, threshold=threshold)
--------------------------------------------------------------------------------
/creator/skill_library/open-creator/search/skill_doc.md:
--------------------------------------------------------------------------------
1 | ## Skill Details:
2 | - **Name**: search
3 | - **Description**: This skill allows users to search for skills by query.
4 | - **Version**: 1.0.0
5 | - **Usage**:
6 | ```python
7 | skills = search('I want to extract some pages from a pdf')
8 | ```
9 | - **Parameters**:
10 | - **query** (string): The query to search for skills.
11 | - Required: True
12 | - **top_k** (integer): The maximum number of skills to return.
13 | - Default: 1
14 | - **threshold** (float): The minimum similarity score to return a skill.
15 | - Default: 0.8
16 |
17 | - **Returns**:
18 | - **skills** (array): A list of CodeSkill objects.
--------------------------------------------------------------------------------
/creator/utils/__init__.py:
--------------------------------------------------------------------------------
1 | from .install_command import generate_install_command
2 | from .skill_doc import generate_skill_doc
3 | from .language_suffix import generate_language_suffix
4 | from .title_remover import remove_title
5 | from .output_truncate import truncate_output
6 | from .ask_human import ask_run_code_confirm
7 | from .dict2list import convert_to_values_list
8 | from .user_info import get_user_info
9 | from .load_prompt import load_system_prompt
10 | from .printer import print
11 | from .code_split import split_code_blocks
12 | from .valid_code import is_valid_code, is_expression
13 | from .tips_utils import remove_tips
14 |
15 |
16 | __all__ = [
17 | "generate_install_command",
18 | "generate_skill_doc",
19 | "generate_language_suffix",
20 | "remove_title",
21 | "truncate_output",
22 | "ask_run_code_confirm",
23 | "convert_to_values_list",
24 | "get_user_info",
25 | "load_system_prompt",
26 | "print",
27 | "split_code_blocks",
28 | "is_valid_code",
29 | "is_expression",
30 | "remove_tips"
31 | ]
32 |
--------------------------------------------------------------------------------
/creator/utils/ask_human.py:
--------------------------------------------------------------------------------
1 | import inquirer
2 |
3 |
4 | def ask_run_code_confirm(message='Would you like to run this code? (y/n)\n\n'):
5 | questions = [inquirer.Confirm('confirm', message=message)]
6 | answers = inquirer.prompt(questions)
7 | return answers["confirm"]
8 |
--------------------------------------------------------------------------------
/creator/utils/code_split.py:
--------------------------------------------------------------------------------
1 | def split_code_blocks(code: str):
2 | lines = code.strip().split('\n')
3 | i = len(lines) - 1
4 |
5 | codes = []
6 |
7 | while i >= 0:
8 | line = lines[i]
9 |
10 |         # A non-indented line is a standalone top-level statement: keep it as its own block
11 |         if not line.startswith((" ", "\t")):
12 |             codes.append(line)
13 |             i -= 1
14 |         # Stop at the first indented line; it and everything above it form one block
15 |         else:
16 |             break
17 |
18 | # Add remaining lines as a single block
19 | if i >= 0:
20 | codes.append("\n".join(lines[:i+1]))
21 |
22 | return codes[::-1]
23 |
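# Illustrative behavior (a sketch, not part of the original module):
#   split_code_blocks("def f():\n    return 1\nprint(f())\nprint('done')")
#   returns ["def f():\n    return 1", "print(f())", "print('done')"]:
#   trailing top-level lines become individual blocks, while everything up to
#   and including the last indented line is kept together as one block.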
--------------------------------------------------------------------------------
/creator/utils/dict2list.py:
--------------------------------------------------------------------------------
1 |
2 | def convert_to_values_list(x):
3 | if isinstance(x, dict):
4 | value_list = list(x.values())
5 | if len(value_list) > 0 and isinstance(value_list[0], dict):
6 | key = list(x.keys())[0]
7 | value_list[0]["name"] = key
8 | return value_list
9 | elif isinstance(x, str):
10 | if x == "None":
11 | return []
12 | else:
13 | return [{
14 | "parameter_name": "result",
15 | "parameter_type": x
16 | }]
17 | return x
18 |
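# Illustrative behavior (a sketch, not part of the original module):
#   convert_to_values_list({"count_primes": {"param_type": "int"}})
#       returns [{"param_type": "int", "name": "count_primes"}]
#   convert_to_values_list("None") returns []
#   convert_to_values_list("string") returns [{"parameter_name": "result", "parameter_type": "string"}]
#   Any other input is returned unchanged.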
--------------------------------------------------------------------------------
/creator/utils/install_command.py:
--------------------------------------------------------------------------------
1 |
2 |
3 | def generate_install_command(language: str, dependencies):
4 | if language == "python":
5 | return _generate_python_install_command(dependencies)
6 | elif language == "R":
7 | return _generate_r_install_command(dependencies)
8 | elif language == "javascript":
9 | return _generate_javascript_install_command(dependencies)
10 | elif language == "shell":
11 | return _generate_shell_install_command(dependencies)
12 | elif language == "applescript":
13 | return _generate_applescript_install_command(dependencies)
14 | elif language == "html":
15 | return _generate_html_install_command(dependencies)
16 | else:
17 | raise NotImplementedError
18 |
19 |
20 | def _generate_python_install_command(dependencies):
21 | shell_command_str = 'pip show {package_name} || pip install "{package_name}'
22 | commands = []
23 | if not isinstance(dependencies, list):
24 | dependencies = [dependencies]
25 | for dep in dependencies:
26 | if dep.dependency_type not in ("build-in", "function"):
27 |             shell_command = shell_command_str.format(package_name=dep.dependency_name)
28 |             version = dep.dependency_version
29 |             if version and version != "latest":
30 |                 if version[:2] not in ("==", ">=", "<=", "!="):
31 |                     shell_command += "==" + version
32 |                 else:
33 |                     shell_command += version
34 |             # always close the quote opened in shell_command_str
35 |             shell_command += '"'
36 |             commands.append(shell_command)
37 | return "\n".join(commands)
38 |
39 |
40 | def _generate_r_install_command(dependencies):
41 | shell_command_str = "Rscript -e 'if (!requireNamespace(\"{package_name}\", quietly = TRUE)) install.packages(\"{package_name}\")'"
42 | commands = []
43 | for dep in dependencies:
44 | if dep.dependency_type == "package":
45 | shell_command = shell_command_str.format(package_name=dep.dependency_name)
46 | if dep.dependency_version:
47 | shell_command += "==" + dep.dependency_version
48 | commands.append(shell_command)
49 | return "\n".join(commands)
50 |
51 |
52 | def _generate_javascript_install_command(dependencies):
53 | shell_command_str = "npm install {package_name}"
54 | commands = []
55 | for dep in dependencies:
56 | if dep.dependency_type == "package":
57 | shell_command = shell_command_str.format(package_name=dep.dependency_name)
58 | if dep.dependency_version:
59 | shell_command += "@" + dep.dependency_version
60 | commands.append(shell_command)
61 | return "\n".join(commands)
62 |
63 |
64 | def _generate_shell_install_command(dependencies):
65 | shell_command_str = "apt-get install {package_name}"
66 | commands = []
67 | for dep in dependencies:
68 | if dep.dependency_type == "package":
69 | shell_command = shell_command_str.format(package_name=dep.dependency_name)
70 | if dep.dependency_version:
71 | shell_command += "=" + dep.dependency_version
72 | commands.append(shell_command)
73 | return "\n".join(commands)
74 |
75 |
76 | def _generate_applescript_install_command(dependencies):
77 | shell_command_str = "osascript -e 'tell application \"Terminal\" to do script \"brew install {package_name}\"'"
78 | commands = []
79 | for dep in dependencies:
80 | if dep.dependency_type == "package":
81 | shell_command = shell_command_str.format(package_name=dep.dependency_name)
82 | if dep.dependency_version:
83 | shell_command += "==" + dep.dependency_version
84 | commands.append(shell_command)
85 | return "\n".join(commands)
86 |
87 |
88 | def _generate_html_install_command(dependencies):
89 | shell_command_str = "apt-get install {package_name}"
90 | commands = []
91 | for dep in dependencies:
92 | if dep.dependency_type == "package":
93 | shell_command = shell_command_str.format(package_name=dep.dependency_name)
94 | if dep.dependency_version:
95 | shell_command += "=" + dep.dependency_version
96 | commands.append(shell_command)
97 | return "\n".join(commands)
98 |
--------------------------------------------------------------------------------
/creator/utils/language_suffix.py:
--------------------------------------------------------------------------------
1 |
2 |
3 | def generate_language_suffix(language: str):
4 | if language == "python":
5 | return ".py"
6 | elif language == "R":
7 | return ".R"
8 | elif language == "javascript":
9 | return ".js"
10 | elif language == "shell":
11 | return ".sh"
12 | elif language == "applescript":
13 | return ".applescript"
14 | elif language == "html":
15 | return ".html"
16 | else:
17 | raise NotImplementedError
18 |
--------------------------------------------------------------------------------
/creator/utils/load_prompt.py:
--------------------------------------------------------------------------------
1 | def load_system_prompt(prompt_path):
2 | with open(prompt_path, encoding='utf-8') as f:
3 | prompt = f.read()
4 | return prompt
5 |
--------------------------------------------------------------------------------
/creator/utils/output_truncate.py:
--------------------------------------------------------------------------------
1 |
2 |
3 | def truncate_output(tool_result, max_output_chars=1000):
4 | stdout, stderr = tool_result.get("stdout", ""), tool_result.get("stderr", "")
5 | stdout_str, stderr_str = str(stdout), str(stderr)
6 | type_of_stdout, type_of_stderr = type(stdout), type(stderr)
7 | truncated = False
8 |     if len(stdout_str) > max_output_chars:  # long stdout: keep the beginning
9 | tool_result["stdout"] = stdout_str[:max_output_chars] + "..."
10 | tool_result["stdout"] += f"\nOutput of `run_code` function truncated. The first {max_output_chars} characters are shown\n"
11 | stdout_str = tool_result["stdout"]
12 | truncated = True
13 |     if len(stderr_str) > max_output_chars:  # long stderr: keep the end
14 | tool_result["stderr"] = "..." + stderr_str[-max_output_chars:]
15 | if not truncated:
16 | tool_result["stderr"] += f"\nOutput of `run_code` function truncated. The last {max_output_chars} characters are shown\n"
17 | stderr_str = tool_result["stderr"]
18 | if len(stdout_str+stderr_str) > max_output_chars:
19 | tool_result["stderr"] = "..." + stderr_str[-max_output_chars:]
20 | if not truncated:
21 | tool_result["stderr"] += f"\nOutput of `run_code` function truncated. The last {max_output_chars} characters are shown\n"
22 | left_chars = max_output_chars - len(stdout_str) if len(stdout_str) < max_output_chars else 0
23 | tool_result["stderr"] += stderr_str[-left_chars:]
24 |
25 | if type_of_stdout != str:
26 | try:
27 | tool_result["stdout"] = type_of_stdout(tool_result["stdout"])
28 | except Exception:
29 | pass
30 | if type_of_stderr != str:
31 | try:
32 | tool_result["stderr"] = type_of_stderr(tool_result["stderr"])
33 | except Exception:
34 | pass
35 | return tool_result
36 |
--------------------------------------------------------------------------------
/creator/utils/printer.py:
--------------------------------------------------------------------------------
1 | import json
2 | import sys
3 | from rich.markdown import Markdown
4 | from rich.console import Console
5 | from rich import print as rich_print
6 | from rich.json import JSON
7 | import io
8 |
9 | # Save the original print function
10 | original_print = print
11 |
12 |
13 | def to_str(s):
14 | if isinstance(s, (dict, list)):
15 | return json.dumps(s, indent=4)
16 | return str(s)
17 |
18 |
19 | class Printer:
20 | def __init__(self):
21 | self.callbacks = {}
22 | console = Console()
23 | self.is_terminal = console.is_terminal
24 | self.is_jupyter = console.is_jupyter
25 | self.is_interactive = Console().is_interactive
26 | self.use_rich = self.is_terminal or self.is_jupyter or self.is_interactive
27 | self.output_capture = io.StringIO() # Moved inside the class as an instance variable
28 |
29 | def add_callback(self, func):
30 | self.callbacks[func.__name__] = func
31 |
32 | def remove_callback(self, func_name):
33 | self.callbacks.pop(func_name, None)
34 |
35 | def print(self, *messages, sep=' ', end='\n', file=None, flush=False, print_type='str', output_option='both'):
36 | formatted_message = sep.join(map(to_str, messages))
37 |
38 | if print_type == 'markdown' and self.use_rich:
39 | formatted_message = Markdown(formatted_message)
40 | elif print_type == 'json' and self.use_rich:
41 | formatted_message = JSON(formatted_message)
42 |
43 | for callback in self.callbacks.values():
44 | try:
45 | callback(formatted_message, end=end, file=file, flush=flush, output_option=output_option)
46 | except Exception as e:
47 | original_print(f"Error in callback {callback.__name__}: {str(e)}", file=sys.stderr)
48 |
49 | def add_default_callback(self):
50 | if self.use_rich:
51 | def default_print(message, end='\n', file=None, flush=False, output_option='terminal'):
52 | target_file = file or self.output_capture
53 | if output_option in ['terminal', 'both']:
54 | console = Console(force_jupyter=self.is_jupyter, force_terminal=self.is_terminal, force_interactive=self.is_interactive, file=target_file)
55 | console.print(message, end=end)
56 | # if output_option in ['stdout', 'both']:
57 | # rich_print(message, end=end, file=sys.stdout, flush=flush)
58 | else:
59 | def default_print(message, end='\n', file=None, flush=False, output_option='both'):
60 | target_file = file or self.output_capture
61 | if output_option in ['stdout', 'both']:
62 | original_print(message, end=end, file=target_file, flush=flush)
63 | if output_option in ['terminal', 'both'] and target_file is not sys.stdout:
64 | original_print(message, end=end, file=sys.stdout, flush=flush)
65 |
66 | self.add_callback(default_print)
67 |
68 |
69 | printer = Printer()
70 | printer.add_default_callback()
71 |
72 |
73 | # Replace the built-in print
74 | def print(*args, sep=' ', end='\n', file=None, flush=False, print_type='str', output_option='both', **kwargs):
75 | printer.print(*args, sep=sep, end=end, file=file, flush=flush, print_type=print_type, output_option=output_option, **kwargs)
76 |
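77 | # Illustrative usage sketch (comments only; the keyword names below follow the
78 | # `print` wrapper defined in this module, the printed values are arbitrary):
79 | #
80 | # print("## Results", print_type="markdown", output_option="terminal")
81 | # print({"passed": 3, "failed": 0}, print_type="json", output_option="both")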
--------------------------------------------------------------------------------
/creator/utils/skill_doc.py:
--------------------------------------------------------------------------------
1 |
2 |
3 | def generate_skill_doc(skill):
4 | def format_parameter(param):
5 | """Helper function to format a parameter for markdown."""
6 | details = [f" - **{param.param_name}** ({param.param_type}): {param.param_description}"]
7 | if param.param_required:
8 | details.append(" - Required: True")
9 | if param.param_default:
10 | details.append(f" - Default: {param.param_default}")
11 | return "\n".join(details)
12 |
13 | def format_return(ret):
14 | """Helper function to format a return for markdown."""
15 | return f" - **{ret.param_name}** ({ret.param_type}): {ret.param_description}"
16 |
17 | doc = f"""## Skill Details:
18 | - **Name**: {skill.skill_name}
19 | - **Description**: {skill.skill_description}
20 | - **Version**: {skill.skill_metadata.version}
21 | - **Usage**:
22 | ```{skill.skill_program_language}
23 | {skill.skill_usage_example}
24 | ```
25 | - **Parameters**:
26 | """
27 | # Handle skill_parameters
28 | if isinstance(skill.skill_parameters, list):
29 | for param in skill.skill_parameters:
30 | doc += format_parameter(param) + "\n"
31 | elif skill.skill_parameters: # If it's a single CodeSkillParameter
32 | doc += format_parameter(skill.skill_parameters) + "\n"
33 |
34 | doc += "\n- **Returns**:\n"
35 | if isinstance(skill.skill_return, list):
36 | for ret in skill.skill_return:
37 | doc += format_return(ret) + "\n"
38 | elif skill.skill_return: # If it's a single CodeSkillParameter
39 | doc += format_return(skill.skill_return) + "\n"
40 |
41 | return doc.strip()
42 |
43 |
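44 | # Illustrative call (the `skill` object is hypothetical; it only needs the attributes
45 | # accessed above, such as skill_name, skill_parameters and skill_return):
46 | #
47 | # markdown_doc = generate_skill_doc(skill)
48 | # print(markdown_doc)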
--------------------------------------------------------------------------------
/creator/utils/tips_utils.py:
--------------------------------------------------------------------------------
1 |
2 |
3 | def remove_tips(messages):
4 | new_messages = []
5 | for m in messages:
6 | if m.content is None or not m.content.startswith("=== Tips"):
7 | new_messages.append(m)
8 | return new_messages
9 |
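10 | # Behaviour sketch: each message is expected to expose a `.content` attribute (as used
11 | # above); any message whose content starts with "=== Tips" is dropped, so injected tips
12 | # prompts do not accumulate in the conversation history.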
--------------------------------------------------------------------------------
/creator/utils/title_remover.py:
--------------------------------------------------------------------------------
1 |
2 |
3 | def remove_title(schema):
4 | """
5 | Remove all title and its properties's title
6 | Recursively remove all titles including `$defs`
7 |
8 | Args:
9 | - schema (dict): The input schema dictionary.
10 |
11 | Returns:
12 | - dict: The schema dictionary after removing the "title" field.
13 |
14 | Example:
15 | >>> schema = {
16 | ... "title": "Example Schema",
17 | ... "properties": {
18 | ... "name": {
19 | ... "title": "Name",
20 | ... "type": "string"
21 | ... },
22 | ... "age": {
23 | ... "title": "Age",
24 | ... "type": "integer"
25 | ... }
26 | ... },
27 | ... "$defs": {
28 | ... "address": {
29 | ... "title": "Address",
30 | ... "type": "object",
31 | ... "properties": {
32 | ... "street": {
33 | ... "title": "Street",
34 | ... "type": "string"
35 | ... }
36 | ... }
37 | ... }
38 | ... }
39 | ... }
40 | >>> remove_title(schema)
41 | {
42 | "properties": {
43 | "name": {
44 | "type": "string"
45 | },
46 | "age": {
47 | "type": "integer"
48 | }
49 | },
50 | "$defs": {
51 | "address": {
52 | "type": "object",
53 | "properties": {
54 | "street": {
55 | "type": "string"
56 | }
57 | }
58 | }
59 | }
60 | }
61 | """
62 | if "title" in schema:
63 | schema.pop("title")
64 | if "properties" in schema:
65 | for prop in schema["properties"]:
66 | remove_title(schema["properties"][prop])
67 | if "$defs" in schema:
68 | for prop in schema["$defs"]:
69 | remove_title(schema["$defs"][prop])
70 | return schema
71 |
--------------------------------------------------------------------------------
/creator/utils/user_info.py:
--------------------------------------------------------------------------------
1 | import getpass
2 | import os
3 | import platform
4 |
5 |
6 | def get_user_info():
7 | user_info = """
8 | [User Info]
9 | Name: {username}
10 | CWD: {current_working_directory}
11 | OS: {operating_system}
12 | """
13 | username = getpass.getuser()
14 | current_working_directory = os.getcwd()
15 | operating_system = platform.system()
16 | return user_info.format(
17 | username=username,
18 | current_working_directory=current_working_directory,
19 | operating_system=operating_system
20 | )
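21 |
22 | # Example (illustrative values; the exact layout follows the template above):
23 | # print(get_user_info())
24 | # ->
25 | # [User Info]
26 | # Name: alice
27 | # CWD: /home/alice/projects/open-creator
28 | # OS: Darwin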
--------------------------------------------------------------------------------
/creator/utils/valid_code.py:
--------------------------------------------------------------------------------
1 | import re
2 | import ast
3 |
4 |
5 | def is_valid_variable_name(name: str) -> bool:
6 | return re.match(r"^[a-zA-Z_]\w*$", name) is not None
7 |
8 |
9 | def extract_variable_names(code: str) -> list:
10 | try:
11 | tree = ast.parse(code, mode="eval")
12 | except SyntaxError:
13 | return []
14 | return [node.id for node in ast.walk(tree) if isinstance(node, ast.Name)]
15 |
16 |
17 | def is_code_with_assignment(code: str) -> bool:
18 | if "=" not in code:
19 | return False
20 |
21 | left, right = code.split("=", 1)
22 | return is_valid_variable_name(left.strip()) and is_compilable(right.strip(), "eval")
23 |
24 |
25 | def is_compilable(code: str, mode: str) -> bool:
26 | try:
27 | compile(code, "", mode)
28 | return True
29 | except SyntaxError:
30 | return False
31 |
32 |
33 | def is_valid_code(code: str, namespace: dict) -> bool:
34 | variables = extract_variable_names(code)
35 | if not all(is_valid_variable_name(variable) for variable in variables):
36 | return False
37 |
38 | return (is_compilable(code, "eval") or
39 | is_code_with_assignment(code) or
40 | is_compilable(code, "exec"))
41 |
42 |
43 | def is_expression(code: str):
44 | return is_compilable(code, "eval")
45 |
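46 | # Illustrative checks (behaviour follows the helpers defined above):
47 | # is_expression("1 + 2")          # True: compiles in eval mode
48 | # is_valid_code("x = 1 + 2", {})  # True: a valid name assigned a compilable expression
49 | # is_valid_code("1 +", {})        # False: not compilable in any mode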
--------------------------------------------------------------------------------
/docs/api_doc.md:
--------------------------------------------------------------------------------
1 | ## Open-Creator API Documentation
2 |
3 | ### Function: `create`
4 | Generates a `CodeSkill` instance using different input sources.
5 |
6 | #### Parameters:
7 | - `request`: String detailing the skill functionality.
8 | - `messages` or `messages_json_path`: Messages as a list of dictionaries or a path to a JSON file containing messages.
9 | - `file_content` or `file_path`: String of file content or path to a code/API doc file.
10 | - `skill_path` or `skill_json_path`: Directory path with skill name as stem or file path with `skill.json` as stem.
11 | - `huggingface_repo_id`: Identifier for a Huggingface repository.
12 | - `huggingface_skill_path`: Path to the skill within the Huggingface repository.
13 |
14 | #### Returns:
15 | - `CodeSkill`: The created skill.
16 |
17 | #### Usage:
18 | 1. Creating Skill using a Request String:
19 | ```python
20 | skill = create(request="filter how many prime numbers are in 201")
21 | ```
22 | 2. Creating Skill using Messages:
23 | - Directly:
24 | ```python
25 | skill = create(messages=[{"role": "user", "content": "write a program..."}])
26 | ```
27 | - Via JSON Path:
28 | ```python
29 | skill = create(messages_json_path="./messages_example.json")
30 | ```
31 |
32 | 3. Creating Skill using File Content or File Path:
33 | - Direct Content:
34 | ```python
35 | skill = create(file_content="def example_function(): pass")
36 | ```
37 | - File Path:
38 | ```python
39 | skill = create(file_path="../creator/utils/example.py")
40 | ```
41 |
42 | 4. Creating Skill using Skill Path or Skill JSON Path:
43 | - JSON Path:
44 | ```python
45 | skill = create(skill_json_path="~/.cache/open_creator/skill_library/create/skill.json")
46 | ```
47 | - Skill Path:
48 | ```python
49 | skill = create(skill_path="~/.cache/open_creator/skill_library/create")
50 | ```
51 |
52 | 5. Creating Skill using Huggingface Repository ID and Skill Path:
53 | If a skill is hosted in a Huggingface repository, you can create it by specifying the repository ID and the skill path within the repository.
54 | ```python
55 | skill = create(huggingface_repo_id="YourRepo/skill-library", huggingface_skill_path="specific_skill")
56 | ```
57 |
58 | #### Notes:
59 | - Ensure to provide accurate and accessible file paths.
60 | - At least one parameter must be specified to generate a skill.
61 | - Parameters’ functionality does not overlap; specify the most relevant one for clarity.
62 | - Use absolute paths where possible to avoid relative path issues.
63 | - Ensure the repository ID and skill path are accurate and that you have the necessary access permissions to retrieve the skill from the repository.
64 |
65 |
66 | ### Function: `save`
67 | Stores a `CodeSkill` instance either to a local path or a Huggingface repository. By default, simply call `save(skill)` and the skill is stored in the default skill-library path. Only save a skill when the user asks to do so.
68 |
69 | #### Parameters:
70 | - `skill` (CodeSkill): The skill instance to be saved.
71 | - `huggingface_repo_id` (Optional[str]): Identifier for a Huggingface repository.
72 | - `skill_path` (Optional[str]): Local path where the skill should be saved.
73 |
74 | #### Returns:
75 | - None
76 |
77 | #### Usage:
78 | The `save` function allows for the persistent storage of a `CodeSkill` instance by saving it either locally or to a specified Huggingface repository.
79 |
80 | 1. **Save to Huggingface Repository:**
81 | ```python
82 | save(skill=skill, huggingface_repo_id="YourRepo/skill_library")
83 | ```
84 |
85 | 2. **Save Locally:**
86 | ```python
87 | save(skill=skill, skill_path="/path/to/save")
88 | ```
89 |
90 | #### Notes:
91 | - At least one of `huggingface_repo_id` or `skill_path` must be provided to execute the function, otherwise a `ValueError` will be raised.
92 | - Ensure provided paths and repository identifiers are accurate and accessible.
93 |
94 |
95 | ### Function: `search`
96 | Retrieve skills related to a specified query from the available pool of skills.
97 |
98 | #### Parameters:
99 | - `query` (str): Search query string.
100 | - `top_k` (Optional[int]): Maximum number of skills to return. Default is 1.
101 | - `threshold` (Optional[float]): Minimum similarity score to return a skill. Default is 0.8.
102 |
103 | #### Returns:
104 | - List[CodeSkill]: A list of retrieved `CodeSkill` objects that match the query.
105 |
106 | #### Usage:
107 | The `search` function allows users to locate skills related to a particular query string. This is particularly useful for identifying pre-existing skills within a skill library that may fulfill a requirement or for exploring available functionalities.
108 |
109 | 1. **Basic Search:**
110 | ```python
111 | skills = search("extract pages from a pdf")
112 | ```
113 |
114 | 2. **Refined Search:**
115 | ```python
116 | skills = search("extract pages from a pdf", top_k=3, threshold=0.85)
117 | ```
118 |
119 | #### Notes:
120 | - The `query` should be descriptive to enhance the accuracy of retrieved results.
121 | - Adjust `top_k` and `threshold` to balance between specificity and breadth of results.
122 | - Ensure to check the length of the returned list to validate the presence of results before usage.
123 |
124 |
125 | ### Skill Object Methods and Operator Overloading
126 |
127 | Explore the functionalities and modifications of a skill object through methods and overloaded operators.
128 |
129 | #### Method: `run`
130 | Execute a skill with provided arguments or request.
131 |
132 | - **Example Usage**:
133 |
134 | ```python
135 | skills = search("pdf extract section")
136 | if skills:
137 | skill = skills[0]
138 | input_args = {
139 | "pdf_path": "creator.pdf",
140 | "start_page": 3,
141 | "end_page": 8,
142 | "output_path": "creator3-8.pdf"
143 | }
144 | print(skill.run(input_args))
145 | ```
146 |
147 | #### Method: `test`
148 | Validate a skill using a tester agent.
149 |
150 | - **Example Usage**:
151 |
152 | ```python
153 | skill = create(request="filter prime numbers in a range, e.g., filter_prime_numbers(2, 201)")
154 | test_summary = skill.test()
155 | print(test_summary)
156 | print(skill.conversation_history)
157 | ```
158 |
159 | #### Overloaded Operators:
160 | Modify and refine skills using operator overloading.
161 |
162 | 1. **Combining Skills**: Use the `+` operator to combine skills that run in sequence or in parallel, and describe how they should coordinate with the `>` operator.
163 |
164 | ```python
165 | new_skill = skillA + skillB > "Explanation of how skills A and B operate together"
166 | ```
167 |
168 | 2. **Refactoring Skills**: Employ the `>` operator to enhance or modify existing skills.
169 | ```python
170 | refactored_skill = skill > "Descriptive alterations or enhancements"
171 | ```
172 |
173 | 3. **Decomposing Skills**: Use the `<` operator to break down a skill into simpler components.
174 | ```python
175 | simpler_skills = skill < "Description of how the skill should be decomposed"
176 | ```
177 |
178 | #### Notes:
179 | - Ensure accurate descriptions when using overloaded operators to ensure skill modifications are clear and understandable.
180 | - Validate skills with `test` method to ensure functionality post-modification.
181 |
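182 |
183 | ### Putting It Together
184 | A minimal end-to-end sketch that combines the calls documented above (the request string, search query, and run arguments are illustrative, and it assumes `create`, `save`, and `search` are importable from the `creator` package as shown in the examples):
185 |
186 | ```python
187 | from creator import create, save, search
188 |
189 | skills = search("extract pages from a pdf")   # reuse an existing skill if one matches
190 | if skills:
191 |     skill = skills[0]
192 | else:
193 |     skill = create(request="extract a page range from a pdf into a new pdf")
194 |     save(skill)   # persist to the default skill library path
195 |
196 | input_args = {"pdf_path": "creator.pdf", "start_page": 3, "end_page": 8, "output_path": "creator3-8.pdf"}
197 | print(skill.run(input_args))
198 | ```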
--------------------------------------------------------------------------------
/docs/commands.md:
--------------------------------------------------------------------------------
1 |
2 | ## Commands
3 |
4 | See the help message:
5 |
6 | ```shell
7 | creator -h
8 | ```
9 |
10 | **Arguments**:
11 |
12 | - `-h, --help`
13 | show this help message and exit
14 | - `-c, --config`
15 | open config.yaml file in text editor
16 | - `-i, --interactive`
17 | Enter interactive mode
18 | - COMMANDS `{create,save,search,server,ui}`
19 |
20 |
21 | ---
22 |
23 | ### ui
24 |
25 | streamlit demo:
26 |
27 | ```
28 | creator ui
29 | ```
30 |
31 | open [streamlit demo](http://localhost:8501/)
32 |
33 | ---
34 |
35 | ### create
36 |
37 | usage:
38 |
39 | ```
40 | creator create [-h] [-r REQUEST] [-m MESSAGES] [-sp SKILL_JSON_PATH] [-c FILE_CONTENT] [-f FILE_PATH] [-hf_id HUGGINGFACE_REPO_ID]
41 | [-hf_path HUGGINGFACE_SKILL_PATH] [-s]
42 | ```
43 |
44 | `-h, --help`
45 | show this help message and exit
46 |
47 | `-r REQUEST, --request REQUEST`
48 | Request string
49 |
50 | `-m MESSAGES, --messages MESSAGES`
51 | Openai messages format
52 |
53 | `-sp SKILL_JSON_PATH, --skill_json_path SKILL_JSON_PATH`
54 | Path to skill JSON file
55 |
56 | `-c FILE_CONTENT, --file_content FILE_CONTENT`
57 | File content of API docs or code file
58 |
59 | `-f FILE_PATH, --file_path FILE_PATH`
60 | Path to API docs or code file
61 |
62 | `-hf_id HUGGINGFACE_REPO_ID, --huggingface_repo_id HUGGINGFACE_REPO_ID`
63 | Huggingface repo ID
64 |
65 | `-hf_path HUGGINGFACE_SKILL_PATH, --huggingface_skill_path HUGGINGFACE_SKILL_PATH`
66 | Huggingface skill path
67 |
68 | `-s, --save`
69 | Save skill after creation
70 |
71 | ---
72 |
73 | ### save
74 |
75 | usage:
76 |
77 | ```
78 | creator save [-h] [-s SKILL] [-sp SKILL_JSON_PATH] [-hf_id HUGGINGFACE_REPO_ID]
79 | ```
80 |
81 | `-h, --help`
82 | show this help message and exit
83 |
84 | `-s SKILL, --skill SKILL`
85 | Skill json object
86 |
87 | `-sp SKILL_JSON_PATH, --skill_json_path SKILL_JSON_PATH`
88 | Path to skill JSON file
89 |
90 | `-hf_id HUGGINGFACE_REPO_ID, --huggingface_repo_id HUGGINGFACE_REPO_ID`
91 | Huggingface repo ID
92 |
93 | ---
94 |
95 | ### search
96 |
97 | ```
98 | creator search [-h] [-q QUERY] [-k TOP_K] [-t THRESHOLD] [-r]
99 | ```
100 |
101 | `-h, --help`
102 | show this help message and exit
103 |
104 | `-q QUERY, --query QUERY`
105 | Search query
106 |
107 | `-k TOP_K, --top_k TOP_K`
108 | Number of results to return, default 3
109 |
110 | `-t THRESHOLD, --threshold THRESHOLD`
111 | Threshold for search, default 0.8
112 |
113 | `-r, --remote`
114 | Search from remote
115 |
116 | ---
117 |
118 | ### server
119 |
120 | ```
121 | creator server [-h] [-host HOST] [-p PORT]
122 | ```
123 |
124 | `-h, --help`
125 | show this help message and exit
126 |
127 | `-host HOST, --host HOST`
128 | IP address
129 |
130 | `-p PORT, --port PORT`
131 | Port number
132 |
133 |
134 | After running the server, you can access the API documentation at [docs](http://localhost:8000/docs)
135 |
136 |
137 | ---
138 |
139 | ### Interactive mode
140 |
141 | Directly enter
142 | ```shell
143 | creator
144 | ```
145 |
146 | or
147 |
148 | ```shell
149 | creator [-i] [--interactive] [-q] [--quiet]
150 | ```
151 |
152 | - `-q, --quiet` Quiet mode: enter interactive mode without printing the logo and help message
153 |
--------------------------------------------------------------------------------
/docs/configurations.md:
--------------------------------------------------------------------------------
1 |
2 | # Configurations
3 |
4 | ```yaml
5 | LOCAL_SKILL_LIBRARY_PATH: .cache/open_creator/skill_library
6 | REMOTE_SKILL_LIBRARY_PATH: .cache/open_creator/remote
7 | LOCAL_SKILL_LIBRARY_VECTORD_PATH: .cache/open_creator/vectordb/
8 | PROMPT_CACHE_HISTORY_PATH: .cache/open_creator/prompt_cache/
9 | LOGGER_CACHE_PATH: .cache/open_creator/logs/
10 | SKILL_EXTRACT_AGENT_CACHE_PATH: .cache/open_creator/llm_cache
11 | OFFICIAL_SKILL_LIBRARY_PATH: timedomain/skill-library
12 | OFFICIAL_SKILL_LIBRARY_TEMPLATE_PATH: timedomain/skill-library-template
13 |
14 | BUILD_IN_SKILL_LIBRARY_DIR: skill_library/open-creator/
15 |
16 | # for AZURE, it is your_deployment_id
17 | # for ANTHROPIC, it is claude-2
18 | # for VertexAI, it is chat-bison
19 | # for huggingface, it is huggingface/WizardLM/WizardCoder-Python-34B-V1.0 model path
20 | # for ollama, it is like ollama/llama2
21 | # the default is openai/gpt-3.5
22 | MODEL_NAME: gpt-3.5-turbo-16k
23 | TEMPERATURE: 0 # only 0 can use llm_cache
24 |
25 | USE_AZURE: false
26 | RUN_HUMAN_CONFIRM: false
27 | USE_STREAM_CALLBACK: true
28 |
29 | ANTHROPIC_API_KEY: ""
30 |
31 | AZURE_API_KEY: ""
32 | AZURE_API_BASE: ""
33 | AZURE_API_VERSION: ""
34 |
35 | VERTEX_PROJECT: ""
36 | VERTEX_LOCATION: ""
37 |
38 | HUGGINGFACE_API_KEY: ""
39 | HUGGINGFACE_API_BASE: ""
40 | ```
--------------------------------------------------------------------------------
/docs/examples.md:
--------------------------------------------------------------------------------
1 | # Examples
2 |
3 | ```{toctree}
4 | ---
5 | maxdepth: 2
6 | caption: Contents:
7 | ---
8 | examples/02_skills_library
9 | ```
10 |
--------------------------------------------------------------------------------
/docs/examples/data/create_api.md:
--------------------------------------------------------------------------------
1 | ## Open-Creator API Documentation
2 |
3 | ### Function: `create`
4 | Generates a `CodeSkill` instance using different input sources.
5 |
6 | #### Parameters:
7 | - `request`: String detailing the skill functionality.
8 | - `messages` or `messages_json_path`: Messages as a list of dictionaries or a path to a JSON file containing messages.
9 | - `file_content` or `file_path`: String of file content or path to a code/API doc file.
10 | - `skill_path` or `skill_json_path`: Directory path with skill name as stem or file path with `skill.json` as stem.
11 | - `huggingface_repo_id`: Identifier for a Huggingface repository.
12 | - `huggingface_skill_path`: Path to the skill within the Huggingface repository.
13 |
14 | #### Returns:
15 | - `CodeSkill`: The created skill.
16 |
17 | #### Installation
18 | ```shell
19 | pip install -U open-creator
20 | ```
21 | open-creator: "^0.1.2"
22 |
23 | #### Usage:
24 | ```python
25 | from creator import create
26 | ```
27 |
28 | #### Notes:
29 | - Ensure to provide accurate and accessible file paths.
30 | - At least one parameter must be specified to generate a skill.
31 | - Parameters’ functionality does not overlap; specify the most relevant one for clarity.
32 | - Use absolute paths where possible to avoid relative path issues.
33 | - Ensure the repository ID and skill path are accurate and that you have the necessary access permissions to retrieve the skill from the repository.
34 |
--------------------------------------------------------------------------------
/docs/examples/data/open-creator2-5.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/timedomain-tech/open-creator/b035221a1099d5a6a10b98c4adbf5e8b08120e35/docs/examples/data/open-creator2-5.pdf
--------------------------------------------------------------------------------
/docs/index.md:
--------------------------------------------------------------------------------
1 | # Overview
2 |
3 | ---
4 |
5 | ## Introduction
6 |
7 | ◓ Open Creator
8 |
9 |
10 |
11 |
12 |
13 |
14 |
15 |
16 |
17 |
18 |
19 |
20 |
21 |
22 | Build your customized skill library
23 | An open-source LLM tool for extracting repeatable tasks from your conversations, and saving them into a customized skill library for retrieval.
24 |
25 |
26 |
27 |
28 |
29 | `open-creator` is an innovative package designed to extract skills from existing conversations or a requirement, save them, and retrieve them when required. It offers a seamless way to consolidate and archive refined versions of codes, turning them into readily usable skill sets, thereby enhancing the power of the [open-interpreter](https://github.com/KillianLucas/open-interpreter).
30 |
31 | ---
32 |
33 | ## Framework
34 |
35 | 
36 |
37 | ---
38 |
39 |
--------------------------------------------------------------------------------
/docs/installation.md:
--------------------------------------------------------------------------------
1 | ## Installation
2 |
3 | Install with pip:
4 |
5 | ```shell
6 | pip install -U open-creator
7 | ```
8 |
9 | or
10 |
11 | ```shell
12 | pip install git+https://github.com/timedomain-tech/open-creator.git
13 | ```
14 |
15 | Install with poetry:
16 |
17 |
18 |
19 | ```shell
20 | git clone https://github.com/timedomain-tech/open-creator.git
21 | cd open-creator
22 | poetry install
23 | ```
24 |
25 | ---
26 |
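27 | To verify the installation, try importing the package (the `create` entry point shown here is the one used throughout the docs):
28 |
29 | ```python
30 | from creator import create
31 | print(create)
32 | ```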
--------------------------------------------------------------------------------
/docs/pics/skill-library-hub.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/timedomain-tech/open-creator/b035221a1099d5a6a10b98c4adbf5e8b08120e35/docs/pics/skill-library-hub.png
--------------------------------------------------------------------------------
/docs/pics/skill-library-hub_home.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/timedomain-tech/open-creator/b035221a1099d5a6a10b98c4adbf5e8b08120e35/docs/pics/skill-library-hub_home.png
--------------------------------------------------------------------------------
/docs/skill-library-hub.md:
--------------------------------------------------------------------------------
1 |
2 |
3 | # Skill Library Hub
4 |
5 | You can find skills shared by other users at [skill-library-hub](https://huggingface.co/spaces/timedomain/skill-library-hub), or submit locally saved skills to share with the community.
6 |
7 | 
8 |
9 | ## Submit Skill
10 |
11 | 1. Switch to the **Submit here!** tab.
12 | 2. Enter the repo ID of your skill-library space, such as "timedomain/skill-library".
13 | 3. Enter a skill name and press the Enter key to confirm it. Repeat to add multiple skill names.
14 | 4. Click **Submit**.
15 |
16 | 
17 |
18 |
--------------------------------------------------------------------------------
/docs/tech_report/figures/creator_agents.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/timedomain-tech/open-creator/b035221a1099d5a6a10b98c4adbf5e8b08120e35/docs/tech_report/figures/creator_agents.pdf
--------------------------------------------------------------------------------
/docs/tech_report/figures/framework.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/timedomain-tech/open-creator/b035221a1099d5a6a10b98c4adbf5e8b08120e35/docs/tech_report/figures/framework.png
--------------------------------------------------------------------------------
/docs/tech_report/figures/framework.tex:
--------------------------------------------------------------------------------
1 | \begin{figure}[t]
2 | \centering
3 | \includegraphics[width=1\textwidth]{./figures/creator_agents.pdf}
4 | \caption{\textbf{The Overview of Open-Creator Framework}}
5 | \label{fig:creator_agents}
6 | \end{figure}
--------------------------------------------------------------------------------
/docs/tech_report/main.bbl:
--------------------------------------------------------------------------------
1 | \begin{thebibliography}{10}
2 |
3 | \bibitem{wei2022chain}
4 | Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed~Chi, Quoc~V. Le, Denny Zhou, et~al.
5 | \newblock Chain-of-thought prompting elicits reasoning in large language models.
6 | \newblock {\em Advances in Neural Information Processing Systems}, 35:24824--24837, 2022.
7 |
8 | \bibitem{xu2023rewoo}
9 | Binfeng Xu, Zhiyuan Peng, Bowen Lei, Subhabrata Mukherjee, Yuchen Liu, and Dongkuan Xu.
10 | \newblock Rewoo: Decoupling reasoning from observations for efficient augmented language models, 2023.
11 |
12 | \bibitem{wang2023selfconsistency}
13 | Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed~Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou.
14 | \newblock Self-consistency improves chain of thought reasoning in language models, 2023.
15 |
16 | \bibitem{yao2023tree}
17 | Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas~L. Griffiths, Yuan Cao, and Karthik Narasimhan.
18 | \newblock Tree of thoughts: Deliberate problem solving with large language models, 2023.
19 |
20 | \bibitem{schick2023toolformer}
21 | Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom.
22 | \newblock Toolformer: Language models can teach themselves to use tools, 2023.
23 |
24 | \bibitem{li2023apibank}
25 | Minghao Li, Feifan Song, Bowen Yu, Haiyang Yu, Zhoujun Li, Fei Huang, and Yongbin Li.
26 | \newblock Api-bank: A benchmark for tool-augmented llms, 2023.
27 |
28 | \bibitem{openinterpreter}
29 | KillianLucas.
30 | \newblock Open interpreter, 2023.
31 |
32 | \bibitem{skreta2023errors}
33 | Marta Skreta, Naruki Yoshikawa, Sebastian Arellano-Rubach, Zhi Ji, Lasse~Bjørn Kristensen, Kourosh Darvish, Alán Aspuru-Guzik, Florian Shkurti, and Animesh Garg.
34 | \newblock Errors are useful prompts: Instruction guided task programming with verifier-assisted iterative prompting.
35 | \newblock {\em arXiv preprint arXiv: Arxiv-2303.14100}, 2023.
36 |
37 | \bibitem{yao2022react}
38 | Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao.
39 | \newblock React: Synergizing reasoning and acting in language models.
40 | \newblock {\em arXiv preprint arXiv: Arxiv-2210.03629}, 2022.
41 |
42 | \bibitem{wang2023voyager}
43 | Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar.
44 | \newblock Voyager: An open-ended embodied agent with large language models, 2023.
45 |
46 | \bibitem{song2023llmplanner}
47 | Chan~Hee Song, Jiaman Wu, Clayton Washington, Brian~M. Sadler, Wei-Lun Chao, and Yu~Su.
48 | \newblock Llm-planner: Few-shot grounded planning for embodied agents with large language models, 2023.
49 |
50 | \bibitem{hong2023metagpt}
51 | Sirui Hong, Xiawu Zheng, Jonathan Chen, Yuheng Cheng, Jinlin Wang, Ceyao Zhang, Zili Wang, Steven Ka~Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, and Chenglin Wu.
52 | \newblock Metagpt: Meta programming for multi-agent collaborative framework, 2023.
53 |
54 | \bibitem{qian2023communicative}
55 | Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, and Maosong Sun.
56 | \newblock Communicative agents for software development, 2023.
57 |
58 | \bibitem{GPTEngineer}
59 | Anton Osika et~al.
60 | \newblock Gpt engineer, 2023.
61 |
62 | \bibitem{GPTeam}
63 | 101dotxyz.
64 | \newblock Gpteam: Collaborative ai agents, 2023.
65 |
66 | \bibitem{bairi2023codeplan}
67 | Ramakrishna Bairi, Atharv Sonwane, Aditya Kanade, Vageesh~D C, Arun Iyer, Suresh Parthasarathy, Sriram Rajamani, B.~Ashok, and Shashank Shet.
68 | \newblock Codeplan: Repository-level coding using llms and planning, 2023.
69 |
70 | \bibitem{wang2023survey}
71 | Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu~Chen, Yankai Lin, Wayne~Xin Zhao, Zhewei Wei, and Ji-Rong Wen.
72 | \newblock A survey on large language model based autonomous agents, 2023.
73 |
74 | \bibitem{qian2023creator}
75 | Cheng Qian, Chi Han, Yi~R. Fung, Yujia Qin, Zhiyuan Liu, and Heng Ji.
76 | \newblock Creator: Disentangling abstract and concrete reasonings of large language models through tool creation, 2023.
77 |
78 | \bibitem{GPTCache}
79 | zilliztech.
80 | \newblock Gptcache : A library for creating semantic cache for llm queries, 2023.
81 |
82 | \bibitem{Schmidhuber_2015}
83 | Jürgen Schmidhuber.
84 | \newblock Deep learning in neural networks: An overview.
85 | \newblock {\em Neural Networks}, 61:85--117, jan 2015.
86 |
87 | \bibitem{dong2023survey}
88 | Qingxiu Dong, Lei Li, Damai Dai, Ce~Zheng, Zhiyong Wu, Baobao Chang, Xu~Sun, Jingjing Xu, Lei Li, and Zhifang Sui.
89 | \newblock A survey on in-context learning, 2023.
90 |
91 | \bibitem{lewis2021retrievalaugmented}
92 | Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela.
93 | \newblock Retrieval-augmented generation for knowledge-intensive nlp tasks, 2021.
94 |
95 | \bibitem{li2022survey}
96 | Huayang Li, Yixuan Su, Deng Cai, Yan Wang, and Lemao Liu.
97 | \newblock A survey on retrieval-augmented text generation, 2022.
98 |
99 | \bibitem{mialon2023augmented}
100 | Grégoire Mialon, Roberto Dessì, Maria Lomeli, Christoforos Nalmpantis, Ram Pasunuru, Roberta Raileanu, Baptiste Rozière, Timo Schick, Jane Dwivedi-Yu, Asli Celikyilmaz, Edouard Grave, Yann LeCun, and Thomas Scialom.
101 | \newblock Augmented language models: a survey, 2023.
102 |
103 | \bibitem{langchain}
104 | langchain team.
105 | \newblock Hwchase17/langchain: building applications with llms through composability, 2023.
106 |
107 | \end{thebibliography}
108 |
--------------------------------------------------------------------------------
/docs/tech_report/main.tex:
--------------------------------------------------------------------------------
1 | \documentclass{article}
2 |
3 |
4 | \usepackage{arxiv}
5 |
6 | \usepackage[utf8]{inputenc} % allow utf-8 input
7 | \usepackage[T1]{fontenc} % use 8-bit T1 fonts
8 | \usepackage{hyperref} % hyperlinks
9 | \usepackage{url} % simple URL typesetting
10 | \usepackage{booktabs} % professional-quality tables
11 | \usepackage{amsfonts} % blackboard math symbols
12 | \usepackage{nicefrac} % compact symbols for 1/2, etc.
13 | \usepackage{microtype} % microtypography
14 | \usepackage{lipsum}
15 | \usepackage{fancyhdr} % header
16 | \usepackage{graphicx} % graphics
17 | \graphicspath{{figures/}} % organize your images and other figures under figures/ folder
18 | \usepackage{tikz}
19 | \usepackage{tabularx}
20 | \usepackage{array}
21 | % \usepackage{natbib}
22 | \PassOptionsToPackage{numbers, compress}{natbib}
23 |
24 |
25 | \newcommand{\halfcircle}{
26 | \begin{tikzpicture}[baseline=-0.75ex]
27 | \fill[black] (0,0) -- (0.25,0) arc (0:180:0.25) -- cycle; % Fill the upper half
28 | \draw (0,0) -- (-0.25,0) arc (0:180:-0.25); % Draw the border for the lower half
29 | \end{tikzpicture}
30 | }
31 |
32 | %Header
33 | \pagestyle{fancy}
34 | \thispagestyle{empty}
35 | \rhead{ \textit{ }}
36 |
37 | %% Title
38 | % \halfcircle{}
39 | \title{Open-Creator: Bridging Code Interpreter and Skill Library}
40 |
41 | \author{
42 | Junmin Gong\(^{*}\) \\
43 | \texttt{gongjunmin@timedomain.ai} \\
44 | Timedomain \\
45 | \and
46 | Sen Wang \(\dagger\) \\
47 | \texttt{sayo@timedomain.ai} \\
48 | Timedomain \\
49 | \and
50 | Wenxiao Zhao \(\dagger\) \\
51 | \texttt{sean.z@timedomain.ai} \\
52 | Timedomain \\
53 | \and
54 | Jing Guo\(\ddagger\) \\
55 | \texttt{joe.g@timedomain.ai} \\
56 | Timedomain \\
57 | }
58 |
59 |
60 | \begin{document}
61 | \textcolor{red}{\textbf{Note:} The experimental section is currently in progress and will be updated in the subsequent versions.}
62 | \maketitle
63 | \input{sections/0-abstract}
64 | \input{sections/1-introduction}
65 | \input{tables/learning_approaches}
66 | \input{sections/2-related_work}
67 | \input{figures/framework}
68 | \input{sections/3-method}
69 | % \input{sections/4-experiments}
70 | % \input{sections/5-discussions}
71 | % \input{sections/6-conclusion}
72 | \input{sections/7-acknowledgement}
73 | \newpage
74 | %Bibliography
75 | \bibliographystyle{unsrt}
76 | \bibliography{references}
77 |
78 |
79 | \end{document}
80 |
--------------------------------------------------------------------------------
/docs/tech_report/open-creator.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/timedomain-tech/open-creator/b035221a1099d5a6a10b98c4adbf5e8b08120e35/docs/tech_report/open-creator.pdf
--------------------------------------------------------------------------------
/docs/tech_report/sections/0-abstract.tex:
--------------------------------------------------------------------------------
1 | \begin{abstract}
2 | AI agents enhanced with tools, particularly code interpreters, hold promising prospects for broad and deep applications. However, existing systems often require reinterpreting problems and generating new codes for similar tasks, rather than reusing previously written and validated functions, leading to inefficiencies in token usage and lack of generalization. In response to these challenges, we introduce \textit{open-creator}, a novel AI agents framework bridging code interpreters and skill libraries. Open-creator is designed to standardize various inputs (including, but not limited to, dialogic problem-solving experiences, code files, and API documentation) into a uniform skill object format. This framework supports local saving, searching, and cloud uploading by users. Adopting a modular strategy, open-creator allows developers and researchers to create, share, and reuse skills without concerns over compatibility or version control. Furthermore, it offers flexibility in modifying, assembling, and disassembling created skills, which is crucial as skills may need updates over time and with changing environments. This mechanism allows AI agents to continually optimize skills based on feedback and new data. Our approach paves the way for more efficient and adaptable AI agent functionalities, contributing to the field's ongoing development. The open-creator code is publicly accessible at: \href{https://github.com/timedomain-tech/open-creator}{https://github.com/timedomain-tech/open-creator}.
3 | \end{abstract}
4 |
--------------------------------------------------------------------------------
/docs/tech_report/sections/1-introduction.tex:
--------------------------------------------------------------------------------
1 | \section{Introduction}
2 | AI agents engage in complex reasoning by integrating planning ~\cite{wei2022chain, xu2023rewoo, wang2023selfconsistency, yao2023tree}, decision-making, and the utilization of appropriate tools or APIs ~\cite{schick2023toolformer, li2023apibank}. However, these tools are typically predetermined and designed by humans, and the number of available tools is often limited due to the constraints on the input context length of Large Language Models (LLMs). To enhance the versatility of AI agents, a viable approach is to amalgamate the code-generation capabilities of LLMs with code execution functionalities. This integration allows for the flexible writing and execution of code to address specific user needs, embodying the role of Code Interpreters ~\cite{openinterpreter}.
3 |
4 | Given that LLMs occasionally generate erroneous code, leading to low robustness and an inability to meet user requirements, recent research has focused on enabling LLMs to auto-correct code through environmental feedback ~\cite{skreta2023errors, yao2022react, wang2023voyager, song2023llmplanner}. There is also an emphasis on developing sophisticated projects through rational task decomposition and persistent memory. This focus has given rise to a plethora of AI agent frameworks, including MetaGPT ~\cite{hong2023metagpt}, ChatDev ~\cite{qian2023communicative}, GPT-Engineer ~\cite{GPTEngineer}, GPTeam ~\cite{GPTeam}, and CodePlan ~\cite{bairi2023codeplan}. These studies explore collaborative mechanisms among different roles, the introduction of improved environments, enhanced feedback from agents, optimized task decomposition, and various engineering tricks, collectively contributing to the flourishing ecosystem of AI agents in computer science and software engineering. A comprehensive literature review in this area has been conducted by Wang et al.~\cite{wang2023survey}.
5 |
--------------------------------------------------------------------------------
/docs/tech_report/sections/3-method.tex:
--------------------------------------------------------------------------------
1 | \section{Method}
2 |
3 |
4 | Open-Creator is an integrated package, incorporating all functionalities of CREATOR agents along with additional features such as saving to local or remote skill libraries and performing RAG searches from the skill library. Open-Creator is composed of three main components: Creator API, Collaborative Agents, and Skill Library Hub.
5 |
6 | \subsection{Creator API}
7 |
8 | The Creator API is a pivotal component of Open-Creator, serving as an essential interface for both developers and researchers focused on AI agents and the AI agents themselves. Designed with an emphasis on simplicity and user-friendliness, the API primarily offers three critical functionalities:
9 | \begin{enumerate}
10 | \item \textit{creator.create}: This function stands out for its versatility, enabling the generation of unified skill objects from a wide range of sources. Users can derive skill objects from dialogues between users and agents that have code interpreter tools, representing the problem-solving processes. Also, they can craft these objects directly from sources such as code files, API documentation, specific problem requirements, or even by utilizing existing skills.
11 |
12 | \item \textit{creator.save}: Once the skills are formulated, they require a reliable storage solution. This function offers the flexibility for users to save their skill objects in diverse formats. Be it locally or on cloud platforms like the Hugging Face Hub, users have the freedom to choose their preferred storage method.
13 |
14 | \item \textit{creator.search}: The retrieval process is streamlined with this function. It begins by transforming the structured skills into vectors. Following this, a semantic search mechanism is employed to ensure users can retrieve the top \( k \) skills, ideally suited for tackling new problems.
15 | \end{enumerate}
16 |
17 |
18 | \subsection{Collaborative Agents}
19 |
20 | The Collaborative Agents module comprises five primary agents:
21 |
22 | \begin{enumerate}
23 | \item \textbf{Extractor Agent:} Responsible for converting existing problem-solving experiences (typically dialogues with a code interpreter), textual content, and documents into a unified skill object. The skill object encapsulates skill name, description, use cases, input-output parameters, associated dependencies, and code language. The code within the historical records is modularized and reorganized.
24 |
25 | \item \textbf{Interpreter Agent:} It leverages the open-source project `open-interpreter` ~\cite{openinterpreter} for its prompt templates and code-execution settings. The agent generates dialogue histories when no known problem-solving procedure exists and, based on execution results and user feedback, preliminarily verifies the accuracy of the results. The prompt templates of `open-interpreter` utilize chains of thought and the ReWOO framework: user queries are first addressed through incremental planning and task decomposition, followed by execution and a retrospective outline of the next steps. The ReWOO ~\cite{xu2023rewoo} framework decouples the language model's inference process from external tool invocations, significantly reducing redundant prompts and computational requirements.
26 |
27 | \item \textbf{Tester Agent:} A variant of the interpreter agent, its primary role differs as it generates test cases and reports for stored skill objects. This evaluates their robustness and generalization performance, subsequently providing feedback to the interpreter for iterations.
28 |
29 | \item \textbf{Refactor Agent:} This agent facilitates modifications based on user demands. Operator overloading elegantly represents skill amalgamation, fine-tuning, and the modularization of complex skills: instead of repeatedly restructuring extensive skill inputs, a mathematical operation-based notation simplifies the interface. Skill objects can be combined with the + operator, and the resulting skill object is paired with a natural-language request through the > or < operators. For instance, "skillA + skillB > user\_request" represents merging two skills according to the user's demand, "skillA < user\_request" decomposes a complex skill into modules based on the user's requirements, and for skill fine-tuning "skillA > user\_request" suffices.
30 |
31 | \item \textbf{Creator Agent:} This agent orchestrates the usage of the Creator API interfaces and coordinates the operations of the above four agents in response to user queries and intents. It uses the search interface to retrieve relevant skills for problem-solving. If the retrieved skills are inadequate, it employs the create interface to devise a new solution, followed by the save interface to persist the new skill. It inherently supports direct operations on API interfaces and dispatches responses across various agents. The agent also employs overloaded operators for skill iterative updates and refactoring.
32 | \end{enumerate}
33 |
34 | The agents are built on the langchain ~\cite{langchain} framework and are optimized with an LLM cache. They are being progressively extended to support diverse open-source and proprietary API-based LLMs. Figure ~\ref{fig:creator_agents} depicts their interrelationships.
35 |
36 | \subsection{Skill Library Hub}
37 |
38 | The Skill Library focuses on the persistent storage of skills. It employs a directory structure where each skill is stored in its named subfolder. Additionally, the advantages of the Hugging Face Hub community are harnessed to allow users to upload their private skill libraries to the cloud.
39 |
40 | After users craft a skill, they have the option to save it on the cloud by providing a Hugging Face `repo\_id`. If the user hasn't established a repository, one is automatically forked from our pre-defined template. Following the fork, the user's skill is uploaded. To access skills from others' libraries, users need only supply the public repo\_id and the designated skill directory name for downloading locally. After downloading, the search index is auto-updated to include the new skill. With the community's growth around the skill library, we've also introduced cloud-based searching, making it easier to tap into collective community insights.
41 |
--------------------------------------------------------------------------------
/docs/tech_report/sections/7-acknowledgement.tex:
--------------------------------------------------------------------------------
1 | \section{Acknowledgements}
2 |
3 | We extend our heartfelt appreciation to Killian Lucas, the author of open-interpreter. We are also immensely grateful to the Discord community members, including warjiang, leonidas, AJ Ram, Papillon, minjunes, localman, jofus, Satake, oliveR, piotr, Grindelwald, Nico, MyRealNameIsTim, Pablo Vazquez, and jbexta, for their extensive discussions on the skill library and invaluable feedback. Their collective wisdom greatly enriched our work.
4 |
--------------------------------------------------------------------------------
/docs/tech_report/tables/learning_approaches.tex:
--------------------------------------------------------------------------------
1 | \begin{table}
2 | \centering
3 | \begin{tabularx}{\textwidth}{|c|>{\centering\arraybackslash}m{3cm}|>{\centering\arraybackslash}m{4cm}|>{\centering\arraybackslash}m{4cm}|c|}
4 | \hline
5 | & \textbf{Learning} & \textbf{Outcome} & \textbf{Generalization} & \textbf{Interpretability} \\ \hline
6 | ANN & Parameter Optimization & Weights of NN & Mapping Function & Low \\ \hline
7 | LLM & Instruction Fine-tuning & Instruction-based LLM & In-context Learning Prompt & Medium \\ \hline
8 | Agents & Tool Creation & Skill Library & RAG & High \\ \hline
9 | \end{tabularx}
10 | \vspace{2mm} % Adjust the space as you need
11 | \caption{Comparison of Different Learning Approaches}
12 | \label{tab:learning_approaches}
13 | \end{table}
14 |
--------------------------------------------------------------------------------
/mkdocs.yml:
--------------------------------------------------------------------------------
1 | site_name: Open-Creator
2 | site_url: https://open-creator.github.io
3 | nav:
4 | - Getting Started:
5 | - Overview: index.md
6 | - Installation: installation.md
7 | - Examples:
8 | - 01 Skills Create: examples/01_skills_create.ipynb
9 | - 02 Skills Library: examples/02_skills_library.ipynb
10 | - 03 Skills Search: examples/03_skills_search.ipynb
11 | - 04 Skills Run: examples/04_skills_run.ipynb
12 | - 05 Skills Test: examples/05_skills_test.ipynb
13 | - 06 Skills Refactor: examples/06_skills_refactor.ipynb
14 | - 07 Skills Auto Optimize: examples/07_skills_auto_optimize.ipynb
15 | - 08 Creator Agent: examples/08_creator_agent.ipynb
16 | - API:
17 | - API Docs: api_doc.md
18 | - Commands: commands.md
19 | - Configurations: configurations.md
20 |
21 | theme: readthedocs
22 |
23 | plugins:
24 | - search
25 | - mkdocs-jupyter:
26 | kernel_name: python3
27 | ignore_h1_titles: true
28 | include_requirejs: true
29 |
--------------------------------------------------------------------------------
/pyproject.toml:
--------------------------------------------------------------------------------
1 | [tool.poetry]
2 | name = "open-creator"
3 | packages = [
4 | {include = "creator"},
5 | ]
6 | version = "0.1.2"
7 | description = "Build your customized skill library"
8 | authors = ["JunminGONG "]
9 | readme = "README.md"
10 | include = ["creator/config.yaml"]
11 |
12 | [tool.poetry.dependencies]
13 | python = "^3.10"
14 | rich = "^13.5.2"
15 | langchain = ">=0.0.317"
16 | huggingface_hub = "^0.17.2"
17 | loguru = "^0.7.2"
18 | pydantic = "^2.0.3"
19 | python-dotenv = "^1.0.0"
20 | openai = "^0.28.1"
21 | tiktoken = "^0.5.1"
22 | prompt_toolkit = "^3.0.39"
23 | inquirer = "^3.1.3"
24 | pyyaml = "^6.0.1"
25 | appdirs = "^1.4.4"
26 | urllib3 = "^2.0.6"
27 | fastapi = "^0.103.1"
28 | uvicorn = "^0.23.2"
29 | streamlit = "^1.27.2"
30 |
31 |
32 |
33 | [tool.poetry.dependencies.pyreadline3]
34 | version = "^3.4.1"
35 | markers = "sys_platform == 'win32'"
36 |
37 | [tool.poetry.group.dev.dependencies]
38 | pytest = "^7.4.0"
39 |
40 | [build-system]
41 | requires = ["poetry-core>=1.0.0"]
42 | build-backend = "poetry.core.masonry.api"
43 |
44 | [tool.poetry.scripts]
45 | creator = "creator:cmd_client"
--------------------------------------------------------------------------------