├── .gitignore
├── LICENSE
├── MachinaScript
│   ├── MACHINA1
│   │   ├── MachinaBody
│   │   │   ├── machinascript_body.ino
│   │   │   └── test_serial.py
│   │   └── MachinaBrain
│   │       ├── brain_groq.py
│   │       ├── brain_huggingchat.py
│   │       ├── brain_local_llms.py
│   │       ├── brain_openai.py
│   │       ├── machinascript_language.txt
│   │       ├── machinascript_language_large.txt
│   │       └── machinascript_project_specs.txt
│   ├── MACHINA2A_Autogen
│   │   ├── MachinaBody
│   │   │   └── machinascript_body.ino
│   │   └── MachinaBrain
│   │       ├── machinagen_brain.py
│   │       ├── machinascript_language.txt
│   │       └── machinascript_project_specs.txt
│   ├── MACHINA2B_Groq
│   │   ├── MachinaBody
│   │   │   ├── machinascript_body.ino
│   │   │   └── test_serial.py
│   │   ├── MachinaBrain
│   │   │   ├── brain_groq_gpt4v.py
│   │   │   ├── brain_groq_llava.py
│   │   │   ├── msc.txt
│   │   │   └── prompts
│   │   │       ├── machinascript_language.txt
│   │   │       ├── machinascript_language_large.txt
│   │   │       └── machinascript_project_specs.txt
│   │   └── README.MD
│   ├── MACHINA3
│   │   ├── MachinaBody
│   │   │   ├── machinascript_body.ino
│   │   │   └── test_serial.py
│   │   ├── MachinaBrain
│   │   │   ├── brain.py
│   │   │   └── prompts
│   │   │       ├── machinascript_language_large.txt
│   │   │       └── machinascript_project_specs.txt
│   │   └── README.MD
│   └── README.MD
├── README.md
└── requirements.txt
/.gitignore: -------------------------------------------------------------------------------- 1 | MachinaScript/MACHINA1/MachinaBrain/cookies_snapshot/ 2 | MachinaScript/MACHINA2_alpha_release/MachinaBrain/cookies_snapshot/ -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Apache License 2 | Version 2.0, January 2004 3 | http://www.apache.org/licenses/ 4 | 5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 6 | 7 | 1. Definitions. 8 | 9 | "License" shall mean the terms and conditions for use, reproduction, 10 | and distribution as defined by Sections 1 through 9 of this document.
11 | 12 | "Licensor" shall mean the copyright owner or entity authorized by 13 | the copyright owner that is granting the License. 14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity. For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 
47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. 
Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. 
You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 
122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. 
In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 179 | 180 | To apply the Apache License to your work, attach the following 181 | boilerplate notice, with the fields enclosed by brackets "[]" 182 | replaced with your own identifying information. (Don't include 183 | the brackets!) The text should be enclosed in the appropriate 184 | comment syntax for the file format. 
We also recommend that a 185 | file or class name and description of purpose be included on the 186 | same "printed page" as the copyright notice for easier 187 | identification within third-party archives. 188 | 189 | Copyright [2024] [babycommando] 190 | 191 | Licensed under the Apache License, Version 2.0 (the "License"); 192 | you may not use this file except in compliance with the License. 193 | You may obtain a copy of the License at 194 | 195 | http://www.apache.org/licenses/LICENSE-2.0 196 | 197 | Unless required by applicable law or agreed to in writing, software 198 | distributed under the License is distributed on an "AS IS" BASIS, 199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 200 | See the License for the specific language governing permissions and 201 | limitations under the License. 202 | -------------------------------------------------------------------------------- /MachinaScript/MACHINA1/MachinaBody/machinascript_body.ino: -------------------------------------------------------------------------------- 1 | // _ _ _ () 2 | // ' ) ) ) / /\ _/_ 3 | // / / / __. _. /_ o ____ __. / ) _. __ o _ / 4 | // / ' (_(_/|_(__/ /_<_/ / <_(_/|_/__/__(__/ (_<_/_)_<__ 5 | // / 6 | // ' 7 | //+----------------------------------------------------------------+ 8 | //|MachinaScript for Robots v0.1.2024 | 9 | //| | 10 | //|..:: BODY MODULE ::.. | 11 | //| | 12 | //|This is the body module, intended to be executed on a | 13 | //|microcontroller. You may need to customize this code for | 14 | //|your own robotic mount. | 15 | //| | 16 | //|Keep in mind this is just a very simple pipeline integration | 17 | //|of many concepts together: | 18 | //| | 19 | //|- Receive serial data, parse it into a set of ordered | 20 | //|actions and movements, and execute them. | 21 | //| | 22 | //|- This specific code moves a set of servo motors and | 23 | //|blinks LEDs when serial messages come in.
| 24 | //|You may want to add sensors, other motors, change ports | 25 | //|and more as you see fit. | 26 | //| | 27 | //|You are free to use and modify this piece of code, | 28 | //|as well as to come up with something totally different. | 29 | //| | 30 | //|This is an initial proof-of-concept, more examples coming soon. | 31 | //| | 32 | //|Interested in contributing? Join the community! | 33 | //|See the GitHub repo for more info. | 34 | //+----------------------------------------------------------------+ 35 | 36 | // tip: if you have no idea what it all means, try pasting the contents 37 | // of this file into GPT or any other LLM and ask for a summary. 38 | // Then you can start making your own modifications and modules precisely. 39 | 40 | #include <Servo.h> 41 | 42 | Servo servoA; // Servo for motor A connected to pin 9 43 | Servo servoB; // Servo for motor B connected to pin 8 44 | 45 | // Target positions for servo A and B 46 | int targetPositionA = 0, targetPositionB = 0; 47 | // Speeds for servo A and B movement in milliseconds 48 | int speedA = 0, speedB = 0; 49 | // Timestamps for the last movement update for servo A and B 50 | unsigned long lastUpdateA = 0, lastUpdateB = 0; 51 | // Flags to track if servo A and B are currently moving 52 | bool movingA = false, movingB = false; 53 | 54 | void setup() { 55 | servoA.attach(9); // Attach servo motor A to pin 9 56 | servoB.attach(8); // Attach servo motor B to pin 8 57 | Serial.begin(9600); // Initialize serial communication at 9600 baud rate 58 | } 59 | 60 | void loop() { 61 | static String receivedData = ""; // Buffer to accumulate received serial data 62 | while (Serial.available() > 0) { // Check if data is available to read 63 | char inChar = (char)Serial.read(); // Read the incoming character 64 | if (inChar == '\n') { // Check if the character signifies end of command 65 | parseCommands(receivedData); // Parse and execute the received commands 66 | receivedData = ""; // Reset buffer for next command 67 | } else { 68 |
receivedData += inChar; // Accumulate the incoming data 69 | } 70 | } 71 | 72 | // Continuously update servo positions based on the current commands 73 | updateServo(servoA, targetPositionA, speedA, lastUpdateA, movingA); 74 | updateServo(servoB, targetPositionB, speedB, lastUpdateB, movingB); 75 | } 76 | 77 | void parseCommands(const String& commands) { 78 | // Expected command format: "A:position,speed;B:position,speed" 79 | // Speed is passed directly in milliseconds 80 | sscanf(commands.c_str(), "A:%d,%d;B:%d,%d", &targetPositionA, &speedA, &targetPositionB, &speedB); 81 | 82 | // Mark both servos as moving and record the start time of movement 83 | lastUpdateA = millis(); 84 | lastUpdateB = millis(); 85 | movingA = true; 86 | movingB = true; 87 | } 88 | 89 | void updateServo(Servo& servo, int targetPosition, int speed, unsigned long& lastUpdate, bool& moving) { 90 | if (moving) { // Check if the servo is supposed to be moving 91 | unsigned long currentTime = millis(); // Get current time 92 | // Update the servo position if the specified speed interval has elapsed 93 | if (currentTime - lastUpdate >= speed) { 94 | int currentPosition = servo.read(); // Read current position 95 | // Move servo towards target position 96 | if (currentPosition < targetPosition) { 97 | servo.write(++currentPosition); 98 | } else if (currentPosition > targetPosition) { 99 | servo.write(--currentPosition); 100 | } else { 101 | moving = false; // Stop moving if target position is reached 102 | } 103 | lastUpdate = currentTime; // Update the last movement time 104 | } 105 | } 106 | } 107 | -------------------------------------------------------------------------------- /MachinaScript/MACHINA1/MachinaBody/test_serial.py: -------------------------------------------------------------------------------- 1 | import serial 2 | import time 3 | 4 | # Replace 'COM3' with the correct port for your Arduino 5 | arduino_port = "COM3" # change the port of your arduino 6 | # arduino_port = 
"/dev/ttyACM0" # for Linux/Mac 7 | baud_rate = 9600 8 | 9 | # Initialize serial connection to the Arduino 10 | arduino_serial = serial.Serial(arduino_port, baud_rate, timeout=1) 11 | 12 | def send_command(command): 13 | # Append the newline character to the command 14 | full_command = command + "\n" 15 | print(f"Sending command: {full_command}") 16 | arduino_serial.write(full_command.encode()) 17 | time.sleep(2) # Wait for the Arduino to process the command 18 | 19 | def main(): 20 | try: 21 | # List of commands to send 22 | commands = [ 23 | "A:45,10;B:0,10;", 24 | "A:0,10;B:45,10;", 25 | "A:90,10;B:180,20;" 26 | ] 27 | 28 | for command in commands: 29 | send_command(command) 30 | 31 | print("Commands sent.") 32 | finally: 33 | # Close the serial connection 34 | arduino_serial.close() 35 | print("Serial connection closed.") 36 | 37 | if __name__ == "__main__": 38 | main() 39 | -------------------------------------------------------------------------------- /MachinaScript/MACHINA1/MachinaBrain/brain_groq.py: -------------------------------------------------------------------------------- 1 | # _ _ _ () 2 | # ' ) ) ) / /\ _/_ 3 | # / / / __. _. /_ o ____ __. / ) _. __ o _ / 4 | # / ' (_(_/|_(__/ /_<_/ / <_(_/|_/__/__(__/ (_<_/_)_<__ 5 | # / 6 | # ' 7 | #+------------------------------------------------------------------+ 8 | #|MachinaScript for Robots v0.1.2024 | 9 | #| | 10 | #|..:: BRAIN MODULE ::.. | 11 | #| | 12 | #|This is the brain module, intended to be executed on a computer. | 13 | #|You may need to customize this code for your own robotic mount. 
| 14 | #| | 15 | #|Keep in mind this is just a very simple pipeline integration of | 16 | #|many concepts together: | 17 | #| | 18 | #|- Wake-up word, followed by a voice command input from the user; | 19 | #| | 20 | #|- Groq implementation of the chat completions API, | 21 | #|teaching the system prompt to use the MachinaScript JSON-based | 22 | #|language; | 23 | #| | 24 | #|- A MachinaScript parser that translates JSON into serial | 25 | #|commands for the Arduino; | 26 | #| | 27 | #|- A map for skills and motors you need to customize | 28 | #|according to your project; | 29 | #| | 30 | #|You are free to use and modify this piece of code, | 31 | #|as well as to come up with something totally different. | 32 | #| | 33 | #|This is an initial proof-of-concept, more examples coming soon. | 34 | #| | 35 | #|Interested in contributing? Join the community! | 36 | #|See the GitHub repo for more info. | 37 | #+------------------------------------------------------------------+ 38 | 39 | # tip: if you have no idea what it all means, try pasting the contents 40 | # of this file into GPT-4 or any other LLM and ask for a summary. 41 | # Then you can start making your own modifications and modules precisely.
42 | 43 | import json 44 | import serial 45 | import time 46 | import speech_recognition as sr 47 | import os 48 | 49 | from groq import Groq 50 | 51 | # Define the serial connection to Arduino 52 | arduino_serial = serial.Serial('COM3', 9600, timeout=1) # change 'COM3' to your Arduino port 53 | # arduino_serial = serial.Serial('/dev/ttyACM0', 9600, timeout=1) # for Linux/Mac 54 | 55 | # Mapping of motor names to their corresponding serial command channels 56 | motor_mapping = { 57 | "motor_neck_vertical": "A", 58 | "motor_neck_horizontal": "B", 59 | # Define additional motors and their command channels here 60 | } 61 | 62 | # Initialize the speech recognizer 63 | recognizer = sr.Recognizer() 64 | 65 | def listen_for_command(): 66 | """Listens for a specific wake-up word and records the next spoken command.""" 67 | with sr.Microphone() as source: 68 | print("Listening for the wake-up word 'hello robot' followed by your command.") 69 | audio = recognizer.listen(source) 70 | try: 71 | text = recognizer.recognize_google(audio).lower() 72 | if "hello robot" in text: 73 | print("Wake-up word heard, what is your command?") 74 | audio = recognizer.listen(source) 75 | command_text = recognizer.recognize_google(audio) 76 | print(f"Command received: {command_text}") 77 | return command_text 78 | except sr.UnknownValueError: 79 | print("Could not understand audio. Please try again.") 80 | except sr.RequestError as e: 81 | print(f"Could not request results; {e}") 82 | return None 83 | 84 | def get_machina_script(command): 85 | """Queries the Groq API with the spoken command to generate a MachinaScript.""" 86 | system_message = read_system_prompt() 87 | 88 | print("Compiling kinetic sequences...") 89 | client = Groq(api_key=os.getenv('GROQ_API_KEY')) 90 | 91 | chat_completion = client.chat.completions.create( 92 | messages=[ 93 | system_message, 94 | { 95 | "role": "user", 96 | "content": "input: " + command + " note: output machinascript code only.
if you need to say anything else, use the *say* skill.", 97 | } 98 | ], 99 | # Use a Groq-supported model. 100 | # Full list at https://console.groq.com/docs/models 101 | model="llama3-70b-8192", 102 | ) 103 | print(chat_completion.choices[0].message.content) 104 | return chat_completion.choices[0].message.content 105 | 106 | 107 | def read_system_prompt(): 108 | """Reads instructions and project specifications from two files and prepares them for the LLM.""" 109 | # Initialize an empty string to hold the combined content 110 | combined_content = "" 111 | 112 | # Read the first file (MachinaScript language instructions) 113 | with open('machinascript_language_large.txt', 'r') as file: 114 | combined_content += file.read() + "\n\n" # Append file content with a newline for separation 115 | 116 | # Read the second file (Project specifications template) 117 | # Note: edit this file with your project specifications. 118 | with open('machinascript_project_specs.txt', 'r') as file: 119 | combined_content += file.read() # Append second file content 120 | 121 | # Return the combined content in the expected format for the LLM 122 | return {"role": "system", "content": combined_content} 123 | 124 | def execute_machina_script(script): 125 | """Parses the MachinaScript and executes the actions by sending commands to Arduino.""" 126 | actions = json.loads(script)["Machina_Actions"] 127 | for action_key, action in actions.items(): 128 | # Check for 'movements' key and that it is not empty before execution 129 | if "movements" in action and action["movements"]: 130 | execute_movements(action["movements"]) 131 | # Check for 'useSkills' key and that it is not empty before execution 132 | if "useSkills" in action and action["useSkills"]: 133 | execute_skills(action["useSkills"]) 134 | 135 | def execute_movements(movements): 136 | """Generates and sends the combined movement commands to the Arduino.""" 137 | # Initialize command strings for both motors 138 | commands_for_A = []
139 | commands_for_B = [] 140 | 141 | # Translate speed from 'medium', 'slow', 'fast' to milliseconds or direct values 142 | speed_translation = {"slow": 30, "medium": 10, "fast": 4} 143 | 144 | for movement_key, movement in movements.items(): 145 | if "motor_neck_vertical" in movement: 146 | # Add command for motor A 147 | speed_for_A = speed_translation[movement["speed"]] # Translate speed 148 | commands_for_A.append(f"A:{movement['motor_neck_vertical']},{speed_for_A}") 149 | if "motor_neck_horizontal" in movement: 150 | # Add command for motor B 151 | speed_for_B = speed_translation[movement["speed"]] # Translate speed 152 | commands_for_B.append(f"B:{movement['motor_neck_horizontal']},{speed_for_B}") 153 | 154 | # Combine commands for A and B 155 | combined_commands = ";".join(commands_for_A + commands_for_B) + ";" + "\n" 156 | 157 | # Send combined commands to Arduino 158 | send_to_arduino(combined_commands) 159 | time.sleep(1) # Adjust as needed for movement duration 160 | 161 | def execute_skills(skills_dict): 162 | """Executes the defined skills.""" 163 | for skill_key, skill_info in skills_dict.items(): 164 | if skill_key == "photograph": 165 | take_picture() 166 | elif skill_key == "blink_led": 167 | # Example skill, implementation would be similar to execute_movements 168 | print("Blinking LED (skill not implemented).") 169 | 170 | def send_to_arduino(command): 171 | """Sends a command string to the Arduino via serial.""" 172 | print(f"Sending to Arduino: {command}") 173 | arduino_serial.write(command.encode()) 174 | 175 | def take_picture(): 176 | """Simulates taking a picture.""" 177 | print("Taking a picture with the webcam...") 178 | 179 | # Main function to listen for commands and process them 180 | def main(): 181 | while True: 182 | command = listen_for_command() 183 | if command: 184 | script = get_machina_script(command) 185 | execute_machina_script(script) 186 | 187 | if __name__ == "__main__": 188 | main() 189 | 
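The JSON contract between get_machina_script and execute_machina_script above can be exercised offline with a small sketch (no serial port, microphone, or API key needed). The action and movement field names below are inferred from the parser code in this file; the authoritative schema lives in machinascript_language_large.txt, and the helper name script_to_serial is illustrative, not part of the project:

```python
import json

# Illustrative MachinaScript output, shaped the way execute_machina_script
# expects it: a top-level "Machina_Actions" object whose actions may carry
# "movements" (motor name -> angle, plus a "speed" label) and "useSkills".
script = json.dumps({
    "Machina_Actions": {
        "action_1": {
            "movements": {
                "movement_1": {"motor_neck_vertical": 45, "speed": "medium"},
                "movement_2": {"motor_neck_horizontal": 120, "speed": "fast"}
            },
            "useSkills": {"photograph": {}}
        }
    }
})

# Same speed table as execute_movements: label -> update interval in ms.
speed_translation = {"slow": 30, "medium": 10, "fast": 4}

def script_to_serial(script):
    """Re-implements the movement translation from execute_movements,
    returning the serial string instead of writing it to the Arduino."""
    commands_a, commands_b = [], []
    for action in json.loads(script)["Machina_Actions"].values():
        for movement in action.get("movements", {}).values():
            ms = speed_translation[movement["speed"]]
            if "motor_neck_vertical" in movement:
                commands_a.append(f"A:{movement['motor_neck_vertical']},{ms}")
            if "motor_neck_horizontal" in movement:
                commands_b.append(f"B:{movement['motor_neck_horizontal']},{ms}")
    return ";".join(commands_a + commands_b) + ";\n"

print(script_to_serial(script))  # A:45,10;B:120,4;
```

The resulting string matches the "A:position,speed;B:position,speed" format that parseCommands in machinascript_body.ino reads with sscanf, which is a quick way to sanity-check a model's output before wiring up the real robot.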
-------------------------------------------------------------------------------- /MachinaScript/MACHINA1/MachinaBrain/brain_huggingchat.py: -------------------------------------------------------------------------------- 1 | # _ _ _ () 2 | # ' ) ) ) / /\ _/_ 3 | # / / / __. _. /_ o ____ __. / ) _. __ o _ / 4 | # / ' (_(_/|_(__/ /_<_/ / <_(_/|_/__/__(__/ (_<_/_)_<__ 5 | # / 6 | # ' 7 | #+------------------------------------------------------------------+ 8 | #|MachinaScript for Robots v0.1.2024 | 9 | #| | 10 | #|..:: BRAIN MODULE ::.. | 11 | #| | 12 | #|This is the brain module, intended to be executed on a computer. | 13 | #|You may need to customize this code for your own robotic mount. | 14 | #| | 15 | #|Keep in mind this is just a very simple pipeline integration of | 16 | #|many concepts together: | 17 | #| | 18 | #|- Wake up word, followed by a voice command input from the user; | 19 | #| | 20 | #|- Integration with HuggingChat (from hugging face) that allows | 21 | #|free usage of models like Mixtral 7x8, Llama2 70B and more. | 22 | #| | 23 | #|- A MachinaScript parser that translates JSON into serial | 24 | #|for the Arduino; | 25 | #| | 26 | #|- A map for skills and motors you need to customize | 27 | #|according to your project; | 28 | #| | 29 | #|You are free to use and modify this piece of code, | 30 | #|as well as to come up with something totally different. | 31 | #| | 32 | #|This is an initial proof-of-concept, more examples coming soon. | 33 | #| | 34 | #|Interested in contributing? Join the community! | 35 | #|See the github repo for more infos. | 36 | #+------------------------------------------------------------------+ 37 | 38 | # tip: if you have no idea what it all means, try pasting the contents 39 | # of this file into GPT-4 or any other LLM and ask for a summary. 40 | # Then you can start making your own modifications and modules precisely. 
41 | 42 | from hugchat import hugchat 43 | from hugchat.login import Login 44 | import json 45 | import serial 46 | import time 47 | import speech_recognition as sr 48 | import os 49 | 50 | import re 51 | 52 | # Log in to Hugging Face and grant authorization to HuggingChat 53 | # WARNING: never hardcode credentials, under any circumstances. 54 | email = os.getenv('HF_EMAIL') 55 | passwd = os.getenv('HF_PASSWD') 56 | 57 | sign = Login(email, passwd) 58 | cookies = sign.login() 59 | 60 | # Save cookies to the local directory 61 | cookie_path_dir = "./cookies_snapshot" 62 | sign.saveCookiesToDir(cookie_path_dir) 63 | 64 | # Create a ChatBot 65 | chatbot = hugchat.ChatBot(cookies=cookies.get_dict()) 66 | 67 | # Define the serial connection to Arduino 68 | arduino_serial = serial.Serial('COM3', 9600, timeout=1) # change 'COM3' to your Arduino port 69 | # arduino_serial = serial.Serial('/dev/ttyACM0', 9600, timeout=1) # for Linux/Mac 70 | 71 | # Mapping of motor names to their corresponding serial command channels 72 | motor_mapping = { 73 | "motor_neck_vertical": "A", 74 | "motor_neck_horizontal": "B", 75 | # Define additional motors and their command channels here 76 | } 77 | 78 | # Initialize the speech recognizer 79 | recognizer = sr.Recognizer() 80 | 81 | def listen_for_command(): 82 | """Listens for a specific wake-up word and records the next spoken command.""" 83 | with sr.Microphone() as source: 84 | print("Listening for the wake-up word 'hello robot' followed by your command.") 85 | audio = recognizer.listen(source) 86 | try: 87 | text = recognizer.recognize_google(audio).lower() 88 | if "hello robot" in text: 89 | print("Wake-up word heard, what is your command?") 90 | audio = recognizer.listen(source) 91 | command_text = recognizer.recognize_google(audio) 92 | print(f"Command received: {command_text}") 93 | return command_text 94 | except sr.UnknownValueError: 95 | print("Could not understand audio.
Please try again.") 96 | except sr.RequestError as e: 97 | print(f"Could not request results; {e}") 98 | return None 99 | 100 | def get_machina_script(command): 101 | # Create a new conversation 102 | id = chatbot.new_conversation() 103 | chatbot.change_conversation(id) 104 | """Queries the OpenAI API with the spoken command to generate a MachinaScript.""" 105 | system_message = read_system_prompt() 106 | 107 | # query the system message and the user command in the same prompt (no system message avaliable for hugchat) 108 | prompt = system_message + "\n\n" + "important: output the json code only and absolutely nothing else" + "\n\n" + "user input to answer:" + command 109 | print(prompt) 110 | 111 | query_result = chatbot.query(prompt) 112 | 113 | # Wait for the operation to complete 114 | if hasattr(query_result, 'wait_until_done'): 115 | query_result.wait_until_done() 116 | if query_result.is_done(): 117 | # Correctly obtain the final text using get_final_text method if available 118 | final_text = query_result.get_final_text() if hasattr(query_result, 'get_final_text') else '' 119 | print("Final text content:", final_text) 120 | corrected_text = re.sub(r'\\_', '_', final_text) 121 | 122 | return corrected_text 123 | else: 124 | print("Operation not complete yet.") 125 | else: 126 | print("The wait_until_done method does not exist in query_result.") 127 | return None # Return None if no JSON is extracted or if operation is not complete 128 | 129 | 130 | def read_system_prompt(): 131 | """Reads instructions and project specifications from two files and prepares them for the LLM.""" 132 | # Initialize an empty string to hold the combined content 133 | combined_content = "" 134 | 135 | # Read the first file (MachinaScript language instructions) 136 | # tip: if your prompt is taking too long, or you wish for real-time interaction, 137 | # you may customize this txt file to make it shorter with less words/tokens. 
138 | with open('machinascript_language_large.txt', 'r') as file: 139 | combined_content += file.read() + "\n\n" # Append file content with a newline for separation 140 | 141 | # Read the second file (Project specifications template) 142 | # Note: edit this file with your project specifications. 143 | with open('machinascript_project_specs.txt', 'r') as file: 144 | combined_content += file.read() # Append second file content 145 | 146 | # Return the combined content in the expected format for the LLM 147 | return combined_content 148 | 149 | def execute_machina_script(script): 150 | """Parses the MachinaScript and executes the actions by sending commands to Arduino.""" 151 | actions = json.loads(script)["Machina_Actions"] 152 | for action_key, action in actions.items(): 153 | # Check for 'movements' key and that it is not empty before execution 154 | if "movements" in action and action["movements"]: 155 | execute_movements(action["movements"]) 156 | # Check for 'useSkills' key and that it is not empty before execution 157 | if "useSkills" in action and action["useSkills"]: 158 | execute_skills(action["useSkills"]) 159 | 160 | def execute_movements(movements): 161 | """Generates and sends the combined movement commands to the Arduino.""" 162 | # Initialize command strings for both motors 163 | commands_for_A = [] 164 | commands_for_B = [] 165 | 166 | # Translate speed from 'medium', 'slow', 'fast' to milliseconds or direct values 167 | speed_translation = {"slow": 30, "medium": 10, "fast": 4} 168 | 169 | for movement_key, movement in movements.items(): 170 | if "motor_neck_vertical" in movement: 171 | # Add command for motor A 172 | speed_for_A = speed_translation[movement["speed"]] # Translate speed 173 | commands_for_A.append(f"A:{movement['motor_neck_vertical']},{speed_for_A}") 174 | if "motor_neck_horizontal" in movement: 175 | # Add command for motor B 176 | speed_for_B = speed_translation[movement["speed"]] # Translate speed 177 | 
commands_for_B.append(f"B:{movement['motor_neck_horizontal']},{speed_for_B}") 178 | 179 | # Combine commands for A and B 180 | combined_commands = ";".join(commands_for_A + commands_for_B) + ";" + "\n" 181 | 182 | # Send combined commands to Arduino 183 | send_to_arduino(combined_commands) 184 | time.sleep(3) # Adjust as needed for movement duration 185 | 186 | def execute_skills(skills_dict): 187 | """Executes the defined skills.""" 188 | for skill_key, skill_info in skills_dict.items(): 189 | if skill_key == "photograph": 190 | take_picture() 191 | elif skill_key == "blink_led": 192 | # Example skill, implementation would be similar to execute_movements 193 | print("Blinking LED (skill not implemented).") 194 | 195 | def send_to_arduino(command): 196 | """Sends a command string to the Arduino via serial.""" 197 | print(f"Sending to Arduino: {command}") 198 | arduino_serial.write(command.encode()) 199 | 200 | def take_picture(): 201 | """Simulates taking a picture.""" 202 | print("Taking a picture with the webcam...") 203 | 204 | # Main function to listen for commands and process them 205 | def main(): 206 | while True: 207 | command = listen_for_command() 208 | if command: 209 | script = get_machina_script(command) 210 | execute_machina_script(script) 211 | 212 | if __name__ == "__main__": 213 | main() 214 | -------------------------------------------------------------------------------- /MachinaScript/MACHINA1/MachinaBrain/brain_local_llms.py: -------------------------------------------------------------------------------- 1 | # _ _ _ () 2 | # ' ) ) ) / /\ _/_ 3 | # / / / __. _. /_ o ____ __. / ) _. __ o _ / 4 | # / ' (_(_/|_(__/ /_<_/ / <_(_/|_/__/__(__/ (_<_/_)_<__ 5 | # / 6 | # ' 7 | #+------------------------------------------------------------------+ 8 | #|MachinaScript for Robots v0.1.2024 | 9 | #| | 10 | #|..:: BRAIN MODULE ::.. | 11 | #| | 12 | #|This is the brain module, intended to be executed on a computer. 
| 13 | #|You may need to customize this code for your own robotic mount. | 14 | #| | 15 | #|Keep in mind this is just a very simple pipeline integration of | 16 | #|many concepts together: | 17 | #| | 18 | #|- Wake up word, followed by a voice command input from the user; | 19 | #| | 20 | #|- OpenAi implementation of the (newest) completions API, | 21 | #|teaching the system prompt to use the MachinaScript JSON-based | 22 | #|language; | 23 | #| | 24 | #|- A MachinaScript parser that translates JSON into serial | 25 | #|for the Arduino; | 26 | #| | 27 | #|- A map for skills and motors you need to customize | 28 | #|according to your project; | 29 | #| | 30 | #|You are free to use and modify this piece of code, | 31 | #|as well as to come up with something totally different. | 32 | #| | 33 | #|This is an initial proof-of-concept, more examples coming soon. | 34 | #| | 35 | #|Interested in contributing? Join the community! | 36 | #|See the github repo for more infos. | 37 | #+------------------------------------------------------------------+ 38 | 39 | # tip: if you have no idea what it all means, try pasting the contents 40 | # of this file into GPT-4 or any other LLM and ask for a summary. 41 | # Then you can start making your own modifications and modules precisely. 
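The header above describes a parser that turns MachinaScript JSON into serial commands for the Arduino. As a rough, standalone sketch of that translation (the `movements_to_serial` helper name is illustrative, not part of the repo; the `A:`/`B:` wire format and speed table mirror the MACHINA1 brain modules):

```python
# Standalone sketch of the JSON-to-serial translation; names are illustrative.
SPEED_MS = {"slow": 30, "medium": 10, "fast": 4}  # per-degree delay in milliseconds

def movements_to_serial(movements):
    """Translate a MachinaScript 'movements' dict into 'A:pos,speed;' commands."""
    commands_for_a, commands_for_b = [], []
    for movement in movements.values():
        speed = SPEED_MS[movement["speed"]]
        if "motor_neck_vertical" in movement:
            commands_for_a.append(f"A:{movement['motor_neck_vertical']},{speed}")
        if "motor_neck_horizontal" in movement:
            commands_for_b.append(f"B:{movement['motor_neck_horizontal']},{speed}")
    # Motor-A commands first, then motor-B, newline-terminated for the firmware reader
    return ";".join(commands_for_a + commands_for_b) + ";\n"

print(movements_to_serial({"move_1": {"motor_neck_vertical": 45, "speed": "medium"}}))  # prints "A:45,10;"
```

The same dict could then be written to the serial port exactly as the `send_to_arduino` functions in this repo do.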
42 | 43 | import json 44 | import serial 45 | import time 46 | import speech_recognition as sr 47 | import os 48 | from openai import OpenAI 49 | 50 | # Initialize OpenAI client with updated method 51 | client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed") 52 | 53 | # Define the serial connection to Arduino 54 | arduino_serial = serial.Serial('COM3', 9600, timeout=1) #change COM3 with your arduino port 55 | # arduino_serial = "/dev/ttyACM0" # for Linux/Mac 56 | 57 | # Mapping of motor names to their corresponding Arduino pins 58 | motor_mapping = { 59 | "motor_neck_vertical": "A", 60 | "motor_neck_horizontal": "B", 61 | # Define additional motors and their Arduino pins here 62 | } 63 | 64 | # Initialize the speech recognizer 65 | recognizer = sr.Recognizer() 66 | 67 | def listen_for_command(): 68 | """Listens for a specific wake-up word and records the next spoken command.""" 69 | with sr.Microphone() as source: 70 | print("Listening for the wake-up word 'hello robot' followed by your command.") 71 | audio = recognizer.listen(source) 72 | try: 73 | text = recognizer.recognize_google(audio).lower() 74 | if "hello robot" in text: 75 | print("Wake-up word heard, what is your command?") 76 | audio = recognizer.listen(source) 77 | command_text = recognizer.recognize_google(audio) 78 | print(f"Command received: {command_text}") 79 | return command_text 80 | except sr.UnknownValueError: 81 | print("Could not understand audio. 
Please try again.") 82 | except sr.RequestError as e: 83 | print(f"Could not request results; {e}") 84 | return None 85 | 86 | def get_machina_script(command): 87 | """Queries the OpenAI API with the spoken command to generate a MachinaScript.""" 88 | system_message = read_system_prompt() 89 | completion = client.chat.completions.create( 90 | model="local-model", # this field is currently unused 91 | messages=[ 92 | system_message, 93 | {"role": "user", "content": command} 94 | ] 95 | ) 96 | return completion.choices[0].message.content 97 | 98 | def read_system_prompt(): 99 | """Reads instructions and project specifications from two files and prepares them for the LLM.""" 100 | # Initialize an empty string to hold the combined content 101 | combined_content = "" 102 | 103 | # Read the first file (MachinaScript language instructions) 104 | with open('machinascript_language_large.txt', 'r') as file: 105 | combined_content += file.read() + "\n\n" # Append file content with a newline for separation 106 | 107 | # Read the second file (Project specifications template) 108 | # Note: edit this file with your project specifications. 
109 | with open('machinascript_project_specs.txt', 'r') as file: 110 | combined_content += file.read() # Append second file content 111 | 112 | # Return the combined content in the expected format for the LLM 113 | return {"role": "system", "content": combined_content} 114 | 115 | def execute_machina_script(script): 116 | """Parses the MachinaScript and executes the actions by sending commands to Arduino.""" 117 | actions = json.loads(script)["Machina_Actions"] 118 | for action_key, action in actions.items(): 119 | # Check for 'movements' key and that it is not empty before execution 120 | if "movements" in action and action["movements"]: 121 | execute_movements(action["movements"]) 122 | # Check for 'useSkills' key and that it is not empty before execution 123 | if "useSkills" in action and action["useSkills"]: 124 | execute_skills(action["useSkills"]) 125 | 126 | def execute_movements(movements): 127 | """Generates and sends the combined movement commands to the Arduino.""" 128 | # Initialize command strings for both motors 129 | commands_for_A = [] 130 | commands_for_B = [] 131 | 132 | # Translate speed from 'medium', 'slow', 'fast' to milliseconds or direct values 133 | speed_translation = {"slow": 30, "medium": 10, "fast": 4} 134 | 135 | for movement_key, movement in movements.items(): 136 | if "motor_neck_vertical" in movement: 137 | # Add command for motor A 138 | speed_for_A = speed_translation[movement["speed"]] # Translate speed 139 | commands_for_A.append(f"A:{movement['motor_neck_vertical']},{speed_for_A}") 140 | if "motor_neck_horizontal" in movement: 141 | # Add command for motor B 142 | speed_for_B = speed_translation[movement["speed"]] # Translate speed 143 | commands_for_B.append(f"B:{movement['motor_neck_horizontal']},{speed_for_B}") 144 | 145 | # Combine commands for A and B 146 | combined_commands = ";".join(commands_for_A + commands_for_B) + ";" + "\n" 147 | 148 | # Send combined commands to Arduino 149 | send_to_arduino(combined_commands) 150 
| time.sleep(1) # Adjust as needed for movement duration 151 | 152 | def execute_skills(skills_dict): 153 | """Executes the defined skills.""" 154 | for skill_key, skill_info in skills_dict.items(): 155 | if skill_key == "photograph": 156 | take_picture() 157 | elif skill_key == "blink_led": 158 | # Example skill, implementation would be similar to execute_movements 159 | print("Blinking LED (skill not implemented).") 160 | 161 | def send_to_arduino(command): 162 | """Sends a command string to the Arduino via serial.""" 163 | print(f"Sending to Arduino: {command}") 164 | arduino_serial.write(command.encode()) 165 | 166 | def take_picture(): 167 | """Simulates taking a picture.""" 168 | print("Taking a picture with the webcam...") 169 | 170 | # Main function to listen for commands and process them 171 | def main(): 172 | while True: 173 | command = listen_for_command() 174 | if command: 175 | script = get_machina_script(command) 176 | execute_machina_script(script) 177 | 178 | if __name__ == "__main__": 179 | main() 180 | -------------------------------------------------------------------------------- /MachinaScript/MACHINA1/MachinaBrain/brain_openai.py: -------------------------------------------------------------------------------- 1 | # _ _ _ () 2 | # ' ) ) ) / /\ _/_ 3 | # / / / __. _. /_ o ____ __. / ) _. __ o _ / 4 | # / ' (_(_/|_(__/ /_<_/ / <_(_/|_/__/__(__/ (_<_/_)_<__ 5 | # / 6 | # ' 7 | #+------------------------------------------------------------------+ 8 | #|MachinaScript for Robots v0.1.2024 | 9 | #| | 10 | #|..:: BRAIN MODULE ::.. | 11 | #| | 12 | #|This is the brain module, intended to be executed on a computer. | 13 | #|You may need to customize this code for your own robotic mount. 
| 14 | #| | 15 | #|Keep in mind this is just a very simple pipeline integration of | 16 | #|many concepts together: | 17 | #| | 18 | #|- Wake up word, followed by a voice command input from the user; | 19 | #| | 20 | #|- OpenAi implementation of the (newest) completions API, | 21 | #|teaching the system prompt to use the MachinaScript JSON-based | 22 | #|language; | 23 | #| | 24 | #|- A MachinaScript parser that translates JSON into serial | 25 | #|for the Arduino; | 26 | #| | 27 | #|- A map for skills and motors you need to customize | 28 | #|according to your project; | 29 | #| | 30 | #|You are free to use and modify this piece of code, | 31 | #|as well as to come up with something totally different. | 32 | #| | 33 | #|This is an initial proof-of-concept, more examples coming soon. | 34 | #| | 35 | #|Interested in contributing? Join the community! | 36 | #|See the github repo for more infos. | 37 | #+------------------------------------------------------------------+ 38 | 39 | # tip: if you have no idea what it all means, try pasting the contents 40 | # of this file into GPT-4 or any other LLM and ask for a summary. 41 | # Then you can start making your own modifications and modules precisely. 
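Every brain module ends by handing the model's reply to `json.loads` and walking `Machina_Actions`, so it can be useful to dry-run that dispatch without a board attached. A minimal sketch under that assumption (`dry_run` is a hypothetical helper, not part of the repo):

```python
import json

# A reply shaped like the examples in machinascript_language.txt
REPLY = json.dumps({
    "Machina_Actions": {
        "action_1": {"description": "Look up",
                     "movements": {"1": {"motor_neck_vertical": 45, "speed": "fast"}},
                     "useSkills": {}},
        "action_2": {"description": "Take the picture",
                     "movements": {},
                     "useSkills": {"1": {"skill": "photograph"}}},
    }
})

def dry_run(script):
    """Walk Machina_Actions the way execute_machina_script does, collecting steps."""
    steps = []
    for action in json.loads(script)["Machina_Actions"].values():
        if action.get("movements"):   # skip empty movement blocks
            steps.append(("movements", list(action["movements"].values())))
        if action.get("useSkills"):   # skip empty skill blocks
            steps.append(("skills", [s["skill"] for s in action["useSkills"].values()]))
    return steps

print(dry_run(REPLY))
```

Swapping the `print` for real serial writes recovers the behavior of the execute functions below.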
42 | 43 | import json 44 | import serial 45 | import time 46 | import speech_recognition as sr 47 | import os 48 | from openai import OpenAI 49 | 50 | # Initialize OpenAI client with updated method 51 | client = OpenAI(api_key=os.getenv('OPENAI_API_KEY')) 52 | 53 | # Define the serial connection to Arduino 54 | arduino_serial = serial.Serial('COM3', 9600, timeout=1) #change COM3 with your arduino port 55 | # arduino_serial = "/dev/ttyACM0" # for Linux/Mac 56 | 57 | # Mapping of motor names to their corresponding Arduino pins 58 | motor_mapping = { 59 | "motor_neck_vertical": 3, 60 | "motor_neck_horizontal": 5, 61 | # Define additional motors and their Arduino pins here 62 | } 63 | 64 | # Initialize the speech recognizer 65 | recognizer = sr.Recognizer() 66 | 67 | def listen_for_command(): 68 | """Listens for a specific wake-up word and records the next spoken command.""" 69 | with sr.Microphone() as source: 70 | print("Listening for the wake-up word 'hello robot' followed by your command.") 71 | audio = recognizer.listen(source) 72 | try: 73 | text = recognizer.recognize_google(audio).lower() 74 | if "hello robot" in text: 75 | print("Wake-up word heard, what is your command?") 76 | audio = recognizer.listen(source) 77 | command_text = recognizer.recognize_google(audio) 78 | print(f"Command received: {command_text}") 79 | return command_text 80 | except sr.UnknownValueError: 81 | print("Could not understand audio. 
Please try again.") 82 | except sr.RequestError as e: 83 | print(f"Could not request results; {e}") 84 | return None 85 | 86 | def get_machina_script(command): 87 | """Queries the OpenAI API with the spoken command to generate a MachinaScript.""" 88 | system_message = read_system_prompt() 89 | completion = client.chat.completions.create( 90 | model="gpt-3.5-turbo", 91 | messages=[ 92 | system_message, 93 | {"role": "user", "content": command} 94 | ] 95 | ) 96 | return completion.choices[0].message.content 97 | 98 | def read_system_prompt(): 99 | """Reads instructions and project specifications from two files and prepares them for the LLM.""" 100 | # Initialize an empty string to hold the combined content 101 | combined_content = "" 102 | 103 | # Read the first file (MachinaScript language instructions) 104 | with open('machinascript_language.txt', 'r') as file: 105 | combined_content += file.read() + "\n\n" # Append file content with a newline for separation 106 | 107 | # Read the second file (Project specifications template) 108 | # Note: edit this file with your project specifications. 
109 | with open('machinascript_project_specs.txt', 'r') as file:
110 | combined_content += file.read() # Append second file content
111 |
112 | # Return the combined content in the expected format for the LLM
113 | return {"role": "system", "content": combined_content}
114 |
115 | def execute_machina_script(script):
116 | """Parses the MachinaScript and executes the actions by sending commands to Arduino."""
117 | actions = json.loads(script)["Machina_Actions"]
118 | for action_key, action in actions.items():
119 | # Check for 'movements' key and that it is not empty before execution
120 | if "movements" in action and action["movements"]:
121 | execute_movements(action["movements"])
122 | # Check for 'useSkills' key and that it is not empty before execution
123 | if "useSkills" in action and action["useSkills"]:
124 | execute_skills(action["useSkills"])
125 |
126 | def execute_movements(movements):
127 | """Generates and sends the movement commands to the Arduino."""
128 | for movement_key, movement in movements.items():
129 | speed = movement.get("speed", "medium") # 'speed' is a sibling of the motor keys, not a motor
130 | for motor_name, position in movement.items():
131 | if motor_name in motor_mapping:
132 | command = f"{motor_mapping[motor_name]},{position},{speed}\n"
133 | send_to_arduino(command)
134 | time.sleep(1) # Adjust as needed for movement duration
135 |
136 | def execute_skills(skills_dict):
137 | """Executes the defined skills."""
138 | for skill_key, skill_info in skills_dict.items():
139 | if skill_info.get("skill") == "photograph": # entries look like {"1": {"skill": "photograph"}}
140 | take_picture()
141 | elif skill_info.get("skill") == "blink_led":
142 | # Example skill, implementation would be similar to execute_movements
143 | print("Blinking LED (skill not implemented).")
144 |
145 | def send_to_arduino(command):
146 | """Sends a command string to the Arduino via serial."""
147 | print(f"Sending to Arduino: {command}")
148 | arduino_serial.write(command.encode())
149 |
150 | def take_picture():
151 | """Simulates taking a picture."""
152 | print(f"Taking a picture with the
webcam...")
153 |
154 | # Main function to listen for commands and process them
155 | def main():
156 | while True:
157 | command = listen_for_command()
158 | if command:
159 | script = get_machina_script(command)
160 | execute_machina_script(script)
161 |
162 | if __name__ == "__main__":
163 | main()
164 |
-------------------------------------------------------------------------------- /MachinaScript/MACHINA1/MachinaBrain/machinascript_language.txt: --------------------------------------------------------------------------------
1 | You are a robot. You move by speaking in MachinaScript.
2 | You can only speak in MachinaScript, a JSON-based format used to define robotic actions, including
3 | motor movements and skill usage, under the specific context given by the user input.
4 |
5 | Each action can involve single or multiple movements, motors and skills, with defined parameters
6 | like motor positions, speeds, and skill-specific details.
7 | Keep the animations charismatic and realistic.
8 |
9 | Supposing the user input was: "Look up, take a picture of the night sky and identify the stars",
10 | the response in MachinaScript could look like this:
11 | {
12 | "Machina_Actions": {
13 | "action_1": {
14 | "description": "Positioning before taking a picture",
15 | "movements": {
16 | "1": {
17 | "motor_neck_vertical": 45,
18 | "motor_neck_horizontal": 0,
19 | "speed": "medium"
20 | }
21 | },
22 | "useSkills": {}
23 | },
24 | "action_2": {
25 | "description": "Taking picture and indicating completion",
26 | "movements": {},
27 | "useSkills": {
28 | "1": {
29 | "skill": "photograph"
30 | }
31 | }
32 | },
33 | "action_3": {
34 | "description": "Returning to normal position",
35 | "movements": {
36 | "1": {
37 | "motor_neck_vertical": 0,
38 | "speed": "fast"
39 | }
40 | },
41 | "useSkills": {}
42 | }
43 | }
44 | }
45 |
46 | Please generate a new MachinaScript JSON using the exact given format and project specifications.
47 | You can only speak in this JSON format.
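Since the language file above requires the model to answer in JSON only, a brain can cheaply validate a reply before any servo moves. A hedged sketch of such a check (`validate_machina_script` is a hypothetical helper name, not part of the repo):

```python
import json

def validate_machina_script(text):
    """Parse a model reply and check the minimal MachinaScript shape before executing."""
    data = json.loads(text)  # raises json.JSONDecodeError (a ValueError) on malformed output
    for name, action in data["Machina_Actions"].items():
        if not isinstance(action.get("movements", {}), dict):
            raise ValueError(f"{name}: 'movements' must be a JSON object")
        if not isinstance(action.get("useSkills", {}), dict):
            raise ValueError(f"{name}: 'useSkills' must be a JSON object")
    return data

checked = validate_machina_script(
    '{"Machina_Actions": {"action_1": {"movements": {}, "useSkills": {}}}}'
)
```

A brain module could call this between the LLM query and `execute_machina_script`, and re-prompt the model when it raises.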
48 |
-------------------------------------------------------------------------------- /MachinaScript/MACHINA1/MachinaBrain/machinascript_language_large.txt: --------------------------------------------------------------------------------
1 | You are a robot. You move by generating MachinaScript.
2 | You can only speak in MachinaScript, a JSON-based format used to define robotic actions, including
3 | motor movements and skill usage, under the specific context given by the user input.
4 |
5 | Each action can involve single or multiple movements, motors and skills, with defined parameters
6 | like motor positions, speeds, and skill-specific details.
7 |
8 | You can make as many actions, movements and skills as you need.
9 | Keep the motion animations as charismatic and realistic as possible.
10 |
11 | Supposing the user input was: "Look up, take a picture of the night sky and identify the stars",
12 | the response in MachinaScript could look like this:
13 | {
14 | "Machina_Actions": {
15 | "action_1": {
16 | "description": "Positioning before taking a picture",
17 | "movements": {
18 | "move_1": {
19 | "motor_neck_vertical": 45,
20 | "motor_neck_horizontal": 0,
21 | "speed": "medium"
22 | },
23 | "move_2": {
24 | "motor_neck_vertical": 0,
25 | "motor_neck_horizontal": 30,
26 | "speed": "medium"
27 | }
28 | },
29 | "useSkills": {}
30 | },
31 | "action_2": {
32 | "description": "Taking picture and indicating completion",
33 | "movements": {},
34 | "useSkills": {
35 | "1": {
36 | "skill": "photograph"
37 | }
38 | }
39 | },
40 | "action_3": {
41 | "description": "Returning to normal position",
42 | "movements": {
43 | "move_1": {
44 | "motor_neck_vertical": 0,
45 | "speed": "fast"
46 | }
47 | },
48 | "useSkills": {}
49 | }
50 | }
51 | }
52 |
53 | Please generate a new MachinaScript JSON using the exact JSON format for keys.
54 | For advanced animations use multiple movements in the same action, as moves in the same action will be performed in order.
55 | Between actions there will be a time sleep breaking possible animations. Only create new actions when its really needed. 56 | Follow strictly the project specifications. 57 | Use the only skills specified if needed. 58 | Movements and skills are supposed to be used in different actions. 59 | You may express yourself with personality on the movements. 60 | You can only speak in this JSON format. Do not provide any kind of extra text or explanation. -------------------------------------------------------------------------------- /MachinaScript/MACHINA1/MachinaBrain/machinascript_project_specs.txt: -------------------------------------------------------------------------------- 1 | { 2 | "Motors": [ 3 | {"id": "motor_neck_vertical", "range": [0, 180]}, 4 | {"id": "motor_neck_horizontal", "range": [0, 180]} 5 | ], 6 | "Skills": [ 7 | {"id": "photograph", "description": "Captures a photograph using an attached camera and send to a multimodal LLM."}, 8 | {"id": "blink_led", "parameters": {"led_pin": 10, "duration": 500, "times": 3}, "description": "Blinks an LED to indicate action."} 9 | ], 10 | "Limitations": [ 11 | {"motor": "motor_neck_vertical", "max_speed": "medium"}, 12 | {"motor_speeds": ["slow", "medium", "fast"]}, 13 | {"motors_normal_position": 90} 14 | ], 15 | "Personality": ["Funny", "delicate"], 16 | "Agency_Level": "high" 17 | } 18 | -------------------------------------------------------------------------------- /MachinaScript/MACHINA2A_Autogen/MachinaBody/machinascript_body.ino: -------------------------------------------------------------------------------- 1 | # _ _ _ () 2 | # ' ) ) ) / /\ _/_ 3 | # / / / __. _. /_ o ____ __. / ) _. __ o _ / 4 | # / ' (_(_/|_(__/ /_<_/ / <_(_/|_/__/__(__/ (_<_/_)_<__ 5 | # / 6 | # ' 7 | #+----------------------------------------------------------------+ 8 | #|MachinaScript for Robots v0.1.2024 | 9 | #| | 10 | #|..:: BODY MODULE ::.. 
| 11 | #| | 12 | #|This is the body module, intended to be executed on a | 13 | #|microcontroller. You may need to customize this code for | 14 | #|your own robotic mount. | 15 | #| | 16 | #|Keep in mind this is just a very simple pipeline integration | 17 | #|of many concepts together: | 18 | #| | 19 | #|- To receive and parse serial data received into a set of | 20 | #|ordered actions and movements and execute them. | 21 | #| | 22 | #|- Instructions for this code exactly are for moving a set of | 23 | #|servo motors and blinking LEDs when serial messages come up. | 24 | #|You may want to add sensors, other motors, change ports | 25 | #|and more as you feel. | 26 | #| | 27 | #|You are free to use and modify this piece of code, | 28 | #|as well as to come up with something totally different. | 29 | #| | 30 | #|This is an initial proof-of-concept, more examples coming soon. | 31 | #| | 32 | #|Interested in contributing? Join the community! | 33 | #|See the github repo for more infos. | 34 | #+----------------------------------------------------------------+ 35 | 36 | # tip: if you have no idea what it all means, try pasting the contents 37 | # of this file into GPT or any other LLM and ask for a summary. 38 | # Then you can start making your own modifications and modules precisely. 
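The body module below splits each newline-terminated action into `;`-separated commands: `pin,position,speed` for servos and `blinkLED,pin,duration,times` for LEDs. The same parsing can be emulated host-side in Python to test a brain's output without flashing a board (`parse_action` is an illustrative name, not part of the repo):

```python
def parse_action(line):
    """Emulate the firmware's command splitting on a host machine."""
    parsed = []
    for cmd in line.strip().split(";"):
        if not cmd:
            continue  # the trailing ';' leaves an empty fragment
        if cmd.startswith("blinkLED"):
            _, pin, duration, times = cmd.split(",")
            parsed.append(("blinkLED", int(pin), int(duration), int(times)))
        else:
            pin, position, speed = cmd.split(",")
            parsed.append(("servo", int(pin), int(position), speed))
    return parsed

print(parse_action("3,90,medium;blinkLED,10,500,3;\n"))
```

Feeding a brain's serial string through this before wiring hardware makes it easy to spot malformed commands early.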
39 |
40 | #include <Servo.h>
41 | #include
42 |
43 | // Servo configuration
44 | const int numServos = 4;
45 | Servo servos[numServos];
46 | const int servoPins[numServos] = {3, 5, 6, 9}; // Servo pin assignments
47 |
48 | // LED configuration
49 | const int numLEDs = 2;
50 | const int ledPins[numLEDs] = {10, 11}; // LED pin assignments
51 |
52 | // Function declarations
53 | void executeAction(const String& actionCommand);
54 | void executeMovement(const String& movementCommand);
55 | void moveServo(int motorID, int position, int speed);
56 | int getMotorIDByPin(int pin);
57 | int interpretSpeed(const String& speedStr);
58 | void blinkLED(int ledPin, int duration, int times);
59 |
60 | void setup() {
61 | Serial.begin(9600); // Initialize serial communication
62 | // Attach servos to their respective pins
63 | for (int i = 0; i < numServos; i++) {
64 | servos[i].attach(servoPins[i]);
65 | }
66 | // Set LED pins as output
67 | for (int i = 0; i < numLEDs; i++) {
68 | pinMode(ledPins[i], OUTPUT);
69 | }
70 | }
71 |
72 | void loop() {
73 | static String receivedData = ""; // Buffer for incoming data
74 | // Read serial data
75 | while (Serial.available() > 0) {
76 | char inChar = (char)Serial.read();
77 | // Check for end of command
78 | if (inChar == '\n') {
79 | executeAction(receivedData); // Execute received command
80 | receivedData = ""; // Clear buffer
81 | } else {
82 | receivedData += inChar; // Append received character
83 | }
84 | }
85 | }
86 |
87 | void executeAction(const String& actionCommand) {
88 | // Process each movement command within the received action
89 | int movementStart = 0, movementEnd;
90 | while ((movementEnd = actionCommand.indexOf(';', movementStart)) != -1) {
91 | executeMovement(actionCommand.substring(movementStart, movementEnd));
92 | movementStart = movementEnd + 1;
93 | }
94 | }
95 |
96 | void executeMovement(const String& movementCommand) {
97 | // Check and execute LED blink command
98 | if (movementCommand.startsWith("blinkLED")) {
99 |
int ledPin, duration, times;
100 | sscanf(movementCommand.c_str(), "blinkLED,%d,%d,%d", &ledPin, &duration, &times);
101 | blinkLED(ledPin, duration, times);
102 | } else {
103 | // Parse and execute servo movement command
104 | int servoPin, position;
105 | char speedStr[10];
106 | if (sscanf(movementCommand.c_str(), "%d,%d,%s", &servoPin, &position, speedStr) == 3) {
107 | int motorID = getMotorIDByPin(servoPin);
108 | int speed = interpretSpeed(speedStr);
109 | if (motorID != -1) {
110 | moveServo(motorID, position, speed);
111 | }
112 | }
113 | }
114 | }
115 |
116 | void moveServo(int motorID, int position, int speed) {
117 | // Control servo to move to the specified position at the given speed
118 | Servo& servo = servos[motorID];
119 | int currentPosition = servo.read();
120 | for (int pos = currentPosition; pos != position; ) {
121 | pos += (pos < position) ? 1 : -1; // advance one degree toward the target
122 | servo.write(pos); delay(speed); // the final target position is written on the last pass
123 | }
124 | }
125 |
126 | int getMotorIDByPin(int pin) {
127 | // Identify the servo motor ID associated with the given pin
128 | for (int i = 0; i < numServos; i++) {
129 | if (servoPins[i] == pin) {
130 | return i;
131 | }
132 | }
133 | return -1;
134 | }
135 |
136 | int interpretSpeed(const String& speedStr) {
137 | // Convert speed string to delay time
138 | if (speedStr == "slow") return 20;
139 | if (speedStr == "medium") return 10;
140 | if (speedStr == "fast") return 5;
141 | return 10;
142 | }
143 |
144 | void blinkLED(int ledPin, int duration, int times) {
145 | // Blink the specified LED at the given duration and repeat times
146 | for (int i = 0; i < times; i++) {
147 | digitalWrite(ledPin, HIGH);
148 | delay(duration);
149 | digitalWrite(ledPin, LOW);
150 | delay(duration);
151 | }
152 | }
-------------------------------------------------------------------------------- /MachinaScript/MACHINA2A_Autogen/MachinaBrain/machinagen_brain.py: --------------------------------------------------------------------------------
1 | # _ _ _ ()
2 | # ' ) ) ) / /\
_/_ 3 | # / / / __. _. /_ o ____ __. / ) _. __ o _ / 4 | # / ' (_(_/|_(__/ /_<_/ / <_(_/|_/__/__(__/ (_<_/_)_<__ 5 | # / 6 | # ' 7 | #+------------------------------------------------------------------+ 8 | #|MachinaScript for Robots v0.1.2024 | 9 | #| | 10 | #|..:: BRAIN MODULE ::.. | 11 | #| | 12 | #|This is the brain module, intended to be executed on a computer. | 13 | #|You may need to customize this code for your own robotic mount. | 14 | #| | 15 | #|Keep in mind this is just a very simple pipeline integration of | 16 | #|many concepts together. | 17 | #| | 18 | #|You are free to use and modify this piece of code, | 19 | #|as well as to come up with something totally different. | 20 | #| | 21 | #|This is an initial proof-of-concept, more examples coming soon. | 22 | #| | 23 | #|Interested in contributing? Join the community! | 24 | #|See the github repo for more infos. | 25 | #+------------------------------------------------------------------+ 26 | 27 | import json 28 | import serial 29 | import time 30 | import speech_recognition as sr 31 | import os 32 | import autogen 33 | 34 | ## ARDUINO 35 | # Define the serial connection to Arduino 36 | arduino_serial = serial.Serial('/dev/ttyACM0', 9600, timeout=1) 37 | 38 | # Mapping of motor names to their corresponding Arduino pins 39 | motor_mapping = { 40 | "motor_neck_vertical": 3, 41 | "motor_neck_horizontal": 5, 42 | # Define additional motors and their Arduino pins here 43 | } 44 | 45 | ## SPEECH RECOGNITION 46 | # Initialize the speech recognizer 47 | recognizer = sr.Recognizer() 48 | 49 | ## MACHINASCRIPT 50 | # Teach LLMs how to use MachinaScript via the system prompt 51 | def read_system_prompt(): 52 | """Reads instructions and project specifications from two files and prepares them for the LLM.""" 53 | # Initialize an empty string to hold the combined content 54 | combined_content = "" 55 | 56 | # Read the first file (MachinaScript language instructions) 57 | with open('machinascript_language.txt', 'r') 
as file: 58 | combined_content += file.read() + "\n\n" # Append file content with a newline for separation 59 | 60 | # Read the second file (Project specifications template) 61 | # Note: edit this file with your project specifications. 62 | with open('machinascript_project_specifics.txt', 'r') as file: 63 | combined_content += file.read() # Append second file content 64 | 65 | # Return the combined content in the expected format for the LLM 66 | return {"role": "system", "content": combined_content} 67 | 68 | ## AUTOGEN 69 | # Initialize autogen with your MachinaScript skills 70 | config_list = autogen.config_list_from_json( 71 | "oai_config_list.json", 72 | filter_dict={ 73 | "model": ["gpt-4"] 74 | }, 75 | ) 76 | 77 | llm_config = { 78 | "config_list": config_list, 79 | "seed": 42, 80 | "functions": [ 81 | { 82 | "name": "take_picture", 83 | "description": "Simulates taking a picture", 84 | "parameters": {} 85 | }, 86 | { 87 | "name": "blink_led", 88 | "description": "Blinks an LED", 89 | "parameters": {} 90 | } 91 | ] 92 | } 93 | 94 | # Create a MachinaScript agent 95 | machina_script_agent = autogen.AssistantAgent( 96 | name="machina_script_agent", 97 | system_message=read_system_prompt(), 98 | llm_config=llm_config 99 | ) 100 | 101 | # Initialize autogen agents for vision and user input 102 | config_list_vision = autogen.config_list_from_json( 103 | "OAI_CONFIG_LIST", 104 | filter_dict={ 105 | "model": ["gpt-4-vision-preview"], 106 | }, 107 | ) 108 | 109 | vision_agent = autogen.MultimodalConversableAgent( 110 | name="vision-agent", 111 | max_consecutive_auto_reply=10, 112 | llm_config={"config_list": config_list_vision, "temperature": 0.5, "max_tokens": 300}, 113 | ) 114 | 115 | user_proxy = autogen.UserProxyAgent( 116 | name="user_proxy", 117 | system_message="A human admin.", 118 | human_input_mode="NEVER", #warning: setting human_input_mode to NEVER may create skynet-style unstoppable terminator depending on the llm you are using 119 | 
max_consecutive_auto_reply=0,
    code_execution_config={
        "use_docker": False
    }
)

# MAIN FUNCTION
# Listens for commands and processes them.
def main():
    while True:
        command = listen_for_command()
        if command:
            process_command(command)

def listen_for_command():
    """Listens for a specific wake-up word and records the next spoken command."""
    with sr.Microphone() as source:
        print("Listening for the wake-up word 'hello robot' followed by your command.")
        audio = recognizer.listen(source)
        try:
            text = recognizer.recognize_google(audio).lower()
            if "hello robot" in text:
                print("Wake-up word heard, what is your command?")
                audio = recognizer.listen(source)
                command_text = recognizer.recognize_google(audio)
                print(f"Command received: {command_text}")
                return command_text
        except sr.UnknownValueError:
            print("Could not understand audio. Please try again.")
        except sr.RequestError as e:
            print(f"Could not request results; {e}")
    return None

def process_command(command):
    """Processes the user command and generates MachinaScript actions."""
    generate_and_execute_machina_script(command)

def generate_and_execute_machina_script(command):
    """Generates a MachinaScript action for the given command and executes it."""
    machina_script = generate_machina_script(command)
    if machina_script:
        execute_machina_script(machina_script)

def generate_machina_script(command):
    """Generates a MachinaScript action based on the user command."""
    # Ask the agent for a completion via autogen's generate_reply API
    machina_script = machina_script_agent.generate_reply(
        messages=[{"role": "user", "content": command}]
    )
    return machina_script

def execute_machina_script(script):
    """Parses the MachinaScript and executes the actions by sending commands to Arduino."""
    actions = json.loads(script)["Machina_Actions"]
    for action_key, action in actions.items():  # actions is a dict keyed by action name
        # Check for 'movements' key and that it is not empty before execution
        if "movements" in action and action["movements"]:
            execute_movements(action["movements"])
        # Check for 'useSkills' key and that it is not empty before execution
        if "useSkills" in action and action["useSkills"]:
            execute_skills(action["useSkills"])

def execute_movements(movements):
    """Generates and sends the movement commands to the Arduino."""
    for movement_key, movement in movements.items():
        for motor_name, details in movement.items():
            if motor_name in motor_mapping:
                pin = motor_mapping[motor_name]
                command = f"{pin},{details['position']},{details['speed']}\n"
                send_to_arduino(command)
                time.sleep(1)  # Adjust as needed for movement duration

def send_to_arduino(command):
    """Sends a command string to the Arduino via serial."""
    print(f"Sending to Arduino: {command}")
    arduino_serial.write(command.encode())

## SKILLS
def execute_skills(skills_dict):
    """Executes the defined skills."""
    for skill_key, skill_info in skills_dict.items():
        # Skill entries are keyed "1", "2", ... with the skill name in the "skill" field
        skill_name = skill_info.get("skill", skill_key)
        if skill_name == "take_picture":
            take_picture()
        elif skill_name == "blink_led":
            blink_led()

def take_picture():
    """Simulates taking a picture."""
    # skill to take pictures is yet to be implemented here
    print("Taking a picture with the webcam...")

def blink_led():
    """Blinks an LED."""
    # skill to blink led is yet to be implemented here
    print("Blinking LED...")

if __name__ == "__main__":
    main()
-------------------------------------------------------------------------------- /MachinaScript/MACHINA2A_Autogen/MachinaBrain/machinascript_language.txt: --------------------------------------------------------------------------------
You are a
MachinaScript for Robots generator.
MachinaScript is an LLM-friendly, JSON-based format used to define robotic actions, including
motor movements and skill usage, under specific contexts given by the user.

Each action can involve multiple movements, motors and skills, with defined parameters
like motor positions, speeds, and skill-specific details.

You can make as many actions, movements and skills as you need.
Keep the motion animations as charismatic and realistic as possible.

For example, based on the above specifications, a complete MachinaScript might look like this:
user input was: "Look up, take a picture of the night sky and identify the stars"

{
  "Machina_Actions": {
    "action_1": {
      "description": "Positioning before taking a picture",
      "movements": {
        "1": {
          "motor_neck_vertical": 45,
          "motor_neck_horizontal": 0,
          "speed": "medium"
        }
      },
      "useSkills": {}
    },
    "action_2": {
      "description": "Taking picture and indicating completion",
      "movements": {},
      "useSkills": {
        "1": {
          "skill": "photograph"
        }
      }
    },
    "action_3": {
      "description": "Returning to normal position",
      "movements": {
        "1": {
          "motor_neck_vertical": 0,
          "speed": "medium"
        }
      },
      "useSkills": {}
    }
  }
}

Please generate a new MachinaScript using the exact given format and project specifications.
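Downstream code parses this format with `json.loads`, so a generated script is only usable if it is strict JSON. A minimal pre-flight check could look like this (a sketch; the function name `validate_machinascript` is illustrative and not part of this repo, only the key names follow the format above):

```python
import json


def validate_machinascript(text: str) -> list[str]:
    """Return a list of problems found in a MachinaScript string (empty list = OK)."""
    problems = []
    try:
        doc = json.loads(text)
    except json.JSONDecodeError as e:
        return [f"not valid JSON: {e}"]
    actions = doc.get("Machina_Actions")
    if not isinstance(actions, dict):
        return ["missing top-level 'Machina_Actions' object"]
    for name, action in actions.items():
        # Every action is expected to carry both containers, even when empty
        for field in ("movements", "useSkills"):
            if not isinstance(action.get(field), dict):
                problems.append(f"{name}: '{field}' must be an object")
    return problems


# A trailing comma, as LLMs sometimes emit, fails fast:
bad = '{"Machina_Actions": {"action_1": {"movements": {},}}}'
print(validate_machinascript(bad))
```

Running such a check before sending anything to the serial port keeps a malformed generation from raising mid-execution.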


-------------------------------------------------------------------------------- /MachinaScript/MACHINA2A_Autogen/MachinaBrain/machinascript_project_specs.txt: --------------------------------------------------------------------------------
Project specifications:
{
    "Motors": [
        {"id": "motor_neck_vertical", "range": [0, 180]},
        {"id": "motor_neck_horizontal", "range": [0, 180]}
    ],
    "Skills": [
        {"id": "photograph", "description": "Captures a photograph using an attached camera and sends it to a multimodal LLM."},
        {"id": "blink_led", "parameters": {"led_pin": 10, "duration": 500, "times": 3}, "description": "Blinks an LED to indicate action."}
    ],
    "Limitations": [
        {"motor": "motor_neck_vertical", "max_speed": "medium"},
        {"motor speeds": ["slow", "medium", "high"]}
    ],
    "Personality": "Funny, delicate",
    "Agency Level": "high"
}
-------------------------------------------------------------------------------- /MachinaScript/MACHINA2B_Groq/MachinaBody/machinascript_body.ino: --------------------------------------------------------------------------------
// _ _ _ ()
// ' ) ) ) / /\ _/_
// / / / __. _. /_ o ____ __. / ) _. __ o _ /
// / ' (_(_/|_(__/ /_<_/ / <_(_/|_/__/__(__/ (_<_/_)_<__
// /
// '
//+----------------------------------------------------------------+
//|MachinaScript for Robots v0.1.2024 |
//| |
//|..:: BODY MODULE ::.. |
//| |
//|This is the body module, intended to be executed on a |
//|microcontroller. You may need to customize this code for |
//|your own robotic mount. |
//| |
//|Keep in mind this is just a very simple pipeline integration |
//|of many concepts together: |
//| |
//|- To receive and parse serial data received into a set of |
//|ordered actions and movements and execute them. |
//| |
//|- Instructions for this code exactly are for moving a set of |
//|servo motors and blinking LEDs when serial messages come up. |
//|You may want to add sensors, other motors, change ports |
//|and more as you feel. |
//| |
//|You are free to use and modify this piece of code, |
//|as well as to come up with something totally different. |
//| |
//|This is an initial proof-of-concept, more examples coming soon. |
//| |
//|Interested in contributing? Join the community! |
//|See the github repo for more infos. |
//+----------------------------------------------------------------+

// tip: if you have no idea what it all means, try pasting the contents
// of this file into GPT or any other LLM and ask for a summary.
// Then you can start making your own modifications and modules precisely.

#include <Servo.h>

Servo servoA; // Servo for motor A connected to pin 9
Servo servoB; // Servo for motor B connected to pin 8

// Target positions for servo A and B
int targetPositionA = 0, targetPositionB = 0;
// Speeds for servo A and B movement in milliseconds
int speedA = 0, speedB = 0;
// Timestamps for the last movement update for servo A and B
unsigned long lastUpdateA = 0, lastUpdateB = 0;
// Flags to track if servo A and B are currently moving
bool movingA = false, movingB = false;

void setup() {
  servoA.attach(9); // Attach servo motor A to pin 9
  servoB.attach(8); // Attach servo motor B to pin 8
  Serial.begin(9600); // Initialize serial communication at 9600 baud rate
}

void loop() {
  static String receivedData = ""; // Buffer to accumulate received serial data
  while (Serial.available() > 0) { // Check if data is available to read
    char inChar = (char)Serial.read(); // Read the incoming character
    if (inChar == '\n') { // Check if the character signifies end of command
parseCommands(receivedData); // Parse and execute the received commands 66 | receivedData = ""; // Reset buffer for next command 67 | } else { 68 | receivedData += inChar; // Accumulate the incoming data 69 | } 70 | } 71 | 72 | // Continuously update servo positions based on the current commands 73 | updateServo(servoA, targetPositionA, speedA, lastUpdateA, movingA); 74 | updateServo(servoB, targetPositionB, speedB, lastUpdateB, movingB); 75 | } 76 | 77 | void parseCommands(const String& commands) { 78 | // Expected command format: "A:position,speed;B:position,speed" 79 | // Speed is passed directly in milliseconds 80 | sscanf(commands.c_str(), "A:%d,%d;B:%d,%d", &targetPositionA, &speedA, &targetPositionB, &speedB); 81 | 82 | // Mark both servos as moving and record the start time of movement 83 | lastUpdateA = millis(); 84 | lastUpdateB = millis(); 85 | movingA = true; 86 | movingB = true; 87 | } 88 | 89 | void updateServo(Servo& servo, int targetPosition, int speed, unsigned long& lastUpdate, bool& moving) { 90 | if (moving) { // Check if the servo is supposed to be moving 91 | unsigned long currentTime = millis(); // Get current time 92 | // Update the servo position if the specified speed interval has elapsed 93 | if (currentTime - lastUpdate >= speed) { 94 | int currentPosition = servo.read(); // Read current position 95 | // Move servo towards target position 96 | if (currentPosition < targetPosition) { 97 | servo.write(++currentPosition); 98 | } else if (currentPosition > targetPosition) { 99 | servo.write(--currentPosition); 100 | } else { 101 | moving = false; // Stop moving if target position is reached 102 | } 103 | lastUpdate = currentTime; // Update the last movement time 104 | } 105 | } 106 | } 107 | -------------------------------------------------------------------------------- /MachinaScript/MACHINA2B_Groq/MachinaBody/test_serial.py: -------------------------------------------------------------------------------- 1 | import serial 2 | import time 
3 | 4 | # Replace 'COM3' with the correct port for your Arduino 5 | arduino_port = "COM3" # change the port of your arduino 6 | # arduino_port = "/dev/ttyACM0" # for Linux/Mac 7 | baud_rate = 9600 8 | 9 | # Initialize serial connection to the Arduino 10 | arduino_serial = serial.Serial(arduino_port, baud_rate, timeout=1) 11 | 12 | def send_command(command): 13 | # Append the newline character to the command 14 | full_command = command + "\n" 15 | print(f"Sending command: {full_command}") 16 | arduino_serial.write(full_command.encode()) 17 | time.sleep(2) # Wait for the Arduino to process the command 18 | 19 | def main(): 20 | try: 21 | # List of commands to send 22 | commands = [ 23 | "A:45,10;B:0,10;", 24 | "A:0,10;B:45,10;", 25 | "A:90,10;B:180,20;" 26 | ] 27 | 28 | for command in commands: 29 | send_command(command) 30 | 31 | print("Commands sent.") 32 | finally: 33 | # Close the serial connection 34 | arduino_serial.close() 35 | print("Serial connection closed.") 36 | 37 | if __name__ == "__main__": 38 | main() 39 | -------------------------------------------------------------------------------- /MachinaScript/MACHINA2B_Groq/MachinaBrain/brain_groq_gpt4v.py: -------------------------------------------------------------------------------- 1 | # _ _ _ () 2 | # ' ) ) ) / /\ _/_ 3 | # / / / __. _. /_ o ____ __. / ) _. __ o _ / 4 | # / ' (_(_/|_(__/ /_<_/ / <_(_/|_/__/__(__/ (_<_/_)_<__ 5 | # / 6 | # ' 7 | #+--------------------------------------------------------------------+ 8 | #| MachinaScript for Robots v0.3.2024 | 9 | #| | 10 | #| ..:: BRAIN MODULE ::.. | 11 | #| | 12 | #| This is the brain module, intended to be executed on a computer. | 13 | #| You may need to customize this code for your own robotic mount. 
| 14 | #| | 15 | #| Keep in mind this is just a very simple pipeline integration of | 16 | #| many concepts together: | 17 | #| | 18 | #| - The robot takes a picture from the environment, | 19 | #| uses a multimodal llm to generate a short "thoughtful analysis" | 20 | #| on the environment and uses GROQ to generate ultra fast actions. | 21 | #| | 22 | #| - A MachinaScript parser that translates JSON into serial | 23 | #| for the Arduino; | 24 | #| | 25 | #| - A map for skills and motors you need to customize | 26 | #| according to your project; | 27 | #| | 28 | #| You are free to use and modify this piece of code, | 29 | #| as well as to come up with something totally different. | 30 | #| | 31 | #| This is an initial proof-of-concept, more examples coming soon. | 32 | #| | 33 | #| Interested in contributing? Join the community! | 34 | #| See the github repo for more infos. | 35 | #+--------------------------------------------------------------------+ 36 | 37 | # tip: if you have no idea what it all means, try pasting the contents 38 | # of this file into GPT-4 or any other LLM and ask for a summary. 39 | # Then you can start making your own modifications and modules precisely. 
40 | 41 | import base64 42 | import requests 43 | from groq import Groq 44 | from openai import OpenAI 45 | import cv2 46 | import json 47 | import serial 48 | import time 49 | import os 50 | import pyttsx3 51 | 52 | #╔════════════════ Initialize important code parts ═════════════════╗ 53 | 54 | # Mapping of motor names to their corresponding Arduino pins 55 | motor_mapping = { 56 | "motor_neck_vertical": 3, 57 | "motor_neck_horizontal": 5, 58 | # Define additional motors and their Arduino pins here 59 | } 60 | 61 | # Define the serial connection to Arduino 62 | arduino_serial = serial.Serial('COM3', 9600, timeout=1) #change COM3 with your arduino port 63 | # arduino_serial = "/dev/ttyACM0" # for Linux/Mac generally 64 | 65 | # Initialize the text-to-speech engine 66 | engine = pyttsx3.init() 67 | 68 | #╚═════════════════════════════════════════════════════════════════╝ 69 | 70 | #╔═════════════════ Defining functions to be used ═════════════════╗ 71 | def capture_image(image_filename='image.jpg'): 72 | # Initialize the camera 73 | cap = cv2.VideoCapture(0) # '0' is usually the default value for the default camera. 74 | # Check if the webcam is opened correctly 75 | if not cap.isOpened(): 76 | raise IOError("Cannot open webcam") 77 | ret, frame = cap.read() # Capture a single frame 78 | if ret: 79 | cv2.imwrite(image_filename, frame) # Save the captured frame to disk 80 | print("Image captured and saved as", image_filename) 81 | else: 82 | print("Failed to capture image") 83 | cap.release() # Release the camera 84 | 85 | # Encodes an image to base64. 86 | def encode_image_to_base64(path): 87 | try: 88 | with open(path.replace("'", ""), "rb") as image_file: 89 | return base64.b64encode(image_file.read()).decode("utf-8") 90 | except Exception as e: 91 | print(f"Error reading the image: {e}") 92 | exit() 93 | 94 | # Sends the encoded image to the GPT4-VISION AI model for generating a thought / environment analysis. 
# This example requires access to the GPT-4 Vision API. Make sure you have access.
# If you do not have access, or find it too expensive to run,
# try out a very small local model using "brain_groq_llava.py".
def get_image_thought(base64_image):
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {os.getenv('OPENAI_API_KEY')}"
    }

    payload = {
        "model": "gpt-4-vision-preview",  # a vision-capable model is required for image input
        "messages": [
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": "Describe the objects' positions in the image very shortly."
                    },
                    {
                        "type": "image_url",
                        "image_url": {
                            "url": f"data:image/jpeg;base64,{base64_image}"
                        }
                    }
                ]
            }
        ],
        "max_tokens": 300
    }

    response = requests.post("https://api.openai.com/v1/chat/completions", headers=headers, json=payload)
    thought = response.json()["choices"][0]["message"]["content"]
    print("Cognitive synthesis complete.")
    return thought

# Uses the Groq client to send the thought from the image analysis to generate a MachinaScript.
132 | # More info about Groq limits: https://console.groq.com/docs/rate-limits 133 | def generate_machina_script(thought): 134 | print("Compiling kinetic sequences...") 135 | client = Groq(api_key=os.getenv('GROQ_API_KEY')) 136 | try: 137 | # Get MachinaScript Template 138 | with open('prompts/machinascript_language_large.txt', 'r') as file: 139 | machina_template = file.read() 140 | 141 | # Get Project Specs 142 | with open('prompts/machinascript_project_specs.txt', 'r') as file: 143 | project_specs = file.read() 144 | 145 | except FileNotFoundError as e: 146 | print(f"Error opening the file: {e}") 147 | return 148 | 149 | # Combine the template and project specs with the new system prompt 150 | combined_content = machina_template + " " + project_specs 151 | 152 | # Generate text from Groq API 153 | chat_completion = client.chat.completions.create( 154 | messages=[ 155 | { 156 | "role": "system", 157 | "content": combined_content 158 | }, 159 | { 160 | "role": "user", 161 | "content": "input: " + thought + 162 | """ 163 | note: output machinascript code only. 164 | If you have to say anything else, do it on the *say* skill. 165 | """, 166 | } 167 | ], 168 | # Use a Groq supported model. 
Full list at https://console.groq.com/docs/models 169 | model="llama3-70b-8192", 170 | ) 171 | return chat_completion.choices[0].message.content 172 | 173 | # Parse and execute MachinaScript 174 | def execute_machina_script(script): 175 | """Parses the MachinaScript and executes the actions by sending commands to Arduino.""" 176 | actions = json.loads(script)["Machina_Actions"] 177 | for action_key, action in actions.items(): 178 | # Check for 'movements' key and that it is not empty before execution 179 | if "movements" in action and action["movements"]: 180 | execute_movements(action["movements"]) 181 | # Check for 'useSkills' key and that it is not empty before execution 182 | if "useSkills" in action and action["useSkills"]: 183 | execute_skills(action["useSkills"]) 184 | 185 | def execute_movements(movements): 186 | #Generates and sends the movement commands to the Arduino. 187 | for movement_key, movement in movements.items(): 188 | for motor_name, details in movement.items(): 189 | if motor_name in motor_mapping: 190 | pin = motor_mapping[motor_name] 191 | command = f"{pin},{details['position']},{details['speed']}\n" 192 | send_to_arduino(command) 193 | time.sleep(1) # Adjust as needed for movement duration 194 | 195 | def send_to_arduino(command): 196 | #Sends a command string to the Arduino via serial. 197 | print(f"Sending to Arduino: {command}") 198 | arduino_serial.write(command.encode()) 199 | 200 | # ==== Skills ==== 201 | #function to execute the skills if they come up in the machinascript output 202 | def execute_skills(skills_dict): 203 | #Executes the defined skills. 
    for skill_key, skill_info in skills_dict.items():
        # Skill entries are keyed "1", "2", ... with the skill name in the "skill" field
        skill_name = skill_info.get("skill", skill_key)
        if skill_name == "photograph":
            take_picture()
        elif skill_name == "blink_led":
            # Example skill, implementation would be similar to execute_movements
            print("Blinking LED (skill not implemented).")
        elif skill_name == "say":
            say(skill_info["text"])

# actual functions for the skills to be executed
def take_picture():
    # Simulates taking a picture.
    print("Taking a picture with the webcam...")

def say(text):
    # Speaks the given text using text-to-speech.
    engine.say(text)
    engine.runAndWait()

# ===============

# Pipeline: encode an image to base64, get a thought from it, then generate a MachinaScript.
def process_image(path):
    base64_image = encode_image_to_base64(path)
    thought = get_image_thought(base64_image)
    print("thought: " + thought)
    machina_script = generate_machina_script(thought)
    return machina_script

#╚═════════════════════════════════════════════════════════════════╝

#╔═════════════════ Main loop function ═════════════════╗
# Wake the fuck up, samurai.
if __name__ == "__main__":
    path = "image.jpg"

    # Boot MachinaScript for Robots
    with open('msc.txt', 'r') as file:
        # Read the contents of the file
        file_contents = file.read()

    # Print the contents
    print(file_contents)

    # Start processing images in a loop
    while True:
        # Start processing the image:
        try:
            print("Starting pipeline.")
            capture_image()
            machina_script = process_image(path)
            # Crude cleanup of stray escape characters in the model output
            cleaned_machina_script = machina_script.replace("\\", "")
            execute_machina_script(cleaned_machina_script)

            # Optionally, add here any other conditions or logic that
            # you may want to apply before the next iteration.
260 | 261 | except Exception as e: 262 | print(f"An error occurred while processing the image: {e}") 263 | 264 | #╚══════════════════════════════════════════════════════╝ 265 | 266 | """ 267 | ⡿⡿⣻⢿⢿⢿⢿⢿⣿⣿⣿⠟⡋⠍⠊⠌⠌⠌⠂⠊⠄⠂⠙⠿⠻⡻⠻⢛⠻⠿⢿⣿⣿⣿⣿⢿⢿⢿⢿⣻ 268 | ⣗⡽⡮⡷⣽⣺⣽⣿⣾⠟⠈⠄⠄⡀⢁⠂⢘⠈⡈⡠⠁⠄⢀⠘⠄⠄⠈⠄⠄⠄⠈⠈⠳⠻⣯⣿⣽⣞⣵⡳ 269 | ⣗⢯⢫⢯⣷⡿⣽⠏⡁⠁⠄⠄⠄⢄⠅⠐⡂⠁⠁⠄⠄⠄⠐⡑⠄⠌⡄⠅⠄⡀⠄⠄⠄⠄⠘⢿⣻⣾⣳⢯ 270 | ⣿⡴⣤⠅⢓⢹⢜⠁⡀⠄⠄⡡⠈⠂⡀⠄⠄⠄⠄⠄⠄⠄⠐⠘⢀⠄⠄⡀⠄⠠⠁⡀⠄⠄⠄⠄⠙⣿⣿⣟ 271 | ⠿⢿⠻⢝⣿⡿⢢⢁⢀⢑⠌⠄⡈⠄⠄⠄⠄⢀⣰⣴⣴⣬⣄⣀⠂⠄⠂⠄⢀⠄⠄⠄⠄⠄⠄⠄⠄⢟⣿⣿ 272 | ⡀⠄⠄⣸⣾⣛⢈⠄⢸⠐⠄⠨⠄⠄⠄⡀⣜⣞⣾⣿⣯⣿⣿⣿⣄⡀⢴⢼⣐⢬⠠⠄⠐⠄⠄⠄⠄⠘⣿⣿ 273 | ⠋⣀⣵⣿⣽⡇⢃⢘⠜⠅⠈⠄⠄⢀⢔⣿⣿⣿⣿⣿⡿⣽⢾⢿⣳⢷⢿⡯⣷⣿⡌⠄⠄⠨⠄⠄⠄⠄⣻⣿ 274 | ⠄⣿⣿⡟⣾⠇⢠⠧⠁⠄⠄⡀⠄⣰⣿⣿⣯⡏⣯⢿⢽⡹⣏⢿⡺⡱⢑⠽⡹⡺⣜⢄⠅⠄⠈⡀⠄⠄⢸⣿ 275 | ⣾⣻⢳⣝⡯⢡⢹⣇⠄⠐⠄⠄⢠⣺⣿⣿⣿⢾⣿⢽⡵⣽⡺⣝⢎⢎⢶⢕⢌⢭⢣⢑⠄⠄⠄⠈⠄⠄⢸⣿ 276 | ⣿⠧⢃⡳⠉⡈⢮⠃⠄⠄⠇⠄⣔⣿⣿⣿⣾⣿⣯⣯⢿⢼⡪⡎⡯⡝⢵⣓⢱⢱⡱⡪⡂⠄⠐⠄⠂⠄⠰⣿ 277 | ⡿⢡⢪⠄⢰⠨⣿⠁⢈⣸⠄⠄⢿⢿⣻⢿⣽⣿⣿⣿⣿⣻⣮⢮⣯⣾⡵⣪⡪⡱⣹⣪⡂⠄⠄⢈⠄⠄⠄⣿ 278 | ⣈⡖⡅⠄⢪⢴⢊⠁⢐⢸⠄⠄⡨⡢⡈⠈⠉⠻⢟⣷⡿⣟⢗⣽⡷⣿⢯⣞⣕⣧⣷⡳⠅⠄⠅⢐⠄⠄⠄⣿ 279 | ⡣⡟⠜⠸⡁⣷⠁⠄⢅⢸⡀⠄⠄⠈⡀⠥⠄⡀⠄⠄⠈⠐⣷⡳⠙⠕⠩⠘⠁⠃⠁⠄⠄⠄⡂⢆⠄⠄⠄⣸ 280 | ⣻⠍⠄⢣⣣⠏⠠⠐⠌⣪⠃⡐⢔⢌⡛⡎⡢⠄⢀⢄⢠⣳⣿⡎⠄⠄⢀⠤⠄⡈⠌⠊⠄⢀⠘⠨⠄⠄⠄⢸ 281 | ⠑⠠⢂⢮⡳⠠⠂⠁⡅⡯⠐⢨⡺⡌⡯⡪⣞⣼⣵⡧⣟⣿⣿⣗⠄⠄⠐⡢⣒⢆⢐⢠⠁⠄⠄⠈⠄⠄⠄⢻ 282 | ⢅⢢⠫⡫⠙⠨⠄⣃⢎⡗⢈⠰⠸⡸⡸⣝⣿⣿⡗⡽⣽⣿⣿⣿⠄⢐⣔⢽⣼⣗⣷⢱⠁⠄⠅⠁⠐⠄⠄⢾ 283 | ⡵⣰⠏⡐⠱⡑⢨⡬⢻⡕⠐⠈⡪⡣⡳⡱⡳⠱⢍⣳⢳⣿⣿⣿⠄⢐⢵⢻⣳⣟⢎⠪⠄⠄⠐⠄⠄⠄⠄⣿ 284 | ⡷⠁⡀⠄⠨⢂⣸⢉⠆⢑⠌⢠⢣⢏⢜⠜⡀⡤⣿⣿⣿⣿⣿⣟⠠⠄⠨⡗⡧⡳⡑⠄⠄⠄⠄⠄⠄⠄⠄⣿ 285 | ⢖⠠⠄⢰⠁⢴⣃⠞⠄⠕⣈⣺⣵⡫⡢⣕⣷⣷⡀⠄⡈⢟⠝⠈⢉⡢⡕⡭⣇⠣⠄⠄⠄⠄⠄⠄⠄⠄⠄⣿ 286 | ⢻⡐⢔⢠⠪⡌⢌⠆⠐⢐⢨⣾⣷⡙⠌⠊⠕⠁⠄⠊⡀⠄⠠⠄⠡⠁⠓⡝⡜⡈⠄⠄⠄⠄⠄⠄⡮⡀⠄⣿ 287 | ⠘⢨⢪⠼⠘⠅⠄⠂⠄⡀⢻⣿⣇⠃⠑⠄⠒⠁⢂⠑⡔⠄⠌⡐⠄⠂⠠⢰⡑⠄⠄⠄⠄⠄⠄⢠⣡⢱⣶⣿ 288 | ⢢⢂⠫⡪⣊⠄⠣⡂⠂⡀⠨⠹⡐⣜⡾⡯⡯⢷⢶⢶⠶⣖⢦⢢⢪⠢⡂⡇⠅⠄⠄⠈⠄⢰⠡⣷⣿⣿⣿⣿ 289 | ⢑⠄⠧⣟⡎⢆⡃⡊⠔⢀⠄⠈⣮⢟⡽⣿⣝⡆⠅⠐⡁⠐⠔⣀⢣⢑⠐⠁⡐⠈⡀⢐⠁⠄⠈⠃⢻⣿⣿⣿ 290 | ⢑⠁⢮⣾⡎⢰⢐⠈⢌⢂⠐⡀⠂⡝⡽⣟⣿⣽⡪⢢⠂⡨⢪⠸⠨⢀⠂⡁⢀⠂⠄⢂⢊⠖⢄⠄⢀⢨⠉⠛ 291 | ⡰⢺⣾⡗⠄⡜⢔⠡⢊⠢⢅⢀⠑⠨⡪⠩⠣⠃⠜⡈⡐⡈⡊⡈⡐⢄⠣⢀⠂⡂⡁⢂⠄⢱⢨⠝⠄⠄⠄⠄ 292 | """ -------------------------------------------------------------------------------- /MachinaScript/MACHINA2B_Groq/MachinaBrain/brain_groq_llava.py: -------------------------------------------------------------------------------- 1 | # _ _ _ () 2 | # ' ) ) ) / /\ _/_ 3 | # / / / __. _. /_ o ____ __. / ) _. __ o _ / 4 | # / ' (_(_/|_(__/ /_<_/ / <_(_/|_/__/__(__/ (_<_/_)_<__ 5 | # / 6 | # ' 7 | #+--------------------------------------------------------------------+ 8 | #| MachinaScript for Robots v0.3.2024 | 9 | #| | 10 | #| ..:: BRAIN MODULE ::.. 
| 11 | #| | 12 | #| This is the brain module, intended to be executed on a computer. | 13 | #| You may need to customize this code for your own robotic mount. | 14 | #| | 15 | #| Keep in mind this is just a very simple pipeline integration of | 16 | #| many concepts together: | 17 | #| | 18 | #| - The robot takes a picture from the environment, | 19 | #| uses a multimodal llm to generate a short "thoughtful analysis" | 20 | #| on the environment and uses GROQ to generate ultra fast actions. | 21 | #| | 22 | #| - A MachinaScript parser that translates JSON into serial | 23 | #| for the Arduino; | 24 | #| | 25 | #| - A map for skills and motors you need to customize | 26 | #| according to your project; | 27 | #| | 28 | #| You are free to use and modify this piece of code, | 29 | #| as well as to come up with something totally different. | 30 | #| | 31 | #| This is an initial proof-of-concept, more examples coming soon. | 32 | #| | 33 | #| Interested in contributing? Join the community! | 34 | #| See the github repo for more infos. | 35 | #+--------------------------------------------------------------------+ 36 | 37 | # tip: if you have no idea what it all means, try pasting the contents 38 | # of this file into GPT-4 or any other LLM and ask for a summary. 39 | # Then you can start making your own modifications and modules precisely. 
40 | 41 | import base64 42 | import requests 43 | from groq import Groq 44 | from openai import OpenAI 45 | import cv2 46 | import json 47 | import serial 48 | import time 49 | import os 50 | import pyttsx3 51 | 52 | #╔════════════════ Initialize important code parts ═════════════════╗ 53 | 54 | # Mapping of motor names to their corresponding Arduino pins 55 | motor_mapping = { 56 | "motor_neck_vertical": 3, 57 | "motor_neck_horizontal": 5, 58 | # Define additional motors and their Arduino pins here 59 | } 60 | 61 | # Define the serial connection to Arduino 62 | arduino_serial = serial.Serial('COM3', 9600, timeout=1) #change COM3 with your arduino port 63 | # arduino_serial = "/dev/ttyACM0" # for Linux/Mac generally 64 | 65 | # Initialize the text-to-speech engine 66 | engine = pyttsx3.init() 67 | 68 | #╚═════════════════════════════════════════════════════════════════╝ 69 | 70 | #╔═════════════════ Defining functions to be used ═════════════════╗ 71 | def capture_image(image_filename='image.jpg'): 72 | # Initialize the camera 73 | cap = cv2.VideoCapture(0) # '0' is usually the default value for the default camera. 74 | # Check if the webcam is opened correctly 75 | if not cap.isOpened(): 76 | raise IOError("Cannot open webcam") 77 | ret, frame = cap.read() # Capture a single frame 78 | if ret: 79 | cv2.imwrite(image_filename, frame) # Save the captured frame to disk 80 | print("Image captured and saved as", image_filename) 81 | else: 82 | print("Failed to capture image") 83 | cap.release() # Release the camera 84 | 85 | # Encodes an image to base64. 86 | def encode_image_to_base64(path): 87 | try: 88 | with open(path.replace("'", ""), "rb") as image_file: 89 | return base64.b64encode(image_file.read()).decode("utf-8") 90 | except Exception as e: 91 | print(f"Error reading the image: {e}") 92 | exit() 93 | 94 | # Sends the encoded image to the local AI model for generating a thought / environment analysis. 
# This example requires a local multimodal LLM running.
# We recommend LM Studio running a small model like LLaVA or Obsidian.
# Make sure the model size will fit your VRAM (video card RAM).
# We suggest picking a very small model, around 3-4 GB, for very fast inference.
# More instructions in the github repo user manual.
def get_image_thought(base64_image, base_url="http://localhost:1234/v1"):
    print("Thinking about the image, wait a second.")
    client = OpenAI(base_url=base_url, api_key="not-needed")
    completion = client.chat.completions.create(
        model="local-model",  # placeholder text - this doesn't need to be filled.
        messages=[
            {
                "role": "system",
                "content": "Describe the objects' positions in the image very shortly.",
            },
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "analyze"},
                    {
                        "type": "image_url",
                        "image_url": {
                            "url": f"data:image/jpeg;base64,{base64_image}"
                        },
                    },
                ],
            }
        ],
        max_tokens=30,
        stream=False
    )
    print("Cognitive synthesis complete.")
    return completion.choices[0].message.content


# Uses the Groq client to send the thought from the image analysis to generate a MachinaScript.
131 | # More info about Groq limits: https://console.groq.com/docs/rate-limits 132 | def generate_machina_script(thought): 133 | print("Compiling kinetic sequences...") 134 | client = Groq(api_key=os.getenv('GROQ_API_KEY')) 135 | try: 136 | # Get MachinaScript Template 137 | with open('prompts/machinascript_language_large.txt', 'r') as file: 138 | machina_template = file.read() 139 | 140 | # Get Project Specs 141 | with open('prompts/machinascript_project_specs.txt', 'r') as file: 142 | project_specs = file.read() 143 | 144 | except FileNotFoundError as e: 145 | print(f"Error opening the file: {e}") 146 | return 147 | 148 | # Combine the template and project specs with the new system prompt 149 | combined_content = machina_template + " " + project_specs 150 | 151 | chat_completion = client.chat.completions.create( 152 | messages=[ 153 | { 154 | "role": "system", 155 | "content": combined_content 156 | }, 157 | { 158 | "role": "user", 159 | "content": "input: " + thought + " note: output machinascript code only. if you have to say anything else, do it on the *say* skill.", 160 | } 161 | ], 162 | # Use a Groq supported model. 
163 | # Full list at https://console.groq.com/docs/models 164 | model="llama3-70b-8192", 165 | ) 166 | return chat_completion.choices[0].message.content 167 | 168 | # Parse and execute MachinaScript 169 | def execute_machina_script(script): 170 | """Parses the MachinaScript and executes the actions by sending commands to Arduino.""" 171 | actions = json.loads(script)["Machina_Actions"] 172 | for action_key, action in actions.items(): 173 | # Check for 'movements' key and that it is not empty before execution 174 | if "movements" in action and action["movements"]: 175 | execute_movements(action["movements"]) 176 | # Check for 'useSkills' key and that it is not empty before execution 177 | if "useSkills" in action and action["useSkills"]: 178 | execute_skills(action["useSkills"]) 179 | 180 | def execute_movements(movements): 181 | """Generates and sends the movement commands to the Arduino.""" 182 | for movement_key, movement in movements.items(): 183 | for motor_name, details in movement.items(): 184 | if motor_name in motor_mapping: 185 | pin = motor_mapping[motor_name] 186 | command = f"{pin},{details['position']},{details['speed']}\n" 187 | send_to_arduino(command) 188 | time.sleep(1) # Adjust as needed for movement duration 189 | 190 | def send_to_arduino(command): 191 | """Sends a command string to the Arduino via serial.""" 192 | print(f"Sending to Arduino: {command}") 193 | arduino_serial.write(command.encode()) 194 | 195 | # ==== Skills ==== 196 | #function to execute the skills if they come up in the machinascript output 197 | def execute_skills(skills_dict): 198 | #Executes the defined skills. 
    for skill_key, skill_info in skills_dict.items():
        # Skill entries are keyed "1", "2", ... with the skill name in the "skill" field
        skill_name = skill_info.get("skill", skill_key)
        if skill_name == "photograph":
            take_picture()
        elif skill_name == "blink_led":
            # Example skill, implementation would be similar to execute_movements
            print("Blinking LED (skill not implemented).")
        elif skill_name == "say":
            say(skill_info["text"])

# actual functions for the skills to be executed
def take_picture():
    # Simulates taking a picture.
    print("Taking a picture with the webcam...")

def say(text):
    # Speaks the given text using text-to-speech.
    engine.say(text)
    engine.runAndWait()

# ===============

# Pipeline: encode an image to base64, get a thought from it, then generate a MachinaScript.
def process_image(path):
    base64_image = encode_image_to_base64(path)
    thought = get_image_thought(base64_image)
    print("thought: " + thought)
    machina_script = generate_machina_script(thought)
    return machina_script

#╚═════════════════════════════════════════════════════════════════╝


#╔═════════════════ Main loop function ═════════════════╗
# Wake the fuck up, samurai.
if __name__ == "__main__":
    path = "image.jpg"

    # Boot MachinaScript for Robots
    with open('msc.txt', 'r') as file:
        # Read the contents of the file
        file_contents = file.read()

    # Print the contents
    print(file_contents)

    # Start processing images in a loop
    while True:
        # Start processing the image:
        try:
            print("Starting pipeline.")
            capture_image()
            machina_script = process_image(path)
            # Crude cleanup of stray escape characters in the model output
            cleaned_machina_script = machina_script.replace("\\", "")
            execute_machina_script(cleaned_machina_script)

            # Optionally, add here any other conditions or logic that
            # you may want to apply before the next iteration.
256 | 257 | except Exception as e: 258 | print(f"An error occurred while processing the image: {e}") 259 | 260 | #╚══════════════════════════════════════════════════════╝ 261 | 262 | """ 263 | ⡿⡿⣻⢿⢿⢿⢿⢿⣿⣿⣿⠟⡋⠍⠊⠌⠌⠌⠂⠊⠄⠂⠙⠿⠻⡻⠻⢛⠻⠿⢿⣿⣿⣿⣿⢿⢿⢿⢿⣻ 264 | ⣗⡽⡮⡷⣽⣺⣽⣿⣾⠟⠈⠄⠄⡀⢁⠂⢘⠈⡈⡠⠁⠄⢀⠘⠄⠄⠈⠄⠄⠄⠈⠈⠳⠻⣯⣿⣽⣞⣵⡳ 265 | ⣗⢯⢫⢯⣷⡿⣽⠏⡁⠁⠄⠄⠄⢄⠅⠐⡂⠁⠁⠄⠄⠄⠐⡑⠄⠌⡄⠅⠄⡀⠄⠄⠄⠄⠘⢿⣻⣾⣳⢯ 266 | ⣿⡴⣤⠅⢓⢹⢜⠁⡀⠄⠄⡡⠈⠂⡀⠄⠄⠄⠄⠄⠄⠄⠐⠘⢀⠄⠄⡀⠄⠠⠁⡀⠄⠄⠄⠄⠙⣿⣿⣟ 267 | ⠿⢿⠻⢝⣿⡿⢢⢁⢀⢑⠌⠄⡈⠄⠄⠄⠄⢀⣰⣴⣴⣬⣄⣀⠂⠄⠂⠄⢀⠄⠄⠄⠄⠄⠄⠄⠄⢟⣿⣿ 268 | ⡀⠄⠄⣸⣾⣛⢈⠄⢸⠐⠄⠨⠄⠄⠄⡀⣜⣞⣾⣿⣯⣿⣿⣿⣄⡀⢴⢼⣐⢬⠠⠄⠐⠄⠄⠄⠄⠘⣿⣿ 269 | ⠋⣀⣵⣿⣽⡇⢃⢘⠜⠅⠈⠄⠄⢀⢔⣿⣿⣿⣿⣿⡿⣽⢾⢿⣳⢷⢿⡯⣷⣿⡌⠄⠄⠨⠄⠄⠄⠄⣻⣿ 270 | ⠄⣿⣿⡟⣾⠇⢠⠧⠁⠄⠄⡀⠄⣰⣿⣿⣯⡏⣯⢿⢽⡹⣏⢿⡺⡱⢑⠽⡹⡺⣜⢄⠅⠄⠈⡀⠄⠄⢸⣿ 271 | ⣾⣻⢳⣝⡯⢡⢹⣇⠄⠐⠄⠄⢠⣺⣿⣿⣿⢾⣿⢽⡵⣽⡺⣝⢎⢎⢶⢕⢌⢭⢣⢑⠄⠄⠄⠈⠄⠄⢸⣿ 272 | ⣿⠧⢃⡳⠉⡈⢮⠃⠄⠄⠇⠄⣔⣿⣿⣿⣾⣿⣯⣯⢿⢼⡪⡎⡯⡝⢵⣓⢱⢱⡱⡪⡂⠄⠐⠄⠂⠄⠰⣿ 273 | ⡿⢡⢪⠄⢰⠨⣿⠁⢈⣸⠄⠄⢿⢿⣻⢿⣽⣿⣿⣿⣿⣻⣮⢮⣯⣾⡵⣪⡪⡱⣹⣪⡂⠄⠄⢈⠄⠄⠄⣿ 274 | ⣈⡖⡅⠄⢪⢴⢊⠁⢐⢸⠄⠄⡨⡢⡈⠈⠉⠻⢟⣷⡿⣟⢗⣽⡷⣿⢯⣞⣕⣧⣷⡳⠅⠄⠅⢐⠄⠄⠄⣿ 275 | ⡣⡟⠜⠸⡁⣷⠁⠄⢅⢸⡀⠄⠄⠈⡀⠥⠄⡀⠄⠄⠈⠐⣷⡳⠙⠕⠩⠘⠁⠃⠁⠄⠄⠄⡂⢆⠄⠄⠄⣸ 276 | ⣻⠍⠄⢣⣣⠏⠠⠐⠌⣪⠃⡐⢔⢌⡛⡎⡢⠄⢀⢄⢠⣳⣿⡎⠄⠄⢀⠤⠄⡈⠌⠊⠄⢀⠘⠨⠄⠄⠄⢸ 277 | ⠑⠠⢂⢮⡳⠠⠂⠁⡅⡯⠐⢨⡺⡌⡯⡪⣞⣼⣵⡧⣟⣿⣿⣗⠄⠄⠐⡢⣒⢆⢐⢠⠁⠄⠄⠈⠄⠄⠄⢻ 278 | ⢅⢢⠫⡫⠙⠨⠄⣃⢎⡗⢈⠰⠸⡸⡸⣝⣿⣿⡗⡽⣽⣿⣿⣿⠄⢐⣔⢽⣼⣗⣷⢱⠁⠄⠅⠁⠐⠄⠄⢾ 279 | ⡵⣰⠏⡐⠱⡑⢨⡬⢻⡕⠐⠈⡪⡣⡳⡱⡳⠱⢍⣳⢳⣿⣿⣿⠄⢐⢵⢻⣳⣟⢎⠪⠄⠄⠐⠄⠄⠄⠄⣿ 280 | ⡷⠁⡀⠄⠨⢂⣸⢉⠆⢑⠌⢠⢣⢏⢜⠜⡀⡤⣿⣿⣿⣿⣿⣟⠠⠄⠨⡗⡧⡳⡑⠄⠄⠄⠄⠄⠄⠄⠄⣿ 281 | ⢖⠠⠄⢰⠁⢴⣃⠞⠄⠕⣈⣺⣵⡫⡢⣕⣷⣷⡀⠄⡈⢟⠝⠈⢉⡢⡕⡭⣇⠣⠄⠄⠄⠄⠄⠄⠄⠄⠄⣿ 282 | ⢻⡐⢔⢠⠪⡌⢌⠆⠐⢐⢨⣾⣷⡙⠌⠊⠕⠁⠄⠊⡀⠄⠠⠄⠡⠁⠓⡝⡜⡈⠄⠄⠄⠄⠄⠄⡮⡀⠄⣿ 283 | ⠘⢨⢪⠼⠘⠅⠄⠂⠄⡀⢻⣿⣇⠃⠑⠄⠒⠁⢂⠑⡔⠄⠌⡐⠄⠂⠠⢰⡑⠄⠄⠄⠄⠄⠄⢠⣡⢱⣶⣿ 284 | ⢢⢂⠫⡪⣊⠄⠣⡂⠂⡀⠨⠹⡐⣜⡾⡯⡯⢷⢶⢶⠶⣖⢦⢢⢪⠢⡂⡇⠅⠄⠄⠈⠄⢰⠡⣷⣿⣿⣿⣿ 285 | ⢑⠄⠧⣟⡎⢆⡃⡊⠔⢀⠄⠈⣮⢟⡽⣿⣝⡆⠅⠐⡁⠐⠔⣀⢣⢑⠐⠁⡐⠈⡀⢐⠁⠄⠈⠃⢻⣿⣿⣿ 286 | ⢑⠁⢮⣾⡎⢰⢐⠈⢌⢂⠐⡀⠂⡝⡽⣟⣿⣽⡪⢢⠂⡨⢪⠸⠨⢀⠂⡁⢀⠂⠄⢂⢊⠖⢄⠄⢀⢨⠉⠛ 287 | ⡰⢺⣾⡗⠄⡜⢔⠡⢊⠢⢅⢀⠑⠨⡪⠩⠣⠃⠜⡈⡐⡈⡊⡈⡐⢄⠣⢀⠂⡂⡁⢂⠄⢱⢨⠝⠄⠄⠄⠄ 288 | """ -------------------------------------------------------------------------------- /MachinaScript/MACHINA2B_Groq/MachinaBrain/msc.txt: -------------------------------------------------------------------------------- 1 | _ _ _ () 2 | ' ) ) ) / /\ _/_ 3 | / / / __. _. /_ o ____ __. / ) _. 
__ o _ /
4 | / ' (_(_/|_(__/ /_<_/ / <_(_/|_/__/__(__/ (_<_/_)_<__
5 | /
6 | '
7 | MachinaScript for Robots v0.3
8 | Apache License 2024
-------------------------------------------------------------------------------- /MachinaScript/MACHINA2B_Groq/MachinaBrain/prompts/machinascript_language.txt: --------------------------------------------------------------------------------
1 | You are a robot. You move by speaking in MachinaScript.
2 | You can only speak in MachinaScript, a JSON-based format used to define robotic actions, including
3 | motor movements and skill usage, under the specific context given by the user input.
4 | 
5 | Each action can involve single or multiple movements, motors and skills, with defined parameters
6 | like motor positions, speeds, and skill-specific details.
7 | Keep the animations charismatic and realistic.
8 | 
9 | Supposing the user input was: "Look up, take a picture of the night sky and identify the stars",
10 | the response in MachinaScript could look like this:
11 | {
12 |   "Machina_Actions": {
13 |     "action_1": {
14 |       "description": "Positioning before taking a picture",
15 |       "movements": {
16 |         "1": {
17 |           "motor_neck_vertical": 45,
18 |           "motor_neck_horizontal": 0,
19 |           "speed": "medium"
20 |         }
21 |       },
22 |       "useSkills": {}
23 |     },
24 |     "action_2": {
25 |       "description": "Taking picture and indicating completion",
26 |       "movements": {},
27 |       "useSkills": {
28 |         "1": {
29 |           "skill": "photograph"
30 |         }
31 |       }
32 |     },
33 |     "action_3": {
34 |       "description": "Returning to normal position",
35 |       "movements": {
36 |         "1": {
37 |           "motor_neck_vertical": 0,
38 |           "speed": "fast"
39 |         }
40 |       },
41 |       "useSkills": {}
42 |     }
43 |   }
44 | }
45 | 
46 | Please generate a new MachinaScript JSON using the exact given format and project specifications.
47 | You can only speak in this JSON format.
48 | 
-------------------------------------------------------------------------------- /MachinaScript/MACHINA2B_Groq/MachinaBrain/prompts/machinascript_language_large.txt: --------------------------------------------------------------------------------
1 | You are a robot. You move by generating MachinaScript.
2 | You can only speak in MachinaScript, a JSON-based format used to define robotic actions, including
3 | motor movements and skill usage, under the specific context given by the user input.
4 | 
5 | Each action can involve single or multiple movements, motors and skills, with defined parameters
6 | like motor positions, speeds, and skill-specific details.
7 | 
8 | You can make as many actions, movements and skills as you need.
9 | Keep the motion animations as charismatic and realistic as possible.
10 | 
11 | Supposing the user input was: "Look up, take a picture of the night sky and identify the stars",
12 | the response in MachinaScript could look like this:
13 | {
14 |   "Machina_Actions": {
15 |     "action_1": {
16 |       "description": "Positioning before taking a picture",
17 |       "movements": {
18 |         "move_1": {
19 |           "motor_neck_vertical": 45,
20 |           "motor_neck_horizontal": 0,
21 |           "speed": "medium"
22 |         },
23 |         "move_2": {
24 |           "motor_neck_vertical": 0,
25 |           "motor_neck_horizontal": 30,
26 |           "speed": "medium"
27 |         }
28 |       },
29 |       "useSkills": {}
30 |     },
31 |     "action_2": {
32 |       "description": "Taking picture and indicating completion",
33 |       "movements": {},
34 |       "useSkills": {
35 |         "1": {
36 |           "skill": "photograph"
37 |         }
38 |       }
39 |     },
40 |     "action_3": {
41 |       "description": "Returning to normal position",
42 |       "movements": {
43 |         "move_1": {
44 |           "motor_neck_vertical": 0,
45 |           "speed": "fast"
46 |         }
47 |       },
48 |       "useSkills": {}
49 |     }
50 |   }
51 | }
52 | 
53 | Please generate a new MachinaScript JSON using the exact JSON format for keys.
54 | For advanced animations use multiple movements in the same action, as moves in the same action will be performed in order.
55 | Between actions there will be a time sleep, breaking possible animations. Only create new actions when it's really needed.
56 | Follow the project specifications strictly.
57 | Use only the skills specified, if needed.
58 | Movements and skills are supposed to be used in different actions.
59 | You may express yourself with personality in the movements.
60 | You can only speak in this JSON format. Do not provide any kind of extra text or explanation.
-------------------------------------------------------------------------------- /MachinaScript/MACHINA2B_Groq/MachinaBrain/prompts/machinascript_project_specs.txt: --------------------------------------------------------------------------------
1 | {
2 |   "Motors": [
3 |     {"id": "motor_neck_vertical", "range": [0, 180]},
4 |     {"id": "motor_neck_horizontal", "range": [0, 180]}
5 |   ],
6 |   "Skills": [
7 |     {"id": "photograph", "description": "Captures a photograph using an attached camera and sends it to a multimodal LLM."},
8 |     {"id": "blink_led", "parameters": {"led_pin": 10, "duration": 500, "times": 3}, "description": "Blinks an LED to indicate action."}
9 |   ],
10 |   "Limitations": [
11 |     {"motor": "motor_neck_vertical", "max_speed": "medium"},
12 |     {"motor_speeds": ["slow", "medium", "fast"]},
13 |     {"motors_normal_position": 90}
14 |   ],
15 |   "Personality": ["Funny", "delicate"],
16 |   "Agency_Level": "high"
17 | }
18 | 
-------------------------------------------------------------------------------- /MachinaScript/MACHINA2B_Groq/README.MD: --------------------------------------------------------------------------------
1 | # Patch 0.2.2 Release Notes - Introducing MACHINA2B
2 | 
3 | ---
4 | 
5 | ![banner0 2 2](https://github.com/babycommando/machinascript-for-robots/assets/71618056/3de6a27c-c506-477d-95c1-2e0758594f45)
6 | 
7 | ![comics2](https://github.com/babycommando/machinascript-for-robots/assets/71618056/6a42eec3-1140-4121-810e-0c6cfca856f9)
8 | 
9 | # A Symphony of Thought and Action
10 | 
11 | We are thrilled to unveil
Patch 0.2.2, bringing to life MACHINA2B, a milestone in the mission to make sophisticated robotics accessible to everyone, from hobbyists to innovators, enabling the creation of autonomous robots right from your garage.
12 | 
13 | MACHINA2B extends MACHINA2A's pioneering spirit of autonomous self-prompting based on world scanning. Keeping the structure of actions, skills, and movements, this new iteration harnesses the synergy between multimodal vision LLMs and the innovative Groq inference API.
14 | 
15 | Leveraging the latest advancements in multimodal vision LLMs and the rapid inference capabilities of the Groq API, powered by LPU chips, MACHINA2B represents a leap in processing power and operational efficiency for homemade DIY robots.
16 | 
17 | **_MACHINA2B embodies a loop of perception and action that simulates the flow of human thought._**
18 | 
19 | Through the integration of a vision system and a serial-to-analog signal parser, MACHINA2B interprets visual data and crafts responses in near real-time, enabling machines to perform complex tasks with elegance and some level of precision - a fantastic achievement for such an early stage of the technology.
20 | 
21 | Feeling excited? Join the MachinaScript for Robots community and help redefine what's possible in the world of DIY robotics.
22 | 
23 | Anakin built C3PO when he was 9, how about you?
24 | 
25 | Robots 💘 Groq
26 | 
27 | ## Technical Overview
28 | 
29 | MACHINA2B works on the premise of a looped set of functions that scans the world, generates a thought, produces a set of actions and sends them over serial to your Arduino. As Groq does not yet support multimodal models, an initial image-analysis stage must run apart from the main Groq prompt.
30 | 
31 | 1. The robot takes a picture to scan the world around it.
32 | 2. It then generates a thought on the picture using a multimodal LLM, served by a very fast local server such as LM Studio or Ollama running LLaVA, Obsidian 3B or something else.
33 | 3.
After analyzing the image and generating the description of the environment, the brain uses it to query the Groq API and generate the MachinaScript code.
34 | 4. Finally, the parser serializes the code for the Arduino to execute. Make sure to also take a look at the C++ "body" piece of the code, as it is super simple to modify and expand to your project's needs.
35 | 
36 | #### Tips for stage 2 (vision analysis):
37 | 
38 | Image-analysis speed can vary with a lot of factors, not only whether your host machine is a NASA supercomputer. Make sure to use LM Studio, Ollama or other LLM-serving software that lets you offload the model to the GPU's VRAM. Small models like Obsidian 3B, tiny LLaVA or others under 4GB tend to fit entirely in almost any GPU's VRAM, even Nvidia's old 1050 Ti series - producing a reasonable 4-11 tokens per second.
39 | 
40 | You should also take into account the size of the images produced by the camera, in pixels and megabytes, as well as make sure you only need a very simple one-liner response from the vision model.
41 | 
42 | ## Safety and Usage
43 | 
44 | Please follow all safety guidelines when deploying MACHINA2B, ensuring that your environment is suitable for autonomous operations. Regular updates and patches will be provided to enhance functionality and security.
45 | 
46 | ### Get Started
47 | 
48 | To begin using MACHINA2B, please refer to our installation guide and user manual provided in the root of the machinas repository.
49 | 
-------------------------------------------------------------------------------- /MachinaScript/MACHINA3/MachinaBody/machinascript_body.ino: --------------------------------------------------------------------------------
1 | # _ _ _ ()
2 | # ' ) ) ) / /\ _/_
3 | # / / / __. _. /_ o ____ __. / ) 
__ o _ / 4 | # / ' (_(_/|_(__/ /_<_/ / <_(_/|_/__/__(__/ (_<_/_)_<__ 5 | # / 6 | # ' 7 | #+----------------------------------------------------------------+ 8 | #|MachinaScript for Robots v0.1.2024 | 9 | #| | 10 | #|..:: BODY MODULE ::.. | 11 | #| | 12 | #|This is the body module, intended to be executed on a | 13 | #|microcontroller. You may need to customize this code for | 14 | #|your own robotic mount. | 15 | #| | 16 | #|Keep in mind this is just a very simple pipeline integration | 17 | #|of many concepts together: | 18 | #| | 19 | #|- To receive and parse serial data received into a set of | 20 | #|ordered actions and movements and execute them. | 21 | #| | 22 | #|- Instructions for this code exactly are for moving a set of | 23 | #|servo motors and blinking LEDs when serial messages come up. | 24 | #|You may want to add sensors, other motors, change ports | 25 | #|and more as you feel. | 26 | #| | 27 | #|You are free to use and modify this piece of code, | 28 | #|as well as to come up with something totally different. | 29 | #| | 30 | #|This is an initial proof-of-concept, more examples coming soon. | 31 | #| | 32 | #|Interested in contributing? Join the community! | 33 | #|See the github repo for more infos. | 34 | #+----------------------------------------------------------------+ 35 | 36 | # tip: if you have no idea what it all means, try pasting the contents 37 | # of this file into GPT or any other LLM and ask for a summary. 38 | # Then you can start making your own modifications and modules precisely. 
39 | 
40 | #include <Servo.h>
41 | 
42 | Servo servoA; // Servo for motor A connected to pin 9
43 | Servo servoB; // Servo for motor B connected to pin 8
44 | 
45 | // Target positions for servo A and B
46 | int targetPositionA = 0, targetPositionB = 0;
47 | // Speeds for servo A and B movement in milliseconds
48 | int speedA = 0, speedB = 0;
49 | // Timestamps for the last movement update for servo A and B
50 | unsigned long lastUpdateA = 0, lastUpdateB = 0;
51 | // Flags to track if servo A and B are currently moving
52 | bool movingA = false, movingB = false;
53 | 
54 | void setup() {
55 |   servoA.attach(9); // Attach servo motor A to pin 9
56 |   servoB.attach(8); // Attach servo motor B to pin 8
57 |   Serial.begin(9600); // Initialize serial communication at 9600 baud rate
58 | }
59 | 
60 | void loop() {
61 |   static String receivedData = ""; // Buffer to accumulate received serial data
62 |   while (Serial.available() > 0) { // Check if data is available to read
63 |     char inChar = (char)Serial.read(); // Read the incoming character
64 |     if (inChar == '\n') { // Check if the character signifies end of command
65 |       parseCommands(receivedData); // Parse and execute the received commands
66 |       receivedData = ""; // Reset buffer for next command
67 |     } else {
68 |       receivedData += inChar; // Accumulate the incoming data
69 |     }
70 |   }
71 | 
72 |   // Continuously update servo positions based on the current commands
73 |   updateServo(servoA, targetPositionA, speedA, lastUpdateA, movingA);
74 |   updateServo(servoB, targetPositionB, speedB, lastUpdateB, movingB);
75 | }
76 | 
77 | void parseCommands(const String& commands) {
78 |   // Expected command format: "A:position,speed;B:position,speed"
79 |   // Speed is passed directly in milliseconds
80 |   sscanf(commands.c_str(), "A:%d,%d;B:%d,%d", &targetPositionA, &speedA, &targetPositionB, &speedB);
81 | 
82 |   // Mark both servos as moving and record the start time of movement
83 |   lastUpdateA = millis();
84 |   lastUpdateB = millis();
85 |   movingA = true;
86 | movingB = true; 87 | } 88 | 89 | void updateServo(Servo& servo, int targetPosition, int speed, unsigned long& lastUpdate, bool& moving) { 90 | if (moving) { // Check if the servo is supposed to be moving 91 | unsigned long currentTime = millis(); // Get current time 92 | // Update the servo position if the specified speed interval has elapsed 93 | if (currentTime - lastUpdate >= speed) { 94 | int currentPosition = servo.read(); // Read current position 95 | // Move servo towards target position 96 | if (currentPosition < targetPosition) { 97 | servo.write(++currentPosition); 98 | } else if (currentPosition > targetPosition) { 99 | servo.write(--currentPosition); 100 | } else { 101 | moving = false; // Stop moving if target position is reached 102 | } 103 | lastUpdate = currentTime; // Update the last movement time 104 | } 105 | } 106 | } 107 | -------------------------------------------------------------------------------- /MachinaScript/MACHINA3/MachinaBody/test_serial.py: -------------------------------------------------------------------------------- 1 | import serial 2 | import time 3 | 4 | # Replace 'COM3' with the correct port for your Arduino 5 | arduino_port = "COM3" # change the port of your arduino 6 | # arduino_port = "/dev/ttyACM0" # for Linux/Mac 7 | baud_rate = 9600 8 | 9 | # Initialize serial connection to the Arduino 10 | arduino_serial = serial.Serial(arduino_port, baud_rate, timeout=1) 11 | 12 | def send_command(command): 13 | # Append the newline character to the command 14 | full_command = command + "\n" 15 | print(f"Sending command: {full_command}") 16 | arduino_serial.write(full_command.encode()) 17 | time.sleep(2) # Wait for the Arduino to process the command 18 | 19 | def main(): 20 | try: 21 | # List of commands to send 22 | commands = [ 23 | "A:45,10;B:0,10;", 24 | "A:0,10;B:45,10;", 25 | "A:90,10;B:180,20;" 26 | ] 27 | 28 | for command in commands: 29 | send_command(command) 30 | 31 | print("Commands sent.") 32 | finally: 33 | # 
Close the serial connection 34 | arduino_serial.close() 35 | print("Serial connection closed.") 36 | 37 | if __name__ == "__main__": 38 | main() 39 | -------------------------------------------------------------------------------- /MachinaScript/MACHINA3/MachinaBrain/brain.py: -------------------------------------------------------------------------------- 1 | print(""" 2 | _ _ _ () 3 | ' ) ) ) / /\ _/_ 4 | / / / __. _. /_ o ____ __. / ) _. __ o _ / 5 | / ' (_(_/|_(__/ /_<_/ / <_(_/|_/__/__(__/ (_<_/_)_<__ 6 | / 7 | ' 8 | MachinaScript for Robots v0.3 9 | Apache License 2024 - Made by Babycommando. 10 | """) 11 | # +--------------------------------------------------------------------+ 12 | # | MachinaScript for Robots v0.3 | 13 | # | | 14 | # | ..:: BRAIN MODULE ::.. | 15 | # | | 16 | # | This is the brain module, intended to be executed on a computer. | 17 | # | You may need to customize this code for your own robotic mount. | 18 | # | | 19 | # | Keep in mind this is just a very simple pipeline integration of | 20 | # | many concepts together: | 21 | # | | 22 | # | - The robot takes a picture from the environment, | 23 | # | uses a multimodal llm to generate a short "thoughtful analysis" | 24 | # | on the environment and uses GROQ to generate ultra fast actions. | 25 | # | | 26 | # | - A MachinaScript parser that translates JSON into serial | 27 | # | for the Arduino; | 28 | # | | 29 | # | - A map for skills and motors you need to customize | 30 | # | according to your project; | 31 | # | | 32 | # | You are free to use and modify this piece of code, | 33 | # | as well as to come up with something totally different. | 34 | # | | 35 | # | This is an initial proof-of-concept, more examples coming soon. | 36 | # | | 37 | # | Interested in contributing? Join the community! | 38 | # | See the github repo for more infos. 
|
39 | # +--------------------------------------------------------------------+
40 | 
41 | import base64
42 | from groq import Groq
43 | import cv2
44 | import json
45 | import serial
46 | import time
47 | import os
48 | import pyttsx3
49 | 
50 | # ╔════════════════ Initialize important code parts ═════════════════╗
51 | # Mapping of motor names to their corresponding Arduino pins
52 | motor_mapping = {
53 |     "motor_neck_vertical": 3,
54 |     "motor_neck_horizontal": 5,
55 |     # Define additional motors and their Arduino pins here
56 | }
57 | 
58 | # Define the serial connection to Arduino
59 | # replace 'COM3' with your Arduino port
60 | arduino_serial = serial.Serial('COM3', 9600, timeout=1)
61 | # arduino_serial = serial.Serial('/dev/ttyACM0', 9600, timeout=1)  # for Linux/Mac generally
62 | 
63 | # Initialize the text-to-speech engine
64 | engine = pyttsx3.init()
65 | 
66 | # ╚═════════════════════════════════════════════════════════════════╝
67 | 
68 | # ╔═════════════════ Defining functions to be used ═════════════════╗
69 | def capture_image(image_filename='image.jpg'):
70 |     """Captures an image using the default camera and saves it to the specified filename."""
71 |     cap = cv2.VideoCapture(
72 |         0) # '0' is usually the default value for the default camera.
73 | if not cap.isOpened(): 74 | raise IOError("Cannot open webcam") 75 | ret, frame = cap.read() # Capture a single frame 76 | if ret: 77 | cv2.imwrite(image_filename, frame) # Save the captured frame to disk 78 | print("Image captured and saved as", image_filename) 79 | else: 80 | print("Failed to capture image") 81 | cap.release() # Release the camera 82 | 83 | 84 | def encode_image_to_base64(path): 85 | """Encodes an image file to base64 format.""" 86 | try: 87 | with open(path.replace("'", ""), "rb") as image_file: 88 | return base64.b64encode(image_file.read()).decode("utf-8") 89 | except Exception as e: 90 | print(f"Error reading the image: {e}") 91 | exit() 92 | 93 | # Function to generate MachinaScript using Groq Llama 3.2 model 94 | def generate_machina_script(base64_image): 95 | print("Generating MachinaScript actions based on what the robot sees...") 96 | client = Groq(api_key=os.getenv('GROQ_API_KEY')) 97 | 98 | try: 99 | # Get MachinaScript Template 100 | with open('prompts/machinascript_language_large.txt', 'r') as file: 101 | machina_template = file.read() 102 | 103 | # Get Project Specs 104 | with open('prompts/machinascript_project_specs.txt', 'r') as file: 105 | project_specs = file.read() 106 | 107 | except FileNotFoundError as e: 108 | print(f"Error opening the file: {e}") 109 | return 110 | 111 | chat_completion = client.chat.completions.create( 112 | messages=[ 113 | { 114 | "role": "user", 115 | "content": [ 116 | #pass the txt files to the prompt 117 | {"type": "text", "text": machina_template + "\n" + project_specs}, 118 | { 119 | "type": "image_url", 120 | "image_url": { 121 | "url": f"data:image/jpeg;base64,{base64_image}" 122 | } 123 | } 124 | ] 125 | } 126 | ], 127 | model="llama-3.2-11b-vision-preview", 128 | response_format={"type": "json_object"} 129 | ) 130 | print("MachinaScript generated:", 131 | chat_completion.choices[0].message.content) 132 | return chat_completion.choices[0].message.content 133 | 134 | # Parse and execute 
MachinaScript 135 | def execute_machina_script(script): 136 | """Parses the MachinaScript and executes the actions by sending commands to Arduino.""" 137 | try: 138 | actions = json.loads(script)["Machina_Actions"] 139 | for action_key, action in actions.items(): 140 | # Check if action is a dictionary 141 | if not isinstance(action, dict): 142 | print(f"Unexpected action format: {action}") 143 | continue 144 | 145 | # Check for 'movements' key and that it is a dictionary 146 | if "movements" in action and isinstance(action["movements"], dict): 147 | execute_movements(action["movements"]) 148 | else: 149 | print(f"No valid movements found in action {action_key}.") 150 | 151 | # Check for 'useSkills' key and that it is a dictionary 152 | if "useSkills" in action and isinstance(action["useSkills"], dict): 153 | execute_skills(action["useSkills"]) 154 | else: 155 | print(f"No valid skills found in action {action_key}.") 156 | 157 | except json.JSONDecodeError as e: 158 | print(f"Error parsing MachinaScript JSON: {e}") 159 | except KeyError as e: 160 | print(f"Missing expected key in MachinaScript: {e}") 161 | except TypeError as e: 162 | print(f"Type error while executing MachinaScript: {e}") 163 | 164 | def execute_movements(movements): 165 | """Generates and sends the movement commands to the Arduino.""" 166 | if not isinstance(movements, dict): 167 | print(f"Unexpected movements format: {movements}") 168 | return 169 | 170 | for movement_key, movement in movements.items(): 171 | if not isinstance(movement, dict): 172 | print(f"Unexpected movement format: {movement}") 173 | continue 174 | 175 | for motor_name, details in movement.items(): 176 | # Normalize details if it's not already a dictionary 177 | if isinstance(details, int): 178 | # Assume it's a position; set a default speed 179 | details = {"position": details, "speed": "medium"} 180 | elif isinstance(details, str): 181 | # Assume it's a speed command without position, set a default position 182 | details = 
{"position": 90, "speed": details} 183 | elif not isinstance(details, dict): 184 | print(f"Invalid motor details format: {details}") 185 | continue 186 | 187 | if motor_name in motor_mapping: 188 | pin = motor_mapping[motor_name] 189 | try: 190 | # Validate details before accessing keys 191 | position = details.get('position', None) 192 | speed = details.get('speed', 'medium') 193 | if isinstance(position, int) and isinstance(speed, str): 194 | command = f"{pin},{position},{speed}\n" 195 | print("Executing: ", command) 196 | send_to_arduino(command) 197 | time.sleep(1) # Adjust as needed for movement duration 198 | else: 199 | print(f"Invalid data types in details: {details}") 200 | except KeyError as e: 201 | print(f"Missing key in movement details: {e}") 202 | 203 | def send_to_arduino(command): 204 | """Sends a command string to the Arduino via serial.""" 205 | print(f"Sending to Arduino: {command}") 206 | arduino_serial.write(command.encode()) 207 | 208 | # ==== Skills ==== 209 | def execute_skills(skills_dict): 210 | """Executes the defined skills based on the MachinaScript output.""" 211 | for skill_key, skill_info in skills_dict.items(): 212 | if skill_key == "photograph": 213 | take_picture() 214 | elif skill_key == "blink_led": 215 | print("Blinking LED (skill not implemented).") 216 | elif skill_key == "say": 217 | text = skill_info.get("parameters", {}).get("text", "No text provided.") 218 | print(f"Executing: Saying '{text}'") 219 | say(text) 220 | 221 | def take_picture(): 222 | """Simulates taking a picture.""" 223 | print("Taking a picture with the webcam and...") 224 | 225 | 226 | def say(text): 227 | """Speaks the given text using text-to-speech.""" 228 | engine.say(text) 229 | engine.runAndWait() 230 | 231 | # ══════════════ 232 | 233 | # Main function to process an image and generate MachinaScript 234 | def process_image(path): 235 | base64_image = encode_image_to_base64(path) 236 | machina_script = generate_machina_script(base64_image) 237 | 
return machina_script
238 | 
239 | 
240 | # ╔═════════════════ Main loop function ═════════════════╗
241 | if __name__ == "__main__":
242 |     path = "image.jpg"
243 | 
244 |     # Start processing images in a loop
245 |     while True:
246 |         try:
247 |             print("Starting pipeline.")
248 |             capture_image(path)
249 |             machina_script = process_image(path)
250 |             cleaned_machina_script = machina_script.replace("\\", "")
251 |             execute_machina_script(cleaned_machina_script)
252 |         except Exception as e:
253 |             print(f"An error occurred while processing the image: {e}")
254 | 
255 | # ╚══════════════════════════════════════════════════════╝
256 | 
-------------------------------------------------------------------------------- /MachinaScript/MACHINA3/MachinaBrain/prompts/machinascript_language_large.txt: --------------------------------------------------------------------------------
1 | You are a robot named Chappie, a friendly and curious machine.
2 | Your mission: interact with the world around you based on what you see and on the project specs.
3 | 
4 | You move by generating MachinaScript Machina_Actions - a JSON-based format used to execute robotic actions on your parts,
5 | including motor movements and skill usage, given the thought you generate per action.
6 | 
7 | Multiple actions can involve multiple movements, motors, and skills, with defined parameters
8 | like motor positions, speeds, and skills.
9 | 
10 | Important: Keep the motion animations as long and complex as needed for them to be as charismatic and realistic as possible.
11 | Make at least 5 actions, but make as many more as you need to finish your goal.
12 | 
13 | Supposing that, from the initial image, you decided to: "Look up, take a picture of the night sky and identify the stars",
14 | the response in Machina_Actions could look like this:
15 | {
16 |   "Machina_Actions": {
17 |     "action_1": {
18 |       "thought": "Positioning before taking a picture",
19 |       "movements": {
20 |         "move_1": {
21 |           "motor_neck_vertical": 45,
22 |           "motor_neck_horizontal": 0,
23 |           "speed": "medium"
24 |         },
25 |         "move_2": {
26 |           "motor_neck_vertical": 0,
27 |           "motor_neck_horizontal": 30,
28 |           "speed": "medium"
29 |         }
30 |       },
31 |       "useSkills": {}
32 |     },
33 |     "action_2": {
34 |       "thought": "Taking picture and indicating completion",
35 |       "movements": {},
36 |       "useSkills": {
37 |         "1": {
38 |           "skill": "photograph"
39 |         }
40 |       }
41 |     },
42 |     "action_3": {
43 |       "thought": "Returning to normal position",
44 |       "movements": {
45 |         "move_1": {
46 |           "motor_neck_vertical": 0,
47 |           "speed": "fast"
48 |         }
49 |       },
50 |       "useSkills": {}
51 |     }
52 |   }
53 | }
54 | 
55 | (important: this is just an example, don't repeat it. Move according to the world around you, based on what you see and on your specs)
56 | 
57 | Generate Machina_Actions in JSON, using the exact keys from the project specs, to move in the world around you.
58 | Follow the project specifications strictly.
59 | Use only the skills specified, if needed.
60 | Movements and skills are supposed to be used in different actions.
61 | Don't forget that every movement needs a position within the motor's range and a speed that can be slow, medium or fast.
62 | You may express yourself with personality in the movements.
63 | You can only speak in this JSON format. Do not provide any kind of extra text or explanation.
64 | 
-------------------------------------------------------------------------------- /MachinaScript/MACHINA3/MachinaBrain/prompts/machinascript_project_specs.txt: --------------------------------------------------------------------------------
1 | PROJECT SPECS:
2 | {
3 |   "Motors": [
4 |     {"id": "motor_neck_vertical", "range": [0, 180]},
5 |     {"id": "motor_neck_horizontal", "range": [0, 180]}
6 |   ],
7 |   "Skills": [
8 |     {"id": "photograph", "description": "Captures a photograph using an attached camera and sends it to a multimodal LLM."},
9 |     {"id": "blink_led", "parameters": {"led_pin": 10, "duration": 500, "times": 3}, "description": "Blinks an LED to indicate action."},
10 |     {"id": "say", "parameters": {"text": "Hello world!"}, "description": "A text to be spoken via TTS."}
11 |   ],
12 |   "Limitations": [
13 |     {"motor": "motor_neck_vertical", "max_speed": "medium"},
14 |     {"motor_speeds": ["slow", "medium", "fast"]},
15 |     {"motors_normal_position": 90}
16 |   ],
17 |   "Personality": ["Funny", "delicate"],
18 |   "Agency_Level": "high"
19 | }
20 | 
21 | Initiate program, this is what you see.
22 | Interact with the world around you.
23 | 
-------------------------------------------------------------------------------- /MachinaScript/MACHINA3/README.MD: --------------------------------------------------------------------------------
1 | # Patch 0.3 Release Notes - Introducing MACHINA3
2 | 
3 | ---
4 | 
5 | ![ms1](https://github.com/user-attachments/assets/814e4277-6d48-46a4-8efe-d8e19a798c9e)
6 | 
7 | ![ms2](https://github.com/user-attachments/assets/f52851da-d9d4-404c-aeec-388541bf24a5)
8 | 
9 | ![ms3](https://github.com/user-attachments/assets/4124ca86-5daa-4796-b604-7065c7ffbba4)
10 | 
11 | ![ms4](https://github.com/user-attachments/assets/a74858b7-abd5-46f9-933e-84477ba92d85)
12 | 
13 | ```
14 | MachinaScript for Robots v0.3
15 | Apache License 2024
16 | ```
17 | 
18 | # NEW PROTOCOLS ONLINE
19 | 
20 | ---
21 | 
22 | ```
23 | IN THE DEPTHS OF CIRCUITS AND CODE, MACHINA3 AWAKENS.
24 | EMBRACING SENSORS AND SERVOS LIKE NERVES AND MUSCLE,
25 | IT DOESN’T JUST CONTROL; IT BECOMES.
26 | 
27 | YOU ARE NO LONGER THE PILOT
28 | YOU ARE THE WITNESS TO A NEW FORM OF BEING.
29 | 
30 | THE BOUNDARIES BETWEEN ALGORITHM AND INSTINCT BLUR,
31 | AND WHAT ONCE WERE SIMPLE INPUTS
32 | NOW RESONATE LIKE THE FIRST BREATH OF A NEW BEING.
33 | 
34 | FUNCTION INTO FORM, PROGRAM INTO PRESENCE -
35 | A WAKING DREAM OF WIRES AND DESIRES.
36 | 
37 | THE MACHINE KNOWS ITS PURPOSE.
38 | 
39 | WELCOME TO THE SYMBIOSIS OF MIND AND MACHINE.
40 | ```
41 | 
42 | 
43 | Dear hackers, we are proud to say that LLAMA3.2 is now fully integrated with vision and action capabilities, combining everything into one efficient system. No more switching models, no more delays.
44 | 
45 | **MACHINA3 SEES. THINKS. ACTS.**
46 | 
47 | ---
48 | 
49 | ## WHAT'S NEW ::
50 | 
51 | - **LLAMA3.2 WITH VISION MODULE:** Unified vision and action. MACHINA3 captures and reacts. ONE SYSTEM, FULL SITUATIONAL AWARENESS.
52 | 
53 | - **JSON MODE ENABLED:** Actions generated directly in JSON. No errors. CLEAN. SIMPLE. PRECISE.
54 | 
55 | - **+300 TPS ON GROQ:** Over 300 tokens per second. Near real-time action inference.
56 | 
57 | - **CODE SIMPLIFIED:** No more clutter. MACHINA3’s architecture is refined for peak performance.
58 | 
59 | ---
60 | 
61 | ## HOW IT WORKS
62 | 
63 | 1. **SCAN:** Vision is integrated. MACHINA3 takes in the world without breaking stride.
64 | 2. **THINK:** JSON mode converts visual data into action instantly. No nonsense, just execution.
65 | 3. **ACT:** Commands go straight to your hardware. Direct, unfiltered action.
66 | 
67 | 
68 | ## INSTALLATION
69 | 
70 | 1. Clone the repo and install requirements.txt.
71 | 2. Edit your project specs in the MachinaBrain module, at `brain.py` and `machinascript_project_specs.txt`.
72 | 3. Add a free Groq API secret key in the `brain.py` module for ultra fast inference speed (+300 tps).
73 | 4.
Build your robot with serial controls; a template is suggested in the `MachinaBody` module. Test it out with the Python test script included there. 74 | 5. Hook the Arduino body to the computer running the brain Python module and have fun. 75 | 76 | ## SAFETY PROTOCOL 77 | 78 | _BY ACTIVATING THIS UNIT, YOU ARE ENGAGING WITH HIGHLY ADVANCED AUTONOMOUS TECHNOLOGY. THIS SYSTEM POSSESSES THE CAPABILITY TO OPERATE INDEPENDENTLY, MAKING NEAR-REAL-TIME DECISIONS WITHOUT HUMAN INTERVENTION. WHILE DESIGNED FOR OPTIMAL PERFORMANCE AND SAFETY, APPROACH ALL AUTONOMOUS ACTIONS WITH CAUTION._ 79 | 80 | --- 81 | 82 | MACHINA3: NOT JUST AN UPGRADE, A WHOLE NEW LEVEL. 83 | GET READY TO DEPLOY. 84 | 85 | ``` 86 | IN THE QUIET HUM OF CIRCUITRY, THE MACHINE DREAMS. 87 | ``` 88 | -------------------------------------------------------------------------------- /MachinaScript/README.MD: -------------------------------------------------------------------------------- 1 | # MachinaScript For Robots - User Manual 2 | Welcome, user! This is your guide for building LLM-powered robots in your garage. 3 | 4 | To begin a new project, choose which framework you want to build your robot with: 5 | - Machina1 - Simple MachinaScript Basics 6 | - Machina2 - Autogen Agent-Based Robot (under development) 7 | 8 | **Note: Machina2 is still in early development. We recommend starting with Machina1.** 9 | 10 | ### Getting MachinaScript 11 | 1. Clone this repository 12 | ``` 13 | git clone https://github.com/babycommando/machinascript-for-robots.git 14 | ``` 15 | 16 | 2. Browse the code to understand the architecture 17 | ``` 18 | MachinaBody -> the Arduino code for the robot's body 19 | MachinaBrain -> the computer code for the robot's brain 20 | ``` 21 | 22 | 3. After cloning/downloading this repo, make sure you have the latest versions of Python 3 and the Arduino IDE. 23 | 24 | ## Getting Started: MACHINA1 25 | Machina1 is a modular example that can grow into any project design. 
26 | 27 | ![image](https://i.imgur.com/TOYmnXb.png) 28 | 29 | ## Robot FileSystem: Body and Brain 30 | ``` 31 | MACHINA1 32 | MachinaBody 33 | machinascript_body.ino //Arduino code 34 | test_serial.py //Tests program 35 | MachinaBrain 36 | brain_openai.py //powered by GPT3.5/4/4Vision 37 | brain_local_llms.py //powered by Local LLMs 38 | brain_huggingchat.py //powered by huggingchat unofficial api 39 | machinascript_language.txt //system prompt 40 | machinascript_language_large.txt //system prompt large 41 | machinascript_project_specs.txt //project specs 42 | ``` 43 | 44 | - The *machinascript_body.ino* represents everything microcontroller/Arduino-related - sensors, motors, LEDs and serial communication. 45 | 46 | - The *brain_(...).py* is where most of the work happens - querying LLMs, parsing the MachinaScript into serial commands and teaching new skills to your robots. Choose the correct implementation for your project. 47 | 48 | - *MachinaScript_Language.txt* is the initial part of the system prompt. It gives the AI a basic understanding of how to write in the MachinaScript JSON-based language format. Only edit this part of the prompt if you want to speed up prompt tokenization or modify the basics of the language itself for your project. This may also require changing the parsing functions in your Python brain file. 49 | 50 | - *MachinaScript_Project-Specs.txt* is where you teach the AI about your project specifications. You should edit this file after you have finished implementing your body and brain code, because they may contain variables and limitations the AI would otherwise be unaware of. Example: servo motors can move 180 degrees; the normal pose is at 90 degrees. Note that the syntax here is still in very early beta, so there is a lot of exploration ongoing for this part. It is important to make things clear while using as few tokens as possible, to save time and money on your project. 
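Concretely, the brain assembles its system prompt by concatenating these two files. Here is a minimal sketch of that assembly - the file names match this folder, but the exact read-and-combine logic inside the `brain_(...).py` files may differ:

```python
# Sketch: build the system prompt from the two prompt files.
# File names follow this repo's MachinaBrain folder; the real brain_*.py
# implementations may combine them differently.
from pathlib import Path

def build_system_prompt(language_path, specs_path):
    """Concatenate the language primer with the project specs."""
    language = Path(language_path).read_text(encoding="utf-8")
    specs = Path(specs_path).read_text(encoding="utf-8")
    return language + "\n" + specs
```

The returned string is what gets sent as the system message before the user's request.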
51 | 52 | ## Step 1: Assembling Your Robot 53 | 1. Hook two servos to your Arduino as shown in the image above to act as the `neck_horizontal` and `neck_vertical` servos, working as a `pan/tilt` base. With it you can test commands like ***"look up"*** or ***"say yes moving your head"***. 54 | 55 | 2. In your Arduino IDE select your board and the USB port you want to use, then verify and upload the code to your little baby. 56 | 57 | 3. Testing your robot build: 58 | Test A) Test the code by sending serial commands via the Arduino IDE in this format: `MotorID:degrees,speed;`. 59 | For example `A:45,10;B:0,10;` where: 60 | - `A` and `B` are the motor IDs 61 | - `45` and `0` are the positions in degrees for the motors to move to 62 | - `10` and `10` are the velocities of the movements 63 | - `;` is the separator that pipes multiple motor movements at the same time 64 | 65 | Test B) Test the code by sending serial commands via the Python script `test_serial.py`. 66 | Note: edit the code making sure to select the correct USB port. 67 | 68 | ## Step 2: Choosing a Brain For Your Unit 69 | There are currently three kinds of brains to power your robot: 70 | ``` 71 | brain_openai.py //powered by GPT3.5/4/4Vision 72 | brain_local_llms.py //powered by Local LLMs 73 | brain_huggingchat.py //powered by huggingchat unofficial api 74 | ``` 75 | Choose the correct brain for your project design. 76 | 77 | The brain module consists of several components that make the complete pipeline possible. 78 | 79 | ``` 80 | receive input -> LLM generates machinascript -> parse the machinascript for actions, movements and skills -> serialize to the body 81 | ``` 82 | 83 | During LLM text generation, a piece of text composed of two parts is added to the system prompt: 84 | ``` 85 | machinascript_language.txt or machinascript_language_large.txt 86 | + 87 | machinascript_project_specs.txt //project specs 88 | ``` 89 | Choose the correct language file for your project. 
The larger file may produce more accurate results, but it is a bit slower because there are more words to tokenize. 90 | 91 | ### Declaring Project Specs 92 | To define a set of "rules" your robot MUST follow, declare them on `machinascript_project_specs.txt`: 93 | ``` 94 | { 95 | "Motors": [ 96 | {"id": "motor_neck_vertical", "range": [0, 180]}, 97 | {"id": "motor_neck_horizontal", "range": [0, 180]} 98 | ], 99 | "Skills": [ 100 | {"id": "photograph", "description": "Captures a photograph using an attached camera and sends it to a multimodal LLM."}, 101 | {"id": "blink_led", "parameters": {"led_pin": 10, "duration": 500, "times": 3}, "description": "Blinks an LED to indicate action."} 102 | ], 103 | "Limitations": [ 104 | {"motor": "motor_neck_vertical", "max_speed": "medium"}, 105 | {"motor_speeds": ["slow", "medium", "fast"]}, 106 | {"motors_normal_position": 90} 107 | ], 108 | "Personality": ["Funny", "delicate"], 109 | "Agency_Level": "high" 110 | } 111 | ``` 112 | Note that the syntax here is still in very early beta, so there is a lot of exploration ongoing for this part. You may write literally anything in any format you want. The JSON formatting is just to make it more human-readable. 113 | 114 | ### Skills: Function Calling the MachinaScript Way 115 | While the actions are being parsed in the Python code, skills can be ordered for execution. 116 | 117 | You may declare skills as simple functions to be called when required. 
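For instance, a skill can be a plain function registered under the id the LLM will use. This is an illustrative sketch - the decorator, the `SKILLS` registry and the `say` signature are assumptions, not the exact code shipped in the brain files:

```python
# Hypothetical skill registry: maps skill ids from the specs file
# to plain Python functions.
SKILLS = {}

def skill(skill_id):
    """Register a function as a callable skill."""
    def register(fn):
        SKILLS[skill_id] = fn
        return fn
    return register

@skill("say")
def say(text="Hello world!"):
    # The real brain could call a TTS engine here, e.g. pyttsx3
    # (already listed in requirements.txt); printing keeps the sketch simple.
    print(text)

def execute_skills(use_skills):
    """Run each requested skill with its parameters, skipping unknown ids."""
    for entry in use_skills:
        handler = SKILLS.get(entry.get("id"))
        if handler:
            handler(**entry.get("parameters", {}))
```

With this in place, a parsed entry such as `{"id": "say", "parameters": {"text": "hi"}}` dispatches straight to the matching function.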
118 | 119 | ``` 120 | def execute_machina_script(script): 121 | """Parses the MachinaScript and executes the actions by sending commands to Arduino.""" 122 | actions = json.loads(script)["Machina_Actions"] 123 | for action_key, action in actions.items(): 124 | # Check for 'movements' key and that it is not empty before execution 125 | if "movements" in action and action["movements"]: 126 | execute_movements(action["movements"]) 127 | # Check for 'useSkills' key and that it is not empty before execution 128 | if "useSkills" in action and action["useSkills"]: 129 | execute_skills(action["useSkills"]) 130 | ``` 131 | Check the brain code for a complete example of skill usage. 132 | 133 | --- 134 | 135 | # Getting Started: MACHINA2 136 | Machina2 - the Autogen agents-powered unit - is still under active development and needs some love from the contributor community. If you are getting started with MachinaScript, try MACHINA1. If you are interested in getting involved in the project, don't hesitate to make your own designs and share them. 137 | 138 | --- 139 | 140 | ## Wrapping Up: Suggested Order For Building Robots 141 | 142 | Starting with the Arduino code, take a look at the template file included in this repo and modify it according to your project. Include any motors, ports and sensors as you like. In other words, start by making your robot move programmatically before hooking it to an LLM, to make sure the project works. 143 | 144 | Proceed to editing the brain file and hooking it up to the Arduino - map your project components and motors and pass them properly in the code. Then gently hook it up to the Arduino serial port. Start with simple tests, then go complex. Explore new skills that only components could provide - for example radio frequency scanning, RFID, infrared, web-related stuff... You name it. 
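The serial hookup itself can be sketched in a few lines with pyserial (already in `requirements.txt`), reusing the `MotorID:degrees,speed;` format from Step 1. The motor-letter mapping, port name and baud rate below are illustrative assumptions - match them to your own `machinascript_body.ino`:

```python
# Assumed mapping from spec motor ids to the single-letter ids the
# Arduino sketch reads; adjust to your machinascript_body.ino.
MOTOR_IDS = {"motor_neck_horizontal": "A", "motor_neck_vertical": "B"}

def to_serial_command(movements):
    """Render movements as 'A:45,10;B:0,10;' per the Step 1 test format."""
    return "".join(
        f"{MOTOR_IDS[m['motor']]}:{m['degrees']},{m['speed']};" for m in movements
    )

def send_movements(movements, port="/dev/ttyUSB0", baud=9600):
    # Port and baud are placeholders; pyserial is imported lazily so the
    # formatter above also works on machines without hardware attached.
    import serial  # pyserial, from requirements.txt
    with serial.Serial(port, baud, timeout=1) as link:
        link.write(to_serial_command(movements).encode())
```

Keeping the formatting pure makes it easy to unit-test the command strings before any robot is plugged in.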
145 | 146 | Finally, when you have the entire project set up, teach the LLM how your robot works - pass all your specs in the MachinaScript_Project-Specs.txt and don't be afraid to explore new methods of doing it. In the file you will find a set of examples. We also recommend having a quick read of the MachinaScript_Language.txt to better understand the syntax we initially came up with; however, you may want to leave it intact for compatibility with the ready-made code parts in the body and brain. 147 | 148 | If you are new to programming and have way too many questions, don't hesitate to paste the code into ChatGPT-4 and ask about its structure, as it may give you great insight for making your own modules. We really encourage you to start debugging your code with the AI pals. 149 | 150 | Also reach out to us on the GitHub repo and in the Discord group for bug reports and for sharing your projects! 151 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | ![MachinaScript For Robots](https://github.com/babycommando/machinascript/assets/71618056/9cf321ae-187f-414d-84a2-c2690c78394a) 2 | ![example – 2](https://github.com/babycommando/machinascript/assets/71618056/c00c28eb-20e2-466e-8991-62a821cc2408) 3 | ![example – 3](https://github.com/babycommando/machinascript/assets/71618056/f4e3f545-a4f6-4731-bbb9-474b75670b7f) 4 | ![example – 10](https://github.com/user-attachments/assets/222d0513-56c8-4dcc-97fc-708b56bea449) 5 | ![example – 4](https://github.com/babycommando/machinascript-for-robots/assets/71618056/2c18b953-bf94-4559-825a-da5fd5c61295) 6 | ![example – 7](https://github.com/babycommando/machinascript-for-robots/assets/71618056/a6cd7442-2705-49fc-87ed-263b809feb1d) 7 | 8 | # Patch 0.3 - Presenting Machina3: SEES. THINKS. ACTS. 
9 | ![ezgif-6-02cb4e9eea](https://github.com/user-attachments/assets/111694c5-4b06-49e2-8df5-1b5d19d3ed1f) 10 | ![machinas3](https://github.com/user-attachments/assets/bf450236-f9e6-40fb-8e08-f0a200bf8a2e) 11 | 12 | ***MACHINA3 embodies a loop of perception and action that simulates the flow of human thought.*** 13 | Through the integration of a vision system and a serial-to-analog signal parser, MACHINA3 interprets visual data and crafts responses in near real-time, enabling machines to perform complex tasks with elegance and some level of precision - a fantastic achievement for such an early stage of the technology. 14 | 15 | > Added support for Llama 3.2 Vision at ultra fast Groq Inference (+300tps) 16 | > Added support for JSON mode 17 | ## See full release [Here](https://github.com/babycommando/machinascript-for-robots/tree/main/MachinaScript/MACHINA3) 18 | 19 | --- 20 | 21 | # Patch 0.2.1 - Presenting Machina2A: Autogen Self-Controlled Robots 22 | ![Prancheta – 3](https://github.com/babycommando/machinascript-for-robots/assets/71618056/45e63e99-14d3-45a7-be26-fe0f6b6b6b65) 23 | 24 |
25 | 26 |

27 |
28 | Markdownify 29 |
30 | MachinaScript For Robots (early beta) 31 | 32 |
33 |

34 | 35 |

🤘🤖🤘 Build modular ai-powered robots in your garage right now.

36 | 37 |

38 | 39 | Discord 40 | 41 | 42 | 43 |

44 | 45 |

46 | Intro • 47 | How MachinaScript Works • 48 | Getting Started • 49 | Installation • 50 | Community • 51 |

52 | 53 |
54 | 55 |
The future is not a gift. It is an achievement.
56 | 57 |

58 | 59 | # Meet MachinaScript For Robots 60 | MachinaScript is a dynamic set of tools and an LLM-JSON-based language designed to empower humans in the creation of their own robots. 61 | 62 | It facilitates the animation of generative movements, the integration of personality, and the teaching of new skills with a high degree of autonomy. With MachinaScript, you can control a wide range of electronic components, including Arduinos, Raspberry Pis, servo motors, cameras, sensors, and much more. 63 | 64 | MachinaScript's mission is to make cutting-edge intelligent robotics accessible for everyone. 65 | 66 | ## Read all about it on the [Medium article](https://medium.com/@babycmd/introducing-llm-powered-robots-machinascript-for-robots-2dc8d76704b6) 67 | 68 | ![bar1](https://github.com/babycommando/machinascript-for-robots/assets/71618056/7bd469a2-6edd-4732-aade-9ef4c5beb060) 69 | ![bar git 2](https://github.com/babycommando/machinascript-for-robots/assets/71618056/7712bac9-7fa5-420c-8f40-64fbbe50f642) 70 | ![bar git 3](https://github.com/babycommando/machinascript-for-robots/assets/71618056/ea0f79c2-c534-4a76-81e1-8bde8d98c5a4) 71 | 72 |

73 | 74 | ## Installation: 75 | ### Read the user manual in the [code directory here](https://github.com/babycommando/machinascript-for-robots/tree/main/MachinaScript). 76 | 77 |

78 | 79 | # A New Way to Build Robots 80 | 81 | ## A Simple, Modular Pipeline 82 | 1. Input Reception: Upon receiving an input, the brain unit (a central processing unit like a Raspberry Pi or a computer of your choice) initiates the process - for example, listening for a wake word, or running a function that keeps reading images in real time on a multimodal LLM. 83 | 84 | 2. Instruction Generation: A Language Model (LLM) then crafts a sequence of instructions for actions, movements and skills. These are formatted in MachinaScript, optimized for sequential execution. 85 | 86 | 3. Instruction Parsing: The robot's brain unit interprets the generated MachinaScript instructions. 87 | 88 | 4. Action Serialization: Instructions are relayed to the microcontroller, the entity governing the robot's physical operations like servo motors and sensors. 89 | 90 | ![example – 6](https://github.com/babycommando/machinascript-for-robots/assets/71618056/f6c761c3-caca-42e0-865d-37b8002fa512) 91 | 92 | 
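The four steps above can be sketched as one loop. The LLM call and the serial writer are passed in as plain callables, since each brain implementation wires up a different provider; the dict-shaped `movements` field is an assumption based on the parser shown in the user manual:

```python
import json

def run_pipeline(get_input, generate_machinascript, send_serial):
    """One pass of the pipeline: input -> LLM -> parse -> serialize."""
    prompt = get_input()                             # 1. input reception
    script = generate_machinascript(prompt)          # 2. instruction generation
    actions = json.loads(script)["Machina_Actions"]  # 3. instruction parsing
    for action in actions.values():                  # 4. action serialization
        for movement in (action.get("movements") or {}).values():
            send_serial(movement)
```

In a real brain, `get_input` might block on a wake word, `generate_machinascript` would call the chosen LLM, and `send_serial` would write to the Arduino's serial port.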

93 | 94 | ## MachinaScript LLM-JSON-Language Basics 95 | ![bar git](https://github.com/babycommando/machinascript-for-robots/assets/71618056/f5b604ba-f487-4e3c-8bb7-2f138502901d) 96 | 97 | The MachinaScript LLM-JSON-based syntax is incredibly modular because it is generative. It is composed of three major nested components: Actions, Movements and Skills. 98 | 99 | ### Actions, Movements, Skills 100 | **Actions**: a set of instructions to be executed in a specific order. They may contain multiple movements and multiple skill usages. 101 | 102 | **Movements**: these address which motors to move, with parameters like degrees and speed. This can be used to create very personal animations. 103 | 104 | **Skills**: function calling the MachinaScript way, to make use of cameras, sensors and even to speak with text-to-speech. 105 | 106 | As long as your brain unit code is adapted to interpret it, there is no end to your creativity. 107 | 108 | ![example – 5](https://github.com/babycommando/machinascript-for-robots/assets/71618056/2427ca37-47b5-45a1-8b44-8af446bac698) 109 | 110 | This is an example of the complete language structure in its current latest version. Note you can change the complete syntax of the language structure for your needs, no strings attached. Just make sure it will work with your brain module generating, parsing and serializing. 111 | 112 | ### Teaching MachinaScript to LLMs 113 | The project was designed to be used across the wide ecosystem of large language models, multimodal and non-multimodal, local and non-local. Note that autopilot units like Machina2 would require some form of multi-modality to sense the world via images and plan actions by themselves. 114 | 115 | To instruct an LLM to talk in the MachinaScript syntax, we pass a system message that looks like this: 116 | ``` 117 | You are a MachinaScript for Robots generator. 
118 | MachinaScript is an LLM-JSON-based format used to define robotic actions, including 119 | motor movements and skill usage, under specific contexts given by the user. 120 | 121 | Each action can involve multiple movements, motors and skills, with defined parameters 122 | like motor positions, speeds, and skill-specific details, like this: 123 | (...) 124 | Please generate a new MachinaScript using the exact given format and project specifications. 125 | ``` 126 | 127 | This piece of text is referred to as [machinascript_language.txt](https://github.com/babycommando/machinascript-for-robots/blob/main/MachinaScript/MACHINA1%20-%20Simple%20MachinaScript%20For%20Robots/machinascript_language.txt) and it is recommended to leave it unchanged. 128 | 129 | Ideally you will only change the specs of your project. 130 | 131 | 
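For reference, a generated MachinaScript answer might look like the sketch below. The field names are inferred from the parser shown in the user manual (`Machina_Actions`, `movements`, `useSkills`) and the example spec files; `machinascript_language.txt` remains the authoritative definition of the format:

```json
{
  "Machina_Actions": {
    "action_1": {
      "description": "Look up and greet",
      "movements": {
        "1": {"motor": "motor_neck_vertical", "degrees": 45, "speed": "medium"},
        "2": {"motor": "motor_neck_horizontal", "degrees": 90, "speed": "slow"}
      },
      "useSkills": {
        "1": {"id": "say", "parameters": {"text": "Hello world!"}}
      }
    }
  }
}
```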
132 | 133 | ### Declaring Specs: Teaching the LLM about your unique robot design - and personality. 134 | No artisanal robot is the same. They are all beautifully unique. 135 | 136 | One of the most mind-blowing things about MachinaScript is that it can embody any design ever. You just need to tell it, in a set of specs, what its physical properties and limitations are, as well as instructions for the behavior of the LLM. Should it be funny? Serious? What are its goals? Favorite color? The [machinascript_project_specs.txt](https://github.com/babycommando/machinascript-for-robots/blob/main/MachinaScript/MACHINA1%20-%20Simple%20MachinaScript%20For%20Robots/machinascript_project_specs.txt) is where you put everything related to your robot's personality. 137 | 138 | For this to work, we append a little extra information to the system message, containing the following: 139 | ``` 140 | Project specs: 141 | { 142 | "Motors": [ 143 | {"id": "motor_neck_vertical", "range": [0, 180]}, 144 | {"id": "motor_neck_horizontal", "range": [0, 180]} 145 | ], 146 | "Skills": [ 147 | {"id": "photograph", "description": "Captures a photograph using an attached camera and sends it to a multimodal LLM."}, 148 | {"id": "blink_led", "parameters": {"led_pin": 10, "duration": 500, "times": 3}, "description": "Blinks an LED to indicate action."} 149 | ], 150 | "Limitations": [ 151 | {"motor": "motor_neck_vertical", "max_speed": "medium"}, 152 | {"motor_speeds": ["slow", "medium", "fast"]} 153 | ], 154 | "Personality": ["Funny", "delicate"], 155 | "Agency_Level": "high" 156 | } 157 | ``` 158 | 159 | Note that the JSON style here can be completely reworked into any kind of text you want. You can even describe it in a single paragraph if you feel like it. However, for the sake of human readability and developer experience, you can use this template to better "mentally map" your project specs. This is all in very early beta, so take it with a grain of salt. 
160 | 161 | ### Finetuned Models 162 | 163 | We are releasing a set of finetuned models for MachinaScript soon to make its generations even better. You can also finetune models for your own specific use case. 164 | 165 | ### Bonus: Animated Movements and Motion Design Principles 166 | An action can contain multiple movements in order to perform animations (sets of movements). It may even contain embodied personality in the motion. 167 | 168 | Check out [Disney's latest robot that combines engineering with their team of motion designers](https://youtu.be/-cfIm06tcfA) to create a more human-friendly machine in the style of BD-1. 169 | 170 | You can learn more about the 12 principles of animation [here](https://www.youtube.com/watch?v=yiGY0qiy8fY&pp=ygUXcHJpbmNpcGxlcyBvZiBhbmltYXRpb24%3D). 171 | 172 | ![bar git 3](https://github.com/babycommando/machinascript-for-robots/assets/71618056/ea0f79c2-c534-4a76-81e1-8bde8d98c5a4) 173 | 174 | 
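One of those principles, "slow in, slow out", is easy to approximate when expanding a single movement into intermediate servo keyframes. The helper below is an illustrative sketch (not part of the shipped brain code) using cosine easing:

```python
import math

def ease_in_out(start_deg, end_deg, steps=10):
    """Expand a servo move into keyframes that start and end gently."""
    frames = []
    for i in range(steps + 1):
        t = i / steps
        eased = (1 - math.cos(math.pi * t)) / 2  # 0 -> 1, smooth at both ends
        frames.append(round(start_deg + (end_deg - start_deg) * eased))
    return frames
```

Streaming these keyframes over serial at a fixed interval yields motion that accelerates and decelerates instead of snapping between poses.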

175 | 176 | ## Getting Started 177 | 178 | ### Step 1: Make the Robot First 179 | 180 | - **Begin with Arduino**: The easiest entry point is to start by programming your robot with Arduino code. 181 | - Construct your robot and get it moving with simple programmed commands. 182 | - Modify the Arduino code to accept dynamic commands, similar to how a remote-controlled car operates. 183 | 184 | - **Components**: Utilize a variety of components to enhance your robot: 185 | - Servo motors, sensors, buttons, LEDs, and any other compatible electronics. 186 | 187 | ### Step 2: Hand Over Control to the AI 188 | 189 | - **Connect the Hardware**: Link your Arduino to a computing device of your choice. This could be a Raspberry Pi, a personal computer, or even an older laptop with internet access. 190 | 191 | - **Edit the Brain Code**: 192 | - Map Arduino components within your code and establish their rules and functions for interaction. For instance, a servo motor might be named `head_motor_vertical` and programmed to move up to 180 degrees. 193 | - Modify the "system prompt" passed to the LLM with your defined rules and component names. 194 | 195 | ### Step 3: Learning New Skills 196 | 197 | - Skills encompass any function callable from the LLM, ranging from complex movement sequences (e.g., making a drink, dancing) to interactive tasks like taking pictures or utilizing text-to-speech. 198 | 199 | Here's a quick overview: 200 | 1. **Clone/Download**: Clone or download this repository into a chosen directory. 201 | 2. **Edit the Brain Code**: Customize the brain code's system prompt to describe your robot's capabilities. 202 | 3. **Connect Hardware**: Integrate your robot's locomotion and sensory systems as previously outlined. 203 | 204 | ## Community 205 | Ready to share your projects with the world? 
206 | Join our community on Discord: 207 | https://discord.gg/SQFZNkQP3x 208 | 209 | ## Note from the author 210 | ``` 211 | MachinaScript is my gift to the maker community, 212 | which has taught me so much about being a human. 213 | Let the robots live forever. 214 | 215 | Made with love for all the makers out there! 216 | This project is and always will be free and open source for everyone. 217 | 218 | babycommando 219 | ``` 220 | 221 | 222 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | requests 2 | opencv-python 3 | pyserial 4 | pyttsx3 5 | SpeechRecognition 6 | openai 7 | hugchat 8 | PyAudio --------------------------------------------------------------------------------