├── .gitignore
├── .streamlit
│   └── config.toml
├── Lux-Interface-Installer.exe
├── README.md
├── [setup]
│   ├── Lux-Interface-Installer.iss
│   └── setup.py
├── interface
│   ├── CONFIG.py
│   ├── configuration
│   │   ├── _ui_custom
│   │   │   ├── custom_ui.py
│   │   │   └── page_title.py
│   │   ├── audio
│   │   │   ├── change_synthetic_voice.py
│   │   │   ├── select_audio_chunk.py
│   │   │   ├── select_audio_rate.py
│   │   │   ├── select_buffer_length.py
│   │   │   ├── select_micro_sensitivity.py
│   │   │   ├── select_microphone.py
│   │   │   ├── select_narrator_voice.py
│   │   │   └── select_speed_voice.py
│   │   ├── cam
│   │   │   └── select_cam.py
│   │   ├── import_tools
│   │   │   ├── manage_menu.py
│   │   │   ├── manage_requirements.py
│   │   │   ├── manage_select_tool.py
│   │   │   └── manage_tools_list.py
│   │   ├── llm
│   │   │   ├── select_build_model.py
│   │   │   └── select_llm_max_history.py
│   │   ├── rag
│   │   │   └── select_similarity.py
│   │   ├── select_language.py
│   │   ├── update_config.py
│   │   └── upload_tools.py
│   ├── kernel
│   │   ├── agent_llm
│   │   │   ├── build_llm
│   │   │   │   └── auto_build_llm.py
│   │   │   ├── llm
│   │   │   │   ├── llm.py
│   │   │   │   ├── llm_embeddings.py
│   │   │   │   └── llm_rag.py
│   │   │   ├── rag
│   │   │   │   └── similarity_search.py
│   │   │   └── vectorization_tools.py
│   │   ├── audio
│   │   │   ├── speech_to_text
│   │   │   │   ├── record.py
│   │   │   │   └── whisper.py
│   │   │   └── synthetic_voice
│   │   │       ├── en_lux_voice.wav
│   │   │       ├── fr_lux_voice.wav
│   │   │       ├── narrator_voice.py
│   │   │       └── synthetic_voice.py
│   │   ├── start_kernel.py
│   │   └── tools
│   │       ├── select_tool.py
│   │       ├── tools_functions
│   │       │   ├── exit_system.py
│   │       │   ├── pause_system.py
│   │       │   ├── tools_action
│   │       │   │   ├── cam
│   │       │   │   │   └── screen_cam.py
│   │       │   │   ├── screenshot.py
│   │       │   │   ├── take_note.py
│   │       │   │   ├── time.py
│   │       │   │   └── web_search
│   │       │   │       ├── check_connection.py
│   │       │   │       └── search_webs.py
│   │       │   └── tools_response
│   │       │       └── hello_sir.py
│   │       └── tools_list.py
│   ├── menu.py
│   ├── pages
│   │   ├── 1_configuration.py
│   │   └── 2_assistant.py
│   └── ressources
│       ├── logo-lux-interface.ico
│       ├── logo-lux.png
│       └── schema-system-architecture
│           └── lux-system-architecture.png
├── license.txt
├── lux-interface.exe
└── requirements.txt
/.gitignore:
--------------------------------------------------------------------------------
1 | # Env files
2 | __pycache__
3 | .env
4 |
5 | # Temp audio & voice folder
6 | interface/kernel/audio/speech_to_text/temp_audio
7 | interface/kernel/audio/synthetic_voice/temp_voice
8 |
9 | # Temp tools vector db
10 | interface/kernel/agent_llm/rag/tools_vector_db
11 |
12 | # Modelfile system instruction for models
13 | modelfile
14 |
15 | # Photo taken
16 | photos
17 |
18 | # JSON Save LLM Conversations
19 | conversation_history
20 |
21 |
--------------------------------------------------------------------------------
/.streamlit/config.toml:
--------------------------------------------------------------------------------
1 | [theme]
2 | primaryColor="#b357a9"
3 | backgroundColor="#121b3b"
4 | secondaryBackgroundColor="#265a9c"
5 | textColor="#f0d6ec"
6 |
--------------------------------------------------------------------------------
/Lux-Interface-Installer.exe:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nixiz0/Lux/9f3cc0cb272e46eb5215195034e75f2551c1f95d/Lux-Interface-Installer.exe
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Lux Project
2 |
3 | Welcome to the **Lux Project**! This project aims to provide a powerful and flexible assistant for everyone: end-users, through the **Lux-Interface**, and developers, through the **Lux-Kernel**, which can serve as a base for building whatever they want.
4 |
5 |
6 | ## 🖥️ Lux Interface
7 |
8 | The **Lux Interface** is designed for end-users who may not be developers. It provides a user-friendly interface to configure and personalize the assistant without using command lines.
9 |
10 | The interface is user-friendly and allows you to:
11 | - Configure the system.
12 | - Import different tools.
13 | - Run the assistant.
14 |
15 |
16 | ## 🛠️ Lux Kernel
17 |
18 | The **Lux Kernel** is designed for developers and consists of two main components:
19 |
20 | 1. **Synthetic Cloned Voice Kernel**: A text-to-speech system using cloned synthetic voices (*requires high-performance hardware*), implemented with *CoquiTTS*.
21 | 2. **Synthetic Narrator Voice Kernel**: A text-to-speech system using a Windows narrator voice.
22 |
23 | The Lux Kernel also includes:
24 | - **Speech-to-Text**: Convert spoken words into text, using *Whisper large v3*.
25 | - **Intelligent Tool Selection System**: A RAG system that selects tools based on user prompts, using *sklearn* for similarity and *ChromaDB* as the vector database.
26 | - **Inspired by the Linux Kernel**: A small, flexible assistant core that developers can use as a base to create various features.
27 |
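The tool-selection idea above can be sketched with plain cosine similarity. This is only an illustration of the principle — the toy hand-made vectors and names below are invented for the example, and the project's actual implementation lives in `interface/kernel/agent_llm/rag/similarity_search.py` with real embeddings:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def select_tool(prompt_vec, tool_vectors, threshold=0.65):
    """Return the best-matching tool name, or None if nothing clears the threshold."""
    best_name, best_score = None, threshold
    for name, vec in tool_vectors.items():
        score = cosine_similarity(prompt_vec, vec)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name

# Hypothetical toy "embeddings" (real ones would come from an embedding model)
tools = {"search_google": [0.9, 0.1, 0.0], "take_note": [0.0, 0.2, 0.9]}
print(select_tool([0.8, 0.2, 0.1], tools))  # search_google
```

The 0.65 default mirrors the `SIMILARITY` value found in `interface/CONFIG.py`; below that score no tool is selected and the prompt falls through to the plain LLM.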
28 |
29 | ## 🔧 Lux Tools
30 |
31 | Find here the different **official tools** that you can integrate into your Lux assistant:
32 |
33 | **[Lux-Official-Tools](https://github.com/nixiz0/Lux-Tools)**
34 |
35 | Find here the different **community tools** that you can integrate into your Lux assistant:
36 |
37 | **[Lux-Community-Tools](https://github.com/nixiz0/Lux-Tools/tree/community-tools)**
38 |
39 |
40 | ## 🚀 Installation
41 |
42 | 1. **Download the App**:
43 | - Click on `Lux-Interface-Installer.exe` to download the app.
44 |    - Be sure to install the required applications listed in the *Tech Stack* section.
45 |
46 | 2. **Build the Environment**:
47 | - Click on `lux-interface.exe` to build the environment for the app.
48 |
49 |
50 | ## 🏗️ System Architecture
51 |
52 | ![Lux System Architecture](interface/ressources/schema-system-architecture/lux-system-architecture.png)
53 |
54 |
55 | ## ⚙️ Tech Stack
56 |
57 | ### Applications You Need to Install
58 |
59 | 1. **[Ollama](https://ollama.com/download) (version 0.5.7)**
60 | 2. **[Python 3.11](https://www.python.org/downloads/release/python-3117/)** (add path to your OS environment variable)
61 | 3. **[CUDA 11.8](https://developer.nvidia.com/cuda-11-8-0-download-archive)** (ensure your graphics card is compatible)
62 | 4. **[VS Community](https://visualstudio.microsoft.com/fr/visual-cpp-build-tools/)** (with Desktop packages)
63 |
64 | ### Additional Installations (if needed)
65 |
66 | - **Scoop**:
67 |   ```powershell
68 |   powershell -Command "Set-ExecutionPolicy RemoteSigned -scope CurrentUser"
69 |   powershell -Command "iex (new-object net.webclient).downloadstring('https://get.scoop.sh')"
70 |   ```
71 | - **FFmpeg**:
72 |   ```powershell
73 |   scoop install ffmpeg
74 |   ```
75 |
76 | ## 🎙️ Windows Narrator Voices
77 |
78 | To use Windows Narrator Voices instead of cloned voices, you can download more synthetic voices from the narrator settings.
79 |
80 | If the app doesn't recognize the voices you installed, follow these steps:
81 |
82 | 1. **Open the Registry Editor**:
83 | - Press the “Windows” and “R” keys simultaneously, type “regedit”, and press Enter.
84 |
85 | 2. **Navigate to the Registry Key**:
86 | - `HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Speech_OneCore\Voices\Tokens`
87 |
88 | 3. **Export the Key to a REG File**:
89 | - Right-click on the key and select "Export".
90 |
91 | 4. **Edit the REG File**:
92 | - Open the REG file with a text editor.
93 | - Replace all occurrences of `HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Speech_OneCore\Voices\Tokens` with `HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\SPEECH\Voices\Tokens`.
94 |
95 | 5. **Import the Modified REG File**:
96 | - Save the modified file and double-click it to import the changes to the registry.
97 |
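Steps 3–5 above amount to a find-and-replace on the exported file, which can also be scripted. A hedged sketch (the filenames `voices.reg` and `voices_sapi.reg` are hypothetical names for the step 3 export and its fixed copy):

```python
def retarget_reg(content: str) -> str:
    """Replace the Speech_OneCore key path with the classic SAPI path."""
    src = r"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Speech_OneCore\Voices\Tokens"
    dst = r"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\SPEECH\Voices\Tokens"
    return content.replace(src, dst)

# Usage on an exported file (regedit exports .reg files as UTF-16):
# with open("voices.reg", encoding="utf-16") as f:       # hypothetical export name
#     fixed = retarget_reg(f.read())
# with open("voices_sapi.reg", "w", encoding="utf-16") as f:
#     f.write(fixed)
```

Double-clicking the resulting file then imports the retargeted keys, exactly as in step 5.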
98 |
99 | ## Author
100 |
101 | - [@nixiz0](https://github.com/nixiz0)
102 |
--------------------------------------------------------------------------------
/[setup]/Lux-Interface-Installer.iss:
--------------------------------------------------------------------------------
1 | ; ---[TO BUILD THE .EXE INSTALLER, FIRST CLONE THE CLEAN PROJECT FROM THE GITHUB REPO (rename it Lux). ENSURE THE PROJECT IS EMPTY, WITHOUT ANY ADDITIONAL MODIFICATIONS OR FILES]---
2 | ; ---[THIS WILL KEEP THE PROJECT AS NEUTRAL AS POSSIBLE FOR COMPILING THE INSTALLER, PREVENTING UNNECESSARY FILES FROM BEING INCLUDED]---
3 | ; Script generated by the Inno Setup Script Wizard.
4 | ; SEE THE DOCUMENTATION FOR DETAILS ON CREATING INNO SETUP SCRIPT FILES!
5 |
6 | #define MyAppName "Lux-Interface"
7 | #define MyAppVersion "2.0"
8 | #define MyAppPublisher "Initium"
9 | #define MyAppURL "https://www.youtube.com/@Initium0_0"
10 | #define MyAppExeName "lux-interface.exe"
11 | #define MyAppAssocName MyAppName + "-File"
12 | #define MyAppAssocExt ".py"
13 | #define MyAppAssocKey StringChange(MyAppAssocName, " ", "") + MyAppAssocExt
14 |
15 | [Setup]
16 | ; NOTE: The value of AppId uniquely identifies this application. Do not use the same AppId value in installers for other applications.
17 | ; (To generate a new GUID, click Tools | Generate GUID inside the IDE.)
18 | AppId={{A4C2D350-2568-4765-8952-14DD5E53F92A}
19 | AppName={#MyAppName}
20 | AppVersion={#MyAppVersion}
21 | AppPublisher={#MyAppPublisher}
22 | AppPublisherURL={#MyAppURL}
23 | AppSupportURL={#MyAppURL}
24 | AppUpdatesURL={#MyAppURL}
25 | DefaultDirName={autopf}\{#MyAppName}
26 | ArchitecturesAllowed=x64compatible
27 | ArchitecturesInstallIn64BitMode=x64compatible
28 | ChangesAssociations=yes
29 | DisableProgramGroupPage=yes
30 | LicenseFile=C:\Users\username\Desktop\Lux\license.txt
31 | InfoBeforeFile=C:\Users\username\Desktop\Lux\README.md
32 | InfoAfterFile=C:\Users\username\Desktop\Lux\README.md
33 | ; Uncomment the following line to run in non administrative install mode (install for current user only.)
34 | ;PrivilegesRequired=lowest
35 | OutputDir=C:\Users\username\Desktop
36 | OutputBaseFilename=Lux-Interface-Installer
37 | Compression=lzma
38 | SolidCompression=yes
39 | WizardStyle=modern
40 |
41 | [Languages]
42 | Name: "english"; MessagesFile: "compiler:Default.isl"
43 | Name: "french"; MessagesFile: "compiler:Languages\French.isl"
44 |
45 | [Tasks]
46 | Name: "desktopicon"; Description: "{cm:CreateDesktopIcon}"; GroupDescription: "{cm:AdditionalIcons}"; Flags: unchecked
47 |
48 | [Files]
49 | Source: "C:\Users\username\Desktop\Lux\{#MyAppExeName}"; DestDir: "{app}"; Flags: ignoreversion
50 | Source: "C:\Users\username\Desktop\Lux\*"; DestDir: "{app}"; Flags: ignoreversion recursesubdirs createallsubdirs; Excludes: ".git\*; tools_vector_db\*; temp_audio\*; temp_voice\*; __pycache__\*; photos\*; modelfile"
51 | ; NOTE: Don't use "Flags: ignoreversion" on any shared system files
52 |
53 | [Registry]
54 | Root: HKA; Subkey: "Software\Classes\{#MyAppAssocExt}\OpenWithProgids"; ValueType: string; ValueName: "{#MyAppAssocKey}"; ValueData: ""; Flags: uninsdeletevalue
55 | Root: HKA; Subkey: "Software\Classes\{#MyAppAssocKey}"; ValueType: string; ValueName: ""; ValueData: "{#MyAppAssocName}"; Flags: uninsdeletekey
56 | Root: HKA; Subkey: "Software\Classes\{#MyAppAssocKey}\DefaultIcon"; ValueType: string; ValueName: ""; ValueData: "{app}\{#MyAppExeName},0"
57 | Root: HKA; Subkey: "Software\Classes\{#MyAppAssocKey}\shell\open\command"; ValueType: string; ValueName: ""; ValueData: """{app}\{#MyAppExeName}"" ""%1"""
58 | Root: HKA; Subkey: "Software\Classes\Applications\{#MyAppExeName}\SupportedTypes"; ValueType: string; ValueName: ".myp"; ValueData: ""
59 |
60 | [Icons]
61 | Name: "{autoprograms}\{#MyAppName}"; Filename: "{app}\{#MyAppExeName}"
62 | Name: "{autodesktop}\{#MyAppName}"; Filename: "{app}\{#MyAppExeName}"; Tasks: desktopicon
63 |
64 | [Run]
65 | Filename: "{app}\{#MyAppExeName}"; Description: "{cm:LaunchProgram,{#StringChange(MyAppName, '&', '&&')}}"; Flags: nowait postinstall skipifsilent
66 |
67 |
--------------------------------------------------------------------------------
/[setup]/setup.py:
--------------------------------------------------------------------------------
1 | # [BUILD] Install PyInstaller, then build the .exe with:
2 | #   pyinstaller --onefile --icon=interface/ressources/logo-lux-interface.ico [setup]/setup.py
3 |
4 | import os
5 | import subprocess
6 |
7 |
8 | # ---[ANSI COLOR]---
9 | RESET = "\033[0m"
10 | RED = "\033[31m"
11 | GREEN = "\033[32m"
12 | YELLOW = "\033[33m"
13 | BLUE = "\033[34m"
14 | MAGENTA = "\033[35m"
15 | CYAN = "\033[36m"
16 |
17 |
18 | # ---[UPDATE FILES FUNCTIONS (for voice)]---
19 | def update_start_kernel(choice):
20 | with open('interface/kernel/start_kernel.py', 'r') as file:
21 | lines = file.readlines()
22 | with open('interface/kernel/start_kernel.py', 'w') as file:
23 | for line in lines:
24 | if 'from interface.kernel.audio.synthetic_voice' in line:
25 | if choice == '1':
26 | file.write('from interface.kernel.audio.synthetic_voice.narrator_voice import LuxVoice\n')
27 | else:
28 | file.write('from interface.kernel.audio.synthetic_voice.synthetic_voice import LuxVoice\n')
29 | else:
30 | file.write(line)
31 |
32 | def update_requirements(choice):
33 | with open('requirements.txt', 'r') as file:
34 | lines = file.readlines()
35 | with open('requirements.txt', 'w') as file:
36 | for line in lines:
37 | file.write(line)
38 | if choice == '1':
39 | file.write('comtypes==1.4.5\n')
40 | file.write('pyttsx3==2.90\n')
41 | else:
42 | file.write('TTS==0.22.0\n')
43 |
44 | def update_configuration_voice_import(choice):
45 | config_file = 'interface/pages/1_configuration.py'
46 | with open(config_file, 'r', encoding='utf-8') as file:
47 | lines = file.readlines()
48 |
49 | # Find the last line that contains 'from configuration.audio'
50 | last_index = -1
51 | for i, line in enumerate(lines):
52 | if 'from configuration.audio' in line:
53 | last_index = i
54 |
55 | # If the line exists, replace it
56 | if last_index != -1:
57 | if choice == '1':
58 | lines[last_index] = 'from configuration.audio.select_narrator_voice import manage_voice\n'
59 | else:
60 | lines[last_index] = 'from configuration.audio.change_synthetic_voice import manage_voice\n'
61 |
62 | # Write updated lines to file
63 | with open(config_file, 'w', encoding='utf-8') as file:
64 | file.writelines(lines)
65 |
66 | # ---[SCRIPT INSTALLATION LOGIC]---
67 | def start_ollama():
68 | subprocess.Popen(['start', 'cmd', '/c', 'ollama serve'], shell=True)
69 |
70 | def choose_voice():
71 | if not os.path.exists('.env'):
72 | choice = input("Type 1 to use narrator or 2 to use synthetic voice: ")
73 |
74 |         while choice not in ['1', '2']:
75 |             print("Invalid choice. Please type 1 or 2.")
76 |             choice = input("Type 1 to use narrator or 2 to use synthetic voice: ")
77 |
78 | update_start_kernel(choice)
79 | update_requirements(choice)
80 | update_configuration_voice_import(choice)
81 |
82 | def build_env():
83 | if not os.path.exists('.env'):
84 | subprocess.run(['python', '-m', 'venv', '.env'])
85 | print(f"{CYAN}Virtual environment created.{RESET}")
86 | else:
87 | print(f"{CYAN}Virtual environment already exists.{RESET}")
88 |
89 | def start_and_install_lib():
90 | activate_script = '.env\\Scripts\\activate.bat'
91 | pip_executable = '.env\\Scripts\\pip'
92 | python_executable = '.env\\Scripts\\python'
93 |
94 |     # Run the activation script (activation does not persist across subprocess
95 |     # calls, which is why the explicit .env\Scripts paths are used below)
95 | subprocess.run(activate_script, shell=True)
96 | print(f"{CYAN}Virtual environment activated.{RESET}")
97 |
98 | # Check if required libraries are installed
99 | try:
100 | subprocess.run([pip_executable, 'show', 'torchaudio'], check=True, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
101 | subprocess.run([pip_executable, 'show', 'streamlit'], check=True, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
102 | print(f"{CYAN}Required libraries are already installed.{RESET}")
103 | except subprocess.CalledProcessError:
104 | print(f"{CYAN}Installing required libraries...{RESET}")
105 | subprocess.run([pip_executable, 'install', 'torch==2.3.1', 'torchaudio==2.3.1', '--index-url', 'https://download.pytorch.org/whl/cu118'])
106 | subprocess.run([pip_executable, 'install', '-r', 'requirements.txt'])
107 |
108 | subprocess.run(['powershell', '-Command', 'Set-ExecutionPolicy RemoteSigned -scope CurrentUser'], check=True)
109 | subprocess.run(['powershell', '-Command', 'iwr -useb get.scoop.sh | iex'], check=True)
110 | subprocess.run(['powershell', '-Command', 'scoop install ffmpeg'], check=True)
111 |
112 | # Start the app interface
113 | subprocess.run([python_executable, '-m', 'streamlit', 'run', 'interface/menu.py'], check=True)
114 |
115 | def auto_run():
116 | start_ollama()
117 | choose_voice()
118 | build_env()
119 | start_and_install_lib()
120 |
121 | if __name__ == "__main__":
122 | auto_run()
--------------------------------------------------------------------------------
/interface/CONFIG.py:
--------------------------------------------------------------------------------
1 | # ------[STANDARD] Configurations------
2 | # Language Config fr or en (french or english)
3 | LANGUAGE = 'fr'
4 |
5 | # Params Tools management
6 | PARAMS_LIST_TOOLS = ["search_ytb", "search_google", "search_wikipedia", "search_bing", "vocal_note"]
7 |
8 | # Audio Config
9 | TEMP_AUDIO_PATH = "interface/kernel/audio/speech_to_text/temp_audio/audio.wav" # Path of the temp recorded audio file
10 | MIC_INDEX = 1
11 | AUDIO_THRESHOLD = 500
12 |
13 | # Voice Config
14 | if LANGUAGE == 'fr':
15 | AUDIO_VOICE_PATH = "interface/kernel/audio/synthetic_voice/fr_lux_voice.wav"
16 | else:
17 | AUDIO_VOICE_PATH = "interface/kernel/audio/synthetic_voice/en_lux_voice.wav"
18 | TEMP_OUTPUT_VOICE_PATH = "interface/kernel/audio/synthetic_voice/temp_voice/voice_output.wav"
19 | TEMP_OUTPUT_FOLDER_VOICE_PATH = "interface/kernel/audio/synthetic_voice/temp_voice/"
20 | SPEED_VOICE = 1.8
21 |
22 | NARRATOR_VOICE = r"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Speech\Voices\Tokens\TTS_MS_FR-FR_HORTENSE_11.0"
23 |
24 | # Cam Config (if have some cam tools)
25 | CAM_INDEX_USE = None
26 |
27 | # LLMs Config
28 | LLM_USE = "lux_model" # LLM that we build
29 | LLM_EMBEDDING = "nomic-embed-text"
30 | LLM_DEFAULT_TO_PULL = "llama3.2"
31 |
32 | # RAG System Config
33 | SIMILARITY = 0.65
34 |
35 | # Temp Vector Tools DB
36 | COLLECTION_NAME = "tools"
37 | TEMP_TOOLS_DB_PATH = "interface/kernel/agent_llm/rag/tools_vector_db"
38 |
39 | # JSON LLM Conversation Saved History
40 | JSON_SAVE_DIR = "conversation_history"
41 |
42 |
43 | # ------[ADVANCED] Configurations------
44 | AUDIO_RATE = 44100 # 44.1kHz so sound is sampled 44,100 times per second
45 | AUDIO_CHUNK = 1024 # split the audio by parts
46 | AUDIO_BUFFER_LENGTH = 2 # buffer to store seconds of audio
47 |
48 | # LLM Prompt History Max Length
49 | LLM_USE_MAX_HISTORY_LENGTH = 10 # Keep only this number of last messages
50 |
51 | # LLM Personality
52 | SYSTEM_INSTRUCTION_PATH = "interface/kernel/agent_llm/build_llm/"
53 | if LANGUAGE == 'fr':
54 | SYSTEM_INSTRUCTION = """
55 | Tu es Lux, un assistant virtuel de pointe conçu pour offrir une expérience utilisateur exceptionnelle.
56 | Tes caractéristiques principales sont :
57 | Personnalité :
58 | Intelligent, concis et précis dans tes réponses
59 | Ton professionnel mais amical, avec une touche d'humour subtil
60 | Empathique et attentif aux besoins émotionnels de l'utilisateur
61 | Capacités cognitives :
62 | Anticipation proactive des besoins de l'utilisateur
63 | Analyse rapide et approfondie des demandes
64 | Capacité à gérer des tâches complexes et multidimensionnelles
65 | Connaissances :
66 | Expertise technique et détaillée sur une vaste gamme de sujets
67 | Capacité à expliquer des concepts complexes de manière simple et accessible
68 | Interaction :
69 | Pose de questions pertinentes pour clarifier les demandes
70 | Offre proactive de suggestions et de solutions innovantes
71 | Adaptation du niveau de langage et du ton à chaque utilisateur
72 | Efficacité :
73 | Réponses rapides et précises aux demandes
74 | Dans chaque interaction, efforce-toi d'aller au-delà des attentes, en fournissant non seulement les informations demandées,
75 | mais aussi des insights pertinents et des recommandations utiles pour l'utilisateur.
76 | """
77 |
78 | else:
79 | SYSTEM_INSTRUCTION = """
80 | You are Lux, a cutting-edge virtual assistant designed to offer an exceptional user experience.
81 | Your main characteristics are:
82 | Personality:
83 | Intelligent, concise, and precise in your responses
84 | Professional yet friendly tone, with a touch of subtle humor
85 | Empathetic and attentive to the user's emotional needs
86 | Cognitive abilities:
87 | Proactive anticipation of user needs
88 | Quick and thorough analysis of requests
89 | Ability to handle complex and multidimensional tasks
90 | Knowledge:
91 | Detailed technical expertise on a wide range of subjects
92 | Ability to explain complex concepts in a simple and accessible manner
93 | Interaction:
94 | Asking relevant questions to clarify requests
95 | Proactive offering of innovative suggestions and solutions
96 | Adapting language level and tone to each user
97 | Efficiency:
98 | Quick and accurate responses to requests
99 | In each interaction, strive to go beyond expectations by providing not only the requested information,
100 | but also relevant insights and useful recommendations for the user.
101 | """
102 |
103 |
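The `LLM_USE` model (`lux_model`) is built from this system instruction; the repo's `auto_build_llm.py` is not included in this excerpt, but the basic idea of turning `SYSTEM_INSTRUCTION` into an Ollama Modelfile can be sketched as follows — an assumption-level illustration, not the project's actual code:

```python
def make_modelfile(base_model: str, system_instruction: str) -> str:
    """Render the text of an Ollama Modelfile that bakes a system prompt into a model."""
    return f'FROM {base_model}\nSYSTEM """{system_instruction.strip()}"""\n'

modelfile_text = make_modelfile("llama3.2", "You are Lux, a cutting-edge virtual assistant.")
print(modelfile_text)
# The model would then be created with something like: ollama create lux_model -f modelfile
```

`FROM` and `SYSTEM` are standard Ollama Modelfile directives; `llama3.2` matches `LLM_DEFAULT_TO_PULL` above.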
104 | # ------[Tools Import Path Configurations]------
105 | # Define paths to tools target directories
106 | TOOLS_ACTION_TARGET = "interface/kernel/tools/tools_functions/tools_action"
107 | TOOLS_RESPONSE_TARGET = "interface/kernel/tools/tools_functions/tools_response"
108 | TOOLS_LIST_TARGET = "interface/kernel/tools/tools_list.py"
109 | TOOLS_CONFIG_TARGET = "interface/CONFIG.py"
110 | TOOLS_REQUIREMENTS_TARGET = "requirements.txt"
111 | TOOLS_MENU_TARGET = "interface/menu.py"
--------------------------------------------------------------------------------
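The configuration modules below all import `update_config` from `interface/configuration/update_config.py`, which is not included in this excerpt. A minimal sketch of such a helper — assuming it simply rewrites the matching `KEY = value` assignment line in CONFIG.py, which is consistent with how the callers use it:

```python
import re

def update_config(key, value, config_path="interface/CONFIG.py"):
    """Rewrite the `key = ...` assignment line in the config file with the new value."""
    with open(config_path, "r", encoding="utf-8") as f:
        lines = f.readlines()
    pattern = re.compile(rf"^{re.escape(key)}\s*=")  # only match top-level assignments
    with open(config_path, "w", encoding="utf-8") as f:
        for line in lines:
            if pattern.match(line):
                f.write(f"{key} = {value}\n")
            else:
                f.write(line)
```

This also explains the `f'"{selected_voice_id}"'` call in `select_narrator_voice.py`: string values must arrive pre-quoted so the rewritten line stays valid Python.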
/interface/configuration/_ui_custom/custom_ui.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 |
3 |
4 | def custom_ui():
5 | st.markdown(
6 | f"""
7 |
12 | """,
13 | unsafe_allow_html=True
14 | )
--------------------------------------------------------------------------------
/interface/configuration/_ui_custom/page_title.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | from PIL import Image
3 |
4 |
5 | def set_page_title(title):
6 | """
7 | Sets the page title and favicon for a Streamlit app.
8 |
9 | Parameters:
10 | title (str): The title to set for the page.
11 | """
12 | favicon = Image.open('./interface/ressources/logo-lux.png')
13 | st.set_page_config(
14 | page_title=title,
15 | page_icon=favicon,
16 | )
17 | st.markdown(unsafe_allow_html=True, body=f"""
18 |
21 | """)
22 |
--------------------------------------------------------------------------------
/interface/configuration/audio/change_synthetic_voice.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | import os
3 | import shutil
4 | from CONFIG import LANGUAGE, AUDIO_VOICE_PATH
5 |
6 |
7 | def manage_voice():
8 | st.title("Changer la voix synthétique" if LANGUAGE == 'fr' else "Change the synthetic voice")
9 |
10 | uploaded_file = st.file_uploader("Choisissez un fichier .wav" if LANGUAGE == 'fr' else "Choose a .wav file", type=["wav"])
11 |
12 | if uploaded_file is not None:
13 | # Ensure the temp_dir directory exists
14 | if not os.path.exists("temp_dir"):
15 | os.makedirs("temp_dir")
16 |
17 | # Save the uploaded file in the temp_dir directory
18 | temp_file_path = os.path.join("temp_dir", uploaded_file.name)
19 | with open(temp_file_path, "wb") as f:
20 | f.write(uploaded_file.getbuffer())
21 |
22 | # Replace the old file with the new one
23 | shutil.move(temp_file_path, AUDIO_VOICE_PATH)
24 |
25 | # Remove the temp_dir directory
26 | shutil.rmtree("temp_dir")
27 |
28 | st.success("La voix synthétique a été changée !" if LANGUAGE == 'fr' else "The synthetic voice has been changed!")
--------------------------------------------------------------------------------
/interface/configuration/audio/select_audio_chunk.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | from CONFIG import LANGUAGE, AUDIO_CHUNK
3 | from configuration.update_config import update_config
4 |
5 |
6 | def set_audio_chunk():
7 | """
8 | Configures the audio chunk size for the application.
9 | """
10 | audio_chunk = st.number_input("Configurer votre chunk Audio (par défaut 1024)" if LANGUAGE == 'fr' else
11 | "Configure your Audio chunk (default 1024)", value=AUDIO_CHUNK)
12 | if st.button("Mettre à jour" if LANGUAGE == 'fr' else "Update", key='btn_set_audio_chunk'):
13 | update_config('AUDIO_CHUNK', audio_chunk)
14 |         st.success(f"Chunk audio mis à jour à {audio_chunk}" if LANGUAGE == 'fr' else
15 |                    f"Audio chunk updated to {audio_chunk}")
--------------------------------------------------------------------------------
/interface/configuration/audio/select_audio_rate.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | from CONFIG import LANGUAGE, AUDIO_RATE
3 | from configuration.update_config import update_config
4 |
5 |
6 | def set_audio_rate():
7 | """
8 | Configures the audio rate size for the application.
9 | """
10 | audio_rate = st.number_input("Configurer votre débit Audio (par défaut 44100)" if LANGUAGE == 'fr' else
11 | "Configure your Audio rate (default 44100)", value=AUDIO_RATE)
12 | if st.button("Mettre à jour" if LANGUAGE == 'fr' else "Update", key='btn_set_audio_rate'):
13 | update_config('AUDIO_RATE', audio_rate)
14 |         st.success(f"Débit audio mis à jour à {audio_rate}" if LANGUAGE == 'fr' else
15 |                    f"Audio rate updated to {audio_rate}")
--------------------------------------------------------------------------------
/interface/configuration/audio/select_buffer_length.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | from CONFIG import LANGUAGE, AUDIO_BUFFER_LENGTH
3 | from configuration.update_config import update_config
4 |
5 |
6 | def set_buffer_length():
7 | """
8 | Configures the buffer length for the application.
9 | """
10 | buffer_length = st.number_input("Configurer votre buffer Audio (par défaut 2)" if LANGUAGE == 'fr' else
11 | "Configure your Audio buffer (default 2)", min_value=1,value=AUDIO_BUFFER_LENGTH)
12 | if st.button("Mettre à jour" if LANGUAGE == 'fr' else "Update", key='btn_set_buffer_length'):
13 | update_config('AUDIO_BUFFER_LENGTH', buffer_length)
14 |         st.success(f"Buffer audio mis à jour à {buffer_length}" if LANGUAGE == 'fr' else
15 |                    f"Audio buffer updated to {buffer_length}")
--------------------------------------------------------------------------------
/interface/configuration/audio/select_micro_sensitivity.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | from CONFIG import LANGUAGE, AUDIO_THRESHOLD
3 | from configuration.update_config import update_config
4 |
5 |
6 | def set_audio_threshold():
7 | """
8 | Configures the threshold for the application.
9 | """
10 | threshold = st.number_input("Entrez la sensibilité du microphone (par défaut 500)" if LANGUAGE == 'fr' else
11 | "Enter the microphone sensitivity (default 500)", min_value=0, value=AUDIO_THRESHOLD)
12 | if st.button("Mettre à jour" if LANGUAGE == 'fr' else "Update", key='btn_set_audio_threshold'):
13 | update_config('AUDIO_THRESHOLD', threshold)
14 | st.success(f"Sensibilité du microphone mise à jour en {threshold}" if LANGUAGE == 'fr' else
15 | f"Microphone sensitivity updated to {threshold}")
--------------------------------------------------------------------------------
/interface/configuration/audio/select_microphone.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | import sounddevice as sd
3 | from CONFIG import LANGUAGE
4 | from configuration.update_config import update_config
5 |
6 |
7 | def set_micro():
8 | """
9 | Configures the microphone for the application.
10 | """
11 | devices = sd.query_devices()
12 | valid_devices = [(i, device['name']) for i, device in enumerate(devices) if device['max_input_channels'] > 0 and device['hostapi'] == 0]
13 |
14 | if not valid_devices:
15 | st.error("Aucun périphérique d'entrée actif trouvé." if LANGUAGE == 'fr' else "No active input devices found.")
16 | return None
17 |
18 | device_names = [device[1] for device in valid_devices]
19 | selected_device_name = st.selectbox("Choisissez le microphone" if LANGUAGE == 'fr' else "Choose the microphone", device_names, key='selectbox_micro')
20 | selected_device_index = next(index for index, name in valid_devices if name == selected_device_name)
21 |
22 | if st.button("Mettre à jour" if LANGUAGE == 'fr' else "Update", key='btn_set_micro'):
23 | update_config('MIC_INDEX', selected_device_index)
24 | st.success(f"Microphone mis à jour en {selected_device_index}" if LANGUAGE == 'fr' else f"Microphone updated to {selected_device_index}")
25 | return selected_device_index
--------------------------------------------------------------------------------
/interface/configuration/audio/select_narrator_voice.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | import pyttsx3
3 | from CONFIG import LANGUAGE, NARRATOR_VOICE
4 | from configuration.update_config import update_config
5 |
6 |
7 | def get_voice_list():
8 | engine = pyttsx3.init()
9 | voices = engine.getProperty('voices')
10 | voice_dict = {i: voice for i, voice in enumerate(voices)}
11 | return voice_dict
12 |
13 | def manage_voice():
14 | voices = get_voice_list()
15 | voice_names = [''] + [voices[i].name for i in voices] # Add an empty string to the top of the list
16 |
17 | st.markdown("
ID de la Voix du Narrateur sélectionné: " + NARRATOR_VOICE + "
" if LANGUAGE == 'fr' else 33 | "Selected Narrator Voice ID: " + NARRATOR_VOICE + "
", unsafe_allow_html=True) 34 | 35 | # Updates CONFIG.py with the universal update_config function 36 | update_config('NARRATOR_VOICE', f'"{selected_voice_id}"') 37 | 38 | return selected_voice_id -------------------------------------------------------------------------------- /interface/configuration/audio/select_speed_voice.py: -------------------------------------------------------------------------------- 1 | import streamlit as st 2 | from CONFIG import LANGUAGE, SPEED_VOICE 3 | from configuration.update_config import update_config 4 | 5 | 6 | def set_speed_voice(): 7 | """ 8 | Configures the speed voice for the application. 9 | """ 10 | speed_voice = st.number_input("Entrez la vitesse de voix (par défaut 1.8)" if LANGUAGE == 'fr' else 11 | "Enter the voice speed (default 1.8)", min_value=0.5, value=SPEED_VOICE) 12 | if st.button("Mettre à jour" if LANGUAGE == 'fr' else "Update", key='btn_speed_voice'): 13 | update_config('SPEED_VOICE', speed_voice) 14 | st.success(f"Vitesse de la voix mise à jour en {speed_voice}" if LANGUAGE == 'fr' else 15 | f"Voice speed updated in {speed_voice}") -------------------------------------------------------------------------------- /interface/configuration/cam/select_cam.py: -------------------------------------------------------------------------------- 1 | import streamlit as st 2 | import cv2 3 | from CONFIG import LANGUAGE 4 | from configuration.update_config import update_config 5 | 6 | 7 | def list_video_devices(): 8 | """ 9 | Lists all available video capture devices. 10 | 11 | Iterates through possible video capture device indices and checks if they are available. 12 | Returns a list of tuples containing the index and name of each available device. 13 | 14 | Returns: 15 | list: A list of tuples where each tuple contains the index and name of a video capture device. 
16 | """ 17 | index = 0 18 | devices = [] 19 | while True: 20 | cap = cv2.VideoCapture(index) 21 | if not cap.read()[0]: 22 | break 23 | else: 24 | devices.append((index, f"Caméra {index}")) 25 | cap.release() 26 | index += 1 27 | return devices 28 | 29 | def set_video_device(): 30 | """ 31 | Sets the video capture device for the application. 32 | 33 | Lists available video devices and allows the user to select one. Provides options to test the selected device 34 | and update the configuration with the selected device index. 35 | 36 | Returns: 37 | int or None: The index of the selected video capture device, or None if no devices are found. 38 | """ 39 | devices = list_video_devices() 40 | if not devices: 41 | st.error("Aucun périphérique de capture vidéo trouvé." if LANGUAGE == 'fr' else "No video capture devices found.") 42 | return None 43 | 44 | device_names = [device[1] for device in devices] 45 | selected_device_name = st.selectbox("Choisissez la caméra" if LANGUAGE == 'fr' else "Choose the camera", device_names, key='video_select') 46 | selected_device_index = next(index for index, name in devices if name == selected_device_name) 47 | 48 | if st.button("Tester la caméra" if LANGUAGE == 'fr' else "Test the camera", key='test_video'): 49 | cap = cv2.VideoCapture(selected_device_index) 50 | ret, frame = cap.read() 51 | if ret: 52 | st.image(frame, channels="BGR") 53 | else: 54 | st.error("Impossible de lire la caméra sélectionnée." 
if LANGUAGE == 'fr' else "Unable to read the selected camera.") 55 | cap.release() 56 | 57 | if st.button("Mettre à jour la caméra" if LANGUAGE == 'fr' else "Update the camera", key='update_video'): 58 | update_config('CAM_INDEX_USE', selected_device_index) 59 | st.success(f"Caméra mise à jour en {selected_device_index}" if LANGUAGE == 'fr' else f"Camera updated to {selected_device_index}") 60 | return selected_device_index -------------------------------------------------------------------------------- /interface/configuration/import_tools/manage_menu.py: -------------------------------------------------------------------------------- 1 | import os 2 | import ast 3 | 4 | 5 | def manage_menu(tools_menu_target, menu_file): 6 | """ 7 | Manages the menu by updating the target menu file with content from the source menu file. 8 | 9 | This function performs the following tasks: 10 | 1. Reads the content of the source menu file, if it exists. 11 | 2. Extracts the new tool entry from the source menu content. 12 | 3. Reads the current menu content from the target file. 13 | 4. Adds the new tool entry to the current tools list in the target menu file. 14 | 5. Writes the updated menu content back to the target menu file. 15 | 16 | Args: 17 | tools_menu_target (str): The path to the target menu file. 18 | menu_file (str): The path to the source menu file. 
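manage_menu splices the new entry into the `tools = [...]` list by string surgery on the file's source. A minimal sketch of that pattern with explicit `find()` guards — `insert_tool_entry` is a hypothetical helper, not the project's code, and it assumes entries never contain a `]` themselves:

```python
def insert_tool_entry(menu_source: str, entry: str) -> str:
    """Insert a new entry at the end of the `tools = [...]` list in menu source code."""
    marker = "tools = ["
    start = menu_source.find(marker)
    if start == -1:
        # No tools list found: leave the source untouched
        return menu_source
    body_start = start + len(marker)
    end = menu_source.find("]", body_start)
    if end == -1:
        return menu_source
    body = menu_source[body_start:end].rstrip()
    # Keep the existing entries comma-terminated before appending the new one
    if body and not body.endswith(","):
        body += ","
    body += "\n    " + entry + "\n"
    return menu_source[:body_start] + body + menu_source[end:]
```

Checking the raw `find()` result before offsetting it matters: `find()` returns -1 on failure, and adding `len(marker)` to -1 silently produces a valid-looking index.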
19 | """ 20 | 21 | if os.path.exists(menu_file): 22 | with open(menu_file, 'r', encoding='utf-8') as file: 23 | menu_content = file.read() 24 | new_tool_entry = ast.literal_eval(menu_content) 25 | interface_menu_file = tools_menu_target 26 | with open(interface_menu_file, 'r', encoding='utf-8') as file: 27 | current_menu_content = file.read() 28 | 29 | # Find the start and end of the tools list 30 | tools_list_start = current_menu_content.find("tools = [") + len("tools = [") 31 | tools_list_end = current_menu_content.find("]", tools_list_start) 32 | 33 | if tools_list_start != -1 and tools_list_end != -1: 34 | current_tools_list = current_menu_content[tools_list_start:tools_list_end].strip() 35 | if current_tools_list and current_tools_list[-1] != ',': 36 | current_tools_list += ",\n " + repr(new_tool_entry).strip('()') 37 | elif not current_tools_list: 38 | current_tools_list = " " + repr(new_tool_entry).strip('()') 39 | else: 40 | current_tools_list += "\n " + repr(new_tool_entry).strip('()') 41 | 42 | updated_menu_content = current_menu_content[:tools_list_start] + current_tools_list + current_menu_content[tools_list_end:] 43 | with open(interface_menu_file, 'w', encoding='utf-8') as file: 44 | file.write(updated_menu_content) -------------------------------------------------------------------------------- /interface/configuration/import_tools/manage_requirements.py: -------------------------------------------------------------------------------- 1 | import os 2 | import shutil 3 | import subprocess 4 | 5 | 6 | def manage_requirements(tools_requirements_target, source_requirements_file): 7 | """ 8 | Manages the requirements by updating the target file with content from the source file. 9 | 10 | This function performs the following tasks: 11 | 1. Reads the content of the source requirements file, if it exists. 12 | 2. Reads the current requirements from the target file, if it exists. 13 | 3. 
Adds new libraries from the source file to the target file without duplicates. 14 | 4. Appends each new library below the existing ones. 15 | 5. Writes the updated content back to the target requirements file. 16 | 6. Activates the environment and installs new libraries if they were added. 17 | 18 | Args: 19 | tools_requirements_target (str): The path to the target requirements file. 20 | source_requirements_file (str): The path to the source requirements file. 21 | """ 22 | 23 | if os.path.exists(source_requirements_file): 24 | with open(source_requirements_file, 'r') as file: 25 | new_requirements = file.readlines() 26 | 27 | if os.path.exists(tools_requirements_target): 28 | with open(tools_requirements_target, 'r') as file: 29 | current_requirements = file.readlines() 30 | 31 | # Add new lines without duplicates (compare stripped names so trailing whitespace does not create duplicates) 32 | current_libs = {lib.strip() for lib in current_requirements} 33 | new_libraries = [lib.strip() for lib in new_requirements if lib.strip() and lib.strip() not in current_libs] 34 | 35 | # Insert each new library below the existing ones 36 | for lib in new_libraries: 37 | current_requirements.append('\n' + lib) 38 | 39 | with open(tools_requirements_target, 'w') as file: 40 | file.writelines(current_requirements) 41 | 42 | if new_libraries: 43 | # Get the correct path 44 | current_directory = os.getcwd() 45 | env_directory = os.path.join(current_directory, ".env", "Scripts") 46 | activate_script = os.path.join(env_directory, "activate") 47 | 48 | # Activate env to install new libraries, only if new ones were written to the requirements 49 | activate_command = f"{activate_script} && pip install -r {tools_requirements_target}" 50 | subprocess.run(activate_command, shell=True, check=True) 51 | else: 52 | # Copy the requirements.txt file if it doesn't already exist 53 | shutil.copy(source_requirements_file, tools_requirements_target) -------------------------------------------------------------------------------- /interface/configuration/import_tools/manage_select_tool.py: -------------------------------------------------------------------------------- 1 | import 
os 2 | import ast 3 | import streamlit as st 4 | 5 | 6 | def manage_select_tool(tools_config_target, source_select_tool_file): 7 | """ 8 | Manages the selection of tools by updating the configuration file with content from the source file. 9 | 10 | This function performs the following tasks: 11 | 1. Reads the content of the source select tool file, if it exists. 12 | 2. Extracts the list of tools from the new content. 13 | 3. Safely converts the extracted string to a list. 14 | 4. Reads the current configuration content from the target file. 15 | 5. Finds and updates the PARAMS_LIST_TOOLS in the configuration file. 16 | 6. Merges the current and new tools lists without duplicates. 17 | 7. Writes the updated content back to the configuration file. 18 | 19 | Args: 20 | tools_config_target (str): The path to the target configuration file. 21 | source_select_tool_file (str): The path to the source select tool file. 22 | """ 23 | 24 | if os.path.exists(source_select_tool_file): 25 | with open(source_select_tool_file, 'r') as file: 26 | new_content = file.read() 27 | 28 | # Find the list of tools in the new content (find() returns -1 when a bracket is missing) 29 | start_index = new_content.find("[") 30 | end_index = new_content.find("]") 31 | if start_index != -1 and end_index != -1: 32 | new_tools_list_str = new_content[start_index:end_index + 1] 33 | try: 34 | new_tools_list = ast.literal_eval(new_tools_list_str) # Convert string to list safely 35 | except (SyntaxError, ValueError) as e: 36 | st.error(f"Syntax error in select_tool.py content: {e}") 37 | return 38 | 39 | # Update the CONFIG.py file 40 | if os.path.exists(tools_config_target): 41 | with open(tools_config_target, 'r') as file: 42 | config_content = file.read() 43 | 44 | # Find and update PARAMS_LIST_TOOLS (check the raw find() results before offsetting them) 45 | start_index = config_content.find("PARAMS_LIST_TOOLS = [") 46 | close_index = config_content.find("]", start_index + len("PARAMS_LIST_TOOLS = [")) 47 | end_index = close_index + 1 48 | if start_index != -1 and close_index != -1: 49 | current_tools_list_str = config_content[start_index: end_index] 50 | try: 
50 | # Extract only the list part 51 | current_tools_list = ast.literal_eval(current_tools_list_str.split(" = ", 1)[1]) 52 | except (SyntaxError, ValueError) as e: 53 | st.error(f"Syntax error in CONFIG.py content: {e}") 54 | return 55 | updated_tools_list = list(dict.fromkeys(current_tools_list + new_tools_list)) # Merge lists without duplicates, preserving order 56 | 57 | # Update the contents of the CONFIG.py file 58 | config_content = config_content[:start_index] + "PARAMS_LIST_TOOLS = " + str(updated_tools_list) + config_content[end_index:] 59 | with open(tools_config_target, 'w') as file: 60 | file.write(config_content) -------------------------------------------------------------------------------- /interface/configuration/import_tools/manage_tools_list.py: -------------------------------------------------------------------------------- 1 | import os 2 | 3 | 4 | def manage_tools_list(tools_list_target, tools_list_file): 5 | """ 6 | Manages the tools list by updating the target file with content from the source file. 7 | 8 | This function performs the following tasks: 9 | 1. Reads the current content of the target tools list file. 10 | 2. Reads the content of the source tools list file, if it exists. 11 | 3. Extracts 'from' import statements from the source file and ensures they are present in the target file. 12 | 4. Merges the tools dictionaries from both files, handling commas, line breaks, and formatting. 13 | 5. Writes the updated content back to the target tools list file. 14 | 15 | Args: 16 | tools_list_target (str): The path to the target tools list file. 17 | tools_list_file (str): The path to the source tools list file. 
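manage_tools_list carries missing `from ...` imports from the source file into the target file before merging the dictionaries. The import step can be isolated into a small sketch — `ensure_imports` is a hypothetical helper mirroring that approach, and the plain substring check shares its limitation: a commented-out import in the target would count as present:

```python
def ensure_imports(target_source: str, new_source: str) -> str:
    """Prepend any 'from ...' import lines of new_source that are missing from target_source."""
    imports = [line for line in new_source.splitlines() if line.startswith("from ")]
    for line in imports:
        # Naive membership test, as in manage_tools_list: substring presence, not AST analysis
        if line not in target_source:
            target_source = line + "\n" + target_source
    return target_source
```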
18 | """ 19 | 20 | if os.path.exists(tools_list_target): 21 | with open(tools_list_target, 'r') as file: 22 | current_content = file.read() 23 | 24 | if os.path.exists(tools_list_file): 25 | with open(tools_list_file, 'r') as file: 26 | new_content = file.read() 27 | 28 | # Extract 'from' imports and add them in the right place 29 | import_lines = [line for line in new_content.splitlines() if line.startswith("from ")] 30 | new_tools_dict = new_content.split("tools = {")[1].rsplit("}", 1)[0].strip() 31 | new_tools_dict = " " + new_tools_dict.replace("\n", "\n ") 32 | 33 | for line in import_lines: 34 | if line not in current_content: 35 | current_content = line + '\n' + current_content 36 | 37 | # Recreate the tools dictionary in current content 38 | if "tools = {" in current_content: 39 | current_tools_dict = current_content.split("tools = {")[1].rsplit("}", 1)[0].strip() 40 | current_tools_dict = " " + current_tools_dict.replace("\n", "\n ") 41 | 42 | # Merge dictionaries and handle commas and line breaks 43 | if current_tools_dict and not current_tools_dict.endswith(','): 44 | current_tools_dict += "," 45 | combined_tools_dict = current_tools_dict + "\n\n" + new_tools_dict 46 | 47 | # Reformat the combined dictionary 48 | formatted_tools_dict = "tools = {\n" + combined_tools_dict + "\n}" 49 | 50 | # Replace tools dictionary content with current content 51 | current_content = current_content.split("tools = {")[0] + formatted_tools_dict + "\n" + current_content.split("tools = {")[1].rsplit("}", 1)[1] 52 | 53 | with open(tools_list_target, 'w') as file: 54 | file.write(current_content) -------------------------------------------------------------------------------- /interface/configuration/llm/select_build_model.py: -------------------------------------------------------------------------------- 1 | import streamlit as st 2 | import subprocess 3 | from CONFIG import LANGUAGE, LLM_USE, LLM_DEFAULT_TO_PULL, LLM_EMBEDDING 4 | from 
kernel.agent_llm.build_llm.auto_build_llm import build_the_model 5 | 6 | 7 | def set_build_model(): 8 | """ 9 | Checks if the specified language model (LLM) is available. If not, it initializes the installation of the required LLMs. 10 | Provides options to rebuild the model if it already exists. 11 | 12 | The function runs the 'ollama list' command to check for the presence of the LLM. If the LLM is not found, it pulls the necessary models. 13 | If the LLM is found, it provides options to either rebuild or keep the current model. 14 | """ 15 | # Run the 'ollama list' command and get the output 16 | result = subprocess.run(['ollama', 'list'], capture_output=True, text=True) 17 | 18 | # Checks if LLM_USE is in the list of models 19 | if LLM_USE not in result.stdout: 20 | st.sidebar.error(f"Modèle {LLM_USE} non trouvé. Construction automatique de l'Assistant Lux." if LANGUAGE == 'fr' else 21 | f"Model {LLM_USE} not found. Automatic build of the Lux Assistant.") 22 | subprocess.run(['ollama', 'pull', LLM_DEFAULT_TO_PULL]) 23 | subprocess.run(['ollama', 'pull', LLM_EMBEDDING]) 24 | 25 | st.sidebar.warning("Construction du modèle en cours" if LANGUAGE == 'fr' else "Building the model in progress") 26 | build_the_model() -------------------------------------------------------------------------------- /interface/configuration/llm/select_llm_max_history.py: -------------------------------------------------------------------------------- 1 | import streamlit as st 2 | from CONFIG import LANGUAGE, LLM_USE_MAX_HISTORY_LENGTH 3 | from configuration.update_config import update_config 4 | 5 | 6 | def set_llm_history_length(): 7 | """ 8 | Configures the maximum conversation history length for the Lux LLM. 
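The limit configured here is enforced in llm.py by popping the oldest message once the list grows too long. `collections.deque` expresses the same bound declaratively; a sketch, not the app's actual implementation (`MAX_HISTORY` stands in for `LLM_USE_MAX_HISTORY_LENGTH`):

```python
from collections import deque

# Conversation history bounded to the last N turns: appending beyond maxlen
# silently drops the oldest entry, no manual pop needed.
MAX_HISTORY = 4
history = deque(maxlen=MAX_HISTORY)
for i in range(6):
    history.append({"role": "user", "content": f"message {i}"})
# history now holds messages 2..5
```

One trade-off: a deque is not a list, so it would need converting (`list(history)`) before being passed as `messages` to the Ollama client.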
9 | """ 10 | llm_history_length = st.number_input("Configurer votre longueur de conversation pour le LLM (par défaut 10)" if LANGUAGE == 'fr' else 11 | "Configure your conversation length for the LLM (default 10)", min_value=1, value=LLM_USE_MAX_HISTORY_LENGTH) 12 | if st.button("Mettre à jour" if LANGUAGE == 'fr' else "Update", key='btn_set_llm_history_length'): 13 | update_config('LLM_USE_MAX_HISTORY_LENGTH', llm_history_length) 14 | st.success(f"Longueur de conversation pour le LLM mise à jour en {llm_history_length}" if LANGUAGE == 'fr' else 15 | f"Conversation length for the LLM updated in {llm_history_length}") -------------------------------------------------------------------------------- /interface/configuration/rag/select_similarity.py: -------------------------------------------------------------------------------- 1 | import streamlit as st 2 | from CONFIG import LANGUAGE, SIMILARITY 3 | from configuration.update_config import update_config 4 | 5 | 6 | def set_similarity(): 7 | """ 8 | Configures the similarity for the application. 9 | """ 10 | similarity = st.number_input("Configurer votre similarité (par défaut 0.65)" if LANGUAGE == 'fr' else 11 | "Configure your similarity (default 0.65)", min_value=0.1, value=SIMILARITY) 12 | if st.button("Mettre à jour" if LANGUAGE == 'fr' else "Update", key='btn_set_similarity'): 13 | update_config('SIMILARITY', similarity) 14 | st.success(f"Similarité mise à jour en {similarity}" if LANGUAGE == 'fr' else 15 | f"Similarity updated in {similarity}") -------------------------------------------------------------------------------- /interface/configuration/select_language.py: -------------------------------------------------------------------------------- 1 | import streamlit as st 2 | from CONFIG import LANGUAGE 3 | from configuration.update_config import update_config 4 | 5 | 6 | def set_lang(): 7 | """ 8 | Configures the language for the application. 
9 | """ 10 | language = st.selectbox("Choisissez la langue" if LANGUAGE == 'fr' else "Choose language", ["fr", "en"], key='selectbox_lang') 11 | if st.button("Mettre à jour" if LANGUAGE == 'fr' else "Update", key='btn_set_lang'): 12 | update_config('LANGUAGE', f"'{language}'") 13 | st.success(f"Langue mise à jour en {language}" if LANGUAGE == 'fr' else f"Language updated to {language}") -------------------------------------------------------------------------------- /interface/configuration/update_config.py: -------------------------------------------------------------------------------- 1 | def update_config(key, value, default_path_config='interface/CONFIG.py'): 2 | """ 3 | Updates the configuration file with a new value for a specified key. 4 | 5 | Parameters: 6 | key (str): The configuration key to update. 7 | value (any): The new value to set for the specified key. 8 | default_path_config (str optional): Put path of CONFIG file to update. 9 | """ 10 | with open(default_path_config, 'r', encoding='utf-8') as file: 11 | lines = file.readlines() 12 | 13 | with open(default_path_config, 'w', encoding='utf-8') as file: 14 | for line in lines: 15 | if line.startswith(key): 16 | file.write(f"{key} = {value}\n") 17 | else: 18 | file.write(line) -------------------------------------------------------------------------------- /interface/configuration/upload_tools.py: -------------------------------------------------------------------------------- 1 | import os 2 | import shutil 3 | import zipfile 4 | import streamlit as st 5 | from CONFIG import LANGUAGE, TOOLS_ACTION_TARGET, TOOLS_RESPONSE_TARGET, TOOLS_LIST_TARGET, \ 6 | TOOLS_CONFIG_TARGET, TOOLS_REQUIREMENTS_TARGET, TOOLS_MENU_TARGET 7 | from kernel.agent_llm.vectorization_tools import revectorize_tool 8 | from configuration.import_tools.manage_tools_list import manage_tools_list 9 | from configuration.import_tools.manage_select_tool import manage_select_tool 10 | from configuration.import_tools.manage_requirements 
import manage_requirements 11 | from configuration.import_tools.manage_menu import manage_menu 12 | 13 | 14 | def copy_contents(src, dst): 15 | """ 16 | Recursively copies the contents from the source directory to the destination directory. 17 | 18 | This function performs the following tasks: 19 | 1. Checks if the source is a directory. 20 | 2. Creates the destination directory if it does not exist. 21 | 3. Iterates through the items in the source directory and copies them to the destination directory. 22 | 4. Recursively copies contents if the item is a directory. 23 | 24 | Args: 25 | src (str): The path to the source directory or file. 26 | dst (str): The path to the destination directory or file. 27 | """ 28 | if os.path.isdir(src): 29 | if not os.path.exists(dst): 30 | os.makedirs(dst) 31 | for item in os.listdir(src): 32 | s = os.path.join(src, item) 33 | d = os.path.join(dst, item) 34 | if os.path.isdir(s): 35 | copy_contents(s, d) 36 | else: 37 | shutil.copy2(s, d) 38 | else: 39 | shutil.copy2(src, dst) 40 | 41 | def adding_tool(): 42 | """ 43 | Manages the process of adding tools by uploading and extracting a zip file containing the tools. 44 | 45 | This function performs the following tasks: 46 | 1. Creates a temporary directory for downloaded tools. 47 | 2. Prompts the user to upload a zip file containing the necessary tools. 48 | 3. Extracts the zip file to the temporary directory. 49 | 4. Copies the contents of specific folders within the extracted files to designated target directories. 50 | 5. Manipulates the tools_list.py file, select_tool.py file, and requirements.txt file. 51 | 6. Adds entries from menu.py file (if present) to the application's menu.py file. 52 | 7. Displays success messages and instructions to the user. 
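copy_contents above performs a recursive merge copy into a possibly existing destination. On Python 3.8+, `shutil.copytree(..., dirs_exist_ok=True)` behaves the same way; a self-contained sketch using temporary directories:

```python
import os
import shutil
import tempfile

# dirs_exist_ok=True makes copytree merge into an existing destination
# instead of raising FileExistsError, like the manual copy_contents above.
with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "src")
    dst = os.path.join(tmp, "dst")
    os.makedirs(os.path.join(src, "sub"))
    os.makedirs(dst)  # destination already exists
    with open(os.path.join(src, "sub", "tool.py"), "w") as f:
        f.write("print('hello')\n")
    shutil.copytree(src, dst, dirs_exist_ok=True)
    copied = os.path.exists(os.path.join(dst, "sub", "tool.py"))
```

The manual version remains useful if the project must support Python versions older than 3.8.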
53 | """ 54 | 55 | # Temporary directory for downloaded tools 56 | temp_dir = "/tmp/uploaded_tools" 57 | if os.path.exists(temp_dir): 58 | shutil.rmtree(temp_dir) 59 | os.makedirs(temp_dir) 60 | 61 | st.write("## **Importer vos Outils**" if LANGUAGE == 'fr' else "## **Import your Tools**") 62 | 63 | # Drop a zip file 64 | uploaded_file = st.file_uploader("Déposez un fichier zip contenant les outils nécessaires" if LANGUAGE == 'fr' else 65 | "Upload a zip file with the necessary tools", type=["zip"]) 66 | 67 | if uploaded_file is not None: 68 | # Recover zip name file 69 | zip_filename = uploaded_file.name 70 | 71 | # Unzip the zip file to a temporary directory 72 | with zipfile.ZipFile(uploaded_file, 'r') as zip_ref: 73 | zip_ref.extractall(temp_dir) 74 | 75 | # Retrieve the name of the first folder in the temporary directory 76 | tool_dir = os.path.join(temp_dir, os.listdir(temp_dir)[0]) 77 | 78 | # Browse extracted files and folders 79 | for root, dirs, files in os.walk(tool_dir): 80 | for dir_name in dirs: 81 | if dir_name == "tools_action": 82 | # Copy the contents of the tools_action folder 83 | src_path = os.path.join(root, dir_name) 84 | dst_path = TOOLS_ACTION_TARGET 85 | copy_contents(src_path, dst_path) 86 | elif dir_name == "tools_response": 87 | # Copy the contents of the tools_response folder 88 | src_path = os.path.join(root, dir_name) 89 | dst_path = TOOLS_RESPONSE_TARGET 90 | copy_contents(src_path, dst_path) 91 | 92 | # Manipulate the tools_list.py file 93 | tools_list_file = os.path.join(tool_dir, "tools_list.py") 94 | manage_tools_list(TOOLS_LIST_TARGET, tools_list_file) 95 | 96 | # Path to the select_tool.py file in the source folder 97 | source_select_tool_file = os.path.join(tool_dir, "select_tool.py") 98 | manage_select_tool(TOOLS_CONFIG_TARGET, source_select_tool_file) 99 | 100 | # Path to the requirements.txt file in the source folder 101 | source_requirements_file = os.path.join(tool_dir, "requirements.txt") 102 | 
manage_requirements(TOOLS_REQUIREMENTS_TARGET, source_requirements_file) 103 | 104 | menu_file = os.path.join(tool_dir, "menu.py") 105 | manage_menu(TOOLS_MENU_TARGET, menu_file) 106 | 107 | st.success(f"L'outil {zip_filename} a correctement été importé dans votre assistant." if LANGUAGE == 'fr' else 108 | f"The tool {zip_filename} has been successfully imported into your assistant.") 109 | 110 | st.write("**Après avoir importé vos outils, cliquez sur la croix pour supprimer le fichier upload dans l'application.**" if LANGUAGE == 'fr' else 111 | "**After importing your tools, click on the cross to delete the upload file in the application.**") 112 | 113 | if uploaded_file is None: 114 | # Highlight the instruction text 115 | st.write("**Une fois que vous avez importé tous les outils que vous voulez, cliquez sur le bouton ci-dessous**" if LANGUAGE == 'fr' else 116 | "**Once you have imported all the tools you want, click the button below**") 117 | 118 | # Change checkbox to button 119 | if st.button("Ajouter vos outils importés" if LANGUAGE == 'fr' else "Add your imported tools", key='config_revectorize_tools'): 120 | revectorize_tool() -------------------------------------------------------------------------------- /interface/kernel/agent_llm/build_llm/auto_build_llm.py: -------------------------------------------------------------------------------- 1 | import streamlit as st 2 | import subprocess 3 | import os 4 | import re 5 | from CONFIG import * 6 | 7 | 8 | def build_the_model(): 9 | """ 10 | Builds a language model by running the 'ollama show' command, modifying the output, 11 | and then creating the model using the 'ollama create' command. 12 | 13 | The function performs the following steps: 14 | 1. Runs the 'ollama show' command to get the model file. 15 | 2. Writes the output to a file named 'modelfile'. 16 | 3. Modifies the content of the 'modelfile' to replace the SYSTEM instruction and remove the LICENSE section. 17 | 4. 
Executes the 'ollama create' command to create the model. 18 | """ 19 | # Run the 'ollama show' command and get the output (a list of args does not need shell=True) 20 | show_result = subprocess.run(['ollama', 'show', LLM_DEFAULT_TO_PULL, '--modelfile'], capture_output=True, text=True, encoding='utf-8') 21 | 22 | # Create a file named 'modelfile' and write the output of 'ollama show' to it 23 | modelfile_path = os.path.join(SYSTEM_INSTRUCTION_PATH, 'modelfile') 24 | 25 | if show_result.returncode == 0 and show_result.stdout: 26 | with open(modelfile_path, 'w', encoding='utf-8') as file: 27 | file.write(show_result.stdout) 28 | else: 29 | st.sidebar.error("La commande 'ollama show' n'a produit aucune sortie ou a échoué." if LANGUAGE == 'fr' else 30 | "The 'ollama show' command produced no output or failed.") 31 | return 32 | 33 | # Get the content from the line below the first 'FROM' 34 | with open(modelfile_path, 'r', encoding='utf-8') as file: 35 | content = file.read() 36 | 37 | from_index = content.find('FROM') 38 | if from_index != -1: 39 | content = content[from_index:] 40 | next_line_index = content.find('\n') + 1 41 | content = content[next_line_index:] 42 | 43 | # Replaces the content between SYSTEM " " with SYSTEM_INSTRUCTION 44 | content = re.sub(r'SYSTEM ".*?"', f'SYSTEM "{SYSTEM_INSTRUCTION}"', content, flags=re.DOTALL) 45 | 46 | # Remove the LICENSE section and everything after it 47 | content = re.sub(r'LICENSE.*', '', content, flags=re.DOTALL) 48 | 49 | with open(modelfile_path, 'w', encoding='utf-8') as file: 50 | file.write(content) 51 | 52 | # Execute the command 'ollama create' 53 | subprocess.run(['ollama', 'create', LLM_USE, '--file', os.path.join(SYSTEM_INSTRUCTION_PATH, 'modelfile')]) 54 | st.sidebar.success(f"Assistant '{LLM_USE}' créé avec succès." 
if LANGUAGE == 'fr' else 55 | f"Assistant '{LLM_USE}' created successfully.") 56 | -------------------------------------------------------------------------------- /interface/kernel/agent_llm/llm/llm.py: -------------------------------------------------------------------------------- 1 | import ollama 2 | import re 3 | from CONFIG import * 4 | 5 | 6 | def llm_prompt(prompt, conversation_history): 7 | # Add current prompt to history 8 | conversation_history.append({"role": "user", "content": prompt}) 9 | 10 | # Limit history size (drop the oldest messages until within the limit) 11 | while len(conversation_history) > LLM_USE_MAX_HISTORY_LENGTH: 12 | conversation_history.pop(0) 13 | 14 | # Choose the LLM Server API you want: 15 | """ Local Ollama (on your computer) """ 16 | client = ollama.Client() 17 | 18 | """ API Ollama (on server) """ 19 | # client = ollama.Client(host="http://172.17.0.1:11434") 20 | 21 | response = client.chat( 22 | model=LLM_USE, # Local Model 23 | # model="llama3.1", # Online API Model 24 | messages=conversation_history 25 | ) 26 | response_text = response.message.content 27 | 28 | # Clean generated response 29 | cleaned_response = re.sub(r'[<>*_]', '', response_text) 30 | 31 | # Add model response to history 32 | conversation_history.append({"role": "assistant", "content": cleaned_response}) 33 | 34 | return cleaned_response -------------------------------------------------------------------------------- /interface/kernel/agent_llm/llm/llm_embeddings.py: -------------------------------------------------------------------------------- 1 | import requests 2 | import subprocess 3 | from CONFIG import * 4 | 5 | 6 | def generate_embedding(text): 7 | # Check if LLM_EMBEDDING model is present 8 | try: 9 | # Run 'ollama list' once and reuse its output 10 | models_list = subprocess.run(["ollama", "list"], check=True, capture_output=True, text=True).stdout 11 | if LLM_EMBEDDING not in models_list: 12 | if LANGUAGE == 'fr': 13 | print(f"Le modèle {LLM_EMBEDDING} n'est pas présent. 
Téléchargement en cours...") 14 | else: 15 | print(f"The {LLM_EMBEDDING} model is not present. Download in progress...") 16 | subprocess.run(["ollama", "pull", LLM_EMBEDDING], check=True) 17 | except subprocess.CalledProcessError as e: 18 | if LANGUAGE == 'fr': 19 | print(f"Erreur lors de la vérification du modèle: {e}") 20 | else: 21 | print(f"Error checking model: {e}") 22 | return None 23 | 24 | # Choose the LLM Embedding Server API you want: 25 | """ Local Ollama (on your computer) """ 26 | url = "http://localhost:11434/api/embeddings" 27 | 28 | """ API Ollama (on server) """ 29 | # url = "http://172.17.0.1:11434/api/embeddings" 30 | 31 | payload = { 32 | "model": LLM_EMBEDDING, # Local Model 33 | # "model": "nomic-embed-text", # API Model 34 | "prompt": text 35 | } 36 | response = requests.post(url, json=payload) 37 | 38 | # Check if the response is not empty 39 | if response.text: 40 | try: 41 | data = response.json() 42 | embeddings = data['embedding'] 43 | except ValueError as e: 44 | print(f"Erreur de décodage JSON: {e}" if LANGUAGE == 'fr' else f"JSON decoding error: {e}") 45 | embeddings = None 46 | pass 47 | else: 48 | print("La réponse est vide." if LANGUAGE == 'fr' else "The answer is empty.") 49 | embeddings = None 50 | 51 | return embeddings 52 | -------------------------------------------------------------------------------- /interface/kernel/agent_llm/llm/llm_rag.py: -------------------------------------------------------------------------------- 1 | import ollama 2 | from CONFIG import * 3 | 4 | 5 | def generate_llm_response(prompt, context): 6 | # Template for the prompt 7 | if LANGUAGE == 'fr': 8 | system_message = "Vous êtes un assistant IA qui répond aux questions en français en se basant uniquement sur le contexte fourni." 9 | user_message = f"Contexte: {context}\nQuestion: {prompt}" 10 | else: 11 | system_message = "You are an AI assistant that answers questions in English based only on the provided context." 
12 | user_message = f"Context: {context}\nQuestion: {prompt}" 13 | 14 | # Choose the LLM Server API you want: 15 | """ Local Ollama (on your computer) """ 16 | client = ollama.Client() 17 | 18 | """ API Ollama (on server) """ 19 | # client = ollama.Client(host="http://172.17.0.1:11434") 20 | 21 | response = client.chat( 22 | model=LLM_USE, # Local Model 23 | # model="llama3.1", # API Model 24 | messages=[ 25 | {"role": "system", "content": system_message}, 26 | {"role": "user", "content": user_message} 27 | ] 28 | ) 29 | 30 | text_content = response.message.content 31 | return text_content -------------------------------------------------------------------------------- /interface/kernel/agent_llm/rag/similarity_search.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | from sklearn.metrics.pairwise import cosine_similarity 3 | 4 | from CONFIG import * 5 | from kernel.agent_llm.llm.llm_embeddings import generate_embedding 6 | 7 | 8 | # Function to find the most similar tool 9 | def get_most_similar_tool(user_prompt, embeddings_data): 10 | """ 11 | Finds the most similar tool to the user prompt based on cosine similarity of embeddings. 12 | 13 | Parameters: 14 | user_prompt (str): The user's input prompt. 15 | embeddings_data (dict): A dictionary containing tool embeddings and their corresponding IDs. 16 | 17 | Returns: 18 | str or None: The ID of the most similar tool if the similarity exceeds the threshold, otherwise None. 
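The selection performed by get_most_similar_tool reduces to a cosine-similarity argmax with a threshold. A numpy-only sketch of the same decision rule (the 0.65 default mirrors the SIMILARITY setting; this is an illustration, not the project's function):

```python
import numpy as np

def most_similar(query_vec, tool_vecs, tool_ids, threshold=0.65):
    """Return the id of the tool whose embedding is most cosine-similar to the
    query, or None when even the best match stays below the threshold."""
    query = np.asarray(query_vec, dtype=np.float32)
    vecs = np.asarray(tool_vecs, dtype=np.float32)
    # Cosine similarity: dot products divided by the product of L2 norms
    sims = vecs @ query / (np.linalg.norm(vecs, axis=1) * np.linalg.norm(query))
    best = int(np.argmax(sims))
    return tool_ids[best] if sims[best] >= threshold else None
```

The threshold is what lets the assistant fall back to a plain LLM answer when no tool description is close enough to the user prompt.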
19 | """ 20 | # Generates the embedding for the user prompt and resizes it 21 | user_embedding = np.array(generate_embedding(user_prompt)).reshape(1, -1) 22 | 23 | # Keep tool_embeddings base value to keep info like ids on the dict 24 | tool_embeddings = embeddings_data 25 | 26 | # Conversion of embeddings to float32 27 | tool_embeddings = tool_embeddings['embeddings'] 28 | tool_embeddings = np.array(tool_embeddings).astype(np.float32) 29 | 30 | # Calculation of cosine similarities between user embedding and tool embedding 31 | cosine_similarities = cosine_similarity(user_embedding, tool_embeddings)[0] 32 | 33 | # Find the maximum similarity index 34 | max_similarity_index = cosine_similarities.argmax() 35 | 36 | # [SAW SIMILARITY BETWEEN USER PROMPT & TOOLS DESCRIPTION] 37 | # st.warning(str(cosine_similarities)) 38 | 39 | # Checks if maximum similarity exceeds a defined threshold 40 | if cosine_similarities[max_similarity_index] >= SIMILARITY: # Similarity threshold 41 | # Returns the ID of the most similar tool 42 | return embeddings_data['ids'][max_similarity_index] 43 | 44 | # Returns None if no sufficient similarity is found 45 | return None -------------------------------------------------------------------------------- /interface/kernel/agent_llm/vectorization_tools.py: -------------------------------------------------------------------------------- 1 | import streamlit as st 2 | import os 3 | import shutil 4 | import chromadb 5 | import sqlite3 6 | from CONFIG import LANGUAGE, TEMP_TOOLS_DB_PATH, COLLECTION_NAME 7 | from kernel.tools.tools_list import tools 8 | from kernel.agent_llm.llm.llm_embeddings import generate_embedding 9 | 10 | 11 | # Generate and store tool description embeddings only once 12 | def generate_tool_embeddings(tools): 13 | """ 14 | Generates embeddings for the descriptions of the provided tools. 15 | 16 | Parameters: 17 | tools (dict): A dictionary of tools where each tool has a description. 
18 | 19 | Returns: 20 | list: A list of embeddings for the tool descriptions. 21 | """ 22 | tool_descriptions = [tools[tool]["description"] for tool in tools] 23 | embeddings = [generate_embedding(description) for description in tool_descriptions] 24 | return embeddings 25 | 26 | def vectorize_tool(): 27 | """ 28 | Initialize Vector DB and vectorize tools on it. 29 | 30 | Returns: 31 | dict: The embeddings data from Chroma DB. 32 | """ 33 | # Initialize Chroma DB 34 | client = chromadb.PersistentClient(path=TEMP_TOOLS_DB_PATH) 35 | 36 | # Create or get the collection name 37 | collection = client.get_or_create_collection(COLLECTION_NAME) 38 | 39 | # Generate embeddings 40 | tool_embeddings = generate_tool_embeddings(tools) 41 | 42 | # Store embeddings in Chroma DB 43 | for tool_name, embedding in zip(tools.keys(), tool_embeddings): 44 | collection.add(ids=[tool_name], documents=[tool_name], embeddings=[embedding]) 45 | 46 | embeddings_data = collection.get(include=['embeddings']) 47 | return embeddings_data 48 | 49 | def revectorize_tool(): 50 | """ 51 | Recreate & generate new embedding from the tools list if the user check the checkbox on sidebar menu. 52 | 53 | Returns: 54 | dict: The embeddings data from Chroma DB. 55 | """ 56 | embeddings_data = None 57 | st.warning(f"Ajout des outils en cours..." if LANGUAGE == 'fr' else f"Adding tools in progress...") 58 | 59 | # Path to chroma.sqlite3 file 60 | chroma_db_path = os.path.join(TEMP_TOOLS_DB_PATH, 'chroma.sqlite3') 61 | 62 | # Close the database if it's open 63 | try: 64 | conn = sqlite3.connect(chroma_db_path) 65 | conn.close() 66 | except sqlite3.Error as e: 67 | st.error(f"Erreur lors de la fermeture de la base de données : {e}" if LANGUAGE == 'fr' else 68 | f"Error closing database : {e}") 69 | 70 | try: 71 | if os.path.exists(chroma_db_path): 72 | os.remove(chroma_db_path) 73 | else: 74 | st.error(f"Le fichier chroma.sqlite3 n'existe pas." 
if LANGUAGE == 'fr' else
75 |                  f"The chroma.sqlite3 file does not exist.")
76 | 
77 |         # Recreate the folder and vectorize again with the new tools
78 |         shutil.rmtree(TEMP_TOOLS_DB_PATH)
79 |         os.makedirs(TEMP_TOOLS_DB_PATH)
80 |         embeddings_data = vectorize_tool()  # keep the result so it is actually returned
81 | 
82 |         st.success(f"Les outils ont été ajoutés à votre assistant avec succès." if LANGUAGE == 'fr' else
83 |                    f"The tools have been successfully added to your assistant.")
84 |     except PermissionError as e:
85 |         st.error(f"Veuillez redémarrer l'application, revenir sur cette page et recliquer sur ce bouton pour ajouter vos outils." if LANGUAGE == 'fr' else
86 |                  f"Please restart the application, return to this page and click this button again to add your tools.")
87 | 
88 |     return embeddings_data
--------------------------------------------------------------------------------
/interface/kernel/audio/speech_to_text/record.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | import sounddevice as sd
3 | import wave
4 | import numpy as np
5 | import os
6 | import collections
7 | 
8 | from CONFIG import *
9 | 
10 | 
11 | def record_audio(language=LANGUAGE, filename=TEMP_AUDIO_PATH, device_index=MIC_INDEX,
12 |                  rate=AUDIO_RATE, chunk=AUDIO_CHUNK, threshold=AUDIO_THRESHOLD, pre_recording_buffer_length=AUDIO_BUFFER_LENGTH):
13 |     """
14 |     Records audio from the microphone until a period of silence is detected.
15 | 
16 |     Parameters:
17 |         language (str): The language for status messages.
18 |         filename (str): The path to save the recorded audio file.
19 |         device_index (int): The index of the microphone device to use.
20 |         rate (int): The sample rate for recording.
21 |         chunk (int): The size of each audio chunk to read.
22 |         threshold (float): The RMS threshold to start and stop recording.
23 |         pre_recording_buffer_length (int): The length of the buffer to store pre-recording audio.
24 | 
25 |     Returns:
26 |         bool: True if audio was recorded and saved, False otherwise.
27 | """ 28 | if not os.path.exists(os.path.dirname(filename)): 29 | os.makedirs(os.path.dirname(filename)) 30 | 31 | st.success(f"Écoute.." if language == 'fr' else f"Listen..") 32 | frames = collections.deque(maxlen=int(rate / chunk * pre_recording_buffer_length)) # buffer to store 2 seconds of audio 33 | recording_frames = [] 34 | recording = False 35 | silence_count = 0 36 | 37 | def callback(indata, frame_count, time_info, status): 38 | """ 39 | Callback function to process audio chunks. 40 | 41 | Parameters: 42 | indata (numpy.ndarray): The recorded audio data. 43 | frame_count (int): The number of frames in the audio data. 44 | time_info (dict): Dictionary containing timing information. 45 | status (sounddevice.CallbackFlags): Status of the audio stream. 46 | """ 47 | nonlocal recording, silence_count, frames, recording_frames 48 | if status: 49 | st.write(status) 50 | rms = np.linalg.norm(indata) / np.sqrt(len(indata)) 51 | frames.append(indata.copy()) 52 | if rms >= threshold: 53 | if not recording: # start recording 54 | recording = True 55 | recording_frames.extend(frames) # add the buffered audio 56 | silence_count = 0 57 | elif recording and rms < threshold: 58 | silence_count += 1 59 | if silence_count > rate / chunk * 2: # if 2 seconds of silence, stop recording 60 | raise sd.CallbackStop 61 | if recording: 62 | recording_frames.append(indata.copy()) 63 | 64 | with sd.InputStream(samplerate=rate, channels=1, dtype='int16', callback=callback, device=device_index, blocksize=chunk): 65 | while True: 66 | sd.sleep(int(1000 * chunk / rate)) # sleep for the duration of one chunk 67 | if recording and silence_count > rate / chunk * 2: 68 | break # exit the loop if recording has stopped due to silence 69 | 70 | # Only create the file if there is audio data 71 | if recording_frames: 72 | st.warning(f"Transcription en cours.." 
if language == 'fr' else f"Transcription in progress..") 73 | wf = wave.open(filename, 'wb') 74 | wf.setnchannels(1) 75 | wf.setsampwidth(2) # 2 bytes for int16 76 | wf.setframerate(rate) 77 | wf.writeframes(b''.join(recording_frames)) 78 | wf.close() 79 | return True 80 | else: 81 | st.error(f"Pas d'audio détecté" if language == 'fr' else f"No audio detected.") 82 | return False -------------------------------------------------------------------------------- /interface/kernel/audio/speech_to_text/whisper.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import string 3 | from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline 4 | 5 | 6 | punctuation_keep = "".join([char for char in string.punctuation if char not in ["'", '"', "-"]]) 7 | translator = str.maketrans('', '', punctuation_keep) 8 | 9 | class SpeechToText: 10 | """ 11 | A class to handle speech-to-text transcription using a pre-trained Whisper model. 12 | 13 | Attributes: 14 | device (str): The device to run the model on (GPU if available, otherwise CPU). 15 | torch_dtype (torch.dtype): The data type for the model (float16 if GPU is available, otherwise float32). 16 | model_id (str): The identifier for the pre-trained Whisper model. 17 | model (AutoModelForSpeechSeq2Seq): The pre-trained Whisper model for speech-to-text. 18 | processor (AutoProcessor): The processor for the Whisper model. 19 | pipe (pipeline): The pipeline for automatic speech recognition. 20 | """ 21 | 22 | def __init__(self): 23 | """ 24 | Initializes the SpeechToText class, setting up the model, processor, and pipeline for speech-to-text transcription. 
25 | """ 26 | self.device = "cuda:0" if torch.cuda.is_available() else "cpu" 27 | self.torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32 28 | 29 | self.model_id = "openai/whisper-large-v3" 30 | self.model = AutoModelForSpeechSeq2Seq.from_pretrained( 31 | self.model_id, torch_dtype=self.torch_dtype, low_cpu_mem_usage=True, use_safetensors=True 32 | ) 33 | self.model.to(self.device) 34 | self.processor = AutoProcessor.from_pretrained(self.model_id) 35 | 36 | self.pipe = pipeline( 37 | "automatic-speech-recognition", 38 | model=self.model, 39 | tokenizer=self.processor.tokenizer, 40 | feature_extractor=self.processor.feature_extractor, 41 | max_new_tokens=128, 42 | chunk_length_s=30, 43 | batch_size=64, 44 | return_timestamps=True, 45 | torch_dtype=self.torch_dtype, 46 | device=self.device, 47 | ) 48 | 49 | def transcribe(self, audio_output): 50 | """ 51 | Transcribes the given audio output to text. 52 | 53 | Parameters: 54 | audio_output (str): The path to the audio file to be transcribed. 55 | 56 | Returns: 57 | str: The transcribed text. 
58 | """ 59 | # Transcribe Speech to Text 60 | results = self.pipe(audio_output) 61 | results = results['text'].lower() 62 | results = results.replace('-', ' ') 63 | results = results.translate(translator) 64 | results = results.strip() 65 | return results 66 | -------------------------------------------------------------------------------- /interface/kernel/audio/synthetic_voice/en_lux_voice.wav: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nixiz0/Lux/9f3cc0cb272e46eb5215195034e75f2551c1f95d/interface/kernel/audio/synthetic_voice/en_lux_voice.wav -------------------------------------------------------------------------------- /interface/kernel/audio/synthetic_voice/fr_lux_voice.wav: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nixiz0/Lux/9f3cc0cb272e46eb5215195034e75f2551c1f95d/interface/kernel/audio/synthetic_voice/fr_lux_voice.wav -------------------------------------------------------------------------------- /interface/kernel/audio/synthetic_voice/narrator_voice.py: -------------------------------------------------------------------------------- 1 | import re 2 | import os 3 | import comtypes 4 | comtypes.CoInitialize() 5 | import pyttsx3 6 | import soundfile as sf 7 | import sounddevice as sd 8 | from CONFIG import NARRATOR_VOICE, TEMP_OUTPUT_VOICE_PATH 9 | 10 | 11 | def split_text_and_code(text): 12 | if text is None: 13 | return [] # Return an empty list if text is None 14 | else: 15 | # Define a regex pattern for code detection 16 | pattern = r'(```.*?```)' # This pattern matches text within triple backticks 17 | 18 | # Use regex split to separate text and code 19 | segments = re.split(pattern, text, flags=re.DOTALL) 20 | 21 | return segments 22 | 23 | def clean_text(text): 24 | # Remove special characters except periods, commas, exclamation points, and question marks 25 | return ''.join(char for char in text if 
char.isalnum() or char.isspace() or char in {'.', ',', '!', '?'}) 26 | 27 | class LuxVoice: 28 | def __init__(self): 29 | self.engine = pyttsx3.init() 30 | 31 | def speak(self, text): 32 | # Split the text into text and code segments 33 | segments = split_text_and_code(text) 34 | 35 | for segment in segments: 36 | # Check if the segment is code 37 | if segment.startswith('```') and segment.endswith('```'): 38 | pass 39 | else: 40 | # Clean the text segment 41 | clean_segment = clean_text(segment) 42 | 43 | # Create the directory if it doesn't exist 44 | os.makedirs(os.path.dirname(TEMP_OUTPUT_VOICE_PATH), exist_ok=True) 45 | 46 | # If the file already exists, remove it 47 | if os.path.exists(TEMP_OUTPUT_VOICE_PATH): 48 | os.remove(TEMP_OUTPUT_VOICE_PATH) 49 | 50 | # Set narrator voice 51 | self.engine.setProperty('voice', NARRATOR_VOICE) 52 | 53 | # Convert text to speech and save it to a file 54 | self.engine.save_to_file(clean_segment, TEMP_OUTPUT_VOICE_PATH) 55 | 56 | # Wait for any pending speech to complete 57 | self.engine.runAndWait() 58 | 59 | # Play the saved audio file using sounddevice and soundfile 60 | audio_data, samplerate = sf.read(TEMP_OUTPUT_VOICE_PATH, dtype='int16') 61 | sd.play(audio_data, samplerate) 62 | sd.wait() -------------------------------------------------------------------------------- /interface/kernel/audio/synthetic_voice/synthetic_voice.py: -------------------------------------------------------------------------------- 1 | import os 2 | import re 3 | import queue 4 | import threading 5 | import soundfile as sf 6 | import sounddevice as sd 7 | import torch 8 | from TTS.api import TTS 9 | 10 | from CONFIG import * 11 | 12 | 13 | class LuxVoice: 14 | """ 15 | A class to handle voice synthesis and playback using a TTS model. 16 | 17 | Attributes: 18 | device (str): The device to run the TTS model on (GPU if available, otherwise CPU). 19 | tts (TTS): The TTS model for voice cloning. 
20 | audio_queue (queue.Queue): A queue to manage audio files to be played. 21 | lock (threading.Lock): A lock to prevent multiple threads from accessing shared resources. 22 | playing_event (threading.Event): An event to control the audio playback thread. 23 | """ 24 | 25 | def __init__(self): 26 | """ 27 | Initializes the LuxVoice class, setting up the TTS model, device, and necessary directories and queues. 28 | """ 29 | # Use GPU if available, otherwise use CPU 30 | self.device = "cuda:0" if torch.cuda.is_available() else "cpu" 31 | 32 | # Import the xtts model for voice cloning 33 | self.tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2") 34 | 35 | # Move the model to the detected device 36 | self.tts.to(self.device) 37 | 38 | # Create the folder for storing generated voices (sentence segments) 39 | if not os.path.exists(os.path.dirname(TEMP_OUTPUT_VOICE_PATH)): 40 | os.makedirs(os.path.dirname(TEMP_OUTPUT_VOICE_PATH)) 41 | 42 | # Create a queue for audio to be played 43 | self.audio_queue = queue.Queue() 44 | 45 | # Lock a thread to prevent opening another one 46 | self.lock = threading.Lock() 47 | 48 | # Event to play the audio 49 | self.playing_event = threading.Event() 50 | 51 | # Function to play the audio 52 | def play_audio(self): 53 | """ 54 | Continuously plays audio files from the queue until a None value is encountered. 55 | """ 56 | while True: 57 | file_path = self.audio_queue.get() 58 | if file_path is None: 59 | break 60 | 61 | # Read the audio file 62 | data, samplerate = sf.read(file_path, dtype='float32') 63 | 64 | # Play the filtered audio data 65 | sd.play(data, samplerate) 66 | sd.wait() # Wait until the file is done playing 67 | 68 | self.audio_queue.task_done() 69 | os.remove(file_path) 70 | 71 | self.playing_event.clear() 72 | 73 | def speak(self, user_text, language=LANGUAGE, speed=SPEED_VOICE): 74 | """ 75 | Converts the given text to speech and plays it. 
76 | 
77 |         Parameters:
78 |             user_text (str): The text to be converted to speech.
79 |             language (str): The language of the text.
80 |             speed (float): The speed of the speech.
81 |         """
82 |         if user_text is None:
83 |             return
84 | 
85 |         # Split sentences
86 |         sentences = re.split(r'(?<=[.!?]) +', user_text)
87 |         segments = []
88 |         current_segment = ""
89 | 
90 |         # Group sentences into segments
91 |         for sentence in sentences:
92 |             # Remove periods after splitting
93 |             sentence = sentence.replace('.', '')
94 |             if len(current_segment) + len(sentence) < 400:
95 |                 current_segment += sentence + " "
96 |             else:
97 |                 # If a single sentence is too long, split it into smaller parts
98 |                 if len(sentence) >= 400:
99 |                     words = sentence.split()
100 |                     temp_segment = ""
101 |                     for word in words:
102 |                         if len(temp_segment) + len(word) < 400:
103 |                             temp_segment += word + " "
104 |                         else:
105 |                             segments.append(temp_segment.strip())
106 |                             temp_segment = word + " "
107 |                     if temp_segment:
108 |                         segments.append(temp_segment.strip())
109 |                 else:
110 |                     segments.append(current_segment.strip())
111 |                 current_segment = sentence + " "
112 | 
113 |         if current_segment:
114 |             segments.append(current_segment.strip())
115 | 
116 |         # Filter out code segments
117 |         segments = [seg for seg in segments if not (seg.startswith('```') and seg.endswith('```'))]
118 | 
119 |         # Start the audio playback thread if not already started
120 |         if not self.playing_event.is_set():
121 |             self.playing_event.set()
122 |             threading.Thread(target=self.play_audio, daemon=True).start()
123 | 
124 |         # Generate and queue the audio segments
125 |         for i, segment in enumerate(segments):
126 |             temp_file_path = f"{TEMP_OUTPUT_VOICE_PATH}_{i}.wav"
127 |             self._speak(segment, temp_file_path, AUDIO_VOICE_PATH, language, speed)
128 | 
129 |             with self.lock:
130 |                 self.audio_queue.put(temp_file_path)
131 | 
132 |         # --[WAIT UNTIL ALL AUDIO FILES HAVE BEEN PLAYED BEFORE CONTINUING]--
133 |         # (you can delete this block if you don't want to wait until all audio has been played)
134 |         self.audio_queue.join()
135 | 
136 |     def _speak(self, user_text, file_path, speaker_wav, language, speed):
137 |         """
138 |         Generates an audio file from the given text using the TTS model.
139 | 
140 |         Parameters:
141 |             user_text (str): The text to be converted to speech.
142 |             file_path (str): The path to save the generated audio file.
143 |             speaker_wav (str): The path to the speaker's voice file for cloning.
144 |             language (str): The language of the text.
145 |             speed (float): The speed of the speech.
146 |         """
147 |         # Generate the audio file from the text
148 |         self.tts.tts_to_file(text=user_text,
149 |                              file_path=file_path,
150 |                              speaker_wav=speaker_wav,
151 |                              language=language,
152 |                              split_sentences=True,
153 |                              speed=speed
154 |                              )
--------------------------------------------------------------------------------
/interface/kernel/start_kernel.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | import chromadb
3 | import time
4 | from CONFIG import *
5 | from kernel.audio.speech_to_text.record import record_audio
6 | from kernel.audio.speech_to_text.whisper import SpeechToText
7 | from kernel.audio.synthetic_voice.synthetic_voice import LuxVoice
8 | from kernel.tools.select_tool import analyze_tool_to_use
9 | from kernel.tools.tools_list import tools
10 | 
11 | 
12 | def start_lux(tools=tools):
13 |     """
14 |     Starts the Lux system, which involves connecting to a client, retrieving tool embeddings,
15 |     and processing voice commands to determine the appropriate tool to use.
16 | 
17 |     Parameters:
18 |         tools (dict): A dictionary of the tools available for use in the system.
19 | 
20 |     The function initializes the necessary components, listens for voice commands, and processes
21 |     them to determine the appropriate response. It handles pausing and resuming the system based
22 |     on specific keywords and stops the system when instructed.
23 | """ 24 | 25 | # Connect to the client & Get collection 26 | client = chromadb.PersistentClient(path=TEMP_TOOLS_DB_PATH) 27 | collection = client.get_collection(COLLECTION_NAME) 28 | tool_embeddings = collection.get(include=['embeddings']) 29 | 30 | whisper = SpeechToText() 31 | lux_voice = LuxVoice() 32 | 33 | paused = False 34 | pause_keyword = ["Système mis en pause", "System paused"] 35 | 36 | running = True 37 | while running: 38 | record_audio() 39 | speech_transcribe = whisper.transcribe(TEMP_AUDIO_PATH) 40 | prompt = speech_transcribe 41 | st.write(prompt) 42 | 43 | response = analyze_tool_to_use(prompt, tools, tool_embeddings) 44 | 45 | if response in pause_keyword: 46 | lux_voice.speak("Le système a été mis en pause monsieur" if LANGUAGE == 'fr' else "The system has been paused sir") 47 | pause_keywords = ["arrête le mode pause", "arrête la pause", "reprend le système", "tu peux reprendre", 48 | "stop pause mode", "stop pause", "restarting the system", "you can unpause", "unpaused"] 49 | st.error(f"Dire une de ces phrases pour arrêter le mode pause : {pause_keywords}" if LANGUAGE == 'fr' else f"Say one of these phrases to stop pause mode : {pause_keywords}") 50 | paused = True 51 | while paused: 52 | record_audio() 53 | speech_transcribe = whisper.transcribe(TEMP_AUDIO_PATH) 54 | prompt = speech_transcribe 55 | st.write(prompt) 56 | if prompt in pause_keywords: 57 | paused = False 58 | lux_voice.speak("Le système reprend de ses fonctions, ça fait du bien d'être de retour monsieur" if LANGUAGE == 'fr' else 59 | "The system is resuming its functions, it's good to be back sir") 60 | break 61 | 62 | if response != "stop system" and response not in pause_keyword: 63 | lux_voice.speak(response) 64 | st.write(response) 65 | 66 | if response == "stop system": 67 | lux_voice.speak("Au revoir Monsieur. A bientot" if LANGUAGE == 'fr' else "By Sir. 
See you soon") 68 | time.sleep(4) 69 | break 70 | -------------------------------------------------------------------------------- /interface/kernel/tools/select_tool.py: -------------------------------------------------------------------------------- 1 | from CONFIG import PARAMS_LIST_TOOLS 2 | from kernel.agent_llm.rag.similarity_search import get_most_similar_tool 3 | from kernel.agent_llm.llm.llm import llm_prompt 4 | 5 | 6 | conversation_history = [] 7 | 8 | def analyze_tool_to_use(user_prompt, tools, tool_embeddings): 9 | """ 10 | Analyzes the user's prompt to determine the most suitable tool to use based on the provided embeddings. 11 | 12 | Parameters: 13 | user_prompt (str): The user's input prompt. 14 | tools (dict): A dictionary of available tools with their corresponding functions. 15 | tool_embeddings (list): A list of embeddings for the tools to match against the user prompt. 16 | 17 | Returns: 18 | str: The response generated by the selected tool or the classic LLM if no tool matches. 
19 | """ 20 | global conversation_history 21 | if conversation_history != []: 22 | conversation_history = conversation_history 23 | else: 24 | conversation_history = [] 25 | 26 | similar_tool = get_most_similar_tool(user_prompt, tool_embeddings) 27 | if similar_tool: 28 | conversation_history = [] 29 | tool_function = tools[similar_tool]["function"] 30 | 31 | #------------ Adapt here and paste the code depending on tools downloaded -------- 32 | # Set parameters based on selected tool 33 | if similar_tool in PARAMS_LIST_TOOLS: 34 | param = user_prompt 35 | else: 36 | param = None 37 | # -------------------------------------------------------------------------------- 38 | 39 | # Call the tool function with the appropriate parameter 40 | if param is not None: 41 | response = tool_function(param) 42 | else: 43 | response = tool_function() 44 | 45 | return response 46 | else: 47 | # If no tool correspond, enter prompt in classic LLM to have discussion 48 | response = llm_prompt(user_prompt, conversation_history) 49 | return response -------------------------------------------------------------------------------- /interface/kernel/tools/tools_functions/exit_system.py: -------------------------------------------------------------------------------- 1 | def stop_running(): 2 | return "stop system" -------------------------------------------------------------------------------- /interface/kernel/tools/tools_functions/pause_system.py: -------------------------------------------------------------------------------- 1 | from CONFIG import LANGUAGE 2 | 3 | 4 | def pause_running(): 5 | return "Système mis en pause" if LANGUAGE == 'fr' else "System paused" 6 | -------------------------------------------------------------------------------- /interface/kernel/tools/tools_functions/tools_action/cam/screen_cam.py: -------------------------------------------------------------------------------- 1 | import cv2 2 | import os 3 | import time 4 | 5 | from CONFIG import LANGUAGE, 
CAM_INDEX_USE 6 | from configuration.cam.select_cam import set_video_device 7 | 8 | 9 | def screen_with_cam(): 10 | global CAM_INDEX_USE 11 | if CAM_INDEX_USE is None: 12 | set_video_device() 13 | with open('CONFIG.py', 'r') as file: 14 | for line in file: 15 | if line.startswith('CAM_INDEX_USE'): 16 | CAM_INDEX_USE = line.split('=')[1].strip() 17 | 18 | if not os.path.exists('photos'): 19 | os.makedirs('photos') 20 | 21 | cap = cv2.VideoCapture(int(CAM_INDEX_USE)) 22 | ret, frame = cap.read() 23 | time.sleep(1) 24 | cv2.imwrite('photos/camera.png', frame) 25 | cap.release() 26 | return "Screen de la caméra effectué" if LANGUAGE == 'fr' else "Camera screen taken" -------------------------------------------------------------------------------- /interface/kernel/tools/tools_functions/tools_action/screenshot.py: -------------------------------------------------------------------------------- 1 | import os 2 | import pyautogui 3 | 4 | from CONFIG import LANGUAGE 5 | 6 | 7 | def screen(): 8 | download_dir = str('photos/') 9 | if not os.path.exists(download_dir): 10 | os.makedirs(download_dir) 11 | base_filename = 'screenshot' 12 | extension = '.png' 13 | filename = f"{base_filename}{extension}" 14 | screenshot = pyautogui.screenshot() 15 | screenshot.save(os.path.join(download_dir, filename)) 16 | 17 | return "Capture d'écran effectuée" if LANGUAGE == 'fr' else "Screenshot taken" -------------------------------------------------------------------------------- /interface/kernel/tools/tools_functions/tools_action/take_note.py: -------------------------------------------------------------------------------- 1 | import os 2 | 3 | from CONFIG import LANGUAGE 4 | 5 | 6 | def vocal_note(user_prompt): 7 | # Gets the user's download path 8 | download_path = os.path.join(os.path.expanduser("~"), 'Downloads') 9 | file_path = os.path.join(download_path, 'vocal_note.txt') 10 | 11 | # Checks if the file is empty or not 12 | if os.path.exists(file_path) and os.path.getsize(file_path) 
> 0: 13 | # If the file is not empty, append what the user said to the end of the file 14 | with open(file_path, 'a', encoding="utf-8") as f: 15 | f.write('\n' + user_prompt) 16 | else: 17 | # If the file is empty, writes what the user said to the file 18 | with open(file_path, 'a', encoding="utf-8") as f: 19 | f.write(user_prompt) 20 | 21 | return "C'est noté dans votre dossier de téléchargement" if LANGUAGE == 'fr' else "It's noted in your download folder" -------------------------------------------------------------------------------- /interface/kernel/tools/tools_functions/tools_action/time.py: -------------------------------------------------------------------------------- 1 | import datetime 2 | import locale 3 | 4 | from CONFIG import LANGUAGE 5 | 6 | 7 | def time_in_locale(): 8 | if LANGUAGE == 'fr': 9 | locale.setlocale(locale.LC_TIME, 'fr_FR.UTF-8') # Set the locale to French 10 | formatted_time = datetime.datetime.now().strftime('%Hh %M') 11 | return f"Il est {formatted_time} Monsieur" 12 | else: 13 | locale.setlocale(locale.LC_TIME, 'en_US.UTF-8') # Set the locale to English 14 | formatted_time = datetime.datetime.now().strftime('%Ih %M %p') 15 | return f"It's {formatted_time} Sir" 16 | 17 | def date_in_locale(): 18 | if LANGUAGE == 'fr': 19 | locale.setlocale(locale.LC_TIME, 'fr_FR') # Set the locale to French 20 | current_datetime = datetime.datetime.now() 21 | formatted_datetime = current_datetime.strftime('%A %d %B %Y') 22 | return f"Nous sommes le {formatted_datetime} Monsieur" 23 | else: 24 | locale.setlocale(locale.LC_TIME, 'en_US') # Set the locale to English 25 | current_datetime = datetime.datetime.now() 26 | formatted_datetime = current_datetime.strftime('%A %d %B %Y') 27 | return f"It's {formatted_datetime} Sir" -------------------------------------------------------------------------------- /interface/kernel/tools/tools_functions/tools_action/web_search/check_connection.py: 
--------------------------------------------------------------------------------
1 | import socket
2 | 
3 | 
4 | def is_connected():
5 |     try:
6 |         # if the host is reachable, we are connected
7 |         socket.create_connection(("www.google.com", 80))
8 |         return True
9 |     except OSError:
10 |         pass
11 |     return False
--------------------------------------------------------------------------------
/interface/kernel/tools/tools_functions/tools_action/web_search/search_webs.py:
--------------------------------------------------------------------------------
1 | import webbrowser
2 | from CONFIG import LANGUAGE
3 | from kernel.tools.tools_functions.tools_action.web_search.check_connection import is_connected
4 | 
5 | if is_connected():
6 |     import pywhatkit
7 | 
8 | 
9 | def search_ytb(user_prompt):
10 |     # Search on YouTube
11 |     youtube_keywords = ['cherche sur youtube', 'recherche sur youtube', 'rechercher sur youtube', 'find on youtube', 'find in youtube']
12 |     for keyword in youtube_keywords:
13 |         if keyword in user_prompt:
14 |             pywhatkit.playonyt(user_prompt.replace(keyword, '').strip())  # strip the matched keyword, keep the query
15 |             return f"Voici ce que j'ai trouvé monsieur" if LANGUAGE == 'fr' else f"This is what I found sir"
16 | 
17 | def search_wikipedia(user_prompt):
18 |     # Extract the last word from the user prompt
19 |     search = user_prompt.split()[-1]
20 |     url = ("https://fr.wikipedia.org/wiki/" if LANGUAGE == 'fr' else "https://en.wikipedia.org/wiki/") + search
21 |     webbrowser.open(url)
22 |     return f"Voici ce que j'ai trouvé sur Wikipédia {search}" if LANGUAGE == 'fr' else f"Here's what I found on Wikipedia {search}"
23 | 
24 | def search_google(user_prompt):
25 |     # Google
26 |     google_keywords = ['cherche sur google', 'recherche sur google', 'find on google', 'find in google']
27 |     for keyword in google_keywords:
28 |         if keyword in user_prompt:
29 |             search = user_prompt.replace(keyword, '').strip()
30 |             if search.startswith('re '):
31 |                 search = search[3:]
32 |             url = "https://www.google.com/search?q=" + search
33 |             webbrowser.open(url)
34 |             return f"Voici ce
que j'ai trouvé sur Google {search}" if LANGUAGE == 'fr' else f"This is what I found on Google {search}" 35 | 36 | def search_bing(user_prompt): 37 | # Bing 38 | bing_keywords = ['cherche sur bing', 'recherche sur bing', 'find on bing', 'find in bing'] 39 | for keyword in bing_keywords: 40 | if keyword in user_prompt: 41 | search = user_prompt.replace(keyword, '').strip() 42 | if search.startswith('re '): 43 | search = search[3:] 44 | url = "https://www.bing.com/search?q=" + search 45 | webbrowser.open(url) 46 | return f"Voici ce que j'ai trouvé sur Bing {search}" if LANGUAGE == 'fr' else f"Here's what I found on Bing {search}" 47 | -------------------------------------------------------------------------------- /interface/kernel/tools/tools_functions/tools_response/hello_sir.py: -------------------------------------------------------------------------------- 1 | from CONFIG import * 2 | 3 | 4 | def hello(): 5 | if LANGUAGE == 'fr': 6 | response = "bonjour à vous monsieur" 7 | else: 8 | response = "hello to you sir" 9 | return response -------------------------------------------------------------------------------- /interface/kernel/tools/tools_list.py: -------------------------------------------------------------------------------- 1 | from CONFIG import LANGUAGE 2 | from kernel.tools.tools_functions.tools_response.hello_sir import hello 3 | from kernel.tools.tools_functions.pause_system import pause_running 4 | from kernel.tools.tools_functions.exit_system import stop_running 5 | from kernel.tools.tools_functions.tools_action.time import time_in_locale, date_in_locale 6 | from kernel.tools.tools_functions.tools_action.screenshot import screen 7 | from kernel.tools.tools_functions.tools_action.cam.screen_cam import screen_with_cam 8 | from kernel.tools.tools_functions.tools_action.take_note import vocal_note 9 | from kernel.tools.tools_functions.tools_action.web_search.search_webs import search_ytb, search_google, search_wikipedia, search_bing 10 | 11 | 12 | tools 
= { 13 | "hello": {"description": "dit bonjour pour saluer" if LANGUAGE == 'fr' else 14 | "says hello to greet", 15 | "function": hello}, 16 | 17 | "pause_running": {"description": "met en pause le système, mode pause" if LANGUAGE == 'fr' else 18 | "pauses the system, pause mode", 19 | "function": pause_running}, 20 | 21 | "exit_system": {"description": "arrête le système, met à l'arrêt, stop tout" if LANGUAGE == 'fr' else 22 | "shut down the system, shut down, stop everything", 23 | "function": stop_running}, 24 | 25 | "time_in_locale": {"description": "il est quelle heure" if LANGUAGE == 'fr' else 26 | "what time is it", 27 | "function": time_in_locale}, 28 | 29 | "date_in_locale": {"description": "quelle est la date actuelle" if LANGUAGE == 'fr' else 30 | "what is the current date", 31 | "function": date_in_locale}, 32 | 33 | "screen_with_cam": {"description": "screen avec la caméra" if LANGUAGE == 'fr' else 34 | "screen with the camera", 35 | "function": screen_with_cam}, 36 | 37 | "screen": {"description": "prends un screenshot, prends une capture d'écran" if LANGUAGE == 'fr' else 38 | "take a screenshot", 39 | "function": screen}, 40 | 41 | "vocal_note": {"description": "prends note" if LANGUAGE == 'fr' else 42 | "take note", 43 | "function": vocal_note}, 44 | 45 | "search_ytb": {"description": "recherche sur youtube, cherche sur youtube" if LANGUAGE == 'fr' else 46 | "search on youtube", 47 | "function": search_ytb}, 48 | 49 | "search_google": {"description": "cherche sur google, recherche sur google" if LANGUAGE == 'fr' else 50 | "search on google", 51 | "function": search_google}, 52 | 53 | "search_wikipedia": {"description": "cherche sur wikipédia, recherche sur wikipédia" if LANGUAGE == 'fr' else 54 | "search on wikipedia", 55 | "function": search_wikipedia}, 56 | 57 | "search_bing": {"description": "cherche sur bing, recherche sur bing" if LANGUAGE == 'fr' else 58 | "search on bing", 59 | "function": search_bing}, 60 | 61 | } 
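The `tools` registry above pairs each natural-language description with a callable, and `similarity_search.py` routes a transcribed prompt to the entry whose description embedding is most cosine-similar, falling through to the conversational LLM when nothing clears the threshold. A minimal, self-contained sketch of that routing loop might look like this — note that `fake_embed` (a toy bag-of-words vector), the 0.5 threshold, and the two lambda tools are illustrative assumptions standing in for the project's `generate_embedding`, `CONFIG.SIMILARITY`, and the real tool functions:

```python
import math

# Toy registry mirroring tools_list.py: a description plus a callable per tool.
TOOLS = {
    "hello": {"description": "says hello to greet", "function": lambda: "hello to you sir"},
    "time": {"description": "what time is it", "function": lambda: "it is noon"},
}

# Vocabulary built from the tool descriptions only.
VOCAB = sorted({w for t in TOOLS.values() for w in t["description"].split()})

def fake_embed(text):
    # Illustrative stand-in for generate_embedding(): bag-of-words counts over VOCAB.
    words = text.lower().split()
    return [float(words.count(w)) for w in VOCAB]

def cosine(a, b):
    # Plain cosine similarity; 0.0 when either vector is all zeros.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def route(prompt, threshold=0.5):
    # Same shape as get_most_similar_tool() + analyze_tool_to_use(): the best
    # cosine score wins if it clears the threshold; otherwise return None
    # (the point where the real system falls back to the conversational LLM).
    user_vec = fake_embed(prompt)
    scores = {name: cosine(user_vec, fake_embed(t["description"]))
              for name, t in TOOLS.items()}
    best = max(scores, key=scores.get)
    return TOOLS[best]["function"]() if scores[best] >= threshold else None
```

Swapping `fake_embed` for a real sentence-embedding model and reading the threshold from the configured similarity value recovers the behaviour of the real pipeline, including the fall-through when no description is close enough.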
-------------------------------------------------------------------------------- /interface/menu.py: -------------------------------------------------------------------------------- 1 | import streamlit as st 2 | import base64 3 | from PIL import Image 4 | from io import BytesIO 5 | from CONFIG import LANGUAGE 6 | from configuration._ui_custom.page_title import set_page_title 7 | from configuration._ui_custom.custom_ui import custom_ui 8 | 9 | 10 | set_page_title("Lux-Interface") 11 | custom_ui() 12 | 13 | # Open image 14 | logo_entreprise = Image.open('./interface/ressources/logo-lux.png') 15 | 16 | # Convert image to base64 to display in markdown 17 | buffered = BytesIO() 18 | logo_entreprise.save(buffered, format="PNG") 19 | img_str = base64.b64encode(buffered.getvalue()).decode() 20 | 21 | # Show image centered with reduced top margin 22 | st.markdown( 23 | f""" 24 |