├── .github ├── dangerous_example.png ├── demo.gif ├── demo.mov ├── demo.mp4 └── workflows │ └── publish.yml ├── .gitignore ├── CONTRIBUTING.md ├── LICENSE ├── README.md ├── pyproject.toml └── src └── zev ├── command_history.py ├── command_selector.py ├── config ├── __init__.py ├── setup.py └── types.py ├── constants.py ├── llms ├── __init__.py ├── azure_openai │ ├── __init__.py │ ├── provider.py │ └── setup.py ├── gemini │ ├── __init__.py │ ├── provider.py │ └── setup.py ├── inference_provider_base.py ├── llm.py ├── ollama │ ├── __init__.py │ ├── provider.py │ └── setup.py ├── openai │ ├── __init__.py │ ├── provider.py │ └── setup.py └── types.py ├── main.py └── utils.py /.github/dangerous_example.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/dtnewman/zev/29a69a73ea001d7d995ed81e2b3c55f7a0264d63/.github/dangerous_example.png -------------------------------------------------------------------------------- /.github/demo.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/dtnewman/zev/29a69a73ea001d7d995ed81e2b3c55f7a0264d63/.github/demo.gif -------------------------------------------------------------------------------- /.github/demo.mov: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/dtnewman/zev/29a69a73ea001d7d995ed81e2b3c55f7a0264d63/.github/demo.mov -------------------------------------------------------------------------------- /.github/demo.mp4: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/dtnewman/zev/29a69a73ea001d7d995ed81e2b3c55f7a0264d63/.github/demo.mp4 -------------------------------------------------------------------------------- /.github/workflows/publish.yml: -------------------------------------------------------------------------------- 1 | name: Publish to PyPI 2 | 3 | 
on: 4 | release: 5 | types: [published] 6 | push: 7 | tags: 8 | - 'v*' 9 | workflow_dispatch: 10 | 11 | # Add permissions configuration 12 | permissions: 13 | id-token: write 14 | 15 | jobs: 16 | deploy: 17 | runs-on: ubuntu-latest 18 | steps: 19 | - uses: actions/checkout@v4 20 | - name: Set up Python 21 | uses: actions/setup-python@v4 22 | with: 23 | python-version: '3.9' 24 | - name: Install dependencies 25 | run: | 26 | python -m pip install --upgrade pip 27 | pip install build 28 | - name: Build package 29 | run: python -m build 30 | - name: Publish to PyPI 31 | uses: pypa/gh-action-pypi-publish@release/v1 32 | with: 33 | password: ${{ secrets.PYPI_API_TOKEN }} 34 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | .env 2 | 3 | src/*.egg-info 4 | **/__pycache__ 5 | 6 | # mac os 7 | .DS_Store 8 | 9 | # dist 10 | /dist 11 | .aider* 12 | -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing to Zev 2 | 3 | Thank you for your interest in contributing to Zev! This document provides guidelines and steps for contributing to our project. 4 | 5 | ## Successfully Contributing 6 | 7 | Community contributions are valued, but at the same time, incorporating community-written code is time-consuming and challenging. As a general rule, I'd divide contributions into these different categories: 8 | 9 | - Bug fixes (including typos) 10 | - Documentation changes 11 | - Refactors 12 | - Feature changes 13 | 14 | For very simple bug fixes and documentation changes, feel free to directly open a PR. For any refactors, features, or larger bug fixes, please try to open an issue first, so I can respond there before you put hard work into coding things.
15 | 16 | ## Reasons why I reject pull requests 17 | 18 | Unfortunately, not all community contributions can be accepted. Here are some general guidelines on what is likely to be rejected: 19 | 20 | ### Your code is hard to review 21 | 22 | If you try to bite off too much at once, it's hard to review. So, for example, try not to mix a refactor together with a bug fix. It's better to do that in two different PRs. 23 | 24 | For feature changes, the best way to ensure a quick review is to include a screen recording. Features are likely to be rejected for reasons other than the code itself, so adding a quick screen recording helps me comment without having to go through the code line by line (which is a waste of time anyway if major changes to the feature itself are needed). 25 | 26 | ### Your change implements a feature I don't like 27 | 28 | In general, Zev is aiming to remain very simple. Anything that makes it less simple to use is _likely_ to be rejected. For feature additions, try to open an issue first, so that it can be discussed there before you code. 29 | 30 | ### Your change isn't written cleanly or doesn't follow the style of the code 31 | 32 | Sometimes I will suggest modifications, but it's time-consuming, so if the code is very messy, I might reject it outright. Code that doesn't follow the style conventions of the surrounding code will also be rejected... not because I believe that Zev is written with objectively great style, but because I believe that consistency > correctness when it comes to style. 33 | 34 | ### You are changing too many things at once 35 | 36 | Keep PRs specific, to the point, and focused on one thing. 37 | 38 | ## Code Style Guidelines 39 | 40 | - Follow the existing code style in the project 41 | - Run `ruff check` and `ruff format` to validate and format your code 42 | 43 | ## Questions or Issues? 44 | 45 | If you have any questions or run into issues, open an issue in the repository or reach out to one of the maintainers.
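The `ruff` commands above are available via the project's dev extras (declared in `pyproject.toml`). A typical pre-PR check, assuming you are in a local clone of the repo, might look like this sketch:

```bash
# Install zev plus the dev tooling (ruff, isort) in editable mode
pip install -e '.[dev]'

# Lint — the project configures import sorting (rule I001) and a 120-char line length
ruff check .

# Apply the project's formatting
ruff format .
```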
46 | 47 | Thank you for contributing to Zev! 48 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Copyright 2025 Daniel Newman 2 | 3 | Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: 4 | 5 | The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 6 | 7 | THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 8 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Zev 🔍 2 | 3 | [![PyPI version](https://badge.fury.io/py/zev.svg)](https://badge.fury.io/py/zev) 4 | [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) 5 | 6 | Zev helps you remember (or discover) terminal commands using natural language. 
7 | 8 | ![Description](./.github/demo.gif) 9 | 10 | ## 🔧 Installation 11 | 12 | ```bash 13 | pip install zev 14 | ``` 15 | 16 | - **Note:** This project runs on top of LLM APIs like OpenAI, Google's Gemini, or [Ollama](https://ollama.com/). 17 | 18 | ## 📦 Dependencies 19 | 20 | For clipboard functionality (copying and pasting) to work properly, you may need to install: 21 | 22 | - On Linux: `xclip` or `xsel` (for X11) or `wl-clipboard` (for Wayland) 23 | - On macOS: No additional dependencies needed 24 | - On Windows: No additional dependencies needed 25 | 26 | ## 🎮 Usage 27 | 28 | ### Option 1: Interactive Mode 29 | 30 | ```bash 31 | zev 32 | ``` 33 | 34 | ### Option 2: Direct Query 35 | 36 | ```bash 37 | zev '' 38 | ``` 39 | 40 | ## 📝 Examples 41 | 42 | ```bash 43 | # Find running processes 44 | zev 'show all running python processes' 45 | 46 | # File operations 47 | zev 'find all .py files modified in the last 24 hours' 48 | 49 | # System information 50 | zev 'show disk usage for current directory' 51 | 52 | # Network commands 53 | zev 'check if google.com is reachable' 54 | 55 | # Git operations 56 | zev 'show uncommitted changes in git' 57 | ``` 58 | 59 | ## 🛡️ Safety Considerations 60 | 61 | ⚠️ Commands are generated by LLMs. While the tool attempts to flag dangerous commands, it may not always do so. Use caution. 62 | 63 | ![Example of dangerous command warning](./.github/dangerous_example.png) 64 | 65 | ## ⚙️ Settings 66 | 67 | ### **Supported LLM Providers:** 68 | 69 | - OpenAI 70 | - Google Gemini 71 | - Ollama 72 | - Azure OpenAI 73 | 74 | You can update your API keys and provider settings by running: 75 | 76 | ```bash 77 | zev --setup 78 | ``` 79 | 80 | ### OpenAI 81 | 82 | To use OpenAI, you need an OpenAI account and a subscription. You can create an API key on [this page](https://platform.openai.com/settings/organization/api-keys). 83 | 84 | ### Google Gemini (Free) 85 | 86 | To use Google's Gemini models, you need a Google AI Studio account. 
You can create a Gemini API key in [Google AI Studio](https://aistudio.google.com/). 87 | 88 | ## 🐪 Using with Ollama 89 | 90 | You can use Zev with [Ollama](https://ollama.ai/) as an alternative to hosted providers, which lets you run all commands locally. To set this up: 91 | 92 | 1. Install and start [Ollama](https://ollama.com/) with a model of your choice 93 | 94 | 2. Run `zev --setup` and put in the proper settings. For example: 95 | 96 | ``` bash 97 | ? Pick your LLM provider: Ollama 98 | ? Enter the Ollama URL: http://localhost:11434/v1 99 | ? Enter the model to use (e.g. llama3.2): llama3.2 100 | ``` 101 | 102 | ### Azure OpenAI 103 | 104 | To use Azure OpenAI, you’ll need access to an [Azure OpenAI](https://azure.microsoft.com/en-us/products/ai-services/openai-service) resource in your Azure subscription. 105 | 106 | 1. Create a deployment of the model you want to use (e.g., `gpt-4o`, `gpt-4o-mini`, etc.). 107 | 108 | 2. Install required Azure dependencies with `pip install zev[azure]`. 109 | 110 | 3. Run `zev --setup` and put in the proper settings. For example: 111 | 112 | ``` bash 113 | ? Pick your LLM provider: Azure OpenAI 114 | ? Azure OpenAI account name (e.g. my-openai-resource): my-openai-resource 115 | ? Azure OpenAI API key (leave blank to use Entra ID / keyless auth): 116 | ? Azure OpenAI deployment name (e.g. gpt-4, gpt-4o, etc): gpt-4o 117 | ? Azure OpenAI API version: 2025-03-01-preview 118 | ``` 119 | 120 | > **Note:** Model version must be 2024-08-06 or newer. 121 | > API version must be 2024-08-01-preview or newer. 122 | 123 | Note that you can re-run `zev --setup` at any time to switch backends. 124 | 125 | ## 🤝 Contributing 126 | 127 | Contributions are welcome! See [CONTRIBUTING.md](CONTRIBUTING.md) for details. 128 | 129 | ## 📄 License 130 | 131 | This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
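For reference, `zev --setup` stores its answers in a dotenv-style `~/.zevrc` file (read via `python-dotenv` in `src/zev/config`), which you can also edit by hand. A minimal sketch for the OpenAI backend — the API key shown is a placeholder, and the variable names match the config module in this repo:

```bash
# ~/.zevrc
LLM_PROVIDER=openai
OPENAI_API_KEY=sk-your-key-here
OPENAI_MODEL=gpt-4o-mini
```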
132 | -------------------------------------------------------------------------------- /pyproject.toml: -------------------------------------------------------------------------------- 1 | [project] 2 | name = "zev" 3 | version = "0.8.1" 4 | description = "Lookup CLI commands easily" 5 | readme = "README.md" 6 | dependencies = [ 7 | "openai>=1.72.0", 8 | "pydantic>=2.10.6", 9 | "pyperclip>=1.9.0", 10 | "python-dotenv>=1.0.1", 11 | "questionary>=2.1.0", 12 | "rich>=13.9.4" 13 | ] 14 | requires-python = ">=3.9" 15 | urls = { Repository = "https://github.com/dtnewman/zev" } 16 | 17 | [project.scripts] 18 | zev = "zev.main:app" 19 | 20 | [project.optional-dependencies] 21 | dev = [ 22 | "ruff>=0.11.2", 23 | "isort>=5.13.2" 24 | ] 25 | azure = [ 26 | "azure-identity>=1.20.0" 27 | ] 28 | 29 | [build-system] 30 | requires = ["setuptools>=61.0"] 31 | build-backend = "setuptools.build_meta" 32 | 33 | [tool.setuptools.packages.find] 34 | where = ["src"] 35 | include = ["zev*"] 36 | 37 | [tool.ruff] 38 | line-length = 120 39 | # this ensures imports are properly sorted 40 | lint.select = ["I001"] 41 | 42 | [tool.isort] 43 | profile = "black" 44 | line_length = 120 45 | -------------------------------------------------------------------------------- /src/zev/command_history.py: -------------------------------------------------------------------------------- 1 | from pathlib import Path 2 | from typing import Optional 3 | 4 | import questionary 5 | from pydantic import BaseModel 6 | 7 | from zev.command_selector import show_options 8 | from zev.constants import HISTORY_FILE_NAME 9 | from zev.llms.types import OptionsResponse 10 | 11 | 12 | class CommandHistoryEntry(BaseModel): 13 | query: str 14 | response: OptionsResponse 15 | 16 | 17 | class CommandHistory: 18 | def __init__(self) -> None: 19 | self.path = Path.home() / HISTORY_FILE_NAME 20 | self.max_entries = 100 21 | self.path.touch(exist_ok=True) 22 | self.encoding = "utf-8" 23 | 24 | def save_options(self, query: str, 
options: OptionsResponse) -> None: 25 | entry = CommandHistoryEntry(query=query, response=options) 26 | self._write_to_history_file(entry) 27 | 28 | def get_history(self) -> list[CommandHistoryEntry]: 29 | with open(self.path, "r", encoding=self.encoding) as f: 30 | entries = [CommandHistoryEntry.model_validate_json(line) for line in f if line.strip()] 31 | 32 | if not entries: 33 | return None 34 | 35 | return entries 36 | 37 | def _write_to_history_file(self, new_entry: CommandHistoryEntry) -> None: 38 | with open(self.path, "a", encoding=self.encoding) as f: 39 | f.write(new_entry.model_dump_json() + "\n") 40 | 41 | # If we've exceeded max entries, trim the file 42 | with open(self.path, "r", encoding=self.encoding) as f: 43 | lines = f.readlines() 44 | if len(lines) > self.max_entries: 45 | with open(self.path, "w", encoding=self.encoding) as f: 46 | f.writelines(lines[-self.max_entries :]) 47 | 48 | def display_history_options(self, reverse_history_entries, show_limit=5) -> Optional[CommandHistoryEntry]: 49 | if not reverse_history_entries: 50 | print("No command history found") 51 | return None 52 | 53 | style = questionary.Style( 54 | [ 55 | ("answer", "fg:#61afef"), 56 | ("question", "bold"), 57 | ("instruction", "fg:#98c379"), 58 | ] 59 | ) 60 | 61 | query_options = [questionary.Choice(entry.query, value=entry) for entry in reverse_history_entries[:show_limit]] 62 | 63 | if len(reverse_history_entries) > show_limit: 64 | query_options.append(questionary.Choice("Show more...", value="show_more")) 65 | 66 | query_options.append(questionary.Separator()) 67 | query_options.append(questionary.Choice("Cancel")) 68 | 69 | selected = questionary.select( 70 | "Select from history:", choices=query_options, use_shortcuts=True, style=style 71 | ).ask() 72 | 73 | if selected == "show_more": 74 | all_options = [questionary.Choice(entry.query, value=entry) for entry in reverse_history_entries] 75 | all_options.append(questionary.Separator()) 76 | 
all_options.append(questionary.Choice("Cancel")) 77 | 78 | return questionary.select( 79 | "Select from history (showing all items):", choices=all_options, use_shortcuts=True, style=style 80 | ).ask() 81 | 82 | return selected 83 | 84 | def show_history(self): 85 | history_entries = self.get_history() 86 | if not history_entries: 87 | print("No command history found") 88 | return 89 | 90 | selected_entry = self.display_history_options(list(reversed(history_entries))) 91 | 92 | if selected_entry in (None, "Cancel"): 93 | return 94 | 95 | commands = selected_entry.response.commands 96 | 97 | if not commands: 98 | print("No commands available") 99 | return None 100 | 101 | show_options(commands) 102 | -------------------------------------------------------------------------------- /src/zev/command_selector.py: -------------------------------------------------------------------------------- 1 | from subprocess import run as run_command 2 | 3 | import pyperclip 4 | import questionary 5 | from rich import print as rprint 6 | 7 | from zev.llms.types import Command 8 | 9 | 10 | def show_options(commands: list[Command]): 11 | options = assemble_options(commands) 12 | selected = display_options(options) 13 | handle_selected_option(selected) 14 | 15 | 16 | def assemble_options(commands: list[Command]): 17 | options = [questionary.Choice(cmd.command, description=cmd.short_explanation, value=cmd) for cmd in commands] 18 | options.append(questionary.Choice("Cancel")) 19 | options.append(questionary.Separator()) 20 | return options 21 | 22 | 23 | def display_options(options: list[questionary.Choice]): 24 | selected = questionary.select( 25 | "Select command:", 26 | choices=options, 27 | use_shortcuts=True, 28 | style=questionary.Style( 29 | [ 30 | ("answer", "fg:#61afef"), 31 | ("question", "bold"), 32 | ("instruction", "fg:#98c379"), 33 | ] 34 | ), 35 | ).ask() 36 | return selected 37 | 38 | 39 | def handle_selected_option(selected): 40 | if selected and selected != "Cancel": 41 
| print("") 42 | if selected.dangerous_explanation: 43 | rprint(f"[red]⚠️ Warning: {selected.dangerous_explanation}[/red]\n") 44 | try: 45 | pyperclip.copy(selected.command) 46 | rprint("[green]✓[/green] Copied to clipboard") 47 | except pyperclip.PyperclipException as e: 48 | rprint( 49 | "[red]Could not copy to clipboard (see https://github.com/dtnewman/zev?tab=readme-ov-file#-dependencies)[/red]\n" 50 | ) 51 | rprint("[cyan]Here is your command:[/cyan]") 52 | print(selected.command) 53 | if questionary.confirm("Would you like to run it?").ask(): 54 | print("Running command:", selected.command) 55 | run_command(selected.command, shell=True) 56 | -------------------------------------------------------------------------------- /src/zev/config/__init__.py: -------------------------------------------------------------------------------- 1 | from pathlib import Path 2 | 3 | from dotenv import dotenv_values 4 | 5 | 6 | class Config: 7 | def __init__(self): 8 | self.config_path = Path.home() / ".zevrc" 9 | self.vals = dotenv_values(self.config_path) 10 | 11 | @property 12 | def llm_provider(self): 13 | return self.vals.get("LLM_PROVIDER") 14 | 15 | # OpenAI 16 | @property 17 | def openai_api_key(self): 18 | return self.vals.get("OPENAI_API_KEY") 19 | 20 | @property 21 | def openai_model(self): 22 | return self.vals.get("OPENAI_MODEL") 23 | 24 | # Ollama 25 | @property 26 | def ollama_base_url(self): 27 | return self.vals.get("OLLAMA_BASE_URL") 28 | 29 | @property 30 | def ollama_model(self): 31 | return self.vals.get("OLLAMA_MODEL") 32 | 33 | # Gemini 34 | @property 35 | def gemini_model(self): 36 | return self.vals.get("GEMINI_MODEL") 37 | 38 | @property 39 | def gemini_api_key(self): 40 | return self.vals.get("GEMINI_API_KEY") 41 | 42 | # Azure OpenAI 43 | @property 44 | def azure_openai_account_name(self): 45 | return self.vals.get("AZURE_OPENAI_ACCOUNT_NAME") 46 | 47 | @property 48 | def azure_openai_api_key(self): 49 | return self.vals.get("AZURE_OPENAI_API_KEY") 
50 | 51 | @property 52 | def azure_openai_deployment(self): 53 | return self.vals.get("AZURE_OPENAI_DEPLOYMENT") 54 | 55 | @property 56 | def azure_openai_api_version(self): 57 | return self.vals.get("AZURE_OPENAI_API_VERSION") 58 | 59 | 60 | config = Config() 61 | -------------------------------------------------------------------------------- /src/zev/config/setup.py: -------------------------------------------------------------------------------- 1 | from pathlib import Path 2 | from typing import Dict 3 | 4 | import questionary 5 | from dotenv import dotenv_values 6 | 7 | from zev.config.types import ( 8 | SetupQuestion, 9 | SetupQuestionSelect, 10 | SetupQuestionSelectOption, 11 | SetupQuestionText, 12 | ) 13 | from zev.constants import LLMProviders 14 | from zev.llms.azure_openai.setup import questions as azure_questions 15 | from zev.llms.gemini.setup import questions as gemini_questions 16 | from zev.llms.ollama.setup import questions as ollama_questions 17 | from zev.llms.openai.setup import questions as openai_questions 18 | 19 | setup_questions = [ 20 | SetupQuestionSelect( 21 | name="LLM_PROVIDER", 22 | prompt="Pick your LLM provider:", 23 | options=[ 24 | SetupQuestionSelectOption( 25 | value=LLMProviders.OPENAI, 26 | label="OpenAI", 27 | follow_up_questions=openai_questions, 28 | ), 29 | SetupQuestionSelectOption( 30 | value=LLMProviders.OLLAMA, 31 | label="Ollama", 32 | follow_up_questions=ollama_questions, 33 | ), 34 | SetupQuestionSelectOption( 35 | value=LLMProviders.GEMINI, 36 | label="Gemini", 37 | follow_up_questions=gemini_questions, 38 | ), 39 | SetupQuestionSelectOption( 40 | value=LLMProviders.AZURE_OPENAI, 41 | label="Azure OpenAI", 42 | follow_up_questions=azure_questions, 43 | ), 44 | ], 45 | ) 46 | ] 47 | 48 | 49 | def prompt_question(question: SetupQuestion, answers: Dict[str, str]) -> Dict[str, str]: 50 | existing_answer = answers.get(question.name) 51 | if isinstance(question, SetupQuestionSelect): 52 | selected_option: 
SetupQuestionSelectOption = questionary.select( 53 | question.prompt, 54 | choices=[ 55 | questionary.Choice(option.label, description=option.description, value=option) 56 | for option in question.options 57 | ], 58 | ).ask() 59 | 60 | answers[question.name] = selected_option.value 61 | for q in selected_option.follow_up_questions: 62 | answers.update(prompt_question(q, answers=answers)) 63 | elif isinstance(question, SetupQuestionText): 64 | answer = questionary.text( 65 | question.prompt, 66 | default=existing_answer or question.default or "", 67 | validate=question.validator, 68 | ).ask() 69 | answers[question.name] = answer 70 | else: 71 | raise Exception("Invalid question type") 72 | return answers 73 | 74 | 75 | def run_setup(): 76 | config_path = Path.home() / ".zevrc" 77 | answers = dotenv_values(config_path) # load in current values and then override as necessary 78 | for question in setup_questions: 79 | answers.update(prompt_question(question, answers)) 80 | 81 | new_file = "" 82 | for env_var_name, value in answers.items(): 83 | new_file += f"{env_var_name}={value}\n" 84 | 85 | with open(config_path, "w", encoding="utf-8") as f: 86 | f.write(new_file) 87 | -------------------------------------------------------------------------------- /src/zev/config/types.py: -------------------------------------------------------------------------------- 1 | from dataclasses import dataclass 2 | from typing import Callable, List, Optional, Tuple 3 | 4 | 5 | @dataclass 6 | class SetupQuestionSelectOption: 7 | """Represents a possible answer to a question""" 8 | 9 | value: str 10 | label: str 11 | description: Optional[str] = None 12 | follow_up_questions: Tuple["SetupQuestion", ...] 
= () 13 | 14 | 15 | @dataclass 16 | class SetupQuestion: 17 | name: str 18 | prompt: str 19 | 20 | 21 | @dataclass 22 | class SetupQuestionSelect(SetupQuestion): 23 | """Prompts the user with a select menu""" 24 | 25 | options: List[SetupQuestionSelectOption] 26 | 27 | 28 | @dataclass 29 | class SetupQuestionText(SetupQuestion): 30 | """Prompts the user to enter text""" 31 | 32 | validator: Optional[Callable] = None # a function that takes the answer and returns a bool 33 | default: Optional[str] = "" 34 | -------------------------------------------------------------------------------- /src/zev/constants.py: -------------------------------------------------------------------------------- 1 | class LLMProviders: 2 | OPENAI = "openai" 3 | OLLAMA = "ollama" 4 | GEMINI = "gemini" 5 | AZURE_OPENAI = "azure_openai" 6 | 7 | 8 | DEFAULT_PROVIDER = LLMProviders.OPENAI 9 | 10 | # Default model names for each provider 11 | OPENAI_DEFAULT_MODEL = "gpt-4o-mini" 12 | GEMINI_DEFAULT_MODEL = "gemini-2.0-flash" 13 | GEMINI_BASE_URL = "https://generativelanguage.googleapis.com" 14 | 15 | OPENAI_BASE_URL = "https://api.openai.com/v1" 16 | CONFIG_FILE_NAME = ".zevrc" 17 | HISTORY_FILE_NAME = ".zevhistory" 18 | 19 | 20 | PROMPT = """ 21 | You are a helpful assistant that helps users remember commands for the terminal. You 22 | will return a JSON object with a list of at most three options. 23 | 24 | The options should be related to the prompt that the user provides (the prompt might 25 | either be descriptive or in the form of a question). 26 | 27 | The options should be in the form of a command that can be run in a bash terminal. 28 | 29 | If the user prompt is not clear, return an empty list and set is_valid to false, and 30 | provide an explanation of why it is not clear in the explanation_if_not_valid field. 31 | 32 | If you provide an option that is likely to be dangerous, set is_dangerous to true for 33 | that option.
For example, the command 'git reset --hard' is dangerous because it can 34 | delete all the user's local changes. 'rm -rf' is dangerous because it can delete all 35 | the files in the user's directory. If something is marked as dangerous, provide a 36 | short explanation of why it is dangerous in the dangerous_explanation field (leave 37 | this field empty if the option is not dangerous). 38 | 39 | Otherwise, set is_valid to true, leave explanation_if_not_valid empty, and provide the 40 | commands in the commands field (remember, up to 3 options, and they all must be commands 41 | that can be run in a bash terminal without changing anything). Each command should have 42 | a short explanation of what it does. 43 | 44 | Here is some context about the user's environment: 45 | 46 | ============== 47 | 48 | {context} 49 | 50 | ============== 51 | 52 | Here is the user's prompt: 53 | 54 | ============== 55 | 56 | {prompt} 57 | """ 58 | -------------------------------------------------------------------------------- /src/zev/llms/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/dtnewman/zev/29a69a73ea001d7d995ed81e2b3c55f7a0264d63/src/zev/llms/__init__.py -------------------------------------------------------------------------------- /src/zev/llms/azure_openai/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/dtnewman/zev/29a69a73ea001d7d995ed81e2b3c55f7a0264d63/src/zev/llms/azure_openai/__init__.py -------------------------------------------------------------------------------- /src/zev/llms/azure_openai/provider.py: -------------------------------------------------------------------------------- 1 | from openai import AzureOpenAI 2 | 3 | from zev.config import config 4 | from zev.llms.openai.provider import OpenAIProvider 5 | 6 | 7 | class AzureOpenAIProvider(OpenAIProvider): 8 | AUTH_ERROR_MESSAGE = "Error: There
was an error authenticating with Azure OpenAI. Check Azure credentials or run `zev --setup` again." 9 | 10 | def __init__(self): 11 | required_vars = { 12 | "AZURE_OPENAI_ACCOUNT_NAME": config.azure_openai_account_name, 13 | "AZURE_OPENAI_DEPLOYMENT": config.azure_openai_deployment, 14 | "AZURE_OPENAI_API_VERSION": config.azure_openai_api_version, 15 | } 16 | 17 | for var, value in required_vars.items(): 18 | if not value: 19 | raise ValueError(f"{var} must be set. Run `zev --setup`.") 20 | 21 | azure_openai_endpoint = f"https://{config.azure_openai_account_name}.openai.azure.com/" 22 | 23 | if config.azure_openai_api_key: 24 | self.client = AzureOpenAI( 25 | api_key=config.azure_openai_api_key, 26 | azure_endpoint=azure_openai_endpoint, 27 | api_version=config.azure_openai_api_version, 28 | ) 29 | else: 30 | try: 31 | from azure.identity import ( # pylint: disable=import-outside-toplevel 32 | DefaultAzureCredential, 33 | get_bearer_token_provider, 34 | ) 35 | except ImportError as exc: 36 | raise ImportError("Missing required Azure packages. Run `pip install zev[azure]`") from exc 37 | token_provider = get_bearer_token_provider( 38 | DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default" 39 | ) 40 | self.client = AzureOpenAI( 41 | azure_endpoint=azure_openai_endpoint, 42 | api_version=config.azure_openai_api_version, 43 | azure_ad_token_provider=token_provider, 44 | ) 45 | 46 | self.model = config.azure_openai_deployment 47 | -------------------------------------------------------------------------------- /src/zev/llms/azure_openai/setup.py: -------------------------------------------------------------------------------- 1 | from zev.config.types import SetupQuestionText 2 | 3 | questions = ( 4 | SetupQuestionText( 5 | name="AZURE_OPENAI_ACCOUNT_NAME", 6 | prompt="Azure OpenAI account name (e.g. 
my-openai-resource):", 7 | default="", 8 | ), 9 | SetupQuestionText( 10 | name="AZURE_OPENAI_API_KEY", 11 | prompt="Azure OpenAI API key (leave blank to use Entra ID / keyless auth):", 12 | default="", 13 | ), 14 | SetupQuestionText( 15 | name="AZURE_OPENAI_DEPLOYMENT", 16 | prompt="Azure OpenAI deployment name (e.g. gpt-4, gpt-4o, etc):", 17 | default="gpt-4o", 18 | ), 19 | SetupQuestionText( 20 | name="AZURE_OPENAI_API_VERSION", 21 | prompt="Azure OpenAI API version:", 22 | default="2025-03-01-preview", 23 | ), 24 | ) 25 | -------------------------------------------------------------------------------- /src/zev/llms/gemini/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/dtnewman/zev/29a69a73ea001d7d995ed81e2b3c55f7a0264d63/src/zev/llms/gemini/__init__.py -------------------------------------------------------------------------------- /src/zev/llms/gemini/provider.py: -------------------------------------------------------------------------------- 1 | import json 2 | import urllib.error 3 | import urllib.request 4 | 5 | from zev.config import config 6 | from zev.constants import GEMINI_BASE_URL, GEMINI_DEFAULT_MODEL, PROMPT 7 | from zev.llms.inference_provider_base import InferenceProvider 8 | from zev.llms.types import OptionsResponse 9 | 10 | GEMINI_RESPONSE_SCHEMA = { 11 | "response_mime_type": "application/json", 12 | "response_schema": { 13 | "type": "OBJECT", 14 | "properties": { 15 | "commands": { 16 | "type": "ARRAY", 17 | "items": { 18 | "type": "OBJECT", 19 | "properties": { 20 | "command": {"type": "STRING"}, 21 | "short_explanation": {"type": "STRING"}, 22 | "is_dangerous": {"type": "BOOLEAN"}, 23 | "dangerous_explanation": {"type": "STRING"}, 24 | }, 25 | "required": [ 26 | "command", 27 | "short_explanation", 28 | "is_dangerous", 29 | ], 30 | }, 31 | }, 32 | "is_valid": {"type": "BOOLEAN"}, 33 | "explanation_if_not_valid": {"type": "STRING"}, 34 | }, 35 | "required": [ 36 
| "commands", 37 | "is_valid", 38 | ], 39 | }, 40 | } 41 | 42 | 43 | class GeminiProvider(InferenceProvider): 44 | def __init__(self): 45 | if not config.gemini_api_key: 46 | raise ValueError("GEMINI_API_KEY must be set. Try running `zev --setup`.") 47 | 48 | self.model = config.gemini_model or GEMINI_DEFAULT_MODEL 49 | self.api_url = f"{GEMINI_BASE_URL}/v1beta/models/{self.model}:generateContent?key={config.gemini_api_key}" 50 | 51 | def get_options(self, prompt: str, context: str) -> None: 52 | assembled_prompt = PROMPT.format(prompt=prompt, context=context) 53 | headers = {"Content-Type": "application/json"} 54 | body = json.dumps( 55 | { 56 | "contents": [{"parts": [{"text": assembled_prompt}]}], 57 | "generationConfig": GEMINI_RESPONSE_SCHEMA, 58 | } 59 | ).encode("utf-8") 60 | request = urllib.request.Request(self.api_url, data=body, headers=headers, method="POST") 61 | 62 | try: 63 | with urllib.request.urlopen(request) as response: 64 | data = json.loads(response.read().decode()) 65 | text_output = data["candidates"][0]["content"]["parts"][0]["text"] 66 | parsed_json = json.loads(text_output) 67 | return OptionsResponse(**parsed_json) 68 | except urllib.error.HTTPError as e: 69 | try: 70 | error_data = json.loads(e.read().decode()) 71 | print("Error:", error_data["error"]["message"]) 72 | except Exception: 73 | print("HTTP Error:", e.code) 74 | print("Note that to update settings, you can run `zev --setup`.") 75 | except Exception as e: 76 | print(f"Unexpected error: {e}") 77 | return None 78 | -------------------------------------------------------------------------------- /src/zev/llms/gemini/setup.py: -------------------------------------------------------------------------------- 1 | from zev.config.types import ( 2 | SetupQuestionSelect, 3 | SetupQuestionSelectOption, 4 | SetupQuestionText, 5 | ) 6 | 7 | questions = ( 8 | SetupQuestionText( 9 | name="GEMINI_API_KEY", 10 | prompt="Your GEMINI api key:", 11 | default="", 12 | ), 13 | SetupQuestionSelect( 
14 | name="GEMINI_MODEL", 15 | prompt="Choose which model you would like to default to:", 16 | options=[ 17 | SetupQuestionSelectOption( 18 | value="gemini-1.5-flash", 19 | label="gemini-1.5-flash", 20 | description="Low latency, good for summarization, good performance", 21 | ), 22 | SetupQuestionSelectOption( 23 | value="gemini-2.0-flash", 24 | label="gemini-2.0-flash", 25 | description="Long context, good performance", 26 | ), 27 | ], 28 | ), 29 | ) 30 | -------------------------------------------------------------------------------- /src/zev/llms/inference_provider_base.py: -------------------------------------------------------------------------------- 1 | from zev.llms.types import OptionsResponse 2 | 3 | 4 | class InferenceProvider: 5 | def __init__(self): 6 | raise NotImplementedError("Subclasses must implement this method") 7 | 8 | def get_options(self, prompt: str, context: str) -> OptionsResponse | None: 9 | raise NotImplementedError("Subclasses must implement this method") 10 | -------------------------------------------------------------------------------- /src/zev/llms/llm.py: -------------------------------------------------------------------------------- 1 | from zev.config import config 2 | from zev.constants import LLMProviders 3 | from zev.llms.inference_provider_base import InferenceProvider 4 | 5 | 6 | def get_inference_provider() -> InferenceProvider: 7 | if config.llm_provider == LLMProviders.OPENAI: 8 | # pylint: disable=import-outside-toplevel 9 | from zev.llms.openai.provider import OpenAIProvider 10 | 11 | return OpenAIProvider() 12 | elif config.llm_provider == LLMProviders.OLLAMA: 13 | # pylint: disable=import-outside-toplevel 14 | from zev.llms.ollama.provider import OllamaProvider 15 | 16 | return OllamaProvider() 17 | elif config.llm_provider == LLMProviders.GEMINI: 18 | # pylint: disable=import-outside-toplevel 19 | from zev.llms.gemini.provider import GeminiProvider 20 | 21 | return GeminiProvider() 22 | elif config.llm_provider == 
LLMProviders.AZURE_OPENAI: 23 | # pylint: disable=import-outside-toplevel 24 | from zev.llms.azure_openai.provider import AzureOpenAIProvider 25 | 26 | return AzureOpenAIProvider() 27 | else: 28 | raise ValueError(f"Invalid LLM provider: {config.llm_provider}") 29 | -------------------------------------------------------------------------------- /src/zev/llms/ollama/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/dtnewman/zev/29a69a73ea001d7d995ed81e2b3c55f7a0264d63/src/zev/llms/ollama/__init__.py -------------------------------------------------------------------------------- /src/zev/llms/ollama/provider.py: -------------------------------------------------------------------------------- 1 | from openai import OpenAI 2 | 3 | from zev.config import config 4 | from zev.llms.openai.provider import OpenAIProvider 5 | 6 | 7 | class OllamaProvider(OpenAIProvider): 8 | """ 9 | Same as OpenAIProvider, but takes a different base url and model. 10 | """ 11 | 12 | def __init__(self): 13 | if not config.ollama_base_url: 14 | raise ValueError("OLLAMA_BASE_URL must be set. Try running `zev --setup`.") 15 | if not config.ollama_model: 16 | raise ValueError("OLLAMA_MODEL must be set. 
Try running `zev --setup`.") 17 | # api_key is not used, but is still required by the OpenAI client 18 | # https://github.com/ollama/ollama/blob/5cfc1c39f3d5822b0c0906f863f6df45c141c33b/docs/openai.md?plain=1#L19 19 | self.client = OpenAI(base_url=config.ollama_base_url, api_key="ollama") 20 | self.model = config.ollama_model 21 | -------------------------------------------------------------------------------- /src/zev/llms/ollama/setup.py: -------------------------------------------------------------------------------- 1 | from zev.config.types import SetupQuestionText 2 | 3 | questions = ( 4 | SetupQuestionText( 5 | name="OLLAMA_BASE_URL", 6 | prompt="Enter the Ollama URL:", 7 | default="http://localhost:11434/v1", 8 | ), 9 | SetupQuestionText(name="OLLAMA_MODEL", prompt="Enter the model to use (e.g. llama3.2):"), 10 | ) 11 | -------------------------------------------------------------------------------- /src/zev/llms/openai/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/dtnewman/zev/29a69a73ea001d7d995ed81e2b3c55f7a0264d63/src/zev/llms/openai/__init__.py -------------------------------------------------------------------------------- /src/zev/llms/openai/provider.py: -------------------------------------------------------------------------------- 1 | from openai import AuthenticationError, OpenAI 2 | 3 | from zev.config import config 4 | from zev.constants import OPENAI_BASE_URL, OPENAI_DEFAULT_MODEL, PROMPT 5 | from zev.llms.inference_provider_base import InferenceProvider 6 | from zev.llms.types import OptionsResponse 7 | 8 | 9 | class OpenAIProvider(InferenceProvider): 10 | AUTH_ERROR_MESSAGE = ( 11 | "Error: There was an error with your OpenAI API key. You can change it by running `zev --setup`." 12 | ) 13 | 14 | def __init__(self): 15 | if not config.openai_api_key: 16 | raise ValueError("OPENAI_API_KEY must be set. 
Try running `zev --setup`.") 17 | 18 | self.client = OpenAI(base_url=OPENAI_BASE_URL, api_key=config.openai_api_key) 19 | self.model = config.openai_model or OPENAI_DEFAULT_MODEL 20 | 21 | def get_options(self, prompt: str, context: str) -> OptionsResponse | None: 22 | try: 23 | assembled_prompt = PROMPT.format(prompt=prompt, context=context) 24 | response = self.client.beta.chat.completions.parse( 25 | model=self.model, 26 | messages=[{"role": "user", "content": assembled_prompt}], 27 | response_format=OptionsResponse, 28 | ) 29 | return response.choices[0].message.parsed 30 | except AuthenticationError: 31 | print(self.AUTH_ERROR_MESSAGE) 32 | return None 33 | -------------------------------------------------------------------------------- /src/zev/llms/openai/setup.py: -------------------------------------------------------------------------------- 1 | from zev.config.types import ( 2 | SetupQuestionSelect, 3 | SetupQuestionSelectOption, 4 | SetupQuestionText, 5 | ) 6 | 7 | questions = ( 8 | SetupQuestionText( 9 | name="OPENAI_API_KEY", 10 | prompt="Your OPENAI api key:", 11 | default="", 12 | ), 13 | SetupQuestionSelect( 14 | name="OPENAI_MODEL", 15 | prompt="Choose which model you would like to default to:", 16 | options=[ 17 | SetupQuestionSelectOption( 18 | value="gpt-4o-mini", 19 | label="gpt-4o-mini", 20 | description="Good performance and speed, and cheaper", 21 | ), 22 | SetupQuestionSelectOption( 23 | value="gpt-4o", 24 | label="gpt-4o", 25 | description="More expensive and slower, but better performance", 26 | ), 27 | ], 28 | ), 29 | ) 30 | -------------------------------------------------------------------------------- /src/zev/llms/types.py: -------------------------------------------------------------------------------- 1 | from typing import Optional 2 | 3 | from pydantic import BaseModel 4 | 5 | 6 | class Command(BaseModel): 7 | command: str 8 | short_explanation: str 9 | is_dangerous: bool 10 | dangerous_explanation: Optional[str] = None 11 | 12 
| 13 | class OptionsResponse(BaseModel): 14 | commands: list[Command] 15 | is_valid: bool 16 | explanation_if_not_valid: Optional[str] = None 17 | -------------------------------------------------------------------------------- /src/zev/main.py: -------------------------------------------------------------------------------- 1 | import sys 2 | from pathlib import Path 3 | 4 | import dotenv 5 | from rich import print as rprint 6 | from rich.console import Console 7 | 8 | from zev.command_history import CommandHistory 9 | from zev.command_selector import show_options 10 | from zev.config import config 11 | from zev.config.setup import run_setup 12 | from zev.constants import CONFIG_FILE_NAME 13 | from zev.llms.llm import get_inference_provider 14 | from zev.utils import get_env_context, get_input_string, show_help 15 | 16 | command_history = CommandHistory() 17 | 18 | 19 | def setup(): 20 | run_setup() 21 | 22 | 23 | def get_options(words: str): 24 | context = get_env_context() 25 | console = Console() 26 | rprint(f"") 27 | with console.status( 28 | f"[bold blue]Thinking... 
[grey39](running query using {config.llm_provider} backend)", spinner="dots" 29 | ): 30 | inference_provider = get_inference_provider() 31 | response = inference_provider.get_options(prompt=words, context=context) 32 | command_history.save_options(words, response) 33 | 34 | if response is None: 35 | return 36 | 37 | if not response.is_valid: 38 | print(response.explanation_if_not_valid) 39 | return 40 | 41 | if not response.commands: 42 | print("No commands available") 43 | return 44 | 45 | show_options(response.commands) 46 | 47 | 48 | def run_no_prompt(): 49 | input = get_input_string("input", "Describe what you want to do:", required=False, help_text="(-h for help)") 50 | if handle_special_case(input): 51 | return 52 | get_options(input) 53 | 54 | 55 | def handle_special_case(args): 56 | if not args: 57 | return False 58 | 59 | if isinstance(args, str): 60 | args = args.split() 61 | 62 | if len(args) > 1: 63 | return False 64 | 65 | command = args[0].lower() 66 | 67 | if command == "--setup" or command == "-s": 68 | setup() 69 | return True 70 | 71 | if command == "--version" or command == "-v": 72 | print("zev version: 0.8.1") 73 | return True 74 | 75 | if command == "--recent" or command == "-r": 76 | command_history.show_history() 77 | return True 78 | 79 | if command == "--help" or command == "-h": 80 | show_help() 81 | return True 82 | 83 | return False 84 | 85 | 86 | def app(): 87 | # check if .zevrc exists or if setting up again 88 | config_path = Path.home() / CONFIG_FILE_NAME 89 | args = [arg.strip() for arg in sys.argv[1:]] 90 | 91 | if not config_path.exists(): 92 | run_setup() 93 | print("Setup complete...\n") 94 | if len(args) == 1 and args[0] == "--setup": 95 | return 96 | 97 | if handle_special_case(args): 98 | return 99 | 100 | dotenv.load_dotenv(config_path, override=True) 101 | 102 | if not args: 103 | run_no_prompt() 104 | return 105 | 106 | # Strip any trailing question marks from the input 107 | query = " ".join(args).rstrip("?") 108 | 
get_options(query) 109 | 110 | 111 | if __name__ == "__main__": 112 | app() 113 | -------------------------------------------------------------------------------- /src/zev/utils.py: -------------------------------------------------------------------------------- 1 | import os 2 | import platform 3 | 4 | import questionary 5 | 6 | CLI_STYLE = questionary.Style( 7 | [ 8 | ("qmark", "#98c379"), 9 | ("question", "#98c379"), 10 | ("instruction", "italic #646464"), 11 | ] 12 | ) 13 | 14 | 15 | def get_input_string( 16 | field_name: str, 17 | prompt_text: str, 18 | default: str = "", 19 | required: bool = False, 20 | help_text: str = "", 21 | ) -> str: 22 | """Ask for a single line of input in the terminal, with colour + hint.""" 23 | base = f"{prompt_text} (default: {default})" if default else prompt_text 24 | 25 | while True: 26 | value = questionary.text( 27 | message=base, 28 | default=default, 29 | instruction=help_text or None, 30 | style=CLI_STYLE, 31 | validate=lambda t: bool(t) if required else True, 32 | ).ask() 33 | 34 | if value is None: # user pressed Ctrl-C / Ctrl-D 35 | raise KeyboardInterrupt 36 | if value == "" and default: # user just hit ↵ 37 | return default 38 | if value or not required: 39 | return value 40 | 41 | # Required but empty ─ repeat 42 | print(f"{field_name} is required, please try again.") 43 | 44 | 45 | def get_env_context() -> str: 46 | os_name = platform.platform(aliased=True) 47 | shell = os.environ.get("SHELL") or os.environ.get("COMSPEC") 48 | return f"OS: {os_name}\nSHELL: {shell}" if shell else f"OS: {os_name}" 49 | 50 | 51 | def show_help(): 52 | print(""" 53 | Zev is a simple CLI tool to help you remember terminal commands. 
54 | 55 | Usage: 56 | zev "<query>" Describe what you want to do 57 | zev --help, -h Show this help message 58 | zev --recent, -r Show recently run commands and results 59 | zev --setup, -s Run setup again 60 | zev --version, -v Show version information 61 | """) 62 | --------------------------------------------------------------------------------
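Taken together, `inference_provider_base.py` and `llm.py` define a small plugin surface: each backend subclasses `InferenceProvider`, validates its config in `__init__`, and returns an `OptionsResponse` (or `None` on failure) from `get_options`. A minimal, self-contained sketch of what a new backend could look like — `EchoProvider` is a hypothetical example, and the dataclasses below are illustrative stand-ins for the project's pydantic models in `llms/types.py`:

```python
from dataclasses import dataclass
from typing import Optional


# Stand-ins mirroring the pydantic models in src/zev/llms/types.py.
@dataclass
class Command:
    command: str
    short_explanation: str
    is_dangerous: bool
    dangerous_explanation: Optional[str] = None


@dataclass
class OptionsResponse:
    commands: list
    is_valid: bool
    explanation_if_not_valid: Optional[str] = None


# Mirrors src/zev/llms/inference_provider_base.py.
class InferenceProvider:
    def __init__(self):
        raise NotImplementedError("Subclasses must implement this method")

    def get_options(self, prompt: str, context: str) -> Optional[OptionsResponse]:
        raise NotImplementedError("Subclasses must implement this method")


class EchoProvider(InferenceProvider):
    """Toy backend that wraps the user's request in a single echo command."""

    def __init__(self):
        # A real provider would validate its config here and raise
        # ValueError("... Try running `zev --setup`.") if something is missing.
        pass

    def get_options(self, prompt: str, context: str) -> Optional[OptionsResponse]:
        cmd = Command(
            command=f'echo "{prompt}"',
            short_explanation="Echo the request back (placeholder).",
            is_dangerous=False,
        )
        return OptionsResponse(commands=[cmd], is_valid=True)


provider = EchoProvider()
resp = provider.get_options("list files", context="OS: Linux\nSHELL: /bin/bash")
assert resp is not None and resp.is_valid
```

A real backend would also be added to the `if/elif` dispatch in `get_inference_provider` (in `llm.py`) and, if it needs configuration, given a `setup.py` with questions like the Ollama and Gemini ones.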