├── CNAME
├── samples
│   └── puppy.jpg
├── claudette
│   ├── __init__.py
│   ├── toolloop.py
│   ├── text_editor.py
│   ├── asink.py
│   ├── _modidx.py
│   └── core.py
├── MANIFEST.in
├── .github
│   └── workflows
│       ├── test.yaml.off
│       └── deploy.yaml
├── nbdev.yml
├── tools
│   └── refresh_llm_docs.sh
├── pyproject.toml
├── _quarto.yml
├── styles.css
├── settings.ini
├── llms.txt
├── .gitignore
├── apilist.txt
├── setup.py
├── CHANGELOG.md
├── 03_text_editor.ipynb
├── LICENSE
├── README.txt
├── llms-ctx.txt
└── README.md
/CNAME:
--------------------------------------------------------------------------------
1 | claudette.answer.ai
2 |
--------------------------------------------------------------------------------
/samples/puppy.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AnswerDotAI/claudette/HEAD/samples/puppy.jpg
--------------------------------------------------------------------------------
/claudette/__init__.py:
--------------------------------------------------------------------------------
1 | __version__ = "0.3.11"
2 | from .core import *
3 | from .toolloop import *
4 | from .asink import *
5 |
6 |
--------------------------------------------------------------------------------
/MANIFEST.in:
--------------------------------------------------------------------------------
1 | include settings.ini
2 | include LICENSE
3 | include CONTRIBUTING.md
4 | include README.md
5 | recursive-exclude * __pycache__
6 |
--------------------------------------------------------------------------------
/.github/workflows/test.yaml.off:
--------------------------------------------------------------------------------
1 | name: CI
2 | on: [workflow_dispatch, pull_request, push]
3 |
4 | jobs:
5 | test:
6 | runs-on: ubuntu-latest
7 | steps: [uses: fastai/workflows/nbdev-ci@master]
8 |
--------------------------------------------------------------------------------
/nbdev.yml:
--------------------------------------------------------------------------------
1 | project:
2 | output-dir: _docs
3 |
4 | website:
5 | title: "claudette"
6 | site-url: "https://claudette.answer.ai/"
7 | description: "Claudette is Claude's friend"
8 | repo-branch: main
9 | repo-url: "https://github.com/AnswerDotAI/claudette"
10 |
--------------------------------------------------------------------------------
/.github/workflows/deploy.yaml:
--------------------------------------------------------------------------------
1 | name: Deploy to GitHub Pages
2 |
3 | permissions:
4 | contents: write
5 | pages: write
6 |
7 | on:
8 | push:
9 | branches: [ "main", "master" ]
10 | workflow_dispatch:
11 | jobs:
12 | deploy:
13 | runs-on: ubuntu-latest
14 | steps: [uses: fastai/workflows/quarto-ghp@master]
15 |
--------------------------------------------------------------------------------
/tools/refresh_llm_docs.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | echo "Refreshing LLM documentation files..."
4 |
5 | echo "Generating API list documentation..."
6 | pysym2md claudette --output_file apilist.txt
7 |
8 | echo "Generating context files..."
9 | llms_txt2ctx llms.txt > llms-ctx.txt
10 | llms_txt2ctx llms.txt --optional True > llms-ctx-full.txt
11 |
12 | echo "✅ Documentation refresh complete!"
13 |
--------------------------------------------------------------------------------
/pyproject.toml:
--------------------------------------------------------------------------------
1 | [build-system]
2 | requires = ["setuptools>=64.0"]
3 | build-backend = "setuptools.build_meta"
4 |
5 | [project]
6 | name="claudette"
7 | requires-python=">=3.9"
8 | dynamic = [ "keywords", "description", "version", "dependencies", "optional-dependencies", "readme", "license", "authors", "classifiers", "entry-points", "scripts", "urls"]
9 |
10 | [tool.uv]
11 | cache-keys = [{ file = "pyproject.toml" }, { file = "settings.ini" }, { file = "setup.py" }]
12 |
--------------------------------------------------------------------------------
/_quarto.yml:
--------------------------------------------------------------------------------
1 | project:
2 | type: website
3 | resources:
4 | - "*.txt"
5 | preview:
6 | port: 3000
7 | browser: false
8 |
9 | format:
10 | html:
11 | theme: cosmo
12 | css: styles.css
13 | toc: true
14 | code-tools: true
15 | code-block-bg: true
16 | code-block-border-left: "#31BAE9"
17 | highlight-style: arrow
18 | grid:
19 | sidebar-width: 180px
20 | body-width: 1800px
21 | margin-width: 150px
22 | gutter-width: 1.0rem
23 | keep-md: true
24 | commonmark: default
25 |
26 | website:
27 | twitter-card: true
28 | open-graph: true
29 | repo-actions: [issue]
30 | navbar:
31 | background: primary
32 | search: true
33 | sidebar:
34 | style: floating
35 |
36 | metadata-files:
37 | - nbdev.yml
38 | - sidebar.yml
39 |
40 |
--------------------------------------------------------------------------------
/styles.css:
--------------------------------------------------------------------------------
1 | .cell {
2 | margin-bottom: 1rem;
3 | }
4 |
5 | .cell > .sourceCode {
6 | margin-bottom: 0;
7 | }
8 |
9 | .cell-output > pre {
10 | margin-bottom: 0;
11 | }
12 |
13 | .cell-output:not(.cell-output-markdown) > pre,
14 | .cell-output:not(.cell-output-markdown) > .sourceCode > pre,
15 | .cell-output-stdout > pre,
16 | .cell-output-markdown {
17 | margin-left: 0.4rem;
18 | padding-left: 0.4rem;
19 | margin-top: 0;
20 | background: none;
21 | border-left: 2px solid lightsalmon;
22 | border-top-left-radius: 0;
23 | border-top-right-radius: 0;
24 | }
25 |
26 | .cell-output > .sourceCode {
27 | border: none;
28 | }
29 |
30 | .cell-output > .sourceCode {
31 | background: none;
32 | margin-top: 0;
33 | }
34 |
35 | div.description {
36 | padding-left: 2px;
37 | padding-top: 5px;
38 | font-style: italic;
39 | font-size: 135%;
40 | opacity: 70%;
41 | }
42 |
--------------------------------------------------------------------------------
/settings.ini:
--------------------------------------------------------------------------------
1 | [DEFAULT]
2 | repo = claudette
3 | lib_name = claudette
4 | version = 0.3.11
5 | min_python = 3.9
6 | license = apache2
7 | black_formatting = False
8 | requirements = fastcore>=1.8.4 msglm>=0.0.9 anthropic>=0.52 toolslm>=0.3.0
9 | dev_requirements = google-auth pycachy
10 | doc_path = _docs
11 | lib_path = claudette
12 | nbs_path = .
13 | recursive = False
14 | tst_flags = notest
15 | put_version_in_init = True
16 | branch = main
17 | custom_sidebar = False
18 | doc_host = https://claudette.answer.ai
19 | doc_baseurl = /
20 | git_url = https://github.com/AnswerDotAI/claudette
21 | title = claudette
22 | audience = Developers
23 | author = Jeremy Howard
24 | author_email = j@fast.ai
25 | copyright = 2024 onwards, Jeremy Howard
26 | description = Claudette is Claude's friend
27 | keywords = nbdev jupyter notebook python
28 | language = English
29 | status = 4
30 | user = AnswerDotAI
31 | readme_nb = index.ipynb
32 | allowed_metadata_keys =
33 | allowed_cell_metadata_keys =
34 | jupyter_hooks = True
35 | clean_ids = True
36 | clear_all = False
37 | cell_number = False
38 | skip_procs =
39 | update_pyproject = True
40 |
41 |
--------------------------------------------------------------------------------
/llms.txt:
--------------------------------------------------------------------------------
1 | # Claudette
2 |
3 | > Claudette is a Python library that wraps Anthropic's Claude API to provide a higher-level interface for creating AI applications. It automates common patterns while maintaining full control, offering features like stateful chat, prefill support, image handling, and streamlined tool use.
4 |
5 | Things to remember when using Claudette:
6 |
7 | - You must set the `ANTHROPIC_API_KEY` environment variable with your Anthropic API key
8 | - Claudette is designed to work with Claude 3 models (Opus, Sonnet, Haiku) and supports multiple providers (Anthropic direct, AWS Bedrock, Google Vertex)
9 | - The library provides both synchronous and asynchronous interfaces
10 | - Use `Chat()` for maintaining conversation state and handling tool interactions
11 | - When using tools, the library automatically handles the request/response loop
12 | - Image support is built in but only available on compatible models (not Haiku)
13 |
14 | ## Docs
15 |
16 | - [README](https://raw.githubusercontent.com/AnswerDotAI/claudette/refs/heads/main/README.md): Quick start guide and overview
17 |
18 | ## API
19 |
 20 | - [API List](https://raw.githubusercontent.com/AnswerDotAI/claudette/refs/heads/main/apilist.txt): A succinct list of all functions and methods in claudette.
21 |
22 | ## Optional
23 | - [Tool loop handling](https://claudette.answer.ai/toolloop.html.md): How to use the tool loop functionality for complex multi-step interactions
24 | - [Async support](https://claudette.answer.ai/async.html.md): Using Claudette with async/await
25 |
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | test.txt
2 | CLAUDE.md
3 | .claude/
4 | .gitattributes
5 | _proc/
6 | index_files/
7 | sidebar.yml
8 | Gemfile.lock
9 | token
10 | _docs/
11 | conda/
12 | .last_checked
13 | .gitconfig
14 | *.bak
15 | *.log
16 | *~
17 | ~*
18 | _tmp*
19 | tmp*
20 | tags
21 |
22 | # Byte-compiled / optimized / DLL files
23 | __pycache__/
24 | *.py[cod]
25 | *$py.class
26 |
27 | # C extensions
28 | *.so
29 |
30 | # Distribution / packaging
31 | .Python
32 | env/
33 | build/
34 | develop-eggs/
35 | dist/
36 | downloads/
37 | eggs/
38 | .eggs/
39 | lib/
40 | lib64/
41 | parts/
42 | sdist/
43 | var/
44 | wheels/
45 | *.egg-info/
46 | .installed.cfg
47 | *.egg
48 |
49 | # PyInstaller
50 | # Usually these files are written by a python script from a template
51 | # before PyInstaller builds the exe, so as to inject date/other infos into it.
52 | *.manifest
53 | *.spec
54 |
55 | # Installer logs
56 | pip-log.txt
57 | pip-delete-this-directory.txt
58 |
59 | # Unit test / coverage reports
60 | htmlcov/
61 | .tox/
62 | .coverage
63 | .coverage.*
64 | .cache
65 | nosetests.xml
66 | coverage.xml
67 | *.cover
68 | .hypothesis/
69 |
70 | # Translations
71 | *.mo
72 | *.pot
73 |
74 | # Django stuff:
75 | *.log
76 | local_settings.py
77 |
78 | # Flask stuff:
79 | instance/
80 | .webassets-cache
81 |
82 | # Scrapy stuff:
83 | .scrapy
84 |
85 | # Sphinx documentation
86 | docs/_build/
87 |
88 | # PyBuilder
89 | target/
90 |
91 | # Jupyter Notebook
92 | .ipynb_checkpoints
93 |
94 | # pyenv
95 | .python-version
96 |
97 | # celery beat schedule file
98 | celerybeat-schedule
99 |
100 | # SageMath parsed files
101 | *.sage.py
102 |
103 | # dotenv
104 | .env
105 |
106 | # virtualenv
107 | .venv
108 | venv/
109 | ENV/
110 |
111 | # Spyder project settings
112 | .spyderproject
113 | .spyproject
114 |
115 | # Rope project settings
116 | .ropeproject
117 |
118 | # mkdocs documentation
119 | /site
120 |
121 | # mypy
122 | .mypy_cache/
123 |
124 | .vscode
125 | *.swp
126 |
127 | # osx generated files
128 | .DS_Store
129 | .DS_Store?
130 | .Trashes
131 | ehthumbs.db
132 | Thumbs.db
133 | .idea
134 |
135 | # pytest
136 | .pytest_cache
137 |
138 | # tools/trust-doc-nbs
139 | docs_src/.last_checked
140 |
141 | # symlinks to fastai
142 | docs_src/fastai
143 | tools/fastai
144 |
145 | # link checker
146 | checklink/cookies.txt
147 |
148 | # .gitconfig is now autogenerated
149 | .gitconfig
150 |
151 | _docs
152 |
153 | /.quarto/
154 |
--------------------------------------------------------------------------------
/apilist.txt:
--------------------------------------------------------------------------------
1 | # claudette Module Documentation
2 |
3 | ## claudette.asink
4 |
5 | - `class AsyncClient`
6 | - `def __init__(self, model, cli, log)`
7 | Async Anthropic messages client.
8 |
9 |
10 | - `@patch @delegates(Client) def __call__(self, msgs, sp, temp, maxtok, prefill, stream, stop, **kwargs)`
11 | Make an async call to Claude.
12 |
13 | - `@delegates() class AsyncChat`
14 | - `def __init__(self, model, cli, **kwargs)`
15 | Anthropic async chat client.
16 |
17 |
18 | ## claudette.core
19 |
20 | - `def find_block(r, blk_type)`
21 | Find the first block of type `blk_type` in `r.content`.
22 |
23 | - `def contents(r)`
24 | Helper to get the contents from Claude response `r`.
25 |
26 | - `def usage(inp, out, cache_create, cache_read)`
27 | Slightly more concise version of `Usage`.
28 |
29 | - `@patch def __add__(self, b)`
30 | Add together each of `input_tokens` and `output_tokens`
31 |
32 | - `def mk_msgs(msgs, **kw)`
33 | Helper to set 'assistant' role on alternate messages.
34 |
35 | - `class Client`
36 | - `def __init__(self, model, cli, log)`
37 | Basic Anthropic messages client.
38 |
39 |
40 | - `def mk_tool_choice(choose)`
41 | Create a `tool_choice` dict that's 'auto' if `choose` is `None`, 'any' if it is True, or 'tool' otherwise
42 |
43 | - `def mk_funcres(tuid, res)`
44 | Given tool use id and the tool result, create a tool_result response.
45 |
46 | - `def mk_toolres(r, ns, obj)`
47 | Create a `tool_result` message from response `r`.
48 |
49 | - `@patch @delegates(messages.Messages.create) def __call__(self, msgs, sp, temp, maxtok, prefill, stream, stop, tools, tool_choice, **kwargs)`
50 | Make a call to Claude.
51 |
52 | - `@patch @delegates(Client.__call__) def structured(self, msgs, tools, obj, ns, **kwargs)`
53 | Return the value of all tool calls (generally used for structured outputs)
54 |
55 | - `class Chat`
56 | - `def __init__(self, model, cli, sp, tools, temp, cont_pr)`
57 | Anthropic chat client.
58 |
59 | - `@property def use(self)`
60 |
61 | - `def img_msg(data, cache)`
62 | Convert image `data` into an encoded `dict`
63 |
64 | - `def text_msg(s, cache)`
65 | Convert `s` to a text message
66 |
67 | - `def mk_msg(content, role, cache, **kw)`
68 | Helper to create a `dict` appropriate for a Claude message. `kw` are added as key/value pairs to the message
69 |
70 | ## claudette.toolloop
71 |
72 | - `@patch @delegates(Chat.__call__) def toolloop(self, pr, max_steps, trace_func, cont_func, **kwargs)`
73 | Add prompt `pr` to dialog and get a response from Claude, automatically following up with `tool_use` messages
74 |
75 |
--------------------------------------------------------------------------------
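
Several helpers in the API list above are easy to illustrate. For example, `mk_msgs` is documented as a "Helper to set 'assistant' role on alternate messages", turning a flat list of strings into a valid dialog. The following is a standalone sketch of that behavior, not claudette's actual implementation:

```python
def mk_msgs_sketch(msgs):
    "Alternate 'user'/'assistant' roles over a flat list of messages, starting with 'user'."
    roles = ('user', 'assistant')
    return [{'role': roles[i % 2], 'content': m} for i, m in enumerate(msgs)]

dialog = mk_msgs_sketch(['Hi', 'Hello! How can I help?', 'What is 2+2?'])
print(dialog)  # roles alternate: user, assistant, user
```

Odd-indexed entries get the 'assistant' role, which is what lets a caller pass prior model replies inline without building message dicts by hand.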
/setup.py:
--------------------------------------------------------------------------------
1 | from pkg_resources import parse_version
2 | from configparser import ConfigParser
3 | import setuptools, shlex
4 | assert parse_version(setuptools.__version__)>=parse_version('36.2')
5 |
6 | # note: all settings are in settings.ini; edit there, not here
7 | config = ConfigParser(delimiters=['='])
8 | config.read('settings.ini', encoding='utf-8')
9 | cfg = config['DEFAULT']
10 |
11 | cfg_keys = 'version description keywords author author_email'.split()
12 | expected = cfg_keys + "lib_name user branch license status min_python audience language".split()
13 | for o in expected: assert o in cfg, "missing expected setting: {}".format(o)
14 | setup_cfg = {o:cfg[o] for o in cfg_keys}
15 |
16 | licenses = {
17 | 'apache2': ('Apache Software License 2.0','OSI Approved :: Apache Software License'),
18 | 'mit': ('MIT License', 'OSI Approved :: MIT License'),
19 | 'gpl2': ('GNU General Public License v2', 'OSI Approved :: GNU General Public License v2 (GPLv2)'),
20 | 'gpl3': ('GNU General Public License v3', 'OSI Approved :: GNU General Public License v3 (GPLv3)'),
21 | 'bsd3': ('BSD License', 'OSI Approved :: BSD License'),
22 | }
23 | statuses = [ '1 - Planning', '2 - Pre-Alpha', '3 - Alpha',
24 | '4 - Beta', '5 - Production/Stable', '6 - Mature', '7 - Inactive' ]
25 | py_versions = '3.6 3.7 3.8 3.9 3.10'.split()
26 |
27 | requirements = shlex.split(cfg.get('requirements', ''))
28 | if cfg.get('pip_requirements'): requirements += shlex.split(cfg.get('pip_requirements', ''))
29 | min_python = cfg['min_python']
30 | lic = licenses.get(cfg['license'].lower(), (cfg['license'], None))
31 | dev_requirements = (cfg.get('dev_requirements') or '').split()
32 |
33 | setuptools.setup(
34 | name = cfg['lib_name'],
35 | license = lic[0],
36 | classifiers = [
37 | 'Development Status :: ' + statuses[int(cfg['status'])],
38 | 'Intended Audience :: ' + cfg['audience'].title(),
39 | 'Natural Language :: ' + cfg['language'].title(),
40 | ] + ['Programming Language :: Python :: '+o for o in py_versions[py_versions.index(min_python):]] + (['License :: ' + lic[1] ] if lic[1] else []),
41 | url = cfg['git_url'],
42 | packages = setuptools.find_packages(),
43 | include_package_data = True,
44 | install_requires = requirements,
45 | extras_require={ 'dev': dev_requirements },
46 | dependency_links = cfg.get('dep_links','').split(),
47 | python_requires = '>=' + cfg['min_python'],
48 | long_description = open('README.md', encoding='utf-8').read(),
49 | long_description_content_type = 'text/markdown',
50 | zip_safe = False,
51 | entry_points = {
52 | 'console_scripts': cfg.get('console_scripts','').split(),
53 | 'nbdev': [f'{cfg.get("lib_path")}={cfg.get("lib_path")}._modidx:d']
54 | },
55 | **setup_cfg)
56 |
57 |
58 |
--------------------------------------------------------------------------------
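
As the comment in setup.py notes, all metadata lives in settings.ini; setup.py just parses it with `ConfigParser(delimiters=['='])` so that values containing `:` (such as URLs) survive, and splits space-separated requirement strings with `shlex`. A minimal sketch of that pattern, using an inline ini string rather than the real file:

```python
import shlex
from configparser import ConfigParser

ini = """[DEFAULT]
lib_name = claudette
doc_host = https://claudette.answer.ai
requirements = fastcore>=1.8.4 msglm>=0.0.9
"""

# delimiters=['='] keeps ':' usable inside values such as the doc_host URL
config = ConfigParser(delimiters=['='])
config.read_string(ini)
cfg = config['DEFAULT']

print(cfg['doc_host'])                   # https://claudette.answer.ai
print(shlex.split(cfg['requirements']))  # ['fastcore>=1.8.4', 'msglm>=0.0.9']
```

With the default delimiters (`=` and `:`), the `doc_host` line would be split at the first `:`, which is why the narrowed delimiter list matters here.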
/claudette/toolloop.py:
--------------------------------------------------------------------------------
1 | # AUTOGENERATED! DO NOT EDIT! File to edit: ../01_toolloop.ipynb.
2 |
3 | # %% auto 0
4 | __all__ = []
5 |
6 | # %% ../01_toolloop.ipynb
7 | from .core import *
8 | from fastcore.utils import *
9 | from fastcore.meta import delegates
10 | from fastcore.xtras import save_iter
11 | from functools import wraps
12 |
13 | from anthropic.types import TextBlock, Message, ToolUseBlock
14 |
15 | # %% ../01_toolloop.ipynb
16 | _final_prompt = "You have no more tool uses. Please summarize your findings. If you did not complete your goal please tell the user what further work needs to be done so they can choose how best to proceed."
17 |
18 | # %% ../01_toolloop.ipynb
19 | @patch
20 | @delegates(Chat.__call__)
21 | def toolloop(self:Chat,
22 | pr, # Prompt to pass to Claude
23 | max_steps=10, # Maximum number of tool requests to loop through
24 | cont_func:callable=noop, # Function that stops loop if returns False
25 | final_prompt=_final_prompt, # Prompt to add if last message is a tool call
26 | **kwargs):
27 | "Add prompt `pr` to dialog and get a response from Claude, automatically following up with `tool_use` messages"
28 | @save_iter
29 | def _f(o):
30 | init_n = len(self.h)
31 | r = self(pr, **kwargs)
32 | yield r
33 | if len(self.last)>1: yield self.last[1]
34 | for i in range(max_steps-1):
35 | if self.c.stop_reason!='tool_use': break
36 | r = self(final_prompt if i==max_steps-2 else None, **kwargs)
37 | yield r
38 | if len(self.last)>1: yield self.last[1]
39 | if not cont_func(*self.h[-3:]): break
40 | o.value = self.h[init_n+1:]
41 | return _f()
42 |
43 | # %% ../01_toolloop.ipynb
44 | from .asink import AsyncChat
45 |
46 | # %% ../01_toolloop.ipynb
47 | @patch
48 | @delegates(AsyncChat.__call__)
49 | def toolloop(
50 | self: AsyncChat,
51 | pr, # Prompt to pass to Claude
52 | max_steps=10, # Maximum number of tool requests to loop through
53 | cont_func: callable = noop, # Function that stops loop if returns False
54 | final_prompt = _final_prompt, # Prompt to add if last message is a tool call
55 | **kwargs
56 | ):
57 | "Add prompt `pr` to dialog and get a response from Claude, automatically following up with `tool_use` messages"
58 | @save_iter
59 | async def _f(o):
60 | init_n = len(self.h)
61 | r = await self(pr, **kwargs)
62 | yield r
63 | if len(self.last)>1: yield self.last[1]
64 | for i in range(max_steps-1):
65 | if self.c.stop_reason != 'tool_use': break
66 | r = await self(final_prompt if i==max_steps-2 else None, **kwargs)
67 | yield r
68 | if len(self.last)>1: yield self.last[1]
69 | if not cont_func(*self.h[-3:]): break
70 | o.value = self.h[init_n+1:]
71 | return _f()
72 |
--------------------------------------------------------------------------------
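
The control flow of `toolloop` above is: send the prompt, then keep calling the model while `stop_reason == 'tool_use'`, substituting `final_prompt` on the last permitted step so the model wraps up. A standalone sketch of that loop with a stub chat object (the stub and its fields are illustrative only, not claudette's API):

```python
class StubChat:
    "Pretends to be a Chat: requests tools on the first two calls, then answers."
    def __init__(self): self.stop_reason, self.n = 'tool_use', 0
    def __call__(self, pr):
        self.n += 1
        if self.n >= 3: self.stop_reason = 'end_turn'
        return f'step {self.n} (prompt={pr!r})'

def toolloop_sketch(chat, pr, max_steps=10, final_prompt='Please summarize.'):
    "Mirror the toolloop control flow: loop while the model keeps requesting tools."
    msgs = [chat(pr)]
    for i in range(max_steps - 1):
        if chat.stop_reason != 'tool_use': break
        # On the final permitted step, ask the model to wrap up instead
        msgs.append(chat(final_prompt if i == max_steps - 2 else None))
    return msgs

print(toolloop_sketch(StubChat(), 'What is 2+2?'))  # three steps, then end_turn
```

The real implementation additionally yields each tool request/result pair as it goes (via `save_iter`) and consults `cont_func` to allow early termination.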
/claudette/text_editor.py:
--------------------------------------------------------------------------------
1 | # AUTOGENERATED! DO NOT EDIT! File to edit: ../03_text_editor.ipynb.
2 |
3 | # %% auto 0
4 | __all__ = ['text_editor_conf', 'view', 'create', 'insert', 'str_replace', 'str_replace_editor', 'str_replace_based_edit_tool']
5 |
6 | # %% ../03_text_editor.ipynb
7 | from pathlib import Path
8 |
9 | # %% ../03_text_editor.ipynb
10 | def view(path:str, view_range:tuple[int,int]=None, nums:bool=False):
11 | 'View directory or file contents with optional line range and numbers'
12 | try:
13 | p = Path(path).expanduser().resolve()
14 | if not p.exists(): return f'Error: File not found: {p}'
15 | if p.is_dir():
16 | res = [str(f) for f in p.glob('**/*')
17 | if not any(part.startswith('.') for part in f.relative_to(p).parts)]
18 | return f'Directory contents of {p}:\n' + '\n'.join(res)
19 |
20 | lines = p.read_text().splitlines()
21 | s,e = 1,len(lines)
22 | if view_range:
23 | s,e = view_range
24 | if not (1 <= s <= len(lines)): return f'Error: Invalid start line {s}'
25 | if e != -1 and not (s <= e <= len(lines)): return f'Error: Invalid end line {e}'
26 | lines = lines[s-1:None if e==-1 else e]
27 |
28 | return '\n'.join([f'{i+s-1:6d} │ {l}' for i,l in enumerate(lines,1)] if nums else lines)
29 | except Exception as e: return f'Error viewing file: {str(e)}'
30 |
31 | # %% ../03_text_editor.ipynb
32 | def create(path: str, file_text: str, overwrite:bool=False) -> str:
33 | 'Creates a new file with the given content at the specified path'
34 | try:
35 | p = Path(path)
36 | if p.exists():
37 | if not overwrite: return f'Error: File already exists: {p}'
38 | p.parent.mkdir(parents=True, exist_ok=True)
39 | p.write_text(file_text)
40 | return f'Created file {p} containing:\n{file_text}'
41 | except Exception as e: return f'Error creating file: {str(e)}'
42 |
43 | # %% ../03_text_editor.ipynb
44 | def insert(path: str, insert_line: int, new_str: str) -> str:
45 | 'Insert new_str at specified line number'
46 | try:
47 | p = Path(path)
48 | if not p.exists(): return f'Error: File not found: {p}'
49 |
50 | content = p.read_text().splitlines()
51 | if not (0 <= insert_line <= len(content)): return f'Error: Invalid line number {insert_line}'
52 |
53 | content.insert(insert_line, new_str)
54 | new_content = '\n'.join(content)
55 | p.write_text(new_content)
56 | return f'Inserted text at line {insert_line} in {p}.\nNew contents:\n{new_content}'
57 | except Exception as e: return f'Error inserting text: {str(e)}'
58 |
59 | # %% ../03_text_editor.ipynb
60 | def str_replace(path: str, old_str: str, new_str: str) -> str:
61 | 'Replace first occurrence of old_str with new_str in file'
62 | try:
63 | p = Path(path)
64 | if not p.exists(): return f'Error: File not found: {p}'
65 |
66 | content = p.read_text()
67 | count = content.count(old_str)
68 |
69 | if count == 0: return 'Error: Text not found in file'
70 | if count > 1: return f'Error: Multiple matches found ({count})'
71 |
72 | new_content = content.replace(old_str, new_str, 1)
73 | p.write_text(new_content)
74 | return f'Replaced text in {p}.\nNew contents:\n{new_content}'
75 | except Exception as e: return f'Error replacing text: {str(e)}'
76 |
77 | # %% ../03_text_editor.ipynb
78 | def str_replace_editor(**kwargs: dict[str, str]) -> str:
79 | 'Dispatcher for Anthropic Text Editor tool commands: view, str_replace, create, insert, undo_edit.'
80 | cmd = kwargs.pop('command', '')
81 | if cmd == 'view': return view(**kwargs)
82 | elif cmd == 'str_replace': return str_replace(**kwargs)
83 | elif cmd == 'create': return create(**kwargs)
84 | elif cmd == 'insert': return insert(**kwargs)
85 | elif cmd == 'undo_edit': return 'Undo edit is not supported yet.'
86 | else: return f'Unknown command: {cmd}'
87 |
88 | def str_replace_based_edit_tool(**kwargs: dict[str, str]) -> str:
89 | 'Dispatcher for Anthropic Text Editor tool commands: view, str_replace, create, insert.'
90 | return str_replace_editor(**kwargs)
91 |
92 | # %% ../03_text_editor.ipynb
93 | text_editor_conf = {
94 | 'sonnet': {"type": "text_editor_20250728", "name": "str_replace_based_edit_tool"},
95 | 'sonnet-4': {"type": "text_editor_20250429", "name": "str_replace_based_edit_tool"},
96 | 'sonnet-3-7': {'type': 'text_editor_20250124', 'name': 'str_replace_editor'},
97 | 'sonnet-3-5': {'type': 'text_editor_20241022', 'name': 'str_replace_editor'}
98 | }
99 |
--------------------------------------------------------------------------------
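
The editor functions above are plain Python with string results, so they can be exercised directly. The sketch below inlines a minimal copy of the `str_replace` logic (mirroring, not importing, the implementation above) and runs it against a temporary file:

```python
import os, tempfile
from pathlib import Path

def str_replace(path, old_str, new_str):
    "Replace exactly one occurrence of old_str, erroring on zero or multiple matches."
    p = Path(path)
    if not p.exists(): return f'Error: File not found: {p}'
    content = p.read_text()
    count = content.count(old_str)
    if count == 0: return 'Error: Text not found in file'
    if count > 1: return f'Error: Multiple matches found ({count})'
    p.write_text(content.replace(old_str, new_str, 1))
    return f'Replaced text in {p}'

fd, path = tempfile.mkstemp(text=True)
os.close(fd)
Path(path).write_text('hello world\nhello again\n')
print(str_replace(path, 'world', 'claude'))  # one match: succeeds
print(str_replace(path, 'hello', 'hi'))      # two matches: returns an error string
os.remove(path)
```

Returning error strings rather than raising suits tool use: the message goes back to the model as a `tool_result`, letting it retry with a more specific `old_str`.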
/claudette/asink.py:
--------------------------------------------------------------------------------
1 | # AUTOGENERATED! DO NOT EDIT! File to edit: ../02_async.ipynb.
2 |
3 | # %% auto 0
4 | __all__ = ['AsyncClient', 'mk_funcres_async', 'mk_toolres_async', 'AsyncChat']
5 |
6 | # %% ../02_async.ipynb
7 | import inspect, typing, mimetypes, base64, json
8 | from collections import abc
9 |
10 | from anthropic import AsyncAnthropic
11 | from anthropic.types import ToolUseBlock
12 | from toolslm.funccall import get_schema, mk_ns, call_func, call_func_async
13 | from fastcore.meta import delegates
14 | from fastcore.utils import *
15 | from .core import *
16 | from msglm import mk_msg_anthropic as mk_msg, mk_msgs_anthropic as mk_msgs
17 |
18 | # %% ../02_async.ipynb
19 | class AsyncClient(Client):
20 | def __init__(self, model, cli=None, log=False, cache=False):
21 | "Async Anthropic messages client."
22 | super().__init__(model,cli,log,cache)
23 | if not cli: self.c = AsyncAnthropic(default_headers={'anthropic-beta': 'prompt-caching-2024-07-31'})
24 |
25 | # %% ../02_async.ipynb
26 | @asave_iter
27 | async def _astream(o, cm, prefill, cb):
28 | async with cm as s:
29 | yield prefill
30 | async for x in s.text_stream: yield x
31 | o.value = await s.get_final_message()
32 | await cb(o.value)
33 |
34 | # %% ../02_async.ipynb
35 | @patch
36 | @delegates(Client)
37 | async def __call__(self:AsyncClient,
38 | msgs:list, # List of messages in the dialog
39 | sp='', # The system prompt
40 | temp=0, # Temperature
41 | maxtok=4096, # Maximum tokens
42 | maxthinktok=0, # Maximum thinking tokens
43 | prefill='', # Optional prefill to pass to Claude as start of its response
44 | stream:bool=False, # Stream response?
45 | stop=None, # Stop sequence
46 | tools:Optional[list]=None, # List of tools to make available to Claude
47 | tool_choice:Optional[dict]=None, # Optionally force use of some tool
48 | cb=None, # Callback to pass result to when complete
49 | **kwargs):
50 | "Make an async call to Claude."
51 | msgs,kwargs = self._precall(msgs, prefill, sp, temp, maxtok, maxthinktok, stream,
52 | stop, tools, tool_choice, kwargs)
53 | m = self.c.messages
54 | f = m.stream if stream else m.create
55 | res = f(model=self.model, messages=msgs, **kwargs)
56 | async def _cb(v):
57 | self._log(v, prefill=prefill, msgs=msgs, **kwargs)
58 | if cb: await cb(v)
59 | if stream: return _astream(res, prefill, _cb)
60 | res = await res
61 | try: return res
62 | finally: await _cb(res)
63 |
64 | # %% ../02_async.ipynb
65 | async def mk_funcres_async(fc, ns):
66 | "Given tool use block `fc`, get tool result, and create a tool_result response."
67 | try: res = await call_func_async(fc.name, fc.input, ns=ns, raise_on_err=False)
68 | except KeyError: res = f"Error - tool not defined in the tool_schemas: {fc.name}"
69 | return dict(type="tool_result", tool_use_id=fc.id, content=str(res))
70 |
71 | # %% ../02_async.ipynb
72 | async def mk_toolres_async(
73 | r:abc.Mapping, # Tool use request response from Claude
74 | ns:Optional[abc.Mapping]=None # Namespace to search for tools
75 | ):
76 | "Create a `tool_result` message from response `r`."
77 | cts = getattr(r, 'content', [])
78 | res = [mk_msg(r.model_dump(), role='assistant')]
79 | if ns is None: ns=globals()
80 | tcs = [await mk_funcres_async(o, ns) for o in cts if isinstance(o,ToolUseBlock)]
81 | if tcs: res.append(mk_msg(tcs))
82 | return res
83 |
84 | # %% ../02_async.ipynb
85 | @patch
86 | @delegates(Client.__call__)
87 | async def structured(self:AsyncClient,
88 | msgs:list, # List of messages in the dialog
89 | tools:Optional[list]=None, # List of tools to make available to Claude
90 | ns:Optional[abc.Mapping]=None, # Namespace to search for tools
91 | **kwargs):
92 | "Return the value of all tool calls (generally used for structured outputs)"
93 | tools = listify(tools)
94 | if ns is None: ns=mk_ns(*tools)
95 | res = await self(msgs, tools=tools, tool_choice=tools,**kwargs)
96 | cts = getattr(res, 'content', [])
97 | tcs = [await call_func_async(o.name, o.input, ns=ns) for o in cts if isinstance(o,ToolUseBlock)]
98 | return tcs
99 |
100 | # %% ../02_async.ipynb
101 | @delegates()
102 | class AsyncChat(Chat):
103 | def __init__(self,
104 | model:Optional[str]=None, # Model to use (leave empty if passing `cli`)
105 | cli:Optional[Client]=None, # Client to use (leave empty if passing `model`)
106 | **kwargs):
107 | "Anthropic async chat client."
108 | super().__init__(model, cli, **kwargs)
109 | if not cli: self.c = AsyncClient(model)
110 |
111 | # %% ../02_async.ipynb
112 | @patch
113 | async def _append_pr(self:AsyncChat, pr=None):
114 | prev_role = nested_idx(self.h, -1, 'role') if self.h else 'assistant' # First message should be 'user' if no history
115 | if pr and prev_role == 'user': await self()
116 | self._post_pr(pr, prev_role)
117 |
118 | # %% ../02_async.ipynb
119 | @patch
120 | async def __call__(self:AsyncChat,
121 | pr=None, # Prompt / message
122 | temp=None, # Temperature
123 | maxtok=4096, # Maximum tokens
124 | maxthinktok=0, # Maximum thinking tokens
125 | stream=False, # Stream response?
126 | prefill='', # Optional prefill to pass to Claude as start of its response
127 | tool_choice:Optional[Union[str,bool,dict]]=None, # Optionally force use of some tool
128 | **kw):
129 | if temp is None: temp=self.temp
130 | await self._append_pr(pr)
131 | async def _cb(v):
132 | self.last = await mk_toolres_async(v, ns=limit_ns(self.ns, self.tools, tool_choice))
133 | self.h += self.last
134 | return await self.c(self.h, stream=stream, prefill=prefill, sp=self.sp, temp=temp, maxtok=maxtok, maxthinktok=maxthinktok, tools=self.tools, tool_choice=tool_choice, cb=_cb, **kw)
135 |
--------------------------------------------------------------------------------
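
The `_astream` helper above follows a simple pattern: yield the prefill, yield each streamed chunk, then stash the final message on the wrapper object and invoke the completion callback. A self-contained sketch of that shape, using a plain list of chunks in place of the Anthropic stream (the `Box` holder stands in for the `save_iter` wrapper and is an assumption, not claudette's API):

```python
import asyncio

class Box:
    "Stand-in for the object save_iter passes in; holds the final value."
    value = None

async def astream_sketch(o, chunks, prefill, cb):
    "Yield prefill, then each chunk; finally record the full text and fire the callback."
    yield prefill
    for x in chunks:
        yield x
    o.value = ''.join([prefill] + chunks)
    await cb(o.value)

async def main():
    o, got = Box(), []
    async def cb(v): got.append(v)
    out = [x async for x in astream_sketch(o, ['Hel', 'lo'], '> ', cb)]
    print(out)      # ['> ', 'Hel', 'lo']
    print(o.value)  # '> Hello'

asyncio.run(main())
```

The callback-on-completion design is what lets `AsyncChat.__call__` append tool results to the history only after the stream has fully drained.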
/CHANGELOG.md:
--------------------------------------------------------------------------------
1 | # Release notes
2 |
3 |
4 |
5 | ## 0.3.11
6 |
7 | ### New Features
8 |
9 | - Validate tool names returned by model against allowed tools ([#103](https://github.com/AnswerDotAI/claudette/pull/103)), thanks to [@PiotrCzapla](https://github.com/PiotrCzapla)
10 |
11 |
12 | ## 0.3.10
13 |
14 | ### Bugs Squashed
15 |
16 | - `find_blocks` does not handle dicts ([#96](https://github.com/AnswerDotAI/claudette/issues/96))
17 |
18 |
19 | ## 0.3.7
20 |
21 | ### New Features
22 |
23 | - Add Haiku and Sonnet 4.5 ([#95](https://github.com/AnswerDotAI/claudette/issues/95))
24 |
25 |
26 | ## 0.3.6
27 |
28 | ### New Features
29 |
30 | - Support opus 4.1 ([#93](https://github.com/AnswerDotAI/claudette/issues/93))
31 |
32 |
33 | ## 0.3.5
34 |
35 | ### New Features
36 |
37 | - Support async functions in async toolloop ([#92](https://github.com/AnswerDotAI/claudette/issues/92))
38 |
39 |
40 | ## 0.3.4
41 |
42 | ### Bugs Squashed
43 |
44 | - Escape Quotes in markdown footnotes ([#90](https://github.com/AnswerDotAI/claudette/pull/90)), thanks to [@ncoop57](https://github.com/ncoop57)
45 |
46 |
47 | ## 0.3.2
48 |
49 | ### New Features
50 |
 51 | - Support for tool functions returning `ToolResult` image values ([#88](https://github.com/AnswerDotAI/claudette/pull/88)), thanks to [@austinvhuang](https://github.com/austinvhuang)
52 |
53 |
54 | ## 0.3.1
55 |
56 | ### New Features
57 |
58 | - Use msglm 0.0.9 ([#87](https://github.com/AnswerDotAI/claudette/issues/87))
59 |
60 |
61 | ## 0.3.0
62 |
63 | ### Breaking Changes
64 |
65 | - Remove `obj` support ([#85](https://github.com/AnswerDotAI/claudette/issues/85))
66 |
67 |
68 | ## 0.2.2
69 |
70 | ### New Features
71 |
72 | - Make full messages available after streaming ([#84](https://github.com/AnswerDotAI/claudette/issues/84))
73 |
74 |
75 | ## 0.2.0
76 |
77 | ### Breaking Changes
78 |
79 | - `toolloop` now returns an iterator of every message, including each tool request and result
80 |
81 | ### New Features
82 |
83 | - Add tool call results to toolloop result ([#83](https://github.com/AnswerDotAI/claudette/issues/83))
84 | - `show_thk` param for `contents` ([#82](https://github.com/AnswerDotAI/claudette/issues/82))
85 |
86 |
87 | ## 0.1.11
88 |
89 | ### New Features
90 |
91 | - Use updated text editor tool ([#80](https://github.com/AnswerDotAI/claudette/issues/80))
92 | - Skip hidden directories in `view()`
93 |
94 |
95 | ## 0.1.10
96 |
97 | ### New Features
98 |
99 | - Support Claude 4 ([#79](https://github.com/AnswerDotAI/claudette/issues/79))
100 |
101 |
102 | ## 0.1.9
103 |
104 | ### New Features
105 |
106 | - Tool loop now continues when function calls return an error ([#78](https://github.com/AnswerDotAI/claudette/pull/78)), thanks to [@erikgaas](https://github.com/erikgaas)
107 | - Add text editor tool implementation ([#71](https://github.com/AnswerDotAI/claudette/pull/71)), thanks to [@ncoop57](https://github.com/ncoop57)
108 |
109 |
110 | ## 0.1.8
111 |
112 | ### New Features
113 |
114 | - Add exhausted tool loop warning ([#75](https://github.com/AnswerDotAI/claudette/issues/75))
115 | - Text editor tool implementation ([#71](https://github.com/AnswerDotAI/claudette/pull/71)), thanks to [@ncoop57](https://github.com/ncoop57)
116 | - Async tool loop ([#70](https://github.com/AnswerDotAI/claudette/issues/70))
117 | - Pre-serialized funcs in tool calling ([#67](https://github.com/AnswerDotAI/claudette/pull/67)), thanks to [@erikgaas](https://github.com/erikgaas)
118 | - Extended Thinking ([#66](https://github.com/AnswerDotAI/claudette/issues/66))
119 |
120 |
121 | ## 0.1.7
122 |
123 | ### Bugs Squashed
124 |
125 | - Bump required `msglm` version
126 |
127 |
128 | ## 0.1.6
129 |
130 | ### Bugs Squashed
131 |
132 | - Bump required `anthropic` version
133 |
134 |
135 | ## 0.1.5
136 |
137 | ### New Features
138 |
139 | - add extended thinking ([#65](https://github.com/AnswerDotAI/claudette/pull/65)), thanks to [@comhar](https://github.com/comhar)
140 | - Make Sonnet 3.7 the default sonnet model ([#63](https://github.com/AnswerDotAI/claudette/issues/63))
141 | - Add model capabilities attributes `has_streaming_models`, `has_temperature_models`, and `has_system_prompt_models` ([#57](https://github.com/AnswerDotAI/claudette/pull/57)), thanks to [@austinvhuang](https://github.com/austinvhuang)
142 |
143 | ### Bugs Squashed
144 |
145 | - fix bedrock usage reporting ([#60](https://github.com/AnswerDotAI/claudette/pull/60)), thanks to [@hamelsmu](https://github.com/hamelsmu)
146 |
147 |
148 | ## 0.1.3
149 |
150 | ### New Features
151 |
152 | - add caching to async client too ([#55](https://github.com/AnswerDotAI/claudette/pull/55)), thanks to [@bclavie](https://github.com/bclavie)
153 |
154 |
155 | ## 0.1.2
156 |
157 | ### New Features
158 |
159 | - Add continuously-updated cache support to Chat and Client ([#54](https://github.com/AnswerDotAI/claudette/pull/54)), thanks to [@bclavie](https://github.com/bclavie)
160 | - Enhance AsyncChat with improved tool support and message handling ([#52](https://github.com/AnswerDotAI/claudette/pull/52)), thanks to [@ncoop57](https://github.com/ncoop57)
161 | - Add support for user defined types in tool calling functions ([#51](https://github.com/AnswerDotAI/claudette/pull/51)), thanks to [@austinvhuang](https://github.com/austinvhuang)
162 | - Add detailed cost breakdown and improve content handling ([#49](https://github.com/AnswerDotAI/claudette/pull/49)), thanks to [@ncoop57](https://github.com/ncoop57)
163 |
164 |
165 | ## 0.1.1
166 |
167 | ### New Features
168 |
169 | - add structured to async ([#48](https://github.com/AnswerDotAI/claudette/pull/48)), thanks to [@hamelsmu](https://github.com/hamelsmu)
170 | - add `msglm` ([#46](https://github.com/AnswerDotAI/claudette/pull/46)), thanks to [@comhar](https://github.com/comhar)
171 | - Add support for new Claude 3.5 Haiku model ([#44](https://github.com/AnswerDotAI/claudette/pull/44)), thanks to [@ncoop57](https://github.com/ncoop57)
172 | - trace history instead of chat response in toolloop ([#39](https://github.com/AnswerDotAI/claudette/pull/39)), thanks to [@comhar](https://github.com/comhar)
173 |
174 |
175 | ## 0.1.0
176 |
177 | ### Breaking Changes
178 |
179 | - `tool_choice` is no longer a `Chat` instance variable; instead it is a parameter to `Chat.__call__`
180 |
181 | ### New Features
182 |
183 | - Add `temp` param to `Chat` ([#38](https://github.com/AnswerDotAI/claudette/issues/38))
184 |
185 | ### Bugs Squashed
186 |
187 | - `pr` included but not used ([#37](https://github.com/AnswerDotAI/claudette/issues/37))
188 | - fix tool use bug ([#35](https://github.com/AnswerDotAI/claudette/pull/35)), thanks to [@comhar](https://github.com/comhar)
189 |
190 |
191 | ## 0.0.10
192 |
193 | ### New Features
194 |
195 | - Add `Client.structured` ([#32](https://github.com/AnswerDotAI/claudette/issues/32))
196 | - Use `dict2obj` ([#30](https://github.com/AnswerDotAI/claudette/issues/30))
197 | - Store tool call result without stringifying ([#29](https://github.com/AnswerDotAI/claudette/issues/29))
198 |
199 |
200 | ## 0.0.9
201 |
202 | ### New Features
203 |
204 | - Async support ([#21](https://github.com/AnswerDotAI/claudette/issues/21))
205 |
206 |
207 | ## 0.0.7
208 |
209 | ### New Features
210 |
211 | - Prompt caching ([#20](https://github.com/AnswerDotAI/claudette/issues/20))
212 | - add markdown to doc output ([#19](https://github.com/AnswerDotAI/claudette/issues/19))
213 | - Support vscode details tags ([#18](https://github.com/AnswerDotAI/claudette/issues/18))
214 | - Add a `cont_pr` param to Chat as a "default" prompt ([#15](https://github.com/AnswerDotAI/claudette/pull/15)), thanks to [@tom-pollak](https://github.com/tom-pollak)
215 |
216 | ### Bugs Squashed
217 |
218 | - Explicit `tool_choice` causes chat() to call tool twice. ([#11](https://github.com/AnswerDotAI/claudette/issues/11))
219 |
220 |
221 | ## 0.0.6
222 |
223 | ### New Features
224 |
225 | - Default chat prompt & function calling refactor ([#15](https://github.com/AnswerDotAI/claudette/pull/15)), thanks to [@tom-pollak](https://github.com/tom-pollak)
226 |
227 |
228 | ## 0.0.5
229 |
230 | ### New Features
231 |
232 | - Better support for stop sequences ([#12](https://github.com/AnswerDotAI/claudette/pull/12)), thanks to [@xl0](https://github.com/xl0)
233 |
234 |
235 | ## 0.0.3
236 |
237 | ### New Features
238 |
239 | - Amazon Bedrock and Google Vertex support ([#7](https://github.com/AnswerDotAI/claudette/issues/7))
240 |
241 | ### Bugs Squashed
242 |
243 | - Update model paths for non-beta tool use ([#2](https://github.com/AnswerDotAI/claudette/pull/2)), thanks to [@sarahpannn](https://github.com/sarahpannn)
244 |
245 |
246 | ## 0.0.1
247 |
248 | - Initial release
249 |
250 |
--------------------------------------------------------------------------------
/claudette/_modidx.py:
--------------------------------------------------------------------------------
1 | # Autogenerated by nbdev
2 |
3 | d = { 'settings': { 'branch': 'main',
4 | 'doc_baseurl': '/',
5 | 'doc_host': 'https://claudette.answer.ai',
6 | 'git_url': 'https://github.com/AnswerDotAI/claudette',
7 | 'lib_path': 'claudette'},
8 | 'syms': { 'claudette.asink': { 'claudette.asink.AsyncChat': ('async.html#asyncchat', 'claudette/asink.py'),
9 | 'claudette.asink.AsyncChat.__call__': ('async.html#asyncchat.__call__', 'claudette/asink.py'),
10 | 'claudette.asink.AsyncChat.__init__': ('async.html#asyncchat.__init__', 'claudette/asink.py'),
11 | 'claudette.asink.AsyncChat._append_pr': ('async.html#asyncchat._append_pr', 'claudette/asink.py'),
12 | 'claudette.asink.AsyncClient': ('async.html#asyncclient', 'claudette/asink.py'),
13 | 'claudette.asink.AsyncClient.__call__': ('async.html#asyncclient.__call__', 'claudette/asink.py'),
14 | 'claudette.asink.AsyncClient.__init__': ('async.html#asyncclient.__init__', 'claudette/asink.py'),
15 | 'claudette.asink.AsyncClient.structured': ('async.html#asyncclient.structured', 'claudette/asink.py'),
16 | 'claudette.asink._astream': ('async.html#_astream', 'claudette/asink.py'),
17 | 'claudette.asink.mk_funcres_async': ('async.html#mk_funcres_async', 'claudette/asink.py'),
18 | 'claudette.asink.mk_toolres_async': ('async.html#mk_toolres_async', 'claudette/asink.py')},
19 | 'claudette.core': { 'claudette.core.Chat': ('core.html#chat', 'claudette/core.py'),
20 | 'claudette.core.Chat.__call__': ('core.html#chat.__call__', 'claudette/core.py'),
21 | 'claudette.core.Chat.__init__': ('core.html#chat.__init__', 'claudette/core.py'),
22 | 'claudette.core.Chat._append_pr': ('core.html#chat._append_pr', 'claudette/core.py'),
23 | 'claudette.core.Chat._post_pr': ('core.html#chat._post_pr', 'claudette/core.py'),
24 | 'claudette.core.Chat._repr_markdown_': ('core.html#chat._repr_markdown_', 'claudette/core.py'),
25 | 'claudette.core.Chat.cost': ('core.html#chat.cost', 'claudette/core.py'),
26 | 'claudette.core.Chat.use': ('core.html#chat.use', 'claudette/core.py'),
27 | 'claudette.core.Client': ('core.html#client', 'claudette/core.py'),
28 | 'claudette.core.Client.__call__': ('core.html#client.__call__', 'claudette/core.py'),
29 | 'claudette.core.Client.__init__': ('core.html#client.__init__', 'claudette/core.py'),
30 | 'claudette.core.Client._log': ('core.html#client._log', 'claudette/core.py'),
31 | 'claudette.core.Client._precall': ('core.html#client._precall', 'claudette/core.py'),
32 | 'claudette.core.Client._r': ('core.html#client._r', 'claudette/core.py'),
33 | 'claudette.core.Client._repr_markdown_': ('core.html#client._repr_markdown_', 'claudette/core.py'),
34 | 'claudette.core.Client.cost': ('core.html#client.cost', 'claudette/core.py'),
35 | 'claudette.core.Client.structured': ('core.html#client.structured', 'claudette/core.py'),
36 | 'claudette.core.Message._repr_markdown_': ('core.html#message._repr_markdown_', 'claudette/core.py'),
37 | 'claudette.core.ServerToolUsage.__add__': ('core.html#servertoolusage.__add__', 'claudette/core.py'),
38 | 'claudette.core.ToolResult': ('core.html#toolresult', 'claudette/core.py'),
39 | 'claudette.core.ToolResult.__init__': ('core.html#toolresult.__init__', 'claudette/core.py'),
40 | 'claudette.core.ToolResult.__str__': ('core.html#toolresult.__str__', 'claudette/core.py'),
41 | 'claudette.core.Usage.__add__': ('core.html#usage.__add__', 'claudette/core.py'),
42 | 'claudette.core.Usage.__repr__': ('core.html#usage.__repr__', 'claudette/core.py'),
43 | 'claudette.core.Usage.cost': ('core.html#usage.cost', 'claudette/core.py'),
44 | 'claudette.core.Usage.total': ('core.html#usage.total', 'claudette/core.py'),
45 | 'claudette.core._convert': ('core.html#_convert', 'claudette/core.py'),
46 | 'claudette.core._dgetattr': ('core.html#_dgetattr', 'claudette/core.py'),
47 | 'claudette.core._img_content': ('core.html#_img_content', 'claudette/core.py'),
48 | 'claudette.core._is_builtin': ('core.html#_is_builtin', 'claudette/core.py'),
49 | 'claudette.core._stream': ('core.html#_stream', 'claudette/core.py'),
50 | 'claudette.core._type': ('core.html#_type', 'claudette/core.py'),
51 | 'claudette.core.allowed_tools': ('core.html#allowed_tools', 'claudette/core.py'),
52 | 'claudette.core.blks2cited_txt': ('core.html#blks2cited_txt', 'claudette/core.py'),
53 | 'claudette.core.can_set_system_prompt': ('core.html#can_set_system_prompt', 'claudette/core.py'),
54 | 'claudette.core.can_set_temperature': ('core.html#can_set_temperature', 'claudette/core.py'),
55 | 'claudette.core.can_stream': ('core.html#can_stream', 'claudette/core.py'),
56 | 'claudette.core.can_use_extended_thinking': ('core.html#can_use_extended_thinking', 'claudette/core.py'),
57 | 'claudette.core.contents': ('core.html#contents', 'claudette/core.py'),
58 | 'claudette.core.find_block': ('core.html#find_block', 'claudette/core.py'),
59 | 'claudette.core.find_blocks': ('core.html#find_blocks', 'claudette/core.py'),
60 | 'claudette.core.get_costs': ('core.html#get_costs', 'claudette/core.py'),
61 | 'claudette.core.get_pricing': ('core.html#get_pricing', 'claudette/core.py'),
62 | 'claudette.core.get_types': ('core.html#get_types', 'claudette/core.py'),
63 | 'claudette.core.limit_ns': ('core.html#limit_ns', 'claudette/core.py'),
64 | 'claudette.core.mk_funcres': ('core.html#mk_funcres', 'claudette/core.py'),
65 | 'claudette.core.mk_tool_choice': ('core.html#mk_tool_choice', 'claudette/core.py'),
66 | 'claudette.core.mk_toolres': ('core.html#mk_toolres', 'claudette/core.py'),
67 | 'claudette.core.search_conf': ('core.html#search_conf', 'claudette/core.py'),
68 | 'claudette.core.server_tool_usage': ('core.html#server_tool_usage', 'claudette/core.py'),
69 | 'claudette.core.think_md': ('core.html#think_md', 'claudette/core.py'),
70 | 'claudette.core.tool': ('core.html#tool', 'claudette/core.py'),
71 | 'claudette.core.usage': ('core.html#usage', 'claudette/core.py')},
72 | 'claudette.text_editor': { 'claudette.text_editor.create': ('text_editor.html#create', 'claudette/text_editor.py'),
73 | 'claudette.text_editor.insert': ('text_editor.html#insert', 'claudette/text_editor.py'),
74 | 'claudette.text_editor.str_replace': ('text_editor.html#str_replace', 'claudette/text_editor.py'),
75 | 'claudette.text_editor.str_replace_based_edit_tool': ( 'text_editor.html#str_replace_based_edit_tool',
76 | 'claudette/text_editor.py'),
77 | 'claudette.text_editor.str_replace_editor': ( 'text_editor.html#str_replace_editor',
78 | 'claudette/text_editor.py'),
79 | 'claudette.text_editor.view': ('text_editor.html#view', 'claudette/text_editor.py')},
80 | 'claudette.toolloop': { 'claudette.toolloop.AsyncChat.toolloop': ('toolloop.html#asyncchat.toolloop', 'claudette/toolloop.py'),
81 | 'claudette.toolloop.Chat.toolloop': ('toolloop.html#chat.toolloop', 'claudette/toolloop.py')}}}
82 |
--------------------------------------------------------------------------------
/03_text_editor.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "code",
5 | "execution_count": null,
6 | "metadata": {},
7 | "outputs": [],
8 | "source": [
9 | "#| default_exp text_editor"
10 | ]
11 | },
12 | {
13 | "cell_type": "markdown",
14 | "metadata": {},
15 | "source": [
16 | "# Text Editor\n",
17 | "\n",
18 | "Implements functions for Anthropic's Text Editor Tool API, allowing Claude to view and edit files.\n",
19 | "\n",
20 | "See the [Anthropic Text Editor Tool documentation](https://docs.anthropic.com/en/docs/build-with-claude/tool-use/text-editor-tool) for details on the API commands (`view`, `create`, `insert`, `str_replace`, `undo_edit`) and usage. The `str_replace_editor` function acts as the required dispatcher."
21 | ]
22 | },
23 | {
24 | "cell_type": "code",
25 | "execution_count": null,
26 | "metadata": {},
27 | "outputs": [],
28 | "source": [
29 | "#| export\n",
30 | "from pathlib import Path"
31 | ]
32 | },
33 | {
34 | "cell_type": "code",
35 | "execution_count": null,
36 | "metadata": {},
37 | "outputs": [],
38 | "source": [
39 | "#| hide\n",
40 | "from claudette.core import *\n",
41 | "from toolslm.funccall import *"
42 | ]
43 | },
44 | {
45 | "cell_type": "code",
46 | "execution_count": null,
47 | "metadata": {},
48 | "outputs": [],
49 | "source": [
50 | "from cachy import enable_cachy"
51 | ]
52 | },
53 | {
54 | "cell_type": "code",
55 | "execution_count": null,
56 | "metadata": {},
57 | "outputs": [],
58 | "source": [
59 | "enable_cachy()"
60 | ]
61 | },
62 | {
63 | "cell_type": "code",
64 | "execution_count": null,
65 | "metadata": {},
66 | "outputs": [],
67 | "source": [
68 | "if Path('test.txt').exists(): Path('test.txt').unlink()"
69 | ]
70 | },
71 | {
72 | "cell_type": "code",
73 | "execution_count": null,
74 | "metadata": {},
75 | "outputs": [],
76 | "source": [
77 | "#| exports\n",
78 | "def view(path:str, view_range:tuple[int,int]=None, nums:bool=False):\n",
79 | " 'View directory or file contents with optional line range and numbers'\n",
80 | " try:\n",
81 | " p = Path(path).expanduser().resolve()\n",
82 | " if not p.exists(): return f'Error: File not found: {p}'\n",
83 | " if p.is_dir():\n",
84 | " res = [str(f) for f in p.glob('**/*') \n",
85 | " if not any(part.startswith('.') for part in f.relative_to(p).parts)]\n",
86 | " return f'Directory contents of {p}:\\n' + '\\n'.join(res)\n",
87 | " \n",
88 | " lines = p.read_text().splitlines()\n",
89 | " s,e = 1,len(lines)\n",
90 | " if view_range:\n",
91 | " s,e = view_range\n",
92 | " if not (1 <= s <= len(lines)): return f'Error: Invalid start line {s}'\n",
93 | " if e != -1 and not (s <= e <= len(lines)): return f'Error: Invalid end line {e}'\n",
94 | " lines = lines[s-1:None if e==-1 else e]\n",
95 | " \n",
96 | " return '\\n'.join([f'{i+s-1:6d} │ {l}' for i,l in enumerate(lines,1)] if nums else lines)\n",
97 | " except Exception as e: return f'Error viewing file: {str(e)}'"
98 | ]
99 | },
100 | {
101 | "cell_type": "code",
102 | "execution_count": null,
103 | "metadata": {},
104 | "outputs": [
105 | {
106 | "name": "stdout",
107 | "output_type": "stream",
108 | "text": [
109 | " 1 │ project:\n",
110 | " 2 │ type: website\n",
111 | " 3 │ resources: \n",
112 | " 4 │ - \"*.txt\"\n",
113 | " 5 │ preview:\n",
114 | " 6 │ port: 3000\n",
115 | " 7 │ browser: false\n",
116 | " 8 │ \n",
117 | " 9 │ format:\n",
118 | " 10 │ html:\n"
119 | ]
120 | }
121 | ],
122 | "source": [
123 | "print(view('_quarto.yml', (1,10), nums=True))"
124 | ]
125 | },
126 | {
127 | "cell_type": "code",
128 | "execution_count": null,
129 | "metadata": {},
130 | "outputs": [],
131 | "source": [
132 | "#| exports\n",
133 | "def create(path: str, file_text: str, overwrite:bool=False) -> str:\n",
134 | " 'Creates a new file with the given content at the specified path'\n",
135 | " try:\n",
136 | " p = Path(path)\n",
137 | " if p.exists():\n",
138 | " if not overwrite: return f'Error: File already exists: {p}'\n",
139 | " p.parent.mkdir(parents=True, exist_ok=True)\n",
140 | " p.write_text(file_text)\n",
141 | " return f'Created file {p} containing:\\n{file_text}'\n",
142 | " except Exception as e: return f'Error creating file: {str(e)}'"
143 | ]
144 | },
145 | {
146 | "cell_type": "code",
147 | "execution_count": null,
148 | "metadata": {},
149 | "outputs": [
150 | {
151 | "name": "stdout",
152 | "output_type": "stream",
153 | "text": [
154 | "Created file test.txt containing:\n",
155 | "Hello, world!\n",
156 | " 1 │ Hello, world!\n"
157 | ]
158 | }
159 | ],
160 | "source": [
161 | "print(create('test.txt', 'Hello, world!'))\n",
162 | "print(view('test.txt', nums=True))"
163 | ]
164 | },
165 | {
166 | "cell_type": "code",
167 | "execution_count": null,
168 | "metadata": {},
169 | "outputs": [],
170 | "source": [
171 | "#| exports\n",
172 | "def insert(path: str, insert_line: int, new_str: str) -> str:\n",
173 | " 'Insert new_str at specified line number'\n",
174 | " try:\n",
175 | " p = Path(path)\n",
176 | " if not p.exists(): return f'Error: File not found: {p}'\n",
177 | " \n",
178 | " content = p.read_text().splitlines()\n",
179 | " if not (0 <= insert_line <= len(content)): return f'Error: Invalid line number {insert_line}'\n",
180 | " \n",
181 | " content.insert(insert_line, new_str)\n",
182 | " new_content = '\\n'.join(content)\n",
183 | " p.write_text(new_content)\n",
184 | " return f'Inserted text at line {insert_line} in {p}.\\nNew contents:\\n{new_content}'\n",
185 | " except Exception as e: return f'Error inserting text: {str(e)}'"
186 | ]
187 | },
188 | {
189 | "cell_type": "code",
190 | "execution_count": null,
191 | "metadata": {},
192 | "outputs": [
193 | {
194 | "name": "stdout",
195 | "output_type": "stream",
196 | "text": [
197 | " 1 │ Let's add a new line\n",
198 | " 2 │ Hello, world!\n"
199 | ]
200 | }
201 | ],
202 | "source": [
203 | "insert('test.txt', 0, 'Let\\'s add a new line')\n",
204 | "print(view('test.txt', nums=True))"
205 | ]
206 | },
207 | {
208 | "cell_type": "code",
209 | "execution_count": null,
210 | "metadata": {},
211 | "outputs": [],
212 | "source": [
213 | "#| exports\n",
214 | "def str_replace(path: str, old_str: str, new_str: str) -> str:\n",
215 | " 'Replace first occurrence of old_str with new_str in file'\n",
216 | " try:\n",
217 | " p = Path(path)\n",
218 | " if not p.exists(): return f'Error: File not found: {p}'\n",
219 | " \n",
220 | " content = p.read_text()\n",
221 | " count = content.count(old_str)\n",
222 | " \n",
223 | " if count == 0: return 'Error: Text not found in file'\n",
224 | " if count > 1: return f'Error: Multiple matches found ({count})'\n",
225 | " \n",
226 | " new_content = content.replace(old_str, new_str, 1)\n",
227 | " p.write_text(new_content)\n",
228 | " return f'Replaced text in {p}.\\nNew contents:\\n{new_content}'\n",
229 | " except Exception as e: return f'Error replacing text: {str(e)}'"
230 | ]
231 | },
232 | {
233 | "cell_type": "code",
234 | "execution_count": null,
235 | "metadata": {},
236 | "outputs": [
237 | {
238 | "name": "stdout",
239 | "output_type": "stream",
240 | "text": [
241 | " 1 │ Let's add a \n",
242 | " 2 │ Hello, world!\n"
243 | ]
244 | }
245 | ],
246 | "source": [
247 | "str_replace('test.txt', 'new line', '')\n",
248 | "print(view('test.txt', nums=True))"
249 | ]
250 | },
251 | {
252 | "cell_type": "code",
253 | "execution_count": null,
254 | "metadata": {},
255 | "outputs": [],
256 | "source": [
257 | "#| exports\n",
258 | "def str_replace_editor(**kwargs: dict[str, str]) -> str:\n",
259 | " 'Dispatcher for Anthropic Text Editor tool commands: view, str_replace, create, insert, undo_edit.'\n",
260 | " cmd = kwargs.pop('command', '')\n",
261 | " if cmd == 'view': return view(**kwargs)\n",
262 | " elif cmd == 'str_replace': return str_replace(**kwargs) \n",
263 | " elif cmd == 'create': return create(**kwargs)\n",
264 | " elif cmd == 'insert': return insert(**kwargs)\n",
265 | " elif cmd == 'undo_edit': return 'Undo edit is not supported yet.'\n",
266 | " else: return f'Unknown command: {cmd}'\n",
267 | "\n",
268 | "def str_replace_based_edit_tool(**kwargs: dict[str, str]) -> str:\n",
269 | " 'Dispatcher for Anthropic Text Editor tool commands: view, str_replace, create, insert.'\n",
270 | " return str_replace_editor(**kwargs)"
271 | ]
272 | },
273 | {
274 | "cell_type": "code",
275 | "execution_count": null,
276 | "metadata": {},
277 | "outputs": [],
278 | "source": [
279 | "#| export\n",
280 | "text_editor_conf = {\n",
281 | " 'sonnet': {\"type\": \"text_editor_20250728\", \"name\": \"str_replace_based_edit_tool\"},\n",
282 | " 'sonnet-4': {\"type\": \"text_editor_20250429\", \"name\": \"str_replace_based_edit_tool\"},\n",
283 | " 'sonnet-3-7': {'type': 'text_editor_20250124', 'name': 'str_replace_editor'},\n",
284 | " 'sonnet-3-5': {'type': 'text_editor_20241022', 'name': 'str_replace_editor'}\n",
285 | "}"
286 | ]
287 | },
288 | {
289 | "cell_type": "code",
290 | "execution_count": null,
291 | "metadata": {},
292 | "outputs": [],
293 | "source": [
294 | "model = models[1]"
295 | ]
296 | },
297 | {
298 | "cell_type": "code",
299 | "execution_count": null,
300 | "metadata": {},
301 | "outputs": [],
302 | "source": [
303 | "c = Chat(model, tools=[text_editor_conf['sonnet']], ns=mk_ns(str_replace_based_edit_tool))\n",
304 | "# c.toolloop('Please explain what my _quarto.yml does. Use your tools')"
305 | ]
306 | },
307 | {
308 | "cell_type": "code",
309 | "execution_count": null,
310 | "metadata": {},
311 | "outputs": [],
312 | "source": []
313 | }
314 | ],
315 | "metadata": {
316 | "kernelspec": {
317 | "display_name": "python3",
318 | "language": "python",
319 | "name": "python3"
320 | }
321 | },
322 | "nbformat": 4,
323 | "nbformat_minor": 2
324 | }
325 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | Apache License
2 | Version 2.0, January 2004
3 | http://www.apache.org/licenses/
4 |
5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
6 |
7 | 1. Definitions.
8 |
9 | "License" shall mean the terms and conditions for use, reproduction,
10 | and distribution as defined by Sections 1 through 9 of this document.
11 |
12 | "Licensor" shall mean the copyright owner or entity authorized by
13 | the copyright owner that is granting the License.
14 |
15 | "Legal Entity" shall mean the union of the acting entity and all
16 | other entities that control, are controlled by, or are under common
17 | control with that entity. For the purposes of this definition,
18 | "control" means (i) the power, direct or indirect, to cause the
19 | direction or management of such entity, whether by contract or
20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the
21 | outstanding shares, or (iii) beneficial ownership of such entity.
22 |
23 | "You" (or "Your") shall mean an individual or Legal Entity
24 | exercising permissions granted by this License.
25 |
26 | "Source" form shall mean the preferred form for making modifications,
27 | including but not limited to software source code, documentation
28 | source, and configuration files.
29 |
30 | "Object" form shall mean any form resulting from mechanical
31 | transformation or translation of a Source form, including but
32 | not limited to compiled object code, generated documentation,
33 | and conversions to other media types.
34 |
35 | "Work" shall mean the work of authorship, whether in Source or
36 | Object form, made available under the License, as indicated by a
37 | copyright notice that is included in or attached to the work
38 | (an example is provided in the Appendix below).
39 |
40 | "Derivative Works" shall mean any work, whether in Source or Object
41 | form, that is based on (or derived from) the Work and for which the
42 | editorial revisions, annotations, elaborations, or other modifications
43 | represent, as a whole, an original work of authorship. For the purposes
44 | of this License, Derivative Works shall not include works that remain
45 | separable from, or merely link (or bind by name) to the interfaces of,
46 | the Work and Derivative Works thereof.
47 |
48 | "Contribution" shall mean any work of authorship, including
49 | the original version of the Work and any modifications or additions
50 | to that Work or Derivative Works thereof, that is intentionally
51 | submitted to Licensor for inclusion in the Work by the copyright owner
52 | or by an individual or Legal Entity authorized to submit on behalf of
53 | the copyright owner. For the purposes of this definition, "submitted"
54 | means any form of electronic, verbal, or written communication sent
55 | to the Licensor or its representatives, including but not limited to
56 | communication on electronic mailing lists, source code control systems,
57 | and issue tracking systems that are managed by, or on behalf of, the
58 | Licensor for the purpose of discussing and improving the Work, but
59 | excluding communication that is conspicuously marked or otherwise
60 | designated in writing by the copyright owner as "Not a Contribution."
61 |
62 | "Contributor" shall mean Licensor and any individual or Legal Entity
63 | on behalf of whom a Contribution has been received by Licensor and
64 | subsequently incorporated within the Work.
65 |
66 | 2. Grant of Copyright License. Subject to the terms and conditions of
67 | this License, each Contributor hereby grants to You a perpetual,
68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
69 | copyright license to reproduce, prepare Derivative Works of,
70 | publicly display, publicly perform, sublicense, and distribute the
71 | Work and such Derivative Works in Source or Object form.
72 |
73 | 3. Grant of Patent License. Subject to the terms and conditions of
74 | this License, each Contributor hereby grants to You a perpetual,
75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
76 | (except as stated in this section) patent license to make, have made,
77 | use, offer to sell, sell, import, and otherwise transfer the Work,
78 | where such license applies only to those patent claims licensable
79 | by such Contributor that are necessarily infringed by their
80 | Contribution(s) alone or by combination of their Contribution(s)
81 | with the Work to which such Contribution(s) was submitted. If You
82 | institute patent litigation against any entity (including a
83 | cross-claim or counterclaim in a lawsuit) alleging that the Work
84 | or a Contribution incorporated within the Work constitutes direct
85 | or contributory patent infringement, then any patent licenses
86 | granted to You under this License for that Work shall terminate
87 | as of the date such litigation is filed.
88 |
89 | 4. Redistribution. You may reproduce and distribute copies of the
90 | Work or Derivative Works thereof in any medium, with or without
91 | modifications, and in Source or Object form, provided that You
92 | meet the following conditions:
93 |
94 | (a) You must give any other recipients of the Work or
95 | Derivative Works a copy of this License; and
96 |
97 | (b) You must cause any modified files to carry prominent notices
98 | stating that You changed the files; and
99 |
100 | (c) You must retain, in the Source form of any Derivative Works
101 | that You distribute, all copyright, patent, trademark, and
102 | attribution notices from the Source form of the Work,
103 | excluding those notices that do not pertain to any part of
104 | the Derivative Works; and
105 |
106 | (d) If the Work includes a "NOTICE" text file as part of its
107 | distribution, then any Derivative Works that You distribute must
108 | include a readable copy of the attribution notices contained
109 | within such NOTICE file, excluding those notices that do not
110 | pertain to any part of the Derivative Works, in at least one
111 | of the following places: within a NOTICE text file distributed
112 | as part of the Derivative Works; within the Source form or
113 | documentation, if provided along with the Derivative Works; or,
114 | within a display generated by the Derivative Works, if and
115 | wherever such third-party notices normally appear. The contents
116 | of the NOTICE file are for informational purposes only and
117 | do not modify the License. You may add Your own attribution
118 | notices within Derivative Works that You distribute, alongside
119 | or as an addendum to the NOTICE text from the Work, provided
120 | that such additional attribution notices cannot be construed
121 | as modifying the License.
122 |
123 | You may add Your own copyright statement to Your modifications and
124 | may provide additional or different license terms and conditions
125 | for use, reproduction, or distribution of Your modifications, or
126 | for any such Derivative Works as a whole, provided Your use,
127 | reproduction, and distribution of the Work otherwise complies with
128 | the conditions stated in this License.
129 |
130 | 5. Submission of Contributions. Unless You explicitly state otherwise,
131 | any Contribution intentionally submitted for inclusion in the Work
132 | by You to the Licensor shall be under the terms and conditions of
133 | this License, without any additional terms or conditions.
134 | Notwithstanding the above, nothing herein shall supersede or modify
135 | the terms of any separate license agreement you may have executed
136 | with Licensor regarding such Contributions.
137 |
138 | 6. Trademarks. This License does not grant permission to use the trade
139 | names, trademarks, service marks, or product names of the Licensor,
140 | except as required for reasonable and customary use in describing the
141 | origin of the Work and reproducing the content of the NOTICE file.
142 |
143 | 7. Disclaimer of Warranty. Unless required by applicable law or
144 | agreed to in writing, Licensor provides the Work (and each
145 | Contributor provides its Contributions) on an "AS IS" BASIS,
146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
147 | implied, including, without limitation, any warranties or conditions
148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
149 | PARTICULAR PURPOSE. You are solely responsible for determining the
150 | appropriateness of using or redistributing the Work and assume any
151 | risks associated with Your exercise of permissions under this License.
152 |
153 | 8. Limitation of Liability. In no event and under no legal theory,
154 | whether in tort (including negligence), contract, or otherwise,
155 | unless required by applicable law (such as deliberate and grossly
156 | negligent acts) or agreed to in writing, shall any Contributor be
157 | liable to You for damages, including any direct, indirect, special,
158 | incidental, or consequential damages of any character arising as a
159 | result of this License or out of the use or inability to use the
160 | Work (including but not limited to damages for loss of goodwill,
161 | work stoppage, computer failure or malfunction, or any and all
162 | other commercial damages or losses), even if such Contributor
163 | has been advised of the possibility of such damages.
164 |
165 | 9. Accepting Warranty or Additional Liability. While redistributing
166 | the Work or Derivative Works thereof, You may choose to offer,
167 | and charge a fee for, acceptance of support, warranty, indemnity,
168 | or other liability obligations and/or rights consistent with this
169 | License. However, in accepting such obligations, You may act only
170 | on Your own behalf and on Your sole responsibility, not on behalf
171 | of any other Contributor, and only if You agree to indemnify,
172 | defend, and hold each Contributor harmless for any liability
173 | incurred by, or claims asserted against, such Contributor by reason
174 | of your accepting any such warranty or additional liability.
175 |
176 | END OF TERMS AND CONDITIONS
177 |
178 | APPENDIX: How to apply the Apache License to your work.
179 |
180 | To apply the Apache License to your work, attach the following
181 | boilerplate notice, with the fields enclosed by brackets "[]"
182 | replaced with your own identifying information. (Don't include
183 | the brackets!) The text should be enclosed in the appropriate
184 | comment syntax for the file format. We also recommend that a
185 | file or class name and description of purpose be included on the
186 | same "printed page" as the copyright notice for easier
187 | identification within third-party archives.
188 |
189 | Copyright [yyyy] [name of copyright owner]
190 |
191 | Licensed under the Apache License, Version 2.0 (the "License");
192 | you may not use this file except in compliance with the License.
193 | You may obtain a copy of the License at
194 |
195 | http://www.apache.org/licenses/LICENSE-2.0
196 |
197 | Unless required by applicable law or agreed to in writing, software
198 | distributed under the License is distributed on an "AS IS" BASIS,
199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
200 | See the License for the specific language governing permissions and
201 | limitations under the License.
202 |
--------------------------------------------------------------------------------
/claudette/core.py:
--------------------------------------------------------------------------------
1 | # AUTOGENERATED! DO NOT EDIT! File to edit: ../00_core.ipynb.
2 |
3 | # %% auto 0
4 | __all__ = ['empty', 'model_types', 'all_models', 'models', 'models_aws', 'models_goog', 'text_only_models',
5 | 'has_streaming_models', 'has_system_prompt_models', 'has_temperature_models', 'has_extended_thinking_models',
6 | 'pricing', 'server_tool_pricing', 'can_stream', 'can_set_system_prompt', 'can_set_temperature',
7 | 'can_use_extended_thinking', 'find_block', 'server_tool_usage', 'usage', 'Client', 'get_types',
8 | 'mk_tool_choice', 'get_pricing', 'get_costs', 'ToolResult', 'mk_funcres', 'allowed_tools', 'limit_ns',
9 | 'mk_toolres', 'tool', 'Chat', 'think_md', 'search_conf', 'find_blocks', 'blks2cited_txt', 'contents',
10 | 'mk_msg', 'mk_msgs']
11 |
12 | # %% ../00_core.ipynb
13 | import inspect, typing, json
14 | from collections import abc
15 | from dataclasses import dataclass
16 | from typing import get_type_hints, Any, Callable
17 | from functools import wraps
18 |
19 | from anthropic import Anthropic, AnthropicBedrock, AnthropicVertex
20 | from anthropic.types import (Usage, TextBlock, ServerToolUseBlock,
21 | WebSearchToolResultBlock, Message, ToolUseBlock,
22 | ThinkingBlock, ServerToolUsage)
23 | from anthropic.resources import messages
24 |
25 | import toolslm
26 | from toolslm.funccall import *
27 |
28 | from fastcore.meta import delegates
29 | from fastcore.utils import *
30 | from fastcore.xtras import save_iter
31 | from msglm import mk_msg_anthropic as mk_msg, mk_msgs_anthropic as mk_msgs
32 |
33 | # %% ../00_core.ipynb
34 | _all_ = ['mk_msg', 'mk_msgs']
35 |
36 | # %% ../00_core.ipynb
37 | empty = inspect.Parameter.empty
38 |
39 | # %% ../00_core.ipynb
40 | model_types = {
41 | # Anthropic
42 | 'claude-opus-4-5': 'opus',
43 | 'claude-sonnet-4-5': 'sonnet',
44 | 'claude-haiku-4-5': 'haiku',
45 | 'claude-opus-4-1-20250805': 'opus-4-1',
46 | 'claude-opus-4-20250514': 'opus-4',
47 | 'claude-3-opus-20240229': 'opus-3',
48 | 'claude-sonnet-4-20250514': 'sonnet-4',
49 | 'claude-3-7-sonnet-20250219': 'sonnet-3-7',
50 | 'claude-3-5-sonnet-20241022': 'sonnet-3-5',
51 | 'claude-3-haiku-20240307': 'haiku-3',
52 | 'claude-3-5-haiku-20241022': 'haiku-3-5',
53 | # AWS
54 | 'anthropic.claude-opus-4-1-20250805-v1:0': 'opus',
55 | 'anthropic.claude-3-5-sonnet-20241022-v2:0': 'sonnet',
56 | 'anthropic.claude-3-opus-20240229-v1:0': 'opus-3',
57 | 'anthropic.claude-3-sonnet-20240229-v1:0': 'sonnet',
58 | 'anthropic.claude-3-haiku-20240307-v1:0': 'haiku',
59 | # Google
60 | 'claude-opus-4-1@20250805': 'opus',
61 | 'claude-3-5-sonnet-v2@20241022': 'sonnet',
62 | 'claude-3-opus@20240229': 'opus-3',
63 | 'claude-3-sonnet@20240229': 'sonnet',
64 | 'claude-3-haiku@20240307': 'haiku',
65 | }
66 |
67 | all_models = list(model_types)
68 |
69 | # %% ../00_core.ipynb
70 | models = all_models[:10]
71 |
72 | # %% ../00_core.ipynb
73 | models_aws = [
74 | 'anthropic.claude-opus-4-1-20250805-v1:0',
75 | 'anthropic.claude-sonnet-4-20250514-v1:0',
76 | 'claude-3-5-haiku-20241022',
77 | 'claude-3-7-sonnet-20250219',
78 | 'anthropic.claude-3-opus-20240229-v1:0',
79 | 'anthropic.claude-3-5-sonnet-20241022-v2:0'
80 | ]
81 |
82 | # %% ../00_core.ipynb
83 | models_goog = [
84 | 'claude-opus-4-1@20250805',
85 | 'anthropic.claude-3-sonnet-20240229-v1:0',
86 | 'anthropic.claude-3-haiku-20240307-v1:0',
87 | 'claude-3-opus@20240229',
88 | 'claude-3-5-sonnet-v2@20241022',
89 | 'claude-3-sonnet@20240229',
90 | 'claude-3-haiku@20240307'
91 | ]
92 |
93 | # %% ../00_core.ipynb
94 | text_only_models = ('claude-3-5-haiku-20241022',)
95 |
96 | # %% ../00_core.ipynb
97 | has_streaming_models = set(all_models)
98 | has_system_prompt_models = set(all_models)
99 | has_temperature_models = set(all_models)
100 | has_extended_thinking_models = {
101 | 'claude-opus-4-5', 'claude-opus-4-1-20250805', 'claude-opus-4-20250514',
102 |     'claude-sonnet-4-20250514', 'claude-3-7-sonnet-20250219', 'claude-sonnet-4-5',
103 |     'claude-haiku-4-5'
104 | }
105 |
106 | # %% ../00_core.ipynb
107 | def can_stream(m): return m in has_streaming_models
108 | def can_set_system_prompt(m): return m in has_system_prompt_models
109 | def can_set_temperature(m): return m in has_temperature_models
110 | def can_use_extended_thinking(m): return m in has_extended_thinking_models
111 |
112 | # %% ../00_core.ipynb
113 | def _type(x):
114 | try: return x.type
115 | except AttributeError: return x.get('type')
116 |
117 | def find_block(r:abc.Mapping, # The message to look in
118 | blk_type:type|str=TextBlock # The type of block to find
119 | ):
120 | "Find the first block of type `blk_type` in `r.content`."
121 | f = (lambda x:_type(x)==blk_type) if isinstance(blk_type,str) else (lambda x:isinstance(x,blk_type))
122 | return first(o for o in r.content if f(o))
123 |
124 | # %% ../00_core.ipynb
125 | @patch
126 | def _repr_markdown_(self:(Message)):
127 | det = '\n- '.join(f'{k}: `{v}`' for k,v in self.model_dump().items())
128 |     cts = re.sub(r'\$', '&#36;', contents(self)) # escape `$` for jupyter latex
129 | return f"""{cts}
130 |
131 | <details>
132 | 
133 | - {det}
134 | 
135 | </details>"""
136 |
137 | # %% ../00_core.ipynb
138 | def server_tool_usage(web_search_requests=0):
139 | 'Little helper to create a server tool usage object'
140 | return ServerToolUsage(web_search_requests=web_search_requests)
141 |
142 | # %% ../00_core.ipynb
143 | def usage(inp=0, # input tokens
144 | out=0, # Output tokens
145 | cache_create=0, # Cache creation tokens
146 | cache_read=0, # Cache read tokens
147 | server_tool_use=server_tool_usage() # server tool use
148 | ):
149 | 'Slightly more concise version of `Usage`.'
150 | return Usage(input_tokens=inp, output_tokens=out, cache_creation_input_tokens=cache_create,
151 | cache_read_input_tokens=cache_read, server_tool_use=server_tool_use)
152 |
153 | # %% ../00_core.ipynb
154 | def _dgetattr(o,s,d):
155 | "Like getattr, but returns the default if the result is None"
156 | return getattr(o,s,d) or d
157 |
158 | @patch(as_prop=True)
159 | def total(self:Usage): return self.input_tokens+self.output_tokens+_dgetattr(self, "cache_creation_input_tokens",0)+_dgetattr(self, "cache_read_input_tokens",0)
160 |
161 | # %% ../00_core.ipynb
162 | @patch
163 | def __repr__(self:Usage):
164 | io_toks = f'In: {self.input_tokens}; Out: {self.output_tokens}'
165 | cache_toks = f'Cache create: {_dgetattr(self, "cache_creation_input_tokens",0)}; Cache read: {_dgetattr(self, "cache_read_input_tokens",0)}'
166 | server_tool_use = _dgetattr(self, "server_tool_use",server_tool_usage())
167 | server_tool_use_str = f'Search: {server_tool_use.web_search_requests}'
168 | total_tok = f'Total Tokens: {self.total}'
169 | return f'{io_toks}; {cache_toks}; {total_tok}; {server_tool_use_str}'
170 |
171 | # %% ../00_core.ipynb
172 | @patch
173 | def __add__(self:ServerToolUsage, b):
174 | "Add together each of the server tool use counts"
175 | return ServerToolUsage(web_search_requests=self.web_search_requests+b.web_search_requests)
176 |
177 | # %% ../00_core.ipynb
178 | @patch
179 | def __add__(self:Usage, b):
180 | "Add together each of `input_tokens` and `output_tokens`"
181 | return usage(self.input_tokens+b.input_tokens, self.output_tokens+b.output_tokens,
182 | _dgetattr(self,'cache_creation_input_tokens',0)+_dgetattr(b,'cache_creation_input_tokens',0),
183 | _dgetattr(self,'cache_read_input_tokens',0)+_dgetattr(b,'cache_read_input_tokens',0),
184 | _dgetattr(self,'server_tool_use',server_tool_usage())+_dgetattr(b,'server_tool_use',server_tool_usage()))
185 |
186 | # %% ../00_core.ipynb
187 | class Client:
188 | def __init__(self, model, cli=None, log=False, cache=False):
189 | "Basic Anthropic messages client."
190 | self.model,self.use = model,usage()
191 | self.text_only = model in text_only_models
192 | self.log = [] if log else None
193 | self.c = (cli or Anthropic(default_headers={'anthropic-beta': 'prompt-caching-2024-07-31'}))
194 | self.cache = cache
195 |
196 | # %% ../00_core.ipynb
197 | @patch
198 | def _r(self:Client, r:Message, prefill=''):
199 | "Store the result of the message and accrue total usage."
200 | if prefill:
201 | blk = find_block(r)
202 | if blk: blk.text = prefill + (blk.text or '')
203 | self.result = r
204 | self.use += r.usage
205 | self.stop_reason = r.stop_reason
206 | self.stop_sequence = r.stop_sequence
207 | return r
208 |
209 | # %% ../00_core.ipynb
210 | @patch
211 | def _log(self:Client, final, prefill, msgs, **kwargs):
212 | self._r(final, prefill)
213 | if self.log is not None: self.log.append({
214 | "msgs": msgs, **kwargs,
215 | "result": self.result, "use": self.use, "stop_reason": self.stop_reason, "stop_sequence": self.stop_sequence
216 | })
217 | return self.result
218 |
219 | # %% ../00_core.ipynb
220 | @save_iter
221 | def _stream(o, cm, prefill, cb):
222 | with cm as s:
223 | yield prefill
224 | yield from s.text_stream
225 | o.value = s.get_final_message()
226 | cb(o.value)
227 |
228 | # %% ../00_core.ipynb
229 | def get_types(msgs):
230 | types = []
231 | for m in msgs:
232 | content = m.get('content', [])
233 | if isinstance(content, list): types.extend(getattr(c, 'type', None) or c['type'] for c in content)
234 | else: types.append('text')
235 | return types
236 |
237 | # %% ../00_core.ipynb
238 | def mk_tool_choice(choose:Union[str,bool,None])->dict:
239 |     "Create a `tool_choice` dict: the named tool if `choose` is a string, 'any' if it is truthy, otherwise 'auto'"
240 | return {"type": "tool", "name": choose} if isinstance(choose,str) else {'type':'any'} if choose else {'type':'auto'}
241 |
242 | # %% ../00_core.ipynb
243 | @patch
244 | def _precall(self:Client, msgs, prefill, sp, temp, maxtok, maxthinktok, stream,
245 | stop, tools, tool_choice, kwargs):
246 | if tools: kwargs['tools'] = [get_schema(o) if callable(o) else o for o in listify(tools)]
247 | if tool_choice: kwargs['tool_choice'] = mk_tool_choice(tool_choice)
248 | if maxthinktok:
249 | kwargs['thinking'] = {'type':'enabled', 'budget_tokens':maxthinktok}
250 | temp,prefill = 1,''
251 | pref = [prefill.strip()] if prefill else []
252 | if not isinstance(msgs,list): msgs = [msgs]
253 | if stop is not None:
254 | if not isinstance(stop, (list)): stop = [stop]
255 | kwargs["stop_sequences"] = stop
256 | msgs = mk_msgs(msgs+pref, cache=self.cache, cache_last_ckpt_only=self.cache)
257 | assert not ('image' in get_types(msgs) and self.text_only), f"Images not supported by: {self.model}"
258 | kwargs |= dict(max_tokens=maxtok, system=sp, temperature=temp)
259 | return msgs, kwargs
260 |
261 | # %% ../00_core.ipynb
262 | @patch
263 | @delegates(messages.Messages.create)
264 | def __call__(self:Client,
265 | msgs:list, # List of messages in the dialog
266 | sp='', # The system prompt
267 | temp=0, # Temperature
268 | maxtok=4096, # Maximum tokens
269 | maxthinktok=0, # Maximum thinking tokens
270 | prefill='', # Optional prefill to pass to Claude as start of its response
271 | stream:bool=False, # Stream response?
272 | stop=None, # Stop sequence
273 | tools:Optional[list]=None, # List of tools to make available to Claude
274 | tool_choice:Optional[dict]=None, # Optionally force use of some tool
275 | cb=None, # Callback to pass result to when complete
276 | **kwargs):
277 | "Make a call to Claude."
278 | msgs,kwargs = self._precall(msgs, prefill, sp, temp, maxtok, maxthinktok, stream,
279 | stop, tools, tool_choice, kwargs)
280 | m = self.c.messages
281 | f = m.stream if stream else m.create
282 | res = f(model=self.model, messages=msgs, **kwargs)
283 | def _cb(v):
284 | self._log(v, prefill=prefill, msgs=msgs, **kwargs)
285 | if cb: cb(v)
286 | if stream: return _stream(res, prefill, _cb)
287 | try: return res
288 | finally: _cb(res)
289 |
290 | # %% ../00_core.ipynb
291 | pricing = { # model type: $ / million tokens (input, output, cache write, cache read)
292 | 'opus': (5, 25, 6.25, 0.5),
293 | 'sonnet': (3, 15, 3.75, 0.3),
294 | 'haiku': (1, 5, 1.25, 0.1),
295 | 'haiku-3': (0.25, 1.25, 0.3, 0.03),
296 | 'haiku-3-5': (1, 3, 1.25, 0.1),
297 | }
298 |
299 | # %% ../00_core.ipynb
300 | def get_pricing(m, u):
301 | return pricing[m][:3] if u.prompt_token_count < 128_000 else pricing[m][3:]
302 |
303 | # %% ../00_core.ipynb
304 | server_tool_pricing = {
305 | 'web_search_requests': 10, # $10 per 1,000
306 | }
307 |
308 | # %% ../00_core.ipynb
309 | @patch
310 | def cost(self:Usage, costs:tuple) -> float:
311 | cache_w, cache_r = _dgetattr(self, "cache_creation_input_tokens",0), _dgetattr(self, "cache_read_input_tokens",0)
312 | tok_cost = sum([self.input_tokens * costs[0] + self.output_tokens * costs[1] + cache_w * costs[2] + cache_r * costs[3]]) / 1e6
313 | server_tool_use = _dgetattr(self, "server_tool_use",server_tool_usage())
314 | server_tool_cost = server_tool_use.web_search_requests * server_tool_pricing['web_search_requests'] / 1e3
315 | return tok_cost + server_tool_cost
316 |
317 | # %% ../00_core.ipynb
318 | @patch(as_prop=True)
319 | def cost(self: Client) -> float: return self.use.cost(pricing[model_types[self.model]])
320 |
321 | # %% ../00_core.ipynb
322 | def get_costs(c):
323 | costs = pricing[model_types[c.model]]
324 |
325 | inp_cost = c.use.input_tokens * costs[0] / 1e6
326 | out_cost = c.use.output_tokens * costs[1] / 1e6
327 |
328 | cache_w = c.use.cache_creation_input_tokens
329 | cache_r = c.use.cache_read_input_tokens
330 | cache_cost = (cache_w * costs[2] + cache_r * costs[3]) / 1e6
331 |
332 | server_tool_use = c.use.server_tool_use
333 | server_tool_cost = server_tool_use.web_search_requests * server_tool_pricing['web_search_requests'] / 1e3
334 | return inp_cost, out_cost, cache_cost, cache_w + cache_r, server_tool_cost
335 |
336 | # %% ../00_core.ipynb
337 | @patch
338 | def _repr_markdown_(self:Client):
339 | if not hasattr(self,'result'): return 'No results yet'
340 | msg = contents(self.result)
341 | inp_cost, out_cost, cache_cost, cached_toks, server_tool_cost = get_costs(self)
342 | return f"""{msg}
343 |
344 | | Metric | Count | Cost (USD) |
345 | |--------|------:|-----:|
346 | | Input tokens | {self.use.input_tokens:,} | {inp_cost:.6f} |
347 | | Output tokens | {self.use.output_tokens:,} | {out_cost:.6f} |
348 | | Cache tokens | {cached_toks:,} | {cache_cost:.6f} |
349 | | Server tool use | {self.use.server_tool_use.web_search_requests:,} | {server_tool_cost:.6f} |
350 | | **Total** | **{self.use.total:,}** | **${self.cost:.6f}** |"""
351 |
352 | # %% ../00_core.ipynb
353 | class ToolResult(BasicRepr):
354 | def __init__(self, result_type: str, data): store_attr()
355 | def __str__(self): return str(self.data)
356 |
357 | # %% ../00_core.ipynb
358 | def _img_content(b64data):
359 | return [{"type": "image",
360 | "source":{"type": "base64", "media_type": "image/png", "data": b64data}},
361 | {"type": "text", "text": "Captured screenshot."}]
362 |
363 | def mk_funcres(fc, ns):
364 |     "Call the tool requested in tool use block `fc`, and wrap the result in a `tool_result` response."
365 | try: res = call_func(fc.name, fc.input, ns=ns, raise_on_err=False)
366 | except KeyError as e: return {"type": "tool_result", "tool_use_id": fc.id, "content": f"Error - tool not defined in the tool_schemas: {fc.name}"}
367 | if isinstance(res, ToolResult) and res.result_type=="image/png": res = _img_content(res.data) # list
368 | else: res = str(res.data) if isinstance(res, ToolResult) else str(res)
369 | return {"type": "tool_result", "tool_use_id": fc.id, "content": res}
370 |
371 | # %% ../00_core.ipynb
372 | def allowed_tools(specs: Optional[list[Union[str,abc.Callable]]], choice: Optional[Union[dict,str]]=None):
373 | if not isinstance(choice, dict): choice=mk_tool_choice(choice)
374 | if choice['type'] == 'tool': return {choice['name']}
375 | if choice['type'] == 'none': return set()
376 | return {v['name'] if isinstance(v, dict) else v.__name__ for v in specs or []}
377 |
378 | # %% ../00_core.ipynb
379 | def limit_ns(
380 | ns:Optional[abc.Mapping]=None, # Namespace to search for tools
381 |     specs:Optional[list[Union[str,abc.Callable]]]=None, # List of tools the LLM is allowed to call
382 | choice:Optional[Union[dict,str]]=None # Tool choice as defined by Anthropic API
383 | ):
384 | "Filter namespace `ns` to only include tools allowed by `specs` and `choice`"
385 | if ns is None: ns=globals()
386 | if not isinstance(ns, abc.Mapping): ns = mk_ns(ns)
387 | ns = {k:ns[k] for k in allowed_tools(specs, choice) if k in ns}
388 | return ns
389 |
390 | # %% ../00_core.ipynb
391 | def mk_toolres(
392 | r:abc.Mapping, # Tool use request response from Claude
393 | ns:Optional[abc.Mapping]=None, # Namespace to search for tools
394 | ):
395 | "Create a `tool_result` message from response `r`."
396 | cts = getattr(r, 'content', [])
397 | res = [mk_msg(r.model_dump(), role='assistant')]
398 | if ns is None: ns=globals()
399 | tcs = [mk_funcres(o, ns) for o in cts if isinstance(o,ToolUseBlock)]
400 | if tcs: res.append(mk_msg(tcs))
401 | return res
402 |
403 | # %% ../00_core.ipynb
404 | @patch
405 | @delegates(Client.__call__)
406 | def structured(self:Client,
407 | msgs:list, # List of messages in the dialog
408 | tools:list[abc.Callable]=None, # List of tools to make available to Claude
409 | ns:Optional[abc.Mapping]=None, # Namespace to search for tools
410 | **kwargs):
411 | "Return the value of all tool calls (generally used for structured outputs)"
412 | tools = listify(tools)
413 | res = self(msgs, tools=tools, tool_choice=tools, **kwargs)
414 | if ns is None: ns=mk_ns(*tools)
415 | cts = getattr(res, 'content', [])
416 | tcs = [call_func(o.name, o.input, ns=ns) for o in cts if isinstance(o,ToolUseBlock)]
417 | return tcs
418 |
419 | # %% ../00_core.ipynb
420 | def _is_builtin(tp: type):
421 |     "Returns True for built-in primitive types or containers"
422 | return (tp in (str, int, float, bool, complex) or tp is None
423 | or getattr(tp, '__origin__', None) is not None) # Pass through all container types
424 |
425 | def _convert(val: Dict, # dictionary argument being passed in
426 | tp: type): # type of the tool function input
427 |     "Convert a single argument to its annotated type"
428 | if val is None or _is_builtin(tp) or not isinstance(val, dict): return val
429 | return tp(**val)
430 |
431 | # %% ../00_core.ipynb
432 | def tool(func):
433 | if isinstance(func, dict): return func # it's a schema, so don't change
434 | hints = get_type_hints(func)
435 | @wraps(func)
436 | def wrapper(*args, **kwargs):
437 | new_args = [_convert(arg, hints[p]) for p,arg in zip(inspect.signature(func).parameters, args)]
438 | new_kwargs = {k: _convert(v, hints[k]) if k in hints else v for k,v in kwargs.items()}
439 | return func(*new_args, **new_kwargs)
440 | return wrapper
441 |
442 | # %% ../00_core.ipynb
443 | class Chat:
444 | def __init__(self,
445 | model:Optional[str]=None, # Model to use (leave empty if passing `cli`)
446 | cli:Optional[Client]=None, # Client to use (leave empty if passing `model`)
447 | sp='', # Optional system prompt
448 | tools:Optional[list]=None, # List of tools to make available to Claude
449 | temp=0, # Temperature
450 | cont_pr:Optional[str]=None, # User prompt to continue an assistant response
451 | cache: bool = False, # Use Claude cache?
452 | hist: list = None, # Initialize history
453 | ns:Optional[abc.Mapping]=None # Namespace to search for tools
454 | ):
455 | "Anthropic chat client."
456 | assert model or cli
457 | assert cont_pr != "", "cont_pr may not be an empty string"
458 | self.c = (cli or Client(model, cache=cache))
459 | if hist is None: hist=[]
460 | if tools: tools = [tool(t) for t in listify(tools)]
461 | if ns is None: ns=tools
462 | self.h,self.sp,self.tools,self.cont_pr,self.temp,self.cache,self.ns = hist,sp,tools,cont_pr,temp,cache,ns
463 |
464 | @property
465 | def use(self): return self.c.use
466 |
467 | # %% ../00_core.ipynb
468 | @patch(as_prop=True)
469 | def cost(self: Chat) -> float: return self.c.cost
470 |
471 | # %% ../00_core.ipynb
472 | @patch
473 | def _post_pr(self:Chat, pr, prev_role):
474 | if pr is None and prev_role == 'assistant':
475 | if self.cont_pr is None:
476 | raise ValueError("Prompt must be given after completion, or use `self.cont_pr`.")
477 | pr = self.cont_pr # No user prompt, keep the chain
478 | if pr: self.h.append(mk_msg(pr, cache=self.cache))
479 |
480 | # %% ../00_core.ipynb
481 | @patch
482 | def _append_pr(self:Chat, pr=None):
483 | prev_role = nested_idx(self.h, -1, 'role') if self.h else 'assistant' # First message should be 'user'
484 | if pr and prev_role == 'user': self() # already user request pending
485 | self._post_pr(pr, prev_role)
486 |
487 | # %% ../00_core.ipynb
488 | @patch
489 | def __call__(self:Chat,
490 | pr=None, # Prompt / message
491 | temp=None, # Temperature
492 | maxtok=4096, # Maximum tokens
493 | maxthinktok=0, # Maximum thinking tokens
494 | stream=False, # Stream response?
495 | prefill='', # Optional prefill to pass to Claude as start of its response
496 | tool_choice:Optional[dict]=None, # Optionally force use of some tool
497 | **kw):
498 | if temp is None: temp=self.temp
499 | self._append_pr(pr)
500 | def _cb(v):
501 | self.last = mk_toolres(v, ns=limit_ns(self.ns, self.tools, tool_choice))
502 | self.h += self.last
503 | return self.c(self.h, stream=stream, prefill=prefill, sp=self.sp, temp=temp, maxtok=maxtok, maxthinktok=maxthinktok,
504 | tools=self.tools, tool_choice=tool_choice, cb=_cb, **kw)
505 |
506 | # %% ../00_core.ipynb
507 | @patch
508 | def _repr_markdown_(self:Chat):
509 | if not hasattr(self.c, 'result'): return 'No results yet'
510 | last_msg = contents(self.c.result)
511 |
512 | def fmt_msg(m):
513 | t = contents(m)
514 | if isinstance(t, dict): return t['content']
515 | return t
516 |
517 | history = '\n\n'.join(f"**{m['role']}**: {fmt_msg(m)}"
518 | for m in self.h)
519 | det = self.c._repr_markdown_().split('\n\n')[-1]
520 | if history: history = f"""
521 | <details>
522 | <summary>► History</summary>
523 | 
524 | {history}
525 | 
526 | </details>
527 | """
528 |
529 | return f"""{last_msg}
530 | {history}
531 | {det}"""
532 |
533 | # %% ../00_core.ipynb
534 | def think_md(txt, thk):
535 | return f"""
536 | {txt}
537 | 
538 | <details>
539 | <summary>Thinking</summary>
540 | {thk}
541 | </details>
542 | """
543 |
544 | # %% ../00_core.ipynb
545 | def search_conf(max_uses:int=None, allowed_domains:list=None, blocked_domains:list=None, user_location:dict=None):
546 | 'Little helper to create a search tool config'
547 | conf = {'type': 'web_search_20250305', 'name': 'web_search'}
548 | if max_uses: conf['max_uses'] = max_uses
549 | if allowed_domains: conf['allowed_domains'] = allowed_domains
550 | if blocked_domains: conf['blocked_domains'] = blocked_domains
551 | if user_location: conf['user_location'] = user_location
552 | return conf
553 |
554 | # %% ../00_core.ipynb
555 | def find_blocks(r, blk_type=TextBlock, type='text'):
556 | "Helper to find all blocks of type `blk_type` in response `r`."
557 |     def f(b): return (b.get('type')==type) if isinstance(b, dict) else isinstance(b, blk_type)
558 | return [b for b in getattr(r, "content", []) if f(b)]
559 |
560 | # %% ../00_core.ipynb
561 | def blks2cited_txt(txt_blks):
562 | "Helper to get the contents from a list of `TextBlock`s, with citations."
563 | text_sections, citations = [], []
564 | for blk in txt_blks:
565 | if isinstance(blk, dict): blk = AttrDict(blk)
566 | section = blk.text
567 | if getattr(blk, 'citations', None):
568 | markers = []
569 | for cit in blk.citations:
570 | citations.append(cit)
571 | markers.append(f"[^{len(citations)}]")
572 | section = f"{section} " + " ".join(markers)
573 | text_sections.append(section)
574 | body = "".join(text_sections)
575 | def _cite(i, cit):
576 | esc = cit.cited_text.replace('"', r'\"')
577 | return f'[^{i+1}]: {cit.url}\n\t"{esc}"'
578 | if citations:
579 | refs = '\n\n'.join(L.enumerate(citations).starmap(_cite))
580 | body = f"{body}\n\n{refs}" if body else refs
581 | return body
582 |
583 | # %% ../00_core.ipynb
584 | def contents(r, show_thk=True):
585 | "Helper to get the contents from Claude response `r`."
586 | blks = find_blocks(r, blk_type=TextBlock)
587 | content = None
588 | if blks: content = blks2cited_txt(blks)
589 | if show_thk:
590 | tk_blk = find_block(r, blk_type=ThinkingBlock)
591 | if tk_blk: return think_md(content, tk_blk.thinking.strip())
592 | if not content:
593 | blk = find_block(r)
594 | if not blk and getattr(r, "content", None): blk = r.content
595 | if hasattr(blk, "text"): content = blk.text.strip()
596 | elif hasattr(blk, "content"): content = blk.content.strip()
597 | elif hasattr(blk, "source"): content = f"*Media Type - {blk.type}*"
598 | else: content = str(blk)
599 | return content
600 |
--------------------------------------------------------------------------------
/README.txt:
--------------------------------------------------------------------------------
1 | # claudette
2 |
3 |
4 |
5 |
6 | > **NB**: If you are reading this in GitHub’s readme, we recommend you
7 | > instead read the much more nicely formatted [documentation
8 | > version](https://claudette.answer.ai/) of this tutorial.
9 |
10 | *Claudette* is a wrapper for Anthropic’s [Python
11 | SDK](https://github.com/anthropics/anthropic-sdk-python).
12 |
13 | The SDK works well, but it is quite low level – it leaves the developer
14 | to do a lot of stuff manually. That’s a lot of extra work and
15 | boilerplate! Claudette automates pretty much everything that can be
16 | automated, whilst providing full control. Amongst the features provided:
17 |
18 | - A [`Chat`](https://claudette.answer.ai/core.html#chat) class that
19 | creates stateful dialogs
20 | - Support for *prefill*, which tells Claude what to use as the first few
21 | words of its response
22 | - Convenient image support
23 | - Simple and convenient support for Claude’s new Tool Use API.
24 |
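As an illustration of how *prefill* works: Claude continues from a trailing
assistant message, so a prefill is simply appended to the outgoing dialog as a
partial assistant turn. A minimal sketch of the idea (`add_prefill` is a
hypothetical helper, not part of Claudette's API):

``` python
def add_prefill(msgs, prefill):
    # Claude continues a trailing assistant message, so a prefill is just
    # a partial assistant turn appended to the outgoing message list.
    if not prefill: return msgs
    return msgs + [{"role": "assistant", "content": prefill.strip()}]

msgs = [{"role": "user", "content": "Name a colour."}]
print(add_prefill(msgs, "According to my research, "))
```

Claudette's `Client` does essentially this internally, stripping the prefill
before sending and prepending it to the returned text.
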
25 | You’ll need to set the `ANTHROPIC_API_KEY` environment variable to the
26 | key provided to you by Anthropic in order to use this library.
27 |
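For example, you can check for (or set) the key from Python before using the
library; the value below is a placeholder, not a real key:

``` python
import os
# Use a real key from the Anthropic console in practice; this placeholder
# only shows which environment variable the library reads credentials from.
os.environ.setdefault('ANTHROPIC_API_KEY', 'sk-ant-...')
print('key configured:', bool(os.environ.get('ANTHROPIC_API_KEY')))
```
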
28 | Note that this library is the first ever “literate nbdev” project. That
29 | means that the actual source code for the library is a rendered Jupyter
30 | Notebook which includes callout notes and tips, HTML tables and images,
31 | detailed explanations, and teaches *how* and *why* the code is written
32 | the way it is. Even if you’ve never used the Anthropic Python SDK or
33 | Claude API before, you should be able to read the source code. Click
34 | [Claudette’s Source](https://claudette.answer.ai/core.html) to read it,
35 | or clone the git repo and execute the notebook yourself to see every
36 | step of the creation process in action. The tutorial below includes
37 | links to API details which will take you to relevant parts of the
38 | source. The reason this project is a new kind of literate program is
39 | because we take seriously Knuth’s call to action, that we have a “*moral
40 | commitment*” to never write an “*illiterate program*” – and so we have a
41 | commitment to making literate programming an easy and pleasant
42 | experience. (For more on this, see [this
43 | talk](https://www.youtube.com/watch?v=rX1yGxJijsI) from Hamel Husain.)
44 |
45 | > “*Let us change our traditional attitude to the construction of
46 | > programs: Instead of imagining that our main task is to instruct a
47 | > **computer** what to do, let us concentrate rather on explaining to
48 | > **human beings** what we want a computer to do.*” Donald E. Knuth,
49 | > [Literate
50 | > Programming](https://www.cs.tufts.edu/~nr/cs257/archive/literate-programming/01-knuth-lp.pdf)
51 | > (1984)
52 |
53 | ## Install
54 |
55 | ``` sh
56 | pip install claudette
57 | ```
58 |
59 | ## Getting started
60 |
61 | Anthropic’s Python SDK will automatically be installed with Claudette,
62 | if you don’t already have it.
63 |
64 | ``` python
65 | import os
66 | # os.environ['ANTHROPIC_LOG'] = 'debug'
67 | ```
68 |
69 | To print every HTTP request and response in full, uncomment the above
70 | line.
71 |
72 | ``` python
73 | from claudette import *
74 | ```
75 |
76 | Claudette only exports the symbols that are needed to use the library,
77 | so you can use `import *` to import them. Alternatively, just use:
78 |
79 | ``` python
80 | import claudette
81 | ```
82 |
83 | …and then add the prefix `claudette.` to any usages of the module.
84 |
85 | Claudette provides `models`, which is a list of models currently
86 | available from the SDK.
87 |
88 | ``` python
89 | models
90 | ```
91 |
92 | ['claude-3-opus-20240229',
93 | 'claude-3-5-sonnet-20241022',
94 | 'claude-3-haiku-20240307']
95 |
96 | For these examples, we’ll use Sonnet 3.5, since it’s awesome!
97 |
98 | ``` python
99 | model = models[1]
100 | ```
101 |
102 | ## Chat
103 |
104 | The main interface to Claudette is the
105 | [`Chat`](https://claudette.answer.ai/core.html#chat) class, which
106 | provides a stateful interface to Claude:
107 |
108 | ``` python
109 | chat = Chat(model, sp="""You are a helpful and concise assistant.""")
110 | chat("I'm Jeremy")
111 | ```
112 |
113 | Hello Jeremy, nice to meet you.
114 |
115 |
116 |
117 | - id: `msg_015oK9jEcra3TEKHUGYULjWB`
118 | - content:
119 | `[{'text': 'Hello Jeremy, nice to meet you.', 'type': 'text'}]`
120 | - model: `claude-3-5-sonnet-20241022`
121 | - role: `assistant`
122 | - stop_reason: `end_turn`
123 | - stop_sequence: `None`
124 | - type: `message`
125 | - usage:
126 | `{'input_tokens': 19, 'output_tokens': 11, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
127 |
128 |
129 |
130 | ``` python
131 | r = chat("What's my name?")
132 | r
133 | ```
134 |
135 | Your name is Jeremy.
136 |
137 |
138 |
139 | - id: `msg_01Si8sTFJe8d8vq7enanbAwj`
140 | - content: `[{'text': 'Your name is Jeremy.', 'type': 'text'}]`
141 | - model: `claude-3-5-sonnet-20241022`
142 | - role: `assistant`
143 | - stop_reason: `end_turn`
144 | - stop_sequence: `None`
145 | - type: `message`
146 | - usage:
147 | `{'input_tokens': 38, 'output_tokens': 8, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
148 |
149 |
150 |
151 | ``` python
152 | r = chat("What's my name?")
153 | r
154 | ```
155 |
156 | Your name is Jeremy.
157 |
158 |
159 |
160 | - id: `msg_01BHWRoAX8eBsoLn2bzpBkvx`
161 | - content: `[{'text': 'Your name is Jeremy.', 'type': 'text'}]`
162 | - model: `claude-3-5-sonnet-20241022`
163 | - role: `assistant`
164 | - stop_reason: `end_turn`
165 | - stop_sequence: `None`
166 | - type: `message`
167 | - usage:
168 | `{'input_tokens': 54, 'output_tokens': 8, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
169 |
170 |
171 |
172 | As you see above, displaying the results of a call in a notebook shows
173 | just the message contents, with the other details hidden behind a
174 | collapsible section. Alternatively you can `print` the details:
175 |
176 | ``` python
177 | print(r)
178 | ```
179 |
180 | Message(id='msg_01BHWRoAX8eBsoLn2bzpBkvx', content=[TextBlock(text='Your name is Jeremy.', type='text')], model='claude-3-5-sonnet-20241022', role='assistant', stop_reason='end_turn', stop_sequence=None, type='message', usage=In: 54; Out: 8; Cache create: 0; Cache read: 0; Total: 62)
181 |
182 | Claude supports adding an extra `assistant` message at the end, which
183 | contains the *prefill* – i.e. the text we want Claude to assume the
184 | response starts with. Let’s try it out:
185 |
186 | ``` python
187 | chat("Concisely, what is the meaning of life?",
188 | prefill='According to Douglas Adams,')
189 | ```
190 |
191 | According to Douglas Adams,42. Philosophically, it’s to find personal
192 | meaning through relationships, purpose, and experiences.
193 |
194 |
195 |
196 | - id: `msg_01R9RvMdFwea9iRX5uYSSHG7`
197 | - content:
198 | `[{'text': "According to Douglas Adams,42. Philosophically, it's to find personal meaning through relationships, purpose, and experiences.", 'type': 'text'}]`
199 | - model: `claude-3-5-sonnet-20241022`
200 | - role: `assistant`
201 | - stop_reason: `end_turn`
202 | - stop_sequence: `None`
203 | - type: `message`
204 | - usage:
205 | `{'input_tokens': 82, 'output_tokens': 23, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
206 |
207 |
208 |
209 | You can add `stream=True` to stream the results as soon as they arrive
210 | (although you will only see the gradual generation if you execute the
211 | notebook yourself, of course!)
212 |
213 | ``` python
214 | for o in chat("Concisely, what book was that in?", prefill='It was in', stream=True):
215 | print(o, end='')
216 | ```
217 |
218 | It was in "The Hitchhiker's Guide to the Galaxy" by Douglas Adams.
219 |
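When `stream=True`, each iteration yields the next chunk of text rather than a complete message. A minimal pure-Python stand-in for the loop above (not Claudette’s actual implementation; `fake_stream` is a hypothetical generator):

``` python
def fake_stream():
    # Stand-in for `chat(..., stream=True)`: yields text deltas as they arrive
    yield from ['It was in ', '"The Hitchhiker\'s Guide to the Galaxy"', ' by Douglas Adams.']

parts = []
for o in fake_stream():
    parts.append(o)  # the example above does print(o, end='') here
text = ''.join(parts)
```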
220 | ### Async
221 |
222 | Alternatively, you can use
223 | [`AsyncChat`](https://claudette.answer.ai/async.html#asyncchat) (or
224 | [`AsyncClient`](https://claudette.answer.ai/async.html#asyncclient)) for
225 | the async versions, e.g:
226 |
227 | ``` python
228 | chat = AsyncChat(model)
229 | await chat("I'm Jeremy")
230 | ```
231 |
232 | Hi Jeremy! Nice to meet you. I’m Claude, an AI assistant created by
233 | Anthropic. How can I help you today?
234 |
235 |
236 |
237 | - id: `msg_016Q8cdc3sPWBS8eXcNj841L`
238 | - content:
239 | `[{'text': "Hi Jeremy! Nice to meet you. I'm Claude, an AI assistant created by Anthropic. How can I help you today?", 'type': 'text'}]`
240 | - model: `claude-3-5-sonnet-20241022`
241 | - role: `assistant`
242 | - stop_reason: `end_turn`
243 | - stop_sequence: `None`
244 | - type: `message`
245 | - usage:
246 | `{'input_tokens': 10, 'output_tokens': 31, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
247 |
248 |
249 |
250 | Remember to use `async for` when streaming in this case:
251 |
252 | ``` python
253 | async for o in await chat("Concisely, what is the meaning of life?",
254 | prefill='According to Douglas Adams,', stream=True):
255 | print(o, end='')
256 | ```
257 |
258 | According to Douglas Adams, it's 42. But in my view, there's no single universal meaning - each person must find their own purpose through relationships, personal growth, contribution to others, and pursuit of what they find meaningful.
259 |
260 | ## Prompt caching
261 |
262 | If you use `mk_msg(msg, cache=True)`, then the message is cached using
263 | Claude’s [prompt
264 | caching](https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching)
265 | feature. For instance, here we use caching when asking about Claudette’s
266 | readme file:
267 |
268 | ``` python
269 | chat = Chat(model, sp="""You are a helpful and concise assistant.""")
270 | ```
271 |
272 | ``` python
273 | nbtxt = Path('README.txt').read_text()
274 | msg = f'''
275 | {nbtxt}
276 |
277 | In brief, what is the purpose of this project based on the readme?'''
278 | r = chat(mk_msg(msg, cache=True))
279 | r
280 | ```
281 |
282 | Claudette is a high-level wrapper for Anthropic’s Python SDK that
283 | automates common tasks and provides additional functionality. Its main
284 | features include:
285 |
286 | 1. A Chat class for stateful dialogs
287 | 2. Support for prefill (controlling Claude’s initial response words)
288 | 3. Convenient image handling
289 | 4. Simple tool use API integration
290 | 5. Support for multiple model providers (Anthropic, AWS Bedrock, Google
291 | Vertex)
292 |
293 | The project is notable for being the first “literate nbdev” project,
294 | meaning its source code is written as a detailed, readable Jupyter
295 | Notebook that includes explanations, examples, and teaching material
296 | alongside the functional code.
297 |
298 | The goal is to simplify working with Claude’s API while maintaining full
299 | control, reducing boilerplate code and manual work that would otherwise
300 | be needed with the base SDK.
301 |
302 |
303 |
304 | - id: `msg_014rVQnYoZXZuyWUCMELG1QW`
305 | - content:
306 | `[{'text': 'Claudette is a high-level wrapper for Anthropic\'s Python SDK that automates common tasks and provides additional functionality. Its main features include:\n\n1. A Chat class for stateful dialogs\n2. Support for prefill (controlling Claude\'s initial response words)\n3. Convenient image handling\n4. Simple tool use API integration\n5. Support for multiple model providers (Anthropic, AWS Bedrock, Google Vertex)\n\nThe project is notable for being the first "literate nbdev" project, meaning its source code is written as a detailed, readable Jupyter Notebook that includes explanations, examples, and teaching material alongside the functional code.\n\nThe goal is to simplify working with Claude\'s API while maintaining full control, reducing boilerplate code and manual work that would otherwise be needed with the base SDK.', 'type': 'text'}]`
307 | - model: `claude-3-5-sonnet-20241022`
308 | - role: `assistant`
309 | - stop_reason: `end_turn`
310 | - stop_sequence: `None`
311 | - type: `message`
312 | - usage:
313 | `{'input_tokens': 4, 'output_tokens': 179, 'cache_creation_input_tokens': 7205, 'cache_read_input_tokens': 0}`
314 |
315 |
316 |
317 | The response records that a cache has been created using these input
318 | tokens:
319 |
320 | ``` python
321 | print(r.usage)
322 | ```
323 |
324 | Usage(input_tokens=4, output_tokens=179, cache_creation_input_tokens=7205, cache_read_input_tokens=0)
325 |
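Under the hood, `cache=True` marks the content block with a `cache_control` entry, per Anthropic’s prompt-caching API. A sketch of the resulting message shape (the text here is a hypothetical placeholder):

``` python
msg_text = "<readme contents> In brief, what is the purpose of this project?"  # placeholder

cached_msg = {'role': 'user',
              'content': [{'type': 'text', 'text': msg_text,
                           'cache_control': {'type': 'ephemeral'}}]}
```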
326 | We can now ask a followup question in this chat:
327 |
328 | ``` python
329 | r = chat('How does it make tool use more ergonomic?')
330 | r
331 | ```
332 |
333 | According to the README, Claudette makes tool use more ergonomic in
334 | several ways:
335 |
336 | 1. It uses docments to make Python function definitions more
337 | user-friendly - each parameter and return value should have a type
338 | and description
339 |
340 | 2. It handles the tool calling process automatically - when Claude
341 | returns a tool_use message, Claudette manages calling the tool with
342 | the provided parameters behind the scenes
343 |
344 | 3. It provides a `toolloop` method that can handle multiple tool calls
345 | in a single step to solve more complex problems
346 |
347 | 4. It allows you to pass a list of tools to the Chat constructor and
348 | optionally force Claude to always use a specific tool via
349 | `tool_choice`
350 |
351 | Here’s a simple example from the README:
352 |
353 | ``` python
354 | def sums(
355 | a:int, # First thing to sum
356 | b:int=1 # Second thing to sum
357 | ) -> int: # The sum of the inputs
358 | "Adds a + b."
359 | print(f"Finding the sum of {a} and {b}")
360 | return a + b
361 |
362 | chat = Chat(model, sp=sp, tools=[sums], tool_choice='sums')
363 | ```
364 |
365 | This makes it much simpler compared to manually handling all the tool
366 | use logic that would be required with the base SDK.
367 |
368 |
369 |
370 | - id: `msg_01EdUvvFBnpPxMtdLRCaSZAU`
371 | - content:
372 | `[{'text': 'According to the README, Claudette makes tool use more ergonomic in several ways:\n\n1. It uses docments to make Python function definitions more user-friendly - each parameter and return value should have a type and description\n\n2. It handles the tool calling process automatically - when Claude returns a tool_use message, Claudette manages calling the tool with the provided parameters behind the scenes\n\n3. It provides a`toolloop`method that can handle multiple tool calls in a single step to solve more complex problems\n\n4. It allows you to pass a list of tools to the Chat constructor and optionally force Claude to always use a specific tool via`tool_choice```` \n\nHere\'s a simple example from the README:\n\n```python\ndef sums(\n a:int, # First thing to sum \n b:int=1 # Second thing to sum\n) -> int: # The sum of the inputs\n "Adds a + b."\n print(f"Finding the sum of {a} and {b}")\n return a + b\n\nchat = Chat(model, sp=sp, tools=[sums], tool_choice=\'sums\')\n```\n\nThis makes it much simpler compared to manually handling all the tool use logic that would be required with the base SDK.', 'type': 'text'}] ````
373 | - model: `claude-3-5-sonnet-20241022`
374 | - role: `assistant`
375 | - stop_reason: `end_turn`
376 | - stop_sequence: `None`
377 | - type: `message`
378 | - usage:
379 | `{'input_tokens': 197, 'output_tokens': 280, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 7205}`
380 |
381 |
382 |
383 | We can see that this only used ~200 regular input tokens – the 7000+
384 | context tokens have been read from cache.
385 |
386 | ``` python
387 | print(r.usage)
388 | ```
389 |
390 | Usage(input_tokens=197, output_tokens=280, cache_creation_input_tokens=0, cache_read_input_tokens=7205)
391 |
392 | ``` python
393 | chat.use
394 | ```
395 |
396 | In: 201; Out: 459; Cache create: 7205; Cache read: 7205; Total: 15070
397 |
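From the numbers above, the `Total` reported by `chat.use` appears to be the sum of all four counters (an observation from this run, not a documented guarantee):

``` python
inp, out, cache_create, cache_read = 201, 459, 7205, 7205  # values reported above
total = inp + out + cache_create + cache_read  # 15070, matching the `Total` shown
```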
398 | ## Tool use
399 |
400 | [Tool use](https://docs.anthropic.com/claude/docs/tool-use) lets Claude
401 | use external tools.
402 |
403 | We use [docments](https://fastcore.fast.ai/docments.html) to make
404 | defining Python functions as ergonomic as possible. Each parameter (and
405 | the return value) should have a type, and a docments comment with the
406 | description of what it is. As an example we’ll write a simple function
407 | that adds numbers together, and will tell us when it’s being called:
408 |
409 | ``` python
410 | def sums(
411 | a:int, # First thing to sum
412 | b:int=1 # Second thing to sum
413 | ) -> int: # The sum of the inputs
414 | "Adds a + b."
415 | print(f"Finding the sum of {a} and {b}")
416 | return a + b
417 | ```
418 |
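Claudette (via fastcore’s docments) turns such a function into the JSON schema that Claude’s tool-use API expects. A rough self-contained sketch of the idea using only the type annotations (illustrative only; the real implementation also extracts the per-parameter comments as descriptions):

``` python
import inspect

def sums(
    a:int, # First thing to sum
    b:int=1 # Second thing to sum
) -> int: # The sum of the inputs
    "Adds a + b."
    return a + b

PY2JSON = {int: 'integer', float: 'number', str: 'string', bool: 'boolean'}

def rough_tool_schema(f):
    "Minimal tool spec from a function's signature (illustrative only)."
    sig = inspect.signature(f)
    props = {n: {'type': PY2JSON[p.annotation]} for n, p in sig.parameters.items()}
    req = [n for n, p in sig.parameters.items() if p.default is inspect.Parameter.empty]
    return {'name': f.__name__, 'description': f.__doc__,
            'input_schema': {'type': 'object', 'properties': props, 'required': req}}

schema = rough_tool_schema(sums)
```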
419 | Sometimes Claude will say something like “according to the `sums` tool
420 | the answer is” – generally we’d rather it just tells the user the
421 | answer, so we can use a system prompt to help with this:
422 |
423 | ``` python
424 | sp = "Never mention what tools you use."
425 | ```
426 |
427 | We’ll get Claude to add up some long numbers:
428 |
429 | ``` python
430 | a,b = 604542,6458932
431 | pr = f"What is {a}+{b}?"
432 | pr
433 | ```
434 |
435 | 'What is 604542+6458932?'
436 |
437 | To use tools, pass a list of them to
438 | [`Chat`](https://claudette.answer.ai/core.html#chat):
439 |
440 | ``` python
441 | chat = Chat(model, sp=sp, tools=[sums])
442 | ```
443 |
444 | To force Claude to always answer using a tool, set `tool_choice` to that
445 | function name. When Claude needs to use a tool, it doesn’t return the
446 | answer, but instead returns a `tool_use` message, which means we have to
447 | call the named tool with the provided parameters.
448 |
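The dispatch step Claudette performs for us looks roughly like this (a simplified sketch; the `tool_use` dict mirrors the shape of the response shown below):

``` python
def sums(a, b=1): return a + b

tools = {'sums': sums}  # name -> callable, built from the `tools` list

# The shape of the data Claude sends back in a tool_use block:
tool_use = {'name': 'sums', 'input': {'a': 604542, 'b': 6458932}}

result = tools[tool_use['name']](**tool_use['input'])
# `result` is then sent back to Claude as a 'tool_result' message
```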
449 | ``` python
450 | r = chat(pr, tool_choice='sums')
451 | r
452 | ```
453 |
454 | Finding the sum of 604542 and 6458932
455 |
456 | ToolUseBlock(id=‘toolu_014ip2xWyEq8RnAccVT4SySt’, input={‘a’: 604542,
457 | ‘b’: 6458932}, name=‘sums’, type=‘tool_use’)
458 |
459 |
460 |
461 | - id: `msg_014xrPyotyiBmFSctkp1LZHk`
462 | - content:
463 | `[{'id': 'toolu_014ip2xWyEq8RnAccVT4SySt', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}]`
464 | - model: `claude-3-5-sonnet-20241022`
465 | - role: `assistant`
466 | - stop_reason: `tool_use`
467 | - stop_sequence: `None`
468 | - type: `message`
469 | - usage:
470 | `{'input_tokens': 442, 'output_tokens': 53, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
471 |
472 |
473 |
474 | Claudette handles all that for us – we just call it again, and it all
475 | happens automatically:
476 |
477 | ``` python
478 | chat()
479 | ```
480 |
481 | The sum of 604542 and 6458932 is 7063474.
482 |
483 |
484 |
485 | - id: `msg_01151puJxG8Fa6k6QSmzwKQA`
486 | - content:
487 | `[{'text': 'The sum of 604542 and 6458932 is 7063474.', 'type': 'text'}]`
488 | - model: `claude-3-5-sonnet-20241022`
489 | - role: `assistant`
490 | - stop_reason: `end_turn`
491 | - stop_sequence: `None`
492 | - type: `message`
493 | - usage:
494 | `{'input_tokens': 524, 'output_tokens': 23, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
495 |
496 |
497 |
498 | You can see how many tokens have been used at any time by checking the
499 | `use` property. Note that (as of May 2024) tool use in Claude uses a
500 | *lot* of tokens, since it automatically adds a large system prompt.
501 |
502 | ``` python
503 | chat.use
504 | ```
505 |
506 | In: 966; Out: 76; Cache create: 0; Cache read: 0; Total: 1042
507 |
508 | We can do everything needed to use tools in a single step, by using
509 | [`Chat.toolloop`](https://claudette.answer.ai/toolloop.html#chat.toolloop).
510 | This can even call multiple tools as needed to solve a problem. For
511 | example, let’s define a tool to handle multiplication:
512 |
513 | ``` python
514 | def mults(
515 | a:int, # First thing to multiply
516 | b:int=1 # Second thing to multiply
517 | ) -> int: # The product of the inputs
518 | "Multiplies a * b."
519 | print(f"Finding the product of {a} and {b}")
520 | return a * b
521 | ```
522 |
523 | Now with a single call we can calculate `(a+b)*2` – by passing
524 | `trace_func=print` we can see each response from Claude in the process:
525 |
526 | ``` python
527 | chat = Chat(model, sp=sp, tools=[sums,mults])
528 | pr = f'Calculate ({a}+{b})*2'
529 | pr
530 | ```
531 |
532 | 'Calculate (604542+6458932)*2'
533 |
534 | ``` python
535 | chat.toolloop(pr, trace_func=print)
536 | ```
537 |
538 | Finding the sum of 604542 and 6458932
539 | [{'role': 'user', 'content': [{'type': 'text', 'text': 'Calculate (604542+6458932)*2'}]}, {'role': 'assistant', 'content': [TextBlock(text="I'll help you break this down into steps:\n\nFirst, let's add those numbers:", type='text'), ToolUseBlock(id='toolu_01St5UKxYUU4DKC96p2PjgcD', input={'a': 604542, 'b': 6458932}, name='sums', type='tool_use')]}, {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01St5UKxYUU4DKC96p2PjgcD', 'content': '7063474'}]}]
540 | Finding the product of 7063474 and 2
541 | [{'role': 'assistant', 'content': [TextBlock(text="Now, let's multiply this result by 2:", type='text'), ToolUseBlock(id='toolu_01FpmRG4ZskKEWN1gFZzx49s', input={'a': 7063474, 'b': 2}, name='mults', type='tool_use')]}, {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01FpmRG4ZskKEWN1gFZzx49s', 'content': '14126948'}]}]
542 | [{'role': 'assistant', 'content': [TextBlock(text='The final result is 14,126,948.', type='text')]}]
543 |
544 | The final result is 14,126,948.
545 |
546 |
547 |
548 | - id: `msg_0162teyBcJHriUzZXMPz4r5d`
549 | - content:
550 | `[{'text': 'The final result is 14,126,948.', 'type': 'text'}]`
551 | - model: `claude-3-5-sonnet-20241022`
552 | - role: `assistant`
553 | - stop_reason: `end_turn`
554 | - stop_sequence: `None`
555 | - type: `message`
556 | - usage:
557 | `{'input_tokens': 741, 'output_tokens': 15, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
558 |
559 |
560 |
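We can check the two tool steps directly:

``` python
a, b = 604542, 6458932
step1 = a + b        # result of the `sums` call: 7063474
step2 = step1 * 2    # result of the `mults` call: 14126948
```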
561 | ## Structured data
562 |
563 | If you just want the immediate result from a single tool, use
564 | [`Client.structured`](https://claudette.answer.ai/core.html#client.structured).
565 |
566 | ``` python
567 | cli = Client(model)
568 | ```
569 |
570 | ``` python
571 | def sums(
572 | a:int, # First thing to sum
573 | b:int=1 # Second thing to sum
574 | ) -> int: # The sum of the inputs
575 | "Adds a + b."
576 | print(f"Finding the sum of {a} and {b}")
577 | return a + b
578 | ```
579 |
580 | ``` python
581 | cli.structured("What is 604542+6458932", sums)
582 | ```
583 |
584 | Finding the sum of 604542 and 6458932
585 |
586 | [7063474]
587 |
588 | This is particularly useful for getting back structured information,
589 | e.g:
590 |
591 | ``` python
592 | class President:
593 | "Information about a president of the United States"
594 | def __init__(self,
595 | first:str, # first name
596 | last:str, # last name
597 | spouse:str, # name of spouse
598 | years_in_office:str, # format: "{start_year}-{end_year}"
599 | birthplace:str, # name of city
600 | birth_year:int # year of birth, `0` if unknown
601 | ):
602 | assert re.match(r'\d{4}-\d{4}', years_in_office), "Invalid format: `years_in_office`"
603 | store_attr()
604 |
605 | __repr__ = basic_repr('first, last, spouse, years_in_office, birthplace, birth_year')
606 | ```
607 |
608 | ``` python
609 | cli.structured("Provide key information about the 3rd President of the United States", President)
610 | ```
611 |
612 | [President(first='Thomas', last='Jefferson', spouse='Martha Wayles', years_in_office='1801-1809', birthplace='Shadwell', birth_year=1743)]
613 |
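Conceptually, `structured` registers the class’s constructor as a tool and then instantiates it with the arguments Claude returns. A simplified sketch in plain Python (no fastcore helpers; the argument values are hypothetical, mirroring the output above):

``` python
import re

class President:
    "Simplified version of the class above."
    def __init__(self, first, last, spouse, years_in_office, birthplace, birth_year):
        assert re.match(r'\d{4}-\d{4}', years_in_office), "Invalid format: `years_in_office`"
        self.first, self.last, self.spouse = first, last, spouse
        self.years_in_office, self.birthplace, self.birth_year = years_in_office, birthplace, birth_year

# Arguments as they might arrive in Claude's tool_use block:
args = {'first': 'Thomas', 'last': 'Jefferson', 'spouse': 'Martha Wayles',
        'years_in_office': '1801-1809', 'birthplace': 'Shadwell', 'birth_year': 1743}
p = President(**args)  # what `structured` returns (wrapped in a list)
```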
614 | ## Images
615 |
616 | Claude can handle image data as well. As everyone knows, when testing
617 | image APIs you have to use a cute puppy.
618 |
619 | ``` python
620 | fn = Path('samples/puppy.jpg')
621 | display.Image(filename=fn, width=200)
622 | ```
623 |
624 |
626 |
627 | We create a [`Chat`](https://claudette.answer.ai/core.html#chat) object
628 | as before:
629 |
630 | ``` python
631 | chat = Chat(model)
632 | ```
633 |
634 | Claudette expects images as a list of bytes, so we read in the file:
635 |
636 | ``` python
637 | img = fn.read_bytes()
638 | ```
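For reference, Anthropic’s API expects images as base64-encoded content blocks; this is roughly what Claudette builds from those bytes (a sketch with stand-in bytes; the real helper also detects the media type):

``` python
import base64

img = b'\xff\xd8\xff\xe0 fake jpeg bytes'  # stand-in for fn.read_bytes()

block = {'type': 'image',
         'source': {'type': 'base64',
                    'media_type': 'image/jpeg',
                    'data': base64.b64encode(img).decode()}}
```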
639 |
640 | Prompts to Claudette can be lists, containing text, images, or both, eg:
641 |
642 | ``` python
643 | chat([img, "In brief, what color flowers are in this image?"])
644 | ```
645 |
646 | In this adorable puppy photo, there are purple/lavender colored flowers
647 | (appears to be asters or similar daisy-like flowers) in the background.
648 |
649 |
650 |
651 | - id: `msg_01LHjGv1WwFvDsWUbyLmTEKT`
652 | - content:
653 | `[{'text': 'In this adorable puppy photo, there are purple/lavender colored flowers (appears to be asters or similar daisy-like flowers) in the background.', 'type': 'text'}]`
654 | - model: `claude-3-5-sonnet-20241022`
655 | - role: `assistant`
656 | - stop_reason: `end_turn`
657 | - stop_sequence: `None`
658 | - type: `message`
659 | - usage:
660 | `{'input_tokens': 110, 'output_tokens': 37, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
661 |
662 |
663 |
664 | The image is included as input tokens.
665 |
666 | ``` python
667 | chat.use
668 | ```
669 |
670 | In: 110; Out: 37; Cache create: 0; Cache read: 0; Total: 147
671 |
672 | Alternatively, Claudette supports creating a multi-stage chat with
673 | separate image and text prompts. For instance, you can pass just the
674 | image as the initial prompt (in which case Claude will make some general
675 | comments about what it sees), and then follow up with questions in
676 | additional prompts:
677 |
678 | ``` python
679 | chat = Chat(model)
680 | chat(img)
681 | ```
682 |
683 | What an adorable Cavalier King Charles Spaniel puppy! The photo captures
684 | the classic brown and white coloring of the breed, with those soulful
685 | dark eyes that are so characteristic. The puppy is lying in the grass,
686 | and there are lovely purple asters blooming in the background, creating
687 | a beautiful natural setting. The combination of the puppy’s sweet
688 | expression and the delicate flowers makes for a charming composition.
689 | Cavalier King Charles Spaniels are known for their gentle, affectionate
690 | nature, and this little one certainly seems to embody those traits with
691 | its endearing look.
692 |
693 |
694 |
695 | - id: `msg_01Ciyymq44uwp2iYwRZdKWNN`
696 | - content:
697 | `[{'text': "What an adorable Cavalier King Charles Spaniel puppy! The photo captures the classic brown and white coloring of the breed, with those soulful dark eyes that are so characteristic. The puppy is lying in the grass, and there are lovely purple asters blooming in the background, creating a beautiful natural setting. The combination of the puppy's sweet expression and the delicate flowers makes for a charming composition. Cavalier King Charles Spaniels are known for their gentle, affectionate nature, and this little one certainly seems to embody those traits with its endearing look.", 'type': 'text'}]`
698 | - model: `claude-3-5-sonnet-20241022`
699 | - role: `assistant`
700 | - stop_reason: `end_turn`
701 | - stop_sequence: `None`
702 | - type: `message`
703 | - usage:
704 | `{'input_tokens': 98, 'output_tokens': 130, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
705 |
706 |
707 |
708 | ``` python
709 | chat('What direction is the puppy facing?')
710 | ```
711 |
712 | The puppy is facing towards the left side of the image. Its head is
713 | positioned so we can see its right side profile, though it appears to be
714 | looking slightly towards the camera, giving us a good view of its
715 | distinctive brown and white facial markings and one of its dark eyes.
716 | The puppy is lying down with its white chest/front visible against the
717 | green grass.
718 |
719 |
720 |
721 | - id: `msg_01AeR9eWjbxa788YF97iErtN`
722 | - content:
723 | `[{'text': 'The puppy is facing towards the left side of the image. Its head is positioned so we can see its right side profile, though it appears to be looking slightly towards the camera, giving us a good view of its distinctive brown and white facial markings and one of its dark eyes. The puppy is lying down with its white chest/front visible against the green grass.', 'type': 'text'}]`
724 | - model: `claude-3-5-sonnet-20241022`
725 | - role: `assistant`
726 | - stop_reason: `end_turn`
727 | - stop_sequence: `None`
728 | - type: `message`
729 | - usage:
730 | `{'input_tokens': 239, 'output_tokens': 79, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
731 |
732 |
733 |
734 | ``` python
735 | chat('What color is it?')
736 | ```
737 |
738 | The puppy has a classic Cavalier King Charles Spaniel coat with a rich
739 | chestnut brown (sometimes called Blenheim) coloring on its ears and
740 | patches on its face, combined with a bright white base color. The white
741 | is particularly prominent on its face (creating a distinctive blaze down
742 | the center) and chest area. This brown and white combination is one of
743 | the most recognizable color patterns for the breed.
744 |
745 |
746 |
747 | - id: `msg_01R91AqXG7pLc8hK24F5mc7x`
748 | - content:
749 | `[{'text': 'The puppy has a classic Cavalier King Charles Spaniel coat with a rich chestnut brown (sometimes called Blenheim) coloring on its ears and patches on its face, combined with a bright white base color. The white is particularly prominent on its face (creating a distinctive blaze down the center) and chest area. This brown and white combination is one of the most recognizable color patterns for the breed.', 'type': 'text'}]`
750 | - model: `claude-3-5-sonnet-20241022`
751 | - role: `assistant`
752 | - stop_reason: `end_turn`
753 | - stop_sequence: `None`
754 | - type: `message`
755 | - usage:
756 | `{'input_tokens': 326, 'output_tokens': 92, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
757 |
758 |
759 |
760 | Note that the image is passed in again for every input in the dialog, so
761 | that number of input tokens increases quickly with this kind of chat.
762 | (For large images, using prompt caching might be a good idea.)
763 |
764 | ``` python
765 | chat.use
766 | ```
767 |
768 | In: 663; Out: 301; Cache create: 0; Cache read: 0; Total: 964
769 |
770 | ## Other model providers
771 |
772 | You can also use 3rd party providers of Anthropic models, as shown here.
773 |
774 | ### Amazon Bedrock
775 |
776 | These are the models available through Bedrock:
777 |
778 | ``` python
779 | models_aws
780 | ```
781 |
782 | ['anthropic.claude-3-opus-20240229-v1:0',
783 | 'anthropic.claude-3-5-sonnet-20241022-v2:0',
784 | 'anthropic.claude-3-sonnet-20240229-v1:0',
785 | 'anthropic.claude-3-haiku-20240307-v1:0']
786 |
787 | To use them, call `AnthropicBedrock` with your access details, and pass
788 | that to [`Client`](https://claudette.answer.ai/core.html#client):
789 |
790 | ``` python
791 | from anthropic import AnthropicBedrock
792 | ```
793 |
794 | ``` python
795 | ab = AnthropicBedrock(
796 | aws_access_key=os.environ['AWS_ACCESS_KEY'],
797 | aws_secret_key=os.environ['AWS_SECRET_KEY'],
798 | )
799 | client = Client(models_aws[-1], ab)
800 | ```
801 |
802 | Now create your [`Chat`](https://claudette.answer.ai/core.html#chat)
803 | object passing this client to the `cli` parameter – and from then on,
804 | everything is identical to the previous examples.
805 |
806 | ``` python
807 | chat = Chat(cli=client)
808 | chat("I'm Jeremy")
809 | ```
810 |
811 | It’s nice to meet you, Jeremy! I’m Claude, an AI assistant created by
812 | Anthropic. How can I help you today?
813 |
814 |
815 |
816 | - id: `msg_bdrk_01V3B5RF2Pyzmh3NeR8xMMpq`
817 | - content:
818 | `[{'text': "It's nice to meet you, Jeremy! I'm Claude, an AI assistant created by Anthropic. How can I help you today?", 'type': 'text'}]`
819 | - model: `claude-3-haiku-20240307`
820 | - role: `assistant`
821 | - stop_reason: `end_turn`
822 | - stop_sequence: `None`
823 | - type: `message`
824 | - usage: `{'input_tokens': 10, 'output_tokens': 32}`
825 |
826 |
827 |
828 | ### Google Vertex
829 |
830 | These are the models available through Vertex:
831 |
832 | ``` python
833 | models_goog
834 | ```
835 |
836 | ['claude-3-opus@20240229',
837 | 'claude-3-5-sonnet-v2@20241022',
838 | 'claude-3-sonnet@20240229',
839 | 'claude-3-haiku@20240307']
840 |
841 | To use them, call `AnthropicVertex` with your access details, and pass
842 | that to [`Client`](https://claudette.answer.ai/core.html#client):
843 |
844 | ``` python
845 | from anthropic import AnthropicVertex
846 | import google.auth
847 | ```
848 |
849 | ``` python
850 | project_id = google.auth.default()[1]
851 | gv = AnthropicVertex(project_id=project_id, region="us-east5")
852 | client = Client(models_goog[-1], gv)
853 | ```
854 |
855 | ``` python
856 | chat = Chat(cli=client)
857 | chat("I'm Jeremy")
858 | ```
859 |
860 | ## Extensions
861 |
862 | - [Pydantic Structured
863 |   Output](https://github.com/tom-pollak/claudette-pydantic)
864 |
--------------------------------------------------------------------------------
/llms-ctx.txt:
--------------------------------------------------------------------------------
1 | Things to remember when using Claudette:
2 |
3 | - You must set the `ANTHROPIC_API_KEY` environment variable with your Anthropic API key
4 | - Claudette is designed to work with Claude 3 models (Opus, Sonnet, Haiku) and supports multiple providers (Anthropic direct, AWS Bedrock, Google Vertex)
5 | - The library provides both synchronous and asynchronous interfaces
6 | - Use `Chat()` for maintaining conversation state and handling tool interactions
7 | - When using tools, the library automatically handles the request/response loop
8 | - Image support is built in but only available on compatible models (not Haiku)
# claudette
9 |
10 |
11 |
12 | > **NB**: If you are reading this in GitHub’s readme, we recommend you
13 | > instead read the much more nicely formatted [documentation
14 | > format](https://claudette.answer.ai/) of this tutorial.
15 |
16 | *Claudette* is a wrapper for Anthropic’s [Python
17 | SDK](https://github.com/anthropics/anthropic-sdk-python).
18 |
19 | The SDK works well, but it is quite low level – it leaves the developer
20 | to do a lot of stuff manually. That’s a lot of extra work and
21 | boilerplate! Claudette automates pretty much everything that can be
22 | automated, whilst providing full control. Amongst the features provided:
23 |
24 | - A [`Chat`](https://claudette.answer.ai/core.html#chat) class that
25 | creates stateful dialogs
26 | - Support for *prefill*, which tells Claude what to use as the first few
27 | words of its response
28 | - Convenient image support
29 | - Simple and convenient support for Claude’s new Tool Use API.
30 |
31 | You’ll need to set the `ANTHROPIC_API_KEY` environment variable to the
32 | key provided to you by Anthropic in order to use this library.
33 |
34 | Note that this library is the first ever “literate nbdev” project. That
35 | means that the actual source code for the library is a rendered Jupyter
36 | Notebook which includes callout notes and tips, HTML tables and images,
37 | detailed explanations, and teaches *how* and *why* the code is written
38 | the way it is. Even if you’ve never used the Anthropic Python SDK or
39 | Claude API before, you should be able to read the source code. Click
40 | [Claudette’s Source](https://claudette.answer.ai/core.html) to read it,
41 | or clone the git repo and execute the notebook yourself to see every
42 | step of the creation process in action. The tutorial below includes
43 | links to API details which will take you to relevant parts of the
44 | source. The reason this project is a new kind of literal program is
45 | because we take seriously Knuth’s call to action, that we have a “*moral
46 | commitment*” to never write an “*illiterate program*” – and so we have a
47 | commitment to making literate programming an easy and pleasant
48 | experience. (For more on this, see [this
49 | talk](https://www.youtube.com/watch?v=rX1yGxJijsI) from Hamel Husain.)
50 |
51 | > “*Let us change our traditional attitude to the construction of
52 | > programs: Instead of imagining that our main task is to instruct a
53 | > **computer** what to do, let us concentrate rather on explaining to
54 | > **human beings** what we want a computer to do.*” Donald E. Knuth,
55 | > [Literate
56 | > Programming](https://www.cs.tufts.edu/~nr/cs257/archive/literate-programming/01-knuth-lp.pdf)
57 | > (1984)
58 |
59 | ## Install
60 |
61 | ``` sh
62 | pip install claudette
63 | ```
64 |
65 | ## Getting started
66 |
67 | Anthropic’s Python SDK will automatically be installed with Claudette,
68 | if you don’t already have it.
69 |
70 | ``` python
71 | import os
72 | # os.environ['ANTHROPIC_LOG'] = 'debug'
73 | ```
74 |
75 | To print every HTTP request and response in full, uncomment the above
76 | line.
77 |
78 | ``` python
79 | from claudette import *
80 | ```
81 |
82 | Claudette only exports the symbols that are needed to use the library,
83 | so you can use `import *` to import them. Alternatively, just use:
84 |
85 | ``` python
86 | import claudette
87 | ```
88 |
89 | …and then add the prefix `claudette.` to any usages of the module.
90 |
91 | Claudette provides `models`, which is a list of models currently
92 | available from the SDK.
93 |
94 | ``` python
95 | models
96 | ```
97 |
98 | ['claude-3-opus-20240229',
99 | 'claude-3-5-sonnet-20241022',
100 | 'claude-3-haiku-20240307']
101 |
102 | For these examples, we’ll use Sonnet 3.5, since it’s awesome!
103 |
104 | ``` python
105 | model = models[1]
106 | ```
107 |
108 | ## Chat
109 |
110 | The main interface to Claudette is the
111 | [`Chat`](https://claudette.answer.ai/core.html#chat) class, which
112 | provides a stateful interface to Claude:
113 |
114 | ``` python
115 | chat = Chat(model, sp="""You are a helpful and concise assistant.""")
116 | chat("I'm Jeremy")
117 | ```
118 |
119 | Hello Jeremy, nice to meet you.
120 |
121 |
122 |
123 | - id: `msg_015oK9jEcra3TEKHUGYULjWB`
124 | - content:
125 | `[{'text': 'Hello Jeremy, nice to meet you.', 'type': 'text'}]`
126 | - model: `claude-3-5-sonnet-20241022`
127 | - role: `assistant`
128 | - stop_reason: `end_turn`
129 | - stop_sequence: `None`
130 | - type: `message`
131 | - usage:
132 | `{'input_tokens': 19, 'output_tokens': 11, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
133 |
134 |
135 |
136 | ``` python
137 | r = chat("What's my name?")
138 | r
139 | ```
140 |
141 | Your name is Jeremy.
142 |
143 |
144 |
145 | - id: `msg_01Si8sTFJe8d8vq7enanbAwj`
146 | - content: `[{'text': 'Your name is Jeremy.', 'type': 'text'}]`
147 | - model: `claude-3-5-sonnet-20241022`
148 | - role: `assistant`
149 | - stop_reason: `end_turn`
150 | - stop_sequence: `None`
151 | - type: `message`
152 | - usage:
153 | `{'input_tokens': 38, 'output_tokens': 8, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
154 |
155 |
156 |
157 | ``` python
158 | r = chat("What's my name?")
159 | r
160 | ```
161 |
162 | Your name is Jeremy.
163 |
164 |
165 |
166 | - id: `msg_01BHWRoAX8eBsoLn2bzpBkvx`
167 | - content: `[{'text': 'Your name is Jeremy.', 'type': 'text'}]`
168 | - model: `claude-3-5-sonnet-20241022`
169 | - role: `assistant`
170 | - stop_reason: `end_turn`
171 | - stop_sequence: `None`
172 | - type: `message`
173 | - usage:
174 | `{'input_tokens': 54, 'output_tokens': 8, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
175 |
176 |
177 |
178 | As you see above, displaying the results of a call in a notebook shows
179 | just the message contents, with the other details hidden behind a
180 | collapsible section. Alternatively you can `print` the details:
181 |
182 | ``` python
183 | print(r)
184 | ```
185 |
186 | Message(id='msg_01BHWRoAX8eBsoLn2bzpBkvx', content=[TextBlock(text='Your name is Jeremy.', type='text')], model='claude-3-5-sonnet-20241022', role='assistant', stop_reason='end_turn', stop_sequence=None, type='message', usage=In: 54; Out: 8; Cache create: 0; Cache read: 0; Total: 62)
187 |
188 | Claude supports adding an extra `assistant` message at the end, which
189 | contains the *prefill* – i.e. the text we want Claude to assume the
190 | response starts with. Let’s try it out:
191 |
192 | ``` python
193 | chat("Concisely, what is the meaning of life?",
194 | prefill='According to Douglas Adams,')
195 | ```
196 |
197 | According to Douglas Adams,42. Philosophically, it’s to find personal
198 | meaning through relationships, purpose, and experiences.
199 |
200 |
201 |
202 | - id: `msg_01R9RvMdFwea9iRX5uYSSHG7`
203 | - content:
204 | `[{'text': "According to Douglas Adams,42. Philosophically, it's to find personal meaning through relationships, purpose, and experiences.", 'type': 'text'}]`
205 | - model: `claude-3-5-sonnet-20241022`
206 | - role: `assistant`
207 | - stop_reason: `end_turn`
208 | - stop_sequence: `None`
209 | - type: `message`
210 | - usage:
211 | `{'input_tokens': 82, 'output_tokens': 23, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
212 |
213 |
214 |
215 | You can add `stream=True` to stream the results as soon as they arrive
216 | (although you will only see the gradual generation if you execute the
217 | notebook yourself, of course!)
218 |
219 | ``` python
220 | for o in chat("Concisely, what book was that in?", prefill='It was in', stream=True):
221 | print(o, end='')
222 | ```
223 |
224 | It was in "The Hitchhiker's Guide to the Galaxy" by Douglas Adams.
225 |
226 | ### Async
227 |
228 | Alternatively, you can use
229 | [`AsyncChat`](https://claudette.answer.ai/async.html#asyncchat) (or
230 | [`AsyncClient`](https://claudette.answer.ai/async.html#asyncclient)) for
231 | the async versions, e.g:
232 |
233 | ``` python
234 | chat = AsyncChat(model)
235 | await chat("I'm Jeremy")
236 | ```
237 |
238 | Hi Jeremy! Nice to meet you. I’m Claude, an AI assistant created by
239 | Anthropic. How can I help you today?
240 |
241 |
242 |
243 | - id: `msg_016Q8cdc3sPWBS8eXcNj841L`
244 | - content:
245 | `[{'text': "Hi Jeremy! Nice to meet you. I'm Claude, an AI assistant created by Anthropic. How can I help you today?", 'type': 'text'}]`
246 | - model: `claude-3-5-sonnet-20241022`
247 | - role: `assistant`
248 | - stop_reason: `end_turn`
249 | - stop_sequence: `None`
250 | - type: `message`
251 | - usage:
252 | `{'input_tokens': 10, 'output_tokens': 31, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
253 |
254 |
255 |
256 | Remember to use `async for` when streaming in this case:
257 |
258 | ``` python
259 | async for o in await chat("Concisely, what is the meaning of life?",
260 | prefill='According to Douglas Adams,', stream=True):
261 | print(o, end='')
262 | ```
263 |
264 | According to Douglas Adams, it's 42. But in my view, there's no single universal meaning - each person must find their own purpose through relationships, personal growth, contribution to others, and pursuit of what they find meaningful.
265 |
266 | ## Prompt caching
267 |
268 | If you use `mk_msg(msg, cache=True)`, then the message is cached using
269 | Claude’s [prompt
270 | caching](https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching)
271 | feature. For instance, here we use caching when asking about Claudette’s
272 | readme file:
273 |
274 | ``` python
275 | chat = Chat(model, sp="""You are a helpful and concise assistant.""")
276 | ```
277 |
278 | ``` python
279 | nbtxt = Path('README.txt').read_text()
280 | msg = f'''
281 | {nbtxt}
282 |
283 | In brief, what is the purpose of this project based on the readme?'''
284 | r = chat(mk_msg(msg, cache=True))
285 | r
286 | ```
287 |
288 | Claudette is a high-level wrapper for Anthropic’s Python SDK that
289 | automates common tasks and provides additional functionality. Its main
290 | features include:
291 |
292 | 1. A Chat class for stateful dialogs
293 | 2. Support for prefill (controlling Claude’s initial response words)
294 | 3. Convenient image handling
295 | 4. Simple tool use API integration
296 | 5. Support for multiple model providers (Anthropic, AWS Bedrock, Google
297 | Vertex)
298 |
299 | The project is notable for being the first “literate nbdev” project,
300 | meaning its source code is written as a detailed, readable Jupyter
301 | Notebook that includes explanations, examples, and teaching material
302 | alongside the functional code.
303 |
304 | The goal is to simplify working with Claude’s API while maintaining full
305 | control, reducing boilerplate code and manual work that would otherwise
306 | be needed with the base SDK.
307 |
308 |
309 |
310 | - id: `msg_014rVQnYoZXZuyWUCMELG1QW`
311 | - content:
312 | `[{'text': 'Claudette is a high-level wrapper for Anthropic\'s Python SDK that automates common tasks and provides additional functionality. Its main features include:\n\n1. A Chat class for stateful dialogs\n2. Support for prefill (controlling Claude\'s initial response words)\n3. Convenient image handling\n4. Simple tool use API integration\n5. Support for multiple model providers (Anthropic, AWS Bedrock, Google Vertex)\n\nThe project is notable for being the first "literate nbdev" project, meaning its source code is written as a detailed, readable Jupyter Notebook that includes explanations, examples, and teaching material alongside the functional code.\n\nThe goal is to simplify working with Claude\'s API while maintaining full control, reducing boilerplate code and manual work that would otherwise be needed with the base SDK.', 'type': 'text'}]`
313 | - model: `claude-3-5-sonnet-20241022`
314 | - role: `assistant`
315 | - stop_reason: `end_turn`
316 | - stop_sequence: `None`
317 | - type: `message`
318 | - usage:
319 | `{'input_tokens': 4, 'output_tokens': 179, 'cache_creation_input_tokens': 7205, 'cache_read_input_tokens': 0}`
320 |
321 |
322 |
323 | The response shows that a cache has been created using these input
324 | tokens:
325 |
326 | ``` python
327 | print(r.usage)
328 | ```
329 |
330 | Usage(input_tokens=4, output_tokens=179, cache_creation_input_tokens=7205, cache_read_input_tokens=0)
331 |
332 | We can now ask a followup question in this chat:
333 |
334 | ``` python
335 | r = chat('How does it make tool use more ergonomic?')
336 | r
337 | ```
338 |
339 | According to the README, Claudette makes tool use more ergonomic in
340 | several ways:
341 |
342 | 1. It uses docments to make Python function definitions more
343 | user-friendly - each parameter and return value should have a type
344 | and description
345 |
346 | 2. It handles the tool calling process automatically - when Claude
347 | returns a tool_use message, Claudette manages calling the tool with
348 | the provided parameters behind the scenes
349 |
350 | 3. It provides a `toolloop` method that can handle multiple tool calls
351 | in a single step to solve more complex problems
352 |
353 | 4. It allows you to pass a list of tools to the Chat constructor and
354 | optionally force Claude to always use a specific tool via
355 | `tool_choice`
356 |
357 | Here’s a simple example from the README:
358 |
359 | ``` python
360 | def sums(
361 | a:int, # First thing to sum
362 | b:int=1 # Second thing to sum
363 | ) -> int: # The sum of the inputs
364 | "Adds a + b."
365 | print(f"Finding the sum of {a} and {b}")
366 | return a + b
367 |
368 | chat = Chat(model, sp=sp, tools=[sums], tool_choice='sums')
369 | ```
370 |
371 | This makes it much simpler compared to manually handling all the tool
372 | use logic that would be required with the base SDK.
373 |
374 |
375 |
376 | - id: `msg_01EdUvvFBnpPxMtdLRCaSZAU`
377 | - content:
378 |   `[{'text': 'According to the README, Claudette makes tool use more ergonomic in several ways:\n\n1. It uses docments to make Python function definitions more user-friendly - each parameter and return value should have a type and description\n\n2. It handles the tool calling process automatically - when Claude returns a tool_use message, Claudette manages calling the tool with the provided parameters behind the scenes\n\n3. It provides a `toolloop` method that can handle multiple tool calls in a single step to solve more complex problems\n\n4. It allows you to pass a list of tools to the Chat constructor and optionally force Claude to always use a specific tool via `tool_choice`\n\nHere\'s a simple example from the README:\n\n```python\ndef sums(\n a:int, # First thing to sum \n b:int=1 # Second thing to sum\n) -> int: # The sum of the inputs\n "Adds a + b."\n print(f"Finding the sum of {a} and {b}")\n return a + b\n\nchat = Chat(model, sp=sp, tools=[sums], tool_choice=\'sums\')\n```\n\nThis makes it much simpler compared to manually handling all the tool use logic that would be required with the base SDK.', 'type': 'text'}]`
379 | - model: `claude-3-5-sonnet-20241022`
380 | - role: `assistant`
381 | - stop_reason: `end_turn`
382 | - stop_sequence: `None`
383 | - type: `message`
384 | - usage:
385 | `{'input_tokens': 197, 'output_tokens': 280, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 7205}`
386 |
387 |
388 |
389 | We can see that this only used ~200 regular input tokens – the 7000+
390 | context tokens have been read from cache.
391 |
392 | ``` python
393 | print(r.usage)
394 | ```
395 |
396 | Usage(input_tokens=197, output_tokens=280, cache_creation_input_tokens=0, cache_read_input_tokens=7205)
397 |
398 | ``` python
399 | chat.use
400 | ```
401 |
402 | In: 201; Out: 459; Cache create: 7205; Cache read: 7205; Total: 15070
403 |
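The `Total` figure reported by `use` is just the sum of the four counters, which we can sanity-check by hand:

``` python
# Totals reported by chat.use above: regular input, output, cache writes, cache reads.
inp, out, cache_create, cache_read = 201, 459, 7205, 7205

# The 'Total' figure is simply the sum of all four counters.
assert inp + out + cache_create + cache_read == 15070
```
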
404 | ## Tool use
405 |
406 | [Tool use](https://docs.anthropic.com/claude/docs/tool-use) lets Claude
407 | use external tools.
408 |
409 | We use [docments](https://fastcore.fast.ai/docments.html) to make
410 | defining Python functions as ergonomic as possible. Each parameter (and
411 | the return value) should have a type, and a docments comment with the
412 | description of what it is. As an example we’ll write a simple function
413 | that adds numbers together, and will tell us when it’s being called:
414 |
415 | ``` python
416 | def sums(
417 | a:int, # First thing to sum
418 | b:int=1 # Second thing to sum
419 | ) -> int: # The sum of the inputs
420 | "Adds a + b."
421 | print(f"Finding the sum of {a} and {b}")
422 | return a + b
423 | ```
424 |
425 | Sometimes Claude will say something like “according to the `sums` tool
426 | the answer is” – generally we’d rather it just tells the user the
427 | answer, so we can use a system prompt to help with this:
428 |
429 | ``` python
430 | sp = "Never mention what tools you use."
431 | ```
432 |
433 | We’ll get Claude to add up some long numbers:
434 |
435 | ``` python
436 | a,b = 604542,6458932
437 | pr = f"What is {a}+{b}?"
438 | pr
439 | ```
440 |
441 | 'What is 604542+6458932?'
442 |
443 | To use tools, pass a list of them to
444 | [`Chat`](https://claudette.answer.ai/core.html#chat):
445 |
446 | ``` python
447 | chat = Chat(model, sp=sp, tools=[sums])
448 | ```
449 |
450 | To force Claude to always answer using a tool, set `tool_choice` to that
451 | function name. When Claude needs to use a tool, it doesn’t return the
452 | answer, but instead returns a `tool_use` message, which means we have to
453 | call the named tool with the provided parameters.
454 |
455 | ``` python
456 | r = chat(pr, tool_choice='sums')
457 | r
458 | ```
459 |
460 | Finding the sum of 604542 and 6458932
461 |
462 | ToolUseBlock(id=‘toolu_014ip2xWyEq8RnAccVT4SySt’, input={‘a’: 604542,
463 | ‘b’: 6458932}, name=‘sums’, type=‘tool_use’)
464 |
465 |
466 |
467 | - id: `msg_014xrPyotyiBmFSctkp1LZHk`
468 | - content:
469 | `[{'id': 'toolu_014ip2xWyEq8RnAccVT4SySt', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}]`
470 | - model: `claude-3-5-sonnet-20241022`
471 | - role: `assistant`
472 | - stop_reason: `tool_use`
473 | - stop_sequence: `None`
474 | - type: `message`
475 | - usage:
476 | `{'input_tokens': 442, 'output_tokens': 53, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
477 |
478 |
479 |
480 | Claudette handles all that for us – we just call it again, and it all
481 | happens automatically:
482 |
483 | ``` python
484 | chat()
485 | ```
486 |
487 | The sum of 604542 and 6458932 is 7063474.
488 |
489 |
490 |
491 | - id: `msg_01151puJxG8Fa6k6QSmzwKQA`
492 | - content:
493 | `[{'text': 'The sum of 604542 and 6458932 is 7063474.', 'type': 'text'}]`
494 | - model: `claude-3-5-sonnet-20241022`
495 | - role: `assistant`
496 | - stop_reason: `end_turn`
497 | - stop_sequence: `None`
498 | - type: `message`
499 | - usage:
500 | `{'input_tokens': 524, 'output_tokens': 23, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
501 |
502 |
503 |
504 | You can see how many tokens have been used at any time by checking the
505 | `use` property. Note that (as of May 2024) tool use in Claude uses a
506 | *lot* of tokens, since it automatically adds a large system prompt.
507 |
508 | ``` python
509 | chat.use
510 | ```
511 |
512 | In: 966; Out: 76; Cache create: 0; Cache read: 0; Total: 1042
513 |
514 | We can do everything needed to use tools in a single step, by using
515 | [`Chat.toolloop`](https://claudette.answer.ai/toolloop.html#chat.toolloop).
516 | This can even call multiple tools as needed to solve a problem. For
517 | example, let’s define a tool to handle multiplication:
518 |
519 | ``` python
520 | def mults(
521 | a:int, # First thing to multiply
522 | b:int=1 # Second thing to multiply
523 | ) -> int: # The product of the inputs
524 | "Multiplies a * b."
525 | print(f"Finding the product of {a} and {b}")
526 | return a * b
527 | ```
528 |
529 | Now with a single call we can calculate `(a+b)*2` – by passing
530 | `trace_func=print` we can see each response from Claude in the process:
531 |
532 | ``` python
533 | chat = Chat(model, sp=sp, tools=[sums,mults])
534 | pr = f'Calculate ({a}+{b})*2'
535 | pr
536 | ```
537 |
538 | 'Calculate (604542+6458932)*2'
539 |
540 | ``` python
541 | chat.toolloop(pr, trace_func=print)
542 | ```
543 |
544 | Finding the sum of 604542 and 6458932
545 | [{'role': 'user', 'content': [{'type': 'text', 'text': 'Calculate (604542+6458932)*2'}]}, {'role': 'assistant', 'content': [TextBlock(text="I'll help you break this down into steps:\n\nFirst, let's add those numbers:", type='text'), ToolUseBlock(id='toolu_01St5UKxYUU4DKC96p2PjgcD', input={'a': 604542, 'b': 6458932}, name='sums', type='tool_use')]}, {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01St5UKxYUU4DKC96p2PjgcD', 'content': '7063474'}]}]
546 | Finding the product of 7063474 and 2
547 | [{'role': 'assistant', 'content': [TextBlock(text="Now, let's multiply this result by 2:", type='text'), ToolUseBlock(id='toolu_01FpmRG4ZskKEWN1gFZzx49s', input={'a': 7063474, 'b': 2}, name='mults', type='tool_use')]}, {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01FpmRG4ZskKEWN1gFZzx49s', 'content': '14126948'}]}]
548 | [{'role': 'assistant', 'content': [TextBlock(text='The final result is 14,126,948.', type='text')]}]
549 |
550 | The final result is 14,126,948.
551 |
552 |
553 |
554 | - id: `msg_0162teyBcJHriUzZXMPz4r5d`
555 | - content:
556 | `[{'text': 'The final result is 14,126,948.', 'type': 'text'}]`
557 | - model: `claude-3-5-sonnet-20241022`
558 | - role: `assistant`
559 | - stop_reason: `end_turn`
560 | - stop_sequence: `None`
561 | - type: `message`
562 | - usage:
563 | `{'input_tokens': 741, 'output_tokens': 15, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
564 |
565 |
566 |
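We can verify the toolloop's arithmetic directly in plain Python:

``` python
a, b = 604542, 6458932

# Reproduce the two tool calls: sum first, then double the result.
result = (a + b) * 2
assert result == 14126948
```
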
567 | ## Structured data
568 |
569 | If you just want the immediate result from a single tool, use
570 | [`Client.structured`](https://claudette.answer.ai/core.html#client.structured).
571 |
572 | ``` python
573 | cli = Client(model)
574 | ```
575 |
576 | ``` python
577 | def sums(
578 | a:int, # First thing to sum
579 | b:int=1 # Second thing to sum
580 | ) -> int: # The sum of the inputs
581 | "Adds a + b."
582 | print(f"Finding the sum of {a} and {b}")
583 | return a + b
584 | ```
585 |
586 | ``` python
587 | cli.structured("What is 604542+6458932", sums)
588 | ```
589 |
590 | Finding the sum of 604542 and 6458932
591 |
592 | [7063474]
593 |
594 | This is particularly useful for getting back structured information,
595 | e.g:
596 |
597 | ``` python
598 | class President:
599 | "Information about a president of the United States"
600 | def __init__(self,
601 | first:str, # first name
602 | last:str, # last name
603 | spouse:str, # name of spouse
604 | years_in_office:str, # format: "{start_year}-{end_year}"
605 | birthplace:str, # name of city
606 | birth_year:int # year of birth, `0` if unknown
607 | ):
608 | assert re.match(r'\d{4}-\d{4}', years_in_office), "Invalid format: `years_in_office`"
609 | store_attr()
610 |
611 | __repr__ = basic_repr('first, last, spouse, years_in_office, birthplace, birth_year')
612 | ```
613 |
614 | ``` python
615 | cli.structured("Provide key information about the 3rd President of the United States", President)
616 | ```
617 |
618 | [President(first='Thomas', last='Jefferson', spouse='Martha Wayles', years_in_office='1801-1809', birthplace='Shadwell', birth_year=1743)]
619 |
620 | ## Images
621 |
622 | Claude can handle image data as well. As everyone knows, when testing
623 | image APIs you have to use a cute puppy.
624 |
625 | ``` python
626 | fn = Path('samples/puppy.jpg')
627 | display.Image(filename=fn, width=200)
628 | ```
629 |
630 |
632 |
633 | We create a [`Chat`](https://claudette.answer.ai/core.html#chat) object
634 | as before:
635 |
636 | ``` python
637 | chat = Chat(model)
638 | ```
639 |
640 | Claudette expects images as a list of bytes, so we read in the file:
641 |
642 | ``` python
643 | img = fn.read_bytes()
644 | ```
645 |
646 | Prompts to Claudette can be lists, containing text, images, or both, eg:
647 |
648 | ``` python
649 | chat([img, "In brief, what color flowers are in this image?"])
650 | ```
651 |
652 | In this adorable puppy photo, there are purple/lavender colored flowers
653 | (appears to be asters or similar daisy-like flowers) in the background.
654 |
655 |
656 |
657 | - id: `msg_01LHjGv1WwFvDsWUbyLmTEKT`
658 | - content:
659 | `[{'text': 'In this adorable puppy photo, there are purple/lavender colored flowers (appears to be asters or similar daisy-like flowers) in the background.', 'type': 'text'}]`
660 | - model: `claude-3-5-sonnet-20241022`
661 | - role: `assistant`
662 | - stop_reason: `end_turn`
663 | - stop_sequence: `None`
664 | - type: `message`
665 | - usage:
666 | `{'input_tokens': 110, 'output_tokens': 37, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
667 |
668 |
669 |
670 | The image is included as input tokens.
671 |
672 | ``` python
673 | chat.use
674 | ```
675 |
676 | In: 110; Out: 37; Cache create: 0; Cache read: 0; Total: 147
677 |
678 | Alternatively, Claudette supports creating a multi-stage chat with
679 | separate image and text prompts. For instance, you can pass just the
680 | image as the initial prompt (in which case Claude will make some general
681 | comments about what it sees), and then follow up with questions in
682 | additional prompts:
683 |
684 | ``` python
685 | chat = Chat(model)
686 | chat(img)
687 | ```
688 |
689 | What an adorable Cavalier King Charles Spaniel puppy! The photo captures
690 | the classic brown and white coloring of the breed, with those soulful
691 | dark eyes that are so characteristic. The puppy is lying in the grass,
692 | and there are lovely purple asters blooming in the background, creating
693 | a beautiful natural setting. The combination of the puppy’s sweet
694 | expression and the delicate flowers makes for a charming composition.
695 | Cavalier King Charles Spaniels are known for their gentle, affectionate
696 | nature, and this little one certainly seems to embody those traits with
697 | its endearing look.
698 |
699 |
700 |
701 | - id: `msg_01Ciyymq44uwp2iYwRZdKWNN`
702 | - content:
703 | `[{'text': "What an adorable Cavalier King Charles Spaniel puppy! The photo captures the classic brown and white coloring of the breed, with those soulful dark eyes that are so characteristic. The puppy is lying in the grass, and there are lovely purple asters blooming in the background, creating a beautiful natural setting. The combination of the puppy's sweet expression and the delicate flowers makes for a charming composition. Cavalier King Charles Spaniels are known for their gentle, affectionate nature, and this little one certainly seems to embody those traits with its endearing look.", 'type': 'text'}]`
704 | - model: `claude-3-5-sonnet-20241022`
705 | - role: `assistant`
706 | - stop_reason: `end_turn`
707 | - stop_sequence: `None`
708 | - type: `message`
709 | - usage:
710 | `{'input_tokens': 98, 'output_tokens': 130, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
711 |
712 |
713 |
714 | ``` python
715 | chat('What direction is the puppy facing?')
716 | ```
717 |
718 | The puppy is facing towards the left side of the image. Its head is
719 | positioned so we can see its right side profile, though it appears to be
720 | looking slightly towards the camera, giving us a good view of its
721 | distinctive brown and white facial markings and one of its dark eyes.
722 | The puppy is lying down with its white chest/front visible against the
723 | green grass.
724 |
725 |
726 |
727 | - id: `msg_01AeR9eWjbxa788YF97iErtN`
728 | - content:
729 | `[{'text': 'The puppy is facing towards the left side of the image. Its head is positioned so we can see its right side profile, though it appears to be looking slightly towards the camera, giving us a good view of its distinctive brown and white facial markings and one of its dark eyes. The puppy is lying down with its white chest/front visible against the green grass.', 'type': 'text'}]`
730 | - model: `claude-3-5-sonnet-20241022`
731 | - role: `assistant`
732 | - stop_reason: `end_turn`
733 | - stop_sequence: `None`
734 | - type: `message`
735 | - usage:
736 | `{'input_tokens': 239, 'output_tokens': 79, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
737 |
738 |
739 |
740 | ``` python
741 | chat('What color is it?')
742 | ```
743 |
744 | The puppy has a classic Cavalier King Charles Spaniel coat with a rich
745 | chestnut brown (sometimes called Blenheim) coloring on its ears and
746 | patches on its face, combined with a bright white base color. The white
747 | is particularly prominent on its face (creating a distinctive blaze down
748 | the center) and chest area. This brown and white combination is one of
749 | the most recognizable color patterns for the breed.
750 |
751 |
752 |
753 | - id: `msg_01R91AqXG7pLc8hK24F5mc7x`
754 | - content:
755 | `[{'text': 'The puppy has a classic Cavalier King Charles Spaniel coat with a rich chestnut brown (sometimes called Blenheim) coloring on its ears and patches on its face, combined with a bright white base color. The white is particularly prominent on its face (creating a distinctive blaze down the center) and chest area. This brown and white combination is one of the most recognizable color patterns for the breed.', 'type': 'text'}]`
756 | - model: `claude-3-5-sonnet-20241022`
757 | - role: `assistant`
758 | - stop_reason: `end_turn`
759 | - stop_sequence: `None`
760 | - type: `message`
761 | - usage:
762 | `{'input_tokens': 326, 'output_tokens': 92, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
763 |
764 |
765 |
766 | Note that the image is passed in again for every input in the dialog, so
767 | the number of input tokens increases quickly with this kind of chat.
768 | (For large images, using prompt caching might be a good idea.)
769 |
770 | ``` python
771 | chat.use
772 | ```
773 |
774 | In: 663; Out: 301; Cache create: 0; Cache read: 0; Total: 964
775 |
776 | ## Other model providers
777 |
778 | You can also use 3rd party providers of Anthropic models, as shown here.
779 |
780 | ### Amazon Bedrock
781 |
782 | These are the models available through Bedrock:
783 |
784 | ``` python
785 | models_aws
786 | ```
787 |
788 | ['anthropic.claude-3-opus-20240229-v1:0',
789 | 'anthropic.claude-3-5-sonnet-20241022-v2:0',
790 | 'anthropic.claude-3-sonnet-20240229-v1:0',
791 | 'anthropic.claude-3-haiku-20240307-v1:0']
792 |
793 | To use them, call `AnthropicBedrock` with your access details, and pass
794 | that to [`Client`](https://claudette.answer.ai/core.html#client):
795 |
796 | ``` python
797 | from anthropic import AnthropicBedrock
798 | ```
799 |
800 | ``` python
801 | ab = AnthropicBedrock(
802 | aws_access_key=os.environ['AWS_ACCESS_KEY'],
803 | aws_secret_key=os.environ['AWS_SECRET_KEY'],
804 | )
805 | client = Client(models_aws[-1], ab)
806 | ```
807 |
808 | Now create your [`Chat`](https://claudette.answer.ai/core.html#chat)
809 | object passing this client to the `cli` parameter – and from then on,
810 | everything is identical to the previous examples.
811 |
812 | ``` python
813 | chat = Chat(cli=client)
814 | chat("I'm Jeremy")
815 | ```
816 |
817 | It’s nice to meet you, Jeremy! I’m Claude, an AI assistant created by
818 | Anthropic. How can I help you today?
819 |
820 |
821 |
822 | - id: `msg_bdrk_01V3B5RF2Pyzmh3NeR8xMMpq`
823 | - content:
824 | `[{'text': "It's nice to meet you, Jeremy! I'm Claude, an AI assistant created by Anthropic. How can I help you today?", 'type': 'text'}]`
825 | - model: `claude-3-haiku-20240307`
826 | - role: `assistant`
827 | - stop_reason: `end_turn`
828 | - stop_sequence: `None`
829 | - type: `message`
830 | - usage: `{'input_tokens': 10, 'output_tokens': 32}`
831 |
832 |
833 |
834 | ### Google Vertex
835 |
836 | These are the models available through Vertex:
837 |
838 | ``` python
839 | models_goog
840 | ```
841 |
842 | ['claude-3-opus@20240229',
843 | 'claude-3-5-sonnet-v2@20241022',
844 | 'claude-3-sonnet@20240229',
845 | 'claude-3-haiku@20240307']
846 |
847 | To use them, call `AnthropicVertex` with your access details, and pass
848 | that to [`Client`](https://claudette.answer.ai/core.html#client):
849 |
850 | ``` python
851 | from anthropic import AnthropicVertex
852 | import google.auth
853 | ```
854 |
855 | ``` python
856 | project_id = google.auth.default()[1]
857 | gv = AnthropicVertex(project_id=project_id, region="us-east5")
858 | client = Client(models_goog[-1], gv)
859 | ```
860 |
861 | ``` python
862 | chat = Chat(cli=client)
863 | chat("I'm Jeremy")
864 | ```
865 |
866 | ## Extensions
867 |
868 | - [Pydantic Structured
869 |   Output](https://github.com/tom-pollak/claudette-pydantic)
870 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # claudette
2 |
3 |
4 |
5 |
6 | > **NB**: If you are reading this in GitHub’s readme, we recommend you
7 | > instead read the much more nicely formatted [documentation
8 | > version](https://claudette.answer.ai/) of this tutorial.
9 |
10 | *Claudette* is a wrapper for Anthropic’s [Python
11 | SDK](https://github.com/anthropics/anthropic-sdk-python).
12 |
13 | The SDK works well, but it is quite low level – it leaves the developer
14 | to do a lot of stuff manually. That’s a lot of extra work and
15 | boilerplate! Claudette automates pretty much everything that can be
16 | automated, whilst providing full control. Amongst the features provided:
17 |
18 | - A [`Chat`](https://claudette.answer.ai/core.html#chat) class that
19 | creates stateful dialogs
20 | - Support for *prefill*, which tells Claude what to use as the first few
21 | words of its response
22 | - Convenient image support
23 | - Simple and convenient support for Claude’s new Tool Use API.
24 |
25 | You’ll need to set the `ANTHROPIC_API_KEY` environment variable to the
26 | key provided to you by Anthropic in order to use this library.
27 |
28 | Note that this library is the first ever “literate nbdev” project. That
29 | means that the actual source code for the library is a rendered Jupyter
30 | Notebook which includes callout notes and tips, HTML tables and images,
31 | detailed explanations, and teaches *how* and *why* the code is written
32 | the way it is. Even if you’ve never used the Anthropic Python SDK or
33 | Claude API before, you should be able to read the source code. Click
34 | [Claudette’s Source](https://claudette.answer.ai/core.html) to read it,
35 | or clone the git repo and execute the notebook yourself to see every
36 | step of the creation process in action. The tutorial below includes
37 | links to API details which will take you to relevant parts of the
38 | source. The reason this project is a new kind of literate program is
39 | because we take seriously Knuth’s call to action, that we have a “*moral
40 | commitment*” to never write an “*illiterate program*” – and so we have a
41 | commitment to making literate programming an easy and pleasant
42 | experience. (For more on this, see [this
43 | talk](https://www.youtube.com/watch?v=rX1yGxJijsI) from Hamel Husain.)
44 |
45 | > “*Let us change our traditional attitude to the construction of
46 | > programs: Instead of imagining that our main task is to instruct a
47 | > **computer** what to do, let us concentrate rather on explaining to
48 | > **human beings** what we want a computer to do.*” Donald E. Knuth,
49 | > [Literate
50 | > Programming](https://www.cs.tufts.edu/~nr/cs257/archive/literate-programming/01-knuth-lp.pdf)
51 | > (1984)
52 |
53 | ## Install
54 |
55 | ``` sh
56 | pip install claudette
57 | ```
58 |
59 | ## Getting started
60 |
61 | Anthropic’s Python SDK will automatically be installed with Claudette,
62 | if you don’t already have it.
63 |
64 | ``` python
65 | import os
66 | # os.environ['ANTHROPIC_LOG'] = 'debug'
67 | ```
68 |
69 | To print every HTTP request and response in full, uncomment the above
70 | line.
71 |
72 | ``` python
73 | from claudette import *
74 | ```
75 |
76 | Claudette only exports the symbols that are needed to use the library,
77 | so you can use `import *` to import them. Alternatively, just use:
78 |
79 | ``` python
80 | import claudette
81 | ```
82 |
83 | …and then add the prefix `claudette.` to any usages of the module.
84 |
85 | Claudette provides `models`, which is a list of models currently
86 | available from the SDK.
87 |
88 | ``` python
89 | models
90 | ```
91 |
92 | ['claude-opus-4-20250514',
93 | 'claude-sonnet-4-20250514',
94 | 'claude-3-opus-20240229',
95 | 'claude-3-7-sonnet-20250219',
96 | 'claude-3-5-sonnet-20241022']
97 |
98 | For these examples, we’ll use Sonnet 4, since it’s awesome!
99 |
100 | ``` python
101 | model = models[1]
102 | model
103 | ```
104 |
105 | 'claude-sonnet-4-20250514'
106 |
107 | ## Chat
108 |
109 | The main interface to Claudette is the
110 | [`Chat`](https://claudette.answer.ai/core.html#chat) class, which
111 | provides a stateful interface to Claude:
112 |
113 | ``` python
114 | chat = Chat(model, sp="""You are a helpful and concise assistant.""")
115 | chat("I'm Jeremy")
116 | ```
117 |
118 | Hello Jeremy! Nice to meet you. How can I help you today?
119 |
120 |
121 |
122 | - id: `msg_01NNfbziMKnAULhH72Upzpzg`
123 | - content:
124 | `[{'citations': None, 'text': 'Hello Jeremy! Nice to meet you. How can I help you today?', 'type': 'text'}]`
125 | - model: `claude-sonnet-4-20250514`
126 | - role: `assistant`
127 | - stop_reason: `end_turn`
128 | - stop_sequence: `None`
129 | - type: `message`
130 | - usage:
131 | `{'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 19, 'output_tokens': 18, 'server_tool_use': None, 'service_tier': 'standard'}`
132 |
133 |
134 |
135 | ``` python
136 | r = chat("What's my name?")
137 | r
138 | ```
139 |
140 | Your name is Jeremy.
141 |
142 |
143 |
144 | - id: `msg_01HoboBZtq6dtocz2UWht9ia`
145 | - content:
146 | `[{'citations': None, 'text': 'Your name is Jeremy.', 'type': 'text'}]`
147 | - model: `claude-sonnet-4-20250514`
148 | - role: `assistant`
149 | - stop_reason: `end_turn`
150 | - stop_sequence: `None`
151 | - type: `message`
152 | - usage:
153 | `{'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 45, 'output_tokens': 8, 'server_tool_use': None, 'service_tier': 'standard'}`
154 |
155 |
156 |
157 | ``` python
158 | r = chat("What's my name?")
159 | r
160 | ```
161 |
162 | Your name is Jeremy.
163 |
164 |
165 |
166 | - id: `msg_01SUAYui6JS3Jf65KYfsGWhU`
167 | - content:
168 | `[{'citations': None, 'text': 'Your name is Jeremy.', 'type': 'text'}]`
169 | - model: `claude-sonnet-4-20250514`
170 | - role: `assistant`
171 | - stop_reason: `end_turn`
172 | - stop_sequence: `None`
173 | - type: `message`
174 | - usage:
175 | `{'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 61, 'output_tokens': 8, 'server_tool_use': None, 'service_tier': 'standard'}`
176 |
177 |
178 |
179 | As you see above, displaying the results of a call in a notebook shows
180 | just the message contents, with the other details hidden behind a
181 | collapsible section. Alternatively you can `print` the details:
182 |
183 | ``` python
184 | print(r)
185 | ```
186 |
187 | Message(id='msg_01SUAYui6JS3Jf65KYfsGWhU', content=[TextBlock(citations=None, text='Your name is Jeremy.', type='text')], model='claude-sonnet-4-20250514', role='assistant', stop_reason='end_turn', stop_sequence=None, type='message', usage=In: 61; Out: 8; Cache create: 0; Cache read: 0; Total Tokens: 69; Search: 0)
188 |
189 | Claude supports adding an extra `assistant` message at the end, which
190 | contains the *prefill* – i.e. the text we want Claude to assume the
191 | response starts with. Let’s try it out:
192 |
193 | ``` python
194 | chat("Concisely, what is the meaning of life?",
195 | prefill='According to Douglas Adams,')
196 | ```
197 |
198 | According to Douglas Adams,it’s 42. More seriously, many find meaning
199 | through relationships, personal growth, contributing to others, and
200 | pursuing what brings fulfillment.
201 |
202 |
203 |
204 | - id: `msg_01UkHn37YePdXg8NVrXk9Qu3`
205 | - content:
206 | `[{'citations': None, 'text': "According to Douglas Adams,it's 42. More seriously, many find meaning through relationships, personal growth, contributing to others, and pursuing what brings fulfillment.", 'type': 'text'}]`
207 | - model: `claude-sonnet-4-20250514`
208 | - role: `assistant`
209 | - stop_reason: `end_turn`
210 | - stop_sequence: `None`
211 | - type: `message`
212 | - usage:
213 | `{'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 89, 'output_tokens': 32, 'server_tool_use': None, 'service_tier': 'standard'}`
214 |
215 |
216 |
217 | You can add `stream=True` to stream the results as soon as they arrive
218 | (although you will only see the gradual generation if you execute the
219 | notebook yourself, of course!)
220 |
221 | ``` python
222 | for o in chat("Concisely, what book was that in?", prefill='It was in', stream=True):
223 | print(o, end='')
224 | ```
225 |
226 | It was in "The Hitchhiker's Guide to the Galaxy."
227 |
228 | ### Async
229 |
230 | Alternatively, you can use
231 | [`AsyncChat`](https://claudette.answer.ai/async.html#asyncchat) (or
232 | [`AsyncClient`](https://claudette.answer.ai/async.html#asyncclient)) for
233 | the async versions, e.g:
234 |
235 | ``` python
236 | chat = AsyncChat(model)
237 | await chat("I'm Jeremy")
238 | ```
239 |
240 | Nice to meet you, Jeremy! How are you doing today? Is there anything I
241 | can help you with?
242 |
243 |
244 |
245 | - id: `msg_01HyDqMjwcKEc2V39xLWTwFf`
246 | - content:
247 | `[{'citations': None, 'text': 'Nice to meet you, Jeremy! How are you doing today? Is there anything I can help you with?', 'type': 'text'}]`
248 | - model: `claude-sonnet-4-20250514`
249 | - role: `assistant`
250 | - stop_reason: `end_turn`
251 | - stop_sequence: `None`
252 | - type: `message`
253 | - usage:
254 | `{'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 10, 'output_tokens': 25, 'server_tool_use': None, 'service_tier': 'standard'}`
255 |
256 |
257 |
258 | Remember to use `async for` when streaming in this case:
259 |
260 | ``` python
261 | async for o in await chat("Concisely, what is the meaning of life?",
262 | prefill='According to Douglas Adams,', stream=True):
263 | print(o, end='')
264 | ```
265 |
266 | According to Douglas Adams,it's 42. But more seriously, the meaning of life is likely something you create through your relationships, contributions, growth, and what brings you fulfillment - rather than something you discover pre-made.
267 |
268 | ## Prompt caching
269 |
270 | Claude supports [prompt
271 | caching](https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching),
272 | which can significantly reduce token usage costs when working with large
273 | contexts or repetitive elements. When you use `mk_msg(msg, cache=True)`,
274 | Claudette adds the necessary cache control headers to make that message
275 | cacheable.
276 |
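Concretely, the cache marker is just an extra field on a content block. Here is a simplified sketch of the kind of message `mk_msg(msg, cache=True)` builds for you (the exact structure is an assumption based on Anthropic's prompt caching docs):

``` python
# Simplified sketch of a cache-marked message (assumed shape; mk_msg
# constructs something equivalent so you don't have to).
big_context = "...the full README text..."
content = [{'type': 'text',
            'text': big_context,
            'cache_control': {'type': 'ephemeral'}}]
msg = {'role': 'user', 'content': content}
```

Everything in the prompt up to and including the marked block becomes eligible for caching on subsequent calls.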
277 | Prompt caching works by marking segments of your prompt for efficient
278 | reuse. When a cached segment is encountered again, Claude reads it from
279 | the cache rather than processing the full content, resulting in a 90%
280 | reduction in token costs for those segments.
281 |
282 | Some key points about prompt caching:
283 |
284 | - Cache writes cost 25% more than normal input tokens
285 | - Cache reads cost 90% less than normal input tokens
286 | - Minimum cacheable length is model-dependent (1024-2048 tokens)
287 | - Cached segments must be completely identical to be reused
288 | - Works well for system prompts, tool definitions, and large context blocks
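
These relative costs mean caching pays for itself as soon as a context is reused. A rough break-even sketch (illustrative multipliers only, not actual dollar pricing):

``` python
# Relative token-cost multipliers from the points above (illustrative only).
write_mult, read_mult = 1.25, 0.10

def cost_uncached(n_calls, ctx_tokens):
    # Without caching, the full context is billed on every call.
    return n_calls * ctx_tokens

def cost_cached(n_calls, ctx_tokens):
    # The first call writes the cache; later calls read it at a discount.
    return write_mult*ctx_tokens + (n_calls-1)*read_mult*ctx_tokens
```

Already by the second call a cached context costs 1.35x the context size versus 2x uncached, and the gap widens with every reuse.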
288 |
289 | For instance, here we use caching when asking about Claudette’s readme
290 | file:
291 |
292 | ``` python
293 | chat = Chat(model, sp="""You are a helpful and concise assistant.""")
294 | ```
295 |
296 | ``` python
297 | nbtxt = Path('README.txt').read_text()
298 | msg = f'''
299 | {nbtxt}
300 |
301 | In brief, what is the purpose of this project based on the readme?'''
302 | r = chat(mk_msg(msg, cache=True))
303 | r
304 | ```
305 |
306 | Based on the README, Claudette is a high-level wrapper for Anthropic’s
307 | Python SDK that aims to simplify and automate working with Claude’s API.
308 | Its main purposes are:
309 |
310 | 1. **Reduce boilerplate and manual work** - It automates tasks that
311 | would otherwise require manual handling with the base SDK
312 |
313 | 2. **Provide convenient features** like:
314 |
315 | - Stateful chat dialogs via the
316 | [`Chat`](https://claudette.answer.ai/core.html#chat) class
317 | - Support for prefill (controlling Claude’s response start)
318 | - Easy image handling
319 | - Simplified tool use API
320 | - Prompt caching support
321 |
322 | 3. **Maintain full control** while providing automation - you get
323 | convenience without losing flexibility
324 |
325 | 4. **Educational value** - It’s the first “literate nbdev” project,
326 | meaning the source code is written as a readable Jupyter Notebook
327 | with detailed explanations, making it both functional software and a
328 | teaching resource
329 |
330 | The project essentially makes Claude’s API more ergonomic and
331 | user-friendly while preserving all the underlying capabilities.
332 |
333 |
334 |
335 | - id: `msg_011SNy9u95nspNq6PSnQuwBF`
336 | - content:
337 | `[{'citations': None, 'text': 'Based on the README, Claudette is a high-level wrapper for Anthropic\'s Python SDK that aims to simplify and automate working with Claude\'s API. Its main purposes are:\n\n1. **Reduce boilerplate and manual work** - It automates tasks that would otherwise require manual handling with the base SDK\n2. **Provide convenient features** like:\n - Stateful chat dialogs via the [`Chat`](https://claudette.answer.ai/core.html#chat) class\n - Support for prefill (controlling Claude\'s response start)\n - Easy image handling\n - Simplified tool use API\n - Prompt caching support\n\n3. **Maintain full control** while providing automation - you get convenience without losing flexibility\n\n4. **Educational value** - It\'s the first "literate nbdev" project, meaning the source code is written as a readable Jupyter Notebook with detailed explanations, making it both functional software and a teaching resource\n\nThe project essentially makes Claude\'s API more ergonomic and user-friendly while preserving all the underlying capabilities.', 'type': 'text'}]`
338 | - model: `claude-sonnet-4-20250514`
339 | - role: `assistant`
340 | - stop_reason: `end_turn`
341 | - stop_sequence: `None`
342 | - type: `message`
343 | - usage:
344 | `{'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 9287, 'input_tokens': 4, 'output_tokens': 223, 'server_tool_use': None, 'service_tier': 'standard'}`
345 |
346 |
347 |
348 | The response records how many input tokens were written to, or read
349 | from, the cache:
350 |
351 | ``` python
352 | print(r.usage)
353 | ```
354 |
355 | Usage(cache_creation_input_tokens=0, cache_read_input_tokens=9287, input_tokens=4, output_tokens=223, server_tool_use=None, service_tier='standard')
356 |
357 | We can now ask a followup question in this chat:
358 |
359 | ``` python
360 | r = chat('How does it make tool use more ergonomic?')
361 | r
362 | ```
363 |
364 | Based on the README, Claudette makes tool use more ergonomic in several
365 | key ways:
366 |
367 | ## 1. **Simplified Function Definitions**
368 |
369 | Uses docments to make Python function definitions more user-friendly -
370 | you just need type hints and comments:
371 |
372 | ``` python
373 | def sums(
374 | a:int, # First thing to sum
375 | b:int=1 # Second thing to sum
376 | ) -> int: # The sum of the inputs
377 | "Adds a + b."
378 | return a + b
379 | ```
380 |
381 | ## 2. **Automatic Tool Execution**
382 |
383 | Handles the tool calling process automatically. When Claude returns a
384 | `tool_use` message, you just call `chat()` again and Claudette:
385 |
386 | - Calls the tool with the provided parameters
387 | - Passes the result back to Claude
388 | - Returns Claude's final response
389 |
390 | No manual parameter extraction or result handling needed.
389 |
390 | ## 3. **Multi-step Tool Workflows**
391 |
392 | The `toolloop` method can handle multiple tool calls in sequence to
393 | solve complex problems. For example, calculating `(a+b)*2` automatically
394 | uses both addition and multiplication tools in the right order.
395 |
396 | ## 4. **Easy Tool Integration**
397 |
398 | - Pass tools as a simple list to the
399 | [`Chat`](https://claudette.answer.ai/core.html#chat) constructor
400 | - Optionally force tool usage with `tool_choice` parameter
401 | - Get structured data directly with
402 | [`Client.structured()`](https://claudette.answer.ai/core.html#client.structured)
403 |
404 | ## 5. **Reduced Complexity**
405 |
406 | Instead of manually handling tool use messages, parameter parsing,
407 | function calls, and result formatting that the base SDK requires,
408 | Claudette abstracts all of this away while maintaining full
409 | functionality.
410 |
411 | This makes tool use feel more like natural function calling rather than
412 | complex API orchestration.
413 |
414 |
415 |
416 | - id: `msg_01XDJNpxiTkhTY9kbu17Q73L`
417 | - content:
418 | ```` [{'citations': None, 'text': 'Based on the README, Claudette makes tool use more ergonomic in several key ways:\n\n## 1. **Simplified Function Definitions**\nUses docments to make Python function definitions more user-friendly - you just need type hints and comments:\n\n```python\ndef sums(\n a:int, # First thing to sum\n b:int=1 # Second thing to sum\n) -> int: # The sum of the inputs\n "Adds a + b."\n return a + b\n```\n\n## 2. **Automatic Tool Execution**\nHandles the tool calling process automatically. When Claude returns a ````tool_use`message, you just call`chat()`again and Claudette:\n- Calls the tool with the provided parameters\n- Passes the result back to Claude\n- Returns Claude\'s final response\n\nNo manual parameter extraction or result handling needed.\n\n## 3. **Multi-step Tool Workflows**\nThe`toolloop`method can handle multiple tool calls in sequence to solve complex problems. For example, calculating`(a+b)\*2`automatically uses both addition and multiplication tools in the right order.\n\n## 4. **Easy Tool Integration**\n- Pass tools as a simple list to the [`Chat`](https://claudette.answer.ai/core.html#chat) constructor\n- Optionally force tool usage with`tool_choice`parameter\n- Get structured data directly with [`Client.structured()`](https://claudette.answer.ai/core.html#client.structured)\n\n## 5. **Reduced Complexity**\nInstead of manually handling tool use messages, parameter parsing, function calls, and result formatting that the base SDK requires, Claudette abstracts all of this away while maintaining full functionality.\n\nThis makes tool use feel more like natural function calling rather than complex API orchestration.', 'type': 'text'}]`
419 | - model: `claude-sonnet-4-20250514`
420 | - role: `assistant`
421 | - stop_reason: `end_turn`
422 | - stop_sequence: `None`
423 | - type: `message`
424 | - usage:
425 | `{'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 9287, 'input_tokens': 241, 'output_tokens': 374, 'server_tool_use': None, 'service_tier': 'standard'}`
426 |
427 |
428 |
429 | We can see that this only used ~200 regular input tokens – the 9,000+
430 | context tokens have been read from cache.
431 |
432 | ``` python
433 | print(r.usage)
434 | ```
435 |
436 | Usage(cache_creation_input_tokens=0, cache_read_input_tokens=9287, input_tokens=241, output_tokens=374, server_tool_use=None, service_tier='standard')
437 |
438 | ``` python
439 | chat.use
440 | ```
441 |
442 | In: 245; Out: 597; Cache create: 0; Cache read: 18574; Total Tokens: 19416; Search: 0
443 |
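The total reported by `use` is simply the sum of the input, output, and cache token counts, which we can verify with the numbers above:

``` python
# Token counts from the `chat.use` output above.
inp, out, cache_create, cache_read = 245, 597, 0, 18574
total = inp + out + cache_create + cache_read
print(total)  # 19416, matching the Total Tokens figure
```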
444 | ## Tool use
445 |
446 | [Tool use](https://docs.anthropic.com/claude/docs/tool-use) lets Claude
447 | use external tools.
448 |
449 | We use [docments](https://fastcore.fast.ai/docments.html) to make
450 | defining Python functions as ergonomic as possible. Each parameter (and
451 | the return value) should have a type, and a docments comment with the
452 | description of what it is. As an example we’ll write a simple function
453 | that adds numbers together, and will tell us when it’s being called:
454 |
455 | ``` python
456 | def sums(
457 | a:int, # First thing to sum
458 | b:int=1 # Second thing to sum
459 | ) -> int: # The sum of the inputs
460 | "Adds a + b."
461 | print(f"Finding the sum of {a} and {b}")
462 | return a + b
463 | ```
464 |
465 | Sometimes Claude will try to add stuff up “in its head”, so we’ll use a
466 | system prompt to ask it not to.
467 |
468 | ``` python
469 | sp = "Always use tools if math ops are needed."
470 | ```
471 |
472 | We’ll get Claude to add up some long numbers:
473 |
474 | ``` python
475 | a,b = 604542,6458932
476 | pr = f"What is {a}+{b}?"
477 | pr
478 | ```
479 |
480 | 'What is 604542+6458932?'
481 |
482 | To use tools, pass a list of them to
483 | [`Chat`](https://claudette.answer.ai/core.html#chat):
484 |
485 | ``` python
486 | chat = Chat(model, sp=sp, tools=[sums])
487 | ```
488 |
489 | To force Claude to always answer using a tool, set `tool_choice` to that
490 | function name. When Claude needs to use a tool, it doesn’t return the
491 | answer, but instead returns a `tool_use` message, which means we have to
492 | call the named tool with the provided parameters.
493 |
494 | ``` python
495 | r = chat(pr, tool_choice='sums')
496 | r
497 | ```
498 |
499 | Finding the sum of 604542 and 6458932
500 |
501 | ToolUseBlock(id=‘toolu_01UUWNqtkMHQss345r1ir17q’, input={‘a’: 604542,
502 | ‘b’: 6458932}, name=‘sums’, type=‘tool_use’)
503 |
504 |
505 |
506 | - id: `msg_0199dXeVq11rc2veGNJVWc4k`
507 | - content:
508 | `[{'id': 'toolu_01UUWNqtkMHQss345r1ir17q', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}]`
509 | - model: `claude-sonnet-4-20250514`
510 | - role: `assistant`
511 | - stop_reason: `tool_use`
512 | - stop_sequence: `None`
513 | - type: `message`
514 | - usage:
515 | `{'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 445, 'output_tokens': 53, 'server_tool_use': None, 'service_tier': 'standard'}`
516 |
517 |
518 |
519 | Claudette handles all that for us – we just call it again, and it all
520 | happens automatically:
521 |
522 | ``` python
523 | chat()
524 | ```
525 |
526 | 604542 + 6458932 = 7,063,474
527 |
528 |
529 |
530 | - id: `msg_014wmCxnwQpgvnqKKto3RrxA`
531 | - content:
532 | `[{'citations': None, 'text': '604542 + 6458932 = 7,063,474', 'type': 'text'}]`
533 | - model: `claude-sonnet-4-20250514`
534 | - role: `assistant`
535 | - stop_reason: `end_turn`
536 | - stop_sequence: `None`
537 | - type: `message`
538 | - usage:
539 | `{'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 527, 'output_tokens': 19, 'server_tool_use': None, 'service_tier': 'standard'}`
540 |
541 |
542 |
543 | You can see how many tokens have been used at any time by checking the
544 | `use` property. Note that (as of May 2024) tool use in Claude uses a
545 | *lot* of tokens, since it automatically adds a large system prompt.
546 |
547 | ``` python
548 | chat.use
549 | ```
550 |
551 | In: 972; Out: 72; Cache create: 0; Cache read: 0; Total Tokens: 1044; Search: 0
552 |
553 | We can do everything needed to use tools in a single step, by using
554 | [`Chat.toolloop`](https://claudette.answer.ai/toolloop.html#chat.toolloop).
555 | This can even call multiple tools as needed to solve a problem. For
556 | example, let’s define a tool to handle multiplication:
557 |
558 | ``` python
559 | def mults(
560 | a:int, # First thing to multiply
561 | b:int=1 # Second thing to multiply
562 | ) -> int: # The product of the inputs
563 | "Multiplies a * b."
564 | print(f"Finding the product of {a} and {b}")
565 | return a * b
566 | ```
567 |
568 | Now with a single call we can calculate `(a+b)*2` – by displaying each
569 | response as it arrives, we can see every step of the process:
570 |
571 | ``` python
572 | chat = Chat(model, sp=sp, tools=[sums,mults])
573 | pr = f'Calculate ({a}+{b})*2'
574 | pr
575 | ```
576 |
577 | 'Calculate (604542+6458932)*2'
578 |
579 | ``` python
580 | for o in chat.toolloop(pr): display(o)
581 | ```
582 |
583 | Finding the sum of 604542 and 6458932
584 |
585 | I’ll help you calculate (604542+6458932)\*2. I need to first add the two
586 | numbers, then multiply the result by 2.
587 |
588 |
589 |
590 | - id: `msg_019DZznw7qiEM2uEEcpNTnKs`
591 | - content:
592 | `[{'citations': None, 'text': "I'll help you calculate (604542+6458932)*2. I need to first add the two numbers, then multiply the result by 2.", 'type': 'text'}, {'id': 'toolu_016NZ7MtE8oWHs5BSkxMcAN7', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}]`
593 | - model: `claude-sonnet-4-20250514`
594 | - role: `assistant`
595 | - stop_reason: `tool_use`
596 | - stop_sequence: `None`
597 | - type: `message`
598 | - usage:
599 | `{'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 539, 'output_tokens': 105, 'server_tool_use': None, 'service_tier': 'standard'}`
600 |
601 |
602 |
603 | ``` json
604 | { 'content': [ { 'content': '7063474',
605 | 'tool_use_id': 'toolu_016NZ7MtE8oWHs5BSkxMcAN7',
606 | 'type': 'tool_result'}],
607 | 'role': 'user'}
608 | ```
609 |
610 | Finding the product of 7063474 and 2
611 |
612 | Now I’ll multiply that result by 2:
613 |
614 |
615 |
616 | - id: `msg_01LmQiMRWAtTQz6sChqsbtMy`
617 | - content:
618 | `[{'citations': None, 'text': "Now I'll multiply that result by 2:", 'type': 'text'}, {'id': 'toolu_019BQuhBzEkCWC1JMp6VtcfD', 'input': {'a': 7063474, 'b': 2}, 'name': 'mults', 'type': 'tool_use'}]`
619 | - model: `claude-sonnet-4-20250514`
620 | - role: `assistant`
621 | - stop_reason: `tool_use`
622 | - stop_sequence: `None`
623 | - type: `message`
624 | - usage:
625 | `{'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 659, 'output_tokens': 82, 'server_tool_use': None, 'service_tier': 'standard'}`
626 |
627 |
628 |
629 | ``` json
630 | { 'content': [ { 'content': '14126948',
631 | 'tool_use_id': 'toolu_019BQuhBzEkCWC1JMp6VtcfD',
632 | 'type': 'tool_result'}],
633 | 'role': 'user'}
634 | ```
635 |
636 | The answer is **14,126,948**.
637 |
638 | To break it down:
639 |
640 | - 604,542 + 6,458,932 = 7,063,474
641 | - 7,063,474 × 2 = 14,126,948
640 |
641 |
642 |
643 | - id: `msg_0119DdeQ2goLFwGkXTXHFDsv`
644 | - content:
645 | `[{'citations': None, 'text': 'The answer is **14,126,948**.\n\nTo break it down:\n- 604,542 + 6,458,932 = 7,063,474\n- 7,063,474 × 2 = 14,126,948', 'type': 'text'}]`
646 | - model: `claude-sonnet-4-20250514`
647 | - role: `assistant`
648 | - stop_reason: `end_turn`
649 | - stop_sequence: `None`
650 | - type: `message`
651 | - usage:
652 | `{'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 756, 'output_tokens': 61, 'server_tool_use': None, 'service_tier': 'standard'}`
653 |
654 |
655 |
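We can sanity-check Claude's multi-step result in plain Python:

``` python
# Verify the two tool steps Claude performed: first the sum, then the doubling.
a, b = 604542, 6458932
assert (a + b) * 2 == 14126948  # matches Claude's answer of 14,126,948
```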
656 | ## Structured data
657 |
658 | If you just want the immediate result from a single tool, use
659 | [`Client.structured`](https://claudette.answer.ai/core.html#client.structured).
660 |
661 | ``` python
662 | cli = Client(model)
663 | ```
664 |
665 | ``` python
666 | def sums(
667 | a:int, # First thing to sum
668 | b:int=1 # Second thing to sum
669 | ) -> int: # The sum of the inputs
670 | "Adds a + b."
671 | print(f"Finding the sum of {a} and {b}")
672 | return a + b
673 | ```
674 |
675 | ``` python
676 | cli.structured("What is 604542+6458932", sums)
677 | ```
678 |
679 | Finding the sum of 604542 and 6458932
680 |
681 | [7063474]
682 |
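Under the hood, `structured` essentially maps the inputs from Claude's `tool_use` block onto your function's parameters. A rough sketch of the idea (not the library's actual implementation):

``` python
def sums(
    a:int, # First thing to sum
    b:int=1 # Second thing to sum
) -> int: # The sum of the inputs
    "Adds a + b."
    return a + b

# A tool_use block like the one Claude returns (values from the call above).
tool_use = {'name': 'sums', 'input': {'a': 604542, 'b': 6458932}}
tools = {'sums': sums}

# Dispatch the named tool with the provided inputs; structured() returns
# the results as a list.
result = tools[tool_use['name']](**tool_use['input'])
print([result])  # [7063474]
```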
683 | This is particularly useful for getting back structured information,
684 | e.g:
685 |
686 | ``` python
687 | class President:
688 | "Information about a president of the United States"
689 | def __init__(self,
690 | first:str, # first name
691 | last:str, # last name
692 | spouse:str, # name of spouse
693 | years_in_office:str, # format: "{start_year}-{end_year}"
694 | birthplace:str, # name of city
695 | birth_year:int # year of birth, `0` if unknown
696 | ):
697 | assert re.match(r'\d{4}-\d{4}', years_in_office), "Invalid format: `years_in_office`"
698 | store_attr()
699 |
700 | __repr__ = basic_repr('first, last, spouse, years_in_office, birthplace, birth_year')
701 | ```
702 |
703 | ``` python
704 | cli.structured("Provide key information about the 3rd President of the United States", President)
705 | ```
706 |
707 | [President(first='Thomas', last='Jefferson', spouse='Martha Wayles Skelton Jefferson', years_in_office='1801-1809', birthplace='Shadwell, Virginia', birth_year=1743)]
708 |
709 | ## Images
710 |
711 | Claude can handle image data as well. As everyone knows, when testing
712 | image APIs you have to use a cute puppy.
713 |
714 | ``` python
715 | fn = Path('samples/puppy.jpg')
716 | Image(filename=fn, width=200)
717 | ```
718 |
719 |
721 |
722 | We create a [`Chat`](https://claudette.answer.ai/core.html#chat) object
723 | as before:
724 |
725 | ``` python
726 | chat = Chat(model)
727 | ```
728 |
729 | Claudette expects images as a list of bytes, so we read in the file:
730 |
731 | ``` python
732 | img = fn.read_bytes()
733 | ```
734 |
735 | Prompts to Claudette can be lists, containing text, images, or both, e.g.:
736 |
737 | ``` python
738 | chat([img, "In brief, what color flowers are in this image?"])
739 | ```
740 |
741 | The flowers in this image are purple.
742 |
743 |
744 |
745 | - id: `msg_01N2NCd5JW3gNsGysgmyMx9F`
746 | - content:
747 | `[{'citations': None, 'text': 'The flowers in this image are purple.', 'type': 'text'}]`
748 | - model: `claude-sonnet-4-20250514`
749 | - role: `assistant`
750 | - stop_reason: `end_turn`
751 | - stop_sequence: `None`
752 | - type: `message`
753 | - usage:
754 | `{'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 110, 'output_tokens': 11, 'server_tool_use': None, 'service_tier': 'standard'}`
755 |
756 |
757 |
758 | The image is included as input tokens.
759 |
760 | ``` python
761 | chat.use
762 | ```
763 |
764 | In: 110; Out: 11; Cache create: 0; Cache read: 0; Total Tokens: 121; Search: 0
765 |
766 | Alternatively, Claudette supports creating a multi-stage chat with
767 | separate image and text prompts. For instance, you can pass just the
768 | image as the initial prompt (in which case Claude will make some general
769 | comments about what it sees), and then follow up with questions in
770 | additional prompts:
771 |
772 | ``` python
773 | chat = Chat(model)
774 | chat(img)
775 | ```
776 |
777 | What an adorable puppy! This looks like a Cavalier King Charles Spaniel
778 | puppy with the classic Blenheim coloring (chestnut and white markings).
779 | The puppy has those characteristic sweet, gentle eyes and silky coat
780 | that the breed is known for. The setting with the purple flowers in the
781 | background makes for a lovely portrait - it really highlights the
782 | puppy’s beautiful coloring and sweet expression. Cavalier King Charles
783 | Spaniels are known for being friendly, affectionate companions. Is this
784 | your puppy?
785 |
786 |
787 |
788 | - id: `msg_01Jat1obwo79eEz5JMFAF4Mh`
789 | - content:
790 | `[{'citations': None, 'text': "What an adorable puppy! This looks like a Cavalier King Charles Spaniel puppy with the classic Blenheim coloring (chestnut and white markings). The puppy has those characteristic sweet, gentle eyes and silky coat that the breed is known for. The setting with the purple flowers in the background makes for a lovely portrait - it really highlights the puppy's beautiful coloring and sweet expression. Cavalier King Charles Spaniels are known for being friendly, affectionate companions. Is this your puppy?", 'type': 'text'}]`
791 | - model: `claude-sonnet-4-20250514`
792 | - role: `assistant`
793 | - stop_reason: `end_turn`
794 | - stop_sequence: `None`
795 | - type: `message`
796 | - usage:
797 | `{'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 98, 'output_tokens': 118, 'server_tool_use': None, 'service_tier': 'standard'}`
798 |
799 |
800 |
801 | ``` python
802 | chat('What direction is the puppy facing?')
803 | ```
804 |
805 | The puppy is facing toward the camera/viewer. You can see the puppy’s
806 | face straight-on, with both eyes visible and looking directly at the
807 | camera. The puppy appears to be lying down with its head up and oriented
808 | forward, giving us a clear frontal view of its sweet face.
809 |
810 |
811 |
812 | - id: `msg_018EqaD7EFveLCzSPbQmMMuE`
813 | - content:
814 | `[{'citations': None, 'text': "The puppy is facing toward the camera/viewer. You can see the puppy's face straight-on, with both eyes visible and looking directly at the camera. The puppy appears to be lying down with its head up and oriented forward, giving us a clear frontal view of its sweet face.", 'type': 'text'}]`
815 | - model: `claude-sonnet-4-20250514`
816 | - role: `assistant`
817 | - stop_reason: `end_turn`
818 | - stop_sequence: `None`
819 | - type: `message`
820 | - usage:
821 | `{'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 227, 'output_tokens': 65, 'server_tool_use': None, 'service_tier': 'standard'}`
822 |
823 |
824 |
825 | ``` python
826 | chat('What color is it?')
827 | ```
828 |
829 | The puppy has a chestnut (reddish-brown) and white coat. The ears and
830 | patches around the eyes are a rich chestnut or reddish-brown color,
831 | while the face has a white blaze down the center and the chest/front
832 | appears to be white as well. This is the classic “Blenheim” color
833 | pattern that’s common in Cavalier King Charles Spaniels - the
834 | combination of chestnut and white markings.
835 |
836 |
837 |
838 | - id: `msg_01HzUTvRziMrSfdEjbo1kHnh`
839 | - content:
840 | `[{'citations': None, 'text': 'The puppy has a chestnut (reddish-brown) and white coat. The ears and patches around the eyes are a rich chestnut or reddish-brown color, while the face has a white blaze down the center and the chest/front appears to be white as well. This is the classic "Blenheim" color pattern that\'s common in Cavalier King Charles Spaniels - the combination of chestnut and white markings.', 'type': 'text'}]`
841 | - model: `claude-sonnet-4-20250514`
842 | - role: `assistant`
843 | - stop_reason: `end_turn`
844 | - stop_sequence: `None`
845 | - type: `message`
846 | - usage:
847 | `{'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 300, 'output_tokens': 103, 'server_tool_use': None, 'service_tier': 'standard'}`
848 |
849 |
850 |
Note that the image is passed in again for every input in the dialog, so
the number of input tokens increases quickly with this kind of chat.
(For large images, using prompt caching might be a good idea.)
854 |
855 | ``` python
856 | chat.use
857 | ```
858 |
859 | In: 625; Out: 286; Cache create: 0; Cache read: 0; Total Tokens: 911; Search: 0
860 |
861 | ## Extended Thinking
862 |
Claude Sonnet (3.7 and later) and Opus models have enhanced reasoning
capabilities through [extended
thinking](https://docs.anthropic.com/en/docs/build-with-claude/extended-thinking).
This feature allows Claude to think through complex problems
step-by-step, making its reasoning process transparent and its final
answers more reliable.
869 |
870 | To enable extended thinking, simply specify the number of thinking
871 | tokens using the `maxthinktok` parameter when making a call to Chat. The
872 | thinking process will appear in a collapsible section in the response.
873 |
874 | Some important notes about extended thinking:
875 |
876 | - Only available with select models
877 | - Automatically sets `temperature=1` when enabled (required for thinking
878 | to work)
879 | - Cannot be used with `prefill` (these features are incompatible)
880 | - Thinking is presented in a separate collapsible block in the response
881 | - The thinking tokens count toward your usage but help with complex
882 | reasoning
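Under the hood, a thinking-token budget like `maxthinktok` is expected to map onto Anthropic's `thinking` request parameter. Here is a minimal sketch of that mapping, assuming claudette forwards it roughly like this (the helper name `thinking_params` is illustrative, not part of claudette's API):

``` python
def thinking_params(maxthinktok):
    "Translate a thinking-token budget into Anthropic API request fields (sketch)."
    if not maxthinktok: return {}
    # Extended thinking requires temperature=1, per the notes above
    return {'thinking': {'type': 'enabled', 'budget_tokens': maxthinktok},
            'temperature': 1}

thinking_params(1024)
# {'thinking': {'type': 'enabled', 'budget_tokens': 1024}, 'temperature': 1}
```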
883 |
884 | To access models that support extended thinking, you can use
885 | `has_extended_thinking_models`.
886 |
887 | ``` python
888 | has_extended_thinking_models
889 | ```
890 |
891 | {'claude-3-7-sonnet-20250219',
892 | 'claude-opus-4-20250514',
893 | 'claude-sonnet-4-20250514'}
894 |
895 | ``` python
896 | chat = Chat(model)
897 | ```
898 |
899 | ``` python
900 | chat('Write a sentence about Python!', maxthinktok=1024)
901 | ```
902 |
903 | Python is a versatile, high-level programming language known for its
904 | readable syntax and extensive libraries, making it popular for
905 | everything from web development to data science and machine learning.
906 |
907 |
908 |
909 |
910 |
911 | Thinking
912 |
913 |
914 | The user is asking for a sentence about Python. Since they didn’t
915 | specify, this could refer to either: 1. Python the programming language
916 | 2. Python the snake
917 |
918 | Given the context and common usage, they’re most likely referring to the
919 | programming language Python. I’ll write a sentence about Python the
920 | programming language that’s informative and concise.
921 |
922 |
923 |
924 |
925 | - id: `msg_017EgYhUjsQxWkXN1zrRjFxK`
926 | - content:
927 | `[{'signature': 'EokECkYIBBgCKkDNuftW7kYi6z6RSuzj4DNdtNRxcj486/U8U2NJHg51M+vGmQ1eN7ypz+w4/tpZCFHgWR9KFPXElnHrp3SkoWJrEgzJthIeSKqmoUbrP5YaDKGgUdib0TYhZriKcCIwDrQ2GTZ3D7zE0RVouKJLbzyRl+sQ6FQ+NwNQb5qrHw5Ylmzqxxk4Sa4GuzOEY8zVKvAC+7LfNqCd7jBjPVqaoSRmCubkKuWWeg60G39UCYm/W9VUsrDT1IHLTvOuK3KOYTQL1zqWt1XlBFj52haZIWRmjVU1w2S8EyKIJIThNRYFfT9CDuAeCYwUae8BFL4wm/MEUw+2tDNH3ei7JUvb4sk17cTrePvzpiQNtmHN8TctDBP2RgD7PpTUbjNUsvoRJFBSLLfNsd8wlvAkcph96fDV5dUJ/W3mkluG4XbTTY3ns/rlikAFLTphaoXeqM6buvm889Sep8BQdHuujHcuKWD3auTusayXE5O/9yYcWrU9qPxZ2bxF72tZ1Y65bTBYzhm9ohtB1LTy0x0XvOS76gfGZ8XaJ4vj3OMz1Cn5GSTCNbELTHHVBh5azPSCI9Qu44/ZBE2ZsFA0mtPCiP8cyhZmzAaHFnz2QaKwuTlfz5VnDPmNSNy8rqHywWlkMMA4g9+0SDZSxYJkCYBO+OUs1gNqlwwJQUyYOc1SEmpBkVQee2kYAQ==', 'thinking': "The user is asking for a sentence about Python. Since they didn't specify, this could refer to either:\n1. Python the programming language\n2. Python the snake\n\nGiven the context and common usage, they're most likely referring to the programming language Python. I'll write a sentence about Python the programming language that's informative and concise.", 'type': 'thinking'}, {'citations': None, 'text': 'Python is a versatile, high-level programming language known for its readable syntax and extensive libraries, making it popular for everything from web development to data science and machine learning.', 'type': 'text'}]`
928 | - model: `claude-sonnet-4-20250514`
929 | - role: `assistant`
930 | - stop_reason: `end_turn`
931 | - stop_sequence: `None`
932 | - type: `message`
933 | - usage:
934 | `{'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 41, 'output_tokens': 119, 'server_tool_use': None, 'service_tier': 'standard'}`
935 |
936 |
937 |
938 | ## Web Search and Server Tools
939 |
940 | Claude supports server-side tools that run on Anthropic’s
941 | infrastructure. The flagship example is the web search tool, which
942 | allows Claude to search the web for up-to-date information to answer
943 | questions.
944 |
945 | Unlike client-side tools (where you provide functionality), server-side
946 | tools are managed by Anthropic. Claudette makes these easy to use with
947 | helper functions like
948 | [`search_conf()`](https://claudette.answer.ai/core.html#search_conf).
949 |
950 | ``` python
951 | chat = Chat(model, sp='Be concise in your responses.', tools=[search_conf()])
952 | pr = 'What is the current weather in San Diego?'
953 | r = chat(pr)
954 | r
955 | ```
956 |
957 | Based on the search results, here’s the current weather information for
958 | San Diego:
959 |
960 | Today (Saturday, June 21st) has a high of 67°F [^1] [^2] with cloudy
961 | conditions early with partial sunshine expected later [^3]. Winds are
962 | from the SSW at 10 to 15 mph [^4].
963 |
964 | Tonight’s low will be around 62°F with overcast conditions and SSW winds
965 | at 10 to 15 mph [^5].
966 |
967 | Temperatures are running about 4-7 degrees below normal for most areas
968 | [^6], making it a cooler day than typical for this time of year in San
969 | Diego.
970 |
971 | Air quality is currently at an unhealthy level for sensitive groups, so
972 | those with breathing sensitivities should reduce outdoor time if
973 | experiencing symptoms [^7].
974 |
975 |
976 |
977 | - id: `msg_01Kg3aNFF7ibUcWZxPpFNmu8`
978 | - content:
979 | `[{'id': 'srvtoolu_01YazfLE5GfK4ET2Z4rsfcuz', 'input': {'query': 'San Diego weather today'}, 'name': 'web_search', 'type': 'server_tool_use'}, {'content': [{'encrypted_content': 'EqcCCioIBBgCIiQ4ODk4YTFkYy0yMTNkLTRhNmYtOTljYi03ZTBlNTUzZDc0NWISDF5HWNC1GnRw10SRnBoMsafoy8BqdMy4chO3IjALzgdeA/d4dT1Qx7JKFu6XeXmrHYIe2lxVFl7TqZkRDnKmPpA6mnvpkwGh9eW8RJgqqgGtR4kEv8KoMeWf9VHjh1oYyscLkN5LJq00R4X3C/Cfw9oLLdIAiN9tvq9g2Rzm0Qb/IP5GcdVfiwx7w5dRfL67oaWQ5IFIXSvv9ByLW/ERFI30UKhhKFYumApVuZEIge6CL0j56OQFXAkPVENWIlSnro+9BIKgf866WWvRo0kx2d+SiX9p3ainG+Zqbl55Bq+mvyOUgVabtSEfr+oY6FQPJYYT9ad0QxYUOxgD', 'page_age': '3 days ago', 'title': '10-Day Weather Forecast for San Diego, CA - The Weather Channel | weather.com', 'type': 'web_search_result', 'url': 'https://weather.com/weather/tenday/l/San+Diego+CA?canonicalCityId=3b2b39ed755b459b725bf2a29c71d678'}, {'encrypted_content': 'EroMCioIBBgCIiQ4ODk4YTFkYy0yMTNkLTRhNmYtOTljYi03ZTBlNTUzZDc0NWISDKBkST0tPvpOGw/SWxoMgJ/iK5dTJy/vMfmsIjAfH72hWmf/naqyXr46jfw0V/iSDaZJPzeQlCOJvENkQwXPp1sup75iYszy/zgR/YMqvQuXXkSUyo9y/bG3wdm4EkiaXeXkz5QSx0o2/SrZPqhOL9EXrQZAdEPo6KZZQeQdLKkwuEeoNj7vHE55EFXmwTLkN+QTPBsZ4mJ9gC/+3SGgbAMGdubcNnzUIlty+u0wAGTLDaJGeKlkIreF1znKqB9RefmQrpaErVDj7za3noPpsqUFRfDzTPfDUzOsRBKo3DNA2f86rYA1V6ot9qpSNOi1WFTnne5tMl68x/QTEXdgVpxolLK243bkcW1lBdAhwHdik+B/z1RHIXpyJWOLZgUwUtlGYm7jqacVVVF7rut/OlppO7+RA5mgpVjx47J40MUq7rmrcn+e9dXSUWduDPgsSCGtyM5jn0WgUgYyNRccdRPhLAeYcqa4xFYUmpbkGn5o7JrtkiU/OzS1pCeKfFRcjEtdCYeOHkj1FXsAJj31/qAlTZcWE3x4DuQbVt+yqpK4BXkpqTqd6ML3SaMm3/ac9TIaAz6Ob/ThK+A5qypeCB4YW9fyL5xG0oyHEKTefh+5If1nF23IW9Kbze5wZVTMobdolFr3uhS547DV2lngv0wEoTpOyADO6bz3qj0w2pB5DAxLPaW+4YPPRz8bKJEwMgLuDH+EP4wiUKpVrSjmBspXwM1WUSxD4X2lh7r6S4Jo7HUHTravhHuzuODKQ9fLCz2xnwto0fGs2uYQ9sQjIXOAYCBRpFfdYVYE68Br+7Ply0pD6J+Xix2NGHSYHpjMUbtCaYqvqwDO8UmlYA7YNyLs9Q/oFAcRXDqtUEZYD0d0h0mOS3zFYMySD9SfYNVx4SX1g3i6f+xCKfgdDMQ5B+jERSzkcHgR4jRifKE89qxfUCd4j72LhmJp8FJYxe993mqa7TXgm/SHe8AJrnG0EIo2dbWrTzn9SeQmZQzCvl0dpE79Qkgfb8j9BXQvmSfF2f8UOcymWo6vjPBCuPc1lvpvISLjcVIY06pAvh9zp3KUMx
hc+K9qmyDTlglwj6feZZw/uyNux21y6CpSSITbEv+KuEJd5WaWphw5bqel0B/u/bwV/G2jkgbcwrb75HPGeJt+eNjyClvvJSsu6IgtkchIabdxo2zedcrilzAU9arLRJlcFNANSB0WilNrmmbc3jk+7RUmnc782nB1fz92JI6Cb7dvbncG1m4HHJy3Mh60ednlMO5GXTTnvIjAF2FaQp+MivMCin2YVwOn3N0GyKhVrbcD7EDrsXWA6yFls+AWAubYMJUS9ll/HeJhfSqD1zHyYQZ4Twic+W5kj4QCWYa0EWtnYs6NO76aRHOPSwbF8+/9XQmCUb4rHTXuPjxEL/uXktDE7K52sEq9v81wKz54urHiSGV1/TmARD6KSLhEejlrS5ydv4Du3YOdEXE5G2e4xPDQ1C1pwTEYKyLf/yrdeeKST9TiOxoj+bMIpwfyEq86+C4eajyNWZYnpx8IZviEo3jWOdiXFb53zyl7fIOeWBhWbrWmPjCazjSGofKGbFOz8jGTUOFqIpf94sqT1ent5X+nQTJ5rlrQPLXDfZbTrgP9pTZtZLlvezsYlIuqIFVWnnBOd0Y4IUqOw8e/3WYThftfGW5OkEp4sLMFd/4QnmoXqBtjf87fZLLkLV6ZZv26catrYFdv68doNSO6oTr7Gb1TtgrCJAgRhRarm/4MI3NpYD7D6oxspLjwPp85mND2g9jVtilz51L7NXpbGKcCc5o+e1MbpzZbKvzF6xSAtk2qKZ/i6P9p2tW/BMYG9O7wRDwRENWOwqA41BNdQpJ18a+emVgn5795o1AfLQVlm99so8izdz0sA8+fp1OjT9ziuRgsk9bOH1QhPYovNql0nQwulZJwYfQLz9okhXtM36aC2SGtNrE8IMbPdCIUdMQ0PznqgI17UPXjILvfdXBXCmbRT+CVbOHeeAx1juxSMwML9pWcymLRsAKowOPMj4meHF4relkA72QKR1uGEVIB/Wvl66WBDAcQyRgD', 'page_age': '3 days ago', 'title': 'San Diego, CA Weather Forecast | AccuWeather', 'type': 'web_search_result', 'url': 'https://www.accuweather.com/en/us/san-diego/92101/weather-forecast/347628'}, {'encrypted_content': 'Er0ECioIBBgCIiQ4ODk4YTFkYy0yMTNkLTRhNmYtOTljYi03ZTBlNTUzZDc0NWISDBGutrl0Yo/Ph3ytgxoMmv0PMZzhuX1bUMAaIjB8A47OiTekaB3+wTaHtqniXxv1Syc6JTwDJgaWgpy/MxfY+iOIS58xXsHOd7d/vrUqwAOQc9TMOqSb0LGZCGaGxIwiXICeBOu9KYGmqzNtUJm31yQlCGwdOFeznCgec0B2/jhLUZMpoDuoL3ayVUeBYC7IlyW4Lrh7DR8+MUOf27YxozzhkdWKDDDqCf0hN40Qn69bGemDOFejCWrTyCj8ffYa29MAKRcAvQv0TQUehsNOy0xKYzeBA2z9zjzZp9GOesvqBv5/2AxQrVwygU31bLYEeiqHqLrfeQ798HK9oDFlAiHaS/gWmpK1+0d3UA+Y5KCKipBmUxy15LkEmoG0MoG3/OEDZ7mOGuuY5RqCW4hkdT4dzm6OsKfmiLoDG/XI2HNkpcTnCd3tO0yUZdwY3lX4vSjAS2wvFxNsl7b8YbaKM/U8H3JaTVntwyh7GHWYezHYaYXNiMK05KcQMKJlP/bZx82sQkq83jl15k55Nf5NR/R/1PHi/fqzLDwPOSug5RJ8dwdduWd1SF4qcfStBaOZ4fRdTwbxJgiAC6gmxWMAGfXg29iVSNsTBYVaGk+GPrUSIFxU0x36xejO0Vaz7wHL6XPwjvXMhVrWbuf5xMXKRNW3XG3X0AEB7h40x+0MN23BmmJ9G6+JQ5Ngog6z2An4GAM=', 
'page_age': '3 days ago', 'title': 'San Diego weather forecast – NBC 7 San Diego', 'type': 'web_search_result', 'url': 'https://www.nbcsandiego.com/weather/'}, {'encrypted_content': 'EpcCCioIBBgCIiQ4ODk4YTFkYy0yMTNkLTRhNmYtOTljYi03ZTBlNTUzZDc0NWISDGlljWS7qosG4mSt3BoM6gy+Gh0f2zdXJ8kAIjAL5PfnNZR7Vex2zcFoqLody/xA+Yyoob62LVGMQ4+EhXd4CKgvxQEfpQLQgVvIH3UqmgGByQGU45dvgEJrKexyCEvra2CIAoYOTUiKNt6p7dVfp8++dfq28UU2mt5h10bL8ul8y0xeBu6new/tfwFJ6+GS/Bh6p+M+p87Jy65QQ/dgMhV+M7q3EAQUZjh642YMayI4nimm2yIwbLzNRXEszsp2kdmKKhxDpO3C5wp8fWLy5UpWvimjh5WSMnAzEpMoVDqD99FavgHvaPF7GAM=', 'page_age': '3 days ago', 'title': 'Weather Forecast and Conditions for San Diego, CA - The Weather Channel | Weather.com', 'type': 'web_search_result', 'url': 'https://weather.com/weather/today/l/San+Diego+CA?canonicalCityId=3b2b39ed755b459b725bf2a29c71d678'}, {'encrypted_content': 'ErEDCioIBBgCIiQ4ODk4YTFkYy0yMTNkLTRhNmYtOTljYi03ZTBlNTUzZDc0NWISDKDodoc/BpOij6btVBoMdtlDIIQ1rjzAl3XyIjAUXhC+GWPaQwx74WpJR7Xo+cVP0RjwhWd5LPSL+skOIE8sHVqz0Ry9jmq5jGri3wcqtAJrH5qNCAGieGFHQijAS1lfA2rn0EDH/tLBmC4nKW0MGQz/NLn49FuAcJuEZrw2149zcJ88XQ18hVEhWTkZ/BNEBujb9+M+QthIUUOPf6RplEX+DThq1KIuqbR0XBh71qThLM5ZfBjcvYS4429hpEP5DJo2FP0nqr9Ps8GX/hvIik33EXc4X3x16d649qqcRFwByb1DhE+NZVTiuu4WfljWXWBs+17ezkxxWazj3E9GtLBIL98fnMoOf4p9oTI8EgWe4RjcGqEaI6DWqlGquNgkc4IeqlzVkjWP4v90HoI7oegcDKtK/n+8pNaGqyBQizgf1NglM2gRBj3cZ9kt7cDX8p1KpmXrGsOSEK3mn6RAnjlsFQ2x7UjM7l980KCVKnqeguR0ZkCsvB+KM0AputjxlU+KrBgD', 'page_age': '3 days ago', 'title': 'San Diego, CA', 'type': 'web_search_result', 'url': 'https://www.weather.gov/sgx/'}, {'encrypted_content': 'EpkCCioIBBgCIiQ4ODk4YTFkYy0yMTNkLTRhNmYtOTljYi03ZTBlNTUzZDc0NWISDLXJ+1Ii8XIJ7AwW/xoMMbTKEdT8hw2J9bdSIjBYWJur7pQQqNijCyZjeN+VSYcaZe7xE/09hGS23TOjFLdvrOhGAMVSd3e2IDGUHCEqnAG6GJZO/sDcDEeEn/sobNJbNjM9VfNoJZEg+f/lZ+KXtMJELQ2Q14ot0iqQ0qKEjDxz3OMLkAYJg3uqeVMOq5QLgz6MNANEoCU+koTqIiYGZ++8Y0tKdtVD/Xrm0QW0fFkdJcN1PVt+6fN8aDJ5+95phOw5vEmRS9FRWg6aLDIwpxaNJ6z+Wbpff8zoSGUcPZFaFIUjAAU6qnx8kRoYAw==', 'page_age': None, 'title': 
'San Diego, CA Weather Forecast | KGTV | kgtv.com', 'type': 'web_search_result', 'url': 'https://www.10news.com/weather'}, {'encrypted_content': 'EpQCCioIBBgCIiQ4ODk4YTFkYy0yMTNkLTRhNmYtOTljYi03ZTBlNTUzZDc0NWISDHL0i7qnzUCM4zRRAhoMAv3yQhl4BrqFyb0nIjBsyvNyrS90lUKCO/9admV9eU8x96+nDdrodoVAe9lpT2fNstMu4pS2Aer192MKKnwqlwGnu1DfFtpu/sSeugS3SUtYxT+ulZYw+ZQwhqLMOf+UadP2PrCAEDh5CM2pF4fMHRdt9GBYTgOA/3ug6W2Ci30q5QXwdiNC/7X2KQV6QrU83m//vy92K8PO9dwqv4B0T8uCSY18O+BtHHlZrzmzPgIWm4OTP2+Blh+f3X+yRubR9JiBehxtgS/IGmdWhzv6QjIbcFy5bhNDGAM=', 'page_age': '1 week ago', 'title': 'Hourly Weather Forecast for San Diego, CA - The Weather Channel | Weather.com', 'type': 'web_search_result', 'url': 'https://weather.com/weather/hourbyhour/l/San+Diego+CA?canonicalCityId=3b2b39ed755b459b725bf2a29c71d678'}, {'encrypted_content': 'Ev0LCioIBBgCIiQ4ODk4YTFkYy0yMTNkLTRhNmYtOTljYi03ZTBlNTUzZDc0NWISDEYX+2bALYXzQ6feEhoMZaW1cXK7efdUNoDNIjBW2ZS4AEu//SwypPKF8m5uvw2Ats1bCxXnLXTD3v2d/bwrwTPMRx5SPoC7QDtYQRgqgAu0Z3nSWGW4pXnpMfjQUN05yjt++RoCW35vjAN1oiSW4U+P+03Gsf+4z+zKLRKkuPCj4CQmd0X0Z7rqi3uW8+8KMBw5yl90QrjDjivFDwxAkyCTGImrhyWEX+/JT9sJ3bSqevgnd9GrILv0lrqdfgkoIgdEuEMuIIJpVxPDqDoeYDgDGF3FRs7iultqHndd68JhMXVShxi+lQaINVYRtLfX1HWrKlb2M1PySslVAdUWbQqeyvRsRYJotIIN5uAg3EoYcGQBwFA0ifvG70jlcOWDR5Kx5w2A4mISadBHz9mDhaDxJeOCssJdSo465wYGSRzXiyIFO7OOViZSIoHNcPEmuHgqXSzrF4NN+u5PbLaS1fmuUesVPOwvKi1g3MJN/LNvB9MWzpMNcnP0BpDSD/hKGDpPKnxAWYVz1v3Nqwi/IuEmtpa4qWlWA+7t47NfFyxF3l89uaMk81qFoUn6L/P2q6w7VeEAnXRxNkRyKqeQ1FOC5CfgBdvhYxGqxlQ6Z4zwZvJUQ4DUxqjOYroV/WV65cbJ0ZtRT0z/S9OzK9k7xL///oNTIcnAPw9FfXnEiG3Rz8kX06gcgdjDLRkwDy+tYO+VUCmSlZpu8XBszZ7dOGBWbXPjDjYg4t5eqIcLF+uw1hZYoo8xNTGnsk2d/2DpXd8seY5sa4gAUzCwfXwy4MiMmmEonIG6Ki3jNMQ22tjPJMKqgQGy1EdsEqcUephoIq0etHo/J2HsyDvoLxJLbhtm6FsgqrQ4ITzHEt6wDWHOh7+iQSObM674FaPyvzvSEomriGPV7aYynVUAum3MVl9eoeP2W2Xm0qxTnuT3gKnlz766nI3ghO7RHpXAZYpo1gzeq1ZrxMYO0kI9iTGYKf+z+WyO9s9FFKAR4OxS/2fI9W3VHiGSqDpqI87Ez3A3o3uqKqOhvQI0qG+QfZFI1+ll0Wr1CbrGos9dgezqJ9/7GFCNP+cCfV3Y27GsWBots76htL6GD1PE7RsyelyQ9TDXblEQLLDKkH1jnngYq
+2RfNrZhKFvll+L2+oDVbk4n+Rb0+fLti1JQi91O8BqMSb7pD3Oa2sdMlSq0qoIFl0T6wanLUNgoIg5GE8R2QBukwWa9oW8jmaJDrKlqe79vWvCDjisPCdfo+X+h7YkRlMW/FoiDJ5X189IxckvoOlY7r6BVEdERcquxbsp0J0uB0JNuWdsa/XGlG1uecYGKmw8pGVwIkYTP2U+Bgfn0XuXKSLHxOAxbCWovcDrbjOA3yCyKQlCt3uyE7zpVlBGAMcUcSy+QbPYx5xdkz2mx/qkmUkjCvp70EbyjPeZehlcTlsqJ7ZIRi+rcH+b4fWoCUr90rUAqbcJMDM4tSLtqvVq4TXbj0pBseXFi9d5bnqByC5By6I3PiuQtkT+bxGYGFvhuEJhoC+CgCOobUIIcOPhZH5lIZTYMQcf3l3gfXhUbuu5WOglaOp66rjxNsaOZmsMfquyX1n+65uJSRpixbPiDDXv/Myig7vn2KdZtyoqz/E7tjcSFD0Vyvc6jKL73H3vs9nOWGDf0w6AJUoLXrhUW921N0+0A5bD4qUxLKZgBkFl33R+dKmKgtMs4iO4C81RPaQdLVSA/wm9jkSAOj6GKk1IZGFKOgiqMXMtItnmKktBbYSD+gJosUyeIAIgsUiP/r9uZA8AsKEMQ+qzQqIS3zSltWnMfN79ehN9MtGDnZc9pjMdXktpqKenYuF/QDttqgLfEVSJaQtXgiXPJtgXJtmxXuXWRLAaqsid07QEvP1N8DqeFIMKHOG33NDJigjxPFnUxbS7elt8ECcbRYn+gewDwydAb0jUCyL8nuE5ySll51eTQqFyfP3xnT092PTB5HRyFBUlD3mw5W168u2HGAM=', 'page_age': None, 'title': 'San Diego, CA 10-Day Weather Forecast | Weather Underground', 'type': 'web_search_result', 'url': 'https://www.wunderground.com/forecast/us/ca/san-diego'}, {'encrypted_content': 'Ev0ECioIBBgCIiQ4ODk4YTFkYy0yMTNkLTRhNmYtOTljYi03ZTBlNTUzZDc0NWISDFqrWvXcDBaRhAEIERoMEMwkIlYRNrbpob8qIjAGpulVwN56DZ4z4+PuEB3gdYm6QZ5Qvq+6RFvOLnEGDg+mpsIvb2TL4iauDizcEsUqgAT3/Sza241GQ6OmRz38xfUdXYqE5zatMSVDZ8Pzsd11fgBJg8voUcxKfPRPQWlA8FYW8+K4vDvv5cahR9rbVN829R/VQxq7YdcMsD9D5pbk9WButOmQLwI5/MsvnKo0MJ2e6kVcI1DVTwwHxoF0AJdkXUm/9x5danZ35QmA3FIR+7AOBCgtlOmwwIO1nTB+x6+8gf8DB0Jsgv6scqI/PqYxTYH8pZUEs3sA0pXQWJIcbtbWPRqxvfr6KYTtBAKJcr4XmsOMdi4OhVxzXhed1OjB1Jvebhi7Z+CbOOEZBuL2T7inzxoNlHvd3rhuZoDzpb9IWEMx/29fjwWhy+ztWi4Fdl25QgpWokl9cjDtq+6DEkf97+5ks6oVZU1dKoKq2y6S7cT3SL30IOzpdNox9Nh9FrOZeui/aT2zD7imuB+dyx9ffHi07Ulje7R9UWwBvDzRPMP4Ilwe4nrcQWWloso80KTfIixUTmel1bPKxVdrag/2+4kD6ahHpm2cLazWqTc+GvVY5+IRb2C5Y6pve4tWPVZC584151GHbdDZB2FamZJsdBlVL5VAF7Cz2U2mukxcuFF7Vwh3lpvpuqkldp4yWf2UJVpGMOOgDMgRi9qn+fAqE9jEIKoqXRhrVnKlgAlMGpnFfxu1IvxOLQQRqCNRltm5whIy5YhZa+7bllx+NRgD', 'page_age': '3 days ago', 'title': 'San Diego, CA Weather Conditions | 
Weather Underground', 'type': 'web_search_result', 'url': 'https://www.wunderground.com/weather/us/ca/san-diego'}, {'encrypted_content': 'Ev8CCioIBBgCIiQ4ODk4YTFkYy0yMTNkLTRhNmYtOTljYi03ZTBlNTUzZDc0NWISDOOoeUPUoDED8O83ZhoM30uUue3UQ1WQ6At6IjD+k7S7lYoiPFc5urIX1kWyFxcLj14bmNmI9YackQTQAU27o6IGbFdwaU+40URG2xQqggLYvUzF4uC9bk9e+C5CbKLU1WAsE7Nsb3YpFYsFiwMKqPamqEVSZE/xxUGQZ9w5tbtftdchhIs0bupnPj+G3BgO+ivdO9VcHBrqJwOk+hQ+XhQp3kLRubgZ9ozPpTZDK0xhpuFh18mFwsHgi/dIH478C1yYA50B0d/qL5C4V6NdYfDYsOXL0Ce1y4L8JxlUi//BPq23m1FfJeVo0vPYKTWm0D2TGTEnLygV9ZI9/wUF5H+gDYimj/QJZW1MWWf7bUQl85I1AyroDE2h8pae6FT0VTVWWIUJiBOWr48xbjieS+LHRxP7/XhpZpI2x0zz3WLi1BYdxHH04yxnjxLcQILrOuoYAw==', 'page_age': '5 days ago', 'title': 'National Weather Service', 'type': 'web_search_result', 'url': 'https://forecast.weather.gov/zipcity.php?inputstring=San+Diego,CA/'}], 'tool_use_id': 'srvtoolu_01YazfLE5GfK4ET2Z4rsfcuz', 'type': 'web_search_tool_result'}, {'citations': None, 'text': "Based on the search results, here's the current weather information for San Diego:\n\n", 'type': 'text'}, {'citations': [{'cited_text': 'TomorrowSat 06/21 High · 67 °F · 14% Precip. ', 'encrypted_index': 'Eo8BCioIBBgCIiQ4ODk4YTFkYy0yMTNkLTRhNmYtOTljYi03ZTBlNTUzZDc0NWISDKwATF21Da6JaeprrRoMR4n2dYDzq6Grz6kOIjDubzeZ5yWRJE4JTY/En0I9Ue7hW3xNHVY1Nb651cmT84e3E/CK5IihYjOjT0ri+bMqEzkJobhLc+mcf18NFKDVD/eXWJkYBA==', 'title': 'San Diego, CA Weather Conditions | Weather Underground', 'type': 'web_search_result_location', 'url': 'https://www.wunderground.com/weather/us/ca/san-diego'}, {'cited_text': 'High 67F. 
', 'encrypted_index': 'Eo8BCioIBBgCIiQ4ODk4YTFkYy0yMTNkLTRhNmYtOTljYi03ZTBlNTUzZDc0NWISDBZRJenhcEsF45RP3hoM9byW9q+/yaWr+G32IjAOpd1oMmv1j3abI8JHjvI1b5ZvEPvFR0RzxtY1jsxEyRv9IJkysuqnmbSwH3g32TgqE+VO+8ZZPNjG9foScv6WV79YDUAYBA==', 'title': 'San Diego, CA Weather Conditions | Weather Underground', 'type': 'web_search_result_location', 'url': 'https://www.wunderground.com/weather/us/ca/san-diego'}], 'text': 'Today (Saturday, June 21st) has a high of 67°F', 'type': 'text'}, {'citations': None, 'text': ' with ', 'type': 'text'}, {'citations': [{'cited_text': '/ 0.00 °in Cloudy early with partial sunshine expected late. ', 'encrypted_index': 'Eo8BCioIBBgCIiQ4ODk4YTFkYy0yMTNkLTRhNmYtOTljYi03ZTBlNTUzZDc0NWISDB4nMEZc/F7OchMulRoMGN8vERmCYSPIeM52IjBEWjKA8pyazlviIcAv3VdKXXstJzWI89Lth9GVQTgJe+aHBo6Z9RixGwe9H65I4iEqE6r+cdWvSki3A5y7liT1a2+EuPgYBA==', 'title': 'San Diego, CA Weather Conditions | Weather Underground', 'type': 'web_search_result_location', 'url': 'https://www.wunderground.com/weather/us/ca/san-diego'}], 'text': 'cloudy conditions early with partial sunshine expected later', 'type': 'text'}, {'citations': None, 'text': '. ', 'type': 'text'}, {'citations': [{'cited_text': 'Winds SSW at 10 to 15 mph. ', 'encrypted_index': 'Eo8BCioIBBgCIiQ4ODk4YTFkYy0yMTNkLTRhNmYtOTljYi03ZTBlNTUzZDc0NWISDB45Fg4fI2lZagCjPRoM44KF0iEojWBopwrHIjBhn7dgVM1Ff/cqtJ/lTjfQ/jwVGUPTMJqvlWxrNi2vlcn6nHSZw+cUefs/Ruz/GNgqE6jMHyEpfOkI5cEy1xxL1qwAZvIYBA==', 'title': 'San Diego, CA Weather Conditions | Weather Underground', 'type': 'web_search_result_location', 'url': 'https://www.wunderground.com/weather/us/ca/san-diego'}], 'text': 'Winds are from the SSW at 10 to 15 mph', 'type': 'text'}, {'citations': None, 'text': '.\n\n', 'type': 'text'}, {'citations': [{'cited_text': 'Low 62F. Winds SSW at 10 to 15 mph. 
', 'encrypted_index': 'EpEBCioIBBgCIiQ4ODk4YTFkYy0yMTNkLTRhNmYtOTljYi03ZTBlNTUzZDc0NWISDGs1z0BKFApU+3H2sxoMHemxpcDkrIGd4oSRIjAWWpByOKdjFHplptGY/sqWHuZVPAxLushiCw0j+aLQ3+bpwzSikTDw6Eb075la0fYqFeOQNDFP6pV830agKs390gtIQ9hn2RgE', 'title': 'San Diego, CA Weather Conditions | Weather Underground', 'type': 'web_search_result_location', 'url': 'https://www.wunderground.com/weather/us/ca/san-diego'}], 'text': "Tonight's low will be around 62°F with overcast conditions and SSW winds at 10 to 15 mph", 'type': 'text'}, {'citations': None, 'text': '.\n\n', 'type': 'text'}, {'citations': [{'cited_text': 'Cooler today with highs around 4-7 degrees below normal for most areas, still slightly above normal in the lower deserts. ', 'encrypted_index': 'Eo8BCioIBBgCIiQ4ODk4YTFkYy0yMTNkLTRhNmYtOTljYi03ZTBlNTUzZDc0NWISDKmS4xmv51SRhiupjBoM1jB2H6k2FU08pQ3GIjAV+DJ6rRuV2b6Q7ignq8PgwpnBySIrcI0FjJyUkxWXMUqYQYSPaTtiNLaSUJi1kh4qE+SXxbI2WryAsUjDE9uQS3F/91QYBA==', 'title': 'San Diego, CA', 'type': 'web_search_result_location', 'url': 'https://www.weather.gov/sgx/'}], 'text': 'Temperatures are running about 4-7 degrees below normal for most areas', 'type': 'text'}, {'citations': None, 'text': ', making it a cooler day than typical for this time of year in San Diego.\n\n', 'type': 'text'}, {'citations': [{'cited_text': 'The air has reached a high level of pollution and is unhealthy for sensitive groups. 
Reduce time spent outside if you are feeling symptoms such as dif...', 'encrypted_index': 'EpEBCioIBBgCIiQ4ODk4YTFkYy0yMTNkLTRhNmYtOTljYi03ZTBlNTUzZDc0NWISDK8D4vc8KfCrlx9emxoMMLnROCL/VkErH3i/IjA/9/wpz9prNlBSXXcsTY2S+xdsPRDTNhPl7sYwMQoZ1OAvK8zbv5m0QT4Z0rVOkO8qFX/4Kq1l4tkCVlYlPmm9TE6ED9r0ABgE', 'title': 'San Diego, CA Weather Forecast | AccuWeather', 'type': 'web_search_result_location', 'url': 'https://www.accuweather.com/en/us/san-diego/92101/weather-forecast/347628'}], 'text': 'Air quality is currently at an unhealthy level for sensitive groups, so those with breathing sensitivities should reduce outdoor time if experiencing symptoms', 'type': 'text'}, {'citations': None, 'text': '.', 'type': 'text'}]`
980 | - model: `claude-sonnet-4-20250514`
981 | - role: `assistant`
982 | - stop_reason: `end_turn`
983 | - stop_sequence: `None`
984 | - type: `message`
985 | - usage:
986 | `{'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 7302, 'output_tokens': 325, 'server_tool_use': {'web_search_requests': 1}, 'service_tier': 'standard'}`
987 |
988 |
989 |
990 | The [`search_conf()`](https://claudette.answer.ai/core.html#search_conf)
991 | function creates the necessary configuration for the web search tool.
992 | You can customize it with several parameters:
993 |
994 | ``` python
995 | search_conf(
996 | max_uses=None, # Maximum number of searches Claude can perform
997 | allowed_domains=None, # List of domains to search within (e.g., ['wikipedia.org'])
998 | blocked_domains=None, # List of domains to exclude (e.g., ['twitter.com'])
999 | user_location=None # Location context for search
1000 | )
1001 | ```
1002 |
1003 | When Claude uses the web search tool, the response includes citations
1004 | linking to the source information. Claudette automatically formats these
1005 | citations and provides them as footnotes in the response.
1006 |
1007 | Web search usage is tracked separately from normal token usage in the
1008 | [`usage`](https://claudette.answer.ai/core.html#usage) statistics:
1009 |
1010 | ``` python
1011 | chat.use
1012 | ```
1013 |
1014 | In: 7302; Out: 325; Cache create: 0; Cache read: 0; Total Tokens: 7627; Search: 1
1015 |
Web search requests have their own pricing. As of May 2025, web searches
cost \$10 per 1,000 requests.
1018 |
1019 | ## Text Editor Tool
1020 |
1021 | Claudette provides support for Anthropic’s special Text Editor Tool,
1022 | which allows Claude to view and modify files directly. Unlike regular
1023 | function-calling tools, the text editor tool uses a predefined schema
1024 | built into Claude’s model.
1025 |
1026 | Important notes about the text editor tool:
1027 |
1028 | - It’s schema-less - you provide a configuration but not a schema
1029 | - It uses type identifiers like “text_editor_20250124” specific to
1030 | Claude models
1031 | - You must implement a dispatcher function (in Claudette, it’s
1032 | [`str_replace_based_edit_tool`](https://claudette.answer.ai/text_editor.html#str_replace_based_edit_tool))
1033 | - Different commands route through this single dispatcher function
1034 |
1035 | The text editor tool allows Claude to:
1036 |
1037 | - View file or directory contents
1038 | - Create new files
1039 | - Insert text at specific line numbers
1040 | - Replace text within files
1041 |
1042 | ``` python
1043 | from claudette.text_editor import text_editor_conf, str_replace_based_edit_tool
1044 | from toolslm.funccall import mk_ns
1045 |
1046 | # Create a chat with the text editor tool
1047 | chat = Chat(model, sp='Be concise in your responses.',
1048 | tools=[text_editor_conf['sonnet']], ns=mk_ns(str_replace_based_edit_tool))
1049 |
1050 | # Now Claude can explore files
1051 | for o in chat.toolloop('Please explain concisely what my _quarto.yml does. Use your tools, and explain before each usage what you are doing.'):
1052 | if not isinstance(o,dict): display(o)
1053 | ```
1054 |
1055 | I’ll examine your \_quarto.yml file to explain what it does. Let me
1056 | start by looking at the current directory to locate the file.
1057 |
1058 |
1059 |
1060 | - id: `msg_01S59jijaZGLN9jq525KoDRu`
1061 | - content:
1062 | `[{'citations': None, 'text': "I'll examine your _quarto.yml file to explain what it does. Let me start by looking at the current directory to locate the file.", 'type': 'text'}, {'id': 'toolu_01NxKxWNHFroLYxTULzFnBAp', 'input': {'command': 'view', 'path': '.'}, 'name': 'str_replace_based_edit_tool', 'type': 'tool_use'}]`
1063 | - model: `claude-sonnet-4-20250514`
1064 | - role: `assistant`
1065 | - stop_reason: `tool_use`
1066 | - stop_sequence: `None`
1067 | - type: `message`
1068 | - usage:
1069 | `{'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 1120, 'output_tokens': 106, 'server_tool_use': None, 'service_tier': 'standard'}`
1070 |
1071 |
1072 |
1073 | Now I can see the \_quarto.yml file exists. Let me examine its contents
1074 | to understand what it configures.
1075 |
1076 |
1077 |
1078 | - id: `msg_015w8s5yw4RBs4dXgfLx2CDS`
1079 | - content:
1080 | `[{'citations': None, 'text': 'Now I can see the _quarto.yml file exists. Let me examine its contents to understand what it configures.', 'type': 'text'}, {'id': 'toolu_01ToUv8CSQH69Kx2ahBSwcGf', 'input': {'command': 'view', 'path': '/_Users/jhoward/aai-ws/claudette/_quarto.yml'}, 'name': 'str_replace_based_edit_tool', 'type': 'tool_use'}]`
1081 | - model: `claude-sonnet-4-20250514`
1082 | - role: `assistant`
1083 | - stop_reason: `tool_use`
1084 | - stop_sequence: `None`
1085 | - type: `message`
1086 | - usage:
1087 | `{'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 2738, 'output_tokens': 120, 'server_tool_use': None, 'service_tier': 'standard'}`
1088 |
1089 |
1090 |
1091 | Let me use the correct path:
1092 |
1093 |
1094 |
1095 | - id: `msg_01Baaa1otgra5CHimvP9Ja5s`
1096 | - content:
1097 | `[{'citations': None, 'text': 'Let me use the correct path:', 'type': 'text'}, {'id': 'toolu_01FTCbeJLRdAyfqqnq3pAQFa', 'input': {'command': 'view', 'path': '/Users/jhoward/aai-ws/claudette/_quarto.yml'}, 'name': 'str_replace_based_edit_tool', 'type': 'tool_use'}]`
1098 | - model: `claude-sonnet-4-20250514`
1099 | - role: `assistant`
1100 | - stop_reason: `tool_use`
1101 | - stop_sequence: `None`
1102 | - type: `message`
1103 | - usage:
1104 | `{'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 2896, 'output_tokens': 102, 'server_tool_use': None, 'service_tier': 'standard'}`
1105 |
1106 |
1107 |
Your `_quarto.yml` file configures a Quarto website project with the
following key settings:

**Project Configuration:**

- Sets up a website project type
- Includes all `.txt` files as resources
- Configures preview server on port 3000 without auto-opening browser

**HTML Formatting:**

- Uses the “cosmo” theme with custom CSS from `styles.css`
- Enables table of contents, code tools, and syntax highlighting
- Sets a custom layout with wider body (1800px) and narrower sidebar
  (180px)
- Uses “arrow” highlight style with custom code block styling
- Keeps markdown files during rendering

**Website Features:**

- Enables social media cards (Twitter and Open Graph)
- Adds a search-enabled navbar with primary background
- Uses a floating sidebar style
- Links to GitHub issues via repo-actions

**Metadata Sources:**

- Pulls additional configuration from `nbdev.yml` and `sidebar.yml`
  files

This appears to be set up for a documentation website, likely for a
Python package using nbdev (given the nbdev.yml reference), with a focus
on displaying code and technical content with good readability and
navigation.
1132 |
1133 |
1134 |
1135 | - id: `msg_01Ba23FNyw6xy7KdtTVnBdk6`
1136 | - content:
1137 | `[{'citations': None, 'text': 'Your`\_quarto.yml`file configures a Quarto website project with the following key settings:\n\n**Project Configuration:**\n- Sets up a website project type\n- Includes all`.txt`files as resources\n- Configures preview server on port 3000 without auto-opening browser\n\n**HTML Formatting:**\n- Uses the "cosmo" theme with custom CSS from`styles.css`\n- Enables table of contents, code tools, and syntax highlighting\n- Sets a custom layout with wider body (1800px) and narrower sidebar (180px)\n- Uses "arrow" highlight style with custom code block styling\n- Keeps markdown files during rendering\n\n**Website Features:**\n- Enables social media cards (Twitter and Open Graph)\n- Adds a search-enabled navbar with primary background\n- Uses a floating sidebar style\n- Links to GitHub issues via repo-actions\n\n**Metadata Sources:**\n- Pulls additional configuration from`nbdev.yml`and`sidebar.yml`files\n\nThis appears to be set up for a documentation website, likely for a Python package using nbdev (given the nbdev.yml reference), with a focus on displaying code and technical content with good readability and navigation.', 'type': 'text'}]`
1138 | - model: `claude-sonnet-4-20250514`
1139 | - role: `assistant`
1140 | - stop_reason: `end_turn`
1141 | - stop_sequence: `None`
1142 | - type: `message`
1143 | - usage:
1144 | `{'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 3235, 'output_tokens': 267, 'server_tool_use': None, 'service_tier': 'standard'}`
1145 |
1146 |
1147 |
1148 | ``` python
1149 | chat.use
1150 | ```
1151 |
1152 | In: 9989; Out: 595; Cache create: 0; Cache read: 0; Total Tokens: 10584; Search: 0
1153 |
1154 | ## Other model providers
1155 |
1156 | You can also use 3rd party providers of Anthropic models, as shown here.
1157 |
1158 | ### Amazon Bedrock
1159 |
1160 | These are the models available through Bedrock:
1161 |
1162 | ``` python
1163 | models_aws
1164 | ```
1165 |
1166 | ['anthropic.claude-sonnet-4-20250514-v1:0',
1167 | 'claude-3-5-haiku-20241022',
1168 | 'claude-3-7-sonnet-20250219',
1169 | 'anthropic.claude-3-opus-20240229-v1:0',
1170 | 'anthropic.claude-3-5-sonnet-20241022-v2:0']
1171 |
1172 | To use them, call `AnthropicBedrock` with your access details, and pass
1173 | that to [`Client`](https://claudette.answer.ai/core.html#client):
1174 |
1175 | ``` python
1176 | from anthropic import AnthropicBedrock
1177 | ```
1178 |
1179 | ``` python
1180 | ab = AnthropicBedrock(
1181 | aws_access_key=os.environ['AWS_ACCESS_KEY'],
1182 | aws_secret_key=os.environ['AWS_SECRET_KEY'],
1183 | )
1184 | client = Client(models_aws[-1], ab)
1185 | ```
1186 |
1187 | Now create your [`Chat`](https://claudette.answer.ai/core.html#chat)
1188 | object, passing this client to the `cli` parameter – and from then on,
1189 | everything is identical to the previous examples.
1190 |
1191 | ``` python
1192 | chat = Chat(cli=client)
1193 | chat("I'm Jeremy")
1194 | ```
1195 |
1196 | ### Google Vertex
1197 |
1198 | These are the models available through Vertex:
1199 |
1200 | ``` python
1201 | models_goog
1202 | ```
1203 |
1204 | To use them, call `AnthropicVertex` with your access details, and pass
1205 | that to [`Client`](https://claudette.answer.ai/core.html#client):
1206 |
1207 | ``` python
1208 | from anthropic import AnthropicVertex
1209 | import google.auth
1210 | ```
1211 |
1212 | ``` python
1213 | # project_id = google.auth.default()[1]
1214 | # gv = AnthropicVertex(project_id=project_id, region="us-east5")
1215 | # client = Client(models_goog[-1], gv)
1216 | ```
1217 |
1218 | ``` python
1219 | chat = Chat(cli=client)
1220 | chat("I'm Jeremy")
1221 | ```
1222 |
1223 | ## Extensions
1224 |
1225 | - [Pydantic Structured
1226 |   Output](https://github.com/tom-pollak/claudette-pydantic)
1227 |
1228 | [^1]: https://www.wunderground.com/weather/us/ca/san-diego “TomorrowSat
1229 | 06/21 High · 67 °F · 14% Precip.”
1230 |
1231 | [^2]: https://www.wunderground.com/weather/us/ca/san-diego “High 67F.”
1232 |
1233 | [^3]: https://www.wunderground.com/weather/us/ca/san-diego “/ 0.00 °in
1234 | Cloudy early with partial sunshine expected late.”
1235 |
1236 | [^4]: https://www.wunderground.com/weather/us/ca/san-diego “Winds SSW at
1237 | 10 to 15 mph.”
1238 |
1239 | [^5]: https://www.wunderground.com/weather/us/ca/san-diego “Low 62F.
1240 | Winds SSW at 10 to 15 mph.”
1241 |
1242 | [^6]: https://www.weather.gov/sgx/ “Cooler today with highs around 4-7
1243 | degrees below normal for most areas, still slightly above normal in
1244 | the lower deserts.”
1245 |
1246 | [^7]: https://www.accuweather.com/en/us/san-diego/92101/weather-forecast/347628
1247 | “The air has reached a high level of pollution and is unhealthy for
1248 | sensitive groups. Reduce time spent outside if you are feeling
1249 | symptoms such as dif…”
1250 |
--------------------------------------------------------------------------------