# GroqCall
GroqCall is a proxy server that provides function calling for Groq's lightning-fast Language Processing Unit (LPU) and other AI providers. Additionally, the upcoming FuncyHub will offer a wide range of built-in functions, hosted in the cloud, making it easier to create AI assistants without the need to maintain function schemas in the codebase or execute them through multiple calls.

Check out the GitHub repo for more info: https://github.com/unclecode/groqcall
Groq is a startup that designs highly specialized processor chips aimed specifically at running inference on large language models. They've introduced what they call the Language Processing Unit (LPU), and the speed is astounding: capable of producing 500 to 800 tokens per second or more. I've become a big fan of Groq and their community; I admire what they're doing. It feels like after discovering electricity, the next challenge is moving it around quickly and efficiently. Groq is doing just that for artificial intelligence, making it easily accessible everywhere. They've opened up their API to the cloud, but as of now, it lacks function calling.

Unable to wait for this feature, I built a proxy that enables function calls using the OpenAI interface, allowing it to be called from any library. This engineering workaround has proven immensely useful in my company for various projects. You can explore and play around with it in the GitHub repository linked above; I've included some examples for you to check out.
To run this proxy locally on your own machine, follow these steps:
```bash
git clone https://github.com/unclecode/groqcall.git
cd groqcall
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
uvicorn --app-dir app/ main:app --reload
```
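Once the server is up, uvicorn serves it on its default port (8000). You should then be able to point the examples below at `http://localhost:8000/proxy/groq/v1` instead of the hosted URL; this assumes the local server exposes the same `/proxy/groq/v1` route prefix as the hosted one.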
For your convenience, I have already set up a server that you can use temporarily. This allows you to quickly start using the proxy without having to run it locally.
To use the pre-built server, simply make requests to the following base URL:

`https://groqcall.ai/proxy/groq/v1`
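Since the proxy speaks the OpenAI chat-completions protocol, any OpenAI-compatible client should work once you override the base URL. Here is a minimal sketch using the official `openai` Python package (v1-style client); the model name is the one used throughout the examples below, and the key is your Groq API key, not an OpenAI key:

```python
from openai import OpenAI

# Minimal sketch: point an OpenAI-compatible client at the GroqCall proxy.
client = OpenAI(
    api_key="YOUR_GROQ_API_KEY",   # your Groq API key
    base_url="https://groqcall.ai/proxy/groq/v1",
)

response = client.chat.completions.create(
    model="mixtral-8x7b-32768",
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
)
print(response.choices[0].message.content)
```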
This README is organized into three main sections, each showcasing a different way to use GroqCall.ai: sending raw POST requests with client-side tools, calling the proxy's built-in functions by name, and integrating with PhiData.
```bash
# The following libraries are optional if you're interested in using PhiData
# or managing your tools on the client side.
!pip install phidata > /dev/null
!pip install openai > /dev/null
!pip install duckduckgo-search > /dev/null
```
Check out the example file `example_2.py` for a full implementation of the following code. In this first method, the tool schemas and the tool execution both stay on the client side: you talk to the proxy with plain POST requests and run the functions yourself.
```python
from duckduckgo_search import DDGS
import requests, os
import json
from google.colab import userdata  # Colab secret storage; use os.environ locally

# Here you pass your own GROQ API key
api_key = userdata.get("GROQ_API_KEY")
header = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json"
}
proxy_url = "https://groqcall.ai/proxy/groq/v1/chat/completions"


def duckduckgo_search(query, max_results=None):
    """
    Use this function to search DuckDuckGo for a query.
    """
    with DDGS() as ddgs:
        return [r for r in ddgs.text(query, safesearch='off', max_results=max_results)]


def duckduckgo_news(query, max_results=None):
    """
    Use this function to get the latest news from DuckDuckGo.
    """
    with DDGS() as ddgs:
        return [r for r in ddgs.news(query, safesearch='off', max_results=max_results)]


# Map tool names (as the model will call them) to local Python functions.
function_map = {
    "duckduckgo_search": duckduckgo_search,
    "duckduckgo_news": duckduckgo_news,
}

request = {
    "messages": [
        {
            "role": "system",
            "content": "YOU MUST FOLLOW THESE INSTRUCTIONS CAREFULLY.\n<instructions>\n1. Use markdown to format your answers.\n</instructions>"
        },
        {
            "role": "user",
            "content": "What's happening in France? Summarize top stories with sources, very short and concise."
        }
    ],
    "model": "mixtral-8x7b-32768",
    "tool_choice": "auto",
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "duckduckgo_search",
                "description": "Use this function to search DuckDuckGo for a query.\n\nArgs:\n    query(str): The query to search for.\n    max_results (optional, default=5): The maximum number of results to return.\n\nReturns:\n    The result from DuckDuckGo.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "query": {"type": "string"},
                        "max_results": {"type": ["number", "null"]}
                    }
                }
            }
        },
        {
            "type": "function",
            "function": {
                "name": "duckduckgo_news",
                "description": "Use this function to get the latest news from DuckDuckGo.\n\nArgs:\n    query(str): The query to search for.\n    max_results (optional, default=5): The maximum number of results to return.\n\nReturns:\n    The latest news from DuckDuckGo.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "query": {"type": "string"},
                        "max_results": {"type": ["number", "null"]}
                    }
                }
            }
        }
    ]
}

# First request: the model either answers directly or asks for tool calls.
response = requests.post(
    proxy_url,
    headers=header,
    json=request
)
if response.status_code == 200:
    res = response.json()
    message = res['choices'][0]['message']
    tools_response_messages = []
    if not message['content'] and 'tool_calls' in message:
        # Execute each requested tool locally and collect the results.
        for tool_call in message['tool_calls']:
            tool_name = tool_call['function']['name']
            tool_args = json.loads(tool_call['function']['arguments'])
            if tool_name not in function_map:
                print(f"Error: {tool_name} is not a valid function name.")
                continue
            tool_func = function_map[tool_name]
            tool_response = tool_func(**tool_args)
            tools_response_messages.append({
                "role": "tool", "content": json.dumps(tool_response)
            })

    if tools_response_messages:
        # Second request: send the tool results back so the model can
        # compose its final answer.
        request['messages'] += tools_response_messages
        response = requests.post(
            proxy_url,
            headers=header,
            json=request
        )
        if response.status_code == 200:
            res = response.json()
            print(res['choices'][0]['message']['content'])
        else:
            print("Error:", response.status_code, response.text)
    else:
        print(message['content'])
else:
    print("Error:", response.status_code, response.text)
```
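To make the two-step flow concrete, here is roughly the shape of the intermediate assistant `message` the script above expects when the model decides to call a tool (OpenAI-style `tool_calls`; the values are illustrative, not real output):

```python
# Illustrative only: the intermediate message the first request may return.
message = {
    "role": "assistant",
    "content": None,
    "tool_calls": [
        {
            "id": "call_abc123",  # hypothetical id
            "type": "function",
            "function": {
                "name": "duckduckgo_news",
                "arguments": "{\"query\": \"France\", \"max_results\": 5}",  # JSON-encoded string
            },
        }
    ],
}
```

The script executes each requested function locally, appends the results as `tool` messages, and sends a second request so the model can compose the final answer.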
Check out the example file `example_3.py` for a full implementation of the following code.
In this method, we only need to provide the function's name, which consists of two parts acting as a sort of namespace. The first part identifies the library or toolkit containing the functions, and the second part specifies the function's name, assuming it's already available on the proxy server. I aim to collaborate with the community to incorporate all typical functions, eliminating the need to pass a schema. Without having to handle function calls ourselves, a single request to the proxy lets it identify and execute the functions, retrieve responses from the large language model, and return the results to us. Thanks to Groq, all of this happens in just seconds.
```python
import requests, os
from google.colab import userdata  # Colab secret storage; use os.environ locally

api_key = userdata.get("GROQ_API_KEY")
header = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json"
}

proxy_url = "https://groqcall.ai/proxy/groq/v1/chat/completions"

# No schemas and no local functions this time: the tools are referenced by
# their namespaced names and executed on the proxy server.
request = {
    "messages": [
        {
            "role": "system",
            "content": "YOU MUST FOLLOW THESE INSTRUCTIONS CAREFULLY.\n<instructions>\n1. Use markdown to format your answers.\n</instructions>",
        },
        {
            "role": "user",
            "content": "What's happening in France? Summarize top stories with sources, very short and concise. Also please search about the history of France as well.",
        },
    ],
    "model": "mixtral-8x7b-32768",
    "tool_choice": "auto",
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "duckduck.search",
            },
        },
        {
            "type": "function",
            "function": {
                "name": "duckduck.news",
            },
        },
    ],
}

# A single request: the proxy runs the tools and returns the final answer.
response = requests.post(
    proxy_url,
    headers=header,
    json=request,
)

if response.status_code == 200:
    res = response.json()
    print(res["choices"][0]["message"]["content"])
else:
    print("Error:", response.status_code, response.text)
```
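Note the contrast with the previous example: there is no local `function_map`, no schema definitions, and no second round trip. The proxy resolves `duckduck.search` and `duckduck.news` to its built-in implementations, runs them server-side, and returns the finished answer in a single response.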
Check out the example file `example_1.py` for a full implementation of the following code.
PhiData is a favorite of mine for creating AI assistants, thanks to its beautifully simple interface, unlike the complexity of LangChain and LlamaIndex. I use it for many projects and want to give kudos to their team. It's open source, and I recommend everyone check it out. You can explore more at this link: https://github.com/phidatahq/phidata.
```python
from google.colab import userdata  # Colab secret storage; use os.environ locally
from phi.llm.openai.like import OpenAILike
from phi.assistant import Assistant
from phi.tools.duckduckgo import DuckDuckGo

# Point PhiData's OpenAI-compatible LLM wrapper at the GroqCall proxy.
my_groq = OpenAILike(
    model="mixtral-8x7b-32768",
    api_key=userdata.get("GROQ_API_KEY"),
    base_url="https://groqcall.ai/proxy/groq/v1"
)
assistant = Assistant(
    llm=my_groq,
    tools=[DuckDuckGo()], show_tool_calls=True, markdown=True
)
assistant.print_response("What's happening in France? Summarize top stories with sources, very short and concise.", stream=False)
```
I am excited to extend and grow this repository by adding more built-in functions and integrating additional services. If you are interested in contributing to this project and being a part of its development, I would love to collaborate with you! I plan to create a Discord channel for this project, where we can discuss ideas, share knowledge, and work together to enhance the repository.
Here's how you can get involved: if you have any ideas or suggestions, or would like to discuss potential contributions, feel free to reach out to me.
I'm open to collaboration and excited to see how we can work together to enhance this project and provide value to the community. Let's connect and explore how we can help each other!
Together, let's make this repository even more awesome! 🚀