├── .github
└── FUNDING.yml
├── LICENSE
├── README.md
├── gpti
├── __init__.py
└── util.py
├── requirements.txt
└── setup.py
/.github/FUNDING.yml:
--------------------------------------------------------------------------------
1 | github: [yandricr]
2 | ko_fi: yandricr
3 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2024 yandricr
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy of
6 | this software and associated documentation files (the "Software"), to deal in
7 | the Software without restriction, including without limitation the rights to
8 | use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
9 | of the Software, and to permit persons to whom the Software is furnished to do
10 | so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 |
2 | # GPTI
3 |
5 |
6 | This package simplifies your interaction with various GPT models, eliminating the need for tokens or other methods to access GPT. It also lets you generate images with several AI services, including DALL·E and Prodia, all without restrictions or limits.
7 |
8 | ## Installation
9 |
10 | You can install the package via pip:
11 |
12 | ```bash
13 | pip install gpti
14 | ```
15 |
16 | ## Available Models
17 |
18 | GPTI provides access to a variety of artificial intelligence models to meet various needs. Currently, the available models include:
19 |
20 | - [**ChatGPT**](#gpt)
21 | - [**GPT-3.5-Turbo**](#gpt-v2)
22 | - [**ChatGPT Web**](#gptweb)
23 | - [**GPT-4o**](#gpt-4o)
24 | - [**Bing**](#bing)
25 | - [**LLaMA-3.1**](#llama-3.1)
26 | - [**Blackbox**](#blackbox)
27 | - [**AI Images**](#ai-images)
28 |
29 | ## API Key
30 |
31 | If you want to access the premium models, enter your credentials. You can obtain them by [clicking here](https://nexra.aryahcr.cc/api-key/en).
32 |
33 | ```python
34 | from gpti import nexra
35 |
36 | nexra("user-xxxxxxxx", "nx-xxxxxxx-xxxxx-xxxxx")
37 | ```
38 |
39 |
40 | ## Usage GPT
41 |
42 | ```python
43 | import json
44 | from gpti import gpt
45 |
46 | res = gpt.v1(messages=[
47 | {
48 | "role": "assistant",
49 | "content": "Hello! How are you today?"
50 | },
51 | {
52 | "role": "user",
53 | "content": "Hello, my name is Yandri."
54 | },
55 | {
56 | "role": "assistant",
57 | "content": "Hello, Yandri! How are you today?"
58 | }
59 | ], prompt="Can you repeat my name?", model="GPT-4", markdown=False)
60 |
61 | if res.error() != None:
62 | print(json.dumps(res.error()))
63 | else:
64 | print(json.dumps(res.result()))
65 | ```
66 |
67 | #### Models
68 |
69 | Select one of these available models in the API to enhance your experience.
70 |
71 | - gpt-4
72 | - gpt-4-0613
73 | - gpt-4-32k
74 | - gpt-4-0314
75 | - gpt-4-32k-0314
76 | - gpt-3.5-turbo
77 | - gpt-3.5-turbo-16k
78 | - gpt-3.5-turbo-0613
79 | - gpt-3.5-turbo-16k-0613
80 | - gpt-3.5-turbo-0301
81 | - text-davinci-003
82 | - text-davinci-002
83 | - code-davinci-002
84 | - gpt-3
85 | - text-curie-001
86 | - text-babbage-001
87 | - text-ada-001
88 | - davinci
89 | - curie
90 | - babbage
91 | - ada
92 | - babbage-002
93 | - davinci-002
94 |
95 |
96 | ## Usage GPT v2
97 |
98 | This works much like GPT v1, with the added capability of generating real-time responses via streaming using gpt-3.5-turbo.
99 |
100 | ```python
101 | import json
102 | from gpti import gpt
103 |
104 | res = gpt.v2(messages=[
105 | {
106 | "role": "assistant",
107 | "content": "Hello! How are you today?"
108 | },
109 | {
110 | "role": "user",
111 | "content": "Hello, my name is Yandri."
112 | },
113 | {
114 | "role": "assistant",
115 | "content": "Hello, Yandri! How are you today?"
116 | },
117 | {
118 | "role": "user",
119 | "content": "Can you repeat my name?"
120 | }
121 | ], markdown=False, stream=False)
122 |
123 | if res.error() != None:
124 | print(json.dumps(res.error()))
125 | else:
126 | print(json.dumps(res.result()))
127 | ```
128 |
129 | ## Usage GPT v2 Streaming
130 |
131 | ```python
132 | import json
133 | from gpti import gpt
134 |
135 | res = gpt.v2(messages=[
136 | {
137 | "role": "assistant",
138 | "content": "Hello! How are you today?"
139 | },
140 | {
141 | "role": "user",
142 | "content": "Hello, my name is Yandri."
143 | },
144 | {
145 | "role": "assistant",
146 | "content": "Hello, Yandri! How are you today?"
147 | },
148 | {
149 | "role": "user",
150 | "content": "Can you repeat my name?"
151 | }
152 | ], markdown=False, stream=True)
153 |
154 | if res.error() != None:
155 | print(json.dumps(res.error()))
156 | else:
157 | for chunk in res.stream():
158 | print(json.dumps(chunk))
159 | ```
160 |
161 |
162 | ## Usage GPT Web
163 |
164 | GPT-4 has been enhanced with web access, but errors may arise due to the technological complexity involved. Exercise caution when relying entirely on its accuracy for online queries.
165 |
166 | ```python
167 | import json
168 | from gpti import gpt
169 |
170 | res = gpt.web(prompt="Are you familiar with the movie Wonka released in 2023?", markdown=False)
171 |
172 | if res.error() != None:
173 | print(json.dumps(res.error()))
174 | else:
175 | print(json.dumps(res.result()))
176 | ```
177 |
178 |
179 | ## Usage GPT-4o
180 |
181 | ```python
182 | import json
183 | from gpti import gpt
184 |
185 | res = gpt.v3(messages=[
186 | {
187 | "role": "assistant",
188 | "content": "Hello! How are you today?"
189 | },
190 | {
191 | "role": "user",
192 | "content": "Hello, my name is Yandri."
193 | },
194 | {
195 | "role": "assistant",
196 | "content": "Hello, Yandri! How are you today?"
197 | },
198 | {
199 | "role": "user",
200 | "content": "Can you repeat my name?"
201 | }
202 | ], markdown=False, stream=False)
203 |
204 | if res.error() != None:
205 | print(json.dumps(res.error()))
206 | else:
207 | print(json.dumps(res.result()))
208 | ```
209 |
210 | ## Usage GPT-4o Streaming
211 |
212 | ```python
213 | import json
214 | from gpti import gpt
215 |
216 | res = gpt.v3(messages=[
217 | {
218 | "role": "assistant",
219 | "content": "Hello! How are you today?"
220 | },
221 | {
222 | "role": "user",
223 | "content": "Hello, my name is Yandri."
224 | },
225 | {
226 | "role": "assistant",
227 | "content": "Hello, Yandri! How are you today?"
228 | },
229 | {
230 | "role": "user",
231 | "content": "Can you repeat my name?"
232 | }
233 | ], markdown=False, stream=True)
234 |
235 | if res.error() != None:
236 | print(json.dumps(res.error()))
237 | else:
238 | for chunk in res.stream():
239 | print(json.dumps(chunk))
240 | ```
241 |
242 |
243 | ## Usage Bing
244 |
245 | ```python
246 | import json
247 | from gpti import bing
248 |
249 | res = bing(messages=[
250 | {
251 | "role" => "assistant",
252 | "content" => "Hello! How can I help you today? 😊"
253 | },
254 | {
255 | "role": "user",
256 | "content": "Can you tell me how many movies you've told me about?"
257 | }
258 | ], conversation_style="Balanced", markdown=False, stream=False)
259 |
260 | if res.error() != None:
261 | print(json.dumps(res.error()))
262 | else:
263 | print(json.dumps(res.result()))
264 | ```
265 |
266 | ## Usage Bing Streaming
267 |
268 | ```python
269 | import json
270 | from gpti import bing
271 |
272 | res = bing(messages=[
273 | {
274 | "role" => "assistant",
275 | "content" => "Hello! How can I help you today? 😊"
276 | },
277 | {
278 | "role": "user",
279 | "content": "Can you tell me how many movies you've told me about?"
280 | }
281 | ], conversation_style="Balanced", markdown=False, stream=True)
282 |
283 | if res.error() != None:
284 | print(json.dumps(res.error()))
285 | else:
286 | for chunk in res.stream():
287 | print(json.dumps(chunk))
288 | ```
289 |
290 | #### Parameters
291 |
292 | | Parameter | Default | Description |
293 | |--------------------|----------|---------------------------------------------------------------------------------------------------------|
294 | | conversation_style | Balanced | Choose between "Balanced", "Creative", and "Precise"                                          |
295 | | markdown           | false    | Whether responses are returned formatted as Markdown                                          |
296 | | stream             | false    | Whether responses are delivered in real time as a stream                                      |
297 |
298 |
299 | ## Usage LLaMA 3.1
300 |
301 | ```python
302 | import json
303 | from gpti import llama
304 |
305 | res = llama(messages=[
306 | {
307 | "role": "user",
308 | "content": "Hello! How are you? Could you tell me your name?"
309 | }
310 | ], markdown=False, stream=False)
311 |
312 | if res.error() != None:
313 | print(json.dumps(res.error()))
314 | else:
315 | print(json.dumps(res.result()))
316 | ```
317 |
318 | ## Usage LLaMA 3.1 Streaming
319 |
320 | ```python
321 | import json
322 | from gpti import llama
323 |
324 | res = llama(messages=[
325 | {
326 | "role": "user",
327 | "content": "Hello! How are you? Could you tell me your name?"
328 | }
329 | ], markdown=False, stream=True)
330 |
331 | if res.error() != None:
332 | print(json.dumps(res.error()))
333 | else:
334 | for chunk in res.stream():
335 | print(json.dumps(chunk))
336 | ```
337 |
338 |
339 | ## Usage Blackbox
340 |
341 | ```python
342 | import json
343 | from gpti import blackbox
344 |
345 | res = blackbox(messages=[
346 | {
347 | "role": "user",
348 | "content": "Hello! How are you? Could you tell me your name?"
349 | }
350 | ], markdown=False, stream=False)
351 |
352 | if res.error() != None:
353 | print(json.dumps(res.error()))
354 | else:
355 | print(json.dumps(res.result()))
356 | ```
357 |
358 | ## Usage Blackbox Streaming
359 |
360 | ```python
361 | import json
362 | from gpti import blackbox
363 |
364 | res = blackbox(messages=[
365 | {
366 | "role": "user",
367 | "content": "Hello! How are you? Could you tell me your name?"
368 | }
369 | ], markdown=False, stream=True)
370 |
371 | if res.error() != None:
372 | print(json.dumps(res.error()))
373 | else:
374 | for chunk in res.stream():
375 | print(json.dumps(chunk))
376 | ```
377 |
378 |
379 | ## AI Images
380 |
381 | ```python
382 | import json
383 | from gpti import imageai
384 |
385 | res = imageai(prompt="cat color red", model="dalle", response="url", data={})  # response can be "url" or "base64"
386 |
387 | if res.error() != None:
388 | print(json.dumps(res.error()))
389 | else:
390 | print(json.dumps(res.result()))
391 | ```
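When `response="base64"` is requested, the result carries base64-encoded image data that can be decoded and written to disk. A minimal sketch, assuming the encoded strings arrive under an `"images"` key (the exact field name depends on the API response):

```python
import base64

def save_base64_images(result, prefix="image"):
    """Decode base64 image strings from a result dict and write them to files.
    The "images" key is an assumption about the response shape."""
    paths = []
    for i, encoded in enumerate(result.get("images", [])):
        path = f"{prefix}_{i}.png"
        with open(path, "wb") as f:
            f.write(base64.b64decode(encoded))
        paths.append(path)
    return paths
```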
392 |
393 | ## API Reference
394 |
395 | Currently, some models require your credentials to access them, while others are free. For more details and examples, please refer to the complete [documentation](https://nexra.aryahcr.cc/).
396 |
397 | #### Code Errors
398 |
399 | These are the error codes that will be presented in case the API fails.
400 |
401 | | Code | Error | Description |
402 | |------|------------------------|-------------------------------------------------------|
403 | | 200  |                        | The API worked without issues                          |
404 | | 400  | BAD_REQUEST            | Not all parameters have been entered correctly         |
405 | | 401  | UNAUTHORIZED           | API credentials are required                           |
406 | | 403  | FORBIDDEN              | Your API key has expired and needs to be renewed       |
407 | | 500  | INTERNAL_SERVER_ERROR  | The server has experienced failures                    |
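
Client code can branch on the `code` field of the dict returned by `res.error()`. A minimal sketch of such a dispatcher (the helper name and retry policy are illustrative, not part of the package):

```python
# Illustrative helper: map the "code" field of an error dict to an action.
# The dict shape ({"code": ..., "error": ..., "message": ...}) matches the table above.

def classify_error(err):
    """Return 'ok', 'retry', 'reauth', or 'fail' for an error dict (or None)."""
    if err is None:
        return "ok"          # 200: the API worked without issues
    code = err.get("code")
    if code == 500:
        return "retry"       # server-side failure; may succeed on retry
    if code in (401, 403):
        return "reauth"      # credentials missing or expired
    return "fail"            # 400 and anything else: fix the request

print(classify_error({"code": 403, "error": "FORBIDDEN", "message": "expired"}))  # reauth
```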
408 |
--------------------------------------------------------------------------------
/gpti/__init__.py:
--------------------------------------------------------------------------------
1 | import requests
2 | import json
3 | from .util import api, nexra, apistrm
4 |
5 | class gpt:
6 | class v1:
7 | def __init__(self, messages=[], prompt="", model="", markdown=False):
8 | try:
9 | data_ = {
10 | "messages": [],
11 | "prompt": "",
12 | "model": "GPT-4",
13 | "markdown": False
14 | }
15 |
16 | try:
17 | mess = []
18 | pro = ""
19 | mod = "GPT-4"
20 | mark = False
21 |
22 | if messages is not None:
23 | mess = messages
24 | if prompt is not None:
25 | pro = prompt
26 | if model is not None:
27 | mod = model
28 | if markdown is not None and markdown == True:
29 | mark = True
30 |
31 | data_ = {
32 | "messages": mess,
33 | "prompt": pro,
34 | "model": mod,
35 | "markdown": mark
36 | }
37 | except Exception as e:
38 | data_ = {
39 | "messages": [],
40 | "prompt": "",
41 | "model": "GPT-4",
42 | "markdown": False
43 | }
44 |
45 | try:
46 | res_ = api(data=data_, api="https://nexra.aryahcr.cc/api/chat/gpt", image=False)
47 | self.__error = res_.error()
48 | self.__result = res_.result()
49 | except Exception as e:
50 | self.__error = {
51 | "code": 500,
52 | "status": False,
53 | "error": "INTERNAL_SERVER_ERROR",
54 | "message": "general (unknown) error"
55 | }
56 | self.__result = None
57 | except Exception as e:
58 | self.__error = {
59 | "code": 500,
60 | "status": False,
61 | "error": "INTERNAL_SERVER_ERROR",
62 | "message": "general (unknown) error"
63 | }
64 | self.__result = None
65 | pass
66 | def error(self):
67 | return self.__error
68 | def result(self):
69 | return self.__result
70 | class v2:
71 | def __init__(self, messages=[], stream=False, markdown=False) -> None:
72 | try:
73 | data = {
74 | "messages": [],
75 | "model": "chatgpt",
76 | "markdown": False,
77 | "stream": False
78 | }
79 |
80 | strm = False
81 | try:
82 | if stream != None and stream == True:
83 | strm = True
84 | else:
85 | strm = False
86 | except Exception as e:
87 | strm = False
88 |
89 | try:
90 | mess = []
91 | mark = False
92 |
93 | if messages is not None:
94 | mess = messages
95 |
96 | if markdown is not None:
97 | mark = markdown
98 |
99 | data = {
100 | "messages": mess,
101 | "model": "chatgpt",
102 | "markdown": mark,
103 | "stream": strm
104 | }
105 | except Exception as e:
106 | data = {
107 | "messages": [],
108 | "model": "chatgpt",
109 | "markdown": False,
110 | "stream": strm
111 | }
112 |
113 | try:
114 | res_ = apistrm(data=data, api="https://nexra.aryahcr.cc/api/chat/complements", stream=strm)
115 | self.__error = res_.error()
116 | self.__result = res_.result()
117 | self.__sttrm = res_.stream()
118 | except Exception as e:
119 | self.__error = {
120 | "code": 500,
121 | "error": "INTERNAL_SERVER_ERROR",
122 | "message": "general (unknown) error"
123 | }
124 | self.__result = None
125 | self.__sttrm = None
126 | except Exception as e:
127 | self.__error = {
128 | "code": 500,
129 | "error": "INTERNAL_SERVER_ERROR",
130 | "message": "general (unknown) error"
131 | }
132 | self.__result = None
133 | self.__sttrm = None
134 | pass
135 | def stream(self):
136 | if self.__sttrm != None:
137 | for data in self.__sttrm:
138 | yield data
139 | else:
140 | yield None
141 | def error(self):
142 | return self.__error
143 | def result(self):
144 | return self.__result
145 | class web:
146 | def __init__(self, prompt="", markdown=False):
147 | try:
148 | data_ = {
149 | "prompt": "",
150 | "markdown": False
151 | }
152 |
153 | try:
154 | pro = ""
155 | mark = False
156 |
157 | if prompt is not None:
158 | pro = prompt
159 |
160 | if markdown is not None and markdown == True:
161 | mark = True
162 |
163 | data_ = {
164 | "prompt": pro,
165 | "markdown": mark
166 | }
167 | except Exception as e:
168 | data_ = {
169 | "prompt": "",
170 | "markdown": False
171 | }
172 |
173 | try:
174 | res_ = api(data=data_, api="https://nexra.aryahcr.cc/api/chat/gptweb", image=False)
175 | self.__error = res_.error()
176 | self.__result = res_.result()
177 | except Exception as e:
178 | self.__error = {
179 | "code": 500,
180 | "status": False,
181 | "error": "INTERNAL_SERVER_ERROR",
182 | "message": "general (unknown) error"
183 | }
184 | self.__result = None
185 | except Exception as e:
186 | self.__error = {
187 | "code": 500,
188 | "status": False,
189 | "error": "INTERNAL_SERVER_ERROR",
190 | "message": "general (unknown) error"
191 | }
192 | self.__result = None
193 | pass
194 | def error(self):
195 | return self.__error
196 | def result(self):
197 | return self.__result
198 | class v3:
199 | def __init__(self, messages=[], stream=False, markdown=False) -> None:
200 | try:
201 | data = {
202 | "messages": [],
203 | "model": "gpt-4o",
204 | "markdown": False,
205 | "stream": False
206 | }
207 |
208 | strm = False
209 | try:
210 | if stream != None and stream == True:
211 | strm = True
212 | else:
213 | strm = False
214 | except Exception as e:
215 | strm = False
216 |
217 | try:
218 | mess = []
219 | mark = False
220 |
221 | if messages is not None:
222 | mess = messages
223 |
224 | if markdown is not None:
225 | mark = markdown
226 |
227 | data = {
228 | "messages": mess,
229 | "model": "gpt-4o",
230 | "markdown": mark,
231 | "stream": strm
232 | }
233 | except Exception as e:
234 | data = {
235 | "messages": [],
236 | "model": "gpt-4o",
237 | "markdown": False,
238 | "stream": strm
239 | }
240 |
241 | try:
242 | res_ = apistrm(data=data, api="https://nexra.aryahcr.cc/api/chat/complements", stream=strm)
243 | self.__error = res_.error()
244 | self.__result = res_.result()
245 | self.__sttrm = res_.stream()
246 | except Exception as e:
247 | self.__error = {
248 | "code": 500,
249 | "error": "INTERNAL_SERVER_ERROR",
250 | "message": "general (unknown) error"
251 | }
252 | self.__result = None
253 | self.__sttrm = None
254 | except Exception as e:
255 | self.__error = {
256 | "code": 500,
257 | "error": "INTERNAL_SERVER_ERROR",
258 | "message": "general (unknown) error"
259 | }
260 | self.__result = None
261 | self.__sttrm = None
262 | pass
263 | def stream(self):
264 | if self.__sttrm != None:
265 | for data in self.__sttrm:
266 | yield data
267 | else:
268 | yield None
269 | def error(self):
270 | return self.__error
271 | def result(self):
272 | return self.__result
273 |
274 | class imageai:
275 | def __init__(self, prompt="", model="", response="url", data={}) -> None:
276 | try:
277 | data_app = {
278 | "prompt": "",
279 | "model": "",
280 | "data": {},
281 | "response": "url"
282 | }
283 |
284 | try:
285 | prompt_ = ""
286 | model_ = ""
287 | response_ = "url"
288 | data_ = {}
289 |
290 | if prompt is not None:
291 | prompt_ = prompt
292 |
293 | if model is not None:
294 | model_ = model
295 |
296 | if response is not None:
297 | response_ = response
298 |
299 | if data is not None:
300 | data_ = data
301 |
302 | data_app = {
303 | "prompt": prompt_,
304 | "model": model_,
305 | "data": data_,
306 | "response": response_
307 | }
308 | except Exception as e:
309 | data_app = {
310 | "prompt": "",
311 | "model": "",
312 | "data": {},
313 | "response": "url"
314 | }
315 |
316 | try:
317 | res_ = api(data=data_app, api="https://nexra.aryahcr.cc/api/image/complements", image=True)
318 | self.__error = res_.error()
319 | self.__result = res_.result()
320 | except Exception as e:
321 | self.__error = {
322 | "code": 500,
323 | "status": False,
324 | "error": "INTERNAL_SERVER_ERROR",
325 | "message": "general (unknown) error"
326 | }
327 | self.__result = None
328 | except Exception as e:
329 | self.__error = {
330 | "code": 500,
331 | "error": "INTERNAL_SERVER_ERROR",
332 | "message": "general (unknown) error"
333 | }
334 | self.__result = None
335 | pass
336 | def error(self):
337 | return self.__error
338 | def result(self):
339 | return self.__result
340 |
341 | class bing():
342 | def __init__(self, messages=[], conversation_style="", markdown=False, stream=False) -> None:
343 | try:
344 | strm = False
345 | data = {
346 | "messages": [],
347 | "conversation_style": "Balanced",
348 | "model": "Bing",
349 | "markdown": False,
350 | "stream": False
351 | }
352 |
353 | try:
354 | if stream != None and stream == True:
355 | strm = True
356 | else:
357 | strm = False
358 | except Exception as e:
359 | strm = False
360 |
361 | try:
362 | data = {
363 | "messages": messages if messages is not None else [],
364 | "conversation_style": conversation_style if conversation_style is not None else "Balanced",
365 | "markdown": markdown if markdown is not None else False,
366 | "stream": strm if strm is not None else False,
367 | "model": "Bing"
368 | }
369 | except Exception as e:
370 | data = {
371 | "messages": [],
372 | "conversation_style": "Balanced",
373 | "model": "Bing",
374 | "markdown": False,
375 | "stream": False
376 | }
377 |
378 | try:
379 | res_ = apistrm(data=data, api="https://nexra.aryahcr.cc/api/chat/complements", stream=strm)
380 | self.__error = res_.error()
381 | self.__result = res_.result()
382 | self.__sttrm = res_.stream()
383 | except Exception as e:
384 | self.__error = {
385 | "code": 500,
386 | "status": False,
387 | "error": "INTERNAL_SERVER_ERROR",
388 | "message": "general (unknown) error"
389 | }
390 |                 self.__result = self.__sttrm = None
391 | except Exception as e:
392 | self.__error = {
393 | "code": 500,
394 | "status": False,
395 | "error": "INTERNAL_SERVER_ERROR",
396 | "message": "general (unknown) error"
397 | }
398 | self.__result = None
399 | self.__sttrm = None
400 | pass
401 | def stream(self):
402 | if self.__sttrm != None:
403 | for data in self.__sttrm:
404 | yield data
405 | else:
406 | yield None
407 | def error(self):
408 | return self.__error
409 | def result(self):
410 | return self.__result
411 |
412 | class blackbox():
413 | def __init__(self, messages=[], websearch=False, markdown=False, stream=False) -> None:
414 | try:
415 | strm = False
416 | data = {
417 | "messages": [],
418 | "websearch": False,
419 | "model": "blackbox",
420 | "markdown": False,
421 | "stream": False
422 | }
423 |
424 | try:
425 | if stream != None and stream == True:
426 | strm = True
427 | else:
428 | strm = False
429 | except Exception as e:
430 | strm = False
431 |
432 | try:
433 | data = {
434 | "messages": messages if messages is not None else [],
435 | "websearch": websearch if websearch is True else False,
436 | "markdown": markdown if markdown is not None else False,
437 | "stream": strm if strm is not None else False,
438 | "model": "blackbox"
439 | }
440 | except Exception as e:
441 | data = {
442 | "messages": [],
443 | "websearch": False,
444 | "model": "blackbox",
445 | "markdown": False,
446 | "stream": False
447 | }
448 |
449 | try:
450 | res_ = apistrm(data=data, api="https://nexra.aryahcr.cc/api/chat/complements", stream=strm)
451 | self.__error = res_.error()
452 | self.__result = res_.result()
453 | self.__sttrm = res_.stream()
454 | except Exception as e:
455 | self.__error = {
456 | "code": 500,
457 | "status": False,
458 | "error": "INTERNAL_SERVER_ERROR",
459 | "message": "general (unknown) error"
460 | }
461 | self.__result = None
462 | self.__sttrm = None
463 | except Exception as e:
464 | self.__error = {
465 | "code": 500,
466 | "status": False,
467 | "error": "INTERNAL_SERVER_ERROR",
468 | "message": "general (unknown) error"
469 | }
470 | self.__result = None
471 | self.__sttrm = None
472 | pass
473 | def stream(self):
474 | if self.__sttrm != None:
475 | for data in self.__sttrm:
476 | yield data
477 | else:
478 | yield None
479 | def error(self):
480 | return self.__error
481 | def result(self):
482 | return self.__result
483 |
484 | class llama():
485 | def __init__(self, messages=[], markdown=False, stream=False) -> None:
486 | try:
487 | strm = False
488 | data = {
489 | "messages": [],
490 | "model": "llama-3.1",
491 | "markdown": False,
492 | "stream": False
493 | }
494 |
495 | try:
496 | if stream != None and stream == True:
497 | strm = True
498 | else:
499 | strm = False
500 | except Exception as e:
501 | strm = False
502 |
503 | try:
504 | data = {
505 | "messages": messages if messages is not None else [],
506 | "markdown": markdown if markdown is not None else False,
507 | "stream": strm if strm is not None else False,
508 | "model": "llama-3.1"
509 | }
510 | except Exception as e:
511 | data = {
512 | "messages": [],
513 | "model": "llama-3.1",
514 | "markdown": False,
515 | "stream": False
516 | }
517 |
518 | try:
519 | res_ = apistrm(data=data, api="https://nexra.aryahcr.cc/api/chat/complements", stream=strm)
520 | self.__error = res_.error()
521 | self.__result = res_.result()
522 | self.__sttrm = res_.stream()
523 | except Exception as e:
524 | self.__error = {
525 | "code": 500,
526 | "status": False,
527 | "error": "INTERNAL_SERVER_ERROR",
528 | "message": "general (unknown) error"
529 | }
530 |                 self.__result = self.__sttrm = None
531 | except Exception as e:
532 | self.__error = {
533 | "code": 500,
534 | "status": False,
535 | "error": "INTERNAL_SERVER_ERROR",
536 | "message": "general (unknown) error"
537 | }
538 | self.__result = None
539 | self.__sttrm = None
540 | pass
541 | def stream(self):
542 | if self.__sttrm != None:
543 | for data in self.__sttrm:
544 | yield data
545 | else:
546 | yield None
547 | def error(self):
548 | return self.__error
549 | def result(self):
550 | return self.__result
551 |
--------------------------------------------------------------------------------
/gpti/util.py:
--------------------------------------------------------------------------------
1 | import requests
2 | import json
3 | import time
4 | from urllib.parse import quote
5 |
6 | _cred = {
7 | "x-nexra-user": None,
8 | "x-nexra-secret": None
9 | }
10 |
11 | def nexra(user, secret):
12 | global _cred
13 | try:
14 | _cred["x-nexra-secret"] = secret if type(secret) is str else None
15 | _cred["x-nexra-user"] = user if type(user) is str else None
16 | except Exception as e:
17 | _cred["x-nexra-secret"] = None
18 | _cred["x-nexra-user"] = None
19 |
20 | class api:
21 | def __init__(self, data={}, api="", image=False):
22 | global _cred
23 |
24 | try:
25 | data_ = None
26 | try:
27 | data_ = json.dumps(data)
28 | data_ = json.loads(data_)
29 | except Exception as e:
30 | raise "err"
31 |
32 | head_ = {
33 | "Content-Type": "application/json"
34 | }
35 | head_.update(_cred)
36 |
37 | data_ = json.dumps(data_)
38 | req = requests.post(url=api, data=data_, headers=head_)
39 |
40 | if req.status_code == 200:
41 | api_url = "https://nexra.aryahcr.cc/api/chat/task/"
42 | try:
43 |                     if image is not None and image == True:
44 |                         api_url = "https://nexra.aryahcr.cc/api/image/complements/"
45 |                     else:
46 |                         raise Exception("not an image request")
47 |                 except Exception as e:
48 |                     api_url = "https://nexra.aryahcr.cc/api/chat/task/"
49 |
50 | response = req.json()
51 | id = response.get("id")
52 |
53 | check_data = True
54 | data_ = None
55 | error_ = None
56 | while(check_data):
57 | time.sleep(1)
58 | try:
59 | response = requests.get(api_url + quote(id))
60 | if response.status_code == 200:
61 | response = response.json()
62 |
63 | match response.get("status"):
64 | case "pending":
65 |                                     check_data = True
66 | case "completed":
67 | data_ = response
68 | check_data = False
69 | break
70 | case "error" | "not_found" | _:
71 | error_ = response
72 | check_data = False
73 | else:
74 | raise("Error")
75 | except Exception as e:
76 | check_data = False
77 | error_ = None
78 | data_ = None
79 |
80 | if data_ != None:
81 | self.__res = data_
82 | self.__err = None
83 | elif error_ != None:
84 | self.__res = None
85 | self.__err = error_
86 | else:
87 | raise("error")
88 | else:
89 | data_err = {
90 | "code": 500,
91 | "error": "INTERNAL_SERVER_ERROR",
92 | "message": "general (unknown) error"
93 | }
94 |
95 | try:
96 | data_err = req.json()
97 | except Exception as e:
98 | data_err = {
99 | "code": 500,
100 | "error": "INTERNAL_SERVER_ERROR",
101 | "message": "general (unknown) error"
102 | }
103 |
104 | self.__err = data_err
105 | self.__res = None
106 | except Exception as e:
107 | self.__err = {
108 | "code": 500,
109 | "error": "INTERNAL_SERVER_ERROR",
110 | "message": "general (unknown) error"
111 | }
112 | self.__res = None
113 | pass
114 | def result(self):
115 | return self.__res
116 | def error(self):
117 | return self.__err
118 |
119 | class apistrm:
120 | def __init__(self, data={}, api="", stream=False):
121 | global _cred
122 |
123 | try:
124 | data_ = None
125 | try:
126 | data_ = json.dumps(data)
127 | data_ = json.loads(data_)
128 | except Exception as e:
129 | raise "err"
130 |
131 | head_ = {
132 | "Content-Type": "application/json"
133 | }
134 | head_.update(_cred)
135 |
136 | data_ = json.dumps(data_)
137 |
138 | strm_ = False
139 | try:
140 | if type(stream) is bool and stream == True:
141 | strm_ = True
142 | else:
143 | strm_ = False
144 | except Exception as e:
145 | strm_ = False
146 |
147 | if strm_ != True:
148 | self.__strm = None
149 | req = requests.post(url=api, data=data_, headers=head_)
150 |
151 | if req.status_code == 200:
152 | response = req.json()
153 | id = response.get("id")
154 |
155 | check_data = True
156 | data_ = None
157 | error_ = None
158 | while(check_data):
159 | time.sleep(1)
160 | try:
161 | response = requests.get("https://nexra.aryahcr.cc/api/chat/task/" + quote(id))
162 | if response.status_code == 200:
163 | response = response.json()
164 |
165 | match response.get("status"):
166 | case "pending":
167 |                                         check_data = True
168 | case "completed":
169 | data_ = response
170 | check_data = False
171 | break
172 | case "error" | "not_found" | _:
173 | error_ = response
174 | check_data = False
175 | else:
176 | raise("Error")
177 | except Exception as e:
178 | check_data = False
179 | error_ = None
180 |                                 data_ = None
181 |
182 | if data_ != None:
183 | self.__res = data_
184 | self.__err = None
185 | elif error_ != None:
186 | self.__res = None
187 | self.__err = error_
188 | else:
189 | raise("error")
190 | else:
191 | data_err = {
192 | "code": 500,
193 | "error": "INTERNAL_SERVER_ERROR",
194 | "message": "general (unknown) error"
195 | }
196 |
197 | try:
198 | data_err = req.json()
199 | except Exception as e:
200 | data_err = {
201 | "code": 500,
202 | "error": "INTERNAL_SERVER_ERROR",
203 | "message": "general (unknown) error"
204 | }
205 |
206 | self.__err = data_err
207 | self.__res = None
208 | else:
209 | req = requests.post(url=api, data=data_, headers=head_, stream=True)
210 |
211 | if req.status_code == 200:
212 | self.__strm = req
213 | self.__err = None
214 | self.__res = None
215 | else:
216 | data_err = {
217 | "code": 500,
218 | "error": "INTERNAL_SERVER_ERROR",
219 | "message": "general (unknown) error"
220 | }
221 |
222 | try:
223 | data_err = req.json()
224 | except Exception as e:
225 | data_err = {
226 | "code": 500,
227 | "error": "INTERNAL_SERVER_ERROR",
228 | "message": "general (unknown) error"
229 | }
230 |
231 | self.__err = data_err
232 | self.__res = None
233 | self.__strm = None
234 | except Exception as e:
235 | self.__strm = None
236 | self.__err = {
237 | "code": 500,
238 | "error": "INTERNAL_SERVER_ERROR",
239 | "message": "general (unknown) error"
240 | }
241 | self.__res = None
242 | pass
243 | def result(self):
244 | return self.__res
245 | def error(self):
246 | return self.__err
247 | def stream(self):
248 | if self.__strm != None:
249 | try:
250 | tmp = None
251 | err = None
252 | for chunk in self.__strm.iter_lines(chunk_size=1024):
253 | if err == None:
254 | if chunk:
255 | chk = chunk.decode()
256 |                             chk = [chk]  # treat each decoded line as one JSON candidate; fragments are reassembled below
257 |
258 | for data in chk:
259 | result = None
260 |
261 | try:
262 | convert = json.loads(data)
263 | result = data
264 | tmp = None
265 | except Exception as e:
266 | if tmp == None:
267 | tmp = data
268 | else:
269 | try:
270 | convert = json.loads(tmp)
271 | result = tmp
272 | tmp = None
273 | except Exception as e:
274 | tmp = tmp + data
275 | try:
276 | convert = json.loads(tmp)
277 | result = tmp
278 | tmp = None
279 | except Exception as e:
280 | tmp = tmp
281 |
282 | if result != None:
283 | try:
284 | result = json.loads(result)
285 | if result.get("code") == None and result.get(
286 | "status") == None:
287 | yield result
288 | else:
289 | err = result
290 | yield err
291 | except Exception as e:
292 | pass
293 | except Exception as e:
294 | yield {"message":None,"original":None,"finish":True,"error":True}
295 | else:
296 | yield None
297 | pass
298 |
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | requests
--------------------------------------------------------------------------------
/setup.py:
--------------------------------------------------------------------------------
1 | from setuptools import setup, find_packages
2 |
3 | _long_description = ''
4 | try:
5 | with open('README.md', 'r', encoding='utf-8') as f:
6 | _long_description = f.read()
7 | except Exception as e:
8 | _long_description = ''
9 |
10 | setup(
11 | name='gpti',
12 | version='2.1',
13 | packages=find_packages(),
14 | install_requires=[
15 | 'requests',
16 | ],
17 | author='yandricr',
18 | author_email='yandribret@gmail.com',
19 | description='This package simplifies your interaction with various GPT models, removing the need for tokens or other methods to access GPT',
20 | long_description=_long_description,
21 | long_description_content_type='text/markdown',
22 | url='https://github.com/yandricr/gpti-py/',
23 | project_urls={
24 | 'Documentation': 'https://nexra.aryahcr.cc/',
25 | 'Source': 'https://github.com/yandricr/gpti-py/'
26 | },
27 | keywords='gpt gpt-3 gpt-3.5 gpt-4 gpti gpt-free ai blackbox prodia bing chat stream dalle generate-image llama-3.1 gpt-4o',
28 | license='MIT',
29 | package_data={'': ['LICENSE']},
30 | include_package_data=True
31 | )
--------------------------------------------------------------------------------