├── Dockerfile
├── README.md
├── direct_link_generator.py
├── generate_token_pickle.py
├── main.py
├── qbit_conf.py
├── requirements.txt
├── sample_config.env
└── session_generator.py
/Dockerfile:
--------------------------------------------------------------------------------
1 | FROM python:3.10-slim
2 | RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get install --assume-yes --quiet --no-install-recommends aria2 wget p7zip p7zip-full libmagic1 git
3 | RUN wget -qO /usr/bin/qbittorrent-nox https://github.com/userdocs/qbittorrent-nox-static/releases/latest/download/x86_64-qbittorrent-nox
4 | RUN wget -qO /tmp/rar.tar.gz https://www.rarlab.com/rar/rarlinux-x64-620.tar.gz
5 | RUN wget -qO /tmp/ffmpeg.zip https://github.com/ffbinaries/ffbinaries-prebuilt/releases/download/v4.4.1/ffmpeg-4.4.1-linux-64.zip
6 | RUN wget -qO /tmp/ffprobe.zip https://github.com/ffbinaries/ffbinaries-prebuilt/releases/download/v4.4.1/ffprobe-4.4.1-linux-64.zip
7 | RUN 7z x /tmp/rar.tar.gz -o/tmp -y
8 | RUN 7z x /tmp/rar.tar -o/tmp -y
9 | RUN 7z x /tmp/ffmpeg.zip -o/tmp -y
10 | RUN 7z x /tmp/ffprobe.zip -o/tmp -y
11 | RUN cp /tmp/rar/rar /tmp/rar/unrar /tmp/ffmpeg /tmp/ffprobe /usr/bin
12 | RUN chmod ugo+x /usr/bin/qbittorrent-nox /usr/bin/rar /usr/bin/unrar /usr/bin/ffmpeg /usr/bin/ffprobe
13 | RUN rm /tmp/rar/rar /tmp/rar/unrar /tmp/ffmpeg /tmp/ffprobe /tmp/rar.tar.gz /tmp/rar.tar /tmp/ffmpeg.zip /tmp/ffprobe.zip
14 | COPY . /usr/src/app
15 | WORKDIR /usr/src/app
16 | RUN python -m pip install --upgrade pip
17 | RUN python -m pip install --no-cache-dir -r requirements.txt
18 | ENTRYPOINT ["python", "-u", "main.py"]
19 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Mirror2GDrive
2 | Hello there, 👽 I am a Telegram Bot that can download files using Aria2/Qbittorrent and upload them to your GDrive or Telegram. I can run only on Linux x86_64/amd64 systems.
3 |
4 | ### Available Commands
5 | ```sh
6 | start - 👽 Start the bot
7 | mirror - 🗳 Mirror file using Aria2
8 | qbmirror - 🧲 Mirror file using Qbittorrent
9 | unzipmirror - 🗃️ Mirror & unzip using Aria2
10 | qbunzipmirror - 🫧 Mirror & unzip using Qbittorrent
11 | leech - 🧩 Mirror & leech using Aria2
12 | qbleech - 🌀 Mirror & leech using Qbittorrent
13 | unzipleech - 🧬 Unzip & leech
14 | task - 📥 Show the task list
15 | ngrok - 🌍 Show Ngrok URL
16 | stats - ⚙️ Show system info
17 | log - 📄 Get runtime log file
18 | ```
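The list above already follows the `command - description` format that BotFather's `/setcommands` expects, so it can be pasted there as-is to register the bot's command menu.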
19 |
20 | ### Prepare config.env file
21 | Create an env file in [Github Gist](https://gist.github.com/) or any other place, but make sure to provide the direct download link of that file.
22 | ```sh
23 | PICKLE_FILE_URL = ""
24 | BOT_TOKEN = ""
25 | TG_API_ID = ""
26 | TG_API_HASH = ""
27 | # To upload files in telegram
28 | USER_SESSION_STRING = ""
29 | # Authorized users to use the bot
30 | USER_LIST = '[12345, 67890]'
31 | # Drive/Folder ID to upload files
32 | GDRIVE_FOLDER_ID = 'abcXYZ'
33 | # For serving download directory with ngrok's built-in file server
34 | NGROK_AUTH_TOKEN = ""
35 | # For clearing tasks whose upload is completed
36 | AUTO_DEL_TASK = False
37 | # For downloading files from uptobox
38 | UPTOBOX_TOKEN = ""
39 | # For sending files to log channel
40 | LOG_CHANNEL = ""
41 | # For sending files to you
42 | BOT_PM = True
43 | # Create worker using https://gitlab.com/GoogleDriveIndex/Google-Drive-Index
44 | # Example: https://index.workers.dev/0: (Add drive index num with : at the end)
45 | INDEX_LINK = ""
46 | ```
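For a GitHub Gist, the direct download link is the file's *raw* URL. A hypothetical example (substitute your own username and gist ID):
```sh
CONFIG_FILE_URL="https://gist.githubusercontent.com/<username>/<gist-id>/raw/config.env"
```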
47 |
48 | ### Build and run the docker image
49 | ```sh
50 | docker build -t mybot:latest .
51 |
52 | docker run -d --name=Mirror2GdriveBot \
53 | -e CONFIG_FILE_URL="github gist link of config.env" \
54 | --restart=unless-stopped \
55 | -v $PWD:/usr/src/app `#optional: for data persistence` \
56 | -p 8010:8090 -p 8020:6800 `#optional: for accessing qbit/aria` \
57 | mybot:latest
58 | ```
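Optionally, smoke-test the image to confirm the bundled binaries were installed (assumes the `mybot:latest` tag built above):
```sh
docker run --rm --entrypoint sh mybot:latest -c \
  "which aria2c qbittorrent-nox rar unrar ffmpeg ffprobe && aria2c --version | head -1"
```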
59 |
60 | ### Extras
61 | - To generate the token.pickle file, first place the credentials.json file in the current directory and run:
62 | ```sh
63 | docker run --rm -it -v $PWD:/mnt --net host --entrypoint python mybot:latest generate_token_pickle.py
64 | ```
65 | - To get the user session string of your bot, run:
66 | ```sh
67 | docker run --rm -it --entrypoint python mybot:latest session_generator.py
68 | ```
69 |
70 | ### Credits
71 | - [python-telegram-bot](https://github.com/python-telegram-bot) | [pyrogram](https://github.com/pyrogram)
72 | - [anasty17](https://github.com/anasty17) | [juned](https://github.com/junedkh) for [mirror-leech-telegram-bot](https://github.com/anasty17/mirror-leech-telegram-bot)
73 | - All the creators who are behind the awesome [modules/libraries](https://github.com/sachin0raon/Mirror2Gdrive/blob/master/requirements.txt) used in making this project
--------------------------------------------------------------------------------
/direct_link_generator.py:
--------------------------------------------------------------------------------
1 | # Copyright (C) 2019 The Raphielscape Company LLC.
2 | #
3 | # Licensed under the Raphielscape Public License, Version 1.c (the "License");
4 | # you may not use this file except in compliance with the License.
5 | #
6 | """ Helper Module containing various sites direct links generators. This module is copied and modified as per need
7 | from https://github.com/AvinashReddy3108/PaperplaneExtended . I hereby take no credit of the following code other
8 | than the modifications. See https://github.com/AvinashReddy3108/PaperplaneExtended/commits/master/userbot/modules/direct_links.py
9 | for original authorship. """
10 |
11 | from base64 import standard_b64encode
12 | from http.cookiejar import MozillaCookieJar
13 | from json import loads
14 | from os import path, environ
15 | from re import findall, match, search, sub
16 | from time import sleep
17 | from typing import Optional
18 | from urllib.parse import quote, unquote, urlparse, parse_qs
19 | from uuid import uuid4
20 | from bs4 import BeautifulSoup
21 | from cfscrape import create_scraper
22 | from lk21 import Bypass
23 | from lxml import etree
24 | from math import floor, pow
25 | from humanize import naturaltime
26 |
27 | fmed_list = ['fembed.net', 'fembed.com', 'femax20.com', 'fcdn.stream', 'feurl.com', 'layarkacaxxi.icu',
28 | 'naniplay.nanime.in', 'naniplay.nanime.biz', 'naniplay.com', 'mm9842.com']
29 |
30 | anonfilesBaseSites = ['anonfiles.com', 'hotfile.io', 'bayfiles.com', 'megaupload.nz', 'letsupload.cc',
31 | 'filechan.org', 'myfile.is', 'vshare.is', 'rapidshare.nu', 'lolabits.se',
32 | 'openload.cc', 'share-online.is', 'upvid.cc']
33 |
34 | class DirectDownloadLinkException(Exception):
35 | """This exception is raised in case of failure while generating direct link"""
36 | pass
37 |
38 | def is_mega_link(url: str) -> bool:
39 | return "mega.nz" in url or "mega.co.nz" in url
40 |
41 | def is_gdrive_link(url: str) -> bool:
42 | return "drive.google.com" in url
43 |
44 | def get_gdrive_id(url: str) -> Optional[str]:
45 | _id = None
46 | if "folders" in url or "file" in url or "/uc?" in url:
47 | regex = r"https:\/\/drive\.google\.com\/(?:drive(.*?)\/folders\/|file(.*?)?\/d\/|uc\?id=)([-\w]+)"
48 | try:
49 | if res := search(regex, url):
50 | _id = res.group(3)
51 | else:
52 | raise IndexError("GDrive ID not found")
53 | parsed = urlparse(url)
54 | _id = parse_qs(parsed.query)['id'][0]
55 | except (IndexError, KeyError, AttributeError):
56 | pass
57 | return _id
58 |
59 | def is_share_link(url: str) -> bool:
60 | return bool(match(r'https?:\/\/.+\.gdtot\.\S+|https?:\/\/(filepress|filebee|appdrive|gdflix)\.\S+', url))
61 |
62 | def direct_link_gen(link: str) -> str:
63 | """ direct links generator """
64 | domain = urlparse(link).hostname
65 | if not domain:
66 | raise DirectDownloadLinkException("ERROR: Invalid URL")
67 | if 'youtube.com' in domain or 'youtu.be' in domain:
68 | raise DirectDownloadLinkException("ERROR: Use ytdl cmds for Youtube links")
69 | elif 'yadi.sk' in domain or 'disk.yandex.com' in domain:
70 | return yandex_disk(link)
71 | elif 'mediafire.com' in domain:
72 | return mediafire(link)
73 | elif 'uptobox.com' in domain:
74 | return uptobox(link)
75 | elif 'osdn.net' in domain:
76 | return osdn(link)
77 | elif 'github.com' in domain:
78 | return github(link)
79 | elif 'hxfile.co' in domain:
80 | return hxfile(link)
81 | elif '1drv.ms' in domain:
82 | return onedrive(link)
83 | elif 'pixeldrain.com' in domain:
84 | return pixeldrain(link)
85 | elif 'antfiles.com' in domain:
86 | return antfiles(link)
87 | elif 'streamtape.com' in domain:
88 | return streamtape(link)
89 | elif 'racaty' in domain:
90 | return racaty(link)
91 | elif '1fichier.com' in domain:
92 | return fichier(link)
93 | elif 'solidfiles.com' in domain:
94 | return solidfiles(link)
95 | elif 'krakenfiles.com' in domain:
96 | return krakenfiles(link)
97 | elif 'upload.ee' in domain:
98 | return uploadee(link)
99 | elif 'akmfiles' in domain:
100 | return akmfiles(link)
101 | elif 'linkbox' in domain:
102 | return linkbox(link)
103 | elif 'shrdsk' in domain:
104 | return shrdsk(link)
105 | elif 'letsupload.io' in domain:
106 | return letsupload(link)
107 | elif 'zippyshare.com' in domain:
108 | return zippyshare(link)
109 | elif any(x in domain for x in ['wetransfer.com', 'we.tl']):
110 | return wetransfer(link)
111 | elif any(x in domain for x in anonfilesBaseSites):
112 | return anonfilesBased(link)
113 | elif any(x in domain for x in ['terabox', 'nephobox', '4funbox', 'mirrobox', 'momerybox', 'teraboxapp']):
114 | return terabox(link)
115 | elif any(x in domain for x in fmed_list):
116 | return fembed(link)
117 | elif any(x in domain for x in ['sbembed.com', 'watchsb.com', 'streamsb.net', 'sbplay.org']):
118 | return sbembed(link)
119 | elif is_share_link(link):
120 | if 'gdtot' in domain:
121 | return gdtot(link)
122 | elif 'filepress' in domain:
123 | return filepress(link)
124 | else:
125 | return sharer_scraper(link)
126 | else:
127 | raise DirectDownloadLinkException(f'No Direct link function found for {link}')
128 |
129 | def yandex_disk(url: str) -> str:
130 | """ Yandex.Disk direct link generator
131 | Based on https://github.com/wldhx/yadisk-direct """
132 | try:
133 | link = findall(r'\b(https?://(yadi.sk|disk.yandex.com)\S+)', url)[0][0]
134 |     except IndexError:
135 |         raise DirectDownloadLinkException("ERROR: No Yandex.Disk links found")
136 | api = 'https://cloud-api.yandex.net/v1/disk/public/resources/download?public_key={}'
137 | cget = create_scraper().request
138 | try:
139 | return cget('get', api.format(link)).json()['href']
140 | except KeyError:
141 | raise DirectDownloadLinkException("ERROR: File not found/Download limit reached")
142 |
143 | def uptobox(url: str) -> str:
144 | """ Uptobox direct link generator
145 | based on https://github.com/jovanzers/WinTenCermin and https://github.com/sinoobie/noobie-mirror """
146 | try:
147 | link = findall(r'\bhttps?://.*uptobox\.com\S+', url)[0]
148 | except IndexError:
149 | raise DirectDownloadLinkException("No Uptobox links found")
150 | if link := findall(r'\bhttps?://.*\.uptobox\.com/dl\S+', url):
151 | return link[0]
152 | cget = create_scraper().request
153 | try:
154 | file_id = findall(r'\bhttps?://.*uptobox\.com/(\w+)', url)[0]
155 |         if UPTOBOX_TOKEN := environ.get('UPTOBOX_TOKEN'):
156 | file_link = f'https://uptobox.com/api/link?token={UPTOBOX_TOKEN}&file_code={file_id}'
157 | else:
158 | file_link = f'https://uptobox.com/api/link?file_code={file_id}'
159 | res = cget('get', file_link).json()
160 | except Exception as e:
161 | raise DirectDownloadLinkException(f"ERROR: {e.__class__.__name__}")
162 | if res['statusCode'] == 0:
163 | return res['data']['dlLink']
164 | elif res['statusCode'] == 16:
165 | sleep(1)
166 | waiting_token = res["data"]["waitingToken"]
167 | sleep(res["data"]["waiting"])
168 | elif res['statusCode'] == 39:
169 | raise DirectDownloadLinkException(f"ERROR: Uptobox is being limited please wait {naturaltime(res['data']['waiting'])}")
170 | else:
171 | raise DirectDownloadLinkException(f"ERROR: {res['message']}")
172 | try:
173 | res = cget('get', f"{file_link}&waitingToken={waiting_token}").json()
174 | return res['data']['dlLink']
175 | except Exception as e:
176 | raise DirectDownloadLinkException(f"ERROR: {e.__class__.__name__}")
177 |
178 | def mediafire(url: str) -> str:
179 | """ MediaFire direct link generator """
180 | if final_link := findall(r'https?:\/\/download\d+\.mediafire\.com\/\S+\/\S+\/\S+', url):
181 | return final_link[0]
182 | cget = create_scraper().request
183 | try:
184 | url = cget('get', url).url
185 | page = cget('get', url).text
186 | except Exception as e:
187 | raise DirectDownloadLinkException(f"ERROR: {e.__class__.__name__}")
188 | if not (final_link := findall(r"\'(https?:\/\/download\d+\.mediafire\.com\/\S+\/\S+\/\S+)\'", page)):
189 | raise DirectDownloadLinkException("ERROR: No links found in this page")
190 | return final_link[0]
191 |
192 | def osdn(url: str) -> str:
193 | """ OSDN direct link generator """
194 | osdn_link = 'https://osdn.net'
195 | try:
196 | link = findall(r'\bhttps?://.*osdn\.net\S+', url)[0]
197 | except IndexError:
198 | raise DirectDownloadLinkException("No OSDN links found")
199 | cget = create_scraper().request
200 | try:
201 | page = BeautifulSoup(cget('get', link, allow_redirects=True).content, 'lxml')
202 | except Exception as e:
203 | raise DirectDownloadLinkException(f"ERROR: {e.__class__.__name__}")
204 | info = page.find('a', {'class': 'mirror_link'})
205 | link = unquote(osdn_link + info['href'])
206 | mirrors = page.find('form', {'id': 'mirror-select-form'}).findAll('tr')
207 | urls = []
208 | for data in mirrors[1:]:
209 | mirror = data.find('input')['value']
210 | urls.append(sub(r'm=(.*)&f', f'm={mirror}&f', link))
211 | return urls[0]
212 |
213 | def github(url: str) -> str:
214 | """ GitHub direct links generator """
215 | try:
216 | findall(r'\bhttps?://.*github\.com.*releases\S+', url)[0]
217 | except IndexError:
218 | raise DirectDownloadLinkException("No GitHub Releases links found")
219 | cget = create_scraper().request
220 | download = cget('get', url, stream=True, allow_redirects=False)
221 | try:
222 | return download.headers["location"]
223 | except KeyError:
224 | raise DirectDownloadLinkException("ERROR: Can't extract the link")
225 |
226 | def hxfile(url: str) -> str:
227 | """ Hxfile direct link generator
228 | Based on https://github.com/zevtyardt/lk21
229 | """
230 | try:
231 | return Bypass().bypass_filesIm(url)
232 | except Exception as e:
233 | raise DirectDownloadLinkException(f"ERROR: {e.__class__.__name__}")
234 |
235 | def letsupload(url: str) -> str:
236 | cget = create_scraper().request
237 | try:
238 | res = cget("POST", url)
239 | except Exception as e:
240 | raise DirectDownloadLinkException(f'ERROR: {e.__class__.__name__}')
241 | if direct_link := findall(r"(https?://letsupload\.io\/.+?)\'", res.text):
242 | return direct_link[0]
243 | else:
244 | raise DirectDownloadLinkException('ERROR: Direct Link not found')
245 |
246 | def anonfilesBased(url: str) -> str:
247 | cget = create_scraper().request
248 | try:
249 | soup = BeautifulSoup(cget('get', url).content, 'lxml')
250 | except Exception as e:
251 | raise DirectDownloadLinkException(f"ERROR: {e.__class__.__name__}")
252 | if sa := soup.find(id="download-url"):
253 | return sa['href']
254 | raise DirectDownloadLinkException("ERROR: File not found!")
255 |
256 | def fembed(link: str) -> str:
257 | """ Fembed direct link generator
258 | Based on https://github.com/zevtyardt/lk21
259 | """
260 | try:
261 | dl_url = Bypass().bypass_fembed(link)
262 | count = len(dl_url)
263 | lst_link = [dl_url[i] for i in dl_url]
264 | return lst_link[count-1]
265 | except Exception as e:
266 | raise DirectDownloadLinkException(f"ERROR: {e.__class__.__name__}")
267 |
268 | def sbembed(link: str) -> str:
269 | """ Sbembed direct link generator
270 | Based on https://github.com/zevtyardt/lk21
271 | """
272 | try:
273 | dl_url = Bypass().bypass_sbembed(link)
274 | count = len(dl_url)
275 | lst_link = [dl_url[i] for i in dl_url]
276 | return lst_link[count-1]
277 | except Exception as e:
278 | raise DirectDownloadLinkException(f"ERROR: {e.__class__.__name__}")
279 |
280 | def onedrive(link: str) -> str:
281 | """ Onedrive direct link generator
282 | Based on https://github.com/UsergeTeam/Userge """
283 | link_without_query = urlparse(link)._replace(query=None).geturl()
284 | direct_link_encoded = str(standard_b64encode(bytes(link_without_query, "utf-8")), "utf-8")
285 | direct_link1 = f"https://api.onedrive.com/v1.0/shares/u!{direct_link_encoded}/root/content"
286 | cget = create_scraper().request
287 | try:
288 | resp = cget('head', direct_link1)
289 | except Exception as e:
290 | raise DirectDownloadLinkException(f"ERROR: {e.__class__.__name__}")
291 | if resp.status_code != 302:
292 | raise DirectDownloadLinkException("ERROR: Unauthorized link, the link may be private")
293 | return resp.next.url
294 |
295 | def pixeldrain(url: str) -> str:
296 | """ Based on https://github.com/yash-dk/TorToolkit-Telegram """
297 | url = url.strip("/ ")
298 | file_id = url.split("/")[-1]
299 | if url.split("/")[-2] == "l":
300 | info_link = f"https://pixeldrain.com/api/list/{file_id}"
301 | dl_link = f"https://pixeldrain.com/api/list/{file_id}/zip"
302 | else:
303 | info_link = f"https://pixeldrain.com/api/file/{file_id}/info"
304 | dl_link = f"https://pixeldrain.com/api/file/{file_id}"
305 | cget = create_scraper().request
306 | try:
307 | resp = cget('get', info_link).json()
308 | except Exception as e:
309 | raise DirectDownloadLinkException(f"ERROR: {e.__class__.__name__}")
310 | if resp["success"]:
311 | return dl_link
312 | else:
313 | raise DirectDownloadLinkException(f"ERROR: Cant't download due {resp['message']}.")
314 |
315 | def antfiles(url: str) -> str:
316 | """ Antfiles direct link generator
317 | Based on https://github.com/zevtyardt/lk21
318 | """
319 | try:
320 | return Bypass().bypass_antfiles(url)
321 | except Exception as e:
322 | raise DirectDownloadLinkException(f"ERROR: {e.__class__.__name__}")
323 |
324 | def streamtape(url: str) -> str:
325 | """ Streamtape direct link generator
326 | Based on https://github.com/zevtyardt/lk21
327 | """
328 | try:
329 | return Bypass().bypass_streamtape(url)
330 | except Exception as e:
331 | raise DirectDownloadLinkException(f"ERROR: {e.__class__.__name__}")
332 |
333 | def racaty(url: str) -> str:
334 | """ Racaty direct link generator
335 | By https://github.com/junedkh """
336 | cget = create_scraper().request
337 | try:
338 | url = cget('GET', url).url
339 | json_data = {
340 | 'op': 'download2',
341 | 'id': url.split('/')[-1]
342 | }
343 | res = cget('POST', url, data=json_data)
344 | except Exception as e:
345 | raise DirectDownloadLinkException(f'ERROR: {e.__class__.__name__}')
346 | if direct_link := etree.HTML(res.text).xpath("//a[contains(@id,'uniqueExpirylink')]/@href"):
347 | return direct_link[0]
348 | else:
349 | raise DirectDownloadLinkException('ERROR: Direct link not found')
350 |
351 | def fichier(link: str) -> str:
352 | """ 1Fichier direct link generator
353 | Based on https://github.com/Maujar
354 | """
355 |     regex = r"^(https?:\/\/)?.*1fichier\.com\/\?.+"
356 | gan = match(regex, link)
357 | if not gan:
358 | raise DirectDownloadLinkException("ERROR: The link you entered is wrong!")
359 | if "::" in link:
360 | pswd = link.split("::")[-1]
361 | url = link.split("::")[-2]
362 | else:
363 | pswd = None
364 | url = link
365 | cget = create_scraper().request
366 | try:
367 | if pswd is None:
368 | req = cget('post', url)
369 | else:
370 | pw = {"pass": pswd}
371 | req = cget('post', url, data=pw)
372 | except Exception as e:
373 | raise DirectDownloadLinkException(f"ERROR: {e.__class__.__name__}")
374 | if req.status_code == 404:
375 | raise DirectDownloadLinkException("ERROR: File not found/The link you entered is wrong!")
376 | soup = BeautifulSoup(req.content, 'lxml')
377 | if soup.find("a", {"class": "ok btn-general btn-orange"}):
378 | if dl_url := soup.find("a", {"class": "ok btn-general btn-orange"})["href"]:
379 | return dl_url
380 | raise DirectDownloadLinkException("ERROR: Unable to generate Direct Link 1fichier!")
381 | elif len(soup.find_all("div", {"class": "ct_warn"})) == 3:
382 | str_2 = soup.find_all("div", {"class": "ct_warn"})[-1]
383 | if "you must wait" in str(str_2).lower():
384 | if numbers := [int(word) for word in str(str_2).split() if word.isdigit()]:
385 | raise DirectDownloadLinkException(f"ERROR: 1fichier is on a limit. Please wait {numbers[0]} minute.")
386 | else:
387 | raise DirectDownloadLinkException("ERROR: 1fichier is on a limit. Please wait a few minutes/hour.")
388 | elif "protect access" in str(str_2).lower():
389 | raise DirectDownloadLinkException("ERROR: This link requires a password and currently this feature is not supported")
390 | else:
391 | raise DirectDownloadLinkException("ERROR: Failed to generate Direct Link from 1fichier!")
392 | elif len(soup.find_all("div", {"class": "ct_warn"})) == 4:
393 | str_1 = soup.find_all("div", {"class": "ct_warn"})[-2]
394 | str_3 = soup.find_all("div", {"class": "ct_warn"})[-1]
395 | if "you must wait" in str(str_1).lower():
396 | if numbers := [int(word) for word in str(str_1).split() if word.isdigit()]:
397 | raise DirectDownloadLinkException(f"ERROR: 1fichier is on a limit. Please wait {numbers[0]} minute.")
398 | else:
399 | raise DirectDownloadLinkException("ERROR: 1fichier is on a limit. Please wait a few minutes/hour.")
400 | elif "bad password" in str(str_3).lower():
401 | raise DirectDownloadLinkException("ERROR: The password you entered is wrong!")
402 | else:
403 | raise DirectDownloadLinkException("ERROR: Error trying to generate Direct Link from 1fichier!")
404 | else:
405 | raise DirectDownloadLinkException("ERROR: Error trying to generate Direct Link from 1fichier!")
406 |
407 | def solidfiles(url: str) -> str:
408 | """ Solidfiles direct link generator
409 | Based on https://github.com/Xonshiz/SolidFiles-Downloader
410 | By https://github.com/Jusidama18 """
411 | cget = create_scraper().request
412 | try:
413 | headers = {
414 | 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1985.125 Safari/537.36'
415 | }
416 | pageSource = cget('get', url, headers=headers).text
417 | mainOptions = str(search(r'viewerOptions\'\,\ (.*?)\)\;', pageSource).group(1))
418 | return loads(mainOptions)["downloadUrl"]
419 | except Exception as e:
420 | raise DirectDownloadLinkException(f"ERROR: {e.__class__.__name__}")
421 |
422 | def krakenfiles(page_link: str) -> str:
423 | """ krakenfiles direct link generator
424 | Based on https://github.com/tha23rd/py-kraken
425 | By https://github.com/junedkh """
426 | cget = create_scraper().request
427 | try:
428 | page_resp = cget('get', page_link)
429 | except Exception as e:
430 | raise DirectDownloadLinkException(f"ERROR: {e.__class__.__name__}")
431 | soup = BeautifulSoup(page_resp.text, "lxml")
432 | try:
433 | token = soup.find("input", id="dl-token")["value"]
434 | except Exception:
435 | raise DirectDownloadLinkException(f"ERROR: Page link is wrong: {page_link}")
436 | hashes = [
437 | item["data-file-hash"]
438 | for item in soup.find_all("div", attrs={"data-file-hash": True})
439 | ]
440 | if not hashes:
441 | raise DirectDownloadLinkException(f"ERROR: Hash not found for : {page_link}")
442 | dl_hash = hashes[0]
443 | payload = f'------WebKitFormBoundary7MA4YWxkTrZu0gW\r\nContent-Disposition: form-data; name="token"\r\n\r\n{token}\r\n------WebKitFormBoundary7MA4YWxkTrZu0gW--'
444 | headers = {
445 | "content-type": "multipart/form-data; boundary=----WebKitFormBoundary7MA4YWxkTrZu0gW",
446 | "cache-control": "no-cache",
447 | "hash": dl_hash,
448 | }
449 |     dl_link_resp = cget('post', f"https://krakenfiles.com/download/{dl_hash}", data=payload, headers=headers)
450 | dl_link_json = dl_link_resp.json()
451 | if "url" in dl_link_json:
452 | return dl_link_json["url"]
453 | else:
454 | raise DirectDownloadLinkException(f"ERROR: Failed to acquire download URL from kraken for : {page_link}")
455 |
456 | def uploadee(url: str) -> str:
457 | """ uploadee direct link generator
458 | By https://github.com/iron-heart-x"""
459 | cget = create_scraper().request
460 | try:
461 | soup = BeautifulSoup(cget('get', url).content, 'lxml')
462 | sa = soup.find('a', attrs={'id': 'd_l'})
463 | return sa['href']
464 | except Exception:
465 | raise DirectDownloadLinkException(f"ERROR: Failed to acquire download URL from upload.ee for : {url}")
466 |
467 | def terabox(url) -> str:
468 | if not path.isfile('terabox.txt'):
469 | raise DirectDownloadLinkException("ERROR: terabox.txt not found")
470 | session = create_scraper()
471 | try:
472 | res = session.request('GET', url)
473 | key = res.url.split('?surl=')[-1]
474 | jar = MozillaCookieJar('terabox.txt')
475 | jar.load()
476 | session.cookies.update(jar)
477 | res = session.request('GET', f'https://www.terabox.com/share/list?app_id=250528&shorturl={key}&root=1')
478 | result = res.json()['list']
479 | except Exception as e:
480 | raise DirectDownloadLinkException(f"ERROR: {e.__class__.__name__}")
481 | if len(result) > 1:
482 | raise DirectDownloadLinkException("ERROR: Can't download mutiple files")
483 | result = result[0]
484 | if result['isdir'] != '0':
485 | raise DirectDownloadLinkException("ERROR: Can't download folder")
486 | return result['dlink']
487 |
488 | def filepress(url):
489 | cget = create_scraper().request
490 | try:
491 | url = cget('GET', url).url
492 | raw = urlparse(url)
493 | json_data = {
494 | 'id': raw.path.split('/')[-1],
495 | 'method': 'publicDownlaod',
496 | }
497 | api = f'{raw.scheme}://api.{raw.hostname}/api/file/downlaod/'
498 | res = cget('POST', api, headers={'Referer': f'{raw.scheme}://{raw.hostname}'}, json=json_data).json()
499 | except Exception as e:
500 | raise DirectDownloadLinkException(f'ERROR: {e.__class__.__name__}')
501 | if 'data' not in res:
502 | raise DirectDownloadLinkException(f'ERROR: {res["statusText"]}')
503 | return f'https://drive.google.com/uc?id={res["data"]}&export=download'
504 |
505 | def gdtot(url):
506 | cget = create_scraper().request
507 | try:
508 | res = cget('GET', f'https://gdbot.xyz/file/{url.split("/")[-1]}')
509 | except Exception as e:
510 | raise DirectDownloadLinkException(f'ERROR: {e.__class__.__name__}')
511 | token_url = etree.HTML(res.content).xpath("//a[contains(@class,'inline-flex items-center justify-center')]/@href")
512 | if not token_url:
513 | try:
514 | url = cget('GET', url).url
515 | p_url = urlparse(url)
516 | res = cget("GET", f"{p_url.scheme}://{p_url.hostname}/ddl/{url.split('/')[-1]}")
517 | except Exception as e:
518 | raise DirectDownloadLinkException(f'ERROR: {e.__class__.__name__}')
519 | if (drive_link := findall(r"myDl\('(.*?)'\)", res.text)) and "drive.google.com" in drive_link[0]:
520 | return drive_link[0]
521 | else:
522 |             raise DirectDownloadLinkException('ERROR: Drive Link not found, try in your browser')
523 | token_url = token_url[0]
524 | try:
525 | token_page = cget('GET', token_url)
526 | except Exception as e:
527 | raise DirectDownloadLinkException(f'ERROR: {e.__class__.__name__} with {token_url}')
528 |     _path = findall(r'\("(.*?)"\)', token_page.text)
529 | if not _path:
530 | raise DirectDownloadLinkException('ERROR: Cannot bypass this')
531 | _path = _path[0]
532 | raw = urlparse(token_url)
533 | final_url = f'{raw.scheme}://{raw.hostname}{_path}'
534 | return sharer_scraper(final_url)
535 |
536 | def sharer_scraper(url):
537 | cget = create_scraper().request
538 | try:
539 | url = cget('GET', url).url
540 | raw = urlparse(url)
541 | header = {"useragent": "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/534.10 (KHTML, like Gecko) Chrome/7.0.548.0 Safari/534.10"}
542 | res = cget('GET', url, headers=header)
543 | except Exception as e:
544 | raise DirectDownloadLinkException(f'ERROR: {e.__class__.__name__}')
545 |     key = findall(r'"key",\s+"(.*?)"', res.text)
546 | if not key:
547 | raise DirectDownloadLinkException("ERROR: Key not found!")
548 | key = key[0]
549 | if not etree.HTML(res.content).xpath("//button[@id='drc']"):
550 | raise DirectDownloadLinkException("ERROR: This link don't have direct download button")
551 | boundary = uuid4()
552 | headers = {
553 | 'Content-Type': f'multipart/form-data; boundary=----WebKitFormBoundary{boundary}',
554 | 'x-token': raw.hostname,
555 | 'useragent': 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/534.10 (KHTML, like Gecko) Chrome/7.0.548.0 Safari/534.10'
556 | }
557 |
558 | data = f'------WebKitFormBoundary{boundary}\r\nContent-Disposition: form-data; name="action"\r\n\r\ndirect\r\n' \
559 | f'------WebKitFormBoundary{boundary}\r\nContent-Disposition: form-data; name="key"\r\n\r\n{key}\r\n' \
560 | f'------WebKitFormBoundary{boundary}\r\nContent-Disposition: form-data; name="action_token"\r\n\r\n\r\n' \
561 | f'------WebKitFormBoundary{boundary}--\r\n'
562 | try:
563 | res = cget("POST", url, cookies=res.cookies, headers=headers, data=data).json()
564 | except Exception as e:
565 | raise DirectDownloadLinkException(f'ERROR: {e.__class__.__name__}')
566 | if "url" not in res:
567 |         raise DirectDownloadLinkException('ERROR: Drive Link not found, try in your browser')
568 | if "drive.google.com" in res["url"]:
569 | return res["url"]
570 | try:
571 | res = cget('GET', res["url"])
572 | except Exception as e:
573 | raise DirectDownloadLinkException(f'ERROR: {e.__class__.__name__}')
574 | if (drive_link := etree.HTML(res.content).xpath("//a[contains(@class,'btn')]/@href")) and "drive.google.com" in drive_link[0]:
575 | return drive_link[0]
576 | else:
577 |         raise DirectDownloadLinkException('ERROR: Drive Link not found, try in your browser')
578 |
579 | def wetransfer(url):
580 | cget = create_scraper().request
581 | try:
582 | url = cget('GET', url).url
583 | json_data = {
584 | 'security_hash': url.split('/')[-1],
585 | 'intent': 'entire_transfer'
586 | }
587 | res = cget('POST', f'https://wetransfer.com/api/v4/transfers/{url.split("/")[-2]}/download', json=json_data).json()
588 | except Exception as e:
589 | raise DirectDownloadLinkException(f'ERROR: {e.__class__.__name__}')
590 | if "direct_link" in res:
591 | return res["direct_link"]
592 | elif "message" in res:
593 | raise DirectDownloadLinkException(f"ERROR: {res['message']}")
594 | elif "error" in res:
595 | raise DirectDownloadLinkException(f"ERROR: {res['error']}")
596 | else:
597 | raise DirectDownloadLinkException("ERROR: cannot find direct link")
598 |
599 | def akmfiles(url):
600 | cget = create_scraper().request
601 | try:
602 | url = cget('GET', url).url
603 | json_data = {
604 | 'op': 'download2',
605 | 'id': url.split('/')[-1]
606 | }
607 | res = cget('POST', url, data=json_data)
608 | except Exception as e:
609 | raise DirectDownloadLinkException(f'ERROR: {e.__class__.__name__}')
610 | if direct_link := etree.HTML(res.content).xpath("//a[contains(@class,'btn btn-dow')]/@href"):
611 | return direct_link[0]
612 | else:
613 | raise DirectDownloadLinkException('ERROR: Direct link not found')
614 |
615 | def shrdsk(url):
616 | cget = create_scraper().request
617 | try:
618 | url = cget('GET', url).url
619 | res = cget('GET', f'https://us-central1-affiliate2apk.cloudfunctions.net/get_data?shortid={url.split("/")[-1]}')
620 | except Exception as e:
621 | raise DirectDownloadLinkException(f'ERROR: {e.__class__.__name__}')
622 | if res.status_code != 200:
623 | raise DirectDownloadLinkException(f'ERROR: Status Code {res.status_code}')
624 | res = res.json()
625 | if "type" in res and res["type"].lower() == "upload" and "video_url" in res:
626 | return res["video_url"]
627 | raise DirectDownloadLinkException("ERROR: cannot find direct link")
628 |
629 | def linkbox(url):
630 | cget = create_scraper().request
631 | try:
632 | url = cget('GET', url).url
633 | res = cget('GET', f'https://www.linkbox.to/api/file/detail?itemId={url.split("/")[-1]}').json()
634 | except Exception as e:
635 | raise DirectDownloadLinkException(f'ERROR: {e.__class__.__name__}')
636 | if 'data' not in res:
637 | raise DirectDownloadLinkException('ERROR: Data not found!!')
638 | data = res['data']
639 | if not data:
640 | raise DirectDownloadLinkException('ERROR: Data is None!!')
641 | if 'itemInfo' not in data:
642 | raise DirectDownloadLinkException('ERROR: itemInfo not found!!')
643 | itemInfo = data['itemInfo']
644 | if 'url' not in itemInfo:
645 | raise DirectDownloadLinkException('ERROR: url not found in itemInfo!!')
646 | if "name" not in itemInfo:
647 | raise DirectDownloadLinkException('ERROR: Name not found in itemInfo!!')
648 | name = quote(itemInfo["name"])
649 | raw = itemInfo['url'].split("/", 3)[-1]
650 | return f'https://wdl.nuplink.net/{raw}&filename={name}'
651 |
652 | def zippyshare(url):
653 | cget = create_scraper().request
654 | try:
655 | url = cget('GET', url).url
656 | resp = cget('GET', url)
657 | except Exception as e:
658 | raise DirectDownloadLinkException(f'ERROR: {e.__class__.__name__}')
659 | if not resp.ok:
660 | raise DirectDownloadLinkException('ERROR: Something went wrong!!, Try in your browser')
661 | if findall(r'>File does not exist on this server<', resp.text):
662 | raise DirectDownloadLinkException('ERROR: File does not exist on server!!, Try in your browser')
663 | pages = etree.HTML(resp.text).xpath("//script[contains(text(),'dlbutton')][3]/text()")
664 | if not pages:
665 | raise DirectDownloadLinkException('ERROR: Page not found!!')
666 | js_script = pages[0]
667 | if omg := findall(r"\.omg.=.(.*?);", js_script):
668 | omg = omg[0]
669 | method = f'omg = {omg}'
670 | mtk = (eval(omg) * (int(omg.split("%")[0]) % 3)) + 18
671 | uri1 = findall(r'"/(d/\S+)/"', js_script)
672 | uri2 = findall(r'\/d.*?\+"/(\S+)";', js_script)
673 | elif var_a := findall(r"var.a.=.(\d+)", js_script):
674 | var_a = var_a[0]
675 | method = f'var_a = {var_a}'
676 | mtk = int(pow(int(var_a), 3) + 3)
677 | uri1 = findall(r"\.href.=.\"/(.*?)/\"", js_script)
678 | uri2 = findall(r"\+\"/(.*?)\"", js_script)
679 | elif var_ab := findall(r"var.[ab].=.(\d+)", js_script):
680 | a = var_ab[0]
681 | b = var_ab[1]
682 | method = f'a = {a}, b = {b}'
683 | mtk = eval(f"{floor(int(a)/3) + int(a) % int(b)}")
684 | uri1 = findall(r"\.href.=.\"/(.*?)/\"", js_script)
685 | uri2 = findall(r"\)\+\"/(.*?)\"", js_script)
686 | elif unknown := findall(r"\+\((.*?).\+", js_script):
687 | method = f'unknown = {unknown[0]}'
688 | mtk = eval(f"{unknown[0]}+ 11")
689 | uri1 = findall(r"\.href.=.\"/(.*?)/\"", js_script)
690 | uri2 = findall(r"\)\+\"/(.*?)\"", js_script)
691 | elif unknown1 := findall(r"\+.\((.*?)\).\+", js_script):
692 | method = f'unknown1 = {unknown1[0]}'
693 | mtk = eval(unknown1[0])
694 | uri1 = findall(r"\.href.=.\"/(.*?)/\"", js_script)
695 | uri2 = findall(r"\+.\"/(.*?)\"", js_script)
696 | else:
697 | raise DirectDownloadLinkException("ERROR: Direct link not found")
698 |     if not all([uri1, uri2]):
699 | raise DirectDownloadLinkException(f"ERROR: uri1 or uri2 not found with method {method}")
700 | domain = urlparse(url).hostname
701 | return f"https://{domain}/{uri1[0]}/{mtk}/{uri2[0]}"
702 |
--------------------------------------------------------------------------------
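A minimal usage sketch for the module above (the URL is a placeholder; `direct_link_gen` raises `DirectDownloadLinkException` for unsupported sites or failed bypasses):

```python
from direct_link_generator import direct_link_gen, DirectDownloadLinkException

try:
    # Placeholder link: any host handled by direct_link_gen's dispatch chain works here
    print(direct_link_gen("https://pixeldrain.com/u/abc123"))
except DirectDownloadLinkException as err:
    print(err)  # e.g. "ERROR: No Direct link function found for <link>"
```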
/generate_token_pickle.py:
--------------------------------------------------------------------------------
1 | import pickle
2 | import os
3 | from google_auth_oauthlib.flow import InstalledAppFlow
4 | from google.auth.transport.requests import Request
5 |
6 | credentials = None
7 | __G_DRIVE_TOKEN_FILE = "/mnt/token.pickle"
8 | __OAUTH_SCOPE = ["https://www.googleapis.com/auth/drive"]
9 | if os.path.exists(__G_DRIVE_TOKEN_FILE):
10 | with open(__G_DRIVE_TOKEN_FILE, 'rb') as f:
11 | credentials = pickle.load(f)
12 | if credentials is None or not credentials.valid:
13 |     if (
14 |         credentials
15 |         and credentials.expired
16 |         and credentials.refresh_token
17 |     ):
18 |         credentials.refresh(Request())
19 |     else:
20 |         flow = InstalledAppFlow.from_client_secrets_file(
21 |             '/mnt/credentials.json', __OAUTH_SCOPE)
22 |         credentials = flow.run_local_server(port=0, open_browser=False)
23 |
24 | # Save the credentials for the next run
25 | with open(__G_DRIVE_TOKEN_FILE, 'wb') as token:
26 | pickle.dump(credentials, token)
27 |
--------------------------------------------------------------------------------
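A quick sanity check of the generated token (a sketch, assuming token.pickle ended up in the current directory):

```python
import pickle

# Load the saved Google OAuth credentials and report their state
with open("token.pickle", "rb") as f:
    creds = pickle.load(f)
print(f"valid: {creds.valid}, expired: {creds.expired}, scopes: {creds.scopes}")
```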
/main.py:
--------------------------------------------------------------------------------
1 | import os
2 | import re
3 | import json
4 | import math
5 | import time
6 | import psutil
7 | import aria2p
8 | import pickle
9 | import logging
10 | import requests
11 | import humanize
12 | import subprocess
13 | import threading
14 | import shutil
15 | import magic
16 | import ffmpeg
17 | import patoolib
18 | import qbittorrentapi
19 | import asyncio
20 | import uvloop
21 | asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())
22 | from natsort import natsorted
23 | from PIL import Image, UnidentifiedImageError
24 | from filesplit.split import Split
25 | from ffprobe import FFProbe
26 | from ffprobe.exceptions import FFProbeError
27 | from patoolib import ArchiveMimetypes
28 | from qbit_conf import QBIT_CONF
29 | from pyngrok import ngrok, conf
30 | from urllib.parse import quote
31 | from io import StringIO, FileIO
32 | from sys import exit as _exit
33 | from random import choice
34 | from string import ascii_letters
35 | from typing import Union, Optional, Dict, List, Tuple, Set
36 | from dotenv import load_dotenv
37 | from tenacity import retry, wait_exponential, stop_after_attempt, retry_if_exception_type, RetryError, Retrying, AsyncRetrying
38 | from telegram import Update, error, constants, InlineKeyboardButton, InlineKeyboardMarkup, CallbackQuery, Document, Message
39 | from telegram.ext.filters import Chat
40 | from telegram.ext import ApplicationBuilder, ContextTypes, CommandHandler, CallbackQueryHandler, Application, MessageHandler, filters
41 | from google.auth.transport.requests import Request
42 | from google.auth.exceptions import RefreshError
43 | from googleapiclient.discovery import build
44 | from googleapiclient.http import MediaFileUpload, MediaIoBaseDownload
45 | from googleapiclient.errors import HttpError
46 | from pyrogram import Client, errors, enums, types
47 |
48 | logging.basicConfig(
49 | format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
50 | level=logging.INFO,
51 | handlers=[logging.FileHandler('log.txt'), logging.StreamHandler()]
52 | )
53 | logging.getLogger("pyngrok.ngrok").setLevel(logging.ERROR)
54 | logging.getLogger("pyngrok.process").setLevel(logging.ERROR)
55 | logging.getLogger("pyrogram").setLevel(logging.ERROR)
56 | logging.getLogger("apscheduler.executors.default").setLevel(logging.ERROR)
57 | logger = logging.getLogger(__name__)
58 | from direct_link_generator import direct_link_gen, is_mega_link, is_gdrive_link, get_gdrive_id
59 | aria2c: Optional[aria2p.API] = None
60 | pyro_app: Optional[Client] = None
61 | CONFIG_FILE_URL = os.getenv("CONFIG_FILE_URL")
62 | BOT_START_TIME: Optional[float] = None
63 | BOT_TOKEN: Optional[str] = None
64 | NGROK_AUTH_TOKEN: Optional[str] = None
65 | GDRIVE_FOLDER_ID: Optional[str] = None
66 | AUTO_DEL_TASK: Optional[bool] = None
67 | AUTHORIZED_USERS = set()
68 | TASK_STATUS_MSG_DICT = dict()
69 | TASK_UPLOAD_MODE_DICT = dict()
70 | TASK_CHAT_DICT = dict()
71 | TASK_ID_DICT = dict()
72 | CHAT_UPDATE_MSG_DICT = dict()
73 | UPDOWN_LIST = list()
74 | FOUR_GB = 4194304000
75 | TWO_GB = 2097152000
76 | PICKLE_FILE_NAME = "token.pickle"
77 | START_CMD = "start"
78 | MIRROR_CMD = "mirror"
79 | TASK_CMD = "task"
80 | INFO_CMD = "stats"
81 | LOG_CMD = "log"
82 | QBIT_CMD = "qbmirror"
83 | NGROK_CMD = "ngrok"
84 | UNZIP_ARIA_CMD = "unzipmirror"
85 | UNZIP_QBIT_CMD = "qbunzipmirror"
86 | LEECH_ARIA_CMD = "leech"
87 | LEECH_QBIT_CMD = "qbleech"
88 | UNZIP_LEECH_CMD = "unzipleech"
89 | GDRIVE_BASE_URL = "https://drive.google.com/uc?id={}&export=download"
90 | GDRIVE_FOLDER_BASE_URL = "https://drive.google.com/drive/folders/{}"
91 | TRACKER_URLS = [
92 | "https://cdn.staticaly.com/gh/XIU2/TrackersListCollection/master/all_aria2.txt",
93 | "https://raw.githubusercontent.com/hezhijie0327/Trackerslist/main/trackerslist_tracker.txt"
94 | ]
95 | DHT_FILE_URL = "https://github.com/P3TERX/aria2.conf/raw/master/dht.dat"
96 | DHT6_FILE_URL = "https://github.com/P3TERX/aria2.conf/raw/master/dht6.dat"
97 | ARIA_COMMAND = "aria2c --allow-overwrite=true --auto-file-renaming=true --check-certificate=false --check-integrity=true "\
98 | "--continue=true --content-disposition-default-utf8=true --daemon=true --disk-cache=40M --enable-rpc=true "\
99 | "--force-save=true --http-accept-gzip=true --max-connection-per-server=16 --max-concurrent-downloads=10 "\
100 | "--max-file-not-found=0 --max-tries=20 --min-split-size=10M --optimize-concurrent-downloads=true --reuse-uri=true "\
101 | "--quiet=true --rpc-max-request-size=1024M --split=10 --summary-interval=0 --seed-time=0 --bt-enable-lpd=true "\
102 | "--bt-detach-seed-only=true --bt-remove-unselected-file=true --follow-torrent=mem --bt-tracker={} "\
103 | "--keep-unfinished-download-result=true --save-not-found=true --save-session=/usr/src/app/aria2.session --save-session-interval=60"
104 | MAGNET_REGEX = r'magnet:\?xt=urn:(btih|btmh):[a-zA-Z0-9]*\s*'
105 | URL_REGEX = r'^(?!\/)(rtmps?:\/\/|mms:\/\/|rtsp:\/\/|https?:\/\/|ftp:\/\/)?([^\/:]+:[^\/@]+@)?(www\.)?(?=[^\/:\s]+\.[^\/:\s]+)([^\/:\s]+\.[^\/:\s]+)(:\d+)?(\/[^#\s]*[\s\S]*)?(\?[^#\s]*)?(#.*)?$'
106 | DOWNLOAD_PATH = "/usr/src/app/downloads"
107 | ARIA_OPTS = {'dir': DOWNLOAD_PATH.rstrip("/"), 'max-upload-limit': '5M'}
108 | GDRIVE_PERM = {
109 | 'role': 'reader',
110 | 'type': 'anyone',
111 | 'value': None,
112 | 'withLink': True
113 | }
114 | LOCK = asyncio.Lock()
115 |
116 | def get_qbit_client() -> Optional[qbittorrentapi.Client]:
117 | try:
118 | return qbittorrentapi.Client(
119 | host='http://localhost',
120 | port=8090,
121 | username='admin',
122 | password='adminadmin',
123 | VERIFY_WEBUI_CERTIFICATE=False,
124 | REQUESTS_ARGS={'timeout': (30, 60)},
125 | DISABLE_LOGGING_DEBUG_OUTPUT=True
126 | )
127 | except Exception as err:
128 | logger.error(f"Failed to initialize qbit client: {err.__class__.__name__}")
129 | return None
130 |
131 | async def send_msg_async(msg: str, user_id: Union[str, int]) -> int:
132 | msg_id: int = 0
133 | if all([pyro_app, msg, user_id]):
134 | try:
135 | message = await pyro_app.send_message(chat_id=user_id, text=msg, parse_mode=enums.ParseMode.HTML,
136 | disable_notification=True, disable_web_page_preview=True)
137 | msg_id = message.id
138 | except errors.RPCError as err:
139 | logger.error(f"Failed to send msg to {user_id}[{err.ID}]")
140 | else:
141 | logger.warning(f"Failed to send msg to {user_id}[req param missing]")
142 | return msg_id
143 |
144 | def send_status_update(msg: str, userid: Optional[str] = None) -> Dict[str, str]:
145 | tg_api_url = "https://api.telegram.org/bot{}/sendMessage"
146 | headers = {
147 | "accept": "application/json",
148 | "User-Agent": "Telegram Bot SDK - (https://github.com/irazasyed/telegram-bot-sdk)",
149 | "content-type": "application/json"
150 | }
151 | msg_chat_dict: Dict[str, str] = dict()
152 | for user_id in AUTHORIZED_USERS:
153 | if userid is not None and userid != user_id:
154 | continue
155 | payload = {
156 | "text": msg,
157 | "parse_mode": "HTML",
158 | "disable_web_page_preview": True,
159 | "disable_notification": True,
160 | "reply_to_message_id": None,
161 | "chat_id": user_id
162 | }
163 | try:
164 | response = requests.post(tg_api_url.format(BOT_TOKEN), json=payload, headers=headers)
165 | if response.ok:
166 | logger.info(f"Status msg sent to: {user_id}")
167 | resp_json = json.loads(response.text).get("result")
168 | msg_id = resp_json.get("message_id")
169 | chat_id = resp_json.get("chat").get("id")
170 | msg_chat_dict[msg_id] = chat_id
171 | response.close()
172 | else:
173 | logger.warning(f"Failed to send message [{json.loads(response.text).get('description')}]")
174 | except requests.exceptions.RequestException:
175 | logger.error(f"Failed to send updates to {user_id}")
176 | except (json.JSONDecodeError, AttributeError, IndexError):
177 | logger.warning("Failed to get sent message detail")
178 | return msg_chat_dict
179 |
180 | async def delete_msg(msg_id: int, chat_id: Union[str, int]) -> None:
181 | if all([pyro_app, msg_id, chat_id]):
182 | try:
183 | await pyro_app.delete_messages(chat_id=chat_id, message_ids=msg_id)
184 | except errors.RPCError as err:
185 | logger.warning(f"Failed to delete msg: {msg_id}[{err.ID}]")
186 |
187 | def get_oauth_creds():
188 | logger.info("Loading token.pickle file")
189 | with open(PICKLE_FILE_NAME, 'rb') as f:
190 | credentials = pickle.load(f)
191 | if credentials and credentials.expired and credentials.refresh_token:
192 | try:
193 | credentials.refresh(Request())
194 | except RefreshError:
195 | logger.error("Failed to refresh token")
196 | return None
197 | return credentials
198 |
199 | async def get_gdrive_files(query: Optional[str], folder_id: str, creds) -> List[Dict[str, str]]:
200 | gdrive_service = build('drive', 'v3', credentials=creds if creds is not None else get_oauth_creds(), cache_discovery=False)
201 | if query is None:
202 | query = f"trashed=false and parents in '{folder_id}'"
203 | files_list = []
204 | try:
205 | async for attempt in AsyncRetrying(wait=wait_exponential(multiplier=2, min=4, max=8), stop=stop_after_attempt(2), retry=retry_if_exception_type(Exception)):
206 | with attempt:
207 | files_list.clear()
208 | page_token = None
209 | while True:
210 | response = gdrive_service.files().list(
211 | supportsAllDrives=True, includeItemsFromAllDrives=True, corpora='allDrives', q=query,
212 | spaces='drive', fields='files(id, name, size, mimeType)', pageToken=page_token, pageSize=200).execute()
213 | files_list.extend(response.get("files", []))
214 | page_token = response.get('nextPageToken')
215 | if page_token is None:
216 | break
217 | except RetryError as err:
218 | files_list.clear()
219 | last_err = err.last_attempt.exception()
220 |         err_msg = last_err.error_details if isinstance(last_err, HttpError) else str(last_err).replace('<', '').replace('>', '')
221 | logger.error(f"Failed to get files list, error: {err_msg}")
222 | gdrive_service.close()
223 | return files_list
224 |
225 | async def count_uploaded_files(creds = None, folder_id: str = None, file_name: str = None) -> int:
226 | if folder_id is not None:
227 | logger.info(f"Getting the count of files present in {folder_id}")
228 | query = f"mimeType != 'application/vnd.google-apps.folder' and trashed=false and parents in '{folder_id}'"
229 | else:
230 | logger.info(f"Searching for: {file_name}")
231 | file_name = file_name.replace("'", "\\'")
232 | query = f"fullText contains '{file_name}' and trashed=false and parents in '{GDRIVE_FOLDER_ID}'"
233 | return len(await get_gdrive_files(query, folder_id, creds))
234 |
235 | async def delete_empty_folder(folder_id: str, creds = None) -> None:
236 | if folder_id and not await count_uploaded_files(folder_id=folder_id):
237 | logger.info(f"Deleting empty folder: {folder_id} in GDrive")
238 | try:
239 | gdrive_service = build('drive', 'v3', credentials=creds if creds is not None else get_oauth_creds(), cache_discovery=False)
240 | gdrive_service.files().delete(fileId=folder_id, supportsAllDrives=True).execute()
241 | gdrive_service.close()
242 | except Exception as err:
243 | logger.warning(f"Failed to delete folder: {folder_id}, error: {err.__class__.__name__}")
244 |
245 | def remove_extracted_dir(file_name: str) -> None:
246 | if os.path.exists(f"{DOWNLOAD_PATH}/{os.path.splitext(file_name)[0]}"):
247 | shutil.rmtree(path=f"{DOWNLOAD_PATH}/{os.path.splitext(file_name)[0]}", ignore_errors=True)
248 |
249 | def clear_task_files(task_id: str = None, is_qbit: bool = False) -> None:
250 | file_name = None
251 | if task_id is None:
252 | return
253 | logger.info(f"Removing task: {task_id}")
254 | try:
255 | if is_qbit is True:
256 | if qb_client := get_qbit_client():
257 | qbit = qb_client.torrents_info(torrent_hashes=[task_id])[0]
258 | file_name = qbit.files[0].get('name').split("/")[0] if qbit.files else qbit.get('name')
259 | qb_client.torrents_delete(delete_files=True, torrent_hashes=[task_id])
260 | qb_client.auth_log_out()
261 | else:
262 | down = aria2c.get_download(gid=task_id)
263 | file_name = down.name
264 | aria2c.remove(downloads=[down], force=False, files=True, clean=True)
265 | remove_extracted_dir(file_name) if file_name else logger.warning("Unable to get file name")
266 | except Exception as err:
267 | logger.error(f"Failed to remove, error: {str(err)}")
268 |
269 | def create_folder(folder_name: str, creds) -> Optional[str]:
270 | folder_id = None
271 | logger.info(f"Creating folder: {folder_name} in GDrive")
272 | dir_metadata = {'name': folder_name, 'parents': [GDRIVE_FOLDER_ID], 'mimeType': 'application/vnd.google-apps.folder'}
273 | try:
274 | for attempt in Retrying(wait=wait_exponential(multiplier=2, min=4, max=8), stop=stop_after_attempt(2), retry=retry_if_exception_type(Exception)):
275 | with attempt:
276 | gdrive_service = build('drive', 'v3', credentials=creds, cache_discovery=False)
277 | upload_dir = gdrive_service.files().create(body=dir_metadata, supportsAllDrives=True, fields='id').execute()
278 | folder_id = upload_dir.get('id')
279 | except RetryError as err:
280 | last_err = err.last_attempt.exception()
281 | err_msg = f"⚠️ Upload failed: {folder_name}
error in creating folder\nReason: {last_err.error_details}" +\
282 | f"[{last_err.status_code}]
" if isinstance(last_err, HttpError) else f"{str(last_err).replace('<', '').replace('>', '')}
"
283 | logger.error(f"Failed to create folder: {folder_name} [attempts: {err.last_attempt.attempt_number}]")
284 | send_status_update(err_msg)
285 | else:
286 | logger.info(f"Setting permissions for: {folder_name}")
287 | try:
288 | gdrive_service.permissions().create(fileId=folder_id, body=GDRIVE_PERM, supportsAllDrives=True).execute()
289 | gdrive_service.close()
290 | except HttpError:
291 | logger.warning(f"Failed to set permission for: {folder_name}")
292 | return folder_id
293 |
294 | def get_index_link(file_path: Optional[str]) -> str:
295 | _index_link = ""
296 | if file_path is None:
297 | return _index_link
298 | _file_name = os.path.basename(file_path)
299 | if link := os.getenv("INDEX_LINK", "").rstrip('/'):
300 | is_video = True if magic.from_file(file_path, mime=True).startswith('video') else False
301 | _link = f"{link}/{quote(_file_name, safe='')}?a=view" if is_video else f"{link}/{quote(_file_name, safe='')}"
302 | _index_link = f"\n⚡ Index Link: Click here"
303 | return _index_link
304 |
305 | def get_progress_bar(completed: int, total: int) -> str:
306 | completed /= 8
307 | total /= 8
308 | p = 0 if total == 0 else round(completed * 100 / total)
309 | p = min(max(p, 0), 100)
310 | c_block = p // 7
311 | p_str = '■' * c_block
312 | p_str += '▢' * (14 - c_block)
313 | return f"[{p_str}]"
314 |
315 | async def delete_download_status() -> None:
316 | to_be_del = set()
317 | async with LOCK:
318 | for chat_id in CHAT_UPDATE_MSG_DICT:
319 | await delete_msg(CHAT_UPDATE_MSG_DICT[chat_id], chat_id)
320 | to_be_del.add(chat_id)
321 | for _chat_id in to_be_del:
322 | CHAT_UPDATE_MSG_DICT.pop(_chat_id)
323 |
324 | class UpDownProgressUpdate:
325 | def __init__(self, name: str = None, up_start_time: float = 0.0, current: int = 0, total: int = 0, user_id: str = None, action: str = "📤 Uploading"):
326 | self.file_name = name
327 | self.up_start_time = up_start_time
328 | self.uploaded_bytes = current
329 | self.total_bytes = total
330 | self.user_id = user_id
331 | self.action = action
332 | self.status = "IN_PROGRESS"
333 |
334 | async def set_processed_bytes(self, current: int = 0, total: int = 0) -> None:
335 | self.uploaded_bytes = current
336 |
337 | async def get_status_msg(self) -> str:
338 | if self.uploaded_bytes > self.total_bytes:
339 | self.uploaded_bytes = self.total_bytes
340 | return f"╭{self.action}\n├🗂️ File: {self.file_name}
\n├📀 Size: {humanize.naturalsize(self.total_bytes)}
\n" \
341 | f"├{get_progress_bar(self.uploaded_bytes, self.total_bytes)}
\n├💾 Processed: {humanize.naturalsize(self.uploaded_bytes)} " \
342 | f"[{round(number=self.uploaded_bytes * 100 / self.total_bytes, ndigits=1)}%]
\n╰⚡ Speed: {humanize.naturalsize(self.uploaded_bytes / (time.time() - self.up_start_time))}/s
"
343 |
344 | def upload_file(file_path: str, folder_id: str, creds, task_data: str = None, from_dir: bool = False,
345 | up_prog: UpDownProgressUpdate = None) -> None:
346 | file_name = os.path.basename(file_path)
347 | file_metadata = {'name': file_name, 'parents': [folder_id]}
348 | logger.info(f"Starting upload: {file_name}")
349 | try:
350 | for attempt in Retrying(wait=wait_exponential(multiplier=2, min=3, max=6), stop=stop_after_attempt(3), retry=retry_if_exception_type(Exception)):
351 | with attempt:
352 | gdrive_service = build('drive', 'v3', credentials=creds, cache_discovery=False)
353 | drive_file = gdrive_service.files().create(
354 | body=file_metadata, supportsAllDrives=True,
355 | media_body=MediaFileUpload(filename=file_path, resumable=True, chunksize=50 * 1024 * 1024),
356 | fields='id')
357 | response = None
358 | _current_bytes = 0
359 | _last_bytes = 0
360 | _size = os.stat(file_path).st_size
361 | while response is None:
362 | try:
363 | _status, response = drive_file.next_chunk()
364 | if _status:
365 | _current_bytes = _status.resumable_progress if _last_bytes == 0 else _status.resumable_progress - _last_bytes
366 | _last_bytes = _status.resumable_progress
367 | up_prog.uploaded_bytes += _current_bytes
368 | elif _current_bytes == 0:
369 | up_prog.uploaded_bytes += _size
370 | else:
371 | up_prog.uploaded_bytes += _size - _current_bytes
372 | except HttpError as err:
373 | if err.resp.get('content-type', '').startswith('application/json'):
374 | message = eval(err.content).get('error').get('errors')[0].get('message')
375 | else:
376 | message = err.error_details
377 | logger.warning(f"Retrying upload: {file_name} Reason: {message}")
378 | raise err
379 | logger.info(f"Upload completed for {file_name}")
380 | except RetryError as err:
381 | last_err = err.last_attempt.exception()
382 |         err_msg = last_err.error_details if isinstance(last_err, HttpError) else str(last_err).replace('<', '').replace('>', '')
383 | logger.error(f"Failed to upload: {file_name} error: {err_msg} attempts: {err.last_attempt.attempt_number}")
384 | msg = f"🗂️ File: {file_name}
upload failed❗\n⚠️ Reason: {err_msg}
"
385 | if from_dir is False:
386 | send_status_update(msg)
387 | else:
388 | drive_file = gdrive_service.files().get(fileId=response['id'], supportsAllDrives=True).execute()
389 | logger.info(f"Setting permissions for {file_name}")
390 | try:
391 | gdrive_service.permissions().create(fileId=drive_file.get('id'), body=GDRIVE_PERM, supportsAllDrives=True).execute()
392 | gdrive_service.close()
393 | except HttpError:
394 | pass
395 | if folder_id == GDRIVE_FOLDER_ID:
396 | send_status_update(f"🗂️ File: {file_name}
uploaded ✔️\n🌐 GDrive Link: "
397 | f"Click here{get_index_link(file_path)}")
398 | if task_data and AUTO_DEL_TASK is True:
399 | task_d = task_data.split(sep="#", maxsplit=1)
400 | task_id = task_d[1]
401 | clear_task_files(task_id, True if task_d[0] == "qbit" else False)
402 |
403 | def is_archive_file(file_name: str) -> bool:
404 | if os.path.isfile(f"{DOWNLOAD_PATH}/{file_name}"):
405 | return magic.from_file(filename=f"{DOWNLOAD_PATH}/{file_name}", mime=True) in ArchiveMimetypes
406 | else:
407 | return False
408 |
409 | async def upload_to_gdrive(gid: str = None, hash: str = None, name: str = None, chat_id: str = None) -> Optional[bool]:
410 | count = 0
411 | creds = None
412 | folder_id = None
413 | is_dir = False
414 | file_name = None
415 | task_done = False
416 | file_path = None
417 | task_id: Optional[str] = None
418 | try:
419 | aria2c.client.save_session()
420 | if hash is not None:
421 | task_id = f"qbit#{hash}"
422 | if qb_client := get_qbit_client():
423 | file_name = qb_client.torrents_files(torrent_hash=hash)[0].get('name').split("/")[0]
424 | qb_client.auth_log_out()
425 | elif gid is not None:
426 | task_id = f"aria#{gid}"
427 | down = aria2c.get_download(gid)
428 | file_name = down.name
429 | if down.followed_by_ids or down.is_metadata:
430 | logger.info(f"Skip uploading of: {file_name}")
431 | return
432 | elif name is None:
433 | logger.error(f"Upload event failed, required param missing")
434 | return
435 | else:
436 | file_name = name
437 | file_path = f"{DOWNLOAD_PATH}/{file_name}"
438 | if not os.path.exists(file_path):
439 | logger.error(f"Upload event failed, could not find {file_path}")
440 | return
441 | else:
442 | if creds := get_oauth_creds():
443 | _size = 0
444 | if os.path.isfile(file_path):
445 | _size = os.stat(file_path).st_size
446 | else:
447 | for path, _, files in os.walk(file_path):
448 | for f in files:
449 | _size += os.stat(os.path.join(path, f)).st_size
450 | up_prog = UpDownProgressUpdate(name=file_name, up_start_time=time.time(), total=_size, user_id=chat_id, action="📤 Uploading [GDrive]")
451 | async with LOCK:
452 | UPDOWN_LIST.append(up_prog)
453 | if os.path.isdir(file_path) is True:
454 | is_dir = True
455 | if folder_id := create_folder(os.path.basename(file_path), creds):
456 | for path, _, files in os.walk(file_path):
457 | for f in files:
458 | count += 1
459 | await asyncio.to_thread(upload_file, os.path.join(path, f), folder_id, creds, None, True, up_prog)
460 | else:
461 | await asyncio.to_thread(upload_file, file_path, GDRIVE_FOLDER_ID, creds, task_id, False, up_prog)
462 | up_prog.status = "COMPLETE"
463 | except (aria2p.ClientException, OSError, AttributeError):
464 | logger.error("Failed to complete download event task")
465 | except qbittorrentapi.exceptions.NotFound404Error:
466 | logger.error("Failed to get torrent hash info")
467 | if is_dir is True and folder_id is not None:
468 | if count == await count_uploaded_files(creds=creds, folder_id=folder_id):
469 | send_status_update(f"🗂️ Folder: {file_name}
uploaded ✔️\n🌐 GDrive Link: "
470 | f"Click here{get_index_link(file_path)}")
471 | if AUTO_DEL_TASK is True:
472 | _args = (gid, False) if gid else (hash, True) if hash else (None, False)
473 | clear_task_files(*_args)
474 | task_done = True
475 | else:
476 | await delete_empty_folder(folder_id, creds)
477 | send_status_update(f"🗂️ Folder: {file_name}
upload failed❗\n⚠️ Please check the log for more details using /{LOG_CMD}
")
478 | await delete_download_status()
479 | return task_done
480 |
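# Note: upload tasks are tagged as "qbit#<hash>" or "aria#<gid>"; clear_task_files()
# receives the part after "#" plus a flag derived from the prefix (see the task_data
# split in upload_file above) to decide which download client owns the files.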
481 | def get_duration(file_path: str) -> int:
482 | duration = 0
483 | try:
484 | probe = ffmpeg.probe(file_path)
485 | _stream = next((stream for stream in probe['streams'] if stream['codec_type'] == 'video' or stream['codec_type'] == 'audio'), None)
486 | duration = round(float(_stream.get('duration', 0)))
487 | except (ffmpeg.Error, ValueError, AttributeError) as err:
488 | logger.warning(f"ffmpeg probe error: {err.__class__.__name__}")
489 | try:
490 | for stream in FFProbe(file_path).streams:
491 | if stream.is_video() or stream.is_audio():
492 | duration = round(stream.duration_seconds())
493 | break
494 | except (FFProbeError, AttributeError) as probe_err:
495 | logger.warning(f"ffprobe fallback error: {probe_err.__class__.__name__}")
496 | return duration
497 |
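# get_duration() backs both the thumbnail seek offset in get_file_thumb() and the
# duration metadata passed to Telegram uploads; it tries ffmpeg-python's probe first
# and falls back to the FFProbe wrapper when that raises.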
498 | @retry(wait=wait_exponential(multiplier=2, min=3, max=6), stop=stop_after_attempt(3), retry=(retry_if_exception_type(errors.RPCError)))
499 | async def upload_to_tg(file_id: str, file_path: str, media_list: Optional[List[types.InputMediaDocument]] = None,
500 | is_audio: bool = False, is_video: bool = False, is_photo: bool = False, thumb: str = None) -> None:
501 | file_name = os.path.basename(file_path)
502 | user_id = TASK_CHAT_DICT[file_id] if file_id in TASK_CHAT_DICT else None
503 | if pyro_app is None:
504 | send_status_update(f"⚠️ Skip upload: {file_name}
\nReason: Pyrogram session is not initialized", user_id)
505 | return
506 | stat_msg_id = None
507 | _file_size = 0
508 | if media_list is None:
509 | _file_size = os.stat(file_path).st_size
510 | else:
511 | msg_txt = f"📤 Upload started [media group]\n🗂️ Folder: {file_name}
\n📀 Total Size: "
512 | for media_file in media_list:
513 | _file_size += os.stat(media_file.media).st_size
514 | msg_txt += f"{humanize.naturalsize(_file_size)}
\n📚 Total Files: {len(media_list)}
"
515 | if user_id is not None:
516 | stat_msg_id = await send_msg_async(msg_txt, user_id)
517 | upload_progress = UpDownProgressUpdate(name=file_name, up_start_time=time.time(), current=0, total=_file_size, user_id=user_id, action="🌀 Leeching")
518 | async with LOCK:
519 | UPDOWN_LIST.append(upload_progress)
520 | user_id = "self" if user_id is None else user_id
521 | logger.info(f"Tg Upload started: {file_name} [Total items: {len(media_list)}]") if media_list else logger.info(f"Tg Upload started: {file_name}")
522 | LOG_CHANNEL = int(os.getenv('LOG_CHANNEL', '0'))
523 | BOT_PM = os.getenv('BOT_PM', 'True').lower() == "true"
524 | try:
525 | if media_list is not None:
526 | if BOT_PM:
527 | _msg = await pyro_app.send_media_group(chat_id=user_id, media=media_list, disable_notification=True, protect_content=False)
528 | if LOG_CHANNEL:
529 | await pyro_app.copy_media_group(chat_id=LOG_CHANNEL, from_chat_id=_msg[0].chat.id, message_id=_msg[0].id)
530 | elif LOG_CHANNEL:
531 | await pyro_app.send_media_group(chat_id=LOG_CHANNEL, media=media_list, disable_notification=True, protect_content=False)
532 | else:
533 | logger.warning("Both LOG_CHANNEL and BOT_PM are not set")
534 | elif is_audio:
535 | if BOT_PM:
536 | _msg = await pyro_app.send_audio(chat_id=user_id, audio=file_path, caption=f"{file_name}", parse_mode=enums.ParseMode.HTML,
537 | file_name=file_name, disable_notification=True, protect_content=False, progress=upload_progress.set_processed_bytes,
538 | duration=get_duration(file_path))
539 | if LOG_CHANNEL:
540 | await pyro_app.copy_message(chat_id=LOG_CHANNEL, from_chat_id=_msg.chat.id, message_id=_msg.id)
541 | elif LOG_CHANNEL:
542 | await pyro_app.send_audio(chat_id=LOG_CHANNEL, audio=file_path, caption=f"{file_name}", parse_mode=enums.ParseMode.HTML,
543 | file_name=file_name, disable_notification=True, protect_content=False, progress=upload_progress.set_processed_bytes,
544 | duration=get_duration(file_path))
545 | else:
546 | logger.warning("Both LOG_CHANNEL and BOT_PM are not set")
547 | elif is_video:
548 | if BOT_PM:
549 | _msg = await pyro_app.send_video(chat_id=user_id, video=file_path, caption=f"{file_name}", parse_mode=enums.ParseMode.HTML,
550 | file_name=file_name, thumb=thumb, supports_streaming=True, disable_notification=True, protect_content=False,
551 | progress=upload_progress.set_processed_bytes, duration=get_duration(file_path))
552 | if LOG_CHANNEL:
553 | await pyro_app.copy_message(chat_id=LOG_CHANNEL, from_chat_id=_msg.chat.id, message_id=_msg.id)
554 | elif LOG_CHANNEL:
555 | await pyro_app.send_video(chat_id=LOG_CHANNEL, video=file_path, caption=f"{file_name}", parse_mode=enums.ParseMode.HTML,
556 | file_name=file_name, thumb=thumb, supports_streaming=True, disable_notification=True, protect_content=False,
557 | progress=upload_progress.set_processed_bytes, duration=get_duration(file_path))
558 | else:
559 | logger.warning("Both LOG_CHANNEL and BOT_PM are not set")
560 | elif is_photo:
561 | if BOT_PM:
562 | _msg = await pyro_app.send_photo(chat_id=user_id, photo=file_path, caption=f"{file_name}", parse_mode=enums.ParseMode.HTML,
563 | disable_notification=True, protect_content=False, progress=upload_progress.set_processed_bytes)
564 | if LOG_CHANNEL:
565 | await pyro_app.copy_message(chat_id=LOG_CHANNEL, from_chat_id=_msg.chat.id, message_id=_msg.id)
566 | elif LOG_CHANNEL:
567 | await pyro_app.send_photo(chat_id=LOG_CHANNEL, photo=file_path, caption=f"{file_name}", parse_mode=enums.ParseMode.HTML,
568 | disable_notification=True, protect_content=False, progress=upload_progress.set_processed_bytes)
569 | else:
570 | logger.warning("Both LOG_CHANNEL and BOT_PM are not set")
571 | else:
572 | if BOT_PM:
573 | _msg = await pyro_app.send_document(chat_id=user_id, document=file_path, caption=f"{file_name}", parse_mode=enums.ParseMode.HTML,
574 | file_name=file_name, disable_notification=True, protect_content=False, progress=upload_progress.set_processed_bytes)
575 | if LOG_CHANNEL:
576 | await pyro_app.copy_message(chat_id=LOG_CHANNEL, from_chat_id=_msg.chat.id, message_id=_msg.id)
577 | elif LOG_CHANNEL:
578 | await pyro_app.send_document(chat_id=LOG_CHANNEL, document=file_path, caption=f"{file_name}", parse_mode=enums.ParseMode.HTML,
579 | file_name=file_name, disable_notification=True, protect_content=False, progress=upload_progress.set_processed_bytes)
580 | else:
581 | logger.warning("Both LOG_CHANNEL and BOT_PM are not set")
582 | logger.info(f"Tg Upload completed: {file_name} [Total items: {len(media_list)}]") if media_list else logger.info(f"Tg Upload completed: {file_name}")
583 | except (ValueError, FileNotFoundError, IndexError) as err:
584 | logger.error(f"Tg Upload failed: {file_name} [{str(err)}]")
585 | except errors.FloodWait as err:
586 | logger.warning(f"Error: {err.ID}")
587 | await asyncio.sleep(err.value)
588 | except errors.RPCError as err:
589 | logger.error(f"Tg Upload failed: {file_name} [{err.ID}]")
590 | raise err
591 | finally:
592 | upload_progress.status = "COMPLETE"
593 | await delete_msg(msg_id=stat_msg_id, chat_id=user_id)
594 |
595 | async def get_file_thumb(file_path: str) -> Optional[str]:
596 | name, ext = os.path.splitext(os.path.basename(file_path))
597 | out_file = f"/tmp/{name}_thumb.jpg"
598 | duration = get_duration(file_path)
599 | try:
600 | ffmpeg_proc = ffmpeg.input(file_path, ss=duration // 2 if duration >= 4 else 30).output(out_file, vframes=1).run_async(quiet=True)
601 | ffmpeg_out, _ = ffmpeg_proc.communicate()
602 | if os.path.exists(out_file):
603 | with Image.open(out_file) as img:
604 | img.resize(size=(320, 180)).convert(mode="RGB").save(out_file, "JPEG")
605 | return out_file
606 | except (ffmpeg.Error, UnidentifiedImageError, ValueError, TypeError, OSError) as err:
607 | logger.warning(f"Failed to get thumb for {name}{ext}[{str(err)}]")
608 |
609 | async def get_file_type(file_path: str) -> Tuple[bool, bool, bool]:
610 | is_audio = is_video = is_photo = False
611 | if os.path.exists(file_path):
612 | mime_type = magic.from_file(file_path, mime=True)
613 | if mime_type.startswith('audio'):
614 | is_audio = True
615 | elif mime_type.startswith('video'):
616 | is_video = True
617 | elif mime_type.startswith('image'):
618 | is_photo = True
619 | return is_audio, is_video, is_photo
620 |
621 | async def split_file(file_path: str, file_size: int, is_video: bool = True, is_audio: bool = False, task_id: str = None) -> List[str]:
622 | _name, _ext = os.path.splitext(os.path.basename(file_path))
623 | parts = math.ceil(file_size / (FOUR_GB if pyro_app.me.is_premium else TWO_GB))  # parenthesized: the conditional picks the size cap, not the whole quotient
624 | split_size = math.ceil(file_size/parts)
625 | out_dir = f"{DOWNLOAD_PATH}/splits/{_name}"
626 | os.makedirs(out_dir, exist_ok=True)
627 | split_files_list: List[str] = []
628 | user_id = TASK_CHAT_DICT[task_id] if task_id is not None and task_id in TASK_CHAT_DICT else None
629 | err_msg = f"❗Upload failed: {_name}{_ext}
due to error while splitting\n"
630 | _task_done = False
631 | if is_video or is_audio:
632 | i = 1
633 | start_time = 0
634 | orig_duration = get_duration(file_path)
635 | try:
636 | while i <= parts or start_time < orig_duration - 4:
637 | parted_name = f"{out_dir}/{_name}.part{str(i).zfill(3)}{_ext}"
638 | _pname = os.path.basename(parted_name)
639 | ff_proc = ffmpeg.input(file_path, ss=start_time).output(parted_name, map_chapters=-1, fs=split_size, c="copy").run_async(quiet=True)
640 | _, _ = ff_proc.communicate()
641 | if os.path.exists(parted_name):
642 | processed_dur = get_duration(parted_name) - 3
643 | if processed_dur <= 0:
644 | raise ValueError(f"Split error: {_pname}[Duration is 0]")
645 | else:
646 | start_time += processed_dur
647 | split_files_list.append(parted_name)
648 | logger.info(f"Split created: {_pname}")
649 | else:
650 | raise FileNotFoundError(f"Split error: {_pname}[file not generated]")
651 | i += 1
652 | _task_done = True
653 | except (ffmpeg.Error, FileNotFoundError, ValueError) as err:
654 | logger.error(f"Split error: {_name}{_ext}[{str(err)}]")
655 | await send_msg_async(f"{err_msg}Error: {str(err)}
", user_id)
656 | else:
657 | logger.info(f"Starting to split: {_name}{_ext}")
658 | _split_files: List[str] = []
659 | try:
660 | _split = Split(file_path, out_dir)
661 | _split.manfilename = "bot_manfile"
662 | _split.bysize(size=split_size, newline=True, includeheader=False)
663 | _split_files.extend([f"{out_dir}/{_file}" for _file in os.listdir(out_dir) if "bot_manfile" != _file])
664 | for _file in natsorted(_split_files):
665 | _new_file = f"{out_dir}/{_name}{_ext}.{str(_file.split('_')[-1].split('.')[0]).zfill(3)}"
666 | shutil.move(src=_file, dst=_new_file)
667 | split_files_list.append(_new_file)
668 | _task_done = True
669 | except Exception as err:
670 | logger.error(f"Split error: {_name}{_ext}[{str(err)}]")
671 | await send_msg_async(f"{err_msg}Error: {str(err).replace('>', '').replace('>', '')}
", user_id)
672 | if not _task_done:
673 | shutil.rmtree(out_dir, ignore_errors=True)
674 | split_files_list.clear()
675 | return split_files_list
676 |
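# Sizing sketch for split_file(), assuming TWO_GB and FOUR_GB hold Telegram's
# 2 GiB / 4 GiB upload caps (they are defined elsewhere in this file): a 5 GiB file
# on a non-premium session yields parts = ceil(5 / 2) = 3 and
# split_size = ceil(5 GiB / 3) ≈ 1.67 GiB, keeping every part under the applicable cap.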
677 | async def check_file_type_and_upload(task_id: str, file_path: str) -> None:
678 | is_audio, is_video, is_photo = await get_file_type(file_path)
679 | file_size = os.stat(file_path).st_size
680 | if (file_size > TWO_GB and not pyro_app.me.is_premium) or (file_size > FOUR_GB and pyro_app.me.is_premium):
681 | for _file in await split_file(file_path, file_size, is_video, is_audio, task_id):
682 | await upload_to_tg(task_id, _file, is_audio=is_audio, is_video=is_video, is_photo=is_photo,
683 | thumb=await get_file_thumb(_file) if is_video else None)
684 | else:
685 | await upload_to_tg(task_id, file_path, is_audio=is_audio, is_video=is_video, is_photo=is_photo,
686 | thumb=await get_file_thumb(file_path) if is_video else None)
687 |
688 | async def trigger_tg_upload(down_path: str, task_id: str, in_group: bool = False) -> None:
689 | file_name = os.path.basename(down_path)
690 | logger.info(f"Tg upload preparation: {file_name}")
691 | try:
692 | if os.path.isdir(down_path):
693 | files_list = []
694 | for address, dirs, files in sorted(os.walk(down_path)):
695 | files_list.extend([os.path.join(address, file) for file in files if os.stat(os.path.join(address, file)).st_size > 0])
696 | files_list = natsorted(files_list)
697 | if in_group:
698 | if files_list and len(files_list) < 2:
699 | await check_file_type_and_upload(task_id, files_list[0])
700 | else:
701 | files_chunk = [files_list[i:i+10] for i in range(0, len(files_list), 10)]
702 | for chunk in files_chunk:
703 | media_list = []
704 | for file in chunk:
705 | is_audio, is_video, is_photo = await get_file_type(file)
706 | name = os.path.basename(file)
707 | if is_audio:
708 | media_list.append(types.InputMediaAudio(media=file, caption=f"{name}", parse_mode=enums.ParseMode.HTML,
709 | duration=get_duration(file)))
710 | elif is_video:
711 | media_list.append(types.InputMediaVideo(media=file, caption=f"{name}", thumb=await get_file_thumb(file),
712 | parse_mode=enums.ParseMode.HTML, supports_streaming=True, duration=get_duration(file)))
713 | elif is_photo:
714 | media_list.append(types.InputMediaPhoto(media=file, caption=f"{name}", parse_mode=enums.ParseMode.HTML))
715 | else:
716 | media_list.append(types.InputMediaDocument(media=file, caption=f"{name}", parse_mode=enums.ParseMode.HTML))
717 | await upload_to_tg(task_id, file_name, media_list)
718 | else:
719 | for file in files_list:
720 | await check_file_type_and_upload(task_id, file)
721 | else:
722 | if os.stat(down_path).st_size == 0:
723 | logger.warning(f"Skip upload: {file_name}, reason: empty file")
724 | else:
725 | await check_file_type_and_upload(task_id, down_path)
726 | except (RetryError, FileNotFoundError) as err:
727 | logger.error(f"Tg Upload failed: {file_name}, attempts: {err.last_attempt.attempt_number if isinstance(err, RetryError) else f'[{str(err)}]'}")
728 | send_status_update(f"❗Upload failed: {file_name}
\nCheck log for more details",
729 | TASK_CHAT_DICT[task_id] if task_id in TASK_CHAT_DICT else None)
730 | else:
731 | if AUTO_DEL_TASK is True:
732 | logger.info(f"Cleaning up: {file_name}")
733 | shutil.rmtree(path=down_path, ignore_errors=True)
734 | shutil.rmtree(path=f"{DOWNLOAD_PATH}/splits/{os.path.splitext(file_name)[0]}", ignore_errors=True)
735 | clear_task_files(task_id)
736 | finally:
737 | await delete_download_status()
738 |
739 | def get_user(update: Update) -> Union[str, int]:
740 | return update.message.from_user.name if update.message.from_user.name is not None else update.message.chat_id
741 |
742 | async def get_download_info(down: aria2p.Download, auto_ref: bool = True) -> str:
743 | info = f"╭🗂 Name: {down.name}
\n├🚦 Status: {down.status}
| 📀 Size: {down.total_length_string()}
\n"\
744 | f"├{get_progress_bar(down.completed_length, down.total_length)}
\n├📥 Downloaded: {down.completed_length_string()} ({down.progress_string()})
\n"\
745 | f"├⚡ Speed: {down.download_speed_string()}
| 🧩 Peers: {down.connections}
\n├⏳ ETA: {down.eta_string()}
"\
746 | f"| 📚 Total Files: {len(down.files)}
"
747 | if down.bittorrent is not None:
748 | info += f"\n{'╰' if not auto_ref else '├'}🥑 Seeders: {down.num_seeders}
| ⚙️ Engine: Aria2
\n"
749 | else:
750 | info += f"\n{'╰' if not auto_ref else '├'}⚙️ Engine: Aria2
\n"
751 | return info
752 |
753 | async def get_qbit_info(hash: str, client: qbittorrentapi.Client = None, auto_ref: bool = True) -> str:
754 | info = ''
755 | for torrent in client.torrents_info(torrent_hashes=[hash]):
756 | info += f"╭🗂 Name: {torrent.name}
\n├🚦 Status: {torrent.state_enum.value}
| 📀 Size: {humanize.naturalsize(torrent.total_size)}
\n├{get_progress_bar(torrent.downloaded, torrent.total_size)}
\n"\
757 | f"├📥 Downloaded: {humanize.naturalsize(torrent.downloaded)} ({round(number=torrent.progress * 100, ndigits=2)}%)
\n├📦 Remaining: {humanize.naturalsize(torrent.amount_left)}
"\
758 | f"| 🧩 Peers: {torrent.num_leechs}
\n├⚡ Speed: {humanize.naturalsize(torrent.dlspeed)}/s
| 🥑 Seeders: {torrent.num_seeds}
\n"\
759 | f"├⏳ ETA: {humanize.naturaldelta(torrent.eta)}
"
760 | try:
761 | info += f"| 📚 Total Files: {len(client.torrents_files(torrent_hash=hash))}
\n{'╰' if not auto_ref else '├'}⚙️ Engine: Qbittorent
\n"
762 | except qbittorrentapi.exceptions.NotFound404Error:
763 | info += f"| ⚙️ Engine: Qbittorent
\n"
764 | return info
765 |
766 | async def get_downloads_count() -> int:
767 | count = 0
768 | try:
769 | count += len(aria2c.get_downloads())
770 | if qb_client := get_qbit_client():
771 | count += len(qb_client.torrents_info())
772 | qb_client.auth_log_out()
773 | except Exception as err:
774 | logger.error(f"Failed to get total count of download tasks [{err.__class__.__name__}]")
775 | return count
776 |
777 | async def get_ngrok_btn(file_name: str) -> Optional[InlineKeyboardButton]:
778 | try:
779 | if tunnels := ngrok.get_tunnels():
780 | return InlineKeyboardButton(text="🌐 Ngrok URL", url=f"{tunnels[0].public_url}/{quote(file_name, safe='')}")
781 | except (IndexError, ngrok.PyngrokError):
782 | logger.error(f"Failed to get ngrok tunnel")
783 | return None
784 |
785 | async def get_ngrok_file_url(file_name: str) -> str:
786 | _url = ''
787 | try:
788 | if tunnels := await asyncio.to_thread(ngrok.get_tunnels):
789 | _url += f"\n🌎 Ngrok Link: Click here"
790 | except ngrok.PyngrokError:
791 | logger.debug(f"Failed to get ngrok url for: {file_name}")
792 | return _url
793 |
794 | async def get_buttons(prog: str, dl_info: str) -> Dict[str, InlineKeyboardButton]:
795 | return {
796 | "refresh": InlineKeyboardButton(text="♻ Refresh", callback_data=f"{prog}-refresh#{dl_info}"),
797 | "delete": InlineKeyboardButton(text="🚫 Delete", callback_data=f"{prog}-remove#{dl_info}"),
798 | "retry": InlineKeyboardButton(text="🚀 Retry", callback_data=f"{prog}-retry#{dl_info}"),
799 | "resume": InlineKeyboardButton(text="▶ Resume", callback_data=f"{prog}-resume#{dl_info}"),
800 | "pause": InlineKeyboardButton(text="⏸ Pause", callback_data=f"{prog}-pause#{dl_info}"),
801 | "upload": InlineKeyboardButton(text="☁️ Upload", callback_data=f"{prog}-upload#{dl_info}"),
802 | "leech": InlineKeyboardButton(text="🌀 Leech", callback_data=f"{prog}-leech#{dl_info}"),
803 | "extract": InlineKeyboardButton(text="🗃️ Extract", callback_data=f"{prog}-extract#{dl_info}"),
804 | "show_all": InlineKeyboardButton(text=f"🔆 Show All ({await get_downloads_count()})", callback_data=f"{prog}-lists")
805 | }
806 |
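# Callback data convention used by the buttons above: "<engine>-<action>#<id>",
# e.g. "aria-pause#<gid>" or "qbit-remove#<hash>" ("<engine>-lists" carries no id);
# bot_callback_handler() splits once on "#" to recover the action and the task id.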
807 | async def get_comp_action_btns(file_name: str, buttons: Dict[str, InlineKeyboardButton]):
808 | if ngrok_btn := await get_ngrok_btn(file_name):
809 | if is_archive_file(file_name):
810 | action_btn = [[ngrok_btn, buttons["extract"]], [buttons["upload"], buttons["leech"]], [buttons["show_all"], buttons["delete"]]]
811 | else:
812 | action_btn = [[ngrok_btn, buttons["upload"]], [buttons["leech"], buttons["delete"]], [buttons["show_all"]]]
813 | elif is_archive_file(file_name):
814 | action_btn = [[buttons["extract"], buttons["upload"]], [buttons["leech"], buttons["delete"]], [buttons["show_all"]]]
815 | else:
816 | action_btn = [[buttons["upload"], buttons["leech"]], [buttons["show_all"], buttons["delete"]]]
817 | return action_btn
818 |
819 | async def get_aria_keyboard(down: aria2p.Download) -> InlineKeyboardMarkup:
820 | buttons = await get_buttons("aria", down.gid)
821 | action_btn = [[buttons["show_all"], buttons["delete"]]]
822 | if "error" == down.status:
823 | action_btn.insert(0, [buttons["refresh"], buttons["retry"]])
824 | elif "paused" == down.status:
825 | action_btn.insert(0, [buttons["refresh"], buttons["resume"]])
826 | elif "active" == down.status:
827 | action_btn.insert(0, [buttons["refresh"], buttons["pause"]])
828 | elif "complete" == down.status and down.is_metadata is False and not down.followed_by_ids:
829 | action_btn = await get_comp_action_btns(down.name, buttons)
830 | else:
831 | action_btn = [[buttons["refresh"], buttons["delete"]], [buttons["show_all"]]]
832 | return InlineKeyboardMarkup(action_btn)
833 |
834 | async def get_qbit_keyboard(qbit: qbittorrentapi.TorrentDictionary = None) -> InlineKeyboardMarkup:
835 | buttons = await get_buttons("qbit", qbit.hash)
836 | file_name = qbit.files[0].get('name').split("/")[0] if qbit.files else qbit.get('name')
837 | action_btn = [[buttons["show_all"], buttons["delete"]]]
838 | if qbit.state_enum.is_errored:
839 | action_btn.insert(0, [buttons["refresh"], buttons["retry"]])
840 | elif "pausedDL" == qbit.state_enum.value:
841 | action_btn.insert(0, [buttons["refresh"], buttons["resume"]])
842 | elif qbit.state_enum.is_downloading:
843 | action_btn.insert(0, [buttons["refresh"], buttons["pause"]])
844 | elif qbit.state_enum.is_complete or "pausedUP" == qbit.state_enum.value:
845 | action_btn = await get_comp_action_btns(file_name, buttons)
846 | else:
847 | action_btn = [[buttons["refresh"], buttons["delete"]], [buttons["show_all"]]]
848 | return InlineKeyboardMarkup(action_btn)
849 |
850 | async def reply_message(msg: str, update: Update,
851 | context: ContextTypes.DEFAULT_TYPE,
852 | keyboard: InlineKeyboardMarkup = None, reply: bool = True) -> Optional[Message]:
853 | try:
854 | return await context.bot.send_message(
855 | text=msg,
856 | chat_id=update.message.chat_id,
857 | reply_to_message_id=update.message.id if reply is True else None,
858 | reply_markup=keyboard,
859 | parse_mode=constants.ParseMode.HTML,
860 | )
861 | except AttributeError:
862 | logger.error("Failed to send reply")
863 | except error.TelegramError as err:
864 | logger.error(f"Failed to reply for: {update.message.text} to: {get_user(update)} error: {str(err)}")
865 |
866 | async def edit_message(msg: str, callback: CallbackQuery, keyboard: InlineKeyboardMarkup=None) -> None:
867 | try:
868 | await callback.edit_message_text(
869 | text=msg,
870 | parse_mode=constants.ParseMode.HTML,
871 | reply_markup=keyboard
872 | )
873 | except error.TelegramError as err:
874 | logger.debug(f"Failed to edit message for: {callback.data} error: {str(err)}")
875 |
876 | async def get_total_downloads(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
877 | file_btns = []
878 | files_ = []
879 | msg = ''
880 | keyboard = None
881 | files_.extend([down for down in aria2c.get_downloads()])
882 | if qb_client := get_qbit_client():
883 | files_.extend([torrent for torrent in qb_client.torrents_info()])
884 | qb_client.auth_log_out()
885 | for file_ in natsorted(seq=files_, key=lambda x: x.name if isinstance(x, aria2p.Download) else x.get('name')):
886 | if isinstance(file_, qbittorrentapi.TorrentDictionary):
887 | text = f"[{file_.state_enum.value}] {file_.get('name')}"
888 | cb_data = f"qbit-file#{file_.get('hash')}"
889 | else:
890 | text = f"[{file_.status}] {file_.name}"
891 | cb_data = f"aria-file#{file_.gid}"
892 | file_btns.append([InlineKeyboardButton(text=text, callback_data=cb_data)])
893 | if file_btns:
894 | msg += f"🗂️ Downloads ({len(file_btns)})"
895 | keyboard = InlineKeyboardMarkup(file_btns)
896 | else:
897 | msg += "🔅 No downloads found !"
898 | if update.callback_query is not None:
899 | await edit_message(msg, update.callback_query, keyboard)
900 | else:
901 | await reply_message(msg, update, context, keyboard, False)
902 |
903 | def get_sys_info() -> str:
904 | avg_cpu_temp = ""
905 | if temp := psutil.sensors_temperatures():
906 | if "coretemp" in temp:
907 | key = "coretemp"
908 | elif "cpu_thermal" in temp:
909 | key = "cpu_thermal"
910 | else:
911 | key = None
912 | if key:
913 | cpu_temp = 0
914 | for t in temp[key]:
915 | cpu_temp += t.current
916 | avg_cpu_temp += f"{round(number=cpu_temp/len(temp[key]), ndigits=2)}°C"
917 | else:
918 | avg_cpu_temp += "NA"
919 | details = f"CPU Usage : {psutil.cpu_percent(interval=None)}%\n" \
920 | f"CPU Freq : {math.ceil(psutil.cpu_freq(percpu=False).current)} MHz\n" \
921 | f"CPU Cores : {psutil.cpu_count(logical=True)}\n" \
922 | f"CPU Temp : {avg_cpu_temp}\n" \
923 | f"Total RAM : {humanize.naturalsize(psutil.virtual_memory().total)}\n" \
924 | f"Used RAM : {humanize.naturalsize(psutil.virtual_memory().used)}\n" \
925 | f"Free RAM : {humanize.naturalsize(psutil.virtual_memory().available)}\n" \
926 | f"Total Disk: {humanize.naturalsize(psutil.disk_usage('/').total)}\n" \
927 | f"Used Disk : {humanize.naturalsize(psutil.disk_usage('/').used)}\n" \
928 | f"Free Disk : {humanize.naturalsize(psutil.disk_usage('/').free)}\n" \
929 | f"Swap Mem : {humanize.naturalsize(psutil.swap_memory().used)} of {humanize.naturalsize(psutil.swap_memory().total)}\n" \
930 | f"Threads : {threading.active_count()}\n" \
931 | f"Network IO: 🔻 {humanize.naturalsize(psutil.net_io_counters().bytes_recv)} 🔺 {humanize.naturalsize(psutil.net_io_counters().bytes_sent)}"
932 | try:
933 | details += f"\nBot Uptime: {humanize.naturaltime(time.time() - BOT_START_TIME)}"
934 | details += f"\nAsync Tasks: {len(asyncio.all_tasks(asyncio.get_running_loop()))}"
935 | details += f"\nNgrok URL: {ngrok.get_tunnels()[0].public_url}"
936 | except (OverflowError, IndexError, ngrok.PyngrokError, RuntimeError):
937 | pass
938 | return details
939 |
940 | async def get_event_loop(name: str, file_id: str, prog: str, callback: CallbackQuery, leech: bool = False) -> Optional[asyncio.events.AbstractEventLoop]:
941 | try:
942 | return asyncio.get_running_loop()
943 | except RuntimeError:
944 | logger.critical(f"[RuntimeError] Failed to obtain event loop, unable to process: {name}")
945 | msg = f"⁉️Failed to initiate {'leech' if leech else 'upload'} process for: {name}
\n⚠️ Please tap on the back button and retry"
946 | return await edit_message(msg, callback, InlineKeyboardMarkup([[InlineKeyboardButton(text="⬅️ Back", callback_data=f"{prog}-file#{file_id}")]]))
947 |
948 | async def trigger_upload(name: str, prog: str, file_id: str, update: Update, origin: bool = True, leech: bool = False) -> None:
949 | loop = await get_event_loop(name, file_id, prog, update.callback_query, leech)
950 | if not loop:
951 | return
952 | if not origin and is_archive_file(name):
953 | msg = f"🗂️ File: {name}
is an archive so do you want to upload as it is or upload the extracted contents❓"
954 | await edit_message(msg, update.callback_query, InlineKeyboardMarkup(
955 | [[InlineKeyboardButton(text="📦 Original", callback_data=f"{prog}-{'leech' if leech else 'upload'}-orig#{file_id}"),
956 | InlineKeyboardButton(text="🗃 Extracted", callback_data=f"{prog}-{'leext' if leech else 'upext'}#{file_id}")],
957 | [InlineKeyboardButton(text="⬅️ Back", callback_data=f"{prog}-file#{file_id}")]]
958 | ))
959 | elif leech:
960 | msg = f"📤 Leeching started for: {name}
\n⚠️ Do not press the leech button again unless it has failed, you'll receive status updates on the same"
961 | asyncio.run_coroutine_threadsafe(trigger_tg_upload(down_path=f"{DOWNLOAD_PATH}/{name}", task_id=file_id), loop)
962 | logger.info(f"Leech thread started for: {name}")
963 | await edit_message(msg, update.callback_query, InlineKeyboardMarkup([[InlineKeyboardButton(text="⬅️ Back", callback_data=f"{prog}-file#{file_id}")]]))
964 | elif await count_uploaded_files(file_name=name) > 0:
965 | msg = f"🗂️ File: {name}
is already uploaded and can be found in {GDRIVE_FOLDER_BASE_URL.format(GDRIVE_FOLDER_ID)}"
966 | await edit_message(msg, update.callback_query, InlineKeyboardMarkup([[InlineKeyboardButton(text="⬅️ Back", callback_data=f"{prog}-file#{file_id}")]]))
967 | else:
968 | msg = f"🌈 Upload started for: {name}
\n⚠️ Do not press the upload button again unless the upload has failed, you'll receive status updates on the same"
969 | asyncio.run_coroutine_threadsafe(upload_to_gdrive(name=name, chat_id=str(update.callback_query.message.chat_id)), loop)
970 | logger.info(f"Upload thread started for: {name}")
971 | await edit_message(msg, update.callback_query, InlineKeyboardMarkup([[InlineKeyboardButton(text="⬅️ Back", callback_data=f"{prog}-file#{file_id}")]]))
972 |
973 | def is_file_extracted(file_name: str) -> bool:
974 | folder_name = os.path.splitext(file_name)[0]
975 | folder_path = f"{DOWNLOAD_PATH}/{folder_name}"
976 | try:
977 | folder_size = sum(os.path.getsize(os.path.join(_p, _f)) for _p, _, _fs in os.walk(folder_path) for _f in _fs) if os.path.exists(folder_path) else 0  # a directory's st_size ignores its contents, so sum the extracted files
978 | file_size = os.path.getsize(f"{DOWNLOAD_PATH}/{file_name}")
979 | return folder_size >= file_size
980 | except OSError:
981 | return False
982 |
983 | async def start_extraction(name: str, prog: str, file_id: str, update: Update, upload: bool = False, leech: bool = False) -> None:
984 | folder_name = os.path.splitext(name)[0]
985 | if is_file_extracted(name):
986 | msg = f"🗂️ File: {name}
is already extracted{await get_ngrok_file_url(folder_name)}"
987 | await edit_message(msg, update.callback_query, InlineKeyboardMarkup([[InlineKeyboardButton(text="⬅️ Back", callback_data=f"{prog}-file#{file_id}")]]))
988 | else:
989 | msg = f"🗃️ Extraction started for: {name}
\n⚠️ Do not press the extract button again unless it has failed, you'll receive status updates on the same."
990 | if upload is True:
991 | msg += f" Upload process will be started once it completes."
992 | await edit_message(msg, update.callback_query, InlineKeyboardMarkup([[InlineKeyboardButton(text="⬅️ Back", callback_data=f"{prog}-file#{file_id}")]]))
993 | os.makedirs(name=f"{DOWNLOAD_PATH}/{folder_name}", exist_ok=True)
994 | try:
995 | await asyncio.to_thread(patoolib.extract_archive, archive=f"{DOWNLOAD_PATH}/{name}", outdir=f"{DOWNLOAD_PATH}/{folder_name}", interactive=False)
996 | msg = f"🗂️ File: {name}
extracted ✔️{await get_ngrok_file_url(folder_name)}"
997 | await send_msg_async(msg, update.callback_query.message.chat_id)
998 | if loop := await get_event_loop(name, file_id, prog, update.callback_query, leech):
999 | if upload:
1000 | asyncio.run_coroutine_threadsafe(upload_to_gdrive(name=folder_name, chat_id=str(update.callback_query.message.chat_id)), loop)
1001 | elif leech:
1002 | asyncio.run_coroutine_threadsafe(trigger_tg_upload(down_path=f"{DOWNLOAD_PATH}/{folder_name}", task_id=file_id), loop)
1003 | except patoolib.util.PatoolError as err:
1004 | shutil.rmtree(path=f"{DOWNLOAD_PATH}/{folder_name}", ignore_errors=True)
1005 | await send_msg_async(f"⁉️ Failed to extract: {name}
\n⚠️ Error: {str(err).replace('>', '').replace('<', '')}
\n"
1006 | f"Check /{LOG_CMD} for more details.", update.callback_query.message.chat_id)
1007 |
1008 | async def upext_handler(file_name: str, prog: str, file_id: str, update: Update, leech: bool = False) -> None:
1009 | if is_file_extracted(file_name):
1010 | await trigger_upload(os.path.splitext(file_name)[0], prog, file_id, update, leech=leech)
1011 | else:
1012 | await start_extraction(file_name, prog, file_id, update, upload=False if leech else True, leech=True if leech else False)
1013 |
1014 | async def bot_callback_handler(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
1015 | try:
1016 | await update.callback_query.answer()
1017 | callback_data = update.callback_query.data.split("#", maxsplit=1)
1018 | action = callback_data[0].strip()
1019 | if "aria" in action:
1020 | aria_obj = aria2c.get_download(callback_data[1].strip()) if "lists" not in action else None
1021 | if action in ["aria-refresh", "aria-file", "aria-pause", "aria-resume"]:
1022 | if "pause" in action:
1023 | aria2c.pause(downloads=[aria_obj], force=True)
1024 | elif "resume" in action:
1025 | aria2c.resume(downloads=[aria_obj])
1026 | await edit_message(await get_download_info(aria_obj.live, False), update.callback_query, await get_aria_keyboard(aria_obj))
1027 | elif action in ["aria-retry", "aria-remove", "aria-lists"]:
1028 | if "retry" in action:
1029 | aria2c.retry_downloads(downloads=[aria_obj], clean=False)
1030 | for item in aria2c.get_downloads():
1031 | if item.gid not in TASK_CHAT_DICT:
1032 | TASK_CHAT_DICT[item.gid] = update.callback_query.message.chat_id
1033 | TASK_ID_DICT[item.gid] = ''.join(choice(ascii_letters) for i in range(8))
1034 | elif "remove" in action:
1035 | remove_extracted_dir(aria_obj.name)
1036 | aria2c.remove(downloads=[aria_obj], force=True, files=True, clean=True)
1037 | await get_total_downloads(update, context)
1038 | elif "upload" in action:
1039 | await trigger_upload(aria_obj.name, "aria", callback_data[1], update, True if "orig" in action else False)
1040 | elif "extract" in action:
1041 | await start_extraction(aria_obj.name, "aria", callback_data[1], update)
1042 | elif "upext" in action:
1043 | await upext_handler(aria_obj.name, "aria", callback_data[1], update)
1044 | elif "leech" in action:
1045 | await trigger_upload(name=aria_obj.name, prog="aria", file_id=aria_obj.gid, update=update, leech=True, origin=True if "orig" in action else False)
1046 | elif "leext" in action:
1047 | await upext_handler(file_name=aria_obj.name, prog="aria", file_id=aria_obj.gid, update=update, leech=True)
1048 | elif "qbit" in action:
1049 | torrent_hash = callback_data[1].strip() if len(callback_data) > 1 else None
1050 | if qb_client := get_qbit_client():
1051 | if torrent_hash is None:
1052 | name = None
1053 | elif qb_client.torrents_files(torrent_hash):
1054 | name = qb_client.torrents_files(torrent_hash)[0].get('name').split("/")[0]
1055 | else:
1056 | name = qb_client.torrents_info(torrent_hashes=[torrent_hash])[0].get('name')
1057 | if action in ["qbit-refresh", "qbit-file", "qbit-pause", "qbit-resume"]:
1058 | if "pause" in action:
1059 | qb_client.torrents_pause(torrent_hashes=[torrent_hash])
1060 | elif "resume" in action:
1061 | qb_client.torrents_resume(torrent_hashes=[torrent_hash])
1062 | if msg := await get_qbit_info(torrent_hash, qb_client, False):
1063 | await edit_message(msg, update.callback_query, await get_qbit_keyboard(qb_client.torrents_info(torrent_hashes=[torrent_hash])[0]))
1064 | else:
1065 | await edit_message("Torrent not found ❗", update.callback_query)
1066 | elif action in ["qbit-retry", "qbit-remove", "qbit-lists"]:
1067 | if "retry" in action:
1068 | qb_client.torrents_set_force_start(enable=True, torrent_hashes=[torrent_hash])
1069 | elif "remove" in action:
1070 | remove_extracted_dir(name)
1071 | qb_client.torrents_delete(delete_files=True, torrent_hashes=[torrent_hash])
1072 | await get_total_downloads(update, context)
1073 | elif "upload" in action:
1074 | await trigger_upload(name, "qbit", torrent_hash, update, True if "orig" in action else False)
1075 | elif "extract" in action:
1076 | await start_extraction(name, "qbit", torrent_hash, update)
1077 | elif "upext" in action:
1078 | await upext_handler(name, "qbit", torrent_hash, update)
1079 | elif "leech" in action:
1080 | await trigger_upload(name=name, prog="qbit", file_id=torrent_hash, update=update, leech=True, origin=True if "orig" in action else False)
1081 | elif "leext" in action:
1082 | await upext_handler(file_name=name, prog="qbit", file_id=torrent_hash, update=update, leech=True)
1083 | qb_client.auth_log_out()
1084 | else:
1085 | if "refresh" == callback_data[1]:
1086 | await edit_message(get_sys_info(), update.callback_query, InlineKeyboardMarkup([[InlineKeyboardButton(text="♻️ Refresh", callback_data="sys#refresh"), InlineKeyboardButton(text="🚫 Close", callback_data="sys#close")]]))
1087 | if "close" == callback_data[1]:
1088 | try:
1089 | await update.callback_query.delete_message()
1090 | except error.TelegramError:
1091 | await edit_message("Sys info data cleared", update.callback_query)
1092 | except aria2p.ClientException:
1093 | await edit_message(f"⁉️ Unable to find GID: {update.callback_query.data}
", update.callback_query)
1094 | except qbittorrentapi.exceptions.APIError:
1095 | await edit_message(f"⁉️Unable to find Torrent: {update.callback_query.data}
", update.callback_query)
1096 | except (error.TelegramError, requests.exceptions.RequestException, IndexError, ValueError, RuntimeError):
1097 | logger.error(f"Failed to answer callback for: {update.callback_query.data}")
1098 |
1099 | async def sys_info_handler(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
1100 | await reply_message(get_sys_info(), update, context,
1101 | InlineKeyboardMarkup([[InlineKeyboardButton(text="♻️ Refresh", callback_data="sys#refresh"),
1102 | InlineKeyboardButton(text="🚫 Close", callback_data="sys#close")]]), False)
1103 |
1104 | async def send_log_file(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
1105 | logger.info(f"Sending log file to {get_user(update)}")
1106 | try:
1107 | with open('log.txt', 'rb') as f:
1108 | await context.bot.send_document(
1109 | chat_id=update.message.chat_id,
1110 | document=f,
1111 | filename=f.name,
1112 | reply_to_message_id=update.message.message_id)
1113 | except error.TelegramError:
1114 | logger.error(f"Failed to send the log file to {get_user(update)}")
1115 |
1116 | async def ngrok_info(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
1117 | logger.info("Getting ngrok tunnel info")
1118 | try:
1119 | if tunnels := ngrok.get_tunnels():
1120 | await reply_message(f"🌐 Ngrok URL: {tunnels[0].public_url}", update, context)
1121 | else:
1122 | raise IndexError("No tunnel found")
1123 | except (IndexError, ngrok.PyngrokNgrokURLError, ngrok.PyngrokNgrokHTTPError):
1124 | logger.error(f"Failed to get ngrok tunnel, restarting")
1125 | try:
1126 | if ngrok.process.is_process_running(conf.get_default().ngrok_path) is True:
1127 | ngrok.kill()
1128 | await asyncio.sleep(1)
1129 | file_tunnel = ngrok.connect(addr=f"file://{DOWNLOAD_PATH}", proto="http", schemes=["http"], name="files_tunnel", inspect=False)
1130 | await reply_message(f"🌍 Ngrok tunnel started\nURL: {file_tunnel.public_url}", update, context)
1131 | except ngrok.PyngrokError as err:
1132 | await reply_message(f"⁉️ Failed to get tunnel info\nError: {str(err)}
", update, context)
1133 |
1134 | async def start(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
1135 | sender = get_user(update)
1136 | logger.info(f"/{START_CMD} sent by {sender}")
1137 | try:
1138 | await update.get_bot().set_my_commands(
1139 | [(START_CMD, "👽 Start the bot"),
1140 | (MIRROR_CMD, "🗳 Mirror file using Aria2"),
1141 | (UNZIP_ARIA_CMD, "🗃️ Mirror & unzip using Aria2"),
1142 | (LEECH_ARIA_CMD, "🧩 Leech file using Aria2"),
1143 | (QBIT_CMD, "🧲 Mirror file using Qbittorrent"),
1144 | (UNZIP_QBIT_CMD, "🫧 Mirror & unzip using Qbittorrent"),
1145 | (LEECH_QBIT_CMD, "🌀 Leech file using Qbittorrent"),
1146 | (UNZIP_LEECH_CMD, "🧬 Unzip and leech"),
1147 | (TASK_CMD, "📥 Show the task list"),
1148 | (INFO_CMD, "⚙️ Show system info"),
1149 | (NGROK_CMD, "🌍 Show Ngrok URL"),
1150 | (LOG_CMD, "📄 Get runtime log file")]
1151 | )
1152 | except (error.TelegramError, RuntimeError):
1153 | logger.error("Failed to set commands")
1154 | await reply_message(
1155 | f"Hi 👋, Welcome to Mirror2Gdrive bot. I can mirror files to your GDrive. Please use /{MIRROR_CMD}
or /{QBIT_CMD}
cmd to send links.",
1156 | update, context
1157 | )
1158 |
1159 | async def get_tg_file(doc: Document, update: Update, context: ContextTypes.DEFAULT_TYPE) -> Tuple[Optional[str], Optional[Message]]:
1160 | logger.info(f"Fetching file: {doc.file_name}")
1161 | tg_msg: Optional[Message] = None
1162 | tg_file_path: Optional[str] = None; down_prog: Optional[UpDownProgressUpdate] = None  # down_prog is created only on the pyrogram fallback path
1163 | if doc.file_size >= 10485760:  # notify the user for files of 10 MiB or larger
1164 | tg_msg = await reply_message(f"📥 File: {doc.file_name} is being downloaded, please wait", update, context)
1165 | try:
1166 | tg_file = await context.bot.get_file(file_id=doc.file_id)
1167 | tg_file_path = await tg_file.download_to_drive(custom_path=f"/tmp/{doc.file_id}")
1168 | except error.TelegramError:
1169 | logger.error(f"Failed to download {doc.file_name}, retrying with pyrogram")
1170 | down_prog = UpDownProgressUpdate(name=doc.file_name, up_start_time=time.time(), total=doc.file_size, user_id=str(update.message.chat_id), action="📥 Downloading [TG File]")
1171 | async with LOCK:
1172 | UPDOWN_LIST.append(down_prog)
1173 | try:
1174 | if pyro_app is not None:
1175 | await delete_download_status()
1176 | tg_file_path = await pyro_app.download_media(message=doc.file_id, file_name=f"/tmp/{doc.file_id}",
1177 | progress=down_prog.set_processed_bytes)
1178 | else:
1179 | logger.error("could not find pyrogram session")
1180 | except errors.RPCError as err:
1181 | logger.error(f"Failed to download {doc.file_name} [{err.ID}]")
1182 | except (ValueError, TimeoutError):
1183 | logger.error(f"Given file: {doc.file_name} does not exist in telegram server or may be timeout error while downloading")
1184 | else:
1185 | if tg_file_path is not None and os.path.exists(tg_file_path):
1186 | logger.info(f"Downloaded file from TG: {doc.file_name}")
1187 | if tg_msg is not None:
1188 | tg_msg = await context.bot.edit_message_text(
1189 | text=f"🗂️ File: {doc.file_name}
is downloaded, starting further process",
1190 | chat_id=tg_msg.chat_id, message_id=tg_msg.message_id, parse_mode=constants.ParseMode.HTML
1191 | )
1192 | finally:
1193 | down_prog.status = "COMPLETED"
1194 | return tg_file_path, tg_msg
1195 |
1196 | async def extract_upload_tg_file(file_path: str, upload: bool = False, leech: bool = False, task_id: str = "",
1197 | in_group: bool = False, chat_id: str = None, extract: bool = True) -> None:
1198 | if os.path.exists(file_path) is False or os.path.isdir(file_path) is True:
1199 | return
1200 | name = os.path.basename(file_path)
1201 | folder_name = os.path.splitext(name)[0] if extract else name
1202 | try:
1203 | if extract:
1204 | os.makedirs(name=f"{DOWNLOAD_PATH}/{folder_name}", exist_ok=True)
1205 | await asyncio.to_thread(patoolib.extract_archive, archive=file_path, outdir=f"{DOWNLOAD_PATH}/{folder_name}", interactive=False)
1206 | msg = f"🗂️ File: {name}
extracted ✔️"
1207 | if not upload and not leech:
1208 | msg += await get_ngrok_file_url(folder_name)
1209 | await send_msg_async(msg, chat_id)
1210 | if upload is True and await upload_to_gdrive(name=folder_name, chat_id=chat_id) is True:
1211 | logger.info(f"Cleaning up: {name}")
1212 | shutil.rmtree(path=f"{DOWNLOAD_PATH}/{folder_name}", ignore_errors=True)
1213 | os.remove(file_path)
1214 | if leech is True:
1215 | await trigger_tg_upload(f"{DOWNLOAD_PATH}/{folder_name}", task_id, in_group)
1216 | except patoolib.util.PatoolError as err:
1217 | shutil.rmtree(path=f"{DOWNLOAD_PATH}/{folder_name}", ignore_errors=True)
1218 | await send_msg_async(f"⁉️ Failed to extract: {name}
\n⚠️ Error: {str(err).replace('>', '').replace('<', '')}
\n"
1219 | f"Check /{LOG_CMD} for more details.", chat_id)
1220 | finally:
1221 | if AUTO_DEL_TASK is True:
1222 | shutil.rmtree(file_path, ignore_errors=True)
1223 |
1224 | async def is_torrent_file(doc: Document, context: ContextTypes.DEFAULT_TYPE) -> Optional[str]:
1225 | logger.info(f"Fetching file: {doc.file_name}")
1226 | tg_file = await context.bot.get_file(file_id=doc.file_id)
1227 | tg_file_path = await tg_file.download_to_drive(custom_path=f"/tmp/{tg_file.file_id}")
1228 | return tg_file_path if magic.from_file(tg_file_path, mime=True) == "application/x-bittorrent" else None
1229 |
1230 | async def edit_or_reply(msg: Optional[Message], text: str, update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
1231 | if msg is not None:
1232 | await context.bot.edit_message_text(text=text, chat_id=msg.chat_id, message_id=msg.message_id, parse_mode=constants.ParseMode.HTML)
1233 | else:
1234 | await reply_message(text, update, context)
1235 | await delete_download_status()
1236 |
1237 | def download_gdrive_file(file_id: str, file_path: str, gdrive_service, down_prog: UpDownProgressUpdate, chat_id: str) -> None:
1238 | fh = FileIO(file=file_path, mode='wb')
1239 | try:
1240 | for attempt in Retrying(wait=wait_exponential(multiplier=2, min=3, max=6), stop=stop_after_attempt(3), retry=retry_if_exception_type(Exception)):
1241 | with attempt:
1242 | request = gdrive_service.files().get_media(fileId=file_id, supportsAllDrives=True)
1243 | downloader = MediaIoBaseDownload(fh, request, chunksize=50 * 1024 * 1024)
1244 | done = False
1245 | _current_bytes = 0
1246 | _last_bytes = 0
1247 | while not done:
1248 | _status, done = downloader.next_chunk()
1249 | if _status:
1250 | _current_bytes = _status.resumable_progress if _last_bytes == 0 else _status.resumable_progress - _last_bytes
1251 | _last_bytes = _status.resumable_progress
1252 | down_prog.uploaded_bytes += _current_bytes
1253 | elif _current_bytes == 0:
1254 | down_prog.uploaded_bytes += _status.total_size
1255 | else:
1256 | down_prog.uploaded_bytes += _status.total_size - _current_bytes
1257 | fh.close()
1258 | except RetryError as err:
1259 | fh.close()
1260 | last_err = err.last_attempt.exception()
1261 | err_msg = last_err.error_details if isinstance(last_err, HttpError) else str(last_err).replace('<', '').replace('>', '')
1262 | msg = f"Failed to download: {os.path.basename(file_path)}, attempts: {err.last_attempt.attempt_number}, error: [{err_msg}]"
1263 | logger.error(msg)
1264 | if os.path.dirname(file_path) == DOWNLOAD_PATH:
1265 | send_status_update(f"⁉️{msg}", chat_id)
1266 |
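# Both GDrive transfer paths (upload_file and download_gdrive_file) stream in
# 50 MiB chunks (chunksize=50 * 1024 * 1024) and feed per-chunk byte deltas into
# the shared UpDownProgressUpdate entries registered in UPDOWN_LIST for status reporting.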
1267 | def get_up_args(upload_mode: str, file_path: str, file_id: str, chat_id: int) -> Dict[str, Union[str, bool]]:
1268 | if "L" in upload_mode:
1269 | in_group = True if "G" in upload_mode else False
1270 | up_args = {"upload": False, "leech": True, "in_group": in_group}
1271 | elif "M" in upload_mode:
1272 | up_args = {"upload": False, "leech": False}
1273 | else:
1274 | up_args = {"upload": True, "leech": False}
1275 | up_args["extract"] = True if "E" in upload_mode and magic.from_file(file_path, mime=True) in ArchiveMimetypes else False
1276 | up_args["file_path"] = file_path
1277 | up_args["task_id"] = file_id
1278 | up_args["chat_id"] = chat_id
1279 | return up_args
1280 |
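# Upload-mode flags consumed by get_up_args(): "L" leech to Telegram, "G" leech as
# a media group, "M" download only (no upload/leech), "E" extract archives first;
# anything else falls through to a GDrive upload. For example,
# get_up_args("LG", path, fid, chat) returns {"upload": False, "leech": True,
# "in_group": True, "extract": False, ...}.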
1281 | async def gdrive_files_handler(item_id: str, file_list: Dict[str, str], _file_path: str, gdrive_service, file_size: int, tg_msg: Message,
1282 | update: Update, context: ContextTypes.DEFAULT_TYPE, _loop: asyncio.events.AbstractEventLoop,
1283 | upload_mode: str, _folder_name: Optional[str] = None):
1284 | down_prog = UpDownProgressUpdate(user_id=str(update.message.chat_id), total=file_size, up_start_time=time.time(), action="📥 Downloading [GDrive]")
1285 | logger.info(f"Starting status updater for: {item_id}")
1286 | async with LOCK:
1287 | UPDOWN_LIST.append(down_prog)
1288 | for file_id in file_list:
1289 | file_name = file_list[file_id]
1290 | logger.info(f"Starting download: {file_name} [{file_id}]")
1291 | file_path = f"{_file_path}/{file_name}"
1292 | _name, _ext = os.path.splitext(file_name)
1293 | if os.path.exists(file_path):
1294 | count = 1
1295 | for _, _, files in os.walk(_file_path):
1296 | for file_ in files:
1297 | if file_ == file_name:
1298 | count += 1
1299 | file_path = f"{_file_path}/{_name}_{count}{_ext}"
1300 | down_prog.file_name = os.path.basename(file_path)
1301 | await asyncio.to_thread(download_gdrive_file, file_id, file_path, gdrive_service, down_prog, str(update.message.chat_id))
1302 | down_prog.status = "COMPLETED"
1303 | gdrive_service.close()
1304 | _file_names = list(file_list.values())
1305 | _file_name = _file_names[0]
1306 | if all([os.path.exists(f"{_file_path}/{file_name}") for file_name in _file_names]):
1307 | logger.info(f"All ({len(file_list)}) gdrive files downloaded and saved")
1308 | TASK_CHAT_DICT[item_id] = update.message.chat_id
1309 | up_args = get_up_args(upload_mode, f"{DOWNLOAD_PATH}/{_file_name}", item_id, update.message.chat_id)
1310 | if _folder_name:
1311 | _msg = f"✅ GDrive folder: {_folder_name}
downloaded successfully\n📀 Size: {humanize.naturalsize(file_size)}
\n" \
1312 | f"📚 Total files: {len(file_list)}
{await get_ngrok_file_url(_folder_name)}"
1313 | await edit_or_reply(tg_msg, _msg, update, context)
1314 | if up_args['leech']:
1315 | logger.info(f"Leeching started for {_folder_name}")
1316 | await trigger_tg_upload(_file_path, item_id, up_args['in_group'])
1317 | elif up_args['extract']:
1318 | await send_msg_async(f"Automatic extraction of archive file present in {_folder_name}
is not yet supported\n"
1319 | f"You can provide the ngrok url of downloaded archive file to extract.", update.message.chat_id)
1320 | elif up_args['upload']:
1321 | logger.info(f"Uploading started for {_folder_name}")
1322 | await upload_to_gdrive(name=_folder_name, chat_id=str(update.message.chat_id))
1323 | else:
1324 | logger.info(f"No post download action for {_folder_name}")
1325 | else:
1326 | await edit_or_reply(tg_msg, f"✅ Gdrive file: {_file_name}
downloaded successfully\n"
1327 | f"📀 Size: {humanize.naturalsize(file_size)}
{await get_ngrok_file_url(_file_name)}", update, context)
1328 | logger.info(f"calling extract_upload_tg_file() with {up_args}")
1329 | await extract_upload_tg_file(**up_args)
1330 | else:
1331 | _msg = f"Failed to download gdrive file: {_file_name}, Total: {len(file_list)} file, please check the log for more details"
1332 | logger.error(_msg)
1333 | await edit_or_reply(tg_msg, _msg, update, context)
1334 |
1335 | async def get_all_files(folder_id: str, creds) -> List[Dict[str, str]]:
1336 | all_files = await get_gdrive_files(query=None, folder_id=folder_id, creds=creds)
1337 | files_list = []
1338 | for _file in all_files:
1339 | if _file.get('mimeType') == 'application/vnd.google-apps.folder':
1340 | files_list.extend(await get_all_files(_file.get('id'), creds))
1341 | else:
1342 | files_list.append(_file)
1343 | return natsorted(seq=files_list, key=lambda k: k['name'])
1344 |
1345 | async def gdrive_handler(link: str, upload_mode: str, update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
1346 | logger.info(f"Finding gdrive id from {link}")
1347 | try:
1348 | _loop = asyncio.get_running_loop()
1349 | except RuntimeError:
1350 | await edit_or_reply(None, "Failed to get async running loop, please restart the bot", update, context)
1351 | return
1352 | if file_id := get_gdrive_id(link):
1353 | logger.info(f"Finding the details for {file_id}")
1354 | tg_msg = await reply_message(f"⏳ Processing given link, please wait", update, context)
1355 | try:
1356 | creds = get_oauth_creds()
1357 | async for attempt in AsyncRetrying(wait=wait_exponential(multiplier=2, min=3, max=6), stop=stop_after_attempt(3), retry=retry_if_exception_type(Exception)):
1358 | with attempt:
1359 | gdrive_service = build('drive', 'v3', credentials=creds, cache_discovery=False)
1360 | _meta = gdrive_service.files().get(fileId=file_id, supportsAllDrives=True, fields='name, id, mimeType, size').execute()
1361 | file_name = _meta['name'].replace('/', '')
1362 | file_type = _meta['mimeType']
1363 | except RetryError as err:
1364 | last_err = err.last_attempt.exception()
1365 | err_msg = last_err.error_details if isinstance(last_err, HttpError) else str(last_err).replace('<', '').replace('>', '')
1366 | _msg = f"❗Failed to get the details of GDrive ID: {file_id}\n⚠️ Error: {err_msg}"
1367 | await edit_or_reply(tg_msg, _msg, update, context)
1368 | else:
1369 | files_dict = dict()
1370 | folder_name = None
1371 | total_size = 0
1372 | file_path = DOWNLOAD_PATH
1373 | if file_type == "application/vnd.google-apps.folder":
1374 | logger.info(f"Fetching details of gdrive folder: {file_name}")
1375 | files_list = await get_all_files(file_id, creds)
1376 | if not files_list:
1377 | logger.warning(f"No files found in folder: {file_name}")
1378 | await edit_or_reply(tg_msg, f"❗Given folder: {file_name}
is empty, please validate and retry", update, context)
1379 | return
1380 | else:
1381 | for _file in files_list:
1382 | total_size += int(_file['size'])
1383 | files_dict[_file['id']] = _file['name'].replace('/', '')
1384 | logger.info(f"Found {len(files_list)} files in {file_name}, total size: {humanize.naturalsize(total_size)}")
1385 | folder_name = file_name
1386 | file_path += f"/{file_name}"
1387 | logger.info(f"Creating folder: {file_path}")
1388 | os.makedirs(name=file_path, exist_ok=True)
1389 | else:
1390 | files_dict[file_id] = file_name
1391 | total_size = int(_meta['size'])
1392 | await edit_or_reply(tg_msg, f"⏳ Downloading started for {file_name}
", update, context)
1393 | await delete_download_status()
1394 | logger.info(f"Calling gdrive_files_handler() for {len(files_dict)} file")
1395 | asyncio.run_coroutine_threadsafe(gdrive_files_handler(file_id, files_dict, file_path, gdrive_service, total_size,
1396 | tg_msg, update, context, _loop, upload_mode, folder_name), _loop)
1397 | else:
1398 | await edit_or_reply(None, "❗Failed to get the GDrive ID from given link. Please validate the link and retry", update, context)
1399 |
1400 | async def reply_handler(reply_doc: Document, update: Update, context: ContextTypes.DEFAULT_TYPE, upload_mode: str) -> None:
1401 | tg_file_path, tg_msg = await get_tg_file(reply_doc, update, context)
1402 | if tg_file_path is None:
1403 | await edit_or_reply(tg_msg, f"❗Failed to download file, please check /{LOG_CMD} for more details", update, context)
1404 | elif magic.from_file(tg_file_path, mime=True) == "application/x-bittorrent":
1405 | logger.info(f"Adding file to download: {reply_doc.file_name}")
1406 | try:
1407 | aria_obj = aria2c.add_torrent(torrent_file_path=tg_file_path)
1408 | async with LOCK:
1409 | TASK_UPLOAD_MODE_DICT[aria_obj.gid] = upload_mode
1410 | TASK_CHAT_DICT[aria_obj.gid] = update.message.chat_id
1411 | if aria_obj.has_failed is False:
1412 | _msg = f"📥 Download started ✔"
1413 | else:
1414 | _msg = f"⚠️ Failed to start download\nError :{aria_obj.error_message}
"
1415 | except aria2p.ClientException as err:
1416 | _msg = f"❗ Failed to start download\nError :{err.__class__.__name__}
"
1417 | await edit_or_reply(tg_msg, _msg, update, context)
1418 | else:
1419 | file_path = f"{DOWNLOAD_PATH}/{reply_doc.file_name}"
1420 | shutil.move(src=tg_file_path, dst=file_path)
1421 | up_args = get_up_args(upload_mode, file_path, reply_doc.file_id, update.message.chat_id)
1422 | async with LOCK:
1423 | TASK_CHAT_DICT[reply_doc.file_id] = update.message.chat_id
1424 | if not any([up_args['extract'], up_args['upload'], up_args['leech']]):
1425 | await edit_or_reply(tg_msg, f"File: {reply_doc.file_name}
is saved, use ngrok url to "
1426 | f"access{await get_ngrok_file_url(reply_doc.file_name)}", update, context)
1427 | logger.info(f"calling extract_upload_tg_file() with {up_args}")
1428 | await extract_upload_tg_file(**up_args)
1429 |
1430 | async def get_upload_mode(cmd_txt: List[str], unzip: bool, leech: bool) -> str:
1431 | if len(cmd_txt) > 1:
1432 | if re.search("^-M", cmd_txt[1], re.IGNORECASE):
1433 | upload_mode = "ME" if unzip else "M"
1434 | elif re.search("^-G", cmd_txt[1], re.IGNORECASE):
1435 | upload_mode = "ELG" if unzip and leech else "E" if unzip else "LG" if leech else "A"
1436 | elif unzip:
1437 | upload_mode = "EL" if leech else "E"
1438 | elif leech:
1439 | upload_mode = "L"
1440 | else:
1441 | upload_mode = "A"
1442 | elif unzip:
1443 | upload_mode = "EL" if leech else "E"
1444 | elif leech:
1445 | upload_mode = "L"
1446 | else:
1447 | upload_mode = "A"
1448 | return upload_mode
1449 |
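# Mode resolution examples for get_upload_mode(): "/mirror -m <link>" -> "M",
# "/unzipleech -g <link>" -> "ELG", "/qbleech <link>" -> "L", and a plain
# "/mirror <link>" -> "A" (auto upload to GDrive).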
1450 | async def aria_upload(update: Update, context: ContextTypes.DEFAULT_TYPE, unzip: bool = False, leech: bool = False) -> None:
1451 | logger.info(f"/{MIRROR_CMD if not unzip else UNZIP_ARIA_CMD} sent by {get_user(update)}")
1452 | aria_obj: Optional[aria2p.Download] = None
1453 | link: Optional[str] = None
1454 | help_txt = f"❗Send a link along with the command or reply to it. You can also reply to a .torrent/archive file\n" \
1455 | f"Send /{MIRROR_CMD} -m or /{UNZIP_ARIA_CMD} -m to disable auto uploading to gdrive\n" \
1456 | f"Send /{LEECH_ARIA_CMD} -g or /{UNZIP_LEECH_CMD} -g to leech files as a media group"
1457 | try:
1458 | cmd_txt = update.message.text.strip().split(" ", maxsplit=1)
1459 | upload_mode = await get_upload_mode(cmd_txt, unzip, leech)
1460 | if reply_msg := update.message.reply_to_message:
1461 | if reply_doc := reply_msg.document:
1462 | asyncio.run_coroutine_threadsafe(reply_handler(reply_doc, update, context, upload_mode), asyncio.get_running_loop())
1463 | return
1464 | elif reply_text := reply_msg.text:
1465 | link = reply_text
1466 | else:
1467 |                     await reply_message("❗Unsupported reply given, please reply with a torrent/link/archive file.", update, context)
1468 | else:
1469 |             link = cmd_txt[1][2:].strip() if "M" in upload_mode or "G" in upload_mode else cmd_txt[1].strip()
1470 | if link is not None:
1471 |             if re.findall(MAGNET_REGEX, link):
1472 |                 aria_obj = aria2c.add_magnet(magnet_uri=link, options=ARIA_OPTS)
1473 |             elif re.findall(URL_REGEX, link):
1474 | if not is_mega_link(link) and not is_gdrive_link(link) and not link.endswith('.torrent'):
1475 | logger.info(f"Generating direct link for: {link}")
1476 | try:
1477 | link = await asyncio.to_thread(direct_link_gen, link)
1478 | except Exception as err:
1479 | if "No Direct link function" not in str(err):
1480 | logger.error(f"Failed to generate direct link for: {link}, error: {str(err)}")
1481 |                             await reply_message(f"⁉️Failed to generate direct link\nReason: {str(err)}", update, context)
1482 | return
1483 | elif is_gdrive_link(link):
1484 | asyncio.run_coroutine_threadsafe(gdrive_handler(link, upload_mode, update, context), asyncio.get_running_loop())
1485 | return
1486 | aria_obj = aria2c.add_uris(uris=[link], options=ARIA_OPTS)
1487 | else:
1488 | logger.warning(f"Invalid link: {link}")
1489 | await reply_message("⁉️ Invalid link given, please send a valid download link.", update, context)
1490 | if aria_obj is not None:
1491 | if aria_obj.has_failed is False:
1492 | logger.info(f"Download started: {aria_obj.name} with GID: {aria_obj.gid}")
1493 | async with LOCK:
1494 | TASK_UPLOAD_MODE_DICT[aria_obj.gid] = upload_mode
1495 | TASK_CHAT_DICT[aria_obj.gid] = update.message.chat_id
1496 | TASK_ID_DICT[aria_obj.gid] = ''.join(choice(ascii_letters) for i in range(8))
1497 |                 await reply_message("📥 Download started ✔", update, context)
1498 | await delete_download_status()
1499 | else:
1500 | logger.error(f"Failed to start download, error: {aria_obj.error_code}")
1501 |                 await reply_message(f"⚠️ Failed to start download\nError: {aria_obj.error_message}❗", update, context)
1502 | aria2c.remove(downloads=[aria_obj], clean=True)
1503 | aria2c.client.save_session()
1504 | except IndexError:
1505 | await reply_message(help_txt, update, context)
1506 | except aria2p.ClientException:
1507 | await reply_message("❗ Failed to start download, kindly check the link and retry.", update, context)
1508 | except (error.TelegramError, FileNotFoundError, RuntimeError):
1509 | await reply_message(f"❗Failed to process the given file\nPlease check the /{LOG_CMD}", update, context)
1510 |
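# Link routing in aria_upload() above, in order: a replied .torrent document is
# handed off to reply_handler() (aria2c.add_torrent); a magnet URI goes to
# aria2c.add_magnet(); a GDrive link is delegated to gdrive_handler(); any other
# URL is first run through direct_link_gen() (skipped for mega links and URLs
# ending in .torrent) and then queued with aria2c.add_uris().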
1511 | async def qbit_upload(update: Update, context: ContextTypes.DEFAULT_TYPE, unzip: bool = False, leech: bool = False) -> None:
1512 | logger.info(f"/{QBIT_CMD if not unzip else UNZIP_QBIT_CMD} sent by {get_user(update)}")
1513 | link: Optional[str] = None
1514 | resp: Optional[str] = None
1515 | help_txt = f"❗Send a link along with the command or reply to it. You can also reply to a .torrent file\n" \
1516 | f"Send /{QBIT_CMD} -m or /{UNZIP_QBIT_CMD} -m to disable auto uploading to gdrive\n" \
1517 | f"Send /{LEECH_QBIT_CMD} -g to leech files as a media group"
1518 | if qb_client := get_qbit_client():
1519 | try:
1520 | cmd_txt = update.message.text.strip().split(" ", maxsplit=1)
1521 | upload_mode = await get_upload_mode(cmd_txt, unzip, leech)
1522 | if reply_msg := update.message.reply_to_message:
1523 | if reply_doc := reply_msg.document:
1524 | if file_path := await is_torrent_file(reply_doc, context):
1525 | logger.info(f"Adding file to download: {reply_doc.file_name}")
1526 | resp = qb_client.torrents_add(torrent_files=file_path)
1527 | else:
1528 |                         await reply_message("❗Given file type not supported, please send a torrent file.", update, context)
1529 | return
1530 | elif reply_text := reply_msg.text:
1531 | link = reply_text
1532 | else:
1533 |                     await reply_message("❗Unsupported reply given, please reply with a torrent file or link.", update, context)
1534 | return
1535 | else:
1536 |                 link = cmd_txt[1][2:].strip() if "M" in upload_mode or "G" in upload_mode else cmd_txt[1].strip()
1537 | if link is not None:
1538 | resp = qb_client.torrents_add(urls=link)
1539 |             if resp == "Ok.":
1540 |                 await reply_message("🧲 Torrent added ✔", update, context)
1541 | await delete_download_status()
1542 | for torrent in qb_client.torrents_info(status_filter='all', sort='added_on', reverse=True, limit=1):
1543 | async with LOCK:
1544 |                         if torrent.get('hash') not in TASK_CHAT_DICT:
1545 | TASK_UPLOAD_MODE_DICT[torrent.get('hash')] = upload_mode
1546 | TASK_CHAT_DICT[torrent.get('hash')] = update.message.chat_id
1547 | TASK_STATUS_MSG_DICT[torrent.get('hash')] = "NOT_SENT"
1548 | TASK_ID_DICT[torrent.get('hash')] = ''.join(choice(ascii_letters) for i in range(8))
1549 | else:
1550 | await reply_message("❗ Failed to add it\n⚠️ Kindly verify the given link and retry", update, context)
1551 | except IndexError:
1552 | await reply_message(help_txt, update, context)
1553 | except error.TelegramError:
1554 | await reply_message("❗Failed to process the given torrent file", update, context)
1555 | finally:
1556 | qb_client.auth_log_out()
1557 | else:
1558 | await reply_message("⁉️ Error connecting to qbittorrent, please retry", update, context)
1559 |
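# Note: qbittorrent-api's torrents_add() only returns the string "Ok." on
# success - no hash comes back - so qbit_upload() keys the task dicts by
# fetching the most recently added torrent (sort='added_on', reverse=True,
# limit=1) immediately after adding it.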
1560 | async def aria_unzip_upload(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
1561 | await aria_upload(update, context, True)
1562 |
1563 | async def qbit_unzip_upload(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
1564 | await qbit_upload(update, context, True)
1565 |
1566 | async def aria_mirror_leech(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
1567 | await aria_upload(update, context, False, True)
1568 |
1569 | async def qbit_mirror_leech(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
1570 | await qbit_upload(update, context, False, True)
1571 |
1572 | async def aria_unzip_leech(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
1573 | await aria_upload(update, context, True, True)
1574 |
1575 | async def action_handler(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
1576 | if update.message.text.startswith('#'):
1577 | _action = update.message.text.split('_', maxsplit=1)
1578 | is_pause = is_retry = is_cancel = is_resume = is_aria = False
1579 | status_msg = ''
1580 | try:
1581 | _task_id = _action[1]
1582 | _ids = list(TASK_ID_DICT.values())
1583 | if _task_id not in _ids:
1584 | await reply_message("❗Unable to find any task with the given ID", update, context)
1585 | return
1586 | else:
1587 | input_act = _action[0].lstrip('#')
1588 | if input_act == "pause":
1589 | is_pause = True
1590 | elif input_act == "resume":
1591 | is_resume = True
1592 | elif input_act == "retry":
1593 | is_retry = True
1594 | elif input_act == "cancel":
1595 | is_cancel = True
1596 | else:
1597 | raise ValueError
1598 | task_id = list(TASK_ID_DICT.keys())[_ids.index(_task_id)]
1599 | for d in aria2c.get_downloads():
1600 | if d.gid == task_id:
1601 | is_aria = True
1602 | break
1603 | if is_aria:
1604 | aria_obj = aria2c.get_download(task_id)
1605 | _name = aria_obj.name
1606 |                     status_msg += f"🗂️ {_name} "
1607 | if is_pause:
1608 | aria2c.pause(downloads=[aria_obj], force=True)
1609 | status_msg += "is paused"
1610 | elif is_resume:
1611 | aria2c.resume(downloads=[aria_obj])
1612 | status_msg += "is resumed"
1613 | elif is_retry:
1614 | aria2c.retry_downloads(downloads=[aria_obj], clean=False)
1615 | status_msg += ": retry request submitted"
1616 | for item in aria2c.get_downloads():
1617 | if item.gid not in TASK_CHAT_DICT:
1618 | TASK_CHAT_DICT[item.gid] = update.message.chat_id
1619 | TASK_ID_DICT[item.gid] = ''.join(choice(ascii_letters) for i in range(8))
1620 | elif is_cancel:
1621 | remove_extracted_dir(_name)
1622 | aria2c.remove(downloads=[aria_obj], force=True, files=True, clean=True)
1623 | status_msg += "is removed"
1624 | elif qb_client := get_qbit_client():
1625 | if qb_client.torrents_files(task_id):
1626 | _name = qb_client.torrents_files(task_id)[0].get('name').split("/")[0]
1627 | else:
1628 | _name = qb_client.torrents_info(torrent_hashes=[task_id])[0].get('name')
1629 |                 status_msg += f"🗂️ {_name} "
1630 | if is_pause:
1631 | qb_client.torrents_pause(torrent_hashes=[task_id])
1632 | status_msg += "is paused"
1633 | elif is_resume:
1634 | qb_client.torrents_resume(torrent_hashes=[task_id])
1635 | status_msg += "is resumed"
1636 | elif is_retry:
1637 | qb_client.torrents_set_force_start(enable=True, torrent_hashes=[task_id])
1638 | status_msg += ": retry request submitted"
1639 | elif is_cancel:
1640 | remove_extracted_dir(_name)
1641 | qb_client.torrents_delete(delete_files=True, torrent_hashes=[task_id])
1642 | status_msg += "is removed"
1643 | if status_msg:
1644 | await reply_message(status_msg, update, context)
1645 | await delete_download_status()
1646 | else:
1647 | await reply_message("⚠️ Unable to find any task associated with the given ID", update, context)
1648 | except (IndexError, ValueError):
1649 | await reply_message("⚠️ Unsupported command sent", update, context)
1650 | except (aria2p.ClientException, qbittorrentapi.exceptions.APIError) as err:
1651 | logger.error(f"Failed to process: {update.message.text}[{err.__class__.__name__}]")
1652 | await reply_message(f"⁉️Failed to process the request [{err.__class__.__name__}]", update, context)
1653 | except RuntimeError:
1654 | pass
1655 |
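# action_handler() implements a plain-text control protocol: status messages
# built by get_action_str() below embed hashtags such as #pause_<id>,
# #resume_<id>, #retry_<id> and #cancel_<id>, where <id> is the random
# 8-letter token stored in TASK_ID_DICT. Sending one back resolves the token
# to an aria2 GID or a qbittorrent info-hash and applies the action to
# whichever client owns the task.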
1656 | async def get_action_str(task: Union[aria2p.Download, qbittorrentapi.TorrentDictionary]) -> str:
1657 | action_str = ''
1658 |     _retry = "├♻️ Retry: #retry_{}\n"
1659 |     _pause = "├⏸ Pause: #pause_{}\n"
1660 |     _resume = "├▶ Resume: #resume_{}\n"
1661 |     _cancel = "╰❌ Cancel: #cancel_{}\n"
1662 | if isinstance(task, aria2p.Download):
1663 | task_id = TASK_ID_DICT.get(task.gid, '')
1664 | if not task_id:
1665 | return action_str
1666 | if "error" == task.status:
1667 | action_str += _retry.format(task_id)
1668 | elif "paused" == task.status:
1669 | action_str += _resume.format(task_id)
1670 | elif task.is_active:
1671 | action_str += _pause.format(task_id)
1672 | else:
1673 | task_id = TASK_ID_DICT.get(task.hash, '')
1674 | if not task_id:
1675 | return action_str
1676 | if task.state_enum.is_errored:
1677 | action_str += _retry.format(task_id)
1678 | elif "pausedDL" == task.state_enum.value:
1679 | action_str += _resume.format(task_id)
1680 | elif task.state_enum.is_downloading:
1681 | action_str += _pause.format(task_id)
1682 | action_str += _cancel.format(task_id)
1683 | return action_str
1684 |
1685 | async def extract_and_upload(torrent: Union[aria2p.Download, qbittorrentapi.TorrentDictionary], upload: bool = True,
1686 | leech: bool = False, in_group: bool = False) -> None:
1687 | if isinstance(torrent, qbittorrentapi.TorrentDictionary):
1688 | file_name = torrent.files[0].get('name').split("/")[0] if torrent.files else torrent.get('name')
1689 | file_id = torrent.get('hash')
1690 | else:
1691 | file_name = torrent.name
1692 | file_id = torrent.gid
1693 |     chat_id = TASK_CHAT_DICT.get(file_id)
1694 | if is_archive_file(file_name):
1695 | if is_file_extracted(file_name):
1696 |             msg = f"🗂️ File: {file_name} is already extracted ✔️"
1697 | await send_msg_async(msg, chat_id)
1698 | else:
1699 | folder_name = os.path.splitext(file_name)[0]
1700 | os.makedirs(name=f"{DOWNLOAD_PATH}/{folder_name}", exist_ok=True)
1701 | try:
1702 | await asyncio.to_thread(patoolib.extract_archive, archive=f"{DOWNLOAD_PATH}/{file_name}", outdir=f"{DOWNLOAD_PATH}/{folder_name}", interactive=False)
1703 |                 msg = f"🗂️ File: {file_name} extracted ✔️"
1704 | await send_msg_async(msg, chat_id)
1705 | except patoolib.util.PatoolError as err:
1706 | shutil.rmtree(path=f"{DOWNLOAD_PATH}/{folder_name}", ignore_errors=True)
1707 |                 await send_msg_async(f"⁉️Failed to extract: {file_name}\n⚠️ Error: {str(err).replace('>', '').replace('<', '')}\n"
1708 | f"Check /{LOG_CMD} for more details.", chat_id)
1709 | else:
1710 | if upload and await upload_to_gdrive(name=folder_name, chat_id=chat_id):
1711 | if AUTO_DEL_TASK is True:
1712 | clear_task_files(file_id, isinstance(torrent, qbittorrentapi.TorrentDictionary))
1713 | if leech:
1714 | await trigger_tg_upload(f"{DOWNLOAD_PATH}/{folder_name}", file_id, in_group)
1715 |
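# extract_and_upload() runs patoolib.extract_archive() via asyncio.to_thread()
# so a large archive does not block the event loop; patool itself shells out
# to whichever external extractor matches the archive type.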
1716 | async def trigger_extract_upload(torrent: Union[aria2p.Download, qbittorrentapi.TorrentDictionary], task_id: str) -> None:
1717 | if task_id in TASK_UPLOAD_MODE_DICT:
1718 | upload_mode = TASK_UPLOAD_MODE_DICT.get(task_id)
1719 | up_arg = {'gid': task_id} if isinstance(torrent, aria2p.Download) else {'hash': task_id}
1720 |         up_arg['chat_id'] = TASK_CHAT_DICT.get(task_id)
1721 | if isinstance(torrent, qbittorrentapi.TorrentDictionary):
1722 | file_name = torrent.files[0].get('name').split("/")[0] if torrent.files else torrent.get('name')
1723 | else:
1724 | file_name = torrent.name
1725 | try:
1726 | loop = asyncio.get_running_loop()
1727 | if upload_mode == "A":
1728 | asyncio.run_coroutine_threadsafe(upload_to_gdrive(**up_arg), loop)
1729 | elif upload_mode == "E":
1730 | asyncio.run_coroutine_threadsafe(extract_and_upload(torrent=torrent), loop)
1731 | elif upload_mode == "ME":
1732 | asyncio.run_coroutine_threadsafe(extract_and_upload(torrent=torrent, upload=False), loop)
1733 | elif upload_mode == "L":
1734 | asyncio.run_coroutine_threadsafe(trigger_tg_upload(f"{DOWNLOAD_PATH}/{file_name}", task_id), loop)
1735 | elif upload_mode == "LG":
1736 | asyncio.run_coroutine_threadsafe(trigger_tg_upload(f"{DOWNLOAD_PATH}/{file_name}", task_id, True), loop)
1737 | elif upload_mode == "EL":
1738 | asyncio.run_coroutine_threadsafe(extract_and_upload(torrent=torrent, upload=False, leech=True), loop)
1739 | elif upload_mode == "ELG":
1740 | asyncio.run_coroutine_threadsafe(extract_and_upload(torrent=torrent, upload=False, leech=True, in_group=True), loop)
1741 | else:
1742 | logger.info(f"Nothing needs to be done for: {task_id}")
1743 | except RuntimeError:
1744 | logger.error("Failed to run trigger_extract_upload()")
1745 |
1746 | async def aria_qbit_listener(context: ContextTypes.DEFAULT_TYPE) -> None:
1747 | total_downloads: List[Union[aria2p.Download, qbittorrentapi.TorrentDictionary]] = []
1748 | if qb_client := get_qbit_client():
1749 | try:
1750 |             total_downloads.extend(qb_client.torrents_info(status_filter="all"))
1751 |             total_downloads.extend(aria2c.get_downloads())
1752 |             total_downloads.extend(up_down for up_down in UPDOWN_LIST if up_down.status == "IN_PROGRESS")
1753 | target_chat_ids: Set[int] = set()
1754 | dl_report = ''
1755 | for torrent in natsorted(seq=total_downloads,
1756 | key=lambda x: x.name if isinstance(x, aria2p.Download) else x.get('name') if
1757 | isinstance(x, qbittorrentapi.TorrentDictionary) else x.file_name):
1758 | if isinstance(torrent, qbittorrentapi.TorrentDictionary):
1759 | present_in_dict = torrent.get('hash') in TASK_STATUS_MSG_DICT
1760 |                     if torrent.get('hash') not in TASK_ID_DICT:
1761 | async with LOCK:
1762 | TASK_ID_DICT[torrent.get('hash')] = ''.join(choice(ascii_letters) for i in range(8))
1763 | TASK_CHAT_DICT[torrent.get('hash')] = list(AUTHORIZED_USERS)[0]
1764 | if torrent.state_enum.is_complete or "pausedUP" == torrent.state_enum.value:
1765 | if present_in_dict:
1766 | if TASK_STATUS_MSG_DICT.get(torrent.get('hash')) == "NOT_SENT":
1767 |                             msg = f"✅ Downloaded: {torrent.get('name')}\n📀 Size: {humanize.naturalsize(torrent.get('size'))}\n" \
1768 |                                   f"⏳ Time taken: {humanize.naturaldelta(torrent.get('completion_on') - torrent.get('added_on'))}"
1769 | file_name = torrent.files[0].get('name').split("/")[0] if torrent.files else torrent.get('name')
1770 | msg += await get_ngrok_file_url(file_name)
1771 | send_status_update(msg)
1772 | TASK_STATUS_MSG_DICT[torrent.get('hash')] = "SENT"
1773 | asyncio.run_coroutine_threadsafe(coro=trigger_extract_upload(torrent, torrent.get('hash')), loop=asyncio.get_running_loop())
1774 | await delete_download_status()
1775 | elif torrent.state_enum.is_errored:
1776 | dl_report += f"{await get_qbit_info(torrent.get('hash'), qb_client)}{await get_action_str(torrent)}\n"
1777 | if present_in_dict:
1778 | if TASK_STATUS_MSG_DICT.get(torrent.get('hash')) == "NOT_SENT":
1779 |                             send_status_update(f"❌ Failed to download: {torrent.get('name')}")
1780 | TASK_STATUS_MSG_DICT[torrent.get('hash')] = "SENT"
1781 | else:
1782 | dl_report += f"{await get_qbit_info(torrent.get('hash'), qb_client)}{await get_action_str(torrent)}\n"
1783 | if not present_in_dict:
1784 | TASK_STATUS_MSG_DICT[torrent.get('hash')] = "NOT_SENT"
1785 | if torrent.get('hash') in TASK_CHAT_DICT:
1786 | target_chat_ids.add(TASK_CHAT_DICT[torrent.get('hash')])
1787 | elif isinstance(torrent, aria2p.Download):
1788 | down = torrent
1789 | if down.is_complete:
1790 | if not down.is_metadata and not down.followed_by_ids:
1791 | if down.gid in TASK_STATUS_MSG_DICT and TASK_STATUS_MSG_DICT[down.gid] == "NOT_SENT":
1792 |                         msg = f"✅ Downloaded: {down.name}\n📀 Size: {humanize.naturalsize(down.total_length)}{await get_ngrok_file_url(down.name)}"
1793 | send_status_update(msg)
1794 | TASK_STATUS_MSG_DICT[down.gid] = "SENT"
1795 | asyncio.run_coroutine_threadsafe(coro=trigger_extract_upload(down, down.gid), loop=asyncio.get_running_loop())
1796 | await delete_download_status()
1797 | else:
1798 | for fgid in down.followed_by_ids:
1799 |                         TASK_UPLOAD_MODE_DICT[fgid] = TASK_UPLOAD_MODE_DICT.get(down.gid, "M")
1800 |                         TASK_CHAT_DICT[fgid] = TASK_CHAT_DICT.get(down.gid)
1801 | TASK_ID_DICT[fgid] = ''.join(choice(ascii_letters) for i in range(8))
1802 | logger.info(f"Removing file: {down.name}")
1803 | aria2c.remove(downloads=[down])
1804 | elif down.has_failed:
1805 | dl_report += f"{await get_download_info(down)}{await get_action_str(down)}\n"
1806 | if down.gid in TASK_STATUS_MSG_DICT and TASK_STATUS_MSG_DICT[down.gid] == "NOT_SENT":
1807 |                         send_status_update(f"⁉️Failed to download: {down.name} [{down.error_message}]")
1808 | TASK_STATUS_MSG_DICT[down.gid] = "SENT"
1809 | else:
1810 | dl_report += f"{await get_download_info(down)}{await get_action_str(down)}\n"
1811 | if down.gid not in TASK_STATUS_MSG_DICT:
1812 | TASK_STATUS_MSG_DICT[down.gid] = "NOT_SENT"
1813 | if down.gid in TASK_CHAT_DICT:
1814 | target_chat_ids.add(TASK_CHAT_DICT[down.gid])
1815 | elif isinstance(torrent, UpDownProgressUpdate):
1816 | dl_report += f"{await torrent.get_status_msg()}\n\n"
1817 | target_chat_ids.add(int(torrent.user_id) if torrent.user_id else int(list(AUTHORIZED_USERS)[0]))
1818 | for chat_id in target_chat_ids:
1819 | async with LOCK:
1820 | chat_id_present = chat_id in CHAT_UPDATE_MSG_DICT
1821 | if dl_report:
1822 | if not chat_id_present:
1823 | msg_id = await send_msg_async(dl_report, chat_id)
1824 | CHAT_UPDATE_MSG_DICT[chat_id] = msg_id
1825 | else:
1826 | await pyro_app.edit_message_text(chat_id=chat_id, message_id=CHAT_UPDATE_MSG_DICT[chat_id], text=dl_report)
1827 | if not dl_report or not target_chat_ids:
1828 | await delete_download_status()
1829 | qb_client.auth_log_out()
1830 | except (qbittorrentapi.APIConnectionError, aria2p.ClientException, RuntimeError) as err:
1831 | logger.warning(f"Error in aria qbit listener [{err.__class__.__name__}]")
1832 | except errors.RPCError as err:
1833 | logger.debug(f"Failed to update download status [{err.ID}]")
1834 |
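# aria_qbit_listener() is the bot's polling loop: start_bot() below registers
# it as a repeating job (every 6 seconds). Each tick merges qbittorrent
# torrents, aria2 downloads and in-flight Telegram transfers into one
# naturally-sorted report, sends one-time completion/failure notices (tracked
# via TASK_STATUS_MSG_DICT), hands finished tasks to trigger_extract_upload(),
# and edits a single status message per chat (CHAT_UPDATE_MSG_DICT).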
1835 | def start_bot() -> None:
1836 | global BOT_START_TIME
1837 | logger.info("Starting the BOT")
1838 | try:
1839 | application: Application = ApplicationBuilder().token(BOT_TOKEN).build()
1840 | except (TypeError, error.InvalidToken):
1841 | logger.error("Failed to initialize bot")
1842 | else:
1843 | BOT_START_TIME = time.time()
1844 | logger.info("Registering commands")
1845 | try:
1846 | start_handler = CommandHandler(START_CMD, start, Chat(chat_id=AUTHORIZED_USERS, allow_empty=False))
1847 | aria_handler = CommandHandler(MIRROR_CMD, aria_upload, Chat(chat_id=AUTHORIZED_USERS, allow_empty=False))
1848 | status_handler = CommandHandler(TASK_CMD, get_total_downloads, Chat(chat_id=AUTHORIZED_USERS, allow_empty=False))
1849 | info_handler = CommandHandler(INFO_CMD, sys_info_handler, Chat(chat_id=AUTHORIZED_USERS, allow_empty=False))
1850 | log_handler = CommandHandler(LOG_CMD, send_log_file, Chat(chat_id=AUTHORIZED_USERS, allow_empty=False))
1851 | qbit_handler = CommandHandler(QBIT_CMD, qbit_upload, Chat(chat_id=AUTHORIZED_USERS, allow_empty=False))
1852 | ngrok_handler = CommandHandler(NGROK_CMD, ngrok_info, Chat(chat_id=AUTHORIZED_USERS, allow_empty=False))
1853 | unzip_aria = CommandHandler(UNZIP_ARIA_CMD, aria_unzip_upload, Chat(chat_id=AUTHORIZED_USERS, allow_empty=False))
1854 | unzip_qbit = CommandHandler(UNZIP_QBIT_CMD, qbit_unzip_upload, Chat(chat_id=AUTHORIZED_USERS, allow_empty=False))
1855 | leech_aria = CommandHandler(LEECH_ARIA_CMD, aria_mirror_leech, Chat(chat_id=AUTHORIZED_USERS, allow_empty=False))
1856 | leech_qbit = CommandHandler(LEECH_QBIT_CMD, qbit_mirror_leech, Chat(chat_id=AUTHORIZED_USERS, allow_empty=False))
1857 | unzip_leech_aria = CommandHandler(UNZIP_LEECH_CMD, aria_unzip_leech, Chat(chat_id=AUTHORIZED_USERS, allow_empty=False))
1858 | task_action_handler = MessageHandler(filters.TEXT & ~filters.COMMAND, action_handler)
1859 | callback_handler = CallbackQueryHandler(bot_callback_handler, pattern="^aria|qbit|sys")
1860 | application.add_handlers([start_handler, aria_handler, callback_handler, status_handler, info_handler, log_handler, task_action_handler,
1861 | qbit_handler, ngrok_handler, unzip_aria, unzip_qbit, leech_aria, leech_qbit, unzip_leech_aria])
1862 | application.job_queue.run_repeating(callback=aria_qbit_listener, interval=6, name="aria_qbit_listener")
1863 | application.run_polling(drop_pending_updates=True)
1864 | except error.TelegramError as err:
1865 | logger.error(f"Failed to start bot: {str(err)}")
1866 |
1867 | def start_pyrogram() -> None:
1868 | global pyro_app
1869 | logger.info("Starting pyrogram session")
1870 | try:
1871 | sess_str = os.getenv(key="USER_SESSION_STRING", default="")
1872 | pyro_app = Client(
1873 | name="pyrogram",
1874 | api_id=os.environ["TG_API_ID"],
1875 | api_hash=os.environ["TG_API_HASH"],
1876 | no_updates=True,
1877 | parse_mode=enums.ParseMode.HTML,
1878 | bot_token=os.environ["BOT_TOKEN"] if not sess_str else None,
1879 | session_string=sess_str if sess_str else None,
1880 | in_memory=True if sess_str else None,
1881 | takeout=True,
1882 | max_concurrent_transmissions=10
1883 | )
1884 | pyro_app.start()
1885 | logger.info(f"Session started, premium: {pyro_app.me.is_premium}")
1886 | except KeyError:
1887 | logger.error("Missing required values, please check the config")
1888 | _exit(os.EX_CONFIG)
1889 | except ConnectionError:
1890 | logger.warning("Pyrogram session already started")
1891 | except errors.RPCError as err:
1892 | logger.error(f"Failed to start pyrogram session, error: {err.MESSAGE}")
1893 | _exit(os.EX_UNAVAILABLE)
1894 |
1895 | def get_trackers(aria: bool = True) -> str:
1896 | trackers = ''
1897 | logger.info("Fetching trackers list")
1898 | try:
1899 | for index, url in enumerate(TRACKER_URLS):
1900 | track_resp = requests.get(url=url)
1901 | if track_resp.ok:
1902 | if aria is True:
1903 | if index == 0:
1904 | trackers += track_resp.text.strip('\n')
1905 | else:
1906 | trackers += track_resp.text.replace('\n', ',').rstrip(',')
1907 | else:
1908 | if index == 0:
1909 | trackers += track_resp.text.strip('\n').replace(',', '\\n')
1910 | else:
1911 | trackers += track_resp.text.rstrip('\n').replace('\n', '\\n')
1912 | track_resp.close()
1913 | else:
1914 | logger.error(f"Failed to get data from tracker link {index}")
1915 | except requests.exceptions.RequestException:
1916 | logger.error("Failed to retrieve trackers")
1917 | return trackers
1918 |
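# The two branches emit different separators for the two consumers: the aria2
# value is a comma-separated list (formatted into ARIA_COMMAND), while the
# qBittorrent branch keeps literal \n escapes because that is how the
# Session\AdditionalTrackers INI key stores one-tracker-per-line values.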
1919 | def start_aria() -> None:
1920 | global aria2c
1921 | trackers = get_trackers()
1922 | logger.info(f"Fetched {len(trackers.split(','))} trackers")
1923 | aria_command_args = ARIA_COMMAND.format(trackers).split(' ')
1924 | logger.info("Downloading dht files")
1925 | try:
1926 | dht_file = requests.get(url=DHT_FILE_URL)
1927 | if dht_file.ok:
1928 | with open("/usr/src/app/dht.dat", "wb") as f:
1929 | f.write(dht_file.content)
1930 | aria_command_args.extend(["--enable-dht=true", "--dht-file-path=/usr/src/app/dht.dat"])
1931 | dht6_file = requests.get(url=DHT6_FILE_URL)
1932 | if dht6_file.ok:
1933 | with open("/usr/src/app/dht6.dat", "wb") as f:
1934 | f.write(dht6_file.content)
1935 | aria_command_args.extend(["--enable-dht6=true", "--dht-file-path6=/usr/src/app/dht6.dat"])
1936 | except requests.exceptions.RequestException:
1937 | logger.warning("Failed to download dht file")
1938 | else:
1939 | dht_file.close()
1940 | dht6_file.close()
1941 | if os.path.exists("/usr/src/app/aria2.session"):
1942 | aria_command_args.append("--input-file=/usr/src/app/aria2.session")
1943 | logger.info("Starting aria2c daemon")
1944 | try:
1945 | subprocess.run(args=aria_command_args, check=True)
1946 | time.sleep(2)
1947 | aria2c = aria2p.API(aria2p.Client(host="http://localhost", port=6800, secret=""))
1948 | aria2c.get_downloads()
1949 | logger.info("aria2c daemon started")
1950 | except (subprocess.CalledProcessError, aria2p.client.ClientException, requests.exceptions.RequestException, OSError) as err:
1951 | logger.error(f"Failed to start aria2c, error: {str(err)}")
1952 | _exit(os.EX_UNAVAILABLE)
1953 |
1954 | def start_qbit() -> None:
1955 | qbit_conf_path = '/usr/src/app/.config'
1956 | logger.info("Initializing qbittorrent-nox")
1957 | qbit_conf_data = QBIT_CONF.replace('{trackers_url}', get_trackers(False))
1958 | os.makedirs(name=f"{qbit_conf_path}/qBittorrent/config", exist_ok=True)
1959 | with open(f'{qbit_conf_path}/qBittorrent/config/qBittorrent.conf', 'w', encoding='utf-8') as conf_file:
1960 | conf_file.write(qbit_conf_data)
1961 | logger.info("Starting qbittorrent-nox daemon")
1962 | try:
1963 | subprocess.run(args=["/usr/bin/qbittorrent-nox", "--daemon", "--webui-port=8090", f"--profile={qbit_conf_path}"], check=True)
1964 | time.sleep(2)
1965 | qb_client = get_qbit_client()
1966 | logger.info(f"qbittorrent version: {qb_client.app.version}")
1967 | qb_client.auth_log_out()
1968 | except (subprocess.CalledProcessError, AttributeError, qbittorrentapi.exceptions.APIConnectionError,
1969 | qbittorrentapi.exceptions.LoginFailed) as err:
1970 | logger.error(f"Failed to start qbittorrent-nox, error: {str(err)}")
1971 | _exit(os.EX_UNAVAILABLE)
1972 |
1973 | def start_ngrok() -> None:
1974 | logger.info("Starting ngrok tunnel")
1975 | with open("/usr/src/app/ngrok.yml", "w") as config:
1976 | config.write(f"version: 2\nauthtoken: {NGROK_AUTH_TOKEN}\nregion: in\nconsole_ui: false\nlog_level: info")
1977 | ngrok_conf = conf.PyngrokConfig(
1978 | config_path="/usr/src/app/ngrok.yml",
1979 | auth_token=NGROK_AUTH_TOKEN,
1980 | region="in",
1981 | max_logs=5,
1982 | ngrok_version="v3",
1983 | monitor_thread=False)
1984 | try:
1985 | conf.set_default(ngrok_conf)
1986 | file_tunnel = ngrok.connect(addr=f"file://{DOWNLOAD_PATH}", proto="http", schemes=["http"], name="files_tunnel", inspect=False)
1987 | logger.info(f"Ngrok tunnel started: {file_tunnel.public_url}")
1988 | except ngrok.PyngrokError as err:
1989 | logger.error(f"Failed to start ngrok, error: {str(err)}")
1990 |
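# ngrok v3 can serve a local directory directly: connecting with a "file://"
# address exposes DOWNLOAD_PATH through ngrok's built-in file server, which is
# the URL the bot's ngrok command reports.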
1991 | def setup_bot() -> None:
1992 | global BOT_TOKEN
1993 | global NGROK_AUTH_TOKEN
1994 | global GDRIVE_FOLDER_ID
1995 | global AUTHORIZED_USERS
1996 | global AUTO_DEL_TASK
1997 | os.makedirs(name=DOWNLOAD_PATH, exist_ok=True)
1998 | if CONFIG_FILE_URL is not None:
1999 | logger.info("Downloading config file")
2000 | try:
2001 | config_file = requests.get(url=CONFIG_FILE_URL)
2002 | except requests.exceptions.RequestException:
2003 | logger.error("Failed to download config file")
2004 | else:
2005 | if config_file.ok:
2006 | logger.info("Loading config values")
2007 | if load_dotenv(stream=StringIO(config_file.text), override=True):
2008 | config_file.close()
2009 | BOT_TOKEN = os.environ['BOT_TOKEN']
2010 | NGROK_AUTH_TOKEN = os.environ['NGROK_AUTH_TOKEN']
2011 | GDRIVE_FOLDER_ID = os.environ['GDRIVE_FOLDER_ID']
2012 | AUTO_DEL_TASK = os.getenv(key='AUTO_DEL_TASK', default="False").lower() == "true"
2013 | try:
2014 | AUTHORIZED_USERS = json.loads(os.environ['USER_LIST'])
2015 | except json.JSONDecodeError:
2016 | logger.error("Failed to parse AUTHORIZED_USERS data")
2017 | else:
2018 | logger.info("Downloading token.pickle file")
2019 | try:
2020 | pickle_file = requests.get(url=os.environ['PICKLE_FILE_URL'])
2021 | except requests.exceptions.RequestException:
2022 | logger.error("Failed to download pickle file")
2023 | else:
2024 | if pickle_file.ok:
2025 | with open(PICKLE_FILE_NAME, 'wb') as f:
2026 | f.write(pickle_file.content)
2027 | pickle_file.close()
2028 | logger.info("config.env data loaded successfully")
2029 | start_aria()
2030 | start_qbit()
2031 | start_ngrok()
2032 | start_pyrogram()
2033 | start_bot()
2034 | else:
2035 | logger.error("Failed to get pickle file data")
2036 | else:
2037 | logger.error("Failed to parse config data")
2038 | else:
2039 | logger.error("Failed to get config data")
2040 | else:
2041 | logger.error("CONFIG_FILE_URL is None")
2042 |
2043 | if __name__ == '__main__':
2044 | setup_bot()
2045 |
--------------------------------------------------------------------------------
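For reference, the remote-config bootstrap in `setup_bot()` above reduces to the pattern below. This is a minimal sketch, assuming `CONFIG_FILE_URL` is supplied as an environment variable pointing at a plaintext env file shaped like `sample_config.env`:

```python
# Minimal sketch of setup_bot()'s remote-config bootstrap (assumes
# CONFIG_FILE_URL is a direct download link to a config.env-style file).
import os
from io import StringIO

import requests
from dotenv import load_dotenv

config_file = requests.get(url=os.environ["CONFIG_FILE_URL"])
config_file.raise_for_status()
# load_dotenv() accepts any text stream, so the config never touches disk
load_dotenv(stream=StringIO(config_file.text), override=True)

print(os.environ["BOT_TOKEN"])  # values are now ordinary environment variables
```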
/qbit_conf.py:
--------------------------------------------------------------------------------
1 | QBIT_CONF = r"""
2 | [AutoRun]
3 | OnTorrentAdded\Enabled=false
4 | OnTorrentAdded\Program=
5 | enabled=false
6 | program=
7 |
8 | [BitTorrent]
9 | Session\AddExtensionToIncompleteFiles=true
10 | Session\AddTrackersEnabled=true
11 | Session\AdditionalTrackers={trackers_url}
12 | Session\AlternativeGlobalDLSpeedLimit=0
13 | Session\AlternativeGlobalUPSpeedLimit=0
14 | Session\AnnounceToAllTrackers=true
15 | Session\AnonymousModeEnabled=false
16 | Session\AsyncIOThreadsCount=16
17 | Session\DHTEnabled=true
18 | Session\DefaultSavePath=/usr/src/app/downloads/
19 | Session\DiskCacheSize=-1
20 | Session\ExcludedFileNames=
21 | Session\GlobalMaxRatio=0
22 | Session\GlobalMaxSeedingMinutes=-1
23 | Session\IgnoreLimitsOnLAN=true
24 | Session\IgnoreSlowTorrentsForQueueing=true
25 | Session\LSDEnabled=true
26 | Session\MaxActiveDownloads=100
27 | Session\MaxActiveTorrents=50
28 | Session\MaxActiveUploads=50
29 | Session\MaxConnections=-1
30 | Session\MaxConnectionsPerTorrent=-1
31 | Session\MaxRatioAction=0
32 | Session\MaxUploads=-1
33 | Session\MaxUploadsPerTorrent=-1
34 | Session\MultiConnectionsPerIp=true
35 | Session\PeXEnabled=true
36 | Session\Port=6336
37 | Session\Preallocation=true
38 | Session\QueueingSystemEnabled=false
39 | Session\SlowTorrentsDownloadRate=2
40 | Session\SlowTorrentsInactivityTimer=600
41 | Session\SlowTorrentsUploadRate=2
42 | Session\TempPath=/usr/src/app/downloads/incomplete/
43 | Session\TempPathEnabled=true
44 | Session\uTPRateLimited=false
45 |
46 | [Core]
47 | AutoDeleteAddedTorrentFile=IfAdded
48 |
49 | [LegalNotice]
50 | Accepted=true
51 |
52 | [Meta]
53 | MigrationVersion=4
54 |
55 | [Network]
56 | Proxy\OnlyForTorrents=false
57 |
58 | [Preferences]
59 | Advanced\AnnounceToAllTrackers=true
60 | Advanced\AnonymousMode=false
61 | Advanced\IgnoreLimitsLAN=false
62 | Advanced\LtTrackerExchange=true
63 | Advanced\RecheckOnCompletion=false
64 | Advanced\trackerPort=9000
65 | Advanced\trackerPortForwarding=false
66 | Bittorrent\AddTrackers=false
67 | Bittorrent\DHT=true
68 | Bittorrent\LSD=true
69 | Bittorrent\MaxConnecs=-1
70 | Bittorrent\MaxConnecsPerTorrent=-1
71 | Bittorrent\MaxRatio=-1
72 | Bittorrent\MaxRatioAction=0
73 | Bittorrent\MaxUploads=-1
74 | Bittorrent\MaxUploadsPerTorrent=-1
75 | Bittorrent\PeX=true
76 | Connection\ResolvePeerCountries=true
77 | Downloads\DiskWriteCacheSize=-1
78 | Downloads\PreAllocation=true
79 | Downloads\SavePath=/usr/src/app/downloads/
80 | Downloads\TempPath=/usr/src/app/downloads/incomplete/
81 | Downloads\UseIncompleteExtension=true
82 | DynDNS\DomainName=changeme.dyndns.org
83 | DynDNS\Enabled=false
84 | DynDNS\Password=
85 | DynDNS\Service=DynDNS
86 | DynDNS\Username=
87 | General\Locale=
88 | General\PreventFromSuspendWhenDownloading=true
89 | MailNotification\email=
90 | MailNotification\enabled=false
91 | MailNotification\password=adminadmin
92 | MailNotification\req_auth=true
93 | MailNotification\req_ssl=false
94 | MailNotification\sender=qBittorrent_notification@example.com
95 | MailNotification\smtp_server=smtp.changeme.com
96 | MailNotification\username=admin
97 | Queueing\IgnoreSlowTorrents=true
98 | Queueing\MaxActiveDownloads=100
99 | Queueing\MaxActiveTorrents=50
100 | Queueing\MaxActiveUploads=50
101 | Queueing\QueueingEnabled=false
102 | Search\SearchEnabled=true
103 | WebUI\Address=*
104 | WebUI\AlternativeUIEnabled=false
105 | WebUI\AuthSubnetWhitelist=@Invalid()
106 | WebUI\AuthSubnetWhitelistEnabled=false
107 | WebUI\BanDuration=3600
108 | WebUI\CSRFProtection=false
109 | WebUI\ClickjackingProtection=false
110 | WebUI\CustomHTTPHeaders=
111 | WebUI\CustomHTTPHeadersEnabled=false
112 | WebUI\Enabled=true
113 | WebUI\HTTPS\CertificatePath=
114 | WebUI\HTTPS\Enabled=false
115 | WebUI\HTTPS\KeyPath=
116 | WebUI\HostHeaderValidation=false
117 | WebUI\LocalHostAuth=true
118 | WebUI\MaxAuthenticationFailCount=5
119 | WebUI\Port=8090
120 | WebUI\ReverseProxySupportEnabled=false
121 | WebUI\RootFolder=
122 | WebUI\SecureCookie=true
123 | WebUI\ServerDomains=*
124 | WebUI\SessionTimeout=3600
125 | WebUI\TrustedReverseProxiesList=
126 | WebUI\UseUPnP=false
127 | WebUI\Username=admin
128 | """
129 |
--------------------------------------------------------------------------------
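A note on the template above: `{trackers_url}` is its only placeholder, and `start_qbit()` in main.py fills it with `str.replace` rather than `str.format`, presumably to avoid having to escape stray braces in the INI text. A minimal sketch of that rendering step (the tracker URL here is a hypothetical stand-in):

```python
# Minimal sketch of how start_qbit() renders QBIT_CONF into a qBittorrent profile.
import os

from qbit_conf import QBIT_CONF

qbit_conf_path = "/usr/src/app/.config"  # same profile dir start_qbit() uses
trackers = "udp://tracker.example.com:1337/announce"  # hypothetical stand-in
os.makedirs(f"{qbit_conf_path}/qBittorrent/config", exist_ok=True)
conf_path = f"{qbit_conf_path}/qBittorrent/config/qBittorrent.conf"
with open(conf_path, "w", encoding="utf-8") as conf_file:
    conf_file.write(QBIT_CONF.replace("{trackers_url}", trackers))
```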
/requirements.txt:
--------------------------------------------------------------------------------
1 | python-telegram-bot[job-queue]==20.0
2 | google-api-python-client==2.60.0
3 | google-auth-oauthlib==1.0.0
4 | humanize==4.4.0
5 | python-dotenv==0.21.0
6 | aria2p==0.11.3
7 | psutil==5.9.4
8 | tenacity==8.1.0
9 | qbittorrent-api==2022.11.40
10 | pyngrok==5.2.1
11 | patool==1.12
12 | python-magic==0.4.27
13 | pyrogram==2.0.99
14 | tgCrypto==1.2.5
15 | natsort==8.2.0
16 | Pillow==9.4.0
17 | ffmpeg-python==0.2.0
18 | ffprobe-python==1.0.3
19 | filesplit==4.0.1
20 | uvloop==0.17.0
21 | beautifulsoup4==4.11.2
22 | cfscrape==2.1.1
23 | lxml==4.9.2
24 | git+https://github.com/zevtyardt/lk21.git
--------------------------------------------------------------------------------
/sample_config.env:
--------------------------------------------------------------------------------
1 | # Google Drive token.pickle
2 | PICKLE_FILE_URL = ""
3 | # Telegram settings
4 | BOT_TOKEN = ""
5 | TG_API_ID = ""
6 | TG_API_HASH = ""
7 | # To upload files in telegram premium
8 | USER_SESSION_STRING = ""
9 | # Authorized users and chat to use the bot
10 | USER_LIST = '[12345, 67890]'
11 | # Drive/Folder ID to upload files
12 | GDRIVE_FOLDER_ID = 'abcXYZ'
13 | # For serving download directory with ngrok's built-in file server
14 | NGROK_AUTH_TOKEN = ""
15 | # For clearing tasks whose upload is completed
16 | AUTO_DEL_TASK = False
17 | # For downloading files from uptobox
18 | UPTOBOX_TOKEN = ""
19 | # For sending files to log channel
20 | LOG_CHANNEL = ""
21 | # For sending files to you
22 | BOT_PM = True
23 | # Create worker using https://gitlab.com/GoogleDriveIndex/Google-Drive-Index
24 | # Example: https://index.workers.dev/0: (Add drive index num with : at the end)
25 | INDEX_LINK = ""
26 |
--------------------------------------------------------------------------------
/session_generator.py:
--------------------------------------------------------------------------------
1 | from asyncio import run
2 | from pyrogram import Client
3 |
4 | async def generate_string_session():
5 |     print('Requires pyrogram v2 or greater.')
6 |
7 |     API_ID = int(input("Enter API_ID: "))
8 | API_HASH = input("Enter API_HASH: ")
9 |
10 | async with Client(name='USS', api_id=API_ID, api_hash=API_HASH, in_memory=True) as app:
11 | print(await app.export_session_string())
12 |
13 | run(generate_string_session())
14 |
--------------------------------------------------------------------------------
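Usage note: run this helper once on any machine with pyrogram v2 installed (`python session_generator.py`), enter the same `TG_API_ID`/`TG_API_HASH` values used in config.env when prompted, and paste the printed string into `USER_SESSION_STRING` so the bot can upload files to Telegram with your (optionally premium) account.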