├── .env ├── .gitignore ├── LICENSE ├── README.md ├── index.js ├── package.json ├── requirements.txt └── vodDownloader.py /.env: -------------------------------------------------------------------------------- 1 | GUC_SK_USERNAME=your_username 2 | GUC_SK_PASSWORD=your_password -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | node_modules/ 2 | cms_downloads/ 3 | .DS_Store -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2020 Ahmed Ashraf 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 
22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # cms-downloader 2 | 3 | > **⚠️ This script has been tested only on macOS and Linux. It has not been tested on Windows machines** 4 | 5 | > **✅ Your login credentials are saved on your local machine. They never leave it and are only sent to the CMS platform. You can verify this by reviewing the code** 6 | 7 | ## Motivation 8 | 9 | Every developer has the coding superpower 😎. So, as a developer, you should automate all the boring stuff 😪 to free up some threads in your mind for the serious work. 👀🤓 10 | 11 | ## Script Description 12 | 13 | This script automates downloading content from the GUC-CMS platform. It fetches the courses you are subscribed to, creates a folder for each course, and downloads the content (ALL the content), placing each file in its corresponding week folder. Et voilà! Everything is downloaded 🔥🔥 14 | 15 | ## Skip Chromium installation (if you already have Chromium installed) 16 | 17 | > **⚠️ Make sure you have Chromium installed using `which chromium`** 18 | 19 | ```bash 20 | export CHROMIUM_EXECUTABLE_PATH=$(which chromium) 21 | export PUPPETEER_SKIP_CHROMIUM_DOWNLOAD=true 22 | ``` 23 | 24 | When `PUPPETEER_SKIP_CHROMIUM_DOWNLOAD` is set to `true`, Puppeteer skips downloading the Chromium binaries; however, you **must** then provide the path to a Chromium executable (which is done via the `CHROMIUM_EXECUTABLE_PATH` environment variable). 25 | 26 | ## Prepare your machine (Windows only) 27 | 28 | 1. Download the LTS version of [Node.js](https://nodejs.org/en/) and install it. The usual next-next-finish install... 29 | 30 | 2. Download the Windows version of [Git Bash](https://git-scm.com/downloads) and install it the same way. Just tick the option to create a desktop icon 31 | 32 | 3. 
You should be ready to move to the next step (How to run) 33 | 34 | ## How to run 35 | 36 | 1. Clone the repo 37 | 38 | ``` 39 | git clone https://github.com/AhmedAshrafAZ/cms-downloader.git 40 | ``` 41 | 42 | 2. Navigate to the script directory 43 | 44 | ``` 45 | cd cms-downloader 46 | ``` 47 | 48 | 3. Install the Node.js and Python dependencies 49 | 50 | ``` 51 | npm install && pip3 install -r requirements.txt 52 | ``` 53 | 54 | 4. Add your username (without @student.guc.edu.eg) to the ".env" file 55 | 56 | ``` 57 | echo "GUC_SK_USERNAME=your_username" > .env 58 | ``` 59 | 60 | 5. Add your password to the ".env" file 61 | 62 | ``` 63 | echo "GUC_SK_PASSWORD=your_password" >> .env 64 | ``` 65 | 66 | 6. Add your university's CMS link (https://cms.guc.edu.eg for GUC and https://cms.giu-uni.de for GIU students) to the ".env" file 67 | 68 | ``` 69 | echo "HOST_URL=your_url" >> .env 70 | ``` 71 | 72 | 7. Run the script and let the magic begin 🎩🎩🔥🔥 73 | > If you want to convert the downloaded videos to mp4 format, use the `--convert` flag. Make sure you have [ffmpeg](https://www.ffmpeg.org/) installed on your machine. 74 | ``` 75 | node index.js && python3 vodDownloader.py --convert 76 | ``` 77 | > Otherwise, just run the script 78 | ``` 79 | node index.js && python3 vodDownloader.py 80 | ``` 81 | 82 | ## Unveiling the magic behind the script 🤓 83 | 84 | This script is mainly based on web scraping 🕷🕸 and DOM manipulation. 85 | 86 | - Get the login credentials from the '.env' file and test authentication 87 | - Fetch the home page to get all the courses that the student is subscribed to. 
88 | - Navigate to each course 89 | - Select (using querySelector) all the anchor tags that contain a download attribute 90 | - Set the value of each download attribute to the name of its card 91 | - Select all the unrated content (this is my flag for the non-downloaded content) 92 | - Download the unrated content after checking that the week folder and course folder exist 93 | - Rate each piece of content after downloading it. 94 | - And repeat until everything is DONE 🔥🎩💪🏻 95 | 96 | ## Contribution 👀 97 | 98 | You are very welcome to contribute to this repo. Just create your Pull Request; I will review it and your updates will be merged ASAP insha'Allah. 💪🏻💪🏻 99 | -------------------------------------------------------------------------------- /index.js: -------------------------------------------------------------------------------- 1 | 'use strict'; 2 | const puppeteer = require('puppeteer'); 3 | const inquirer = require('inquirer'); 4 | const fs = require('fs'); 5 | const httpntlm = require('httpntlm'); 6 | require('dotenv').config(); 7 | 8 | const machine_type = process.platform; 9 | const fileSeparator = () => { 10 | return machine_type === 'win32' ? '\\' : '/'; 11 | }; 12 | 13 | const pupp_options = { 14 | headless: true, executablePath: process.env.CHROMIUM_EXECUTABLE_PATH, // use the system Chromium documented in the README if set; undefined falls back to the bundled browser 15 | }; 16 | 17 | const userAuthData = { 18 | username: process.env.GUC_SK_USERNAME, 19 | password: process.env.GUC_SK_PASSWORD, 20 | }; 21 | 22 | const HOST = process.env.HOST_URL; 23 | 24 | const authenticateUser = () => { 25 | return new Promise((resolve, reject) => { 26 | httpntlm.get( 27 | { 28 | ...userAuthData, 29 | url: `${HOST}/apps/student/HomePageStn.aspx`, 30 | rejectUnauthorized: false, 31 | }, 32 | (err, res) => { 33 | console.log(res.statusCode === 200 ? '[+] You are authorized\n============' : '[!] You are not authorized. 
Please review your login credentials.'); 34 | resolve(res.statusCode === 200); 35 | } 36 | ); 37 | }); 38 | }; 39 | 40 | const navigateTo = async (page, target_link) => { 41 | await page.goto(target_link, { 42 | waitUntil: 'networkidle2', 43 | timeout: 500000, 44 | }); 45 | }; 46 | 47 | const getSeasons = async (page) => { 48 | return await page.evaluate(function () { 49 | const seasons = []; 50 | document.querySelectorAll('div[class="menu-header-title"]').forEach((el) => { 51 | const title = el.innerHTML.trim(); 52 | seasons.push({ 53 | name: title.substring(title.indexOf('Title') + 6).trim(), 54 | sid: parseInt(title.substring(title.indexOf(':') + 1, title.indexOf(',')).trim()), 55 | courses: [], 56 | }); 57 | }); 58 | seasons.forEach((_, index) => { 59 | const seasonCourses = document.querySelectorAll(`table[id="ContentPlaceHolderright_ContentPlaceHoldercontent_r1_GridView1_${index}"]`)[0].children[0].children; 60 | for (let i = 1; i < seasonCourses.length; i++) { 61 | const courseName = seasonCourses[i].children[1].innerText.trim().replaceAll('|', ''); 62 | const is_active = (seasonCourses[i].children[2].innerText.trim() === 'Active'); 63 | if(!is_active){ 64 | continue; 65 | } 66 | seasons[index].courses.push({ 67 | name: courseName.substring(0, courseName.lastIndexOf('(')).trim().replace('(', '[').replace(')', ']'), 68 | id: parseInt(courseName.substring(courseName.lastIndexOf('(') + 1, courseName.lastIndexOf(')')).trim()), 69 | }); 70 | } 71 | }); 72 | return seasons; 73 | }); 74 | }; 75 | 76 | const resolveContentName = async (page) => { 77 | await page.evaluate(() => { 78 | document.querySelectorAll('a[download]').forEach((el) => { 79 | const fileName = el.parentElement.parentElement.parentElement.children[0].children[0].innerHTML; 80 | const fileExtension = el.href.split('.').pop(); // take this anchor's own extension, not an index derived from the first anchor's href 81 | const fullName = `${fileName}.${fileExtension}`; 82 | el.download = fullName; 83 | }); 84 | }); 85 | }; 
86 | 87 | const getContent = async (page, courses, seasonId) => { 88 | const content = []; 89 | const getWeeks = async (page) => { 90 | return await page.evaluate(() => { 91 | const weeks = []; 92 | 93 | document.querySelectorAll('div.weeksdata').forEach((el) => { 94 | const weekAnnouncement = el.children[1].children[0].innerText.trim(); 95 | const weekDescription = el.children[1].children[1].innerText.trim(); 96 | const tempWeekContent = el.children[1].children[2].children; 97 | const weekContent = []; 98 | 99 | for (let i = 1; i < tempWeekContent.length; i++) { 100 | const orgName = tempWeekContent[i].children[0].children[0].innerText.trim(); 101 | const name = tempWeekContent[i].children[0].children[2].children[0].children[0].download.trim().replace('/', '').replace(':', '').toLowerCase(); 102 | const id = tempWeekContent[i].children[0].children[2].children[0].children[1].id; 103 | const watchId = tempWeekContent[i].children[0].children[2].children[0].children[1].attributes['data-contentid'].value; 104 | const url = orgName.includes('(VoD)') 105 | ? 
`https://playback.dacast.com/content/info?contentId=${id}&provider=dacast` 106 | : tempWeekContent[i].children[0].children[2].querySelector('a#download').href.trim(); 107 | const watched = tempWeekContent[i].children[0].children[3].querySelector('i.fa-eye-slash').style.display == 'none'; 108 | if (!orgName.includes('https://')) 109 | weekContent.push({ 110 | name, 111 | url, 112 | watched, 113 | watchId, 114 | }); 115 | } 116 | 117 | weeks.push({ 118 | name: el.querySelector('h2.text-big').innerText, 119 | announcement: weekAnnouncement, 120 | description: weekDescription, 121 | content: weekContent, 122 | }); 123 | }); 124 | return weeks; 125 | }); 126 | }; 127 | 128 | const getCourseAnnouncements = async (page) => { 129 | return await page.evaluate(() => document.querySelector('div[id="ContentPlaceHolderright_ContentPlaceHoldercontent_desc"]').innerText.trim()); 130 | }; 131 | 132 | for (let i = 0; i < courses.length; i++) { 133 | const courseUrl = `${HOST}/apps/student/CourseViewStn.aspx?id=${courses[i].id}&sid=${seasonId}`; 134 | await navigateTo(page, courseUrl); 135 | await resolveContentName(page); 136 | content.push({ 137 | name: courses[i].name, 138 | url: courseUrl, 139 | weeks: await getWeeks(page), 140 | announcements: await getCourseAnnouncements(page), 141 | }); 142 | } 143 | return content; 144 | }; 145 | 146 | const getAnswers = async (questions, checkbox, message) => { 147 | const answers = await inquirer.prompt([ 148 | { 149 | type: checkbox ? 'checkbox' : 'list', 150 | message: message, 151 | name: 'userAnswers', 152 | choices: questions, 153 | validate(answer) { 154 | if (answer.length < 1) { 155 | return 'You must choose at least one course.'; 156 | } 157 | return true; 158 | }, 159 | loop: false, 160 | }, 161 | ]); 162 | return checkbox ? 
answers.userAnswers.map((a) => questions.findIndex((q) => q.name === a)) : questions.findIndex((q) => (q.name || q) === answers.userAnswers); 163 | }; 164 | 165 | const watchContent = async (page, watchId) => { 166 | return await page.evaluate((watchId) => { 167 | let el = document.querySelector(`input[data-contentid="${watchId}"]`); 168 | if (el !== null) el.click(); 169 | el = document.querySelector('button[class="close closeclose"]'); 170 | if (el !== null) el.click(); 171 | }, watchId); 172 | }; 173 | const downloadContent = async (page, season, courseName, weeks) => { 174 | courseName = courseName.replace(':', ''); 175 | const download = (url, file_path, file_name) => { 176 | if (!fs.existsSync(file_path)) 177 | fs.mkdirSync(file_path, { recursive: true }, (err) => { 178 | console.log('There is an error in making directories, please report it. Error is: ', err.message); 179 | }); 180 | 181 | console.log(`[-] Downloading file (${file_name})...`); 182 | 183 | return new Promise((resolve, reject) => { 184 | if (url.includes('https://playback.dacast.com')) { 185 | const line = `${file_path}${fileSeparator()}${file_name}==${url}\n`; 186 | fs.appendFile('VODs.txt', line, (err) => { 187 | if (err) console.log('There is an error in file writing, please report it. Error is: ', err.message); 188 | }); 189 | console.log(`[+] The VOD (${file_name}) will be downloaded later`); 190 | console.log('------------'); 191 | resolve(); 192 | } else { 193 | httpntlm.get( 194 | { 195 | ...userAuthData, 196 | url: url, 197 | rejectUnauthorized: false, 198 | binary: true, 199 | }, 200 | (err, res) => { 201 | // Request failed 202 | if (err) { 203 | console.log('There is an error in the request, please report it. 
Error is: ', err.message); 204 | return reject('Request Error'); 205 | } 206 | 207 | // Request success, write to the file 208 | fs.writeFile(`${file_path}${fileSeparator()}${file_name}`, res.body, (err) => { 209 | if (err) { 210 | console.log('There is an error in file writing, please report it. Error is: ', err.message); 211 | return reject('FileWriting Error'); 212 | } 213 | console.log(`[+] Download completed. "${file_name}" is saved successfully in ${file_path}`); 214 | console.log('------------'); 215 | resolve(); 216 | }); 217 | } 218 | ); 219 | } 220 | }); 221 | }; 222 | 223 | const rootPath = `.${fileSeparator()}cms_downloads${fileSeparator()}${season}${fileSeparator()}${courseName}`; 224 | 225 | for (let i = 0; i < weeks.length; i++) { 226 | const weekName = weeks[i].name.replace(':', '').toLowerCase(); 227 | const weekAnnouncement = weeks[i].announcement; 228 | const weekDescription = weeks[i].description; 229 | const weekContent = weeks[i].content; 230 | for (let j = 0; j < weekContent.length; j++) { 231 | const fileUrl = weekContent[j].url; 232 | const fileName = weekContent[j].name.replace(':', '').toLowerCase(); 233 | await download(fileUrl, `${rootPath}${fileSeparator()}${weekName}`, fileName); 234 | 235 | // Watch the downloaded content 236 | await watchContent(page, weekContent[j].watchId); 237 | } 238 | } 239 | }; 240 | 241 | (async () => { 242 | console.log('==> Session Started <=='); 243 | const browser = await puppeteer.launch(pupp_options); 244 | const page = await browser.newPage(); 245 | 246 | // 00- Authenticate User 247 | console.log('[-] Authenticating...'); 248 | let user_auth = await authenticateUser(); 249 | if (!user_auth) { 250 | await browser.close(); 251 | return; 252 | } 253 | 254 | await page.authenticate(userAuthData); 255 | await navigateTo(page, `${HOST}/apps/student/ViewAllCourseStn`); 256 | 257 | const seasons = await getSeasons(page); 258 | const selectedSeason = seasons[await getAnswers(seasons, false, 'Please select a season', 
['sid'])]; 259 | const downloadTypes = ['All content', 'Unwatched content', 'Select courses']; 260 | const downloadType = await getAnswers(downloadTypes, false, 'Please select the download type'); 261 | let selectedCourses = []; 262 | let coursesContent = []; 263 | switch (downloadType) { 264 | case 0: 265 | selectedCourses = selectedSeason.courses; 266 | coursesContent = await getContent(page, selectedCourses, selectedSeason.sid); 267 | break; 268 | case 1: 269 | selectedCourses = selectedSeason.courses; 270 | coursesContent = await getContent(page, selectedCourses, selectedSeason.sid); 271 | coursesContent = coursesContent.map((course) => { 272 | return { 273 | ...course, 274 | weeks: course.weeks.map((week) => { 275 | return { 276 | ...week, 277 | content: week.content.filter((content) => content.watched === false), 278 | }; 279 | }), 280 | }; 281 | }); 282 | break; 283 | case 2: 284 | selectedCourses = (await getAnswers(selectedSeason.courses, true, 'Please select the courses you want', ['id'])).map((c) => selectedSeason.courses[c]); 285 | coursesContent = await getContent(page, selectedCourses, selectedSeason.sid); 286 | break; 287 | default: 288 | break; 289 | } 290 | 291 | for (let i = 0; i < coursesContent.length; i++) { 292 | await navigateTo(page, coursesContent[i].url); 293 | await downloadContent(page, selectedSeason.name, coursesContent[i].name, coursesContent[i].weeks); 294 | } 295 | 296 | // 6- End the session 297 | await browser.close(); 298 | console.log('==> Session Ended <=='); 299 | })(); 300 | -------------------------------------------------------------------------------- /package.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "cms-downloader", 3 | "version": "1.0.0", 4 | "description": "This is a script that helps in downloading the content posted on GUC-CMS platform using DOM manipulation", 5 | "main": "index.js", 6 | "scripts": { 7 | "prestart": "git pull origin main", 8 | "start": 
"node index.js", 9 | "test": "echo \"Error: no test specified\" && exit 1" 10 | }, 11 | "repository": { 12 | "type": "git", 13 | "url": "git+https://github.com/AhmedAshrafAZ/cms-downloader.git" 14 | }, 15 | "author": "", 16 | "license": "MIT", 17 | "bugs": { 18 | "url": "https://github.com/AhmedAshrafAZ/cms-downloader/issues" 19 | }, 20 | "homepage": "https://github.com/AhmedAshrafAZ/cms-downloader#readme", 21 | "dependencies": { 22 | "dotenv": "^8.2.0", 23 | "httpntlm": "^1.7.6", 24 | "inquirer": "^8.1.2", 25 | "puppeteer": "^5.4.0" 26 | } 27 | } 28 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | termcolor==1.1.0 2 | pycurl==7.44.1 3 | simplejson==3.17.5 4 | argparse==1.4.0 -------------------------------------------------------------------------------- /vodDownloader.py: -------------------------------------------------------------------------------- 1 | import os 2 | import sys 3 | from io import BytesIO 4 | import simplejson as json 5 | import pycurl 6 | from termcolor import colored 7 | import argparse 8 | 9 | def getIndexUrl(base_url): 10 | c = pycurl.Curl() 11 | b = BytesIO() 12 | c.setopt(c.URL, base_url) 13 | c.setopt(pycurl.SSL_VERIFYPEER, 0) 14 | c.setopt(c.WRITEFUNCTION, b.write) 15 | c.setopt(c.WRITEDATA, b) 16 | c.perform() 17 | vodId = json.loads(b.getvalue())['contentInfo']['contentId'] 18 | base_url = 'https://playback.dacast.com/content/access?contentId=' + vodId + '&provider=universe' 19 | finalUrl = "" 20 | try: 21 | b = BytesIO() 22 | c.setopt(c.URL, base_url) 23 | c.setopt(pycurl.SSL_VERIFYPEER, 0) 24 | c.setopt(c.WRITEFUNCTION, b.write) 25 | c.setopt(c.WRITEDATA, b) 26 | c.perform() 27 | tempUrl = json.loads(b.getvalue())['hls'] 28 | indexUrl = tempUrl.replace('manifest.m3u8', 'index_0_av.m3u8') 29 | b = BytesIO() 30 | c.setopt(c.URL, indexUrl) 31 | c.setopt(pycurl.SSL_VERIFYPEER, 0) 32 | 
c.setopt(c.WRITEFUNCTION, b.write) 33 | c.setopt(c.WRITEDATA, b) 34 | c.perform() 35 | data = b.getvalue().decode("utf-8").split('\n') 36 | partialUrl = "" 37 | for i in range(len(data)): 38 | temp = data[i].replace('\'', '').replace('"', '').strip() 39 | if(temp.startswith("stream-audio")): 40 | partialUrl = temp 41 | break 42 | dacast = tempUrl.split('stream.ismd')[0] + "stream.ismd/" 43 | finalUrl = dacast + partialUrl 44 | except: 45 | sys.stdout.write("\033[F") 46 | sys.stdout.write("\033[K") 47 | print(colored("[!] Error Downloading: " + fileName, 'red')) 48 | failedVods.append(line.strip()) 49 | c.close() 50 | b.close() 51 | return finalUrl 52 | 53 | def fetchSegments(base_url): 54 | # Fetch the segment index and save it to the m3u8 file 55 | print(colored("[.] Fetching segments...", 'red')) 56 | index_file = open("index_0_av.m3u8", 'wb') 57 | # Build request 58 | curl = pycurl.Curl() 59 | curl.setopt(curl.URL, base_url) 60 | curl.setopt(pycurl.SSL_VERIFYPEER, 0) 61 | curl.setopt(curl.WRITEDATA, index_file) 62 | curl.perform() 63 | # Close everything and print confirmation 64 | curl.close() 65 | index_file.close() 66 | sys.stdout.write("\033[F") 67 | sys.stdout.write("\033[K") 68 | print(colored("[+] Completed fetching segments", 'green')) 69 | 70 | def filterSegments(): 71 | # To remove the comments from the m3u8 file and save the segments' links to segments.txt 72 | index_file = open("index_0_av.m3u8", 'r') 73 | segments_file = open(fileName + "_segments.txt", 'w') 74 | number_of_lines = 0 75 | for line in index_file: 76 | if '#' not in line and line.strip() != '': 77 | segments_file.write(line) 78 | number_of_lines = number_of_lines + 1 79 | index_file.close() 80 | segments_file.close() 81 | os.remove("index_0_av.m3u8") 82 | return number_of_lines 83 | 84 | def downloadSegments(): 85 | number_of_lines = filterSegments() 86 | # Build progress bar 87 | progress_bar = "[" + " " * 100 + "]" 88 | progress_bar_counter = 1 89 | print(colored("Downloading ==> ", 'red') + 
colored("".join(progress_bar), 'green')) 90 | segments_file = open(fileName + "_segments.txt", 'r') 91 | video = open(fileName + ".ts", 'wb') 92 | 93 | # Start fetching 94 | for url in segments_file: 95 | # Build request 96 | curl = pycurl.Curl() 97 | curl.setopt(curl.WRITEDATA, video) 98 | curl.setopt(curl.URL, dacastUrl + url.strip()) 99 | downloaded = True 100 | try: 101 | curl.perform() 102 | # Update progress bar 103 | loaded = int((progress_bar_counter / number_of_lines) * 100) 104 | progress_bar = "[" + "=" * loaded + " " * (100 - loaded) + "]" 105 | progress_bar_counter += 1 106 | sys.stdout.write("\033[F") 107 | sys.stdout.write("\033[K") 108 | print(colored("Downloading " + str(loaded) + "% ==> ", 'red') + colored("".join(progress_bar), 'green')) 109 | # Close to start new session due to target server limits 110 | curl.close() 111 | except: 112 | sys.stdout.write("\033[F") 113 | sys.stdout.write("\033[K") 114 | sys.stdout.write("\033[F") 115 | sys.stdout.write("\033[K") 116 | print(colored("[!] 
Error Downloading: " + fileName, 'red')) 117 | failedVods.append(line.strip()) 118 | downloaded = False 119 | break 120 | if downloaded: 121 | sys.stdout.write("\033[F") 122 | sys.stdout.write("\033[K") 123 | print(colored("[+] Downloaded", 'green')) 124 | 125 | # Convert & Close everything 126 | segments_file.close() 127 | os.remove(fileName + "_segments.txt") 128 | 129 | if(convertVods): 130 | print(colored("[-] Converting", 'red')) 131 | os.system('ffmpeg -y -i "' + fileName + '.ts" "' + fileName + '" -loglevel quiet') 132 | os.remove(fileName + ".ts") 133 | sys.stdout.write("\033[F") 134 | sys.stdout.write("\033[K") 135 | 136 | sys.stdout.write("\033[F") 137 | sys.stdout.write("\033[K") 138 | sys.stdout.write("\033[F") 139 | sys.stdout.write("\033[K") 140 | print(colored("[+] Done downloading " + fileName, 'green')) 141 | 142 | # Starting the everything 143 | parser = argparse.ArgumentParser() 144 | parser.add_argument("-c", "--convert", help="Convert to mp4", action="store_true") 145 | convertVods = parser.parse_args().convert 146 | 147 | try: 148 | vods = open("VODs.txt", 'r') 149 | except: 150 | print(colored("[+] No VODs to download", 'green')) 151 | sys.exit() 152 | failedVods = [] 153 | 154 | for line in vods: 155 | fileName = line.split('==')[0].strip() 156 | base_url = getIndexUrl(line.split('==')[1].strip()) 157 | dacastUrl = base_url.split('stream.ismd')[0] + "stream.ismd/" 158 | if(base_url): 159 | fetchSegments(base_url) 160 | downloadSegments() 161 | vods.close() 162 | os.remove("VODs.txt") 163 | 164 | if len(failedVods) > 0: 165 | with open('VODs.txt', 'w') as f: 166 | for item in failedVods: 167 | f.write("%s\n" % item) --------------------------------------------------------------------------------
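A footnote on `vodDownloader.py`: the m3u8-filtering step in `filterSegments` (keeping only the segment URIs while dropping `#` directive lines and blanks) is easy to isolate as a pure function and sanity-check without any network access or temp files. This is an illustrative sketch, not code from the repo; the `filter_segments` helper and the sample playlist are made up for the example:

```python
def filter_segments(m3u8_text):
    """Keep only the segment URIs of an HLS index: drop lines containing
    '#' (playlist directives) and blank lines, mirroring the logic of
    filterSegments in vodDownloader.py."""
    return [ln.strip() for ln in m3u8_text.splitlines()
            if '#' not in ln and ln.strip() != '']

# Illustrative playlist in the general shape of an index_0_av.m3u8 file.
sample = """#EXTM3U
#EXT-X-VERSION:3
#EXTINF:6.0,
stream-audio_1=64000-video=1200000-1.ts

#EXTINF:6.0,
stream-audio_1=64000-video=1200000-2.ts
#EXT-X-ENDLIST
"""
segments = filter_segments(sample)
```

Only the two `.ts` URIs survive the filter; the directive lines and the stray blank line are discarded, which is exactly why the blank-line check matters when counting segments for the progress bar.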