├── MKVToolNix-Logo.png
├── LICENSE
├── Google-Drive-Logo.svg
├── README.md
└── MKVToolNix-in-Google-Colab.ipynb
/MKVToolNix-Logo.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dropcreations/MKVToolNix-in-Google-Colab/HEAD/MKVToolNix-Logo.png
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2022 dropcreations
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/Google-Drive-Logo.svg:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # __MKVToolNix-in-Google-Colab__
2 | Install the latest __MKVToolNix__ into the Google Colab runtime and use __mkvmerge__, __mkvextract__ and __mkvpropedit__.
3 |
4 |
5 |
6 | - Click the ***"Open in Colab"*** button to open this notebook in Google Colab.
7 |
8 |
9 |
10 |
11 |
12 |
13 | ## __mkvmerge__
14 |
15 | * You can __add__ tracks while running the cell.
16 | * Don't use `#` or `"` in `mkvTitle`.
17 | * Add an XML file path for `globalTags` and `segmentInfo`.
18 | * Choose a `splitMode` and add a `splitArgument` __according to__ the chosen `splitMode`.
19 | * Chapters are accepted as __both__ `XML` and `OGM (txt)` files.
20 | * When adding a language, use __language codes__.
21 | * If you don't want to fill a field, leave it blank.
22 | * If you don't know the relevant `mime-type`, leave `mimeType` blank as well.
23 | * When adding the track `default` and `forced` flags, leaving the input blank sets the value to __"No"__.
24 | * For all `[y/n]` inputs, enter __"y"__ for __"yes"__ and __"n"__ for __"no"__.
25 | * `webmCompliantFile`: create a WebM-compliant file instead of an MKV output.
26 | * While adding tracks, leave `inputFile` blank when you are done to continue.
27 | * When the saving options are asked, enter the folder you want to save the output file to at `Enter folder path: ` and the output file name at `Enter a name to save: `.
28 | * If you leave both `Enter folder path: ` and `Enter a name to save: ` blank, the output folder will be `/content/` and the file will be named after the string given in `mkvTitle`. If you haven't set `mkvTitle`, the file will be named after the first input file. If a file already exists with the generated name, `_new` is appended to it (see the example command below).
29 |
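The cell assembles your answers into a single `mkvmerge` command line and runs it. A hypothetical example of such a command (the file names, track IDs and values below are made up for illustration) could look like this:

```
mkvmerge --output "/content/My Movie.mkv" \
  --track-name 0:"Main video" --language 1:eng --default-track-flag 1:yes "input.mkv" \
  --chapters "Chapters.xml" --attach-file "cover.jpg" --title "My Movie"
```
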
30 | ## __mkvextract__
31 |
32 | * You can extract __all tracks, attachments, chapters, tags, cues, cue sheet, timecodes__ from `MKV` and `WebM` files.
33 | * You can also extract a __single track__ by selecting the `extractMode: Single Track` option.
34 | * While extracting chapters, answer the `chapters extract type?` prompt with `xml` or `ogm` to extract in that format.
35 | * `inputFile` can be a single file or a folder containing `MKV` and `WebM` files; the folder doesn't need to contain only MKV and WebM files (see the example commands below).
36 |
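Extracted files are written to a folder named after the source file, and track file names and extensions are picked from each track's codec ID. Hypothetical examples of the commands the cell generates (paths and IDs are illustrative):

```
mkvextract "Movie.mkv" tracks 0:"Movie/Track_0_[video].h264" 1:"Movie/Track_1_[audio].aac"
mkvextract "Movie.mkv" chapters --simple "Movie/Chapters.txt"
mkvextract "Movie.mkv" attachments 1:"Movie/cover.jpg"
```
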
37 | ## __mkvpropedit__
38 |
39 | * You can edit __segment info, track info, chapters, attachments, tags__ in an MKV file.
40 | * If you want to __delete statistics tags__ from tracks, select `deleteTrackStatisticsTags` (see the example below).
41 |
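The cell edits the file in place by building a single `mkvpropedit` call; note that `mkvpropedit` track selectors are 1-based, so the track with mkvmerge ID 1 is edited as `track:2`. A hypothetical example (the file name and values are made up):

```
mkvpropedit "Movie.mkv" --edit track:2 --set flag-default=1 --delete-track-statistics-tags
```
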
42 | ## __Matroska/WebM Tags__
43 |
44 | * Enter the __MKV or WebM file's path__ in `mkvFile`.
45 | * If you want to delete the `Title`, leave it blank.
46 | * You can add __tags__ using a __text document__, but its content must be formatted as shown below.
47 | ```
48 | Tag name: Tag value
49 | Tag name: Tag value, Tag value
50 | ```
51 | * This doesn't add the values as __multiple tags__; all values are joined into a single tag.
52 | * Use __official tag names__ to add __official Matroska tags__; see [here](https://www.matroska.org/technical/tagging.html). The generated tags XML looks like the example below.
53 |
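Each `Tag name: Tag value` line becomes one `Simple` element in a tags XML file, which the cell then applies with `mkvpropedit --tags all:`. A hypothetical generated file (pretty-printed here; the tag name and value are illustrative) looks like this:

```
<Tags>
  <Tag>
    <Simple>
      <Name>DIRECTOR</Name>
      <String>Jane Doe</String>
    </Simple>
  </Tag>
</Tags>
```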
--------------------------------------------------------------------------------
/MKVToolNix-in-Google-Colab.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "nbformat": 4,
3 | "nbformat_minor": 0,
4 | "metadata": {
5 | "colab": {
6 | "provenance": [],
7 | "authorship_tag": "ABX9TyMVtOfxNUVj0oA3wCfHLjW/",
8 | "include_colab_link": true
9 | },
10 | "kernelspec": {
11 | "name": "python3",
12 | "display_name": "Python 3"
13 | },
14 | "language_info": {
15 | "name": "python"
16 | }
17 | },
18 | "cells": [
19 | {
20 | "cell_type": "markdown",
21 | "metadata": {
22 | "id": "view-in-github",
23 | "colab_type": "text"
24 | },
25 | "source": [
26 | " "
27 | ]
28 | },
29 | {
30 | "cell_type": "markdown",
31 | "source": [
32 | "# __Mount Google Drive__"
33 | ],
34 | "metadata": {
35 | "id": "tpPY4m7on3Kt"
36 | }
37 | },
38 | {
39 | "cell_type": "code",
40 | "execution_count": null,
41 | "metadata": {
42 | "cellView": "form",
43 | "id": "iSp5IM5Qnpax"
44 | },
45 | "outputs": [],
46 | "source": [
47 | "#@markdown \n",
48 | "#@markdown Mount Google Drive \n",
49 | "\n",
50 | "from google.colab import drive\n",
51 | "\n",
52 | "Mode = \"Mount\" #@param [\"Mount\", \"Unmount\"]\n",
53 | "\n",
54 | "drive.mount._DEBUG = False\n",
55 | "\n",
56 | "if Mode == \"Mount\":\n",
57 | " drive.mount('/content/drive', force_remount=True)\n",
58 | "\n",
59 | "elif Mode == \"Unmount\":\n",
60 | " try:\n",
61 | " drive.flush_and_unmount()\n",
62 | " except ValueError:\n",
63 | " pass\n",
64 | " get_ipython().system_raw(\"rm -rf /root/.config/Google/DriveFS\")"
65 | ]
66 | },
67 | {
68 | "cell_type": "markdown",
69 | "source": [
70 | "# __MKVToolNix__"
71 | ],
72 | "metadata": {
73 | "id": "B6LAJCgCoAJ8"
74 | }
75 | },
76 | {
77 | "cell_type": "markdown",
78 | "source": [
79 |         "* Run the cell below to **install** MKVToolNix.\n",
80 |         "* You can install any version by entering it in `Version`.\n",
81 |         "* For example, to install `version 70.0.0`, just type `70` in `Version`."
82 | ],
83 | "metadata": {
84 | "id": "u71dEFKaoDmD"
85 | }
86 | },
87 | {
88 | "cell_type": "code",
89 | "source": [
90 | "#@markdown \n",
91 | "#@markdown Install MKVToolNix \n",
92 | "\n",
93 | "import json\n",
94 | "import requests\n",
95 | "from IPython.display import clear_output\n",
96 | "\n",
97 | "Version = \"latest-release\" #@param [\"latest-release\"] {allow-input: true}\n",
98 | "\n",
99 | "response = requests.get(\"https://mkvtoolnix.download/releases.json\")\n",
100 | "release_info = json.loads(response.text)\n",
101 | "latest_version = release_info[\"mkvtoolnix-releases\"][\"latest-source\"].get(\"version\")\n",
102 | "\n",
103 | "releases = []\n",
104 | "\n",
105 | "for i in range(len(release_info[\"mkvtoolnix-releases\"].get(\"releases\"))):\n",
106 | " ver = release_info[\"mkvtoolnix-releases\"][\"releases\"][i].get(\"version\")\n",
107 | " releases.append(ver)\n",
108 | "\n",
109 | "if Version == \"latest-release\":\n",
110 | " download_link = f'https://mkvtoolnix.download/appimage/MKVToolNix_GUI-{latest_version}-x86_64.AppImage'\n",
111 | "else:\n",
112 | " for match in sorted(releases):\n",
113 | " if Version in match:\n",
114 | " Version = match\n",
115 | " download_link = f'https://mkvtoolnix.download/appimage/MKVToolNix_GUI-{Version}-x86_64.AppImage'\n",
116 | "\n",
117 | "!rm -f '/usr/local/bin/mkvpropedit'\n",
118 | "!rm -f '/usr/local/bin/mkvmerge'\n",
119 | "!rm -f '/usr/local/bin/mkvextract'\n",
120 | "!rm -f '/usr/local/bin/mkvinfo'\n",
121 | "\n",
122 | "!sudo curl -L {download_link} -o /usr/local/bin/MKVToolNix_GUI.AppImage\n",
123 | "!sudo chmod u+rx /usr/local/bin/MKVToolNix_GUI.AppImage\n",
124 | "!sudo ln -s /usr/local/bin/MKVToolNix_GUI.AppImage /usr/local/bin/mkvpropedit\n",
125 | "!sudo chmod a+rx /usr/local/bin/mkvpropedit\n",
126 | "!sudo ln -s /usr/local/bin/MKVToolNix_GUI.AppImage /usr/local/bin/mkvmerge\n",
127 | "!sudo chmod a+rx /usr/local/bin/mkvmerge\n",
128 | "!sudo ln -s /usr/local/bin/MKVToolNix_GUI.AppImage /usr/local/bin/mkvextract\n",
129 | "!sudo chmod a+rx /usr/local/bin/mkvextract\n",
130 | "!sudo ln -s /usr/local/bin/MKVToolNix_GUI.AppImage /usr/local/bin/mkvinfo\n",
131 | "!sudo chmod a+rx /usr/local/bin/mkvinfo\n",
132 | "\n",
133 | "clear_output()\n",
134 | "!mkvmerge --version"
135 | ],
136 | "metadata": {
137 | "cellView": "form",
138 | "id": "zpswo4hzoGWL"
139 | },
140 | "execution_count": null,
141 | "outputs": []
142 | },
143 | {
144 | "cell_type": "markdown",
145 | "source": [
146 | "### __mkvmerge__"
147 | ],
148 | "metadata": {
149 | "id": "oF7MVTCpoP3r"
150 | }
151 | },
152 | {
153 | "cell_type": "markdown",
154 | "source": [
155 | "* You can __add__ tracks while running the cell.\n",
156 |         "* Don't use `#` or `\"` in `mkvTitle`.\n",
157 |         "* Add an XML file path for `globalTags` and `segmentInfo`.\n",
158 |         "* Choose a `splitMode` and add a `splitArgument` __according to__ the chosen `splitMode`.\n",
159 |         "* Chapters are accepted as __both__ `XML` and `OGM (txt)` files.\n",
160 |         "* When adding a language, use __language codes__.\n",
161 |         "* If you don't want to fill a field, leave it blank.\n",
162 |         "* If you don't know the relevant `mime-type`, leave `mimeType` blank as well.\n",
163 |         "* When adding the track `default` and `forced` flags, leaving the input blank sets the value to __\"No\"__.\n",
164 |         "* For all `[y/n]` inputs, enter __\"y\"__ for __\"yes\"__ and __\"n\"__ for __\"no\"__.\n",
165 |         "* `webmCompliantFile`: create a WebM-compliant file instead of an MKV output.\n",
166 |         "* While adding tracks, leave `inputFile` blank when you are done to continue.\n",
167 |         "* When the saving options are asked, enter the folder you want to save the output file to at `Enter folder path: ` and the output file name at `Enter a name to save: `.\n",
168 |         "* If you leave both `Enter folder path: ` and `Enter a name to save: ` blank, the output folder will be `/content/` and the file will be named after the string given in `mkvTitle`. If you haven't set `mkvTitle`, the file will be named after the first input file. If a file already exists with the generated name, `_new` is appended to it."
169 | ],
170 | "metadata": {
171 | "id": "hmAiKh-QoS8D"
172 | }
173 | },
174 | {
175 | "cell_type": "code",
176 | "source": [
177 | "#@markdown General \n",
178 | "mkvTitle = \"\" #@param {type:\"string\"}\n",
179 | "globalTags = \"\" #@param {type:\"string\"}\n",
180 | "segmentInfo = \"\" #@param {type:\"string\"}\n",
181 | "#@markdown Split \n",
182 | "splitMode = \"Do not split\" #@param [\"Do not split\", \"After output size\", \"After output duration\", \"After specific timestamps\", \"by parts based on timestamps\", \"by parts based on frame/field numbers\", \"After frame/field numbers\", \"Before chapters\"]\n",
183 | "splitArgument = \"\" #@param {type:\"string\"}\n",
184 | "maxSplits = 0 #@param {type:\"integer\"}\n",
185 | "linkFiles = False #@param {type:\"boolean\"}\n",
186 | "#@markdown Chapters \n",
187 | "chaptersFile = \"\" #@param {type:\"string\"}\n",
188 | "chaptersLang = \"\" #@param {type:\"string\"}\n",
189 | "#@markdown Attachments \n",
190 | "attachment = \"\" #@param {type:\"string\"}\n",
191 | "name = \"\" #@param {type:\"string\"}\n",
192 | "mimeType = \"\" #@param {type:\"string\"}\n",
193 | "description = \"\" #@param {type:\"string\"}\n",
194 | "#@markdown Miscellaneous \n",
195 | "webmCompliantFile = False #@param {type:\"boolean\"}\n",
196 | "disableTrackStatisticsTags = False #@param {type:\"boolean\"}\n",
197 | "\n",
198 | "import os\n",
199 | "import json\n",
200 | "import subprocess\n",
201 | "from prettytable import PrettyTable\n",
202 | "\n",
203 | "class mkvmerge(object):\n",
204 | " def __init__(self):\n",
205 | " self.__input_files = []\n",
206 | " self.__input_cmds = []\n",
207 | " self.__input_codes = \"\"\n",
208 | "\n",
209 | " def __get_json(self):\n",
210 | " json_cmd = [\n",
211 | " \"mkvmerge\",\n",
212 | " \"--identify\",\n",
213 | " \"--identification-format\",\n",
214 | " \"json\",\n",
215 | " os.path.abspath(self.input_file)\n",
216 | " ]\n",
217 | " json_data = subprocess.check_output(json_cmd, stderr=subprocess.DEVNULL)\n",
218 | " json_data = json.loads(json_data)\n",
219 | "\n",
220 | " return json_data\n",
221 | " \n",
222 | " def __get_info(self, i):\n",
223 | " json_data = self.__get_json()\n",
224 | " self.id = json_data[\"tracks\"][i].get(\"id\")\n",
225 | " self.language = json_data[\"tracks\"][i][\"properties\"].get(\"language\")\n",
226 | " self.codec = json_data[\"tracks\"][i].get(\"codec\")\n",
227 | " self.track_type = json_data[\"tracks\"][i].get(\"type\")\n",
228 | "\n",
229 | " def __view_tracks(self, length):\n",
230 | " table = PrettyTable(['id ', 'type', 'language', 'codec'])\n",
231 | "\n",
232 | " for i in range(length):\n",
233 | " self.__get_info(i)\n",
234 | " table.add_row([self.id, self.track_type, self.language, self.codec])\n",
235 | " \n",
236 | " table.align['id '] = \"c\"\n",
237 | " table.align['type'] = \"l\"\n",
238 | " table.align['language'] = \"c\"\n",
239 | " table.align['codec'] = \"l\"\n",
240 | "\n",
241 | " print(table)\n",
242 | "\n",
243 | " def get_start(self):\n",
244 | " while True:\n",
245 | " self.input_file = input(\"\\ninputFile: \")\n",
246 | " if self.input_file == \"\":\n",
247 | " break\n",
248 | " self.__input_files.append(self.input_file)\n",
249 | " json_data = self.__get_json()\n",
250 | "\n",
251 | " track_count = len(json_data.get(\"tracks\"))\n",
252 | " track_id = 0\n",
253 | "\n",
254 | " if track_count > 1:\n",
255 | " print()\n",
256 | " self.__view_tracks(track_count)\n",
257 | " track_id = int(input(\"\\ntrack ID: \"))\n",
258 | " \n",
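            # keep only this file's tracks of the selected type; tags, chapters and attachments are dropped\n",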
259 | " if json_data[\"tracks\"][track_id].get(\"type\") == \"video\":\n",
260 | " display_dimensions = json_data[\"tracks\"][track_id][\"properties\"].get('display_dimensions')\n",
261 | " self.__input_codes += f'--no-audio --no-subtitles --no-attachments --no-track-tags --no-global-tags --no-chapters --display-dimensions {track_id}:{display_dimensions} '\n",
262 | " \n",
263 | " elif json_data[\"tracks\"][track_id].get(\"type\") == \"audio\":\n",
264 | " self.__input_codes += f'--no-video --no-subtitles --no-attachments --no-track-tags --no-global-tags --no-chapters '\n",
265 | " \n",
266 | " elif json_data[\"tracks\"][track_id].get(\"type\") == \"subtitles\":\n",
267 | " self.__input_codes += f'--no-video --no-audio --no-attachments --no-track-tags --no-global-tags --no-chapters '\n",
268 | "\n",
269 | " track_title = input(\"|\\n|--trackTitle: \")\n",
270 | " if track_title: self.__input_codes += f'--track-name {track_id}:\"{track_title}\" '\n",
271 | "\n",
272 | " track_language = input(\"|--trackLanguage: \")\n",
273 | " if track_language: self.__input_codes += f'--language {track_id}:{track_language} '\n",
274 | "\n",
275 | " track_default = input(\"|--Set track as default? [y/n] \")\n",
276 | " if track_default.lower() == \"n\" or track_default == \"\": self.__input_codes += f'--default-track-flag {track_id}:no '\n",
277 | " elif track_default.lower() == \"y\": self.__input_codes += f'--default-track-flag {track_id}:yes '\n",
278 | "\n",
279 | " track_forced = input(\"|--Set track as forced? [y/n] \")\n",
280 | " if track_forced.lower() == \"n\" or track_forced == \"\": self.__input_codes += f'--forced-display-flag {track_id}:no '\n",
281 | " elif track_forced.lower() == \"y\": self.__input_codes += f'--forced-display-flag {track_id}:yes '\n",
282 | "\n",
283 | " self.__input_codes += f'\"{self.input_file}\" '\n",
284 | " self.__input_cmds.append(self.__input_codes)\n",
285 | " self.__input_codes = \"\"\n",
286 | " \n",
287 | " if globalTags != \"\": self.__input_codes += f'--global-tags \"{globalTags}\" '\n",
288 | " if segmentInfo != \"\": self.__input_codes += f'--segmentinfo \"{segmentInfo}\" '\n",
289 | "\n",
290 | " if splitMode == \"After output size\": self.__input_codes += f'--split size:{splitArgument} '\n",
291 | " elif splitMode == \"After output duration\": self.__input_codes += f'--split duration:{splitArgument} '\n",
292 | " elif splitMode == \"After specific timestamps\": self.__input_codes += f'--split timestamps:{splitArgument} '\n",
293 | " elif splitMode == \"by parts based on timestamps\": self.__input_codes += f'--split parts:{splitArgument} '\n",
294 | " elif splitMode == \"by parts based on frame/field numbers\": self.__input_codes += f'--split parts-frames:{splitArgument} '\n",
295 | " elif splitMode == \"After frame/field numbers\": self.__input_codes += f'--split frames:{splitArgument} '\n",
296 | " elif splitMode == \"Before chapters\": self.__input_codes += f'--split chapters:{splitArgument} '\n",
297 | " \n",
298 | " if maxSplits > 1: self.__input_codes += f'--split-max-files {maxSplits} '\n",
299 | " if linkFiles: self.__input_codes += f'--link '\n",
300 | "\n",
301 | " if chaptersLang != \"\": self.__input_codes += f'--chapter-language {chaptersLang} '\n",
302 | " if chaptersFile != \"\": self.__input_codes += f'--chapters \"{chaptersFile}\" '\n",
303 | "\n",
304 | " if attachment != \"\":\n",
305 | " if name != \"\":\n",
306 | " attachment_ext = os.path.splitext(attachment)[1]\n",
307 | " name_ext = os.path.splitext(name)[1]\n",
308 | " if name_ext == attachment_ext: attachment_ext = \"\"\n",
309 | " self.__input_codes += f'--attachment-name \"{name}{attachment_ext}\" '\n",
310 | " if mimeType != \"\": self.__input_codes += f'--attachment-mime-type {mimeType} '\n",
311 | " if description != \"\": self.__input_codes += f'--attachment-description \"{description}\" '\n",
312 | " self.__input_codes += f'--attach-file \"{attachment}\" '\n",
313 | "\n",
314 | " print(\"\\nSave output file to...\\n|\")\n",
315 | " output_dir = input(\"|--Enter folder path: \")\n",
316 | " output_name = input(\"|--Enter a name to save: \")\n",
317 | " print()\n",
318 | "\n",
319 | " if not output_dir: output_dir = \"/content/\"\n",
320 | " if not output_name:\n",
321 | " if not mkvTitle:\n",
322 | " if webmCompliantFile: output_name = os.path.splitext(os.path.basename(self.__input_files[0]))[0] + '.webm'\n",
323 | " else: output_name = os.path.splitext(os.path.basename(self.__input_files[0]))[0] + '.mkv'\n",
324 | " else:\n",
325 | " if webmCompliantFile: output_name = mkvTitle + '.webm'\n",
326 | " else: output_name = mkvTitle + '.mkv'\n",
327 | " else:\n",
328 | " if not (output_name.endswith('.webm') or output_name.endswith('.mkv')):\n",
329 | " if webmCompliantFile: output_name += '.webm'\n",
330 | " else: output_name += '.mkv'\n",
331 | "\n",
332 | " output_dir = os.path.join(os.path.abspath(output_dir), output_name)\n",
333 | " if os.path.exists(output_dir):\n",
334 | " output_dir = f\"{os.path.splitext(output_dir)[0]}_new{os.path.splitext(output_dir)[1]}\"\n",
335 | " command_line = f'--output \"{output_dir}\" '\n",
336 | "\n",
337 | " if webmCompliantFile: command_line += f'--webm '\n",
338 | " if disableTrackStatisticsTags: command_line += f'--disable-track-statistics-tags '\n",
339 | "\n",
340 | " command_line += ''.join(self.__input_cmds) + self.__input_codes\n",
341 | " if mkvTitle != \"\": command_line += f'--title \"{mkvTitle}\"'\n",
342 | "\n",
343 | " return command_line\n",
344 | " \n",
345 | "mkv_merge = mkvmerge()\n",
346 | "!mkvmerge {mkv_merge.get_start()}"
347 | ],
348 | "metadata": {
349 | "cellView": "form",
350 | "id": "AdykP64AoVrC"
351 | },
352 | "execution_count": null,
353 | "outputs": []
354 | },
355 | {
356 | "cell_type": "markdown",
357 | "source": [
358 | "### __mkvextract__"
359 | ],
360 | "metadata": {
361 | "id": "6pSfPzpvoZhT"
362 | }
363 | },
364 | {
365 | "cell_type": "markdown",
366 | "source": [
367 | "* You can extract __all tracks, attachments, chapters, tags, cues, cue sheet, timecodes__ from `MKV` and `WebM` files.\n",
368 |         "* You can also extract a __single track__ by selecting the `extractMode: Single Track` option.\n",
369 |         "* While extracting chapters, answer the `chapters extract type?` prompt with `xml` or `ogm` to extract in that format.\n",
370 |         "* `inputFile` can be a single file or a folder containing `MKV` and `WebM` files; the folder doesn't need to contain only MKV and WebM files."
371 | ],
372 | "metadata": {
373 | "id": "RItcuvkEobmM"
374 | }
375 | },
376 | {
377 | "cell_type": "code",
378 | "source": [
379 | "inputFile = \"\" #@param {type:\"string\"}\n",
380 | "extractMode = \"Tracks\" #@param [\"Tracks\", \"Chapters\", \"Attachments\", \"Tags\", \"Cue Sheet\", \"Timecodes\", \"Cues\", \"Single Track\"]\n",
381 | "\n",
382 | "import os\n",
383 | "import json\n",
384 | "import subprocess\n",
385 | "from prettytable import PrettyTable\n",
386 | "\n",
387 | "class mkvextract(object):\n",
388 | " def __init__(self):\n",
389 | " self.input_file = os.path.abspath(inputFile)\n",
390 | " if not os.path.isfile(self.input_file): self.input_file = self.__process_dir()\n",
391 | " if not isinstance(self.input_file, list): self.input_file = [self.input_file]\n",
392 | "\n",
393 | " def __process_dir(self):\n",
394 | " content = os.listdir(self.input_file)\n",
395 | " content = [os.path.join(self.input_file, file) for file in content if os.path.splitext(file)[-1] in [\".mkv\", \".webm\"]]\n",
396 | " return content\n",
397 | "\n",
398 | " def __get_json(self, input_file):\n",
399 | " json_cmd = [\n",
400 | " \"mkvmerge\",\n",
401 | " \"--identify\",\n",
402 | " \"--identification-format\",\n",
403 | " \"json\",\n",
404 | " input_file\n",
405 | " ]\n",
406 | " json_data = subprocess.check_output(json_cmd, stderr=subprocess.DEVNULL)\n",
407 | " json_data = json.loads(json_data)\n",
408 | " return json_data\n",
409 | " \n",
410 | " def __get_info(self, json_data, i:int):\n",
411 | " self.id = json_data[\"tracks\"][i].get(\"id\")\n",
412 | " self.language = json_data[\"tracks\"][i][\"properties\"].get(\"language\")\n",
413 |         self.language_ietf = json_data[\"tracks\"][i][\"properties\"].get(\"language_ietf\")\n",
414 | " self.codec = json_data[\"tracks\"][i].get(\"codec\")\n",
415 | " self.track_type = json_data[\"tracks\"][i].get(\"type\")\n",
416 | " self.track_name = json_data[\"tracks\"][i][\"properties\"].get('track_name')\n",
417 | " self.codec_id = json_data[\"tracks\"][i][\"properties\"].get('codec_id')\n",
418 | "\n",
419 | " def __view_tracks(self, length, input_file):\n",
420 | " table = PrettyTable(['id ', 'type', 'language', 'codec', 'name'])\n",
421 | "\n",
422 | " for i in range(length):\n",
423 | " json_data = self.__get_json(input_file)\n",
424 | " self.__get_info(json_data, i)\n",
425 | " table.add_row([self.id, self.track_type, self.language, self.codec, self.track_name])\n",
426 | " \n",
427 | " table.align['id '] = \"c\"\n",
428 | " table.align['type'] = \"l\"\n",
429 | " table.align['language'] = \"c\"\n",
430 | " table.align['codec'] = \"l\"\n",
431 | " table.align['name'] = \"l\"\n",
432 | "\n",
433 | " print(table)\n",
434 | "\n",
435 | " def __make_extract_dir(self, input_file):\n",
436 | " input_dir = os.path.dirname(input_file)\n",
437 | " input_name = os.path.splitext(os.path.basename(input_file))[0]\n",
438 | " os.makedirs(os.path.join(input_dir, input_name), exist_ok=True)\n",
439 | " return os.path.join(input_dir, input_name)\n",
440 | "\n",
441 | " def __process_file(self, i:int, input_file):\n",
442 | " json_data = self.__get_json(input_file)\n",
443 | " if self.track_type == \"video\":\n",
444 | " pixel_dimensions = json_data[\"tracks\"][i][\"properties\"].get('pixel_dimensions')\n",
445 | " self.extract_name = f'Track_{self.id}_[{self.track_type}]_[{pixel_dimensions}]_[{self.language}]'\n",
446 | " elif self.track_type == \"audio\":\n",
447 | " audio_channels = json_data[\"tracks\"][i][\"properties\"].get('audio_channels')\n",
448 | " audio_sampling_frequency = json_data[\"tracks\"][i][\"properties\"].get('audio_sampling_frequency')\n",
449 | " self.extract_name = f'Track_{self.id}_[{self.track_type}]_[{audio_channels}CH]_[{audio_sampling_frequency / 1000}kHz]_[{self.language}]'\n",
450 | " elif self.track_type == \"subtitles\":\n",
451 | " self.extract_name = f'Track_{self.id}_[{self.track_type}]_[{self.language}]'\n",
452 | "\n",
453 | " if \"AVC\" in self.codec_id:\n",
454 | " self.extract_name += \".h264\"\n",
455 | " elif \"HEVC\" in self.codec_id:\n",
456 | " self.extract_name += \".h265\"\n",
457 | " elif \"V_VP8\" in self.codec_id or \"V_VP9\" in self.codec_id:\n",
458 | " self.extract_name += \".ivf\"\n",
459 | " elif \"V_AV1\" in self.codec_id:\n",
460 | " self.extract_name += \".ivf\"\n",
461 | " elif \"V_MPEG1\" in self.codec_id or \"V_MPEG2\" in self.codec_id:\n",
462 | " self.extract_name += \".mpg\"\n",
463 | " elif \"V_REAL\" in self.codec_id:\n",
464 | " self.extract_name += \".rm\"\n",
465 | " elif \"V_THEORA\" in self.codec_id:\n",
466 | " self.extract_name += \".ogg\"\n",
467 | " elif \"V_MS/VFW/FOURCC\" in self.codec_id:\n",
468 | " self.extract_name += \".avi\"\n",
469 | " elif \"AAC\" in self.codec_id:\n",
470 | " self.extract_name += \".aac\"\n",
471 | " elif \"A_AC3\" in self.codec_id:\n",
472 | " self.extract_name += \".ac3\"\n",
473 | " elif \"A_EAC3\" in self.codec_id:\n",
474 | " self.extract_name += \".eac3\"\n",
475 | " elif \"ALAC\" in self.codec_id:\n",
476 | " self.extract_name += \".caf\"\n",
477 | " elif \"DTS\" in self.codec_id:\n",
478 | " self.extract_name += \".dts\"\n",
479 | " elif \"FLAC\" in self.codec_id:\n",
480 | " self.extract_name += \".flac\"\n",
481 | " elif \"MPEG/L2\" in self.codec_id:\n",
482 | " self.extract_name += \".mp2\"\n",
483 | " elif \"MPEG/L3\" in self.codec_id:\n",
484 | " self.extract_name += \".mp3\"\n",
485 | " elif \"OPUS\" in self.codec_id:\n",
486 | " self.extract_name += \".ogg\"\n",
487 | " elif \"PCM\" in self.codec_id:\n",
488 | " self.extract_name += \".wav\"\n",
489 | " elif \"REAL\" in self.codec_id:\n",
490 | " self.extract_name += \".ra\"\n",
491 | " elif \"TRUEHD\" in self.codec_id:\n",
492 | " self.extract_name += \".thd\"\n",
493 | " elif \"MLP\" in self.codec_id:\n",
494 | " self.extract_name += \".mlp\"\n",
495 | " elif \"TTA1\" in self.codec_id:\n",
496 | " self.extract_name += \".tta\"\n",
497 | " elif \"VORBIS\" in self.codec_id:\n",
498 | " self.extract_name += \".ogg\"\n",
499 | " elif \"WAVPACK4\" in self.codec_id:\n",
500 | " self.extract_name += \".wv\"\n",
501 | " elif \"PGS\" in self.codec_id:\n",
502 | " self.extract_name += \".sup\"\n",
503 | " elif \"ASS\" in self.codec_id:\n",
504 | " self.extract_name += \".ass\"\n",
505 | " elif \"SSA\" in self.codec_id:\n",
506 | " self.extract_name += \".ssa\"\n",
507 | " elif \"UTF8\" in self.codec_id or \"ASCII\" in self.codec_id:\n",
508 | " self.extract_name += \".srt\"\n",
509 | " elif \"VOBSUB\" in self.codec_id:\n",
510 | " self.extract_name += \".sub\"\n",
511 | " elif \"S_KATE\" in self.codec_id:\n",
512 | " self.extract_name += \".ogg\"\n",
513 | " elif \"USF\" in self.codec_id:\n",
514 | " self.extract_name += \".usf\"\n",
515 | " elif \"WEBVTT\" in self.codec_id:\n",
516 | " self.extract_name += \".vtt\"\n",
517 | "\n",
518 | " def __extract_tracks(self, input_file):\n",
519 | " extract_dir = self.__make_extract_dir(input_file)\n",
520 | " json_data = self.__get_json(input_file)\n",
521 | "\n",
522 | " extract_codes = []\n",
523 | "\n",
524 | " for i in range(len(json_data[\"tracks\"])):\n",
525 | " self.__get_info(json_data, int(i))\n",
526 | " self.__process_file(int(i), input_file)\n",
527 | " extract_codes.append(f'{self.id}:\"{os.path.join(extract_dir, self.extract_name)}\"')\n",
528 | "\n",
529 | " cmd = f'mkvextract \"{input_file}\" tracks {\" \".join(extract_codes)}'\n",
530 | " print(\"Extracting...\")\n",
531 | " !{cmd}\n",
532 | "\n",
533 | " def __extract_a_single_track(self, input_file):\n",
534 | " extract_dir = self.__make_extract_dir(input_file)\n",
535 | " json_data = self.__get_json(input_file)\n",
536 | "\n",
537 | " self.__view_tracks(len(json_data[\"tracks\"]), input_file)\n",
538 | " i = input(\"\\ntrack ID: \")\n",
539 | "\n",
540 | " self.__get_info(json_data, int(i))\n",
541 | " self.__process_file(int(i), input_file)\n",
542 | "\n",
543 | " cmd = f'mkvextract \"{input_file}\" tracks {self.id}:\"{os.path.join(extract_dir, self.extract_name)}\"'\n",
544 | " print(\"\\nExtracting...\")\n",
545 | " !{cmd}\n",
546 | "\n",
547 | " def __extract_attachments(self, input_file):\n",
548 | " extract_dir = self.__make_extract_dir(input_file)\n",
549 | " json_data = self.__get_json(input_file)\n",
550 | "\n",
551 | " if len(json_data.get(\"attachments\")) > 0:\n",
552 | " for attachment in json_data.get(\"attachments\"):\n",
553 | " attachment_id = attachment.get(\"id\")\n",
554 | " attachment_name = attachment.get(\"file_name\")\n",
555 |                 # write each attachment into the per-file output folder\n",
556 | " cmd = f'mkvextract \"{input_file}\" attachments {attachment_id}:\"{os.path.join(extract_dir, attachment_name)}\"'\n",
557 | " !{cmd}\n",
558 | " elif len(json_data.get(\"attachments\")) == 0:\n",
559 | " print(\"No attachments are available.\")\n",
560 | "\n",
561 | " def __extract_chapters(self, chapter_type, input_file):\n",
562 | " extract_dir = self.__make_extract_dir(input_file)\n",
563 | " json_data = self.__get_json(input_file)\n",
564 | "\n",
565 | " if len(json_data.get(\"chapters\")) > 0:\n",
566 | " if chapter_type == \"xml\": cmd = f'mkvextract \"{input_file}\" chapters \"{os.path.join(extract_dir, \"Chapters.xml\")}\"'\n",
567 | " elif chapter_type == \"ogm\": cmd = f'mkvextract \"{input_file}\" chapters --simple \"{os.path.join(extract_dir, \"Chapters.txt\")}\"'\n",
568 | " !{cmd}\n",
569 | " print(\"Extracted.\")\n",
570 | " elif len(json_data.get(\"chapters\")) == 0:\n",
571 | " print(\"No chapters are available.\")\n",
572 | "\n",
573 | " def __extract_tags(self, input_file):\n",
574 | " extract_dir = self.__make_extract_dir(input_file)\n",
575 | " json_data = self.__get_json(input_file)\n",
576 | "\n",
577 |         if len(json_data.get(\"global_tags\")) > 0 or len(json_data.get(\"track_tags\")) > 0:\n",
578 | " cmd = f'mkvextract \"{input_file}\" tags \"{os.path.join(extract_dir, \"Tags.xml\")}\"'\n",
579 | " !{cmd}\n",
580 | " print(\"Extracted.\")\n",
581 | " elif (len(json_data.get(\"global_tags\")) == 0) and (len(json_data.get(\"track_tags\")) == 0):\n",
582 | " print(\"No tags are available.\")\n",
583 | "\n",
584 | " def __extract_timecodes(self, input_file):\n",
585 | " extract_dir = self.__make_extract_dir(input_file)\n",
586 | " json_data = self.__get_json(input_file)\n",
587 | "\n",
588 | " extract_codes = []\n",
589 | "\n",
590 | " for i in range(len(json_data[\"tracks\"])):\n",
591 | " self.__get_info(json_data, int(i))\n",
592 | " extract_name = f'Track_{self.id}_[{self.track_type}]_tc.txt'\n",
593 | " extract_codes.append(f'{self.id}:\"{os.path.join(extract_dir, extract_name)}\"')\n",
594 | "\n",
595 | " cmd = f'mkvextract \"{input_file}\" timecodes_v2 {\" \".join(extract_codes)}'\n",
596 | " print(\"Extracting...\")\n",
597 | " !{cmd}\n",
598 | "\n",
599 | " def __extract_cues(self, input_file):\n",
600 | " extract_dir = self.__make_extract_dir(input_file)\n",
601 | " json_data = self.__get_json(input_file)\n",
602 | "\n",
603 | " extract_codes = []\n",
604 | "\n",
605 | " for i in range(len(json_data[\"tracks\"])):\n",
606 | " self.__get_info(json_data, int(i))\n",
607 | " extract_name = f'Track_{self.id}_[{self.track_type}]_cues.txt'\n",
608 | " extract_codes.append(f'{self.id}:\"{os.path.join(extract_dir, extract_name)}\"')\n",
609 | "\n",
610 | " cmd = f'mkvextract \"{input_file}\" cues {\" \".join(extract_codes)}'\n",
611 | " print(\"Extracting...\")\n",
612 | " !{cmd}\n",
613 | "\n",
614 | " def __extract_cuesheet(self, input_file):\n",
615 | " extract_dir = self.__make_extract_dir(input_file)\n",
616 | " cmd = f'mkvextract \"{input_file}\" cuesheet \"{os.path.join(extract_dir, \"CueSheet.cue\")}\"'\n",
617 | " !{cmd}\n",
618 | " print(\"Extracted.\")\n",
619 | "\n",
620 | " def get_start(self):\n",
621 | " if extractMode == \"Tracks\":\n",
622 | " for file in self.input_file:\n",
623 | " print()\n",
624 | " self.__extract_tracks(file)\n",
625 | " elif extractMode == \"Single Track\":\n",
626 | " for file in self.input_file:\n",
627 | " print()\n",
628 | " self.__extract_a_single_track(file)\n",
629 | " elif extractMode == \"Chapters\":\n",
630 | " for file in self.input_file:\n",
631 | " print()\n",
632 | " chapter_type = input(\"chapters extract type? [xml/ogm] \")\n",
633 | " if chapter_type.lower() == \"xml\" or chapter_type.lower() == \"ogm\":\n",
634 | " self.__extract_chapters(chapter_type.lower(), file)\n",
635 | " else: print(\"Incorrect input.\")\n",
636 | " elif extractMode == \"Attachments\":\n",
637 | " for file in self.input_file:\n",
638 | " print()\n",
639 | " self.__extract_attachments(file)\n",
640 | " elif extractMode == \"Tags\":\n",
641 | " for file in self.input_file:\n",
642 | " print()\n",
643 | " self.__extract_tags(file)\n",
644 | " elif extractMode == \"Timecodes\":\n",
645 | " for file in self.input_file:\n",
646 | " print()\n",
647 | " self.__extract_timecodes(file)\n",
648 | " elif extractMode == \"Cues\":\n",
649 | " for file in self.input_file:\n",
650 | " print()\n",
651 | " self.__extract_cues(file)\n",
652 | " elif extractMode == \"Cue Sheet\":\n",
653 | " for file in self.input_file:\n",
654 | " print()\n",
655 | " self.__extract_cuesheet(file)\n",
656 | "\n",
657 | "mkv_extract = mkvextract()\n",
658 | "mkv_extract.get_start()"
659 | ],
660 | "metadata": {
661 | "cellView": "form",
662 | "id": "ozXBsr91od_6"
663 | },
664 | "execution_count": null,
665 | "outputs": []
666 | },
667 | {
668 | "cell_type": "markdown",
669 | "source": [
670 | "### __mkvpropedit__"
671 | ],
672 | "metadata": {
673 | "id": "fKmlX5v-ogty"
674 | }
675 | },
676 | {
677 | "cell_type": "markdown",
678 | "source": [
679 |         "* You can edit __segment info, track info, chapters, attachments, tags__ in an MKV file.\n",
680 |         "* If you want to __delete statistics tags__ from tracks, select `deleteTrackStatisticsTags`."
681 | ],
682 | "metadata": {
683 | "id": "Mnmf5LHiojfD"
684 | }
685 | },
686 | {
687 | "cell_type": "code",
688 | "source": [
689 | "mkvFile = \"\" #@param {type:\"string\"}\n",
690 | "handleMode = \"None\" #@param [\"None\", \"Segment Info\", \"Tracks\", \"Chapters\", \"Attachments\", \"Tags\"]\n",
691 | "deleteTrackStatisticsTags = False #@param {type:\"boolean\"}\n",
692 | "\n",
693 | "import os\n",
694 | "import math\n",
695 | "import json\n",
696 | "import subprocess\n",
697 | "from prettytable import PrettyTable\n",
698 | "\n",
699 | "class mkvpropedit():\n",
700 | " def __init__(self):\n",
701 | " self.input_file = os.path.abspath(mkvFile)\n",
702 | " self.__input_codes = \"\"\n",
703 | "\n",
704 | " def __get_json(self):\n",
705 | " json_cmd = [\n",
706 | " \"mkvmerge\",\n",
707 | " \"--identify\",\n",
708 | " \"--identification-format\",\n",
709 | " \"json\",\n",
710 | " os.path.abspath(self.input_file)\n",
711 | " ]\n",
712 | " json_data = subprocess.check_output(json_cmd, stderr=subprocess.DEVNULL)\n",
713 | " json_data = json.loads(json_data)\n",
714 | " return json_data\n",
715 | " \n",
716 | " def __get_track_info(self, json_data, i:int):\n",
717 | " self.id = json_data[\"tracks\"][i].get(\"id\")\n",
718 | " self.language = json_data[\"tracks\"][i][\"properties\"].get(\"language\")\n",
719 |         self.language_ietf = json_data[\"tracks\"][i][\"properties\"].get(\"language_ietf\")\n",
720 | " self.codec = json_data[\"tracks\"][i].get(\"codec\")\n",
721 | " self.track_type = json_data[\"tracks\"][i].get(\"type\")\n",
722 | " self.track_name = json_data[\"tracks\"][i][\"properties\"].get('track_name')\n",
723 | " self.codec_id = json_data[\"tracks\"][i][\"properties\"].get('codec_id')\n",
724 | "\n",
725 | " def __get_attachment_info(self, json_data, i:int):\n",
726 | " self.attachment_id = json_data[\"attachments\"][i].get(\"id\")\n",
727 | " self.attachment_name = json_data[\"attachments\"][i].get(\"file_name\")\n",
728 | " self.attachment_type = json_data[\"attachments\"][i].get(\"content_type\")\n",
729 | " self.attachment_description = json_data[\"attachments\"][i].get(\"description\")\n",
730 | " if self.attachment_description == \"\": self.attachment_description = \"None\"\n",
731 | " self.attachment_uid = json_data[\"attachments\"][i][\"properties\"].get(\"uid\")\n",
732 | " self.attachment_size = self.__convert_size(json_data[\"attachments\"][i].get(\"size\"))\n",
733 | "\n",
734 | " def __convert_size(self, size_bytes):\n",
735 | " if size_bytes == 0:\n",
736 | " return \"0B\"\n",
737 | " size_name = (\"B\", \"KB\", \"MB\", \"GB\", \"TB\", \"PB\", \"EB\", \"ZB\", \"YB\")\n",
738 | " i = int(math.floor(math.log(size_bytes, 1024)))\n",
739 | " p = math.pow(1024, i)\n",
740 | " s = round(size_bytes / p, 2)\n",
741 | " return f\"{s} {size_name[i]}\"\n",
742 | "\n",
743 | " def __view_tracks(self, json_data, length):\n",
744 | " table = PrettyTable(['id ', 'type', 'language', 'codec', 'name'])\n",
745 | "\n",
746 | " for i in range(length):\n",
747 | " self.__get_track_info(json_data, i)\n",
748 | " table.add_row([self.id, self.track_type, self.language, self.codec, self.track_name])\n",
749 | " \n",
750 | " table.align['id '] = \"c\"\n",
751 | " table.align['type'] = \"l\"\n",
752 | " table.align['language'] = \"c\"\n",
753 | " table.align['codec'] = \"l\"\n",
754 | " table.align['name'] = \"l\"\n",
755 | " print(table)\n",
756 | " \n",
757 | " def __view_attachments(self, json_data, length):\n",
758 | " table = PrettyTable(['id ', 'name', 'mime-type', 'size', 'uid', 'description'])\n",
759 | "\n",
760 | " for i in range(length):\n",
761 | " self.__get_attachment_info(json_data, i)\n",
762 | " table.add_row([self.attachment_id, self.attachment_name, self.attachment_type, self.attachment_size, self.attachment_uid, self.attachment_description])\n",
763 | " \n",
764 | " table.align['id '] = \"c\"\n",
765 | " table.align['name'] = \"l\"\n",
766 | " table.align['mime-type'] = \"l\"\n",
767 | " table.align['size'] = \"l\"\n",
768 | " table.align['uid'] = \"l\"\n",
769 | " table.align['description'] = \"l\"\n",
770 | " print(table)\n",
771 | "\n",
772 | " def __edit_tracks(self):\n",
773 | " json_data = self.__get_json()\n",
774 | " self.__view_tracks(json_data, len(json_data.get(\"tracks\")))\n",
775 | " track_id = int(input('\\ntrack ID: '))\n",
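        # mkvpropedit track selectors are 1-based, so shift the 0-based mkvmerge ID by one\n",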
776 | " mkvpropedit_id = track_id + 1\n",
777 | " property_type = input(\"\\nproperty type to edit?\\n\\n1: Track Title\\n2: Track Language\\n3: Track Default\\n4: Track Forced\\n\\n[1/2/3/4] \")\n",
778 | " if property_type in [\"1\", \"2\", \"3\", \"4\"]:\n",
779 | " if property_type == \"1\":\n",
780 | " edit_type = input(\"\\nHow to edit?\\n\\n1: Add\\n2: Change\\n3: Delete\\n\\n[1/2/3] \")\n",
781 | " if edit_type in [\"1\", \"2\"]:\n",
782 | " track_title = input(\"\\nTitle: \")\n",
783 | " if edit_type == \"1\": self.__input_codes += f'--add name=\"{track_title}\"'\n",
784 | " elif edit_type == \"2\": self.__input_codes += f'--set name=\"{track_title}\"'\n",
785 | " elif edit_type == \"3\": self.__input_codes += '--delete name'\n",
786 | " else: return \"Incorrect input\"\n",
787 | " elif property_type == \"2\":\n",
788 | " edit_type = input(\"\\nHow to edit?\\n\\n1: Add\\n2: Change\\n3: Delete\\n\\n[1/2/3] \")\n",
789 | " if edit_type in [\"1\", \"2\"]:\n",
790 | " track_lang = input(\"\\nLanguage: \")\n",
791 | " if edit_type == \"1\": self.__input_codes += f'--add language={track_lang}'\n",
792 | " elif edit_type == \"2\": self.__input_codes += f'--set language={track_lang}'\n",
793 | " elif edit_type == \"3\": self.__input_codes += '--delete language'\n",
794 | " else: return \"Incorrect input\"\n",
795 | " elif property_type == \"3\":\n",
796 | " track_default = input(f'\\nSet this track as default?\\n\\n0: No\\n1: Yes\\n\\n[0/1] ')\n",
797 | " if track_default in [\"0\", \"1\"]: self.__input_codes += f'--set flag-default={track_default}'\n",
798 | " else: return \"Incorrect input\"\n",
799 | " elif property_type == \"4\":\n",
800 | " track_forced = input(f'\\nSet this track as forced?\\n\\n0: No\\n1: Yes\\n\\n[0/1] ')\n",
801 | " if track_forced in [\"0\", \"1\"]: self.__input_codes += f'--set flag-forced={track_forced}'\n",
802 | " else: return \"Incorrect input\"\n",
803 | " return f'\"{self.input_file}\" --edit track:{mkvpropedit_id} {self.__input_codes} '\n",
804 | " else: return \"Incorrect input\"\n",
805 | "\n",
806 | " def __edit_segment_info(self):\n",
807 | " edit_type = input(\"Segment infomation : Title\\n\\n1: Add\\n2: Change\\n3: Delete\\n\\n[1/2/3] \")\n",
808 | " if edit_type in [\"1\", \"2\"]:\n",
809 | " mkv_title = input(\"\\nTitle: \")\n",
810 | " if edit_type == \"1\": self.__input_codes += f'--add title=\"{mkv_title}\"'\n",
811 | " elif edit_type == \"2\": self.__input_codes += f'--set title=\"{mkv_title}\"'\n",
812 | " elif edit_type == \"3\": self.__input_codes += '--delete title'\n",
813 | " else: return \"Incorrect input\"\n",
814 | " return f'\"{self.input_file}\" --edit info {self.__input_codes} '\n",
815 | " \n",
816 | " def __edit_chapters(self):\n",
817 | " edit_type = input(\"Chapters\\n\\n1: Add\\n2: Remove\\n\\n[1/2] \")\n",
818 | " if edit_type == \"1\":\n",
819 | " file_path = input(\"\\nChapter file path: \")\n",
820 | " self.__input_codes += f'--chapters \"{file_path}\"'\n",
821 | " elif edit_type == \"2\": self.__input_codes += '--chapters \"\"'\n",
822 | " else: return \"Incorrect input\"\n",
823 | " return f'\"{self.input_file}\" --edit info {self.__input_codes} '\n",
824 | " \n",
825 | " def __edit_attachments(self):\n",
826 | " edit_type = input(f'Attachments\\n\\n1: Add\\n2: Replace\\n3: Update\\n4: Remove\\n5: List\\n\\n[1/2/3/4/5] ')\n",
827 | " if edit_type == \"1\":\n",
828 | " attachment_path = input(f'\\nattachment path: ')\n",
829 | " attachment_ext = os.path.splitext(os.path.abspath(attachment_path))[-1]\n",
830 | " attachment_name = input(f'\\nName: ')\n",
831 | " attachment_mimetype = input(f'MIME-Type: ')\n",
832 | " attachment_desc = input(f'Description: ')\n",
833 | " attachment_uid = input(f'UID: ')\n",
834 | "\n",
835 | " if attachment_name: self.__input_codes += f'--attachment-name \"{attachment_name}{attachment_ext}\" '\n",
836 | " if attachment_mimetype: self.__input_codes += f'--attachment-mime-type \"{attachment_mimetype}\" '\n",
837 | " if attachment_desc: self.__input_codes += f'--attachment-description \"{attachment_desc}\" '\n",
838 | " if attachment_uid: self.__input_codes += f'--attachment-uid \"{attachment_uid}\" '\n",
839 | "\n",
840 | " self.__input_codes += f'--add-attachment \"{attachment_path}\" '\n",
841 | " return f'\"{self.input_file}\" {self.__input_codes} '\n",
842 | " elif edit_type in [\"2\", \"3\", \"4\", \"5\"]:\n",
843 | " print()\n",
844 | " json_data = self.__get_json()\n",
845 | " self.__view_attachments(json_data, len(json_data.get(\"attachments\")))\n",
846 | "\n",
847 | " if edit_type in [\"2\", \"3\"]:\n",
848 | " attachment_id = input(\"\\nattachment ID: \")\n",
849 | " attachment_path = input(f'\\nattachment path: ')\n",
850 | " attachment_ext = os.path.splitext(os.path.abspath(attachment_path))[-1]\n",
851 | " attachment_name = input(f'\\nName: ')\n",
852 | " attachment_mimetype = input(f'MIME-Type: ')\n",
853 | " attachment_desc = input(f'Description: ')\n",
854 | " attachment_uid = input(f'UID: ')\n",
855 | "\n",
856 | " if attachment_name: self.__input_codes += f'--attachment-name \"{attachment_name}{attachment_ext}\" '\n",
857 | " if attachment_mimetype: self.__input_codes += f'--attachment-mime-type \"{attachment_mimetype}\" '\n",
858 | " if attachment_desc: self.__input_codes += f'--attachment-description \"{attachment_desc}\" '\n",
859 | " if attachment_uid: self.__input_codes += f'--attachment-uid \"{attachment_uid}\" '\n",
860 | "\n",
861 | " if edit_type == \"2\": self.__input_codes += f'--replace-attachment {attachment_id}:\"{attachment_path}\" '\n",
862 | " elif edit_type == \"3\": self.__input_codes += f'--update-attachment {attachment_id} '\n",
863 | "\n",
864 | " return f'\"{self.input_file}\" {self.__input_codes} '\n",
865 | " elif edit_type == \"4\":\n",
866 | " attachment_id = input(\"\\nattachment ID: \")\n",
867 | " self.__input_codes += f'--delete-attachment {attachment_id} '\n",
868 | " return f'\"{self.input_file}\" {self.__input_codes} '\n",
869 | " return \"list\"\n",
870 | " else: return \"Incorrect input\"\n",
871 | "\n",
872 | " def __edit_tags(self):\n",
873 | " edit_type = input('Tags\\n\\n1: All tags\\n2: Global tags\\n3: Track tags\\n\\n[1/2/3] ')\n",
874 | " if edit_type in [\"1\", \"2\", \"3\"]:\n",
875 | " if edit_type == \"1\":\n",
876 | " tag_type = input('\\nAll Tags\\n\\n1: Add\\n2: Remove\\n\\n[1/2] ')\n",
877 | " if tag_type in [\"1\", \"2\"]:\n",
878 | " if tag_type == \"1\":\n",
879 | " xml_path = input(f'\\nXML file path: ')\n",
880 | " self.__input_codes += f'--tags all:\"{xml_path}\" '\n",
881 | " elif tag_type == \"2\": self.__input_codes += '--tags all: '\n",
882 | " return f'\"{self.input_file}\" {self.__input_codes} '\n",
883 | " else: return \"Incorrect input\"\n",
884 | " elif edit_type == \"2\":\n",
885 | " tag_type = input('\\nGlobal Tags\\n\\n1: Add\\n2: Remove\\n\\n[1/2] ')\n",
886 | " if tag_type in [\"1\", \"2\"]:\n",
887 | " if tag_type == \"1\":\n",
888 | " xml_path = input(f'\\nXML file path: ')\n",
889 | " self.__input_codes += f'--tags global:\"{xml_path}\" '\n",
890 | " elif tag_type == \"2\": self.__input_codes += '--tags global: '\n",
891 | " return f'\"{self.input_file}\" {self.__input_codes} '\n",
892 | " else: return \"Incorrect input\"\n",
893 | " elif edit_type == \"3\":\n",
894 | " print()\n",
895 | " json_data = self.__get_json()\n",
896 | " self.__view_tracks(json_data, len(json_data.get(\"tracks\")))\n",
897 | " track_id = input(f'\\ntrack ID: ')\n",
898 | " mkvpropedit_id = int(track_id) + 1\n",
899 | " tag_type = input('\\nTrack Tags\\n\\n1: Add\\n2: Remove\\n\\n[1/2] ')\n",
900 | " if tag_type in [\"1\", \"2\"]:\n",
901 | " if tag_type == '1':\n",
902 | " xml_path = input(f'\\nXML file path: ')\n",
903 | " self.__input_codes += f'--tags track:{mkvpropedit_id}:\"{xml_path}\" '\n",
904 | " elif tag_type == '2': self.__input_codes += f'--tags track:{mkvpropedit_id}: '\n",
905 | " return f'\"{self.input_file}\" {self.__input_codes} '\n",
906 | " else: return \"Incorrect input\"\n",
907 | " else: return \"Incorrect input\"\n",
908 | "\n",
909 | " def get_start(self):\n",
910 | " if deleteTrackStatisticsTags:\n",
911 | " if handleMode == 'None': return f'{self.input_file} --delete-track-statistics-tags'\n",
912 | " elif handleMode == 'Tracks': return f'{self.__edit_tracks()} --delete-track-statistics-tags'\n",
913 | " elif handleMode == 'Segment Info': return f'{self.__edit_segment_info()} --delete-track-statistics-tags'\n",
914 | " elif handleMode == 'Chapters': return f'{self.__edit_chapters()} --delete-track-statistics-tags'\n",
915 | " elif handleMode == 'Attachments': return f'{self.__edit_attachments()} --delete-track-statistics-tags'\n",
916 | " elif handleMode == 'Tags': return f'{self.__edit_tags()} --delete-track-statistics-tags'\n",
917 | " else:\n",
918 | " if handleMode == 'None': return f'{self.input_file}'\n",
919 | " elif handleMode == 'Tracks': return self.__edit_tracks()\n",
920 | " elif handleMode == 'Segment Info': return self.__edit_segment_info()\n",
921 | " elif handleMode == 'Chapters': return self.__edit_chapters()\n",
922 | " elif handleMode == 'Attachments': return self.__edit_attachments()\n",
923 | " elif handleMode == 'Tags': return self.__edit_tags()\n",
924 | "\n",
925 | "mkv_propedit = mkvpropedit()\n",
926 | "cmd = mkv_propedit.get_start()\n",
927 | "print()\n",
928 | "if not \"Incorrect input\" in cmd and not \"list\" in cmd:\n",
929 | " !mkvpropedit {cmd}\n",
930 | "elif \"Incorrect input\" in cmd:\n",
931 | " print(\"Incorrect input\")"
932 | ],
933 | "metadata": {
934 | "cellView": "form",
935 | "id": "BJ_Ny5Z6omvD"
936 | },
937 | "execution_count": null,
938 | "outputs": []
939 | },
940 | {
941 | "cell_type": "markdown",
942 | "source": [
943 | "### __Matroska/WebM Tags__"
944 | ],
945 | "metadata": {
946 | "id": "DghT6NtFfR56"
947 | }
948 | },
949 | {
950 | "cell_type": "markdown",
951 | "source": [
952 |         "* Enter the __MKV or WebM file's path__ in `mkvFile`.\n",
953 |         "* If you want to delete the `Title`, leave it blank.\n",
954 |         "* You can add __tags__ using a __text document__, but its content must be formatted as below. \n",
955 |         "`Tag name: Tag value, Tag value`\n",
956 |         "* This doesn't add the values as __multiple tags__; all values are joined into a single tag.\n",
957 |         "* Use __official tag names__ to add __official Matroska tags__; see [here](https://www.matroska.org/technical/tagging.html)."
958 | ],
959 | "metadata": {
960 | "id": "YHHyNR8yfUUR"
961 | }
962 | },
963 | {
964 | "cell_type": "code",
965 | "source": [
966 | "mkvFile = \"\" #@param {type:\"string\"}\n",
967 | "saveXML = True #@param {type:\"boolean\"}\n",
968 | "\n",
969 | "import os\n",
970 | "from xml.etree.ElementTree import Element, tostring\n",
971 | "\n",
972 | "class mkvtagger(object):\n",
973 | " def __init__(self):\n",
974 | " self.tags_dict = {}\n",
975 | " self.input_file = os.path.abspath(mkvFile)\n",
976 | " self.__get_xml()\n",
977 | "\n",
978 | " def __get_xml(self):\n",
979 | " source_dir = os.path.dirname(self.input_file)\n",
980 | " source_name = os.path.splitext(os.path.basename(self.input_file))[0]\n",
981 | " self.xml_path = os.path.join(source_dir, f'{source_name}_[TagsXML].xml')\n",
982 | "\n",
983 | " def __get_tags(self):\n",
984 | " print(\"Matroska/WebM Tags\")\n",
985 | " print(\"│\")\n",
986 | " self.mkv_title = input(\"├── Title: \")\n",
987 | " print(\"│\")\n",
988 | " tags_txt = input(\"├── Do you want to add tags from a text document? [y/n] \")\n",
989 | " if tags_txt.lower() in [\"y\", \"yes\"]:\n",
990 | " txt_path = input(\"├── Text document's path: \")\n",
991 | " print(\"│\")\n",
992 | " with open(os.path.abspath(txt_path), 'r') as txt:\n",
993 | " for custom_tag in txt:\n",
994 | " custom_tag = custom_tag.strip()\n",
995 | " tag_name = custom_tag.split(\":\")[0]\n",
996 | " tag_name = tag_name.strip()\n",
997 | " tag_values = custom_tag.split(\":\")[1]\n",
998 | " tag_values = tag_values.split(\",\")\n",
999 | "\n",
1000 | " i = 0\n",
1001 | " for value in tag_values:\n",
1002 | " value = value.strip()\n",
1003 | " tag_values[i] = value\n",
1004 | " i += 1\n",
1005 | " \n",
1006 | " tag_value = \", \".join(tag_values)\n",
1007 | " self.tags_dict[tag_name] = tag_value\n",
1008 | " print(f\"├── {tag_name}: {tag_value}\")\n",
1009 | " elif tags_txt.lower() in [\"n\", \"no\"]:\n",
1010 | " print(\"│\")\n",
1011 | " print(\"├── Type [tag name] first and [tag value] second.\")\n",
1012 | " print(\"├── When finished, leave 'Tag name: ' as blank.\")\n",
1013 | " print(\"│\")\n",
1014 | "\n",
1015 | " while True:\n",
1016 | " tag_name = input(\"├── Tag name: \")\n",
1017 | " if tag_name == \"\":\n",
1018 | " break\n",
1019 | " else:\n",
1020 | " tag_value = input(f'├── {tag_name}: ')\n",
1021 | " self.tags_dict[tag_name] = tag_value\n",
1022 | " else:\n",
1023 | " print('│\\n└── Input is invalid. Try again...')\n",
1024 | " print()\n",
1025 | " return \"error\"\n",
1026 | "\n",
1027 | " def __create_xml(self):\n",
1028 | " node_tags = Element('Tags')\n",
1029 | " node_tag = Element('Tag')\n",
1030 | " for name, value in self.tags_dict.items():\n",
1031 | " node_simple = Element('Simple')\n",
1032 | " node_name = Element('Name')\n",
1033 | " node_name.text = name\n",
1034 | " node_simple.append(node_name)\n",
1035 | " node_string = Element('String')\n",
1036 | " node_string.text = value\n",
1037 | " node_simple.append(node_string)\n",
1038 | " node_tag.append(node_simple)\n",
1039 | " node_tags.append(node_tag)\n",
1040 | " xml_data = tostring(node_tags)\n",
1041 | "\n",
1042 | " with open(self.xml_path, 'wb') as xml:\n",
1043 | " xml.write(xml_data)\n",
1044 | "\n",
1045 | " def get_start(self):\n",
1046 | " t = self.__get_tags()\n",
1047 | " self.__create_xml()\n",
1048 | "\n",
1049 | " if t != \"error\":\n",
1050 | " command_line = f'\"{self.input_file}\" --tags all:\"{self.xml_path}\" --edit info '\n",
1051 | "\n",
1052 | " if not self.mkv_title: command_line += \"--delete title \"\n",
1053 | " else: command_line += f'--set title=\"{self.mkv_title}\" '\n",
1054 | " \n",
1055 | " print(\"│\")\n",
1056 | " encoded_date = input('├── Do you want to remove encoded date? [y/n] ')\n",
1057 | " writing_application = input('├── Do you want to remove writing application? [y/n] ')\n",
1058 | " writing_library = input('└── Do you want to remove writing library? [y/n] ')\n",
1059 | " print()\n",
1060 | "\n",
1061 | " if encoded_date in [\"y\", \"yes\"]: command_line += \"--delete date \"\n",
1062 | " if writing_application in [\"y\", \"yes\"]: command_line += '--set writing-application=\"\" '\n",
1063 | " if writing_library in [\"y\", \"yes\"]: command_line += '--set muxing-application=\"\" '\n",
1064 | "\n",
1065 | " if not saveXML: os.remove(self.xml_path)\n",
1066 | " return command_line\n",
1067 | " \n",
1068 | "mkv_tagger = mkvtagger()\n",
1069 | "print()\n",
1070 | "!mkvpropedit {mkv_tagger.get_start()}"
1071 | ],
1072 | "metadata": {
1073 | "cellView": "form",
1074 | "id": "v66UqyFsfYAh"
1075 | },
1076 | "execution_count": null,
1077 | "outputs": []
1078 | }
1079 | ]
1080 | }
--------------------------------------------------------------------------------