├── README.md └── 4bit_benchmark.ipynb /README.md: -------------------------------------------------------------------------------- 1 | # Insanely Fast Whisper 2 | 3 | Powered by 🤗 *Transformers* & *Optimum* 4 | 5 | **TL;DR** - Transcribe **300** minutes (5 hours) of audio in less than **10** minutes - with [OpenAI's Whisper Large v2](https://huggingface.co/openai/whisper-large-v2). Blazingly fast transcription is now a reality!⚡️ 6 | 7 | Basically all you need to do is this: 8 | 9 | ```python 10 | import torch 11 | from transformers import pipeline 12 | 13 | pipe = pipeline("automatic-speech-recognition", 14 | "openai/whisper-large-v2", 15 | torch_dtype=torch.float16, 16 | device="cuda:0") 17 | 18 | pipe.model = pipe.model.to_bettertransformer() 19 | 20 | outputs = pipe("<file_name>", 21 | chunk_length_s=30, 22 | batch_size=24, 23 | return_timestamps=True) 24 | 25 | outputs["text"] 26 | ``` 27 | 28 | Not convinced? Here are some benchmarks we ran on a free [Google Colab T4 GPU](https://colab.research.google.com/github/Vaibhavs10/insanely-fast-whisper/blob/main/infer_transformers_whisper_large_v2.ipynb)! 👇 29 | 30 | | Optimisation type | Time to Transcribe (150 mins of Audio) | 31 | |------------------|------------------| 32 | | Transformers (`fp32`) | ~31 (*31 min 1 sec*) | 33 | | Transformers (`fp32` + `batching [8]`) | ~13 (*13 min 19 sec*) | 34 | | Transformers (`fp16` + `batching [16]`) | ~6 (*6 min 13 sec*) | 35 | | Transformers (`fp16` + `batching [16]` + `bettertransformer`) | ~5.42 (*5 min 42 sec*) | 36 | | Transformers (`fp16` + `batching [24]` + `bettertransformer`) | ~5 (*5 min 2 sec*) | 37 | | Transformers (`4-bit` + `batching [24]`) | ~8.35 (*8 min 35 sec*) | 38 | | Transformers (`4-bit` + `batching [40]`) | ~6 (*6 min 29 sec*) | 39 | | Faster Whisper (`fp16` + `beam_size [1]`) | ~9.23 (*9 min 23 sec*) | 40 | | Faster Whisper (`8-bit` + `beam_size [1]`) | ~8 (*8 min 15 sec*) | 41 | 42 | ### Let's go!! 
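To put the table above in perspective, here's a quick back-of-the-envelope (plain Python, no GPU needed) that converts those timings into real-time factors. The timings are taken straight from the benchmark table; the helper function itself is just illustrative arithmetic:

```python
# Convert the benchmark timings above into real-time factors:
# seconds of audio transcribed per second of wall-clock time.
AUDIO_MINUTES = 150  # length of the benchmark audio


def real_time_factor(wall_min: int, wall_sec: int, audio_minutes: int = AUDIO_MINUTES) -> float:
    """Real-time factor for a run that took wall_min minutes and wall_sec seconds."""
    wall_seconds = wall_min * 60 + wall_sec
    return audio_minutes * 60 / wall_seconds


fp32_baseline = real_time_factor(31, 1)  # Transformers fp32, no batching
fastest = real_time_factor(5, 2)         # fp16 + batching [24] + bettertransformer

print(f"fp32 baseline: {fp32_baseline:.1f}x real time")   # ~4.8x
print(f"fastest run:   {fastest:.1f}x real time")          # ~29.8x
print(f"end-to-end speedup: {fastest / fp32_baseline:.1f}x")  # ~6.2x
```

In other words, the stack of optimisations below takes the pipeline from roughly 5× real time to roughly 30× real time on the same free T4.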
43 | 44 | Herein, we'll dive into optimisations that can make Whisper faster, for fun and profit! Our goal is to transcribe a 2-3 hour audio file as fast as possible. We'll start with the most basic usage and work our way up to make it fast! 45 | 46 | The only fitting test audio for our benchmark would be [Lex interviewing Sam Altman](https://www.youtube.com/watch?v=L_Guz73e6fw&t=8s). We'll use the audio file corresponding to his podcast. I uploaded it to a wee dataset on the Hub [here](https://huggingface.co/datasets/reach-vb/random-audios/blob/main/sam_altman_lex_podcast_367.flac). 47 | 48 | ## Installation 49 | 50 | ```bash 51 | pip install -q --upgrade torch torchvision torchaudio 52 | pip install -q git+https://github.com/huggingface/transformers 53 | pip install -q accelerate optimum 54 | pip install -q ipython-autotime 55 | ``` 56 | 57 | Let's download the audio file for the podcast. 58 | 59 | ```bash 60 | wget https://huggingface.co/datasets/reach-vb/random-audios/resolve/main/sam_altman_lex_podcast_367.flac 61 | ``` 62 | 63 | ## Base Case 64 | 65 | ```python 66 | import torch 67 | from transformers import pipeline 68 | 69 | pipe = pipeline("automatic-speech-recognition", 70 | "openai/whisper-large-v2", 71 | device="cuda:0") 72 | ``` 73 | 74 | ```python 75 | outputs = pipe("sam_altman_lex_podcast_367.flac", 76 | chunk_length_s=30, 77 | return_timestamps=True) 78 | 79 | outputs["text"][:200] 80 | ``` 81 | 82 | Sample output: 83 | ``` 84 | We have been a misunderstood and badly mocked org for a long time. 
When we started, we announced the org at the end of 2015 and said we were going to work on AGI, people thought we were batshit insan 85 | ``` 86 | 87 | *Time to transcribe the entire podcast*: **31min 1s** 88 | 89 | ## Batching 90 | 91 | ```python 92 | outputs = pipe("sam_altman_lex_podcast_367.flac", 93 | chunk_length_s=30, 94 | batch_size=8, 95 | return_timestamps=True) 96 | 97 | outputs["text"][:200] 98 | ``` 99 | 100 | *Time to transcribe the entire podcast*: **13min 19s** 101 | 102 | ## Half-Precision 103 | 104 | ```python 105 | pipe = pipeline("automatic-speech-recognition", 106 | "openai/whisper-large-v2", 107 | torch_dtype=torch.float16, 108 | device="cuda:0") 109 | ``` 110 | 111 | ```python 112 | outputs = pipe("sam_altman_lex_podcast_367.flac", 113 | chunk_length_s=30, 114 | batch_size=16, 115 | return_timestamps=True) 116 | 117 | outputs["text"][:200] 118 | ``` 119 | 120 | *Time to transcribe the entire podcast*: **6min 13s** 121 | 122 | ## BetterTransformer w/ Optimum 123 | 124 | ```python 125 | pipe = pipeline("automatic-speech-recognition", 126 | "openai/whisper-large-v2", 127 | torch_dtype=torch.float16, 128 | device="cuda:0") 129 | 130 | pipe.model = pipe.model.to_bettertransformer() 131 | ``` 132 | 133 | ```python 134 | outputs = pipe("sam_altman_lex_podcast_367.flac", 135 | chunk_length_s=30, 136 | batch_size=24, 137 | return_timestamps=True) 138 | 139 | outputs["text"][:200] 140 | ``` 141 | 142 | *Time to transcribe the entire podcast*: **5min 2s** 143 | 144 | ## Roadmap 145 | 146 | - [ ] Add benchmarks for Whisper.cpp 147 | - [x] Add benchmarks for 4-bit inference 148 | - [ ] Add a light CLI script 149 | - [ ] Deployment script with Inference API 150 | 151 | ## Community showcase 152 | 153 | @ochen1 created a brilliant MVP for a CLI here: https://github.com/ochen1/insanely-fast-whisper-cli (Try it out now!) 
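As a closing footnote on why `chunk_length_s=30` plus batching gives such large wins above: the pipeline splits long audio into 30-second chunks and runs them through the model together, so the number of sequential forward passes drops dramatically. A rough sketch of that arithmetic (illustrative only - the real pipeline overlaps chunks with a small stride, so exact counts differ):

```python
import math


def forward_passes(audio_minutes: float, chunk_length_s: float = 30.0, batch_size: int = 1) -> int:
    """Approximate number of sequential model calls for a chunked pipeline.

    Ignores the chunk overlap (stride) the real pipeline uses, so this is a
    lower bound on the chunk count - good enough to see the trend.
    """
    n_chunks = math.ceil(audio_minutes * 60 / chunk_length_s)
    return math.ceil(n_chunks / batch_size)


# 150 minutes of audio -> 300 chunks of 30 s each
print(forward_passes(150, batch_size=1))   # 300 sequential passes
print(forward_passes(150, batch_size=8))   # 38
print(forward_passes(150, batch_size=24))  # 13
```

Going from 300 sequential decoder runs to 13 batched ones is where most of the speedup in the table comes from; the remaining gains come from fp16, BetterTransformer, and quantization.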
154 | -------------------------------------------------------------------------------- /4bit_benchmark.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": { 6 | "id": "view-in-github", 7 | "colab_type": "text" 8 | }, 9 | "source": [ 10 | "\"Open" 11 | ] 12 | }, 13 | { 14 | "cell_type": "markdown", 15 | "metadata": { 16 | "id": "Si5t4N2pot37" 17 | }, 18 | "source": [ 19 | "# 4-bit quantization benchmark\n", 20 | "\n", 21 | "In addition to the batching and BetterTransformer optimizations, we can apply quantization. We test these strategies with the same audio that was used in the *transformers* 🤗 and faster-whisper benchmarks.\n" 22 | ] 23 | }, 24 | { 25 | "cell_type": "code", 26 | "execution_count": null, 27 | "metadata": { 28 | "colab": { 29 | "base_uri": "https://localhost:8080/" 30 | }, 31 | "id": "y4nps5BwuzSm", 32 | "outputId": "272258bd-1b2b-4c65-885c-fb76bd99be9c" 33 | }, 34 | "outputs": [ 35 | { 36 | "output_type": "stream", 37 | "name": "stdout", 38 | "text": [ 39 | "--2023-10-24 22:10:15-- https://huggingface.co/datasets/reach-vb/random-audios/resolve/main/sam_altman_lex_podcast_367.flac\n", 40 | "Resolving huggingface.co (huggingface.co)... 18.172.134.124, 18.172.134.4, 18.172.134.88, ...\n", 41 | "Connecting to huggingface.co (huggingface.co)|18.172.134.124|:443... connected.\n", 42 | "HTTP request sent, awaiting response... 
302 Found\n", 43 | "Location: https://cdn-lfs.huggingface.co/repos/96/e4/96e4f69cd112b019dd764318570e47e5fe96de53d8c32a99d745e72d9086e355/b2fd593ce144a8d904cf49a4ed77ed06eb50644a053dddd280c81a3ef94fb60e?response-content-disposition=attachment%3B+filename*%3DUTF-8%27%27sam_altman_lex_podcast_367.flac%3B+filename%3D%22sam_altman_lex_podcast_367.flac%22%3B&response-content-type=audio%2Fx-flac&Expires=1698444615&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTY5ODQ0NDYxNX19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2RuLWxmcy5odWdnaW5nZmFjZS5jby9yZXBvcy85Ni9lNC85NmU0ZjY5Y2QxMTJiMDE5ZGQ3NjQzMTg1NzBlNDdlNWZlOTZkZTUzZDhjMzJhOTlkNzQ1ZTcyZDkwODZlMzU1L2IyZmQ1OTNjZTE0NGE4ZDkwNGNmNDlhNGVkNzdlZDA2ZWI1MDY0NGEwNTNkZGRkMjgwYzgxYTNlZjk0ZmI2MGU%7EcmVzcG9uc2UtY29udGVudC1kaXNwb3NpdGlvbj0qJnJlc3BvbnNlLWNvbnRlbnQtdHlwZT0qIn1dfQ__&Signature=NYFNSfn8bkw-aPVfiCC2jUTr3EeKLK8ZhxBamDN1NoySPhv2jzqyuOmbLvcz6vDGtTqVm02Zah6JTpO37B9HnQwH7Vw%7EtO2GPxZJD%7EVTjlubzSYStcGVgoW1t4rXpzBg5KQY6fobHGGMBZNsOhpWKhnNDA3Lo-uP53%7EZDQM77Nip%7EQmPrLgy1A6A4-uTNgSTKxASTQhNsPVudv4og9PM35XpolPgVaXAAC8-wUxQF3BWEW0pxbretSGjWlajQLptclxw2UyXxNNcFuycFhZflnqIF-C1jlHgWCvL2FTPAmej5UhrCe%7E4uIQAhm-Q%7ExdZsbgC2B-NOi8zw7EPrtR5yw__&Key-Pair-Id=KVTP0A1DKRTAX [following]\n", 44 | "--2023-10-24 22:10:15-- 
https://cdn-lfs.huggingface.co/repos/96/e4/96e4f69cd112b019dd764318570e47e5fe96de53d8c32a99d745e72d9086e355/b2fd593ce144a8d904cf49a4ed77ed06eb50644a053dddd280c81a3ef94fb60e?response-content-disposition=attachment%3B+filename*%3DUTF-8%27%27sam_altman_lex_podcast_367.flac%3B+filename%3D%22sam_altman_lex_podcast_367.flac%22%3B&response-content-type=audio%2Fx-flac&Expires=1698444615&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTY5ODQ0NDYxNX19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2RuLWxmcy5odWdnaW5nZmFjZS5jby9yZXBvcy85Ni9lNC85NmU0ZjY5Y2QxMTJiMDE5ZGQ3NjQzMTg1NzBlNDdlNWZlOTZkZTUzZDhjMzJhOTlkNzQ1ZTcyZDkwODZlMzU1L2IyZmQ1OTNjZTE0NGE4ZDkwNGNmNDlhNGVkNzdlZDA2ZWI1MDY0NGEwNTNkZGRkMjgwYzgxYTNlZjk0ZmI2MGU%7EcmVzcG9uc2UtY29udGVudC1kaXNwb3NpdGlvbj0qJnJlc3BvbnNlLWNvbnRlbnQtdHlwZT0qIn1dfQ__&Signature=NYFNSfn8bkw-aPVfiCC2jUTr3EeKLK8ZhxBamDN1NoySPhv2jzqyuOmbLvcz6vDGtTqVm02Zah6JTpO37B9HnQwH7Vw%7EtO2GPxZJD%7EVTjlubzSYStcGVgoW1t4rXpzBg5KQY6fobHGGMBZNsOhpWKhnNDA3Lo-uP53%7EZDQM77Nip%7EQmPrLgy1A6A4-uTNgSTKxASTQhNsPVudv4og9PM35XpolPgVaXAAC8-wUxQF3BWEW0pxbretSGjWlajQLptclxw2UyXxNNcFuycFhZflnqIF-C1jlHgWCvL2FTPAmej5UhrCe%7E4uIQAhm-Q%7ExdZsbgC2B-NOi8zw7EPrtR5yw__&Key-Pair-Id=KVTP0A1DKRTAX\n", 45 | "Resolving cdn-lfs.huggingface.co (cdn-lfs.huggingface.co)... 108.156.120.58, 108.156.120.59, 108.156.120.55, ...\n", 46 | "Connecting to cdn-lfs.huggingface.co (cdn-lfs.huggingface.co)|108.156.120.58|:443... connected.\n", 47 | "HTTP request sent, awaiting response... 
200 OK\n", 48 | "Length: 351705020 (335M) [audio/x-flac]\n", 49 | "Saving to: ‘sam_altman_lex_podcast_367.flac’\n", 50 | "\n", 51 | "sam_altman_lex_podc 100%[===================>] 335.41M 28.5MB/s in 12s \n", 52 | "\n", 53 | "2023-10-24 22:10:27 (29.1 MB/s) - ‘sam_altman_lex_podcast_367.flac’ saved [351705020/351705020]\n", 54 | "\n", 55 | "ffmpeg version 4.4.2-0ubuntu0.22.04.1 Copyright (c) 2000-2021 the FFmpeg developers\n", 56 | " built with gcc 11 (Ubuntu 11.2.0-19ubuntu1)\n", 57 | " configuration: --prefix=/usr --extra-version=0ubuntu0.22.04.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libdav1d --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librabbitmq --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzimg --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-pocketsphinx --enable-librsvg --enable-libmfx --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared\n", 58 | " libavutil 56. 70.100 / 56. 70.100\n", 59 | " libavcodec 58.134.100 / 58.134.100\n", 60 | " libavformat 58. 76.100 / 58. 76.100\n", 61 | " libavdevice 58. 13.100 / 58. 13.100\n", 62 | " libavfilter 7.110.100 / 7.110.100\n", 63 | " libswscale 5. 
9.100 / 5. 9.100\n", 64 | " libswresample 3. 9.100 / 3. 9.100\n", 65 | " libpostproc 55. 9.100 / 55. 9.100\n", 66 | "Input #0, flac, from 'sam_altman_lex_podcast_367.flac':\n", 67 | " Metadata:\n", 68 | " encoder : Lavf58.29.100\n", 69 | " Duration: N/A, start: 0.000000, bitrate: N/A\n", 70 | " Stream #0:0: Audio: flac, 48000 Hz, stereo, s16\n", 71 | "Stream mapping:\n", 72 | " Stream #0:0 -> #0:0 (flac (native) -> pcm_s16le (native))\n", 73 | "Press [q] to stop, [?] for help\n", 74 | "Output #0, wav, to 'sam_altman_lex_podcast_367.wav':\n", 75 | " Metadata:\n", 76 | " ISFT : Lavf58.76.100\n", 77 | " Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 16000 Hz, mono, s16, 256 kb/s\n", 78 | " Metadata:\n", 79 | " encoder : Lavc58.134.100 pcm_s16le\n", 80 | "size= 269887kB time=02:23:56.37 bitrate= 256.0kbits/s speed= 544x \n", 81 | "video:0kB audio:269887kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.000028%\n" 82 | ] 83 | } 84 | ], 85 | "source": [ 86 | "!wget https://huggingface.co/datasets/reach-vb/random-audios/resolve/main/sam_altman_lex_podcast_367.flac\n", 87 | "!ffmpeg -y -i sam_altman_lex_podcast_367.flac -ac 1 -ar 16000 sam_altman_lex_podcast_367.wav\n", 88 | "\n" 89 | ] 90 | }, 91 | { 92 | "cell_type": "markdown", 93 | "metadata": { 94 | "id": "jSPwqC--u4VJ" 95 | }, 96 | "source": [ 97 | "First, install all the requirements." 98 | ] 99 | }, 100 | { 101 | "cell_type": "code", 102 | "execution_count": null, 103 | "metadata": { 104 | "colab": { 105 | "base_uri": "https://localhost:8080/" 106 | }, 107 | "id": "K3JfjudEoXL1", 108 | "outputId": "29289177-41ae-4d5e-9180-5978d822fe6c" 109 | }, 110 | "outputs": [ 111 | { 112 | "output_type": "stream", 113 | "name": "stdout", 114 | "text": [ 115 | " Installing build dependencies ... \u001b[?25l\u001b[?25hdone\n", 116 | " Getting requirements to build wheel ... \u001b[?25l\u001b[?25hdone\n", 117 | " Preparing metadata (pyproject.toml) ... 
\u001b[?25l\u001b[?25hdone\n", 118 | "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m302.0/302.0 kB\u001b[0m \u001b[31m4.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", 119 | "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m3.8/3.8 MB\u001b[0m \u001b[31m13.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", 120 | "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m1.3/1.3 MB\u001b[0m \u001b[31m23.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", 121 | "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m295.0/295.0 kB\u001b[0m \u001b[31m27.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", 122 | "\u001b[?25h Building wheel for transformers (pyproject.toml) ... \u001b[?25l\u001b[?25hdone\n", 123 | "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m261.0/261.0 kB\u001b[0m \u001b[31m5.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", 124 | "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m301.0/301.0 kB\u001b[0m \u001b[31m19.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", 125 | "\u001b[?25h Installing build dependencies ... \u001b[?25l\u001b[?25hdone\n", 126 | " Getting requirements to build wheel ... \u001b[?25l\u001b[?25hdone\n", 127 | " Preparing metadata (pyproject.toml) ... 
\u001b[?25l\u001b[?25hdone\n", 128 | "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m46.0/46.0 kB\u001b[0m \u001b[31m6.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", 129 | "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m493.7/493.7 kB\u001b[0m \u001b[31m38.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", 130 | "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m1.3/1.3 MB\u001b[0m \u001b[31m64.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", 131 | "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m86.8/86.8 kB\u001b[0m \u001b[31m12.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", 132 | "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m115.3/115.3 kB\u001b[0m \u001b[31m12.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", 133 | "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m134.8/134.8 kB\u001b[0m \u001b[31m19.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", 134 | "\u001b[?25h Building wheel for optimum (pyproject.toml) ... 
\u001b[?25l\u001b[?25hdone\n", 135 | "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m1.6/1.6 MB\u001b[0m \u001b[31m7.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", 136 | "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m92.6/92.6 MB\u001b[0m \u001b[31m9.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", 137 | "\u001b[?25h" 138 | ] 139 | } 140 | ], 141 | "source": [ 142 | "!pip install -q --upgrade torch torchvision torchaudio\n", 143 | "!pip install -q -U git+https://github.com/huggingface/transformers\n", 144 | "!pip install -q -U accelerate optimum\n", 145 | "!pip install -q ipython-autotime\n", 146 | "!pip install -q -U bitsandbytes" 147 | ] 148 | }, 149 | { 150 | "cell_type": "markdown", 151 | "metadata": { 152 | "id": "NIuF191vQW00" 153 | }, 154 | "source": [ 155 | "Let's load the three parts of the Whisper model - the tokenizer, the feature extractor, and the model itself - with the 4-bit quantization option.\n", 156 | "\n", 157 | "We can then pass each of these elements as arguments to the automatic speech recognition pipeline." 158 | ] 159 | }, 160 | { 161 | "cell_type": "code", 162 | "execution_count": null, 163 | "metadata": { 164 | "colab": { 165 | "base_uri": "https://localhost:8080/", 166 | "height": 369, 167 | "referenced_widgets": [ 168 | "926b11c35f0944268c819f58302491b0", 169 | "b53b00de8b604c6ebc523d86859db60a", 170 | "574c13f19e3a48d09fb84e4ac8aac5d2", 171 | "854c3e796feb4112be3a81d85413b193", 172 | "4636b59bf73a4d6db9d49b3c350681d5", 173 | "d3b457f10abc4156b026014c3afd365f", 174 | "5c97b7e5bf5f459abb143b23fd8d9e53", 175 | "de240d18f34d4e9bbae2a61ad595b5f2", 176 | "28f7c539f7954c9d895c2198b165127b", 177 | "773146bf25864a0eb2a6f76b9b496df8", 178 | "ae7b4aeb03f44b77a7f977cf239c8ee5", 179 | "cc6fe376f97940309d2ba710989bd5ea", 180 | "5e917e3a099749e7bda54e8f131d13b2", 181 | "a3b1518754214b55b8c3f0160acede0b", 182 | "c44e375aa1eb4831b013ed6c23c2a994", 183 | "9b4c74e779524919adb8da437b7031b1", 184 | 
"d90688bcda954ddca4881bc707023449", 185 | "4de3b05962db430c80e98a953246c3a2", 186 | "26529c108f3a47f6a7deabebfcb2bf24", 187 | "e779aa779d4848d7a566ff641021fc3e", 188 | "e7ff4d6dd1a241f88d56d38d852502cf", 189 | "90cd55f917494f06b8044acd7334d6dc", 190 | "5403d692d1f747e49c40332aede9523d", 191 | "0138f38060da4c71b17c2a4472ac1274", 192 | "eab40fa0471d4b648cb0de6105c59565", 193 | "88617fe6b056498999b9901799fb6954", 194 | "45af6ac24b6041b9b49508def4c58933", 195 | "6ec5ae82e33b48e5806ceeefeafa4243", 196 | "26c764182b2942b594a4394e20e167c5", 197 | "5e41358d95d5473782890d04bbb6331f", 198 | "8df10f38a603448fb6a845b51947e1b8", 199 | "40229b87dfa746cdab4fe4407142faf0", 200 | "73e7e79768fc47869aa25b93f7f7497b", 201 | "5d3d1055497b40ea9094828727e629af", 202 | "3130e425bd3a41a394fdf7c9574fc6b4", 203 | "3b7c8a98adb541d6b4cffe256d21a5ed", 204 | "c272e0d4be5446ec963d30164321771d", 205 | "da3f86d973e4441f8438324799357b22", 206 | "87ce9180f5e14385bba8b3572dc9dcf9", 207 | "2d480c1a51024a2885083eed2894f873", 208 | "ded16b1910fa438e80069413512ce3ba", 209 | "5eb9628240cd4bfa8302d1807fb3228e", 210 | "594f98845aae4adfaeb7a7ca67c872d3", 211 | "7f2a65e8c1dc4143be40b652ac5d203a", 212 | "d00a3e670ad54a609f7f70e78f8f65be", 213 | "430652979b9a40278f185d1e9446ffa8", 214 | "1ae5ad016c664416953319088259253e", 215 | "123b0167a8634d4db806be24e6a76489", 216 | "bc10014f5f334676817a1882661af9ba", 217 | "cbada1101d8d41ec86307f5cd784301b", 218 | "9c7acb2b21f54ca8a7024f296917295c", 219 | "5f497e9961384904aed853bbcc465369", 220 | "95211eea82734f4a8ce24176e73e22d7", 221 | "69fb9fbeecab47e1a6166e597e6cee6a", 222 | "1d7ef876dfee47a18fbafe4127ccb4fe", 223 | "0f6b64fae7854f14849f6d6dbd4aadc8", 224 | "7b9b603d3f8f41cabbabffde3287fc7c", 225 | "1ab6f67dec4e472ea0bab67b8ca98634", 226 | "7b474a0cbb4d4d189aa7974f5e848177", 227 | "52cf5b3a91b64ab8ba42936a07cc80b9", 228 | "78e71e3f711b4e1fad1e88558ff2354e", 229 | "1b2599e1da164d5d99f80bc558bdab0e", 230 | "4cf367ad0600469b995491a3d09fbc84", 231 | 
"2bc38624824b4f78b4dc6e9114eba23f", 232 | "3c0422ccc2d44ca9a7e52e4e65ee2e5d", 233 | "f9b4667809824d84a764647809644696", 234 | "962dbebd379143608c5719d18cf9d233", 235 | "6612d5753613490cbf77ba64c18d627c", 236 | "7bcb681c16b34c81a82e4ba117d8533d", 237 | "6c52d5c88bbc4170b5fe0c031a78de6d", 238 | "4919e8c677da4008a8654a0caf0dd469", 239 | "0ca7f452fc464f45804a5ed21cca01c5", 240 | "e22f8e229593455cacf9e8576a395e73", 241 | "fe823d8a1e144969bb0359c54c798333", 242 | "2245d2d73b344935bb4e1a6acc6c3b78", 243 | "7fe16a004a2a4082abaef38d93fe8a9a", 244 | "6eee953925634deab14c5d7ecb7d313b", 245 | "436ba6be7cbd4cefa91794461622898c", 246 | "9beb9af2850349529ca841f0219f2baa", 247 | "1034e90f597647eaaf36c2083841672c", 248 | "dc4f29cd20f64d27be449dccdaefa9bf", 249 | "513218200bce4890abd8df30d00b3a9d", 250 | "5f22baaff33546028ffd04a4c5bf2974", 251 | "758571108186454b824ab93d18327880", 252 | "32a9e92d26e04c85a79cbc6c4bdf2856", 253 | "cd8e01ba46474b92a4ebebfa870edd71", 254 | "11e530a4f8a0478494d5bafd022683d8", 255 | "956c27a20d1e49c185df6bdfcd1f0022", 256 | "bc4d185198654d4ba34259b5fd4a0d83", 257 | "7d5fb2fa6a1a44159eb5784be190605e", 258 | "e45a20f246184e34995a051f80828e47", 259 | "75ab66b725a34743bafaffa874842f9c", 260 | "25d55e9c34f94100b6ffa5dbc337c77d", 261 | "19afc6b3374c406bb39aa512ab44ab5a", 262 | "f95695509dbb4497b825a7c30e8384fa", 263 | "2533e2b9f68e4aec96ac9af584af1789", 264 | "3ad4c2acbc974296b5316f5f41368cf9", 265 | "1c39f8bfee58400292ae311e078b6e3f", 266 | "e59636747e4e43fbb0c6f71c61bc5354", 267 | "d764da5f68bd4247b212bbc33eda42d7", 268 | "addf6cd54b0746ad8345ce9349188193", 269 | "898993b4beeb47c481e9e5fcb079ae3f", 270 | "b6d131054ecd4a1bb30fbc7fc5159744", 271 | "b63eecd647c74ea49ab04a7384ae0112", 272 | "0708ac5685b44ea58e66422a49a93fcb", 273 | "a544886c840e437292b2da734b560d37", 274 | "9c1052a5295042eeaa8ab79063ab5c02", 275 | "525e02b5e993483db38ac8565b490d08", 276 | "097a12a032ec4ceaa0285047f12d5389", 277 | "67127ed03e924b1f8792069a5d5ab4ee", 278 | 
"cfa176ae256d48519561200cf52731b2", 279 | "8722945bbfd74e7480b0ca812b7d8d70", 280 | "f9ddbb4712014448bd8d2068311a177a", 281 | "5b7f93d50e3645e69335e10dd43f8e3e", 282 | "400f837e2a11437aad4a23e0e6159b0f", 283 | "7be108c0d7eb4f169ddd95714b0161a3", 284 | "047d705c45794960a9f1afc8440ea3b5", 285 | "f2f401c93b9341dc964792133343b3e1", 286 | "1b1199c0aa37475fa88f1dfa88311a7a", 287 | "d2c595c704c14156836506a61de40719", 288 | "607a40d910d74950801222bd86d3de7e" 289 | ] 290 | }, 291 | "id": "sqbHjcEJok-n", 292 | "outputId": "2132f6a0-217b-4eb1-bd17-78001762f05e" 293 | }, 294 | "outputs": [ 295 | { 296 | "output_type": "display_data", 297 | "data": { 298 | "text/plain": [ 299 | "Downloading (…)rocessor_config.json: 0%| | 0.00/185k [00:00