├── README.md
├── approach.png
├── cog.yaml
├── language-breakdown.svg
└── predict.py
/README.md:
--------------------------------------------------------------------------------
1 |
2 | **NOTE**:
3 |
4 | Some folks reported a significant slowdown in the latest version, including the `large-v2` checkpoint, so it has been temporarily removed from https://replicate.com/openai/whisper but added at https://replicate.com/cjwbw/whisper instead if you want to access it.
5 |
6 | I have personally tested both versions but did not find the reported slow-down. The issue has been raised to the team to decide how to proceed with merging back into the mainline model.
7 |
8 | # Whisper
9 | [[Blog]](https://openai.com/blog/whisper)
10 | [[Paper]](https://cdn.openai.com/papers/whisper.pdf)
11 | [[Model card]](model-card.md)
12 | [[Colab example]](https://colab.research.google.com/github/openai/whisper/blob/master/notebooks/LibriSpeech.ipynb)
13 | [[Replicate demo]](https://replicate.com/openai/whisper)
14 |
15 | Whisper is a general-purpose speech recognition model. It is trained on a large dataset of diverse audio and is also a multi-task model that can perform multilingual speech recognition as well as speech translation and language identification.
16 |
17 |
18 | ## Approach
19 |
20 | 
21 |
22 | A Transformer sequence-to-sequence model is trained on various speech processing tasks, including multilingual speech recognition, speech translation, spoken language identification, and voice activity detection. All of these tasks are jointly represented as a sequence of tokens to be predicted by the decoder, allowing for a single model to replace many different stages of a traditional speech processing pipeline. The multitask training format uses a set of special tokens that serve as task specifiers or classification targets.
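The multitask prompt described above can be sketched as follows. The token names mirror Whisper's published special tokens (e.g. `<|startoftranscript|>`, `<|transcribe|>`), but the real vocabulary lives in the tokenizer, so treat this as an illustration of the format rather than the model's actual encoding:

```python
# Sketch of the multitask decoder prompt: special tokens select the
# language and the task before any text tokens are predicted.
def build_prompt(language="en", task="transcribe", timestamps=False):
    tokens = ["<|startoftranscript|>", f"<|{language}|>", f"<|{task}|>"]
    if not timestamps:
        tokens.append("<|notimestamps|>")
    return tokens

# Transcribe English audio without timestamps:
print(build_prompt("en", "transcribe"))
# Translate French speech into English:
print(build_prompt("fr", "translate"))
```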
23 |
24 |
25 | ## Setup
26 |
27 | We used Python 3.9.9 and [PyTorch](https://pytorch.org/) 1.10.1 to train and test our models, but the codebase is expected to be compatible with Python 3.7 or later and recent PyTorch versions. The codebase also depends on a few Python packages, most notably [HuggingFace Transformers](https://huggingface.co/docs/transformers/index) for their fast tokenizer implementation and [ffmpeg-python](https://github.com/kkroening/ffmpeg-python) for reading audio files. The following command will pull and install the latest commit from this repository, along with its Python dependencies:
28 |
29 |     pip install git+https://github.com/openai/whisper.git
30 |
31 | To update the package to the latest version of this repository, please run:
32 |
33 |     pip install --upgrade --no-deps --force-reinstall git+https://github.com/openai/whisper.git
34 |
35 | It also requires the command-line tool [`ffmpeg`](https://ffmpeg.org/) to be installed on your system, which is available from most package managers:
36 |
37 | ```bash
38 | # on Ubuntu or Debian
39 | sudo apt update && sudo apt install ffmpeg
40 |
41 | # on Arch Linux
42 | sudo pacman -S ffmpeg
43 |
44 | # on macOS using Homebrew (https://brew.sh/)
45 | brew install ffmpeg
46 |
47 | # on Windows using Chocolatey (https://chocolatey.org/)
48 | choco install ffmpeg
49 |
50 | # on Windows using Scoop (https://scoop.sh/)
51 | scoop install ffmpeg
52 | ```
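Before running Whisper, it can be handy to confirm that `ffmpeg` is actually discoverable. A minimal check using only the Python standard library (the `have_ffmpeg` helper is our own, not part of Whisper):

```python
import shutil

def have_ffmpeg():
    """Return True if the ffmpeg binary is discoverable on PATH."""
    return shutil.which("ffmpeg") is not None

if not have_ffmpeg():
    print("ffmpeg not found; install it with one of the commands above")
```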
53 |
54 | You may need [`rust`](http://rust-lang.org) installed as well, in case [tokenizers](https://pypi.org/project/tokenizers/) does not provide a pre-built wheel for your platform. If you see installation errors during the `pip install` command above, please follow the [Getting started page](https://www.rust-lang.org/learn/get-started) to install the Rust development environment. Additionally, you may need to configure the `PATH` environment variable, e.g. `export PATH="$HOME/.cargo/bin:$PATH"`. If the installation fails with `No module named 'setuptools_rust'`, you need to install `setuptools_rust`, e.g. by running:
55 |
56 | ```bash
57 | pip install setuptools-rust
58 | ```
59 |
60 |
61 | ## Available models and languages
62 |
63 | There are five model sizes, four with English-only versions, offering speed and accuracy tradeoffs. Below are the names of the available models and their approximate memory requirements and relative speed.
64 |
65 |
66 | | Size | Parameters | English-only model | Multilingual model | Required VRAM | Relative speed |
67 | |:------:|:----------:|:------------------:|:------------------:|:-------------:|:--------------:|
68 | | tiny | 39 M | `tiny.en` | `tiny` | ~1 GB | ~32x |
69 | | base | 74 M | `base.en` | `base` | ~1 GB | ~16x |
70 | | small | 244 M | `small.en` | `small` | ~2 GB | ~6x |
71 | | medium | 769 M | `medium.en` | `medium` | ~5 GB | ~2x |
72 | | large | 1550 M | N/A | `large` | ~10 GB | 1x |
73 |
74 | For English-only applications, the `.en` models tend to perform better, especially for the `tiny.en` and `base.en` models. We observed that the difference becomes less significant for the `small.en` and `medium.en` models.
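As a rough guide, the VRAM figures in the table can drive model selection. The helper below is a hypothetical convenience for illustration, not part of the Whisper API; the thresholds come straight from the table above:

```python
# Approximate VRAM requirements (GB) from the table above, smallest first.
VRAM_GB = [("tiny", 1), ("base", 1), ("small", 2), ("medium", 5), ("large", 10)]

def pick_model(available_gb, english_only=False):
    """Return the largest model name that fits in available_gb of VRAM."""
    chosen = None
    for name, need in VRAM_GB:
        if need <= available_gb:
            chosen = name
    if chosen is None:
        raise ValueError("not enough VRAM for any Whisper model")
    # "large" has no English-only variant.
    if english_only and chosen != "large":
        chosen += ".en"
    return chosen

print(pick_model(6))                      # fits up to medium (~5 GB)
print(pick_model(4, english_only=True))   # fits up to small.en (~2 GB)
```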
75 |
76 | Whisper's performance varies widely depending on the language. The figure below shows a WER breakdown by language on the Fleurs dataset, using the `large` model. More WER and BLEU scores for the other models and datasets can be found in Appendix D of [the paper](https://cdn.openai.com/papers/whisper.pdf).
77 |
78 | 
79 |
80 |
81 | ## More examples
82 |
83 | Please use the [🙌 Show and tell](https://github.com/openai/whisper/discussions/categories/show-and-tell) category in Discussions for sharing more example usages of Whisper and third-party extensions such as web demos, integrations with other tools, ports for different platforms, etc.
84 |
85 |
86 | ## License
87 |
88 | The code and the model weights of Whisper are released under the MIT License. See [LICENSE](LICENSE) for further details.
89 |
--------------------------------------------------------------------------------
/approach.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/chenxwh/cog-whisper/7416f6e1c2dbb05aff8cfdea2e59615d8de0cf42/approach.png
--------------------------------------------------------------------------------