AIXerum / faster-whisper

faster-whisper is a reimplementation of OpenAI's Whisper model using CTranslate2, which is a fast inference engine for Transformer models. This implementation is up to 4 times faster than openai/whisper for the same accuracy while using less memory. The efficiency can be further improved with 8-bit quantization on both CPU and GPU.
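The description above can be illustrated with a short usage sketch. This is a minimal example assuming the faster-whisper package is installed and an audio file named `audio.mp3` exists; the model size `"small"` and the file name are illustrative choices, not part of this page. Passing `compute_type="int8"` selects the 8-bit quantization mentioned above.

```python
# Sketch of transcription with faster-whisper's quantized CTranslate2 backend.
# Assumes: `pip install faster-whisper` and an "audio.mp3" file on disk.

def format_segment(start: float, end: float, text: str) -> str:
    """Render one transcription segment as a timestamped line."""
    return f"[{start:.2f}s -> {end:.2f}s] {text.strip()}"

if __name__ == "__main__":
    from faster_whisper import WhisperModel

    # compute_type="int8" enables 8-bit quantization (works on CPU and GPU).
    model = WhisperModel("small", device="cpu", compute_type="int8")
    segments, info = model.transcribe("audio.mp3", beam_size=5)
    print("Detected language:", info.language)
    for seg in segments:
        print(format_segment(seg.start, seg.end, seg.text))
```

Note that `transcribe` returns a lazy generator of segments, so the actual inference runs as the loop consumes them.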
Updated Oct 18, 2024

Alternatives and similar repositories for faster-whisper

Users who are interested in faster-whisper are comparing it to the libraries listed below.
