whisper.unity


These are Unity3d bindings for whisper.cpp. They provide high-performance inference of OpenAI's Whisper automatic speech recognition (ASR) model running on your local machine.

This repository comes with "ggml-tiny.bin" model weights. This is the smallest and fastest version of the Whisper model, but its quality is worse compared to other models. If you want better quality, check out the other model weights.

Main features:

  • Multilingual, supports around 60 languages
  • Can translate one language to another (e.g. German speech to English text)
  • Different model sizes offering speed and accuracy tradeoffs
  • Runs on the user's local device without an Internet connection
  • Free and open source, can be used in commercial projects

Supported platforms:

  • Windows
  • macOS
  • Linux
  • iOS / visionOS
  • Android

Samples

multilang.mp4

"whisper-small.bin" model tested in English, German and Russian from microphone

tiny.mp4

"whisper-tiny.bin" model, 50x faster than realtime on Macbook with M1 Pro

Getting started

Clone this repository and open it as a regular Unity project. It comes with examples and the tiny multilingual model weights.

Alternatively, you can add this repository to your project as a Unity Package. Add it to your Unity Package Manager using this git URL:

https://github.com/Macoron/whisper.unity.git?path=/Packages/com.whisper.unity
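
Once imported, a minimal transcription script might look like the following sketch. It assumes the package's WhisperManager component exposes an async GetTextAsync(AudioClip) method whose result carries the transcribed text; these names are assumptions here, so verify them against the example scenes included with the project.

using UnityEngine;
using Whisper; // namespace assumed; check the package sources if it differs

public class TranscribeExample : MonoBehaviour
{
    // Assign both references in the Inspector.
    public WhisperManager whisper; // scene object pointing at your model weights
    public AudioClip clip;         // audio clip to transcribe

    private async void Start()
    {
        // GetTextAsync(AudioClip) and the Result property are assumptions;
        // compare with the example scenes shipped with the package.
        var res = await whisper.GetTextAsync(clip);
        Debug.Log(res.Result);
    }
}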

GPU Acceleration

Whisper supports GPU Acceleration using Vulkan (Windows, Linux) or Metal (macOS, iOS, and visionOS), which can drastically improve performance on some hardware.

To activate GPU usage, find the WhisperManager in your scene and enable the Use GPU toggle. Whisper will attempt to use GPU inference and fall back to CPU inference if the hardware is unsupported.

CUDA is no longer supported and has been replaced by Vulkan. If you require CUDA support, please use an earlier release.

whisper.cpp supports Metal only on the Apple7 GPU family or newer (starting with the Apple M1 chip). On older hardware, inference will fall back to CPU.
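
The toggle can also be set from a script before the model is initialized. The sketch below assumes the inspector toggle maps to a public useGpu field; that name is an assumption, so confirm it in the WhisperManager source.

using UnityEngine;
using Whisper; // namespace assumed

public class GpuToggleExample : MonoBehaviour
{
    public WhisperManager whisper;

    private void Awake()
    {
        // Mirrors the "Use GPU" inspector toggle; the useGpu field name is an
        // assumption. Set it before the model is initialized. Whisper falls
        // back to CPU inference if the hardware is unsupported.
        whisper.useGpu = true;
    }
}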

Downloading other model weights

You can try different Whisper model weights. For example, you can improve English language transcription by using English-only weights or by trying bigger models.

You can download model weights from here. Just put them into your StreamingAssets folder.

For more information about model differences and formats, read the whisper.cpp readme and the OpenAI readme.
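
As a sketch, switching to another model usually just means dropping the new .bin file into StreamingAssets and pointing the manager at it. The modelPath and isModelPathInStreamingAssets member names and the file name below are assumptions for illustration; verify them against the WhisperManager inspector fields.

using UnityEngine;
using Whisper; // namespace assumed

public class ModelSelectionExample : MonoBehaviour
{
    public WhisperManager whisper;

    private void Awake()
    {
        // Example file name only; both member names are assumptions.
        whisper.modelPath = "Whisper/ggml-small.en.bin";
        whisper.isModelPathInStreamingAssets = true; // resolve relative to StreamingAssets
    }
}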

Compiling C++ libraries from source

This project comes with prebuilt whisper.cpp libraries for all supported platforms. You can rebuild them from source using GitHub Actions. To do that, fork this repo and go to Actions => Build C++ => Run workflow. After the pipeline completes, download the compiled libraries from the Artifacts tab.

If you want to build the libraries on your own machine:

  1. Clone the original whisper.cpp repository.
  2. Checkout tag v1.7.5. Other versions might not work with these Unity bindings.
  3. Open the whisper.unity folder in a command line.
  4. If you are using Windows, run:
.\build_cpp.bat path\to\whisper
  5. If you are using macOS, run:
sh build_cpp.sh path/to/whisper all path/to/ndk/android.toolchain.cmake
  6. If you are using Linux, run:
sh build_cpp_linux.sh path/to/whisper
  7. If the build was successful, the compiled libraries will be automatically copied into the package Plugins folder.

Building on Windows produces only the Windows library, and building on Linux produces only the Linux library. Building on macOS produces the macOS, iOS and Android libraries.

License

This project is licensed under the MIT License.

It uses compiled libraries and model weights from whisper.cpp, which is under the MIT license.

The original OpenAI Whisper code and weights are also under the MIT license.