arlo-phoenix / CTranslate2-rocm
Fast inference engine for Transformer models
☆54 · Updated last year
Alternatives and similar repositories for CTranslate2-rocm
Users interested in CTranslate2-rocm are comparing it to the libraries listed below.
- ☆420 · Updated 8 months ago
- AI Inferencing at the Edge. A simple one-file way to run various GGML models with KoboldAI's UI with AMD ROCm offloading ☆723 · Updated this week
- AMD (Radeon GPU) ROCm based setup for popular AI tools on Ubuntu 24.04.1 ☆216 · Updated last month
- The official API server for Exllama. OAI compatible, lightweight, and fast. ☆1,104 · Updated last week
- Linux based GDDR6/GDDR6X VRAM temperature reader for NVIDIA RTX 3000/4000 series GPUs. ☆107 · Updated 8 months ago
- Inference engine for Intel devices. Serve LLMs, VLMs, Whisper, Kokoro-TTS, Embedding and Rerank models over OpenAI endpoints. ☆266 · Updated last week
- An optimized quantization and inference library for running LLMs locally on modern consumer-class GPUs ☆609 · Updated this week
- llama.cpp fork with additional SOTA quants and improved performance ☆1,399 · Updated this week
- 8-bit CUDA functions for PyTorch ☆69 · Updated 3 months ago
- A complete package that provides you with all the components needed to get started or dive deeper into Machine Learning Workloads on Cons… ☆44 · Updated 2 months ago
- Run LLMs on AMD Ryzen™ AI NPUs in minutes. Just like Ollama, but purpose-built and deeply optimized for AMD NPUs. ☆560 · Updated last week
- Input text from speech in any Linux window, the lean, fast and accurate way, using whisper.cpp OFFLINE. Speak with local LLMs via llama.c… ☆154 · Updated 5 months ago
- Core, junction, and VRAM temperature reader for Linux + GDDR6/GDDR6X GPUs ☆64 · Updated 2 months ago
- DEPRECATED! ☆50 · Updated last year
- Reliable model swapping for any local OpenAI/Anthropic compatible server (llama.cpp, vLLM, etc.) ☆2,086 · Updated this week
- Fresh builds of llama.cpp with AMD ROCm™ 7 acceleration ☆149 · Updated this week
- The main repository for building Pascal-compatible versions of ML applications and libraries. ☆158 · Updated 4 months ago
- A daemon that automatically manages the performance states of NVIDIA GPUs. ☆103 · Updated last month
- ☆87 · Updated 3 weeks ago
- ☆236 · Updated 2 years ago
- Prometheus exporter for GDDR6/GDDR6X VRAM and GPU core hot-spot temperatures on Linux for NVIDIA RTX 3000/4000 series GPUs. ☆24 · Updated last year
- ROCm Docker images with fixes/support for extra architectures, such as gfx803/gfx1010. ☆31 · Updated 2 years ago
- Stable Diffusion and Flux in pure C/C++ ☆24 · Updated last week
- Fork of ollama for Vulkan support ☆109 · Updated 10 months ago
- Web UI for ExLlamaV2 ☆514 · Updated 10 months ago
- A utility that uses Whisper to transcribe videos and various translation APIs to translate the transcribed text and save it as SRT (sub… ☆74 · Updated last year
- RIFE, Real-Time Intermediate Flow Estimation for Video Frame Interpolation, implemented with the ncnn library ☆55 · Updated 6 months ago
- LLM frontend in a single HTML file ☆675 · Updated this week
- Build scripts for ROCm ☆188 · Updated last year
- Run stable-diffusion-webui with a Radeon RX 580 8GB on Ubuntu 22.04.2 LTS ☆67 · Updated 2 years ago