google-ai-edge / LiteRT-LM
☆388 · Updated this week
Alternatives and similar repositories for LiteRT-LM
Users interested in LiteRT-LM are comparing it to the libraries listed below.
- ☆152 · Updated 3 weeks ago
- A command-line interface tool for serving LLMs using vLLM. ☆418 · Updated last month
- Fast Streaming TTS with Orpheus + WebRTC (with FastRTC) ☆309 · Updated 5 months ago
- Inference, fine-tuning, and many more recipes with the Gemma family of models ☆269 · Updated 2 months ago
- ☆674 · Updated this week
- WebAssembly binding for llama.cpp, enabling in-browser LLM inference ☆901 · Updated last month
- Train Large Language Models on MLX. ☆183 · Updated last week
- Welcome to the official repository of SINQ! A novel, fast and high-quality quantization method designed to make any Large Language Model … ☆20 · Updated this week
- ☆300 · Updated 2 months ago
- LiteRT continues the legacy of TensorFlow Lite as the trusted, high-performance runtime for on-device AI. Now with LiteRT Next, we're exp… ☆845 · Updated this week
- API Server for Transformer Lab ☆79 · Updated this week
- No-code CLI designed for accelerating ONNX workflows ☆214 · Updated 3 months ago
- LLM inference in C/C++ ☆102 · Updated last month
- Verifiers for LLM Reinforcement Learning ☆75 · Updated 3 weeks ago
- A Tree Search Library with Flexible API for LLM Inference-Time Scaling ☆475 · Updated 2 months ago
- Gemma 2 optimized for your local machine. ☆376 · Updated last year
- Code to accompany the Universal Deep Research paper (https://arxiv.org/abs/2509.00244) ☆441 · Updated last month
- Fast parallel LLM inference for MLX ☆220 · Updated last year
- FastMLX is a high-performance, production-ready API to host MLX models. ☆331 · Updated 6 months ago
- 1.58-bit LLM on Apple Silicon using MLX ☆223 · Updated last year
- Advanced Quantization Algorithm for LLMs and VLMs, with support for CPU, Intel GPU, CUDA and HPU. ☆651 · Updated this week
- Docs for GGUF quantization (unofficial) ☆275 · Updated 2 months ago
- Sparse inferencing for transformer-based LLMs ☆201 · Updated last month
- Lemonade helps users run local LLMs with the highest performance by configuring state-of-the-art inference engines for their NPUs and GPU… ☆1,385 · Updated this week
- Liquid Audio - Speech-to-Speech audio models by Liquid AI ☆101 · Updated last week
- A flexible, adaptive classification system for dynamic text classification ☆463 · Updated 2 weeks ago
- ☆232 · Updated 3 months ago
- Kyutai with an "eye" ☆221 · Updated 6 months ago
- Inference engine for Intel devices. Serve LLMs, VLMs, Whisper, and Kokoro-TTS over OpenAI-compatible endpoints. ☆211 · Updated this week
- Enhancing LLMs with LoRA ☆159 · Updated 3 weeks ago