NimbleEdge / sparse_transformers
Sparse inference for transformer-based LLMs
☆197 · Updated last month
Alternatives and similar repositories for sparse_transformers
Users interested in sparse_transformers are comparing it to the repositories listed below.
- InferX is an Inference Function-as-a-Service platform ☆132 · Updated this week
- Transplants vocabulary between language models, enabling the creation of draft models for speculative decoding WITHOUT retraining (see the speculative-decoding sketch after this list). ☆42 · Updated last week
- Local Qwen3 LLM inference. One easy-to-understand file of C source with no dependencies. ☆110 · Updated 2 months ago
- Query-agnostic KV cache eviction: 3–4× reduction in memory and 2× decrease in latency (Qwen3/2.5, Gemma3, LLaMA3); see the eviction sketch after this list. ☆102 · Updated 2 weeks ago
- LLM inference on consumer devices ☆124 · Updated 5 months ago
- ☆165 · Updated last month
- ☆61 · Updated 2 months ago
- Lightweight toolkit package to train and fine-tune 1.58-bit language models (see the ternary-quantization sketch after this list) ☆88 · Updated 3 months ago
- AI management tool ☆121 · Updated 10 months ago
- ☆133 · Updated 4 months ago
- Guaranteed Structured Output from any Language Model via Hierarchical State Machines (see the constrained-decoding sketch after this list) ☆146 · Updated 3 months ago
- ☆56 · Updated 2 months ago
- klmbr: a prompt pre-processing technique for breaking through the entropy barrier when generating text with LLMs ☆80 · Updated 11 months ago
- Run multiple resource-heavy Large Models (LM) on the same machine with limited amount of VRAM/other resources by exposing them on differe… ☆82 · Updated this week
- ☆99 · Updated 3 weeks ago
- Easy-to-use, high-performance knowledge distillation for LLMs (see the distillation sketch after this list) ☆92 · Updated 4 months ago
- ☆155 · Updated 4 months ago
- Super simple Python connectors for llama.cpp, including vision models (Gemma 3, Qwen2-VL). Compile llama.cpp and run! ☆28 · Updated last month
- DFloat11: Lossless LLM Compression for Efficient GPU Inference ☆536 · Updated 3 weeks ago
- ☆56 · Updated 3 months ago
- Automatically quantize GGUF models ☆200 · Updated this week
- Enhancing LLMs with LoRA ☆130 · Updated last month
- 1.58-bit LLaMa model ☆82 · Updated last year
- ☆596 · Updated 3 weeks ago
- Lightweight Inference server for OpenVINO ☆210 · Updated this week
- KoboldCpp Smart Launcher with GPU Layer and Tensor Override Tuning ☆27 · Updated 3 months ago
- VLLM Port of the Chatterbox TTS model ☆293 · Updated last week
- Use the Moondream 2 model to detect faces and their gaze directions in videos. ☆44 · Updated 8 months ago
- 🚀 FlexLLama - Lightweight self-hosted tool for running multiple llama.cpp server instances with OpenAI v1 API compatibility and multi-GP… ☆32 · Updated last month
- Glyphs, acting as collaboratively defined symbols linking related concepts, add a layer of multidimensional semantic richness to user-AI … ☆52 · Updated 7 months ago
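The vocabulary-transplant entry above targets draft models for speculative decoding. As a point of reference, here is a minimal sketch of speculative (assisted) decoding using Hugging Face transformers' `assistant_model` argument; the model names are placeholders, and the code illustrates the general technique rather than that repository's tooling.

```python
# Minimal sketch of speculative decoding via Hugging Face assisted generation.
# Model names are placeholders; draft and target must share a vocabulary, which is
# what vocabulary transplantation makes possible across model families.
from transformers import AutoModelForCausalLM, AutoTokenizer

target_name = "meta-llama/Llama-3.1-8B-Instruct"   # placeholder target model
draft_name = "meta-llama/Llama-3.2-1B-Instruct"    # placeholder draft model

tokenizer = AutoTokenizer.from_pretrained(target_name)
target = AutoModelForCausalLM.from_pretrained(target_name)
draft = AutoModelForCausalLM.from_pretrained(draft_name)

inputs = tokenizer("Speculative decoding works by", return_tensors="pt")
# The draft proposes several tokens per step; the target verifies them in one pass.
out = target.generate(**inputs, assistant_model=draft, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```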
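For the KV-cache-eviction entry, a library-agnostic sketch of score-based eviction: keep only the cached key/value positions with the highest importance scores. The listed project's query-agnostic criterion may differ; here accumulated attention mass stands in as the score, and all shapes are illustrative.

```python
# Generic score-based KV cache eviction sketch (not the listed repository's API).
import torch

def evict_kv(keys, values, scores, keep: int):
    """Keep the `keep` cached positions with the highest importance scores.

    keys, values: [batch, heads, seq_len, head_dim]
    scores:       [batch, heads, seq_len]  importance per cached position
    """
    idx = scores.topk(keep, dim=-1).indices.sort(dim=-1).values      # keep original order
    idx = idx.unsqueeze(-1).expand(-1, -1, -1, keys.size(-1))
    return keys.gather(2, idx), values.gather(2, idx)

# Example: shrink a 1024-entry cache to 256 entries per head (illustrative shapes).
b, h, s, d = 1, 8, 1024, 64
k, v = torch.randn(b, h, s, d), torch.randn(b, h, s, d)
attn_mass = torch.rand(b, h, s)                                      # stand-in score
k_small, v_small = evict_kv(k, v, attn_mass, keep=256)
print(k_small.shape)  # torch.Size([1, 8, 256, 64])
```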
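For the 1.58-bit entries, a small sketch of BitNet-b1.58-style absmean ternary quantization, which maps weights to {-1, 0, +1} with a per-tensor scale; this shows only the quantization formula, not either repository's training code.

```python
# Ternary ("1.58-bit") weight quantization sketch using absmean scaling.
import torch

def quantize_ternary(w: torch.Tensor, eps: float = 1e-5):
    """Map weights to {-1, 0, +1} with a per-tensor absmean scale."""
    scale = w.abs().mean().clamp(min=eps)
    w_q = (w / scale).round().clamp(-1, 1)
    return w_q, scale  # dequantize as w_q * scale

w = torch.randn(4, 4)
w_q, s = quantize_ternary(w)
print(w_q)                          # entries in {-1., 0., 1.}
print((w_q * s - w).abs().mean())   # average quantization error
```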
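For the structured-output entry, a toy sketch of constrained decoding with a finite-state machine: at each step the logits are masked so only tokens permitted by the current state can be emitted, which guarantees well-formed output. The listed project uses hierarchical state machines; this flat version, with a made-up vocabulary and random logits in place of a model, illustrates only the masking idea.

```python
# Toy FSM-constrained decoding: illegal tokens are masked to -inf before selection.
import torch

vocab = ["{", "}", '"k"', ":", '"v"', "<eos>"]
# States: 0=start, 1=after "{", 2=after key, 3=after ":", 4=after value, 5=done
allowed = {0: ["{"], 1: ['"k"'], 2: [":"], 3: ['"v"'], 4: ["}"], 5: ["<eos>"]}
next_state = {(0, "{"): 1, (1, '"k"'): 2, (2, ":"): 3,
              (3, '"v"'): 4, (4, "}"): 5, (5, "<eos>"): 5}

state, output = 0, []
for _ in range(6):
    logits = torch.randn(len(vocab))                  # stand-in for model logits
    mask = torch.full_like(logits, float("-inf"))
    for tok in allowed[state]:                        # only legal tokens survive
        mask[vocab.index(tok)] = 0.0
    tok = vocab[int(torch.argmax(logits + mask))]
    output.append(tok)
    state = next_state[(state, tok)]
print("".join(output))  # always {"k":"v"} followed by <eos>
```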
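For the knowledge-distillation entry, a minimal sketch of the standard temperature-scaled KL loss between teacher and student logits; the names and shapes are illustrative and this is not that library's API.

```python
# Logit-level knowledge distillation loss sketch (standard formulation).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature: float = 2.0):
    """KL divergence between softened teacher and student distributions, scaled by T^2."""
    t_probs = F.softmax(teacher_logits / temperature, dim=-1)
    s_logprobs = F.log_softmax(student_logits / temperature, dim=-1)
    return F.kl_div(s_logprobs, t_probs, reduction="batchmean") * temperature ** 2

student = torch.randn(8, 32000)   # [batch, vocab] logits from the small model
teacher = torch.randn(8, 32000)   # logits from the large model
print(distillation_loss(student, teacher))
```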