Infini-AI-Lab / UMbreLLa
LLM Inference on consumer devices
☆125 · Updated 7 months ago
Alternatives and similar repositories for UMbreLLa
Users interested in UMbreLLa are comparing it to the libraries listed below.
- ☆152 · Updated 4 months ago
- ☆60 · Updated 4 months ago
- KV cache compression for high-throughput LLM inference · ☆143 · Updated 8 months ago
- ☆103 · Updated this week
- ☆58 · Updated 5 months ago
- Official PyTorch implementation for Hogwild! Inference: Parallel LLM Generation with a Concurrent Attention Cache · ☆129 · Updated 2 months ago
- 1.58-bit LLaMa model · ☆83 · Updated last year
- Training-free Post-training Efficient Sub-quadratic Complexity Attention. Implemented with OpenAI Triton · ☆147 · Updated 2 weeks ago
- [NeurIPS'25 Oral] Query-agnostic KV cache eviction: 3–4× reduction in memory and 2× decrease in latency (Qwen3/2.5, Gemma3, LLaMA3) · ☆125 · Updated 2 weeks ago
- Sparse inferencing for transformer-based LLMs · ☆201 · Updated 2 months ago
- [ACL 2025 Main] EfficientQAT: Efficient Quantization-Aware Training for Large Language Models · ☆308 · Updated 5 months ago
- Samples of good AI-generated CUDA kernels · ☆91 · Updated 5 months ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients · ☆202 · Updated last year
- Lightweight toolkit package to train and fine-tune 1.58-bit language models · ☆95 · Updated 5 months ago
- ArcticInference: vLLM plugin for high-throughput, low-latency inference · ☆288 · Updated this week
- Scalable and robust tree-based speculative decoding algorithm · ☆361 · Updated 9 months ago
- ☆80 · Updated 11 months ago
- Reverse Engineering Gemma 3n: Google's New Edge-Optimized Language Model · ☆249 · Updated 5 months ago
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" · ☆154 · Updated last year
- Efficient non-uniform quantization with GPTQ for GGUF · ☆52 · Updated last month
- ☆449 · Updated this week
- [NeurIPS 2025] Simple extension on vLLM to help you speed up reasoning models without training · ☆198 · Updated 4 months ago
- QuIP quantization · ☆59 · Updated last year
- ☆64 · Updated 11 months ago
- Code for data-aware compression of DeepSeek models · ☆56 · Updated 4 months ago
- [ICLR'25] Fast Inference of MoE Models with CPU-GPU Orchestration · ☆238 · Updated 11 months ago
- CUDA-L1: Improving CUDA Optimization via Contrastive Reinforcement Learning · ☆195 · Updated this week
- Automated Identification of Redundant Layer Blocks for Pruning in Large Language Models · ☆249 · Updated last year
- A safetensors extension to efficiently store sparse quantized tensors on disk · ☆183 · Updated this week
- GRadient-INformed MoE · ☆264 · Updated last year