Infini-AI-Lab / UMbreLLa
LLM Inference on consumer devices
☆125 · Updated 8 months ago
Alternatives and similar repositories for UMbreLLa
Users interested in UMbreLLa are comparing it to the libraries listed below.
- ☆154 · Updated 4 months ago
- ☆62 · Updated 5 months ago
- Sparse Inferencing for transformer-based LLMs ☆208 · Updated 3 months ago
- Official PyTorch implementation for Hogwild! Inference: Parallel LLM Generation with a Concurrent Attention Cache ☆130 · Updated 3 months ago
- REAP: Router-weighted Expert Activation Pruning for SMoE compression ☆112 · Updated last week
- Samples of good AI-generated CUDA kernels ☆91 · Updated 5 months ago
- ☆107 · Updated this week
- KV cache compression for high-throughput LLM inference ☆143 · Updated 9 months ago
- ☆60 · Updated 6 months ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients ☆202 · Updated last year
- Training-free post-training efficient sub-quadratic-complexity attention, implemented with OpenAI Triton ☆148 · Updated 2 weeks ago
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆300 · Updated this week
- Code for data-aware compression of DeepSeek models ☆63 · Updated last week
- [ICLR'25] Fast Inference of MoE Models with CPU-GPU Orchestration ☆240 · Updated last year
- Scalable and robust tree-based speculative decoding algorithm ☆362 · Updated 9 months ago
- A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM ☆70 · Updated this week
- Lightweight toolkit package to train and fine-tune 1.58-bit language models ☆98 · Updated 6 months ago
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆249 · Updated 9 months ago
- Transplants vocabulary between language models, enabling the creation of draft models for speculative decoding without retraining ☆44 · Updated 3 weeks ago
- Efficient non-uniform quantization with GPTQ for GGUF ☆53 · Updated 2 months ago
- Automated Identification of Redundant Layer Blocks for Pruning in Large Language Models ☆254 · Updated last year
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆210 · Updated this week
- 3x Faster Inference; unofficial implementation of EAGLE Speculative Decoding ☆78 · Updated 4 months ago
- QuIP quantization ☆60 · Updated last year
- Reverse Engineering Gemma 3n: Google's New Edge-Optimized Language Model ☆251 · Updated 5 months ago
- DFloat11: Lossless LLM Compression for Efficient GPU Inference ☆560 · Updated 2 months ago
- ☆455 · Updated 3 weeks ago
- CUDA-L1: Improving CUDA Optimization via Contrastive Reinforcement Learning ☆238 · Updated 2 weeks ago
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆154 · Updated last year
- [NeurIPS 2025] Simple extension on vLLM to help you speed up reasoning models without training ☆206 · Updated 5 months ago