[ICML 2024] CLLMs: Consistency Large Language Models
☆414 · Updated Nov 16, 2024
Alternatives and similar repositories for Consistency_LLM
Users interested in Consistency_LLM are comparing it to the libraries listed below.
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,333 · Updated Mar 6, 2025
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,730 · Updated Jun 25, 2024
- Official Implementation of EAGLE-1 (ICML'24), EAGLE-2 (EMNLP'24), and EAGLE-3 (NeurIPS'25). ☆2,313 · Updated Feb 20, 2026
- [NeurIPS 2024] Efficient LLM Scheduling by Learning to Rank ☆79 · Updated Nov 4, 2024
- ☆89 · Updated Oct 17, 2025
- Layer-Condensed KV cache w/ 10 times larger batch size, fewer params and less computation. Dramatic speed-up with better task performance. ☆156 · Updated Apr 7, 2025
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning ☆664 · Updated Jun 1, 2024
- Scalable and robust tree-based speculative decoding algorithm ☆377 · Updated Jan 28, 2025
- Serving multiple LoRA finetuned LLMs as one ☆1,156 · Updated May 8, 2024
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,909 · Updated Jan 21, 2024
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆757 · Updated Sep 27, 2024
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆368 · Updated Apr 13, 2026
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆543 · Updated Feb 10, 2025
- Automatically Discovering Fast Parallelization Strategies for Distributed Deep Neural Network Training ☆1,878 · Updated this week
- A throughput-oriented high-performance serving framework for LLMs ☆956 · Updated Mar 29, 2026
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". ☆280 · Updated Nov 3, 2023
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward experts. ☆226 · Updated Sep 18, 2025
- 📰 Must-read papers and blogs on Speculative Decoding ⚡️ ☆1,206 · Updated Apr 18, 2026
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] Speeds up long-context LLM inference with approximate, dynamic sparse attention computation. ☆1,210 · Updated Apr 8, 2026
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Serving with Unified Sparse Attention ☆835 · Updated Mar 6, 2025
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks ☆7,225 · Updated Jul 11, 2024
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,678 · Updated Mar 8, 2024
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆117 · Updated Jun 15, 2024
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆221 · Updated Aug 19, 2024
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalability, and high-speed performance. ☆4,046 · Updated this week
- Ring attention implementation with flash attention ☆1,015 · Updated Sep 10, 2025
- An Attention Superoptimizer ☆22 · Updated Jan 20, 2025
- ☆355 · Updated Apr 2, 2024
- Repository of the paper "Accelerating Transformer Inference for Translation via Parallel Decoding" ☆124 · Updated Mar 15, 2024
- FlashInfer: Kernel Library for LLM Serving ☆5,544 · Updated May 2, 2026
- Odysseus: Playground of LLM Sequence Parallelism ☆78 · Updated Jun 17, 2024
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization ☆718 · Updated Aug 13, 2024
- REST: Retrieval-Based Speculative Decoding, NAACL 2024 ☆218 · Updated Mar 5, 2026
- Ouroboros: Speculative Decoding with Large Model Enhanced Drafting (EMNLP 2024 main) ☆116 · Updated Mar 20, 2025
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆2,110 · Updated Jun 30, 2025
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆183 · Updated Jul 12, 2024
- Tile primitives for speedy kernels ☆3,336 · Updated Apr 29, 2026
- Triton-based implementation of Sparse Mixture of Experts. ☆273 · Updated Oct 3, 2025
- [COLM 2024] TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding ☆279 · Updated Aug 31, 2024
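
For orientation: CLLMs and lookahead decoding both treat greedy autoregressive decoding as a fixed-point problem that Jacobi iteration can solve in parallel. Below is a minimal sketch of that loop, with a deterministic toy model standing in for a real causal LM; `toy_next_token`, the vocabulary size, and the token values are illustrative assumptions, not code from any repository listed above.

```python
# Minimal sketch of Jacobi-style parallel decoding. Assumption: the toy
# model below stands in for greedy (argmax) decoding with a causal LM.

def toy_next_token(context):
    """Hypothetical deterministic model: "next token" = sum of context mod 50."""
    return sum(context) % 50

def jacobi_decode(prompt, n_tokens, max_iters=100):
    guess = [0] * n_tokens  # arbitrary initial guess for all n_tokens positions
    for it in range(1, max_iters + 1):
        # One parallel "forward pass": position i is refined from the prompt
        # plus the previous iteration's guess for positions 0..i-1.
        new = [toy_next_token(prompt + guess[:i]) for i in range(n_tokens)]
        if new == guess:  # fixed point: identical to greedy autoregressive output
            return guess, it
        guess = new
    return guess, max_iters

tokens, passes = jacobi_decode(prompt=[7, 3], n_tokens=8)
print(tokens, f"(fixed point after {passes} forward passes)")
```

Autoregressive decoding spends one forward pass per token; the Jacobi loop needs at most n_tokens + 1 passes and fewer whenever several positions stabilize in the same pass, which is the behavior CLLM training encourages.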