[ICML 2024] CLLMs: Consistency Large Language Models
☆414 · Updated Nov 16, 2024
Alternatives and similar repositories for Consistency_LLM
Users interested in Consistency_LLM are comparing it to the repositories listed below. Minimal sketches of the two decoding ideas that recur throughout the list (Jacobi/consistency decoding and draft-and-verify speculative decoding) follow after it.
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,329 · Updated Mar 6, 2025
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,722 · Updated Jun 25, 2024
- Official Implementation of EAGLE-1 (ICML'24), EAGLE-2 (EMNLP'24), and EAGLE-3 (NeurIPS'25). ☆2,273 · Updated Feb 20, 2026
- [NeurIPS 2024] Efficient LLM Scheduling by Learning to Rank ☆78 · Updated Nov 4, 2024
- ☆87 · Updated Oct 17, 2025
- Layer-Condensed KV cache with 10× larger batch size, fewer params, and less computation. Dramatic speed-up with better task performance… ☆157 · Updated Apr 7, 2025
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning ☆664 · Updated Jun 1, 2024
- Scalable and robust tree-based speculative decoding algorithm ☆376 · Updated Jan 28, 2025
- Serving multiple LoRA-finetuned LLMs as one ☆1,152 · Updated May 8, 2024
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,909 · Updated Jan 21, 2024
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆755 · Updated Sep 27, 2024
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆364 · Updated Apr 8, 2026
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆536 · Updated Feb 10, 2025
- Automatically Discovering Fast Parallelization Strategies for Distributed Deep Neural Network Training ☆1,872 · Updated this week
- A throughput-oriented high-performance serving framework for LLMs ☆952 · Updated Mar 29, 2026
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". ☆281 · Updated Nov 3, 2023
- 📰 Must-read papers and blogs on Speculative Decoding ⚡️ ☆1,185 · Updated Mar 31, 2026
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward experts ☆226 · Updated Sep 18, 2025
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] Speeds up long-context LLM inference via approximate, dynamic sparse attention computation… ☆1,203 · Updated Apr 8, 2026
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Serving… ☆826 · Updated Mar 6, 2025
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks ☆7,211 · Updated Jul 11, 2024
- A family of open-source Mixture-of-Experts (MoE) Large Language Models ☆1,675 · Updated Mar 8, 2024
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆117 · Updated Jun 15, 2024
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆222 · Updated Aug 19, 2024
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalability, and high-speed performance ☆4,016 · Updated this week
- Ring attention implementation with flash attention ☆1,006 · Updated Sep 10, 2025
- An Attention Superoptimizer ☆22 · Updated Jan 20, 2025
- ☆355 · Updated Apr 2, 2024
- FlashInfer: Kernel Library for LLM Serving ☆5,372 · Updated Apr 11, 2026
- Repository of the paper "Accelerating Transformer Inference for Translation via Parallel Decoding" ☆124 · Updated Mar 15, 2024
- Odysseus: Playground of LLM Sequence Parallelism ☆78 · Updated Jun 17, 2024
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization ☆717 · Updated Aug 13, 2024
- REST: Retrieval-Based Speculative Decoding, NAACL 2024 ☆217 · Updated Mar 5, 2026
- Ouroboros: Speculative Decoding with Large Model Enhanced Drafting (EMNLP 2024 main) ☆116 · Updated Mar 20, 2025
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆2,107 · Updated Jun 30, 2025
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆181 · Updated Jul 12, 2024
- Tile primitives for speedy kernels ☆3,312 · Updated Apr 8, 2026
- Triton-based implementation of Sparse Mixture of Experts. ☆274 · Updated Oct 3, 2025
- [COLM 2024] TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding ☆279 · Updated Aug 31, 2024
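
The common thread in this list is trading extra parallel compute for fewer sequential decoding steps. As a minimal, illustrative sketch (not code from any repository above), here is greedy Jacobi fixed-point decoding, the scheme CLLMs are fine-tuned to converge quickly under. `argmax_next_tokens` is a hypothetical stand-in for one batched forward pass of a causal LM that returns the greedy next token at every position.

```python
# Minimal sketch of Jacobi (fixed-point) decoding, assuming a hypothetical
# model callable: argmax_next_tokens(seq)[j] is the greedy token that the
# model predicts to follow seq[:j+1].
from typing import Callable, List

def jacobi_decode(prefix: List[int],
                  argmax_next_tokens: Callable[[List[int]], List[int]],
                  block_size: int,
                  pad_id: int = 0,
                  max_iters: int = 64) -> List[int]:
    """Decode `block_size` tokens in parallel by iterating to a fixed point.

    Start from an arbitrary guess for the next `block_size` tokens, then
    repeatedly refresh every guessed position from a single forward pass over
    prefix + guess. Greedy autoregressive decoding is a fixed point of this
    update, so if the loop converges the block equals token-by-token greedy
    output; the speedup comes when many positions stabilize per iteration,
    which is exactly what consistency training encourages.
    """
    guess = [pad_id] * block_size              # arbitrary initial guess
    for _ in range(max_iters):
        preds = argmax_next_tokens(prefix + guess)
        # preds[len(prefix) - 1 + i] is the model's choice for guess[i],
        # conditioned on prefix + guess[:i].
        new_guess = [preds[len(prefix) - 1 + i] for i in range(block_size)]
        if new_guess == guess:                 # fixed point: block accepted
            break
        guess = new_guess
    return prefix + guess
```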
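Likewise, a minimal sketch of the greedy draft-and-verify loop shared by the speculative-decoding entries above (Medusa, EAGLE, REST, Ouroboros, and TriForce differ mainly in how the draft is produced). `draft_tokens` is a hypothetical stand-in for any cheap drafter, and `argmax_next_tokens` is the same assumed target-model pass as in the previous sketch.

```python
# Minimal greedy draft-and-verify step, under the same assumed model
# interface as the Jacobi sketch above; not code from any listed repo.
from typing import Callable, List

def speculative_step(prefix: List[int],
                     draft_tokens: Callable[[List[int], int], List[int]],
                     argmax_next_tokens: Callable[[List[int]], List[int]],
                     k: int = 4) -> List[int]:
    """Extend `prefix` by verifying k drafted tokens with one target pass."""
    draft = draft_tokens(prefix, k)             # k cheap guesses
    preds = argmax_next_tokens(prefix + draft)  # target's choice per position
    accepted: List[int] = []
    for i in range(k):
        target = preds[len(prefix) - 1 + i]     # choice after prefix+draft[:i]
        if draft[i] == target:
            accepted.append(draft[i])           # draft agrees: keep it
        else:
            accepted.append(target)             # disagreement: take the
            break                               # target's token and stop
    else:
        # All k drafted tokens accepted; the same target pass also yields
        # one bonus token after the full draft, for free.
        accepted.append(preds[len(prefix) - 1 + k])
    return prefix + accepted
```

Because a disagreeing draft token is replaced by the target model's own choice, the committed output is identical to plain greedy decoding of the target; the draft only changes how many tokens each target-model pass commits.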