astramind-ai / Mixture-of-depths
Unofficial implementation of the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models"
☆145 · Updated 6 months ago
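The paper's core mechanism is compact enough to sketch. Below is a minimal, hypothetical PyTorch illustration of the per-block top-k token routing this repository implements; the class and parameter names are illustrative, not this repo's API, and `block` is assumed to compute a residual update f(x) rather than include the residual itself.

```python
import torch
import torch.nn as nn

class MoDBlock(nn.Module):
    """Sketch of Mixture-of-Depths routing around one transformer block."""

    def __init__(self, block: nn.Module, dim: int, capacity: float = 0.125):
        super().__init__()
        self.block = block            # assumed to map (B, k, D) -> (B, k, D)
        self.router = nn.Linear(dim, 1)
        self.capacity = capacity      # fraction of tokens the block processes

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, T, D = x.shape
        k = max(1, int(self.capacity * T))
        scores = self.router(x).squeeze(-1)          # (B, T) per-token weights
        topk = scores.topk(k, dim=-1).indices        # tokens routed into the block
        idx = topk.unsqueeze(-1).expand(-1, -1, D)
        selected = x.gather(1, idx)                  # (B, k, D)
        update = self.block(selected)                # heavy compute on k tokens only
        gate = scores.gather(1, topk).unsqueeze(-1)  # scale by the router weight so
        out = x.scatter(1, idx, selected + gate * update)  # top-k stays on the grad path
        return out                                   # unselected tokens pass through

# Toy usage: only 4 of 32 tokens go through the wrapped module.
f = nn.Sequential(nn.Linear(64, 256), nn.GELU(), nn.Linear(256, 64))
y = MoDBlock(f, dim=64, capacity=0.125)(torch.randn(2, 32, 64))
```

As in the paper, unrouted tokens skip the block entirely via the residual stream, so per-layer compute scales with the capacity fraction rather than the sequence length.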
Alternatives and similar repositories for Mixture-of-depths:
Users interested in Mixture-of-depths are comparing it to the libraries listed below.
- ☆190 · Updated last month
- [ICML 2024] KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache ☆262 · Updated 3 months ago (a generic sketch of this style of KV-cache quantization follows the list)
- Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆107 · Updated last month
- ☆212 · Updated 8 months ago
- Layer-Condensed KV cache w/ 10 times larger batch size, fewer params and less computation. Dramatic speed up with better task performance…☆146 · Updated last month
- ☆107 · Updated 3 months ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆139 · Updated 3 months ago
- Explorations into some recent techniques surrounding speculative decoding ☆229 · Updated 3 weeks ago
- Unofficial implementations of block/layer-wise pruning methods for LLMs. ☆58 · Updated 8 months ago
- ☆124 · Updated 11 months ago
- ☆69 · Updated this week
- Open-source code for the paper "Retrieval Head Mechanistically Explains Long-Context Factuality" ☆170 · Updated 5 months ago
- The official implementation of the paper "MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression" ☆110 · Updated last month
- Spec-Bench: A Comprehensive Benchmark and Unified Evaluation Platform for Speculative Decoding (ACL 2024 Findings) ☆213 · Updated 2 months ago
- REST: Retrieval-Based Speculative Decoding, NAACL 2024 ☆186 · Updated last month
- [ACL 2024] Not All Experts are Equal: Efficient Expert Pruning and Skipping for Mixture-of-Experts Large Language Models ☆75 · Updated 7 months ago
- Implementation of the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆78 · Updated this week
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆204 · Updated 7 months ago
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆377 · Updated 3 months ago
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" ☆145 · Updated last month
- PB-LLM: Partially Binarized Large Language Models ☆150 · Updated last year
- DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆417 · Updated 2 months ago
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆152 · Updated 6 months ago
- KV cache compression for high-throughput LLM inference ☆103 · Updated last month
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization ☆325 · Updated 5 months ago
- Triton-based implementation of Sparse Mixture of Experts. ☆192 · Updated last month
- Official PyTorch implementation of DistiLLM: Towards Streamlined Distillation for Large Language Models (ICML 2024) ☆171 · Updated 3 months ago
- ☆139 · Updated last year
- An unofficial implementation of "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆35 · Updated 7 months ago
- [ICML 2024] CLLMs: Consistency Large Language Models ☆368 · Updated 2 months ago
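Several entries above (KIVI, GEAR, KVQuant, and the high-throughput compression repo) center on compressing the KV cache. As a rough, generic illustration of what asymmetric low-bit quantization means in that setting, here is a self-contained sketch; it is not KIVI's actual code, though the per-channel keys / per-token values split mirrors what that paper reports.

```python
import torch

def asym_quantize(x: torch.Tensor, bits: int = 2, dim: int = -1):
    """Uniform asymmetric quantization along one axis: store low-bit codes
    plus the per-group (scale, zero-point) needed to reconstruct x."""
    qmax = 2 ** bits - 1
    lo = x.amin(dim=dim, keepdim=True)
    hi = x.amax(dim=dim, keepdim=True)
    scale = (hi - lo).clamp(min=1e-8) / qmax
    codes = ((x - lo) / scale).round().clamp(0, qmax).to(torch.uint8)
    return codes, scale, lo

def asym_dequantize(codes, scale, lo):
    return codes.to(scale.dtype) * scale + lo

# Toy usage on a (heads, seq_len, head_dim) cache: keys grouped per channel
# (reducing over the sequence axis), values grouped per token (over head_dim).
k, v = torch.randn(8, 128, 64), torch.randn(8, 128, 64)
k_codes, k_scale, k_lo = asym_quantize(k, bits=2, dim=1)
v_codes, v_scale, v_lo = asym_quantize(v, bits=2, dim=-1)
print((asym_dequantize(k_codes, k_scale, k_lo) - k).abs().mean())
```

The codes occupy 2 bits of information per element (stored in uint8 here for simplicity, so a real implementation would also pack four codes per byte); the asymmetry is the per-group zero-point `lo`, which lets the quantization grid track skewed distributions.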