Arenaa / Accelerated-Generation-Techniques
This repository contains papers for a comprehensive survey on accelerated generation techniques in Large Language Models (LLMs).
☆11 · Updated 11 months ago
Alternatives and similar repositories for Accelerated-Generation-Techniques:
Users interested in Accelerated-Generation-Techniques are comparing it to the repositories listed below.
- [ICLR 2024] Official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Mod…" ☆27 · Updated last year
- ☆76 · Updated last week
- ☆21 · Updated 3 weeks ago
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models ☆30 · Updated 10 months ago
- Codebase for "Instruction Following without Instruction Tuning" ☆34 · Updated 7 months ago
- ☆43 · Updated 2 months ago
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆26 · Updated last year
- ☆53 · Updated 9 months ago
- ☆78 · Updated 8 months ago
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆51 · Updated 2 months ago
- Official implementation of the paper "SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction" ☆45 · Updated 6 months ago
- Control LLM ☆14 · Updated 3 weeks ago
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆51 · Updated 2 years ago
- HGRN2: Gated Linear RNNs with State Expansion ☆54 · Updated 8 months ago
- Official implementation of the paper "A deeper look at depth pruning of LLMs" ☆15 · Updated 9 months ago
- Implementation of the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆90 · Updated this week
- Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for LLMs ☆81 · Updated 5 months ago
- Official implementation of the ICML 2024 paper RoSA (Robust Adaptation) ☆40 · Updated last year
- ☆14 · Updated last year
- Squeezed Attention: Accelerating Long Prompt LLM Inference ☆46 · Updated 5 months ago
- Beyond KV Caching: Shared Attention for Efficient LLMs ☆18 · Updated 9 months ago
- AutoMoE: Neural Architecture Search for Efficient Sparsely Activated Transformers ☆46 · Updated 2 years ago
- Exploration of automated dataset selection approaches at large scales ☆39 · Updated last month
- MambaFormer in-context learning experiments and implementation for https://arxiv.org/abs/2402.04248 ☆51 · Updated 10 months ago
- [ICLR 2025] LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization ☆34 · Updated 2 months ago
- Code for the paper "Diffusion Language Models Can Perform Many Tasks with Scaling and Instruction-Finetuning" ☆76 · Updated last year
- Unofficial implementation of "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆35 · Updated 10 months ago
- PyTorch implementation of "Compressed Context Memory for Online Language Model Interaction" (ICLR 2024) ☆57 · Updated last year
- [NeurIPS 2024] Fast Best-of-N Decoding via Speculative Rejection ☆42 · Updated 5 months ago
- ☆17 · Updated 3 months ago