Arenaa / Accelerated-Generation-Techniques
This repository contains papers for a comprehensive survey on accelerated generation techniques in Large Language Models (LLMs).
☆11 · Updated last year
Alternatives and similar repositories for Accelerated-Generation-Techniques
Users interested in Accelerated-Generation-Techniques are comparing it to the repositories listed below.
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models ☆33 · Updated last year
- Codebase for Instruction Following without Instruction Tuning ☆35 · Updated 11 months ago
- [ICLR 2024] This is the official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Mod… ☆29 · Updated last year
- [ICML24] Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for LLMs ☆92 · Updated 9 months ago
- Official implementation of the ICML 2024 paper RoSA (Robust Adaptation) ☆44 · Updated last year
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆55 · Updated 2 years ago
- Code for "Everybody Prune Now: Structured Pruning of LLMs with only Forward Passes" ☆28 · Updated last year
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆55 · Updated 7 months ago
- ☆19 · Updated 8 months ago
- Are gradient information useful for pruning of LLMs? ☆46 · Updated 3 weeks ago
- [ICLR 2025] Official Pytorch Implementation of "Mix-LN: Unleashing the Power of Deeper Layers by Combining Pre-LN and Post-LN" by Pengxia… ☆26 · Updated last month
- Code for paper "Patch-Level Training for Large Language Models" ☆87 · Updated 10 months ago
- [ICML 2024 Oral] This project is the official implementation of our Accurate LoRA-Finetuning Quantization of LLMs via Information Retenti… ☆67 · Updated last year
- FBI-LLM: Scaling Up Fully Binarized LLMs from Scratch via Autoregressive Distillation ☆51 · Updated 3 weeks ago
- [ICML'24] The official implementation of “Rethinking Optimization and Architecture for Tiny Language Models” ☆123 · Updated 8 months ago
- [ICLR 2025] MiniPLM: Knowledge Distillation for Pre-Training Language Models ☆61 · Updated 9 months ago
- [NeurIPS 2024 Main Track] Code for the paper titled "Instruction Tuning With Loss Over Instructions" ☆39 · Updated last year
- Control LLM ☆19 · Updated 5 months ago
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆98 · Updated 11 months ago
- ☆25 · Updated last month
- ☆10 · Updated last year
- Implementation of the paper: "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆105 · Updated last week
- The open-source materials for the paper "Sparsing Law: Towards Large Language Models with Greater Activation Sparsity". ☆25 · Updated 10 months ago
- SQUEEZED ATTENTION: Accelerating Long Prompt LLM Inference ☆52 · Updated 10 months ago
- From GaLore to WeLore: How Low-Rank Weights Non-uniformly Emerge from Low-Rank Gradients. Ajay Jaiswal, Lu Yin, Zhenyu Zhang, Shiwei Liu,… ☆48 · Updated 5 months ago
- The official implementation for Gated Attention for Large Language Models: Non-linearity, Sparsity, and Attention-Sink-Free ☆54 · Updated 4 months ago
- ☆85 · Updated last year
- Mask-Enhanced Autoregressive Prediction: Pay Less Attention to Learn More ☆34 · Updated 4 months ago
- Beyond KV Caching: Shared Attention for Efficient LLMs ☆19 · Updated last year
- [NeurIPS-2024] 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies https://arxiv.org/abs/2407.13623 ☆86 · Updated 11 months ago