Arenaa / Accelerated-Generation-Techniques
This repository contains papers for a comprehensive survey on accelerated generation techniques in Large Language Models (LLMs).
☆11 · Updated last year
Alternatives and similar repositories for Accelerated-Generation-Techniques
Users interested in Accelerated-Generation-Techniques are comparing it to the repositories listed below.
- [ICLR 2024] This is the official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Models" ☆30 · Updated last year
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆56 · Updated 11 months ago
- Is gradient information useful for pruning LLMs? ☆47 · Updated 4 months ago
- Official implementation of the ICML 2024 paper RoSA (Robust Adaptation) ☆44 · Updated last year
- [ICML24] Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for LLMs ☆98 · Updated last year
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models ☆35 · Updated last year
- [ICLR 2025] Drop-Upcycling: Training Sparse Mixture of Experts with Partial Re-initialization ☆20 · Updated 3 months ago
- ☆61 · Updated 6 months ago
- PyTorch implementation for "Compressed Context Memory for Online Language Model Interaction" (ICLR'24) ☆62 · Updated last year
- [ICML 2025] From Low Rank Gradient Subspace Stabilization to Low-Rank Weights: Observations, Theories and Applications ☆52 · Updated 2 months ago
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆102 · Updated last year
- Codebase for Instruction Following without Instruction Tuning ☆36 · Updated last year
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling ☆41 · Updated last week
- Code for "Everybody Prune Now: Structured Pruning of LLMs with only Forward Passes" ☆29 · Updated last year
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆42 · Updated last year
- Xmixers: A collection of SOTA efficient token/channel mixers ☆28 · Updated 4 months ago
- Official PyTorch implementation of CD-MOE ☆12 · Updated 9 months ago
- ☆19 · Updated last year
- [ICLR 2025] MiniPLM: Knowledge Distillation for Pre-Training Language Models ☆71 · Updated last year
- FBI-LLM: Scaling Up Fully Binarized LLMs from Scratch via Autoregressive Distillation ☆50 · Updated 4 months ago
- ☆64 · Updated last year
- [ICML'24] The official implementation of "Rethinking Optimization and Architecture for Tiny Language Models" ☆126 · Updated 11 months ago
- ☆10 · Updated last year
- Official Repo for SparseLLM: Global Pruning of LLMs (NeurIPS 2024) ☆66 · Updated 9 months ago
- Code for the paper "Patch-Level Training for Large Language Models" ☆96 · Updated last month
- [NeurIPS 2024 Main Track] Code for the paper titled "Instruction Tuning With Loss Over Instructions" ☆38 · Updated last year
- An unofficial implementation of "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" (see the routing sketch after this list) ☆36 · Updated last year
- Implementation for the paper "CMoE: Fast Carving of Mixture-of-Experts for Efficient LLM Inference" ☆31 · Updated 10 months ago
- ☆26 · Updated last month
- [ICML 2024 Oral] This project is the official implementation of our Accurate LoRA-Finetuning Quantization of LLMs via Information Retention ☆67 · Updated last year
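
The Mixture-of-Depths entry above names a routing idea compact enough to sketch. Below is a minimal, hypothetical PyTorch illustration of MoD-style top-k token routing, written for this list rather than taken from the linked repository; `MoDBlock`, `capacity`, and the MLP residual branch are all assumptions made for the example.

```python
# Hypothetical sketch of Mixture-of-Depths-style routing; illustration only,
# not code from any repository listed above. A scalar router scores every
# token, the top-k tokens per sequence pass through the residual branch
# f(x), and all other tokens skip the block's compute entirely.
import torch
import torch.nn as nn


class MoDBlock(nn.Module):
    def __init__(self, dim: int, capacity: float = 0.5):
        super().__init__()
        # f(x): the residual branch whose compute is rationed (an MLP here;
        # the paper routes around full transformer blocks).
        self.f = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.router = nn.Linear(dim, 1)  # one scalar score per token
        self.capacity = capacity         # fraction of tokens processed

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, T, D = x.shape
        k = max(1, int(self.capacity * T))
        scores = self.router(x).squeeze(-1)        # (B, T)
        top = scores.topk(k, dim=-1).indices       # positions of routed tokens
        idx = top.unsqueeze(-1).expand(-1, -1, D)  # (B, k, D) gather index
        routed = torch.gather(x, 1, idx)           # selected tokens
        # Weight the branch output by the sigmoid router score so the
        # routing decision receives gradient during training.
        w = torch.gather(scores, 1, top).sigmoid().unsqueeze(-1)
        return x.scatter(1, idx, routed + w * self.f(routed))


# With capacity=0.5, the MLP runs on only half of the 16 tokens.
y = MoDBlock(dim=64)(torch.randn(2, 16, 64))  # -> shape (2, 16, 64)
```

This shows only the top-k gather/scatter mechanics; in the actual method the routing must also stay causal at inference, which the paper handles with a small auxiliary predictor.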