RobertCsordas / switchhead
☆14 · Updated 8 months ago
Alternatives and similar repositories for switchhead
Users interested in switchhead are comparing it to the repositories listed below
- An unofficial implementation of "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆35 · Updated 11 months ago
- Implementation of the paper: "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆94 · Updated this week
- The official implementation of "DAPE: Data-Adaptive Positional Encoding for Length Extrapolation" ☆38 · Updated 7 months ago
- [ICML 2024 Oral] This project is the official implementation of our Accurate LoRA-Finetuning Quantization of LLMs via Information Retenti… ☆65 · Updated last year
- ☆23 · Updated 2 months ago
- Activation-aware Singular Value Decomposition for Compressing Large Language Models ☆68 · Updated 7 months ago
- ☆21 · Updated last year
- HGRN2: Gated Linear RNNs with State Expansion ☆54 · Updated 9 months ago
- Official Repo for SparseLLM: Global Pruning of LLMs (NeurIPS 2024) ☆61 · Updated 2 months ago
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models ☆30 · Updated 11 months ago
- A repository for DenseSSMs ☆87 · Updated last year
- ☆56 · Updated last year
- ☆18 · Updated 6 months ago
- ☆25 · Updated last year
- SQUEEZED ATTENTION: Accelerating Long Prompt LLM Inference ☆46 · Updated 6 months ago
- ☆47 · Updated 2 months ago
- Triton implementation of bi-directional (non-causal) linear attention ☆48 · Updated 4 months ago
- Official Implementation of SLEB: Streamlining LLMs through Redundancy Verification and Elimination of Transformer Blocks ☆37 · Updated 4 months ago
- Official PyTorch implementation of "Quantization Hurts Reasoning? An Empirical Study on Quantized Reasoning Models" ☆34 · Updated last week
- Unofficial implementations of block/layer-wise pruning methods for LLMs. ☆69 · Updated last year
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆97 · Updated 8 months ago
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆51 · Updated 2 years ago
- [EMNLP 2024] RoLoRA: Fine-tuning Rotated Outlier-free LLMs for Effective Weight-Activation Quantization ☆37 · Updated 8 months ago
- [ICLR 2025] Official PyTorch implementation of "Forgetting Transformer: Softmax Attention with a Forget Gate" ☆104 · Updated 3 weeks ago
- MambaFormer in-context learning experiments and implementation for https://arxiv.org/abs/2402.04248 ☆54 · Updated 11 months ago
- [ICML 2025 Spotlight] Mixture of Lookup Experts ☆24 · Updated 3 weeks ago
- DeciMamba: Exploring the Length Extrapolation Potential of Mamba (ICLR 2025) ☆28 · Updated last month
- Code and benchmark for the paper: "A Practitioner's Guide to Continual Multimodal Pretraining" [NeurIPS'24] ☆56 · Updated 5 months ago
- LoRA-XS: Low-Rank Adaptation with Extremely Small Number of Parameters ☆35 · Updated 2 months ago
- [AAAI 2024] Fluctuation-based Adaptive Structured Pruning for Large Language Models ☆51 · Updated last year