Caiyun-AI / MUDDFormer
☆84 · Updated 4 months ago
Alternatives and similar repositories for MUDDFormer
Users interested in MUDDFormer are comparing it to the libraries listed below.
- [ICML 2025] Fourier Position Embedding: Enhancing Attention’s Periodic Extension for Length Generalization ☆99 · Updated 4 months ago
- ☆197 · Updated last year
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ☆329 · Updated 7 months ago
- ☆219 · Updated 7 months ago
- ☆147 · Updated last year
- [COLM 2025] LoRI: Reducing Cross-Task Interference in Multi-Task Low-Rank Adaptation ☆150 · Updated 3 months ago
- A repository for DenseSSMs ☆88 · Updated last year
- The official GitHub page for the survey paper "Discrete Tokenization for Multimodal LLMs: A Comprehensive Survey". And this paper is unde… ☆65 · Updated 2 months ago
- [ICML 2025 Oral] Mixture of Lookup Experts ☆51 · Updated 4 months ago
- Parameter-Efficient Fine-Tuning for Foundation Models ☆93 · Updated 6 months ago
- [NeurIPS 2025 Spotlight] TPA: Tensor ProducT ATTenTion Transformer (T6) (https://arxiv.org/abs/2501.06425) ☆397 · Updated 2 weeks ago
- A generalized framework for subspace tuning methods in parameter-efficient fine-tuning ☆154 · Updated 3 months ago
- TransMLA: Multi-Head Latent Attention Is All You Need (NeurIPS 2025 Spotlight) ☆377 · Updated 2 weeks ago
- Inference code for the paper "Harder Tasks Need More Experts: Dynamic Routing in MoE Models" ☆63 · Updated last year
- [ICLR 2025 Spotlight] Official implementation for ToST (Token Statistics Transformer) ☆120 · Updated 7 months ago
- ☆73 · Updated 8 months ago
- ☆117 · Updated last year
- Official code for our paper "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆132 · Updated 6 months ago
- Official repository of InLine attention (NeurIPS 2024) ☆56 · Updated 9 months ago
- ☆47 · Updated 3 months ago
- DeepSeek Native Sparse Attention PyTorch implementation ☆100 · Updated 2 months ago
- ZO2 (Zeroth-Order Offloading): Full-Parameter Fine-Tuning of 175B LLMs with 18GB GPU Memory [COLM 2025] ☆189 · Updated 2 months ago
- Implementation of Switch Transformers from the paper "Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity" ☆125 · Updated this week
- Triton implementation of bi-directional (non-causal) linear attention ☆56 · Updated 8 months ago
- State Space Models ☆70 · Updated last year
- ☆210 · Updated 11 months ago
- PyTorch implementation of the Differential-Transformer architecture for sequence modeling, specifically tailored as a decoder-only model … ☆76 · Updated 11 months ago
- [ICML 2024 Oral] This project is the official implementation of our Accurate LoRA-Finetuning Quantization of LLMs via Information Retention ☆67 · Updated last year
- The official GitHub page for the survey paper "A Survey of RWKV" ☆29 · Updated 9 months ago
- [ICML'24 Oral] The official code of "DiJiang: Efficient Large Language Models through Compact Kernelization", a novel DCT-based linear attention… ☆102 · Updated last year