Caiyun-AI / MUDDFormer
☆90 · Updated 8 months ago
Alternatives and similar repositories for MUDDFormer
Users that are interested in MUDDFormer are comparing it to the libraries listed below
- [ICML 2025] Fourier Position Embedding: Enhancing Attention’s Periodic Extension for Length Generalization ☆108 · Updated 8 months ago
- ☆222 · Updated 11 months ago
- ☆201 · Updated 2 years ago
- The official GitHub page for the survey paper "Discrete Tokenization for Multimodal LLMs: A Comprehensive Survey". And this paper is unde… ☆77 · Updated this week
- ☆152 · Updated last year
- A generalized framework for subspace tuning methods in parameter-efficient fine-tuning ☆169 · Updated last week
- [COLM 2025] LoRI: Reducing Cross-Task Interference in Multi-Task Low-Rank Adaptation ☆166 · Updated 7 months ago
- A repository for DenseSSMs ☆88 · Updated last year
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ☆340 · Updated 11 months ago
- [ICML 2025 Oral] Mixture of Lookup Experts ☆70 · Updated 2 months ago
- [ICLR 2025 Spotlight] Official Implementation for ToST (Token Statistics Transformer) ☆130 · Updated 11 months ago
- Parameter-Efficient Fine-Tuning for Foundation Models ☆110 · Updated 10 months ago
- The official implementation for [NeurIPS 2025 Oral] Gated Attention for Large Language Models: Non-linearity, Sparsity, and Attention-Sink… ☆834 · Updated last month
- ☆79 · Updated last year
- Lion and Adam optimization comparison ☆64 · Updated 2 years ago
- Official code for our paper, "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆143 · Updated 10 months ago
- ☆218 · Updated 2 months ago
- ☆49 · Updated 7 months ago
- The official implementation of "DAPE: Data-Adaptive Positional Encoding for Length Extrapolation" ☆41 · Updated last year
- Implementation of the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆114 · Updated last week
- ☆125 · Updated last year
- TransMLA: Multi-Head Latent Attention Is All You Need (NeurIPS 2025 Spotlight) ☆429 · Updated 4 months ago
- Implementations and experimentation on mHC by DeepSeek - https://arxiv.org/abs/2512.24880 ☆290 · Updated this week
- [EMNLP 2024] RWKV-CLIP: A Robust Vision-Language Representation Learner ☆153 · Updated last month
- PyTorch implementation of the Differential-Transformer architecture for sequence modeling, specifically tailored as a decoder-only model… ☆86 · Updated last year
- A Tight-fisted Optimizer ☆50 · Updated 2 years ago
- [CVPR 2025] Breaking the Low-Rank Dilemma of Linear Attention ☆39 · Updated 10 months ago
- DeepSeek Native Sparse Attention PyTorch implementation ☆115 · Updated last month
- Triton implementation of bi-directional (non-causal) linear attention ☆65 · Updated last week
- [ICML'24 Oral] The official code of "DiJiang: Efficient Large Language Models through Compact Kernelization", a novel DCT-based linear at… ☆104 · Updated last year