Caiyun-AI / MUDDFormer
☆86 · Updated 5 months ago
Alternatives and similar repositories for MUDDFormer
Users interested in MUDDFormer are comparing it to the repositories listed below.
- ☆218 · Updated 8 months ago
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ☆330 · Updated 8 months ago
- [ICML 2025] Fourier Position Embedding: Enhancing Attention’s Periodic Extension for Length Generalization ☆100 · Updated 5 months ago
- ☆197 · Updated last year
- ☆148 · Updated last year
- A repository for DenseSSMs ☆89 · Updated last year
- ☆49 · Updated 4 months ago
- [COLM 2025] LoRI: Reducing Cross-Task Interference in Multi-Task Low-Rank Adaptation ☆155 · Updated 3 months ago
- The official GitHub page for the survey paper "Discrete Tokenization for Multimodal LLMs: A Comprehensive Survey". And this paper is unde… ☆65 · Updated 2 months ago
- ☆75 · Updated 8 months ago
- A generalized framework for subspace tuning methods in parameter-efficient fine-tuning ☆157 · Updated 4 months ago
- Parameter-Efficient Fine-Tuning for Foundation Models ☆96 · Updated 7 months ago
- [ICML 2025 Oral] Mixture of Lookup Experts ☆53 · Updated 5 months ago
- ☆213 · Updated last year
- Implementation of Switch Transformers from the paper: "Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficien… ☆126 · Updated this week
- The official implementation of "DAPE: Data-Adaptive Positional Encoding for Length Extrapolation" ☆39 · Updated last year
- Official code for the paper "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆133 · Updated 6 months ago
- TransMLA: Multi-Head Latent Attention Is All You Need (NeurIPS 2025 Spotlight) ☆393 · Updated last month
- [ICLR 2025 Spotlight] Official implementation for ToST (Token Statistics Transformer) ☆123 · Updated 8 months ago
- When it comes to optimizers, it's always better to be safe than sorry ☆375 · Updated last month
- ☆120 · Updated last year
- [CVPR 2025 Highlight] The official CLIP training codebase of Inf-CL: "Breaking the Memory Barrier: Near Infinite Batch Size Scaling for C… ☆269 · Updated 9 months ago
- ZO2 (Zeroth-Order Offloading): Full-Parameter Fine-Tuning of 175B LLMs with 18GB GPU Memory [COLM 2025] ☆192 · Updated 3 months ago
- Inference code for the paper "Harder Tasks Need More Experts: Dynamic Routing in MoE Models" ☆64 · Updated last year
- DeepSeek Native Sparse Attention PyTorch implementation ☆106 · Updated 3 weeks ago
- qwen-nsa ☆83 · Updated 2 weeks ago
- PyTorch implementation of the Differential Transformer architecture for sequence modeling, specifically tailored as a decoder-only model … ☆78 · Updated last year
- [EMNLP 2024] RWKV-CLIP: A Robust Vision-Language Representation Learner ☆143 · Updated 5 months ago
- Triton implementation of bi-directional (non-causal) linear attention ☆56 · Updated 8 months ago
- ☆108 · Updated 4 months ago