Caiyun-AI / DCFormer
☆222 · Updated 11 months ago
Alternatives and similar repositories for DCFormer
Users interested in DCFormer are comparing it to the repositories listed below.
- ☆90 · Updated 8 months ago
- ☆152 · Updated last year
- My implementation of the original transformer model (Vaswani et al.). I've additionally included the playground.py file for visualizing o… ☆44 · Updated last year
- ☆201 · Updated 2 years ago
- A repository for DenseSSMs ☆88 · Updated last year
- Awesome list of papers that extend Mamba to various applications. ☆138 · Updated 8 months ago
- Implementation of MoE Mamba from the paper: "MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts" in Pytorch and Ze… ☆120 · Updated last week
- Simba ☆217 · Updated last year
- An efficient pytorch implementation of selective scan in one file, works with both cpu and gpu, with corresponding mathematical derivatio… ☆101 · Updated 3 months ago (see the selective-scan sketch after this list)
- (Unofficial) PyTorch implementation of grouped-query attention (GQA) from "GQA: Training Generalized Multi-Query Transformer Models from … ☆189 · Updated last year (see the GQA sketch after this list)
- PyTorch implementation of the Differential-Transformer architecture for sequence modeling, specifically tailored as a decoder-only model … ☆86 · Updated last year
- Implementation of Griffin from the paper: "Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models" ☆56 · Updated 3 months ago
- tinybig for deep function learning ☆60 · Updated 8 months ago
- [ICML 2024] Official PyTorch implementation of "SLAB: Efficient Transformers with Simplified Linear Attention and Progressive Re-paramete… ☆108 · Updated last year
- ☆79 · Updated last year
- Official implementation of TransNormerLLM: A Faster and Better LLM ☆250 · Updated 2 years ago
- Implementation of ViTAR: Vision Transformer with Any Resolution in PyTorch ☆39 · Updated last year
- Official PyTorch Implementation of "The Hidden Attention of Mamba Models" ☆231 · Updated 3 months ago
- The official repo for [TPAMI'23] "Vision Transformer with Quadrangle Attention" ☆234 · Updated 4 months ago
- PyTorch Implementation of Jamba: "Jamba: A Hybrid Transformer-Mamba Language Model" ☆206 · Updated 3 weeks ago
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ☆340 · Updated 11 months ago
- State Space Models ☆72 · Updated last year
- Implementation of Switch Transformers from the paper: "Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficien… ☆136 · Updated 3 weeks ago
- Minimal Mamba-2 implementation in PyTorch ☆242 · Updated last year
- ☆125 · Updated last year
- A generalized framework for subspace tuning methods in parameter efficient fine-tuning. ☆171 · Updated last week
- [ICLR 2025 Spotlight] Vision-RWKV: Efficient and Scalable Visual Perception with RWKV-Like Architectures ☆540 · Updated 11 months ago
- ☆43 · Updated last year
- [COLM 2025] LoRI: Reducing Cross-Task Interference in Multi-Task Low-Rank Adaptation ☆166 · Updated 7 months ago
- Official repository of Polarity-aware Linear Attention for Vision Transformers (ICLR 2025) ☆83 · Updated 3 months ago
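For context on the selective-scan repository listed above: below is a minimal sequential reference of the Mamba-style selective scan recurrence. This is an illustrative sketch only, not code from that repository; the function name, tensor shapes, and parameterization are assumptions.

```python
import torch

def selective_scan_ref(u, delta, A, B, C):
    """Sequential reference of a Mamba-style selective scan (illustrative sketch).

    u:     (batch, dim, seq)   input sequence
    delta: (batch, dim, seq)   input-dependent step sizes
    A:     (dim, state)        state-transition parameters
    B, C:  (batch, state, seq) input-dependent input/output projections
    """
    batch, dim, seq = u.shape
    x = torch.zeros(batch, dim, A.shape[1], device=u.device, dtype=u.dtype)
    ys = []
    for t in range(seq):
        dt = delta[:, :, t].unsqueeze(-1)                 # (batch, dim, 1)
        dA = torch.exp(dt * A)                            # discretized transition
        dBu = dt * B[:, :, t].unsqueeze(1) * u[:, :, t].unsqueeze(-1)
        x = dA * x + dBu                                  # state update
        ys.append((x * C[:, :, t].unsqueeze(1)).sum(-1))  # readout: (batch, dim)
    return torch.stack(ys, dim=-1)                        # (batch, dim, seq)
```

Production implementations replace this O(seq) Python loop with a fused parallel scan kernel; the loop above is only meant to make the recurrence explicit.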
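Likewise, for the GQA repository listed above: a minimal sketch of grouped-query attention, in which each group of query heads shares a single key/value head. Names and shapes are illustrative assumptions, not that repository's API.

```python
import torch

def grouped_query_attention(q, k, v):
    """Grouped-query attention sketch: query heads share repeated KV heads.

    q:    (batch, num_heads, seq, head_dim)
    k, v: (batch, num_kv_heads, seq, head_dim), with num_heads % num_kv_heads == 0
    """
    group = q.shape[1] // k.shape[1]
    # Expand each KV head across its group of query heads.
    k = k.repeat_interleave(group, dim=1)
    v = v.repeat_interleave(group, dim=1)
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    return scores.softmax(dim=-1) @ v

# Usage: 8 query heads sharing 2 KV heads (group size 4).
q = torch.randn(1, 8, 16, 64)
k = torch.randn(1, 2, 16, 64)
v = torch.randn(1, 2, 16, 64)
out = grouped_query_attention(q, k, v)  # (1, 8, 16, 64)
```

The appeal of GQA is that the KV cache shrinks by the group factor while attention quality stays close to full multi-head attention.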