fangyuan-ksgk / selective-attention-transformer
Unofficial Implementation of Selective Attention Transformer
☆17 · Updated 8 months ago
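For context, selective attention (presumably the mechanism from "Selective Attention Improves Transformer", which this repository reimplements) lets tokens mask the attention that later queries pay to earlier tokens: raw selection scores are accumulated causally and subtracted from the attention logits before the softmax. Below is a minimal NumPy sketch of that idea, assuming the accumulation rule F[i, j] = Σ_{k ≤ i} max(S[k, j], 0) with the diagonal zeroed; all names are illustrative and are not the repository's API.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def selective_attention(q, k, v, s):
    """Single-head causal attention with a selective-attention penalty.

    q, k, v: (T, d) arrays.
    s:       (T, T) raw selection scores; s[i, j] > 0 means token i
             votes to mask token j for subsequent queries.
    """
    T, d = q.shape
    logits = (q @ k.T) / np.sqrt(d)

    sel = np.maximum(s, 0.0)          # only positive selection counts
    np.fill_diagonal(sel, 0.0)        # a token cannot mask itself
    penalty = np.cumsum(sel, axis=0)  # F[i, j] = sum_{k<=i} sel[k, j]

    causal = np.triu(np.ones((T, T), dtype=bool), k=1)
    masked = np.where(causal, -np.inf, logits - penalty)
    return softmax(masked, axis=-1) @ v
```

With `s = 0` this reduces to plain causal attention; a large `s[1, 0]`, for instance, drives the weight that queries at positions i ≥ 1 place on token 0 toward zero.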
Alternatives and similar repositories for selective-attention-transformer
Users interested in selective-attention-transformer are comparing it to the repositories listed below.
- ☆19 · Updated 3 months ago
- ☆33 · Updated 4 months ago
- ☆82 · Updated 10 months ago
- Remasking Discrete Diffusion Models with Inference-Time Scaling ☆34 · Updated 4 months ago
- Code for the NeurIPS 2024 Spotlight "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆75 · Updated 8 months ago
- ☆33 · Updated 6 months ago
- Official PyTorch implementation of "Vision-Language Models Create Cross-Modal Task Representations", ICML 2025 ☆27 · Updated 2 months ago
- ☆32 · Updated 8 months ago
- [ICLR 2025] Official PyTorch implementation of "Forgetting Transformer: Softmax Attention with a Forget Gate" ☆115 · Updated last week
- ☆20 · Updated last year
- ☆28 · Updated last year
- Official repository of "LiNeS: Post-training Layer Scaling Prevents Forgetting and Enhances Model Merging" ☆29 · Updated 8 months ago
- [NeurIPS 2024] Low-rank, memory-efficient optimizer without SVD ☆30 · Updated 2 weeks ago
- ☆13 · Updated 6 months ago
- Official implementation of "DAPE: Data-Adaptive Positional Encoding for Length Extrapolation" ☆38 · Updated 9 months ago
- Official code for the ICML 2024 paper "The Entropy Enigma: Success and Failure of Entropy Minimization" ☆53 · Updated last year
- [ICLR 2025] Monet: Mixture of Monosemantic Experts for Transformers ☆68 · Updated 3 weeks ago
- DeciMamba: Exploring the Length Extrapolation Potential of Mamba (ICLR 2025) ☆28 · Updated 3 months ago
- HGRN2: Gated Linear RNNs with State Expansion ☆55 · Updated 10 months ago
- SLTrain: a sparse plus low-rank approach for parameter- and memory-efficient pretraining (NeurIPS 2024) ☆32 · Updated 8 months ago
- Official implementation of Phi-Mamba, a MOHAWK-distilled model ("Transformers to SSMs: Distilling Quadratic Knowledge to Subquadratic Mode…") ☆110 · Updated 10 months ago
- MambaFormer in-context learning experiments and implementation for https://arxiv.org/abs/2402.04248 ☆55 · Updated last year
- Implementation of the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆99 · Updated last week
- ☆43 · Updated 5 months ago
- One Initialization to Rule Them All: Fine-tuning via Explained Variance Adaptation ☆40 · Updated 9 months ago
- Official code repository for the paper "Continuous Diffusion Model for Language Modeling" ☆34 · Updated 4 months ago
- [ICLR 2025] Official PyTorch implementation of "Mix-LN: Unleashing the Power of Deeper Layers by Combining Pre-LN and Post-LN" by Pengxia… ☆25 · Updated 6 months ago
- What Makes a Reward Model a Good Teacher? An Optimization Perspective ☆34 · Updated 3 weeks ago
- ☆30 · Updated 5 months ago
- Official repository for the ICML 2024 paper "Flora: Low-Rank Adapters Are Secretly Gradient Compressors" ☆104 · Updated last year