fangyuan-ksgk / selective-attention-transformer
Unofficial Implementation of Selective Attention Transformer
☆16 · Updated 6 months ago
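For context, here is a minimal sketch of the selective-attention mechanism the repo implements, following the description in "Selective Attention Improves Transformer" (Leviathan et al., 2024): one head's non-negative attention logits act as selection scores, are accumulated over earlier query positions, and are subtracted from every head's logits before the softmax. The function name, tensor shapes, and the exact masking details (which head supplies the scores, diagonal handling) are illustrative assumptions, not this repo's actual code.

```python
import torch

def selective_attention(q, k, v, sel_head=0):
    """Causal multi-head attention with a selective-attention penalty
    (assumed formulation): tokens vote to down-weight earlier tokens
    for all subsequent positions.

    q, k, v: (batch, heads, seq_len, head_dim)
    """
    B, H, N, D = q.shape
    logits = q @ k.transpose(-2, -1) / D ** 0.5        # (B, H, N, N)
    future = torch.triu(torch.ones(N, N, dtype=torch.bool, device=q.device), diagonal=1)
    logits = logits.masked_fill(future, float("-inf"))

    # Selection scores from one designated head; only non-negative
    # scores count, and a token is not allowed to mask itself.
    s = logits[:, sel_head].clamp(min=0)               # (B, N, N); -inf -> 0
    s = s * (1 - torch.eye(N, device=q.device))        # zero the diagonal

    # f[i, j] = sum of selections against key j from queries k < i,
    # so a token's vote only affects strictly later positions.
    f = s.cumsum(dim=-2) - s                           # exclusive prefix sum

    # Subtract the accumulated penalty from every head before softmax.
    attn = torch.softmax(logits - f.unsqueeze(1), dim=-1)
    return attn @ v                                    # (B, H, N, head_dim)
```

Because the penalty is derived from logits the model already computes, this formulation adds no new parameters; per the paper, that is the main appeal of the method.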
Alternatives and similar repositories for selective-attention-transformer
Users interested in selective-attention-transformer are comparing it to the repositories listed below.
- ☆29 · Updated 2 months ago
- ☆78 · Updated 8 months ago
- Official repository of "LiNeS: Post-training Layer Scaling Prevents Forgetting and Enhances Model Merging" ☆26 · Updated 6 months ago
- ☆18 · Updated last month
- ☆12 · Updated 4 months ago
- Tiny re-implementation of MDM in the style of LLaDA and the nano-gpt speedrun ☆50 · Updated 2 months ago
- ☆19 · Updated 10 months ago
- This repo is based on https://github.com/jiaweizzhao/GaLore ☆27 · Updated 7 months ago
- ☆53 · Updated 7 months ago
- [ICLR 2025] Official PyTorch implementation of "Forgetting Transformer: Softmax Attention with a Forget Gate" ☆99 · Updated this week
- ☆13 · Updated 3 weeks ago
- Remasking Discrete Diffusion Models with Inference-Time Scaling ☆19 · Updated 2 months ago
- Official implementation of the ECCV24 paper: POA ☆24 · Updated 9 months ago
- ☆31 · Updated 4 months ago
- [ICLR 2025] Official PyTorch Implementation of "Mix-LN: Unleashing the Power of Deeper Layers by Combining Pre-LN and Post-LN" by Pengxia… ☆21 · Updated 4 months ago
- Official code for the paper "Attention as a Hypernetwork" ☆33 · Updated 10 months ago
- [ICLR 2025] Monet: Mixture of Monosemantic Experts for Transformers ☆68 · Updated 3 months ago
- User-friendly implementation of the Mixture-of-Sparse-Attention (MoSA). MoSA selects distinct tokens for each head with expert choice rou… ☆13 · Updated 2 weeks ago
- ☆17 · Updated 4 months ago
- Here we will test various linear attention designs. ☆60 · Updated last year
- HGRN2: Gated Linear RNNs with State Expansion ☆54 · Updated 8 months ago
- DeciMamba: Exploring the Length Extrapolation Potential of Mamba (ICLR 2025) ☆27 · Updated last month
- Code for the paper "Function-Space Learning Rates" ☆20 · Updated 3 weeks ago
- ☆31 · Updated last year
- ☆26 · Updated last year
- The official repo of continuous speculative decoding ☆26 · Updated last month
- Code for experiments on transformers using Markovian data. ☆14 · Updated 5 months ago
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Updated last year
- Stick-breaking attention ☆53 · Updated 2 months ago
- Official implementation of "The Sparse Frontier: Sparse Attention Trade-offs in Transformer LLMs" ☆27 · Updated 3 weeks ago