fangyuan-ksgk / selective-attention-transformer
Unofficial Implementation of Selective Attention Transformer
☆17 · Updated 10 months ago
Alternatives and similar repositories for selective-attention-transformer
Users interested in selective-attention-transformer are comparing it to the repositories listed below.
- Official PyTorch implementation and models for the paper "Diffusion Beats Autoregressive in Data-Constrained Settings". We find diffusion mod… · ☆92 · Updated 3 weeks ago
- ☆85 · Updated last year
- ☆20 · Updated last year
- ☆35 · Updated 6 months ago
- Official PyTorch Implementation for "Vision-Language Models Create Cross-Modal Task Representations" (ICML 2025) · ☆30 · Updated 4 months ago
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" · ☆83 · Updated 10 months ago
- Remasking Discrete Diffusion Models with Inference-Time Scaling · ☆43 · Updated 6 months ago
- ☆34 · Updated 8 months ago
- Official repository of "LiNeS: Post-training Layer Scaling Prevents Forgetting and Enhances Model Merging" · ☆30 · Updated 10 months ago
- ☆19 · Updated 5 months ago
- ☆39 · Updated 2 weeks ago
- ☆13 · Updated 8 months ago
- HGRN2: Gated Linear RNNs with State Expansion · ☆54 · Updated last year
- DeciMamba: Exploring the Length Extrapolation Potential of Mamba (ICLR 2025) · ☆31 · Updated 5 months ago
- ☆12 · Updated 6 months ago
- The official GitHub repo for "Diffusion Language Models are Super Data Learners" · ☆111 · Updated last month
- The official implementation for "Gated Attention for Large Language Models: Non-linearity, Sparsity, and Attention-Sink-Free" · ☆54 · Updated 4 months ago
- [ICLR 2025 & COLM 2025] Official PyTorch implementation of the Forgetting Transformer and Adaptive Computation Pruning · ☆130 · Updated this week
- Official code for the paper "Attention as a Hypernetwork" · ☆42 · Updated last year
- Tiny re-implementation of MDM in the style of LLaDA and the nano-gpt speedrun · ☆56 · Updated 6 months ago
- SLTrain: a sparse plus low-rank approach for parameter- and memory-efficient pretraining (NeurIPS 2024) · ☆34 · Updated 10 months ago
- One Initialization to Rule them All: Fine-tuning via Explained Variance Adaptation · ☆42 · Updated 11 months ago
- ☆31 · Updated last year
- Multi-Layer Sparse Autoencoders (ICLR 2025) · ☆24 · Updated 7 months ago
- Stick-breaking attention · ☆60 · Updated 2 months ago
- ☆31 · Updated 7 months ago
- Official Code Repository for the paper "Continuous Diffusion Model for Language Modeling" · ☆40 · Updated 6 months ago
- Code for reproducing our paper "Low Rank Adapting Models for Sparse Autoencoder Features" · ☆14 · Updated 5 months ago
- ☆57 · Updated 11 months ago
- The official repository for "SkyLadder: Better and Faster Pretraining via Context Window Scheduling" · ☆34 · Updated 3 weeks ago