akhilkedia / TranformersGetStable
[ICML 2024] Official Repository for the paper "Transformers Get Stable: An End-to-End Signal Propagation Theory for Language Models"
☆10 · Updated 11 months ago
Alternatives and similar repositories for TranformersGetStable
Users interested in TranformersGetStable are comparing it to the repositories listed below.
- Official code for the paper "Attention as a Hypernetwork" ☆39 · Updated last year
- Scaling Sparse Fine-Tuning to Large Language Models ☆16 · Updated last year
- HGRN2: Gated Linear RNNs with State Expansion ☆55 · Updated 10 months ago
- ☆32 · Updated last year
- Unofficial implementation of the paper "Exploring the Space of Key-Value-Query Models with Intention" ☆11 · Updated 2 years ago
- Here we will test various linear attention designs. ☆59 · Updated last year
- Official code repository for the paper "Key-value memory in the brain" ☆26 · Updated 4 months ago
- The open-source materials for the paper "Sparsing Law: Towards Large Language Models with Greater Activation Sparsity" ☆23 · Updated 7 months ago
- Efficient scaling laws and collaborative pretraining ☆16 · Updated 5 months ago
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Updated last year
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) ☆24 · Updated last year
- PyTorch implementation of StableMask (ICML'24) ☆13 · Updated last year
- Official implementation of the paper "A deeper look at depth pruning of LLMs" ☆15 · Updated 11 months ago
- Triton implementation of bi-directional (non-causal) linear attention ☆50 · Updated 4 months ago
- ☆32 · Updated last year
- Official implementation of the ECCV'24 paper: POA