belindal / state-tracking
Code and data for paper "(How) do Language Models Track State?"
☆19 · Updated 6 months ago
Alternatives and similar repositories for state-tracking
Users interested in state-tracking are comparing it to the repositories listed below.
- FlexAttention w/ FlashAttention3 Support ☆27 · Updated last year
- Using FlexAttention to compute attention with different masking patterns ☆45 · Updated last year
- Implementation of Hyena Hierarchy in JAX ☆10 · Updated 2 years ago
- Official code for the paper "Attention as a Hypernetwork" ☆43 · Updated last year
- Here we will test various linear attention designs. ☆61 · Updated last year
- Measuring the Signal to Noise Ratio in Language Model Evaluation ☆23 · Updated last month
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Updated last year
- Awesome Triton Resources ☆36 · Updated 5 months ago
- ☆20 · Updated last year
- Parallel Associative Scan for Language Models ☆17 · Updated last year
- ☆48 · Updated last year
- Source-to-Source Debuggable Derivatives in Pure Python ☆15 · Updated last year
- Xmixers: A collection of SOTA efficient token/channel mixers ☆29 · Updated last month
- Bayes-Adaptive RL for LLM Reasoning ☆40 · Updated 4 months ago
- ☆34 · Updated last year
- Kinetics: Rethinking Test-Time Scaling Laws ☆80 · Updated 3 months ago
- Multi-Agent Verification: Scaling Test-Time Compute with Multiple Verifiers ☆22 · Updated 7 months ago
- Triton Implementation of the HyperAttention Algorithm ☆48 · Updated last year
- [ICML 24 NGSM workshop] Associative Recurrent Memory Transformer implementation and scripts for training and evaluation ☆52 · Updated this week
- An efficient implementation of the NSA (Native Sparse Attention) kernel ☆119 · Updated 3 months ago
- M1: Towards Scalable Test-Time Compute with Mamba Reasoning Models ☆40 · Updated 2 months ago
- HGRN2: Gated Linear RNNs with State Expansion ☆54 · Updated last year
- ☆56 · Updated last year
- ☆33 · Updated last year
- Code for the paper "Function-Space Learning Rates" ☆23 · Updated 4 months ago
- ☆22 · Updated 5 months ago
- [NeurIPS 2023] Sparse Modular Activation for Efficient Sequence Modeling ☆39 · Updated last year
- ☆41 · Updated 6 months ago
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling ☆35 · Updated last month
- Stick-breaking attention ☆60 · Updated 3 months ago