srush / annotated-s4
Implementation of https://srush.github.io/annotated-s4
☆502 · Updated 3 months ago
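For context on what the linked repository covers, here is a minimal sketch (in JAX, not code from annotated-s4 itself) of the core S4 idea the tutorial walks through: discretizing a continuous state space model and materializing its convolution kernel. The function names (`discretize`, `ssm_kernel`, `causal_conv`) and the toy parameters are illustrative assumptions, not the repository's API.

```python
# Illustrative sketch of an SSM layer's core computation, assuming the
# standard S4-style formulation x' = Ax + Bu, y = Cx.
import jax
import jax.numpy as jnp

def discretize(A, B, step):
    # Bilinear (Tustin) discretization of the continuous-time system.
    I = jnp.eye(A.shape[0])
    BL = jnp.linalg.inv(I - (step / 2.0) * A)
    Ab = BL @ (I + (step / 2.0) * A)
    Bb = (BL * step) @ B
    return Ab, Bb

def ssm_kernel(Ab, Bb, C, L):
    # Materialize the length-L convolution kernel K_k = C @ Ab^k @ Bb.
    # (Naive O(L * N^3) version for clarity; S4 computes this efficiently.)
    return jnp.stack(
        [(C @ jnp.linalg.matrix_power(Ab, k) @ Bb).reshape(()) for k in range(L)]
    )

def causal_conv(u, K):
    # y = K * u as a causal 1-D convolution, truncated to the input length.
    L = u.shape[0]
    return jnp.convolve(u, K, mode="full")[:L]

# Example usage with random toy parameters (illustrative only).
A = -jnp.eye(4)            # stable toy state matrix
B = jnp.ones((4, 1))
C = jnp.ones((1, 4))
Ab, Bb = discretize(A, B, step=0.1)
K = ssm_kernel(Ab, Bb, C, L=16)
u = jax.random.normal(jax.random.PRNGKey(0), (16,))
y = causal_conv(u, K)
```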
Alternatives and similar repositories for annotated-s4
Users interested in annotated-s4 are comparing it to the libraries listed below.
- ☆302 · Updated 8 months ago
- Annotated version of the Mamba paper ☆490 · Updated last year
- Long Range Arena for Benchmarking Efficient Transformers ☆764 · Updated last year
- ☆281 · Updated last year
- Language Modeling with the H3 State Space Model ☆518 · Updated 2 years ago
- Accelerated First Order Parallel Associative Scan ☆190 · Updated last year
- ☆185 · Updated last year
- ☆164 · Updated 2 years ago
- Sequence modeling with Mega. ☆300 · Updated 2 years ago
- ☆363 · Updated last year
- Neural Networks and the Chomsky Hierarchy ☆209 · Updated last year
- Implementation of Rotary Embeddings, from the RoFormer paper, in PyTorch ☆757 · Updated 2 months ago
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and JAX ☆667 · Updated this week
- ☆256 · Updated 3 months ago
- Code for our NeurIPS 2022 paper ☆369 · Updated 2 years ago
- Named tensors with first-class dimensions for PyTorch ☆332 · Updated 2 years ago
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in PyTorch ☆363 · Updated last year
- Simple, minimal implementation of the Mamba SSM in one PyTorch file. Using logcumsumexp (Heisen sequence). ☆122 · Updated 11 months ago
- For optimization algorithm research and development. ☆539 · Updated last week
- ☆215 · Updated 10 months ago
- Puzzles for exploring transformers ☆371 · Updated 2 years ago
- JAX Synergistic Memory Inspector ☆180 · Updated last year
- An implementation of local windowed attention for language modeling ☆480 · Updated 2 months ago
- An interpreter for RASP as described in the ICML 2021 paper "Thinking Like Transformers" ☆321 · Updated last year
- Implementation of Block Recurrent Transformer - PyTorch ☆222 · Updated last year
- A PyTorch implementation of Perceiver, Perceiver IO and Perceiver AR with PyTorch Lightning scripts for distributed training ☆490 · Updated last year
- Efficient optimizers ☆265 · Updated 2 weeks ago
- MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvement… ☆399 · Updated this week
- Library for reading and processing ML training data. ☆548 · Updated this week
- CLU lets you write beautiful training loops in JAX. ☆356 · Updated 3 months ago