bobby-he / simplified_transformers
☆285 · Updated last month
Alternatives and similar repositories for simplified_transformers:
Users interested in simplified_transformers are comparing it to the libraries listed below.
- Annotated version of the Mamba paper ☆469 · Updated 10 months ago
- Implementation of Soft MoE, proposed by Brain's Vision team, in Pytorch ☆256 · Updated 8 months ago
- Official Implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters ☆477 · Updated this week
- Implementation of a memory-efficient multi-head attention as proposed in the paper "Self-attention Does Not Need O(n²) Memory" (see the chunked-attention sketch after this list) ☆370 · Updated last year
- Implementation of Rotary Embeddings, from the RoFormer paper, in Pytorch (see the rotary-embedding sketch after this list) ☆609 · Updated last month
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in Pytorch ☆297 · Updated 7 months ago
- Reading list for research topics in state-space models ☆253 · Updated 3 weeks ago
- A Pytorch implementation of Sparsely-Gated Mixture of Experts, for massively increasing the parameter count of language models ☆668 · Updated last year
- An implementation of local windowed attention for language modeling ☆403 · Updated this week
- Helpful tools and examples for working with flex-attention ☆583 · Updated this week
- Integrating Mamba/SSMs with Transformer for Enhanced Long Context and High-Quality Sequence Modeling ☆180 · Updated 2 months ago
- A simple but robust PyTorch implementation of RetNet from "Retentive Network: A Successor to Transformer for Large Language Models" (http… ☆105 · Updated last year
- Implementation of Griffin from the paper "Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models" ☆51 · Updated 2 months ago
- [ECCV 2024] Official PyTorch implementation of RoPE-ViT, "Rotary Position Embedding for Vision Transformer" ☆266 · Updated 3 weeks ago
- Implementation of fused cosine similarity attention in the same style as Flash Attention ☆210 · Updated last year
- Simple, minimal implementation of the Mamba SSM in one PyTorch file, using logcumsumexp (Heisen sequence) ☆104 · Updated 3 months ago
- Official PyTorch Implementation of "The Hidden Attention of Mamba Models" ☆209 · Updated 7 months ago
- Official implementation of "Hydra: Bidirectional State Space Models Through Generalized Matrix Mixers" ☆118 · Updated 5 months ago
- Collection of papers on state-space models ☆568 · Updated 3 weeks ago
- Minimal Mamba-2 implementation in PyTorch ☆164 · Updated 7 months ago
- PyTorch implementation of "Jamba: A Hybrid Transformer-Mamba Language Model" ☆154 · Updated 2 months ago
- (Unofficial) PyTorch implementation of grouped-query attention (GQA) from "GQA: Training Generalized Multi-Query Transformer Models from … (see the grouped-query attention sketch after this list) ☆146 · Updated 8 months ago
- Sequence modeling with Mega ☆297 · Updated last year
- Quick implementation of nGPT, learning entirely on the hypersphere, from NvidiaAI ☆270 · Updated 2 months ago
- Official JAX implementation of "Learning to (Learn at Test Time): RNNs with Expressive Hidden States" ☆384 · Updated 5 months ago
- Code release for "Dropout Reduces Underfitting" ☆311 · Updated last year
- FlashFFTConv: Efficient Convolutions for Long Sequences with Tensor Cores ☆292 · Updated 2 weeks ago
- Transformer based on a variant of attention with linear complexity with respect to sequence length ☆724 · Updated 8 months ago
- Implementation of https://srush.github.io/annotated-s4 ☆477 · Updated last year
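
The memory-efficient attention entry above refers to the chunked-softmax trick from "Self-attention Does Not Need O(n²) Memory": keys and values are processed in chunks with running softmax statistics, so the full n×n score matrix is never materialized. A minimal generic sketch of that idea (no masking or dropout; not the linked repo's code):

```python
import torch

def chunked_attention(q, k, v, chunk_size=128):
    """Attention over K/V chunks with an online softmax accumulator."""
    scale = q.shape[-1] ** -0.5
    num = torch.zeros_like(q)                          # running weighted sum of values
    den = q.new_zeros(*q.shape[:-1], 1)                # running sum of exp weights
    running_max = q.new_full((*q.shape[:-1], 1), float("-inf"))
    for i in range(0, k.shape[-2], chunk_size):
        k_c, v_c = k[..., i:i+chunk_size, :], v[..., i:i+chunk_size, :]
        scores = (q @ k_c.transpose(-2, -1)) * scale   # (..., n_q, chunk)
        new_max = torch.maximum(running_max, scores.amax(dim=-1, keepdim=True))
        # Rescale previous accumulators to the new running max for stability.
        correction = (running_max - new_max).exp()
        weights = (scores - new_max).exp()
        num = num * correction + weights @ v_c
        den = den * correction + weights.sum(dim=-1, keepdim=True)
        running_max = new_max
    return num / den

q = k = v = torch.randn(2, 8, 1024, 64)  # (batch, heads, seq, head_dim)
out = chunked_attention(q, k, v)
print(out.shape)  # torch.Size([2, 8, 1024, 64])
```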
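Two of the entries above (the RoFormer implementation and RoPE-ViT) build on rotary position embeddings. The core operation rotates consecutive feature pairs by a position-dependent angle, so query/key dot products depend only on relative position. A minimal PyTorch sketch of that mechanism, not the API of either listed repo:

```python
import torch

def rotary_embedding(x, base=10000.0):
    """Apply rotary position embeddings to x of shape (..., seq_len, dim)."""
    *_, seq_len, dim = x.shape
    # Per-pair frequencies: theta_i = base^(-2i/dim)
    inv_freq = base ** (-torch.arange(0, dim, 2, dtype=x.dtype) / dim)
    angles = torch.arange(seq_len, dtype=x.dtype)[:, None] * inv_freq  # (seq, dim/2)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., 0::2], x[..., 1::2]
    # 2-D rotation applied pairwise: (x1, x2) -> (x1*cos - x2*sin, x1*sin + x2*cos)
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

q = torch.randn(2, 8, 128, 64)  # (batch, heads, seq, head_dim)
q_rot = rotary_embedding(q)
print(q_rot.shape)  # torch.Size([2, 8, 128, 64])
```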
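For the grouped-query attention entry: GQA lets many query heads share a smaller number of K/V heads, shrinking the KV cache while staying close to full multi-head quality. A minimal sketch of the mechanism from the GQA paper, assuming tensors already split into heads (not the linked repo's code):

```python
import torch

def grouped_query_attention(q, k, v, num_kv_heads):
    """q: (batch, num_q_heads, seq, head_dim); k, v: (batch, num_kv_heads, seq, head_dim).
    num_q_heads must be a multiple of num_kv_heads."""
    b, hq, n, d = q.shape
    group = hq // num_kv_heads
    # Repeat each K/V head so every query head in a group attends to the same K/V.
    k = k.repeat_interleave(group, dim=1)
    v = v.repeat_interleave(group, dim=1)
    attn = torch.softmax(q @ k.transpose(-2, -1) / d ** 0.5, dim=-1)
    return attn @ v

q = torch.randn(2, 8, 16, 32)   # 8 query heads
k = torch.randn(2, 2, 16, 32)   # 2 shared K/V heads
v = torch.randn(2, 2, 16, 32)
out = grouped_query_attention(q, k, v, num_kv_heads=2)
print(out.shape)  # torch.Size([2, 8, 16, 32])
```

With num_kv_heads equal to the query head count this reduces to standard multi-head attention; with num_kv_heads=1 it reduces to multi-query attention.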