bobby-he / simplified_transformers
☆292 · Updated 6 months ago
Alternatives and similar repositories for simplified_transformers
Users interested in simplified_transformers are comparing it to the libraries listed below
- Implementation of Soft MoE, proposed by Brain's Vision team, in Pytorch ☆298 · Updated 2 months ago
- Annotated version of the Mamba paper ☆485 · Updated last year
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in Pytorch ☆341 · Updated last year
- Implementation of a memory efficient multi-head attention as proposed in the paper, "Self-attention Does Not Need O(n²) Memory" (sketched below) ☆379 · Updated last year
- A simple but robust PyTorch implementation of RetNet from "Retentive Network: A Successor to Transformer for Large Language Models" (http… ☆106 · Updated last year
- ☆286 · Updated 2 months ago
- ☆190 · Updated this week
- Collection of papers on state-space models ☆595 · Updated last month
- An implementation of local windowed attention for language modeling ☆454 · Updated 5 months ago
- [ICLR2025 Spotlight🔥] Official Implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters ☆562 · Updated 4 months ago
- Sequence modeling with Mega. ☆296 · Updated 2 years ago
- Implementation of Infini-Transformer in Pytorch ☆111 · Updated 5 months ago
- Official implementation of "Hydra: Bidirectional State Space Models Through Generalized Matrix Mixers" ☆138 · Updated 4 months ago
- Official PyTorch Implementation of "The Hidden Attention of Mamba Models" ☆223 · Updated last year
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" ☆554 · Updated 5 months ago
- FlashFFTConv: Efficient Convolutions for Long Sequences with Tensor Cores ☆319 · Updated 5 months ago
- Implementation of Block Recurrent Transformer - Pytorch ☆219 · Updated 10 months ago
- Implementation of fused cosine similarity attention in the same style as Flash Attention ☆214 · Updated 2 years ago
- Huggingface compatible implementation of RetNet (Retentive Networks, https://arxiv.org/pdf/2307.08621.pdf) including parallel, recurrent,… ☆226 · Updated last year
- (Unofficial) PyTorch implementation of grouped-query attention (GQA) from "GQA: Training Generalized Multi-Query Transformer Models from … (sketched below) ☆170 · Updated last year
- Implementation of Rotary Embeddings, from the Roformer paper, in Pytorch (sketched below) ☆694 · Updated 6 months ago
- ☆166 · Updated last year
- Helpful tools and examples for working with flex-attention ☆831 · Updated 2 weeks ago
- Implementation of the proposed minGRU in Pytorch (sketched below) ☆299 · Updated 3 months ago
- ☆191 · Updated last year
- The official implementation of Tensor ProducT ATTenTion Transformer (T6) (https://arxiv.org/abs/2501.06425) ☆375 · Updated this week
- Official implementation of TransNormerLLM: A Faster and Better LLM ☆245 · Updated last year
- When it comes to optimizers, it's always better to be safe than sorry ☆241 · Updated 2 months ago
- Understand and test language model architectures on synthetic tasks. ☆217 · Updated 2 weeks ago
- Implementation of Linformer for Pytorch (sketched below) ☆288 · Updated last year
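
Quick sketches for a few of the techniques listed above. First, the memory-efficient attention entry: a minimal sketch of the chunked-attention idea from "Self-attention Does Not Need O(n²) Memory", which streams over key/value chunks with a running max and normalizer (an online softmax) so the full n×n score matrix is never materialized. Function and variable names here are illustrative, not the repo's API.

```python
import torch

def chunked_attention(q, k, v, chunk_size=128):
    """Single-head attention that never materializes the (n x n) score
    matrix; keys/values are consumed in chunks, with running statistics
    kept per query for numerical stability."""
    scale = q.shape[-1] ** -0.5
    m = torch.full((q.shape[0],), float("-inf"))  # running max per query
    l = torch.zeros(q.shape[0])                   # running normalizer
    o = torch.zeros_like(q)                       # running weighted values
    for start in range(0, k.shape[0], chunk_size):
        kc, vc = k[start:start + chunk_size], v[start:start + chunk_size]
        s = (q @ kc.T) * scale                    # scores for this chunk
        m_new = torch.maximum(m, s.max(dim=-1).values)
        p = torch.exp(s - m_new[:, None])         # chunk weights, re-based
        corr = torch.exp(m - m_new)               # rescale old accumulators
        l = l * corr + p.sum(dim=-1)
        o = o * corr[:, None] + p @ vc
        m = m_new
    return o / l[:, None]

q, k, v = torch.randn(64, 32), torch.randn(1024, 32), torch.randn(1024, 32)
ref = torch.softmax((q @ k.T) * 32 ** -0.5, dim=-1) @ v
assert torch.allclose(chunked_attention(q, k, v), ref, atol=1e-4)
```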
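
For the GQA entry: a rough sketch assuming the standard formulation, in which query heads are split into groups and each group shares a single key/value head. Shapes and the function name are illustrative.

```python
import torch

def grouped_query_attention(q, k, v):
    """q: (batch, n_q_heads, seq, dim); k, v: (batch, n_kv_heads, seq, dim),
    where n_q_heads is a multiple of n_kv_heads."""
    group = q.shape[1] // k.shape[1]
    # Broadcast each K/V head across its group of query heads.
    k = k.repeat_interleave(group, dim=1)
    v = v.repeat_interleave(group, dim=1)
    attn = torch.softmax(q @ k.transpose(-2, -1) * q.shape[-1] ** -0.5, dim=-1)
    return attn @ v

q = torch.randn(2, 8, 16, 64)                                # 8 query heads
k, v = torch.randn(2, 2, 16, 64), torch.randn(2, 2, 16, 64)  # 2 KV heads
print(grouped_query_attention(q, k, v).shape)  # torch.Size([2, 8, 16, 64])
```

With one KV head this reduces to multi-query attention; with as many KV heads as query heads it is ordinary multi-head attention.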
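
For the rotary-embeddings entry: a compact sketch of RoPE as described in the RoFormer paper. Each pair of channels is rotated by a position-dependent angle, and the function would be applied to queries and keys before the attention product. Names are illustrative, not the repo's API.

```python
import torch

def rotary_embed(x, base=10000.0):
    """x: (seq, dim) with even dim; rotates channel pair i at position t
    by angle t * base**(-2i/dim), so q·k depends only on relative offset."""
    seq, dim = x.shape
    inv_freq = base ** (-torch.arange(0, dim, 2) / dim)      # (dim/2,)
    angles = torch.arange(seq)[:, None] * inv_freq[None, :]  # (seq, dim/2)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = torch.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin   # 2-D rotation of each channel pair
    out[:, 1::2] = x1 * sin + x2 * cos
    return out
```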
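
For the minGRU entry: a sequential sketch of the minGRU recurrence from "Were RNNs All We Needed?", the paper that repo implements. Because the gate and candidate depend only on the current input, not on the previous hidden state, the recurrence also admits a parallel scan for training; the module below is illustrative, not the repo's interface.

```python
import torch
import torch.nn as nn

class MinGRU(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.to_z = nn.Linear(dim, dim)  # gate: depends on input only
        self.to_h = nn.Linear(dim, dim)  # candidate: no tanh, no h_{t-1}

    def forward(self, x):                # x: (batch, seq, dim)
        h = x.new_zeros(x.shape[0], x.shape[2])
        outs = []
        for t in range(x.shape[1]):
            z = torch.sigmoid(self.to_z(x[:, t]))
            h = (1 - z) * h + z * self.to_h(x[:, t])
            outs.append(h)
        return torch.stack(outs, dim=1)
```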
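
For the Linformer entry: a bare-bones sketch of its low-rank attention, projecting keys and values along the sequence axis down to a fixed length k so attention costs O(n·k) rather than O(n²). Class and parameter names are illustrative, and details such as sharing the projections across heads are omitted.

```python
import torch
import torch.nn as nn

class LinformerAttention(nn.Module):
    def __init__(self, dim, seq_len, k=64):
        super().__init__()
        self.scale = dim ** -0.5
        # Learned (k x seq_len) projections that compress the sequence axis.
        self.proj_k = nn.Parameter(torch.randn(k, seq_len) / seq_len ** 0.5)
        self.proj_v = nn.Parameter(torch.randn(k, seq_len) / seq_len ** 0.5)

    def forward(self, q, k, v):          # each (batch, seq, dim)
        k = torch.einsum("ks,bsd->bkd", self.proj_k, k)  # (batch, k, dim)
        v = torch.einsum("ks,bsd->bkd", self.proj_v, v)
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        return attn @ v                  # (batch, seq, dim)
```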