bobby-he / simplified_transformers
☆290 · Updated 5 months ago
Alternatives and similar repositories for simplified_transformers
Users interested in simplified_transformers are comparing it to the libraries listed below.
- Implementation of a memory-efficient multi-head attention, as proposed in the paper "Self-attention Does Not Need O(n²) Memory" (see the chunked-attention sketch after this list) ☆378 · Updated last year
- Implementation of Soft MoE, proposed by Brain's Vision team, in Pytorch ☆294 · Updated 2 months ago
- Annotated version of the Mamba paper ☆482 · Updated last year
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in Pytorch ☆333 · Updated 11 months ago
- Implementation of Block Recurrent Transformer - Pytorch ☆217 · Updated 9 months ago
- A simple but robust PyTorch implementation of RetNet from "Retentive Network: A Successor to Transformer for Large Language Models" (http… ☆105 · Updated last year
- Official PyTorch Implementation of "The Hidden Attention of Mamba Models" ☆222 · Updated last year
- PyTorch Implementation of Jamba, from "Jamba: A Hybrid Transformer-Mamba Language Model" ☆169 · Updated last month
- Code release for "Dropout Reduces Underfitting" ☆313 · Updated 2 years ago
- Implementation of Recurrent Memory Transformer, NeurIPS 2022 paper, in Pytorch ☆408 · Updated 4 months ago
- [ICLR 2025 Spotlight🔥] Official Implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters ☆559 · Updated 3 months ago
- Implementation of fused cosine similarity attention in the same style as Flash Attention ☆213 · Updated 2 years ago
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" ☆550 · Updated 5 months ago
- Sequence modeling with Mega ☆295 · Updated 2 years ago
- Implementation of Linformer for Pytorch ☆285 · Updated last year
- (Unofficial) PyTorch implementation of grouped-query attention (GQA) from "GQA: Training Generalized Multi-Query Transformer Models from …" (see the GQA sketch after this list) ☆166 · Updated last year
- When it comes to optimizers, it's always better to be safe than sorry ☆233 · Updated 2 months ago
- Implementation of the conditionally routed attention in the CoLT5 architecture, in Pytorch ☆228 · Updated 8 months ago
- Official implementation of TransNormerLLM: A Faster and Better LLM ☆243 · Updated last year
- Huggingface-compatible implementation of RetNet (Retentive Networks, https://arxiv.org/pdf/2307.08621.pdf) including parallel, recurrent,… ☆226 · Updated last year
- Implementation of Griffin from the paper "Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models" ☆55 · Updated last month
- Implementation of Rotary Embeddings, from the Roformer paper, in Pytorch (see the RoPE sketch after this list) ☆681 · Updated 6 months ago
- Code for "Adam-mini: Use Fewer Learning Rates To Gain More" (https://arxiv.org/abs/2406.16793) ☆417 · Updated 3 weeks ago
- Quick implementation of nGPT, learning entirely on the hypersphere, from Nvidia AI ☆282 · Updated 2 months ago
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ☆294 · Updated 3 months ago
- Integrating Mamba/SSMs with Transformer for Enhanced Long Context and High-Quality Sequence Modeling ☆193 · Updated last month
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in Pytorch ☆514 · Updated 2 weeks ago
- Official implementation of "Hydra: Bidirectional State Space Models Through Generalized Matrix Mixers" ☆135 · Updated 4 months ago
- A Pytorch implementation of Sparsely-Gated Mixture of Experts, for massively increasing the parameter count of language models ☆758 · Updated last year
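
For readers comparing the attention variants above, here is a minimal sketch of the chunked-attention idea behind "Self-attention Does Not Need O(n²) Memory": keys are processed in chunks with a running max and sum so the full attention matrix is never materialized. This is an illustrative single-head sketch, not the listed repo's API (the paper also chunks queries, which is omitted here for brevity).

```python
import torch

def chunked_attention(q, k, v, chunk_size=128):
    # q: (seq_q, dim); k, v: (seq_k, dim); single head for clarity.
    scale = q.shape[-1] ** -0.5
    m = q.new_full((q.shape[0], 1), float("-inf"))  # running row-wise max of scores
    l = q.new_zeros(q.shape[0], 1)                  # running sum of exponentials
    acc = torch.zeros_like(q)                       # running weighted sum of values
    for i in range(0, k.shape[0], chunk_size):
        s = (q @ k[i:i + chunk_size].T) * scale     # scores for this key chunk
        m_new = torch.maximum(m, s.max(dim=-1, keepdim=True).values)
        corr = (m - m_new).exp()                    # rescale old stats to the new max
        p = (s - m_new).exp()
        l = l * corr + p.sum(dim=-1, keepdim=True)
        acc = acc * corr + p @ v[i:i + chunk_size]
        m = m_new
    return acc / l  # equals softmax(q k^T / sqrt(d)) @ v

out = chunked_attention(torch.randn(64, 32), torch.randn(1024, 32), torch.randn(1024, 32))
```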
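A minimal sketch of grouped-query attention as described in the GQA paper: several query heads share each key/value head, shrinking the KV cache. Shapes and the function name are assumptions for illustration, not the unofficial repo's API.

```python
import torch

def grouped_query_attention(q, k, v):
    # q: (batch, n_q_heads, seq, dim); k, v: (batch, n_kv_heads, seq, dim)
    n_q_heads, n_kv_heads = q.shape[1], k.shape[1]
    assert n_q_heads % n_kv_heads == 0
    group = n_q_heads // n_kv_heads
    # Expand each KV head so that `group` consecutive query heads share it.
    k = k.repeat_interleave(group, dim=1)
    v = v.repeat_interleave(group, dim=1)
    attn = torch.softmax(q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5, dim=-1)
    return attn @ v

q = torch.randn(2, 8, 16, 64)  # 8 query heads
k = torch.randn(2, 2, 16, 64)  # 2 shared KV heads
v = torch.randn(2, 2, 16, 64)
out = grouped_query_attention(q, k, v)  # (2, 8, 16, 64)
```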
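And a minimal sketch of rotary position embeddings (RoPE) from the Roformer paper: channel pairs are rotated by a position-dependent angle, so dot products between rotated queries and keys depend only on relative position. This uses the half-split pairing convention; the function name and shapes are assumptions, not the listed library's API.

```python
import torch

def rotary_embed(x, base=10000.0):
    # x: (batch, heads, seq, dim) with even dim
    *_, seq, dim = x.shape
    half = dim // 2
    freqs = base ** (-torch.arange(half, dtype=torch.float32) / half)
    angles = torch.arange(seq, dtype=torch.float32)[:, None] * freqs[None, :]  # (seq, half)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., :half], x[..., half:]
    # Rotate each (x1, x2) channel pair by its position-dependent angle.
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

q = rotary_embed(torch.randn(1, 4, 32, 64))
k = rotary_embed(torch.randn(1, 4, 32, 64))
```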