fkodom / dilated-attention-pytorch
(Unofficial) Implementation of dilated attention from "LongNet: Scaling Transformers to 1,000,000,000 Tokens" (https://arxiv.org/abs/2307.02486)
☆51 · Updated last year
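For orientation, below is a minimal, hedged sketch of one dilated-attention branch in plain PyTorch, following the recipe described in the LongNet paper: split the sequence into fixed-length segments, keep every `dilation`-th token within each segment, run dense attention over the sparsified segments, and scatter the outputs back to their original positions. The function name `dilated_attention`, the tensor layout, and the parameter names are illustrative assumptions, not the API of dilated-attention-pytorch; the paper's head-wise offsets and mixing of multiple (segment length, dilation) pairs are omitted.

```python
import torch
import torch.nn.functional as F


def dilated_attention(q, k, v, segment_len, dilation):
    # Illustrative sketch only (not the dilated-attention-pytorch API).
    # q, k, v: (batch, seq_len, heads, head_dim); seq_len must be divisible by segment_len.
    b, n, h, d = q.shape
    out = torch.zeros_like(q)
    segs = n // segment_len

    # 1) Split the sequence into non-overlapping segments of length `segment_len`.
    q = q.reshape(b, segs, segment_len, h, d)
    k = k.reshape(b, segs, segment_len, h, d)
    v = v.reshape(b, segs, segment_len, h, d)

    # 2) Sparsify: keep every `dilation`-th position inside each segment.
    idx = torch.arange(0, segment_len, dilation, device=q.device)
    qs, ks, vs = q[:, :, idx], k[:, :, idx], v[:, :, idx]

    # 3) Dense attention over the sparsified segments (heads before tokens).
    qs, ks, vs = (t.transpose(2, 3) for t in (qs, ks, vs))  # (b, segs, h, tokens, d)
    attended = F.scaled_dot_product_attention(qs, ks, vs)   # requires PyTorch >= 2.0
    attended = attended.transpose(2, 3)                     # (b, segs, tokens, h, d)

    # 4) Scatter attended tokens back; positions dropped by the dilation stay zero.
    out = out.reshape(b, segs, segment_len, h, d)
    out[:, :, idx] = attended
    return out.reshape(b, n, h, d)
```

Under these assumptions, a call like `dilated_attention(q, k, v, segment_len=2048, dilation=4)` on tensors of shape `(batch, seq_len, heads, head_dim)` attends to only a quarter of the positions inside each 2048-token segment, which is the source of LongNet's near-linear scaling.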
Related projects
Alternatives and complementary repositories for dilated-attention-pytorch
- A simple but robust PyTorch implementation of RetNet from "Retentive Network: A Successor to Transformer for Large Language Models" (http… ☆100 · Updated last year
- Implementation of Infini-Transformer in Pytorch ☆104 · Updated last month
- My own attempt at a long context genomics model, leveraging recent advances in long context attention modeling (Flash Attention + other h… ☆52 · Updated last year
- Bi-Directional Equivariant Long-Range DNA Sequence Modeling ☆160 · Updated last month
- Experiments around a simple idea for inducing multiple hierarchical predictive models within a GPT ☆205 · Updated 3 months ago
- Implementation of the Llama architecture with RLHF + Q-learning ☆157 · Updated 11 months ago
- Implementation of GateLoop Transformer in Pytorch and Jax ☆86 · Updated 5 months ago
- Implementation of the conditionally routed attention in the CoLT5 architecture, in Pytorch ☆226 · Updated 2 months ago
- Some personal experiments around routing tokens to different autoregressive attention, akin to mixture-of-experts ☆109 · Updated last month
- Repository for StripedHyena, a state-of-the-art beyond Transformer architecture ☆299 · Updated 8 months ago
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in Pytorch ☆293 · Updated 5 months ago
- Quick implementation of nGPT, learning entirely on the hypersphere, from NvidiaAI ☆259 · Updated 2 weeks ago
- Official implementation of "Hydra: Bidirectional State Space Models Through Generalized Matrix Mixers" ☆103 · Updated 3 months ago
- Implementation of the proposed Adam-atan2 from Google Deepmind in Pytorch ☆94 · Updated this week
- Explorations into the recently proposed Taylor Series Linear Attention ☆90 · Updated 3 months ago
- Implementation of MambaByte from "MambaByte: Token-free Selective State Space Model" in Pytorch and Zeta ☆109 · Updated 2 weeks ago
- Implementation of the proposed minGRU in Pytorch ☆247 · Updated last week
- Implementation of Soft MoE, proposed by Brain's Vision team, in Pytorch ☆248 · Updated 7 months ago
- FlashFFTConv: Efficient Convolutions for Long Sequences with Tensor Cores ☆280 · Updated last month
- Understand and test language model architectures on synthetic tasks. ☆163 · Updated 6 months ago
- PyTorch Implementation of Jamba: "Jamba: A Hybrid Transformer-Mamba Language Model" ☆137 · Updated 2 weeks ago
- Implementation of MoE Mamba from the paper: "MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts" in Pytorch and Ze… ☆84 · Updated last week
- Griffin MQA + Hawk Linear RNN Hybrid ☆85 · Updated 6 months ago
- Implementation of the dilated self-attention as described in "LongNet: Scaling Transformers to 1,000,000,000 Tokens" ☆13 · Updated last year
- Implementation of Recurrent Memory Transformer, NeurIPS 2022 paper, in Pytorch ☆393 · Updated this week
- Some preliminary explorations of Mamba's context scaling. ☆191 · Updated 9 months ago
- ☆176 · Updated this week
- Recurrent Memory Transformer ☆147 · Updated last year
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆214 · Updated this week
- A repository for log-time feedforward networks ☆216 · Updated 7 months ago