DonRL10 / RetNet
An implementation of the paper "Retentive Network: A Successor to Transformer for Large Language Models" (https://arxiv.org/pdf/2307.08621.pdf)
☆12 · Updated last year
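For context, the paper's core retention operator has two equivalent forms: a parallel form, Retention(X) = (QKᵀ ⊙ D)V with a causal decay mask D[n, m] = γ^(n−m) for n ≥ m (else 0), used for training, and a recurrent form, Sₙ = γSₙ₋₁ + kₙᵀvₙ with oₙ = qₙSₙ, used for O(1)-per-token decoding. A minimal PyTorch sketch of both (function names and tensor shapes are illustrative, not taken from this repo):

```python
import torch

def retention_parallel(q, k, v, gamma: float):
    # Parallel form: Retention(X) = (Q K^T ⊙ D) V,
    # with D[n, m] = gamma^(n-m) for n >= m, else 0.
    # q, k, v: (batch, seq_len, d_head); gamma is a per-head decay in (0, 1).
    seq_len = q.shape[1]
    idx = torch.arange(seq_len, device=q.device)
    # Causal decay mask: gamma^(n-m) on and below the diagonal, zero above.
    dist = (idx[:, None] - idx[None, :]).clamp(min=0).float()
    d = (gamma ** dist) * torch.tril(torch.ones(seq_len, seq_len, device=q.device))
    return (q @ k.transpose(-1, -2) * d) @ v

def retention_recurrent(q, k, v, gamma: float):
    # Equivalent recurrent form with O(1) state per step:
    # S_n = gamma * S_{n-1} + k_n^T v_n;  o_n = q_n S_n.
    batch, seq_len, d = q.shape
    state = torch.zeros(batch, d, d, device=q.device)
    outputs = []
    for n in range(seq_len):
        # Outer product k_n^T v_n accumulated into the decayed state.
        state = gamma * state + k[:, n, :, None] * v[:, n, None, :]
        outputs.append((q[:, n, None, :] @ state).squeeze(1))
    return torch.stack(outputs, dim=1)
```

The two forms produce the same output: unrolling the recurrence gives oₙ = Σ_{m≤n} γ^(n−m)(qₙ·kₘ)vₘ, which is exactly row n of (QKᵀ ⊙ D)V. In the paper γ is fixed per head rather than learned.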
Alternatives and similar repositories for RetNet:
Users who are interested in RetNet are comparing it to the libraries listed below:
- [NeurIPS 2023 spotlight] Official implementation of HGRN in our NeurIPS 2023 paper - Hierarchically Gated Recurrent Neural Network for Se… ☆64 · Updated last year
- ☆53 · Updated 9 months ago
- A large-scale RWKV v6, v7 (World, ARWKV, PRWKV) inference. Capable of inference by combining multiple states (Pseudo MoE). Easy to deploy o… ☆35 · Updated this week
- ☆27 · Updated 9 months ago
- ☆32 · Updated last year
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆37 · Updated last year
- JAX implementation of "Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models" ☆14 · Updated 11 months ago
- ☆23 · Updated 11 months ago
- Official PyTorch implementation of LoRI: Reducing Cross-Task Interference in Multi-Task Low-Rank Adaptation. ☆13 · Updated 2 weeks ago
- ☆23 · Updated 7 months ago
- ☆13 · Updated last year
- HGRN2: Gated Linear RNNs with State Expansion ☆54 · Updated 8 months ago
- Official repository for Efficient Linear-Time Attention Transformers. ☆18 · Updated 10 months ago
- My explorations into editing the knowledge and memories of an attention network ☆34 · Updated 2 years ago
- ☆19 · Updated 2 years ago
- PyTorch implementation of Retentive Network: A Successor to Transformer for Large Language Models ☆14 · Updated last year
- ☆47 · Updated last year
- ☆14 · Updated 2 years ago
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆42 · Updated 5 months ago
- Contextual Position Encoding, but with some custom CUDA kernels (https://arxiv.org/abs/2405.18719) ☆22 · Updated 10 months ago
- Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in pyTO… ☆55 · Updated this week
- Blog post ☆17 · Updated last year
- ☆24 · Updated last month
- sigma-MoE layer ☆18 · Updated last year
- GoldFinch and other hybrid transformer components ☆45 · Updated 9 months ago
- Implementation of GateLoop Transformer in PyTorch and JAX ☆87 · Updated 10 months ago
- ☆16 · Updated 2 years ago
- My implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated ☆31 · Updated 8 months ago
- ☆22 · Updated last year
- 32 times longer context window than vanilla Transformers and up to 4 times longer than memory-efficient Transformers. ☆47 · Updated last year