Hannibal046 / RWKV-howto
Possibly useful materials for learning the RWKV language model.
☆24 · Updated last year
Alternatives and similar repositories for RWKV-howto:
Users interested in RWKV-howto are comparing it to the repositories listed below.
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆36 · Updated last year
- Griffin MQA + Hawk Linear RNN Hybrid ☆85 · Updated 9 months ago
- Here we will test various linear attention designs. ☆58 · Updated 9 months ago
- HGRN2: Gated Linear RNNs with State Expansion ☆52 · Updated 5 months ago
- ☆43 · Updated 3 months ago
- RWKV model implementation ☆37 · Updated last year
- Implementation of a modular, high-performance, and simplistic mamba for high-speed applications ☆33 · Updated 2 months ago
- GoldFinch and other hybrid transformer components ☆43 · Updated 6 months ago
- [NeurIPS 2023 spotlight] Official implementation of HGRN in our NeurIPS 2023 paper - Hierarchically Gated Recurrent Neural Network for Se… ☆62 · Updated 9 months ago
- sigma-MoE layer ☆18 · Updated last year
- Some personal experiments around routing tokens to different autoregressive attention, akin to mixture-of-experts ☆113 · Updated 3 months ago
- Utilities for Training Very Large Models ☆57 · Updated 4 months ago
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆96 · Updated 4 months ago
- ☆26 · Updated 11 months ago
- ☆32 · Updated last year
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best… (see the minimal sketch after this list) ☆21 · Updated 9 months ago
- Implementation of GateLoop Transformer in Pytorch and Jax ☆87 · Updated 7 months ago
- This is the code that went into our practical dive using mamba as information extraction ☆51 · Updated last year
- ☆36 · Updated 8 months ago
- ☆46 · Updated last year
- ☆80 · Updated 4 months ago
- Implementation of 🌻 Mirasol, SOTA Multimodal Autoregressive model out of Google Deepmind, in Pytorch ☆88 · Updated last year
- My Implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated ☆31 · Updated 5 months ago
- Fast and memory-efficient exact attention ☆57 · Updated last month
- ☆32 · Updated last year
- Implementation of Griffin from the paper: "Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models" ☆51 · Updated this week
- Minimum Description Length probing for neural network representations ☆18 · Updated this week
- Evaluating the Mamba architecture on the Othello game ☆44 · Updated 9 months ago
- Official repository of paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆25 · Updated 9 months ago
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ☆52 · Updated last year
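For readers coming from the RWKV entry above: the "trainable like a GPT, runs like an RNN" claim rests on the fact that RWKV's time mixing can be written either as a weighted sum over past positions (parallelizable across the sequence) or as a constant-size-state recurrence. The sketch below is a minimal, illustrative NumPy version of a simplified RWKV-4-style WKV operator; it is not code from any of the listed repositories, and the function and variable names are assumptions made for this example.

```python
import numpy as np

def wkv_recurrent(k, v, w, u):
    """Sequential (RNN-mode) evaluation of a simplified WKV recurrence.

    k, v : (T, C) per-position key / value channels
    w    : (C,)   per-channel positive decay rate
    u    : (C,)   per-channel bonus applied only to the current position
    """
    T, C = k.shape
    out = np.zeros((T, C))
    num = np.zeros(C)  # running decayed sum of exp(k_i) * v_i over past positions
    den = np.zeros(C)  # running decayed sum of exp(k_i) over past positions
    for t in range(T):
        e_cur = np.exp(u + k[t])          # the current token gets the "u" bonus
        out[t] = (num + e_cur * v[t]) / (den + e_cur)
        e_k = np.exp(k[t])                # fold the current token into the state
        num = np.exp(-w) * num + e_k * v[t]
        den = np.exp(-w) * den + e_k
    return out

def wkv_parallel(k, v, w, u):
    """Equivalent parallel (training-mode) evaluation, written O(T^2) for clarity."""
    T, C = k.shape
    out = np.zeros((T, C))
    for t in range(T):
        idx = np.arange(t)
        # past position i is weighted by exp(-(t - 1 - i) * w + k_i)
        wts = np.exp(-(t - 1 - idx)[:, None] * w + k[:t])
        e_cur = np.exp(u + k[t])
        out[t] = ((wts * v[:t]).sum(axis=0) + e_cur * v[t]) / (wts.sum(axis=0) + e_cur)
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, C = 8, 4
    k, v = rng.normal(size=(T, C)), rng.normal(size=(T, C))
    w, u = np.abs(rng.normal(size=C)), rng.normal(size=C)
    # Both formulations give the same output up to floating-point error.
    assert np.allclose(wkv_recurrent(k, v, w, u), wkv_parallel(k, v, w, u))
```

Real RWKV implementations additionally stabilize the exponentials (tracking a running maximum exponent) and fuse the recurrence into a custom kernel; this sketch omits both for readability.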