proger / hippogriff
Griffin MQA + Hawk Linear RNN Hybrid
☆89 · Updated last year
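For context on the architecture named above: Griffin-style models interleave multi-query attention (MQA) blocks with Hawk's gated linear recurrence (the RG-LRU). Below is a minimal, hypothetical PyTorch sketch of such a gated linear recurrence, not the repository's actual code; all names are invented for illustration, and the real RG-LRU additionally uses an input gate and a normalizing term on the input branch.

```python
# Illustrative sketch only (hypothetical names, not hippogriff's code):
# a gated linear recurrence of the kind Hawk/Griffin-style models use.
import torch
import torch.nn as nn

class GatedLinearRecurrence(nn.Module):
    """h_t = a_t * h_{t-1} + (1 - a_t) * x_t, with an input-dependent gate a_t."""

    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim)
        a = torch.sigmoid(self.gate(x))   # per-step, per-channel decay in (0, 1)
        h = torch.zeros_like(x[:, 0])     # initial hidden state, (batch, dim)
        outs = []
        for t in range(x.shape[1]):       # sequential scan, written for clarity;
            h = a[:, t] * h + (1 - a[:, t]) * x[:, t]  # real kernels parallelize this
            outs.append(h)
        return torch.stack(outs, dim=1)   # (batch, seq_len, dim)

# Usage: y = GatedLinearRecurrence(256)(torch.randn(2, 16, 256))
```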
Alternatives and similar repositories for hippogriff
Users interested in hippogriff are comparing it to the libraries listed below.
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆133 · Updated last month
- Implementation of GateLoop Transformer in PyTorch and JAX ☆91 · Updated last year
- A byte-level decoder architecture that matches the performance of tokenized Transformers. ☆66 · Updated last year
- Here we will test various linear attention designs. ☆62 · Updated last year
- Implementation of Infini-Transformer in PyTorch ☆113 · Updated 11 months ago
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆38 · Updated 6 months ago
- Mixture of A Million Experts ☆52 · Updated last year
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ☆53 · Updated 2 years ago
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆243 · Updated 6 months ago
- A MAD laboratory to improve AI architecture designs 🧪 ☆135 · Updated last year
- ☆51 · Updated last year
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆102 · Updated last year
- Exploration into the proposed "Self Reasoning Tokens" by Felipe Bonetto ☆57 · Updated last year
- Explorations into the recently proposed Taylor Series Linear Attention ☆100 · Updated last year
- ☆91 · Updated last year
- [NeurIPS 2023 spotlight] Official implementation of HGRN in our NeurIPS 2023 paper - Hierarchically Gated Recurrent Neural Network for Se… ☆66 · Updated last year
- Triton implementation of the HyperAttention algorithm ☆48 · Updated 2 years ago
- Fast modular code to create and train cutting-edge LLMs ☆68 · Updated last year
- Evaluating the Mamba architecture on the Othello game ☆49 · Updated last year
- RWKV, in easy-to-read code ☆72 · Updated 9 months ago
- Some personal experiments around routing tokens to different autoregressive attention, akin to mixture-of-experts ☆121 · Updated last year
- ☆82 · Updated last year
- Accelerated First Order Parallel Associative Scan ☆193 · Updated last year
- RWKV-7: Surpassing GPT ☆101 · Updated last year
- Understand and test language model architectures on synthetic tasks. ☆246 · Updated 3 months ago
- ☆62 · Updated last year
- ☆83 · Updated 2 years ago
- Tiled Flash Linear Attention library for fast and efficient mLSTM kernels. ☆79 · Updated last month
- ☆53 · Updated last year
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) ☆24 · Updated last year