proger / hippogriff
Griffin MQA + Hawk Linear RNN Hybrid
☆89 · Updated last year
Alternatives and similar repositories for hippogriff
Users interested in hippogriff are comparing it to the libraries listed below.
- Pytorch implementation of the PEER block from the paper, Mixture of A Million Experts, by Xu Owen He at DeepMind ☆131 · Updated last month
- Implementation of GateLoop Transformer in Pytorch and Jax ☆91 · Updated last year
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ☆53 · Updated 2 years ago
- Here we will test various linear attention designs. ☆62 · Updated last year
- Implementation of Infini-Transformer in Pytorch ☆113 · Updated 11 months ago
- Explorations into the recently proposed Taylor Series Linear Attention ☆100 · Updated last year
- A byte-level decoder architecture that matches the performance of tokenized Transformers. ☆66 · Updated last year
- ☆50 · Updated last year
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆243 · Updated 5 months ago
- ☆89 · Updated last year
- ☆61 · Updated last year
- Exploration into the proposed "Self Reasoning Tokens" by Felipe Bonetto ☆57 · Updated last year
- ☆82 · Updated last year
- Implementation of 2-simplicial attention proposed by Clift et al. (2019) and the recent attempt to make practical in Fast and Simplex, Ro… ☆47 · Updated 3 months ago
- ☆53 · Updated last year
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆102 · Updated last year
- Randomized Positional Encodings Boost Length Generalization of Transformers ☆83 · Updated last year
- Fast modular code to create and train cutting-edge LLMs ☆68 · Updated last year
- Triton Implementation of HyperAttention Algorithm ☆48 · Updated last year
- Understand and test language model architectures on synthetic tasks. ☆240 · Updated 2 months ago
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆103 · Updated 11 months ago
- Some personal experiments around routing tokens to different autoregressive attention, akin to mixture-of-experts ☆119 · Updated last year
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆38 · Updated 5 months ago
- [NeurIPS 2023 spotlight] Official implementation of HGRN in our NeurIPS 2023 paper - Hierarchically Gated Recurrent Neural Network for Se… ☆66 · Updated last year
- Mixture of A Million Experts ☆50 · Updated last year
- A MAD laboratory to improve AI architecture designs 🧪 ☆135 · Updated 11 months ago
- HGRN2: Gated Linear RNNs with State Expansion ☆55 · Updated last year
- Token Omission Via Attention ☆127 · Updated last year
- Simple and efficient pytorch-native transformer training and inference (batched) ☆78 · Updated last year
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) ☆24 · Updated last year