irhum / hyena
JAX/Flax implementation of the Hyena Hierarchy
☆34 · Updated 2 years ago
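For readers new to the architecture: Hyena replaces attention with a recurrence that alternates implicit long convolutions (evaluated via FFT) with element-wise gating. Below is a minimal JAX sketch of the order-2 operator; the function and argument names are illustrative and are not this repository's API.

```python
import jax.numpy as jnp

def fft_long_conv(u, k):
    """Long convolution of signal u with causal filter k via FFT.

    u, k: arrays of shape (..., seqlen). Zero-padding to 2 * seqlen
    turns the FFT's circular convolution into a linear one.
    """
    seqlen = u.shape[-1]
    fft_size = 2 * seqlen
    u_f = jnp.fft.rfft(u, n=fft_size)
    k_f = jnp.fft.rfft(k, n=fft_size)
    return jnp.fft.irfft(u_f * k_f, n=fft_size)[..., :seqlen]

def hyena_order2(v, x1, x2, k1, k2):
    """Order-2 Hyena operator: long conv, gate, long conv, gate.

    v, x1, x2 are linear projections of the input (shape (..., seqlen));
    k1, k2 are implicitly parameterized long filters of the same length.
    """
    y = x1 * fft_long_conv(v, k1)
    return x2 * fft_long_conv(y, k2)
```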
Alternatives and similar repositories for hyena
Users interested in hyena are comparing it to the libraries listed below:
- An annotated implementation of the Hyena Hierarchy paper · ☆34 · Updated 2 years ago
- Engineering the state of RNN language models (Mamba, RWKV, etc.) · ☆32 · Updated last year
- Unofficial but Efficient Implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX · ☆93 · Updated 2 years ago
- Implementation of GateLoop Transformer in Pytorch and Jax · ☆92 · Updated last year
- Triton Implementation of HyperAttention Algorithm · ☆48 · Updated 2 years ago
- ☆13 · Updated last month
- some common Huggingface transformers in maximal update parametrization (µP) · ☆87 · Updated 3 years ago
- Minimum Description Length probing for neural network representations · ☆20 · Updated last year
- LayerNorm(SmallInit(Embedding)) in a Transformer to improve convergence · ☆61 · Updated 3 years ago
- ☆62 · Updated last year
- CUDA implementation of autoregressive linear attention, with all the latest research findings · ☆46 · Updated 2 years ago
- Your favourite classical machine learning algos on the GPU/TPU · ☆21 · Updated last month
- Parallel Associative Scan for Language Models (see the scan sketch after this list) · ☆18 · Updated 2 years ago
- My own attempt at a long context genomics model, leveraging recent advances in long context attention modeling (Flash Attention + other h…) · ☆54 · Updated 2 years ago
- Source-to-Source Debuggable Derivatives in Pure Python · ☆15 · Updated 2 years ago
- ☆82 · Updated last year
- ☆35 · Updated last year
- HomebrewNLP in JAX flavour for maintainable TPU training · ☆51 · Updated 2 years ago
- ☆32 · Updated 2 years ago
- ☆18 · Updated last year
- Embedding Recycling for Language models · ☆38 · Updated 2 years ago
- ☆31 · Updated 2 weeks ago
- PyTorch implementation for "Long Horizon Temperature Scaling", ICML 2023 · ☆20 · Updated 2 years ago
- My explorations into editing the knowledge and memories of an attention network · ☆35 · Updated 3 years ago
- Code Release for "Broken Neural Scaling Laws" (BNSL) paper · ☆59 · Updated 2 years ago
- Blog post · ☆17 · Updated last year
- ☆35 · Updated last year
- ☆51 · Updated 2 years ago
- ResiDual: Transformer with Dual Residual Connections, https://arxiv.org/abs/2304.14802 · ☆97 · Updated 2 years ago
- Official Repository of Pretraining Without Attention (BiGS), BiGS is the first model to achieve BERT-level transfer learning on the GLUE … · ☆116 · Updated last year
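The parallel associative scan listed above is the primitive behind parallel-in-time training of linear-recurrence models such as Mamba and GateLoop. A minimal sketch using `jax.lax.associative_scan` is below, assuming an elementwise (diagonal) transition `a_t` and input `b_t`; the names are illustrative, not that repository's API.

```python
import jax
import jax.numpy as jnp

def linear_recurrence(a, b):
    """Compute h_t = a_t * h_{t-1} + b_t (with h_0 = 0) for all t.

    a, b: arrays of shape (seqlen, ...). Composing two affine steps
    x -> a * x + b is associative, so jax.lax.associative_scan can
    evaluate the whole recurrence in O(log seqlen) parallel depth
    instead of a sequential loop over time.
    """
    def combine(left, right):
        a_l, b_l = left
        a_r, b_r = right
        # Composition of x -> a_l * x + b_l followed by x -> a_r * x + b_r.
        return a_l * a_r, a_r * b_l + b_r

    _, h = jax.lax.associative_scan(combine, (a, b))
    return h

# Tiny usage check: with a_t = 1 the recurrence reduces to a cumulative sum.
a = jnp.ones(4)
b = jnp.arange(4.0)
print(linear_recurrence(a, b))  # [0. 1. 3. 6.]
```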