expz / annotated-hyena
An annotated implementation of the Hyena Hierarchy paper
☆33 · Updated 2 years ago
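For context, the Hyena operator that the annotated repo walks through replaces attention with a recurrence of gated long convolutions: z = x1 ⊙ (h1 ∗ v), then y = x2 ⊙ (h2 ∗ z). Below is a minimal PyTorch sketch of the order-2 case, assuming filters are passed in explicitly; the paper's implicit filter parameterization (an FFN over positional encodings) and short depthwise convolutions are omitted, and all names here are illustrative, not the repo's API.

```python
# Minimal sketch of an order-2 Hyena operator: two stages of
# causal long convolution followed by elementwise gating.
import torch

def fft_causal_conv(u: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
    """Causal convolution of u (batch, length, dim) with filter h (length, dim)."""
    L = u.shape[1]
    u_f = torch.fft.rfft(u.float(), n=2 * L, dim=1)  # zero-pad to 2L for linear conv
    h_f = torch.fft.rfft(h.float(), n=2 * L, dim=0)
    return torch.fft.irfft(u_f * h_f, n=2 * L, dim=1)[:, :L]  # keep causal part

def hyena_order2(u, proj, h1, h2):
    """u: (batch, length, dim); proj: (dim, 3*dim); h1, h2: (length, dim) filters."""
    x1, x2, v = (u @ proj).chunk(3, dim=-1)  # two gates and a value projection
    z = x1 * fft_causal_conv(v, h1)          # long convolution, then gate
    return x2 * fft_causal_conv(z, h2)       # second convolution/gate stage
```

The FFT-based convolution keeps the long-filter cost at O(L log L) rather than the O(L²) of dense attention, which is the paper's central efficiency argument.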
Alternatives and similar repositories for annotated-hyena
Users interested in annotated-hyena are comparing it to the repositories listed below
- JAX/Flax implementation of the Hyena Hierarchy ☆34 · Updated 2 years ago
- My own attempt at a long context genomics model, leveraging recent advances in long context attention modeling (Flash Attention + other h… ☆53 · Updated last year
- Implementation of GateLoop Transformer in Pytorch and Jax ☆89 · Updated last year
- PyTorch implementation for "Long Horizon Temperature Scaling", ICML 2023 ☆20 · Updated 2 years ago
- Engineering the state of RNN language models (Mamba, RWKV, etc.) ☆32 · Updated last year
- ☆31 · Updated 8 months ago
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆74 · Updated 7 months ago
- Unofficial but Efficient Implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆83 · Updated last year
- ☆53 · Updated 8 months ago
- ☆32 · Updated last year
- ☆37 · Updated last year
- Experiments on the impact of depth in transformers and SSMs. ☆31 · Updated 7 months ago
- Code for "Theoretical Foundations of Deep Selective State-Space Models" (NeurIPS 2024) ☆15 · Updated 5 months ago
- Triton Implementation of HyperAttention Algorithm ☆48 · Updated last year
- Pytorch implementation of a simple way to enable (Stochastic) Frame Averaging for any network ☆50 · Updated 11 months ago
- Minimum Description Length probing for neural network representations ☆18 · Updated 4 months ago
- Source-to-Source Debuggable Derivatives in Pure Python ☆15 · Updated last year
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ☆54 · Updated last year
- ☆45 · Updated last year
- Code for GFlowNet-EM, a novel algorithm for fitting latent variable models with compositional latents and an intractable true posterior. ☆40 · Updated last year
- Official repository of paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Updated last year
- ☆32 · Updated 8 months ago
- ☆32 · Updated last year
- Exploring an idea where one forgets about efficiency and carries out attention across each edge of the nodes (tokens) ☆51 · Updated 3 months ago
- LayerNorm(SmallInit(Embedding)) in a Transformer to improve convergence ☆59 · Updated 3 years ago
- Implementation of Infini-Transformer in Pytorch ☆111 · Updated 5 months ago
- Efficient PScan implementation in PyTorch (a minimal sketch of the scan it parallelizes appears after this list) ☆16 · Updated last year
- User-friendly implementation of the Mixture-of-Sparse-Attention (MoSA). MoSA selects distinct tokens for each head with expert choice rou… ☆21 · Updated last month
- ☆81 · Updated last year
- Why Do We Need Weight Decay in Modern Deep Learning? [NeurIPS 2024] ☆66 · Updated 9 months ago
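The PScan entry above refers to evaluating the linear recurrence h_t = a_t ⊙ h_{t-1} + b_t, the backbone of Mamba-style selective SSMs, in parallel over the sequence. Here is a minimal Hillis-Steele-style sketch in PyTorch, illustrative rather than the listed repo's implementation:

```python
# Parallel scan for h_t = a_t * h_{t-1} + b_t with h_0 = 0.
# Each step folds the prefix ending at t - offset into position t:
# (A_t, B_t) <- (A_{t-offset} * A_t, A_t * B_{t-offset} + B_t).
import torch

def pscan(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """a, b: (batch, length, dim). Returns h with h_t = a_t * h_{t-1} + b_t."""
    A, B = a.clone(), b.clone()
    L = A.shape[1]
    offset = 1
    while offset < L:
        # RHS is materialized before assignment, so the in-place
        # slice updates read the previous step's values.
        B[:, offset:] = B[:, offset:] + A[:, offset:] * B[:, :-offset]
        A[:, offset:] = A[:, offset:] * A[:, :-offset]
        offset *= 2
    return B  # with h_0 = 0, the accumulated B_t equals h_t

# Quick check against the sequential recurrence:
if __name__ == "__main__":
    a, b = torch.rand(2, 16, 4), torch.rand(2, 16, 4)
    h, hs = torch.zeros(2, 4), []
    for t in range(16):
        h = a[:, t] * h + b[:, t]
        hs.append(h)
    assert torch.allclose(pscan(a, b), torch.stack(hs, dim=1), atol=1e-5)
```

This variant runs in O(log L) sequential steps but does O(L log L) total work; work-efficient implementations use a Blelloch-style up/down sweep instead.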