radarFudan / mamba-minimal-jax
☆35, updated Nov 22, 2024
Alternatives and similar repositories for mamba-minimal-jax
Users interested in mamba-minimal-jax are comparing it to the libraries listed below.
- Unofficial but Efficient Implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX (☆93, updated Jan 25, 2024)
- ☆10, updated Jun 27, 2024
- ☆29, updated Jul 9, 2024
- Parallel Associative Scan for Language Models (☆18, updated Jan 8, 2024)
- Jaxpr Visualisation Tool (☆35, updated Dec 22, 2024)
- Efficient PScan implementation in PyTorch (☆17, updated Jan 2, 2024)
- AGaLiTe: Approximate Gated Linear Transformers for Online Reinforcement Learning (Published in TMLR) (☆23, updated Oct 15, 2024)
- PyTorch implementation for PaLM: A Hybrid Parser and Language Model (☆10, updated Jan 7, 2020)
- Advanced Formal Language Theory (263-5352-00L; Spring 2023) (☆10, updated Feb 21, 2023)
- Open-sourcing code associated with the AAAI-25 paper "On the Expressiveness and Length Generalization of Selective State-Space Models on …" (☆14, updated Sep 18, 2025)
- Code for the paper "Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns" (☆18, updated Mar 15, 2024)
- JAX/Flax implementation of the Hyena Hierarchy (☆34, updated Apr 27, 2023)
- Code for "Theoretical Foundations of Deep Selective State-Space Models" (NeurIPS 2024) (☆15, updated Jan 7, 2025)
- Here we will test various linear attention designs (☆62, updated Apr 25, 2024)
- Decision Transformer JAX - Reproduction of 'Decision Transformer: Reinforcement Learning via Sequence Modeling' in JAX and Haiku (☆12, updated Aug 14, 2024)
- Jaxplorer is a Jax reinforcement learning (RL) framework for exploring new ideas (☆13, updated Jul 19, 2024)
- ☆39, updated Apr 5, 2024
- JAX implementation of RL algorithms and vectorized environments (☆51, updated Dec 26, 2023)
- [NeurIPS 2023 spotlight] Official implementation of HGRN in our NeurIPS 2023 paper - Hierarchically Gated Recurrent Neural Network for Se… (☆67, updated Apr 24, 2024)
- Source-to-Source Debuggable Derivatives in Pure Python (☆15, updated Jan 23, 2024)
- Expanding linear RNN state-transition matrix eigenvalues to include negatives improves state-tracking tasks and language modeling without… (☆20, updated Mar 15, 2025)
- Engineering the state of RNN language models (Mamba, RWKV, etc.) (☆32, updated May 25, 2024)
- Official Code Repository for the paper "Key-value memory in the brain" (☆31, updated Feb 25, 2025)
- ☆18, updated Apr 22, 2024
- ☆51, updated Jan 28, 2024
- Triton Implementation of HyperAttention Algorithm (☆48, updated Dec 11, 2023)
- ☆45, updated Apr 30, 2018
- Code implementing "Efficient Parallelization of a Ubiquitous Sequential Computation" (Heinsen, 2023); a sketch of this scan idea appears after this list (☆98, updated Dec 5, 2024)
- Flax Implementation of DreamerV3 on Crafter (☆18, updated Nov 29, 2025)
- ☆20, updated May 30, 2024
- ☆52, updated Jan 20, 2023
- Single-file SAC-N implementation in JAX with Flax and Equinox; 10x faster than PyTorch (☆56, updated May 21, 2023)
- HGRN2: Gated Linear RNNs with State Expansion (☆56, updated Aug 20, 2024)
- Official implementation for "Anti-Exploration by Random Network Distillation", ICML 2023 (☆56, updated Feb 3, 2023)
- Official Repository for Efficient Linear-Time Attention Transformers (☆18, updated Jun 2, 2024)
- Official implementation for "Q-Ensemble for Offline RL: Don't Scale the Ensemble, Scale the Batch Size", NeurIPS 2022, Offline RL Worksho… (☆21, updated Feb 27, 2023)
- Unofficial implementation of Linear Recurrent Units, by DeepMind, in PyTorch (☆73, updated Apr 22, 2025)
- Implementation of Proximal Policy Optimization in Jax+Flax (☆21, updated May 18, 2023)
- Author's implementation of ReBRAC, a minimalist improvement upon TD3+BC (☆61, updated Aug 3, 2023)
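
Several of the entries above (the parallel associative scan repository, the PScan implementation, and Heinsen's parallelization code) center on the same primitive: evaluating the first-order linear recurrence x_t = a_t * x_{t-1} + b_t with a parallel scan, which is also the core computation in Mamba-style selective state-space layers. As a rough orientation only, here is a minimal sketch of that idea in JAX; `combine` and `linear_recurrence` are illustrative names of our own, not functions from any repository listed here.

```python
# Minimal sketch (assumptions, not code from the listed repos): parallelize
# x_t = a_t * x_{t-1} + b_t, with x_{-1} = 0, via jax.lax.associative_scan.
import jax
import jax.numpy as jnp

def combine(f1, f2):
    # Compose the affine maps f_i(x) = a_i * x + b_i:
    # (f2 . f1)(x) = a2 * (a1 * x + b1) + b2 = (a1 * a2) * x + (a2 * b1 + b2)
    a1, b1 = f1
    a2, b2 = f2
    return a1 * a2, a2 * b1 + b2

def linear_recurrence(a, b):
    # At step t the scan holds the composition of maps 0..t; applying that
    # composition to x_{-1} = 0 leaves only the additive part, which is x_t.
    _, x = jax.lax.associative_scan(combine, (a, b))
    return x

a = jnp.full(8, 0.9)  # decay coefficients a_t
b = jnp.ones(8)       # inputs b_t
print(linear_recurrence(a, b))  # matches a sequential loop over t
```

The same composition rule applies element-wise to vector-valued states, which is roughly how diagonal linear RNN and SSM implementations run the scan across channels in parallel.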