radarFudan / mamba-minimal-jax
☆33 · Updated 10 months ago
Alternatives and similar repositories for mamba-minimal-jax
Users interested in mamba-minimal-jax are comparing it to the repositories listed below.
- Unofficial but Efficient Implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆89 · Updated last year
- ☆34 · Updated last year
- ☆120 · Updated 3 months ago
- Parallelizing non-linear sequential models over the sequence length ☆54 · Updated 3 months ago
- AGaLiTe: Approximate Gated Linear Transformers for Online Reinforcement Learning (Published in TMLR) ☆21 · Updated 11 months ago
- ☆49 · Updated last year
- ☆58 · Updated last year
- Parallel Associative Scan for Language Models (see the scan sketch after this list) ☆18 · Updated last year
- Experiments on the impact of depth in transformers and SSMs. ☆34 · Updated 11 months ago
- Exploration into the Scaling Value Iteration Networks paper, from Schmidhuber's group ☆36 · Updated last year
- A simple library for scaling up JAX programs ☆143 · Updated 11 months ago
- Minimal but scalable implementation of large language models in JAX ☆35 · Updated last month
- Implementation of Denoising Diffusion Probabilistic Models (DDPM) in JAX and Flax. ☆20 · Updated last year
- A simple, performant and scalable JAX-based world modeling codebase ☆76 · Updated this week
- ☆13 · Updated last year
- Implementation of PSGD optimizer in JAX ☆33 · Updated 9 months ago
- Official PyTorch Implementation of the Longhorn Deep State Space Model ☆53 · Updated 10 months ago
- JAX implementation of VQVAE/VQGAN autoencoders (+FSQ) ☆37 · Updated last year
- ☆38 · Updated last year
- ☆39 · Updated last year
- ☆33 · Updated 11 months ago
- Unofficial implementation of the Linear Recurrent Unit (LRU, Orvieto et al. 2023) ☆57 · Updated last month
- TPU pod commander is a package for managing and launching jobs on Google Cloud TPU pods. ☆21 · Updated last week
- LoRA for arbitrary JAX models and functions (see the LoRA sketch after this list) ☆142 · Updated last year
- Efficient PScan implementation in PyTorch ☆16 · Updated last year
- Machine Learning eXperiment Utilities ☆47 · Updated 2 months ago
- Maximal Update Parametrization (μP) with Flax & Optax. ☆16 · Updated last year
- Triton Implementation of HyperAttention Algorithm ☆48 · Updated last year
- Explorations into the recently proposed Taylor Series Linear Attention ☆100 · Updated last year
- ☆40 · Updated last month
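
Several entries above (the Mamba port, the LRU implementation, and the parallel associative scan repositories) build on the same trick: a gated linear recurrence h_t = a_t * h_{t-1} + b_t is associative under composition, so it can be evaluated in O(log T) parallel depth rather than sequentially. Below is a minimal sketch of that idea using `jax.lax.associative_scan`; the function and variable names are illustrative and not taken from any repository listed here.

```python
# Sketch of the parallel-scan trick behind LRU/Mamba-style linear
# recurrences: h_t = a_t * h_{t-1} + b_t, with h_0 = 0.
import jax
import jax.numpy as jnp

def linear_recurrence(a, b):
    """a, b: (T, D) arrays of per-step decay gates and inputs."""
    def combine(left, right):
        a_l, b_l = left
        a_r, b_r = right
        # Composing two steps: h = a_r * (a_l * h_prev + b_l) + b_r
        return a_r * a_l, a_r * b_l + b_r

    _, h = jax.lax.associative_scan(combine, (a, b))
    return h  # (T, D): all hidden states h_1 .. h_T

T, D = 16, 4
key_a, key_b = jax.random.split(jax.random.PRNGKey(0))
a = jax.nn.sigmoid(jax.random.normal(key_a, (T, D)))  # gates in (0, 1)
b = jax.random.normal(key_b, (T, D))
h = linear_recurrence(a, b)

# Sanity check against the sequential definition.
h_seq = jnp.zeros(D)
for t in range(T):
    h_seq = a[t] * h_seq + b[t]
assert jnp.allclose(h[-1], h_seq, atol=1e-5)
```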
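The LoRA entry above adapts frozen JAX models with low-rank updates. As a rough illustration of the underlying idea (a generic sketch, not that repository's API), a rank-r adapter replaces a frozen weight `W` with `W + (alpha / r) * B @ A`, where only the small matrices `A` and `B` are trained:

```python
# Generic LoRA sketch in JAX; all names here are hypothetical.
import jax
import jax.numpy as jnp

def init_lora(key, d_in, d_out, rank=8):
    # A is random, B is zero, so training starts from the frozen model.
    A = jax.random.normal(key, (rank, d_in)) / jnp.sqrt(d_in)
    B = jnp.zeros((d_out, rank))
    return {"A": A, "B": B}

def lora_linear(x, W_frozen, lora, alpha=16.0):
    rank = lora["A"].shape[0]
    delta = (alpha / rank) * (lora["B"] @ lora["A"])  # low-rank update
    return x @ (W_frozen + delta).T

key = jax.random.PRNGKey(0)
W = jax.random.normal(key, (32, 64))        # frozen pretrained weight
params = init_lora(key, d_in=64, d_out=32)  # only these are trained
x = jnp.ones((2, 64))
y = lora_linear(x, W, params)               # (2, 32)
```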