NX-AI / xlstm-jax
Official JAX implementation of xLSTM, including fast and efficient training and inference code. A 7B model is available at https://huggingface.co/NX-AI/xLSTM-7b.
☆92 · Updated 5 months ago
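For orientation, a minimal sketch of trying the published checkpoint, assuming it loads through the standard Hugging Face `transformers` Auto classes (the JAX-native training and inference entry points live in the xlstm-jax repo itself; the prompt and generation settings here are illustrative only):

```python
# Hedged sketch: load NX-AI/xLSTM-7b via the generic transformers Auto classes.
# Assumption: the checkpoint is supported by AutoModelForCausalLM; consult the
# model card for the exact requirements (e.g., extra kernel packages).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NX-AI/xLSTM-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "The xLSTM architecture extends the classic LSTM by"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```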
Alternatives and similar repositories for xlstm-jax
Users interested in xlstm-jax are comparing it to the libraries listed below.
- Latent Program Network (from the "Searching Latent Program Spaces" paper) ☆87 · Updated 3 months ago
- 🧱 Modula software package ☆200 · Updated 2 months ago
- Supporting PyTorch FSDP for optimizers ☆82 · Updated 6 months ago
- ☆220 · Updated 3 weeks ago
- Efficiently discovering algorithms via LLMs with evolutionary search and reinforcement learning. ☆103 · Updated 2 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆134 · Updated last week
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆101 · Updated 6 months ago
- ☆190 · Updated 6 months ago
- DeMo: Decoupled Momentum Optimization ☆188 · Updated 6 months ago
- ☆150 · Updated 10 months ago
- Bootstrapping ARC ☆127 · Updated 7 months ago
- A MAD laboratory to improve AI architecture designs 🧪 ☆120 · Updated 6 months ago
- Cost-aware hyperparameter tuning algorithm ☆158 · Updated 11 months ago
- Mixture of A Million Experts ☆46 · Updated 10 months ago
- ViT Prisma is a mechanistic interpretability library for Vision and Video Transformers (ViTs). ☆275 · Updated 2 weeks ago
- ☆270 · Updated 11 months ago
- Understand and test language model architectures on synthetic tasks. ☆217 · Updated 2 weeks ago
- ☆81 · Updated last year
- A State-Space Model with Rational Transfer Function Representation. ☆78 · Updated last year
- Getting crystal-like representations with harmonic loss ☆190 · Updated 2 months ago
- ☆98 · Updated 5 months ago
- Tiled Flash Linear Attention library for fast and efficient mLSTM kernels. ☆57 · Updated last month
- Explorations into whether a transformer with RL can direct a genetic algorithm to converge faster ☆70 · Updated last month
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆127 · Updated 10 months ago
- Accelerated First Order Parallel Associative Scan ☆182 · Updated 10 months ago
- Unofficial but efficient implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆83 · Updated last year
- Code repository for BlackMamba ☆247 · Updated last year
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ☆190 · Updated last year
- Simple, minimal implementation of the Mamba SSM in one PyTorch file, using logcumsumexp (the Heisen sequence). ☆120 · Updated 8 months ago
- WIP ☆93 · Updated 10 months ago