MatX-inc / seqax
seqax = sequence modeling + JAX
★150 · Updated this week
Alternatives and similar repositories for seqax:
Users interested in seqax are comparing it to the libraries listed below.
- ★214 · Updated 8 months ago
- A simple library for scaling up JAX programs ★134 · Updated 4 months ago
- A MAD laboratory to improve AI architecture designs 🧪 ★108 · Updated 3 months ago
- LoRA for arbitrary JAX models and functions ★135 · Updated last year
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax ★557 · Updated this week
- Minimal but scalable implementation of large language models in JAX ★34 · Updated 4 months ago
- Cost-aware hyperparameter tuning algorithm ★147 · Updated 8 months ago
- Understand and test language model architectures on synthetic tasks. ★185 · Updated 2 weeks ago
- Named Tensors for Legible Deep Learning in JAX ★167 · Updated this week
- ★184 · Updated last month
- JAX implementation of the Llama 2 model ★216 · Updated last year
- A set of Python scripts that makes your experience on TPU better ★50 · Updated 8 months ago
- Experiment of using Tangent to autodiff Triton ★78 · Updated last year
- JAX Synergistic Memory Inspector ★170 · Updated 8 months ago
- Inference code for LLaMA models in JAX ★116 · Updated 10 months ago
- 🧱 Modula software package ★173 · Updated 2 weeks ago
- jax-triton contains integrations between JAX and OpenAI Triton ★386 · Updated last week
- ★290 · Updated this week
- If it quacks like a tensor... ★57 · Updated 4 months ago
- Train very large language models in Jax. ★203 · Updated last year
- JAX bindings for Flash Attention v2 ★88 · Updated 8 months ago
- Jax/Flax rewrite of Karpathy's nanoGPT ★57 · Updated 2 years ago
- A library for unit scaling in PyTorch ★124 · Updated 3 months ago
- Supporting PyTorch FSDP for optimizers ★79 · Updated 3 months ago
- ★220 · Updated last month
- Pax is a Jax-based machine learning framework for training large scale models. Pax allows for advanced and fully configurable experimenta… ★483 · Updated last week
- ★58 · Updated 3 years ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ★100 · Updated 4 months ago
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ★223 · Updated last month