young-geng / mlxu
Machine Learning eXperiment Utilities
☆45 · Updated 5 months ago
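mlxu bundles common boilerplate for experiment scripts: absl-style flag definition, Weights & Biases logging, and small file utilities. As an illustration only, here is a minimal sketch of the usage pattern seen in downstream projects (e.g. EasyLM); the names `define_flags_with_default`, `get_user_flags`, `WandBLogger`, and `run` are assumptions drawn from that usage, not a verified reference for the current API.

```python
# Hypothetical sketch -- API names assumed from projects that depend on mlxu.
import mlxu

# Declare command-line flags and their defaults in a single call
# (wraps absl.flags under the hood).
FLAGS, FLAGS_DEF = mlxu.define_flags_with_default(
    seed=42,
    learning_rate=3e-4,
    logger=mlxu.WandBLogger.get_default_config(),  # assumed W&B helper config
)

def main(argv):
    # Collect the parsed flag values as a plain dict, handy for logging configs.
    variant = mlxu.get_user_flags(FLAGS, FLAGS_DEF)
    logger = mlxu.WandBLogger(config=FLAGS.logger, variant=variant)
    logger.log({'loss': 0.0})  # metrics go straight to W&B

if __name__ == '__main__':
    mlxu.run(main)  # assumed thin wrapper around absl.app.run
```

The appeal is that a training script needs only a few lines of setup before the actual JAX code starts.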
Related projects
Alternatives and complementary repositories for mlxu
- A simple library for scaling up JAX programs ☆127 · Updated 3 weeks ago
- Minimal but scalable implementation of large language models in JAX ☆26 · Updated 3 weeks ago
- If it quacks like a tensor... ☆52 · Updated last week
- Simple and efficient PyTorch-native transformer training and inference (batched) ☆61 · Updated 7 months ago
- Some common Hugging Face transformers in maximal update parametrization (µP) ☆77 · Updated 2 years ago
- LoRA for arbitrary JAX models and functions ☆133 · Updated 8 months ago
- JAX implementation of the Mistral 7B v0.2 model ☆33 · Updated 4 months ago
- Transformer with Mu-Parameterization, implemented in JAX/Flax. Supports FSDP on TPU pods. ☆29 · Updated 3 weeks ago
- A set of Python scripts that makes your experience on TPU better ☆40 · Updated 4 months ago
- Train very large language models in JAX. ☆195 · Updated last year
- Inference code for LLaMA models in JAX ☆113 · Updated 6 months ago
- Unofficial but efficient implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆79 · Updated 9 months ago
- Automatically take good care of your preemptible TPUs ☆32 · Updated last year
- Flash Attention implementation with multiple backend support and sharding. This module provides a flexible implementation of Flash Attenti… ☆18 · Updated last week
- JAX bindings for Flash Attention v2 ☆80 · Updated 4 months ago
- Minimal (400 LOC) implementation of Maximum (multi-node, FSDP) GPT training ☆113 · Updated 7 months ago
- Language models scale reliably with over-training and on downstream tasks ☆94 · Updated 7 months ago
- JAX Synergistic Memory Inspector ☆164 · Updated 4 months ago
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆36 · Updated last year
- A MAD laboratory to improve AI architecture designs 🧪 ☆95 · Updated 6 months ago
- PyTorch-like dataloaders in JAX. ☆59 · Updated last month
- Randomized Positional Encodings Boost Length Generalization of Transformers ☆78 · Updated 8 months ago