young-geng / mlxu
Machine Learning eXperiment Utilities
☆46, updated last year
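For context, mlxu itself is a thin layer over absl flags, logging, and checkpointing helpers. Below is a minimal sketch of the flag-definition pattern from its README; the function names (`define_flags_with_default`, `print_flags`, `run`) are recalled from the upstream documentation and should be checked against the current release.

```python
# Minimal sketch of an mlxu-style experiment entry point.
# Assumes mlxu's absl wrappers (define_flags_with_default, print_flags, run)
# behave as described in the project's README; verify against the release.
import mlxu

# Declare command-line flags with defaults; returns the absl FLAGS object
# plus the definition dict, so flag values can be logged alongside results.
FLAGS, FLAGS_DEF = mlxu.define_flags_with_default(
    learning_rate=3e-4,
    total_steps=1000,
    seed=42,
)

def main(argv):
    # Echo the parsed flag values before the experiment starts.
    mlxu.print_flags(FLAGS, FLAGS_DEF)
    # ... training loop would go here ...

if __name__ == '__main__':
    mlxu.run(main)  # parses argv and dispatches to main via absl.app
```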
Alternatives and similar repositories for mlxu
Users interested in mlxu are comparing it to the libraries listed below.
- Minimal but scalable implementation of large language models in JAX (☆35, updated 7 months ago)
- A simple library for scaling up JAX programs (☆139, updated 7 months ago)
- Triton Implementation of HyperAttention Algorithm (☆48, updated last year)
- ☆60, updated 3 years ago
- Transformer with Mu-Parameterization, implemented in Jax/Flax. Supports FSDP on TPU pods. (☆30, updated 2 weeks ago)
- ☆20, updated last year
- A flexible and efficient implementation of Flash Attention 2.0 for JAX, supporting multiple backends (GPU/TPU/CPU) and platforms (Triton/… (☆24, updated 3 months ago)
- Simple and efficient pytorch-native transformer training and inference (batched) (☆76, updated last year)
- Unofficial but Efficient Implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX (☆82, updated last year)
- ☆31, updated 7 months ago
- Automatically take good care of your preemptible TPUs (☆36, updated 2 years ago)
- ☆78, updated 11 months ago
- Inference code for LLaMA models in JAX (☆118, updated last year)
- some common Huggingface transformers in maximal update parametrization (µP) (☆80, updated 3 years ago)
- LoRA for arbitrary JAX models and functions (☆138, updated last year); see the plain-JAX sketch of the idea after this list
- Train a SmolLM-style llm on fineweb-edu in JAX/Flax with an assortment of optimizers. (☆17, updated 3 months ago)
- ☆48, updated last year
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" (☆38, updated last week)
- JAX bindings for Flash Attention v2 (☆89, updated 11 months ago)
- Language models scale reliably with over-training and on downstream tasks (☆97, updated last year)
- If it quacks like a tensor... (☆58, updated 7 months ago)
- CUDA implementation of autoregressive linear attention, with all the latest research findings (☆44, updated 2 years ago)
- ☆53, updated last year
- TPU pod commander is a package for managing and launching jobs on Google Cloud TPU pods. (☆20, updated last year)
- Experiments on the impact of depth in transformers and SSMs. (☆31, updated 7 months ago)
- A toolkit for scaling law research ⚖ (☆49, updated 4 months ago)
- The simplest, fastest repository for training/finetuning medium-sized GPTs. (☆134, updated this week)
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… (☆54, updated last year)
- Implementation of GateLoop Transformer in Pytorch and Jax (☆89, updated last year)
- ☆37, updated last year
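For readers skimming the list, the LoRA entry above covers one of the more reusable ideas here. The sketch below shows the core low-rank update in plain JAX, independent of any listed library's API; the shapes and the alpha/r scaling follow the standard choices from the LoRA paper, not the listed repo.

```python
# Minimal sketch of the LoRA idea in plain JAX: freeze the base weight W and
# learn a low-rank update A @ B scaled by alpha / r. This illustrates the
# technique only, not the API of the library listed above.
import jax
import jax.numpy as jnp

def lora_linear(x, W, A, B, alpha=16.0):
    """y = x W + (alpha / r) * x A B, with A: (d_in, r), B: (r, d_out)."""
    r = A.shape[1]
    return x @ W + (alpha / r) * ((x @ A) @ B)

key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)
d_in, d_out, r = 64, 64, 4
W = jax.random.normal(k1, (d_in, d_out)) / jnp.sqrt(d_in)  # frozen base weight
A = jax.random.normal(k2, (d_in, r)) * 0.01                # trainable down-projection
B = jnp.zeros((r, d_out))                                  # zero-init: training starts at y = x W
x = jnp.ones((2, d_in))
y = lora_linear(x, W, A, B)  # identical to x @ W at initialization
```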