openai / chz
☆179 · Updated 2 months ago
Alternatives and similar repositories for chz
Users who are interested in chz are comparing it to the libraries listed below.
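For context, chz itself is OpenAI's Python library for defining typed, immutable configuration objects driven from the command line. Below is a minimal sketch of its decorator-plus-entrypoint pattern; the class and field names are illustrative, and the exact API surface (`@chz.chz`, `chz.entrypoint`) should be verified against the repository's README.

```python
import chz

# Illustrative config class (names are hypothetical). The @chz.chz
# decorator makes it a typed, immutable configuration object,
# similar in spirit to a frozen dataclass.
@chz.chz
class TrainConfig:
    model: str = "gpt2"
    lr: float = 1e-4
    steps: int = 1000

def main(config: TrainConfig) -> None:
    # A real script would run training here.
    print(config.model, config.lr, config.steps)

if __name__ == "__main__":
    # chz.entrypoint builds a TrainConfig from key=value CLI arguments,
    # e.g. `python train.py model=gpt2-xl lr=3e-5`.
    chz.entrypoint(main)
```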
- Minimal (400 LOC) implementation of maximum (multi-node, FSDP) GPT training ☆132 · Updated last year
- ☆91 · Updated last year
- JAX bindings for Flash Attention v2 ☆97 · Updated last week
- A simple library for scaling up JAX programs ☆144 · Updated last year
- A set of Python scripts that make your experience on TPU better ☆54 · Updated last month
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆170 · Updated 4 months ago
- 🧱 Modula software package ☆299 · Updated 2 months ago
- Experiment of using Tangent to autodiff triton ☆80 · Updated last year
- Understand and test language model architectures on synthetic tasks. ☆234 · Updated last month
- seqax = sequence modeling + JAX ☆168 · Updated 3 months ago
- WIP ☆93 · Updated last year
- LoRA for arbitrary JAX models and functions ☆141 · Updated last year
- A library for unit scaling in PyTorch ☆132 · Updated 3 months ago
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ☆193 · Updated last year
- ☆56 · Updated last year
- Some common Hugging Face transformers in maximal update parametrization (µP) ☆86 · Updated 3 years ago
- Language models scale reliably with over-training and on downstream tasks ☆100 · Updated last year
- Train very large language models in Jax. ☆209 · Updated 2 years ago
- A MAD laboratory to improve AI architecture designs 🧪 ☆132 · Updated 10 months ago
- Modular, scalable library to train ML models ☆168 · Updated this week
- moodist ☆22 · Updated last month
- JAX implementation of the Llama 2 model ☆216 · Updated last year
- ☆283 · Updated last year
- Beyond Straight-Through ☆102 · Updated 2 years ago
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆241 · Updated 4 months ago
- Supporting code for the blog post on modular manifolds. ☆94 · Updated last month
- Attention Kernels for Symmetric Power Transformers ☆121 · Updated last month
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆130 · Updated 11 months ago
- Supporting PyTorch FSDP for optimizers ☆83 · Updated 10 months ago
- Transformer with Mu-Parameterization, implemented in Jax/Flax. Supports FSDP on TPU pods. ☆32 · Updated 4 months ago