openai / chz
☆132 · Updated last week
Alternatives and similar repositories for chz
Users interested in chz are comparing it to the libraries listed below.
- Experiment of using Tangent to autodiff Triton ☆80 · Updated last year
- ☆83 · Updated last year
- Minimal (400 LOC) implementation of maximal (multi-node, FSDP) GPT training ☆130 · Updated last year
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8. ☆46 · Updated last year
- JAX bindings for Flash Attention v2 ☆91 · Updated 2 weeks ago
- ☆34 · Updated 11 months ago
- A library for unit scaling in PyTorch ☆128 · Updated last month
- CUDA implementation of autoregressive linear attention, with all the latest research findings ☆44 · Updated 2 years ago
- A simple library for scaling up JAX programs ☆140 · Updated 9 months ago
- LoRA for arbitrary JAX models and functions ☆140 · Updated last year
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆127 · Updated 8 months ago
- Research implementation of Native Sparse Attention (arXiv:2502.11089) ☆60 · Updated 5 months ago
- Beyond Straight-Through ☆100 · Updated 2 years ago
- WIP ☆94 · Updated 11 months ago
- Automatically take good care of your preemptible TPUs ☆36 · Updated 2 years ago
- seqax = sequence modeling + JAX ☆165 · Updated 2 weeks ago
- Easily run PyTorch on multiple GPUs & machines ☆46 · Updated last month
- ☆115 · Updated 2 months ago
- ☆54 · Updated last year
- Attention Kernels for Symmetric Power Transformers ☆111 · Updated last week
- TorchFix - a linter for PyTorch-using code with autofix support ☆145 · Updated 6 months ago
- JAX implementation of the Mistral 7b v0.2 model ☆35 · Updated last year
- 🧱 Modula software package ☆216 · Updated 2 weeks ago
- Unofficial but Efficient Implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆85 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆149 · Updated last month
- Code accompanying the paper "Generalized Interpolating Discrete Diffusion" ☆97 · Updated 2 months ago
- ☆53 · Updated last year
- Some common Hugging Face transformers in maximal update parametrization (µP) ☆82 · Updated 3 years ago
- A MAD laboratory to improve AI architecture designs 🧪 ☆123 · Updated 7 months ago
- Train very large language models in JAX. ☆206 · Updated last year