openai / chz
☆115 · Updated last month
Alternatives and similar repositories for chz
Users interested in chz are comparing it to the libraries listed below.
- Experiment of using Tangent to autodiff Triton ☆79 · Updated last year
- ☆80 · Updated last year
- CUDA implementation of autoregressive linear attention, with all the latest research findings ☆44 · Updated 2 years ago
- Minimal (400 LOC) implementation of maximum (multi-node, FSDP) GPT training ☆129 · Updated last year
- JAX bindings for Flash Attention v2 ☆90 · Updated last year
- Using FlexAttention to compute attention with different masking patterns ☆44 · Updated 9 months ago
- Beyond Straight-Through ☆100 · Updated 2 years ago
- A simple library for scaling up JAX programs ☆139 · Updated 8 months ago
- Machine Learning eXperiment Utilities ☆46 · Updated 3 weeks ago
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8 ☆46 · Updated last year
- If it quacks like a tensor... ☆58 · Updated 8 months ago
- Code accompanying the paper "Generalized Interpolating Discrete Diffusion" ☆94 · Updated last month
- Automatically take good care of your preemptible TPUs ☆36 · Updated 2 years ago
- Explorations into the recently proposed Taylor Series Linear Attention ☆99 · Updated 11 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆128 · Updated 7 months ago
- ☆53 · Updated last year
- Transformer with Mu-Parameterization, implemented in Jax/Flax. Supports FSDP on TPU pods. ☆31 · Updated last month
- train with kittens! ☆61 · Updated 8 months ago
- Here we will test various linear attention designs. ☆60 · Updated last year
- ☆19 · Updated 2 months ago
- JAX implementation of the Mistral 7B v0.2 model ☆35 · Updated last year
- NAACL '24 (Best Demo Paper Runner-Up) / MLSys @ NeurIPS '23 - RedCoast: A Lightweight Tool to Automate Distributed Training and Inference ☆66 · Updated 7 months ago
- ☆34 · Updated 10 months ago
- LoRA for arbitrary JAX models and functions ☆140 · Updated last year
- Easily run PyTorch on multiple GPUs & machines ☆46 · Updated 3 weeks ago
- Simple and efficient PyTorch-native transformer training and inference (batched) ☆77 · Updated last year
- Losslessly encode text natively with arithmetic coding and HuggingFace Transformers ☆76 · Updated 11 months ago
- ☆52 · Updated last year
- Code for the paper "On the Expressivity Role of LayerNorm in Transformers' Attention" (Findings of ACL 2023) ☆56 · Updated 9 months ago
- A set of Python scripts that make your experience on TPU better ☆55 · Updated last year