ronghanghu / vit_10b_fsdp_example
See details in https://github.com/pytorch/xla/blob/r1.12/torch_xla/distributed/fsdp/README.md
☆21 · Updated last year
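
This repo demonstrates sharded (FSDP) training of a 10-billion-parameter ViT on TPUs via torch_xla's FSDP wrapper. A minimal sketch of that API, assuming torch_xla r1.12 as in the linked README; the toy model and random data below are stand-ins, not the repo's actual 10B ViT:

```python
import torch
import torch.nn as nn
import torch_xla.core.xla_model as xm
from torch_xla.distributed.fsdp import XlaFullyShardedDataParallel as FSDP

device = xm.xla_device()

# Wrapping a module in FSDP shards its parameters across TPU cores.
model = FSDP(
    nn.Sequential(nn.Linear(128, 256), nn.GELU(), nn.Linear(256, 10)).to(device)
)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

for _ in range(3):
    x = torch.randn(8, 128, device=device)          # stand-in inputs
    y = torch.randint(0, 10, (8,), device=device)   # stand-in labels
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()  # per the README: call optimizer.step() directly, not xm.optimizer_step()
    xm.mark_step()    # materialize the lazy XLA graph for this step
```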
Related projects:
- M4 experiment logbook ☆56 · Updated last year
- LL3M: Large Language and Multi-Modal Model in JAX ☆62 · Updated 4 months ago
- Experiment in using Tangent to autodiff Triton ☆66 · Updated 7 months ago
- Language models scale reliably with over-training and on downstream tasks ☆91 · Updated 5 months ago
- GPU tester that detects broken and slow GPUs in a cluster ☆63 · Updated last year
- Some common Hugging Face transformers in maximal update parametrization (µP) ☆76 · Updated 2 years ago
- CUDA implementation of autoregressive linear attention, with all the latest research findings ☆43 · Updated last year
- A minimal PyTorch Lightning OpenAI GPT with DeepSpeed training! ☆111 · Updated last year
- Simple and efficient PyTorch-native transformer training and inference (batched) ☆53 · Updated 5 months ago
- A simple library for scaling up JAX programs ☆116 · Updated last month
- A fusion of a linear layer and a cross-entropy loss, written for PyTorch in Triton ☆48 · Updated last month
- A library to create and manage configuration files, especially for machine learning projects ☆77 · Updated 2 years ago
- Minimal (400 LOC) implementation, maximum (multi-node, FSDP) GPT training ☆110 · Updated 5 months ago
- Big-Interleaved-Dataset ☆57 · Updated last year
- A set of Python scripts that makes your experience on TPU better ☆37 · Updated 2 months ago
- Automatically take good care of your preemptible TPUs ☆28 · Updated last year
- Another attempt at a long-context / efficient transformer by me ☆37 · Updated 2 years ago
- Machine Learning eXperiment Utilities ☆42 · Updated 3 months ago
- Large-scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still a work in progress)* ☆77 · Updated 9 months ago
- A case study of efficient training of large language models using commodity hardware ☆68 · Updated 2 years ago
- A fast implementation of T5/UL2 in PyTorch using Flash Attention ☆60 · Updated this week