sholtodouglas / multihost_dataloading
Experimenting with how best to do multi-host dataloading
☆10 · Updated 2 years ago
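For context on the listing below, here is a minimal sketch (an assumption for illustration, not code from this repository) of what multi-host dataloading in JAX typically involves: each process loads only its own slice of the global batch, and the per-host shards are then assembled into a single globally sharded `jax.Array` using standard JAX multi-host utilities.

```python
# Illustrative sketch only: hypothetical batch shape and a fake data source,
# showing one common pattern for multi-host dataloading in JAX.
import numpy as np
import jax
from jax.sharding import Mesh, PartitionSpec
from jax.experimental import multihost_utils

GLOBAL_BATCH = 32                                    # hypothetical global batch size
per_process = GLOBAL_BATCH // jax.process_count()    # rows this host is responsible for

# Each host reads a disjoint slice of the dataset; faked here with random data.
local_batch = np.random.rand(per_process, 128).astype(np.float32)

# A 1D data-parallel mesh over every device on every host.
mesh = Mesh(np.array(jax.devices()), axis_names=("data",))

# Stitch the host-local shards into one global array, sharded along "data".
global_batch = multihost_utils.host_local_array_to_global_array(
    local_batch, mesh, PartitionSpec("data")
)
```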
Alternatives and similar repositories for multihost_dataloading
Users interested in multihost_dataloading are comparing it to the libraries listed below
- Automatically take good care of your preemptible TPUs ☆36 · Updated 2 years ago
- ☆61 · Updated 3 years ago
- Transformer with Mu-Parameterization, implemented in Jax/Flax. Supports FSDP on TPU pods. ☆32 · Updated 3 months ago
- ☆88 · Updated last year
- Experiment of using Tangent to autodiff triton ☆81 · Updated last year
- Two implementations of ZeRO-1 optimizer sharding in JAX ☆14 · Updated 2 years ago
- HomebrewNLP in JAX flavour for maintainable TPU training ☆50 · Updated last year
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8. ☆46 · Updated last year
- An implementation of the Llama architecture, to instruct and delight ☆21 · Updated 3 months ago
- Minimal (400 LOC) implementation, maximum (multi-node, FSDP) GPT training ☆132 · Updated last year
- Triton Implementation of HyperAttention Algorithm ☆48 · Updated last year
- ☆188 · Updated 2 weeks ago
- Train very large language models in Jax. ☆209 · Updated last year
- A set of Python scripts that makes your experience on TPU better ☆54 · Updated last year
- Machine Learning eXperiment Utilities ☆47 · Updated last month
- ☆20 · Updated 2 years ago
- Large scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* ☆87 · Updated last year
- JAX implementation of the Mistral 7b v0.2 model ☆36 · Updated last year
- JAX implementation of the Llama 2 model ☆219 · Updated last year
- A place to store reusable transformer components of my own creation or found on the interwebs ☆60 · Updated 3 weeks ago
- Inference code for LLaMA models in JAX ☆120 · Updated last year
- ☆53 · Updated last year
- RWKV model implementation ☆38 · Updated 2 years ago
- CUDA implementation of autoregressive linear attention, with all the latest research findings ☆44 · Updated 2 years ago
- Serialize JAX, Flax, Haiku, or Objax model params with 🤗`safetensors` ☆46 · Updated last year
- Unofficial but Efficient Implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆88 · Updated last year
- ☆118 · Updated last year
- AdamW optimizer for bfloat16 models in pytorch 🔥. ☆36 · Updated last year
- ☆21 · Updated last year
- Train vision models using JAX and 🤗 transformers ☆100 · Updated this week