apoorvkh / torchrunx
Easily run PyTorch on multiple GPUs & machines
☆46 · Updated 2 weeks ago
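For context, here is a minimal sketch of the per-process setup that multi-node launchers like torchrunx exist to automate. It uses plain `torch.distributed` with a torchrun-style launch, not torchrunx's own API (which this page does not document), so treat it as illustrative rather than as torchrunx usage:

```python
# Minimal sketch (assumptions noted): the boilerplate that multi-GPU/multi-node
# launchers automate, written against plain torch.distributed -- NOT torchrunx's
# own API. Launch with e.g.: torchrun --nnodes 2 --nproc_per_node 4 train.py
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main() -> None:
    # torchrun-style launchers set RANK, LOCAL_RANK, WORLD_SIZE, MASTER_ADDR, ...
    local_rank = int(os.environ["LOCAL_RANK"])
    dist.init_process_group(backend="nccl")  # rendezvous across all workers
    torch.cuda.set_device(local_rank)

    # Toy model, replicated and gradient-synchronized across workers by DDP.
    model = torch.nn.Linear(10, 10).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])

    # ... training loop would go here ...

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```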
Alternatives and similar repositories for torchrunx
Users interested in torchrunx are comparing it to the libraries listed below.
- Utilities for Training Very Large Models ☆58 · Updated 9 months ago
- ☆79 · Updated last year
- Minimal (400 LOC) implementation of maximal (multi-node, FSDP) GPT training ☆129 · Updated last year
- An implementation of the Llama architecture, to instruct and delight ☆21 · Updated last month
- Triton Implementation of HyperAttention Algorithm ☆48 · Updated last year
- ☆112 · Updated last year
- ☆48 · Updated 10 months ago
- Fast, Modern, and Low Precision PyTorch Optimizers ☆97 · Updated this week
- A place to store reusable transformer components of my own creation or found on the interwebs ☆56 · Updated last week
- ☆20 · Updated 2 years ago
- ☆81 · Updated last year
- HomebrewNLP in JAX flavour for maintainable TPU training ☆50 · Updated last year
- A byte-level decoder architecture that matches the performance of tokenized Transformers. ☆64 · Updated last year
- My explorations into editing the knowledge and memories of an attention network ☆35 · Updated 2 years ago
- Some common Hugging Face transformers in maximal update parametrization (µP) ☆81 · Updated 3 years ago
- LL3M: Large Language and Multi-Modal Model in Jax ☆72 · Updated last year
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… ☆34 · Updated last year
- Collection of autoregressive model implementations ☆85 · Updated 2 months ago
- ☆34 · Updated 10 months ago
- Automatically take good care of your preemptible TPUs ☆36 · Updated 2 years ago
- A fusion of a linear layer and a cross-entropy loss, written for PyTorch in Triton. ☆68 · Updated 11 months ago
- See details in https://github.com/pytorch/xla/blob/r1.12/torch_xla/distributed/fsdp/README.md ☆24 · Updated 2 years ago
- ☆29 · Updated 2 years ago
- Training and evaluation code for the paper "Headless Language Models: Learning without Predicting with Contrastive Weight Tying" (https:/… ☆27 · Updated last year
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆38 · Updated last month
- ☆35 · Updated last year
- DPO, but faster 🚀 ☆43 · Updated 7 months ago
- CUDA implementation of autoregressive linear attention, with all the latest research findings ☆44 · Updated 2 years ago
- ☆45 · Updated last year
- Mobile Viewer for W&B, built on top of Flutter. ☆35 · Updated last year