cat-state / tinypar
☆20 · Updated 2 years ago
Alternatives and similar repositories for tinypar
Users interested in tinypar are comparing it to the libraries listed below
- Large scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* ☆87 · Updated last year
- ☆50 · Updated last year
- HomebrewNLP in JAX flavour for maintainable TPU training ☆51 · Updated last year
- ☆62 · Updated 3 years ago
- ☆91 · Updated last year
- Fast, Modern, and Low Precision PyTorch Optimizers ☆116 · Updated 3 months ago
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile ☆116 · Updated 2 years ago
- A set of Python scripts that makes your experience on TPU better ☆54 · Updated 2 months ago
- An implementation of the Llama architecture, to instruct and delight ☆21 · Updated 6 months ago
- JAX implementation of the Llama 2 model ☆216 · Updated last year
- Some common Huggingface transformers in maximal update parametrization (µP); a minimal µP sketch follows this list ☆87 · Updated 3 years ago
- Demonstration that finetuning a RoPE model on sequences longer than its pre-training length adapts the model's context limit ☆63 · Updated 2 years ago
- ☆121 · Updated last year
- ☆53 · Updated last year
- Experiments for efforts to train a new and improved t5 ☆76 · Updated last year
- Experiment of using Tangent to autodiff triton ☆80 · Updated last year
- Code repository for the c-BTM paper ☆108 · Updated 2 years ago
- ☆53 · Updated last year
- Language models scale reliably with over-training and on downstream tasks ☆100 · Updated last year
- Minimal (400 LOC) implementation of maximum (multi-node, FSDP) GPT training ☆132 · Updated last year
- ☆47 · Updated last year
- A fusion of a linear layer and a cross-entropy loss, written for PyTorch in Triton; a conceptual sketch follows this list ☆73 · Updated last year
- A toolkit for scaling law research; a toy scaling-law fit follows this list ☆53 · Updated 10 months ago
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ☆53 · Updated 2 years ago
- Simple and efficient pytorch-native transformer training and inference (batched) ☆79 · Updated last year
- Automatically take good care of your preemptible TPUs ☆37 · Updated 2 years ago
- ☆39 · Updated last year
- ☆19 · Updated this week
- A place to store reusable transformer components of my own creation or found on the interwebs ☆62 · Updated 2 weeks ago
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS …] ☆60 · Updated last year
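Since a few entries above name concrete techniques, some hedged sketches follow. First, the µP entry: a minimal, hypothetical illustration of two maximal-update-parametrization rules under Adam, assuming a base width `d_base` at which hyperparameters were tuned and a larger target width `d`. All names and constants here are made up, and this is not the linked repo's code; µP also replaces the 1/√d attention scaling with 1/d, which is omitted.

```python
# Hypothetical µP sketch: width-scaled Adam learning rate for matrix-like
# weights plus a width-scaled readout, so hyperparameters tuned at d_base
# transfer to the wider model.
import torch

d_base, d, vocab = 256, 1024, 50257   # made-up widths and vocab size
width_mult = d / d_base               # how much wider than the base model

hidden = torch.nn.Linear(d, d)
readout = torch.nn.Linear(d, vocab, bias=False)

# Matrix-like weights: under µP with Adam, their learning rate shrinks
# as 1/width so update magnitudes match the base-width model.
opt = torch.optim.Adam(
    list(hidden.parameters()) + list(readout.parameters()),
    lr=1e-3 / width_mult,
)

x = torch.randn(8, d)
# Readout logits are divided by the width multiplier to keep their scale
# width-invariant (roughly the role of MuReadout in the mup package).
logits = readout(torch.relu(hidden(x))) / width_mult
loss = torch.nn.functional.cross_entropy(logits, torch.randint(0, vocab, (8,)))
loss.backward()
opt.step()
```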
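For the fused linear-plus-cross-entropy entry: the linked repo implements the fusion as a Triton kernel, while the pure-PyTorch sketch below only illustrates the underlying memory argument. It chunks the tokens and recomputes each chunk's logits during backward, so the full [N, vocab] logits tensor is never materialized at once. Function and variable names are hypothetical.

```python
# Illustrative chunked linear + cross-entropy (not the repo's Triton kernel).
import torch
import torch.nn.functional as F
from torch.utils.checkpoint import checkpoint

def chunked_linear_ce(hidden, weight, targets, chunk=1024):
    """hidden: [N, d], weight: [V, d], targets: [N] -> mean NLL."""
    def piece(h, w, t):
        return F.cross_entropy(h @ w.t(), t, reduction="sum")
    total = hidden.new_zeros(())
    for s in range(0, hidden.shape[0], chunk):
        # Each chunk's logits are recomputed during backward, not stored.
        total = total + checkpoint(piece, hidden[s:s + chunk], weight,
                                   targets[s:s + chunk], use_reentrant=False)
    return total / hidden.shape[0]

# Usage with toy sizes:
h = torch.randn(4096, 512, requires_grad=True)
w = torch.randn(32000, 512, requires_grad=True)
t = torch.randint(0, 32000, (4096,))
chunked_linear_ce(h, w, t, chunk=512).backward()
```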
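And for the scaling-law entries: a toy fit with made-up numbers (not either repo's code) showing the usual trick of regressing log loss on log compute, so the slope estimates the exponent in loss ≈ a · C^(-alpha).

```python
# Toy power-law fit on synthetic (compute, loss) points.
import numpy as np

compute = np.array([1e18, 1e19, 1e20, 1e21])  # made-up FLOP budgets
loss = np.array([3.5, 3.0, 2.6, 2.25])        # made-up eval losses

# Linear regression in log-log space: slope = -alpha, intercept = log(a).
slope, intercept = np.polyfit(np.log(compute), np.log(loss), 1)
print(f"loss ≈ {np.exp(intercept):.2f} * C**({slope:.3f})")
```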