cat-state / tinypar
⭐20 · Updated 2 years ago
Alternatives and similar repositories for tinypar
Users interested in tinypar are comparing it to the libraries listed below.
- Large-scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* ⭐87 · Updated last year
- A set of Python scripts that make your experience on TPU better ⭐54 · Updated last year
- HomebrewNLP in a JAX flavour for maintainable TPU training ⭐50 · Updated last year
- ⭐118 · Updated last year
- ⭐61 · Updated 3 years ago
- ⭐49 · Updated last year
- Minimal (400 LOC) implementation, maximum (multi-node, FSDP) GPT training ⭐132 · Updated last year
- JAX implementation of the Llama 2 model ⭐219 · Updated last year
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile ⭐116 · Updated 2 years ago
- Some common Huggingface transformers in maximal update parametrization (µP) ⭐82 · Updated 3 years ago
- An implementation of the Llama architecture, to instruct and delight ⭐21 · Updated 3 months ago
- An experiment in using Tangent to autodiff Triton ⭐81 · Updated last year
- Demonstration that finetuning a RoPE model on longer sequences than it was pre-trained on extends its context limit ⭐63 · Updated 2 years ago
- Fast, Modern, and Low Precision PyTorch Optimizers ⭐109 · Updated last week
- ⭐53 · Updated last year
- ⭐19 · Updated 3 months ago
- ⭐53 · Updated last year
- A fusion of a linear layer and a cross-entropy loss, written for PyTorch in Triton ⭐70 · Updated last year
- Inference code for LLaMA models in JAX ⭐119 · Updated last year
- Experiments for efforts to train a new and improved T5 ⭐76 · Updated last year
- Automatically take good care of your preemptible TPUs ⭐36 · Updated 2 years ago
- Language models scale reliably with over-training and on downstream tasks ⭐99 · Updated last year
- Collection of autoregressive model implementations ⭐86 · Updated 4 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ⭐160 · Updated 2 months ago
- Simple and efficient PyTorch-native transformer training and inference (batched) ⭐78 · Updated last year
- Triton implementation of the HyperAttention algorithm ⭐48 · Updated last year
- Code repository for the c-BTM paper ⭐107 · Updated last year
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers" (NeurIPS 2023) ⭐137 · Updated last year
- A toolkit for scaling law research ⭐51 · Updated 7 months ago