cat-state / tinypar
☆20 · Updated 2 years ago
Alternatives and similar repositories for tinypar
Users interested in tinypar are comparing it to the libraries listed below
- Large-scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still a work in progress)* · ☆86 · Updated 2 years ago
- ☆92 · Updated last year
- HomebrewNLP in JAX flavour for maintainable TPU training · ☆51 · Updated 2 years ago
- Fast, Modern, and Low Precision PyTorch Optimizers · ☆121 · Updated last month
- Some common Huggingface transformers in maximal update parametrization (µP) · ☆87 · Updated 3 years ago
- A set of Python scripts that makes your experience on TPU better · ☆56 · Updated 4 months ago
- ☆63 · Updated 3 years ago
- Demonstration that finetuning a RoPE model on longer sequences than the pre-trained model extends the model's context limit · ☆63 · Updated 2 years ago
- ☆50 · Updated last year
- Code for the examples presented in the talk "Training a Llama in your backyard: fine-tuning very large models on consumer hardware" given… · ☆15 · Updated 2 years ago
- Code repository for the c-BTM paper · ☆108 · Updated 2 years ago
- Experiments for efforts to train a new and improved t5 · ☆76 · Updated last year
- ☆124 · Updated last year
- Automatically take good care of your preemptible TPUs · ☆37 · Updated 2 years ago
- A toolkit for scaling law research · ☆55 · Updated last year
- Experiment of using Tangent to autodiff triton · ☆82 · Updated 2 years ago
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile · ☆116 · Updated 2 years ago
- ☆53 · Updated last year
- ☆53 · Updated 2 years ago
- Triton Implementation of HyperAttention Algorithm · ☆48 · Updated 2 years ago
- Minimal (400 LOC) implementation of maximum (multi-node, FSDP) GPT training · ☆132 · Updated last year
- A flexible and efficient implementation of Flash Attention 2.0 for JAX, supporting multiple backends (GPU/TPU/CPU) and platforms (Triton/… · ☆34 · Updated 11 months ago
- Scaling is a distributed training library and installable dependency designed to scale up neural networks, with a dedicated module for tr… · ☆66 · Updated 2 months ago
- An implementation of the Llama architecture, to instruct and delight · ☆21 · Updated 8 months ago
- ☆19 · Updated 2 months ago
- JAX implementation of the Llama 2 model · ☆216 · Updated 2 years ago
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" · ☆38 · Updated 7 months ago
- Inference code for LLaMA models in JAX · ☆120 · Updated last year
- A place to store reusable transformer components of my own creation or found on the interwebs · ☆72 · Updated 3 weeks ago
- Machine Learning eXperiment Utilities · ☆48 · Updated 6 months ago