yixiaoer / tpux
A set of Python scripts that make your experience on TPUs better
☆54 · Updated last year
Alternatives and similar repositories for tpux
Users interested in tpux are comparing it to the libraries listed below.
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆152 · Updated last month
- ☆14 · Updated last year
- JAX implementation of the Llama 2 model ☆219 · Updated last year
- ☆87 · Updated last year
- Minimal (400 LOC) implementation, maximum (multi-node, FSDP) GPT training ☆130 · Updated last year
- Inference code for LLaMA models in JAX ☆118 · Updated last year
- Understand and test language model architectures on synthetic tasks. ☆221 · Updated last month
- Minimal but scalable implementation of large language models in JAX ☆35 · Updated 3 weeks ago
- Large-scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still a work in progress)* ☆87 · Updated last year
- JAX implementation of the Mistral 7B v0.2 model ☆35 · Updated last year
- seqax = sequence modeling + JAX ☆166 · Updated last month
- Machine Learning eXperiment Utilities ☆46 · Updated 3 weeks ago
- ☆20 · Updated 2 years ago
- Some common Hugging Face transformers in maximal update parametrization (µP) ☆82 · Updated 3 years ago
- A MAD laboratory to improve AI architecture designs 🧪 ☆124 · Updated 8 months ago
- ☆53 · Updated last year
- Language models scale reliably with over-training and on downstream tasks ☆98 · Updated last year
- LoRA for arbitrary JAX models and functions ☆141 · Updated last year
- Supporting PyTorch FSDP for optimizers ☆84 · Updated 8 months ago
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆240 · Updated 2 months ago
- Train very large language models in JAX. ☆206 · Updated last year
- nanoGPT-like codebase for LLM training ☆102 · Updated 3 months ago
- ☆118 · Updated last year
- 🧱 Modula software package ☆222 · Updated 3 weeks ago
- Experiment of using Tangent to autodiff Triton ☆80 · Updated last year
- JAX bindings for Flash Attention v2 ☆90 · Updated 3 weeks ago
- Automatically take good care of your preemptible TPUs ☆36 · Updated 2 years ago
- Code and configs for "Asynchronous RLHF: Faster and More Efficient RL for Language Models" ☆60 · Updated 3 months ago
- Experiments for efforts to train a new and improved T5 ☆76 · Updated last year
- ☆61 · Updated 3 years ago