ClashLuke / tpucare
Automatically take good care of your preemptible TPUs
☆32 · Updated last year
Related projects
Alternatives and complementary repositories for tpucare
- HomebrewNLP in JAX flavour for maintainable TPU training ☆46 · Updated 10 months ago
- My explorations into editing the knowledge and memories of an attention network ☆34 · Updated last year
- An implementation of the Llama architecture, to instruct and delight ☆21 · Updated 3 months ago
- Latent Diffusion Language Models ☆67 · Updated last year
- Experiment of using Tangent to autodiff triton ☆72 · Updated 9 months ago
- PyTorch interface for TrueGrad Optimizers ☆39 · Updated last year
- An implementation of the PSGD Kron second-order optimizer for PyTorch ☆16 · Updated this week
- Train vision models using JAX and 🤗 transformers ☆95 · Updated 3 weeks ago
- The 2D discrete wavelet transform for JAX ☆38 · Updated last year
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8 ☆35 · Updated 4 months ago
- If it quacks like a tensor... ☆52 · Updated last week
- A case study of efficient training of large language models using commodity hardware ☆68 · Updated 2 years ago
- DiCE: The Infinitely Differentiable Monte-Carlo Estimator ☆30 · Updated last year
- Utilities for PyTorch distributed ☆23 · Updated last year
- Implementation of some personal helper functions for Einops, my most favorite tensor manipulation library ❤️ ☆52 · Updated last year
- Machine Learning eXperiment Utilities ☆45 · Updated 5 months ago
- Transformer with Mu-Parameterization, implemented in Jax/Flax. Supports FSDP on TPU pods. ☆29 · Updated 2 weeks ago
- LoRA for arbitrary JAX models and functions ☆132 · Updated 8 months ago
- Implementation of the PSGD optimizer in JAX ☆17 · Updated last week
- Implementation of Token Shift GPT, an autoregressive model that solely relies on shifting the sequence space for mixing ☆47 · Updated 2 years ago
- CUDA implementation of autoregressive linear attention, with all the latest research findings ☆43 · Updated last year