ClashLuke / tpucare
Automatically take good care of your preemptible TPUs
☆36 · Updated 2 years ago
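tpucare's premise is that preemptible TPU VMs disappear without warning, so something has to watch them and bring them back automatically. As a rough illustration of that idea only (a minimal sketch built on the plain gcloud CLI, not tpucare's actual interface; the resource names, software version, and the 60-second poll interval are placeholder assumptions), the loop below checks a TPU VM's state and recreates it once it is no longer READY:

```python
# Minimal keep-alive sketch for a preemptible TPU VM (illustrative only, not
# tpucare's API): poll the VM state via the gcloud CLI and recreate the VM
# when it has been preempted or deleted. NAME/ZONE/ACCELERATOR/VERSION are
# placeholder values.
import subprocess
import time

NAME, ZONE = "my-tpu", "us-central1-f"
ACCELERATOR, VERSION = "v3-8", "tpu-ubuntu2204-base"


def tpu_state() -> str:
    """Return the TPU VM state (e.g. READY, PREEMPTED) or MISSING if absent."""
    result = subprocess.run(
        ["gcloud", "compute", "tpus", "tpu-vm", "describe", NAME,
         "--zone", ZONE, "--format", "value(state)"],
        capture_output=True, text=True)
    return result.stdout.strip() if result.returncode == 0 else "MISSING"


def recreate_tpu() -> None:
    """Delete any stale TPU VM and create a fresh preemptible one."""
    subprocess.run(["gcloud", "compute", "tpus", "tpu-vm", "delete", NAME,
                    "--zone", ZONE, "--quiet"])
    subprocess.run(["gcloud", "compute", "tpus", "tpu-vm", "create", NAME,
                    "--zone", ZONE, "--accelerator-type", ACCELERATOR,
                    "--version", VERSION, "--preemptible"])


while True:
    if tpu_state() != "READY":
        recreate_tpu()
        # At this point one would re-launch the training job from its latest
        # checkpoint, e.g. via `gcloud compute tpus tpu-vm ssh ... --command=...`.
    time.sleep(60)
```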
Alternatives and similar repositories for tpucare
Users interested in tpucare are comparing it to the libraries listed below.
- HomebrewNLP in JAX flavour for maintainable TPU training · ☆50 · Updated last year
- ☆34 · Updated last year
- ☆87 · Updated last year
- Train vision models using JAX and 🤗 transformers · ☆99 · Updated 2 weeks ago
- An implementation of the Llama architecture, to instruct and delight · ☆21 · Updated 3 months ago
- Experiment using Tangent to autodiff Triton · ☆81 · Updated last year
- Minimal (400 LOC) implementation, maximum (multi-node, FSDP) GPT training · ☆132 · Updated last year
- A case study of efficient training of large language models using commodity hardware · ☆68 · Updated 3 years ago
- ☆53 · Updated last year
- Amos optimizer with JEstimator lib · ☆82 · Updated last year
- ☆57 · Updated 11 months ago
- ☆61 · Updated 3 years ago
- Serialize JAX, Flax, Haiku, or Objax model params with 🤗`safetensors` · ☆45 · Updated last year
- ☆31 · Updated 2 months ago
- ☆19 · Updated 3 months ago
- My explorations into editing the knowledge and memories of an attention network · ☆35 · Updated 2 years ago
- Supports PyTorch FSDP for optimizers · ☆84 · Updated 9 months ago
- Engineering the state of RNN language models (Mamba, RWKV, etc.) · ☆32 · Updated last year
- Official repository of Pretraining Without Attention (BiGS); BiGS is the first model to achieve BERT-level transfer learning on the GLUE … · ☆114 · Updated last year
- ☆20 · Updated 2 years ago
- Implementation of the specific Transformer architecture from PaLM (Scaling Language Modeling with Pathways) in JAX (Equinox framework) · ☆188 · Updated 3 years ago
- A place to store reusable transformer components of my own creation or found on the interwebs · ☆60 · Updated 2 weeks ago
- Latent Diffusion Language Models · ☆69 · Updated last year
- Unofficial but Efficient Implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX · ☆88 · Updated last year
- Some common Hugging Face transformers in maximal update parametrization (µP) · ☆82 · Updated 3 years ago
- CUDA implementation of autoregressive linear attention, with all the latest research findings · ☆44 · Updated 2 years ago
- LoRA for arbitrary JAX models and functions · ☆142 · Updated last year
- Code for the paper "Function-Space Learning Rates" · ☆23 · Updated 3 months ago
- Transformer with Mu-Parameterization, implemented in JAX/Flax; supports FSDP on TPU pods · ☆32 · Updated 3 months ago
- Triton Implementation of HyperAttention Algorithm · ☆48 · Updated last year