yixiaoer / tpu-training-example
☆14 · Updated 11 months ago
Alternatives and similar repositories for tpu-training-example
Users interested in tpu-training-example are comparing it to the libraries listed below.
- JAX implementation of the Mistral 7B v0.2 model ☆35 · Updated 11 months ago
- A set of Python scripts that makes your experience on TPU better ☆55 · Updated 11 months ago
- Train a SmolLM-style LLM on fineweb-edu in JAX/Flax with an assortment of optimizers ☆17 · Updated 3 months ago
- ☆78 · Updated 11 months ago
- Triton implementation of the HyperAttention algorithm ☆48 · Updated last year
- Minimal but scalable implementation of large language models in JAX ☆35 · Updated 7 months ago
- ☆44 · Updated last year
- Einsum-like high-level array sharding API for JAX ☆35 · Updated 11 months ago
- ☆20 · Updated last year
- Transformer with Mu-Parameterization, implemented in JAX/Flax; supports FSDP on TPU pods ☆30 · Updated 2 weeks ago
- Code for the paper "Function-Space Learning Rates" ☆20 · Updated 2 weeks ago
- Collection of autoregressive model implementations ☆85 · Updated last month
- This repo is based on https://github.com/jiaweizzhao/GaLore ☆28 · Updated 9 months ago
- JAX Scalify: end-to-end scaled arithmetic ☆16 · Updated 7 months ago
- Supporting PyTorch FSDP for optimizers ☆82 · Updated 6 months ago
- A flexible and efficient implementation of Flash Attention 2.0 for JAX, supporting multiple backends (GPU/TPU/CPU) and platforms (Triton/… ☆24 · Updated 3 months ago
- Awesome Triton Resources ☆31 · Updated last month
- The simplest implementation of recent sparse-attention patterns for efficient LLM inference ☆70 · Updated last week
- Machine Learning eXperiment Utilities ☆46 · Updated last year
- Official code release for "SuperBPE: Space Travel for Language Models" ☆54 · Updated 2 weeks ago
- Experiment using Tangent to autodiff Triton ☆79 · Updated last year
- ☆19 · Updated last month
- Make Triton easier ☆46 · Updated last year
- Research implementation of Native Sparse Attention (arXiv:2502.11089) ☆54 · Updated 4 months ago
- Custom Triton kernels for training Karpathy's nanoGPT ☆19 · Updated 8 months ago
- DPO, but faster 🚀 ☆43 · Updated 6 months ago
- Using FlexAttention to compute attention with different masking patterns ☆43 · Updated 9 months ago
- A repository for research on medium-sized language models ☆76 · Updated last year
- ☆53 · Updated last year
- The evaluation framework for training-free sparse attention in LLMs ☆69 · Updated this week