AI-Hypercomputer / ray-tpu
☆15 Updated 3 months ago
Alternatives and similar repositories for ray-tpu
Users interested in ray-tpu are comparing it to the libraries listed below.
- torchprime is a reference model implementation for PyTorch on TPU. ☆34 Updated this week
- ☆14 Updated 3 months ago
- Experimental playground for benchmarking language model (LM) architectures, layers, and tricks on smaller datasets. Designed for flexible… ☆76 Updated last month
- Experimenting with how best to do multi-host dataloading ☆10 Updated 2 years ago
- Various transformers for FSDP research ☆38 Updated 2 years ago
- A JAX-native LLM Post-Training Library ☆123 Updated this week
- ☆118 Updated last year
- ☆21 Updated 5 months ago
- An implementation of the Llama architecture, to instruct and delight ☆21 Updated 2 months ago
- ☆20 Updated 2 years ago
- Tiled Flash Linear Attention library for fast and efficient mLSTM Kernels. ☆69 Updated last week
- Fast, Modern, and Low Precision PyTorch Optimizers ☆108 Updated 3 weeks ago
- some common Huggingface transformers in maximal update parametrization (µP) ☆82 Updated 3 years ago
- Train a SmolLM-style LLM on fineweb-edu in JAX/Flax with an assortment of optimizers. ☆19 Updated last month
- Randomized Positional Encodings Boost Length Generalization of Transformers ☆82 Updated last year
- A fast implementation of T5/UL2 in PyTorch using Flash Attention ☆107 Updated 5 months ago
- Blazing fast data loading with HuggingFace Dataset and Ray Data ☆16 Updated last year
- Implementation of a Light Recurrent Unit in Pytorch ☆48 Updated 10 months ago
- Griffin MQA + Hawk Linear RNN Hybrid ☆88 Updated last year
- A set of Python scripts that makes your experience on TPU better ☆54 Updated last year
- python bindings for symphonia/opus - read various audio formats from python and write opus files ☆65 Updated last month
- ☆61 Updated 3 years ago
- Scalable and Performant Data Loading ☆291 Updated this week
- (EasyDel Former) is a utility library designed to simplify and enhance development in JAX ☆28 Updated last week
- Repository for fine-tuning Transformers 🤗 based seq2seq speech models in JAX/Flax. ☆37 Updated 2 years ago
- Two implementations of ZeRO-1 optimizer sharding in JAX ☆14 Updated 2 years ago
- Load compute kernels from the Hub ☆244 Updated this week
- Machine Learning eXperiment Utilities ☆46 Updated 3 weeks ago
- DPO, but faster 🚀 ☆44 Updated 8 months ago
- Implementation of the proposed Adam-atan2 from Google Deepmind in Pytorch ☆119 Updated 8 months ago