AI-Hypercomputer / ray-tpu
☆15 · Updated 8 months ago
Alternatives and similar repositories for ray-tpu
Users interested in ray-tpu are comparing it to the libraries listed below.
- ☆16 · Updated 7 months ago
- torchprime is a reference model implementation for PyTorch on TPU. ☆44 · Updated last week
- ☆124 · Updated last year
- Various transformers for FSDP research ☆38 · Updated 3 years ago
- Fast, Modern, and Low Precision PyTorch Optimizers ☆119 · Updated 2 weeks ago
- Machine Learning eXperiment Utilities ☆47 · Updated 5 months ago
- DPO, but faster 🚀 ☆46 · Updated last year
- ☆20 · Updated 2 years ago
- A fast implementation of T5/UL2 in PyTorch using Flash Attention ☆113 · Updated 2 months ago
- A place to store reusable transformer components of my own creation or found on the interwebs ☆71 · Updated last week
- Some common Huggingface transformers in maximal update parametrization (µP); a minimal sketch of the µP readout trick follows this list. ☆87 · Updated 3 years ago
- Experimental playground for benchmarking language model (LM) architectures, layers, and tricks on smaller datasets. Designed for flexible… ☆94 · Updated last month
- Tiled Flash Linear Attention library for fast and efficient mLSTM kernels. ☆82 · Updated last month
- ☆21 · Updated 10 months ago
- A toolkit for scaling law research ⚖ ☆55 · Updated 11 months ago
- Randomized Positional Encodings Boost Length Generalization of Transformers ☆82 · Updated last year
- Griffin MQA + Hawk Linear RNN Hybrid ☆88 · Updated last year
- ☆63 · Updated 3 years ago
- Official implementation of "GPT or BERT: why not both?" ☆63 · Updated 5 months ago
- ☆16 · Updated last year
- A library for unit scaling in PyTorch ☆133 · Updated 6 months ago
- An implementation of the Llama architecture, to instruct and delight ☆21 · Updated 7 months ago
- ☆192 · Updated this week
- ☆22 · Updated last year
- Two implementations of ZeRO-1 optimizer sharding in JAX (see the ZeRO-1 sketch after this list) ☆14 · Updated 2 years ago
- EasyDel Former is a utility library designed to simplify and enhance development in JAX ☆29 · Updated last week
- Utilities for PyTorch distributed ☆25 · Updated 10 months ago
- A set of Python scripts that makes your experience on TPU better ☆55 · Updated 4 months ago
- A byte-level decoder architecture that matches the performance of tokenized Transformers. ☆67 · Updated last year
- PyTorch/XLA SPMD test code on Google TPUs ☆23 · Updated last year
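
For the µP entry above, here is a minimal, hypothetical sketch of the best-known µP ingredient, the readout multiplier: the output layer's logits are rescaled by base_width/width so that hyperparameters tuned at a small base width transfer as the model widens. `MuReadoutSketch` and `base_width` are illustrative names, not the listed repository's API; the full parametrization also adjusts init scales and per-layer learning rates (see the mup package).

```python
import torch.nn as nn

class MuReadoutSketch(nn.Linear):
    """Hypothetical µP-style readout layer (illustration only).

    Divides the logits by width/base_width, so the output layer's
    effective scale shrinks like 1/width as the model grows, which is
    one ingredient of maximal update parametrization.
    """

    def __init__(self, in_features, out_features, base_width=256, **kw):
        super().__init__(in_features, out_features, **kw)
        # Assumption: in_features is used as a proxy for model width.
        self.width_mult = in_features / base_width

    def forward(self, x):
        return super().forward(x) / self.width_mult
```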
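
For the ZeRO-1 entry, here is a minimal conceptual sketch of what ZeRO-1 sharding means in JAX, not either of that repository's two implementations: gradients are averaged across the data-parallel axis as usual, but each device keeps only its 1/N slice of the optimizer state, updates its own parameter shard, and all-gathers the result. The function name, the flat parameter vector, and plain SGD with momentum are all simplifying assumptions.

```python
import jax

def zero1_sgd_step(params, momentum, grads, lr=1e-2, beta=0.9, axis_name="dp"):
    """One ZeRO-1 style step (sketch): full gradients, sharded optimizer state.

    Assumes params is a flat vector whose length divides evenly by the
    number of devices on axis_name, and that it runs under pmap/shard_map.
    """
    n = jax.lax.psum(1, axis_name)               # data-parallel world size
    i = jax.lax.axis_index(axis_name)            # this device's shard index
    avg_grads = jax.lax.pmean(grads, axis_name)  # standard gradient averaging
    grad_shard = avg_grads.reshape(n, -1)[i]     # gradient slice we own
    param_shard = params.reshape(n, -1)[i]       # parameter slice we own
    momentum = beta * momentum + grad_shard      # optimizer state is per-shard
    param_shard = param_shard - lr * momentum
    # Reassemble the full parameter vector on every device.
    params = jax.lax.all_gather(param_shard, axis_name).reshape(params.shape)
    return params, momentum

# Usage sketch: replicate params and grads across devices, but initialize
# momentum with shard shape (n_devices, params.size // n_devices).
step = jax.pmap(zero1_sgd_step, axis_name="dp")
```

The design point this illustrates is that ZeRO-1 trades one all-gather per step for an N-fold reduction in optimizer-state memory, while gradients and parameters remain fully replicated.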