AI-Hypercomputer / ray-tpu
☆15 · Updated 8 months ago
Alternatives and similar repositories for ray-tpu
Users interested in ray-tpu are comparing it to the libraries listed below.
- ☆16 · Updated 8 months ago
- torchprime is a reference model implementation for PyTorch on TPU. ☆44 · Updated this week
- ☆124 · Updated last year
- ☆21 · Updated 11 months ago
- ☆20 · Updated 2 years ago
- Experimental playground for benchmarking language model (LM) architectures, layers, and tricks on smaller datasets. Designed for flexible… ☆97 · Updated last week
- Some common Huggingface transformers in maximal update parametrization (µP) ☆87 · Updated 3 years ago
- Fast, Modern, and Low Precision PyTorch Optimizers ☆124 · Updated last month
- Machine Learning eXperiment Utilities ☆48 · Updated 6 months ago
- Two implementations of ZeRO-1 optimizer sharding in JAX ☆14 · Updated 2 years ago
- Various transformers for FSDP research ☆38 · Updated 3 years ago
- EasyDel Former is a utility library designed to simplify and enhance development in JAX ☆29 · Updated this week
- A toolkit for scaling law research ⚖ ☆55 · Updated last year
- Randomized Positional Encodings Boost Length Generalization of Transformers ☆82 · Updated last year
- DPO, but faster 🚀 (a sketch of the standard DPO objective appears after this list) ☆47 · Updated last year
- A place to store reusable transformer components of my own creation or found on the interwebs ☆72 · Updated 3 weeks ago
- A set of Python scripts that makes your experience on TPU better ☆56 · Updated 4 months ago
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ☆53 · Updated 2 years ago
- An implementation of the Llama architecture, to instruct and delight ☆21 · Updated 8 months ago
- ☆22 · Updated last year
- ☆16 · Updated last year
- Griffin MQA + Hawk Linear RNN Hybrid ☆88 · Updated last year
- LayerNorm(SmallInit(Embedding)) in a Transformer to improve convergence (a minimal sketch of this trick appears after this list) ☆61 · Updated 3 years ago
- A library for unit scaling in PyTorch ☆133 · Updated 6 months ago
- A byte-level decoder architecture that matches the performance of tokenized Transformers. ☆67 · Updated last year
- Implementation of a Light Recurrent Unit in PyTorch ☆49 · Updated last year
- PyTorch/XLA integration with JetStream (https://github.com/google/JetStream) for LLM inference ☆79 · Updated last month
- Large-scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still a work in progress)* ☆86 · Updated 2 years ago
- Blazing-fast data loading with HuggingFace Dataset and Ray Data ☆16 · Updated 2 years ago
- Experiment of using Tangent to autodiff Triton ☆82 · Updated 2 years ago
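For orientation on the "DPO, but faster 🚀" entry above: the repo advertises a faster implementation, but the objective it speeds up is the standard DPO loss. Below is a minimal PyTorch sketch of that standard objective only, assuming summed per-response log-probabilities as inputs; the function name, the `beta=0.1` default, and the toy numbers are illustrative assumptions, not taken from the listed repository.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    # Each input holds the summed log-probability of a response under
    # either the trainable policy or the frozen reference model.
    # beta scales the implicit reward margin (0.1 is a common default,
    # assumed here rather than taken from the listed repo).
    policy_margin = policy_chosen_logps - policy_rejected_logps
    ref_margin = ref_chosen_logps - ref_rejected_logps
    return -F.logsigmoid(beta * (policy_margin - ref_margin)).mean()

# Toy usage with made-up log-probs for a batch of three preference pairs.
loss = dpo_loss(torch.tensor([-4.2, -3.1, -5.0]),
                torch.tensor([-5.0, -3.9, -5.5]),
                torch.tensor([-4.5, -3.3, -5.1]),
                torch.tensor([-4.9, -3.8, -5.4]))
```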
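Similarly, for the "LayerNorm(SmallInit(Embedding))" entry: the one-line summary describes the whole trick, i.e. initialize the token embedding near zero and normalize it immediately. A minimal sketch under that reading follows; the class name and the `1e-4` init scale are assumptions, not values from the repository.

```python
import torch
import torch.nn as nn

class SmallInitEmbedding(nn.Module):
    # Embedding initialized near zero, followed by LayerNorm.
    # The tiny init keeps early-training activations small while the
    # LayerNorm restores a usable scale, which is the convergence trick
    # named in the list above. The 1e-4 scale is an assumed value.
    def __init__(self, vocab_size: int, d_model: int, init_scale: float = 1e-4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        nn.init.uniform_(self.embed.weight, -init_scale, init_scale)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        return self.norm(self.embed(token_ids))

# Drop-in usage where a Transformer would otherwise use a bare nn.Embedding.
emb = SmallInitEmbedding(vocab_size=50257, d_model=512)
x = emb(torch.randint(0, 50257, (2, 16)))  # -> shape (2, 16, 512)
```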