AI-Hypercomputer / kithara
☆16 · Updated 8 months ago
Alternatives and similar repositories for kithara
Users interested in kithara are comparing it to the libraries listed below.
- ☆15 · Updated 8 months ago
- torchprime is a reference model implementation for PyTorch on TPU. ☆44 · Updated last month
- ☆72 · Updated last week
- ☆124 · Updated last year
- ☆192 · Updated this week
- PyTorch/XLA integration with JetStream (https://github.com/google/JetStream) for LLM inference ☆79 · Updated last month
- ☆152 · Updated last month
- ☆26 · Updated last month
- Some common Hugging Face transformers in maximal update parametrization (µP) ☆87 · Updated 3 years ago
- A toolkit for scaling law research ⚖ ☆55 · Updated last year
- ☆20 · Updated 2 years ago
- Fast, Modern, and Low Precision PyTorch Optimizers ☆121 · Updated last month
- Various transformers for FSDP research ☆38 · Updated 3 years ago
- Load compute kernels from the Hub ☆389 · Updated last week
- Google TPU optimizations for transformers models ☆135 · Updated 2 weeks ago
- TPU inference for vLLM, with unified JAX and PyTorch support. ☆228 · Updated this week
- A fast implementation of T5/UL2 in PyTorch using Flash Attention ☆113 · Updated 3 months ago
- A library for unit scaling in PyTorch ☆133 · Updated 6 months ago
- Official repo for "On the Generalization Ability of Retrieval-Enhanced Transformers" ☆46 · Updated last year
- A set of Python scripts that makes your experience on TPU better ☆56 · Updated 4 months ago
- xpk (Accelerated Processing Kit, pronounced x-p-k) is a software tool to help Cloud developers orchestrate training jobs on accelerat… ☆162 · Updated this week
- An implementation of the Llama architecture, to instruct and delight ☆21 · Updated 8 months ago
- Official code for the NeurIPS 2025 paper "RAT: Bridging RNN Efficiency and Attention Accuracy in Language Modeling" (https://arxiv.org/abs/25… ☆23 · Updated last month
- Muon fsdp 2 ☆52 · Updated 6 months ago
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆279 · Updated 2 months ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆20 · Updated 2 years ago
- A flexible and efficient implementation of Flash Attention 2.0 for JAX, supporting multiple backends (GPU/TPU/CPU) and platforms (Triton/… ☆34 · Updated 11 months ago
- Pax is a Jax-based machine learning framework for training large scale models. Pax allows for advanced and fully configurable experimenta… ☆547 · Updated 3 weeks ago
- Minimum Bayes Risk Decoding for Hugging Face Transformers ☆60 · Updated last year
- ☆147 · Updated this week