AI-Hypercomputer / kithara
☆16 · Updated 6 months ago
Alternatives and similar repositories for kithara
Users interested in kithara are comparing it to the libraries listed below.
- ☆15 · Updated 6 months ago
- torchprime is a reference model implementation for PyTorch on TPU. ☆41 · Updated last month
- ☆24 · Updated 2 weeks ago
- ☆54 · Updated this week
- ☆148 · Updated last month
- ☆121 · Updated last year
- A fast implementation of T5/UL2 in PyTorch using Flash Attention ☆112 · Updated last month
- PyTorch/XLA integration with JetStream (https://github.com/google/JetStream) for LLM inference ☆78 · Updated 2 months ago
- ☆190 · Updated 2 weeks ago
- some common Huggingface transformers in maximal update parametrization (µP) ☆87 · Updated 3 years ago
- xpk (Accelerated Processing Kit, pronounced x-p-k) is a software tool to help Cloud developers orchestrate training jobs on accelerat… ☆154 · Updated this week
- Randomized Positional Encodings Boost Length Generalization of Transformers ☆83 · Updated last year
- A set of Python scripts that makes your experience on TPU better ☆54 · Updated 2 months ago
- Fast, Modern, and Low Precision PyTorch Optimizers ☆116 · Updated 3 months ago
- ☆20 · Updated 2 years ago
- An implementation of the Llama architecture, to instruct and delight ☆21 · Updated 6 months ago
- Efficient encoder-decoder architecture for small language models (≤1B parameters) with cross-architecture knowledge distillation and visi… ☆33 · Updated 10 months ago
- 🤗 Transformers: State-of-the-art Natural Language Processing for TensorFlow 2.0 and PyTorch. ☆17 · Updated 6 months ago
- A flexible and efficient implementation of Flash Attention 2.0 for JAX, supporting multiple backends (GPU/TPU/CPU) and platforms (Triton/… ☆30 · Updated 9 months ago
- Machine Learning eXperiment Utilities ☆46 · Updated 4 months ago
- A toolkit for scaling law research ⚖ ☆53 · Updated 10 months ago
- Google TPU optimizations for transformers models ☆123 · Updated 10 months ago
- Two implementations of ZeRO-1 optimizer sharding in JAX ☆14 · Updated 2 years ago
- A library for unit scaling in PyTorch ☆132 · Updated 4 months ago
- Scalable and Performant Data Loading ☆349 · Updated this week
- JetStream is a throughput and memory optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in future -- PRs wel… ☆392 · Updated 5 months ago
- Various transformers for FSDP research ☆38 · Updated 3 years ago
- A byte-level decoder architecture that matches the performance of tokenized Transformers. ☆66 · Updated last year
- Official repo for "On the Generalization Ability of Retrieval-Enhanced Transformers" ☆44 · Updated last year
- PyTorch/XLA SPMD test code on Google TPU ☆23 · Updated last year