huggingface / optimum-tpu
Google TPU optimizations for transformers models
☆109 · Updated 3 months ago
Alternatives and similar repositories for optimum-tpu:
Users interested in optimum-tpu are comparing it to the libraries listed below.
- PyTorch/XLA integration with JetStream (https://github.com/google/JetStream) for LLM inference ☆60 · Updated last month
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) ☆71 · Updated this week
- ☆115 · Updated 3 weeks ago
- Load compute kernels from the Hub ☆115 · Updated last week
- Manage scalable open LLM inference endpoints in Slurm clusters ☆254 · Updated 9 months ago
- ☆49 · Updated last year
- ☆201 · Updated this week
- Inference server benchmarking tool ☆56 · Updated last week
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆98 · Updated last month
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients ☆198 · Updated 9 months ago
- A set of scripts for finetuning LLMs ☆37 · Updated last year
- PyTorch/XLA SPMD test code on Google TPU ☆23 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized GPTs ☆105 · Updated this week
- ☆129 · Updated 8 months ago
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆244 · Updated this week
- ☆67 · Updated 2 years ago
- Fast, Modern, Memory-Efficient, and Low-Precision PyTorch Optimizers ☆92 · Updated 9 months ago
- Data preparation code for the Amber 7B LLM ☆89 · Updated 11 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆262 · Updated 6 months ago
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full finetunes ☆82 · Updated last year
- A tool to configure, launch, and manage your machine learning experiments ☆144 · Updated this week
- JAX implementation of the Llama 2 model ☆218 · Updated last year
- ☆78 · Updated 10 months ago
- Collection of autoregressive model implementations ☆85 · Updated last week
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆126 · Updated 5 months ago
- Large-scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still a work in progress)* ☆82 · Updated last year
- Easy and Efficient Quantization for Transformers ☆197 · Updated 2 months ago
- ☆80 · Updated last year
- ☆125 · Updated last year
- ☆209 · Updated 3 months ago