huggingface / optimum-tpu
Google TPU optimizations for transformers models
☆103 · Updated 2 months ago
Alternatives and similar repositories for optimum-tpu:
Users interested in optimum-tpu are comparing it to the libraries listed below.
- PyTorch/XLA integration with JetStream (https://github.com/google/JetStream) for LLM inference ☆54 · Updated last month
- ☆197 · Updated this week
- PyTorch/XLA SPMD test code on Google TPU ☆23 · Updated 11 months ago
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆232 · Updated 2 weeks ago
- Manage scalable open LLM inference endpoints in Slurm clusters ☆253 · Updated 8 months ago
- A tool to configure, launch and manage your machine learning experiments. ☆127 · Updated this week
- ☆67 · Updated 2 years ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆196 · Updated 8 months ago
- ☆113 · Updated 5 months ago
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) ☆48 · Updated last week
- Collection of autoregressive model implementations ☆83 · Updated last month
- Fast, Modern, Memory Efficient, and Low Precision PyTorch Optimizers ☆87 · Updated 8 months ago
- JetStream is a throughput and memory optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in future -- PRs wel… ☆297 · Updated this week
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". ☆272 · Updated last year
- Comprehensive analysis of the differences in performance of QLoRA, LoRA, and full fine-tunes. ☆82 · Updated last year
- Repo hosting code and materials related to speeding up LLM inference using token merging. ☆35 · Updated 10 months ago
- Large scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* ☆81 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆262 · Updated 5 months ago
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆298 · Updated last year
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆123 · Updated 3 months ago
- Accelerate and optimize performance with streamlined training and serving options in JAX. ☆250 · Updated this week
- ☆49 · Updated last year
- Let's build better datasets, together! ☆256 · Updated 3 months ago
- Experiments with inference on Llama ☆104 · Updated 9 months ago
- Easy and lightning-fast training of 🤗 Transformers on Habana Gaudi processor (HPU) ☆177 · Updated this week
- Set of scripts to finetune LLMs ☆37 · Updated 11 months ago
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆154 · Updated 5 months ago
- Multipack distributed sampler for fast padding-free training of LLMs ☆186 · Updated 7 months ago
- ☆125 · Updated last year