huggingface / optimum-tpu
Google TPU optimizations for transformers models
☆75 · Updated this week
Related projects
Alternatives and complementary repositories for optimum-tpu
- PyTorch/XLA integration with JetStream (https://github.com/google/JetStream) for LLM inference ☆40 · Updated last week
- Set of scripts to finetune LLMs ☆36 · Updated 7 months ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients ☆173 · Updated 4 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆253 · Updated last month
- Repo hosting code and materials on speeding up LLM inference using token merging ☆29 · Updated 6 months ago
- Easy and lightning-fast training of 🤗 Transformers on the Habana Gaudi processor (HPU) ☆153 · Updated this week
- Manage scalable open LLM inference endpoints in Slurm clusters ☆236 · Updated 4 months ago
- JetStream is a throughput- and memory-optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in future -- PRs wel… ☆232 · Updated this week
- Large-scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* ☆80 · Updated 11 months ago
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full fine-tunes ☆81 · Updated last year
- 🕹️ Performance Comparison of MLOps Engines, Frameworks, and Languages on Mainstream AI Models ☆134 · Updated 3 months ago
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆221 · Updated 3 weeks ago
- Code for training and evaluating Contextual Document Embedding models ☆117 · Updated this week
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆193 · Updated this week
- QLoRA with Enhanced Multi-GPU Support ☆36 · Updated last year
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆104 · Updated last month
- Collection of autoregressive model implementations ☆67 · Updated this week
- The simplest, fastest repository for training/finetuning medium-sized GPTs ☆84 · Updated last week
- Experiments with inference on Llama ☆105 · Updated 5 months ago
- Fast, Modern, Memory-Efficient, and Low-Precision PyTorch Optimizers ☆58 · Updated 4 months ago
- Accelerate and optimize performance with streamlined training and serving options in JAX ☆208 · Updated this week