huggingface / optimum-tpu
Google TPU optimizations for transformers models
☆131 · Updated this week
Alternatives and similar repositories for optimum-tpu
Users interested in optimum-tpu are comparing it to the libraries listed below.
- Load compute kernels from the Hub (☆352, updated last week)
- 👷 Build compute kernels (☆195, updated this week)
- Collection of autoregressive model implementations (☆85, updated 8 months ago)
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients (☆202, updated last year)
- ☆136 (updated last year)
- A high-throughput and memory-efficient inference and serving engine for LLMs (☆267, updated 2 weeks ago)
- Manage scalable open LLM inference endpoints in Slurm clusters (☆278, updated last year)
- Large-scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* (☆87, updated 2 years ago)
- ☆138 (updated 4 months ago)
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free (☆232, updated last year)
- Lightweight toolkit to train and fine-tune 1.58-bit language models (☆103, updated 7 months ago)
- Repo hosting code and materials for speeding up LLM inference using token merging (☆37, updated 2 months ago)
- ☆124 (updated last year)
- Set of scripts to finetune LLMs (☆38, updated last year)
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) (☆108, updated 9 months ago)
- Code for training and evaluating Contextual Document Embedding models (☆201, updated 7 months ago)
- PyTorch Distributed-native training library for LLMs/VLMs with out-of-the-box Hugging Face support (☆209, updated this week)
- MoE training for Me and You and maybe other people (☆239, updated last week)
- Accelerating your LLM training to full speed! Made with ❤️ by ServiceNow Research (☆272, updated this week)
- Scaling is a distributed training library and installable dependency designed to scale up neural networks, with a dedicated module for tr… (☆66, updated last month)
- Multipack distributed sampler for fast padding-free training of LLMs (☆202, updated last year)
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS …] (☆60, updated last year)
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" (☆155, updated last year)
- ☆47 (updated last year)
- ☆204 (updated last year)
- ☆198 (updated last year)
- Experiments on speculative sampling with Llama models (☆127, updated 2 years ago)
- ☆50 (updated last year)
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full fine-tunes (☆83, updated 2 years ago)
- LM engine is a library for pretraining/finetuning LLMs (☆102, updated this week)