huggingface / optimum-tpu
Google TPU optimizations for transformers models
☆114 · Updated 5 months ago
Alternatives and similar repositories for optimum-tpu
Users interested in optimum-tpu are comparing it to the libraries listed below.
- PyTorch/XLA integration with JetStream (https://github.com/google/JetStream) for LLM inference ☆64 · Updated 3 months ago
- ☆134 · Updated 10 months ago
- Load compute kernels from the Hub ☆203 · Updated this week
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆198 · Updated 11 months ago
- Pytorch/XLA SPMD Test code in Google TPU ☆23 · Updated last year
- Collection of autoregressive model implementations ☆85 · Updated 2 months ago
- ☆128 · Updated 3 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆264 · Updated 9 months ago
- Set of scripts to finetune LLMs ☆37 · Updated last year
- ☆230 · Updated this week
- Matrix (Multi-Agent daTa geneRation Infra and eXperimentation framework) is a versatile engine for multi-agent conversational data genera… ☆73 · Updated last week
- Repo hosting code and materials related to speeding up LLM inference using token merging. ☆36 · Updated last year
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) ☆156 · Updated this week
- Large scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* ☆85 · Updated last year
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆101 · Updated 4 months ago
- Manage scalable open LLM inference endpoints in Slurm clusters ☆265 · Updated last year
- ☆124 · Updated 8 months ago
- Scaling is a distributed training library and installable dependency designed to scale up neural networks, with a dedicated module for tr… ☆62 · Updated 8 months ago
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆318 · Updated 2 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆128 · Updated 7 months ago
- A byte-level decoder architecture that matches the performance of tokenized Transformers. ☆64 · Updated last year
- Implementation of the Llama architecture with RLHF + Q-learning ☆165 · Updated 5 months ago
- ☆49 · Updated last year
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆232 · Updated 8 months ago
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆243 · Updated 5 months ago
- A tool to configure, launch and manage your machine learning experiments ☆171 · Updated this week
- NanoGPT-speedrunning for the poor T4 enjoyers ☆68 · Updated 2 months ago
- Training-free Post-training Efficient Sub-quadratic Complexity Attention. Implemented with OpenAI Triton. ☆139 · Updated this week
- some common Huggingface transformers in maximal update parametrization (µP) ☆81 · Updated 3 years ago
- Accelerating your LLM training to full speed! Made with ❤️ by ServiceNow Research ☆211 · Updated this week