huggingface / optimum-habana
Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU)
★194 · Updated this week
Alternatives and similar repositories for optimum-habana
Users interested in optimum-habana are comparing it to the libraries listed below.
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ★315 · Updated this week
- Large Language Model Text Generation Inference on Habana Gaudi ★34 · Updated 5 months ago
- Reference models for Intel(R) Gaudi(R) AI Accelerator ★167 · Updated last week
- A high-throughput and memory-efficient inference and serving engine for LLMs ★266 · Updated 11 months ago
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ★489 · Updated this week
- A tool to configure, launch and manage your machine learning experiments. ★190 · Updated this week
- JetStream is a throughput and memory optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in future -- PRs wel… ★374 · Updated 3 months ago
- Easy and Efficient Quantization for Transformers ★203 · Updated 2 months ago
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ★209 · Updated last week
- An innovative library for efficient LLM inference via low-bit quantization ★348 · Updated last year
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ★265 · Updated last month
- A high-throughput and memory-efficient inference and serving engine for LLMs ★83 · Updated this week
- GPTQ inference Triton kernel ★307 · Updated 2 years ago
- ★294 · Updated last month
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) device. Note… ★62 · Updated 2 months ago
- Provides end-to-end model development pipelines for LLMs and Multimodal models that can be launched on-prem or cloud-native. ★509 · Updated 4 months ago
- ★252 · Updated last year
- Blazing fast training of 🤗 Transformers on Graphcore IPUs ★86 · Updated last year
- Google TPU optimizations for transformers models ★120 · Updated 7 months ago
- This repository hosts code that supports the testing infrastructure for the PyTorch organization. For example, this repo hosts the logic … ★100 · Updated this week
- ★217 · Updated 7 months ago
- Pipeline Parallelism for PyTorch ★779 · Updated last year
- Inference server benchmarking tool ★98 · Updated 4 months ago
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ★89 · Updated this week
- ★121 · Updated last year
- Applied AI experiments and examples for PyTorch ★295 · Updated 3 weeks ago
- Fast low-bit matmul kernels in Triton ★365 · Updated this week
- ★197 · Updated 4 months ago
- Fault tolerance for PyTorch (HSDP, LocalSGD, DiLoCo, Streaming DiLoCo) ★395 · Updated 2 weeks ago
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. ★895 · Updated last year