huggingface / optimum-habana
Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU)
☆188 · Updated this week
Alternatives and similar repositories for optimum-habana
Users interested in optimum-habana are comparing it to the libraries listed below.
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ☆304 · Updated 3 weeks ago
- Large Language Model Text Generation Inference on Habana Gaudi ☆33 · Updated 3 months ago
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆473 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆76 · Updated this week
- Easy and Efficient Quantization for Transformers ☆199 · Updated 4 months ago
- Fast low-bit matmul kernels in Triton ☆322 · Updated this week
- Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆204 · Updated this week
- Reference models for Intel(R) Gaudi(R) AI Accelerator ☆163 · Updated last month
- An innovative library for efficient LLM inference via low-bit quantization ☆349 · Updated 9 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆264 · Updated 8 months ago
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) devices. Note… ☆61 · Updated 3 months ago
- Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆253 · Updated this week
- Applied AI experiments and examples for PyTorch ☆277 · Updated 3 weeks ago
- ☆119 · Updated last year
- ☆194 · Updated last month
- A tool to configure, launch and manage your machine learning experiments. ☆161 · Updated this week
- ☆213 · Updated 5 months ago
- The Triton backend for the ONNX Runtime. ☆152 · Updated this week
- GPTQ inference Triton kernel ☆302 · Updated 2 years ago
- Collection of kernels written in the Triton language ☆128 · Updated 2 months ago
- Inference server benchmarking tool ☆74 · Updated last month
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) ☆119 · Updated this week
- oneCCL Bindings for PyTorch* ☆97 · Updated last month
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5). ☆252 · Updated 7 months ago
- Blazing fast training of 🤗 Transformers on Graphcore IPUs ☆85 · Updated last year
- This repository hosts code that supports the testing infrastructure for the PyTorch organization. For example, this repo hosts the logic … ☆94 · Updated this week
- Advanced Quantization Algorithm for LLMs and VLMs, with support for CPU, Intel GPU, CUDA and HPU. Seamlessly integrated with Torchao, Tra… ☆525 · Updated this week
- Google TPU optimizations for transformers models ☆113 · Updated 5 months ago
- Pretrain, finetune and serve LLMs on Intel platforms with Ray ☆129 · Updated last month
- Perplexity GPU Kernels ☆375 · Updated 2 weeks ago
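Several of the repositories above (the low-bit quantization libraries and the GPTQ/Triton kernels) share one core idea: mapping float weights onto a small integer grid plus a scale factor. As a rough illustration only, and not the actual code of any listed project, here is a minimal pure-Python sketch of symmetric round-to-nearest int4 quantization:

```python
def quantize_int4_symmetric(weights):
    """Map floats to int4 codes in [-8, 7] with a shared per-tensor scale.

    Symmetric scheme: the scale is chosen so the largest-magnitude weight
    maps to +/-7, and every weight is rounded to the nearest code.
    """
    qmax = 7  # use 7 (not 8) so the positive and negative ranges match
    scale = max(abs(w) for w in weights) / qmax
    codes = [max(-8, min(7, round(w / scale))) for w in weights]
    return codes, scale

def dequantize(codes, scale):
    """Reconstruct approximate float weights from codes and scale."""
    return [c * scale for c in codes]

weights = [0.42, -1.37, 0.08, 0.91, -0.55, 1.37]
codes, scale = quantize_int4_symmetric(weights)
restored = dequantize(codes, scale)
# With round-to-nearest, the reconstruction error per weight is at most scale/2.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

Real libraries refine this sketch in ways that matter for accuracy: per-channel or per-group scales instead of one per tensor, zero-points for asymmetric ranges, and error-compensating rounding such as GPTQ rather than plain round-to-nearest.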