huggingface / optimum-habana
Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU)
⭐186 · Updated this week
Alternatives and similar repositories for optimum-habana
Users interested in optimum-habana are comparing it to the libraries listed below.
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ⭐466 · Updated this week
- Large Language Model Text Generation Inference on Habana Gaudi ⭐33 · Updated 2 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ⭐75 · Updated this week
- Reference models for Intel(R) Gaudi(R) AI Accelerator ⭐161 · Updated 2 weeks ago
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ⭐301 · Updated last week
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) device. Note… ⭐61 · Updated 2 months ago
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ⭐196 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ⭐263 · Updated 7 months ago
- Easy and Efficient Quantization for Transformers ⭐198 · Updated 3 months ago
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ⭐249 · Updated this week
- Benchmark suite for LLMs from Fireworks.ai ⭐75 · Updated 2 weeks ago
- This repository contains the experimental PyTorch native float8 training UX ⭐223 · Updated 10 months ago
- Applied AI experiments and examples for PyTorch ⭐271 · Updated this week
- Google TPU optimizations for transformers models ⭐112 · Updated 4 months ago
- Fast low-bit matmul kernels in Triton ⭐303 · Updated last week
- An innovative library for efficient LLM inference via low-bit quantization ⭐348 · Updated 9 months ago
- Dynamic batching library for Deep Learning inference. Tutorials for LLM, GPT scenarios. ⭐97 · Updated 9 months ago
- Inference server benchmarking tool ⭐67 · Updated last month
- Perplexity GPU Kernels ⭐324 · Updated 2 weeks ago
- The Triton backend for the ONNX Runtime. ⭐148 · Updated 2 weeks ago
- Microsoft Automatic Mixed Precision Library ⭐602 · Updated 8 months ago
- NVIDIA Resiliency Extension is a Python package for framework developers and users to implement fault-tolerant features. It improves the … ⭐169 · Updated last week
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ⭐208 · Updated 9 months ago
- The Triton backend for the PyTorch TorchScript models. ⭐149 · Updated 2 weeks ago
- oneCCL Bindings for Pytorch* ⭐97 · Updated last month