huggingface / optimum-habana
Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU)
☆166 · Updated this week
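For context, a minimal fine-tuning sketch using optimum-habana's documented `GaudiTrainer`/`GaudiTrainingArguments` API (assumes a Gaudi/HPU machine with the Habana SynapseAI stack installed; the model, dataset, and Gaudi config names are illustrative placeholders):

```python
# Minimal optimum-habana training sketch; model/dataset choices are placeholders.
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from optimum.habana import GaudiTrainer, GaudiTrainingArguments

model_name = "bert-base-uncased"  # illustrative model choice
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Small data slice so the example stays quick; any tokenized dataset works.
dataset = load_dataset("glue", "sst2", split="train[:1%]")
dataset = dataset.map(
    lambda ex: tokenizer(ex["sentence"], truncation=True, padding="max_length"),
    batched=True,
)

# GaudiTrainingArguments mirrors transformers.TrainingArguments with HPU-specific flags.
args = GaudiTrainingArguments(
    output_dir="./sst2-hpu",
    use_habana=True,          # run on HPU instead of CPU/GPU
    use_lazy_mode=True,       # lazy-mode graph execution on Gaudi
    gaudi_config_name="Habana/bert-base-uncased",  # pre-made Gaudi config from the Hub
    per_device_train_batch_size=8,
    num_train_epochs=1,
)

trainer = GaudiTrainer(model=model, args=args, train_dataset=dataset, tokenizer=tokenizer)
trainer.train()
```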
Alternatives and similar repositories for optimum-habana:
Users interested in optimum-habana are comparing it to the libraries listed below.
- Large Language Model Text Generation Inference on Habana Gaudi ☆31 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆50 · Updated this week
- Reference models for Intel(R) Gaudi(R) AI Accelerator ☆159 · Updated last week
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU(XPU) device. Note… ☆58 · Updated last month
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆257 · Updated 3 months ago
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆436 · Updated this week
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆186 · Updated last week
- Applied AI experiments and examples for PyTorch ☆216 · Updated last week
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ☆284 · Updated this week
- ☆114 · Updated 10 months ago
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind… ☆153 · Updated last month
- Blazing fast training of 🤗 Transformers on Graphcore IPUs ☆85 · Updated 10 months ago
- Easy and Efficient Quantization for Transformers ☆192 · Updated last month
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. ☆690 · Updated 4 months ago
- GPTQ inference Triton kernel ☆292 · Updated last year
- The Triton backend for the ONNX Runtime. ☆136 · Updated this week
- A Fusion Code Generator for NVIDIA GPUs (commonly known as "nvFuser") ☆294 · Updated this week
- ☆192 · Updated last week
- oneCCL Bindings for Pytorch* ☆87 · Updated 3 weeks ago
- Fast low-bit matmul kernels in Triton ☆199 · Updated last week
- NVIDIA Resiliency Extension is a python package for framework developers and users to implement fault-tolerant features. It improves the … ☆86 · Updated last week
- An efficient GPU support for LLM inference with x-bit quantization (e.g. FP6, FP5). ☆232 · Updated 3 months ago
- This repository contains the experimental PyTorch native float8 training UX ☆219 · Updated 5 months ago
- The Triton backend for the PyTorch TorchScript models. ☆141 · Updated last week
- Advanced Quantization Algorithm for LLMs/VLMs. ☆362 · Updated this week
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization ☆327 · Updated 5 months ago
- ☆58 · Updated 8 months ago
- Google TPU optimizations for transformers models ☆90 · Updated last week
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆218 · Updated this week