huggingface / optimum-habana
Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU)
☆181 · Updated this week
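For context, here is a minimal fine-tuning sketch using optimum-habana's `GaudiTrainer` (assumes a Gaudi machine with the Habana/SynapseAI PyTorch bridge installed; the model, dataset, and Gaudi config names are illustrative, and argument names may shift between versions):

```python
# Minimal fine-tuning sketch with optimum-habana (assumes a Gaudi machine
# with the Habana/SynapseAI PyTorch bridge installed; model, dataset, and
# Gaudi config names below are illustrative choices).
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from optimum.habana import GaudiTrainer, GaudiTrainingArguments

model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Tokenize a small GLUE task for demonstration purposes.
train_ds = load_dataset("glue", "mrpc", split="train").map(
    lambda ex: tokenizer(ex["sentence1"], ex["sentence2"],
                         truncation=True, padding="max_length"),
    batched=True,
)

args = GaudiTrainingArguments(
    output_dir="./out",
    use_habana=True,                                # run on HPU
    use_lazy_mode=True,                             # Gaudi lazy-execution graph mode
    gaudi_config_name="Habana/bert-base-uncased",   # Gaudi config published on the Hub
    per_device_train_batch_size=8,
    num_train_epochs=1,
)

# GaudiTrainer mirrors the transformers Trainer API.
trainer = GaudiTrainer(model=model, args=args, train_dataset=train_ds, tokenizer=tokenizer)
trainer.train()
```

Because `GaudiTrainer` mirrors the `transformers` Trainer API, existing training scripts typically only need the argument-class swap plus a Gaudi configuration.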
Alternatives and similar repositories for optimum-habana:
Users that are interested in optimum-habana are comparing it to the libraries listed below
- Large Language Model Text Generation Inference on Habana Gaudi ☆32 · Updated 2 weeks ago
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ☆290 · Updated 2 months ago
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆456 · Updated this week
- Reference models for Intel® Gaudi® AI Accelerator ☆162 · Updated last month
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆62 · Updated this week
- Blazing fast training of 🤗 Transformers on Graphcore IPUs ☆84 · Updated last year
- ☆238 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆262 · Updated 5 months ago
- Applied AI experiments and examples for PyTorch ☆251 · Updated last week
- Fast low-bit matmul kernels in Triton ☆275 · Updated this week
- Easy and Efficient Quantization for Transformers ☆195 · Updated last month
- Google TPU optimizations for transformers models ☆104 · Updated 2 months ago
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) devices. Note… ☆62 · Updated 3 weeks ago
- This repository contains the experimental PyTorch native float8 training UX ☆222 · Updated 8 months ago
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆190 · Updated this week
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs (see the usage sketch after this list) ☆87 · Updated this week
- A tool to configure, launch and manage your machine learning experiments. ☆133 · Updated this week
- An innovative library for efficient LLM inference via low-bit quantization ☆351 · Updated 7 months ago
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. ☆783 · Updated 6 months ago
- ☆116 · Updated last year
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆234 · Updated this week
- ☆63 · Updated last week
- OpenAI-compatible API for the TensorRT-LLM Triton backend ☆202 · Updated 8 months ago
- This repository hosts code that supports the testing infrastructure for the PyTorch organization. For example, this repo hosts the logic … ☆90 · Updated this week
- Advanced Quantization Algorithm for LLMs/VLMs. ☆413 · Updated this week
- Pretrain, finetune and serve LLMs on Intel platforms with Ray ☆123 · Updated last week
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind… ☆155 · Updated 3 months ago
- NVIDIA Resiliency Extension is a Python package for framework developers and users to implement fault-tolerant features. It improves the … ☆131 · Updated last week
- ☆184 · Updated 6 months ago
- Efficient GPU support for LLM inference with x-bit quantization (e.g. FP6, FP5). ☆243 · Updated 5 months ago
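Several of the inference engines listed above are vLLM forks targeting specific accelerators. For reference, a minimal sketch of offline batch generation with upstream vLLM (requires a supported accelerator; the model name is an illustrative small checkpoint):

```python
# Minimal offline batch-generation sketch with upstream vLLM
# (pip install vllm; the model name is an illustrative small checkpoint).
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

prompts = [
    "Habana Gaudi accelerators are",
    "High-throughput LLM serving requires",
]
for out in llm.generate(prompts, params):
    print(out.outputs[0].text)
```

vLLM's throughput comes largely from PagedAttention, which pages the KV cache so many concurrent requests can be batched continuously; the accelerator-specific forks above port that design to other hardware.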