huggingface / optimum-habana
Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU)
☆186 · Updated this week
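Since the listing centers on optimum-habana itself, a minimal usage sketch may help orient readers. This follows the drop-in-trainer pattern from the optimum-habana README; the model, dataset, and output directory below are illustrative choices, and actually running it requires Gaudi (HPU) hardware with the Habana PyTorch stack installed.

```python
# A minimal fine-tuning sketch with optimum-habana's drop-in trainer.
# Model, dataset, and output_dir are illustrative, not prescribed;
# running this requires Gaudi (HPU) hardware with Habana's PyTorch stack.
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from optimum.habana import GaudiTrainer, GaudiTrainingArguments

model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Small illustrative dataset: 100 MRPC sentence pairs.
raw = load_dataset("glue", "mrpc", split="train[:100]")
train_dataset = raw.map(
    lambda ex: tokenizer(ex["sentence1"], ex["sentence2"], truncation=True, max_length=128),
    batched=True,
)

# GaudiTrainingArguments extends transformers.TrainingArguments with HPU options.
args = GaudiTrainingArguments(
    output_dir="./hpu-out",
    use_habana=True,                               # run on HPUs rather than GPU/CPU
    use_lazy_mode=True,                            # lazy-mode graph execution on HPU
    gaudi_config_name="Habana/bert-base-uncased",  # mixed-precision / fused-op config
    num_train_epochs=1,
)

# GaudiTrainer mirrors the transformers.Trainer interface.
trainer = GaudiTrainer(model=model, args=args, train_dataset=train_dataset, tokenizer=tokenizer)
trainer.train()
```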
Alternatives and similar repositories for optimum-habana
Users interested in optimum-habana are comparing it to the libraries listed below.
- Large Language Model Text Generation Inference on Habana Gaudi ☆33 · Updated last month
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ☆300 · Updated this week
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆464 · Updated this week
- Reference models for Intel(R) Gaudi(R) AI Accelerator ☆162 · Updated 2 weeks ago
- Easy and Efficient Quantization for Transformers ☆197 · Updated 3 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs (see the vLLM sketch after this list) ☆70 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆263 · Updated 7 months ago
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆244 · Updated this week
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆194 · Updated last week
- Blazing fast training of 🤗 Transformers on Graphcore IPUs ☆85 · Updated last year
- An innovative library for efficient LLM inference via low-bit quantization ☆350 · Updated 8 months ago
- ☆253 · Updated last week
- GPTQ inference Triton kernel ☆299 · Updated last year
- Provides end-to-end model development pipelines for LLMs and multimodal models that can be launched on-prem or cloud-native. ☆501 · Updated 3 weeks ago
- Fast low-bit matmul kernels in Triton ☆299 · Updated this week
- Applied AI experiments and examples for PyTorch ☆265 · Updated 2 weeks ago
- NVIDIA Resiliency Extension is a Python package for framework developers and users to implement fault-tolerant features. It improves the … ☆156 · Updated this week
- oneCCL Bindings for Pytorch* ☆97 · Updated 2 weeks ago
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) device. Note… ☆61 · Updated 2 months ago
- Pipeline Parallelism for PyTorch ☆764 · Updated 8 months ago
- Google TPU optimizations for transformers models ☆109 · Updated 3 months ago
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. ☆818 · Updated 8 months ago
- ☆68 · Updated last month
- Comparison of Language Model Inference Engines ☆217 · Updated 4 months ago
- ☆117 · Updated last year
- Benchmark suite for LLMs from Fireworks.ai ☆72 · Updated this week
- ☆53 · Updated 7 months ago
- Efficient GPU support for LLM inference with x-bit quantization (e.g. FP6, FP5) ☆248 · Updated 6 months ago
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆109 · Updated this week
- Microsoft Automatic Mixed Precision Library ☆595 · Updated 7 months ago
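The two "high-throughput and memory-efficient inference and serving engine" entries above appear to be hardware-specific forks of vLLM, whose tagline they share. As a rough orientation, here is a minimal sketch of the upstream vLLM offline-generation API; the model name, prompt, and sampling values are illustrative placeholders.

```python
# A minimal sketch of vLLM's offline generation API; the forks listed
# above keep this interface while targeting other accelerators.
# Model name, prompt, and sampling values here are illustrative.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # any HF-compatible causal LM
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

for out in llm.generate(["Habana Gaudi is"], params):
    print(out.outputs[0].text)
```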