huggingface / optimum-habana
Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU)
☆199 · Updated this week
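The tagline's "easy" refers to optimum-habana's drop-in replacements for the 🤗 Transformers Trainer API. Below is a minimal fine-tuning sketch of that documented pattern; the checkpoint, dataset, and Gaudi config name are illustrative placeholders, and running it requires an HPU machine with the Habana software stack installed.

```python
# Minimal fine-tuning sketch using optimum-habana's drop-in Trainer classes.
# Assumes an HPU machine with the Habana (SynapseAI) stack installed; the
# checkpoint, dataset, and gaudi_config_name are illustrative placeholders.
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from optimum.habana import GaudiTrainer, GaudiTrainingArguments

model_name = "bert-base-uncased"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Small text-pair dataset, tokenized to fixed length for lazy-mode graphs.
train_ds = load_dataset("glue", "mrpc", split="train").map(
    lambda ex: tokenizer(ex["sentence1"], ex["sentence2"],
                         truncation=True, padding="max_length"),
    batched=True,
)

args = GaudiTrainingArguments(
    output_dir="./out",
    use_habana=True,      # run on HPU rather than CPU/GPU
    use_lazy_mode=True,   # Gaudi's lazy-execution graph mode
    gaudi_config_name="Habana/bert-base-uncased",  # published Gaudi config
    per_device_train_batch_size=8,
    num_train_epochs=1,
)

trainer = GaudiTrainer(model=model, args=args, train_dataset=train_ds)
trainer.train()
```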
Alternatives and similar repositories for optimum-habana
Users interested in optimum-habana are comparing it to the libraries listed below.
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… · ☆318 · Updated last month
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools (see the OpenVINO sketch after this list) · ☆502 · Updated this week
- Large Language Model Text Generation Inference on Habana Gaudi · ☆34 · Updated 7 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs · ☆266 · Updated last year
- Reference models for Intel(R) Gaudi(R) AI Accelerator · ☆165 · Updated last month
- A tool to configure, launch and manage your machine learning experiments. · ☆198 · Updated this week
- Easy and Efficient Quantization for Transformers · ☆202 · Updated 4 months ago
- An innovative library for efficient LLM inference via low-bit quantization · ☆349 · Updated last year
- Provides end-to-end model development pipelines for LLMs and Multimodal models that can be launched on-prem or cloud-native. · ☆507 · Updated 6 months ago
- Blazing fast training of 🤗 Transformers on Graphcore IPUs · ☆85 · Updated last year
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. · ☆215 · Updated last week
- The Triton backend for the ONNX Runtime. · ☆162 · Updated 2 weeks ago
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… · ☆270 · Updated 3 months ago
- Dynamic batching library for Deep Learning inference. Tutorials for LLM, GPT scenarios. · ☆102 · Updated last year
- ☆121 · Updated last year
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) devices. Note… · ☆63 · Updated 3 months ago
- ☆252 · Updated last year
- ☆302 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs · ☆83 · Updated this week
- GPTQ inference Triton kernel · ☆311 · Updated 2 years ago
- ☆205 · Updated 5 months ago
- Pipeline Parallelism for PyTorch · ☆780 · Updated last year
- Google TPU optimizations for transformers models · ☆120 · Updated 9 months ago
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind… · ☆161 · Updated last month
- Module, Model, and Tensor Serialization/Deserialization · ☆270 · Updated 2 months ago
- The Triton backend for the PyTorch TorchScript models. · ☆160 · Updated last week
- Code for paper: "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" · ☆385 · Updated last year
- This repository hosts code that supports the testing infrastructure for the PyTorch organization. For example, this repo hosts the logic… · ☆102 · Updated this week
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs (see the vLLM sketch after this list) · ☆90 · Updated this week
- A safetensors extension to efficiently store sparse quantized tensors on disk · ☆180 · Updated this week
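For the Optimum Intel entry above, the documented pattern is a one-line model-class swap that exports a 🤗 Transformers checkpoint to OpenVINO and runs it through the usual pipeline API. A minimal sketch, assuming `optimum[openvino]` is installed; the checkpoint is a placeholder.

```python
# Minimal OpenVINO inference sketch with Optimum Intel.
# Assumes `pip install optimum[openvino]`; the checkpoint is a placeholder.
from transformers import AutoTokenizer, pipeline
from optimum.intel import OVModelForCausalLM

model_id = "gpt2"  # placeholder checkpoint
model = OVModelForCausalLM.from_pretrained(model_id, export=True)  # export to OpenVINO IR
tokenizer = AutoTokenizer.from_pretrained(model_id)

generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(generator("Habana Gaudi is", max_new_tokens=32)[0]["generated_text"])
```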
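Several entries above are vLLM or hardware-specific forks of it. They share upstream vLLM's offline batch-generation API, sketched below; the model name is an illustrative placeholder, and individual forks may differ in supported models and versions.

```python
# Minimal offline batch-generation sketch using vLLM's upstream API.
# The model name is a placeholder; hardware forks generally keep the
# same LLM/SamplingParams entry points.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # placeholder checkpoint
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

prompts = ["The main advantage of continuous batching is"]
for output in llm.generate(prompts, params):
    print(output.outputs[0].text)
```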