huggingface / optimum-habana
Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU)
★202 · Updated this week
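For context on what the library above provides, here is a minimal fine-tuning sketch: GaudiTrainer and GaudiTrainingArguments are drop-in counterparts of the 🤗 Transformers Trainer API. The checkpoint, dataset, and Gaudi config names are illustrative placeholders, and the snippet assumes a machine with an HPU and the SynapseAI stack installed.

```python
# Hedged sketch: fine-tuning with optimum-habana's Trainer drop-ins.
# Assumes an HPU machine with SynapseAI; model/dataset names are placeholders.
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from optimum.habana import GaudiConfig, GaudiTrainer, GaudiTrainingArguments

model_id = "bert-base-uncased"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)

ds = load_dataset("glue", "sst2").map(
    lambda b: tokenizer(b["sentence"], truncation=True, padding="max_length"),
    batched=True,
)

args = GaudiTrainingArguments(
    output_dir="out",
    use_habana=True,      # place the run on HPU
    use_lazy_mode=True,   # lazy-mode graph execution on Gaudi
    per_device_train_batch_size=8,
    num_train_epochs=1,
)

trainer = GaudiTrainer(
    model=model,
    # Mixed-precision / fused-op recipe published by Habana on the Hub
    gaudi_config=GaudiConfig.from_pretrained("Habana/bert-base-uncased"),
    args=args,
    train_dataset=ds["train"],
    eval_dataset=ds["validation"],
    tokenizer=tokenizer,
)
trainer.train()
```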
Alternatives and similar repositories for optimum-habana
Users interested in optimum-habana are comparing it to the libraries listed below.
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ★323 · Updated 2 months ago
- Large Language Model Text Generation Inference on Habana Gaudi ★34 · Updated 9 months ago
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools (usage sketch after this list) ★518 · Updated this week
- Reference models for Intel(R) Gaudi(R) AI Accelerator ★169 · Updated 2 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ★267 · Updated 2 weeks ago
- An innovative library for efficient LLM inference via low-bit quantization ★351 · Updated last year
- Provides end-to-end model development pipelines for LLMs and Multimodal models that can be launched on-prem or cloud-native. ★509 · Updated 8 months ago
- Google TPU optimizations for transformers models ★125 · Updated 11 months ago
- Easy and Efficient Quantization for Transformers ★203 · Updated 5 months ago
- A tool to configure, launch and manage your machine learning experiments. ★210 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ★85 · Updated this week
- GPTQ inference Triton kernel ★317 · Updated 2 years ago
- Blazing fast training of 🤗 Transformers on Graphcore IPUs ★86 · Updated last year
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs (usage sketch after this list) ★94 · Updated last week
- JetStream is a throughput and memory optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in future -- PRs wel… ★396 · Updated 6 months ago
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) device. Note… ★63 · Updated 5 months ago
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ★216 · Updated last week
- Dynamic batching library for Deep Learning inference. Tutorials for LLM, GPT scenarios. ★106 · Updated last year
- The Triton backend for PyTorch TorchScript models. ★167 · Updated this week
- Benchmark suite for LLMs from Fireworks.ai ★84 · Updated 3 weeks ago
- ★321 · Updated this week
- ★219 · Updated 10 months ago
- Module, Model, and Tensor Serialization/Deserialization ★279 · Updated 4 months ago
- Code for paper: "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" ★391 · Updated last year
- ★122 · Updated last year
- ★413 · Updated 2 years ago
- A safetensors extension to efficiently store sparse quantized tensors on disk ★220 · Updated this week
- The package used to build the documentation of our Hugging Face repos ★131 · Updated this week
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". ★279 · Updated 2 years ago
- ★252 · Updated last year
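A usage sketch for the 🤗 Optimum Intel entry above, assuming its OpenVINO backend is installed; the checkpoint is a placeholder, and `export=True` converts it to OpenVINO format on load.

```python
# Hedged sketch: OpenVINO inference through optimum-intel's drop-in model
# classes (checkpoint is a placeholder; export=True converts it on load).
from optimum.intel import OVModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

model_id = "distilbert-base-uncased-finetuned-sst-2-english"
model = OVModelForSequenceClassification.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

clf = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(clf("The same Transformers code runs on HPU, OpenVINO, or CUDA backends."))
```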
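And a sketch for the vLLM-based entries above, using the library's offline batched-generation API; the model id and sampling values are illustrative.

```python
# Hedged sketch: offline batched generation with vLLM's core API.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # any supported causal LM checkpoint
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

outputs = llm.generate(
    ["Habana Gaudi accelerators are", "vLLM reaches high throughput by"],
    params,
)
for out in outputs:
    print(out.outputs[0].text)
```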