huggingface / tgi-gaudi
Large Language Model Text Generation Inference on Habana Gaudi
☆32 · Updated last month
Alternatives and similar repositories for tgi-gaudi:
Users interested in tgi-gaudi are comparing it to the libraries listed below:
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆67 · Updated this week
- Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU) ☆185 · Updated this week
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) device. Note… ☆62 · Updated last month
- Pretrain, finetune and serve LLMs on Intel platforms with Ray ☆125 · Updated 3 weeks ago
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ☆13 · Updated last month
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ☆295 · Updated 2 months ago
- Reference models for Intel(R) Gaudi(R) AI Accelerator ☆162 · Updated 2 weeks ago
- Benchmark suite for LLMs from Fireworks.ai ☆70 · Updated 2 months ago
- GenAI components at micro-service level; GenAI service composer to create mega-service ☆136 · Updated this week
- IBM development fork of https://github.com/huggingface/text-generation-inference ☆60 · Updated 4 months ago
- Easy and Efficient Quantization for Transformers ☆197 · Updated 2 months ago
- ☆68 · Updated 3 weeks ago
- ☆246 · Updated last week
- NVIDIA NCCL Tests for Distributed Training ☆88 · Updated this week
- oneCCL Bindings for Pytorch* ☆94 · Updated last week
- Google TPU optimizations for transformers models ☆108 · Updated 3 months ago
- ☆49 · Updated 5 months ago
- A tool to configure, launch and manage your machine learning experiments. ☆139 · Updated this week
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) ☆66 · Updated this week
- ☆38 · Updated this week
- The Triton backend for the PyTorch TorchScript models. ☆146 · Updated this week
- PyTorch/XLA integration with JetStream (https://github.com/google/JetStream) for LLM inference ☆59 · Updated 3 weeks ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆262 · Updated 6 months ago
- ☆196 · Updated 2 weeks ago
- ☆29 · Updated this week
- ☆53 · Updated 7 months ago
- OpenAI compatible API for TensorRT LLM triton backend ☆205 · Updated 8 months ago
- Fast low-bit matmul kernels in Triton ☆291 · Updated this week
- Evaluation, benchmark, and scorecard, targeting performance on throughput and latency, accuracy on popular evaluation harnesses, safety… ☆29 · Updated this week
- ☆61 · Updated this week