huggingface / tgi-gaudi
Large Language Model Text Generation Inference on Habana Gaudi
☆34 Updated 3 months ago
Alternatives and similar repositories for tgi-gaudi
Users interested in tgi-gaudi are comparing it to the libraries listed below
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆77 Updated this week
- Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU) ☆190 Updated this week
- Pretrain, finetune and serve LLMs on Intel platforms with Ray ☆129 Updated last week
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) device. Note… ☆61 Updated 2 weeks ago
- Benchmark suite for LLMs from Fireworks.ai ☆76 Updated last week
- Reference models for Intel(R) Gaudi(R) AI Accelerator ☆166 Updated last week
- Easy and Efficient Quantization for Transformers ☆198 Updated 3 weeks ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆264 Updated 9 months ago
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ☆305 Updated last month
- ☆173 Updated this week
- IBM development fork of https://github.com/huggingface/text-generation-inference ☆61 Updated 2 months ago
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆87 Updated this week
- ☆55 Updated 7 months ago
- ☆271 Updated last month
- Google TPU optimizations for transformers models ☆114 Updated 5 months ago
- ☆73 Updated 3 months ago
- A tool to configure, launch and manage your machine learning experiments. ☆171 Updated this week
- ☆228 Updated this week
- ☆56 Updated 9 months ago
- PyTorch/XLA integration with JetStream (https://github.com/google/JetStream) for LLM inference ☆64 Updated 3 months ago
- ☆195 Updated 2 months ago
- Inference server benchmarking tool ☆79 Updated 2 months ago
- Fast and memory-efficient exact attention ☆80 Updated this week
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆135 Updated this week
- JetStream is a throughput and memory optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in future -- PRs wel… ☆354 Updated last month
- [ICLR2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆121 Updated 7 months ago
- vLLM adapter for a TGIS-compatible gRPC server. ☆33 Updated this week
- GenAI components at micro-service level; GenAI service composer to create mega-service ☆161 Updated this week
- A low-latency & high-throughput serving engine for LLMs ☆388 Updated last month
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆205 Updated last week