huggingface / tgi-gaudi
Large Language Model Text Generation Inference on Habana Gaudi
⭐33 · Updated 2 months ago
Alternatives and similar repositories for tgi-gaudi
Users interested in tgi-gaudi are comparing it to the libraries listed below.
- A high-throughput and memory-efficient inference and serving engine for LLMs · ⭐75 · Updated this week
- Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU) · ⭐186 · Updated this week
- Benchmark suite for LLMs from Fireworks.ai · ⭐75 · Updated 2 weeks ago
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) devices. Note… · ⭐61 · Updated 2 months ago
- Pretrain, finetune and serve LLMs on Intel platforms with Ray · ⭐127 · Updated last month
- IBM development fork of https://github.com/huggingface/text-generation-inference · ⭐60 · Updated 3 weeks ago
- Reference models for Intel(R) Gaudi(R) AI Accelerator · ⭐161 · Updated 2 weeks ago
- ⭐99 · Updated this week
- ⭐53 · Updated 8 months ago
- ⭐71 · Updated 2 months ago
- vLLM adapter for a TGIS-compatible gRPC server · ⭐30 · Updated this week
- vLLM performance dashboard · ⭐30 · Updated last year
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. · ⭐13 · Updated 2 weeks ago
- ⭐36 · Updated this week
- Ongoing research training transformer models at scale · ⭐22 · Updated this week
- ⭐215 · Updated this week
- ⭐46 · Updated this week
- [ICLR2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding · ⭐116 · Updated 6 months ago
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. · ⭐196 · Updated this week
- LLM Serving Performance Evaluation Harness · ⭐78 · Updated 3 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs · ⭐263 · Updated 7 months ago
- Boosting 4-bit inference kernels with 2:4 sparsity · ⭐75 · Updated 9 months ago
- PyTorch/XLA integration with JetStream (https://github.com/google/JetStream) for LLM inference · ⭐60 · Updated 2 months ago
- ⭐18 · Updated this week
- Google TPU optimizations for transformers models · ⭐112 · Updated 4 months ago
- Easy and Efficient Quantization for Transformers · ⭐198 · Updated 3 months ago
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… · ⭐301 · Updated last week
- Inference server benchmarking tool · ⭐67 · Updated last month
- A low-latency & high-throughput serving engine for LLMs · ⭐370 · Updated this week
- oneCCL Bindings for PyTorch* · ⭐97 · Updated last month