opendatahub-io / vllm-tgis-adapter
vLLM adapter for a TGIS-compatible gRPC server.
⭐39 · Updated this week
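Since the adapter exposes a TGIS-compatible gRPC interface on top of vLLM, a client talks to it the same way it would talk to TGIS. Below is a minimal client sketch, assuming Python stubs generated from TGIS's generation.proto; the stub module names (generation_pb2, generation_pb2_grpc), the default port 8033, and the exact message fields are assumptions to verify against the actual proto file:

```python
# Minimal client sketch for a TGIS-compatible gRPC endpoint such as the one
# vllm-tgis-adapter exposes on top of vLLM. The stub module names, the port
# (8033 is the TGIS default), and the message/field names are assumptions
# based on TGIS's generation.proto; check them against the real file.
import grpc

import generation_pb2 as pb            # generated with grpcio-tools from generation.proto
import generation_pb2_grpc as pb_grpc

channel = grpc.insecure_channel("localhost:8033")
stub = pb_grpc.GenerationServiceStub(channel)

# A single-prompt batched request with a cap on generated tokens.
request = pb.BatchedGenerationRequest(
    requests=[pb.GenerationRequest(text="What is a gRPC adapter?")],
    params=pb.Parameters(stopping=pb.StoppingCriteria(max_new_tokens=32)),
)

response = stub.Generate(request)
print(response.responses[0].text)
```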
Alternatives and similar repositories for vllm-tgis-adapter
Users interested in vllm-tgis-adapter are comparing it to the libraries listed below.
- Benchmark suite for LLMs from Fireworks.ai (⭐83, updated 2 weeks ago)
- 👷 Build compute kernels (⭐143, updated this week)
- Train, tune, and infer Bamba model (⭐132, updated 3 months ago)
- Lightweight toolkit package to train and fine-tune 1.58-bit language models (⭐88, updated 3 months ago)
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding (⭐126, updated 9 months ago)
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS … (⭐60, updated 11 months ago)
- Code for KaLM-Embedding models (⭐91, updated 2 months ago)
- Training-free post-training efficient sub-quadratic-complexity attention, implemented with OpenAI Triton (⭐147, updated this week)
- Hugging Face Inference Toolkit used to serve transformers, sentence-transformers, and diffusers models (⭐87, updated 2 weeks ago)
- DPO, but faster 🚀 (⭐44, updated 9 months ago)
- Google TPU optimizations for transformers models (⭐120, updated 7 months ago)
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) (⭐210, updated this week)
- IBM development fork of https://github.com/huggingface/text-generation-inference (⭐61, updated 4 months ago)
- KV cache compression for high-throughput LLM inference (⭐138, updated 7 months ago)
- This is a new metric that can be used to evaluate the faithfulness of text generated by LLMs. The work behind this repository can be found he… (⭐31, updated 2 years ago)
- Simple and efficient DeepSeek V3 SFT using pipeline parallelism and expert parallelism, with both FP8 and BF16 training (⭐81, updated last month)
- Data preparation code for Amber 7B LLM (⭐91, updated last year)
- Repo hosting code and materials related to speeding up LLM inference using token merging (⭐36, updated last month)
- (⭐97, updated 11 months ago)
- Accelerating your LLM training to full speed! Made with ❤️ by ServiceNow Research (⭐225, updated this week)
- A massively multilingual modern encoder language model (⭐80, updated last week)
- (⭐58, updated 3 months ago)
- Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in pyTO… (⭐56, updated last week)
- Repository for CPU Kernel Generation for LLM Inference (⭐26, updated 2 years ago)
- Data preparation code for CrystalCoder 7B LLM (⭐45, updated last year)
- Load compute kernels from the Hub (⭐283, updated this week)
- (⭐39, updated last year)
- A high-throughput and memory-efficient inference and serving engine for LLMs (⭐266, updated 11 months ago)
- A collection of reproducible inference engine benchmarks (⭐33, updated 4 months ago)
- Fused Qwen3 MoE layer for faster training, compatible with HF Transformers, LoRA, 4-bit quant, and Unsloth (⭐172, updated this week)