IBM / text-generation-inference
IBM development fork of https://github.com/huggingface/text-generation-inference
☆61 · Updated last month
Alternatives and similar repositories for text-generation-inference
Users who are interested in text-generation-inference are comparing it to the libraries listed below.
- Benchmark suite for LLMs from Fireworks.ai ☆82 · Updated last week
- vLLM adapter for a TGIS-compatible gRPC server. ☆41 · Updated this week
- 🚀 Collection of tuning recipes with HuggingFace SFTTrainer and PyTorch FSDP. ☆52 · Updated last week
- LM engine is a library for pretraining/finetuning LLMs ☆72 · Updated this week
- experiments with inference on llama ☆103 · Updated last year
- ☆15 · Updated last month
- ☆197 · Updated last year
- ☆63 · Updated 4 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆266 · Updated last year
- Google TPU optimizations for transformers models ☆120 · Updated 9 months ago
- 🕹️ Performance Comparison of MLOps Engines, Frameworks, and Languages on Mainstream AI Models. ☆139 · Updated last year
- ☆64 · Updated 6 months ago
- A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM ☆60 · Updated this week
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs (see the usage sketch after this list) ☆90 · Updated this week
- 🦄 Unitxt is a Python library for enterprise-grade evaluation of AI performance, offering the world's largest catalog of tools and data … ☆211 · Updated last week
- Pre-training code for CrystalCoder 7B LLM ☆55 · Updated last year
- Train, tune, and infer Bamba model ☆134 · Updated 4 months ago
- A Python wrapper around HuggingFace's TGI (text-generation-inference) and TEI (text-embedding-inference) servers. ☆33 · Updated last month
- Accelerating your LLM training to full speed! Made with ❤️ by ServiceNow Research ☆252 · Updated last week
- A collection of reproducible inference engine benchmarks ☆34 · Updated 6 months ago
- Easy and Efficient Quantization for Transformers ☆202 · Updated 3 months ago
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆283 · Updated this week
- Python library for Synthetic Data Generation ☆51 · Updated last week
- Data preparation code for Amber 7B LLM ☆92 · Updated last year
- ☆218 · Updated 9 months ago
- ☆51 · Updated last year
- Using open source LLMs to build synthetic datasets for direct preference optimization ☆68 · Updated last year
- ☆257 · Updated last week
- A collection of all available inference solutions for the LLMs ☆91 · Updated 7 months ago
- InstructLab Training Library - Efficient Fine-Tuning with Message-Format Data ☆43 · Updated last week
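
Several of the entries above are vLLM forks, adapters, or plugins. As a point of reference, here is a minimal sketch of offline batch generation with upstream vLLM; the model name and sampling settings are illustrative assumptions, not anything prescribed by the repositories listed here.

```python
# Minimal sketch: offline batch generation with upstream vLLM.
# Assumes `pip install vllm` and a GPU; the model name below is an
# illustrative placeholder, not one mandated by any repository above.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")            # load a small model for demonstration
params = SamplingParams(temperature=0.8, max_tokens=64)

outputs = llm.generate(["Summarize what an LLM inference server does."], params)
for out in outputs:
    print(out.outputs[0].text)
```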