IBM / text-generation-inference
IBM development fork of https://github.com/huggingface/text-generation-inference
☆61 · Updated 2 weeks ago
Alternatives and similar repositories for text-generation-inference
Users interested in text-generation-inference are comparing it to the libraries listed below:
- Benchmark suite for LLMs from Fireworks.ai ☆83 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆266 · Updated 11 months ago
- ☆199 · Updated last year
- ☆64 · Updated 6 months ago
- ☆15 · Updated 3 weeks ago
- Google TPU optimizations for transformers models ☆120 · Updated 8 months ago
- 🕹️ Performance comparison of MLOps engines, frameworks, and languages on mainstream AI models ☆138 · Updated last year
- LM engine is a library for pretraining/finetuning LLMs ☆67 · Updated this week
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆90 · Updated this week
- vLLM adapter for a TGIS-compatible gRPC server ☆41 · Updated this week
- Experiments with inference on llama ☆104 · Updated last year
- ☆61 · Updated 3 months ago
- Accelerating your LLM training to full speed! Made with ❤️ by ServiceNow Research ☆226 · Updated last week
- 🚀 Collection of tuning recipes with HuggingFace SFTTrainer and PyTorch FSDP ☆49 · Updated this week
- A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM ☆50 · Updated this week
- A collection of all available inference solutions for LLMs ☆91 · Updated 7 months ago
- Python library for synthetic data generation ☆50 · Updated last week
- Pre-training code for CrystalCoder 7B LLM ☆55 · Updated last year
- ☆135 · Updated last month
- Data preparation code for Amber 7B LLM ☆94 · Updated last year
- 🦄 Unitxt is a Python library for enterprise-grade evaluation of AI performance, offering the world's largest catalog of tools and data … ☆209 · Updated last week
- ☆39 · Updated 3 years ago
- Easy and efficient quantization for Transformers ☆203 · Updated 3 months ago
- ☆254 · Updated 2 weeks ago
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated last year
- A collection of reproducible inference engine benchmarks ☆33 · Updated 5 months ago
- Using open source LLMs to build synthetic datasets for direct preference optimization ☆66 · Updated last year
- ☆54 · Updated 10 months ago
- QLoRA with enhanced multi-GPU support ☆37 · Updated 2 years ago
- Manage scalable open LLM inference endpoints in Slurm clusters ☆274 · Updated last year