☆198 · Feb 9, 2024 · Updated 2 years ago
Alternatives and similar repositories for text-generation-inference
Users that are interested in text-generation-inference are comparing it to the libraries listed below.
- IBM development fork of https://github.com/huggingface/text-generation-inference ☆63 · Sep 18, 2025 · Updated 7 months ago
- Tool to apply Legal Matter Specification Standard (LMSS) to documents ☆12 · Aug 15, 2024 · Updated last year
- Flacuna was developed by fine-tuning Vicuna on Flan-mini, a comprehensive instruction collection encompassing various tasks. Vicuna is al… ☆112 · Sep 10, 2023 · Updated 2 years ago
- Rust bindings for CTranslate2 ☆14 · Jun 21, 2023 · Updated 2 years ago
- Interface for interacting with Gradient AI in Python ☆15 · Jun 28, 2024 · Updated last year
- Generate images from an initial frame and text ☆37 · Jul 28, 2023 · Updated 2 years ago
- Batched LoRAs ☆351 · Sep 6, 2023 · Updated 2 years ago
- Large Language Model Text Generation Inference ☆10,848 · Mar 21, 2026 · Updated last month
- ☆29 · Sep 10, 2025 · Updated 7 months ago
- ☆25 · Aug 1, 2023 · Updated 2 years ago
- ☆50 · Mar 14, 2024 · Updated 2 years ago
- A more memory-efficient rewrite of the HF Transformers implementation of Llama for use with quantized weights. ☆2,915 · Sep 30, 2023 · Updated 2 years ago
- Easy and Efficient Quantization for Transformers ☆206 · Mar 25, 2026 · Updated last month
- Extract full next-token probabilities via language model APIs ☆247 · Feb 23, 2024 · Updated 2 years ago
- Convert all of libgen to high-quality markdown ☆255 · Dec 13, 2023 · Updated 2 years ago
- A pipeline for using API calls to agnostically convert unstructured data into structured training data ☆32 · Sep 22, 2024 · Updated last year
- Serving multiple LoRA-finetuned LLMs as one ☆1,155 · May 8, 2024 · Updated last year
- ☆94 · Oct 5, 2023 · Updated 2 years ago
- Chat Markup Language conversation library ☆55 · Jan 3, 2024 · Updated 2 years ago
- The RunPod worker template for serving our large language model endpoints. Powered by vLLM. ☆430 · Apr 23, 2026 · Updated last week
- A simple uv workspace ☆19 · Apr 5, 2025 · Updated last year
- A blazing-fast inference solution for text embedding models ☆4,755 · Apr 17, 2026 · Updated 2 weeks ago
- Source code for the "Saving 77% of the Parameters in Large Language Models" technical report ☆57 · Dec 2, 2025 · Updated 5 months ago
- ☆10 · Feb 11, 2025 · Updated last year
- Large-scale LLM inference engine ☆1,714 · Updated this week
- 👾📦 CodeBoxAPI is the simplest sandboxing infrastructure for your LLM apps and services ☆364 · Jan 30, 2025 · Updated last year
- Adaptive Inter-Class Similarity Distillation for Semantic Segmentation (MTAP 2025) ☆29 · Nov 14, 2025 · Updated 5 months ago
- Go ahead and axolotl questions ☆11,779 · Updated this week
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full fine-tunes ☆83 · Sep 10, 2023 · Updated 2 years ago
- ☆25 · Sep 19, 2023 · Updated 2 years ago
- A library capturing message patterns and protocols for speaking to Noteable's APIs ☆17 · Jan 2, 2024 · Updated 2 years ago
- Python bindings for Transformer models implemented in C/C++ using the GGML library ☆1,886 · Jan 28, 2024 · Updated 2 years ago
- Personal configuration ☆13 · Feb 27, 2026 · Updated 2 months ago
- Modified beam search with periodic restart ☆12 · Sep 12, 2024 · Updated last year
- 100% Private & Simple. OSS 🐍 Code Interpreter for LLMs 🦙 ☆34 · Aug 29, 2023 · Updated 2 years ago
- Run, manage, and scale AI workloads on any AI infrastructure. Use one system to access & manage all AI compute (Kubernetes, Slurm, 20+ cl… ☆9,923 · Updated this week
- Fast inference engine for Transformer models ☆4,457 · Feb 4, 2026 · Updated 2 months ago
- Code for fine-tuning Platypus family LLMs using LoRA ☆628 · Feb 4, 2024 · Updated 2 years ago
- A Python wrapper around HuggingFace's TGI (text-generation-inference) and TEI (text-embedding-inference) servers ☆32 · Sep 19, 2025 · Updated 7 months ago