Preemo-Inc / text-generation-inference
☆197 · Updated last year
Alternatives and similar repositories for text-generation-inference
Users interested in text-generation-inference are comparing it to the libraries listed below.
- experiments with inference on llama (☆103, updated last year)
- Performance Comparison of MLOps Engines, Frameworks, and Languages on Mainstream AI Models (☆139, updated last year)
- The Batched API provides a flexible and efficient way to process multiple requests in a batch, with a primary focus on dynamic batching o… (☆151, updated 4 months ago)
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full fine-tunes (☆82, updated 2 years ago)
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free (☆231, updated last year)
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… (☆146, updated 2 years ago)
- This is our own implementation of 'Layer Selective Rank Reduction' (☆239, updated last year)
- Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first app… (☆169, updated last year)
- Some simple scripts that I use day-to-day when working with LLMs and Huggingface Hub (☆160, updated 2 years ago)
- Domain Adapted Language Modeling Toolkit - E2E RAG (☆330, updated last year)
- Datasets and models for instruction-tuning (☆237, updated 2 years ago)
- Unofficial Python bindings for the Rust llm library (☆76, updated 2 years ago)
- Low-Rank adapter extraction for fine-tuned transformers models (☆178, updated last year)
- (☆94, updated 2 years ago)
- batched loras (☆347, updated 2 years ago)
- Fast & more realistic evaluation of chat language models. Includes leaderboard (☆189, updated last year)
- (☆138, updated 2 months ago)
- Tune MPTs (☆84, updated 2 years ago)
- TitanML Takeoff Server is an optimization, compression and deployment platform that makes state of the art machine learning models access… (☆114, updated last year)
- (☆159, updated 11 months ago)
- Command-line script for running inference with models such as MPT-7B-Chat (☆99, updated 2 years ago)
- Drop-in replacement for OpenAI, but with open models (☆153, updated 2 years ago)
- (☆210, updated 4 months ago)
- Efficient vector database for hundreds of millions of embeddings (☆208, updated last year)
- A framework for evaluating function calls made by LLMs (☆40, updated last year)
- Small finetuned LLMs for a diverse set of useful tasks (☆126, updated 2 years ago)
- Manage scalable open LLM inference endpoints in Slurm clusters (☆274, updated last year)
- Data cleaning and curation for unstructured text (☆328, updated last year)
- A Lightweight Library for AI Observability (☆251, updated 8 months ago)
- A bagel, with everything (☆324, updated last year)