premAI-io / benchmarks
🕹️ Performance Comparison of MLOps Engines, Frameworks, and Languages on Mainstream AI Models.
☆138 · Updated last year
Alternatives and similar repositories for benchmarks
Users interested in benchmarks are comparing it to the libraries listed below:
- experiments with inference on llama ☆103 · Updated last year
- ☆198 · Updated last year
- The Batched API provides a flexible and efficient way to process multiple requests in a batch, with a primary focus on dynamic batching o… ☆155 · Updated 6 months ago
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆232 · Updated last year
- ☆210 · Updated 7 months ago
- Machine Learning Serving focused on GenAI with simplicity as the top priority. ☆59 · Updated last month
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆267 · Updated 2 months ago
- Manage scalable open LLM inference endpoints in Slurm clusters ☆280 · Updated last year
- ☆141 · Updated 5 months ago
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language M… ☆249 · Updated last year
- A Python wrapper around HuggingFace's TGI (text-generation-inference) and TEI (text-embedding-inference) servers. ☆32 · Updated 4 months ago
- Let's build better datasets, together! ☆269 · Updated last year
- OpenAI compatible API for TensorRT LLM triton backend ☆220 · Updated last year
- Lightweight demos for finetuning LLMs. Powered by 🤗 transformers and open-source datasets. ☆77 · Updated last year
- TitanML Takeoff Server is an optimization, compression and deployment platform that makes state of the art machine learning models access… ☆114 · Updated 2 years ago
- Low-Rank adapter extraction for fine-tuned transformers models ☆180 · Updated last year
- Toolkit for attaching, training, saving and loading of new heads for transformer models ☆294 · Updated 11 months ago
- FineTune LLMs in few lines of code (Text2Text, Text2Speech, Speech2Text) ☆246 · Updated 2 years ago
- Efficient vector database for hundred millions of embeddings. ☆211 · Updated last year
- A stable, fast and easy-to-use inference library with a focus on a sync-to-async API ☆47 · Updated last year
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… ☆146 · Updated 2 years ago
- This is our own implementation of 'Layer Selective Rank Reduction' ☆240 · Updated last year
- A Lightweight Library for AI Observability ☆255 · Updated 11 months ago
- C++ inference wrappers for running blazing fast embedding services on your favourite serverless like AWS Lambda. By Prithivi Da, PRs welc… ☆23 · Updated last year
- Comparison of Language Model Inference Engines ☆239 · Updated last year
- Fine-tune an LLM to perform batch inference and online serving. ☆120 · Updated 8 months ago
- Official repo for the paper PHUDGE: Phi-3 as Scalable Judge. Evaluate your LLMs with or without custom rubric, reference answer, absolute… ☆51 · Updated last year
- Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first app… ☆169 · Updated 2 years ago
- ☆67 · Updated 10 months ago
- [ACL'25] Official Code for LlamaDuo: LLMOps Pipeline for Seamless Migration from Service LLMs to Small-Scale Local LLMs ☆314 · Updated 6 months ago