premAI-io / benchmarks
🕹️ Performance Comparison of MLOps Engines, Frameworks, and Languages on Mainstream AI Models.
☆138 · Updated last year
Alternatives and similar repositories for benchmarks
Users interested in benchmarks are comparing it to the libraries listed below.
- experiments with inference on llama ☆104 · Updated last year
- ☆199 · Updated last year
- The Batched API provides a flexible and efficient way to process multiple requests in a batch, with a primary focus on dynamic batching o… ☆149 · Updated 2 months ago
- ☆135 · Updated last month
- Machine Learning Serving focused on GenAI with simplicity as the top priority. ☆59 · Updated 2 months ago
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆232 · Updated 11 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆266 · Updated 11 months ago
- Manage scalable open LLM inference endpoints in Slurm clusters ☆274 · Updated last year
- ☆210 · Updated 3 months ago
- Let's build better datasets, together! ☆263 · Updated 9 months ago
- A Python wrapper around HuggingFace's TGI (text-generation-inference) and TEI (text-embedding-inference) servers. ☆33 · Updated 2 weeks ago
- Efficient vector database for hundreds of millions of embeddings. ☆208 · Updated last year
- Fine-tune LLMs in a few lines of code (Text2Text, Text2Speech, Speech2Text) ☆243 · Updated last year
- An innovative library for efficient LLM inference via low-bit quantization ☆348 · Updated last year
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language M… ☆240 · Updated 11 months ago
- ☆64 · Updated 6 months ago
- Official repo for the paper PHUDGE: Phi-3 as Scalable Judge. Evaluate your LLMs with or without custom rubric, reference answer, absolute… ☆49 · Updated last year
- Toolkit for attaching, training, saving and loading of new heads for transformer models ☆289 · Updated 6 months ago
- Fine-tune an LLM to perform batch inference and online serving. ☆112 · Updated 4 months ago
- Low-Rank adapter extraction for fine-tuned transformers models ☆178 · Updated last year
- Lightweight demos for finetuning LLMs. Powered by 🤗 transformers and open-source datasets. ☆78 · Updated 11 months ago
- Self-host LLMs with vLLM and BentoML ☆150 · Updated this week
- Benchmark various LLM Structured Output frameworks: Instructor, Mirascope, Langchain, LlamaIndex, Fructose, Marvin, Outlines, etc on task… ☆179 · Updated last year
- A stable, fast and easy-to-use inference library with a focus on a sync-to-async API ☆45 · Updated last year
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… ☆146 · Updated last year
- Simple UI for debugging correlations of text embeddings ☆291 · Updated 4 months ago
- Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first app… ☆170 · Updated last year
- A set of scripts and notebooks on LLM finetuning and dataset creation ☆110 · Updated last year
- Formatron empowers everyone to control the format of language models' output with minimal overhead. ☆225 · Updated 3 months ago
- TitanML Takeoff Server is an optimization, compression and deployment platform that makes state of the art machine learning models access… ☆114 · Updated last year