premAI-io / benchmarks
🕹️ Performance Comparison of MLOps Engines, Frameworks, and Languages on Mainstream AI Models.
☆137 · Updated last year
Alternatives and similar repositories for benchmarks
Users interested in benchmarks are comparing it to the libraries listed below:
- Experiments with inference on Llama ☆104 · Updated last year
- ☆199 · Updated last year
- The Batched API provides a flexible and efficient way to process multiple requests in a batch, with a primary focus on dynamic batching o… ☆146 · Updated last month (see the dynamic-batching sketch after this list)
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆232 · Updated 10 months ago
- Manage scalable open LLM inference endpoints in Slurm clusters ☆271 · Updated last year
- ☆210 · Updated 2 months ago
- ☆134 · Updated 3 weeks ago
- A Python wrapper around HuggingFace's TGI (text-generation-inference) and TEI (text-embeddings-inference) servers. ☆33 · Updated 4 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆266 · Updated 11 months ago
- Efficient vector database for hundreds of millions of embeddings. ☆207 · Updated last year
- Lightweight demos for finetuning LLMs. Powered by 🤗 transformers and open-source datasets. ☆78 · Updated 10 months ago
- Machine Learning Serving focused on GenAI with simplicity as the top priority. ☆59 · Updated 2 months ago
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language M… ☆240 · Updated 10 months ago
- Let's build better datasets, together! ☆263 · Updated 8 months ago
- Fine-tune LLMs in a few lines of code (Text2Text, Text2Speech, Speech2Text) ☆242 · Updated last year
- A stable, fast, and easy-to-use inference library with a focus on a sync-to-async API ☆45 · Updated 11 months ago
- Multi-threaded matrix multiplication and cosine similarity calculations for dense and sparse matrices. Appropriate for calculating the K … ☆83 · Updated 8 months ago
- Simple UI for debugging correlations of text embeddings ☆290 · Updated 3 months ago
- A Lightweight Library for AI Observability ☆251 · Updated 6 months ago
- TitanML Takeoff Server is an optimization, compression, and deployment platform that makes state-of-the-art machine learning models access… ☆114 · Updated last year
- Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first app… ☆169 · Updated last year
- Baguetter is a flexible, efficient, and hackable search engine library implemented in Python. It's designed for quickly benchmarking, imp… ☆189 · Updated last year
- An innovative library for efficient LLM inference via low-bit quantization ☆348 · Updated last year
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… ☆145 · Updated last year
- Fine-tune an LLM to perform batch inference and online serving. ☆112 · Updated 3 months ago
- Set of scripts to finetune LLMs ☆37 · Updated last year
- Low-latency, high-accuracy custom query routers for humans and agents. Built by Prithivi Da ☆117 · Updated 5 months ago
- A set of scripts and notebooks on LLM finetuning and dataset creation ☆110 · Updated 11 months ago
- C++ inference wrappers for running blazing-fast embedding services on your favourite serverless platform, like AWS Lambda. By Prithivi Da, PRs welc… ☆23 · Updated last year
- Official repo for the paper PHUDGE: Phi-3 as Scalable Judge. Evaluate your LLMs with or without custom rubric, reference answer, absolute… ☆49 · Updated last year
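One entry above (the Batched API) centres on dynamic batching. As a point of reference, here is a minimal, generic sketch of the idea in plain Python: requests are queued and flushed either when the batch fills or when a short wait budget expires, so a batch-oriented backend is not starved by single calls. The `DynamicBatcher` class, its parameters, and the doubling `batch_fn` are illustrative assumptions, not the actual API of any repository listed here.

```python
# Generic dynamic-batching sketch (illustrative only, not any listed repo's API).
import threading
import queue
import time
from concurrent.futures import Future


class DynamicBatcher:
    def __init__(self, batch_fn, max_batch_size=8, max_wait_s=0.01):
        self._batch_fn = batch_fn          # processes a list of inputs in one call
        self._max_batch_size = max_batch_size
        self._max_wait_s = max_wait_s
        self._queue = queue.Queue()
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def submit(self, item):
        """Enqueue one request and return a Future that resolves to its result."""
        fut = Future()
        self._queue.put((item, fut))
        return fut

    def _run(self):
        while True:
            # Block for the first item, then gather more until the batch is
            # full or the wait budget is spent.
            item, fut = self._queue.get()
            batch, futures = [item], [fut]
            deadline = time.monotonic() + self._max_wait_s
            while len(batch) < self._max_batch_size:
                remaining = deadline - time.monotonic()
                if remaining <= 0:
                    break
                try:
                    item, fut = self._queue.get(timeout=remaining)
                except queue.Empty:
                    break
                batch.append(item)
                futures.append(fut)
            # One backend call for the whole batch; failures propagate to callers.
            try:
                results = self._batch_fn(batch)
            except Exception as exc:
                for fut in futures:
                    fut.set_exception(exc)
                continue
            for fut, result in zip(futures, results):
                fut.set_result(result)


if __name__ == "__main__":
    # Hypothetical backend that doubles each input, standing in for a model call.
    batcher = DynamicBatcher(lambda xs: [x * 2 for x in xs], max_batch_size=4)
    futures = [batcher.submit(i) for i in range(10)]
    print([f.result() for f in futures])  # [0, 2, 4, ..., 18]
```

The design trade-off is the usual one for serving: a larger `max_batch_size` or longer `max_wait_s` improves backend throughput at the cost of per-request latency.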