premAI-io / benchmarks
🕹️ Performance Comparison of MLOps Engines, Frameworks, and Languages on Mainstream AI Models.
☆139 · Updated last year
Alternatives and similar repositories for benchmarks
Users interested in benchmarks are comparing it to the libraries listed below.
- experiments with inference on llama ☆103 · Updated last year
- ☆197 · Updated last year
- ☆138 · Updated 2 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆266 · Updated last year
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆231 · Updated last year
- The Batched API provides a flexible and efficient way to process multiple requests in a batch, with a primary focus on dynamic batching o… ☆151 · Updated 3 months ago
- ☆210 · Updated 4 months ago
- A Python wrapper around HuggingFace's TGI (text-generation-inference) and TEI (text-embedding-inference) servers. ☆33 · Updated last month
- Manage scalable open LLM inference endpoints in Slurm clusters ☆274 · Updated last year
- Efficient vector database for hundreds of millions of embeddings. ☆208 · Updated last year
- Machine Learning Serving focused on GenAI with simplicity as the top priority. ☆58 · Updated last month
- Official repo for the paper PHUDGE: Phi-3 as Scalable Judge. Evaluate your LLMs with or without custom rubric, reference answer, absolute… ☆50 · Updated last year
- FineTune LLMs in a few lines of code (Text2Text, Text2Speech, Speech2Text) ☆244 · Updated last year
- A stable, fast and easy-to-use inference library with a focus on a sync-to-async API ☆45 · Updated last year
- Simple UI for debugging correlations of text embeddings ☆298 · Updated 5 months ago
- Toolkit for attaching, training, saving and loading of new heads for transformer models ☆290 · Updated 8 months ago
- Lightweight demos for finetuning LLMs. Powered by 🤗 transformers and open-source datasets. ☆78 · Updated last year
- Let's build better datasets, together! ☆264 · Updated 10 months ago
- Multi-threaded matrix multiplication and cosine similarity calculations for dense and sparse matrices. Appropriate for calculating the K… ☆83 · Updated 10 months ago
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language M… ☆242 · Updated last year
- An innovative library for efficient LLM inference via low-bit quantization ☆349 · Updated last year
- Fine-tune an LLM to perform batch inference and online serving. ☆113 · Updated 5 months ago
- Google TPU optimizations for transformers models ☆122 · Updated 9 months ago
- Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first app… ☆169 · Updated last year
- Low-Rank adapter extraction for fine-tuned transformers models ☆178 · Updated last year
- TitanML Takeoff Server is an optimization, compression and deployment platform that makes state of the art machine learning models access… ☆114 · Updated last year
- ☆124 · Updated last year
- A set of scripts and notebooks on LLM finetuning and dataset creation ☆111 · Updated last year
- Evaluation of bm42 sparse indexing algorithm ☆72 · Updated last year
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… ☆146 · Updated 2 years ago