MinhNgyuen / llm-benchmark
Benchmark LLM performance
☆95 · Updated 9 months ago
Alternatives and similar repositories for llm-benchmark
Users interested in llm-benchmark are comparing it to the libraries listed below.
- LLM Benchmark for Throughput via Ollama (Local LLMs) ☆220 · Updated 2 months ago
- Comparison of the output quality of quantization methods, using Llama 3, transformers, GGUF, EXL2. ☆152 · Updated 11 months ago
- ☆89 · Updated 4 months ago
- Code execution utilities for Open WebUI & Ollama ☆276 · Updated 6 months ago
- A repository of Open-WebUI tools to use with your favourite LLMs ☆219 · Updated this week
- Lightweight inference server for OpenVINO ☆166 · Updated this week
- Command-line personal assistant using your favorite proprietary or local models with access to 30+ tools ☆106 · Updated last month
- An Open WebUI function for a better R1 experience ☆79 · Updated 2 months ago
- InferX is an Inference Function-as-a-Service platform ☆77 · Updated this week
- Generate train.jsonl and valid.jsonl files to use for fine-tuning Mistral and other LLMs. ☆94 · Updated last year
- A Python package for serving LLMs on OpenAI-compatible API endpoints with prompt caching using MLX. ☆80 · Updated 5 months ago
- This small API downloads and exposes access to NeuML's txtai-wikipedia and full wikipedia datasets, taking in a query and returning full … ☆91 · Updated last month
- Run multiple resource-heavy Large Models (LM) on the same machine with a limited amount of VRAM/other resources by exposing them on differe… ☆61 · Updated this week
- A fast batching API to serve LLMs ☆182 · Updated last year
- This is the Mixture-of-Agents (MoA) concept, adapted from the original work by TogetherAI. My version is tailored for local model usage a… ☆115 · Updated 10 months ago
- ☆202 · Updated 3 weeks ago
- Dagger functions to import Hugging Face GGUF models into a local ollama instance and optionally push them to ollama.com. ☆115 · Updated 11 months ago
- ☆74 · Updated this week
- A Lightweight Library for AI Observability ☆243 · Updated 2 months ago
- An open-webui pipe that aims to replicate the o1 experience ☆25 · Updated 3 months ago
- Self-host LLMs with vLLM and BentoML ☆109 · Updated last week
- 🚀 Retrieval Augmented Generation (RAG) with txtai. Combine search and LLMs to find insights with your own data. ☆362 · Updated last week
- ☆94 · Updated this week
- Practical and advanced guide to LLMOps. It provides a solid understanding of large language models’ general concepts, deployment techniqu… ☆65 · Updated 9 months ago
- Serving LLMs in the HF-Transformers format via a PyFlask API ☆71 · Updated 8 months ago
- Distributed inference for MLX LLMs ☆91 · Updated 9 months ago
- Export and Backup Ollama models into GGUF and ModelFile ☆70 · Updated 8 months ago
- ☆24 · Updated 3 months ago
- ☆72 · Updated last week
- This project demonstrates a basic chain-of-thought interaction with any LLM (Large Language Model) ☆318 · Updated 7 months ago