fixie-ai / ai-benchmarks
Benchmarking suite for popular AI APIs
☆88 · Updated 11 months ago
Alternatives and similar repositories for ai-benchmarks
Users who are interested in ai-benchmarks are comparing it to the libraries listed below.
- Benchmark suite for LLMs from Fireworks.ai ☆86 · Updated 2 weeks ago
- Website with current metrics on the fastest AI models ☆42 · Updated last year
- IBM development fork of https://github.com/huggingface/text-generation-inference ☆63 · Updated 4 months ago
- ☆165 · Updated 5 months ago
- Client code examples, use cases, and benchmarks for the enterprise h2oGPTe RAG-based GenAI platform ☆90 · Updated 4 months ago
- Vector database with support for late interaction and token-level embeddings ☆54 · Updated 7 months ago
- An implementation of Self-Extend, which expands the context window via grouped attention ☆119 · Updated 2 years ago
- SGLang is a fast serving framework for large language models and vision language models ☆32 · Updated 2 months ago
- Experiments on speculative sampling with Llama models ☆127 · Updated 2 years ago
- Structured inference with Llama 2 in your browser ☆53 · Updated last year
- A collection of all available inference solutions for LLMs ☆94 · Updated 11 months ago
- ☆198 · Updated last year
- Fine-tune LLMs in a few lines of code (Text2Text, Text2Speech, Speech2Text) ☆246 · Updated 2 years ago
- ReLM is a Regular Expression engine for Language Models ☆107 · Updated 2 years ago
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆93 · Updated this week
- Deploy a light and full OpenAI API for production with vLLM, supporting /v1/embeddings with all embedding models ☆44 · Updated last year
- GPT-4-level conversational QA trained in a few hours ☆65 · Updated last year
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated last year
- An OpenAI Completions API-compatible server for NLP transformer models ☆66 · Updated 2 years ago
- ☆51 · Updated last year
- ☆476 · Updated 2 years ago
- Self-host LLMs with vLLM and BentoML ☆167 · Updated last week
- Synthetic data for LLM fine-tuning ☆120 · Updated 2 years ago
- Transformer GPU VRAM estimator ☆67 · Updated last year
- ☆75 · Updated 7 months ago
- Accelerating your LLM training to full speed! Made with ❤️ by ServiceNow Research ☆282 · Updated this week
- Tutorial for building an LLM router ☆242 · Updated last year
- Some simple scripts that I use day-to-day when working with LLMs and the Hugging Face Hub ☆161 · Updated 2 years ago
- A framework for evaluating function calls made by LLMs ☆40 · Updated last year
- Experiments with inference on Llama ☆103 · Updated last year