opea-project / GenAIEval
Evaluation, benchmarking, and scorecards targeting performance (throughput and latency), accuracy on popular evaluation harnesses, safety, and hallucination
☆37 · Updated 2 weeks ago
Alternatives and similar repositories for GenAIEval
Users interested in GenAIEval are comparing it to the libraries listed below.
- GenAI components at micro-service level; GenAI service composer to create mega-service ☆180 · Updated this week
- This repo contains documents of the OPEA project ☆44 · Updated last month
- Generative AI Examples is a collection of GenAI examples such as ChatQnA, Copilot, which illustrate the pipeline capabilities of the Open… ☆687 · Updated this week
- A collection of YAML files, Helm Charts, Operator code, and guides to act as an example reference implementation for NVIDIA NIM deploymen… ☆194 · Updated last week
- Accelerate your Gen AI with NVIDIA NIM and NVIDIA AI Workbench ☆180 · Updated 5 months ago
- ☆257 · Updated this week
- Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs ☆622 · Updated last week
- IBM development fork of https://github.com/huggingface/text-generation-inference ☆61 · Updated last month
- A collection of all available inference solutions for LLMs ☆91 · Updated 7 months ago
- Large Language Model Text Generation Inference on Habana Gaudi ☆34 · Updated 7 months ago
- Pretrain, finetune, and serve LLMs on Intel platforms with Ray ☆131 · Updated 3 weeks ago
- Self-host LLMs with vLLM and BentoML ☆151 · Updated last week
- 🚀 Use NVIDIA NIMs with Haystack pipelines ☆31 · Updated last year
- GenAI Studio is a low-code platform that enables users to construct, evaluate, and benchmark GenAI applications. The platform also provides c… ☆50 · Updated last month
- InstructLab Training Library - Efficient Fine-Tuning with Message-Format Data ☆43 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs (see the vLLM usage sketch after this list) ☆15 · Updated last week
- Route LLM requests to the best model for the task at hand. ☆109 · Updated 3 weeks ago
- ☆49 · Updated 2 months ago
- Repository for the open inference protocol specification (a sample v2 infer request is sketched after this list) ☆59 · Updated 5 months ago
- A simple service that integrates vLLM with Ray Serve for fast and scalable LLM serving (a Ray Serve sketch follows this list) ☆73 · Updated last year
- An NVIDIA AI Workbench example project for fine-tuning a Mistral 7B model ☆63 · Updated last year
- A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM ☆60 · Updated this week
- Python SDK for Llama Stack (a client sketch follows this list) ☆183 · Updated this week
- Easy and lightning-fast training of 🤗 Transformers on the Habana Gaudi processor (HPU) ☆200 · Updated this week
- Fine-tune an LLM to perform batch inference and online serving. ☆112 · Updated 4 months ago
- Using LlamaIndex with Ray for productionizing LLM applications ☆71 · Updated 2 years ago
- 📡 Deploy AI models and apps to Kubernetes without developing a hernia ☆33 · Updated last year
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆278 · Updated last week
- ☆64 · Updated 6 months ago
- Benchmark suite for LLMs from Fireworks.ai ☆83 · Updated 2 weeks ago
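For the entry describing a high-throughput, memory-efficient serving engine (vLLM's standard description; the listed repo may be a fork sharing the same API), a minimal offline-generation sketch. The model name and sampling settings are illustrative assumptions, not taken from the repository.

```python
# Minimal offline-inference sketch with vLLM (pip install vllm).
# "facebook/opt-125m" is an illustrative small model, not an endorsement.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, max_tokens=64)

# generate() batches prompts and returns one RequestOutput per prompt
for out in llm.generate(["What does a serving engine do?"], params):
    print(out.outputs[0].text)
```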
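For the open inference protocol entry, a sketch of a v2 infer call. Only the `/v2/models/{name}/infer` path and the payload shape come from the protocol; the host, model name, and tensor layout are placeholder assumptions.

```python
# Sketch of an Open Inference Protocol (KServe v2) request.
# Host and model name ("my-model") are hypothetical placeholders.
import requests

payload = {
    "inputs": [{
        "name": "input-0",   # tensor name expected by the model
        "shape": [1, 3],     # batch of one, three features
        "datatype": "FP32",
        "data": [0.1, 0.2, 0.3],
    }]
}
resp = requests.post("http://localhost:8080/v2/models/my-model/infer",
                     json=payload, timeout=30)
print(resp.json()["outputs"])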
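For the vLLM-with-Ray-Serve entry, a hedged sketch of how such an integration typically looks; the class name, model, and request schema are assumptions for illustration and may not match the linked repo's actual layout.

```python
# Hypothetical sketch: a vLLM engine wrapped in a Ray Serve deployment.
from ray import serve
from vllm import LLM, SamplingParams

@serve.deployment(ray_actor_options={"num_gpus": 1})
class VLLMDeployment:
    def __init__(self):
        # Loads the model once per replica; blocking generate() is fine
        # for a sketch, though production code would use the async engine.
        self.llm = LLM(model="facebook/opt-125m")

    async def __call__(self, request):
        prompt = (await request.json())["prompt"]
        result = self.llm.generate([prompt], SamplingParams(max_tokens=64))
        return {"text": result[0].outputs[0].text}

app = VLLMDeployment.bind()
# serve.run(app)  # then: POST {"prompt": "..."} to http://localhost:8000/
```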
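For the Llama Stack SDK entry, a sketch based on the client's documented chat-completion surface; the base URL and model ID are placeholder assumptions, and method names may differ across SDK releases.

```python
# Hedged sketch with the llama-stack-client package; base_url and
# model_id are assumptions, and the API has evolved across releases.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")
response = client.inference.chat_completion(
    model_id="meta-llama/Llama-3.2-3B-Instruct",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.completion_message.content)
```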