opea-project / GenAIEval
Evaluation, benchmark, and scorecard suite targeting performance (throughput and latency), accuracy on popular evaluation harnesses, safety, and hallucination
☆38 · Updated last week
Alternatives and similar repositories for GenAIEval
Users interested in GenAIEval are comparing it to the libraries listed below.
- GenAI components at the micro-service level; a GenAI service composer to create mega-services ☆193 · Updated last week
- This repo contains documents of the OPEA project ☆43 · Updated 3 weeks ago
- Generative AI Examples is a collection of GenAI examples, such as ChatQnA and Copilot, which illustrate the pipeline capabilities of the Open… ☆714 · Updated this week
- A collection of YAML files, Helm Charts, Operator code, and guides to act as an example reference implementation for NVIDIA NIM deploymen… ☆216 · Updated 2 weeks ago
- A collection of all available inference solutions for LLMs ☆94 · Updated 10 months ago
- Accelerate your Gen AI with NVIDIA NIM and NVIDIA AI Workbench ☆199 · Updated 8 months ago
- Self-host LLMs with vLLM and BentoML ☆163 · Updated last month
- GenAI Studio is a low-code platform that enables users to construct, evaluate, and benchmark GenAI applications. The platform also provides c… ☆55 · Updated last month
- Large Language Model Text Generation Inference on Habana Gaudi ☆34 · Updated 9 months ago
- ☆274 · Updated this week
- IBM development fork of https://github.com/huggingface/text-generation-inference ☆62 · Updated 4 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆16 · Updated this week
- Pretrain, finetune, and serve LLMs on Intel platforms with Ray ☆131 · Updated 3 months ago
- 🚀 Use NVIDIA NIMs with Haystack pipelines ☆32 · Updated last year
- This repository combines Llama workflows and agents, which together form a powerful concept. ☆17 · Updated last year
- Using LlamaIndex with Ray for productionizing LLM applications ☆71 · Updated 2 years ago
- ☆67 · Updated 9 months ago
- Benchmark and optimize LLM inference across frameworks with ease ☆155 · Updated 4 months ago
- Hugging Face Deep Learning Containers (DLCs) for Google Cloud ☆162 · Updated last month
- ☆18 · Updated last year
- ☆18 · Updated 2 weeks ago
- Fine-tune an LLM to perform batch inference and online serving. ☆117 · Updated 7 months ago
- Inference server benchmarking tool ☆136 · Updated 3 months ago
- Large Language Model Hosting Container ☆91 · Updated 3 months ago
- An NVIDIA AI Workbench example project for fine-tuning a Mistral 7B model ☆69 · Updated last year
- The Granite Guardian models are designed to detect risks in prompts and responses. ☆127 · Updated 3 months ago
- Python SDK for Llama Stack ☆192 · Updated this week
- ScalarLM - a unified training and inference stack ☆95 · Updated 2 months ago
- InstructLab Training Library - Efficient Fine-Tuning with Message-Format Data ☆44 · Updated last week
- Route LLM requests to the best model for the task at hand. ☆161 · Updated last week