opea-project / GenAIComps
GenAI components at micro-service level; GenAI service composer to create mega-service
☆193 · Updated 3 weeks ago
Alternatives and similar repositories for GenAIComps
Users interested in GenAIComps are comparing it to the libraries listed below:
- Generative AI Examples is a collection of GenAI examples such as ChatQnA, Copilot, which illustrate the pipeline capabilities of the Open… ☆718 · Updated this week
- Evaluation, benchmark, and scorecard, targeting for performance on throughput and latency, accuracy on popular evaluation harness, safety… ☆39 · Updated last month
- This repo contains documents of the OPEA project ☆43 · Updated last month
- Accelerate your Gen AI with NVIDIA NIM and NVIDIA AI Workbench ☆203 · Updated 9 months ago
- A collection of YAML files, Helm Charts, Operator code, and guides to act as an example reference implementation for NVIDIA NIM deploymen… ☆221 · Updated this week
- Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs ☆843 · Updated this week
- GenAI Studio is a low code platform to enable users to construct, evaluate, and benchmark GenAI applications. The platform also provide c… ☆59 · Updated 3 weeks ago
- Pretrain, finetune and serve LLMs on Intel platforms with Ray ☆130 · Updated 4 months ago
- This NVIDIA RAG blueprint serves as a reference solution for a foundational Retrieval Augmented Generation (RAG) pipeline. ☆470 · Updated this week
- Route LLM requests to the best model for the task at hand. ☆177 · Updated 3 weeks ago
- Self-host LLMs with vLLM and BentoML ☆168 · Updated 3 weeks ago
- Large Language Model Text Generation Inference on Habana Gaudi ☆34 · Updated 10 months ago
- An NVIDIA AI Workbench example project for Retrieval Augmented Generation (RAG) ☆362 · Updated 6 months ago
- Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU) ☆205 · Updated last week
- A calculator to estimate the memory footprint, capacity, and latency on VMware Private AI with NVIDIA. ☆38 · Updated 6 months ago
- Run Generative AI models with simple C++/Python API and using OpenVINO Runtime ☆433 · Updated this week
- Granite Snack Cookbook -- easily consumable recipes (python notebooks) that showcase the capabilities of the Granite models ☆345 · Updated this week
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆391 · Updated this week
- A collection of all available inference solutions for the LLMs ☆94 · Updated 11 months ago
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆532 · Updated this week
- An NVIDIA AI Workbench example project for fine-tuning a Mistral 7B model ☆69 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆85 · Updated this week
- Python SDK for Llama Stack ☆193 · Updated this week
- ☆185 · Updated this week
- Official repository of the Intel Certified Developer Program ☆87 · Updated 6 months ago
- The NVIDIA NeMo Agent Toolkit UI streamlines interacting with NeMo Agent Toolkit workflows in an easy-to-use web application. ☆75 · Updated this week
- ☆270 · Updated 7 months ago
- An innovative library for efficient LLM inference via low-bit quantization ☆352 · Updated last year
- IBM development fork of https://github.com/huggingface/text-generation-inference ☆63 · Updated 4 months ago
- ☆282 · Updated 2 weeks ago