opea-project / GenAIComps
GenAI components at micro-service level; GenAI service composer to create mega-service
☆169 · Updated this week
Alternatives and similar repositories for GenAIComps
Users interested in GenAIComps are comparing it to the libraries listed below.
- Generative AI Examples is a collection of GenAI examples such as ChatQnA and Copilot, which illustrate the pipeline capabilities of the Open… ☆682 · Updated this week
- Evaluation, benchmark, and scorecard, targeting performance on throughput and latency, accuracy on popular evaluation harnesses, safety… ☆37 · Updated 3 weeks ago
- This repo contains documentation for the OPEA project ☆44 · Updated 3 weeks ago
- Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs ☆566 · Updated this week
- A collection of YAML files, Helm Charts, Operator code, and guides to act as an example reference implementation for NVIDIA NIM deploymen… ☆191 · Updated this week
- Pretrain, finetune and serve LLMs on Intel platforms with Ray ☆132 · Updated last week
- Accelerate your Gen AI with NVIDIA NIM and NVIDIA AI Workbench ☆180 · Updated 4 months ago
- GenAI Studio is a low-code platform that enables users to construct, evaluate, and benchmark GenAI applications. The platform also provides c… ☆49 · Updated 3 weeks ago
- Run Generative AI models with a simple C++/Python API using the OpenVINO Runtime (a minimal Python sketch appears after this list) ☆334 · Updated this week
- Large Language Model Text Generation Inference on Habana Gaudi ☆34 · Updated 5 months ago
- A collection of all available inference solutions for LLMs ☆91 · Updated 6 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs (see the vLLM usage sketch after this list) ☆83 · Updated this week
- Route LLM requests to the best model for the task at hand. ☆102 · Updated this week
- Python SDK for Llama Stack ☆178 · Updated this week
- Granite Snack Cookbook -- easily consumable recipes (python notebooks) that showcase the capabilities of the Granite models ☆258 · Updated this week
- IBM development fork of https://github.com/huggingface/text-generation-inference ☆61 · Updated 4 months ago
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools (see the Optimum Intel sketch after this list) ☆489 · Updated this week
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ☆315 · Updated this week
- Self-host LLMs with vLLM and BentoML ☆145 · Updated this week
- Inference server benchmarking tool ☆98 · Updated 4 months ago
- Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU) ☆194 · Updated this week
- This is the documentation repository for SGLang. It is auto-generated from https://github.com/sgl-project/sglang/tree/main/docs. ☆76 · Updated this week
- Official repository of the Intel Certified Developer Program ☆88 · Updated last month
- An NVIDIA AI Workbench example project for Retrieval Augmented Generation (RAG) ☆340 · Updated last month
- An innovative library for efficient LLM inference via low-bit quantization ☆348 · Updated last year
- LLMPerf is a library for validating and benchmarking LLMs ☆1,001 · Updated 9 months ago
- This NVIDIA RAG blueprint serves as a reference solution for a foundational Retrieval Augmented Generation (RAG) pipeline. ☆255 · Updated 2 weeks ago
- Build research and RAG agents with Granite on your laptop ☆141 · Updated this week
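
For the OpenVINO GenAI entry above, the library exposes an `LLMPipeline` that runs a converted model in a few lines of Python. A minimal sketch, assuming the model has already been exported to OpenVINO IR (the local model directory below is a placeholder):

```python
# Minimal OpenVINO GenAI text-generation sketch.
# Assumes the model was previously exported to OpenVINO IR,
# e.g. with optimum-cli; the model directory is a placeholder.
import openvino_genai

pipe = openvino_genai.LLMPipeline("./TinyLlama-1.1B-Chat-ov", "CPU")  # "CPU" or "GPU"
print(pipe.generate("What is OPEA?", max_new_tokens=64))
```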
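
The high-throughput inference engine entry describes the vLLM-style serving engine. A minimal offline-inference sketch using the upstream vLLM Python API (the model ID is chosen only for illustration):

```python
# Minimal vLLM offline-inference sketch; the model ID is illustrative.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, max_tokens=64)

# generate() returns one RequestOutput per prompt; print the first completion.
for output in llm.generate(["OPEA microservices are"], params):
    print(output.outputs[0].text)
```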
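
For the 🤗 Optimum Intel entry, a Transformers model can be exported to OpenVINO on the fly and used as a drop-in replacement for the standard model class. A minimal sketch (the model ID is illustrative):

```python
# Minimal Optimum Intel sketch: export a Transformers model to OpenVINO
# on the fly and run it through the standard pipeline API.
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer, pipeline

model_id = "gpt2"  # illustrative model ID
model = OVModelForCausalLM.from_pretrained(model_id, export=True)  # converts to OpenVINO IR
tokenizer = AutoTokenizer.from_pretrained(model_id)

generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(generator("Intel optimization tools", max_new_tokens=32)[0]["generated_text"])
```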