opea-project / GenAIComps
GenAI components at micro-service level; GenAI service composer to create mega-service
☆178 · Updated last week
Alternatives and similar repositories for GenAIComps
Users interested in GenAIComps are comparing it to the libraries listed below.
- Evaluation, benchmark, and scorecard, targeting performance on throughput and latency, accuracy on popular evaluation harnesses, safety… ☆37 · Updated last month
- Generative AI Examples is a collection of GenAI examples such as ChatQnA, Copilot, which illustrate the pipeline capabilities of the Open… ☆688 · Updated this week
- This repo contains documents of the OPEA project ☆44 · Updated last month
- Accelerate your Gen AI with NVIDIA NIM and NVIDIA AI Workbench ☆179 · Updated 5 months ago
- Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs ☆596 · Updated last week
- Run Generative AI models with simple C++/Python API and using OpenVINO Runtime ☆345 · Updated this week
- Granite Snack Cookbook -- easily consumable recipes (python notebooks) that showcase the capabilities of the Granite models ☆267 · Updated last week
- Pretrain, finetune and serve LLMs on Intel platforms with Ray ☆132 · Updated 2 weeks ago
- GenAI Studio is a low code platform to enable users to construct, evaluate, and benchmark GenAI applications. The platform also provide c… ☆50 · Updated last month
- A collection of YAML files, Helm Charts, Operator code, and guides to act as an example reference implementation for NVIDIA NIM deploymen… ☆190 · Updated last week
- Large Language Model Text Generation Inference on Habana Gaudi ☆34 · Updated 6 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆83 · Updated this week
- A collection of all available inference solutions for the LLMs ☆91 · Updated 7 months ago
- An NVIDIA AI Workbench example project for Retrieval Augmented Generation (RAG) ☆340 · Updated last month
- Open source project for data preparation for GenAI applications ☆808 · Updated this week
- Inference server benchmarking tool ☆112 · Updated 5 months ago
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆267 · Updated last week
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ☆315 · Updated last week
- An innovative library for efficient LLM inference via low-bit quantization ☆350 · Updated last year
- Self-host LLMs with vLLM and BentoML ☆150 · Updated last week
- Tutorials for running models on First-gen Gaudi and Gaudi2 for Training and Inference. The source files for the tutorials on https://dev… ☆61 · Updated 2 weeks ago
- Advanced Quantization Algorithm for LLMs and VLMs, with support for CPU, Intel GPU, CUDA and HPU. ☆647 · Updated last week
- Build Research and RAG agents with Granite on your laptop ☆144 · Updated this week
- InstructLab Training Library - Efficient Fine-Tuning with Message-Format Data ☆42 · Updated last week
- IBM development fork of https://github.com/huggingface/text-generation-inference ☆61 · Updated 2 weeks ago
- This NVIDIA RAG blueprint serves as a reference solution for a foundational Retrieval Augmented Generation (RAG) pipeline. ☆280 · Updated last week
- Customizable, AI-driven virtual assistant designed to streamline customer service operations, handle common inquiries, and improve overal… ☆193 · Updated 2 months ago
- ☆264 · Updated 3 months ago
- Intel® AI for Enterprise Inference optimizes AI inference services on Intel hardware using Kubernetes Orchestration. It automates LLM mod… ☆23 · Updated 2 weeks ago
- Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU) ☆198 · Updated this week