opea-project / GenAIComps
GenAI components at micro-service level; GenAI service composer to create mega-service
☆153 · Updated last week
Alternatives and similar repositories for GenAIComps
Users interested in GenAIComps are comparing it to the libraries listed below.
- Generative AI Examples is a collection of GenAI examples, such as ChatQnA and Copilot, which illustrate the pipeline capabilities of the Open… ☆460 · Updated this week
- This repo contains documents of the OPEA project ☆42 · Updated last week
- Evaluation, benchmark, and scorecard, targeting performance on throughput and latency, accuracy on popular evaluation harnesses, safety… ☆35 · Updated 2 weeks ago
- GenAI Studio is a low-code platform that enables users to construct, evaluate, and benchmark GenAI applications. The platform also provides c… ☆43 · Updated 3 weeks ago
- Large Language Model Text Generation Inference on Habana Gaudi ☆33 · Updated 2 months ago
- Run Generative AI models with a simple C++/Python API using OpenVINO Runtime ☆282 · Updated this week
- Easy and lightning-fast training of 🤗 Transformers on Habana Gaudi processors (HPU) ☆186 · Updated this week
- Accelerate your Gen AI with NVIDIA NIM and NVIDIA AI Workbench ☆165 · Updated last month
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆75 · Updated this week
- A collection of YAML files, Helm Charts, Operator code, and guides to act as an example reference implementation for NVIDIA NIM deploymen… ☆182 · Updated this week
- Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs ☆317 · Updated this week
- Granite Snack Cookbook -- easily consumable recipes (Python notebooks) that showcase the capabilities of the Granite models ☆195 · Updated last week
- Pretrain, fine-tune, and serve LLMs on Intel platforms with Ray ☆127 · Updated last month
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆467 · Updated this week
- IBM development fork of https://github.com/huggingface/text-generation-inference ☆60 · Updated 3 weeks ago
- Reference models for Intel(R) Gaudi(R) AI Accelerator ☆161 · Updated 2 weeks ago
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ☆301 · Updated last week
- Fine-tune an LLM to perform batch inference and online serving. ☆111 · Updated last week
- An innovative library for efficient LLM inference via low-bit quantization ☆348 · Updated 9 months ago
- Intel® AI for Enterprise RAG converts enterprise data into actionable insights with excellent TCO. Utilizing Intel Gaudi AI accelerators … ☆16 · Updated 3 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆263 · Updated 7 months ago
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) devices. Note… ☆61 · Updated 3 months ago
- Self-host LLMs with vLLM and BentoML ☆114 · Updated last week
- Build Research and RAG agents with Granite on your laptop ☆133 · Updated 2 weeks ago
- An NVIDIA AI Workbench example project for fine-tuning a Mistral 7B model ☆57 · Updated last year
- 🚀 Use NVIDIA NIMs with Haystack pipelines ☆31 · Updated 9 months ago