opea-project / docs
This repo contains the documentation for the OPEA project.
☆43 · Updated 2 months ago
Alternatives and similar repositories for docs
Users interested in docs are comparing it to the repositories listed below.
- Evaluation, benchmark, and scorecard targeting performance on throughput and latency, accuracy on popular evaluation harnesses, safety… ☆37 · Updated this week
- GenAI components at the micro-service level; a GenAI service composer to create a mega-service ☆180 · Updated 2 weeks ago
- Generative AI Examples is a collection of GenAI examples, such as ChatQnA and Copilot, which illustrate the pipeline capabilities of the Open… ☆692 · Updated this week
- Large Language Model Text Generation Inference on Habana Gaudi ☆34 · Updated 7 months ago
- A collection of YAML files, Helm Charts, Operator code, and guides to act as an example reference implementation for NVIDIA NIM deploymen… ☆204 · Updated last week
- IBM development fork of https://github.com/huggingface/text-generation-inference ☆61 · Updated last month
- Containerization and cloud-native suite for OPEA ☆70 · Updated last month
- ☆264 · Updated this week
- Accelerate your Gen AI with NVIDIA NIM and NVIDIA AI Workbench ☆187 · Updated 6 months ago
- Pretrain, finetune, and serve LLMs on Intel platforms with Ray ☆131 · Updated last month
- Benchmark suite for LLMs from Fireworks.ai ☆83 · Updated last week
- Easy and lightning-fast training of 🤗 Transformers on the Habana Gaudi processor (HPU) ☆199 · Updated this week
- ☆268 · Updated 4 months ago
- An NVIDIA AI Workbench example project for fine-tuning a Mistral 7B model ☆63 · Updated last year
- An innovative library for efficient LLM inference via low-bit quantization ☆349 · Updated last year
- Evaluate and enhance your LLM deployments for real-world inference needs ☆679 · Updated this week
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) devices. Note… ☆63 · Updated 4 months ago
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers, and Sentence-Transformers with full support of O… ☆317 · Updated last month
- Route LLM requests to the best model for the task at hand. ☆122 · Updated last week
- vLLM: a high-throughput and memory-efficient inference and serving engine for LLMs ☆92 · Updated this week
- Self-host LLMs with vLLM and BentoML ☆154 · Updated last week
- ArcticInference: a vLLM plugin for high-throughput, low-latency inference ☆292 · Updated this week
- ☆58 · Updated last year
- Run generative AI models with a simple C++/Python API using the OpenVINO Runtime ☆368 · Updated this week
- Inference server benchmarking tool ☆124 · Updated last month
- GenAI Studio is a low-code platform that enables users to construct, evaluate, and benchmark GenAI applications. The platform also provides c… ☆52 · Updated 2 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆266 · Updated last year
- Tutorials for running models on first-gen Gaudi and Gaudi2 for training and inference. The source files for the tutorials on https://dev… ☆62 · Updated last month
- Reference models for the Intel(R) Gaudi(R) AI Accelerator ☆165 · Updated last month
- Advanced quantization algorithm for LLMs and VLMs, with support for CPU, Intel GPU, CUDA, and HPU. ☆690 · Updated this week
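Several entries in the list above (the vLLM fork, the BentoML self-hosting recipe, and ArcticInference) build on vLLM's serving engine. For orientation, here is a minimal sketch of vLLM's offline generation API; the model name and prompt are arbitrary placeholders for illustration, not anything prescribed by the listed projects:

```python
# Minimal offline-generation sketch using vLLM's Python API.
# Assumptions: vLLM is installed (pip install vllm), and
# "facebook/opt-125m" is an arbitrary small example model.
from vllm import LLM, SamplingParams

prompts = ["What does the OPEA project provide?"]
sampling = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

llm = LLM(model="facebook/opt-125m")       # load the model into the engine
outputs = llm.generate(prompts, sampling)  # batched, memory-efficient decoding

for out in outputs:
    print(out.outputs[0].text)             # first completion for each prompt
```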