opea-project / docs
This repo contains the documentation for the OPEA project
☆32 · Updated last week
Alternatives and similar repositories for docs:
Users who are interested in docs are comparing it to the libraries listed below.
- Evaluation, benchmark, and scorecard, targeting performance on throughput and latency, accuracy on popular evaluation harnesses, safety… ☆29 · Updated this week
- GenAI components at micro-service level; GenAI service composer to create mega-service ☆136 · Updated this week
- Generative AI Examples is a collection of GenAI examples such as ChatQnA, Copilot, which illustrate the pipeline capabilities of the Open… ☆419 · Updated this week
- Containerization and cloud native suite for OPEA ☆53 · Updated this week
- Large Language Model Text Generation Inference on Habana Gaudi ☆32 · Updated last month
- A collection of YAML files, Helm Charts, Operator code, and guides to act as an example reference implementation for NVIDIA NIM deploymen… ☆177 · Updated last week
- Pretrain, finetune and serve LLMs on Intel platforms with Ray ☆125 · Updated 3 weeks ago
- IBM development fork of https://github.com/huggingface/text-generation-inference ☆60 · Updated 4 months ago
- Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU) ☆183 · Updated this week
- ☆191 · Updated 2 weeks ago
- GenAI Studio is a low-code platform that enables users to construct, evaluate, and benchmark GenAI applications. The platform also provides c… ☆37 · Updated this week
- ☆53 · Updated 7 months ago
- Accelerate your Gen AI with NVIDIA NIM and NVIDIA AI Workbench ☆156 · Updated last month
- Repository for open inference protocol specification ☆53 · Updated 9 months ago
- Inference server benchmarking tool ☆51 · Updated 2 weeks ago
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) devices. Note… ☆62 · Updated last month
- Run Generative AI models with a simple C++/Python API using OpenVINO Runtime ☆258 · Updated this week
- 📡 Deploy AI models and apps to Kubernetes without developing a hernia ☆32 · Updated 10 months ago
- Benchmark suite for LLMs from Fireworks.ai ☆70 · Updated 2 months ago
- Benchmarking suite for popular AI APIs ☆83 · Updated 2 months ago
- ☆66 · Updated 10 months ago
- Efficiently tune any LLM from HuggingFace using distributed training (multiple GPUs) and DeepSpeed. Uses Ray AIR to orchestrate the … ☆56 · Updated last year
- A collection of all available inference solutions for LLMs ☆84 · Updated last month
- ☆15 · Updated 3 weeks ago
- Evaluate and enhance your LLM deployments for real-world inference needs ☆266 · Updated this week
- Infrastructure as code for GPU-accelerated managed Kubernetes clusters ☆55 · Updated 3 weeks ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆14 · Updated last week
- An innovative library for efficient LLM inference via low-bit quantization ☆352 · Updated 7 months ago
- Fine-tune an LLM to perform batch inference and online serving ☆109 · Updated this week
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆459 · Updated this week