opea-project / docs
This repo contains the documentation for the OPEA project
☆44 · Updated last month
Alternatives and similar repositories for docs
Users interested in docs are comparing it to the repositories listed below
- Evaluation, benchmark, and scorecard, targeting for performance on throughput and latency, accuracy on popular evaluation harness, safety… ☆37 · Updated 2 weeks ago
- GenAI components at micro-service level; GenAI service composer to create mega-service ☆178 · Updated last week
- Generative AI Examples is a collection of GenAI examples such as ChatQnA, Copilot, which illustrate the pipeline capabilities of the Open… ☆687 · Updated this week
- Containerization and cloud native suite for OPEA ☆70 · Updated 2 weeks ago
- Accelerate your Gen AI with NVIDIA NIM and NVIDIA AI Workbench ☆180 · Updated 5 months ago
- Large Language Model Text Generation Inference on Habana Gaudi ☆34 · Updated 7 months ago
- Pretrain, finetune and serve LLMs on Intel platforms with Ray ☆131 · Updated 3 weeks ago
- Run Generative AI models with simple C++/Python API and using OpenVINO Runtime ☆359 · Updated this week
- A collection of YAML files, Helm Charts, Operator code, and guides to act as an example reference implementation for NVIDIA NIM deploymen… ☆194 · Updated last week
- IBM development fork of https://github.com/huggingface/text-generation-inference ☆61 · Updated last month
- ☆257 · Updated this week
- An NVIDIA AI Workbench example project for fine-tuning a Mistral 7B model ☆63 · Updated last year
- A collection of all available inference solutions for the LLMs ☆91 · Updated 7 months ago
- Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU) ☆200 · Updated this week
- An innovative library for efficient LLM inference via low-bit quantization ☆349 · Updated last year
- ☆264 · Updated 3 months ago
- This repository contains Dockerfiles, scripts, yaml files, Helm charts, etc. used to scale out AI containers with versions of TensorFlow … ☆52 · Updated last week
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆501 · Updated this week
- Self-host LLMs with vLLM and BentoML ☆151 · Updated last week
- Tutorials for running models on First-gen Gaudi and Gaudi2 for Training and Inference. The source files for the tutorials on https://dev… ☆61 · Updated last month
- Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs ☆622 · Updated this week
- Route LLM requests to the best model for the task at hand. ☆109 · Updated 3 weeks ago
- Helm charts for llm-d ☆50 · Updated 2 months ago
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) device. Note… ☆63 · Updated 3 months ago
- For individual users, watsonx Code Assistant can access a local IBM Granite model ☆35 · Updated 3 months ago
- Python framework which enables you to transform how a user calls or infers an IBM Granite model and how the output from the model is retu… ☆46 · Updated last week
- GroqFlow provides an automated tool flow for compiling machine learning and linear algebra workloads into Groq programs and executing tho… ☆112 · Updated 2 months ago
- 📡 Deploy AI models and apps to Kubernetes without developing a hernia ☆33 · Updated last year
- ☆15 · Updated last month
- Prompt Declaration Language (PDL) is a declarative prompt programming language. ☆239 · Updated this week