opea-project / docs
This repo contains the documentation for the OPEA project.
☆44 · Updated last month
Alternatives and similar repositories for docs
Users interested in docs are comparing it to the libraries listed below.
- Evaluation, benchmark, and scorecard, targeting performance on throughput and latency, accuracy on popular evaluation harness, safety… ☆37 · Updated last month
- GenAI components at micro-service level; GenAI service composer to create mega-service ☆167 · Updated last week
- Generative AI Examples is a collection of GenAI examples such as ChatQnA, Copilot, which illustrate the pipeline capabilities of the Open… ☆662 · Updated this week
- Accelerate your Gen AI with NVIDIA NIM and NVIDIA AI Workbench ☆178 · Updated 3 months ago
- A collection of YAML files, Helm Charts, Operator code, and guides to act as an example reference implementation for NVIDIA NIM deploymen… ☆189 · Updated this week
- ☆232 · Updated this week
- Containerization and cloud native suite for OPEA ☆70 · Updated last week
- Large Language Model Text Generation Inference on Habana Gaudi ☆34 · Updated 4 months ago
- Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs ☆461 · Updated last week
- Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU) ☆191 · Updated this week
- Route LLM requests to the best model for the task at hand. ☆87 · Updated last month
- IBM development fork of https://github.com/huggingface/text-generation-inference ☆61 · Updated 3 months ago
- Pretrain, finetune and serve LLMs on Intel platforms with Ray ☆128 · Updated last month
- Run Generative AI models with simple C++/Python API and using OpenVINO Runtime ☆316 · Updated this week
- An NVIDIA AI Workbench example project for fine-tuning a Mistral 7B model ☆60 · Updated last year
- An innovative library for efficient LLM inference via low-bit quantization ☆349 · Updated 11 months ago
- Benchmark suite for LLMs from Fireworks.ai ☆76 · Updated last week
- Repository for open inference protocol specification ☆59 · Updated 2 months ago
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) device. Note… ☆61 · Updated last month
- Inference server benchmarking tool ☆87 · Updated 3 months ago
- GenAI Studio is a low code platform to enable users to construct, evaluate, and benchmark GenAI applications. The platform also provide c… ☆46 · Updated last week
- Granite 3.1 Language Models ☆117 · Updated last month
- InstructLab Training Library - Efficient Fine-Tuning with Message-Format Data ☆42 · Updated this week
- ☆261 · Updated last month
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ☆307 · Updated 2 months ago
- Self-host LLMs with vLLM and BentoML ☆139 · Updated last week
- A collection of all available inference solutions for the LLMs ☆91 · Updated 5 months ago
- ☆38 · Updated this week
- For individual users, watsonx Code Assistant can access a local IBM Granite model ☆34 · Updated last month
- 📡 Deploy AI models and apps to Kubernetes without developing a hernia ☆32 · Updated last year