opea-project / Enterprise-Inference
Intel® AI for Enterprise Inference optimizes AI inference services on Intel hardware using Kubernetes orchestration. It automates LLM model deployment, resource provisioning, and configuration tuning, simplifying operations and reducing manual work for faster inference.
38 stars · Mar 30, 2026 · Updated last week

Alternatives and similar repositories for Enterprise-Inference

Users interested in Enterprise-Inference are comparing it to the libraries listed below.

