opea-project / Enterprise-Inference

Intel® AI for Enterprise Inference optimizes AI inference services on Intel hardware using Kubernetes orchestration. It automates LLM deployment, resource provisioning, and performance tuning to deliver faster inference with less manual setup.
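As a rough illustration of what "automated LLM deployment on Kubernetes" involves, the sketch below creates a single inference Deployment with the official Kubernetes Python client. It is not taken from Enterprise-Inference; the image name, labels, resource limits, and namespace are placeholder assumptions.

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (e.g. ~/.kube/config).
config.load_kube_config()

# A minimal Deployment for a hypothetical LLM inference server.
# Image, labels, resources, and port are illustrative placeholders.
deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="llm-inference"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "llm-inference"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "llm-inference"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="server",
                        image="example.com/llm-server:latest",  # placeholder image
                        ports=[client.V1ContainerPort(container_port=8000)],
                        # CPU/memory requests stand in for hardware-aware
                        # provisioning; a real setup would size these per model.
                        resources=client.V1ResourceRequirements(
                            requests={"cpu": "8", "memory": "32Gi"},
                        ),
                    )
                ]
            ),
        ),
    ),
)

# Submit the Deployment to the cluster.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

Per the project description, Enterprise-Inference layers model deployment, resource provisioning, and tuned settings for Intel hardware on top of Kubernetes primitives like this one.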

Alternatives and similar repositories for Enterprise-Inference

Users interested in Enterprise-Inference are comparing it to the libraries listed below.
