intel / ai-containers
This repository contains Dockerfiles, scripts, YAML files, Helm charts, etc. used to scale out AI containers with versions of TensorFlow and PyTorch that have been optimized for Intel platforms. Scaling is done with Python, Docker, Kubernetes, Kubeflow, cnvrg.io, Helm, and other container orchestration frameworks for use in the cloud and on-prem…
☆59 · Updated last week
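The pattern the repository describes, packaging an Intel-optimized framework build into a container that an orchestrator can scale out, can be sketched with a minimal Dockerfile. The `intel/intel-optimized-tensorflow` base image tag and the `train.py` script name below are illustrative assumptions, not taken from the repo itself:

```dockerfile
# Illustrative sketch only: the base image tag and training script
# are assumptions, not files from intel/ai-containers.
FROM intel/intel-optimized-tensorflow:latest

WORKDIR /workspace
COPY train.py .

# The Intel-optimized TensorFlow build in the base image is used
# automatically when the script imports tensorflow.
CMD ["python", "train.py"]
```

An image built this way could then be scaled out as a Kubernetes Deployment or Job, which is the orchestration layer the repository's Helm charts and YAML files target.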
Alternatives and similar repositories for ai-containers
Users interested in ai-containers compare it to the libraries listed below.
- OpenVINO Tokenizers extension ☆48 · Updated 2 weeks ago
- Explore our open source AI portfolio! Develop, train, and deploy your AI solutions with performance- and productivity-optimized tools fro… ☆65 · Updated 10 months ago
- Setup and installation instructions for Habana binaries and Docker image creation ☆28 · Updated 3 weeks ago
- No-code CLI designed for accelerating ONNX workflows ☆226 · Updated 7 months ago
- Large Language Model Text Generation Inference on Habana Gaudi ☆34 · Updated 10 months ago
- A curated list of OpenVINO-based AI projects ☆179 · Updated 7 months ago
- Tutorials for running models on first-gen Gaudi and Gaudi2 for training and inference. The source files for the tutorials on https://dev… ☆64 · Updated 4 months ago
- AMD-related optimizations for transformer models ☆97 · Updated 3 months ago
- oneAPI Specification source files ☆211 · Updated 2 weeks ago
- ☆91 · Updated 3 weeks ago
- Machine learning using oneAPI. Explores Intel Extension for Scikit-learn* and NumPy, SciPy, and pandas powered by oneAPI ☆41 · Updated last year
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) devices. Note… ☆63 · Updated 7 months ago
- Explainable AI Tooling (XAI). XAI is used to discover and explain a model's predictions in a way that is interpretable to the user. Releva… ☆39 · Updated 4 months ago
- Intel® AI for Enterprise Inference optimizes AI inference services on Intel hardware using Kubernetes orchestration. It automates LLM mod… ☆32 · Updated this week
- Evaluation, benchmarking, and scorecards targeting performance on throughput and latency, accuracy on popular evaluation harnesses, safety… ☆38 · Updated 3 weeks ago
- Intel® AI Super Builder ☆153 · Updated this week
- MLPerf Client is a benchmark for Windows, Linux, and macOS, focusing on client form factors in ML inference scenarios. ☆72 · Updated 2 months ago
- An Awesome list of oneAPI projects ☆158 · Updated 5 months ago
- ☆135 · Updated last week
- Pretrain, finetune, and serve LLMs on Intel platforms with Ray ☆131 · Updated 4 months ago
- Source code and usage samples for the resources hosted in the NVIDIA AI Enterprise AzureML Registry ☆21 · Updated last year
- This repo contains documentation for the OPEA project ☆43 · Updated last month
- Run generative AI models with a simple C++/Python API using the OpenVINO Runtime ☆424 · Updated this week
- Developer-kit reference setup scripts for various kinds of Intel platforms and GPUs ☆41 · Updated this week
- [DEPRECATED] Moved to the ROCm/rocm-libraries repo ☆113 · Updated last week
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆531 · Updated this week
- ☆74 · Updated this week
- This repo hosts code for vLLM CI & performance benchmark infrastructure. ☆29 · Updated this week
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆93 · Updated this week
- A GPU-driven system framework for scalable AI applications ☆124 · Updated 11 months ago