intel / ai-containers
This repository contains Dockerfiles, scripts, YAML files, Helm charts, etc. used to scale out AI containers with versions of TensorFlow and PyTorch that have been optimized for Intel platforms. Scaling is done with Python, Docker, Kubernetes, Kubeflow, cnvrg.io, Helm, and other container orchestration frameworks for use in the cloud and on-prem…
☆52 · Updated this week
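As an illustration of how such containers are typically consumed, the sketch below layers a user's own dependencies and entrypoint on top of an Intel-optimized PyTorch base image. The image name and tag, as well as the requirements.txt and train.py files, are assumptions for illustration only; consult the repository's Dockerfiles, compose files, and Helm charts for the images it actually builds and publishes.

```dockerfile
# Minimal sketch, assuming an Intel-optimized PyTorch base image such as
# intel/intel-optimized-pytorch (image name and tag are assumptions; check
# Docker Hub or this repository for the real images).
FROM intel/intel-optimized-pytorch:latest

WORKDIR /workspace

# requirements.txt and train.py are hypothetical user files layered on top
# of the optimized base image.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY train.py .
CMD ["python", "train.py"]
```

Built this way, the same image can be run locally with `docker run` or scheduled on Kubernetes/Kubeflow, for example through the Helm charts mentioned above.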
Alternatives and similar repositories for ai-containers
Users interested in ai-containers are comparing it to the libraries listed below.
- OpenVINO Tokenizers extension ☆40 · Updated this week
- Explore our open source AI portfolio! Develop, train, and deploy your AI solutions with performance- and productivity-optimized tools fro… ☆50 · Updated 5 months ago
- A curated list of OpenVINO based AI projects ☆149 · Updated 2 months ago
- AMD related optimizations for transformer models ☆83 · Updated last week
- Large Language Model Text Generation Inference on Habana Gaudi ☆34 · Updated 5 months ago
- Setup and Installation Instructions for Habana binaries, docker image creation ☆25 · Updated 3 months ago
- Run Generative AI models with simple C++/Python API and using OpenVINO Runtime ☆325 · Updated this week
- Tutorials for running models on First-gen Gaudi and Gaudi2 for Training and Inference. The source files for the tutorials on https://dev… ☆61 · Updated 3 weeks ago
- No-code CLI designed for accelerating ONNX workflows ☆208 · Updated 2 months ago
- oneAPI Specification source files ☆207 · Updated last week
- Source Code and Usage Samples for the Resources hosted in the NVIDIA AI Enterprise AzureML Registry ☆21 · Updated last year
- Pretrain, finetune and serve LLMs on Intel platforms with Ray ☆129 · Updated 2 weeks ago
- Accelerate your Gen AI with NVIDIA NIM and NVIDIA AI Workbench ☆180 · Updated 3 months ago
- Developer kits reference setup scripts for various kinds of Intel platforms and GPUs ☆33 · Updated this week
- ☆87 · Updated last week
- This repo contains documents of the OPEA project ☆44 · Updated last week
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU(XPU) device. Note… ☆62 · Updated 2 months ago
- Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU) ☆193 · Updated this week
- Evaluation, benchmark, and scorecard, targeting for performance on throughput and latency, accuracy on popular evaluation harness, safety… ☆37 · Updated last week
- RAPIDS Documentation Site ☆45 · Updated this week
- An innovative library for efficient LLM inference via low-bit quantization ☆349 · Updated last year
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆485 · Updated last week
- Legacy CM repository with a collection of portable, reusable and cross-platform CM automations for MLOps and MLPerf to simplify the proce… ☆18 · Updated 5 months ago
- ☆126 · Updated this week
- Explainable AI Tooling (XAI). XAI is used to discover and explain a model's prediction in a way that is interpretable to the user. Releva… ☆39 · Updated 3 months ago
- Run cloud native workloads on NVIDIA GPUs ☆193 · Updated this week
- Intel® Extension for TensorFlow* ☆346 · Updated 5 months ago
- ☆238 · Updated this week
- MLPerf Client is a benchmark for Windows and macOS, focusing on client form factors in ML inference scenarios. ☆47 · Updated last month
- GenAI components at micro-service level; GenAI service composer to create mega-service ☆167 · Updated this week