intel / ai-containers
This repository contains Dockerfiles, scripts, YAML files, Helm charts, and related assets used to scale out AI containers with versions of TensorFlow and PyTorch that have been optimized for Intel platforms. Scaling is done with Python, Docker, Kubernetes, Kubeflow, cnvrg.io, Helm, and other container orchestration frameworks for use in the cloud and on-prem.
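As a rough illustration of the kind of container this repository builds, a minimal Dockerfile might extend one of Intel's optimized framework base images and layer a workload on top. This is a sketch only: the base image tag and the `train.py`/`requirements.txt` file names are assumptions for illustration, not taken from the repository.

```dockerfile
# Sketch only: base image and tag are assumptions; check Docker Hub for current tags
FROM intel/intel-optimized-tensorflow:latest

WORKDIR /workspace

# Install any extra Python dependencies for the workload (hypothetical file)
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the training/inference script (hypothetical file)
COPY train.py .

# Entry point for a single-node run; Kubernetes/Helm manifests scale this out
CMD ["python", "train.py"]
```

An image built this way (e.g. `docker build -t my-intel-ai .`) would then typically be referenced from Kubernetes or Helm manifests like those the repository provides for multi-node scale-out.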
☆58 · Updated this week
Alternatives and similar repositories for ai-containers
Users interested in ai-containers are comparing it to the repositories listed below.
- OpenVINO Tokenizers extension ☆44 · Updated 3 weeks ago
- Explore our open source AI portfolio! Develop, train, and deploy your AI solutions with performance- and productivity-optimized tools fro… ☆62 · Updated 9 months ago
- Setup and Installation Instructions for Habana binaries, docker image creation ☆28 · Updated last month
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU(XPU) device. Note… ☆63 · Updated 6 months ago
- Large Language Model Text Generation Inference on Habana Gaudi ☆34 · Updated 9 months ago
- No-code CLI designed for accelerating ONNX workflows ☆221 · Updated 6 months ago
- ☆90 · Updated this week
- A curated list of OpenVINO based AI projects ☆177 · Updated 6 months ago
- Tutorials for running models on First-gen Gaudi and Gaudi2 for Training and Inference. The source files for the tutorials on https://dev… ☆63 · Updated 3 months ago
- Developer kits reference setup scripts for various kinds of Intel platforms and GPUs ☆40 · Updated this week
- AMD related optimizations for transformer models ☆96 · Updated 2 months ago
- ☆132 · Updated 3 weeks ago
- oneAPI Specification source files ☆209 · Updated 3 weeks ago
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆94 · Updated last week
- Intel® AI Assistant Builder ☆140 · Updated this week
- Source Code and Usage Samples for the Resources hosted in the NVIDIA AI Enterprise AzureML Registry ☆21 · Updated last year
- Intel® AI for Enterprise Inference optimizes AI inference services on Intel hardware using Kubernetes Orchestration. It automates LLM mod… ☆31 · Updated 3 weeks ago
- Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU) ☆204 · Updated this week
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆527 · Updated this week
- Pretrain, finetune and serve LLMs on Intel platforms with Ray ☆131 · Updated 3 months ago
- Intel® SHMEM - Device initiated shared memory based communication library ☆32 · Updated last month
- Onboarding documentation source for the AMD Ryzen™ AI Software Platform. The AMD Ryzen™ AI Software Platform enables developers to take… ☆89 · Updated 3 weeks ago
- ☆147 · Updated 3 weeks ago
- Run Generative AI models with simple C++/Python API and using OpenVINO Runtime ☆410 · Updated this week
- oneAPI Level Zero Conformance & Performance test content ☆59 · Updated this week
- Intel® Extension for TensorFlow* ☆350 · Updated 2 months ago
- [DEPRECATED] Moved to ROCm/rocm-libraries repo ☆113 · Updated last week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆85 · Updated this week
- An innovative library for efficient LLM inference via low-bit quantization ☆351 · Updated last year
- Intel Gaudi's Megatron DeepSpeed Large Language Models for training ☆16 · Updated last year