HabanaAI / Setup_and_Install
Setup and installation instructions for Habana binaries and Docker image creation
☆28 · Updated last month
Alternatives and similar repositories for Setup_and_Install
Users interested in Setup_and_Install are comparing it to the repositories listed below.
- Reference models for Intel(R) Gaudi(R) AI Accelerator ☆169 · Updated this week
- ☆132 · Updated 3 weeks ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆85 · Updated this week
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) devices. Note… ☆63 · Updated 6 months ago
- Easy and lightning-fast training of 🤗 Transformers on the Habana Gaudi processor (HPU) ☆204 · Updated this week
- Large Language Model Text Generation Inference on Habana Gaudi ☆34 · Updated 9 months ago
- This repository contains Dockerfiles, scripts, YAML files, Helm charts, etc. used to scale out AI containers with versions of TensorFlow … ☆58 · Updated this week
- ☆131 · Updated 3 weeks ago
- oneCCL Bindings for Pytorch* (deprecated) ☆104 · Updated last week
- Tutorials for running models on first-gen Gaudi and Gaudi2 for training and inference. The source files for the tutorials on https://dev… ☆63 · Updated 3 months ago
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆527 · Updated this week
- A Fusion Code Generator for NVIDIA GPUs (commonly known as "nvFuser") ☆368 · Updated this week
- This repository hosts code that supports the testing infrastructure for the PyTorch organization. For example, this repo hosts the logic … ☆104 · Updated this week
- ☆71 · Updated this week
- The Triton backend for PyTorch TorchScript models ☆170 · Updated this week
- Machine Learning Agility (MLAgility) benchmark and benchmarking tools ☆40 · Updated 5 months ago
- Bandwidth test for ROCm ☆73 · Updated this week
- MLPerf™ logging library ☆38 · Updated 3 weeks ago
- ☆55 · Updated this week
- SynapseAI Core is a reference implementation of the SynapseAI API running on Habana Gaudi ☆42 · Updated 11 months ago
- Issues related to MLPerf® Inference policies, including rules and suggested changes ☆63 · Updated this week
- Development repository for the Triton language and compiler ☆140 · Updated this week
- Intel Gaudi's Megatron-DeepSpeed large language models for training ☆16 · Updated last year
- General policies for MLPerf® benchmarks, including submission rules, coding standards, etc. ☆31 · Updated this week
- The Triton backend for the ONNX Runtime ☆170 · Updated this week
- AMD-related optimizations for transformer models ☆96 · Updated 2 months ago
- Accelerate your Gen AI with NVIDIA NIM and NVIDIA AI Workbench ☆198 · Updated 8 months ago
- Pretrain, finetune, and serve LLMs on Intel platforms with Ray ☆131 · Updated 3 months ago
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ☆325 · Updated 3 months ago
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective ☆14 · Updated 4 months ago