HabanaAI / Setup_and_Install
Setup and installation instructions for Habana binaries and Docker image creation
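Since the repository covers installing the Habana (SynapseAI) software stack and building Gaudi Docker images, a quick post-install sanity check can be useful. The following is a minimal sketch, assuming the habana_frameworks PyTorch bridge shipped with the SynapseAI / Intel Gaudi release is installed alongside a matching PyTorch build; module layout can differ between releases.

```python
# Minimal post-install sanity check for a Habana Gaudi (HPU) setup.
# Assumption: the habana_frameworks PyTorch bridge from the SynapseAI release
# is installed together with a matching PyTorch build.
import torch
import habana_frameworks.torch.core as htcore  # registers the "hpu" device with PyTorch

device = torch.device("hpu")

# Run a trivial computation on the accelerator.
x = torch.randn(64, 64, device=device)
y = x @ x.t()
htcore.mark_step()  # flush the lazy-mode graph so the op actually executes

print("HPU result shape:", y.shape, "device:", y.device)
```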
Related projects
Alternatives and complementary repositories for Setup_and_Install
- Large Language Model Text Generation Inference on Habana Gaudi
- Reference models for the Intel(R) Gaudi(R) AI Accelerator
- Easy and lightning-fast training of 🤗 Transformers on the Habana Gaudi processor (HPU) (see the sketch after this list)
- Tutorials for running models on first-gen Gaudi and Gaudi2 for training and inference. The source files for the tutorials on https://dev…
- SynapseAI Core is a reference implementation of the SynapseAI API running on Habana Gaudi
- This repository contains Dockerfiles, scripts, YAML files, Helm charts, etc. used to scale out AI containers with versions of TensorFlow …
- PArametrized Recommendation and AI Model benchmark is a repository for development of numerous uBenchmarks as well as end-to-end nets for…
- Machine Learning Agility (MLAgility) benchmark and benchmarking tools
- A high-throughput and memory-efficient inference and serving engine for LLMs
- This is a plugin that lets EC2 developers use libfabric as the network provider while running NCCL applications.
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
- GPU Stress Test is a tool to stress the compute engine of NVIDIA Tesla GPUs by running a BLAS matrix multiply using different data types…
- Development repository for the Triton language and compiler
- Run cloud-native workloads on NVIDIA GPUs
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) devices. Note…
- OpenAI Triton backend for Intel® GPUs
- Issues related to MLPerf™ Inference policies, including rules and suggested changes
- Pretrain, fine-tune, and serve LLMs on Intel platforms with Ray
- oneCCL Bindings for PyTorch*
- General policies for MLPerf™, including submission rules, coding standards, etc.
- Intel Gaudi's Megatron-DeepSpeed Large Language Models for training
- AMD SMI
- Packer and CodeBuild/Pipeline files for building EFA/NCCL base AMIs, plus base Docker build files to enable EFA/NCCL in containers
- A validation and profiling tool for AI infrastructure
- Notes and artifacts from the ONNX steering committee
- hipBLASLt is a library that provides general matrix-matrix operations with a flexible API and extends functionalities beyond a traditiona…
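For the 🤗 Transformers-on-Gaudi entry above (optimum-habana), a minimal sketch of its trainer API might look like the following. It assumes optimum-habana, transformers, and datasets are installed on a Gaudi host; the model, dataset, and hyperparameters are illustrative placeholders rather than a tested recipe.

```python
# Illustrative sketch of fine-tuning with optimum-habana's GaudiTrainer.
# Assumptions: optimum-habana, transformers, and datasets installed on a Gaudi host;
# model/dataset names and hyperparameters are placeholders, not a tested recipe.
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from optimum.habana import GaudiTrainer, GaudiTrainingArguments

model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Tokenize a small text-classification dataset.
dataset = load_dataset("glue", "sst2")

def tokenize(batch):
    return tokenizer(batch["sentence"], truncation=True, padding="max_length", max_length=128)

dataset = dataset.map(tokenize, batched=True)

args = GaudiTrainingArguments(
    output_dir="./out",
    use_habana=True,       # run on HPU instead of CPU/GPU
    use_lazy_mode=True,    # Gaudi lazy-mode graph execution
    gaudi_config_name="Habana/bert-base-uncased",  # published Gaudi config on the Hub
    per_device_train_batch_size=8,
    num_train_epochs=1,
)

trainer = GaudiTrainer(
    model=model,
    args=args,
    train_dataset=dataset["train"].select(range(1000)),
    eval_dataset=dataset["validation"],
)
trainer.train()
```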