HabanaAI / Gaudi-tutorials
Tutorials for running models on first-gen Gaudi and Gaudi2 for training and inference. These are the source files for the tutorials hosted on https://developer.habana.ai/
☆61 · Updated last week
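The tutorials build on one core pattern: moving a PyTorch workload onto the Gaudi (HPU) device. Below is a minimal sketch of that pattern, assuming a machine with the Habana PyTorch bridge (`habana_frameworks`) installed via the SynapseAI / Intel Gaudi software stack; the tiny `torch.nn.Linear` model is an illustrative placeholder, not taken from the tutorials themselves.

```python
# Minimal sketch: running a PyTorch model on a Gaudi (HPU) device.
# Assumes the Habana PyTorch bridge is installed; importing it
# registers the "hpu" device with PyTorch.
import torch
import habana_frameworks.torch.core as htcore

model = torch.nn.Linear(16, 4).to("hpu")   # placeholder model, moved to the HPU
x = torch.randn(8, 16, device="hpu")       # input tensor allocated on the HPU

out = model(x)
loss = out.sum()
loss.backward()

# In the default lazy-execution mode, mark_step() flushes the accumulated
# graph so it is compiled and executed on the accelerator.
htcore.mark_step()
print(out.to("cpu").shape)
```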
Alternatives and similar repositories for Gaudi-tutorials
Users interested in Gaudi-tutorials are comparing it to the libraries listed below.
- Large Language Model Text Generation Inference on Habana Gaudi ☆33 · Updated 3 months ago
- oneCCL Bindings for Pytorch* ☆97 · Updated 2 months ago
- Reference models for Intel(R) Gaudi(R) AI Accelerator ☆164 · Updated last month
- Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU); a usage sketch follows this list ☆188 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆76 · Updated this week
- Full End-to-End examples showing how to use First-gen Gaudi and Gaudi2 in common use cases ☆12 · Updated 6 months ago
- ☆19 · Updated this week
- Collection of kernels written in Triton language ☆132 · Updated 2 months ago
- SynapseAI Core is a reference implementation of the SynapseAI API running on Habana Gaudi ☆42 · Updated 4 months ago
- Intel Gaudi's Megatron DeepSpeed Large Language Models for training ☆13 · Updated 6 months ago
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) device. Note… ☆61 · Updated last week
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity ☆214 · Updated last year
- Fast low-bit matmul kernels in Triton ☆322 · Updated last week
- ☆46 · Updated this week
- Test suite for probing the numerical behavior of NVIDIA tensor cores ☆40 · Updated 11 months ago
- Library for modelling performance costs of different Neural Network workloads on NPU devices ☆34 · Updated last week
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ☆13 · Updated last month
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference ☆118 · Updated last year
- LLM-Inference-Bench ☆45 · Updated 2 weeks ago
- [ICLR'25] Fast Inference of MoE Models with CPU-GPU Orchestration ☆213 · Updated 7 months ago
- QuickReduce is a performant all-reduce library designed for AMD ROCm that supports inline compression. ☆28 · Updated 3 months ago
- Pretrain, finetune and serve LLMs on Intel platforms with Ray ☆129 · Updated last month
- Efficient GPU support for LLM inference with x-bit quantization (e.g. FP6, FP5) ☆252 · Updated 7 months ago
- Fast Hadamard transform in CUDA, with a PyTorch interface ☆201 · Updated last year
- Cataloging released Triton kernels. ☆238 · Updated 5 months ago
- ☆149 · Updated 2 years ago
- PyTorch emulation library for Microscaling (MX)-compatible data formats ☆247 · Updated last week
- Applied AI experiments and examples for PyTorch ☆277 · Updated 3 weeks ago
- Provides examples of writing and building Habana custom kernels using the HabanaTools ☆21 · Updated 2 months ago
- OpenAI Triton backend for Intel® GPUs☆191Updated this week
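As noted in the 🤗 Transformers-on-Gaudi entry above, here is a hedged sketch of what training with that library (Optimum Habana) typically looks like. `GaudiTrainer` and `GaudiTrainingArguments` come from the `optimum-habana` package; the BERT checkpoint, the toy two-example dataset, and the `Habana/bert-base-uncased` Gaudi config are illustrative choices, not fixed requirements.

```python
# Hedged sketch: fine-tuning a 🤗 Transformers model on Gaudi with optimum-habana.
from datasets import Dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from optimum.habana import GaudiTrainer, GaudiTrainingArguments

model_name = "bert-base-uncased"  # illustrative checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Toy two-example dataset so the sketch is self-contained.
enc = tokenizer(["a great tutorial", "a confusing example"],
                padding=True, truncation=True)
train_ds = Dataset.from_dict({**enc, "labels": [1, 0]})

args = GaudiTrainingArguments(
    output_dir="./gaudi-out",
    use_habana=True,       # run on HPU rather than CPU/GPU
    use_lazy_mode=True,    # Gaudi's default lazy-execution mode
    gaudi_config_name="Habana/bert-base-uncased",  # mixed-precision config from the Hub
    per_device_train_batch_size=2,
    num_train_epochs=1,
)

GaudiTrainer(model=model, args=args, train_dataset=train_ds).train()
```

The `gaudi_config_name` points at a GaudiConfig hosted on the Hugging Face Hub that controls HPU-specific settings such as mixed precision; beyond that, the Trainer API mirrors stock 🤗 Transformers.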