HabanaAI / Gaudi-tutorials
Tutorials for running models on first-gen Gaudi and Gaudi2 for training and inference. These are the source files for the tutorials at https://developer.habana.ai/
☆60 · Updated last week
Alternatives and similar repositories for Gaudi-tutorials
Users interested in Gaudi-tutorials are comparing it to the libraries listed below.
- Reference models for Intel(R) Gaudi(R) AI Accelerator ☆161 · Updated 2 weeks ago
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) device. Note… ☆61 · Updated 3 months ago
- Large Language Model Text Generation Inference on Habana Gaudi ☆33 · Updated 2 months ago
- Machine Learning Agility (MLAgility) benchmark and benchmarking tools ☆39 · Updated 3 weeks ago
- An experimental CPU backend for Triton (https://github.com/openai/triton) ☆43 · Updated 2 months ago
- Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU) ☆186 · Updated this week
- Collection of kernels written in Triton language ☆125 · Updated 2 months ago
- [ICLR'25] Fast Inference of MoE Models with CPU-GPU Orchestration ☆211 · Updated 6 months ago
- Intel Gaudi's Megatron DeepSpeed Large Language Models for training ☆13 · Updated 5 months ago
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference ☆118 · Updated last year
- [ICML 2024 Oral] Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs ☆107 · Updated last month
- Fast Hadamard transform in CUDA, with a PyTorch interface ☆195 · Updated last year
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity ☆211 · Updated last year
- A Vectorized N:M Format for Unleashing the Power of Sparse Tensor Cores ☆51 · Updated last year
- ☆32 · Updated last year
- oneCCL Bindings for Pytorch* ☆97 · Updated last month
- An experimental CPU backend for Triton ☆119 · Updated this week
- ShiftAddLLM: Accelerating Pretrained LLMs via Post-Training Multiplication-Less Reparameterization ☆108 · Updated 7 months ago
- [ICML 2024] KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache ☆303 · Updated 4 months ago
- Get down and dirty with FlashAttention2.0 in PyTorch; plug in and play, no complex CUDA kernels ☆105 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆75 · Updated this week
- Fast low-bit matmul kernels in Triton ☆311 · Updated this week
- End to End steps for adding custom ops in PyTorch ☆23 · Updated 4 years ago
- ☆80 · Updated 6 months ago
- Flexible simulator for mixed precision and format simulation of LLMs and vision transformers ☆50 · Updated last year
- PyTorch extension for emulating FP8 data formats on standard FP32 Xeon/GPU hardware ☆110 · Updated 6 months ago
- This library empowers users to seamlessly port pretrained models and checkpoints on the HuggingFace (HF) hub (developed using HF transfor… ☆68 · Updated this week
- ☆67 · Updated 7 months ago
- ☆149 · Updated 2 years ago
- PipeInfer: Accelerating LLM Inference using Asynchronous Pipelined Speculation ☆29 · Updated 6 months ago