intel / intel-extension-for-deepspeed
Intel® Extension for DeepSpeed* is an extension to DeepSpeed that adds feature support via SYCL kernels on Intel GPU (XPU) devices. Note: XPU is already supported in stock (upstream) DeepSpeed.
☆61 · Updated 2 weeks ago
Alternatives and similar repositories for intel-extension-for-deepspeed
Users interested in intel-extension-for-deepspeed are comparing it to the libraries listed below.
- oneCCL Bindings for PyTorch* ☆99 · Updated this week
- A CUTLASS implementation using SYCL ☆30 · Updated this week
- OpenAI Triton backend for Intel® GPUs ☆191 · Updated this week
- ☆48 · Updated last week
- RCCL Performance Benchmark Tests ☆70 · Updated last week
- ☆40 · Updated this week
- oneAPI Collective Communications Library (oneCCL) ☆238 · Updated last week
- Ahead-of-Time (AOT) Triton Math Library ☆70 · Updated this week
- Microsoft Collective Communication Library ☆64 · Updated 7 months ago
- Development repository for the Triton language and compiler ☆125 · Updated this week
- A Fusion Code Generator for NVIDIA GPUs (commonly known as "nvFuser") ☆343 · Updated this week
- ☆62 · Updated 6 months ago
- MSCCL++: A GPU-driven communication stack for scalable AI applications ☆385 · Updated this week
- ☆73 · Updated 3 months ago
- An extension library of the WMMA API (Tensor Core API) ☆99 · Updated last year
- Test suite for probing the numerical behavior of NVIDIA tensor cores ☆40 · Updated 11 months ago
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5) ☆255 · Updated 8 months ago
- A lightweight design for computation-communication overlap ☆146 · Updated 3 weeks ago
- Benchmark code for the "Online normalizer calculation for softmax" paper ☆95 · Updated 6 years ago
- AI Tensor Engine for ROCm ☆226 · Updated this week
- PArametrized Recommendation and Ai Model benchmark is a repository for development of numerous uBenchmarks as well as end-to-end nets for… ☆147 · Updated last week
- An experimental CPU backend for Triton (https://github.com/openai/triton) ☆43 · Updated 3 months ago
- ☆94 · Updated 6 months ago
- ☆102 · Updated last year
- ☆123 · Updated 2 months ago
- High-speed GEMV kernels, up to 2.7x speedup over the PyTorch baseline ☆112 · Updated last year
- AI Accelerator Benchmark focuses on evaluating AI accelerators from a practical production perspective, including the ease of use and ver… ☆251 · Updated 2 weeks ago
- An experimental CPU backend for Triton ☆135 · Updated last month
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity ☆216 · Updated last year
- ☆20 · Updated last week