intel / intel-extension-for-deepspeed
Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) devices. Note: XPU is already supported in stock (upstream) DeepSpeed.
☆61, updated 3 months ago
Alternatives and similar repositories for intel-extension-for-deepspeed
Users interested in intel-extension-for-deepspeed are comparing it to the libraries listed below.
- oneCCL Bindings for Pytorch* (☆97, updated last month)
- OpenAI Triton backend for Intel® GPUs (☆187, updated this week)
- RCCL Performance Benchmark Tests (☆67, updated last week)
- ☆47, updated last week
- oneAPI Collective Communications Library (oneCCL) (☆234, updated last week)
- ☆61, updated 5 months ago
- ☆46, updated this week
- ☆36, updated this week
- Intel® Tensor Processing Primitives extension for Pytorch* (☆17, updated 2 weeks ago)
- A CUTLASS implementation using SYCL (☆23, updated this week)
- Microsoft Collective Communication Library (☆65, updated 6 months ago)
- A high-throughput and memory-efficient inference and serving engine for LLMs (☆75, updated this week)
- MSCCL++: A GPU-driven communication stack for scalable AI applications (☆365, updated this week)
- An extension library of WMMA API (Tensor Core API) (☆97, updated 10 months ago)
- ☆71, updated 2 months ago
- ☆24, updated 3 weeks ago
- Ahead of Time (AOT) Triton Math Library (☆64, updated last week)
- Development repository for the Triton language and compiler (☆122, updated this week)
- AI Tensor Engine for ROCm (☆201, updated this week)
- Reference models for Intel(R) Gaudi(R) AI Accelerator (☆161, updated 2 weeks ago)
- ☆86, updated 5 months ago
- rocSHMEM intra-kernel networking runtime for AMD dGPUs on the ROCm platform (☆86, updated last week)
- A lightweight design for computation-communication overlap (☆132, updated 3 weeks ago)
- Benchmark code for the "Online normalizer calculation for softmax" paper (☆94, updated 6 years ago)
- An efficient GPU support for LLM inference with x-bit quantization (e.g. FP6, FP5) (☆251, updated 7 months ago)
- ☆79, updated 6 months ago
- ☆18, updated this week
- Large Language Model Text Generation Inference on Habana Gaudi (☆33, updated 2 months ago)
- Benchmarks to capture important workloads (☆31, updated 4 months ago)
- CUDA GPU Benchmark (☆26, updated 4 months ago)