intel / intel-extension-for-deepspeed
Intel® Extension for DeepSpeed* is an extension to DeepSpeed that adds feature support through SYCL kernels on Intel GPU (XPU) devices. Note: XPU is already supported in stock (upstream) DeepSpeed.
☆63 · Updated 5 months ago
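Since XPU support lives upstream, stock DeepSpeed can target Intel GPUs through its accelerator abstraction. The following is a minimal sketch of that path, not code from this repository: the model and config values are placeholders, and depending on the PyTorch version, intel_extension_for_pytorch may also be required for the PyTorch XPU backend.

```python
# Minimal sketch: initialize a DeepSpeed engine and let the accelerator
# abstraction pick the device (reports "xpu" on an Intel GPU build,
# "cuda" on NVIDIA). Model and config below are illustrative placeholders.
import torch
import deepspeed
from deepspeed.accelerator import get_accelerator

print("active accelerator:", get_accelerator().device_name())  # e.g. "xpu"

model = torch.nn.Linear(1024, 1024)
ds_config = {"train_micro_batch_size_per_gpu": 1}

# Without an optimizer this builds a forward-only engine placed on the
# active accelerator's device.
engine, _, _, _ = deepspeed.initialize(model=model, config=ds_config)

x = torch.randn(1, 1024, device=engine.device)
out = engine(x)
print(out.shape, out.device)
```

This would typically be launched with the `deepspeed` launcher so the distributed backend (oneCCL on XPU, NCCL on CUDA) is set up automatically.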
Alternatives and similar repositories for intel-extension-for-deepspeed
Users interested in intel-extension-for-deepspeed are comparing it to the libraries listed below.
- oneCCL Bindings for Pytorch* (deprecated) ☆103 · Updated last month
- OpenAI Triton backend for Intel® GPUs ☆221 · Updated this week
- ☆51 · Updated this week
- RCCL Performance Benchmark Tests ☆81 · Updated last week
- ☆65 · Updated last week
- oneAPI Collective Communications Library (oneCCL) ☆248 · Updated this week
- Ahead of Time (AOT) Triton Math Library ☆84 · Updated 3 weeks ago
- Development repository for the Triton language and compiler ☆137 · Updated this week
- Efficient GPU support for LLM inference with x-bit quantization (e.g. FP6, FP5) ☆272 · Updated 4 months ago
- ☆71 · Updated 8 months ago
- SYCL* Templates for Linear Algebra (SYCL*TLA) - SYCL-based CUTLASS implementation for Intel GPUs ☆58 · Updated this week
- QuickReduce is a performant all-reduce library designed for AMD ROCm that supports inline compression. ☆35 · Updated 3 months ago
- AMD RAD's multi-GPU Triton-based framework for seamless multi-GPU programming ☆119 · Updated last week
- Microsoft Collective Communication Library ☆66 · Updated last year
- MAD (Model Automation and Dashboarding) ☆30 · Updated 2 weeks ago
- AI Tensor Engine for ROCm ☆309 · Updated this week
- A Fusion Code Generator for NVIDIA GPUs (commonly known as "nvFuser") ☆363 · Updated last week
- ROCm Communication Collectives Library (RCCL) ☆403 · Updated last week
- ☆62 · Updated 11 months ago
- ☆94 · Updated last year
- An experimental CPU backend for Triton (https://github.com/openai/triton) ☆47 · Updated 3 months ago
- PArametrized Recommendation and Ai Model benchmark is a repository for development of numerous uBenchmarks as well as end-to-end nets for… ☆153 · Updated last week
- rocSHMEM intra-kernel networking runtime for AMD dGPUs on the ROCm platform. ☆130 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆85 · Updated last week
- Benchmark code for the "Online normalizer calculation for softmax" paper (see the sketch after this list) ☆102 · Updated 7 years ago
- MSCCL++: A GPU-driven communication stack for scalable AI applications ☆439 · Updated this week
- GitHub mirror of the triton-lang/triton repo ☆100 · Updated this week
- AI Accelerator Benchmark focuses on evaluating AI Accelerators from a practical production perspective, including the ease of use and ver… ☆276 · Updated 3 months ago
- A lightweight design for computation-communication overlap ☆190 · Updated last month
- An extension library of WMMA API (Tensor Core API) ☆109 · Updated last year
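For the online-softmax entry above, the technique being benchmarked computes softmax in a single pass by updating the running maximum and the normalizer together, rescaling the normalizer whenever the maximum grows. A small illustrative sketch of that idea (not code from the benchmark repository):

```python
import math

def online_softmax(xs):
    """Single-pass softmax: keep a running max m and a normalizer d,
    rescaling d whenever m increases (the online normalizer trick)."""
    m = float("-inf")  # running maximum
    d = 0.0            # running sum of exp(x_i - m)
    for x in xs:
        m_new = max(m, x)
        d = d * math.exp(m - m_new) + math.exp(x - m_new)
        m = m_new
    return [math.exp(x - m) / d for x in xs]

print(online_softmax([1.0, 2.0, 3.0]))  # matches the usual two-pass result
```

The payoff is that the input only needs to be read once, which is what makes this formulation attractive for memory-bound GPU kernels.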