intel / intel-extension-for-deepspeed
Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings SYCL kernel support to Intel GPU (XPU) devices. Note that XPU is already supported in stock (upstream) DeepSpeed.
☆61 · Updated last week
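As a rough illustration of what running DeepSpeed on an XPU device looks like, here is a minimal sketch. It assumes intel_extension_for_pytorch, oneccl_bindings_for_pytorch, and this extension are installed alongside DeepSpeed and that the script is started with the `deepspeed` launcher; the model and config below are placeholders, not part of this repository.

```python
# Minimal sketch of DeepSpeed training on an Intel GPU (XPU), under the
# assumptions stated above (IPEX registers the torch.xpu backend and
# oneCCL bindings provide the "ccl" distributed backend).
import torch
import intel_extension_for_pytorch  # noqa: F401  registers the XPU device backend
import oneccl_bindings_for_pytorch  # noqa: F401  provides the oneCCL communication backend
import deepspeed
from deepspeed.accelerator import get_accelerator

ds_config = {
    "train_batch_size": 8,
    "optimizer": {"type": "Adam", "params": {"lr": 1e-3}},
    "bf16": {"enabled": True},
}

model = torch.nn.Linear(1024, 1024)  # placeholder model

# DeepSpeed selects the accelerator (XPU here) through its accelerator abstraction.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)
device = get_accelerator().device_name()  # expected to report "xpu"

for _ in range(10):
    x = torch.randn(8, 1024, dtype=torch.bfloat16, device=device)
    loss = engine(x).float().pow(2).mean()
    engine.backward(loss)
    engine.step()
```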
Alternatives and similar repositories for intel-extension-for-deepspeed
Users interested in intel-extension-for-deepspeed are comparing it to the libraries listed below.
- oneCCL Bindings for Pytorch* · ☆97 · Updated 2 months ago
- OpenAI Triton backend for Intel® GPUs · ☆191 · Updated this week
- ☆62 · Updated 6 months ago
- oneAPI Collective Communications Library (oneCCL) · ☆237 · Updated last week
- ☆46 · Updated this week
- A CUTLASS implementation using SYCL · ☆27 · Updated this week
- Intel® Tensor Processing Primitives extension for Pytorch* · ☆17 · Updated last week
- Microsoft Collective Communication Library · ☆64 · Updated 7 months ago
- ☆47 · Updated 3 weeks ago
- RCCL Performance Benchmark Tests · ☆68 · Updated last month
- Synthesizer for optimal collective communication algorithms · ☆108 · Updated last year
- ☆38 · Updated this week
- MSCCL++: A GPU-driven communication stack for scalable AI applications · ☆379 · Updated this week
- rocSHMEM intra-kernel networking runtime for AMD dGPUs on the ROCm platform · ☆90 · Updated this week
- NCCL Profiling Kit · ☆138 · Updated 11 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs · ☆76 · Updated this week
- ☆90 · Updated 5 months ago
- A lightweight design for computation-communication overlap · ☆143 · Updated this week
- Ahead of Time (AOT) Triton Math Library · ☆66 · Updated last week
- A Fusion Code Generator for NVIDIA GPUs (commonly known as "nvFuser") · ☆337 · Updated this week
- Development repository for the Triton language and compiler · ☆125 · Updated this week
- ☆72 · Updated 2 months ago
- Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processors (HPU) · ☆188 · Updated this week
- An experimental CPU backend for Triton (https://github.com/openai/triton) · ☆43 · Updated 3 months ago
- PArametrized Recommendation and Ai Model benchmark is a repository for development of numerous uBenchmarks as well as end to end nets for… · ☆144 · Updated last week
- ☆81 · Updated 7 months ago
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5) · ☆252 · Updated 7 months ago
- ☆79 · Updated 2 years ago
- Microsoft Collective Communication Library · ☆350 · Updated last year
- ☆98 · Updated last year