HabanaAI / DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
☆14 · Updated 3 weeks ago
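DeepSpeed's training and inference optimizations are typically enabled through a JSON configuration file passed to `deepspeed.initialize`. A minimal sketch (field names follow the upstream DeepSpeed configuration docs; the values here are illustrative, not recommendations):

```json
{
  "train_batch_size": 32,
  "fp16": { "enabled": true },
  "zero_optimization": { "stage": 2 }
}
```

`zero_optimization.stage` selects the ZeRO partitioning level (0 disables it; stage 2 additionally partitions optimizer states and gradients across data-parallel ranks).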
Alternatives and similar repositories for DeepSpeed
Users interested in DeepSpeed are comparing it to the libraries listed below.
- oneCCL Bindings for Pytorch* (deprecated) · ☆104 · Updated last month
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on the Intel GPU (XPU) device. Note… · ☆64 · Updated 7 months ago
- SYCL* Templates for Linear Algebra (SYCL*TLA) - a SYCL-based CUTLASS implementation for Intel GPUs · ☆66 · Updated last week
- oneAPI Collective Communications Library (oneCCL) · ☆254 · Updated last week
- A high-throughput and memory-efficient inference and serving engine for LLMs · ☆85 · Updated this week
- Reference models for the Intel(R) Gaudi(R) AI Accelerator · ☆170 · Updated 3 weeks ago
- [DEPRECATED] Moved to ROCm/rocm-systems repo · ☆86 · Updated 2 weeks ago
- Intel® Tensor Processing Primitives extension for Pytorch* · ☆18 · Updated 3 weeks ago
- ☆61 · Updated last year
- Development repository for the Triton language and compiler · ☆140 · Updated last week
- Issues related to MLPerf® Inference policies, including rules and suggested changes · ☆63 · Updated this week
- [DEPRECATED] Moved to ROCm/rocm-systems repo · ☆410 · Updated this week
- Training material for Nsight developer tools · ☆177 · Updated last year
- [DEPRECATED] Moved to ROCm/rocm-libraries repo. NOTE: the develop branch is maintained as a read-only mirror · ☆518 · Updated this week
- OpenAI Triton backend for Intel® GPUs · ☆226 · Updated this week
- ☆60 · Updated this week
- Provides examples of how to write and build Habana custom kernels using the HabanaTools · ☆25 · Updated 9 months ago
- [DEPRECATED] Moved to ROCm/rocm-systems repo · ☆144 · Updated this week
- Large Language Model Text Generation Inference on Habana Gaudi · ☆34 · Updated 10 months ago
- ☆24 · Updated 3 months ago
- A Fusion Code Generator for NVIDIA GPUs (commonly known as "nvFuser") · ☆380 · Updated this week
- PArametrized Recommendation and AI Model benchmark is a repository for development of numerous uBenchmarks as well as end-to-end nets for… · ☆156 · Updated this week
- Efficient GPU support for LLM inference with x-bit quantization (e.g. FP6, FP5) · ☆276 · Updated 6 months ago
- AI Accelerator Benchmark focuses on evaluating AI accelerators from a practical production perspective, including the ease of use and ver… · ☆298 · Updated 2 weeks ago
- oneAPI Level Zero Conformance & Performance test content · ☆60 · Updated this week
- ☆74 · Updated this week
- QuickReduce is a performant all-reduce library designed for AMD ROCm that supports inline compression · ☆36 · Updated 5 months ago
- Microsoft Collective Communication Library · ☆66 · Updated last year
- A tool for bandwidth measurements on NVIDIA GPUs · ☆617 · Updated 9 months ago
- Magnum IO community repo · ☆109 · Updated 2 months ago