HabanaAI / gaudi-pytorch-bridge
☆15 · Updated last week
Alternatives and similar repositories for gaudi-pytorch-bridge
Users interested in gaudi-pytorch-bridge are comparing it to the libraries listed below.
- ☆123 · Updated 2 months ago
- ☆94 · Updated 6 months ago
- A lightweight design for computation-communication overlap. ☆146 · Updated 3 weeks ago
- ☆102 · Updated last year
- ☆216 · Updated last year
- A CUTLASS implementation using SYCL. ☆30 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs. ☆77 · Updated this week
- ☆84 · Updated last month
- A GPU-optimized system for efficient long-context LLM decoding with low-bit KV cache. ☆52 · Updated this week
- Optimize GEMM with Tensor Cores step by step. ☆28 · Updated last year
- ☆125 · Updated 7 months ago
- GitHub mirror of the triton-lang/triton repo. ☆48 · Updated this week
- ☆79 · Updated 2 years ago
- High-performance Transformer implementation in C++. ☆125 · Updated 5 months ago
- ☆48 · Updated last year
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity. ☆216 · Updated last year
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models. ☆67 · Updated 3 months ago
- Microsoft Collective Communication Library. ☆64 · Updated 7 months ago
- OpenAI Triton backend for Intel® GPUs. ☆191 · Updated this week
- nnScaler: Compiling DNN models for Parallel Training. ☆113 · Updated last week
- ☆62 · Updated 6 months ago
- ☆35 · Updated last year
- ☆149 · Updated 11 months ago
- Magicube, a high-performance library for quantized sparse matrix operations (SpMM and SDDMM) of deep learning on Tensor Cores. ☆89 · Updated 2 years ago
- ☆96 · Updated 10 months ago
- LLM inference analyzer for different hardware platforms. ☆79 · Updated last week
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling. ☆59 · Updated last year
- ☆106 · Updated 8 months ago
- An extension library of the WMMA API (Tensor Core API). ☆99 · Updated last year
- ☆64 · Updated last year