DeepLink-org / ditorch
☆27 · Updated last year
Alternatives and similar repositories for ditorch
Users interested in ditorch are comparing it to the libraries listed below.
- FlagCX is a scalable and adaptive cross-chip communication library. ☆172 · Updated this week
- Performance of the C++ interfaces of flash attention and flash attention v2 in large language model (LLM) inference scenarios. ☆44 · Updated 11 months ago
- A PyTorch distributed training acceleration framework. ☆55 · Updated 5 months ago
- A standalone Flash Attention v2 kernel without a libtorch dependency. ☆113 · Updated last year
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆123 · Updated last month
- A Triton JIT runtime and FFI provider in C++. ☆31 · Updated last week
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer. ☆96 · Updated 4 months ago
- Flexible and modular LLM inference via Triton, focused on kernel optimization using CUBIN binaries and starting from the gpt-oss model. ☆64 · Updated 3 months ago
- A llama model inference framework implemented in CUDA C++. ☆64 · Updated last year
- FlagTree is a unified compiler supporting multiple AI chip backends for custom deep-learning operations, forked from triton-lang… ☆200 · Updated this week
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference (a minimal sketch of decode-stage attention follows this list). ☆46 · Updated 7 months ago
- Compare different hardware platforms via the Roofline Model for LLM inference tasks (a worked roofline example follows this list). ☆120 · Updated last year
- DLSlime: a flexible and efficient heterogeneous transfer toolkit. ☆92 · Updated last week
- Multiple GEMM operators constructed with cutlass to support LLM inference. ☆20 · Updated 6 months ago
- Triton adapter for Ascend; mirror of https://gitee.com/ascend/triton-ascend ☆107 · Updated this week
- FP8 flash attention implemented on the Ada architecture using the cutlass library. ☆78 · Updated last year
- A simplified flash-attention implementation using cutlass, intended to be educational. ☆54 · Updated last year
- Llama INT4 CUDA inference with AWQ. ☆55 · Updated last year
- QQQ is a hardware-optimized W4A8 quantization scheme for LLMs (a rough W4A8 sketch follows this list). ☆154 · Updated 5 months ago
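
A few of the entries above lend themselves to short illustrations. First, the decode-stage attention that the Decoding Attention entry targets: at each generation step, a single new query attends over all cached keys and values. A minimal NumPy sketch (the function name and shapes are illustrative, not taken from that repository):

```python
import numpy as np

def decode_attention(q, k_cache, v_cache):
    """Single-query attention over the KV cache for one head.
    q: (d,); k_cache, v_cache: (t, d) for t previously cached tokens."""
    d = q.shape[-1]
    scores = k_cache @ q / np.sqrt(d)   # (t,) similarity with each past token
    scores -= scores.max()              # subtract max for numerical stability
    probs = np.exp(scores)
    probs /= probs.sum()                # softmax over the cached positions
    return probs @ v_cache              # (d,) weighted sum of cached values

# One decode step: the new token's query attends to 128 cached tokens.
t, d = 128, 64
out = decode_attention(np.random.randn(d),
                       np.random.randn(t, d),
                       np.random.randn(t, d))
assert out.shape == (d,)
```

Because each step reads the whole KV cache to produce a single output vector, this kernel is memory-bound, which is exactly the regime the roofline model describes.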
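Second, the roofline comparison reduces to one formula: attainable throughput is the minimum of peak compute and arithmetic intensity times memory bandwidth. A sketch with hypothetical hardware numbers (the 300 TFLOPS fp16 peak and 2 TB/s bandwidth are illustrative, not measurements from the repository):

```python
def attainable_tflops(tflops, terabytes, peak_tflops, bandwidth_tbs):
    """Roofline: min(peak compute, arithmetic intensity * bandwidth)."""
    intensity = tflops / terabytes          # FLOPs per byte moved
    return min(peak_tflops, intensity * bandwidth_tbs)

# A decode-stage GEMV against an fp16 weight matrix of shape (4096, 4096):
m, k = 4096, 4096
tflops = 2 * m * k / 1e12                   # one multiply-add per weight
terabytes = 2 * m * k / 1e12                # 2 bytes per fp16 weight
# Intensity is ~1 FLOP/byte, so a 2 TB/s machine tops out near 2 TFLOPS,
# far below its hypothetical 300 TFLOPS fp16 peak: firmly memory-bound.
print(attainable_tflops(tflops, terabytes, peak_tflops=300, bandwidth_tbs=2))
```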
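Finally, the W4A8 scheme named in the QQQ entry: weights stored as 4-bit integers, activations as 8-bit. The per-channel symmetric quantization below is a common baseline sketch, not necessarily QQQ's exact recipe:

```python
import numpy as np

def quantize_w4(w):
    """Per-output-channel symmetric 4-bit weight quantization."""
    scale = np.abs(w).max(axis=0) / 7.0     # map max |w| to the int4 max, 7
    return np.clip(np.round(w / scale), -8, 7).astype(np.int8), scale

def quantize_a8(x):
    """Per-tensor symmetric 8-bit activation quantization."""
    scale = np.abs(x).max() / 127.0
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8), scale

# W4A8 matmul: accumulate in int32, then rescale back to float.
w = np.random.randn(64, 32).astype(np.float32)   # (in_features, out_features)
x = np.random.randn(4, 64).astype(np.float32)    # (batch, in_features)
qw, sw = quantize_w4(w)
qx, sx = quantize_a8(x)
y = (qx.astype(np.int32) @ qw.astype(np.int32)) * (sx * sw)
print(np.abs(y - x @ w).max())                   # small quantization error
```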