DeepLink-org / ditorch
☆23 · Updated 6 months ago
Alternatives and similar repositories for ditorch
Users interested in ditorch are comparing it to the libraries listed below.
- ☆31 · Updated 5 months ago
- ☆128 · Updated 7 months ago
- ☆90 · Updated 3 months ago
- ☆16 · Updated this week
- A llama model inference framework implemented in CUDA C++ ☆58 · Updated 8 months ago
- FlagTree is a unified compiler for multiple AI chips, which is forked from triton-lang/triton. ☆64 · Updated last week
- Standalone Flash Attention v2 kernel without libtorch dependency ☆111 · Updated 10 months ago
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios. ☆39 · Updated 4 months ago
- ☆149 · Updated 6 months ago
- ☆79 · Updated last year
- ☆139 · Updated last year
- ☆59 · Updated 8 months ago
- PyTorch distributed training acceleration framework ☆51 · Updated 5 months ago
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆102 · Updated 2 months ago
- ☆96 · Updated 10 months ago
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer ☆93 · Updated last week
- ⚡️FFPA: Extend FlashAttention-2 with Split-D, achieve ~O(1) SRAM complexity for large headdim, 1.8x~3x↑ vs SDPA.🎉 ☆192 · Updated 2 months ago
- A layered, decoupled deep learning inference engine ☆74 · Updated 5 months ago
- AI Accelerator Benchmark focuses on evaluating AI Accelerators from a practical production perspective, including the ease of use and ver… ☆251 · Updated 3 weeks ago
- ☆79 · Updated last week
- Compare different hardware platforms via the Roofline Model for LLM inference tasks (see the roofline sketch after this list). ☆108 · Updated last year
- ⚡️Write HGEMM from scratch using Tensor Cores with WMMA, MMA and CuTe API, Achieve Peak⚡️ Performance. ☆87 · Updated 2 months ago
- Decoding Attention is specially optimized for MHA, MQA, GQA and MLA using CUDA core for the decoding stage of LLM inference. ☆39 · Updated last month
- A simplified flash-attention implementation built with cutlass, written to be instructive ☆44 · Updated 11 months ago
- Triton adapter for Ascend. Mirror of https://gitee.com/ascend/triton-ascend ☆60 · Updated this week
- Benchmark code for the "Online normalizer calculation for softmax" paper (see the softmax sketch after this list) ☆95 · Updated 6 years ago
- ☆14 · Updated 11 months ago
- Transformer-related optimization, including BERT and GPT ☆17 · Updated last year
- ☆87 · Updated 2 months ago
- An easy-to-use package for implementing SmoothQuant for LLMs ☆102 · Updated 3 months ago
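
The roofline comparison entry above reduces to one formula: attainable throughput is the minimum of peak compute and memory bandwidth times arithmetic intensity (FLOPs per byte moved). Below is a minimal C++ sketch of that formula; the hardware numbers and the `attainable_tflops` helper are hypothetical illustrations, not taken from the repository.

```cpp
#include <algorithm>
#include <cstdio>

// Roofline model: throughput is capped either by peak compute or by
// memory bandwidth times arithmetic intensity, whichever is lower.
double attainable_tflops(double peak_tflops, double bw_tb_per_s,
                         double ai_flop_per_byte) {
    return std::min(peak_tflops, bw_tb_per_s * ai_flop_per_byte);
}

int main() {
    // Hypothetical accelerator: 300 TFLOPS fp16 peak, 2 TB/s HBM bandwidth.
    double peak = 300.0, bw = 2.0;
    // LLM decode is GEMV-like: ~2 FLOPs per 2-byte fp16 weight read,
    // i.e. arithmetic intensity near 1 FLOP/byte -> memory bound.
    std::printf("decode  (AI=1):   %.1f TFLOPS\n",
                attainable_tflops(peak, bw, 1.0));
    // Prefill batches many tokens, pushing AI past the machine balance
    // point (peak/bw = 150 FLOP/byte here) -> compute bound.
    std::printf("prefill (AI=300): %.1f TFLOPS\n",
                attainable_tflops(peak, bw, 300.0));
}
```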
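
The "Online normalizer calculation for softmax" entry above refers to the one-pass normalizer recurrence (Milakov & Gimelshein, 2018) that FlashAttention-style kernels build on. Below is a minimal scalar C++ sketch of that recurrence, not the repository's benchmark code.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// One-pass softmax normalizer: keep a running maximum m and a running
// sum d of exp(x_i - m), rescaling d whenever the maximum grows.
std::vector<float> online_softmax(const std::vector<float>& x) {
    float m = -INFINITY;  // running maximum
    float d = 0.0f;       // running normalizer
    for (float v : x) {
        float m_new = std::fmax(m, v);
        d = d * std::exp(m - m_new) + std::exp(v - m_new);
        m = m_new;
    }
    std::vector<float> y(x.size());
    for (size_t i = 0; i < x.size(); ++i)
        y[i] = std::exp(x[i] - m) / d;
    return y;
}

int main() {
    for (float p : online_softmax({1.0f, 2.0f, 3.0f}))
        std::printf("%.4f ", p);  // 0.0900 0.2447 0.6652
    std::printf("\n");
}
```

Because the max and the sum are maintained in a single pass, a kernel can fuse what is normally two reads of the input into one, which is the memory-traffic saving the paper benchmarks.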