DeepLink-org / ditorch
☆23 · Updated 7 months ago
Alternatives and similar repositories for ditorch
Users interested in ditorch are comparing it to the libraries listed below.
- ☆128 · Updated 7 months ago
- ☆31 · Updated 6 months ago
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios. ☆40 · Updated 5 months ago
- ☆150 · Updated 7 months ago
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆104 · Updated 3 months ago
- ☆84 · Updated last week
- ☆92 · Updated 4 months ago
- ☆140 · Updated last year
- PyTorch distributed training acceleration framework ☆51 · Updated last week
- Standalone Flash Attention v2 kernel without libtorch dependency ☆111 · Updated 11 months ago
- ☆59 · Updated 9 months ago
- Compare different hardware platforms via the Roofline Model for LLM inference tasks (see the roofline sketch after this list). ☆111 · Updated last year
- FlagTree is a unified compiler for multiple AI chips, which is forked from triton-lang/triton. ☆72 · Updated this week
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer ☆94 · Updated 2 weeks ago
- ☆97 · Updated 11 months ago
- ☆17 · Updated this week
- A simplified flash-attention implementation using cutlass, with educational value ☆45 · Updated last year
- An easy-to-use package for implementing SmoothQuant for LLMs ☆104 · Updated 4 months ago
- A llama model inference framework implemented in CUDA C++ ☆60 · Updated 9 months ago
- QQQ is an innovative and hardware-optimized W4A8 quantization solution for LLMs. ☆138 · Updated this week
- AI Accelerator Benchmark focuses on evaluating AI Accelerators from a practical production perspective, including the ease of use and ver… ☆257 · Updated this week
- 🤖FFPA: Extend FlashAttention-2 with Split-D, ~O(1) SRAM complexity for large headdim, 1.8x~3x↑🎉 vs SDPA EA. ☆211 · Updated 2 weeks ago
- ⚡️Write HGEMM from scratch using Tensor Cores with WMMA, MMA and CuTe APIs, achieving peak⚡️ performance. ☆100 · Updated 3 months ago
- llama INT4 CUDA inference with AWQ ☆54 · Updated 7 months ago
- ☆145 · Updated 5 months ago
- Transformer-related optimization, including BERT, GPT ☆17 · Updated 2 years ago
- A layered and decoupled deep learning inference engine ☆75 · Updated 6 months ago
- Benchmark code for the "Online normalizer calculation for softmax" paper (a minimal sketch follows this list) ☆98 · Updated 7 years ago
- ☆12 · Updated 5 months ago
- FP8 flash attention implemented with the cutlass library on the Ada architecture ☆74 · Updated last year
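
For the roofline-comparison entry above, a minimal sketch of the model that repository applies: attainable throughput is the minimum of peak compute and memory bandwidth times arithmetic intensity. All names and figures below are illustrative, not taken from that repository.

```python
def roofline_time(flops, bytes_moved, peak_flops, peak_bw_bytes):
    """Estimate kernel runtime from the roofline model:
    attainable = min(peak compute, bandwidth * arithmetic intensity)."""
    intensity = flops / bytes_moved                        # FLOPs per byte
    attainable = min(peak_flops, peak_bw_bytes * intensity)
    return flops / attainable                              # seconds

# LLM decode is roughly a GEMV per token and is memory-bound, so the
# model reduces to time ≈ bytes_moved / peak_bw_bytes. Hypothetical
# numbers: a 7B-parameter model in fp16 on A100-class peaks.
print(roofline_time(flops=2 * 7e9,
                    bytes_moved=7e9 * 2,      # weights read once, 2 B each
                    peak_flops=312e12,
                    peak_bw_bytes=2.0e12))    # ≈ 0.007 s per token
```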
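And for the online-softmax benchmark entry, a sketch of the single-pass normalizer from the paper, assuming the standard formulation (the function name is hypothetical): the running maximum and normalizer are updated together, rescaling the normalizer whenever the maximum changes.

```python
import math

def online_softmax(xs):
    # One pass over xs, maintaining a running max m and a running
    # normalizer d expressed relative to exp(m).
    m, d = float("-inf"), 0.0
    for x in xs:
        m_new = max(m, x)
        # Rescale the old normalizer to the new max, then add the new term.
        d = d * math.exp(m - m_new) + math.exp(x - m_new)
        m = m_new
    return [math.exp(x - m) / d for x in xs]

print(online_softmax([1.0, 2.0, 3.0]))  # ≈ [0.090, 0.245, 0.665]
```

The same single-pass rescaling trick underlies the flash-attention kernels listed above, which is presumably why this benchmark appears among the alternatives.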