☆155 · Updated Mar 4, 2025
Alternatives and similar repositories for deepseekv2-profile
Users interested in deepseekv2-profile are comparing it to the libraries listed below.
- High-performance RMSNorm implementation using SM core storage (registers and shared memory) ☆30 · Updated Jan 22, 2026
- A lightweight design for computation-communication overlap. ☆225 · Updated Jan 20, 2026
- ☆23 · Updated Aug 14, 2024
- ☆261 · Updated Jul 11, 2024
- Implement Flash Attention using CuTe. ☆102 · Updated Dec 17, 2024
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. ☆1,041 · Updated Sep 4, 2024
- Compare different hardware platforms via the Roofline Model for LLM inference tasks. ☆118 · Updated Mar 13, 2024
- Flash attention tutorial written in Python, Triton, CUDA, and CUTLASS. ☆491 · Updated Jan 20, 2026
- ☆65 · Updated Apr 26, 2025
- FP8 flash attention implemented on the Ada architecture using the CUTLASS library. ☆81 · Updated Aug 12, 2024
- ☆13 · Updated Jan 7, 2025
- Hydragen: High-Throughput LLM Inference with Shared Prefixes ☆49 · Updated May 10, 2024
- ☆119 · Updated May 16, 2025
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆466 · Updated May 30, 2025
- Expert Specialization MoE Solution based on CUTLASS ☆27 · Updated Jan 19, 2026
- Benchmark tests supporting the TiledCUDA library. ☆18 · Updated Nov 19, 2024
- DeeperGEMM: crazy optimized version ☆75 · Updated May 5, 2025
- Standalone Flash Attention v2 kernel without libtorch dependency ☆112 · Updated Sep 10, 2024
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. ☆1,273 · Updated Aug 28, 2025
- Fast inference from large language models via speculative decoding ☆904 · Updated Aug 22, 2024
- ☆52 · Updated May 19, 2025
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ☆977 · Updated Feb 5, 2026
- Materials for learning SGLang ☆775 · Updated Jan 5, 2026
- DeepSeek-V3/R1 inference performance simulator ☆189 · Updated Mar 27, 2025
- QQQ is an innovative and hardware-optimized W4A8 quantization solution for LLMs. ☆155 · Updated Aug 21, 2025
- [ICLR 2025] PEARL: Parallel Speculative Decoding with Adaptive Draft Length ☆151 · Updated Dec 23, 2025
- Distributed Compiler based on Triton for Parallel Systems ☆1,394 · Updated Mar 11, 2026
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆531 · Updated Feb 10, 2025
- PyTorch implementation of the Flash Spectral Transform Unit. ☆22 · Updated Sep 19, 2024
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long-Context Transformer Model Training and Inference ☆649 · Updated Jan 15, 2026
- FlashInfer: Kernel Library for LLM Serving ☆5,194 · Updated this week
- Fastest kernels written from scratch ☆561 · Updated Sep 18, 2025
- A simplified flash-attention implementation using CUTLASS, intended for teaching. ☆59 · Updated Aug 12, 2024
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts ☆40 · Updated Feb 29, 2024
- How to optimize common algorithms in CUDA. ☆2,872 · Updated Mar 17, 2026
- Odysseus: Playground of LLM Sequence Parallelism ☆79 · Updated Jun 17, 2024
- ⚡️ Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs to achieve peak performance. ☆149 · Updated May 10, 2025
- ☆21 · Updated Jul 24, 2025
- https://bbuf.github.io/gpu-glossary-zh/ ☆26 · Updated Nov 7, 2025