madsys-dev / deepseekv2-profile
☆101 · Updated 6 months ago
Alternatives and similar repositories for deepseekv2-profile:
Users interested in deepseekv2-profile are comparing it to the libraries listed below.
- [USENIX ATC '24] Accelerating the Training of Large Language Models using Efficient Activation Rematerialization and Optimal Hybrid Paral… ☆51 · Updated 6 months ago
- Compare different hardware platforms via the Roofline Model for LLM inference tasks (see the Roofline sketch after this list). ☆93 · Updated 11 months ago
- ☆81 · Updated 5 months ago
- QQQ is an innovative and hardware-optimized W4A8 quantization solution for LLMs. ☆101 · Updated this week
- PyTorch bindings for CUTLASS grouped GEMM. ☆94 · Updated last month
- An easy-to-use package for implementing SmoothQuant for LLMs ☆92 · Updated 9 months ago
- ☆62 · Updated 2 months ago
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆290 · Updated this week
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference ☆245 · Updated 2 months ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆64 · Updated 3 months ago
- ☆67 · Updated 2 months ago
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long Context Transformers Model Training and Inference ☆424 · Updated this week
- High performance Transformer implementation in C++. ☆102 · Updated last month
- ☆140 · Updated 9 months ago
- Odysseus: Playground of LLM Sequence Parallelism ☆64 · Updated 8 months ago
- Implement Flash Attention using Cute. ☆69 · Updated 2 months ago
- Since the emergence of ChatGPT in 2022, the acceleration of Large Language Models has become increasingly important. Here is a list of pap… ☆220 · Updated last month
- A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of … ☆138 · Updated 7 months ago
- A collection of memory efficient attention operators implemented in the Triton language. ☆237 · Updated 8 months ago
- Implement some methods of LLM KV Cache Sparsity ☆30 · Updated 8 months ago
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios. ☆35 · Updated 5 months ago
- ☆83 · Updated 3 months ago
- ☆24 · Updated 6 months ago
- A low-latency & high-throughput serving engine for LLMs ☆312 · Updated 3 weeks ago
- ☆127 · Updated last month
- A fast communication-overlapping library for tensor parallelism on GPUs. ☆296 · Updated 3 months ago
- Official PyTorch implementation of FlatQuant: Flatness Matters for LLM Quantization ☆102 · Updated 3 weeks ago
- Standalone Flash Attention v2 kernel without libtorch dependency ☆104 · Updated 5 months ago
- PyTorch library for cost-effective, fast and easy serving of MoE models. ☆132 · Updated this week
- flash attention tutorial written in python, triton, cuda, cutlass ☆260 · Updated last month
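
One entry above compares hardware platforms for LLM inference via the Roofline Model. As a quick reference for what that means, here is a minimal sketch of the roofline calculation. It is not code from any repository listed here, and the peak-FLOPS and bandwidth numbers are hypothetical placeholders, not measurements of real hardware.

```python
# Minimal Roofline Model sketch (illustrative only, not from any listed repo).

def roofline_flops(peak_flops, peak_bandwidth, arithmetic_intensity):
    """Attainable throughput = min(compute roof, memory roof).

    arithmetic_intensity: FLOPs performed per byte moved from memory.
    """
    return min(peak_flops, peak_bandwidth * arithmetic_intensity)

if __name__ == "__main__":
    # Hypothetical accelerator: 300 TFLOP/s peak compute, 2 TB/s memory bandwidth.
    peak_flops = 300e12
    peak_bw = 2e12

    # Decode-time GEMV in LLM inference is typically memory-bound (roughly a
    # couple of FLOPs per weight byte read), while prefill GEMMs have much
    # higher arithmetic intensity and can approach the compute roof.
    for name, ai in [("decode GEMV (~1 FLOP/B)", 1.0),
                     ("prefill GEMM (~300 FLOP/B)", 300.0)]:
        attainable = roofline_flops(peak_flops, peak_bw, ai)
        print(f"{name}: {attainable / 1e12:.1f} TFLOP/s attainable")
```

The point of the model is the single `min()`: once a kernel's FLOPs-per-byte ratio is known, the memory-bandwidth roof and the compute roof immediately tell you which resource bounds it on a given platform.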