BBuf / megatron-lm-parallel-group-playground
☆13 · Updated 10 months ago
Alternatives and similar repositories for megatron-lm-parallel-group-playground:
Users interested in megatron-lm-parallel-group-playground are comparing it to the libraries listed below.
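For context, a minimal sketch of the rank bookkeeping this playground is about: partitioning a world of GPU ranks into tensor-, pipeline-, and data-parallel groups in the Megatron-LM style. The sizes and layout below are illustrative assumptions, not code from the repository (Megatron-LM's real logic lives in megatron/core/parallel_state.py).

```python
# Illustrative sketch (assumed layout, not code from this repository):
# partition 16 ranks into tensor- (TP), pipeline- (PP), and data-parallel
# (DP) groups following a Megatron-LM-style default rank ordering.
world_size, tp, pp = 16, 2, 2
dp = world_size // (tp * pp)  # 4 data-parallel replicas

# TP groups: contiguous blocks of `tp` ranks.
tp_groups = [list(range(i, i + tp)) for i in range(0, world_size, tp)]

# PP groups: ranks strided across pipeline stages.
stage_size = world_size // pp
pp_groups = [list(range(i, world_size, stage_size)) for i in range(stage_size)]

# DP groups: within each pipeline stage, ranks strided by `tp`.
dp_groups = [
    list(range(stage * stage_size + t, (stage + 1) * stage_size, tp))
    for stage in range(pp)
    for t in range(tp)
]

for name, groups in (("TP", tp_groups), ("PP", pp_groups), ("DP", dp_groups)):
    print(name, groups)
```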
- Odysseus: Playground of LLM Sequence Parallelism☆64 · Updated 7 months ago
- ☆11 · Updated last year
- ☆73 · Updated 6 months ago
- OneFlow Serving☆20 · Updated last month
- Decoding Attention is specially optimized for multi-head attention (MHA) using CUDA cores for the decoding stage of LLM inference.☆27 · Updated 2 months ago
- ☆76 · Updated last year
- PyTorch bindings for CUTLASS grouped GEMM.☆61 · Updated 2 months ago
- FP8 flash attention implemented on the Ada architecture using the cutlass library☆53 · Updated 5 months ago
- GPTQ inference TVM kernel☆38 · Updated 9 months ago
- ☆59 · Updated last month
- ☆79 · Updated 4 months ago
- Quantized Attention on GPU☆34 · Updated 2 months ago
- Multiple GEMM operators are constructed with cutlass to support LLM inference.☆16 · Updated 4 months ago
- Summary of system papers/frameworks/codes/tools on training or serving large models☆56 · Updated last year
- Transformer-related optimization, including BERT and GPT☆17 · Updated last year
- Efficient, Flexible, and Highly Fault-Tolerant Model Service Management Based on SGLang☆31 · Updated 2 months ago
- A Suite for Parallel Inference of Diffusion Transformers (DiTs) on multi-GPU Clusters☆37 · Updated 6 months ago
- Triton Documentation in Simplified Chinese / Triton 中文文档☆52 · Updated 2 weeks ago
- ☆55 · Updated 2 weeks ago
- A toolkit for developers to simplify the transformation of nn.Module instances; it is analogous to PyTorch's torch.fx.☆13 · Updated last year
- Training a LLaMA language model with MMEngine! It supports LoRA fine-tuning!☆40 · Updated last year
- An auxiliary project analyzing the characteristics of KV in DiT Attention.☆23 · Updated 2 months ago
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance.☆86 · Updated this week
- A MoE impl for PyTorch, [ATC'23] SmartMoE☆61 · Updated last year
- PyTorch bindings for CUTLASS grouped GEMM.☆86 · Updated 3 weeks ago
- Standalone Flash Attention v2 kernel without libtorch dependency☆99 · Updated 4 months ago
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios.☆34 · Updated 4 months ago
- Compare different hardware platforms via the Roofline Model for LLM inference tasks (a minimal roofline sketch follows this list).☆91 · Updated 10 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs☆16 · Updated 7 months ago
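The roofline entry above boils down to one formula: attainable throughput = min(peak compute, memory bandwidth × arithmetic intensity). Below is a minimal sketch with hypothetical hardware and intensity numbers; these are assumptions for illustration, not values from that repository.

```python
# A minimal roofline sketch for LLM decoding (hypothetical hardware
# numbers; not taken from the repository above). Attainable throughput
# is min(peak compute, memory bandwidth * arithmetic intensity).
PEAK_TFLOPS = 312.0   # e.g. A100 FP16 tensor-core peak, TFLOP/s
BANDWIDTH_TBS = 2.0   # HBM bandwidth, TB/s

def attainable_tflops(intensity_flop_per_byte: float) -> float:
    """Roofline: memory-bound slope capped by the compute-bound plateau."""
    return min(PEAK_TFLOPS, BANDWIDTH_TBS * intensity_flop_per_byte)

# Batch-1 decoding is a GEMV: ~2 FLOPs per FP16 weight (2 bytes), so
# arithmetic intensity is ~1 FLOP/byte; batching reuses each weight
# across requests, raising intensity roughly linearly (an assumption
# that ignores the KV cache traffic, which does not batch away).
for batch in (1, 8, 64, 512):
    intensity = 1.0 * batch
    print(f"batch={batch:4d}  ->  ~{attainable_tflops(intensity):6.1f} TFLOP/s")
```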