BBuf / megatron-lm-parallel-group-playground
☆16 · Updated last year
Alternatives and similar repositories for megatron-lm-parallel-group-playground
Users interested in megatron-lm-parallel-group-playground are comparing it to the libraries listed below.
- Odysseus: Playground of LLM Sequence Parallelism ☆72 · Updated last year
- Summary of system papers/frameworks/codes/tools on training or serving large models ☆57 · Updated last year
- ☆79 · Updated last year
- OneFlow Serving ☆20 · Updated 3 months ago
- ☆92 · Updated 4 months ago
- Accelerate LLM preference tuning via prefix sharing with a single line of code ☆42 · Updated last month
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA, using CUDA cores for the decoding stage of LLM inference ☆40 · Updated last month
- ☆50 · Updated 2 months ago
- ☆39 · Updated 2 months ago
- ☆11 · Updated last year
- A simple calculation for LLM MFU ☆42 · Updated 5 months ago
- Quantized Attention on GPU ☆44 · Updated 8 months ago
- Efficient, Flexible, and Highly Fault-Tolerant Model Service Management Based on SGLang ☆55 · Updated 9 months ago
- A Suite for Parallel Inference of Diffusion Transformers (DiTs) on multi-GPU Clusters ☆46 · Updated last year
- GPTQ inference TVM kernel ☆40 · Updated last year
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆136 · Updated last year
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models ☆23 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆17 · Updated last year
- An easy-to-use package for implementing SmoothQuant for LLMs ☆103 · Updated 4 months ago
- Distributed IO-aware Attention algorithm ☆21 · Updated 11 months ago
- Performance of the C++ interface of FlashAttention and FlashAttention-2 in large language model (LLM) inference scenarios ☆39 · Updated 5 months ago
- ☆78 · Updated 3 months ago
- ☆145 · Updated 5 months ago
- ☆75 · Updated 2 months ago
- Patches for Hugging Face Transformers to save memory ☆27 · Updated 2 months ago
- ☆60 · Updated 3 months ago
- Datasets, Transforms and Models specific to Computer Vision ☆87 · Updated last year
- ☆96 · Updated 11 months ago
- Multiple GEMM operators constructed with CUTLASS to support LLM inference ☆18 · Updated this week
- Standalone Flash Attention v2 kernel without libtorch dependency ☆111 · Updated 10 months ago