kwai / Megatron-Kwai
LLM training technologies developed by kwai
☆70 · Updated last week
Alternatives and similar repositories for Megatron-Kwai
Users interested in Megatron-Kwai are comparing it to the libraries listed below.
- ☆155 · Updated 10 months ago
- Allow torch tensor memory to be released and resumed later (sketched after this list) ☆207 · Updated 2 weeks ago
- PyTorch bindings for CUTLASS grouped GEMM (reference sketch after this list). ☆185 · Updated last month
- Zero Bubble Pipeline Parallelism (bubble-ratio sketch after this list) ☆448 · Updated 8 months ago
- Pipeline Parallelism Emulation and Visualization ☆76 · Updated 3 weeks ago
- A lightweight design for computation-communication overlap (two-stream sketch after this list). ☆213 · Updated last week
- A baseline repository of Auto-Parallelism in Training Neural Networks ☆147 · Updated 3 years ago
- Sequence-level 1F1B schedule for LLMs. ☆38 · Updated 5 months ago
- nnScaler: Compiling DNN models for Parallel Training ☆124 · Updated 4 months ago
- ☆47 · Updated last year
- ☆105 · Updated last year
- PyTorch distributed training acceleration framework ☆55 · Updated 5 months ago
- ☆112 · Updated 8 months ago
- ☆340 · Updated 3 weeks ago
- ☆131 · Updated last year
- High-performance Transformer implementation in C++. ☆148 · Updated last year
- Since the emergence of ChatGPT in 2022, the acceleration of Large Language Models has become increasingly important. Here is a list of pap… ☆283 · Updated 10 months ago
- A collection of memory-efficient attention operators implemented in the Triton language (tiling sketch after this list). ☆287 · Updated last year
- Ongoing research training transformer models at scale ☆19 · Updated last week
- High Performance LLM Inference Operator Library ☆222 · Updated last week
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆123 · Updated last month
- Utility scripts for PyTorch (e.g. Make Perfetto show some disappearing kernels, Memory profiler that understands more low-level allocatio… ☆81 · Updated 4 months ago
- ATC23 AE ☆46 · Updated 2 years ago
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆457 · Updated 8 months ago
- Compare different hardware platforms via the Roofline Model for LLM inference tasks (roofline sketch after this list). ☆120 · Updated last year
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving (draft-verify sketch after this list). ☆659 · Updated this week
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer ☆154 · Updated 4 months ago
- Flash attention tutorial written in Python, Triton, CUDA, and CUTLASS ☆479 · Updated last week
- ☆152 · Updated last year
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models. ☆70 · Updated 10 months ago
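
Sketches referenced in the list above follow. First, for the tensor release/resume entry: a minimal pure-PyTorch sketch of the idea, which backs up a tensor's contents, drops its backing storage in place, and re-allocates it on demand. The `release`/`resume` helpers are illustrative names, not that repository's API.

```python
import torch

def release(t: torch.Tensor) -> torch.Tensor:
    """Copy `t` to CPU, then free its device storage in place."""
    backup = t.detach().to("cpu", copy=True)
    t.untyped_storage().resize_(0)   # backing memory is released here
    return backup

def resume(t: torch.Tensor, backup: torch.Tensor) -> None:
    """Re-allocate the storage and restore the saved contents."""
    t.untyped_storage().resize_(backup.nbytes)
    t.copy_(backup.to(t.device))

dev = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(1024, 1024, device=dev)
saved = release(x)    # memory for `x` is released
resume(x, saved)      # memory is re-allocated and data restored
assert torch.equal(x.cpu(), saved)
```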
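For the CUTLASS grouped-GEMM bindings, a plain-PyTorch reference of what "grouped" means: many independent GEMMs with ragged shapes that a fused kernel launches as one call. The function below is an emulation for clarity, not the bindings' real signature.

```python
import torch

def grouped_gemm_reference(xs, ws):
    """xs[i]: (m_i, k), ws[i]: (k, n) -> one (m_i, n) output per group."""
    return [x @ w for x, w in zip(xs, ws)]  # a fused kernel replaces this loop

xs = [torch.randn(m, 64) for m in (8, 17, 3)]  # ragged per-group row counts
ws = [torch.randn(64, 32) for _ in xs]
outs = grouped_gemm_reference(xs, ws)
print([tuple(o.shape) for o in outs])          # [(8, 32), (17, 32), (3, 32)]
```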
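For the zero-bubble pipeline entry, a back-of-envelope calculation of the bubble such schedules eliminate: with p stages and m microbatches, a synchronous 1F1B/GPipe-style schedule idles for roughly p − 1 slot-times during ramp-up and ramp-down.

```python
def bubble_ratio(p: int, m: int) -> float:
    """Idle fraction of a synchronous 1F1B pipeline: p stages, m microbatches."""
    return (p - 1) / (m + p - 1)

for m in (4, 16, 64):
    print(f"p=8 stages, m={m:>2} microbatches -> bubble {bubble_ratio(8, m):.1%}")
# zero-bubble schedules split/reorder backward passes to drive this toward 0%
```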
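For the computation-communication overlap entry, a minimal two-stream sketch of the underlying pattern: a host-to-device transfer of the next microbatch runs on a side stream while the current microbatch computes on the default stream. The repository targets collectives, but the stream discipline is the same; shapes and names here are illustrative.

```python
import torch

if torch.cuda.is_available():
    copy_stream = torch.cuda.Stream()  # side stream for transfers
    w = torch.randn(4096, 4096, device="cuda")
    batches = [torch.randn(4096, 4096).pin_memory() for _ in range(4)]
    outs = []
    cur = batches[0].to("cuda", non_blocking=True)
    for i in range(len(batches)):
        nxt = None
        if i + 1 < len(batches):
            with torch.cuda.stream(copy_stream):  # next transfer overlaps...
                nxt = batches[i + 1].to("cuda", non_blocking=True)
        outs.append(cur @ w)                      # ...this compute (default stream)
        if nxt is not None:
            torch.cuda.current_stream().wait_stream(copy_stream)
            nxt.record_stream(torch.cuda.current_stream())
            cur = nxt
    torch.cuda.synchronize()
```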
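For the memory-efficient attention entry (and the flash-attention tutorial near the end of the list), a pure-PyTorch sketch of the shared core idea: process keys and values in chunks with an online softmax, so the full L×L score matrix never materializes. This is reference code for the math, not the repositories' Triton/CUDA kernels.

```python
import torch

def chunked_attention(q, k, v, chunk=128):
    """q, k, v: (L, d). Computes softmax(q k^T / sqrt(d)) v chunk by chunk."""
    scale = q.shape[-1] ** -0.5
    m = torch.full((q.shape[0], 1), float("-inf"))  # running row-wise max
    l = torch.zeros(q.shape[0], 1)                  # running softmax normalizer
    acc = torch.zeros_like(q)                       # running weighted sum of v
    for s in range(0, k.shape[0], chunk):
        scores = (q @ k[s:s + chunk].T) * scale     # (L, chunk), never (L, L)
        m_new = torch.maximum(m, scores.max(-1, keepdim=True).values)
        alpha = torch.exp(m - m_new)                # rescales the old state
        p = torch.exp(scores - m_new)
        l = l * alpha + p.sum(-1, keepdim=True)
        acc = acc * alpha + p @ v[s:s + chunk]
        m = m_new
    return acc / l

q, k, v = (torch.randn(256, 64) for _ in range(3))
ref = torch.softmax((q @ k.T) / 64 ** 0.5, dim=-1) @ v
assert torch.allclose(chunked_attention(q, k, v), ref, atol=1e-5)
```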
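For the roofline comparison entry, the model itself is one line: attainable throughput is the minimum of peak compute and memory bandwidth times arithmetic intensity. The hardware numbers below are illustrative placeholders, not measured specs.

```python
def roofline(peak_flops: float, bandwidth: float, intensity: float) -> float:
    """Attainable FLOP/s = min(peak compute, memory bandwidth * FLOP/byte)."""
    return min(peak_flops, bandwidth * intensity)

# Decode-phase GEMV on an fp16 weight matrix moves ~1 byte per FLOP,
# so intensity ~ 1 and the kernel sits on the bandwidth roof.
peak, bw = 312e12, 2.0e12  # assumed A100-class peak FLOP/s and bytes/s
print(f"{roofline(peak, bw, intensity=1.0):.2e} FLOP/s (bandwidth-bound)")
print(f"{roofline(peak, bw, intensity=300.0):.2e} FLOP/s (compute-bound)")
```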
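For the speculative-decoding training entry, a toy greedy version of the serving-side loop such draft models feed into: the draft proposes a few tokens autoregressively, the target verifies them all in one forward pass, and decoding keeps the agreed prefix plus one corrected (or bonus) token. Models here are stand-ins; production systems use rejection sampling rather than greedy matching.

```python
import torch

@torch.no_grad()
def speculative_step(target, draft, ids, gamma=4):
    """ids: (1, t) token ids. Returns ids extended by 1..gamma+1 tokens."""
    t = ids.shape[1]
    proposal = ids
    for _ in range(gamma):                 # cheap autoregressive drafting
        nxt = draft(proposal)[:, -1].argmax(-1, keepdim=True)
        proposal = torch.cat([proposal, nxt], dim=-1)
    logits = target(proposal)              # one verifying target pass
    tgt = logits[:, t - 1:-1].argmax(-1)   # target's pick at each drafted slot
    drafted = proposal[:, t:]
    n = int((tgt == drafted).long().cumprod(-1).sum())  # accepted prefix length
    fix = logits[:, -1].argmax(-1, keepdim=True) if n == gamma else tgt[:, n:n + 1]
    return torch.cat([proposal[:, :t + n], fix], dim=-1)

emb, head = torch.nn.Embedding(100, 16), torch.nn.Linear(16, 100)
toy = lambda ids: head(emb(ids))           # stand-in LM returning logits
out = speculative_step(toy, toy, torch.tensor([[1, 2, 3]]))
print(out.shape)  # identical draft/target: all gamma accepted + 1 bonus -> (1, 8)
```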