Lightning-AI / forked-pdb
Python pdb for multiple processes
☆76 · Updated 7 months ago
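forked-pdb addresses a common pain point: a plain `pdb.set_trace()` hangs inside forked or spawned worker processes because the child's stdin is not attached to the terminal. Below is a minimal sketch of the widely used workaround that tools like this build on, assuming a POSIX system where `/dev/stdin` resolves to the controlling terminal; the `ForkedPdb` class name here is illustrative, not necessarily this repository's exact API.

```python
import pdb
import sys


class ForkedPdb(pdb.Pdb):
    """Pdb variant usable from a forked child process.

    Plain pdb.set_trace() stalls in multiprocessing workers because the
    child's sys.stdin is closed or redirected; reopening /dev/stdin
    reattaches the debugger prompt to the controlling terminal.
    """

    def interaction(self, *args, **kwargs):
        saved_stdin = sys.stdin
        try:
            # POSIX-only: reopen the controlling terminal as stdin
            sys.stdin = open("/dev/stdin")
            super().interaction(*args, **kwargs)
        finally:
            sys.stdin.close()
            sys.stdin = saved_stdin


# Usage inside a worker, e.g. a torch.multiprocessing target function:
#     ForkedPdb().set_trace()
```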
Alternatives and similar repositories for forked-pdb
Users interested in forked-pdb are comparing it to the libraries listed below.
- Odysseus: Playground of LLM Sequence Parallelism ☆79 · Updated last year
- Triton implementation of FlashAttention2 that adds Custom Masks. ☆159 · Updated last year
- Repository of the paper "Accelerating Transformer Inference for Translation via Parallel Decoding" ☆123 · Updated last year
- pytorch-profiler ☆50 · Updated 2 years ago
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆220 · Updated last year
- Best practices for training DeepSeek, Mixtral, Qwen and other MoE models using Megatron Core. ☆149 · Updated 3 weeks ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆180 · Updated 3 weeks ago
- Triton-based implementation of Sparse Mixture of Experts. ☆259 · Updated 3 months ago
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆225 · Updated 6 months ago
- This repository contains the experimental PyTorch native float8 training UX ☆227 · Updated last year
- Patch convolution to avoid large GPU memory usage of Conv2D ☆93 · Updated 11 months ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆139 · Updated 7 months ago
- Accelerate LLM preference tuning via prefix sharing with a single line of code ☆51 · Updated 6 months ago
- Examples for MS-AMP package. ☆30 · Updated 5 months ago
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training ☆256 · Updated 5 months ago
- ring-attention experiments ☆161 · Updated last year
- Fairring (FAIR + Herring) is a plug-in for PyTorch that provides a process group for distributed training that outperforms NCCL at large … ☆65 · Updated 3 years ago
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆218 · Updated this week
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆308 · Updated this week
- flex-block-attn: an efficient block sparse attention computation library ☆103 · Updated 2 weeks ago
- Training library for Megatron-based models with bi-directional Hugging Face conversion capability