Lightning-AI / forked-pdb
Python pdb for multiple processes
☆79 · Updated 8 months ago
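The idea behind a "pdb for multiple processes" can be sketched in a few lines: a forked worker inherits a stdin that the standard `pdb` prompt cannot read, so the debugger temporarily reattaches `sys.stdin` to the controlling terminal before dropping into the interactive session. The following is a minimal sketch of that common pattern, not the repository's actual code; the `/dev/stdin` path assumes a Unix-like system.

```python
import pdb
import sys


class ForkedPdb(pdb.Pdb):
    """A pdb subclass usable from inside forked worker processes.

    Child processes inherit a stdin that pdb cannot use interactively,
    so interaction() temporarily reattaches sys.stdin to the terminal.
    """

    def interaction(self, *args, **kwargs):
        saved_stdin = sys.stdin
        try:
            # Reopen the controlling terminal (Unix-only assumption).
            sys.stdin = open("/dev/stdin")
            super().interaction(*args, **kwargs)
        finally:
            sys.stdin = saved_stdin
```

A hypothetical worker would then call `ForkedPdb().set_trace()` instead of `pdb.set_trace()` to get a working prompt inside the child process.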
Alternatives and similar repositories for forked-pdb
Users interested in forked-pdb are comparing it to the libraries listed below:
- Odysseus: Playground of LLM Sequence Parallelism ☆79 · Updated last year
- Triton implementation of FlashAttention2 that adds custom masks ☆165 · Updated last year
- pytorch-profiler ☆50 · Updated 2 years ago
- ☆160 · Updated 2 years ago
- Examples for the MS-AMP package ☆30 · Updated 6 months ago
- Repository of the paper "Accelerating Transformer Inference for Translation via Parallel Decoding" ☆123 · Updated last year
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆229 · Updated 7 months ago
- PyTorch bindings for CUTLASS grouped GEMM ☆142 · Updated 8 months ago
- ☆115 · Updated last year
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components ☆219 · Updated this week
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆222 · Updated last year
- This repository contains the experimental PyTorch native float8 training UX ☆227 · Updated last year
- Best practices for training DeepSeek, Mixtral, Qwen, and other MoE models using Megatron Core ☆158 · Updated 2 weeks ago
- Fairring (FAIR + Herring) is a plug-in for PyTorch that provides a process group for distributed training that outperforms NCCL at large … ☆65 · Updated 3 years ago
- ☆132 · Updated 8 months ago
- ☆124 · Updated last year
- [ICLR 2025] COAT: Compressing Optimizer States and Activations for Memory-Efficient FP8 Training ☆258 · Updated 5 months ago
- PyTorch bindings for CUTLASS grouped GEMM ☆184 · Updated last month
- Patch convolution to avoid large GPU memory usage of Conv2D ☆95 · Updated last year
- Torch Distributed Experimental ☆117 · Updated last year
- ☆22 · Updated 2 years ago
- ☆32 · Updated last year
- Accelerate LLM preference tuning via prefix sharing with a single line of code ☆51 · Updated 7 months ago
- Triton-based implementation of Sparse Mixture of Experts ☆263 · Updated 4 months ago
- A Python library that transfers PyTorch tensors between CPU and NVMe ☆125 · Updated last year
- ring-attention experiments ☆165 · Updated last year
- Megatron's multi-modal data loader ☆315 · Updated last week
- 📑 Dive into Big Model Training ☆116 · Updated 3 years ago
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind… ☆164 · Updated 3 weeks ago
- CUDA and Triton implementations of Flash Attention with SoftmaxN ☆73 · Updated last year