ailzhang / minPP
Pipeline parallelism for the minimalist
☆18 · Updated this week
Alternatives and similar repositories for minPP
Users interested in minPP are comparing it to the libraries listed below.
- ☆74 · Updated 4 months ago
- [WIP] Better (FP8) attention for Hopper ☆32 · Updated 5 months ago
- ☆75 · Updated 2 months ago
- FlexAttention w/ FlashAttention3 support ☆27 · Updated 10 months ago
- ☆26 · Updated last year
- Framework to reduce autotune overhead to zero for well-known deployments. ☆79 · Updated 2 weeks ago
- Make Triton easier ☆47 · Updated last year
- Quantize transformers to any learned arbitrary 4-bit numeric format ☆39 · Updated last month
- Example ML projects that use the Determined library. ☆32 · Updated 11 months ago
- Fairring (FAIR + Herring) is a plug-in for PyTorch that provides a process group for distributed training that outperforms NCCL at large … ☆65 · Updated 3 years ago
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆93 · Updated last month
- A Python library that transfers PyTorch tensors between CPU and NVMe ☆118 · Updated 8 months ago
- Extensible collectives library in Triton ☆88 · Updated 4 months ago
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8. ☆46 · Updated last year
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling ☆18 · Updated last week
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆142 · Updated this week
- Boosting 4-bit inference kernels with 2:4 sparsity ☆80 · Updated 11 months ago
- Odysseus: Playground of LLM Sequence Parallelism ☆75 · Updated last year
- A parallel framework for training deep neural networks ☆63 · Updated 4 months ago
- A collection of reproducible inference engine benchmarks ☆32 · Updated 3 months ago
- PyTorch RFCs (experimental) ☆134 · Updated 2 months ago
- Ahead-of-Time (AOT) Triton math library ☆75 · Updated this week
- LLaMA INT4 CUDA inference with AWQ ☆54 · Updated 6 months ago
- Standalone Flash Attention v2 kernel without a libtorch dependency ☆111 · Updated 11 months ago
- High-speed GEMV kernels with up to 2.7x speedup over the PyTorch baseline ☆113 · Updated last year
- A block-oriented training approach for inference-time optimization. ☆33 · Updated 11 months ago
- A suite for parallel inference of Diffusion Transformers (DiTs) on multi-GPU clusters ☆47 · Updated last year
- ☆77 · Updated 8 months ago
- ☆108 · Updated 11 months ago
- ☆158 · Updated last year