facebookresearch / MODel_opt
Memory Optimizations for Deep Learning (ICML 2023)
☆114, updated last year
Alternatives and similar repositories for MODel_opt
Users interested in MODel_opt are comparing it to the libraries listed below.
- ☆115 (updated last year)
- Extensible collectives library in Triton (☆92, updated 9 months ago)
- Collection of kernels written in the Triton language (☆174, updated 9 months ago); a minimal Triton kernel sketch appears after this list
- This repository contains the experimental PyTorch-native float8 training UX (☆227, updated last year); see the float8 scaling sketch after this list
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind (☆162, updated 3 weeks ago)
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance (☆310, updated this week)
- Applied AI experiments and examples for PyTorch (☆312, updated 4 months ago)
- A bunch of kernels that might make stuff slower 😉 (☆73, updated last week)
- Fast low-bit matmul kernels in Triton (☆423, updated last month)
- Ring-attention experiments (☆161, updated last year)
- ☆160 (updated 2 years ago)
- ☆28 (updated last year)
- A Python library that transfers PyTorch tensors between CPU and NVMe (☆124, updated last year)
- JaxPP is a library for JAX that enables flexible MPMD pipeline parallelism for large-scale LLM training (☆62, updated last week)
- Write a fast kernel and run it on Discord. See how you compare against the best! (☆66, updated last week)
- ☆271 (updated last week)
- TritonParse: a compiler tracer, visualizer, and reproducer for Triton kernels (☆185, updated this week)
- Cataloging released Triton kernels (☆287, updated 4 months ago)
- An experimental CPU backend for Triton (https://github.com/openai/triton) (☆48, updated 4 months ago)
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components (☆218, updated this week)
- A block-oriented training approach for inference-time optimization (☆34, updated last year)
- High-speed GEMV kernels, with up to a 2.7x speedup over the PyTorch baseline (☆124, updated last year)
- Explore training for quantized models (☆26, updated 6 months ago)
- Framework to reduce autotune overhead to zero for well-known deployments (☆92, updated 3 months ago)
- PyTorch RFCs (experimental) (☆136, updated 7 months ago)
- ☆71 (updated 9 months ago)
- Small-scale distributed training of sequential deep learning models, built on NumPy and MPI (☆154, updated 2 years ago)
- Triton-based Symmetric Memory operators and examples (☆74, updated this week)
- A minimal cache manager for PagedAttention, on top of llama3 (☆130, updated last year); a toy block-table sketch appears after this list
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8 (☆46, updated last year)
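
To give a taste of what the Triton kernel collections above catalog, here is a minimal vector-add kernel. It is a sketch following the standard Triton tutorial pattern, assuming a CUDA-capable GPU and the `triton` package; it is not taken from any listed repository.

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)                       # one program per block
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements                       # guard the tail block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = out.numel()
    grid = (triton.cdiv(n, 1024),)                    # 1-D launch grid
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out

x = torch.rand(4096, device="cuda")
y = torch.rand(4096, device="cuda")
assert torch.allclose(add(x, y), x + y)
```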
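The float8 training repository above centers on scaled float8 casts. Below is a minimal sketch of per-tensor dynamic scaling using only stock PyTorch dtypes (`torch.float8_e4m3fn`, available in recent PyTorch releases); the helper names `to_float8`/`from_float8` are illustrative, not the repository's API.

```python
import torch

def to_float8(x: torch.Tensor):
    # Scale so the largest magnitude maps to the float8 dtype's max value.
    f8_max = torch.finfo(torch.float8_e4m3fn).max     # 448.0 for e4m3fn
    scale = x.abs().max().clamp(min=1e-12) / f8_max
    x_f8 = (x / scale).to(torch.float8_e4m3fn)
    return x_f8, scale

def from_float8(x_f8: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return x_f8.to(torch.float32) * scale

x = torch.randn(64, 64)
x_f8, scale = to_float8(x)
print((x - from_float8(x_f8, scale)).abs().max())     # small quantization error
```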
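For the PagedAttention cache manager above, here is a toy block table in plain Python. The block size and table layout are illustrative of the general idea (fixed-size cache blocks plus per-sequence block tables, as in the PagedAttention paper), not the listed repository's implementation.

```python
BLOCK_SIZE = 16  # tokens per cache block (illustrative)

class PagedKVCache:
    def __init__(self, num_blocks: int):
        self.free_blocks = list(range(num_blocks))    # physical block pool
        self.block_tables = {}                        # seq_id -> [block ids]

    def append_token(self, seq_id: int, pos: int) -> tuple[int, int]:
        """Map a token position to a (physical_block, offset) slot,
        allocating a new block when the sequence crosses a boundary."""
        table = self.block_tables.setdefault(seq_id, [])
        if pos % BLOCK_SIZE == 0:                     # first slot in a block
            table.append(self.free_blocks.pop())
        return table[pos // BLOCK_SIZE], pos % BLOCK_SIZE

    def free(self, seq_id: int) -> None:
        # Return all of a finished sequence's blocks to the pool.
        self.free_blocks.extend(self.block_tables.pop(seq_id, []))

cache = PagedKVCache(num_blocks=8)
for pos in range(20):                                 # 20 tokens -> 2 blocks
    block, offset = cache.append_token(seq_id=0, pos=pos)
print(cache.block_tables[0])                          # two physical block ids
cache.free(0)
```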