facebookresearch / MODel_opt
Memory Optimizations for Deep Learning (ICML 2023)
☆62 · Updated last year
Alternatives and similar repositories for MODel_opt:
Users interested in MODel_opt are comparing it to the libraries listed below:
- ☆62 · Updated 3 weeks ago
- ☆101 · Updated 6 months ago
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. By pro… ☆68 · Updated this week
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆104 · Updated this week
- High-speed GEMV kernels, with up to 2.7x speedup over the PyTorch baseline. ☆100 · Updated 8 months ago
- SparseTIR: Sparse Tensor Compiler for Deep Learning ☆135 · Updated last year
- Extensible collectives library in Triton ☆83 · Updated 5 months ago
- ☆73 · Updated 4 months ago
- ☆137 · Updated 7 months ago
- GPTQ inference TVM kernel ☆39 · Updated 10 months ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆73 · Updated 4 months ago
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters. ☆38 · Updated 2 years ago
- Fairring (FAIR + Herring) is a plug-in for PyTorch that provides a process group for distributed training that outperforms NCCL at large … ☆63 · Updated 2 years ago
- A Python library that transfers PyTorch tensors between CPU and NVMe ☆110 · Updated 3 months ago
- A schedule language for large model training ☆145 · Updated 9 months ago
- Training neural networks in TensorFlow 2.0 with 5x less memory ☆130 · Updated 3 years ago
- ☆35 · Updated 3 months ago
- ☆190 · Updated 8 months ago
- System for automated integration of deep learning backends. ☆48 · Updated 2 years ago
- LLaMA INT4 CUDA inference with AWQ ☆53 · Updated 2 months ago
- ☆87 · Updated 6 months ago
- Framework that reduces autotune overhead to zero for well-known deployments. ☆63 · Updated this week
- Fast Hadamard transform in CUDA, with a PyTorch interface ☆152 · Updated 9 months ago
- ☆44 · Updated last year
- ☆55 · Updated 2 months ago
- Sparsity support for PyTorch ☆35 · Updated last month
- A Vectorized N:M Format for Unleashing the Power of Sparse Tensor Cores ☆50 · Updated last year
- ☆157 · Updated last year
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8. ☆44 · Updated 8 months ago