TonyTangYu / pytorch
DELTA-pytorch: DELTA: Dynamically Optimizing GPU Memory beyond Tensor Recomputation
☆12 · Updated last year
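For context, DELTA targets GPU memory savings beyond what plain tensor recomputation (activation checkpointing) provides. Below is a minimal sketch of that baseline technique in stock PyTorch, to show what "tensor recomputation" refers to; the toy model and sizes are illustrative assumptions, not code from this repository.

```python
import torch
from torch.utils.checkpoint import checkpoint_sequential

# Baseline tensor recomputation (activation checkpointing): activations
# inside checkpointed segments are freed during the forward pass and
# recomputed on the fly during backward, trading compute for GPU memory.
# (Illustrative toy model; not DELTA itself.)
model = torch.nn.Sequential(
    *[torch.nn.Sequential(torch.nn.Linear(1024, 1024), torch.nn.ReLU())
      for _ in range(8)]
)

x = torch.randn(32, 1024, requires_grad=True)

# Split the 8 blocks into 4 checkpointed segments; only segment-boundary
# activations stay live between forward and backward. use_reentrant=False
# selects the non-reentrant mode recommended by recent PyTorch releases.
out = checkpoint_sequential(model, 4, x, use_reentrant=False)
out.sum().backward()
```

Per its title, DELTA's contribution is to optimize memory beyond this recompute-only strategy; the sketch only shows the baseline it builds on.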
Alternatives and similar repositories for pytorch
Users interested in pytorch are comparing it to the repositories listed below
- ☆12 · Updated last year
- ☆82 · Updated 8 months ago
- A repository of personal notes and annotated papers collected during daily research. ☆179 · Updated 2 weeks ago
- ☆15 · Updated last year
- ☆131 · Updated last year
- A ChatGPT (GPT-3.5) & GPT-4 workload trace to optimize LLM serving systems. ☆236 · Updated 2 weeks ago
- ☆15 · Updated last year
- A baseline repository of auto-parallelism in training neural networks. ☆147 · Updated 3 years ago
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models. ☆70 · Updated 10 months ago
- Since the emergence of ChatGPT in 2022, the acceleration of Large Language Models has become increasingly important. Here is a list of pap… ☆283 · Updated 11 months ago
- Dynamic resource changes for multi-dimensional parallelism training. ☆30 · Updated 5 months ago
- LLM serving cluster simulator. ☆135 · Updated last year
- Official implementation of the paper "Lancet: Accelerating Mixture-of-Experts Training via Whole Graph Computation-Communication Overlapping". ☆14 · Updated 2 months ago
- AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving (OSDI '23). ☆93 · Updated 2 years ago
- High-performance Transformer implementation in C++. ☆150 · Updated last year
- Artifact of the OSDI '24 paper "Llumnix: Dynamic Scheduling for Large Language Model Serving". ☆64 · Updated last year
- Summary of some awesome work on optimizing LLM inference. ☆172 · Updated 2 months ago
- ☆24 · Updated last year
- [ICML 2024] Serving LLMs on heterogeneous decentralized clusters. ☆34 · Updated last year
- Artifacts for our ASPLOS '23 paper ElasticFlow. ☆55 · Updated last year
- nnScaler: Compiling DNN models for parallel training. ☆124 · Updated 4 months ago
- ☆29 · Updated last year
- Compiler for dynamic neural networks. ☆45 · Updated 2 years ago
- ☆166 · Updated last year
- Galvatron is an automatic distributed training system designed for Transformer models, including Large Language Models (LLMs). If you hav… ☆23 · Updated 3 months ago
- ☆77 · Updated 4 years ago
- LLM training technologies developed by Kwai. ☆70 · Updated 2 weeks ago
- ☆323 · Updated 2 years ago
- Open-source implementation of "Helix: Serving Large Language Models over Heterogeneous GPUs and Network via Max-Flow". ☆77 · Updated 3 months ago
- An experimental parallel training platform. ☆56 · Updated last year