jasperzhong / read-papers-and-code
My paper/code reading notes in Chinese
☆46 · Updated 10 months ago
Alternatives and similar repositories for read-papers-and-code:
Users interested in read-papers-and-code are comparing it to the libraries listed below.
- DISB is a new DNN inference serving benchmark with diverse workloads and models, as well as real-world traces. ☆53 · Updated 7 months ago
- A GPU-accelerated DNN inference serving system that supports instant kernel preemption and biased concurrent execution in GPU scheduling. ☆42 · Updated 2 years ago
- REEF is a GPU-accelerated DNN inference serving system that enables instant kernel preemption and biased concurrent execution in GPU scheduling. ☆92 · Updated 2 years ago
- FGNN's artifact evaluation (EuroSys 2022) ☆17 · Updated 2 years ago
- PipeSwitch: Fast Pipelined Context Switching for Deep Learning Applications ☆127 · Updated 2 years ago
- SOTA Learning-augmented Systems ☆35 · Updated 2 years ago
- Adaptive Message Quantization and Parallelization for Distributed Full-graph GNN Training ☆23 · Updated last year
- ☆53 · Updated 4 years ago
- High-performance RDMA-based distributed feature collection component for training GNN models on extremely large graphs ☆51 · Updated 2 years ago
- Dorylus: Affordable, Scalable, and Accurate GNN Training ☆78 · Updated 3 years ago
- Analysis for the traces from byteprofile ☆30 · Updated last year
- Code for "Shockwave: Fair and Efficient Cluster Scheduling for Dynamic Adaptation in Machine Learning" [NSDI '23] ☆39 · Updated 2 years ago
- Artifacts for our ASPLOS'23 paper ElasticFlow ☆52 · Updated 10 months ago
- AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving (OSDI '23) ☆82 · Updated last year
- ☆14 · Updated 2 years ago
- Seminar on selected tools in Computer Science ☆24 · Updated 4 years ago
- ☆37 · Updated 3 years ago
- Artifacts for our SIGCOMM'22 paper Muri ☆41 · Updated last year
- Bamboo is a system for running large pipeline-parallel DNNs affordably, reliably, and efficiently using spot instances. ☆49 · Updated 2 years ago
- Official repository for "QSync: Quantization-Minimized Synchronous Distributed Training Across Hybrid Devices" (IPDPS '24). ☆19 · Updated last year
- An Efficient Pipelined Data Parallel Approach for Training Large Models ☆74 · Updated 4 years ago
- Stateful LLM Serving ☆48 · Updated last week
- ☆16 · Updated 10 months ago
- General systems research material (not limited to papers) reading notes. ☆21 · Updated 4 years ago
- A high-performance distributed deep learning system targeting large-scale and automated distributed training. If you have any interests, … ☆108 · Updated last year
- A Factored System for Sample-based GNN Training over GPUs ☆42 · Updated last year
- PetPS: Supporting Huge Embedding Models with Tiered Memory ☆30 · Updated 10 months ago
- SHADE: Enable Fundamental Cacheability for Distributed Deep Learning Training ☆31 · Updated 2 years ago
- Compiler for Dynamic Neural Networks ☆45 · Updated last year
- ☆32 · Updated 9 months ago