HKUST-SING / herald
Herald: Accelerating Neural Recommendation Training with Embedding Scheduling (NSDI 2024)
☆ 23, updated last year
Alternatives and similar repositories for herald
Users interested in herald are comparing it to the repositories listed below.
- Artifacts for our SIGCOMM'22 paper Muri (☆ 42, updated last year)
- Helios traces from SenseTime (☆ 57, updated 2 years ago)
- Lucid: A Non-Intrusive, Scalable and Interpretable Scheduler for Deep Learning Training Jobs (☆ 55, updated 2 years ago)
- Source code for the OSDI 2023 paper "Cilantro: Performance-Aware Resource Allocation for General Objectives via Online Feedback" (☆ 40, updated 2 years ago)
- Artifact for "Shockwave: Fair and Efficient Cluster Scheduling for Dynamic Adaptation in Machine Learning" [NSDI '23]☆45Updated 2 years ago
- PipeSwitch: Fast Pipelined Context Switching for Deep Learning Applications☆126Updated 3 years ago
- A Hybrid Framework to Build High-performance Adaptive Neural Networks for Kernel Datapath (☆ 27, updated 2 years ago)
- Artifacts for our NSDI'23 paper TGS (☆ 84, updated last year)
- [NSDI 2023] TopoOpt: Optimizing the Network Topology for Distributed DNN Training (☆ 32, updated 11 months ago)
- Primo: Practical Learning-Augmented Systems with Interpretable Models (☆ 19, updated last year)
- Hi-Speed DNN Training with Espresso: Unleashing the Full Potential of Gradient Compression with Near-Optimal Usage Strategies (EuroSys '2… (☆ 15, updated last year)
- Analyze network performance in distributed training (☆ 18, updated 4 years ago)
- Managed collective communication service (☆ 22, updated last year)
- Tiresias is a GPU cluster manager for distributed deep learning training. (☆ 158, updated 5 years ago)
- This repository contains code for the paper: Bergsma S., Zeyl T., Senderovich A., and Beck J. C., "Generating Complex, Realistic Cloud Wo… (☆ 43, updated 3 years ago)
- Bamboo is a system for running large pipeline-parallel DNNs affordably, reliably, and efficiently using spot instances. (☆ 51, updated 2 years ago)
- Model-less Inference Serving (☆ 91, updated last year)
- Open-source implementation for "Helix: Serving Large Language Models over Heterogeneous GPUs and Network via Max-Flow" (☆ 63, updated 9 months ago)
- The prototype for the NSDI paper "NetHint: White-Box Networking for Multi-Tenant Data Centers" (☆ 26, updated last year)
- AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving (OSDI 23) (☆ 84, updated 2 years ago)