HKUST-SING / herald
Herald: Accelerating Neural Recommendation Training with Embedding Scheduling (NSDI 2024)
☆23 · Updated last year
Alternatives and similar repositories for herald
Users interested in herald are comparing it to the repositories listed below
- Analyze network performance in distributed training ☆18 · Updated 4 years ago
- Artifacts for our SIGCOMM'22 paper Muri ☆41 · Updated last year
- ☆51 · Updated 2 years ago
- ☆37 · Updated 11 months ago
- A Hybrid Framework to Build High-performance Adaptive Neural Networks for Kernel Datapath ☆27 · Updated 2 years ago
- ☆44 · Updated last year
- PipeSwitch: Fast Pipelined Context Switching for Deep Learning Applications ☆127 · Updated 3 years ago
- Source code for OSDI 2023 paper titled "Cilantro - Performance-Aware Resource Allocation for General Objectives via Online Feedback" ☆40 · Updated 2 years ago
- ☆56 · Updated last year
- Helios Traces from SenseTime ☆56 · Updated 2 years ago
- Cupcake: A Compression Scheduler for Scalable Communication-Efficient Distributed Training (MLSys '23) ☆9 · Updated 2 years ago
- ☆37 · Updated last month
- Hi-Speed DNN Training with Espresso: Unleashing the Full Potential of Gradient Compression with Near-Optimal Usage Strategies (EuroSys '2… ☆15 · Updated last year
- ☆16 · Updated last year
- ☆192 · Updated 5 years ago
- Artifact for "Shockwave: Fair and Efficient Cluster Scheduling for Dynamic Adaptation in Machine Learning" [NSDI '23] ☆44 · Updated 2 years ago
- Bamboo is a system for running large pipeline-parallel DNNs affordably, reliably, and efficiently using spot instances. ☆50 · Updated 2 years ago
- Lucid: A Non-Intrusive, Scalable and Interpretable Scheduler for Deep Learning Training Jobs ☆55 · Updated 2 years ago
- TACCL: Guiding Collective Algorithm Synthesis using Communication Sketches ☆74 · Updated 2 years ago
- ☆22 · Updated last year
- ☆81 · Updated 3 years ago
- REEF is a GPU-accelerated DNN inference serving system that enables instant kernel preemption and biased concurrent execution in GPU sche… ☆98 · Updated 2 years ago
- Slowdown prediction module of Echo: Simulating Distributed Training at Scale ☆12 · Updated 2 months ago
- ☆49 · Updated 7 months ago
- Managed collective communication service ☆22 · Updated 11 months ago
- THC: Accelerating Distributed Deep Learning Using Tensor Homomorphic Compression ☆19 · Updated last year
- This repository contains code for the paper: Bergsma S., Zeyl T., Senderovich A., and Beck J. C., "Generating Complex, Realistic Cloud Wo… ☆43 · Updated 3 years ago
- ☆44 · Updated 3 years ago
- Code for "Heterogenity-Aware Cluster Scheduling Policies for Deep Learning Workloads", which appeared at OSDI 2020☆127Updated last year
- GPU-accelerated LLM Training Simulator ☆35 · Updated last month