EMDC-OS / mg-lru
☆9 · Updated 3 months ago
Alternatives and similar repositories for mg-lru:
Users interested in mg-lru are comparing it to the repositories listed below.
- Intel staging area for llvm.org contribution. Home for Intel LLVM-based projects. ☆38 · Updated 3 months ago
- Know Your Enemy To Save Cloud Energy: Energy-Performance Characterization of Machine Learning Serving (HPCA '23) ☆13 · Updated 3 months ago
- ☆23 · Updated 3 years ago
- ☆14 · Updated 3 months ago
- ☆10 · Updated 6 months ago
- ☆285 · Updated last year
- ☆47 · Updated 3 months ago
- ☆12 · Updated last week
- Curated collection of papers in machine learning systems ☆292 · Updated 2 weeks ago
- ☆24 · Updated 2 months ago
- An interference-aware scheduler for fine-grained GPU sharing ☆132 · Updated 2 months ago
- ☆186 · Updated 5 years ago
- Artifacts for our NSDI '23 paper TGS ☆75 · Updated 10 months ago
- FastFlow is a system that automatically detects CPU bottlenecks in deep learning training pipelines and resolves the bottlenecks with dat… ☆26 · Updated 2 years ago
- MISO: Exploiting Multi-Instance GPU Capability on Multi-Tenant GPU Clusters ☆18 · Updated last year
- TACCL: Guiding Collective Algorithm Synthesis using Communication Sketches ☆72 · Updated last year
- Network Contention-Aware Cluster Scheduling with Reinforcement Learning (IEEE ICPADS 2023) ☆16 · Updated 6 months ago
- Tensors and Dynamic neural networks in Python with strong GPU acceleration ☆14 · Updated 4 years ago
- MIST: High-performance IoT Stream Processing ☆17 · Updated 6 years ago
- "JABAS: Joint Adaptive Batching and Automatic Scaling for DNN Training on Heterogeneous GPUs" (EuroSys '25) ☆13 · Updated last week
- [ACM EuroSys '23] Fast and Efficient Model Serving Using Multi-GPUs with Direct-Host-Access ☆56 · Updated last year
- Helios Traces from SenseTime ☆53 · Updated 2 years ago
- Dotfile management with bare git ☆19 · Updated last week
- [ATC '24] Metis: Fast automatic distributed training on heterogeneous GPUs (https://www.usenix.org/conference/atc24/presentation/um) ☆25 · Updated 5 months ago
- Fine-grained GPU sharing primitives ☆141 · Updated 5 years ago
- LLMServingSim: A HW/SW Co-Simulation Infrastructure for LLM Inference Serving at Scale ☆105 · Updated last month
- Microsoft Collective Communication Library ☆342 · Updated last year
- ☆37 · Updated 3 years ago
- ☆64 · Updated 3 weeks ago
- ☆10 · Updated 4 months ago