icloud-ecnu / Opara
Opara is a lightweight, resource-aware operator-parallel scheduling framework that accelerates DNN inference on GPUs.
☆23 · Updated 10 months ago
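As a rough illustration of the idea Opara targets (this is not Opara's actual API — the `parallel_branches` helper and the toy conv branches below are hypothetical), operator-level parallelism launches independent DNN operators on separate CUDA streams so they can overlap on one GPU:

```python
import torch

def parallel_branches(x, conv_a, conv_b):
    """Run two independent operators concurrently on separate CUDA streams."""
    stream_a = torch.cuda.Stream()
    stream_b = torch.cuda.Stream()
    # Both side streams must wait for work already queued on the default stream.
    stream_a.wait_stream(torch.cuda.current_stream())
    stream_b.wait_stream(torch.cuda.current_stream())
    with torch.cuda.stream(stream_a):
        out_a = conv_a(x)
    with torch.cuda.stream(stream_b):
        out_b = conv_b(x)
    # The default stream waits for both branches before consuming their outputs.
    torch.cuda.current_stream().wait_stream(stream_a)
    torch.cuda.current_stream().wait_stream(stream_b)
    return out_a + out_b

if __name__ == "__main__" and torch.cuda.is_available():
    x = torch.randn(8, 64, 56, 56, device="cuda")
    conv_a = torch.nn.Conv2d(64, 64, 3, padding=1).cuda()
    conv_b = torch.nn.Conv2d(64, 64, 3, padding=1).cuda()
    y = parallel_branches(x, conv_a, conv_b)
    torch.cuda.synchronize()
    print(y.shape)
```

A framework like Opara additionally has to decide, from the model's operator DAG and resource estimates, which operators are safe and profitable to overlap; the sketch hard-codes that decision for two branches.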
Alternatives and similar repositories for Opara
Users interested in Opara are comparing it to the libraries listed below.
- iGniter, an interference-aware GPU resource provisioning framework for achieving predictable performance of DNN inference in the cloud. ☆38 · Updated last year
- spotDNN is a heterogeneity-aware spot instance provisioning framework to provide predictable performance for DDNN training workloads in t… ☆15 · Updated 2 years ago
- iSpot is a lightweight and cost-effective instance provisioning framework for Directed Acyclic Graph (DAG)-style big data analytics, in … ☆11 · Updated 2 years ago
- ebrowser, an energy-efficient and lightweight human interaction framework without degrading the user experience in mobile Web browsers. ☆12 · Updated 2 years ago
- DelayStage is a simple yet effective stage delay scheduling strategy to interleave the cluster resources across the parallel stages, so a… ☆14 · Updated 2 years ago
- Prophet is a predictable communication scheduling strategy to schedule the gradient transfer in an adequate order, with the aim of maximi… ☆16 · Updated 2 years ago
- Reading paper list for iCloud group ☆14 · Updated 3 weeks ago
- This repository is established to store personal notes and annotated papers during daily research. ☆157 · Updated last week
- Summary of some awesome work for optimizing LLM inference ☆134 · Updated last week
- An interference-aware scheduler for fine-grained GPU sharing ☆150 · Updated 9 months ago
- InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management (OSDI'24) ☆157 · Updated last year
- LLM serving cluster simulator ☆116 · Updated last year
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling ☆62 · Updated last year
- An Optimizing Compiler for Recommendation Model Inference ☆26 · Updated 5 months ago
- ☆12 · Updated 2 years ago
- A ChatGPT (GPT-3.5) & GPT-4 Workload Trace to Optimize LLM Serving Systems ☆216 · Updated 3 months ago
- ☆158 · Updated last year
- ☆138 · Updated 11 months ago
- ☆53 · Updated 4 months ago
- Distributed MoE in a Single Kernel [NeurIPS '25] ☆89 · Updated last month
- Repo for SpecEE: Accelerating Large Language Model Inference with Speculative Early Exiting (ISCA'25) ☆66 · Updated 6 months ago
- High-performance Transformer implementation in C++. ☆140 · Updated 9 months ago
- ☆53 · Updated 10 months ago
- Open-source implementation for "Helix: Serving Large Language Models over Heterogeneous GPUs and Network via Max-Flow" ☆69 · Updated 3 weeks ago
- Compiler for Dynamic Neural Networks ☆46 · Updated last year
- MAGIS: Memory Optimization via Coordinated Graph Transformation and Scheduling for DNN (ASPLOS'24) ☆55 · Updated last year
- ☆83 · Updated 2 years ago
- ☆45 · Updated last year
- nnScaler: Compiling DNN models for Parallel Training ☆118 · Updated last month
- Large Language Model (LLM) Serving Paper and Resource List ☆24 · Updated 5 months ago