icloud-ecnu / Opara
Opara is a lightweight and resource-aware DNN operator parallel scheduling framework that accelerates DNN inference on GPUs.
☆22 · Updated 8 months ago
Alternatives and similar repositories for Opara
Users interested in Opara are comparing it to the repositories listed below.
- iGniter, an interference-aware GPU resource provisioning framework for achieving predictable performance of DNN inference in the cloud. ☆38 · Updated last year
- iSpot is a lightweight and cost-effective instance provisioning framework for Directed Acyclic Graph (DAG)-style big data analytics, in … ☆11 · Updated 2 years ago
- spotDNN is a heterogeneity-aware spot instance provisioning framework to provide predictable performance for DDNN training workloads in t… ☆15 · Updated 2 years ago
- Prophet is a predictable communication scheduling strategy to schedule the gradient transfer in an adequate order, with the aim of maximi… ☆16 · Updated last year
- ebrowser, an energy-efficient and lightweight human interaction framework that does not degrade the user experience in mobile Web browsers. ☆12 · Updated 2 years ago
- DelayStage is a simple yet effective stage delay scheduling strategy to interleave the cluster resources across the parallel stages, so a… ☆14 · Updated 2 years ago
- Reading paper list for the iCloud group ☆14 · Updated last month
- An interference-aware scheduler for fine-grained GPU sharing ☆145 · Updated 7 months ago
- A repository of personal notes and annotated papers collected during daily research. ☆148 · Updated 3 weeks ago
- A lightweight design for computation-communication overlap. ☆161 · Updated this week
- ☆12 · Updated last year
- ☆131 · Updated 9 months ago
- Examples of CUDA implementations with CUTLASS CuTe ☆225 · Updated 2 months ago
- An easy-to-understand TensorOp Matmul tutorial ☆376 · Updated 11 months ago
- ☆150 · Updated last year
- Summary of some notable work on optimizing LLM inference ☆103 · Updated 3 months ago
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling ☆62 · Updated last year
- Since the emergence of ChatGPT in 2022, accelerating Large Language Models has become increasingly important. Here is a list of pap… ☆266 · Updated 6 months ago
- nnScaler: Compiling DNN models for Parallel Training ☆118 · Updated this week
- Several optimization methods for half-precision general matrix multiplication (HGEMM) using Tensor Cores with the WMMA API and MMA PTX instruct… ☆472 · Updated 11 months ago
- A direct convolution library targeting ARM multi-core CPUs. ☆12 · Updated 9 months ago
- Official implementation for the paper Lancet: Accelerating Mixture-of-Experts Training via Whole Graph Computation-Communication Overlapp… ☆14 · Updated 11 months ago
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models. ☆66 · Updated 5 months ago
- High-performance Transformer implementation in C++. ☆132 · Updated 7 months ago
- ☆136 · Updated 3 months ago
- ☆50 · Updated 8 months ago
- ☆229 · Updated last year
- InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management (OSDI'24) ☆152 · Updated last year
- ☆101 · Updated 3 months ago
- A collection of benchmarks to measure basic GPU capabilities ☆412 · Updated 6 months ago