This is the proof-of-concept CPU implementation of ASPEN used for the NeurIPS'23 paper ASPEN: Breaking Operator Barriers for Efficient Parallelization of Deep Neural Networks.
☆13 · Updated Apr 4, 2024
Alternatives and similar repositories for ASPEN
Users interested in ASPEN are comparing it to the repositories listed below.
- zTT: Learning-based DVFS with Zero Thermal Throttling for Mobile Devices [MobiSys'21], Artifact Evaluation · ☆28 · Updated May 10, 2021
- Opara is a lightweight and resource-aware DNN operator parallel scheduling framework to accelerate the execution of DNN inference on GPUs… · ☆23 · Updated Dec 19, 2024
- A source-to-source compiler for optimizing CUDA dynamic parallelism by aggregating launches · ☆15 · Updated Jun 21, 2019
- Open deep learning compiler stack for CPU, GPU, and specialized accelerators · ☆19 · Updated this week
- A multiphase-field model based on machine learning methods · ☆49 · Updated Feb 10, 2022
- Neural-network-compatible DDEs · ☆13 · Updated Apr 8, 2025
- Kinematic and dynamic models of continuum and articulated soft robots · ☆15 · Updated Nov 22, 2025
- Projection operator method for statistical data analysis · ☆10 · Updated Mar 11, 2025
- DECAF is a tool that measures the performance of cloud gaming platforms such as Google Stadia, Amazon Luna, and NVIDIA GeForce NOW · ☆12 · Updated Dec 17, 2021
- Toolkit for Bayesian scaling analysis · ☆14 · Updated Sep 8, 2022
- The official implementation of the paper "SimVP: Towards Simple yet Powerful Spatiotemporal Predictive Learning" · ☆10 · Updated Jan 2, 2024