librahfacebook / BranchyNet_chainer
A branched neural network implemented on the principle of early exit for a portion of samples (supported by Chainer)
☆42 · Updated 5 years ago
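BranchyNet's idea, as the description says, is to attach side-branch classifiers at intermediate layers so that samples the early branch is already confident about (low entropy of its softmax output) exit immediately, while harder samples continue to the final classifier. Below is a minimal PyTorch-style sketch of that inference rule; the layer sizes, class names, and entropy threshold are illustrative assumptions, not the repository's Chainer code.

```python
# Minimal early-exit sketch in the spirit of BranchyNet (PyTorch, not the
# repo's Chainer code). Layer sizes and the entropy threshold are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BranchyMLP(nn.Module):
    def __init__(self, in_dim=784, hidden=256, classes=10, threshold=0.5):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.exit1 = nn.Linear(hidden, classes)   # early side-branch classifier
        self.stage2 = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())
        self.exit2 = nn.Linear(hidden, classes)   # final exit
        self.threshold = threshold                # entropy threshold for exiting

    @staticmethod
    def _entropy(logits):
        p = F.softmax(logits, dim=1)
        return -(p * torch.log(p + 1e-12)).sum(dim=1)  # per-sample entropy

    def forward(self, x):
        h = self.stage1(x)
        logits1 = self.exit1(h)
        if not self.training and self._entropy(logits1).max() < self.threshold:
            return logits1                        # confident: exit early
        logits2 = self.exit2(self.stage2(h))
        # Training returns every exit so a weighted sum of losses can be used.
        return (logits1, logits2) if self.training else logits2


model = BranchyMLP().eval()
out = model(torch.randn(8, 784))  # exits early only if the whole batch is confident
```

During training, BranchyNet optimizes a weighted sum of the losses at all exits, which is what makes the early branch accurate enough to absorb most of the easy samples.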
Related projects
Alternatives and complementary repositories for BranchyNet_chainer
- PyTorch-based early-exit network inspired by BranchyNet ☆29 · Updated last year
- Experiments on BranchyNet partitioning using the PyTorch framework ☆28 · Updated 4 years ago
- A DNN model partition demo ☆30 · Updated 4 years ago
- Autodidactic Neurosurgeon: Collaborative Deep Inference for Mobile Edge Intelligence via Online Learning ☆37 · Updated 3 years ago
- A helper tool for DNN_Partition that provides simple performance profiling of PyTorch models and supports model splitting ☆11 · Updated 3 years ago
- 2021 Summer Research Internship (UROP) project at Imperial College London, supervised by Prof. George Constantinides and Ben Biggs ☆16 · Updated last year
- ☆12 · Updated 4 years ago
- [IEEE Access] "Head Network Distillation: Splitting Distilled Deep Neural Networks for Resource-constrained Edge Computing Systems" and [… ☆35 · Updated last year
- A PyTorch implementation of the experiments in the paper "Neurosurgeon: Collaborative Intelligence Between the Cloud and Mobile Edge" ☆10 · Updated last year
- Two-stage pruning that favors distributed inference (the local device computes half of the model and uploads the features for further computation on a stron… ☆23 · Updated 6 years ago
- Deep neural network (DNN) implementation for inference tasks ☆13 · Updated 5 years ago
- Implementation of the paper "RTCoInfer: Real-time Edge-Cloud Collaborative CNN Inference for Stream Analytics on Ubiquitous Images" ☆13 · Updated 2 years ago
- ☆124 · Updated last year
- Auto-Split: A General Framework of Collaborative Edge-Cloud AI ☆12 · Updated 3 years ago
- Cloud-edge collaboration (collaborative inference) 📚 Neurosurgeon: Collaborative Intelligence Between the Cloud and Mobile Edge ☆63 · Updated last year
- Code for the paper "BottleNet++: An End-to-End Approach for Feature Compression in Device-Edge Co-Inference Systems" ☆49 · Updated 3 years ago
- Deep Compressive Offloading: Speeding Up Neural Network Inference by Trading Edge Computation for Network Latency ☆25 · Updated 3 years ago
- Federated Dynamic Sparse Training ☆29 · Updated 2 years ago
- Cloud-edge collaboration (collaborative inference) 📚 Dynamic adaptive DNN surgery for inference acceleration on the edge ☆30 · Updated last year
- Code for the paper "Joint Architecture Design and Workload Partitioning for DNN Inference on Industrial IoT Clusters" ☆11 · Updated last year
- ☆17 · Updated last year
- This is a repository for the implementation of the paper "Green, Quantized Federated Learning over Wireless Networks: An Energy-Efficient… ☆11 · Updated last year
- FedNAS: Federated Deep Learning via Neural Architecture Search ☆52 · Updated 3 years ago
- An implementation of the research paper "Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training". Deep g… ☆18 · Updated 5 years ago
- PyTorch implementation of the paper: Decomposing Vision Transformers for Collaborative Inference in Edge Devices ☆10 · Updated 3 months ago
- ☆16 · Updated 4 years ago
- Adaptive Offloading of Federated Learning on IoT Devices ☆68 · Updated 2 years ago
- PyTorch implementation of the paper: Multi-Agent Collaborative Inference via DNN Decoupling: Intermediate Feature Compression and Edge Le… ☆27 · Updated last year
- With the rapid development of mobile cloud computing and edge computing, and the wide adoption of artificial intelligence, the concept of Edge Intelligence has emerged. Deep neural networks (e.g., CNNs) are widely used in mobile intelligent applications, but the limited storage and compute resources of mobile devices cannot meet the demands of DNN computation. Neural network compression and acceleration techniques can accelerate… (see the split-computing sketch below) ☆285 · Updated 2 years ago
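Many of the repositories above (Neurosurgeon, DNN surgery, Auto-Split, BottleNet++) share one split-computing pattern that addresses the limited on-device resources mentioned in the last entry: run the first layers on the device, transmit the intermediate feature, and finish the model on an edge server or in the cloud. The sketch below only illustrates the pattern; the toy model, split point, and in-process hand-off are assumptions, not code from any listed project.

```python
# Hedged sketch of the split-computing pattern shared by several repos above.
# The split index and the "network hop" are illustrative assumptions.
import torch
import torch.nn as nn

full_model = nn.Sequential(              # stand-in for a real CNN backbone
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 10),
)

split = 4                                # hypothetical partition point
device_part = full_model[:split]         # runs on the mobile/edge device
server_part = full_model[split:]         # runs on the edge server / cloud

x = torch.randn(1, 3, 32, 32)            # a single input frame
feature = device_part(x)                 # computed locally

# In a real system the intermediate feature would be (optionally compressed
# and) sent over the network here; this sketch just hands it over in-process.
logits = server_part(feature)
print(logits.shape)                      # torch.Size([1, 10])
```

Systems such as Neurosurgeon choose the split point per deployment by profiling each candidate layer's compute time and output size and picking the partition that minimizes end-to-end latency or energy.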