kimihe / Octo
Create tiny ML systems for on-device learning.
☆20 · Updated 3 years ago
Related projects:
- MobiSys#114 · ☆21 · Updated last year
- Partial implementation of the paper "Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training" · ☆31 · Updated 3 years ago
- Ok-Topk is a scheme for distributed training with sparse gradients. Ok-Topk integrates a novel sparse allreduce algorithm (less than 6k c… · ☆23 · Updated last year
- Federated Dynamic Sparse Training · ☆30 · Updated 2 years ago
- A Cluster-Wide Model Manager to Accelerate DNN Training via Automated Training Warmup · ☆31 · Updated last year
- A curated list of early exiting · ☆24 · Updated 3 weeks ago
- ☆18 · Updated 2 years ago
- ☆10 · Updated 3 years ago
- Measuring and predicting on-device metrics (latency, power, etc.) of machine learning models · ☆66 · Updated last year
- Dual-way gradient sparsification approach for async DNN training, based on PyTorch · ☆11 · Updated last year
- Code for "Adaptive Gradient Quantization for Data-Parallel SGD", published in NeurIPS 2020 · ☆28 · Updated 3 years ago
- Vector quantization for stochastic gradient descent · ☆33 · Updated 4 years ago
- ☆14 · Updated 2 years ago
- ☆42 · Updated 2 years ago
- PipeEdge: Pipeline Parallelism for Large-Scale Model Inference on Heterogeneous Edge Devices · ☆25 · Updated 7 months ago
- [ICML 2021] "Double-Win Quant: Aggressively Winning Robustness of Quantized Deep Neural Networks via Random Precision Training and Inferen… · ☆12 · Updated 2 years ago
- ☆93 · Updated 8 months ago
- Changing several bits to overwhelm the quantized CNN · ☆39 · Updated 4 years ago
- ☆13 · Updated 3 years ago
- [ACM SoCC'22] Pisces: Efficient Federated Learning via Guided Asynchronous Training · ☆10 · Updated 9 months ago
- Federated Learning Framework Benchmark (UniFed) · ☆47 · Updated last year
- ☆16 · Updated 3 months ago
- The open-source project for "Mandheling: Mixed-Precision On-Device DNN Training with DSP Offloading" [MobiCom'2022] · ☆18 · Updated 2 years ago
- ☆28 · Updated 3 years ago
- FedNAS: Federated Deep Learning via Neural Architecture Search · ☆50 · Updated 3 years ago
- Post-training sparsity-aware quantization · ☆32 · Updated last year
- Understanding Top-k Sparsification in Distributed Deep Learning · ☆22 · Updated 4 years ago
- [ICLR-2020] Dynamic Sparse Training: Find Efficient Sparse Network From Scratch With Trainable Masked Layers · ☆31 · Updated 4 years ago
- InFi is a library for building input filters for resource-efficient inference · ☆37 · Updated 10 months ago
- This is a list of awesome edgeAI inference related papers · ☆84 · Updated 8 months ago
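Several of the entries above (Deep Gradient Compression, Ok-Topk, "Understanding Top-k Sparsification") revolve around the same core idea: transmit only the k largest-magnitude gradient entries and carry the dropped remainder forward as error feedback. A minimal sketch of that idea, in plain NumPy; the function name `topk_sparsify` and the example values are illustrative, not taken from any of the listed repos:

```python
import numpy as np

def topk_sparsify(grad, k):
    """Keep the k largest-magnitude entries of a gradient; zero the rest.

    Returns the sparse gradient (what would be communicated) and the
    residual, which error-feedback schemes add back into the next step's
    gradient so no update mass is permanently lost.
    """
    flat = grad.ravel()
    # Indices of the k entries with the largest absolute value.
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    sparse = np.zeros_like(flat)
    sparse[idx] = flat[idx]
    residual = flat - sparse  # everything that was dropped this round
    return sparse.reshape(grad.shape), residual.reshape(grad.shape)

# One step of the error-feedback loop: sparsify (gradient + carried residual).
g = np.array([0.5, -2.0, 0.1, 3.0, -0.05, 1.2])
carry = np.zeros_like(g)
sparse, carry = topk_sparsify(g + carry, k=2)
```

With `k=2` only the entries `3.0` and `-2.0` survive; the other four values move into `carry` and are re-added before the next sparsification, which is what makes this compression scheme converge in practice.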