kimihe / Octo
Create tiny ML systems for on-device learning.
☆20 · Updated 3 years ago
Alternatives and similar repositories for Octo:
Users interested in Octo are comparing it to the libraries listed below.
- [ICDCS 2023] Evaluation and Optimization of Gradient Compression for Distributed Deep Learning ☆10 · Updated last year
- Partial implementation of the paper "Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training" ☆31 · Updated 4 years ago
- MobiSys#114 ☆21 · Updated last year
- Federated Dynamic Sparse Training ☆29 · Updated 2 years ago
- Code for "Adaptive Gradient Quantization for Data-Parallel SGD", published in NeurIPS 2020. ☆29 · Updated 4 years ago
- Post-training sparsity-aware quantization ☆34 · Updated last year
- A curated list of early exiting (LLM, CV, NLP, etc.) ☆38 · Updated 4 months ago
- FedNAS: Federated Deep Learning via Neural Architecture Search ☆52 · Updated 3 years ago
- Ok-Topk is a scheme for distributed training with sparse gradients. Ok-Topk integrates a novel sparse allreduce algorithm (less than 6k c… ☆23 · Updated 2 years ago
- A Cluster-Wide Model Manager to Accelerate DNN Training via Automated Training Warmup ☆34 · Updated 2 years ago
- [ICML 2021] "Double-Win Quant: Aggressively Winning Robustness of Quantized Deep Neural Networks via Random Precision Training and Inferen… ☆13 · Updated 2 years ago
- ☆10 · Updated 3 years ago
- Measuring and predicting on-device metrics (latency, power, etc.) of machine learning models ☆66 · Updated last year
- [ACM SoCC'22] Pisces: Efficient Federated Learning via Guided Asynchronous Training ☆12 · Updated last year
- Vector quantization for stochastic gradient descent. ☆33 · Updated 4 years ago
- ☆18 · Updated 2 years ago
- Official implementation of the paper LIMPQ, "Mixed-Precision Neural Network Quantization via Learned Layer-wise Importance" (ECCV 2022) ☆50 · Updated last year
- ☆14 · Updated 3 years ago
- [ICLR 2020] Dynamic Sparse Training: Find Efficient Sparse Network From Scratch With Trainable Masked Layers ☆31 · Updated 4 years ago
- Dual-way gradient sparsification approach for async DNN training, based on PyTorch. ☆11 · Updated 2 years ago
- [NeurIPS 2021] Sparse Training via Boosting Pruning Plasticity with Neuroregeneration ☆31 · Updated last year
- Understanding Top-k Sparsification in Distributed Deep Learning (see the sketch after this list) ☆24 · Updated 5 years ago
- [ICML 2021] "Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training" by Shiwei Liu, Lu Yin, De… ☆46 · Updated last year
- ☆14 · Updated 3 years ago
- ☆45 · Updated 4 years ago
- Source code for the paper "A Latency-Predictable Multi-Dimensional Optimization Framework for DNN-driven Autonomous Systems" ☆20 · Updated 4 years ago
- Federated Learning Framework Benchmark (UniFed) ☆48 · Updated last year
- ☆43 · Updated 2 years ago
- ☆25 · Updated 3 years ago
- A collection of research papers on efficient training of DNNs ☆69 · Updated 2 years ago
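
Several of the entries above (Deep Gradient Compression, Ok-Topk, "Understanding Top-k Sparsification in Distributed Deep Learning") revolve around top-k gradient sparsification. As rough orientation, here is a minimal NumPy sketch of the generic idea: keep only the k largest-magnitude gradient entries and carry the dropped remainder forward as an error-feedback residual. The function name `topk_sparsify` and the residual handling are illustrative assumptions, not code taken from any of the listed repositories.

```python
import numpy as np

def topk_sparsify(grad, k, residual):
    """Generic top-k gradient sparsification sketch (illustrative, not from the repos above).

    Keeps only the k largest-magnitude entries of (grad + residual) and
    returns them as the update to communicate, plus the new local residual
    (error feedback) so dropped coordinates are re-applied in later steps.
    """
    accumulated = grad + residual                       # add back previously dropped mass
    flat = accumulated.ravel()
    k = min(k, flat.size)
    idx = np.argpartition(np.abs(flat), -k)[-k:]        # indices of the k largest magnitudes
    sparse = np.zeros_like(flat)
    sparse[idx] = flat[idx]                             # only these entries would be transmitted
    new_residual = (flat - sparse).reshape(grad.shape)  # everything dropped stays local
    return sparse.reshape(grad.shape), new_residual

# Tiny usage example: roughly 1% of the gradient entries survive each step.
rng = np.random.default_rng(0)
grad = rng.normal(size=(1000,))
residual = np.zeros_like(grad)
update, residual = topk_sparsify(grad, k=10, residual=residual)
print(np.count_nonzero(update), "nonzero entries to send;",
      round(float(np.abs(residual).sum()), 2), "total magnitude kept locally")
```

In an actual distributed setting, only the surviving indices and values are exchanged between workers (for example via a sparse allreduce), which is where schemes such as Ok-Topk and Deep Gradient Compression differ in the details.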