Ternary Gradients to Reduce Communication in Distributed Deep Learning (TensorFlow)
☆182 · Updated Nov 19, 2018
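TernGrad quantizes each gradient component to one of three levels, a shared scaler times {-1, 0, +1}, using stochastic rounding so the quantized gradient stays unbiased in expectation. A minimal NumPy sketch of that ternarization step (illustrative only, not the repo's TensorFlow implementation):

```python
import numpy as np

def ternarize(grad, rng=np.random.default_rng(0)):
    """Stochastically quantize a gradient to s * {-1, 0, +1} (TernGrad-style sketch)."""
    s = np.max(np.abs(grad))           # scaler: the max absolute component
    if s == 0:
        return np.zeros_like(grad)
    p = np.abs(grad) / s               # keep-probability per component
    b = rng.random(grad.shape) < p     # Bernoulli(|g_i| / s) mask
    return s * np.sign(grad) * b       # ternary values {-s, 0, +s}

g = np.array([0.5, -1.0, 0.1, 0.0])
t = ternarize(g)
# E[t_i] = s * sign(g_i) * |g_i|/s = g_i, so the quantizer is unbiased;
# only the sign pattern and one float per tensor need to be communicated.
```

Because each component needs only ~2 bits plus one shared float, the worker-to-server traffic shrinks by roughly 16x versus 32-bit gradients, at the cost of extra gradient variance.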
Alternatives and similar repositories for terngrad
Users interested in terngrad are comparing it to the libraries listed below.
- QSGD-TF ☆21 · Updated May 15, 2019
- Code for the signSGD paper ☆93 · Updated Jan 12, 2021
- Sparsified SGD with Memory: https://arxiv.org/abs/1809.07599 ☆58 · Updated Oct 25, 2018
- Vector quantization for stochastic gradient descent ☆35 · Updated May 12, 2020
- SGD with compressed gradients and error-feedback: https://arxiv.org/abs/1901.09847 ☆32 · Updated Jul 25, 2024
- Atomo: Communication-efficient Learning via Atomic Sparsification ☆28 · Updated Dec 9, 2018
- Caffe for Sparse and Low-rank Deep Neural Networks ☆382 · Updated Mar 8, 2020
- Stochastic Gradient Push for Distributed Deep Learning ☆171 · Updated Apr 5, 2023
- [ICLR 2018] Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training ☆225 · Updated Jul 10, 2024
- gTop-k S-SGD: A Communication-Efficient Distributed Synchronous SGD Algorithm for Deep Learning ☆37 · Updated Aug 19, 2019
- ☆77 · Updated Jun 7, 2019
- Understanding Top-k Sparsification in Distributed Deep Learning ☆24 · Updated Nov 15, 2019
- Layer-wise Sparsification of Distributed Deep Learning ☆10 · Updated Jul 6, 2020
- GRACE - GRAdient ComprEssion for distributed deep learning ☆139 · Updated Jul 23, 2024
- ☆33 · Updated Dec 3, 2019
- Caffe Implementation for Incremental Network Quantization ☆191 · Updated Jul 29, 2018
- Sparse Recurrent Neural Networks -- Pruning Connections and Hidden Sizes (TensorFlow) ☆74 · Updated Jul 25, 2020
- A high-performance system for customized-precision distributed deep learning ☆12 · Updated Dec 10, 2020
- Parallel SGD, done locally and remotely ☆14 · Updated May 19, 2016
- ☆12 · Updated Nov 15, 2018
- ☆28 · Updated Oct 21, 2020
- DRACO: Byzantine-resilient Distributed Training via Redundant Gradients ☆23 · Updated Dec 9, 2018
- A compressed adaptive optimizer for training large-scale deep learning models using PyTorch ☆25 · Updated Nov 26, 2019
- Code for "Adaptive Gradient Quantization for Data-Parallel SGD", published in NeurIPS 2020.☆30Jan 14, 2021Updated 5 years ago
- Code example for the ICLR 2018 oral paper☆151May 31, 2018Updated 7 years ago
- Implementation of Ternary Weight Networks In Caffe☆63Nov 29, 2016Updated 9 years ago
- Sketched SGD☆28Jul 4, 2020Updated 5 years ago
- Practical low-rank gradient compression for distributed optimization: https://arxiv.org/abs/1905.13727☆149Oct 29, 2024Updated last year
- Implementation of (overlap) local SGD in Pytorch☆34Jul 12, 2020Updated 5 years ago
- deep learning model compression based on keras☆32Aug 10, 2018Updated 7 years ago
- Artifacts of VLDB'22 paper "COMET: A Novel Memory-Efficient Deep Learning TrainingFramework by Using Error-Bounded Lossy Compression"☆10Aug 2, 2022Updated 3 years ago
- LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks☆245Aug 30, 2022Updated 3 years ago
- Caffe: a fast open framework for deep learning ☆13 · Updated Jul 19, 2016
- Ristretto: Quantization and compression of large AI models. Author: Philipp Gysel ☆288 · Updated Jan 24, 2026
- Papers and blogs related to distributed deep learning ☆96 · Updated Nov 22, 2017
- Collective communications library with various primitives for multi-machine training ☆1,407 · Updated Mar 20, 2026
- Sublinear memory optimization for deep learning, reducing GPU memory cost to train deeper nets ☆28 · Updated Apr 22, 2016
- Implement distributed machine learning with PyTorch + OpenMPI ☆53 · Updated Mar 22, 2019
- Code for reproducing experiments performed for Accordion ☆13 · Updated Jun 11, 2021
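Several of the repos above (Sparsified SGD with Memory, gTop-k S-SGD, the top-k sparsification study, Deep Gradient Compression) build on the same core pattern: send only the k largest-magnitude gradient components and carry the dropped remainder forward as an error-feedback residual. A minimal NumPy sketch of that pattern, with all names illustrative rather than taken from any listed repo:

```python
import numpy as np

def topk_with_error_feedback(grad, residual, k):
    """Top-k gradient sparsification with local error feedback (illustrative sketch)."""
    corrected = grad + residual                # add back previously dropped mass
    idx = np.argsort(np.abs(corrected))[-k:]   # indices of the k largest magnitudes
    sparse = np.zeros_like(corrected)
    sparse[idx] = corrected[idx]               # only these k values are transmitted
    new_residual = corrected - sparse          # remember what was dropped this round
    return sparse, new_residual

g = np.array([0.3, -2.0, 0.05, 1.1, -0.2])
sparse, r = topk_with_error_feedback(g, np.zeros_like(g), k=2)
# sparse keeps only the two largest-magnitude entries (-2.0 and 1.1);
# r holds the dropped components, to be re-added on the next step.
```

The residual is what distinguishes error-feedback methods from plain top-k: no gradient mass is ever lost, only delayed, which is why these methods tolerate far higher sparsity than unbiased quantizers such as TernGrad or QSGD.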