Ternary Gradients to Reduce Communication in Distributed Deep Learning (TensorFlow)
☆182 · Nov 19, 2018 · Updated 7 years ago
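For context, here is a minimal NumPy sketch of the ternary quantizer described in the TernGrad paper (s = max|g|; each coordinate keeps its sign with probability |g_i|/s, which makes the quantizer unbiased). The function name and example gradient are illustrative assumptions, not code from this repository:

```python
import numpy as np

def ternarize(grad, rng=None):
    """Stochastically quantize a gradient tensor to {-s, 0, +s}.

    TernGrad's rule: s = max|g|, and coordinate i keeps its sign with
    probability |g_i| / s, so the quantizer is unbiased in expectation:
    E[q_i] = s * sign(g_i) * |g_i| / s = g_i.
    """
    rng = rng or np.random.default_rng()
    s = float(np.max(np.abs(grad)))
    if s == 0.0:
        return np.zeros_like(grad)
    keep = rng.random(grad.shape) < np.abs(grad) / s  # Bernoulli mask
    return s * np.sign(grad) * keep

# A worker now sends only the scalar s plus roughly 2 bits per
# coordinate, instead of a 32-bit float per coordinate.
g = np.array([0.8, -0.1, 0.05, -0.6])
print(ternarize(g))
```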
Alternatives and similar repositories for terngrad
Users interested in terngrad are comparing it to the libraries listed below.
- QSGD-TF ☆21 · May 15, 2019 · Updated 6 years ago
- Code for the signSGD paper (a majority-vote sketch appears after this list) ☆94 · Jan 12, 2021 · Updated 5 years ago
- Sparsified SGD with Memory: https://arxiv.org/abs/1809.07599 (a top-k error-feedback sketch appears after this list) ☆58 · Oct 25, 2018 · Updated 7 years ago
- Vector quantization for stochastic gradient descent ☆36 · May 12, 2020 · Updated 5 years ago
- SGD with compressed gradients and error-feedback: https://arxiv.org/abs/1901.09847 ☆32 · Jul 25, 2024 · Updated last year
- Atomo: Communication-efficient Learning via Atomic Sparsification ☆28 · Dec 9, 2018 · Updated 7 years ago
- Caffe for Sparse and Low-rank Deep Neural Networks ☆382 · Mar 8, 2020 · Updated 6 years ago
- Stochastic Gradient Push for Distributed Deep Learning ☆171 · Apr 5, 2023 · Updated 3 years ago
- [ICLR 2018] Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training ☆226 · Jul 10, 2024 · Updated last year
- gTop-k S-SGD: A Communication-Efficient Distributed Synchronous SGD Algorithm for Deep Learning ☆37 · Aug 19, 2019 · Updated 6 years ago
- ☆77 · Jun 7, 2019 · Updated 6 years ago
- Understanding Top-k Sparsification in Distributed Deep Learning ☆24 · Nov 15, 2019 · Updated 6 years ago
- GRACE - GRAdient ComprEssion for distributed deep learning ☆139 · Jul 23, 2024 · Updated last year
- Caffe Implementation for Incremental Network Quantization ☆191 · Jul 29, 2018 · Updated 7 years ago
- Sparse Recurrent Neural Networks -- Pruning Connections and Hidden Sizes (TensorFlow) ☆74 · Jul 25, 2020 · Updated 5 years ago
- A high-performance system for customized-precision distributed deep learning ☆12 · Dec 10, 2020 · Updated 5 years ago
- Parallel SGD, run locally and remotely ☆14 · May 19, 2016 · Updated 9 years ago
- ☆12 · Nov 15, 2018 · Updated 7 years ago
- ☆28 · Oct 21, 2020 · Updated 5 years ago
- PMLS-Caffe: Distributed Deep Learning Framework for Parallel ML System ☆194 · May 10, 2018 · Updated 7 years ago
- DRACO: Byzantine-resilient Distributed Training via Redundant Gradients ☆23 · Dec 9, 2018 · Updated 7 years ago
- MG-WFBP: Merging Gradients Wisely for Efficient Communication in Distributed Deep Learning ☆12 · Apr 26, 2021 · Updated 5 years ago
- A compressed adaptive optimizer for training large-scale deep learning models in PyTorch ☆25 · Nov 26, 2019 · Updated 6 years ago
- Code example for the ICLR 2018 oral paper ☆150 · May 31, 2018 · Updated 7 years ago
- Implementation of Ternary Weight Networks in Caffe ☆63 · Nov 29, 2016 · Updated 9 years ago
- Practical low-rank gradient compression for distributed optimization: https://arxiv.org/abs/1905.13727 ☆150 · Oct 29, 2024 · Updated last year
- Implementation of (overlap) local SGD in PyTorch ☆34 · Jul 12, 2020 · Updated 5 years ago
- Artifacts of the VLDB'22 paper "COMET: A Novel Memory-Efficient Deep Learning Training Framework by Using Error-Bounded Lossy Compression" ☆10 · Aug 2, 2022 · Updated 3 years ago
- Caffe: a fast open framework for deep learning ☆13 · Jul 19, 2016 · Updated 9 years ago
- Ristretto: Quantization and compression of large AI models. Author: Philipp Gysel. ☆288 · Jan 24, 2026 · Updated 3 months ago
- Papers and blogs related to distributed deep learning ☆96 · Nov 22, 2017 · Updated 8 years ago
- Collective communications library with various primitives for multi-machine training ☆1,422 · Apr 21, 2026 · Updated 2 weeks ago
- Sublinear memory optimization for deep learning; reduces GPU memory cost to train deeper nets ☆28 · Apr 22, 2016 · Updated 10 years ago
- Distributed machine learning implemented with PyTorch + OpenMPI ☆53 · Mar 22, 2019 · Updated 7 years ago
- Code for reproducing the experiments in Accordion ☆13 · Jun 11, 2021 · Updated 4 years ago
- Implementation of the research paper "Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training" ☆18 · Aug 14, 2019 · Updated 6 years ago
- A Tool for Automatic Parallelization of Deep Learning Training in Distributed Multi-GPU Environments ☆130 · Feb 21, 2022 · Updated 4 years ago
- MPI for Torch ☆60 · May 22, 2017 · Updated 8 years ago
- Byzantine-resilient distributed SGD with TensorFlow ☆40 · Jan 22, 2021 · Updated 5 years ago
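Two of the techniques named above lend themselves to short illustrations. First, a minimal NumPy sketch of the majority-vote aggregation idea from the signSGD paper; the function name and example gradients are illustrative assumptions, not code from that repository:

```python
import numpy as np

def majority_vote(worker_grads):
    """signSGD with majority vote: each worker transmits only sign(g)
    (1 bit per coordinate); the server returns the elementwise
    majority sign, which workers apply with a fixed step size."""
    votes = np.sum([np.sign(g) for g in worker_grads], axis=0)
    return np.sign(votes)

grads = [np.array([0.5, -0.2, 0.1]),
         np.array([0.3, 0.4, -0.2]),
         np.array([-0.1, -0.3, 0.2])]
print(majority_vote(grads))  # -> [ 1. -1.  1.]
```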
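Second, a minimal sketch of top-k sparsification with error feedback ("memory"), the idea behind the Sparsified SGD with Memory and error-feedback entries above; again, all names and values are illustrative:

```python
import numpy as np

def topk_with_memory(grad, memory, k):
    """Top-k sparsification with error feedback: the residual from
    earlier rounds is added back before selecting the k largest-
    magnitude coordinates, and whatever is not transmitted becomes
    the new residual, so compression error is corrected over time
    rather than thrown away."""
    corrected = grad + memory
    idx = np.argpartition(np.abs(corrected), -k)[-k:]  # top-k indices
    sparse = np.zeros_like(corrected)
    sparse[idx] = corrected[idx]
    return sparse, corrected - sparse  # (message to send, new residual)

g = np.array([0.9, -0.2, 0.05, -0.7, 0.1])
mem = np.zeros_like(g)
msg, mem = topk_with_memory(g, mem, k=2)
print(msg, mem)
```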