hclhkbu / gtopkssgd
gTop-k S-SGD: A Communication-Efficient Distributed Synchronous SGD Algorithm for Deep Learning
☆37Aug 19, 2019Updated 6 years ago
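As a rough illustration of the Top-k gradient sparsification idea that gTop-k S-SGD builds on, here is a minimal PyTorch sketch. It is an assumption-laden example, not the repository's actual API: the helper names `sparsify_topk` and `desparsify` are illustrative only, and the real gTop-k scheme additionally merges sparse gradients globally across workers.

```python
# Minimal sketch of Top-k gradient sparsification (the idea behind gTop-k S-SGD).
# Illustrative only: function names are hypothetical, not the repo's API.
import math
import torch

def sparsify_topk(grad: torch.Tensor, k: int):
    """Keep only the k largest-magnitude entries of a flattened gradient."""
    flat = grad.flatten()
    _, idx = torch.topk(flat.abs(), k)
    values = flat[idx]
    return values, idx  # only (values, indices) would be communicated

def desparsify(values: torch.Tensor, idx: torch.Tensor, shape):
    """Rebuild a dense gradient from the sparse (values, indices) pair."""
    flat = torch.zeros(math.prod(shape), dtype=values.dtype)
    flat[idx] = values
    return flat.view(shape)

# Example: keep the top 1% of a 10,000-element gradient before communication.
g = torch.randn(100, 100)
vals, idx = sparsify_topk(g, k=100)
g_reconstructed = desparsify(vals, idx, g.shape)
```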
Alternatives and similar repositories for gtopkssgd
Users that are interested in gtopkssgd are comparing it to the libraries listed below
Sorting:
- SGD with compressed gradients and error-feedback: https://arxiv.org/abs/1901.09847☆32Jul 25, 2024Updated last year
- Sparsified SGD with Memory: https://arxiv.org/abs/1809.07599☆58Oct 25, 2018Updated 7 years ago
- QSGD-TF☆21May 15, 2019Updated 6 years ago
- Stochastic Gradient Push for Distributed Deep Learning☆170Apr 5, 2023Updated 2 years ago
- Atomo: Communication-efficient Learning via Atomic Sparsification☆28Dec 9, 2018Updated 7 years ago
- Sketched SGD☆28Jul 4, 2020Updated 5 years ago
- ddl-benchmarks: Benchmarks for Distributed Deep Learning☆36May 29, 2020Updated 5 years ago
- ☆10Jun 4, 2021Updated 4 years ago
- A decentralised application that creates high quality machine learning datasets☆13Jan 22, 2019Updated 7 years ago
- GRACE - GRAdient ComprEssion for distributed deep learning☆139Jul 23, 2024Updated last year
- MG-WFBP: Merging Gradients Wisely for Efficient Communication in Distributed Deep Learning☆12Apr 26, 2021Updated 4 years ago
- Code for paper: Variance Reduced Local SGD with Lower Communication Complexity☆12May 20, 2020Updated 5 years ago
- Code for "Adaptive Gradient Quantization for Data-Parallel SGD", published in NeurIPS 2020.☆30Jan 14, 2021Updated 5 years ago
- ☆12Nov 15, 2018Updated 7 years ago
- vector quantization for stochastic gradient descent.☆35May 12, 2020Updated 5 years ago
- FedNew: A Communication-Efficient and Privacy-Preserving Newton-Type Method for Federated Learning☆18Jun 2, 2022Updated 3 years ago
- divide-and-conquer eigenvalues algorithm for symmetric tridiagonal matrix, designed by Cuppen☆16Mar 1, 2020Updated 5 years ago
- Practical low-rank gradient compression for distributed optimization: https://arxiv.org/abs/1905.13727☆149Oct 29, 2024Updated last year
- [ICLR 2018] Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training☆226Jul 10, 2024Updated last year
- Adaptive gradient sparsification for efficient federated learning: an online learning approach☆18Oct 31, 2020Updated 5 years ago
- ☆23Jun 5, 2019Updated 6 years ago
- implement distributed machine learning with Pytorch + OpenMPI☆53Mar 22, 2019Updated 6 years ago
- Ok-Topk is a scheme for distributed training with sparse gradients. Ok-Topk integrates a novel sparse allreduce algorithm (less than 6k c…☆27Dec 10, 2022Updated 3 years ago
- [C++] A simple in-memory key-value cache written with boost.asio☆11Jul 31, 2017Updated 8 years ago
- ☆30Oct 22, 2020Updated 5 years ago
- ☆33Dec 3, 2019Updated 6 years ago
- Bayesian Nonparametric Federated Learning of Neural Networks☆146May 29, 2019Updated 6 years ago
- java-like synchronized blocks in c++☆13Mar 8, 2014Updated 11 years ago
- Working through OS development course☆10Jan 25, 2018Updated 8 years ago
- Sparse Matrix Factorization (SMF) is a key component in many machine learning problems and there exists a variety of applications in real-w…☆11Jan 25, 2016Updated 10 years ago
- ☆90May 27, 2020Updated 5 years ago
- ☆42Feb 9, 2020Updated 6 years ago
- Code for the paper "Knowledge-Aware Federated Active Learning with Non-IID Data", ICCV2023☆10Sep 8, 2023Updated 2 years ago
- First causal agentic AI memory☆24Dec 22, 2025Updated last month
- React lunar calendar☆10May 5, 2015Updated 10 years ago
- Fast, highly concurrent C++ RPC framework, based on protobuf and boost::asio☆10Aug 16, 2019Updated 6 years ago
- Code for paper "Learning a Code: Machine Learning for Approximate Non-Linear Coded-Computation"☆11Dec 21, 2020Updated 5 years ago
- LASSO is a parallel regression model learning system☆69Nov 29, 2013Updated 12 years ago
- LINQ for C++11 done right☆15Feb 18, 2016Updated 9 years ago