yandex-research / btard
Code for the paper "Secure Distributed Training at Scale" (ICML 2022)
☆16 · Updated 4 months ago
Alternatives and similar repositories for btard
Users interested in btard are comparing it to the libraries listed below.
- "Towards Crowdsourced Training of Large Neural Networks using Decentralized Mixture-of-Experts" (NeurIPS 2020), original PyTorch implemen…☆56Updated 4 years ago
- Compression scheme for activation gradients in the backward pass ☆44 · Updated last year
- "Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices", official implementation☆29Updated 4 months ago
- Memory-efficient transformer. Work in progress. ☆19 · Updated 2 years ago
- ☆70 · Updated 9 months ago
- ☆17 · Updated 11 months ago
- ☆27 · Updated last year
- "Towards Understanding Sharpness-Aware Minimization" (ICML 2022) ☆35 · Updated 2 years ago
- "SGD with large step sizes learns sparse features" (ICML 2023) ☆32 · Updated 2 years ago
- Official code for "Distributed Deep Learning in Open Collaborations" (NeurIPS 2021) ☆116 · Updated 3 years ago
- Unofficial repository for "Towards Efficient and Scalable Sharpness-Aware Minimization" ☆36 · Updated last year
- PyTorch implementation of HashedNets