yandex-research / moshpit-sgd
"Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices", official implementation
☆30 · Updated 11 months ago
Alternatives and similar repositories for moshpit-sgd
Users interested in moshpit-sgd are comparing it to the libraries listed below.
- "Towards Crowdsourced Training of Large Neural Networks using Decentralized Mixture-of-Experts" (NeurIPS 2020), original PyTorch implementation ☆56 · Updated 5 years ago
- Implementation of ACProp (momentum centering and asynchronous update for adaptive gradient methods, NeurIPS 2021) ☆16 · Updated 4 years ago
- Factorized Neural Layers ☆31 · Updated 2 years ago
- Code associated with the paper **Fine-tuning Language Models over Slow Networks using Activation Compression with Guarantees** ☆28 · Updated 2 years ago
- A Learnable LSH Framework for Efficient NN Training ☆34 · Updated 4 years ago
- An adaptive training algorithm for residual networks ☆17 · Updated 5 years ago
- Hyperparameter tuning via uncertainty modeling ☆49 · Updated last year
- Official repository for the paper "Zero-Shot AutoML with Pretrained Models" ☆48 · Updated 2 years ago
- JAX implementation of "Fine-Tuning Language Models with Just Forward Passes" ☆19 · Updated 2 years ago
- ☆22 · Updated 5 years ago
- ModelDiff: A Framework for Comparing Learning Algorithms ☆58 · Updated 2 years ago
- ☆75 · Updated 3 years ago
- Memory-efficient transformer. Work in progress. ☆19 · Updated 3 years ago
- Latest Weight Averaging (NeurIPS HITY 2022) ☆32 · Updated 2 years ago
- Official repository for the paper "Going Beyond Linear Transformers with Recurrent Fast Weight Programmers" (NeurIPS 2021) ☆51 · Updated 7 months ago
- PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-scale Models (ICML 2021) ☆55 · Updated 4 years ago
- NeurIPS 2021 Few-shot learning competition ☆26 · Updated 4 years ago
- [NeurIPS 2022] DataMUX: Data Multiplexing for Neural Networks ☆60 · Updated 3 years ago
- A GPT, made only of MLPs, in Jax ☆59 · Updated 4 years ago
- A PyTorch implementation of the LSTM experiments in the paper "Why Gradient Clipping Accelerates Training: A Theoretical Justification for Adaptivity" ☆47 · Updated 5 years ago
- ☆21 · Updated 2 years ago
- ☆23 · Updated 2 years ago
- ☆29 · Updated 3 years ago
- Code accompanying our paper "Feature Learning in Infinite-Width Neural Networks" (https://arxiv.org/abs/2011.14522) ☆63 · Updated 4 years ago
- 👑 PyTorch code for the Nero optimiser ☆20 · Updated 3 years ago
- Recycling diverse models ☆46 · Updated 3 years ago
- A benchmark of data-centric tasks from across the machine learning lifecycle ☆71 · Updated 3 years ago
- Official repository for the paper "Can You Learn an Algorithm? Generalizing from Easy to Hard Problems with Recurrent Networks" ☆61 · Updated 3 years ago
- The implementation for the MLSys 2023 paper "Cuttlefish: Low-rank Model Training without All The Tuning" ☆45 · Updated 2 years ago
- Distributed K-FAC preconditioner for PyTorch ☆94 · Updated this week