yandex-research / btard
Code for the paper "Secure Distributed Training at Scale" (ICML 2022)
☆16 · Updated 8 months ago
Alternatives and similar repositories for btard
Users interested in btard are comparing it to the libraries listed below.
- Practical low-rank gradient compression for distributed optimization: https://arxiv.org/abs/1905.13727 ☆148 · Updated last year
- Code release for REPAIR: REnormalizing Permuted Activations for Interpolation Repair ☆50 · Updated last year
- Compression schema for gradients of activations in the backward pass ☆44 · Updated 2 years ago
- "Towards Crowdsourced Training of Large Neural Networks using Decentralized Mixture-of-Experts" (NeurIPS 2020), original PyTorch implementation ☆56 · Updated 4 years ago
- Code accompanying the NeurIPS 2020 paper: WoodFisher (Singh & Alistarh, 2020) ☆53 · Updated 4 years ago
- The implementation for the MLSys 2023 paper: "Cuttlefish: Low-rank Model Training without All The Tuning" ☆43 · Updated 2 years ago
- ☆26 · Updated 2 years ago
- ☆93 · Updated 3 years ago
- Python library for argument and configuration management ☆55 · Updated 2 years ago
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆80 · Updated 2 years ago
- "Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices", official implementation ☆29 · Updated 8 months ago
- [ICLR 2023] Eva: Practical Second-order Optimization with Kronecker-vectorized Approximation ☆12 · Updated 2 years ago
- Code for "Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot" ☆41 · Updated 4 years ago
- Distributed K-FAC preconditioner for PyTorch ☆91 · Updated this week
- Code for "Training Neural Networks with Fixed Sparse Masks" (NeurIPS 2021) ☆59 · Updated 3 years ago
- Experiments from "The Generalization-Stability Tradeoff in Neural Network Pruning": https://arxiv.org/abs/1906.03728 ☆14 · Updated 5 years ago
- Model Fusion via Optimal Transport (NeurIPS 2020) ☆151 · Updated 2 years ago
- ☆18 · Updated last year
- nanoGPT-like codebase for LLM training ☆110 · Updated this week
- Latest Weight Averaging (NeurIPS HITY 2022) ☆31 · Updated 2 years ago
- Data for "Datamodels: Predicting Predictions with Training Data" ☆97 · Updated 2 years ago
- ☆70 · Updated last year
- ☆71 · Updated 10 months ago
- ☆37 · Updated 3 years ago
- ☆33 · Updated last year
- [ICML 2022] "Training Your Sparse Neural Network Better with Any Mask". Ajay Jaiswal, Haoyu Ma, Tianlong Chen, Ying Ding, and Zhangyang Wang ☆30 · Updated 3 years ago
- Lightweight torch implementation of RigL, a sparse-to-sparse optimizer ☆60 · Updated 3 years ago
- ☆45 · Updated last week
- ☆37 · Updated 2 years ago
- Code associated with the paper "Fine-tuning Language Models over Slow Networks using Activation Compression with Guarantees" ☆27 · Updated 2 years ago