☆94 · Jul 3, 2022 · Updated 3 years ago
Alternatives and similar repositories for DT-FM
Users interested in DT-FM are comparing it to the libraries listed below.
- Official code for "SWARM Parallelism: Training Large Models Can Be Surprisingly Communication-Efficient" ☆149 · Dec 11, 2023 · Updated 2 years ago
- Official code for "Distributed Deep Learning in Open Collaborations" (NeurIPS 2021) ☆118 · Jan 13, 2022 · Updated 4 years ago
- Website for Systems Research Seminar at UIUC ☆20 · Updated this week
- A resilient distributed training framework ☆97 · Apr 11, 2024 · Updated last year
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters. ☆44 · Nov 4, 2022 · Updated 3 years ago
- ☆15 · Sep 15, 2022 · Updated 3 years ago
- ☆251 · Jul 25, 2024 · Updated last year
- ☆78 · May 4, 2021 · Updated 4 years ago
- PyTorch training at CSCS ☆20 · Jul 4, 2025 · Updated 8 months ago
- [ICML 2023] "Robust Weight Signatures: Gaining Robustness as Easy as Patching Weights?" by Ruisi Cai, Zhenyu Zhang, Zhangyang Wang ☆16 · May 4, 2023 · Updated 2 years ago
- Here you find the public transportation data sets from Zurich, San Francisco and Geneva for the www.urbandatachallenge.org Read the Getti… ☆69 · Mar 22, 2013 · Updated 12 years ago
- Early exit ensembles ☆12 · Dec 4, 2021 · Updated 4 years ago
- A comprehensive overview of Data Distillation and Condensation (DDC). DDC is a data-centric task where a representative (i.e., small but … ☆13 · Dec 1, 2022 · Updated 3 years ago
- ☆18 · Apr 17, 2019 · Updated 6 years ago
- Solidity contracts for the decentralized Prime Network protocol ☆26 · Jul 6, 2025 · Updated 8 months ago
- A framework for PyTorch to enable fault management for collective communication libraries (CCL) such as NCCL ☆20 · Feb 9, 2026 · Updated last month
- Federated reconnaissance mini-ImageNet benchmark and baseline models ☆13 · Sep 2, 2021 · Updated 4 years ago
- Large scale graph learning on a single machine. ☆167 · Feb 25, 2025 · Updated last year
- Integrated Training Platform (ITP) traces used in ElasticFlow paper. ☆31 · Dec 23, 2022 · Updated 3 years ago
- Recycling diverse models ☆46 · Jan 18, 2023 · Updated 3 years ago
- A Japanese dependency parser based on BERT ☆23 · Oct 26, 2022 · Updated 3 years ago
- Sample, estimate, aggregate: A recipe for causal discovery foundation models ☆17 · Jun 21, 2024 · Updated last year
- Training a model similar to OpenAI DALL-E with volunteers from all over the Internet using hivemind and dalle-pytorch (NeurIPS 2021 demo) ☆27 · May 29, 2023 · Updated 2 years ago
- Some CS notes during Jiawei's undergrad. ☆33 · Jan 6, 2022 · Updated 4 years ago
- Distributed Communication-Optimal Shuffle and Transpose Algorithm ☆14 · Feb 20, 2026 · Updated last month
- [ASPLOS'25] Towards End-to-End Optimization of LLM-based Applications with Ayo ☆65 · Mar 11, 2026 · Updated last week
- Code Release for "Broken Neural Scaling Laws" (BNSL) paper ☆59 · Oct 29, 2023 · Updated 2 years ago
- ☆77 · Apr 29, 2024 · Updated last year
- Artifact for "Shockwave: Fair and Efficient Cluster Scheduling for Dynamic Adaptation in Machine Learning" [NSDI '23] ☆47 · Nov 24, 2022 · Updated 3 years ago
- This repository contains all codes for the VLDB 2022 paper "Dynamic Spanning Trees for Connectivity Queries on Fully-dynamic Undirected G… ☆12 · Feb 24, 2025 · Updated last year
- Surrogate-based Hyperparameter Tuning System ☆29 · Jun 29, 2023 · Updated 2 years ago
- Automatically Discovering Fast Parallelization Strategies for Distributed Deep Neural Network Training ☆1,864 · Updated this week
- Swan Benchmark Suite ☆13 · Sep 17, 2025 · Updated 6 months ago
- Smart Contract tools to help streamline Ethereum dapp development and deployment ☆13 · Oct 16, 2017 · Updated 8 years ago
- Distributed Communication-Optimal LU-factorization Algorithm ☆12 · Aug 1, 2021 · Updated 4 years ago
- Proof of concept, using Sysdig metrics as the decision variable for a Kubernetes scheduler ☆14 · Nov 3, 2017 · Updated 8 years ago
- This technique modifies image data so that any model trained on it will bear an identifiable mark. ☆44 · Aug 13, 2021 · Updated 4 years ago
- MAP: Low-compute Model Merging with Amortized Pareto Fronts via Quadratic Approximation ☆14 · Sep 2, 2024 · Updated last year
- "Towards Crowdsourced Training of Large Neural Networks using Decentralized Mixture-of-Experts" (NeurIPS 2020), original PyTorch implemen… ☆56 · Nov 5, 2020 · Updated 5 years ago