GuanhuaWang / sensAI
sensAI: ConvNets Decomposition via Class Parallelism for Fast Inference on Live Data
☆64 · Updated last month
Related projects:
- PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-scale Models. ICML 2021 ☆54 · Updated 3 years ago
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters. ☆33 · Updated last year
- FTPipe and related pipeline model parallelism research ☆41 · Updated last year
- Model-less Inference Serving ☆78 · Updated 10 months ago
- Fairring (FAIR + Herring) is a plug-in for PyTorch that provides a process group for distributed training that outperforms NCCL at large … ☆61 · Updated 2 years ago
- Machine Learning System ☆14 · Updated 4 years ago
- ddl-benchmarks: Benchmarks for Distributed Deep Learning ☆37 · Updated 4 years ago
- PipeSwitch: Fast Pipelined Context Switching for Deep Learning Applications ☆124 · Updated 2 years ago
- BytePS examples (Vision, NLP, GAN, etc.) ☆19 · Updated last year
- Multi-Instance-GPU profiling tool ☆51 · Updated last year
- An Efficient Pipelined Data Parallel Approach for Training Large Models ☆69 · Updated 3 years ago
- Dual-way gradient sparsification approach for async DNN training, based on PyTorch. ☆11 · Updated last year
- Distributed ML Training Benchmarks ☆27 · Updated last year
- Code associated with the paper **Fine-tuning Language Models over Slow Networks using Activation Compression with Guarantees**. ☆24 · Updated last year
- [ICDCS 2023] DeAR: Accelerating Distributed Deep Learning with Fine-Grained All-Reduce Pipelining ☆12 · Updated 9 months ago
- Deadline-based hyperparameter tuning on Ray Tune. ☆31 · Updated 4 years ago
- GRACE - GRAdient ComprEssion for distributed deep learning ☆134 · Updated last month
- A high-performance distributed deep learning system targeting large-scale and automated distributed training. If you have any interests, … ☆101 · Updated 9 months ago
- 🔮 Execution time predictions for deep neural network training iterations across different GPUs. ☆55 · Updated last year
- Development repository for integrating FlexFlow (a distributed deep learning framework that supports flexible parallelization strategies)… ☆28 · Updated 2 years ago
- Implementation of Parameter Server using PyTorch communication lib (see the toy sketch after this list) ☆42 · Updated 5 years ago
- Memory footprint reduction for transformer models ☆11 · Updated last year
- Boost hardware utilization for ML training workloads via Inter-model Horizontal Fusion ☆32 · Updated 4 months ago
- AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving (OSDI 23) ☆76 · Updated last year
- Practical low-rank gradient compression for distributed optimization: https://arxiv.org/abs/1905.13727 (see the PowerSGD sketch after this list) ☆140 · Updated 2 weeks ago
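
For the parameter-server entry above, here is a minimal toy sketch of the pattern, not the linked repo's code: rank 0 acts as the server, every other rank pushes gradients and pulls the updated parameters through `torch.distributed` point-to-point ops. The `gloo` backend, tensor sizes, and random stand-in gradients are all illustrative assumptions.

```python
# Toy parameter server on torch.distributed (illustrative, not the repo's code).
# Launch with, e.g.: torchrun --nproc_per_node=3 ps_sketch.py
import torch
import torch.distributed as dist

def run(steps: int = 100, dim: int = 10, lr: float = 0.1):
    dist.init_process_group("gloo")           # CPU-friendly backend for the sketch
    rank, world = dist.get_rank(), dist.get_world_size()
    param = torch.zeros(dim)

    for _ in range(steps):
        if rank == 0:
            # Server: receive one gradient from each worker and apply it.
            for src in range(1, world):
                grad = torch.empty(dim)
                dist.recv(grad, src=src)
                param -= lr * grad / (world - 1)
        else:
            # Worker: stand-in gradient; a real worker would backprop a model.
            dist.send(torch.randn(dim), dst=0)
        dist.broadcast(param, src=0)           # everyone pulls fresh parameters

if __name__ == "__main__":
    run()
```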
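
And for the PowerSGD entry that closes the list: the same low-rank compression idea ships in stock PyTorch as a DDP communication hook, so enabling it can look roughly like the sketch below. This uses PyTorch's built-in `powerSGD_hook` rather than the linked repo's code, and the approximation rank and warm-up values are arbitrary illustrative choices.

```python
# Enable PowerSGD low-rank gradient compression via PyTorch's built-in DDP hook.
# Assumes launch via torchrun (which sets LOCAL_RANK) and an NCCL-capable GPU setup.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.distributed.algorithms.ddp_comm_hooks import powerSGD_hook as powerSGD

dist.init_process_group("nccl")
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

model = DDP(torch.nn.Linear(1024, 1024).cuda(), device_ids=[local_rank])

# Compress each gradient matrix to a rank-4 approximation; plain all-reduce
# is used for the first 10 iterations before compression kicks in.
state = powerSGD.PowerSGDState(
    process_group=None,            # default process group
    matrix_approximation_rank=4,   # illustrative value
    start_powerSGD_iter=10,        # illustrative warm-up
)
model.register_comm_hook(state, powerSGD.powerSGD_hook)
```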