jaywonchung / dotfiles
Dotfile management with bare git
☆19, updated 3 weeks ago
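The repo's one-line description names the well-known bare-repository technique for versioning dotfiles: a bare git repo lives in a hidden directory while `$HOME` itself serves as the work tree. A minimal sketch of that pattern follows; the `dot` wrapper name, the temp directory stand-in for `$HOME`, and the sample `.bashrc` are illustrative assumptions, not details taken from this repo.

```shell
#!/bin/sh
# Sketch of the bare-repo dotfiles pattern (illustrative; this repo's
# exact setup may differ). A temp dir stands in for $HOME so the sketch
# is safe to run anywhere.
HOME_DIR="$(mktemp -d)"
git init --bare "$HOME_DIR/.dotfiles" >/dev/null

# Wrapper: run git against the bare repo, with $HOME_DIR as the work tree.
# In real use this is typically a shell alias in .bashrc/.zshrc.
dot() { git --git-dir="$HOME_DIR/.dotfiles" --work-tree="$HOME_DIR" "$@"; }

# Hide everything not explicitly tracked, so `dot status` stays readable.
dot config status.showUntrackedFiles no

echo 'set -o vi' > "$HOME_DIR/.bashrc"   # a sample dotfile to track
dot add "$HOME_DIR/.bashrc"
dot -c user.email=you@example.com -c user.name=You \
    commit -m "track .bashrc" >/dev/null
dot ls-files                              # prints: .bashrc
```

The payoff of this approach is that dotfiles stay in their normal locations (no symlink farm), and only files you explicitly `add` are versioned.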
Alternatives and similar repositories for dotfiles:
Users interested in dotfiles are comparing it to the repositories listed below.
- Welcome to PeriFlow CLI ☁︎ (☆12, updated last year)
- ☆101, updated last year
- ☆25, updated 2 years ago
- ☆46, updated 8 months ago
- Know Your Enemy To Save Cloud Energy: Energy-Performance Characterization of Machine Learning Serving (HPCA '23) (☆13, updated 4 months ago)
- ☆15, updated 3 years ago
- [ACM EuroSys '23] Fast and Efficient Model Serving Using Multi-GPUs with Direct-Host-Access (☆56, updated last year)
- A resilient distributed training framework (☆95, updated last year)
- ☆48, updated 4 months ago
- FMO (Friendli Model Optimizer) (☆12, updated 4 months ago)
- FastFlow is a system that automatically detects CPU bottlenecks in deep learning training pipelines and resolves the bottlenecks with dat… (☆26, updated 2 years ago)
- Releasing the spot availability traces used in the "Can't Be Late" paper. (☆18, updated last year)
- Network Contention-Aware Cluster Scheduling with Reinforcement Learning (IEEE ICPADS 2023) (☆16, updated 6 months ago)
- ☆24, updated last year
- ☆12, updated last month
- ☆24, updated 6 years ago
- ☆66, updated last month
- Thunder Research Group's Collective Communication Library (☆36, updated last year)
- "JABAS: Joint Adaptive Batching and Automatic Scaling for DNN Training on Heterogeneous GPUs" (EuroSys '25) (☆13, updated last month)
- [ATC '24] Metis: Fast automatic distributed training on heterogeneous GPUs (https://www.usenix.org/conference/atc24/presentation/um) (☆25, updated 5 months ago)
- Model-less Inference Serving (☆88, updated last year)
- Adaptive Message Quantization and Parallelization for Distributed Full-graph GNN Training (☆23, updated last year)
- An Efficient Pipelined Data Parallel Approach for Training Large Model (☆76, updated 4 years ago)
- 🔮 Execution time predictions for deep neural network training iterations across different GPUs. (☆62, updated 2 years ago)
- Hi-Speed DNN Training with Espresso: Unleashing the Full Potential of Gradient Compression with Near-Optimal Usage Strategies (EuroSys '2… (☆15, updated last year)
- A ChatGPT (GPT-3.5) & GPT-4 Workload Trace to Optimize LLM Serving Systems (☆164, updated 6 months ago)
- Dynamic resource changes for multi-dimensional parallelism training (☆25, updated 5 months ago)
- Tiresias is a GPU cluster manager for distributed deep learning training. (☆153, updated 5 years ago)
- Lightweight and Parallel Deep Learning Framework (☆261, updated 2 years ago)
- Artifacts for our SIGCOMM '22 paper Muri (☆41, updated last year)