sigopt / sigoptlite
Optimize with SigOpt using this standalone client driver.
☆12 · Updated last week
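For context, sigoptlite emulates the SigOpt optimization API locally, so the usual suggest/observe loop can run without a remote service. Below is a minimal sketch, assuming the `LocalDriver` hook into `sigopt.Connection` and the Core API calls (`suggestions().create()`, `observations().create()`) described in the repo's README; the `evaluate` objective is a made-up placeholder.

```python
from sigopt import Connection
from sigoptlite import LocalDriver


def evaluate(assignments):
    # Placeholder objective: a simple quadratic maximized at x = 3.
    x = assignments["x"]
    return -(x - 3.0) ** 2


# Route all API calls to the in-process sigoptlite driver instead of a server.
conn = Connection(driver=LocalDriver)

experiment = conn.experiments().create(
    name="sigoptlite demo",
    parameters=[dict(name="x", type="double", bounds=dict(min=0.0, max=10.0))],
    metrics=[dict(name="objective", objective="maximize")],
    observation_budget=20,
)

# Classic SigOpt suggest/observe loop, run entirely locally.
for _ in range(experiment.observation_budget):
    suggestion = conn.experiments(experiment.id).suggestions().create()
    value = evaluate(suggestion.assignments)
    conn.experiments(experiment.id).observations().create(
        suggestion=suggestion.id,
        values=[dict(name="objective", value=value)],
    )

# Fetch the best-seen assignments (assuming sigoptlite supports this endpoint).
best = conn.experiments(experiment.id).best_assignments().fetch()
```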
Alternatives and similar repositories for sigoptlite
Users interested in sigoptlite are comparing it to the libraries listed below.
- Surrogate-based Hyperparameter Tuning System · ☆27 · Updated 2 years ago
- GRACE - GRAdient ComprEssion for distributed deep learning · ☆139 · Updated last year
- Code for "Solving Large-Scale Granular Resource Allocation Problems Efficiently with POP", which appeared at SOSP 2021 · ☆27 · Updated 3 years ago
- Practical low-rank gradient compression for distributed optimization: https://arxiv.org/abs/1905.13727 · ☆149 · Updated last year
- ☆14 · Updated 3 years ago
- A Cluster-Wide Model Manager to Accelerate DNN Training via Automated Training Warmup · ☆35 · Updated 2 years ago
- ☆101 · Updated last year
- This repository introduces the NICT Darknet Dataset 2022: https://csdataset.nict.go.jp/darknet-2022/ · ☆11 · Updated 2 years ago
- 🔮 Execution time predictions for deep neural network training iterations across different GPUs. · ☆62 · Updated 2 years ago
- PipeEdge: Pipeline Parallelism for Large-Scale Model Inference on Heterogeneous Edge Devices · ☆36 · Updated last year
- Boost hardware utilization for ML training workloads via Inter-model Horizontal Fusion · ☆32 · Updated last year
- Proteus: A High-Throughput Inference-Serving System with Accuracy Scaling · ☆13 · Updated last year
- [IJCAI 2023] An automated parallel training system that combines the advantages from both data and model parallelism. If you have any inte… · ☆52 · Updated 2 years ago
- RLScheduler: An Automated HPC Batch Job Scheduler Using Reinforcement Learning [SC'20] · ☆63 · Updated 2 years ago
- Model-less Inference Serving · ☆91 · Updated 2 years ago
- A deep learning-driven scheduler for elastic training in deep learning clusters · ☆31 · Updated 4 years ago
- ☆23 · Updated 3 years ago
- Official Repo for "SplitQuant / LLM-PQ: Resource-Efficient LLM Offline Serving on Heterogeneous GPUs via Phase-Aware Model Partition and … · ☆35 · Updated 2 months ago
- THC: Accelerating Distributed Deep Learning Using Tensor Homomorphic Compression · ☆20 · Updated last year
- Related code and datasets on NetBench: A Large-Scale and Comprehensive Network Traffic Benchmark Dataset for Foundation Models · ☆33 · Updated last year
- ☆41 · Updated 5 years ago
- Ok-Topk is a scheme for distributed training with sparse gradients. Ok-Topk integrates a novel sparse allreduce algorithm (less than 6k c… · ☆27 · Updated 2 years ago
- HeliosArtifact