ryantd / veloce
WIP. Veloce is a low-code, Ray-based parallelization library for efficient, heterogeneous machine learning computation.
☆18 · Updated 2 years ago
Alternatives and similar repositories for veloce
Users interested in veloce are comparing it to the repositories listed below.
- A Ray-based data loader with per-epoch shuffling and configurable pipelining, for shuffling and loading training data for distributed tra… ☆18 · Updated 2 years ago
- Some microbenchmarks and design docs before commencement ☆12 · Updated 4 years ago
- Distributed ML Optimizer ☆32 · Updated 3 years ago
- A collection of reproducible inference engine benchmarks ☆31 · Updated last month
- Ray-based Apache Beam runner ☆42 · Updated last year
- Benchmark for machine learning model online serving (LLM, embedding, Stable-Diffusion, Whisper) ☆28 · Updated last year
- An IR for efficiently simulating distributed ML computation. ☆28 · Updated last year
- Deadline-based hyperparameter tuning on RayTune. ☆31 · Updated 5 years ago
- Lightning In-Memory Object Store ☆46 · Updated 3 years ago
- Fairring (FAIR + Herring) is a plug-in for PyTorch that provides a process group for distributed training that outperforms NCCL at large … ☆65 · Updated 3 years ago
- Home for OctoML PyTorch Profiler ☆113 · Updated 2 years ago
- DL Dataloader Benchmarks ☆18 · Updated 4 months ago
- MLPerf™ logging library ☆36 · Updated last month
- ☆15 · Updated last year
- Benchmarks to capture important workloads. ☆31 · Updated 4 months ago
- ☆71 · Updated 2 months ago
- Python bindings for UCX ☆135 · Updated this week
- Repository to go along with the paper "Plumber: Diagnosing and Removing Performance Bottlenecks in Machine Learning Data Pipelines" ☆10 · Updated 3 years ago
- ☆17 · Updated last year
- Tracking Ray Enhancement Proposals ☆54 · Updated last month
- TensorRT LLM Benchmark Configuration ☆13 · Updated 10 months ago
- A lightweight, user-friendly data-plane for LLM training. ☆16 · Updated last month
- ☆27 · Updated last month
- ☆11 · Updated 4 years ago
- The DGL Operator makes it easy to run Deep Graph Library (DGL) graph neural network training on Kubernetes ☆44 · Updated 3 years ago
- JaxPP is a library for JAX that enables flexible MPMD pipeline parallelism for large-scale LLM training ☆45 · Updated last week
- PyTorch code examples for measuring the performance of collective communication calls in AI workloads ☆18 · Updated 7 months ago
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind… ☆157 · Updated 5 months ago
- ☆43 · Updated last year
- ☆53 · Updated last year