Azure / Moneo
Distributed AI/HPC Monitoring Framework
☆29 · Updated 9 months ago
Alternatives and similar repositories for Moneo
Users interested in Moneo are comparing it to the libraries listed below.
- Accepted to MLSys 2026 ☆70 · Updated last week
- NCCL Profiling Kit ☆150 · Updated last year
- Microsoft Collective Communication Library ☆66 · Updated last year
- Aims to implement dual-port and multi-QP solutions in the DeepEP IBRC transport ☆73 · Updated 8 months ago
- DISB is a new DNN inference serving benchmark with diverse workloads and models, as well as real-world traces. ☆58 · Updated last year
- Offline optimization of your disaggregated Dynamo graph ☆177 · Updated last week
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling ☆67 · Updated last year
- A GPU-driven system framework for scalable AI applications ☆124 · Updated last year
- Issues related to MLPerf® Inference policies, including rules and suggested changes ☆63 · Updated this week
- RDMA and SHARP plugins for the NCCL library ☆221 · Updated 3 weeks ago
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer ☆161 · Updated 4 months ago
- NCCL Fast Socket is a transport-layer plugin to improve NCCL collective communication performance on Google Cloud. ☆122 · Updated 2 years ago
- An experimental parallel training platform ☆56 · Updated last year
- Multi-Instance GPU profiling tool ☆58 · Updated 2 years ago
- PArametrized Recommendation and Ai Model benchmark is a repository for development of numerous uBenchmarks as well as end-to-end nets for… ☆156 · Updated this week
- A TUI-based utility for real-time monitoring of InfiniBand traffic and performance metrics on the local node ☆63 · Updated last month
- DLSlime: Flexible & Efficient Heterogeneous Transfer Toolkit ☆92 · Updated last week
- Artifact of the OSDI '24 paper “Llumnix: Dynamic Scheduling for Large Language Model Serving” ☆64 · Updated last year
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆123 · Updated last month
- Thunder Research Group's Collective Communication Library ☆47 · Updated 7 months ago
- An IR for efficiently simulating distributed ML computation. ☆32 · Updated 2 years ago
- A validation and profiling tool for AI infrastructure ☆360 · Updated this week
- A NCCL extension library designed to efficiently offload GPU memory allocated by the NCCL communication library. ☆90 · Updated last month
- [DEPRECATED] Moved to ROCm/rocm-systems repo ☆86 · Updated 2 weeks ago
- NVIDIA NVSHMEM is a parallel programming interface for NVIDIA GPUs based on OpenSHMEM. NVSHMEM can significantly reduce multi-process com… (see the sketch after this list) ☆461 · Updated last month
- Systematic and comprehensive benchmarks for LLM systems. ☆50 · Updated last week
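
The NVSHMEM entry above describes one-sided, GPU-initiated communication over symmetric memory. As a rough, hedged illustration of that programming model (adapted from the well-known ring-shift pattern, not taken from any repository in this list), the sketch below has every PE write its rank into the next PE's symmetric buffer from inside a kernel and then read back what it received.

```cuda
// Minimal NVSHMEM ring-shift sketch (illustrative, not from any repo above).
// Each PE performs a one-sided put of its rank into the symmetric buffer
// of its right neighbour, then reads the value it received.
#include <stdio.h>
#include <cuda_runtime.h>
#include <nvshmem.h>
#include <nvshmemx.h>

__global__ void ring_shift(int *destination) {
    int mype = nvshmem_my_pe();              // this PE's rank
    int npes = nvshmem_n_pes();              // total number of PEs
    int peer = (mype + 1) % npes;            // right neighbour in the ring
    nvshmem_int_p(destination, mype, peer);  // one-sided put of a single int
}

int main(void) {
    nvshmem_init();                                          // join the NVSHMEM job
    int mype_node = nvshmem_team_my_pe(NVSHMEMX_TEAM_NODE);  // rank within this node
    cudaSetDevice(mype_node);                                // one GPU per PE

    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // Symmetric allocation: a buffer of the same size exists on every PE.
    int *destination = (int *)nvshmem_malloc(sizeof(int));

    ring_shift<<<1, 1, 0, stream>>>(destination);
    nvshmemx_barrier_all_on_stream(stream);  // wait until all puts have landed

    int msg;
    cudaMemcpyAsync(&msg, destination, sizeof(int), cudaMemcpyDeviceToHost, stream);
    cudaStreamSynchronize(stream);
    printf("PE %d received %d\n", nvshmem_my_pe(), msg);

    nvshmem_free(destination);
    nvshmem_finalize();
    return 0;
}
```

Building and launching details depend on the installation: typically nvcc with relocatable device code and the NVSHMEM host/device libraries, launched with one process per GPU via nvshmrun or an MPI launcher.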