Oneflow-Inc / dfccl
☆27 · Updated 7 months ago
Alternatives and similar repositories for dfccl
Users interested in dfccl are comparing it to the libraries listed below.
- ☆57 · Updated 4 months ago
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling ☆62 · Updated last year
- ☆83 · Updated 2 years ago
- Aims to implement dual-port and multi-QP solutions in the DeepEP ibrc transport ☆63 · Updated 4 months ago
- Thunder Research Group's Collective Communication Library ☆42 · Updated 2 months ago
- Tacker: Tensor-CUDA Core Kernel Fusion for Improving the GPU Utilization while Ensuring QoS ☆31 · Updated 7 months ago
- A lightweight design for computation-communication overlap ☆177 · Updated 2 weeks ago
- ☆46 · Updated 9 months ago
- DLSlime: Flexible & Efficient Heterogeneous Transfer Toolkit ☆66 · Updated last week
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer ☆132 · Updated 2 weeks ago
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation ☆111 · Updated 4 months ago
- A memory profiler for NVIDIA GPUs to explore memory inefficiencies in GPU-accelerated applications ☆25 · Updated 11 months ago
- Artifact of the OSDI '24 paper "Llumnix: Dynamic Scheduling for Large Language Model Serving" ☆62 · Updated last year
- DISB is a new DNN inference serving benchmark with diverse workloads and models, as well as real-world traces ☆54 · Updated last year
- SOTA learning-augmented systems ☆37 · Updated 3 years ago
- A GPU-accelerated DNN inference serving system that supports instant kernel preemption and biased concurrent execution in GPU scheduling ☆43 · Updated 3 years ago
- ☆85 · Updated 6 months ago
- Microsoft Collective Communication Library ☆66 · Updated 10 months ago
- TiledLower is a dataflow analysis and codegen framework written in Rust ☆14 · Updated 10 months ago
- REEF is a GPU-accelerated DNN inference serving system that enables instant kernel preemption and biased concurrent execution in GPU scheduling ☆100 · Updated 2 years ago
- A tool for examining GPU scheduling behavior ☆88 · Updated last year
- FlexFlow Serve: Low-Latency, High-Performance LLM Serving ☆62 · Updated 3 weeks ago
- Sample codes using NVSHMEM on multi-GPU ☆29 · Updated 2 years ago
- Artifact of the ASPLOS '23 paper "GRACE: A Scalable Graph-Based Approach to Accelerating Recommendation Model Inference" ☆19 · Updated 2 years ago
- gLLM: Global Balanced Pipeline Parallelism System for Distributed LLM Serving with Token Throttling ☆41 · Updated last week
- High-performance RDMA-based distributed feature collection component for training GNN models on extremely large graphs ☆55 · Updated 3 years ago
- NCCL Profiling Kit ☆145 · Updated last year
- ASPLOS '24: Optimal Kernel Orchestration for Tensor Programs with Korch ☆38 · Updated 6 months ago
- Tile-based language built for AI computation across all scales ☆61 · Updated last week
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing ☆97 · Updated 3 months ago