uccl-project / uccl
UCCL is an efficient communication library for GPUs, covering collectives, P2P (e.g., KV cache transfer, RL weight transfer), and expert parallelism (EP, e.g., GPU-driven communication).
☆1,116 · Updated this week
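To ground what a GPU collective is in practice, here is a minimal sketch using PyTorch's torch.distributed with the NCCL backend; UCCL's own API is not shown on this page, so this is not UCCL code, just the kind of all-reduce operation that communication libraries in this space accelerate. The script name and torchrun launch layout are assumptions of the example.

```python
# Illustrative sketch only: this uses PyTorch's torch.distributed API, not UCCL's.
# It shows the kind of GPU collective (an NCCL-backed all-reduce) that
# communication libraries like UCCL accelerate.
import os

import torch
import torch.distributed as dist


def main():
    # Assumes launch via: torchrun --nproc_per_node=<num_gpus> allreduce_demo.py
    # torchrun sets RANK, WORLD_SIZE, LOCAL_RANK, and MASTER_ADDR/PORT for us.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Each rank contributes its own tensor; all_reduce sums them in place,
    # leaving every GPU holding the identical result.
    x = torch.full((4,), float(dist.get_rank()), device="cuda")
    dist.all_reduce(x, op=dist.ReduceOp.SUM)
    print(f"rank {dist.get_rank()}: {x.tolist()}")

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

With N ranks, every GPU ends up with 0 + 1 + ... + (N - 1) in each element, which is the sum-all-reduce semantics such libraries implement.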
Alternatives and similar repositories for uccl
Users interested in uccl are comparing it to the libraries listed below.
- Extending eBPF Programmability and Observability to GPUs (merged into https://github.com/eunomia-bpf/bpftime). ☆274 · Updated 2 weeks ago
- ☆759 · Updated last month
- A highly optimized LLM inference acceleration engine for Llama and its variants. ☆904 · Updated 5 months ago
- Unified KV Cache Compression Methods for Auto-Regressive Models. ☆1,288 · Updated 11 months ago
- TVM Documentation in Simplified Chinese / TVM 中文文档. ☆2,807 · Updated 3 weeks ago
- A data-movement-aware compiler for CXL remote offloading. ☆70 · Updated last week
- An acceleration library that supports arbitrary bit-width combinatorial quantization operations. ☆238 · Updated last year
- ☆937 · Updated this week
- [NeurIPS 2025] R-KV: Redundancy-aware KV Cache Compression for Reasoning Models. ☆1,157 · Updated last month
- Expert Kit is an efficient foundation for Expert Parallelism (EP) for MoE model inference on heterogeneous hardware. ☆60 · Updated last month
- ☆135 · Updated 4 months ago
- [ICLR 2025🔥] SVD-LLM & [NAACL 2025🔥] SVD-LLM V2. ☆269 · Updated 3 months ago
- FlagPerf is an open-source software platform for benchmarking AI chips. ☆353 · Updated last month
- PTX on XPUs. ☆110 · Updated last month
- A distributed framework for LLM agents. ☆289 · Updated this week
- Some Hardware Architectures for GEMM. ☆282 · Updated 6 months ago
- MIXQ: Taming Dynamic Outliers in Mixed-Precision Quantization by Online Prediction. ☆94 · Updated last year
- [ICML 2025 Spotlight] ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference. ☆275 · Updated 7 months ago
- Heterogeneous Containerization of Large Language Model Apps. ☆107 · Updated 4 months ago
- GLake: optimizing GPU memory management and IO transmission. ☆491 · Updated 8 months ago
- ☆328 · Updated last month
- Distributed Compiler based on Triton for Parallel Systems. ☆1,269 · Updated this week
- Efficient and easy multi-instance LLM serving. ☆517 · Updated 3 months ago
- ☆103 · Updated 5 years ago
- NVIDIA Inference Xfer Library (NIXL). ☆753 · Updated this week
- [NeurIPS'25] KVCOMM: Online Cross-context KV-cache Communication for Efficient LLM-based Multi-agent Systems. ☆102 · Updated last month
- Perplexity GPU Kernels. ☆536 · Updated last month
- [COLM 2024] TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding. ☆273 · Updated last year
- Disaggregated serving system for Large Language Models (LLMs). ☆749 · Updated 8 months ago
- Byted PyTorch Distributed for Hyperscale Training of LLMs and RL. ☆896 · Updated 2 weeks ago