perplexityai / pplx-garden
Perplexity open source garden for inference technology
☆324 · Updated 2 weeks ago
Alternatives and similar repositories for pplx-garden
Users interested in pplx-garden are comparing it to the libraries listed below.
- torchcomms: a modern PyTorch communications API ☆319 · Updated this week
- Perplexity GPU Kernels ☆552 · Updated 2 months ago
- NVIDIA NVSHMEM is a parallel programming interface for NVIDIA GPUs based on OpenSHMEM. NVSHMEM can significantly reduce multi-process com… ☆443 · Updated last week
- An early research stage expert-parallel load balancer for MoE models based on linear programming. ☆484 · Updated last month
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆368 · Updated last week
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… ☆251 · Updated this week
- Tile-Based Runtime for Ultra-Low-Latency LLM Inference ☆516 · Updated 3 weeks ago
- High-performance distributed data shuffling (all-to-all) library for MoE training and inference ☆102 · Updated last week
- NVIDIA Inference Xfer Library (NIXL) ☆801 · Updated this week
- ☆338 · Updated last week
- ☆81 · Updated 2 months ago
- Virtualized Elastic KV Cache for Dynamic GPU Sharing and Beyond ☆744 · Updated this week
- A low-latency & high-throughput serving engine for LLMs ☆464 · Updated this week
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆454 · Updated 7 months ago
- Microsoft Collective Communication Library ☆66 · Updated last year
- AMD RAD's multi-GPU Triton-based framework for seamless multi-GPU programming ☆148 · Updated this week
- Open Source Continuous Inference Benchmarking - GB200 NVL72 vs MI355X vs B200 vs H200 vs MI325X & soon™ TPUv6e/v7/Trainium2/3/GB300 NVL72… ☆413 · Updated this week
- Ship correct and fast LLM kernels to PyTorch ☆130 · Updated this week
- MSCCL++: A GPU-driven communication stack for scalable AI applications ☆451 · Updated this week
- JAX backend for SGL ☆211 · Updated this week
- Accelerating MoE with IO and Tile-aware Optimizations ☆522 · Updated last week
- KV cache store for distributed LLM inference ☆384 · Updated 2 months ago
- ☆73 · Updated last year
- A NCCL extension library, designed to efficiently offload GPU memory allocated by the NCCL communication library. ☆77 · Updated 3 weeks ago
- ☆72 · Updated 11 months ago
- AI Tensor Engine for ROCm ☆334 · Updated this week
- Helpful kernel tutorials and examples for tile-based GPU programming ☆554 · Updated this week
- ☆270 · Updated last week
- Fast low-bit matmul kernels in Triton ☆418 · Updated 3 weeks ago
- Open ABI and FFI for Machine Learning Systems ☆293 · Updated this week