perplexityai / pplx-garden
Perplexity open source garden for inference technology
☆274 · Updated last week
Alternatives and similar repositories for pplx-garden
Users interested in pplx-garden are comparing it to the libraries listed below.
- torchcomms: a modern PyTorch communications API ☆295 · Updated this week
- Perplexity GPU Kernels ☆531 · Updated 3 weeks ago
- NVIDIA NVSHMEM is a parallel programming interface for NVIDIA GPUs based on OpenSHMEM. NVSHMEM can significantly reduce multi-process com… ☆393 · Updated 2 weeks ago
- ☆79 · Updated last month
- An early research stage MoE load balancer based on linear programming. ☆415 · Updated 2 weeks ago
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… ☆234 · Updated this week
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆321 · Updated last week
- ☆324 · Updated 2 weeks ago
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆444 · Updated 6 months ago
- NVIDIA Inference Xfer Library (NIXL) ☆740 · Updated this week
- Microsoft Collective Communication Library ☆66 · Updated last year
- FlexFlow Serve: Low-Latency, High-Performance LLM Serving ☆63 · Updated 2 months ago
- A low-latency & high-throughput serving engine for LLMs ☆450 · Updated last month
- A lightweight design for computation-communication overlap. ☆188 · Updated last month
- Virtualized Elastic KV Cache for Dynamic GPU Sharing and Beyond ☆674 · Updated 3 weeks ago
- Stateful LLM Serving ☆89 · Updated 8 months ago
- AMD RAD's multi-GPU Triton-based framework for seamless multi-GPU programming ☆119 · Updated last week
- A large-scale simulation framework for LLM inference ☆488 · Updated 4 months ago
- [OSDI'24] Serving LLM-based Applications Efficiently with Semantic Variable ☆199 · Updated last year
- SpotServe: Serving Generative Large Language Models on Preemptible Instances ☆132 · Updated last year
- ☆147 · Updated 11 months ago
- High performance Transformer implementation in C++. ☆142 · Updated 10 months ago
- MSCCL++: A GPU-driven communication stack for scalable AI applications ☆437 · Updated last week
- Efficient and easy multi-instance LLM serving ☆512 · Updated 3 months ago
- PyTorch library for cost-effective, fast and easy serving of MoE models. ☆262 · Updated last month
- AI Tensor Engine for ROCm ☆309 · Updated this week
- Tile-Based Runtime for Ultra-Low-Latency LLM Inference ☆261 · Updated last week
- ☆72 · Updated 10 months ago
- LLM Serving Performance Evaluation Harness ☆81 · Updated 9 months ago
- ☆48 · Updated last year