tile-ai / TileRT
Tile-Based Runtime for Ultra-Low-Latency LLM Inference
☆391 · Updated this week
Alternatives and similar repositories for TileRT
Users interested in TileRT are comparing it to the libraries listed below
- An early research stage expert-parallel load balancer for MoE models based on linear programming. ☆456 · Updated 3 weeks ago
- Allow torch tensor memory to be released and resumed later ☆184 · Updated last week
- Perplexity GPU Kernels ☆536 · Updated last month
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆345 · Updated this week
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… ☆236 · Updated 2 weeks ago
- torchcomms: a modern PyTorch communications API ☆302 · Updated this week
- NVIDIA NVSHMEM is a parallel programming interface for NVIDIA GPUs based on OpenSHMEM. NVSHMEM can significantly reduce multi-process com… ☆407 · Updated last month
- Autonomous GPU Kernel Generation via Deep Agents ☆179 · Updated this week
- A lightweight design for computation-communication overlap. ☆194 · Updated 2 months ago
- ☆97 · Updated 8 months ago
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆446 · Updated 6 months ago
- ☆328 · Updated last month
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer ☆146 · Updated 2 months ago
- Tile-based language built for AI computation across all scales ☆85 · Updated this week
- Perplexity open source garden for inference technology ☆295 · Updated this week
- ☆102 · Updated last year
- Open ABI and FFI for Machine Learning Systems ☆223 · Updated last week
- A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of … ☆294 · Updated 6 months ago
- ByteCheckpoint: A Unified Checkpointing Library for LFMs ☆256 · Updated this week
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆302 · Updated this week
- ☆79 · Updated last month
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving. ☆539 · Updated this week
- Checkpoint-engine is a simple middleware to update model weights in LLM inference engines ☆858 · Updated this week
- Efficient GPU support for LLM inference with x-bit quantization (e.g. FP6, FP5). ☆272 · Updated 4 months ago
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity ☆227 · Updated 2 years ago
- DeeperGEMM: crazy optimized version ☆73 · Updated 7 months ago
- PyTorch library for cost-effective, fast and easy serving of MoE models. ☆263 · Updated last month
- DLSlime: Flexible & Efficient Heterogeneous Transfer Toolkit ☆82 · Updated this week
- Utility scripts for PyTorch (e.g. Make Perfetto show some disappearing kernels, Memory profiler that understands more low-level allocatio… ☆72 · Updated 3 months ago
- kernels, of the mega variety ☆623 · Updated 2 months ago