hao-ai-lab / cse234-w25-PA
☆44 · Updated 9 months ago
Alternatives and similar repositories for cse234-w25-PA
Users interested in cse234-w25-PA are comparing it to the libraries listed below.
- Accelerating MoE with IO and Tile-aware Optimizations ☆351 · Updated this week
- ring-attention experiments ☆160 · Updated last year
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆172 · Updated last year
- Cataloging released Triton kernels. ☆277 · Updated 3 months ago
- [ICLR 2025] Palu: Compressing KV-Cache with Low-Rank Projection ☆149 · Updated 10 months ago
- a minimal cache manager for PagedAttention, on top of llama3. ☆127 · Updated last year
- Code for paper: [ICLR2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference ☆158 · Updated 2 months ago
- JAX backend for SGL ☆200 · Updated this week
- [ICLR2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆135 · Updated last year
- 16-fold memory access reduction with nearly no loss ☆109 · Updated 8 months ago
- ☆227 · Updated 11 months ago
- [ICLR2025 Spotlight] MagicPIG: LSH Sampling for Efficient LLM Generation ☆244 · Updated last year
- 🔥 LLM-powered GPU kernel synthesis: Train models to convert PyTorch ops into optimized Triton kernels via SFT+RL. Multi-turn compilation… ☆107 · Updated last month
- ☆133 · Updated 6 months ago
- Hydragen: High-Throughput LLM Inference with Shared Prefixes ☆45 · Updated last year
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference ☆357 · Updated 5 months ago
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning ☆147 · Updated last month
- Autonomous GPU Kernel Generation via Deep Agents ☆187 · Updated this week
- PyTorch bindings for CUTLASS grouped GEMM. ☆135 · Updated 6 months ago
- Student version of Assignment 2 for Stanford CS336 - Language Modeling From Scratch ☆137 · Updated 4 months ago
- Distributed MoE in a Single Kernel [NeurIPS '25] ☆155 · Updated this week
- The evaluation framework for training-free sparse attention in LLMs ☆106 · Updated 2 months ago
- Learn CUDA with PyTorch ☆124 · Updated 3 weeks ago
- [NeurIPS 2024] Efficient LLM Scheduling by Learning to Rank ☆66 · Updated last year
- KV cache compression for high-throughput LLM inference ☆148 · Updated 10 months ago
- ☆263 · Updated this week
- Collection of kernels written in Triton language ☆172 · Updated 8 months ago
- Efficient LLM Inference over Long Sequences ☆393 · Updated 5 months ago
- Estimate MFU for DeepSeekV3 ☆26 · Updated 11 months ago
- A minimal implementation of vllm. ☆63 · Updated last year