NoakLiu / DRTR
Adaptive Topology Reconstruction for Robust Graph Representation Learning [Efficient ML Model]
☆10 · Updated 8 months ago
Alternatives and similar repositories for DRTR
Users interested in DRTR are comparing it to the repositories listed below.
- Accelerating Multitask Training Through Adaptive Transition [Efficient ML Model] ☆12 · Updated 4 months ago
- Efficient Foundation Model Design: A Perspective From Model and System Co-Design [Efficient ML System & Model] ☆25 · Updated 7 months ago
- GraphSnapShot: Caching Local Structure for Fast Graph Learning [Efficient ML System] ☆40 · Updated 3 weeks ago
- The Official Implementation of Ada-KV [NeurIPS 2025] ☆105 · Updated 3 weeks ago
- 📰 Must-read papers on KV Cache Compression (constantly updating 🤗). ☆559 · Updated 2 weeks ago
- This repository serves as a comprehensive survey of LLM development, featuring numerous research papers along with their corresponding co… ☆219 · Updated 2 months ago
- Awesome-LLM-KV-Cache: A curated list of 📙Awesome LLM KV Cache Papers with Codes. ☆376 · Updated 7 months ago
- [ICLR 2025] PEARL: Parallel Speculative Decoding with Adaptive Draft Length ☆118 · Updated 6 months ago
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference ☆338 · Updated 3 months ago
- SeerAttention: Learning Intrinsic Sparse Attention in Your LLMs ☆156 · Updated 3 weeks ago
- Code for paper: [ICLR 2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference ☆143 · Updated this week
- Official implementation for Yuan & Liu & Zhong et al., KV Cache Compression, But What Must We Give in Return? A Comprehensive Benchmark o… ☆86 · Updated 7 months ago
- Multi-Candidate Speculative Decoding ☆36 · Updated last year
- Implementations of several LLM KV cache sparsity methods ☆39 · Updated last year
- InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management (OSDI '24) ☆155 · Updated last year
- PoC for "SpecReason: Fast and Accurate Inference-Time Compute via Speculative Reasoning" [NeurIPS '25] ☆53 · Updated 2 weeks ago
- Code repo for efficient quantized MoE inference with a mixture of low-rank compensators ☆25 · Updated 6 months ago
- [SIGMOD 2025] PQCache: Product Quantization-based KVCache for Long Context LLM Inference ☆71 · Updated this week
- Galvatron is an automatic distributed training system designed for Transformer models, including Large Language Models (LLMs). If you hav… ☆23 · Updated 7 months ago
- [ICLR 2025🔥] D2O: Dynamic Discriminative Operations for Efficient Long-Context Inference of Large Language Models ☆23 · Updated 3 months ago
- ThinK: Thinner Key Cache by Query-Driven Pruning ☆24 · Updated 8 months ago