NoakLiu / GraphSnapShot
GraphSnapShot: Caching Local Structure for Fast Graph Learning [Efficient ML System]
☆40 · Updated 3 weeks ago
Alternatives and similar repositories for GraphSnapShot
Users interested in GraphSnapShot are comparing it to the libraries listed below.
- Accelerating Multitask Training Through Adaptive Transition [Efficient ML Model] ☆12 · Updated 4 months ago
- Adaptive Topology Reconstruction for Robust Graph Representation Learning [Efficient ML Model] ☆10 · Updated 8 months ago
- Efficient Foundation Model Design: A Perspective From Model and System Co-Design [Efficient ML System & Model] ☆25 · Updated 7 months ago
- Paper List of Inference/Test Time Scaling/Computing ☆313 · Updated last month
- [TMLR 2025] Efficient Reasoning Models: A Survey ☆271 · Updated this week
- Code for the paper "VTool-R1: VLMs Learn to Think with Images via Reinforcement Learning on Multimodal Tool Use" ☆131 · Updated 2 months ago
- Official PyTorch implementation of the paper "Accelerating Diffusion Large Language Models with SlowFast Sampling: The Three Golden Princ… ☆33 · Updated 3 months ago
- A Serving System for Distributed and Parallel LLM Quantization [Efficient ML System] ☆26 · Updated 4 months ago
- [NeurIPS 2024] The official implementation of ZipCache: Accurate and Efficient KV Cache Quantization with Salient Token Identification ☆29 · Updated 6 months ago
- [ICML'25] Our study systematically investigates massive values in LLMs' attention mechanisms. First, we observe massive values are concen… ☆79 · Updated 3 months ago
- [EMNLP 2024 Findings🔥] Official implementation of "LOOK-M: Look-Once Optimization in KV Cache for Efficient Multimodal Long-Context In… ☆103 · Updated 11 months ago
- [ICLR 2025] The official PyTorch implementation of "Dynamic-LLaVA: Efficient Multimodal Large Language Models via Dynamic Vision-language Cont… ☆56 · Updated last month
- [EMNLP 2025 main 🔥] Code for "Stop Looking for Important Tokens in Multimodal Language Models: Duplication Matters More" ☆79 · Updated this week
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models ☆135 · Updated 3 months ago
- Official PyTorch implementation of the paper "dLLM-Cache: Accelerating Diffusion Large Language Models with Adaptive Caching" (dLLM-Cache… ☆164 · Updated last month
- Paper list for Efficient Reasoning. ☆684 · Updated 3 weeks ago
- Code release for VTW (AAAI 2025 Oral) ☆50 · Updated 3 months ago
- ☆33 · Updated 2 weeks ago
- Official implementation of "Fast-dLLM: Training-free Acceleration of Diffusion LLM by Enabling KV Cache and Parallel Decoding" ☆553 · Updated last week
- PhyX: Does Your Model Have the "Wits" for Physical Reasoning? ☆46 · Updated this week
- Survey Paper List - Efficient LLM and Foundation Models ☆257 · Updated last year
- A paper list about Token Merge, Reduce, Resample, Drop for MLLMs. ☆71 · Updated 9 months ago
- Survey on Data-centric Large Language Models ☆86 · Updated last year
- ☆52 · Updated last week
- ☆171 · Updated 5 months ago
- 📚 Collection of token-level model compression resources. ☆173 · Updated last month
- ThinK: Thinner Key Cache by Query-Driven Pruning ☆24 · Updated 8 months ago
- Accepted LLM Papers in NeurIPS 2024 ☆37 · Updated last year
- Official code implementation for the ICLR 2025 accepted paper "Dobi-SVD: Differentiable SVD for LLM Compression and Some New Perspectives" ☆45 · Updated 2 weeks ago
- [CoLM'25] The official implementation of the paper <MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression> ☆147 · Updated 3 months ago