AIoT-MLSys-Lab / SVD-LLM
[ICLR 2025 🔥] SVD-LLM & [NAACL 2025 🔥] SVD-LLM V2
★275 · Updated 5 months ago
Alternatives and similar repositories for SVD-LLM
Users interested in SVD-LLM are comparing it to the libraries listed below.
- An acceleration library that supports arbitrary bit-width combinatorial quantization operations ★240 · Updated last year
- Unified KV Cache Compression Methods for Auto-Regressive Models ★1,298 · Updated last year
- [COLM 2024] TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding ★276 · Updated last year
- [ICML 2025 Spotlight] ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference ★280 · Updated 8 months ago
- MIXQ: Taming Dynamic Outliers in Mixed-Precision Quantization by Online Prediction ★94 · Updated last year
- APOLLO: SGD-like Memory, AdamW-level Performance; MLSys'25 Outstanding Paper Honorable Mention ★267 · Updated 2 months ago
- [ICML 2025] "SepLLM: Accelerate Large Language Models by Compressing One Segment into One Separator"β560Updated 6 months ago
- [Arxiv] Discrete Diffusion in Large Language and Multimodal Models: A Surveyβ356Updated 2 months ago
- The framework to prune LLMs to any size and any config.β94Updated last year
- Codebase for Iterative DPO Using Rule-based Rewardsβ267Updated 9 months ago
- A scalable, end-to-end training pipeline for general-purpose agentsβ362Updated 6 months ago
- [NeurIPS2024] Twin-Merging: Dynamic Integration of Modular Expertise in Model Mergingβ140Updated 10 months ago
- SDAR (Synergy of Diffusion and AutoRegression), a large diffusion language modelοΌ1.7B, 4B, 8B, 30BοΌβ327Updated last month
- [Neurips 2025] R-KV: Redundancy-aware KV Cache Compression for Reasoning Modelsβ1,170Updated 3 months ago
- Support mixed-precision inference with vLLM ★84 · Updated 6 months ago
- ★1,089 · Updated last week
- ✨ A synthetic dataset generation framework that produces diverse coding questions and verifiable solutions - all in one framework ★308 · Updated 4 months ago
- ★140 · Updated 6 months ago
- [NeurIPS 2024] BAdam: A Memory Efficient Full Parameter Optimization Method for Large Language Models ★284 · Updated 10 months ago
- An all-in-one repository of awesome LLM pruning papers, integrating useful resources and insights. ★146 · Updated 5 months ago
- The Official Implementation of Ada-KV [NeurIPS 2025] ★125 · Updated 2 months ago
- A toolkit for fine-tuning, inferencing, and evaluating GreenBitAI's LLMs. ★187 · Updated 6 months ago
- [ICML 2025] Make LoRA Great Again: Boosting LoRA with Adaptive Singular Values and Mixture-of-Experts Optimization Alignment ★139 · Updated 2 months ago
- D^2-MoE: Delta Decompression for MoE-based LLMs Compression ★72 · Updated 10 months ago
- ★332 · Updated 5 months ago
- ★49 · Updated last year
- [NeurIPS 2025 🔥] Main source code of the SRPO framework. ★186 · Updated 2 months ago
- ★56 · Updated last year
- Awesome list for LLM pruning. ★280 · Updated 3 months ago
- Open source code for the ICLR 2026 paper: Evaluating Memory in LLM Agents via Incremental Multi-Turn Interactions ★207 · Updated this week