AIoT-MLSys-Lab / SVD-LLM
[ICLR 2025 🔥] SVD-LLM & [NAACL 2025 🔥] SVD-LLM V2
⭐201 · Updated last month
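SVD-LLM compresses LLM weight matrices through low-rank factorization. As a rough illustration of the underlying idea, here is a minimal PyTorch sketch that factorizes a single linear layer with plain truncated SVD; this is not the SVD-LLM algorithm itself (which adds data-aware refinements on top of the basic decomposition), and the helper name `lowrank_factorize` is purely illustrative.

```python
# Minimal sketch: compress a single nn.Linear with plain truncated SVD.
# NOT the SVD-LLM method itself; it only shows the generic low-rank
# factorization that SVD-based LLM compression builds on.
import torch
import torch.nn as nn

def lowrank_factorize(linear: nn.Linear, rank: int) -> nn.Sequential:
    """Replace W (out x in) with U_r @ Vh_r, i.e. two smaller linear layers."""
    W = linear.weight.data                          # shape: (out_features, in_features)
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * S[:rank]                    # fold singular values into U
    Vh_r = Vh[:rank, :]

    down = nn.Linear(linear.in_features, rank, bias=False)                 # x -> Vh_r x
    up = nn.Linear(rank, linear.out_features, bias=linear.bias is not None)  # -> U_r (...)
    down.weight.data.copy_(Vh_r)
    up.weight.data.copy_(U_r)
    if linear.bias is not None:
        up.bias.data.copy_(linear.bias.data)
    return nn.Sequential(down, up)

# Usage: compress a 4096x4096 projection to rank 512 (4x fewer weight parameters).
layer = nn.Linear(4096, 4096)
compressed = lowrank_factorize(layer, rank=512)
x = torch.randn(2, 4096)
print(torch.norm(layer(x) - compressed(x)))         # error introduced by truncation
```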
Alternatives and similar repositories for SVD-LLM:
Users that are interested in SVD-LLM are comparing it to the libraries listed below
- An acceleration library that supports arbitrary bit-width combinatorial quantization operations · ⭐223 · Updated 7 months ago
- Unified KV Cache Compression Methods for Auto-Regressive Models · ⭐1,039 · Updated 4 months ago
- [COLM 2024] TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding · ⭐248 · Updated 8 months ago
- The framework to prune LLMs to any size and any config. · ⭐92 · Updated last year
- Support mixed-precision inference with vLLM · ⭐83 · Updated 3 months ago
- [ICML 2025 Spotlight] ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference · ⭐171 · Updated last week
- [NeurIPS 2024] Twin-Merging: Dynamic Integration of Modular Expertise in Model Merging · ⭐134 · Updated last month
- APOLLO: SGD-like Memory, AdamW-level Performance · ⭐212 · Updated last week
- MIXQ: Taming Dynamic Outliers in Mixed-Precision Quantization by Online Prediction · ⭐88 · Updated 6 months ago
- Mixed-precision inference with TensorRT-LLM · ⭐79 · Updated 6 months ago
- Codebase for Iterative DPO Using Rule-based Rewards · ⭐243 · Updated 3 weeks ago
- [ICLR 2025] BitStack: Any-Size Compression of Large Language Models in Variable Memory Environments · ⭐36 · Updated 2 months ago
- ✨ A synthetic dataset generation framework that produces diverse coding questions and verifiable solutions - all in one framework · ⭐205 · Updated last month
- R1-like Computer-use Agent · ⭐67 · Updated last month
- [ECCV 2024] Efficient Inference of Vision Instruction-Following Models with Elastic Cache · ⭐42 · Updated 9 months ago
- Adds Sequence Parallelism into LLaMA-Factory · ⭐471 · Updated last week
- ⭐106 · Updated 4 years ago
- MPLSandbox is an out-of-the-box multi-programming language sandbox designed to provide unified and comprehensive feedback from compiler a… · ⭐176 · Updated 3 weeks ago
- ⭐45 · Updated last month
- Mulberry, an o1-like Reasoning and Reflection MLLM Implemented via Collective MCTS · ⭐1,173 · Updated last month
- ⭐155 · Updated 2 weeks ago
- The official implementation of Self-Play Preference Optimization (SPPO) · ⭐545 · Updated 3 months ago
- SurveyForge: On the Outline Heuristics, Memory-Driven Generation, and Multi-dimensional Evaluation for Automated Survey Writing · ⭐139 · Updated last month
- Explore concepts like Self-Correct, Self-Refine, Self-Improve, Self-Contradict, Self-Play, and Self-Knowledge, alongside o1-like reasonin… · ⭐167 · Updated 5 months ago
- The official implementation of MARS: Unleashing the Power of Variance Reduction for Training Large Models · ⭐631 · Updated last week
- ⭐57 · Updated last month
- Official implementation for Yuan & Liu & Zhong et al., KV Cache Compression, But What Must We Give in Return? A Comprehensive Benchmark o… · ⭐77 · Updated 2 months ago
- Reverse Chain-of-Thought Problem Generation for Geometric Reasoning in Large Multimodal Models · ⭐174 · Updated 6 months ago
- PyTorch implementation of our paper accepted by ICML 2024 -- CaM: Cache Merging for Memory-efficient LLMs Inference · ⭐37 · Updated 10 months ago
- Awesome LLM pruning papers, an all-in-one repository integrating all useful resources and insights. · ⭐85 · Updated 5 months ago