AIoT-MLSys-Lab / SVD-LLM
[ICLR 2025🔥] SVD-LLM & [NAACL 2025🔥] SVD-LLM V2
⭐231 · Updated 3 months ago
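For orientation, the core operation that SVD-based LLM compression builds on can be sketched in plain PyTorch. This is a minimal, generic illustration assuming only standard truncated SVD of a weight matrix; the function name `low_rank_factorize` is hypothetical, and this is not the SVD-LLM repository's API or full method (which layers data-aware steps on top of plain truncation).

```python
# Illustrative sketch only: generic truncated-SVD compression of one weight
# matrix. NOT the SVD-LLM codebase's API; function and variable names are
# assumptions for demonstration.
import torch

def low_rank_factorize(weight: torch.Tensor, rank: int):
    """Factor an (out, in) weight matrix into two rank-limited matrices."""
    U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
    A = U[:, :rank] * S[:rank]   # (out, rank): left vectors scaled by singular values
    B = Vh[:rank, :]             # (rank, in): top right singular vectors
    return A, B

W = torch.randn(4096, 4096)
A, B = low_rank_factorize(W, rank=512)
rel_err = torch.linalg.norm(W - A @ B) / torch.linalg.norm(W)
print(f"relative reconstruction error: {rel_err:.3f}")
# At inference, y = x @ W.T becomes y = (x @ B.T) @ A.T,
# reducing parameters from out*in to rank*(out+in).
```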
Alternatives and similar repositories for SVD-LLM
Users that are interested in SVD-LLM are comparing it to the libraries listed below
- An acceleration library that supports arbitrary bit-width combinatorial quantization operations ⭐227 · Updated 9 months ago
- [COLM 2024] TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding ⭐258 · Updated 10 months ago
- Unified KV Cache Compression Methods for Auto-Regressive Models ⭐1,190 · Updated 6 months ago
- APOLLO: SGD-like Memory, AdamW-level Performance; MLSys'25 Outstanding Paper Honorable Mention ⭐241 · Updated 2 months ago
- [ICML 2025 Spotlight] ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference ⭐205 · Updated 2 months ago
- MIXQ: Taming Dynamic Outliers in Mixed-Precision Quantization by Online Prediction ⭐91 · Updated 8 months ago
- ⭐195 · Updated this week
- Codebase for Iterative DPO Using Rule-based Rewards ⭐252 · Updated 3 months ago
- The framework to prune LLMs to any size and any config. ⭐93 · Updated last year
- A scalable, end-to-end training pipeline for general-purpose agents ⭐258 · Updated last week
- [ICML 2025] "SepLLM: Accelerate Large Language Models by Compressing One Segment into One Separator" ⭐249 · Updated last week
- [NeurIPS 2024] Twin-Merging: Dynamic Integration of Modular Expertise in Model Merging ⭐136 · Updated 3 months ago
- ✨ A synthetic dataset generation framework that produces diverse coding questions and verifiable solutions - all in one framework ⭐238 · Updated last month
- [NeurIPS 2024] BAdam: A Memory Efficient Full Parameter Optimization Method for Large Language Models ⭐263 · Updated 4 months ago
- Supports mixed-precision inference with vLLM ⭐85 · Updated 6 months ago
- Adds Sequence Parallelism to LLaMA-Factory ⭐525 · Updated last week
- [ICLR 2025] BitStack: Any-Size Compression of Large Language Models in Variable Memory Environments ⭐36 · Updated 4 months ago
- The official implementation of Self-Play Preference Optimization (SPPO) ⭐569 · Updated 5 months ago
- ⭐216 · Updated 2 months ago
- A toolkit for fine-tuning, inferencing, and evaluating GreenBitAI's LLMs. ⭐185 · Updated last month
- Official code of "StreamBP: Memory-Efficient Exact Backpropagation for Long Sequence Training of LLMs". ⭐69 · Updated 3 weeks ago
- Mixed-precision inference with TensorRT-LLM ⭐80 · Updated 8 months ago
- R1-like Computer-use Agent ⭐77 · Updated 3 months ago
- D^2-MoE: Delta Decompression for MoE-based LLMs Compression ⭐61 · Updated 3 months ago
- Explore concepts like Self-Correct, Self-Refine, Self-Improve, Self-Contradict, Self-Play, and Self-Knowledge, alongside o1-like reasoning… ⭐169 · Updated 7 months ago
- Reverse Chain-of-Thought Problem Generation for Geometric Reasoning in Large Multimodal Models ⭐176 · Updated 8 months ago
- Recipes to train the self-rewarding reasoning LLMs. ⭐224 · Updated 4 months ago
- An all-in-one repository of awesome LLM pruning papers, integrating useful resources and insights. ⭐96 · Updated 7 months ago
- SQuant [ICLR 2022] ⭐131 · Updated 2 years ago
- A collection of token reduction (token pruning, merging, clustering, etc.) techniques for ML/AI ⭐110 · Updated last week