SalesforceAIResearch / ThinK
ThinK: Thinner Key Cache by Query-Driven Pruning
☆23 · Updated 6 months ago
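ThinK's headline idea is to prune the key cache along the channel (head-dimension) axis, scoring channels by how much they contribute to the query-key attention logits. The sketch below illustrates that idea in PyTorch; the function name `think_prune_key_cache`, the tensor shapes, and the exact per-channel scoring rule are illustrative assumptions, not the repository's actual API.

```python
import torch

def think_prune_key_cache(queries: torch.Tensor, keys: torch.Tensor, keep_ratio: float = 0.6):
    """Minimal sketch of query-driven channel pruning for one attention head.

    queries: (num_recent_queries, head_dim) observation window of query states
    keys:    (cache_len, head_dim) cached key states
    Returns the channel-pruned key cache and the kept channel indices.
    """
    # Assumed scoring rule: rank channel d by the Frobenius norm of its
    # rank-1 contribution Q[:, d] K[:, d]^T to the logits Q K^T, which
    # factorizes into ||Q[:, d]|| * ||K[:, d]||.
    channel_scores = queries.norm(dim=0) * keys.norm(dim=0)  # (head_dim,)

    num_keep = max(1, int(keep_ratio * keys.shape[-1]))
    kept = channel_scores.topk(num_keep).indices.sort().values
    return keys[:, kept], kept

# Toy usage: drop 40% of key-cache channels for a 128-dim head.
q = torch.randn(32, 128)
k_cache = torch.randn(1024, 128)
pruned_keys, kept_channels = think_prune_key_cache(q, k_cache)
print(pruned_keys.shape)  # torch.Size([1024, 76])
```

At attention time the pruned channels would also have to be dropped from (or zero-filled in) the matching query dimensions; the paper behind the repository describes the precise selection and recovery strategy.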
Alternatives and similar repositories for ThinK
Users interested in ThinK are comparing it to the repositories listed below.
- A Sober Look at Language Model Reasoning ☆81 · Updated 2 months ago
- ☆20 · Updated 9 months ago
- [ICLR 2025🔥] D2O: Dynamic Discriminative Operations for Efficient Long-Context Inference of Large Language Models ☆20 · Updated last month
- [ICML 2024] SPP: Sparsity-Preserved Parameter-Efficient Fine-Tuning for Large Language Models ☆21 · Updated last year
- Code for Merging Large Language Models ☆33 · Updated last year
- LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning ☆35 · Updated last year
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆119 · Updated last month
- ☆100 · Updated 4 months ago
- [ICLR 2024 Spotlight] Code for the paper "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy" ☆92 · Updated 2 months ago
- ☆38 · Updated last year
- Official PyTorch implementation of our paper accepted at ICLR 2024 -- Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLM… ☆49 · Updated last year
- ☆139 · Updated last year
- Awesome LLM pruning papers: an all-in-one repository integrating useful resources and insights ☆117 · Updated 3 weeks ago
- Implementation of CoLA: Compute-Efficient Pre-Training of LLMs via Low-Rank Activation ☆23 · Updated 6 months ago
- Test-time training on nearest neighbors for large language models ☆45 · Updated last year
- Official code for SEAL: Steerable Reasoning Calibration of Large Language Models for Free ☆40 · Updated 4 months ago
- Code accompanying the paper "Massive Activations in Large Language Models" ☆176 · Updated last year
- [ICML 2024] Junk DNA Hypothesis: A Task-Centric Angle of LLM Pre-trained Weights through Sparsity; Lu Yin*, Ajay Jaiswal*, Shiwei Liu, So… ☆16 · Updated 4 months ago
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ☆67 · Updated 6 months ago
- [AAAI 2024] Fluctuation-based Adaptive Structured Pruning for Large Language Models ☆59 · Updated last year
- Open-source code for the paper "Retrieval Head Mechanistically Explains Long-Context Factuality" ☆208 · Updated last year
- [ICML'25] Our study systematically investigates massive values in LLMs' attention mechanisms. First, we observe massive values are concen… ☆79 · Updated 2 months ago
- LongSpec: Long-Context Lossless Speculative Decoding with Efficient Drafting and Verification ☆63 · Updated last month
- [ACL 2024] Not All Experts are Equal: Efficient Expert Pruning and Skipping for Mixture-of-Experts Large Language Models ☆101 · Updated last year
- ☆74 · Updated 9 months ago
- ☆41 · Updated 4 months ago
- ☆23 · Updated last month
- Two Stones Hit One Bird: Bilevel Positional Encoding for Better Length Extrapolation, ICML 2024 ☆22 · Updated last year
- ☆15 · Updated last year
- ☆45 · Updated 9 months ago