GATECH-EIC / ACT
[ICML 2024] Unveiling and Harnessing Hidden Attention Sinks: Enhancing Large Language Models without Training through Attention Calibration
☆37 · Updated 10 months ago
Alternatives and similar repositories for ACT:
Users interested in ACT are comparing it to the repositories listed below.
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆72 · Updated 6 months ago
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆67 · Updated 2 months ago
- ☆76 · Updated last week
- ☆43 · Updated 3 weeks ago
- [ICML 2025] M-STAR (Multimodal Self-Evolving TrAining for Reasoning) Project. Diving into Self-Evolving Training for Multimodal Reasoning ☆58 · Updated 4 months ago
- A Survey on the Honesty of Large Language Models ☆57 · Updated 4 months ago
- Code for the paper "Unraveling Cross-Modality Knowledge Conflicts in Large Vision-Language Models" ☆42 · Updated 6 months ago
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ☆58 · Updated 2 months ago
- [EMNLP 2024] mDPO: Conditional Preference Optimization for Multimodal Large Language Models ☆73 · Updated 5 months ago
- ☆18 · Updated 5 months ago
- [ICML 2024 Oral] Official code repository for MLLM-as-a-Judge ☆67 · Updated 2 months ago
- Official code for SEAL: Steerable Reasoning Calibration of Large Language Models for Free ☆18 · Updated 3 weeks ago
- The official repository for the paper "Can MLLMs Reason in Multimodality? EMMA: An Enhanced MultiModal ReAsoning Benchmark" ☆51 · Updated this week
- Code for Merging Large Language Models ☆29 · Updated 8 months ago
- ☆29 · Updated last year
- Enhancing Large Vision Language Models with Self-Training on Image Comprehension ☆65 · Updated 11 months ago
- [EMNLP 2024 Findings🔥] Official implementation of "LOOK-M: Look-Once Optimization in KV Cache for Efficient Multimodal Long-Context In… ☆93 · Updated 5 months ago
- ☆59 · Updated 3 weeks ago
- Code for Fine-grained Uncertainty Quantification for LLMs from Semantic Similarities (NeurIPS '24) ☆21 · Updated 4 months ago
- ☆77 · Updated 2 weeks ago
- ☆35 · Updated last year
- ☆22 · Updated 11 months ago
- 🚀 LLaMA-MoE v2: Exploring Sparsity of LLaMA from Perspective of Mixture-of-Experts with Post-Training ☆83 · Updated 5 months ago
- TokenSkip: Controllable Chain-of-Thought Compression in LLMs ☆136 · Updated last month
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆109 · Updated last year
- ☆34 · Updated 2 months ago
- Model merging is a highly efficient approach for long-to-short reasoning ☆43 · Updated last month
- This is an official implementation of the Reward rAnked Fine-Tuning Algorithm (RAFT), also known as iterative best-of-n fine-tuning or re… ☆29 · Updated 7 months ago
- [NeurIPS 2024] The official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" ☆118 · Updated last month
- FeatureAlignment = Alignment + Mechanistic Interpretability ☆28 · Updated last month