sail-sg / LightTrans
The official implementation of "LightTransfer: Your Long-Context LLM is Secretly a Hybrid Model with Effortless Adaptation"
☆20, updated 6 months ago
Alternatives and similar repositories for LightTrans
Users who are interested in LightTrans are comparing it to the repositories listed below:
- ☆19, updated 6 months ago
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning (☆86, updated 8 months ago)
- [ICML 2024] Unveiling and Harnessing Hidden Attention Sinks: Enhancing Large Language Models without Training through Attention Calibrati… (☆46, updated last year)
- ☆45, updated last month
- [ICML'25] Official code for the paper "Fast Large Language Model Collaborative Decoding via Speculation" (☆28, updated 4 months ago)
- [ICML'25] Our study systematically investigates massive values in LLMs' attention mechanisms. First, we observe massive values are concen… (☆80, updated 4 months ago)
- Official code for Guiding Language Model Math Reasoning with Planning Tokens (☆16, updated last year)
- ☆22, updated 11 months ago
- Official code for SEAL: Steerable Reasoning Calibration of Large Language Models for Free (☆44, updated 7 months ago)
- The official GitHub repo for "Training Optimal Large Diffusion Language Models", the first-ever large-scale diffusion language models sca… (☆37, updated this week)
- Codebase for decoding compressed trust. (☆24, updated last year)
- V1: Toward Multimodal Reasoning by Designing Auxiliary Task (☆36, updated 6 months ago)
- ☆134, updated last month
- Laser: Learn to Reason Efficiently with Adaptive Length-based Reward Shaping (☆57, updated 5 months ago)
- 🚀 LLaMA-MoE v2: Exploring Sparsity of LLaMA from Perspective of Mixture-of-Experts with Post-Training (☆88, updated 11 months ago)
- The official implementation for [NeurIPS 2025 Oral] Gated Attention for Large Language Models: Non-linearity, Sparsity, and Attention-Sink… (☆101, updated last month)
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) (☆132, updated 4 months ago)
- The official repository of the NeurIPS'25 paper "Ada-R1: From Long-CoT to Hybrid-CoT via Bi-Level Adaptive Reasoning Optimization" (☆20, updated last month)
- The official repository of "Unnatural Languages Are Not Bugs but Features for LLMs" (☆23, updated 5 months ago)
- Extending context length of visual language models (☆12, updated 10 months ago)
- ACL 2025: SoftCoT: Soft Chain-of-Thought for Efficient Reasoning with LLMs, and preprint: SoftCoT++: Test-Time Scaling with Soft Chain-of… (☆57, updated 5 months ago)
- ☆54, updated 3 months ago
- A Sober Look at Language Model Reasoning (☆87, updated last month)
- [EMNLP 2025] TokenSkip: Controllable Chain-of-Thought Compression in LLMs (☆186, updated 4 months ago)
- [ICML 2025] M-STAR (Multimodal Self-Evolving TrAining for Reasoning) Project. Diving into Self-Evolving Training for Multimodal Reasoning (☆69, updated 3 months ago)
- Optimizing Anytime Reasoning via Budget Relative Policy Optimization (☆47, updated 3 months ago)
- Official repository for the paper "O1-Pruner: Length-Harmonizing Fine-Tuning for O1-Like Reasoning Pruning" (☆97, updated 8 months ago)
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" (☆103, updated 3 weeks ago)
- ☆16, updated last year
- Model merging is a highly efficient approach for long-to-short reasoning. (☆89, updated 3 weeks ago)