GeniusHTX / TALE
☆140 Updated 3 months ago
Alternatives and similar repositories for TALE
Users interested in TALE are comparing it to the libraries listed below.
- ☆136 Updated 9 months ago
- [EMNLP 2025] TokenSkip: Controllable Chain-of-Thought Compression in LLMs ☆197 Updated 3 weeks ago
- Official repository for the paper: O1-Pruner: Length-Harmonizing Fine-Tuning for O1-Like Reasoning Pruning ☆98 Updated 10 months ago
- A Sober Look at Language Model Reasoning ☆92 Updated last month
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆258 Updated 7 months ago
- ☆72 Updated 8 months ago
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆88 Updated 10 months ago
- [ICLR 2025] SuperCorrect: Advancing Small LLM Reasoning with Thought Template Distillation and Self-Correction ☆86 Updated 9 months ago
- ☆45 Updated 2 months ago
- ☆175 Updated 2 weeks ago
- Model merging is a highly efficient approach for long-to-short reasoning. ☆94 Updated 2 months ago
- [NeurIPS 2024] The official implementation of the paper: Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs. ☆134 Updated 9 months ago
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning ☆118 Updated 7 months ago
- [EMNLP 2025] LightThinker: Thinking Step-by-Step Compression ☆126 Updated 8 months ago
- [ACL'25] We propose a novel fine-tuning method, Separate Memory and Reasoning, which combines prompt tuning with LoRA. ☆80 Updated last month
- [ICLR 25 Oral] RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style ☆73 Updated 5 months ago
- ☆106 Updated 2 weeks ago
- General Reasoner: Advancing LLM Reasoning Across All Domains [NeurIPS 2025] ☆210 Updated 3 weeks ago
- [NeurIPS'24] Official code for *🎯DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving* ☆119 Updated last year
- End-to-End Reinforcement Learning for Multi-Turn Tool-Integrated Reasoning ☆341 Updated 3 months ago
- ☆346 Updated 4 months ago
- ☆69 Updated 6 months ago
- [ACL'25 Oral] What Happened in LLMs Layers when Trained for Fast vs. Slow Thinking: A Gradient Perspective ☆74 Updated 6 months ago
- Research code for the preprint "Optimizing Test-Time Compute via Meta Reinforcement Finetuning". ☆114 Updated 4 months ago
- [ICML 2025] M-STAR (Multimodal Self-Evolving TrAining for Reasoning) Project. Diving into Self-Evolving Training for Multimodal Reasoning ☆70 Updated 5 months ago
- ☆76 Updated last year
- Revisiting Mid-training in the Era of Reinforcement Learning Scaling ☆182 Updated 5 months ago
- [ICML'25] Our study systematically investigates massive values in LLMs' attention mechanisms. First, we observe massive values are concen… ☆86 Updated 6 months ago
- ☆213 Updated 6 months ago
- ☆213 Updated 10 months ago