shivamag125 / EM_PT
☆16 · Updated this week
Alternatives and similar repositories for EM_PT
Users interested in EM_PT are comparing it to the repositories listed below.
- Official repository for the paper: O1-Pruner: Length-Harmonizing Fine-Tuning for O1-Like Reasoning Pruning · ☆86 · Updated 5 months ago
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning · ☆82 · Updated 6 months ago
- ☆19 · Updated 5 months ago
- [ACL 2025] Knowledge Unlearning for Large Language Models · ☆39 · Updated 3 months ago
- FeatureAlignment = Alignment + Mechanistic Interpretability · ☆29 · Updated 5 months ago
- The official repository of the paper "AdaR1: From Long-CoT to Hybrid-CoT via Bi-Level Adaptive Reasoning Optimization" · ☆18 · Updated 3 months ago
- AdaRFT: Efficient Reinforcement Finetuning via Adaptive Curriculum Learning · ☆41 · Updated 2 months ago
- A regularly updated paper list for LLMs-reasoning-in-latent-space. · ☆149 · Updated last week
- ☆156 · Updated 2 months ago
- [ICML'25] Our study systematically investigates massive values in LLMs' attention mechanisms. First, we observe massive values are concen… · ☆77 · Updated last month
- ☆127 · Updated 2 months ago
- Chain of Thoughts (CoT) is so hot! so long! We need short reasoning process! · ☆68 · Updated 4 months ago
- ☆26 · Updated 4 months ago
- ☆65 · Updated 4 months ago
- [ICLR 2025 Workshop] "Landscape of Thoughts: Visualizing the Reasoning Process of Large Language Models" · ☆34 · Updated last month
- The repository of the paper "REEF: Representation Encoding Fingerprints for Large Language Models," aims to protect the IP of open-source… · ☆59 · Updated 7 months ago
- The official implementation of "LightTransfer: Your Long-Context LLM is Secretly a Hybrid Model with Effortless Adaptation" · ☆20 · Updated 3 months ago
- TokenSkip: Controllable Chain-of-Thought Compression in LLMs · ☆171 · Updated last month
- ☆28 · Updated 4 months ago
- ☆39 · Updated 3 months ago
- [ICLR 2025] Code and data repository for the paper "Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation" · ☆73 · Updated 7 months ago
- Code for "CREAM: Consistency Regularized Self-Rewarding Language Models", ICLR 2025. · ☆25 · Updated 5 months ago
- ☆49 · Updated last month
- Model merging is a highly efficient approach for long-to-short reasoning. · ☆78 · Updated 2 months ago
- [ICLR 2025 Spotlight] Weak-to-strong preference optimization: stealing reward from a weak aligned model · ☆13 · Updated 5 months ago
- ☆15 · Updated 2 months ago
- [ACL'25 Oral] What Happened in LLMs Layers when Trained for Fast vs. Slow Thinking: A Gradient Perspective · ☆71 · Updated last month
- The official implementation of the paper "S²R: Teaching LLMs to Self-verify and Self-correct via Reinforcement Learning" · ☆69 · Updated 3 months ago
- [ICML 2024] Unveiling and Harnessing Hidden Attention Sinks: Enhancing Large Language Models without Training through Attention Calibrati… · ☆40 · Updated last year
- ☆16 · Updated 8 months ago