shivamag125 / EM_PT
☆26 · Updated 5 months ago
Alternatives and similar repositories for EM_PT
Users interested in EM_PT are comparing it to the repositories listed below.
- ☆23 · Updated 11 months ago
- The official repository of the NeurIPS'25 paper "Ada-R1: From Long-CoT to Hybrid-CoT via Bi-Level Adaptive Reasoning Optimization" · ☆21 · Updated 2 months ago
- Resources and paper list for 'Scaling Environments for Agents'. This repository accompanies our survey on how environments contribute to … · ☆57 · Updated this week
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning · ☆89 · Updated 11 months ago
- [ICLR 2025] Code and data repo for the paper "Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation" · ☆93 · Updated last year
- ☆46 · Updated 4 months ago
- ☆33 · Updated 2 months ago
- ☆63 · Updated 6 months ago
- A Unified Framework for High-Performance and Extensible LLM Steering · ☆158 · Updated this week
- [ICML'25] Our study systematically investigates massive values in LLMs' attention mechanisms. First, we observe massive values are concen… · ☆85 · Updated 7 months ago
- Official repository for the paper "O1-Pruner: Length-Harmonizing Fine-Tuning for O1-Like Reasoning Pruning" · ☆97 · Updated 11 months ago
- [ICLR 2026] Adaptive Thinking via Mode Policy Optimization for Social Language Agents · ☆47 · Updated 7 months ago
- ☆34 · Updated 8 months ago
- The official implementation of the paper "S²R: Teaching LLMs to Self-verify and Self-correct via Reinforcement Learning" · ☆73 · Updated 9 months ago
- ☆45 · Updated last month
- [ICLR 2026] Do Not Let Low-Probability Tokens Over-Dominate in RL for LLMs · ☆41 · Updated 8 months ago
- Source code for our paper "ARIA: Training Language Agents with Intention-Driven Reward Aggregation" · ☆25 · Updated 5 months ago
- ☆56 · Updated 3 months ago
- [2025 TMLR] A Survey on the Honesty of Large Language Models · ☆64 · Updated last year
- ☆25 · Updated 9 months ago
- [ACL'25] The official code repository for PRMBench: A Fine-grained and Challenging Benchmark for Process-Level Reward Models · ☆87 · Updated 11 months ago
- ☆16 · Updated 7 months ago
- FeatureAlignment = Alignment + Mechanistic Interpretability · ☆34 · Updated 10 months ago
- ☆204 · Updated last month
- ☆141 · Updated 10 months ago
- [ICLR 25 Oral] RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style · ☆73 · Updated 6 months ago
- [AAAI'26 Oral] Official implementation of STAR-1: Safer Alignment of Reasoning LLMs with 1K Data · ☆33 · Updated 9 months ago
- The official GitHub repository for our survey paper "Beyond Single-Turn: A Survey on Multi-Turn Interactions with Large Language … · ☆169 · Updated 8 months ago
- AdaRFT: Efficient Reinforcement Finetuning via Adaptive Curriculum Learning · ☆53 · Updated 7 months ago
- Code for "CREAM: Consistency Regularized Self-Rewarding Language Models", ICLR 2025 · ☆28 · Updated 11 months ago