ASTRAL-Group / data-efficient-llm-rl
☆33 · Updated 2 weeks ago
Alternatives and similar repositories for data-efficient-llm-rl
Users interested in data-efficient-llm-rl are comparing it to the libraries listed below.
- [COLM 2025] SEAL: Steerable Reasoning Calibration of Large Language Models for Free ☆47 · Updated 8 months ago
- A Sober Look at Language Model Reasoning ☆89 · Updated last month
- [NeurIPS 2024] "Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?" ☆37 · Updated 5 months ago
- AdaRFT: Efficient Reinforcement Finetuning via Adaptive Curriculum Learning ☆49 · Updated 6 months ago
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆88 · Updated 8 months ago
- Code for the NeurIPS 2024 paper "Regularizing Hidden States Enables Learning Generalizable Reward Model for LLMs" ☆43 · Updated 10 months ago
- Official codebase for "STAIR: Improving Safety Alignment with Introspective Reasoning" ☆86 · Updated 9 months ago
- [ICML 2025] "From Passive to Active Reasoning: Can Large Language Models Ask the Right Questions under Incomplete Information?" ☆48 · Updated 2 months ago
- A regularly updated paper list for LLMs-reasoning-in-latent-space ☆235 · Updated last week
- [EMNLP 2025] An effective and interpretable weight-editing method for mitigating overly short reasoning in LLMs, and a mechanistic study un… ☆16 · Updated this week
- Official repository for the paper "O1-Pruner: Length-Harmonizing Fine-Tuning for O1-Like Reasoning Pruning" ☆98 · Updated 10 months ago
- [ICLR 2025 Workshop] "Landscape of Thoughts: Visualizing the Reasoning Process of Large Language Models" ☆44 · Updated 4 months ago
- Official code for the paper "Booster: Tackling Harmful Fine-tuning for Large Language Models via Attenuating Harmful Perturba…" ☆33 · Updated 8 months ago
- [ICLR 2025] Code and data for the paper "Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation" ☆90 · Updated last year
- [ICML 2025] A study that systematically investigates massive values in LLMs' attention mechanisms. First, we observe massive values are concen… ☆86 · Updated 6 months ago
- Official repository for the paper "Safety Alignment Should Be Made More Than Just a Few Tokens Deep" ☆166 · Updated 7 months ago
- Official code for the paper "Vaccine: Perturbation-aware Alignment for Large Language Models" (NeurIPS 2024) ☆48 · Updated last year
- Resources and paper list for "Scaling Environments for Agents". This repository accompanies our survey on how environments contribute to … ☆45 · Updated this week
- Research code for the preprint "Optimizing Test-Time Compute via Meta Reinforcement Finetuning" ☆115 · Updated 4 months ago
- [ICML 2025] "From Debate to Equilibrium: Belief-Driven Multi-Agent LLM Reasoning via Bayesian Nash Equilibrium" ☆31 · Updated 3 weeks ago
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆88 · Updated 10 months ago
- FeatureAlignment = Alignment + Mechanistic Interpretability ☆33 · Updated 9 months ago
- [ACL 2024] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆93 · Updated last year
- 📜 Paper list on decoding methods for LLMs and LVLMs ☆67 · Updated last month