ASTRAL-Group / data-efficient-llm-rl
☆35 · Updated 3 weeks ago
Alternatives and similar repositories for data-efficient-llm-rl
Users interested in data-efficient-llm-rl are comparing it to the repositories listed below
- AdaRFT: Efficient Reinforcement Finetuning via Adaptive Curriculum Learning ☆54 · Updated 7 months ago
- [COLM 2025] SEAL: Steerable Reasoning Calibration of Large Language Models for Free ☆51 · Updated 10 months ago
- ☆74 · Updated 9 months ago
- [NeurIPS 2024] "Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?" ☆38 · Updated 6 months ago
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆96 · Updated last year
- A Sober Look at Language Model Reasoning ☆92 · Updated 2 months ago
- Resources and paper list for 'Scaling Environments for Agents'. This repository accompanies our survey on how environments contribute to … ☆58 · Updated last week
- Official codebase for "STAIR: Improving Safety Alignment with Introspective Reasoning" ☆88 · Updated 11 months ago
- Official Repository for The Paper: Safety Alignment Should Be Made More Than Just a Few Tokens Deep ☆172 · Updated 9 months ago
- Official repository for paper: O1-Pruner: Length-Harmonizing Fine-Tuning for O1-Like Reasoning Pruning ☆97 · Updated 11 months ago
- ☆10 · Updated 9 months ago
- [ICML'25] Our study systematically investigates massive values in LLMs' attention mechanisms. First, we observe massive values are concen… ☆85 · Updated 7 months ago
- Code for NeurIPS 2024 paper "Regularizing Hidden States Enables Learning Generalizable Reward Model for LLMs" ☆46 · Updated 11 months ago
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆89 · Updated 10 months ago
- [EMNLP 25] An effective and interpretable weight-editing method for mitigating overly short reasoning in LLMs, and a mechanistic study un… ☆16 · Updated last month
- ☆63 · Updated 6 months ago
- Chain of Thoughts (CoT) is so hot! so long! We need short reasoning process! ☆72 · Updated 10 months ago
- This is the official code for the paper "Vaccine: Perturbation-aware Alignment for Large Language Models" (NeurIPS 2024) ☆49 · Updated 3 weeks ago
- Reasoning or Memorization? Unreliable Results of Reinforcement Learning Due to Data Contamination ☆21 · Updated 6 months ago
- This is the official code for the paper "Lazy Safety Alignment for Large Language Models against Harmful Fine-tuning" (NeurIPS 2024) ☆25 · Updated last year
- FeatureAlignment = Alignment + Mechanistic Interpretability ☆34 · Updated 11 months ago
- [ICML 2025] "From Passive to Active Reasoning: Can Large Language Models Ask the Right Questions under Incomplete Information?" ☆49 · Updated 4 months ago
- ☆58 · Updated 2 years ago
- This is the official code for the paper "Safety Tax: Safety Alignment Makes Your Large Reasoning Models Less Reasonable" ☆28 · Updated 10 months ago
- GitHub repo for NeurIPS 2024 paper "Safe LoRA: the Silver Lining of Reducing Safety Risks when Fine-tuning Large Language Models" ☆25 · Updated last month
- This is the official code for the paper "Booster: Tackling Harmful Fine-tuning for Large Language Models via Attenuating Harmful Perturba… ☆36 · Updated 10 months ago
- A Unified Framework for High-Performance and Extensible LLM Steering ☆163 · Updated last week
- This repo is for the safety topic, including attacks, defenses and studies related to reasoning and RL ☆59 · Updated 5 months ago
- [ICML 2025] Official Implementation of GLIDER ☆72 · Updated 4 months ago
- ☆46 · Updated 4 months ago