ASTRAL-Group / data-efficient-llm-rl
☆29 · Updated 2 months ago
Alternatives and similar repositories for data-efficient-llm-rl
Users interested in data-efficient-llm-rl are comparing it to the repositories listed below.
- [NeurIPS 2024] "Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?" ☆37 · Updated 4 months ago
- [COLM 2025] SEAL: Steerable Reasoning Calibration of Large Language Models for Free ☆45 · Updated 7 months ago
- ☆56 · Updated 4 months ago
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆87 · Updated 8 months ago
- AdaRFT: Efficient Reinforcement Finetuning via Adaptive Curriculum Learning ☆48 · Updated 5 months ago
- ☆54 · Updated 2 years ago
- [ICML 2025] "From Passive to Active Reasoning: Can Large Language Models Ask the Right Questions under Incomplete Information?" ☆47 · Updated last month
- ☆67 · Updated 7 months ago
- A Sober Look at Language Model Reasoning ☆89 · Updated last week
- Official repository for the paper: Safety Alignment Should Be Made More Than Just a Few Tokens Deep ☆165 · Updated 7 months ago
- [ICLR 2025] "Rethinking LLM Unlearning Objectives: A Gradient Perspective and Go Beyond" ☆13 · Updated 9 months ago
- This is the official code for the paper "Booster: Tackling Harmful Fine-tuning for Large Language Models via Attenuating Harmful Perturba…" ☆33 · Updated 8 months ago
- [ICLR 2025] Code and data repo for the paper "Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation" ☆86 · Updated 11 months ago
- This is the official code for the paper "Safety Tax: Safety Alignment Makes Your Large Reasoning Models Less Reasonable" ☆25 · Updated 8 months ago
- Code for the paper: Aligning Large Language Models with Representation Editing: A Control Perspective ☆34 · Updated 10 months ago
- This repo is for the safety topic, including attacks, defenses, and studies related to reasoning and RL ☆52 · Updated 2 months ago
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆93 · Updated last year
- Awesome Large Reasoning Model (LRM) Safety. This repository is used to collect security-related research on large reasoning models such as … ☆78 · Updated this week
- Official repository for the paper: O1-Pruner: Length-Harmonizing Fine-Tuning for O1-Like Reasoning Pruning ☆97 · Updated 9 months ago
- This is the official code for the paper "Vaccine: Perturbation-aware Alignment for Large Language Models" (NeurIPS 2024) ☆48 · Updated last year
- [EMNLP 25] An effective and interpretable weight-editing method for mitigating overly short reasoning in LLMs, and a mechanistic study un… ☆16 · Updated 2 months ago
- [ICML 2025] "From Debate to Equilibrium: Belief-Driven Multi-Agent LLM Reasoning via Bayesian Nash Equilibrium" ☆29 · Updated last week
- This is the official code for the paper "Lazy Safety Alignment for Large Language Models against Harmful Fine-tuning" (NeurIPS 2024) ☆25 · Updated last year
- Official codebase for "STAIR: Improving Safety Alignment with Introspective Reasoning" ☆85 · Updated 9 months ago
- [ICML'25] Our study systematically investigates massive values in LLMs' attention mechanisms. First, we observe massive values are concen… ☆82 · Updated 5 months ago
- [ICLR 2025 Workshop] "Landscape of Thoughts: Visualizing the Reasoning Process of Large Language Models" ☆42 · Updated 3 months ago
- Code for "CREAM: Consistency Regularized Self-Rewarding Language Models", ICLR 2025 ☆27 · Updated 9 months ago
- ☆184 · Updated 6 months ago
- FeatureAlignment = Alignment + Mechanistic Interpretability ☆31 · Updated 8 months ago
- [ICLR 2025] Official codebase for the ICLR 2025 paper "Multimodal Situational Safety" ☆30 · Updated 5 months ago