ASTRAL-Group / data-efficient-llm-rl
☆26 · Updated last month
Alternatives and similar repositories for data-efficient-llm-rl
Users interested in data-efficient-llm-rl are comparing it to the repositories listed below.
- [ICML 2025] "From Debate to Equilibrium: Belief-Driven Multi-Agent LLM Reasoning via Bayesian Nash Equilibrium" ☆27 · Updated 4 months ago
- [NeurIPS 2024] "Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?" ☆37 · Updated 3 months ago
- Official code for SEAL: Steerable Reasoning Calibration of Large Language Models for Free ☆44 · Updated 7 months ago
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆85 · Updated 7 months ago
- Code for NeurIPS 2024 paper "Regularizing Hidden States Enables Learning Generalizable Reward Model for LLMs" ☆41 · Updated 8 months ago
- ☆54 · Updated 3 months ago
- ☆54 · Updated 2 years ago
- [ICLR 2025 Workshop] "Landscape of Thoughts: Visualizing the Reasoning Process of Large Language Models" ☆37 · Updated 2 months ago
- A Sober Look at Language Model Reasoning ☆87 · Updated last month
- Official Repository for The Paper: Safety Alignment Should Be Made More Than Just a Few Tokens Deep ☆161 · Updated 6 months ago
- ☆67 · Updated 6 months ago
- This repository contains a regularly updated paper list for LLMs-reasoning-in-latent-space. ☆188 · Updated last week
- AdaRFT: Efficient Reinforcement Finetuning via Adaptive Curriculum Learning ☆46 · Updated 4 months ago
- This is the official code for the paper "Vaccine: Perturbation-aware Alignment for Large Language Models" (NeurIPS 2024) ☆47 · Updated 11 months ago
- This is the official code for the paper "Lazy Safety Alignment for Large Language Models against Harmful Fine-tuning" (NeurIPS 2024) ☆24 · Updated last year
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆91 · Updated last year
- [ICLR 2025] Code and Data Repo for Paper "Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation" ☆81 · Updated 10 months ago
- ☆179 · Updated 5 months ago
- ☆10 · Updated 6 months ago
- This is the official code for the paper "Safety Tax: Safety Alignment Makes Your Large Reasoning Models Less Reasonable". ☆25 · Updated 7 months ago
- ☆53 · Updated 5 months ago
- ☆33 · Updated last year
- FeatureAlignment = Alignment + Mechanistic Interpretability ☆31 · Updated 8 months ago
- This is the official code for the paper "Booster: Tackling Harmful Fine-tuning for Large Language Models via Attenuating Harmful Perturba…" ☆32 · Updated 7 months ago
- ☆28 · Updated 7 months ago
- Official codebase for "STAIR: Improving Safety Alignment with Introspective Reasoning" ☆82 · Updated 8 months ago
- [ICML 2025] "From Passive to Active Reasoning: Can Large Language Models Ask the Right Questions under Incomplete Information?" ☆46 · Updated last month
- Awesome Large Reasoning Model (LRM) Safety. This repository is used to collect security-related research on large reasoning models such as … ☆76 · Updated this week
- This repo is for the safety topic, including attacks, defenses and studies related to reasoning and RL ☆50 · Updated 2 months ago
- Code for paper: Aligning Large Language Models with Representation Editing: A Control Perspective ☆34 · Updated 9 months ago