PKU-Alignment / llms-resist-alignment
[ACL2025 Best Paper] Language Models Resist Alignment
☆40 · Updated 6 months ago
Alternatives and similar repositories for llms-resist-alignment
Users interested in llms-resist-alignment are comparing it to the repositories listed below
- Resources and paper list for 'Scaling Environments for Agents'. This repository accompanies our survey on how environments contribute to … ☆53 · Updated 2 weeks ago
- [NeurIPS 2024] The official implementation of the paper: Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs. ☆134 · Updated 9 months ago
- [2025-TMLR] A Survey on the Honesty of Large Language Models ☆63 · Updated last year
- The rule-based evaluation subset and code implementation of Omni-MATH ☆26 · Updated last year
- AdaRFT: Efficient Reinforcement Finetuning via Adaptive Curriculum Learning ☆50 · Updated 6 months ago
- [NeurIPS'24] Weak-to-Strong Search: Align Large Language Models via Searching over Small Language Models ☆65 · Updated last year
- [NeurIPS 2025] Implementation for the paper "The Surprising Effectiveness of Negative Reinforcement in LLM Reasoning" ☆146 · Updated 2 months ago
- ☆47 · Updated 9 months ago
- ☆51 · Updated last year
- Code for Research Project TLDR ☆25 · Updated 5 months ago
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆76 · Updated 3 months ago
- Official code for the paper "SPA-RL: Reinforcing LLM Agent via Stepwise Progress Attribution" ☆61 · Updated 3 months ago
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆62 · Updated last year
- RL with Experience Replay ☆51 · Updated 5 months ago
- [ACL '25] The official code repository for PRMBench: A Fine-grained and Challenging Benchmark for Process-Level Reward Models. ☆85 · Updated 10 months ago
- OS-Sentinel ☆36 · Updated 2 months ago
- ☆21 · Updated 8 months ago
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing". ☆83 · Updated 11 months ago
- [COLM 2025] SEAL: Steerable Reasoning Calibration of Large Language Models for Free ☆50 · Updated 9 months ago
- Model merging is a highly efficient approach for long-to-short reasoning. ☆96 · Updated 2 months ago
- ☆43 · Updated last year
- ☆70 · Updated 6 months ago
- Laser: Learn to Reason Efficiently with Adaptive Length-based Reward Shaping ☆62 · Updated 7 months ago
- RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models (NeurIPS 2024) ☆87 · Updated last year
- BeHonest: Benchmarking Honesty in Large Language Models ☆34 · Updated last year
- ☆16 · Updated last year
- Implementation for the research paper "Enhancing LLM Reasoning via Critique Models with Test-Time and Training-Time Supervision". ☆55 · Updated last year
- A Sober Look at Language Model Reasoning ☆92 · Updated last month
- The official GitHub repository for our survey paper "Beyond Single-Turn: A Survey on Multi-Turn Interactions with Large Language … ☆161 · Updated 7 months ago
- Reproducing R1 for Code with Reliable Rewards ☆12 · Updated 9 months ago