Yiwei98 / ESC
☆14 · Updated 6 months ago
Alternatives and similar repositories for ESC
Users interested in ESC are comparing it to the repositories listed below.
- [COLM 2025] SEAL: Steerable Reasoning Calibration of Large Language Models for Free ☆52 · Updated 9 months ago
- ☆47 · Updated 10 months ago
- Code for the ACL 2024 paper "SAPT: A Shared Attention Framework for Parameter-Efficient Continual Learning of Large Language …" ☆38 · Updated last year
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆89 · Updated 10 months ago
- ☆51 · Updated 2 years ago
- Model merging is a highly efficient approach for long-to-short reasoning. ☆98 · Updated 3 months ago
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆118 · Updated last year
- [NeurIPS 2023] GitHub repository for "Composing Parameter-Efficient Modules with Arithmetic Operations" ☆61 · Updated 2 years ago
- [NeurIPS 2024] RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models ☆89 · Updated last year
- [NeurIPS 2024] The official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" ☆134 · Updated 10 months ago
- ☆32 · Updated 10 months ago
- [NAACL 2025 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆70 · Updated 2 weeks ago
- [TMLR 2025] A Survey on the Honesty of Large Language Models ☆64 · Updated last year
- [NeurIPS 2024] "Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?" ☆38 · Updated 6 months ago
- ☆44 · Updated last year
- ☆29 · Updated last year
- The official implementation of "ICDPO: Effectively Borrowing Alignment Capability of Others via In-context Direct Preference Optimization…" ☆16 · Updated last year
- Code associated with "Tuning Language Models by Proxy" (Liu et al., 2024) ☆127 · Updated last year
- ☆69 · Updated 10 months ago
- ☆73 · Updated 9 months ago
- [ICML 2025] "From Passive to Active Reasoning: Can Large Language Models Ask the Right Questions under Incomplete Information?" ☆49 · Updated 3 months ago
- ☆72 · Updated last year
- [NeurIPS 2024] Official code of $\beta$-DPO: Direct Preference Optimization with Dynamic $\beta$ ☆50 · Updated last year
- Code & data for our paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆69 · Updated last year
- [NeurIPS 2025] Implementation for the paper "The Surprising Effectiveness of Negative Reinforcement in LLM Reasoning" ☆157 · Updated 3 months ago
- [ICLR 2025 Oral] RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style ☆73 · Updated 6 months ago
- [NeurIPS 2025] RULE: Reinforcement UnLEarning Achieves Forget-Retain Pareto Optimality ☆19 · Updated 3 months ago
- Evaluating the Ripple Effects of Knowledge Editing in Language Models ☆56 · Updated last year
- Reproducing R1 for Code with Reliable Rewards ☆12 · Updated 9 months ago
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing" ☆83 · Updated last year