ShujinWu-0814 / ALOE
Public code repo for the COLING 2025 paper "Aligning LLMs with Individual Preferences via Interaction"
☆29 · Updated 2 months ago
Alternatives and similar repositories for ALOE
Users interested in ALOE are comparing it to the repositories listed below.
- ☆74 · Updated last year
- ☆44 · Updated last year
- ☆46 · Updated 7 months ago
- Code & Data for our paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆65 · Updated last year
- BeHonest: Benchmarking Honesty in Large Language Models ☆34 · Updated 10 months ago
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆38 · Updated last year
- A Survey on the Honesty of Large Language Models ☆57 · Updated 6 months ago
- [ICLR 25 Oral] RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style ☆48 · Updated last month
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆112 · Updated last year
- ☆24 · Updated 2 months ago
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆60 · Updated 7 months ago
- Official code for paper "SPA-RL: Reinforcing LLM Agent via Stepwise Progress Attribution" ☆29 · Updated 3 weeks ago
- [EMNLP 2023] Plan, Verify and Switch: Integrated Reasoning with Diverse X-of-Thoughts ☆27 · Updated last year
- Official code for ICML 2024 paper on Persona In-Context Learning (PICLe) ☆25 · Updated 11 months ago
- Evaluating the Ripple Effects of Knowledge Editing in Language Models ☆55 · Updated last year
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆112 · Updated 9 months ago
- The repository of the project "Fine-tuning Large Language Models with Sequential Instructions"; the codebase comes from open-instruct and LA… ☆29 · Updated 7 months ago
- [ACL 2024 main] Aligning Large Language Models with Human Preferences through Representation Engineering (https://aclanthology.org/2024.…) ☆25 · Updated 9 months ago
- [EMNLP 2024] The official GitHub repo for the paper "Course-Correction: Safety Alignment Using Synthetic Preferences" ☆19 · Updated 8 months ago
- ☆29 · Updated last year
- ☆59 · Updated 9 months ago
- This is the implementation of LeCo ☆31 · Updated 5 months ago
- [ICML'2024] Can AI Assistants Know What They Don't Know? ☆81 · Updated last year
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆63 · Updated 6 months ago
- [ACL 2024] Code for the paper "ALaRM: Align Language Models via Hierarchical Rewards Modeling" ☆25 · Updated last year
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆59 · Updated last year
- RAG-RewardBench: Benchmarking Reward Models in Retrieval Augmented Generation for Preference Alignment ☆16 · Updated 6 months ago
- Source code of our EMNLP 2024 paper "FactAlign: Long-form Factuality Alignment of Large Language Models" ☆19 · Updated 8 months ago
- [ICLR'24 Spotlight] "Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts" ☆69 · Updated last year
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems. ☆62 · Updated 11 months ago