OpenBMB / CPO
☆21 · Updated 9 months ago
Alternatives and similar repositories for CPO:
Users interested in CPO are comparing it to the libraries listed below.
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆38 · Updated last year
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆57 · Updated last year
- [NeurIPS 2024] Official code of $\beta$-DPO: Direct Preference Optimization with Dynamic $\beta$ ☆43 · Updated 6 months ago
- [ACL 2024] Learning to Edit: Aligning LLMs with Knowledge Editing ☆35 · Updated 8 months ago
- [NeurIPS 2024] Weak-to-Strong Search: Align Large Language Models via Searching over Small Language Models ☆58 · Updated 4 months ago
- Code & Data for our paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆63 · Updated last year
- [ACL 2024 Findings] CriticBench: Benchmarking LLMs for Critique-Correct Reasoning ☆24 · Updated last year
- ☆20 · Updated last month
- ☆41 · Updated last year
- Code for "Improving Weak-to-Strong Generalization with Scalable Oversight and Ensemble Learning" ☆16 · Updated last year
- ☆25 · Updated 11 months ago
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems ☆59 · Updated 9 months ago
- ☆22 · Updated 9 months ago
- SLED: Self Logits Evolution Decoding for Improving Factuality in Large Language Models (https://arxiv.org/pdf/2411.02433) ☆26 · Updated 4 months ago
- [ICML 2024] Can AI Assistants Know What They Don't Know? ☆80 · Updated last year
- ☆15 · Updated last week
- [NAACL 2024] A Synthetic, Scalable and Systematic Evaluation Suite for Large Language Models ☆32 · Updated 10 months ago
- [EMNLP 2024] The official GitHub repo for the paper "Course-Correction: Safety Alignment Using Synthetic Preferences" ☆19 · Updated 6 months ago
- Code for "CREAM: Consistency Regularized Self-Rewarding Language Models" (ICLR 2025) ☆20 · Updated 2 months ago
- ☆59 · Updated 7 months ago
- A Survey on the Honesty of Large Language Models ☆57 · Updated 4 months ago
- Methods and evaluation for aligning language models temporally ☆29 · Updated last year
- [NeurIPS 2024] Code and data repo for the paper "Embedding Trajectory for Out-of-Distribution Detection in Mathematical Reasoning" ☆25 · Updated 10 months ago
- Code for the paper "Unraveling Cross-Modality Knowledge Conflicts in Large Vision-Language Models" ☆41 · Updated 6 months ago
- Evaluating the Ripple Effects of Knowledge Editing in Language Models ☆55 · Updated last year
- AbstainQA, ACL 2024 ☆25 · Updated 6 months ago
- [EMNLP 2024] Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue ☆35 · Updated 5 months ago
- Our research proposes a novel MoGU framework that improves LLMs' safety while preserving their usability ☆15 · Updated 3 months ago
- ☆29 · Updated 3 months ago
- ☆17 · Updated last year