OpenBMB / CPO
☆20 · Updated 8 months ago
Alternatives and similar repositories for CPO:
Users interested in CPO are comparing it to the repositories listed below.
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆57 · Updated last year
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆37 · Updated last year
- [NeurIPS 2024] Official code of $\beta$-DPO: Direct Preference Optimization with Dynamic $\beta$ ☆41 · Updated 5 months ago
- Code & Data for our paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆63 · Updated last year
- SLED: Self Logits Evolution Decoding for Improving Factuality in Large Language Models (https://arxiv.org/pdf/2411.02433) ☆24 · Updated 3 months ago
- A Survey on the Honesty of Large Language Models ☆56 · Updated 3 months ago
- Evaluating the Ripple Effects of Knowledge Editing in Language Models ☆54 · Updated 11 months ago
- ☆25 · Updated 10 months ago
- [ICML 2024] Can AI Assistants Know What They Don't Know? ☆79 · Updated last year
- AbstainQA (ACL 2024) ☆25 · Updated 5 months ago
- The code of "Improving Weak-to-Strong Generalization with Scalable Oversight and Ensemble Learning" ☆16 · Updated last year
- Code for "Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective" ☆32 · Updated 10 months ago
- Official implementation of "Privacy Implications of Retrieval-Based Language Models" (EMNLP 2023, https://arxiv.org/abs/2305.14888) ☆35 · Updated 9 months ago
- [ICLR 2024, spotlight] Tool-Augmented Reward Modeling ☆46 · Updated 3 months ago
- ☆41 · Updated last year
- [ACL 2024] Learning to Edit: Aligning LLMs with Knowledge Editing ☆35 · Updated 7 months ago
- BeHonest: Benchmarking Honesty in Large Language Models ☆31 · Updated 7 months ago