OpenBMB / CPO
☆20 · Updated 7 months ago
Alternatives and similar repositories for CPO:
Users interested in CPO are comparing it to the repositories listed below.
- [ACL 2024 Findings] CriticBench: Benchmarking LLMs for Critique-Correct Reasoning ☆23 · Updated 11 months ago
- Code & data for the paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆63 · Updated last year
- Evaluating the Ripple Effects of Knowledge Editing in Language Models ☆54 · Updated 10 months ago
- [ACL 2024] Learning to Edit: Aligning LLMs with Knowledge Editing ☆33 · Updated 6 months ago
- Code for "Improving Weak-to-Strong Generalization with Scalable Oversight and Ensemble Learning" ☆15 · Updated last year
- Code for "Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective" ☆32 · Updated 9 months ago
- [NAACL 2024] A Synthetic, Scalable and Systematic Evaluation Suite for Large Language Models ☆32 · Updated 8 months ago
- ☆20 · Updated 7 months ago
- The Good, The Bad, and The Greedy: Evaluation of LLMs Should Not Ignore Non-Determinism ☆27 · Updated 7 months ago
- Official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆34 · Updated last year
- Official implementation of "Privacy Implications of Retrieval-Based Language Models" (EMNLP 2023). https://arxiv.org/abs/2305.14888 ☆35 · Updated 8 months ago
- SLED: Self Logits Evolution Decoding for Improving Factuality in Large Language Models. https://arxiv.org/pdf/2411.02433 ☆22 · Updated 2 months ago
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems ☆54 · Updated 7 months ago
- [ICML 2024] Can AI Assistants Know What They Don't Know? ☆79 · Updated last year
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆50 · Updated 11 months ago
- A Survey on the Honesty of Large Language Models ☆54 · Updated 2 months ago
- [NeurIPS 2024] Official repository for "MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models" ☆59 · Updated 3 months ago
- RAG-RewardBench: Benchmarking Reward Models in Retrieval Augmented Generation for Preference Alignment ☆14 · Updated 2 months ago
- ScaleQuest: a scalable, novel, and cost-effective data synthesis method to unleash the reasoning capability of LLMs ☆60 · Updated 4 months ago
- AbstainQA (ACL 2024) ☆25 · Updated 4 months ago
- [EMNLP 2024] Official GitHub repo for the paper "Course-Correction: Safety Alignment Using Synthetic Preferences" ☆19 · Updated 5 months ago
- ☆42 · Updated 4 months ago
- [ICLR 2024] RAIN: Your Language Models Can Align Themselves without Finetuning ☆89 · Updated 9 months ago
- Methods and evaluation for aligning language models temporally ☆27 · Updated last year
- ☆25 · Updated 2 years ago
- [NeurIPS 2024] Official code of $\beta$-DPO: Direct Preference Optimization with Dynamic $\beta$ ☆41 · Updated 4 months ago
- ☆58 · Updated 6 months ago
- Official code for "Decoding-Time Language Model Alignment with Multiple Objectives" ☆19 · Updated 4 months ago
- ☆30 · Updated last year
- BeHonest: Benchmarking Honesty in Large Language Models ☆31 · Updated 6 months ago