EthanLeo-LYX / LLMQA
[WWW2024 Oral] Harnessing Multi-Role Capabilities of Large Language Models for Open-Domain Question Answering
☆11 · Updated this week
Alternatives and similar repositories for LLMQA:
Users interested in LLMQA are comparing it to the libraries listed below.
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆38 · Updated last year
- AbstainQA, ACL 2024 ☆25 · Updated 6 months ago
- [ICLR 2025] LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization ☆34 · Updated 2 months ago
- Evaluate the Quality of Critique ☆34 · Updated 10 months ago
- This is the repository for the paper "CREATOR: Tool Creation for Disentangling Abstract and Concrete Reasoning of Large Language Models" ☆24 · Updated last year
- RAG-RewardBench: Benchmarking Reward Models in Retrieval Augmented Generation for Preference Alignment ☆16 · Updated 4 months ago
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated last year
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆47 · Updated 4 months ago
- Official repository of "Are Your LLMs Capable of Stable Reasoning?" ☆25 · Updated last month
- Code for the arXiv paper "Dynamic Scaling of Unit Tests for Code Reward Modeling" ☆18 · Updated 3 months ago
- ☆29 · Updated 4 months ago
- Repository for the EMNLP 2024 paper "AppBench: Planning of Multiple APIs from Various APPs for Complex User Instruction" ☆13 · Updated 5 months ago
- ☆22 · Updated 9 months ago
- Official repository for the paper "Weak-to-Strong Extrapolation Expedites Alignment" ☆74 · Updated 10 months ago
- ☆22 · Updated 4 months ago
- ☆21 · Updated 9 months ago
- Public code repo for the COLING 2025 paper "Aligning LLMs with Individual Preferences via Interaction" ☆26 · Updated 3 weeks ago