smallporridge / TrustworthyRAG
☆16 · Updated last year
Alternatives and similar repositories for TrustworthyRAG
Users interested in TrustworthyRAG are comparing it to the libraries listed below.
- RAG-RewardBench: Benchmarking Reward Models in Retrieval Augmented Generation for Preference Alignment ☆16 · Updated 11 months ago
- HierSearch: A Hierarchical Enterprise Deep Search Framework Integrating Local and Web Searches ☆35 · Updated last month
- Official code implementation for the ACL 2025 paper "Dynamic Scaling of Unit Tests for Code Reward Modeling" ☆26 · Updated 6 months ago
- Code for the paper "Both Text and Images Leaked! A Systematic Analysis of Data Contamination in Multimodal LLM" ☆16 · Updated last month
- A Recipe for Building LLM Reasoners to Solve Complex Instructions ☆28 · Updated last month
- Codebase for Instruction Following without Instruction Tuning ☆36 · Updated last year
- [ICLR 2025] Bridging and Modeling Correlations in Pairwise Data for Direct Preference Optimization ☆12 · Updated 9 months ago
- [arXiv:2505.02156] Adaptive Thinking via Mode Policy Optimization for Social Language Agents ☆46 · Updated 4 months ago
- JudgeLRM: Large Reasoning Models as a Judge ☆40 · Updated 2 months ago
- Suri: Multi-constraint instruction following for long-form text generation (EMNLP’24) ☆26 · Updated last month
- [ACL 2025 (Findings)] DEMO: Reframing Dialogue Interaction with Fine-grained Element Modeling ☆20 · Updated 11 months ago
- Official repository for Trustworthy Alignment of Retrieval-Augmented Large Language Models via Reinforcement Learning ☆12 · Updated last year
- [ACL 2025] Knowledge Unlearning for Large Language Models ☆46 · Updated 2 months ago
- [ICLR 2025] LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization ☆43 · Updated 8 months ago
- Source code for the paper "ARIA: Training Language Agents with Intention-Driven Reward Aggregation" ☆24 · Updated 3 months ago
- Official code repository for the paper "Mirage or Method? How Model–Task Alignment Induces Divergent RL Conclusions" ☆15 · Updated 2 months ago