IDEA-FinAI / LLM-as-a-Judge
☆87 · Updated this week
Alternatives and similar repositories for LLM-as-a-Judge:
Users who are interested in LLM-as-a-Judge are comparing it to the repositories listed below.
- [ICLR 2025] InstructRAG: Instructing Retrieval-Augmented Generation via Self-Synthesized Rationales ☆78 · Updated last month
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆136 · Updated 4 months ago
- Critique-out-Loud Reward Models ☆55 · Updated 5 months ago
- Augmented LLM with self-reflection ☆117 · Updated last year
- [EMNLP 2024 (Oral)] Leave No Document Behind: Benchmarking Long-Context LLMs with Extended Multi-Doc QA ☆116 · Updated 4 months ago
- Reformatted Alignment ☆115 · Updated 6 months ago
- A dataset of LLM-generated chain-of-thought steps annotated with mistake location ☆79 · Updated 7 months ago
- ☆143 · Updated 3 months ago
- [NAACL 2024 Outstanding Paper] Source code for the paper "R-Tuning: Instructing Large Language Models to Say 'I Don't Know'" ☆109 · Updated 8 months ago
- Scalable Meta-Evaluation of LLMs as Evaluators ☆42 · Updated last year
- [ACL 2024] LLM2LLM: Boosting LLMs with Novel Iterative Data Enhancement ☆179 · Updated last year
- [ICLR'24 Spotlight] "Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts" ☆67 · Updated 11 months ago
- [NeurIPS 2024] The official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" ☆101 · Updated this week
- Code implementation of synthetic continued pretraining ☆95 · Updated 2 months ago
- ☆65 · Updated 4 months ago
- ☆83 · Updated last week
- BRIGHT: A Realistic and Challenging Benchmark for Reasoning-Intensive Retrieval ☆91 · Updated last month
- Official Repo for "Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale" ☆229 · Updated last month
- LongEmbed: Extending Embedding Models for Long Context Retrieval (EMNLP 2024) ☆131 · Updated 4 months ago
- [NeurIPS 2024] Source code for xRAG: Extreme Context Compression for Retrieval-augmented Generation with One Token ☆128 · Updated 8 months ago
- EMNLP'23 survey: a curation of awesome papers and resources on refreshing large language models (LLMs) without expensive retraining ☆131 · Updated last year
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" ☆131 · Updated last month
- Codebase accompanying the Summary of a Haystack paper ☆75 · Updated 6 months ago
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" ☆166 · Updated 2 weeks ago
- ☆102 · Updated 3 months ago
- Official repository for the paper "Weak-to-Strong Extrapolation Expedites Alignment" ☆73 · Updated 9 months ago
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆107 · Updated 11 months ago
- Fantastic Data Engineering for Large Language Models ☆83 · Updated 2 months ago
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆121 · Updated 8 months ago
- Implementation of the paper "Making Retrieval-Augmented Language Models Robust to Irrelevant Context" ☆65 · Updated 7 months ago