baaivision / JudgeLM
[ICLR 2025 Spotlight] An open-sourced LLM judge for evaluating LLM-generated answers.
☆374 · Updated 5 months ago
Alternatives and similar repositories for JudgeLM
Users interested in JudgeLM are comparing it to the repositories listed below.
- Implementation of paper Data Engineering for Scaling Language Models to 128K Context ☆464 · Updated last year
- FuseAI Project ☆578 · Updated 5 months ago
- [ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning ☆357 · Updated 10 months ago
- Official repository for ORPO ☆458 · Updated last year
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs. ☆427 · Updated last year
- LLMs can generate feedback on their work, use it to improve the output, and repeat this process iteratively. ☆712 · Updated 9 months ago
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition ☆640 · Updated 11 months ago
- Data and code for FreshLLMs (https://arxiv.org/abs/2310.03214) ☆364 · Updated last week
- [ACL 2024] Progressive LLaMA with Block Expansion. ☆505 · Updated last year
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆503 · Updated 6 months ago
- ☆310 · Updated last year
- Generative Representational Instruction Tuning ☆658 · Updated 3 weeks ago
- Benchmarking LLMs with Challenging Tasks from Real Users ☆229 · Updated 8 months ago
- MathVista: data, code, and evaluation for Mathematical Reasoning in Visual Contexts ☆321 · Updated 7 months ago
- ☆524 · Updated 7 months ago
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning ☆657 · Updated last year
- RewardBench: the first evaluation tool for reward models. ☆612 · Updated last month
- [ICLR 2024] Lemur: Open Foundation Models for Language Agents ☆551 · Updated last year
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. ☆546 · Updated last year
- Code and data for "Lumos: Learning Agents with Unified Data, Modular Design, and Open-Source LLMs" ☆466 · Updated last year
- Codebase for Merging Language Models (ICML 2024) ☆842 · Updated last year
- Official repository of NEFTune: Noisy Embeddings Improve Instruction Finetuning ☆397 · Updated last year
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … ☆730 · Updated 4 months ago
- An Analytical Evaluation Board of Multi-turn LLM Agents [NeurIPS 2024 Oral] ☆329 · Updated last year
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆558 · Updated 7 months ago
- [ACL 2024] T-Eval: Evaluating Tool Utilization Capability of Large Language Models Step by Step ☆280 · Updated last year
- [ACL'24 Outstanding] Data and code for L-Eval, a comprehensive long context language models evaluation benchmark ☆383 · Updated last year
- Repo for paper "Unleashing Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration" ☆339 · Updated last year
- The official evaluation suite and dynamic data release for MixEval. ☆242 · Updated 8 months ago
- ☆294 · Updated 11 months ago