aounon / llm-rank-optimizer
☆107 · Updated 3 weeks ago
Alternatives and similar repositories for llm-rank-optimizer
Users interested in llm-rank-optimizer are comparing it to the libraries listed below.
- Codebase accompanying the Summary of a Haystack paper. ☆79 · Updated 11 months ago
- ☆48 · Updated last year
- ☆67 · Updated last year
- Mixing Language Models with Self-Verification and Meta-Verification ☆110 · Updated 8 months ago
- Finding semantically meaningful and accurate prompts. ☆47 · Updated last year
- The first dense retrieval model that can be prompted like an LM ☆86 · Updated 3 months ago
- Implementation of the paper: "AssistantBench: Can Web Agents Solve Realistic and Time-Consuming Tasks?" ☆61 · Updated 8 months ago
- ☆79 · Updated this week
- 🔧 Compare how Agent systems perform on several benchmarks. 📊🚀 ☆100 · Updated last month
- ☆22 · Updated 6 months ago
- Official Repo for CRMArena and CRMArena-Pro ☆110 · Updated 2 months ago
- The Synthetic-Persona-Chat dataset is a synthetically generated persona-based dialogue dataset. It extends the original Persona-Chat dataset… ☆99 · Updated last year
- Functional Benchmarks and the Reasoning Gap ☆88 · Updated 11 months ago
- Backtracing: Retrieving the Cause of the Query, EACL 2024 Long Paper, Findings. ☆90 · Updated last year
- ☆43 · Updated last year
- Code and data for "StructLM: Towards Building Generalist Models for Structured Knowledge Grounding" (COLM 2024) ☆75 · Updated 10 months ago
- Initiative to evaluate and rank the most popular LLMs across common task types based on their propensity to hallucinate. ☆114 · Updated last month
- Code accompanying "How I learned to start worrying about prompt formatting". ☆109 · Updated 2 months ago
- Official repo of Rephrase-and-Respond: data, code, and evaluation ☆103 · Updated last year
- ☆73 · Updated last year
- Public code repo for the paper "SaySelf: Teaching LLMs to Express Confidence with Self-Reflective Rationales" ☆110 · Updated 11 months ago
- Learning to route instances for Human vs AI Feedback (ACL 2025 Main) ☆23 · Updated last month
- A set of utilities for running few-shot prompting experiments on large language models ☆122 · Updated last year
- Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators (Liu et al.; COLM 2024) ☆48 · Updated 7 months ago
- EcoAssistant: using LLM assistant more affordably and accurately ☆133 · Updated last year
- Doing simple retrieval from LLMs at various context lengths to measure accuracy ☆102 · Updated last year
- ☆48 · Updated 3 months ago
- ReBase: Training Task Experts through Retrieval Based Distillation ☆29 · Updated 7 months ago
- The Official Repository for "Bring Your Own Data! Self-Supervised Evaluation for Large Language Models" ☆107 · Updated last year
- [NeurIPS 2023] PyTorch code for "Can Language Models Teach? Teacher Explanations Improve Student Performance via Theory of Mind" ☆66 · Updated last year