aounon / llm-rank-optimizer
☆114 · Updated 3 months ago
Alternatives and similar repositories for llm-rank-optimizer
Users interested in llm-rank-optimizer are comparing it to the libraries listed below.
- ☆48 · Updated last year
- Official Repo for CRMArena and CRMArena-Pro ☆126 · Updated 3 weeks ago
- ☆79 · Updated last year
- The first dense retrieval model that can be prompted like an LM ☆89 · Updated 6 months ago
- Initiative to evaluate and rank the most popular LLMs across common task types based on their propensity to hallucinate. ☆116 · Updated 4 months ago
- RAGElo is a set of tools that helps you select the best RAG-based LLM agents by using an Elo ranker ☆123 · Updated last month
- Attribute (or cite) statements generated by LLMs back to in-context information. ☆300 · Updated last year
- LangCode - Improving alignment and reasoning of large language models (LLMs) with natural language embedded program (NLEP). ☆49 · Updated 2 years ago
- [ICLR'24 Spotlight] A language model (LM)-based emulation framework for identifying the risks of LM agents with tool use ☆172 · Updated last year
- Functional Benchmarks and the Reasoning Gap ☆90 · Updated last year
- Official Implementation of InstructZero; the first framework to optimize bad prompts of ChatGPT (API LLMs) and finally obtain good prompts… ☆197 · Updated last year
- Mixing Language Models with Self-Verification and Meta-Verification ☆110 · Updated 11 months ago
- The Official Repository for "Bring Your Own Data! Self-Supervised Evaluation for Large Language Models" ☆107 · Updated 2 years ago
- Finding semantically meaningful and accurate prompts. ☆48 · Updated 2 years ago
- Codebase accompanying the Summary of a Haystack paper. ☆79 · Updated last year
- Public code repo for paper "SaySelf: Teaching LLMs to Express Confidence with Self-Reflective Rationales" ☆109 · Updated last year
- Truth Forest: Toward Multi-Scale Truthfulness in Large Language Models through Intervention without Tuning ☆46 · Updated last year
- Code accompanying "How I learned to start worrying about prompt formatting". ☆112 · Updated 5 months ago
- Official repo of Rephrase-and-Respond: data, code, and evaluation ☆104 · Updated last year
- Red-Teaming Language Models with DSPy ☆238 · Updated 9 months ago
- A DSPy-based implementation of the tree of thoughts method (Yao et al., 2023) for generating persuasive arguments ☆93 · Updated last month
- ReBase: Training Task Experts through Retrieval Based Distillation ☆29 · Updated 9 months ago
- Examining how large language models (LLMs) perform across various synthetic regression tasks when given (input, output) examples in their… ☆156 · Updated last month
- EcoAssistant: using LLM assistant more affordably and accurately ☆133 · Updated last year
- Code and data for "StructLM: Towards Building Generalist Models for Structured Knowledge Grounding" (COLM 2024) ☆75 · Updated last year
- Backtracing: Retrieving the Cause of the Query, EACL 2024 Long Paper, Findings. ☆91 · Updated last year
- Code repo for "Agent Instructs Large Language Models to be General Zero-Shot Reasoners" ☆117 · Updated last month
- Comparing retrieval abilities from GPT4-Turbo and a RAG system on a toy example for various context lengths ☆35 · Updated last year
- ☆69 · Updated last year
- Code for the EMNLP 2024 paper "Detecting and Mitigating Contextual Hallucinations in Large Language Models Using Only Attention Maps" ☆141 · Updated last month