aounon / llm-rank-optimizer
☆117 · Updated 4 months ago
Alternatives and similar repositories for llm-rank-optimizer
Users interested in llm-rank-optimizer are comparing it to the libraries listed below.
- LangCode - Improving alignment and reasoning of large language models (LLMs) with natural language embedded programs (NLEP). ☆48 · Updated 2 years ago
- Functional Benchmarks and the Reasoning Gap. ☆90 · Updated last year
- Mixing Language Models with Self-Verification and Meta-Verification. ☆111 · Updated last year
- The official repository for "Bring Your Own Data! Self-Supervised Evaluation for Large Language Models". ☆107 · Updated 2 years ago
- Public code repo for the paper "SaySelf: Teaching LLMs to Express Confidence with Self-Reflective Rationales". ☆112 · Updated last year
- AIDE: the Machine Learning CodeGen Agent. ☆25 · Updated last year
- ☆41 · Updated last year
- Interaction-first method for generating demonstrations for web agents on any website. ☆51 · Updated 7 months ago
- Codebase accompanying the Summary of a Haystack paper. ☆80 · Updated last year
- Code accompanying "How I learned to start worrying about prompt formatting". ☆113 · Updated 6 months ago
- Attribute (or cite) statements generated by LLMs back to in-context information. ☆312 · Updated last year
- ☆49 · Updated last year
- ☆79 · Updated last year
- Evaluating LLMs with fewer examples. ☆170 · Updated last year
- The first dense retrieval model that can be prompted like an LM. ☆89 · Updated 7 months ago
- WorkBench: a Benchmark Dataset for Agents in a Realistic Workplace Setting. ☆54 · Updated last year
- FrugalGPT: better quality and lower cost for LLM applications. ☆245 · Updated 10 months ago
- Official implementation of InstructZero; the first framework to optimize bad prompts of ChatGPT (API LLMs) and finally obtain good prompts… ☆197 · Updated last year
- RAGElo is a set of tools that helps you select the best RAG-based LLM agents using an Elo ranker. ☆125 · Updated last month
- Official repo for CRMArena and CRMArena-Pro. ☆126 · Updated last month
- Official repo of Respond-and-Respond: data, code, and evaluation. ☆104 · Updated last year
- Implementation of the paper "AssistantBench: Can Web Agents Solve Realistic and Time-Consuming Tasks?". ☆66 · Updated last year
- A small library of LLM judges. ☆309 · Updated 4 months ago
- Comparing the retrieval abilities of GPT-4 Turbo and a RAG system on a toy example at various context lengths. ☆35 · Updated 2 years ago
- 🔧 Compare how agent systems perform on several benchmarks. 📊🚀 ☆102 · Updated 4 months ago
- TapeAgents is a framework that facilitates all stages of the LLM agent development lifecycle. ☆300 · Updated last week
- ☆100 · Updated last year
- Doing simple retrieval from LLMs at various context lengths to measure accuracy. ☆107 · Updated 3 months ago
- A library for benchmarking the long-term memory and continual learning capabilities of LLM-based agents. With all the tests and code you… ☆82 · Updated last year
- A set of utilities for running few-shot prompting experiments on large language models. ☆126 · Updated 2 years ago