tcapelle / mistral_wandb
A full-fledged mistral+wandb
☆13 · Updated last year
Alternatives and similar repositories for mistral_wandb
Users interested in mistral_wandb are comparing it to the libraries listed below.
- A comprehensive guide to LLM evaluation methods designed to assist in identifying the most suitable evaluation techniques for various use… ☆140 · Updated this week
- ☆146 · Updated last year
- Codebase accompanying the Summary of a Haystack paper. ☆79 · Updated last year
- A small library of LLM judges ☆285 · Updated last month
- A framework for standardizing evaluations of large foundation models, beyond single-score reporting and rankings. ☆168 · Updated last week
- This is the reproduction repository for my 🤗 Hugging Face blog post on synthetic data ☆68 · Updated last year
- ☆80 · Updated last year
- Learning to route instances for Human vs. AI Feedback (ACL Main '25) ☆24 · Updated 2 months ago
- Doing simple retrieval from LLM models at various context lengths to measure accuracy ☆103 · Updated last week
- Sample notebooks and prompts for LLM evaluation ☆138 · Updated 3 months ago
- ☆48 · Updated last year
- Awesome synthetic (text) datasets ☆297 · Updated 2 months ago
- Functional Benchmarks and the Reasoning Gap ☆89 · Updated 11 months ago
- ARAGOG: Advanced RAG Output Grading. Exploring and comparing various Retrieval-Augmented Generation (RAG) techniques on AI research paper… ☆112 · Updated last year
- Official repo for the paper PHUDGE: Phi-3 as Scalable Judge. Evaluate your LLMs with or without custom rubric, reference answer, absolute… ☆49 · Updated last year
- LangFair is a Python library for conducting use-case-level LLM bias and fairness assessments ☆232 · Updated 2 weeks ago
- Official codebase for "Quantile Reward Policy Optimization: Alignment with Pointwise Regression and Exact Partition Functions" (Matrenok … ☆27 · Updated 2 months ago
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆183 · Updated 6 months ago
- Mixing Language Models with Self-Verification and Meta-Verification ☆110 · Updated 9 months ago
- NAACL 2024. Code & Dataset for "🌁 Bridging the Novice-Expert Gap via Models of Decision-Making: A Case Study on Remediating Math Mistake… ☆43 · Updated last year
- Let Me Speak Freely? A Study on the Impact of Format Restrictions on Performance of Large Language Models ☆24 · Updated 3 months ago
- Includes examples on how to evaluate LLMs ☆23 · Updated 10 months ago
- ☆23 · Updated 2 years ago
- ☆74 · Updated last year
- WorkBench: a Benchmark Dataset for Agents in a Realistic Workplace Setting. ☆49 · Updated last year
- ☆54 · Updated last year
- Attribute (or cite) statements generated by LLMs back to in-context information. ☆286 · Updated 11 months ago
- ☆133 · Updated this week
- This repo is the central repo for all the RAG Evaluation reference material and partner workshop ☆76 · Updated 5 months ago
- Evaluating LLMs with fewer examples ☆161 · Updated last year