tcapelle / mistral_wandb
A full-fledged mistral + wandb integration
☆ 13 · Updated last year
Alternatives and similar repositories for mistral_wandb
Users interested in mistral_wandb are comparing it to the libraries listed below.
- A small library of LLM judges ☆ 276 · Updated last month
- A framework for standardizing evaluations of large foundation models, beyond single-score reporting and rankings. ☆ 167 · Updated this week
- Includes examples of how to evaluate LLMs ☆ 23 · Updated 10 months ago
- Codebase accompanying the "Summary of a Haystack" paper. ☆ 79 · Updated 11 months ago
- This is the reproduction repository for my 🤗 Hugging Face blog post on synthetic data ☆ 68 · Updated last year
- A comprehensive guide to LLM evaluation methods designed to assist in identifying the most suitable evaluation techniques for various use… ☆ 138 · Updated 2 weeks ago
- Sample notebooks and prompts for LLM evaluation ☆ 138 · Updated 2 months ago
- ☆ 145 · Updated last year
- Doing simple retrieval from LLM models at various context lengths to measure accuracy ☆ 102 · Updated last year
- QAlign is a new test-time alignment approach that improves language model performance by using Markov chain Monte Carlo methods. ☆ 23 · Updated last week
- ARAGOG (Advanced RAG Output Grading): exploring and comparing various Retrieval-Augmented Generation (RAG) techniques on AI research paper… ☆ 109 · Updated last year
- Official codebase for "Quantile Reward Policy Optimization: Alignment with Pointwise Regression and Exact Partition Functions" (Matrenok …) ☆ 25 · Updated last month
- ☆ 80 · Updated last year
- ☆ 43 · Updated last year
- Attribute (or cite) statements generated by LLMs back to in-context information. ☆ 274 · Updated 10 months ago
- ☆ 35 · Updated last month
- A programming framework for agentic AI. Discord: https://discord.gg/pAbnFJrkgZ ☆ 136 · Updated 7 months ago
- ☆ 73 · Updated last year
- 🔧 Compare how agent systems perform on several benchmarks. 📊🚀 ☆ 100 · Updated last month
- LangFair is a Python library for conducting use-case-level LLM bias and fairness assessments ☆ 231 · Updated this week
- Code for "Utility Engineering: Analyzing and Controlling Emergent Value Systems in AIs" ☆ 54 · Updated 6 months ago
- ☆ 118 · Updated last year
- ☆ 237 · Updated 6 months ago
- Benchmark various LLM structured-output frameworks (Instructor, Mirascope, Langchain, LlamaIndex, Fructose, Marvin, Outlines, etc.) on task… ☆ 176 · Updated 11 months ago
- Official repo for the paper "PHUDGE: Phi-3 as Scalable Judge". Evaluate your LLMs with or without custom rubric, reference answer, absolute… ☆ 49 · Updated last year
- Official repo for CRMArena and CRMArena-Pro ☆ 110 · Updated 2 months ago
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆ 182 · Updated 5 months ago
- ☆ 20 · Updated last year
- Collection of links, tutorials, and best practices for collecting data and building an end-to-end RLHF system to finetune generative AI m… ☆ 223 · Updated 2 years ago
- ☆ 114 · Updated 5 months ago