prometheus-eval / prometheus-eval
Evaluate your LLM's response with Prometheus and GPT4 💯
⭐ 1,022 · Updated 8 months ago
Alternatives and similar repositories for prometheus-eval
Users interested in prometheus-eval are comparing it to the libraries listed below.
- Automated Evaluation of RAG Systems ⭐ 681 · Updated 8 months ago
- Stanford NLP Python library for Representation Finetuning (ReFT) ⭐ 1,548 · Updated 10 months ago
- Automatically evaluate your LLMs in Google Colab ⭐ 677 · Updated last year
- A lightweight library for generating synthetic instruction-tuning datasets for your data without GPT. ⭐ 816 · Updated 5 months ago
- Official repository for ORPO ⭐ 468 · Updated last year
- Generative Representational Instruction Tuning ⭐ 680 · Updated 6 months ago
- SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models ⭐ 588 · Updated last year
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ⭐ 2,212 · Updated last week
- ⭐ 559 · Updated last year
- DataDreamer: Prompt. Generate Synthetic Data. Train & Align Models. ⭐ 1,083 · Updated 10 months ago
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ⭐ 2,995 · Updated this week
- Best practices for distilling large language models. ⭐ 595 · Updated last year
- Benchmarking long-form factuality in large language models. Original code for our paper "Long-form factuality in large language models". ⭐ 661 · Updated last week
- Doing simple retrieval from LLM models at various context lengths to measure accuracy ⭐ 2,117 · Updated last year
- Chat Templates for 🤗 HuggingFace Large Language Models ⭐ 708 · Updated last year
- List of papers on hallucination detection in LLMs. ⭐ 1,008 · Updated last month
- Easily embed, cluster, and semantically label text datasets ⭐ 586 · Updated last year
- A reading list on LLM-based Synthetic Data Generation 🔥 ⭐ 1,494 · Updated 6 months ago
- RankLLM is a Python toolkit for reproducible information retrieval research using rerankers, with a focus on listwise reranking. ⭐ 561 · Updated last week
- A library for easily merging multiple LLM experts and efficiently training the merged LLM. ⭐ 500 · Updated last year
- ⭐ 693 · Updated 7 months ago
- Code for the paper "G-Eval: NLG Evaluation using GPT-4 with Better Human Alignment" ⭐ 400 · Updated last year
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ⭐ 897 · Updated 2 months ago
- An Open-Source Toolkit for LLM Distillation ⭐ 814 · Updated this week
- xLAM: A Family of Large Action Models to Empower AI Agent Systems ⭐ 593 · Updated 4 months ago
- Train Models Contrastively in PyTorch ⭐ 769 · Updated 9 months ago
- Awesome synthetic (text) datasets ⭐ 315 · Updated last month
- 🤗 Benchmark Large Language Models Reliably On Your Data ⭐ 419 · Updated this week
- LLM Comparator is an interactive data visualization tool for evaluating and analyzing LLM responses side-by-side, developed by the PAIR t… ⭐ 501 · Updated 10 months ago
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … ⭐ 807 · Updated 9 months ago