Muhtasham / summarization-eval
Reference-free automatic summarization evaluation with potential hallucination detection
★101 · Updated last year
Alternatives and similar repositories for summarization-eval:
Users interested in summarization-eval are comparing it to the libraries listed below.
- Doing simple retrieval from LLM models at various context lengths to measure accuracy — ★100 · Updated 10 months ago
- ★76 · Updated 8 months ago
- The Batched API provides a flexible and efficient way to process multiple requests in a batch, with a primary focus on dynamic batching o… — ★119 · Updated last month
- Simple replication of [ColBERT-v1](https://arxiv.org/abs/2004.12832). — ★79 · Updated 10 months ago
- ★57 · Updated 3 months ago
- Using various instructor clients to evaluate the quality and capabilities of extractions and reasoning — ★48 · Updated 4 months ago
- ★48 · Updated last year
- Writing Blog Posts with Generative Feedback Loops! — ★47 · Updated 10 months ago
- Generalist and Lightweight Model for Text Classification — ★65 · Updated 3 weeks ago
- Official repo for the paper PHUDGE: Phi-3 as Scalable Judge. Evaluate your LLMs with or without custom rubric, reference answer, absolute… — ★48 · Updated 7 months ago
- A framework for evaluating function calls made by LLMs — ★36 · Updated 6 months ago
- ★77 · Updated 8 months ago
- Routing on Random Forest (RoRF) — ★112 · Updated 4 months ago
- High level library for batched embeddings generation, blazingly-fast web-based RAG and quantized indexes processing ⚡ — ★64 · Updated 3 months ago
- Using open source LLMs to build synthetic datasets for direct preference optimization — ★57 · Updated 11 months ago