embeddings-benchmark / mteb
MTEB: Massive Text Embedding Benchmark
☆3,041 · Updated this week
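For context, a minimal sketch of how the mteb package is typically used to score an embedding model on a benchmark task; the model name, task name, and output folder below are illustrative, and the exact API may differ between mteb versions.

```python
# Minimal sketch of an MTEB evaluation run (details may vary by mteb version).
from mteb import MTEB
from sentence_transformers import SentenceTransformer

# Any SentenceTransformer-compatible encoder can be evaluated; this model is just an example.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Select one or more benchmark tasks by name; "Banking77Classification" is a common small example.
evaluation = MTEB(tasks=["Banking77Classification"])

# Scores are written as JSON files under the given output folder.
results = evaluation.run(model, output_folder="results/all-MiniLM-L6-v2")
```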
Alternatives and similar repositories for mteb
Users interested in mteb are comparing it to the libraries listed below.
- This includes the original implementation of SELF-RAG: Learning to Retrieve, Generate and Critique through Self-Reflection by Akari Asai,… ☆2,279 · Updated last year
- [ACL 2023] One Embedder, Any Task: Instruction-Finetuned Text Embeddings ☆2,022 · Updated 11 months ago
- A Heterogeneous Benchmark for Information Retrieval. Easy to use, evaluate your models across 15+ diverse IR datasets. ☆2,035 · Updated 2 months ago
- Doing simple retrieval from LLMs at various context lengths to measure accuracy ☆2,127 · Updated last year
- Efficient Retrieval Augmentation and Generation Framework ☆1,756 · Updated 11 months ago
- The official implementation of RAPTOR: Recursive Abstractive Processing for Tree-Organized Retrieval ☆1,513 · Updated last year
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆3,008 · Updated last week
- ColBERT: state-of-the-art neural search (SIGIR'20, TACL'21, NeurIPS'21, NAACL'22, CIKM'22, ACL'23, EMNLP'23) ☆3,746 · Updated 2 months ago
- Fast lexical search implementing BM25 in Python using NumPy, Numba and SciPy ☆1,438 · Updated last week
- Code for 'LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders' ☆1,635 · Updated 3 weeks ago
- A lightweight, low-dependency, unified API to use all common reranking and cross-encoder models. ☆1,584 · Updated last week
- Enforce the output format (JSON Schema, regex, etc.) of a language model ☆1,973 · Updated 4 months ago
- Freeing data processing from scripting madness by providing a set of platform-agnostic, customizable pipeline processing blocks. ☆2,788 · Updated last week
- Bringing BERT into modernity via both architecture changes and scaling ☆1,598 · Updated 6 months ago
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,931 · Updated 4 months ago
- Automated Evaluation of RAG Systems ☆682 · Updated 9 months ago
- A blazing-fast inference solution for text embedding models ☆4,345 · Updated last week
- ☆1,334 · Updated 10 months ago
- Official implementation for "Automatic Chain of Thought Prompting in Large Language Models" (stay tuned & more will be updated) ☆1,993 · Updated last year
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,654 · Updated last year
- A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24) ☆3,032 · Updated last month
- Pyserini is a Python toolkit for reproducible information retrieval research with sparse and dense representations. ☆2,000 · Updated this week
- ☆2,114 · Updated last year
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆2,226 · Updated 2 weeks ago
- Fast, accurate, lightweight Python library to make state-of-the-art embeddings ☆2,589 · Updated 2 weeks ago
- RAGChecker: A Fine-grained Framework for Diagnosing RAG ☆1,033 · Updated last year
- Toolkit for creating, sharing and using natural language prompts. ☆2,986 · Updated 2 years ago
- Retrieval and Retrieval-augmented LLMs ☆11,055 · Updated 2 weeks ago
- Supercharge Your LLM Application Evaluations 🚀 ☆11,964 · Updated this week
- SPLADE: sparse neural search (SIGIR'21, SIGIR'22) ☆959 · Updated last year