puppetm4st3r / baai_m3_simple_server
This code sets up a simple yet robust FastAPI server that handles asynchronous embedding-generation and reranking requests with the BAAI M3 multilingual model.
☆63 · Updated 10 months ago
Alternatives and similar repositories for baai_m3_simple_server:
- Open Source Text Embedding Models with OpenAI Compatible API ☆150 · Updated 8 months ago
- Code for explaining and evaluating late chunking (chunked pooling) ☆355 · Updated 3 months ago
- A fast, lightweight and easy-to-use Python library for splitting text into semantically meaningful chunks. ☆272 · Updated this week
- ☆215 · Updated 3 months ago
- Benchmark various LLM Structured Output frameworks: Instructor, Mirascope, Langchain, LlamaIndex, Fructose, Marvin, Outlines, etc. on task… ☆160 · Updated 6 months ago
- Client Code Examples, Use Cases and Benchmarks for Enterprise h2oGPTe RAG-Based GenAI Platform ☆84 · Updated 2 weeks ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆131 · Updated 9 months ago
- RankLLM is a Python toolkit for reproducible information retrieval research using rerankers, with a focus on listwise reranking. ☆419 · Updated this week
- This repo is for handling Question Answering, especially Multi-hop Question Answering ☆67 · Updated last year
- ☆310 · Updated last year
- Official repo for "LongRAG: Enhancing Retrieval-Augmented Generation with Long-context LLMs" ☆228 · Updated 7 months ago
- An enterprise-grade AI retriever designed to streamline AI integration into your applications, ensuring cutting-edge accuracy ☆282 · Updated last week
- Fine-Tuning Embedding for RAG with Synthetic Data ☆489 · Updated last year
- AIR-Bench: Automated Heterogeneous Information Retrieval Benchmark