puppetm4st3r / baai_m3_simple_server
This code sets up a simple yet robust FastAPI server that handles asynchronous requests for embedding generation and reranking with the BAAI M3 multilingual model.
☆70 · Updated last year
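As an illustration of the kind of service the repository describes, here is a minimal sketch of a FastAPI app exposing embedding and reranking endpoints for BGE-M3. It assumes the FlagEmbedding package's BGEM3FlagModel loader; the endpoint names, request schemas, and cosine-based rerank scoring are illustrative assumptions, not the repo's actual API.

```python
# Minimal sketch, not the repo's implementation: endpoint names, request
# schemas, and the cosine-similarity rerank scoring are assumptions.
from typing import List

import numpy as np
from fastapi import FastAPI
from fastapi.concurrency import run_in_threadpool
from pydantic import BaseModel
from FlagEmbedding import BGEM3FlagModel

app = FastAPI()
model = BGEM3FlagModel("BAAI/bge-m3", use_fp16=True)  # loaded once at startup


class EmbedRequest(BaseModel):
    texts: List[str]


class RerankRequest(BaseModel):
    query: str
    documents: List[str]


def _dense(texts: List[str]) -> np.ndarray:
    # encode() returns a dict; "dense_vecs" holds the dense embeddings
    return model.encode(texts)["dense_vecs"]


@app.post("/embed")
async def embed(req: EmbedRequest):
    # run the blocking model call in a worker thread so the event loop stays free
    vecs = await run_in_threadpool(_dense, req.texts)
    return {"embeddings": vecs.tolist()}


@app.post("/rerank")
async def rerank(req: RerankRequest):
    vecs = await run_in_threadpool(_dense, [req.query] + req.documents)
    q, docs = vecs[0], vecs[1:]
    # cosine similarity between the query and each document
    scores = docs @ q / (np.linalg.norm(docs, axis=1) * np.linalg.norm(q) + 1e-9)
    order = np.argsort(-scores)
    return {"results": [{"index": int(i), "score": float(scores[i])} for i in order]}
```

Running the model call through run_in_threadpool keeps the async event loop responsive while the (blocking) encoder works, which is the usual way to serve a heavyweight model behind async FastAPI routes.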
Alternatives and similar repositories for baai_m3_simple_server
Users that are interested in baai_m3_simple_server are comparing it to the libraries listed below
- Open Source Text Embedding Models with OpenAI Compatible API ☆160 · Updated last year
- Code for explaining and evaluating late chunking (chunked pooling) ☆456 · Updated 10 months ago
- ☆238 · Updated 4 months ago
- A fast, lightweight and easy-to-use Python library for splitting text into semantically meaningful chunks. ☆400 · Updated last week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆131 · Updated last year
- Fine-Tuning Embedding for RAG with Synthetic Data ☆514 · Updated 2 years ago
- Lite & Super-fast re-ranking for your search & retrieval pipelines. Supports SoTA Listwise and Pairwise reranking based on LLMs and cro… ☆877 · Updated last month
- An enterprise-grade AI retriever designed to streamline AI integration into your applications, ensuring cutting-edge accuracy. ☆290 · Updated 4 months ago
- TextEmbed is a REST API crafted for high-throughput and low-latency embedding inference. It accommodates a wide variety of embedding mode… ☆25 · Updated last year
- A tool for generating function arguments and choosing what function to call with local LLMs ☆431 · Updated last year
- ☆320 · Updated last year
- RankLLM is a Python toolkit for reproducible information retrieval research using rerankers, with a focus on listwise reranking. ☆547 · Updated this week
- The RunPod worker template for serving our large language model endpoints. Powered by vLLM. ☆375 · Updated this week
- This repository presents the original implementation of LumberChunker: Long-Form Narrative Document Segmentation by André V. Duarte, João… ☆80 · Updated last year
- Meta-Chunking: Learning Efficient Text Segmentation via Logical Perception ☆251 · Updated last month
- This repo is for handling Question Answering, especially for Multi-hop Question Answering ☆67 · Updated last year
- Benchmark various LLM Structured Output frameworks: Instructor, Mirascope, Langchain, LlamaIndex, Fructose, Marvin, Outlines, etc on task… ☆179 · Updated last year
- ☆197 · Updated this week
- ☆65 · Updated last year
- Official repo for "LongRAG: Enhancing Retrieval-Augmented Generation with Long-context LLMs". ☆240 · Updated last year
- ☆899 · Updated last year
- Compress your input to ChatGPT or other LLMs, to let them process 2x more content and save 40% memory and GPU time. ☆398 · Updated last year
- LongCite: Enabling LLMs to Generate Fine-grained Citations in Long-context QA ☆511 · Updated 10 months ago
- Ready-to-go containerized RAG service. Implemented with text-embedding-inference + Qdrant/LanceDB. ☆73 · Updated 10 months ago
- FineTune LLMs in few lines of code (Text2Text, Text2Speech, Speech2Text) ☆243 · Updated last year
- A Structured Output Framework for LLM Outputs ☆366 · Updated 5 months ago
- A lightweight version of Milvus ☆384 · Updated 3 weeks ago
- HyDE: Precise Zero-Shot Dense Retrieval without Relevance Labels ☆554 · Updated 10 months ago
- Code implementation repository of the paper HiQA ☆103 · Updated 8 months ago
- ☆506 · Updated last year