puppetm4st3r / baai_m3_simple_server
This code sets up a simple yet robust FastAPI server that handles asynchronous requests for embedding generation and reranking with the BAAI M3 multilingual model.
☆72 · Updated last year
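The repository's code is not shown here, but its description implies the usual two-endpoint pattern: one route that embeds texts into dense vectors with the BAAI M3 model, and one that reranks candidate documents against a query. A minimal sketch of the scoring side, using dense-vector cosine similarity as a stand-in for the model's actual relevance scores (the names `cosine_sim` and `rerank` are illustrative, not taken from this repo):

```python
import math

def cosine_sim(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two dense embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rerank(query_vec: list[float],
           doc_vecs: list[list[float]]) -> list[tuple[int, float]]:
    """Score each document vector against the query and return
    (document index, score) pairs, best match first."""
    scores = [(i, cosine_sim(query_vec, v)) for i, v in enumerate(doc_vecs)]
    return sorted(scores, key=lambda pair: pair[1], reverse=True)
```

In the real server the vectors would come from the model itself (e.g. FlagEmbedding's BGE-M3 encoder), and a FastAPI route would wrap a function like `rerank` so requests can be served asynchronously.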
Alternatives and similar repositories for baai_m3_simple_server
Users interested in baai_m3_simple_server are comparing it to the libraries listed below.
- Code for explaining and evaluating late chunking (chunked pooling) ☆476 · Updated last year
- Open Source Text Embedding Models with OpenAI Compatible API ☆164 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆132 · Updated last year
- TextEmbed is a REST API crafted for high-throughput and low-latency embedding inference. It accommodates a wide variety of embedding mode… ☆27 · Updated last year
- ☆242 · Updated 6 months ago
- A fast, lightweight and easy-to-use Python library for splitting text into semantically meaningful chunks. ☆514 · Updated last month
- Fine-Tuning Embedding for RAG with Synthetic Data ☆521 · Updated 2 years ago
- A tool for generating function arguments and choosing what function to call with local LLMs ☆433 · Updated last year
- An enterprise-grade AI retriever designed to streamline AI integration into your applications, ensuring cutting-edge accuracy. ☆292 · Updated 6 months ago
- This repo handles Question Answering, especially Multi-hop Question Answering ☆68 · Updated 2 years ago
- Benchmark various LLM Structured Output frameworks (Instructor, Mirascope, Langchain, LlamaIndex, Fructose, Marvin, Outlines, etc.) on task… ☆181 · Updated last year
- ☆66 · Updated last year
- ☆201 · Updated this week
- Deployment of a light and full OpenAI API for production with vLLM, supporting /v1/embeddings for all embedding models. ☆44 · Updated last year
- ☆321 · Updated 2 years ago
- Lite & super-fast re-ranking for your search & retrieval pipelines. Supports SoTA listwise and pairwise reranking based on LLMs and cro… ☆906 · Updated 3 months ago
- Meta-Chunking: Learning Efficient Text Segmentation via Logical Perception ☆264 · Updated 3 months ago
- Official repo for "LongRAG: Enhancing Retrieval-Augmented Generation with Long-context LLMs" ☆242 · Updated last year
- Client Code Examples, Use Cases and Benchmarks for the Enterprise h2oGPTe RAG-Based GenAI Platform ☆91 · Updated 3 months ago
- Scripts for fine-tuning Llama2 via SFT and DPO. ☆206 · Updated 2 years ago
- RankLLM is a Python toolkit for reproducible information retrieval research using rerankers, with a focus on listwise reranking. ☆561 · Updated last week
- A Python library to chunk/group your texts based on semantic similarity. ☆101 · Updated last year
- Compress your input to ChatGPT or other LLMs, to let them process 2x more content and save 40% memory and GPU time. ☆405 · Updated last year
- LongCite: Enabling LLMs to Generate Fine-grained Citations in Long-context QA ☆516 · Updated 11 months ago
- Ready-to-go containerized RAG service. Implemented with text-embedding-inference + Qdrant/LanceDB. ☆73 · Updated last year
- Finetune ALL LLMs with ALL Adapters on ALL Platforms! ☆331 · Updated 5 months ago
- Official repo for "Make Your LLM Fully Utilize the Context" ☆261 · Updated last year
- The RunPod worker template for serving our large language model endpoints. Powered by vLLM. ☆388 · Updated this week
- ☆276 · Updated last year
- OpenAI-compatible API for LLMs and embeddings (LLaMA, Vicuna, ChatGLM and many others) ☆275 · Updated 2 years ago