qdrant / demo-colpali-optimized
☆37 · Updated last year
Alternatives and similar repositories for demo-colpali-optimized
Users interested in demo-colpali-optimized are comparing it to the libraries listed below.
- Low latency, High Accuracy, Custom Query routers for Humans and Agents. Built by Prithivi Da ☆117 · Updated 8 months ago
- Simple UI for debugging correlations of text embeddings ☆302 · Updated 6 months ago
- Recipes for learning, fine-tuning, and adapting ColPali to your multimodal RAG use cases. 👨🏻‍🍳 ☆343 · Updated 6 months ago
- This repo is the central repo for all the RAG Evaluation reference material and partner workshop ☆77 · Updated 7 months ago
- A Lightweight Library for AI Observability ☆252 · Updated 9 months ago
- RAG example using DSPy, Gradio, FastAPI ☆86 · Updated last year
- This project will bootstrap and scaffold projects for specific semantic search and RAG applications, along with regular boilerplate c… ☆92 · Updated 11 months ago
- Complete example of how to build an Agentic RAG architecture with Redis, Amazon Bedrock, and LlamaIndex. ☆100 · Updated last year
- Benchmark various LLM Structured Output frameworks: Instructor, Mirascope, Langchain, LlamaIndex, Fructose, Marvin, Outlines, etc on task… ☆179 · Updated last year
- Code for evaluating with Flow-Judge-v0.1 - an open-source, lightweight (3.8B) language model optimized for LLM system evaluations. Crafte… ☆78 · Updated last year
- Dynamic Metadata based RAG Framework ☆78 · Updated last year
- Solving data for LLMs - Create quality synthetic datasets! ☆150 · Updated 10 months ago
- Data extraction with LLM on CPU ☆112 · Updated last year
- Research repository on interfacing LLMs with Weaviate APIs. Inspired by the Berkeley Gorilla LLM. ☆138 · Updated 3 months ago
- Mistral + Haystack: build RAG pipelines that rock 🤘 ☆106 · Updated last year
- From data to vector database effortlessly ☆88 · Updated 6 months ago
- ☆114 · Updated last year
- Build a Streamlit Chatbot using Langchain, ColBERT, Ragatouille, and ChromaDB ☆123 · Updated last year
- Chunk your text using gpt4o-mini more accurately ☆44 · Updated last year
- ☆125 · Updated 9 months ago
- Fine-tune an LLM to perform batch inference and online serving.