CassioML / langchain-flare-pdf-qa-demo
PDF FLARE demo with LangChain and Cassandra as the vector store
☆14 · Updated last year
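For orientation, below is a minimal sketch (not the repo's actual code) of the stack the description names: a LangChain FLARE chain answering questions over a PDF indexed in a Cassandra/Astra DB vector store. The table and file names are illustrative placeholders, and the snippet assumes recent langchain, langchain-community, langchain-openai, cassio, and pypdf packages.

```python
# Hedged sketch of a FLARE-over-PDF pipeline with Cassandra/Astra DB as the vector store.
# Names such as "flare_pdf_demo" and "example.pdf" are placeholders, not taken from the repo.
import cassio
from langchain_openai import OpenAIEmbeddings, ChatOpenAI
from langchain_community.vectorstores import Cassandra
from langchain_community.document_loaders import PyPDFLoader
from langchain.chains import FlareChain

# Initialize the CassIO connection (shown here against Astra DB; a plain
# Cassandra session would also work).
cassio.init(token="ASTRA_DB_APPLICATION_TOKEN", database_id="ASTRA_DB_ID")

# Load the PDF, split it into chunks, and embed the chunks into the vector store.
docs = PyPDFLoader("example.pdf").load_and_split()
vstore = Cassandra(
    embedding=OpenAIEmbeddings(),
    table_name="flare_pdf_demo",  # hypothetical table name
)
vstore.add_documents(docs)

# FLARE: the LLM generates an answer, flags low-confidence spans, and issues
# retrieval queries against the vector store to fill them in.
flare = FlareChain.from_llm(
    ChatOpenAI(temperature=0),
    retriever=vstore.as_retriever(),
    max_generation_len=164,
    min_prob=0.3,
)
print(flare.run("What is this document about?"))
```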
Alternatives and similar repositories for langchain-flare-pdf-qa-demo
Users interested in langchain-flare-pdf-qa-demo are comparing it to the libraries listed below.
- RAGStack is an out-of-the-box solution simplifying Retrieval Augmented Generation (RAG) in AI apps. ☆175 · Updated last month
- LangChain agent chatbot powered by OpenAI ChatGPT LLMs and the Pinecone vector database. ☆9 · Updated last year
- A reusable leave-behind for enterprise customers showing the differentiator of using the best Vector Store in the world: Astra DB ☆21 · Updated last year
- DataStax Astra DB integration with LangChain ☆29 · Updated last week
- LangStream. Event-Driven Developer Platform for Building and Running LLM AI Apps. Powered by Kubernetes and Kafka. ☆420 · Updated 11 months ago
- Drop-in replacement for the OpenAI Assistants API ☆199 · Updated 2 months ago
- AI_Powered_Dev_Search_Engine ☆12 · Updated last year
- Python Server for C3 AI app. A project that brings the power of Large Language Models (LLM) and Retrieval-Augmented Generation (RAG) with… ☆23 · Updated last year
- Chrome Extension for YouTube. Acts as an assistant for the YouTube video you are watching ☆23 · Updated 2 years ago
- XmodelLM ☆39 · Updated 7 months ago
- Medical Mixture of Experts LLM using Mergekit. ☆20 · Updated last year
- Conduct consumer interviews with synthetic focus groups using LLMs and LangChain ☆43 · Updated last year
- A starter app to build AI-powered chat bots with Astra DB and LlamaIndex ☆74 · Updated last year
- Use LLMs to access any service with a GraphQL schema, without writing plugin logic ☆16 · Updated 2 years ago
- ☆1 · Updated 11 months ago
- BH hackathon ☆14 · Updated last year
- ☆12 · Updated last month
- Large language model for mastering data analysis using pandas ☆47 · Updated last year
- Apps that run on modal.com ☆12 · Updated last year
- Extract information, summarize, ask questions, and search videos using OpenAI's Vision API 🚀🎦 ☆62 · Updated last year
- Finetune any model on HF in less than 30 seconds ☆57 · Updated 2 months ago
- The Swarm Ecosystem ☆21 · Updated 10 months ago
- Seamless Voice Interactions with LLMs ☆12 · Updated last year
- ☆16 · Updated last year
- ☆29 · Updated last year
- ☆12 · Updated last year
- ☆11 · Updated last year
- Experimenting with the text-embeddings-inference server on both CPU and GPU ☆18 · Updated last year
- ☆54 · Updated last year
- Fast approximate inference on a single GPU with sparsity-aware offloading ☆38 · Updated last year