fau-masters-collected-works-cgarbin / gpt-all-local
A "chat with your data" example: using a large language models (LLM) to interact with our own (local) data. Everything is local: the embedding model, the LLM, the vector database. This is an example of retrieval-augmented generation (RAG): we find relevant sections from our documents and pass it to the LLM as part of the prompt (see pics).
☆22 · Updated 7 months ago
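As a rough illustration of the flow described above, here is a minimal local RAG sketch. It assumes sentence-transformers for the local embedding model, Chroma as the local vector database, and llama-cpp-python for the local LLM; these choices, the model path, and the sample documents are illustrative assumptions, not necessarily the stack this repository uses.

```python
# Minimal local RAG sketch: embed documents, retrieve the most relevant
# chunks for a question, and pass them to a local LLM as prompt context.
from sentence_transformers import SentenceTransformer  # local embedding model
import chromadb                                        # local vector database
from llama_cpp import Llama                            # local LLM (GGUF file)

embedder = SentenceTransformer("all-MiniLM-L6-v2")
client = chromadb.Client()
collection = client.create_collection("docs")

# Placeholder documents; in practice these would be chunks of your own files.
documents = [
    "Our refund policy allows returns within 30 days.",
    "Support is available Monday through Friday, 9am-5pm.",
]
collection.add(
    ids=[str(i) for i in range(len(documents))],
    documents=documents,
    embeddings=embedder.encode(documents).tolist(),
)

question = "How long do I have to return a product?"
results = collection.query(
    query_embeddings=embedder.encode([question]).tolist(),
    n_results=1,
)
context = "\n".join(results["documents"][0])

# Retrieval-augmented prompt: the retrieved chunks precede the question.
llm = Llama(model_path="model.gguf")  # path to a local model file (placeholder)
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}\nAnswer:"
print(llm(prompt, max_tokens=128)["choices"][0]["text"])
```

Each component can be swapped independently (a different embedding model, vector store, or local LLM) without changing the overall retrieve-then-prompt pattern.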
Related projects
Alternatives and complementary repositories for gpt-all-local
- DocumentGPT is a web application that allows you to chat over your research document using OpenAI's chat API and perform semantic search … ☆110 · Updated last year
- Python Streamlit web app utilizing OpenAI (GPT-4) and LangChain LLM tools with access to Wikipedia, DuckDuckGo Search, and a ChromaDB with… ☆68 · Updated last year
- RAG tool using Haystack, Mistral, and Chainlit; a fully open-source stack running on CPU. ☆23 · Updated last year
- Agentic RAG using CrewAI. ☆19 · Updated 4 months ago
- Automate web research well beyond the first page of search results; curate knowledge bases to chat with. ☆42 · Updated 2 months ago
- SLIM models by LLMWare: a Streamlit app demonstrating AI agents and function calling. ☆18 · Updated 9 months ago
- A collection of apps powered by the LlamaIndex LLM framework. ☆54 · Updated 3 weeks ago
- Retrieval-augmented generation (RAG) on audio data with LangChain. ☆12 · Updated last year
- AI assistant using Streamlit and the OpenAI Assistants API. ☆33 · Updated 9 months ago
- A project that brings the power of Large Language Models (LLM) and Retrieval-Augmented Generation (RAG) within reach of everyone, particu… ☆32 · Updated 10 months ago