tyrell / llm-ollama-llamaindex-bootstrap
Designed for offline use, this RAG application template offers a starting point for building your own local RAG pipeline, independent of online APIs and cloud-based LLM services like OpenAI.
☆47 · Updated last year
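For context, here is a minimal sketch of the kind of offline pipeline such a template wires together: documents are embedded and indexed with LlamaIndex, and both embedding and generation run against a local Ollama server. This is illustrative only, not code from the repository; the package names, model choices (`mistral`, `nomic-embed-text`), and the `./data` folder are assumptions.

```python
# Illustrative local RAG sketch (not the repository's code).
# Assumes llama-index-core, llama-index-llms-ollama and
# llama-index-embeddings-ollama are installed, an Ollama server is
# running locally, and the "mistral" and "nomic-embed-text" models
# have been pulled with `ollama pull`.
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings.ollama import OllamaEmbedding
from llama_index.llms.ollama import Ollama

# Route all LLM and embedding calls to the local Ollama server.
Settings.llm = Ollama(model="mistral", request_timeout=120.0)
Settings.embed_model = OllamaEmbedding(model_name="nomic-embed-text")

# Ingest local files, build an in-memory vector index, and query it offline.
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine(similarity_top_k=3)

print(query_engine.query("Summarize the key points of these documents."))
```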
Alternatives and similar repositories for llm-ollama-llamaindex-bootstrap
Users that are interested in llm-ollama-llamaindex-bootstrap are comparing it to the libraries listed below
- Simple example to showcase how to use LlamaParse to parse PDF files☆91 · Updated last year
- Simple chat UI, as well as chat with documents, using LLMs locally with Ollama (mistral model), LangChain and Chainlit☆83 · Updated last year
- Chainlit app for advanced RAG. Uses LlamaParse, LangChain, Qdrant and models from Groq.☆47 · Updated last year
- A set of Ollama tutorials from my YouTube channel☆43 · Updated last year
- Awesome LLM application repo☆86 · Updated 7 months ago
- Collaborative Multi-Agent RAG with CrewAI☆65 · Updated last year
- ☆62 · Updated last year
- YouTube video summarization app built using open-source LLMs and frameworks like Llama 2, Haystack, Whisper, and Streamlit. This app smooth…☆56 · Updated last year
- Chatbot with LLM and fact references, backed by RAG (Retrieval-Augmented Generation) and LangChain☆129 · Updated last year
- Data extraction with LLMs on CPU☆267 · Updated last year
- Simple Chainlit UI for running LLMs locally using Ollama and LangChain☆46 · Updated last year
- llmware RAG demo app.☆17 · Updated last year
- This is a LlamaIndex project bootstrapped with create-llama to act as a full-stack UI to accompany Retrieval-Augmented Generation (RAG) B…☆31 · Updated last year
- SLIM models by LLMWare. A Streamlit app showing the capabilities of AI agents and function calls.☆20 · Updated last year
- Function calling with Mistral 7B. Learn how to make function calls with open-source LLMs.☆48 · Updated last year
- Explains how to create a superior RAG pipeline for complex PDFs using LlamaParse. We can extract text and tables from PDFs and QA on…☆47 · Updated last year
- Graph RAG AI assistant for Data Day Texas 2024☆72 · Updated last year
- Simple Chainlit UI for running LLMs locally using Ollama and LangChain☆120 · Updated last year
- On-device LLM inference using the MediaPipe LLM Inference API.☆22 · Updated last year
- ☆25 · Updated last year
- Tutorial on how to create a ReAct agent without an LLM framework☆58 · Updated last year
- Data extraction with LLMs on CPU☆112 · Updated last year
- ☆42 · Updated last year
- Research assistant for performing online research on a given topic, using LlamaIndex Workflows and the Tavily API. Inspired by GPT-Researcher☆168 · Updated last year
- ☆47 · Updated last year
- Local-GenAI-Search is a generative search engine based on Llama 3, LangChain and Qdrant that answers questions based on your local files☆95 · Updated last year
- Run CrewAI agent workflows on local LLMs with Llamafile and Ollama☆39 · Updated last year
- This repository will contain projects on multi-agent applications using frameworks such as CrewAI, LangChain, Gradio, Hugging Face, etc.☆24 · Updated last year
- Build your own RAG and run it locally on your laptop: ColBERT + DSPy + Streamlit☆57 · Updated last year
- This code implements a local LLM selector that chooses from the list of locally installed Ollama LLMs for a specific user query☆102 · Updated last year