AIAnytime / Function-Calling-Mistral-7B
Function Calling Mistral 7B. Learn how to make function calls with open-source LLMs.
☆48 · Updated last year
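As a rough illustration of the pattern this repository (and several listed below) implements: a function-calling setup advertises a JSON tool schema to the model in its prompt, then parses the model's JSON reply and dispatches a real function. This is a minimal sketch; the `get_weather` tool, its schema, and the model output are all hypothetical, not taken from the repo.

```python
import json

# Stub tool implementation (hypothetical example, not from the repo).
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

# Registry mapping tool names to Python callables.
TOOLS = {"get_weather": get_weather}

# Schema advertised to the model in its prompt so it knows what it may call.
TOOL_SPECS = [
    {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }
]

def dispatch(model_output: str) -> str:
    """Parse a JSON tool call emitted by the model and invoke the matching tool."""
    call = json.loads(model_output)
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

# A reply the model might produce after seeing TOOL_SPECS in its prompt.
raw = '{"name": "get_weather", "arguments": {"city": "Paris"}}'
print(dispatch(raw))  # Sunny in Paris
```

In practice the model's raw text often needs more defensive parsing (stray prose around the JSON, missing fields); production implementations validate the arguments against the schema before dispatching.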
Alternatives and similar repositories for Function-Calling-Mistral-7B
Users interested in Function-Calling-Mistral-7B are comparing it to the repositories listed below.
- ☆54 · Updated 6 months ago
- Simple Chainlit UI for running LLMs locally using Ollama and LangChain ☆120 · Updated last year
- RAG tool using Haystack, Mistral, and Chainlit; fully open-source stack on CPU ☆24 · Updated last year
- Chainlit app for advanced RAG, using LlamaParse, LangChain, Qdrant, and models from Groq ☆47 · Updated last year
- Simple example showcasing how to use LlamaParse to parse PDF files ☆90 · Updated 11 months ago
- ☆37 · Updated last year
- RAG example using DSPy, Gradio, and FastAPI ☆83 · Updated last year
- SLIM models by LLMWare: a Streamlit app showcasing AI agent and function-calling capabilities ☆20 · Updated last year
- Experimental framework for building, orchestrating, and deploying multi-agent systems, managed by the OpenAI Solutions team ☆92 · Updated 10 months ago
- Simple Chainlit UI for running LLMs locally using Ollama and LangChain ☆45 · Updated last year
- Tutorial on how to create a ReAct agent without an LLM framework ☆59 · Updated last year
- RAG-powered web search with Tavily, LangChain, and Mistral AI (leveraging the Groq LPU); full-stack web app built in Databutton ☆36 · Updated last year
- This repo contains the code covered in the YouTube tutorials ☆84 · Updated 2 months ago
- Democratizing function-calling capabilities for open-source language models ☆41 · Updated last year
- ☆43 · Updated last year
- ☆45 · Updated last year
- Automate web research well beyond the first page of search results; curate knowledge bases to chat with ☆45 · Updated 3 weeks ago
- Uses CrewAI to build a web app in which three agents perform research on the internet ☆71 · Updated last year
- A user interface for AutoGen built with Chainlit ☆119 · Updated last year
- Selects the most suitable locally installed Ollama LLM for a given user query ☆103 · Updated last year
- Zephyr 7B Beta RAG demo in a Gradio app, powered by BGE embeddings and ChromaDB ☆35 · Updated last year
- ☆89 · Updated last year
- ☆45 · Updated last year
- Data extraction with LLMs on CPU ☆112 · Updated last year
- A project that brings the power of Large Language Models (LLM) and Retrieval-Augmented Generation (RAG) within reach of everyone, particu… ☆34 · Updated last year
- Perplexity Lite using LangGraph, Tavily, and GPT-4 ☆14 · Updated last year
- Haystack and Mistral 7B RAG implementation, based on a completely open-source stack ☆79 · Updated last year
- Explains how to create a superior RAG pipeline for complex PDFs using LlamaParse, extracting text and tables from PDFs and QA on… ☆47 · Updated last year
- ☆18 · Updated last year
- Experiments with open-source LLMs ☆74 · Updated 3 weeks ago