AIAnytime / Function-Calling-Mistral-7B
Function Calling Mistral 7B. Learn how to make function calls with open-source LLMs.
☆48 · Updated last year
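For context on the topic of the repo, here is a minimal sketch of prompt-based function calling with an instruction-tuned Mistral 7B: the model is told which function it may call, asked to reply with a JSON "function call", and that JSON is parsed and dispatched to a plain Python function. It assumes the Hugging Face `transformers` text-generation pipeline and a hypothetical `get_weather` tool; the repository's own prompts and parsing may differ.

```python
import json
from transformers import pipeline

# Hypothetical tool the model is allowed to "call".
def get_weather(city: str) -> str:
    # Stubbed lookup; a real app would query a weather API here.
    return f"It is 22°C and sunny in {city}."

TOOLS = {"get_weather": get_weather}

TOOL_SPEC = """You can call this function:
get_weather(city: str) -> str: returns the current weather for a city.

Reply ONLY with JSON of the form:
{"function": "<name>", "arguments": {...}}"""

# Assumes the Mistral-7B-Instruct weights are available locally.
generator = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.2")

def answer(user_query: str) -> str:
    prompt = f"<s>[INST] {TOOL_SPEC}\n\nUser question: {user_query} [/INST]"
    raw = generator(prompt, max_new_tokens=128, return_full_text=False)[0]["generated_text"]
    call = json.loads(raw.strip())   # naive parse of the model's JSON "function call"
    fn = TOOLS[call["function"]]     # look up the requested tool
    return fn(**call["arguments"])   # execute it with the model-chosen arguments

print(answer("What's the weather like in Paris?"))
```

In practice the raw output often needs more robust handling (e.g. extracting the first JSON object, or schema-constrained decoding) before dispatching the call.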
Alternatives and similar repositories for Function-Calling-Mistral-7B
Users interested in Function-Calling-Mistral-7B are comparing it to the libraries listed below.
- RAG Tool using Haystack, Mistral, and Chainlit. All open source stack on CPU. ☆24 · Updated 2 years ago
- ☆55 · Updated 4 months ago
- Simple Chainlit UI for running LLMs locally using Ollama and LangChain. ☆119 · Updated last year
- Chainlit app for advanced RAG. Uses llamaparse, langchain, qdrant and models from groq. ☆47 · Updated last year
- Simple example to showcase how to use llamaparser to parse PDF files. ☆94 · Updated last year
- I have explained how to create a superior RAG pipeline for complex PDFs using LlamaParse. We can extract text and tables from PDF and QA on… ☆48 · Updated last year
- YouTube Video Summarization App built using open-source LLMs and frameworks like Llama 2, Haystack, Whisper, and Streamlit. This app smooth… ☆58 · Updated last year
- ☆37 · Updated last year
- A RAG-powered web search with Tavily, LangChain, and Mistral AI (leveraging the Groq LPU). The full-stack web app is built in Databutton. ☆37 · Updated last year
- SLIM Models by LLMWare. A Streamlit app showing the capabilities of AI agents and function calls. ☆20 · Updated last year
- Framework for building, orchestrating, and deploying multi-agent systems. Managed by the OpenAI Solutions team. Experimental framework. ☆93 · Updated last year
- This program uses CrewAI to build a web app with three agents doing research on the internet. ☆70 · Updated last year
- ☆45 · Updated last year
- This is a User Interface built for Autogen using ChainLit. ☆120 · Updated last year
- Zephyr 7B beta RAG Demo inside a Gradio app powered by BGE Embeddings, ChromaDB, and Zephyr 7B Beta LLM. ☆36 · Updated 2 years ago
- Haystack and Mistral 7B RAG Implementation. It is based on a completely open-source stack. ☆79 · Updated 2 years ago
- Auto Data is a library designed for quick and effortless creation of datasets tailored for fine-tuning Large Language Models (LLMs). ☆103 · Updated last year
- RAG example using DSPy, Gradio, FastAPI. ☆87 · Updated last year
- A simple project for enabling LLM agents to use tools. ☆103 · Updated last year
- Data extraction with LLM on CPU. ☆112 · Updated last year
- This repo contains code covered in the YouTube tutorials. ☆87 · Updated 6 months ago
- ☆44 · Updated last year
- This code implements a local LLM selector that picks from the list of locally installed Ollama LLMs for a specific user query. ☆103 · Updated 2 years ago
- ☆30 · Updated last year
- Simple Chainlit UI for running LLMs locally using Ollama and LangChain. ☆46 · Updated last year
- ☆29 · Updated 2 years ago
- ☆79 · Updated 2 years ago
- A project that brings the power of Large Language Models (LLM) and Retrieval-Augmented Generation (RAG) within reach of everyone, particu… ☆37 · Updated last year
- Medical Mixture of Experts LLM using Mergekit. ☆20 · Updated last year
- An LLM GUI application; enables you to interact with your files, offering dynamic parameters that can modify response behavior during run… ☆95 · Updated last month