NVIDIA / workbench-example-mistral-finetune
An NVIDIA AI Workbench example project for fine-tuning a Mistral 7B model
☆55 · Updated 10 months ago
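For orientation, here is a minimal sketch of what LoRA-style fine-tuning of a Mistral 7B model can look like with the Hugging Face `transformers`/`peft` stack. This is not the project's actual training code; the model ID, dataset, and hyperparameters below are placeholder assumptions.

```python
# Minimal LoRA fine-tuning sketch (assumed setup, not the project's code).
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "mistralai/Mistral-7B-v0.1"            # assumed base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))

# Placeholder instruction dataset; the real project may use something else entirely.
dataset = load_dataset("databricks/databricks-dolly-15k", split="train[:1000]")

def tokenize(example):
    text = f"{example['instruction']}\n{example['response']}"
    return tokenizer(text, truncation=True, max_length=512)

tokenized = dataset.map(tokenize, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="mistral-7b-lora", per_device_train_batch_size=1,
                           gradient_accumulation_steps=8, num_train_epochs=1,
                           learning_rate=2e-4, logging_steps=10, bf16=True),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("mistral-7b-lora")   # saves only the LoRA adapter weights
```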
Alternatives and similar repositories for workbench-example-mistral-finetune:
Users interested in workbench-example-mistral-finetune are comparing it to the libraries listed below.
- An NVIDIA AI Workbench example project for fine-tuning Llama 2 ☆28 · Updated 7 months ago
- Accelerate your Gen AI with NVIDIA NIM and NVIDIA AI Workbench ☆156 · Updated this week
- An NVIDIA AI Workbench example project for Retrieval Augmented Generation (RAG); a minimal retrieval sketch follows this list ☆310 · Updated 2 weeks ago
- An NVIDIA AI Workbench example project for agentic Retrieval Augmented Generation (RAG) ☆68 · Updated 2 months ago
- ☆158 · Updated 2 months ago
- ☆149 · Updated last week
- ☆52 · Updated 2 months ago
- ☆29 · Updated last year
- Dynamic metadata-based RAG framework ☆72 · Updated 8 months ago
- Educational framework exploring ergonomic, lightweight multi-agent orchestration, modified to use a local Ollama endpoint ☆50 · Updated 6 months ago
- This NVIDIA RAG blueprint serves as a reference solution for a foundational Retrieval Augmented Generation (RAG) pipeline. ☆78 · Updated last month
- Python server for a C3 AI app. A project that brings the power of Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG) with… ☆23 · Updated last year
- ☆66 · Updated 10 months ago
- Complete example of how to build an Agentic RAG architecture with Redis, Amazon Bedrock, and LlamaIndex ☆91 · Updated 4 months ago
- RAG example using DSPy, Gradio, FastAPI ☆78 · Updated last year
- ☆60 · Updated 11 months ago
- ☆88 · Updated last year
- An NVIDIA AI Workbench example project for customizing an SDXL model ☆48 · Updated 2 weeks ago
- ☆40 · Updated 2 weeks ago
- Code and notebooks associated with my blog posts ☆63 · Updated 4 months ago
- Simple examples using Argilla tools to build AI ☆52 · Updated 5 months ago
- Neo4j extensions and integrations with Vertex AI and LangChain ☆25 · Updated 2 weeks ago
- This reference can be used with any existing OpenAI-integrated apps to run with TRT-LLM inference locally on a GeForce GPU on Windows inste… ☆120 · Updated last year
- Using LlamaIndex with Ray for productionizing LLM applications ☆71 · Updated last year
- ☆57 · Updated 5 months ago
- Examples of RAG using LlamaIndex with local LLMs - Gemma, Mixtral 8x7B, Llama 2, Mistral 7B, Orca 2, Phi-2, Neural 7B ☆126 · Updated last year
- I explain how to create a superior RAG pipeline for complex PDFs using LlamaParse. We can extract text and tables from PDFs and QA on… ☆44 · Updated last year
- Python SDK for experimenting, testing, evaluating & monitoring LLM-powered applications - Parea AI (YC S23) ☆76 · Updated 2 months ago
- ☆28 · Updated 2 months ago
- Build your own RAG and run it locally on your laptop: ColBERT + DSPy + Streamlit ☆56 · Updated last year
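Several of the entries above are RAG examples. As a rough orientation only, a generic retrieval step can be sketched as below; this is not how any of the listed projects implement retrieval, and the embedding model and documents are placeholder assumptions.

```python
# Generic dense-retrieval sketch (assumed setup, not taken from any listed repo).
import numpy as np
from sentence_transformers import SentenceTransformer

documents = [
    "NVIDIA AI Workbench packages projects as portable containers.",
    "LoRA fine-tuning updates a small set of adapter weights.",
    "Retrieval Augmented Generation grounds answers in retrieved text.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")          # placeholder embedding model
doc_vectors = embedder.encode(documents, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    q = embedder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ q          # cosine similarity, since vectors are normalized
    top = np.argsort(-scores)[:k]
    return [documents[i] for i in top]

context = "\n".join(retrieve("What does RAG do?"))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: What does RAG do?"
print(prompt)  # this prompt would then be passed to whichever LLM the project uses
```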