NVIDIA / RTX-AI-Toolkit
The NVIDIA RTX™ AI Toolkit is a suite of tools and SDKs for Windows developers to customize, optimize, and deploy AI models across RTX PCs and cloud.
☆175 · Updated 10 months ago
Alternatives and similar repositories for RTX-AI-Toolkit
Users who are interested in RTX-AI-Toolkit are comparing it to the repositories listed below.
- An NVIDIA AI Workbench example project for customizing an SDXL model ☆57 · Updated last week
- This reference can be used with any existing OpenAI integrated apps to run with TRT-LLM inference locally on GeForce GPU on Windows inste… ☆125 · Updated last year
- Accelerate your Gen AI with NVIDIA NIM and NVIDIA AI Workbench ☆179 · Updated 5 months ago
- An NVIDIA AI Workbench example project for Retrieval Augmented Generation (RAG) ☆340 · Updated last month
- ☆167 · Updated this week
- Unsloth Studio ☆110 · Updated 6 months ago
- This is an NVIDIA AI Workbench example project that demonstrates an end-to-end model development workflow using Llamafactory. ☆67 · Updated 11 months ago
- llama.cpp fork used by GPT4All ☆56 · Updated 7 months ago
- An NVIDIA AI Workbench example project for fine-tuning a Nemotron-3 8B model ☆54 · Updated last year
- automatically quant GGUF models ☆204 · Updated last week
- An NVIDIA AI Workbench example project for fine-tuning a Mistral 7B model ☆63 · Updated last year
- Customizable, AI-driven virtual assistant designed to streamline customer service operations, handle common inquiries, and improve overal… ☆193 · Updated 2 months ago
- Cortex.Tensorrt-LLM is a C++ inference library that can be loaded by any server at runtime. It submodules NVIDIA’s TensorRT-LLM for GPU a… ☆42 · Updated last year
- Examples of RAG using Llamaindex with local LLMs - Gemma, Mixtral 8x7B, Llama 2, Mistral 7B, Orca 2, Phi-2, Neural 7B ☆129 · Updated last year
- Running Microsoft's BitNet via Electron, React & Astro ☆44 · Updated last week
- A list of language models with permissive licenses such as MIT or Apache 2.0 ☆24 · Updated 7 months ago
- GRadient-INformed MoE ☆264 · Updated last year
- Simple UI for Llama-3.2-11B-Vision & Molmo-7B-D ☆137 · Updated last year
- ☆152 · Updated 3 weeks ago
- Gradio based tool to run opensource LLM models directly from Huggingface ☆95 · Updated last year
- GPT-4 Level Conversational QA Trained In a Few Hours ☆65 · Updated last year
- Route LLM requests to the best model for the task at hand. ☆108 · Updated 2 weeks ago
- Tool to download models from Huggingface Hub and convert them to GGML/GGUF for llama.cpp ☆159 · Updated 5 months ago
- Utils for Unsloth https://github.com/unslothai/unsloth ☆153 · Updated this week
- A sleek and user-friendly interface for interacting with Ollama models, built with Python and Gradio. ☆35 · Updated 5 months ago
- ☆102 · Updated last year
- MLPerf Client is a benchmark for Windows and macOS, focusing on client form factors in ML inference scenarios. ☆51 · Updated 2 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs (Windows build & kernels) ☆175 · Updated last week
- An NVIDIA AI Workbench example project for Agentic Retrieval Augmented Generation (RAG) ☆105 · Updated 2 months ago
- Context-Aware RAG library for Knowledge Graph ingestion and retrieval functions. ☆35 · Updated last week