dusty-nv / NanoLLM
Optimized local inference for LLMs with HuggingFace-like APIs for quantization, vision/language models, multimodal agents, speech, vector DB, and RAG.
☆249 · Updated 5 months ago
Alternatives and similar repositories for NanoLLM:
Users interested in NanoLLM are comparing it to the libraries listed below.
- A tutorial introducing knowledge distillation as an optimization technique for deployment on NVIDIA Jetson ☆184 · Updated last year
- A reference application for a local AI assistant with LLM and RAG ☆108 · Updated 3 months ago
- A collection of reference AI microservices and workflows for Jetson Platform Services ☆38 · Updated last month
- A utility library to help integrate Python applications with Metropolis Microservices for Jetson ☆12 · Updated 3 months ago
- A project that optimizes Whisper for low-latency inference using NVIDIA TensorRT ☆74 · Updated 5 months ago
- Zero-copy multimodal vector DB with CUDA and CLIP/SigLIP ☆49 · Updated 9 months ago
- Code for the three demos presented at Google Gemma2 DevDay Tokyo, running Gemma2 on a Jetson Orin Nano device ☆38 · Updated 5 months ago
- A reference example for integrating NanoOwl with Metropolis Microservices for Jetson ☆30 · Updated 9 months ago
- ASR/NLP/TTS deep learning inference library for NVIDIA Jetson using PyTorch and TensorRT ☆203 · Updated last year
- A project that optimizes OWL-ViT for real-time inference with NVIDIA TensorRT ☆312 · Updated last month
- Collection of reference workflows for building intelligent agents with NIMs ☆149 · Updated 2 months ago
- A tool to configure, launch, and manage your machine learning experiments ☆129 · Updated this week
- Uses the FastChat-T5 large language model, the Vosk API for automatic speech recognition, and Piper for text-to-speech ☆117 · Updated last year
- Quick-start scripts and tutorial notebooks to get started with the TAO Toolkit ☆74 · Updated 6 months ago
- TAO Toolkit deep learning networks with PyTorch backend ☆91 · Updated 4 months ago
- The NVIDIA RTX™ AI Toolkit is a suite of tools and SDKs for Windows developers to customize, optimize, and deploy AI models across RTX PC… ☆144 · Updated 4 months ago
- Advanced quantization algorithm for LLMs/VLMs ☆394 · Updated this week
- A family of compressed models obtained via pruning and knowledge distillation ☆329 · Updated 4 months ago
- Beginner's Guide to reComputer Jetson ☆106 · Updated last week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆262 · Updated 5 months ago
- NVIDIA Riva runnable tutorials ☆127 · Updated this week
- Package for deploying deep learning models from the TAO Toolkit ☆19 · Updated 6 months ago
- Triton CLI is an open-source command-line interface that enables users to create, deploy, and profile models served by the Triton Inferen… ☆60 · Updated this week
- The jetson-examples repository by Seeed Studio offers a seamless, one-line command deployment to run vision AI and Generative AI models o… ☆158 · Updated last month
- This reference can be used with any existing OpenAI-integrated apps to run with TRT-LLM inference locally on a GeForce GPU on Windows inste… ☆120 · Updated last year
- A from-scratch implementation of a vision-language model in pure PyTorch ☆205 · Updated 10 months ago