dusty-nv / NanoLLM
Optimized local inference for LLMs with HuggingFace-like APIs for quantization, vision/language models, multimodal agents, speech, vector DB, and RAG.
☆338 · Updated last year
Alternatives and similar repositories for NanoLLM
Users interested in NanoLLM are comparing it to the libraries listed below.
- GitHub repo for Jetson AI Lab ☆123 · Updated last week
- A tutorial introducing knowledge distillation as an optimization technique for deployment on NVIDIA Jetson ☆227 · Updated 2 years ago
- A reference application for a local AI assistant with LLM and RAG ☆117 · Updated last year
- ☆106 · Updated 2 months ago
- Quick-start scripts and tutorial notebooks to get started with the TAO Toolkit ☆129 · Updated 2 weeks ago
- Collection of reference workflows for building intelligent agents with NIMs ☆183 · Updated 11 months ago
- A collection of reference AI microservices and workflows for Jetson Platform Services ☆51 · Updated 11 months ago
- A project that optimizes OWL-ViT for real-time inference with NVIDIA TensorRT ☆390 · Updated 10 months ago
- A utility library to help integrate Python applications with Metropolis Microservices for Jetson ☆15 · Updated last year
- Zero-copy multimodal vector DB with CUDA and CLIP/SigLIP ☆64 · Updated 7 months ago
- Code for the 3 demos presented at Google Gemma2 DevDay Tokyo, running Gemma2 on a Jetson Orin Nano device ☆60 · Updated 5 months ago
- TAO Toolkit deep learning networks with a PyTorch backend ☆107 · Updated 3 weeks ago
- A project demonstrating how to build DeepStream Docker images ☆92 · Updated 2 months ago
- Blueprint for ingesting massive volumes of live or archived video and extracting insights for summarization and interactive Q&A ☆362 · Updated 3 weeks ago
- Creation of annotated datasets from scratch using generative AI and foundation computer vision models ☆132 · Updated last week
- A reference example for integrating NanoOwl with Metropolis Microservices for Jetson ☆30 · Updated last year
- From-scratch implementation of a vision language model in pure PyTorch ☆254 · Updated last year
- High-performance, optimized, pre-trained template AI application pipelines for systems using Hailo devices ☆174 · Updated this week
- TinyChatEngine: on-device LLM inference library ☆934 · Updated last year
- An open-source, lightweight, high-performance inference framework for Hailo devices ☆150 · Updated last month
- Uses the FastChat-T5 large language model, the Vosk API for automatic speech recognition, and Piper for text-to-speech ☆126 · Updated 2 years ago
- The jetson-examples repository by Seeed Studio offers a seamless, one-line-command deployment to run vision AI and generative AI models o… ☆237 · Updated 2 weeks ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆267 · Updated 3 weeks ago
- NVIDIA DLA-SW: recipes and tools for running deep learning inference workloads on NVIDIA DLA cores ☆224 · Updated last year
- Beginner's guide to reComputer Jetson ☆119 · Updated 2 months ago
- Inference and fine-tuning examples for vision models from 🤗 Transformers ☆162 · Updated 4 months ago
- Inference for Vision Transformer (ViT) in plain C/C++ with ggml ☆302 · Updated last year
- The NVIDIA RTX™ AI Toolkit is a suite of tools and SDKs for Windows developers to customize, optimize, and deploy AI models across RTX PC… ☆179 · Updated last month
- A toolkit to help optimize ONNX models ☆288 · Updated this week
- An innovative library for efficient LLM inference via low-bit quantization ☆351 · Updated last year