dusty-nv / NanoLLM
Optimized local inference for LLMs with HuggingFace-like APIs for quantization, vision/language models, multimodal agents, speech, vector DB, and RAG.
☆320 · Updated 11 months ago
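For a sense of what the HuggingFace-like API looks like in practice, a minimal sketch is shown below. The module path, backend selector, and quantization argument are illustrative assumptions drawn from the project's description, not a verified signature.

```python
# Minimal sketch (assumed API, not verified): load a quantized local LLM and
# stream a reply. The names `nano_llm.NanoLLM`, `api="mlc"`, and
# `quantization="q4f16_ft"` are assumptions based on how the project
# describes its HuggingFace-like interface.
from nano_llm import NanoLLM

model = NanoLLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",  # any supported HuggingFace model ID
    api="mlc",                        # local inference backend
    quantization="q4f16_ft",          # 4-bit weights, fp16 activations
)

# Stream tokens as they are generated, as with a HF text-generation pipeline.
for token in model.generate("What can I run on a Jetson Orin?", streaming=True):
    print(token, end="", flush=True)
```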
Alternatives and similar repositories for NanoLLM
Users interested in NanoLLM are comparing it to the libraries listed below.
- ☆115 · Updated this week
- A tutorial introducing knowledge distillation as an optimization technique for deployment on NVIDIA Jetson ☆213 · Updated last year
- Quick start scripts and tutorial notebooks to get started with TAO Toolkit ☆108 · Updated last week
- A reference application for a local AI assistant with LLM and RAG ☆117 · Updated 10 months ago
- A collection of reference AI microservices and workflows for Jetson Platform Services ☆49 · Updated 8 months ago
- ☆100 · Updated last year
- Collection of reference workflows for building intelligent agents with NIMs ☆175 · Updated 8 months ago
- A utility library to help integrate Python applications with Metropolis Microservices for Jetson ☆15 · Updated 9 months ago
- Code for the three demos I presented at Google Gemma2 DevDay Tokyo, using Gemma2 on a Jetson Orin Nano device. ☆57 · Updated 2 months ago
- A project that optimizes OWL-ViT for real-time inference with NVIDIA TensorRT. ☆369 · Updated 8 months ago
- A project that optimizes Whisper for low-latency inference using NVIDIA TensorRT ☆90 · Updated 11 months ago
- Zero-copy multimodal vector DB with CUDA and CLIP/SigLIP ☆62 · Updated 5 months ago
- Creation of annotated datasets from scratch using Generative AI and Foundation Computer Vision models ☆128 · Updated last week
- ASR/NLP/TTS deep learning inference library for NVIDIA Jetson using PyTorch and TensorRT ☆217 · Updated last year
- Blueprint for ingesting massive volumes of live or archived video and extracting insights for summarization and interactive Q&A ☆266 · Updated this week
- A project demonstrating how to make DeepStream Docker images. ☆84 · Updated this week
- From-scratch implementation of a vision-language model in pure PyTorch ☆243 · Updated last year
- Inference Vision Transformer (ViT) in plain C/C++ with ggml ☆296 · Updated last year
- Using the FastChat-T5 large language model, the Vosk API for automatic speech recognition, and Piper for text-to-speech ☆125 · Updated 2 years ago
- A reference example for integrating NanoOwl with Metropolis Microservices for Jetson ☆30 · Updated last year
- TAO Toolkit deep learning networks with PyTorch backend ☆104 · Updated last week
- Inference and fine-tuning examples for vision models from 🤗 Transformers ☆162 · Updated 2 months ago
- An innovative library for efficient LLM inference via low-bit quantization ☆350 · Updated last year
- High-performance, optimized pre-trained template AI application pipelines for systems using Hailo devices ☆159 · Updated last week
- ☆19 · Updated 6 months ago
- TinyChatEngine: On-Device LLM Inference Library ☆897 · Updated last year
- This project is a native implementation of a RAG pipeline for Small Language Models tested on Android devices. The main goal was to fit t… ☆93 · Updated last year
- A toolkit to help optimize ONNX models ☆220 · Updated last week
- This repository provides an optical character detection and recognition solution optimized for NVIDIA devices. ☆80 · Updated 4 months ago
- The NVIDIA RTX™ AI Toolkit is a suite of tools and SDKs for Windows developers to customize, optimize, and deploy AI models across RTX PC… ☆175 · Updated 10 months ago