dusty-nv / NanoLLM
Optimized local inference for LLMs with HuggingFace-like APIs for quantization, vision/language models, multimodal agents, speech, vector DB, and RAG.
☆262 · Updated 6 months ago
Alternatives and similar repositories for NanoLLM:
Users interested in NanoLLM are comparing it to the libraries listed below:
- A reference application for a local AI assistant with LLM and RAG ☆110 · Updated 5 months ago
- ☆107 · Updated last month
- Quick start scripts and tutorial notebooks to get started with TAO Toolkit ☆81 · Updated 8 months ago
- A utility library to help integrate Python applications with Metropolis Microservices for Jetson ☆12 · Updated 4 months ago
- A collection of reference AI microservices and workflows for Jetson Platform Services ☆38 · Updated 3 months ago
- ASR/NLP/TTS deep learning inference library for NVIDIA Jetson using PyTorch and TensorRT ☆206 · Updated last year
- Zero-copy multimodal vector DB with CUDA and CLIP/SigLIP ☆55 · Updated 11 months ago
- ☆17 · Updated last month
- An innovative library for efficient LLM inference via low-bit quantization ☆350 · Updated 8 months ago
- Code for the 3 demos presented at Google Gemma2 DevDay Tokyo, running Gemma2 on a Jetson Orin Nano device. ☆43 · Updated last month
- Blueprint for ingesting massive volumes of live or archived video and extracting insights for summarization and interactive Q&A ☆46 · Updated last week
- A reference example for integrating NanoOwl with Metropolis Microservices for Jetson ☆30 · Updated 10 months ago
- ☆94 · Updated 7 months ago
- Uses the FastChat-T5 large language model, the Vosk API for automatic speech recognition, and Piper for text-to-speech ☆118 · Updated last year
- Collection of reference workflows for building intelligent agents with NIMs ☆155 · Updated 3 months ago
- OpenAI-compatible API for the TensorRT-LLM Triton backend ☆205 · Updated 9 months ago
- A family of compressed models obtained via pruning and knowledge distillation ☆336 · Updated 5 months ago
- This reference can be used with any existing OpenAI-integrated apps to run with TRT-LLM inference locally on GeForce GPU on Windows inste… ☆120 · Updated last year
- High-performance, optimized pre-trained template AI application pipelines for systems using Hailo devices ☆134 · Updated last month
- Inference Vision Transformer (ViT) in plain C/C++ with ggml ☆270 · Updated last year
- ☆173 · Updated last year
- From-scratch implementation of a vision language model in pure PyTorch ☆214 · Updated last year
- Which model is the best at object detection? Which is best for small or large objects? We compare the results in a handy leaderboard. ☆70 · Updated this week
- TAO Toolkit deep learning networks with PyTorch backend ☆93 · Updated 6 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆263 · Updated 7 months ago
- Maybe the new state-of-the-art vision model? We'll see 🤷‍♂️ ☆163 · Updated last year
- Advanced quantization algorithm for LLMs/VLMs ☆454 · Updated this week
- Getting started with TensorRT-LLM using BLOOM as a case study ☆18 · Updated last year
- Creation of annotated datasets from scratch using generative AI and foundation computer vision models ☆117 · Updated this week
- Quick exploration into fine-tuning Florence-2 ☆309 · Updated 7 months ago