BestAnHongjun / LMDeploy-Jetson
Deploying LLMs offline on the NVIDIA Jetson platform marks the dawn of a new era in embodied intelligence, where devices can function independently without continuous internet access.
☆103 · Updated last year
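For context, a minimal sketch of what offline inference with LMDeploy's Python API can look like is shown below. This uses the generic `lmdeploy.pipeline` interface; the local model path, cache setting, and prompt are assumptions for illustration, not taken from this repository's documented Jetson workflow.

```python
# Minimal sketch of offline inference with LMDeploy's Python API.
# Assumes the model weights are already downloaded to a local directory,
# so no network access is needed at run time.
from lmdeploy import pipeline, TurbomindEngineConfig

# Hypothetical local path; replace with wherever the converted model lives.
model_path = "/opt/models/internlm2-chat-1_8b"

# Keep the KV cache small to fit a Jetson-class GPU (assumed value).
engine_config = TurbomindEngineConfig(cache_max_entry_count=0.2)

pipe = pipeline(model_path, backend_config=engine_config)
print(pipe(["What can an offline LLM do for a mobile robot?"]))
```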
Alternatives and similar repositories for LMDeploy-Jetson
Users interested in LMDeploy-Jetson are comparing it to the repositories listed below.
- ☆52 · Updated last year
- Large language model deployment on the Ascend 310 chip ☆24 · Updated last year
- llm-export can export LLM models to ONNX. ☆336 · Updated last month
- An offline embodied-intelligence guide dog based on the InternLM2 large model ☆110 · Updated last year
- An ONNX-based quantization tool. ☆71 · Updated last year
- Grafts the SmolVLM2 vision head onto the Qwen3-0.6B model and fine-tunes the combined model ☆465 · Updated 3 months ago
- Trains a LLaVA model with better Chinese support and open-sources the training code and data. ☆77 · Updated last year
- ☆23 · Updated last year
- ☢️ TensorRT Hackathon 2023 second round: inference acceleration and optimization for the Llama model based on TensorRT-LLM ☆51 · Updated 2 years ago
- ☆66 · Updated 2 weeks ago
- Run ChatGLM2-6B on the BM1684X ☆49 · Updated last year
- A large collection of examples for learning ONNX ☆23 · Updated last year
- [EMNLP 2024 & AAAI 2026] A powerful toolkit for compressing large models, including LLMs, VLMs, and video generation models. ☆638 · Updated last month
- Hands-on large model deployment: TensorRT-LLM, Triton Inference Server, vLLM ☆26 · Updated last year
- Serving Inside Pytorch ☆166 · Updated last week
- ☆61 · Updated last year
- Notes summarizing experience with quantizing LLMs. ☆49 · Updated 2 weeks ago
- Large Language Model ONNX Inference Framework ☆36 · Updated 3 weeks ago
- NVIDIA TensorRT Hackathon 2023 second-round topic: building and optimizing the Tongyi Qianwen Qwen-7B model with TensorRT-LLM ☆43 · Updated 2 years ago
- InternEvo is an open-sourced lightweight training framework that aims to support model pre-training without the need for extensive dependencie… ☆416 · Updated 4 months ago
- Code accompanying the Bilibili video https://www.bilibili.com/video/BV18L41197Uz/?spm_id_from=333.788&vd_source=eefa4b6e337f16d87d87c2c357db8ca7. ☆71 · Updated 2 years ago
- A 500M-parameter real-time CPU VLM that surpasses Moondream2 and SmolVLM; trained from scratch with ease. ☆240 · Updated 7 months ago
- FlagScale is a large model toolkit based on open-sourced projects. ☆425 · Updated last week
- ☆153 · Updated last year
- A from-scratch (0-to-1) VLM fine-tuning implementation (including pre-training and SFT) that does not rely on any framework ☆35 · Updated 3 months ago
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆270 · Updated 4 months ago
- Run generative AI models on Sophgo BM1684X/BM1688 ☆254 · Updated 2 weeks ago
- mllm-npu: training multimodal large language models on Ascend NPUs ☆94 · Updated last year
- A Light-Weight Framework for Open-Set Object Detection with Decoupled Feature Alignment in Joint Space ☆95 · Updated 2 weeks ago
- TensorRT 2022 runner-up solution: accelerating the MobileViT model with TensorRT ☆68 · Updated 3 years ago