BestAnHongjun / LMDeploy-Jetson
Deploying LLMs offline on the NVIDIA Jetson platform, so that embodied-intelligence devices can run independently without continuous internet access.
☆108 · Updated last year
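For context, this is roughly what offline inference with LMDeploy's Python `pipeline` API looks like; a minimal sketch, assuming `lmdeploy` has been installed on the device and the model weights are already on local storage (the model path below is illustrative, not one shipped with this project):

```python
# Minimal offline LLM inference via LMDeploy's pipeline API (sketch).
# Assumes lmdeploy is installed and the weights are stored locally,
# so no network access is needed at inference time.
from lmdeploy import pipeline, TurbomindEngineConfig

# A smaller KV-cache budget helps on memory-constrained boards such as Jetson.
engine_cfg = TurbomindEngineConfig(cache_max_entry_count=0.2)

# Illustrative local path; replace with the model actually deployed on the device.
pipe = pipeline('/path/to/local/internlm2-chat-7b', backend_config=engine_cfg)

responses = pipe(['Describe the Jetson Orin platform in one sentence.'])
print(responses[0].text)
```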
Alternatives and similar repositories for LMDeploy-Jetson
Users interested in LMDeploy-Jetson are comparing it with the libraries listed below.
- ☆54 · Updated last year
- Large language model deployment on the Ascend 310 chip. ☆24 · Updated last year
- llm-export can export LLM models to ONNX. ☆341 · Updated 3 months ago
- ☢️ TensorRT Hackathon 2023 finals: inference acceleration and optimization of the Llama model based on TensorRT-LLM. ☆51 · Updated 2 years ago
- Training a LLaVA model with better Chinese support; the training code and data are open-sourced. ☆78 · Updated last year
- ☆72 · Updated this week
- A 500M-parameter real-time VLM for CPU that surpasses Moondream2 and SmolVLM; easy to train from scratch. ☆248 · Updated 9 months ago
- An ONNX-based quantization tool. ☆71 · Updated 2 years ago
- Grafts the SmolVLM2 vision head onto Qwen3-0.6B and fine-tunes the combined model. ☆509 · Updated 4 months ago
- [EMNLP 2024 & AAAI 2026] A powerful toolkit for compressing large models including LLMs, VLMs, and video generative models. ☆669 · Updated 2 months ago
- Run ChatGLM2-6B on the BM1684X. ☆49 · Updated last year
- Run generative AI models on Sophgo BM1684X/BM1688. ☆263 · Updated last week
- NVIDIA TensorRT Hackathon 2023 finals entry: building and optimizing Tongyi Qianwen Qwen-7B with TensorRT-LLM. ☆43 · Updated 2 years ago
- ☆61 · Updated last year
- Notes and summaries on quantizing LLMs. ☆57 · Updated 2 weeks ago
- Simplify ONNX models larger than 2 GB. ☆70 · Updated last year
- mllm-npu: training multimodal large language models on Ascend NPUs. ☆95 · Updated last year
- An offline embodied-intelligence guide dog based on the InternLM2 large model. ☆111 · Updated last year
- Code accompanying the Bilibili video https://www.bilibili.com/video/BV18L41197Uz/?spm_id_from=333.788&vd_source=eefa4b6e337f16d87d87c2c357db8ca7. ☆71 · Updated 2 years ago
- Serving inside PyTorch. ☆170 · Updated this week
- Hands-on large model deployment: TensorRT-LLM, Triton Inference Server, vLLM. ☆27 · Updated last year
- ☆135 · Updated last year
- TensorRT 2022 runner-up solution: accelerating the MobileViT model with TensorRT. ☆68 · Updated 3 years ago
- Large Language Model ONNX Inference Framework. ☆36 · Updated 2 months ago
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆274 · Updated 5 months ago
- Explore LLM deployment based on AXera's AI chips. ☆137 · Updated this week
- LLM inference service performance benchmarking. ☆44 · Updated 2 years ago
- High-performance, light-weight C++ LLM and VLM inference software for Physical AI. ☆197 · Updated 3 weeks ago
- ☆155 · Updated 2 years ago
- 青稞Talk ☆189 · Updated this week