yinghuo302 / ascend-llm
Large language model deployment based on the Ascend 310 chip
☆18 Updated 8 months ago
Alternatives and similar repositories for ascend-llm:
Users interested in ascend-llm are comparing it to the libraries listed below
- Deploying LLMs offline on the NVIDIA Jetson platform marks the dawn of a new era in embodied intelligence, where devices can function ind… ☆90 Updated 10 months ago
- ☆39 Updated 3 months ago
- An ONNX-based quantization tool. ☆71 Updated last year
- Run ChatGLM2-6B on BM1684X ☆49 Updated 11 months ago
- NVIDIA TensorRT Hackathon 2023 semifinal topic: building and optimizing the Tongyi Qianwen Qwen-7B model with TensorRT-LLM ☆41 Updated last year
- ☢️ TensorRT 2023 semifinal: inference acceleration and optimization of the Llama model based on TensorRT-LLM ☆44 Updated last year
- ☆130 Updated last year
- ☆20 Updated last year
- Applications of large language models and multimodal models, mainly covering RAG, small models, agents, cross-modal search, OCR, and more ☆155 Updated 3 months ago
- Easily train official YOLOv8, YOLOv7, YOLOv6, and YOLOv5 models and prune them all using Torch-Pruning! ☆57 Updated last year
- Async inference for machine learning models ☆26 Updated 2 years ago
- TensorRT 2022 runner-up solution: accelerating the MobileViT model with TensorRT ☆61 Updated 2 years ago
- ☆23 Updated last year
- Compare multiple optimization methods on Triton to improve model serving performance ☆49 Updated last year
- ☆27 Updated last year
- ☆37 Updated 7 months ago
- Thoroughly understand backpropagation (BP): 15 lines of code, a simple C++ implementation, 98.29% accuracy on MNIST classification ☆34 Updated 2 years ago
- Companion code for the bilibili video series "Introduction to CUDA 12.x Parallel Programming (C++ Edition)" ☆30 Updated 6 months ago
- llm-export can export LLM models to ONNX. ☆263 Updated last month
- LLM Tokenizer with BPE algorithm ☆29 Updated 9 months ago
- Triton documentation in Simplified Chinese / Triton 中文文档 ☆54 Updated last month
- ☆91 Updated last month
- Inference code for LLaMA models ☆113 Updated last year
- Llama 2 inference ☆41 Updated last year
- Music large model based on InternLM2-chat. ☆22 Updated 2 months ago
- A llama model inference framework implemented in CUDA C++ ☆45 Updated 3 months ago
- PaddlePaddle custom device implementation. ☆80 Updated this week
- Efficient deployment: TensorRT™ inference for YOLOX, V3, V4, V5, V6, V7, V8, and EdgeYOLO, with pre- and post-processing implemented as CUDA kernels (C++/CUDA) 🚀 ☆49 Updated last year
- ☆118 Updated last year