sophgo / LLM-TPU
Run generative AI models in sophgo BM1684X/BM1688
☆257 · Updated 2 weeks ago
Alternatives and similar repositories for LLM-TPU
Users interested in LLM-TPU are comparing it to the libraries listed below.
- llm-export can export LLM models to ONNX. ☆337 · Updated 2 months ago
- Explore LLM model deployment based on AXera's AI chips. ☆132 · Updated last week
- Sample code for world-class Artificial Intelligence SoCs for computer vision applications. ☆279 · Updated this week
- Machine learning compiler based on MLIR for the Sophgo TPU. ☆836 · Updated last week
- Run ChatGLM2-6B on BM1684X. ☆49 · Updated last year
- Export LLaMA to ONNX. ☆137 · Updated last year
- PaddlePaddle custom device implementation (custom hardware integration for PaddlePaddle). ☆101 · Updated last week
- ☆44 · Updated last year
- ☆459 · Updated last week
- Simplify large (>2GB) ONNX models. ☆70 · Updated last year
- LLaMA/RWKV ONNX models, quantization, and test cases. ☆367 · Updated 2 years ago
- The Pipeline example based on AX650N/AX8850 demonstrates software development with the Image Processing, NPU, Codec, and Display modules, … ☆12 · Updated 4 months ago
- ☆132 · Updated 3 weeks ago
- Examples for the SophonSDK. ☆107 · Updated 3 years ago
- A high-performance, extensible, easy-to-use framework for AI applications. Gives AI developers a unified, high-performance, easy-to-use programming framework for quickly building cross device-edge-cloud AI industry applications on top of full-stack AI services; supports GPU, … ☆160 · Updated last year
- PyTorch Neural Network eXchange. ☆665 · Updated this week
- VeriSilicon Tensor Interface Module. ☆246 · Updated last month
- ☆53 · Updated last year
- Code study of NCNN, with various small demos. ☆127 · Updated last year
- ☆149 · Updated 2 years ago
- A high-throughput and memory-efficient inference and serving engine for LLMs. ☆73 · Updated last year
- Large Language Model ONNX Inference Framework. ☆36 · Updated last month
- A toolkit to help optimize large ONNX models. ☆162 · Updated 2 months ago
- ☆60 · Updated last year
- MegCC is a deep learning model compiler with an ultra-lightweight runtime that is efficient and easy to port. ☆489 · Updated last year
- DDK for the Rockchip NPU. ☆69 · Updated 5 years ago
- Stable Diffusion using MNN. ☆67 · Updated 2 years ago
- LLM deployment project based on ONNX. ☆48 · Updated last year
- Zhouyi model zoo. ☆105 · Updated 2 months ago
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆272 · Updated 5 months ago