sophgo / LLM-TPU
Run generative AI models on the Sophgo BM1684X
☆194 · Updated this week
Alternatives and similar repositories for LLM-TPU:
Users interested in LLM-TPU are comparing it to the repositories listed below.
- llm-export can export LLM models to ONNX. ☆276 · Updated 2 months ago
- ☆329 · Updated 2 weeks ago
- Run ChatGLM2-6B on BM1684X ☆49 · Updated last year
- Sample code for world-class artificial intelligence SoCs for computer vision applications. ☆250 · Updated 3 weeks ago
- Explore LLM model deployment based on AXera's AI chips ☆99 · Updated 2 weeks ago
- Machine learning compiler based on MLIR for the Sophgo TPU. ☆704 · Updated 2 weeks ago
- Export LLaMA to ONNX ☆121 · Updated 3 months ago
- ☆104 · Updated 2 weeks ago
- ☆40 · Updated 9 months ago
- Simplify large (>2 GB) ONNX models ☆55 · Updated 4 months ago
- LLaMA/RWKV ONNX models, quantization, and test cases ☆360 · Updated last year
- Learning the NCNN codebase, with various small demos. ☆107 · Updated last year
- A toolkit to help optimize large ONNX models ☆153 · Updated 11 months ago
- A converter for llama2.c legacy models to ncnn models. ☆87 · Updated last year
- ☆711 · Updated last week
- Serving inside PyTorch ☆159 · Updated this week
- PaddlePaddle custom device implementation. ☆82 · Updated this week
- Examples for SophonSDK ☆105 · Updated 2 years ago
- [EMNLP 2024 Industry Track] This is the official PyTorch implementation of "LLMC: Benchmarking Large Language Model Quantization with a V… ☆452 · Updated this week
- A high-performance, extensible, easy-to-use framework for AI applications. Provides AI application developers with a unified, high-performance, easy-to-use programming framework for quickly building cross-device/edge/cloud AI industry applications on a full AI stack; supports GPU, … ☆152 · Updated 10 months ago
- A light LLaMA-like LLM inference framework based on Triton kernels. ☆106 · Updated last week
- nndeploy is an end-to-end model inference and deployment framework. It aims to provide users with a powerful, easy-to-use, high-performan… ☆717 · Updated last week
- VeriSilicon Tensor Interface Module ☆234 · Updated 3 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆49 · Updated 5 months ago
- ☆124 · Updated last year
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆244 · Updated last week
- ☆77 · Updated last year
- ☆58 · Updated 4 months ago
- Large language model ONNX inference framework ☆32 · Updated 3 months ago
- Stable Diffusion using MNN ☆67 · Updated last year