luchangli03 / onnxsim_large_model
Simplify ONNX models larger than 2 GB
☆54 · Updated 3 months ago
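For context: ONNX serializes a model as a single protobuf file by default, and protobuf caps a message at 2 GB, so large models must keep their weights as external data and the stock onnx-simplifier can fail on them. Below is a minimal sketch of the general workflow using the official onnx and onnxsim Python APIs; the file names are placeholders, and this naive path can still hit protobuf limits inside the simplifier for very large graphs, which is the gap onnxsim_large_model targets. It is an illustration of the idea, not this repo's actual script.

```python
import onnx
from onnxsim import simplify  # pip install onnx onnxsim

# Load the graph; weights stored as external data are pulled in from
# files next to the .onnx file (paths here are placeholders).
model = onnx.load("model.onnx")

# Run the usual onnx-simplifier passes (constant folding, shape
# inference, redundant-node elimination).
model_simplified, ok = simplify(model)
assert ok, "simplified model failed validation"

# Save with weights as external data so the protobuf itself stays
# under the 2 GB limit.
onnx.save(
    model_simplified,
    "model_sim.onnx",
    save_as_external_data=True,
    all_tensors_to_one_file=True,
    location="model_sim.onnx.data",
)
```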
Alternatives and similar repositories for onnxsim_large_model:
Users interested in onnxsim_large_model are comparing it to the repositories listed below.
- Export LLaMA to ONNX ☆117 · Updated 2 months ago
- ☢️ TensorRT 2023 second round: Llama model inference acceleration and optimization based on TensorRT-LLM ☆46 · Updated last year
- ☆24 · Updated last year
- ☆127 · Updated 3 months ago
- ☆139 · Updated 11 months ago
- Large Language Model ONNX Inference Framework ☆31 · Updated 2 months ago
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models ☆22 · Updated last year
- ☆58 · Updated 4 months ago
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios ☆35 · Updated 3 weeks ago
- NVIDIA TensorRT Hackathon 2023 second-round topic: building and optimizing Tongyi Qianwen Qwen-7B with TensorRT-LLM ☆41 · Updated last year
- LLM deployment project based on ONNX ☆31 · Updated 5 months ago
- An easy-to-use package for implementing SmoothQuant for LLMs ☆95 · Updated 10 months ago
- A Toolkit to Help Optimize Large ONNX Models ☆153 · Updated 10 months ago
- llm-export can export LLM models to ONNX ☆272 · Updated 2 months ago
- ☆26 · Updated last year
- ☆71 · Updated 2 years ago
- Run ChatGLM2-6B on the BM1684X ☆49 · Updated last year
- ☆145 · Updated 2 months ago
- Transformer-related optimization, including BERT and GPT ☆17 · Updated last year
- QQQ is an innovative and hardware-optimized W4A8 quantization solution for LLMs ☆108 · Updated 2 weeks ago
- A LLaMA model inference framework implemented in CUDA C++ ☆48 · Updated 4 months ago
- Official PyTorch implementation of FlatQuant: Flatness Matters for LLM Quantization ☆110 · Updated 2 months ago
- A quantization algorithm for LLMs ☆136 · Updated 9 months ago
- ☆124 · Updated last year
- A converter from MegEngine to other frameworks ☆69 · Updated last year
- ☆36 · Updated 5 months ago
- Transformer-related optimization, including BERT and GPT ☆59 · Updated last year
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA, using CUDA cores for the decoding stage of LLM inference ☆35 · Updated 2 weeks ago
- Stable Diffusion using MNN ☆65 · Updated last year
- ☆39 · Updated 4 months ago