daquexian / faster-rwkv
☆124 · Updated last year
Alternatives and similar repositories for faster-rwkv:
Users interested in faster-rwkv are comparing it to the libraries listed below.
- Inference of RWKV v5, v6, and v7 with the Qualcomm AI Engine Direct SDK ☆63 · Updated 3 weeks ago
- ☆84 · Updated 2 years ago
- Inference of RWKV with multiple supported backends. ☆43 · Updated this week
- LLM deployment project based on ONNX. ☆36 · Updated 6 months ago
- Stable Diffusion using MNN ☆68 · Updated last year
- Simplify large (>2 GB) ONNX models ☆56 · Updated 5 months ago
- QQQ is a hardware-optimized W4A8 quantization solution for LLMs. ☆120 · Updated last month
- A converter from llama2.c legacy models to ncnn models. ☆87 · Updated last year
- ☆32 · Updated 9 months ago
- A Toolkit to Help Optimize Large ONNX Models ☆155 · Updated 11 months ago
- An easy-to-use package for implementing SmoothQuant for LLMs ☆97 · Updated last month
- Large Language Model ONNX Inference Framework ☆33 · Updated 3 months ago
- NVIDIA TensorRT Hackathon 2023 second-round topic: building and optimizing Tongyi Qianwen (Qwen-7B) with TensorRT-LLM ☆42 · Updated last year
- Standalone Flash Attention v2 kernel without libtorch dependency ☆108 · Updated 7 months ago
- ☆72 · Updated 5 months ago
- A general 2-8 bit quantization toolbox with GPTQ/AWQ/HQQ/VPTQ and easy export to ONNX/ONNX Runtime. ☆168 · Updated last month
- ☆139 · Updated last year
- A Toolkit to Help Optimize ONNX Models ☆145 · Updated this week
- Qwen2 and Llama3 C++ implementation ☆44 · Updated 11 months ago
- Performance of the C++ interface of Flash Attention and Flash Attention v2 in large language model (LLM) inference scenarios. ☆36 · Updated 2 months ago
- A simple forward-inference framework extracted from MNN (for study!) ☆22 · Updated 4 years ago
- ☆58 · Updated 5 months ago
- Export LLaMA to ONNX ☆123 · Updated 4 months ago
- ☆11 · Updated 8 months ago
- ☆36 · Updated 6 months ago
- ☆156 · Updated last month
- Inference of TinyLlama models with ncnn ☆24 · Updated last year
- A simplified flash-attention implementation using CUTLASS, intended for teaching ☆40 · Updated 8 months ago
- A llama model inference framework implemented in CUDA C++ ☆54 · Updated 6 months ago
- ☆127 · Updated 4 months ago