daquexian / faster-rwkv
☆124 · Updated last year
Alternatives and similar repositories for faster-rwkv:
Users that are interested in faster-rwkv are comparing it to the libraries listed below
- Inference of RWKV v5, v6 and v7 with the Qualcomm AI Engine Direct SDK ☆55 · Updated last week
- An LLM deployment project based on ONNX. ☆31 · Updated 5 months ago
- Stable Diffusion using MNN ☆65 · Updated last year
- A converter from llama2.c legacy models to ncnn models. ☆87 · Updated last year
- Simplify ONNX models larger than 2 GB ☆54 · Updated 3 months ago
- RWKV inference with multiple supported backends. ☆35 · Updated this week
- Qwen2 and Llama3 C++ implementation ☆43 · Updated 9 months ago
- ☆84 · Updated 2 years ago
- ☆32 · Updated 8 months ago
- ☆139 · Updated 11 months ago
- A toolkit to help optimize large ONNX models ☆153 · Updated 10 months ago
- Standalone Flash Attention v2 kernel without libtorch dependency ☆106 · Updated 6 months ago
- ☆127 · Updated 2 months ago
- A llama model inference framework implemented in CUDA C++ ☆48 · Updated 4 months ago
- Large language model ONNX inference framework ☆31 · Updated 2 months ago
- NVIDIA TensorRT Hackathon 2023 semifinal topic: building and optimizing the Tongyi Qianwen Qwen-7B model with TensorRT-LLM ☆41 · Updated last year
- A simplified flash-attention implementation using CUTLASS, intended for teaching ☆38 · Updated 7 months ago
- ☆58 · Updated 4 months ago
- ☢️ TensorRT 2023 semifinal: inference acceleration and optimization of the Llama model based on TensorRT-LLM ☆46 · Updated last year
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios. ☆35 · Updated 3 weeks ago
- QQQ is an innovative and hardware-optimized W4A8 quantization solution for LLMs. ☆108 · Updated last week
- A toolkit to help optimize ONNX models ☆124 · Updated this week
- FP8 flash attention implemented for the Ada architecture using the CUTLASS repository ☆60 · Updated 7 months ago
- Efficient inference of large language models. ☆146 · Updated 3 months ago
- ☆36 · Updated 5 months ago
- ☆74 · Updated 3 months ago
- Decoding Attention is specially optimized for MHA, MQA, GQA and MLA, using CUDA cores for the decoding stage of LLM inference. ☆35 · Updated 2 weeks ago
- ☆60 · Updated 2 years ago