daquexian / faster-rwkv
☆124 · Updated last year
Alternatives and similar repositories for faster-rwkv
Users who are interested in faster-rwkv are comparing it to the libraries listed below.
- Inference for RWKV v5, v6, and v7 with the Qualcomm AI Engine Direct SDK ☆72 · Updated this week
- LLM deployment project based on ONNX ☆42 · Updated 8 months ago
- Stable Diffusion using MNN ☆68 · Updated last year
- ☆84 · Updated 2 years ago
- Inference RWKV with multiple supported backends ☆50 · Updated this week
- A converter from llama2.c legacy models to ncnn models ☆81 · Updated last year
- Simplify ONNX models larger than 2 GB ☆58 · Updated 6 months ago
- ☆32 · Updated 11 months ago
- Performance of the C++ interfaces of FlashAttention and FlashAttention-2 in large language model (LLM) inference scenarios ☆38 · Updated 3 months ago
- QQQ is a hardware-optimized W4A8 quantization solution for LLMs ☆126 · Updated 2 months ago
- Standalone FlashAttention-2 kernel without a libtorch dependency ☆110 · Updated 9 months ago
- Large language model ONNX inference framework ☆35 · Updated 5 months ago
- A toolkit to help optimize large ONNX models ☆157 · Updated last year
- C++ implementations of Qwen2 and Llama 3 ☆44 · Updated last year
- ☆139 · Updated last year
- ☆58 · Updated 7 months ago
- ☆86 · Updated 2 months ago
- Study notes on ggml, a machine-learning inference framework ☆15 · Updated last year
- A simplified flash-attention implementation in CUTLASS, written for teaching purposes ☆41 · Updated 10 months ago
- ☆36 · Updated 8 months ago
- ☆127 · Updated 5 months ago
- ☆74 · Updated 6 months ago
- ☆29 · Updated 4 months ago
- ☆31 · Updated 9 months ago
- ☢️ TensorRT Hackathon 2023, second round: Llama model inference acceleration based on TensorRT-LLM ☆48 · Updated last year
- Inference of TinyLlama models on ncnn ☆24 · Updated last year
- Benchmark code for the "Online normalizer calculation for softmax" paper (see the sketch after this list) ☆94 · Updated 6 years ago
- A quantization algorithm for LLMs ☆141 · Updated last year
- Export Llama to ONNX ☆126 · Updated 5 months ago
- An easy-to-use package for implementing SmoothQuant for LLMs ☆100 · Updated 2 months ago
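
The benchmark entry above refers to the one-pass normalizer from Milakov and Gimelshein's "Online normalizer calculation for softmax" (2018). As a minimal sketch of that algorithm, not code from the benchmark repository (the function name and toy input are illustrative):

```python
import math

def online_softmax(xs):
    """Softmax with a one-pass normalizer (Milakov & Gimelshein, 2018).

    Keeps a running maximum m and a running denominator d; whenever a new
    maximum appears, d is rescaled by exp(m_old - m_new), so the max and
    the sum are computed in a single pass over the input.
    """
    m = float("-inf")  # running maximum
    d = 0.0            # running sum of exp(x_i - m)
    for x in xs:
        m_new = max(m, x)
        d = d * math.exp(m - m_new) + math.exp(x - m_new)
        m = m_new
    # The second pass only applies the already-computed normalizer.
    return [math.exp(x - m) / d for x in xs]

print(online_softmax([1.0, 2.0, 3.0]))  # ≈ [0.0900, 0.2447, 0.6652]
```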