ubergarm / r1-ktransformers-guide
run DeepSeek-R1 GGUFs on KTransformers
☆251 · Updated 6 months ago
Alternatives and similar repositories for r1-ktransformers-guide
Users interested in r1-ktransformers-guide are comparing it to the libraries listed below.
- High-performance inference framework for large language models, focusing on efficiency, flexibility, and availability. ☆1,278 · Updated this week
- LM inference server implementation based on *.cpp. ☆273 · Updated last month
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆264 · Updated last month
- gpt_server is an open-source framework for production-grade deployment of LLMs, embedding, reranker, ASR, TTS, text-to-image, image-editing, and text-to-video services. ☆209 · Updated this week
- vLLM for AMD gfx906 GPUs, e.g. Radeon VII / MI50 / MI60. ☆247 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs. ☆62 · Updated 10 months ago
- LLM inference benchmark. ☆426 · Updated last year
- C++ implementation of Qwen-LM. ☆605 · Updated 9 months ago
- llama2.c-zh: a small language model supporting Chinese-language scenarios. ☆149 · Updated last year
- CPU inference for the DeepSeek family of large language models in C++. ☆313 · Updated 3 months ago
- CPM.cu is a lightweight, high-performance CUDA implementation for LLMs, optimized for end-device inference and featuring cutting-edge tec… ☆192 · Updated this week
- A huggingface mirror site. ☆302 · Updated last year
- Review/check GGUF files and estimate the memory usage and maximum tokens per second. ☆205 · Updated last month
- ☆428 · Updated this week
- Chinese Mixtral mixture-of-experts large language models (Chinese Mixtral MoE LLMs). ☆608 · Updated last year
- One-click deployment script for KTransformers. ☆51 · Updated 5 months ago
- Community-maintained hardware plugin for vLLM on Ascend. ☆1,128 · Updated this week
- Phi2-Chinese-0.2B: train your own small Phi2 Chinese chat model from scratch, with langchain integration for retrieval-augmented generation (RAG) over a local knowledge base. ☆566 · Updated last year
- LLM model quantization (compression) toolkit with hw acceleration support for Nvidia CUDA, AMD ROCm, Intel XPU and Intel/AMD/Apple CPU vi… ☆784 · Updated this week
- A Flexible Framework for Experiencing Cutting-edge LLM Inference Optimizations. ☆43 · Updated 4 months ago
- ☆353 · Updated this week
- Pure C++ implementation of several models for real-time chatting on your computer (CPU & GPU). ☆704 · Updated this week
- ☆50 · Updated 10 months ago
- Low-bit LLM inference on CPU/NPU with lookup table. ☆857 · Updated 3 months ago
- ☆329 · Updated this week
- This is a user guide for the MiniCPM and MiniCPM-V series of small language models (SLMs) developed by ModelBest. "面壁小钢炮" focuses on achi… ☆291 · Updated 2 months ago
- Concurrency performance testing tool for LLMs, supporting automated stress testing and performance report generation. ☆159 · Updated 5 months ago
- ☆135 · Updated 7 months ago
- Accelerate LLM with low-bit (FP4 / INT4 / FP8 / INT8) optimizations using ipex-llm. ☆168 · Updated 4 months ago
- A streamlined and customizable framework for efficient large model evaluation and performance benchmarking. ☆1,690 · Updated this week