ubergarm / r1-ktransformers-guide
Run DeepSeek-R1 GGUFs on KTransformers
☆259 · Updated 9 months ago
Alternatives and similar repositories for r1-ktransformers-guide
Users interested in r1-ktransformers-guide are comparing it to the repositories listed below.
- LM inference server implementation based on *.cpp. ☆294 · Updated last month
- High-performance inference framework for large language models, focusing on efficiency, flexibility, and availability. ☆1,364 · Updated this week
- ☆384 · Updated this week
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆271 · Updated 4 months ago
- gpt_server is an open-source framework for production-grade deployment of LLMs, embeddings, rerankers, ASR, TTS, text-to-image, image editing, and text-to-video. ☆243 · Updated last week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆72 · Updated last year
- ☆341 · Updated 2 months ago
- Review/check GGUF files and estimate their memory usage and maximum tokens per second. ☆222 · Updated 4 months ago
- LvLLM is a NUMA extension of vLLM that makes full use of CPU and memory resources, reduces GPU memory requirements, and features … ☆97 · Updated this week
- LLM model quantization (compression) toolkit with hardware acceleration support for Nvidia CUDA, AMD ROCm, Intel XPU, and Intel/AMD/Apple CPUs vi… ☆952 · Updated this week
- One-click deployment script for KTransformers ☆55 · Updated 8 months ago
- CPM.cu is a lightweight, high-performance CUDA implementation for LLMs, optimized for end-device inference and featuring cutting-edge tec… ☆213 · Updated 2 months ago
- LLM concurrency benchmarking tool, supporting automated stress testing and performance report generation. ☆201 · Updated 3 weeks ago
- ☆434 · Updated 3 months ago
- Community-maintained hardware plugin for vLLM on Ascend ☆1,520 · Updated this week
- vLLM for AMD gfx906 GPUs, e.g. Radeon VII / MI50 / MI60 ☆352 · Updated this week
- A high-performance deep learning training platform with task-level time-sharing scheduling of GPU compute ☆727 · Updated 2 years ago
- A flexible framework for experiencing cutting-edge LLM inference optimizations ☆44 · Updated 8 months ago
- CPU inference for the DeepSeek family of large language models in C++ ☆317 · Updated 2 months ago
- LLM inference benchmark ☆430 · Updated last year
- C++ implementation of Qwen-LM ☆612 · Updated last year
- A Hugging Face mirror site. ☆321 · Updated last year
- A text-to-speech and speech-to-text server compatible with the OpenAI API, supporting Whisper, FunASR, Bark, and CosyVoice backends. ☆186 · Updated last week
- Converts files into Markdown to help RAG pipelines or LLMs understand them; based on markitdown and MinerU, which provide a high-quality PDF parser. ☆131 · Updated 9 months ago
- vLLM documentation in Simplified Chinese / vLLM 中文文档 ☆138 · Updated 2 weeks ago
- Accelerate LLMs with low-bit (FP4 / INT4 / FP8 / INT8) optimizations using ipex-llm ☆169 · Updated 8 months ago
- xllamacpp, a Python wrapper of llama.cpp ☆68 · Updated this week
- A streamlined and customizable framework for efficient large model (LLM, VLM, AIGC) evaluation and performance benchmarking. ☆2,191 · Updated this week
- Adds a 🚀 streaming web server to GraphRAG, compatible with the OpenAI SDK; supports accessible entity linking 🔗 and suggested questions, works with local embedding models, and fixes many issues. ☆262 · Updated 9 months ago
- ☆349 · Updated last year