ubergarm / r1-ktransformers-guide
run DeepSeek-R1 GGUFs on KTransformers
☆250 · Updated 5 months ago
Alternatives and similar repositories for r1-ktransformers-guide
Users interested in r1-ktransformers-guide are comparing it to the repositories listed below.
- High-performance inference framework for large language models, focusing on efficiency, flexibility, and availability. ☆1,245 · Updated this week
- LM inference server implementation based on *.cpp. ☆271 · Updated 2 weeks ago
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆263 · Updated 3 weeks ago
- One-click deployment script for KTransformers ☆50 · Updated 4 months ago
- vLLM for AMD gfx906 GPUs, e.g. Radeon VII / MI50 / MI60 ☆194 · Updated this week
- gpt_server is an open-source framework for production-grade deployment of LLMs, Embedding, Reranker, ASR, and TTS. ☆205 · Updated last week
- Review/check GGUF files and estimate their memory usage and maximum tokens per second. ☆200 · Updated last week
- Community-maintained hardware plugin for vLLM on Ascend ☆1,048 · Updated this week
- Production-ready LLM model compression/quantization toolkit with hardware-accelerated inference support for both CPU/GPU via HF, vLLM, and SGLa… ☆742 · Updated this week
- LLM inference benchmark ☆426 · Updated last year
- ☆427 · Updated last week
- C++ implementation of Qwen-LM ☆609 · Updated 8 months ago
- Small language models for Chinese-language scenarios: llama2.c-zh ☆149 · Updated last year
- A streamlined and customizable framework for efficient large-model evaluation and performance benchmarking ☆1,545 · Updated last week
- Phi3 Chinese post-training model repository ☆321 · Updated 9 months ago
- CPU inference for the DeepSeek family of large language models in C++ ☆310 · Updated 2 months ago
- Ascend PyTorch adapter (torch_npu). Mirror of https://gitee.com/ascend/pytorch ☆413 · Updated this week
- GraphGen: Enhancing Supervised Fine-Tuning for LLMs with Knowledge-Driven Synthetic Data Generation ☆324 · Updated this week
- CPM.cu is a lightweight, high-performance CUDA implementation for LLMs, optimized for end-device inference and featuring cutting-edge tec… ☆172 · Updated 3 weeks ago
- xllamacpp - a Python wrapper of llama.cpp ☆52 · Updated this week
- ☆341 · Updated this week
- Chinese Mixtral Mixture-of-Experts LLMs (Chinese Mixtral MoE LLMs) ☆608 · Updated last year
- ☆326 · Updated last month
- Low-bit LLM inference on CPU/NPU with lookup table ☆846 · Updated 2 months ago
- A Hugging Face mirror site. ☆296 · Updated last year
- Convert files into Markdown to help RAG or LLM pipelines understand them; based on markitdown and MinerU, which provide high-quality PDF parsing. ☆124 · Updated 5 months ago
- A high-performance deep learning training platform with task-level time-sharing scheduling of GPU compute ☆692 · Updated last year
- A text-to-speech and speech-to-text server compatible with the OpenAI API, supporting Whisper, FunASR, Bark, and CosyVoice backends. ☆154 · Updated last month
- ☆263 · Updated 8 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆56 · Updated 10 months ago