ubergarm / r1-ktransformers-guide
run DeepSeek-R1 GGUFs on KTransformers
☆ 227 · Updated 2 months ago
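For context on what the guide itself covers: KTransformers ships a `local_chat.py` entry point that pairs a Hugging Face model config with locally downloaded GGUF shards. The Python sketch below simply shells out to that script; the paths and thread count are placeholders, and the flag names follow the upstream KTransformers README, so they may differ between releases and from the exact commands in this guide.

```python
import subprocess

# Minimal sketch of launching DeepSeek-R1 GGUFs through KTransformers' local_chat
# script. The GGUF directory and --cpu_infer thread count are placeholders;
# flag names (--model_path, --gguf_path, --cpu_infer) come from the upstream
# KTransformers README and may vary by release.
subprocess.run(
    [
        "python", "ktransformers/local_chat.py",
        "--model_path", "deepseek-ai/DeepSeek-R1",  # HF repo providing config/tokenizer
        "--gguf_path", "./DeepSeek-R1-GGUF/",       # local directory holding the GGUF shards
        "--cpu_infer", "24",                        # CPU threads used for the expert layers
    ],
    check=True,
)
```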
Alternatives and similar repositories for r1-ktransformers-guide
Users interested in r1-ktransformers-guide are comparing it to the libraries listed below
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆ 251 · Updated this week
- LM inference server implementation based on *.cpp. ☆ 191 · Updated this week
- High-performance inference framework for large language models, focusing on efficiency, flexibility, and availability. ☆ 1,113 · Updated this week
- gpt_server is an open-source framework for production-grade deployment of LLMs, Embedding, Reranker, ASR, and TTS. ☆ 180 · Updated this week
- LLM Inference benchmark ☆ 417 · Updated 9 months ago
- A streamlined and customizable framework for efficient large model evaluation and performance benchmarking ☆ 961 · Updated this week
- Mixture-of-Experts (MoE) Language Model ☆ 186 · Updated 8 months ago
- Community maintained hardware plugin for vLLM on Ascend ☆ 631 · Updated this week
- ☆ 131 · Updated 3 months ago
- Review/Check GGUF files and estimate the memory usage and maximum tokens per second. ☆ 164 · Updated this week
- Production ready LLM model compression/quantization toolkit with hw accelerated inference support for both cpu/gpu via HF, vLLM, and SGLa… ☆ 558 · Updated last week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆ 37 · Updated 4 months ago
- A Flexible Framework for Experiencing Cutting-edge LLM Inference Optimizations ☆ 40 · Updated 2 weeks ago
- ☆ 44 · Updated 6 months ago
- ☆ 310 · Updated 5 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆ 131 · Updated 10 months ago
- ☆ 142 · Updated 11 months ago
- [EMNLP 2024 Industry Track] This is the official PyTorch implementation of "LLMC: Benchmarking Large Language Model Quantization with a V… ☆ 473 · Updated this week
- LLM concurrency performance testing tool, with support for automated stress testing and performance report generation. ☆ 63 · Updated last month
- Delta-CoMe achieves near-lossless 1-bit compression and has been accepted at NeurIPS 2024 ☆ 57 · Updated 6 months ago
- DFloat11: Lossless LLM Compression for Efficient GPU Inference ☆ 351 · Updated last week
- Phi2-Chinese-0.2B: train your own small Chinese Phi2 chat model from scratch, with LangChain integration for loading a local knowledge base for retrieval-augmented generation (RAG). ☆ 551 · Updated 10 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆ 49 · Updated 6 months ago
- ☆ 691 · Updated last month
- ☆ 425 · Updated this week
- GLM Series Edge Models ☆ 139 · Updated 2 months ago
- This is a user guide for the MiniCPM and MiniCPM-V series of small language models (SLMs) developed by ModelBest. “面壁小钢炮” focuses on achi… ☆ 236 · Updated 6 months ago
- ☆ 228 · Updated 2 months ago
- Yuan 2.0 Large Language Model ☆ 683 · Updated 10 months ago
- A demo built on Megrez-3B-Instruct, integrating a web search tool to enhance the model's question-and-answer capabilities. ☆ 38 · Updated 5 months ago