intel / ipex-llm-tutorial
Accelerate LLMs with low-bit (FP4 / INT4 / FP8 / INT8) optimizations using ipex-llm
☆165 · Updated 2 months ago
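For context before the list of alternatives: ipex-llm exposes its low-bit optimizations through a drop-in replacement for the Hugging Face transformers loading API, which is the pattern this tutorial walks through. Below is a minimal sketch, assuming ipex-llm is installed; the model path is a hypothetical placeholder, and the available `load_in_low_bit` precision strings should be checked against the ipex-llm documentation.

```python
# Minimal sketch of low-bit loading with ipex-llm (assumes `pip install ipex-llm[all]`).
from ipex_llm.transformers import AutoModelForCausalLM  # drop-in for transformers
from transformers import AutoTokenizer

model_path = "meta-llama/Llama-2-7b-chat-hf"  # hypothetical example model path

# load_in_4bit=True quantizes weights to INT4 at load time;
# load_in_low_bit="fp8" (or "sym_int8", "fp4", ...) selects other precisions.
model = AutoModelForCausalLM.from_pretrained(model_path, load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained(model_path)

inputs = tokenizer("What is low-bit quantization?", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```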
Alternatives and similar repositories for ipex-llm-tutorial
Users interested in ipex-llm-tutorial are comparing it to the libraries listed below.
- ☆429 · Updated this week
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆261 · Updated last month
- LLM Inference benchmark ☆423 · Updated 11 months ago
- llm-inference is a platform for publishing and managing LLM inference, providing a wide range of out-of-the-box features for model deploy… ☆85 · Updated last year
- Triton documentation in Simplified Chinese / Triton 中文文档 ☆75 · Updated 3 months ago
- Run generative AI models on Sophgo BM1684X/BM1688 ☆227 · Updated this week
- ☆313 · Updated last week
- Ascend PyTorch adapter (torch_npu). Mirror of https://gitee.com/ascend/pytorch ☆392 · Updated this week
- C++ implementation of Qwen-LM ☆600 · Updated 7 months ago
- Pretrain, finetune and serve LLMs on Intel platforms with Ray ☆129 · Updated 2 weeks ago
- Low-bit LLM inference on CPU/NPU with lookup table ☆827 · Updated last month
- FlagScale is a large-model toolkit based on open-source projects. ☆327 · Updated this week
- vLLM documentation in Simplified Chinese / vLLM 中文文档 ☆85 · Updated 2 months ago
- ☆169 · Updated this week
- llama2.c-zh, a small language model supporting Chinese-language scenarios ☆147 · Updated last year
- CPM.cu is a lightweight, high-performance CUDA implementation for LLMs, optimized for end-device inference and featuring cutting-edge tec… ☆161 · Updated last week
- ☆466 · Updated this week
- LLM/MLOps/LLMOps ☆98 · Updated last month
- ☆89 · Updated last month
- FlagPerf is an open-source software platform for benchmarking AI chips. ☆343 · Updated last month
- llm-export can export LLM models to ONNX. ☆301 · Updated 6 months ago
- LLM101n: Let's build a Storyteller (Chinese translation) ☆131 · Updated 11 months ago
- A lightweight LLM inference framework ☆732 · Updated last year
- Inferflow is an efficient and highly configurable inference engine for large language models (LLMs). ☆243 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆52 · Updated 8 months ago
- A high-performance inference system for large language models, designed for production environments. ☆456 · Updated this week
- ☆48 · Updated 8 months ago
- Run DeepSeek-R1 GGUFs on KTransformers ☆242 · Updated 4 months ago
- LLM inference service performance testing ☆42 · Updated last year
- Efficient, Flexible, and Highly Fault-Tolerant Model Service Management Based on SGLang ☆54 · Updated 8 months ago