intel / ipex-llm-tutorial
Accelerate LLMs with low-bit (FP4 / INT4 / FP8 / INT8) optimizations using ipex-llm
☆168 · Updated 5 months ago
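The tutorial's core workflow is loading a Hugging Face model with ipex-llm's low-bit optimizations applied on the fly. A minimal sketch of that flow, assuming ipex-llm is installed and a local checkpoint is available; the model path and prompt below are placeholders, not taken from this page:

```python
# Minimal ipex-llm low-bit loading sketch (assumes `pip install ipex-llm`).
# The checkpoint path is hypothetical.
from ipex_llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

model_path = "/path/to/llama-2-7b-chat-hf"  # placeholder local checkpoint

# load_in_4bit=True quantizes Linear-layer weights to INT4 while loading,
# cutting weight memory roughly 4x versus fp16.
model = AutoModelForCausalLM.from_pretrained(model_path, load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained(model_path)

inputs = tokenizer("What does low-bit quantization trade away?", return_tensors="pt")
output = model.generate(inputs.input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

ipex-llm also exposes a `load_in_low_bit` argument for the other formats in the title (e.g. `"fp4"`, `"sym_int8"`, `"fp8"`); see the tutorial for the exact option names.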
Alternatives and similar repositories for ipex-llm-tutorial
Users interested in ipex-llm-tutorial are comparing it to the repositories listed below.
- LLM Inference benchmark ☆426 · Updated last year
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆265 · Updated 2 months ago
- Triton documentation in Simplified Chinese / Triton 中文文档 ☆85 · Updated 5 months ago
- Ascend PyTorch adapter (torch_npu). Mirror of https://gitee.com/ascend/pytorch ☆437 · Updated 3 weeks ago
- Omni_Infer is a suite of inference accelerators designed for the Ascend NPU platform, offering native support and an expanding feature se… ☆73 · Updated last week
- Run DeepSeek-R1 GGUFs on KTransformers ☆252 · Updated 7 months ago
- Performance testing for LLM inference services (LLM 推理服务性能测试) ☆43 · Updated last year
- llama2.c-zh: a small language model supporting Chinese-language scenarios ☆150 · Updated last year
- Pretrain a wiki LLM using transformers ☆51 · Updated last year
- C++ implementation of Qwen-LM ☆606 · Updated 10 months ago
- Inferflow is an efficient and highly configurable inference engine for large language models (LLMs). ☆248 · Updated last year
- Pretrain, finetune, and serve LLMs on Intel platforms with Ray ☆132 · Updated 2 weeks ago
- FlagScale is a large-model toolkit based on open-source projects. ☆358 · Updated last week
- llm-inference is a platform for publishing and managing LLM inference, providing a wide range of out-of-the-box features for model deploy… ☆86 · Updated last year
- LLM101n: Let's build a Storyteller (Chinese edition) ☆132 · Updated last year
- Accelerate inference without tears ☆333 · Updated last week
- LLM/MLOps/LLMOps ☆116 · Updated 4 months ago
- CPM.cu is a lightweight, high-performance CUDA implementation for LLMs, optimized for end-device inference and featuring cutting-edge tec… ☆197 · Updated 3 weeks ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆64 · Updated 11 months ago
- Run generative AI models on Sophgo BM1684X/BM1688 ☆248 · Updated this week
- Compare different hardware platforms via the Roofline Model for LLM inference tasks; a minimal roofline sketch follows this list. ☆115 · Updated last year
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆874 · Updated last week
- Community-maintained hardware plugin for vLLM on Ascend ☆1,179 · Updated last week
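The roofline comparison mentioned in the list reduces to one formula: attainable throughput = min(peak compute, memory bandwidth × arithmetic intensity). A minimal sketch of that calculation, using illustrative hardware numbers that are assumptions here, not figures from the repository:

```python
# Roofline model sketch: attainable compute is capped either by the chip's
# peak FLOPS or by how fast memory can feed the compute units.
def attainable_tflops(peak_tflops: float, bandwidth_tb_s: float,
                      intensity_flops_per_byte: float) -> float:
    # TB/s * FLOPs/byte = TFLOP/s, so units match peak_tflops.
    return min(peak_tflops, bandwidth_tb_s * intensity_flops_per_byte)

# Batch-1 fp16 decoding reads every weight once per generated token:
# ~2 FLOPs and ~2 bytes per parameter -> arithmetic intensity of ~1 FLOP/byte.
INTENSITY = 1.0  # FLOPs per byte (illustrative)

# Hypothetical accelerators: (name, peak TFLOPS, bandwidth in TB/s).
for name, peak, bw in [("chip-A", 312.0, 2.0), ("chip-B", 989.0, 3.35)]:
    t = attainable_tflops(peak, bw, INTENSITY)
    bound = "bandwidth-bound" if t < peak else "compute-bound"
    print(f"{name}: {t:.1f} TFLOPS attainable ({bound})")
```

At an intensity of roughly 1 FLOP/byte, both example chips land far below their compute peaks, which is why single-stream LLM decoding is usually compared by memory bandwidth rather than raw FLOPS.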