intel / ipex-llm-tutorial
Accelerate LLM with low-bit (FP4 / INT4 / FP8 / INT8) optimizations using ipex-llm
☆169Updated 8 months ago
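As context for the low-bit (FP4 / INT4 / FP8 / INT8) optimizations this tutorial covers, here is a minimal sketch of what symmetric INT4 weight quantization does conceptually. It is an illustration of the idea only, not ipex-llm's actual implementation; all names are hypothetical.

```python
# Minimal sketch of symmetric INT4 quantization: float weights are mapped
# to integers in [-8, 7] with one shared scale, then reconstructed by
# multiplying the integer code back by the scale. (Illustration only --
# not the ipex-llm kernels, which pack codes and quantize per group.)

def quantize_int4(weights):
    """Map float weights to INT4 codes in [-8, 7] with a shared scale."""
    scale = max(abs(w) for w in weights) / 7.0
    codes = [max(-8, min(7, round(w / scale))) for w in weights]
    return codes, scale

def dequantize(codes, scale):
    """Reconstruct approximate float weights from INT4 codes."""
    return [c * scale for c in codes]

weights = [0.42, -1.3, 0.07, 0.9]
codes, scale = quantize_int4(weights)
approx = dequantize(codes, scale)
# Each reconstructed weight lands within half a quantization step.
assert all(abs(w - a) <= scale / 2 + 1e-9 for w, a in zip(weights, approx))
```

Storing 4-bit codes plus one scale per group is what cuts memory traffic roughly 4x versus FP16, which is where most of the inference speedup on bandwidth-bound hardware comes from.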
Alternatives and similar repositories for ipex-llm-tutorial
Users interested in ipex-llm-tutorial are comparing it to the libraries listed below
- ☆434Updated 3 months ago
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including …☆271Updated 4 months ago
- LLM Inference benchmark☆430Updated last year
- ☆384Updated this week
- Ascend PyTorch adapter (torch_npu). Mirror of https://gitee.com/ascend/pytorch☆468Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs☆72Updated last year
- ☆180Updated last week
- FlagScale is a large model toolkit based on open-sourced projects.☆430Updated last week
- Omni_Infer is a suite of inference accelerators designed for the Ascend NPU platform, offering native support and an expanding feature se…☆94Updated this week
- Run generative AI models in sophgo BM1684X/BM1688☆257Updated last week
- ☆518Updated last month
- C++ implementation of Qwen-LM☆612Updated last year
- Triton documentation in Simplified Chinese / Triton 中文文档☆96Updated 2 weeks ago
- llama2.c-zh: a small language model supporting Chinese-language scenarios☆150Updated last year
- run DeepSeek-R1 GGUFs on KTransformers☆259Updated 9 months ago
- LLM101n: Let's build a Storyteller (Chinese translation)☆137Updated last year
- Pretrain, finetune and serve LLMs on Intel platforms with Ray☆131Updated 3 months ago
- CPM.cu is a lightweight, high-performance CUDA implementation for LLMs, optimized for end-device inference and featuring cutting-edge tec…☆213Updated 2 months ago
- Efficient AI Inference & Serving☆478Updated last year
- Community maintained hardware plugin for vLLM on Ascend☆1,520Updated this week
- [EMNLP 2024 & AAAI 2026] A powerful toolkit for compressing large models including LLMs, VLMs, and video generative models.☆647Updated last month
- torch_musa is an open source repository based on PyTorch, which can make full use of the super computing power of MooreThreads graphics c…☆464Updated last month
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications.☆972Updated this week
- Inferflow is an efficient and highly configurable inference engine for large language models (LLMs).☆250Updated last year
- ☆77Updated last year
- ☆73Updated last year
- a lightweight LLM model inference framework☆746Updated last year
- Low-bit LLM inference on CPU/NPU with lookup table☆903Updated 6 months ago
- A high-performance deep learning training platform with task-level time-sharing scheduling of GPU compute☆727Updated 2 years ago
- ☆157Updated 3 weeks ago
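One entry above describes low-bit LLM inference on CPU/NPU with a lookup table. The core trick is to precompute, for each small group of activations, the partial sum for every possible low-bit weight pattern, so the inner loop does table lookups instead of multiplies. The sketch below illustrates this for 2-bit weights in pairs; it is a hypothetical simplification (real kernels pack bits and vectorize the lookups), and all names are invented for illustration.

```python
# Lookup-table (LUT) based low-bit dot product, sketched for 2-bit weights.
# For each pair of activations we precompute a 16-entry table covering every
# possible pair of 2-bit weight codes; the dot product then needs one table
# lookup per weight pair instead of two multiplies. Assumes an even number
# of activations. (Conceptual sketch only, not any particular kernel.)

def build_luts(acts, codebook):
    """Precompute a0*codebook[c0] + a1*codebook[c1] for all 16 code pairs."""
    luts = []
    for i in range(0, len(acts), 2):
        a0, a1 = acts[i], acts[i + 1]
        luts.append([a0 * codebook[c0] + a1 * codebook[c1]
                     for c0 in range(4) for c1 in range(4)])
    return luts

def lut_dot(codes, luts):
    """Dot product of 2-bit-coded weights with activations, lookups only."""
    return sum(luts[j][codes[2 * j] * 4 + codes[2 * j + 1]]
               for j in range(len(luts)))

codebook = [-1.5, -0.5, 0.5, 1.5]   # the four values a 2-bit weight can take
acts = [0.2, -0.4, 1.0, 0.3]        # activation vector
codes = [3, 0, 2, 1]                # quantized 2-bit weight codes
luts = build_luts(acts, codebook)
reference = sum(acts[i] * codebook[codes[i]] for i in range(len(acts)))
assert abs(lut_dot(codes, luts) - reference) < 1e-12
```

With larger group sizes the table amortizes even better, which is why this approach suits CPUs and NPUs whose table-lookup instructions are cheaper than wide multiply-accumulate on dequantized values.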