intel / ipex-llm-tutorial
Accelerate LLM with low-bit (FP4 / INT4 / FP8 / INT8) optimizations using ipex-llm
☆164 · Updated 2 weeks ago
Alternatives and similar repositories for ipex-llm-tutorial
Users interested in ipex-llm-tutorial are comparing it to the libraries listed below.
- ☆425 · Updated this week
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆251 · Updated this week
- llm-export can export LLM models to ONNX. ☆289 · Updated 3 months ago
- Community-maintained hardware plugin for vLLM on Ascend ☆631 · Updated this week
- LLM inference benchmark ☆417 · Updated 9 months ago
- Pretrain a wiki LLM using transformers ☆42 · Updated 8 months ago
- LLM/MLOps/LLMOps ☆86 · Updated 8 months ago
- Export LLaMA to ONNX ☆124 · Updated 4 months ago
- ☆162 · Updated last month
- Ascend PyTorch adapter (torch_npu). Mirror of https://gitee.com/ascend/pytorch ☆354 · Updated this week
- Run generative AI models with a simple C++/Python API using the OpenVINO Runtime ☆274 · Updated this week
- [EMNLP 2024 Industry Track] Official PyTorch implementation of "LLMC: Benchmarking Large Language Model Quantization with a V… ☆473 · Updated this week
- Pretrain, finetune, and serve LLMs on Intel platforms with Ray ☆126 · Updated 2 weeks ago
- ☆48 · Updated last week
- unify-easy-llm (ULM) aims to provide a simple one-click training tool for large models, supporting different hardware such as Nvidia GPUs and Ascend NPUs as well as common large models. ☆55 · Updated 9 months ago
- A lightweight llama-like LLM inference framework based on the Triton kernel. ☆115 · Updated last week
- Phi2-Chinese-0.2B: train your own small Chinese Phi2 chat model from scratch, with langchain integration for retrieval-augmented generation (RAG) over a local knowledge base. ☆551 · Updated 10 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆49 · Updated 6 months ago
- Run DeepSeek-R1 GGUFs on KTransformers ☆227 · Updated 2 months ago
- Qwen (通义千问) vLLM inference and deployment demo ☆577 · Updated last year
- LLM theoretical performance analysis tool supporting params, FLOPs, memory, and latency analysis. ☆88 · Updated 4 months ago
- A lightweight LLM inference framework ☆728 · Updated last year
- vLLM documentation in Simplified Chinese / vLLM 中文文档 ☆65 · Updated 3 weeks ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆37 · Updated 4 months ago
- ☆308 · Updated 3 weeks ago
- ☆50 · Updated 5 months ago
- Materials for learning SGLang ☆408 · Updated 2 weeks ago
- ☆44 · Updated 6 months ago
- Triton documentation in Simplified Chinese / Triton 中文文档 ☆69 · Updated last month
- Run ChatGLM2-6B on BM1684X ☆49 · Updated last year