intel / ipex-llm-tutorial
Accelerate LLM with low-bit (FP4 / INT4 / FP8 / INT8) optimizations using ipex-llm
☆168 · Updated 9 months ago
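The low-bit optimizations named above (FP4 / INT4 / FP8 / INT8) all rest on the same idea: store model weights in a narrow integer or float format plus a scale factor, and dequantize on the fly. The sketch below is illustrative only, not ipex-llm's actual implementation; it shows a symmetric INT4 round trip over a small weight vector in plain Python, the kind of transformation such libraries apply per weight tensor (or per block of weights).

```python
# Minimal sketch of symmetric per-tensor INT4 quantization.
# Illustrative only -- NOT the ipex-llm implementation, which works
# per block and in optimized native kernels.

def quantize_int4(weights):
    """Map floats to integer codes in [-8, 7] with one shared scale."""
    scale = max(abs(w) for w in weights) / 7.0  # 7 = largest positive INT4 value
    codes = [max(-8, min(7, round(w / scale))) for w in weights]
    return codes, scale

def dequantize_int4(codes, scale):
    """Recover approximate float weights from the INT4 codes."""
    return [c * scale for c in codes]

weights = [0.12, -0.53, 0.98, -0.07]
codes, scale = quantize_int4(weights)
restored = dequantize_int4(codes, scale)
print(codes)     # integer codes, each in [-8, 7]
print(restored)  # approximate reconstruction of the original weights
```

The trade-off the tutorial's libraries exploit is visible even here: each weight now occupies 4 bits instead of 32, at the cost of a reconstruction error bounded by half the scale step.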
Alternatives and similar repositories for ipex-llm-tutorial
Users interested in ipex-llm-tutorial are comparing it to the libraries listed below:
- ☆437 · Updated 4 months ago
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆274 · Updated 6 months ago
- LLM Inference benchmark ☆433 · Updated last year
- Ascend PyTorch adapter (torch_npu). Mirror of https://gitee.com/ascend/pytorch ☆483 · Updated last week
- ☆392 · Updated last week
- ☆183 · Updated last week
- Low-bit LLM inference on CPU/NPU with lookup table ☆916 · Updated 8 months ago
- ☆73 · Updated last year
- vLLM Documentation in Chinese Simplified / vLLM 中文文档 ☆154 · Updated last month
- Pretrain, finetune and serve LLMs on Intel platforms with Ray ☆131 · Updated 4 months ago
- Run generative AI models in sophgo BM1684X/BM1688 ☆266 · Updated 2 weeks ago
- LLM/MLOps/LLMOps ☆133 · Updated 8 months ago
- A lightweight LLM model inference framework ☆749 · Updated last year
- A high-performance inference system for large language models, designed for production environments. ☆491 · Updated last month
- ☆55 · Updated last year
- CPM.cu is a lightweight, high-performance CUDA implementation for LLMs, optimized for end-device inference and featuring cutting-edge tec… ☆225 · Updated 3 weeks ago
- LLM101n: Let's build a Storyteller (Chinese edition) ☆137 · Updated last year
- Run Generative AI models with simple C++/Python API and using OpenVINO Runtime ☆428 · Updated this week
- A high-performance deep-learning training platform with task-level time-sliced scheduling of GPU compute ☆733 · Updated 2 years ago
- Pretrain a wiki LLM using transformers ☆61 · Updated last year
- A small language model for Chinese-language scenarios: llama2.c-zh ☆150 · Updated last year
- ☆523 · Updated 2 weeks ago
- ☆325 · Updated 7 months ago
- FlagScale is a large model toolkit based on open-sourced projects. ☆471 · Updated last week
- Performance benchmarking of LLM inference services ☆44 · Updated 2 years ago
- Triton Documentation in Chinese Simplified / Triton 中文文档 ☆102 · Updated last month
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆76 · Updated last year
- ☆74 · Updated this week
- MindSpore online courses: Step into LLM ☆483 · Updated last month
- torch_musa is an open source repository based on PyTorch, which can make full use of the super computing power of MooreThreads graphics c… ☆475 · Updated this week