⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel Platforms ⚡
☆2,178 · Oct 8, 2024 · Updated last year
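For context on what intel-extension-for-transformers itself offers, here is a minimal sketch of 4-bit weight-only generation through its Transformers-style Python API. It is modeled on the project's documented example; the model id is only an illustration, and the exact `load_in_4bit` signature should be treated as an assumption to verify against the repository's README.

```python
# Minimal sketch (API assumed from the project's README):
# 4-bit weight-only quantized generation on an Intel platform.
from transformers import AutoTokenizer
from intel_extension_for_transformers.transformers import AutoModelForCausalLM

model_name = "Intel/neural-chat-7b-v3-1"  # example Hugging Face model id
prompt = "Once upon a time, there existed a little girl,"

tokenizer = AutoTokenizer.from_pretrained(model_name)
inputs = tokenizer(prompt, return_tensors="pt").input_ids

# load_in_4bit=True requests weight-only INT4 quantization at load time
model = AutoModelForCausalLM.from_pretrained(model_name, load_in_4bit=True)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```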
Alternatives and similar repositories for intel-extension-for-transformers
Users interested in intel-extension-for-transformers are comparing it to the libraries listed below.
- SOTA low-bit LLM quantization (INT8/FP8/MXFP8/INT4/MXFP4/NVFP4) & sparsity; leading model compression techniques on PyTorch, TensorFlow, … ☆2,612 · Updated this week
- An innovative library for efficient LLM inference via low-bit quantization ☆351 · Aug 30, 2024 · Updated last year
- A Python package that extends the official PyTorch to deliver better performance on Intel platforms ☆2,010 · Mar 30, 2026 · Updated last week
- High-speed Large Language Model Serving for Local Deployment ☆9,275 · Jan 24, 2026 · Updated 2 months ago
- Large Language Model Text Generation Inference ☆10,830 · Mar 21, 2026 · Updated 3 weeks ago
- SOTA rounding-based quantization for high-accuracy low-bit LLM inference, seamlessly optimized for CPU/XPU/CUDA, with multi-datatype supp… ☆957 · Updated this week
- ☆437 · Sep 18, 2025 · Updated 6 months ago
- A fast inference library for running LLMs locally on modern consumer-class GPUs ☆4,493 · Mar 4, 2026 · Updated last month
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks ☆7,208 · Jul 11, 2024 · Updated last year
- TensorRT LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizat… ☆13,304 · Updated this week
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆3,488 · Jul 17, 2025 · Updated 8 months ago
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm. ☆5,042 · Apr 11, 2025 · Updated last year
- Simple and efficient PyTorch-native transformer text generation in <1000 LOC of Python. ☆6,194 · Aug 22, 2025 · Updated 7 months ago
- Accessible large language models via k-bit quantization for PyTorch (a 4-bit loading sketch appears after this list). ☆8,107 · Updated this week
- Robust recipes to align language models with human and AI preferences ☆5,551 · Apr 2, 2026 · Updated last week
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. ☆2,322 · May 11, 2025 · Updated 11 months ago
- Universal LLM Deployment Engine with ML Compilation ☆22,414 · Updated this week
- Transformer-related optimization, including BERT and GPT ☆6,410 · Mar 27, 2024 · Updated 2 years ago
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆2,107 · Jun 30, 2025 · Updated 9 months ago
- Tensor library for machine learning ☆14,394 · Updated this week
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,720 · Jun 25, 2024 · Updated last year
- SGLang is a high-performance serving framework for large language models and multimodal models. ☆25,643 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs (an offline-generation sketch appears after this list) ☆75,637 · Updated this week
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ☆8,933 · May 3, 2024 · Updated last year
- Tools for merging pretrained large language models. ☆6,945 · Mar 15, 2026 · Updated 3 weeks ago
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆561 · Apr 2, 2026 · Updated last week
- 20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale. ☆13,280 · Apr 4, 2026 · Updated last week
- Serving multiple LoRA-finetuned LLMs as one ☆1,152 · May 8, 2024 · Updated last year
- Sparsity-aware deep learning inference runtime for CPUs ☆3,163 · Jun 2, 2025 · Updated 10 months ago
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,907 · Jan 21, 2024 · Updated 2 years ago
- Fast and memory-efficient exact attention ☆23,185 · Updated this week
- FlashInfer: Kernel Library for LLM Serving ☆5,273 · Apr 4, 2026 · Updated last week
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,327 · Mar 6, 2025 · Updated last year
- Go ahead and axolotl questions ☆11,608 · Updated this week
- Fast inference engine for Transformer models ☆4,417 · Feb 4, 2026 · Updated 2 months ago
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,865 · Jun 10, 2024 · Updated last year
- Automatically Discovering Fast Parallelization Strategies for Distributed Deep Neural Network Training ☆1,872 · Updated this week
- 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy to use hardware optimization… ☆3,354 · Apr 2, 2026 · Updated last week
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆3,158 · Updated this week
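For the bitsandbytes entry above ("Accessible large language models via k-bit quantization for PyTorch"), here is a minimal sketch of 4-bit NF4 loading through Hugging Face Transformers' `BitsAndBytesConfig` integration. The model id is only an example, and a CUDA-capable GPU is assumed.

```python
# Minimal sketch: load a causal LM in 4-bit NF4 via bitsandbytes + Transformers.
# Assumes a CUDA GPU; the model id is just an example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NormalFloat4 weight quantization
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute dtype for matmuls
)

model_id = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",                      # place layers on the available GPU(s)
)

inputs = tokenizer("Quantization lets a 7B model fit in", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```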
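For the vLLM entry above ("A high-throughput and memory-efficient inference and serving engine for LLMs"), here is a minimal sketch of its offline batched-generation API. The model id is an example and a supported GPU is assumed; the serving mode (OpenAI-compatible server) is not shown.

```python
# Minimal sketch: offline batched generation with vLLM (assumes a supported GPU).
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # example model id
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

prompts = ["The capital of France is", "PagedAttention helps because"]
for output in llm.generate(prompts, params):
    # each RequestOutput carries the prompt and its generated completions
    print(output.prompt, "->", output.outputs[0].text)
```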