intel / intel-extension-for-transformers
⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel Platforms⚡
☆2,138 · Updated last month
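As a quick illustration of the low-bit LLM path mentioned in the tagline, below is a minimal sketch of weight-only 4-bit loading through the project's Transformers-style API. The wrapper class, the `load_in_4bit` flag, and the example checkpoint are taken from the project's README and may differ across releases; treat it as a sketch, not the definitive usage.

```python
# Minimal sketch (API names assumed from the project's README): load a causal LM
# with weight-only 4-bit quantization and generate a short completion.
from transformers import AutoTokenizer
from intel_extension_for_transformers.transformers import AutoModelForCausalLM

model_name = "Intel/neural-chat-7b-v3-1"     # example checkpoint; any causal LM should work
prompt = "Once upon a time, there existed a little girl,"

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
inputs = tokenizer(prompt, return_tensors="pt").input_ids

# load_in_4bit selects the weight-only quantized kernels on Intel platforms (assumed flag)
model = AutoModelForCausalLM.from_pretrained(model_name, load_in_4bit=True)
outputs = model.generate(inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```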
Related projects
Alternatives and complementary repositories for intel-extension-for-transformers
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆2,526 · Updated last month
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. ☆1,765 · Updated this week
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆1,904 · Updated this week
- SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX Runtime ☆2,227 · Updated this week
- FlashInfer: Kernel Library for LLM Serving ☆1,452 · Updated this week
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers". ☆1,941 · Updated 7 months ago
- 🚀 Accelerate training and inference of 🤗 Transformers and 🤗 Diffusers with easy-to-use hardware optimization tools ☆2,576 · Updated this week
- A fast inference library for running LLMs locally on modern consumer-class GPUs ☆3,680 · Updated this week
- Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs ☆2,205 · Updated this week
- Python bindings for the Transformer models implemented in C/C++ using the GGML library. ☆1,814 · Updated 9 months ago
- PyTorch native quantization and sparsity for training and inference ☆1,585 · Updated this week
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs… ☆1,979 · Updated this week
- SGLang is a fast serving framework for large language models and vision language models. ☆6,127 · Updated this week
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalability… ☆2,613 · Updated this week
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models ☆1,257 · Updated 4 months ago
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm. ☆4,497 · Updated last month
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,755 · Updated 10 months ago
- A blazing fast inference solution for text embeddings models ☆2,846 · Updated 2 weeks ago
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,149 · Updated last month
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,312 · Updated 4 months ago
- Simple and efficient PyTorch-native transformer text generation in <1000 LOC of Python. ☆5,669 · Updated last month
- A throughput-oriented high-performance serving framework for LLMs ☆636 · Updated 2 months ago
- TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain… ☆8,681 · Updated last week
- Accessible large language models via k-bit quantization for PyTorch; see the 4-bit loading sketch after this list. ☆6,299 · Updated this week
- Official implementation of Half-Quadratic Quantization (HQQ) ☆701 · Updated last week
- The Triton TensorRT-LLM Backend ☆706 · Updated this week
- An innovative library for efficient LLM inference via low-bit quantization ☆348 · Updated 2 months ago
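Several entries above (bitsandbytes, AutoGPTQ, AutoAWQ) plug their quantization back ends into the Hugging Face Transformers loading path. As a point of comparison with the weight-only sketch near the top of this page, here is a minimal sketch of 4-bit NF4 loading through bitsandbytes; the checkpoint name is a placeholder and the exact keyword set may differ across transformers/bitsandbytes versions.

```python
# Minimal sketch (assumed recent library versions): 4-bit NF4 weight quantization
# with bitsandbytes via the Transformers integration. The checkpoint is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize linear weights to 4 bits at load time
    bnb_4bit_quant_type="nf4",              # NormalFloat4 storage format
    bnb_4bit_compute_dtype=torch.bfloat16,  # run matmuls in bf16
)

model_id = "meta-llama/Llama-2-7b-hf"       # placeholder; any causal LM checkpoint works
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",                      # place layers automatically across available devices
)

prompt = "4-bit quantization shrinks the memory footprint because"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```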