intel / intel-extension-for-transformers
⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel Platforms⚡
☆2,174 · Updated last year
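The project's headline feature is a drop-in, quantization-aware replacement for the Transformers `AutoModelForCausalLM`. Below is a minimal sketch of that flow based on the project's README; the model name is a placeholder, and keyword arguments such as `load_in_4bit` may differ across releases.

```python
# Minimal sketch: 4-bit weight-only inference on an Intel platform with
# intel-extension-for-transformers. Assumes `pip install intel-extension-for-transformers`;
# the AutoModelForCausalLM wrapper and `load_in_4bit` flag follow the project's
# README, but exact argument names may vary between releases.
from transformers import AutoTokenizer
from intel_extension_for_transformers.transformers import AutoModelForCausalLM

model_name = "Intel/neural-chat-7b-v3-1"  # placeholder; any causal LM should work
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, load_in_4bit=True)

inputs = tokenizer("Build a chatbot that", return_tensors="pt").input_ids
outputs = model.generate(inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```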
Alternatives and similar repositories for intel-extension-for-transformers
Users interested in intel-extension-for-transformers are comparing it to the libraries listed below.
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆2,092 · Updated 7 months ago
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference (see the usage sketch below). ☆2,312 · Updated 8 months ago
- ☆1,028 · Updated 2 years ago
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,897 · Updated 2 years ago
- SOTA low-bit LLM quantization (INT8/FP8/MXFP8/INT4/MXFP4/NVFP4) & sparsity; leading model compression techniques on PyTorch, TensorFlow, … ☆2,581 · Updated this week
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆3,431 · Updated 6 months ago
- ☆1,029 · Updated last year
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers". ☆2,254 · Updated last year
- Serving multiple LoRA-finetuned LLMs as one ☆1,140 · Updated last year
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,316 · Updated 11 months ago
- Official implementation of Half-Quadratic Quantization (HQQ) ☆912 · Updated last month
- Automatically Discovering Fast Parallelization Strategies for Distributed Deep Neural Network Training ☆1,859 · Updated this week
- Python bindings for the Transformer models implemented in C/C++ using the GGML library. ☆1,879 · Updated 2 years ago
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,699 · Updated last year
- 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy-to-use hardware optimization… ☆3,279 · Updated 3 weeks ago
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm (see the usage sketch below). ☆5,028 · Updated 9 months ago
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. ☆1,011 · Updated last year
- PyTorch-native quantization and sparsity for training and inference (see the usage sketch below) ☆2,668 · Updated this week
- Training LLMs with QLoRA + FSDP ☆1,539 · Updated last year
- Official PyTorch repository for Extreme Compression of Large Language Models via Additive Quantization https://arxiv.org/pdf/2401.06118.p… ☆1,315 · Updated 6 months ago
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… ☆3,152 · Updated this week
- An innovative library for efficient LLM inference via low-bit quantization ☆352 · Updated last year
- Run Mixtral-8x7B models in Colab or on consumer desktops ☆2,325 · Updated last year
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models (see the smoothing sketch below) ☆1,600 · Updated last year
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools (see the usage sketch below) ☆532 · Updated this week
- ☆553 · Updated last year
- The Triton TensorRT-LLM Backend ☆918 · Updated this week
- A fast inference library for running LLMs locally on modern consumer-class GPUs ☆4,440 · Updated 2 months ago
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackable. ☆1,586 · Updated last week
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆737 · Updated last year
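Several of the entries above are quantization toolkits with similar-looking workflows; the hedged sketches below show what each looks like in practice. First, AutoAWQ: a minimal quantize-and-save flow following the project's README, where the model path, output directory, and `quant_config` values are illustrative defaults rather than requirements.

```python
# Minimal AutoAWQ sketch: 4-bit AWQ quantization of a Hugging Face causal LM.
# Assumes `pip install autoawq`; API follows the AutoAWQ README, though
# signatures can shift between versions.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "mistralai/Mistral-7B-v0.1"  # placeholder model
quant_path = "mistral-7b-awq"             # output directory (illustrative)
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Calibrate and quantize, then persist the packed 4-bit weights.
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```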
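AutoGPTQ follows a similar quantize-and-save flow, but configuration goes through a `BaseQuantizeConfig` and calibration data is passed explicitly to `quantize()`. A minimal sketch assuming the `auto_gptq` package's documented quickstart; the single calibration example is a stand-in for a realistic calibration set.

```python
# Minimal AutoGPTQ sketch: GPTQ 4-bit quantization with a toy calibration set.
# Assumes `pip install auto-gptq`; follows the AutoGPTQ README quickstart.
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
from transformers import AutoTokenizer

pretrained = "facebook/opt-125m"  # placeholder model
quantized_dir = "opt-125m-4bit"   # output directory (illustrative)

tokenizer = AutoTokenizer.from_pretrained(pretrained)
examples = [tokenizer("auto-gptq is an easy-to-use model quantization library.",
                      return_tensors="pt")]  # real runs want ~128 calibration samples

quantize_config = BaseQuantizeConfig(bits=4, group_size=128, desc_act=False)
model = AutoGPTQForCausalLM.from_pretrained(pretrained, quantize_config)
model.quantize(examples)          # run GPTQ layer by layer over the calibration data
model.save_quantized(quantized_dir)
```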
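torchao takes a different shape: instead of a model wrapper, it rewrites an existing `nn.Module` in place. A sketch assuming the `quantize_` / `int8_weight_only` API from recent torchao releases; this API has moved between versions (newer releases use config objects), so treat the import path as an assumption.

```python
# Minimal torchao sketch: in-place int8 weight-only quantization of a module.
# Assumes `pip install torchao`; `quantize_` and `int8_weight_only` follow the
# torchao quantization README, but this API has changed across releases.
import torch
from torchao.quantization import quantize_, int8_weight_only

model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096),
    torch.nn.ReLU(),
    torch.nn.Linear(4096, 1024),
)

quantize_(model, int8_weight_only())  # swaps Linear weights for int8 representations
out = model(torch.randn(2, 1024))
print(out.shape)  # torch.Size([2, 1024])
```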
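SmoothQuant's core trick is algebraic rather than API-level: it migrates quantization difficulty from activations (which have channel outliers) to weights by rescaling input channels, exploiting the identity Y = (X diag(s)^-1)(diag(s) W) = XW. Below is a self-contained illustration of that smoothing step in plain PyTorch with alpha = 0.5 as in the paper; it demonstrates the transform, not the official repository's code.

```python
# Self-contained sketch of SmoothQuant's per-channel smoothing step.
# Not the official repo's code: it shows the equivalence
# Y = (X / s) @ (s[:, None] * W), with s chosen to balance the
# per-channel ranges of activations and weights.
import torch

torch.manual_seed(0)
X = torch.randn(8, 16) * torch.logspace(-1, 1, 16)  # activations with channel outliers
W = torch.randn(16, 32)                             # weights (in_features x out_features)

alpha = 0.5                       # migration strength from the paper
act_scale = X.abs().amax(dim=0)   # per-input-channel activation range
w_scale = W.abs().amax(dim=1)     # per-input-channel weight range
s = act_scale.pow(alpha) / w_scale.pow(1 - alpha)

X_smooth = X / s                  # activations become easier to quantize
W_smooth = W * s[:, None]         # difficulty is folded into the weights

# The smoothed factorization is numerically equivalent to the original matmul.
assert torch.allclose(X @ W, X_smooth @ W_smooth, atol=1e-4)
print("max activation range before:", act_scale.max().item(),
      "after:", X_smooth.abs().amax(dim=0).max().item())
```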
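Finally, Optimum Intel exposes OpenVINO-accelerated models behind the familiar Transformers classes. A minimal sketch using the documented `OVModelForCausalLM` entry point, where the model name is a placeholder; `export=True` converts the checkpoint to OpenVINO format on the fly, so no separate export step is needed.

```python
# Minimal Optimum Intel sketch: export a causal LM to OpenVINO and generate.
# Assumes `pip install optimum[openvino]`; OVModelForCausalLM with export=True
# follows the Optimum Intel documentation.
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "gpt2"  # placeholder model
model = OVModelForCausalLM.from_pretrained(model_id, export=True)  # convert on the fly
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("Intel optimization tools can", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```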