☆25, updated Oct 31, 2024
Alternatives and similar repositories for TesseraQ
Users interested in TesseraQ are comparing it to the repositories listed below.
- [COLM 2025] DFRot: Achieving Outlier-Free and Massive Activation-Free for Rotated LLMs with Refined Rotation; Zhihu: https://zhuanlan.zhihu.c… (☆30, updated Mar 5, 2025); see the rotation sketch after this list
- An algorithm for weight-activation quantization (W4A4, W4A8) of LLMs, supporting both static and dynamic quantization (☆171, updated Nov 26, 2025); see the W4A8 sketch after this list
- [EMNLP 2024] RoLoRA: Fine-tuning Rotated Outlier-free LLMs for Effective Weight-Activation Quantization (☆39, updated Sep 24, 2024)
- Code implementation of GPTAQ (https://arxiv.org/abs/2504.02692) (☆89, updated Jul 28, 2025)
- [COLM 2025] Official PyTorch implementation of "Quantization Hurts Reasoning? An Empirical Study on Quantized Reasoning Models" (☆73, updated Jul 8, 2025)
- Training Quantized Neural Networks with a Full-precision Auxiliary Module (☆13, updated Jun 19, 2020)
- A method that expedites LLM inference via streamlined semi-autoregressive generation and draft verification (☆28, updated Apr 15, 2025)
- It's All In the Teacher: Zero-Shot Quantization Brought Closer to the Teacher [CVPR 2022 Oral] (☆29, updated Sep 15, 2022)
- [ICML 2025] Official PyTorch implementation of "FlatQuant: Flatness Matters for LLM Quantization" (☆213, updated Nov 25, 2025)
- ☆28, updated Nov 5, 2021
- ☆15, updated Apr 6, 2026
- Code for the AAAI 2024 Oral paper "OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Model…" (☆69, updated Mar 7, 2024)
- ☆17, updated Oct 25, 2022
- AFPQ code implementation (☆23, updated Nov 6, 2023)
- Residual vector quantization for KV cache compression in large language models (☆12, updated Oct 22, 2024)
- QQQ is an innovative and hardware-optimized W4A8 quantization solution for LLMs (☆155, updated Aug 21, 2025)
- ☆19, updated Nov 6, 2023
- ☆16, updated Dec 9, 2023
- Official implementation of the ICLR 2024 paper AffineQuant (☆29, updated Mar 30, 2024)
- LLM Inference with Microscaling Format (☆34, updated Nov 12, 2024)
- [CVPR 2025 Highlight] FIMA-Q: Post-Training Quantization for Vision Transformers by Fisher Information Matrix Approximation (☆28, updated Jun 16, 2025)
- ☆22, updated Oct 27, 2024
- A quantization algorithm for LLMs (☆149, updated Jun 21, 2024)
- ☆33, updated Nov 11, 2024
- ☆25, updated Dec 11, 2021
- ☆30, updated Jul 22, 2024
- Code for "RSQ: Learning from Important Tokens Leads to Better Quantized LLMs" (☆21, updated Mar 25, 2026)
- ☆13, updated Apr 1, 2026
- This project is the official implementation of our accepted IEEE TPAMI paper "Diverse Sample Generation: Pushing the Limit of Data-free Qu…" (☆15, updated Feb 26, 2023)
- [AAAI 2023] Efficient and Accurate Models towards Practical Deep Learning Baseline (☆13, updated Nov 29, 2022)
- ☆20, updated Nov 5, 2025
- The official code for "Advancing Multimodal Large Language Models with Quantization-Aware Scale Learning for Efficient Adaptation" | [MM2… (☆14, updated Dec 7, 2024)
- ☆12, updated Jul 30, 2025
- ☆52, updated Nov 5, 2024
- High Performance FP8 GEMM Kernels for SM89 and later GPUs (☆21, updated Jan 24, 2025)
- ☆14, updated Jun 22, 2025
- Code repo for the paper "SpinQuant: LLM quantization with learned rotations" (☆390, updated Feb 14, 2025)
- ☆15, updated Nov 7, 2024
- [ICML 2025] KVTuner: Sensitivity-Aware Layer-wise Mixed Precision KV Cache Quantization for Efficient and Nearly Lossless LLM Inference (☆28, updated Jan 27, 2026)
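Several of the entries above (the W4A4/W4A8 algorithm, QQQ, OWQ, and others) target weight-activation quantization, where both the weights and the activations of each linear layer are quantized to low bit widths. As a rough, hypothetical illustration of the idea rather than the code of any listed project, here is a minimal PyTorch sketch of simulated ("fake") W4A8 quantization with symmetric per-channel INT4 weights and dynamic per-token INT8 activations; all function names and shapes are made up for this example.

```python
# Minimal sketch of simulated ("fake") weight-activation quantization.
# This is a generic illustration of the W4A8 idea, not the implementation
# of TesseraQ or any repository listed above; names are hypothetical.
import torch

def quantize_symmetric(x: torch.Tensor, n_bits: int, dim: int) -> torch.Tensor:
    """Quantize-dequantize x to n_bits along `dim` with a symmetric max scale."""
    qmax = 2 ** (n_bits - 1) - 1                       # 7 for INT4, 127 for INT8
    scale = x.abs().amax(dim=dim, keepdim=True).clamp(min=1e-8) / qmax
    q = torch.clamp(torch.round(x / scale), -qmax - 1, qmax)
    return q * scale                                   # back to float ("fake quant")

def w4a8_linear(x: torch.Tensor, weight: torch.Tensor) -> torch.Tensor:
    """Simulated W4A8 matmul: INT4 per-output-channel weights, INT8 per-token activations."""
    w_q = quantize_symmetric(weight, n_bits=4, dim=1)  # static: one scale per output channel
    x_q = quantize_symmetric(x, n_bits=8, dim=-1)      # dynamic: one scale per token
    return x_q @ w_q.t()

x = torch.randn(2, 16, 512)      # (batch, tokens, hidden)
w = torch.randn(1024, 512)       # (out_features, in_features)
rel_err = (x @ w.t() - w4a8_linear(x, w)).abs().mean() / (x @ w.t()).abs().mean()
print(f"relative error of the fake-quant path: {rel_err.item():.3f}")
```

Static activation quantization would precompute the activation scales on calibration data instead of deriving them per token at run time; the weight path is unchanged. Likewise, the rotation-oriented entries (DFRot, RoLoRA, SpinQuant, FlatQuant, AffineQuant) share the idea of folding an orthogonal transform into the weights so that activation outliers are spread across channels before quantization. A minimal sketch of that equivalence, again hypothetical and not taken from any listed repository:

```python
# Folding a random orthogonal rotation into a linear layer: the output is
# unchanged, but activation outliers are spread across channels, which makes
# the rotated tensors easier to quantize. Hypothetical illustration only;
# the rotation-based repos above learn the rotation or use Hadamard matrices.
import torch

hidden = 512
rot, _ = torch.linalg.qr(torch.randn(hidden, hidden))  # random orthogonal matrix

x = torch.randn(4, hidden)
x[:, 0] *= 50.0                                         # inject an outlier channel
w = torch.randn(1024, hidden)

y_ref = x @ w.t()
y_rot = (x @ rot) @ (w @ rot).t()                       # rotate activations and weights consistently
print(torch.allclose(y_ref, y_rot, atol=1e-2))          # True: the layer output is preserved
print(x.abs().max().item(), (x @ rot).abs().max().item())  # max activation magnitude drops noticeably
```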