[EMNLP 2024] Quantize LLMs to extremely low bit-widths, and finetune the quantized LLMs
☆15 · Jul 18, 2024 · Updated last year
Alternatives and similar repositories for ApiQ
Users interested in ApiQ are comparing it to the repositories listed below.
- [NeurIPS 2024] VeLoRA: Memory Efficient Training using Rank-1 Sub-Token Projections ☆21 · Oct 15, 2024 · Updated last year
- [ICLR 2025] Making LLMs More Effective with Hierarchical Mixture of LoRA Experts ☆29 · Oct 9, 2025 · Updated 6 months ago
- ☆33 · Nov 11, 2024 · Updated last year
- [CVPR 2025] Efficient Personalization of Quantized Diffusion Model without Backpropagation ☆16 · Mar 31, 2025 · Updated last year
- ☆15 · Nov 6, 2022 · Updated 3 years ago
- [ICLR 2025] Official implementation of the paper "Dynamic Low-Rank Sparse Adaptation for Large Language Models". ☆24 · Mar 16, 2025 · Updated last year
- Code for the ACL 2024 paper "SAPT: A Shared Attention Framework for Parameter-Efficient Continual Learning of Large Language … ☆40 · Jan 13, 2025 · Updated last year
- The code for "AttentionPredictor: Temporal Pattern Matters for Efficient LLM Inference", Qingyue Yang, Jie Wang, Xing Li, Zhihai Wang, Ch… ☆28 · Jul 15, 2025 · Updated 9 months ago
- MiSS is a novel PEFT method that features a low-rank structure but introduces a new update mechanism distinct from LoRA, achieving an exc… ☆35 · Mar 9, 2026 · Updated last month
- ☆10 · Apr 16, 2024 · Updated 2 years ago
- [NeurIPS '25] Multi-Token Prediction Needs Registers ☆28 · Dec 14, 2025 · Updated 4 months ago
- [ICML 2025] LoRA fine-tuning directly on INT4 models. ☆40 · Nov 25, 2024 · Updated last year
- ICLR 2025 ☆31 · May 21, 2025 · Updated 10 months ago
- An algorithm for weight-activation quantization (W4A4, W4A8) of LLMs, supporting both static and dynamic quantization ☆170 · Nov 26, 2025 · Updated 4 months ago
- Awesome Low-Rank Adaptation ☆59 · Aug 6, 2025 · Updated 8 months ago
- ☆25 · Oct 31, 2024 · Updated last year
- A minimal re-implementation of orthogonal fine-tuning (OFT), a diffusion method, for LLMs. Based on nanoGPT and minLoRA. ☆14 · Nov 17, 2023 · Updated 2 years ago
- LoFiT: Localized Fine-tuning on LLM Representations ☆45 · Jan 15, 2025 · Updated last year
- [EMNLP 25] An effective and interpretable weight-editing method for mitigating overly short reasoning in LLMs, and a mechanistic study un… ☆18 · Dec 17, 2025 · Updated 4 months ago
- [ICLR'25] ARB-LLM: Alternating Refined Binarizations for Large Language Models ☆28 · Aug 5, 2025 · Updated 8 months ago
- ☆14 · May 4, 2024 · Updated last year
- ☆11 · Dec 15, 2025 · Updated 4 months ago
- A toolkit implementing advanced methods to transfer models and model knowledge across tokenizers. ☆66 · Jul 6, 2025 · Updated 9 months ago
- Official implementation of "MetaLA: Unified Optimal Linear Approximation to Softmax Attention Map" (NeurIPS 2024 Oral) ☆36 · Jan 18, 2025 · Updated last year
- [CVPR 2025] LoRA Recycle: Unlocking Tuning-Free Few-Shot Adaptability in Visual Foundation Models by Recycling Pre-Tuned LoRAs ☆14 · Jun 20, 2025 · Updated 9 months ago
- Representation Surgery for Multi-Task Model Merging. ICML, 2024. ☆47 · Oct 10, 2024 · Updated last year
- Codebase for the ICML submission "DOGE: Domain Reweighting with Generalization Estimation" ☆21 · Feb 29, 2024 · Updated 2 years ago
- CorDA: Context-Oriented Decomposition Adaptation of Large Language Models for task-aware parameter-efficient fine-tuning (NeurIPS 2024) ☆56 · Jan 13, 2025 · Updated last year
- Prune transformer layers ☆74 · May 30, 2024 · Updated last year
- Exploring the Limitations of Large Language Models on Multi-Hop Queries ☆33 · Mar 2, 2025 · Updated last year
- ☆11 · Feb 26, 2024 · Updated 2 years ago
- [ICML 2024] SPP: Sparsity-Preserved Parameter-Efficient Fine-Tuning for Large Language Models ☆22 · May 28, 2024 · Updated last year
- ☆53 · Jul 18, 2024 · Updated last year
- flex-block-attn: an efficient block sparse attention computation library ☆130 · Dec 26, 2025 · Updated 3 months ago
- Large Language Model inference written in Zig ☆23 · Mar 17, 2024 · Updated 2 years ago
- [NeurIPS 2023] Make Your Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning ☆33 · Jun 2, 2023 · Updated 2 years ago
- An adaptive sampling framework for Reinforce-style LLM post-training. ☆95 · Nov 29, 2025 · Updated 4 months ago
- The first model supporting content-preserving style transfer for Qwen-Image ☆26 · Updated this week
- ☆125 · Jul 6, 2024 · Updated last year