qwopqwop200 / gptqlora
GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ
☆97 · Updated last year
Related projects
Alternatives and complementary repositories for gptqlora
- Spherical Merge Pytorch/HF format Language Models with minimal feature loss. ☆112 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆129 · Updated 2 months ago
- QuIP quantization ☆46 · Updated 8 months ago
- QLoRA: Efficient Finetuning of Quantized LLMs ☆77 · Updated 7 months ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆173 · Updated 4 months ago
- Automated Identification of Redundant Layer Blocks for Pruning in Large Language Models ☆196 · Updated 6 months ago
- PB-LLM: Partially Binarized Large Language Models ☆148 · Updated last year
- ☆122 · Updated 9 months ago
- Load multiple LoRA modules simultaneously and automatically switch the appropriate combination of LoRA modules to generate the best answe… ☆142 · Updated 9 months ago
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile ☆115 · Updated last year
- Multipack distributed sampler for fast padding-free training of LLMs ☆178 · Updated 3 months ago
- SparseGPT + GPTQ Compression of LLMs like LLaMa, OPT, Pythia ☆41 · Updated last year
- Small and Efficient Mathematical Reasoning LLMs ☆71 · Updated 9 months ago
- Low-Rank adapter extraction for fine-tuned transformers models ☆162 · Updated 6 months ago
- Experiments on speculative sampling with Llama models ☆118 · Updated last year
- Unofficial Implementation of Evolutionary Model Merging ☆33 · Updated 7 months ago
- ☆199 · Updated 5 months ago
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Length (ICLR 2024) ☆199 · Updated 6 months ago
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆154 · Updated last month
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆92 · Updated last month
- Code repository for the c-BTM paper ☆105 · Updated last year
- ☆184 · Updated last month
- ☆93 · Updated last year
- Official implementation for "Extending LLMs' Context Window with 100 Samples" ☆74 · Updated 10 months ago
- A pipeline for LLM knowledge distillation ☆78 · Updated 3 months ago
- QLoRA with Enhanced Multi GPU Support ☆36 · Updated last year
- ☆200 · Updated 4 months ago
- Comprehensive analysis of differences in performance of QLoRA, LoRA, and full finetunes ☆81 · Updated last year
- Prune transformer layers ☆64 · Updated 5 months ago
- ☆63 · Updated 4 months ago