AlpinDale / sparsegpt-for-LLaMA
Code for the paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot", with a LLaMA implementation.
☆70 · Updated 2 years ago
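For context on what the repository implements: SparseGPT prunes each layer in one shot by treating pruning as a sparse weight-reconstruction problem, using second-order (Hessian) information gathered from a small calibration set. The sketch below is a heavily simplified, unblocked rendition of that idea for a single linear layer, not the repository's actual code; the function name `sparsegpt_prune` and the fixed up-front mask are illustrative assumptions (the real algorithm chooses masks adaptively per column block and processes columns in blocks).

```python
import torch

def sparsegpt_prune(W, X, sparsity=0.5, damp=0.01):
    """Simplified sketch of SparseGPT-style one-shot pruning.

    W: (rows, cols) weight matrix of a linear layer.
    X: (cols, nsamples) calibration inputs to that layer.
    Returns a pruned copy of W at the requested unstructured sparsity.
    """
    W = W.clone().float()
    cols = W.shape[1]

    # Layer-wise Hessian of the reconstruction loss: H = 2 X X^T, plus damping
    # proportional to the mean diagonal so the Cholesky factorization is stable.
    H = 2 * X.float() @ X.float().T
    H += damp * torch.mean(torch.diag(H)) * torch.eye(cols)

    # Upper Cholesky factor of the inverse Hessian, as in the paper's code.
    Hinv = torch.cholesky_inverse(torch.linalg.cholesky(H))
    Hinv = torch.linalg.cholesky(Hinv, upper=True)

    # OBS-style saliency w^2 / [H^-1]_jj^2; here the mask is fixed up front
    # (an assumption -- the real algorithm re-selects it per column block).
    scores = W ** 2 / torch.diag(Hinv).reshape(1, -1) ** 2
    thresh = torch.quantile(scores, sparsity, dim=1, keepdim=True)
    mask = scores <= thresh  # True = prune this weight

    # Column by column: zero the pruned weights, then fold the resulting
    # error into all later columns so the layer output is preserved.
    for j in range(cols):
        w = W[:, j].clone()
        q = w.clone()
        q[mask[:, j]] = 0
        err = (w - q) / Hinv[j, j]
        W[:, j] = q
        W[:, j + 1:] -= err.unsqueeze(1) * Hinv[j, j + 1:].unsqueeze(0)
    return W
```

Applied layer by layer, with `X` collected via forward hooks over a few calibration batches, this sequential zero-and-compensate loop is what lets the method prune in one shot without retraining.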
Alternatives and similar repositories for sparsegpt-for-LLaMA
Users interested in sparsegpt-for-LLaMA are comparing it to the repositories listed below.
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA ☆123 · Updated 2 years ago
- Full finetuning of large language models without large memory requirements ☆93 · Updated last month
- Model REVOLVER, a human-in-the-loop model mixing system ☆32 · Updated 2 years ago
- ☆40 · Updated 2 years ago
- Low-rank adapter extraction for fine-tuned transformer models ☆178 · Updated last year
- GPT-2 small trained on phi-like data ☆67 · Updated last year
- ☆73 · Updated 2 years ago
- GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ ☆102 · Updated 2 years ago
- Merge Transformers language models using gradient parameters ☆207 · Updated last year
- An unsupervised model-merging algorithm for Transformers-based language models ☆106 · Updated last year
- ☆26 · Updated 2 years ago
- QLoRA: Efficient Finetuning of Quantized LLMs ☆78 · Updated last year
- A community list of common phrases generated by GPT and Claude models ☆78 · Updated last year
- SparseGPT + GPTQ compression of LLMs like LLaMA, OPT, and Pythia ☆40 · Updated 2 years ago
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models" ☆277 · Updated 2 years ago
- Automated prompting and scoring framework to evaluate LLMs using updated human-knowledge prompts ☆108 · Updated 2 years ago
- An implementation of Self-Extend, expanding the context window via grouped attention ☆118 · Updated last year
- Code for the paper "QuIP: 2-Bit Quantization of Large Language Models With Guarantees", adapted for Llama models ☆40 · Updated 2 years ago
- Advanced ultra-low-bitrate compression techniques for the LLaMA family of LLMs ☆110 · Updated last year
- Our own implementation of 'Layer-Selective Rank Reduction' ☆239 · Updated last year
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full finetunes ☆82 · Updated 2 years ago
- Inference code for mixtral-8x7b-32kseqlen ☆102 · Updated last year
- Train Llama LoRAs easily ☆30 · Updated 2 years ago
- ☆50 · Updated last year
- A simple HowTo for https://github.com/johnsmith0031/alpaca_lora_4bit ☆31 · Updated 2 years ago
- Instruct-tuning LLaMA on consumer hardware ☆65 · Updated 2 years ago
- Image-diffusion block-merging technique applied to Transformer-based language models ☆55 · Updated 2 years ago
- Efficient 3-bit/4-bit quantization of LLaMA models ☆19 · Updated 2 years ago
- MiniHF is an inference, human preference data collection, and fine-tuning tool for local language models. It is intended to help the user… ☆181 · Updated last week
- Multi-Domain Expert Learning ☆66 · Updated last year