This is our own implementation of 'Layer Selective Rank Reduction'.
☆240 · May 26, 2024 · updated last year
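The core idea behind Layer-Selective Rank Reduction is replacing a chosen weight matrix with a low-rank approximation obtained via truncated SVD. The sketch below illustrates that operation in isolation; the function name and `rank_fraction` parameter are illustrative and not laserRMT's actual API.

```python
# Minimal sketch of low-rank reduction of a single weight matrix via
# truncated SVD, as in Layer-Selective Rank Reduction (LASER).
# Assumption: names and the rank fraction here are illustrative only.
import numpy as np

def low_rank_approximation(weight: np.ndarray, rank_fraction: float) -> np.ndarray:
    """Keep only the top singular components of a weight matrix."""
    u, s, vt = np.linalg.svd(weight, full_matrices=False)
    k = max(1, int(len(s) * rank_fraction))  # number of singular values kept
    # Reassemble from the leading k components; shape is unchanged,
    # but the resulting matrix has rank at most k.
    return (u[:, :k] * s[:k]) @ vt[:k, :]

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64))
w_reduced = low_rank_approximation(w, rank_fraction=0.1)
```

In practice such a reduction is applied selectively, to specific layers chosen by some criterion, rather than uniformly across the model.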
Alternatives and similar repositories for laserRMT
Users that are interested in laserRMT are comparing it to the libraries listed below.
- ☆68 · May 26, 2024 · updated last year
- ☆166 · Aug 8, 2025 · updated 7 months ago
- Simple Model Similarities Analysis ☆21 · Feb 3, 2024 · updated 2 years ago
- Automated Identification of Redundant Layer Blocks for Pruning in Large Language Models ☆263 · Apr 23, 2024 · updated last year
- LLM-Training-API: Including Embeddings & ReRankers, mergekit, LaserRMT ☆27 · Feb 18, 2024 · updated 2 years ago
- A bagel, with everything. ☆326 · Apr 11, 2024 · updated last year
- The Truth Is In There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction ☆390 · Jul 9, 2024 · updated last year
- ☆142 · Aug 20, 2025 · updated 7 months ago
- Low-Rank adapter extraction for fine-tuned transformers models ☆181 · May 2, 2024 · updated last year
- Extract a single expert from a Mixture of Experts model using slerp interpolation. ☆19 · May 26, 2024 · updated last year
- Sakura-SOLAR-DPO: Merge, SFT, and DPO ☆116 · Dec 30, 2023 · updated 2 years ago
- ☆78 · Dec 26, 2023 · updated 2 years ago
- ☆138 · Aug 19, 2024 · updated last year
- 5X faster, 60% less memory QLoRA finetuning ☆21 · May 28, 2024 · updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆31 · May 22, 2024 · updated last year
- Tools for merging pretrained large language models. ☆6,867 · Mar 15, 2026 · updated last week
- My Gen AI research ☆11 · Jun 3, 2024 · updated last year
- An unsupervised model merging algorithm for Transformers-based language models. ☆108 · Apr 29, 2024 · updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks (EMNLP'24) ☆145 · Sep 20, 2024 · updated last year
- Using multiple LLMs for ensemble forecasting ☆16 · Jan 17, 2024 · updated 2 years ago
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language M… ☆253 · Oct 30, 2024 · updated last year
- Full finetuning of large language models without large memory requirements ☆94 · Sep 22, 2025 · updated 6 months ago
- Automatically evaluate your LLMs in Google Colab ☆687 · May 7, 2024 · updated last year
- Merge Transformers language models by use of gradient parameters. ☆214 · Aug 8, 2024 · updated last year
- A benchmark for emotional intelligence in large language models ☆417 · Jul 26, 2024 · updated last year
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆3,131 · Mar 16, 2026 · updated last week
- Customizable implementation of the self-instruct paper. ☆1,052 · Mar 7, 2024 · updated 2 years ago
- Go ahead and axolotl questions ☆11,460 · updated this week
- Examples for using the SiLLM framework for training and running Large Language Models (LLMs) on Apple Silicon ☆16 · May 8, 2025 · updated 10 months ago
- All the world is a play, we are but actors in it. ☆50 · Jul 21, 2025 · updated 8 months ago
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆233 · Oct 31, 2024 · updated last year
- An Open Source Toolkit For LLM Distillation ☆891 · Mar 14, 2026 · updated last week
- Official implementation of Half-Quadratic Quantization (HQQ) ☆919 · Feb 26, 2026 · updated 3 weeks ago
- ☆56 · Nov 6, 2024 · updated last year
- Dolphin System Messages ☆388 · Feb 15, 2025 · updated last year
- A library for easily merging multiple LLM experts and efficiently training the merged LLM. ☆507 · Aug 26, 2024 · updated last year
- 🐜🔧 A minimalistic tool to fine-tune your LLMs ☆18 · Aug 17, 2023 · updated 2 years ago
- A framework for few-shot evaluation of autoregressive language models. ☆13 · Feb 14, 2024 · updated 2 years ago
- An unofficial implementation of the SOLAR-10.7B model and the newly proposed interlocked-DUS (iDUS), with implementation and experiment details. ☆14 · Mar 20, 2024 · updated 2 years ago
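One of the entries above extracts a single expert from a Mixture of Experts model using slerp (spherical linear) interpolation. As a minimal sketch of what slerp does when applied to flattened weight vectors; the function and fallback behavior here are illustrative, not that repository's implementation:

```python
# Minimal sketch of spherical linear interpolation (slerp) between two
# flattened weight vectors. Assumption: the fallback to plain linear
# interpolation for near-parallel vectors is a common convention, not
# necessarily what the repository above does.
import numpy as np

def slerp(a: np.ndarray, b: np.ndarray, t: float, eps: float = 1e-8) -> np.ndarray:
    """Interpolate along the great-circle arc between a and b."""
    # Angle between the two vectors, computed on their normalized forms.
    a_n = a / np.linalg.norm(a)
    b_n = b / np.linalg.norm(b)
    theta = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))
    if theta < eps:
        # Nearly parallel vectors: slerp degenerates, so fall back to lerp.
        return (1 - t) * a + t * b
    return (np.sin((1 - t) * theta) * a + np.sin(t * theta) * b) / np.sin(theta)
```

Compared with plain linear interpolation, slerp moves along the arc between the two vectors at constant angular speed, which merging tools often prefer because it better preserves the magnitude of the interpolated weights.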