This is our own implementation of 'Layer Selective Rank Reduction'
☆240 · May 26, 2024 · Updated last year
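As a rough illustration of the idea behind Layer Selective Rank Reduction (not this repository's actual code), a selected layer's weight matrix is replaced by a low-rank truncated-SVD approximation; a minimal NumPy sketch, where the function name `reduce_rank` and the `keep_ratio` parameter are hypothetical:

```python
import numpy as np

def reduce_rank(W, keep_ratio=0.1):
    # Truncated SVD: keep only the top singular components of the weight matrix.
    # (Illustrative sketch; laserRMT's actual selection criterion may differ.)
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    k = max(1, int(len(S) * keep_ratio))
    return (U[:, :k] * S[:k]) @ Vt[:k, :]

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))      # stand-in for a layer's weight matrix
W_low = reduce_rank(W, keep_ratio=0.25)
print(np.linalg.matrix_rank(W_low))    # 16
```

The reconstructed matrix has the same shape as the original, so it can be swapped back into the model without architectural changes.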
Alternatives and similar repositories for laserRMT
Users interested in laserRMT are comparing it to the libraries listed below.
- ☆69 · May 26, 2024 · Updated last year
- ☆167 · Aug 8, 2025 · Updated 8 months ago
- Simple Model Similarities Analysis ☆21 · Feb 3, 2024 · Updated 2 years ago
- Automated Identification of Redundant Layer Blocks for Pruning in Large Language Models ☆268 · Apr 23, 2024 · Updated 2 years ago
- LLM-Training-API: Including Embeddings & ReRankers, mergekit, LaserRMT ☆27 · Feb 18, 2024 · Updated 2 years ago
- A bagel, with everything. ☆326 · Apr 11, 2024 · Updated 2 years ago
- The Truth Is In There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction ☆390 · Jul 9, 2024 · Updated last year
- ☆145 · Aug 20, 2025 · Updated 8 months ago
- Low-rank adapter extraction for fine-tuned transformer models ☆181 · May 2, 2024 · Updated 2 years ago
- Extract a single expert from a Mixture of Experts model using slerp interpolation. ☆19 · May 26, 2024 · Updated last year
- Sakura-SOLAR-DPO: Merge, SFT, and DPO ☆116 · Dec 30, 2023 · Updated 2 years ago
- ☆78 · Dec 26, 2023 · Updated 2 years ago
- ☆138 · Aug 19, 2024 · Updated last year
- 5X faster, 60% less memory QLoRA finetuning ☆21 · May 28, 2024 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆31 · May 22, 2024 · Updated last year
- My Gen AI research ☆11 · Jun 3, 2024 · Updated last year
- Tools for merging pretrained large language models. ☆7,023 · Mar 15, 2026 · Updated last month
- Using multiple LLMs for ensemble forecasting ☆16 · Jan 17, 2024 · Updated 2 years ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks (EMNLP'24) ☆146 · Sep 20, 2024 · Updated last year
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language M… ☆259 · Oct 30, 2024 · Updated last year
- Full finetuning of large language models without large memory requirements ☆94 · Sep 22, 2025 · Updated 7 months ago
- Merge Transformers language models by use of gradient parameters. ☆214 · Aug 8, 2024 · Updated last year
- Automatically evaluate your LLMs in Google Colab ☆688 · May 7, 2024 · Updated last year
- An unsupervised model merging algorithm for Transformers-based language models. ☆108 · Apr 29, 2024 · Updated 2 years ago
- A benchmark for emotional intelligence in large language models ☆424 · Jul 26, 2024 · Updated last year
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆3,199 · Updated this week
- Examples for using the SiLLM framework for training and running Large Language Models (LLMs) on Apple Silicon ☆16 · May 8, 2025 · Updated 11 months ago
- Customizable implementation of the self-instruct paper. ☆1,053 · Mar 7, 2024 · Updated 2 years ago
- Implementation of the training framework proposed in Self-Rewarding Language Model, from MetaAI ☆1,407 · Apr 11, 2024 · Updated 2 years ago
- Go ahead and axolotl questions ☆11,779 · Updated this week
- Official implementation of Half-Quadratic Quantization (HQQ) ☆931 · Feb 26, 2026 · Updated 2 months ago
- An Open Source Toolkit For LLM Distillation ☆931 · Mar 14, 2026 · Updated last month
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆234 · Oct 31, 2024 · Updated last year
- Dolphin System Messages ☆392 · Feb 15, 2025 · Updated last year
- All the world is a play, we are but actors in it. ☆50 · Jul 21, 2025 · Updated 9 months ago
- A library for easily merging multiple LLM experts and efficiently training the merged LLM. ☆511 · Aug 26, 2024 · Updated last year
- 🐜🔧 A minimalistic tool to fine-tune your LLMs ☆18 · Aug 17, 2023 · Updated 2 years ago
- A framework for few-shot evaluation of autoregressive language models. ☆13 · Feb 14, 2024 · Updated 2 years ago
- Fast approximate inference on a single GPU with sparsity-aware offloading ☆39 · Jan 4, 2024 · Updated 2 years ago
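For reference, the slerp (spherical linear interpolation) mentioned in the expert-extraction entry above can be sketched in a few lines. This is an illustrative implementation under common conventions, not that repository's code; the `slerp` name and signature are assumptions:

```python
import numpy as np

def slerp(a, b, t):
    """Spherical linear interpolation between two flattened weight tensors.

    Interpolates along the great-circle arc between a and b; t in [0, 1].
    """
    a_n = a / np.linalg.norm(a)
    b_n = b / np.linalg.norm(b)
    # Angle between the two (normalized) weight vectors
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))
    if np.isclose(np.sin(omega), 0.0):
        # Nearly (anti)parallel vectors: fall back to plain linear interpolation
        return (1.0 - t) * a + t * b
    so = np.sin(omega)
    return (np.sin((1.0 - t) * omega) / so) * a + (np.sin(t * omega) / so) * b

# Halfway between two orthogonal unit vectors stays on the unit sphere,
# which plain linear interpolation would not (it shrinks the norm).
mid = slerp(np.array([1.0, 0.0]), np.array([0.0, 1.0]), 0.5)
```

Compared with linear averaging, slerp preserves the magnitude of the interpolated weights, which is why several of the merging tools listed here offer it as a merge method.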