ruz048 / AutoLoRA
☆11 · Updated last year
Alternatives and similar repositories for AutoLoRA
Users who are interested in AutoLoRA are comparing it to the libraries listed below.
- Implementation code for ACL 2024: Advancing Parameter Efficiency in Fine-tuning via Representation Editing ☆14 · Updated last year
- [ICLR 2025] Code and Data Repo for Paper "Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation" ☆80 · Updated 10 months ago
- ☆160 · Updated last year
- An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT ☆124 · Updated 7 months ago
- [EMNLP 2025] TokenSkip: Controllable Chain-of-Thought Compression in LLMs ☆182 · Updated 3 months ago
- State-of-the-art Parameter-Efficient MoE Fine-tuning Method ☆191 · Updated last year
- This repository contains a regularly updated paper list for LLMs-reasoning-in-latent-space. ☆170 · Updated last week
- ☆51 · Updated 3 months ago
- 📜 Paper list on decoding methods for LLMs and LVLMs ☆61 · Updated 3 months ago
- ☆171 · Updated 5 months ago
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning ☆227 · Updated 10 months ago
- ☆54 · Updated 4 months ago
- AdaMerging: Adaptive Model Merging for Multi-Task Learning. ICLR, 2024. ☆92 · Updated 11 months ago
- Official repository of "Localizing Task Information for Improved Model Merging and Compression" [ICML 2024] ☆51 · Updated 11 months ago
- Chain of Thoughts (CoT) is so hot! so long! We need short reasoning process! ☆69 · Updated 6 months ago
- An implementation of SEAL: Safety-Enhanced Aligned LLM fine-tuning via bilevel data selection. ☆18 · Updated 7 months ago
- The implementation of the paper "On Reasoning Strength Planning in Large Reasoning Models" ☆24 · Updated 3 months ago
- Official repository for paper: O1-Pruner: Length-Harmonizing Fine-Tuning for O1-Like Reasoning Pruning ☆91 · Updated 7 months ago
- Official code for SEAL: Steerable Reasoning Calibration of Large Language Models for Free ☆44 · Updated 6 months ago
- ☆67 · Updated 6 months ago
- Model merging is a highly efficient approach for long-to-short reasoning. ☆86 · Updated 4 months ago
- [EMNLP 25] An effective and interpretable weight-editing method for mitigating overly short reasoning in LLMs, and a mechanistic study un… ☆15 · Updated 2 weeks ago
- [SIGIR'24] The official implementation code of MOELoRA. ☆182 · Updated last year
- ☆275 · Updated 3 months ago
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆86 · Updated 8 months ago
- Awesome-Long2short-on-LRMs is a collection of state-of-the-art, novel, exciting long2short methods on large reasoning models. It contains… ☆247 · Updated 2 months ago
- The official repository of "Whoever Started the Interference Should End It: Guiding Data-Free Model Merging via Task Vectors" ☆31 · Updated 2 weeks ago
- ThinK: Thinner Key Cache by Query-Driven Pruning ☆24 · Updated 8 months ago
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ☆69 · Updated 7 months ago
- ☆13 · Updated 3 months ago