ruz048 / AutoLoRA
☆10 · Updated last year
Alternatives and similar repositories for AutoLoRA
Users interested in AutoLoRA are comparing it to the libraries listed below.
- [ICLR 2025] Code and Data Repo for Paper "Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation" ☆76 · Updated 8 months ago
- ☆151 · Updated last year
- An effective weight-editing method for mitigating overly short reasoning in LLMs, and a mechanistic study uncovering how reasoning length… ☆12 · Updated last week
- An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT ☆111 · Updated 5 months ago
- This repository contains a regularly updated paper list for LLMs-reasoning-in-latent-space. ☆153 · Updated 2 weeks ago
- 📜 Paper list on decoding methods for LLMs and LVLMs ☆55 · Updated 2 months ago
- TokenSkip: Controllable Chain-of-Thought Compression in LLMs ☆174 · Updated 2 months ago
- ☆66 · Updated 4 months ago
- ☆51 · Updated 2 months ago
- ☆48 · Updated last year
- State-of-the-art Parameter-Efficient MoE Fine-tuning Method ☆180 · Updated last year
- Chain-of-Thought (CoT) is so hot, and so long! We need a shorter reasoning process! ☆69 · Updated 5 months ago
- Official repository of "Localizing Task Information for Improved Model Merging and Compression" [ICML 2024] ☆49 · Updated 10 months ago
- Official code for SEAL: Steerable Reasoning Calibration of Large Language Models for Free ☆40 · Updated 4 months ago
- An implementation of SEAL: Safety-Enhanced Aligned LLM fine-tuning via bilevel data selection. ☆17 · Updated 6 months ago
- ☆49 · Updated last month
- Localize-and-Stitch: Efficient Model Merging via Sparse Task Arithmetic ☆28 · Updated 7 months ago
- AdaMerging: Adaptive Model Merging for Multi-Task Learning. ICLR, 2024. ☆88 · Updated 10 months ago
- The repo for In-context Autoencoder ☆136 · Updated last year
- Implementation code for ACL 2024: Advancing Parameter Efficiency in Fine-tuning via Representation Editing ☆14 · Updated last year
- ☆163 · Updated 3 months ago
- Code for the ACL 2024 paper "SAPT: A Shared Attention Framework for Parameter-Efficient Continual Learning of Large Language Models" ☆35 · Updated 7 months ago
- Awesome-Long2short-on-LRMs is a collection of state-of-the-art, novel, exciting long2short methods on large reasoning models. It contains… ☆244 · Updated 2 weeks ago
- Model merging is a highly efficient approach for long-to-short reasoning. ☆80 · Updated 2 months ago
- Model Merging in LLMs, MLLMs, and Beyond: Methods, Theories, Applications and Opportunities. arXiv:2408.07666. ☆510 · Updated this week
- LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning ☆35 · Updated last year
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning ☆222 · Updated 8 months ago
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆84 · Updated 6 months ago
- ThinK: Thinner Key Cache by Query-Driven Pruning ☆23 · Updated 6 months ago
- Official repository for the paper "O1-Pruner: Length-Harmonizing Fine-Tuning for O1-Like Reasoning Pruning" ☆86 · Updated 6 months ago