Chaos96 / fourierft
☆144 · Updated 7 months ago
Alternatives and similar repositories for fourierft:
Users interested in fourierft are comparing it to the repositories listed below.
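For context: fourierft is the reference implementation of FourierFT, which, instead of LoRA's pair of low-rank matrices, trains a small set of spectral coefficients placed at fixed random frequency positions and recovers the dense weight update with an inverse 2D discrete Fourier transform. Below is a minimal PyTorch sketch of that idea; the names and defaults (`n_freq`, `alpha`) are illustrative assumptions, not the repo's actual API.

```python
import torch
import torch.nn as nn

class FourierFTLinear(nn.Module):
    """FourierFT-style adapter sketch: the only trainable parameters are
    n_freq spectral coefficients; the weight delta is their inverse 2D FFT."""

    def __init__(self, base: nn.Linear, n_freq: int = 1000, alpha: float = 300.0):
        super().__init__()
        self.base = base
        out_f, in_f = base.weight.shape
        # Frequency positions are chosen once at random and then frozen.
        idx = torch.randperm(out_f * in_f)[:n_freq]
        self.register_buffer("rows", idx // in_f)
        self.register_buffer("cols", idx % in_f)
        self.coeffs = nn.Parameter(torch.zeros(n_freq))  # trainable spectrum
        self.alpha = alpha

    def delta_weight(self) -> torch.Tensor:
        out_f, in_f = self.base.weight.shape
        spec = torch.zeros(out_f, in_f, dtype=torch.cfloat, device=self.coeffs.device)
        spec[self.rows, self.cols] = self.coeffs.to(torch.cfloat)
        # Dense update = scaled real part of the inverse 2D DFT of the sparse spectrum.
        return torch.fft.ifft2(spec).real * self.alpha

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + x @ self.delta_weight().T

# Usage: wrap a frozen linear layer; only `coeffs` (n_freq scalars) gets gradients.
layer = FourierFTLinear(nn.Linear(768, 768).requires_grad_(False))
```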
- A generalized framework for subspace tuning methods in parameter-efficient fine-tuning. ☆139 · Updated 2 months ago
- ☆194 · Updated 6 months ago
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning ☆195 · Updated 5 months ago
- ☆100 · Updated 10 months ago
- [EMNLP 2023 Main] Sparse Low-rank Adaptation of Pre-trained Language Models ☆75 · Updated last year
- A curated list of Model Merging methods. ☆92 · Updated 7 months ago
- State-of-the-art Parameter-Efficient MoE Fine-tuning Method ☆157 · Updated 8 months ago
- ☆132 · Updated 9 months ago
- Official code for our paper, "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆115 · Updated 3 weeks ago
- ☆22 · Updated 11 months ago
- ☆189 · Updated last year
- Awesome-Low-Rank-Adaptation ☆94 · Updated 6 months ago
- Paper List of Inference/Test-Time Scaling/Computing ☆207 · Updated last week
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆67 · Updated 2 months ago
- [ICML 2024 Oral] This project is the official implementation of our Accurate LoRA-Finetuning Quantization of LLMs via Information Retention ☆65 · Updated last year
- 🚀 LLaMA-MoE v2: Exploring Sparsity of LLaMA from Perspective of Mixture-of-Experts with Post-Training ☆83 · Updated 5 months ago
- [arXiv 2025] Efficient Reasoning Models: A Survey ☆130 · Updated this week
- The official implementation of "DAPE: Data-Adaptive Positional Encoding for Length Extrapolation" ☆37 · Updated 6 months ago
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models ☆88 · Updated 2 months ago
- LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning ☆31 · Updated last year
- Inference code for the paper "Harder Tasks Need More Experts: Dynamic Routing in MoE Models" ☆47 · Updated 9 months ago
- TokenSkip: Controllable Chain-of-Thought Compression in LLMs ☆136 · Updated last month
- Survey on Data-centric Large Language Models ☆83 · Updated 9 months ago
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" ☆123 · Updated last year
- [NeurIPS 2024] Code for the paper "Diffusion of Thoughts: Chain-of-Thought Reasoning in Diffusion Language Models" ☆147 · Updated 2 months ago
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ☆58 · Updated 2 months ago
- ☆95 · Updated last month
- LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment ☆326 · Updated last year
- [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning" ☆119 · Updated 6 months ago
- SepLLM: Accelerate Large Language Models by Compressing One Segment into One Separator ☆71 · Updated 4 months ago