Chaos96 / fourierft
☆137 · Updated 4 months ago
Alternatives and similar repositories for fourierft:
Users interested in fourierft are comparing it to the repositories listed below.
- A generalized framework for subspace tuning methods in parameter-efficient fine-tuning. ☆120 · Updated last week
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning ☆142 · Updated last month
- ☆91 · Updated 6 months ago
- ☆121 · Updated 5 months ago
- [EMNLP 2023 Main] Sparse Low-rank Adaptation of Pre-trained Language Models ☆70 · Updated 10 months ago
- ☆169 · Updated 2 months ago
- Official code for the paper "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆94 · Updated 2 months ago
- Awesome-Low-Rank-Adaptation ☆61 · Updated 3 months ago
- State-of-the-art parameter-efficient MoE fine-tuning method ☆119 · Updated 4 months ago
- Survey on Data-centric Large Language Models ☆72 · Updated 6 months ago
- A paper list on token merging, reduction, resampling, and dropping for MLLMs. ☆17 · Updated this week
- ☆185 · Updated last year
- LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment ☆270 · Updated 8 months ago
- [EMNLP 2024 Findings🔥] Official implementation of "LOOK-M: Look-Once Optimization in KV Cache for Efficient Multimodal Long-Context Infe…" ☆88 · Updated 2 months ago
- AnchorAttention: Improved attention for LLM long-context training ☆202 · Updated this week
- ☆45 · Updated last month
- The official GitHub page for the survey paper "A Survey on Mixture of Experts in Large Language Models". ☆200 · Updated this week
- [ICLR 2024 Spotlight] Code for the paper "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy" ☆70 · Updated 7 months ago
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" ☆123 · Updated 8 months ago
- A repository for DenseSSMs ☆87 · Updated 9 months ago
- [SIGIR'24] The official implementation code of MOELoRA. ☆142 · Updated 5 months ago
- [ICML 2024 Oral] The official implementation of "Accurate LoRA-Finetuning Quantization of LLMs via Information Retenti…" ☆60 · Updated 9 months ago
- ☆17 · Updated 7 months ago
- A curated list of model merging methods. ☆89 · Updated 4 months ago
- SepLLM: Accelerate Large Language Models by Compressing One Segment into One Separator ☆41 · Updated 3 weeks ago
- [NeurIPS 2024] Twin-Merging: Dynamic Integration of Modular Expertise in Model Merging ☆47 · Updated last month
- The official implementation of "DAPE: Data-Adaptive Positional Encoding for Length Extrapolation" ☆33 · Updated 3 months ago
- [ATTRIB @ NeurIPS 2024 Oral] When Attention Sink Emerges in Language Models: An Empirical View ☆43 · Updated 3 months ago
- PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models (NeurIPS 2024 Spotlight) ☆308 · Updated last week