VijayLingam95 / SVFT
☆33 · Updated 8 months ago
Alternatives and similar repositories for SVFT
Users interested in SVFT are comparing it to the libraries listed below.
- SLTrain: a sparse plus low-rank approach for parameter and memory efficient pretraining (NeurIPS 2024) ☆35 · Updated 11 months ago
- ☆33 · Updated 2 years ago
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆55 · Updated 2 years ago
- ☆30 · Updated 2 years ago
- ☆17 · Updated last year
- Official repository of "LiNeS: Post-training Layer Scaling Prevents Forgetting and Enhances Model Merging" ☆31 · Updated 11 months ago
- [NeurIPS 2023] Make Your Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning ☆31 · Updated 2 years ago
- [ICLR 2024] This is the repository for the paper titled "DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning" ☆97 · Updated last year
- ☆41 · Updated 2 years ago
- Model Merging with SVD to Tie the KnOTS [ICLR 2025] ☆70 · Updated 6 months ago
- ☆33 · Updated last year
- An official implementation of "Catastrophic Failure of LLM Unlearning via Quantization" (ICLR 2025) ☆33 · Updated 8 months ago
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆55 · Updated 8 months ago
- LoRA-XS: Low-Rank Adaptation with Extremely Small Number of Parameters ☆41 · Updated 2 months ago
- This is the repository for "Model Merging by Uncertainty-Based Gradient Matching", ICLR 2024. ☆28 · Updated last year
- ☆86 · Updated last year
- ☆20 · Updated last year
- Official implementation of the ICML 2024 paper RoSA (Robust Adaptation) ☆44 · Updated last year
- PyTorch implementation of StableMask (ICML'24) ☆14 · Updated last year
- [ICLR 2025] Official Pytorch Implementation of "Mix-LN: Unleashing the Power of Deeper Layers by Combining Pre-LN and Post-LN" by Pengxia… ☆26 · Updated 3 months ago
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆80 · Updated 2 years ago
- Code accompanying the paper "Massive Activations in Large Language Models" ☆184 · Updated last year
- Code and benchmark for the paper: "A Practitioner's Guide to Continual Multimodal Pretraining" [NeurIPS'24] ☆59 · Updated 10 months ago
- Codes for Merging Large Language Models ☆33 · Updated last year
- Official code for our paper, "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆133 · Updated 6 months ago
- [NeurIPS 2024 Spotlight] Code and data for the paper "Finding Transformer Circuits with Edge Pruning". ☆61 · Updated 2 months ago
- HGRN2: Gated Linear RNNs with State Expansion ☆55 · Updated last year
- [ICLR 2025] Monet: Mixture of Monosemantic Experts for Transformers ☆73 · Updated 4 months ago
- Source code for a LoRA-based continual relation extraction method. ☆12 · Updated 2 years ago
- Code Implementation for "NASH: A Simple Unified Framework of Structured Pruning for Accelerating Encoder-Decoder Language Models" (EMNLP … ☆16 · Updated 2 years ago