VijayLingam95 / SVFT
☆31 · Updated 7 months ago
Alternatives and similar repositories for SVFT
Users interested in SVFT are comparing it to the repositories listed below.
- ☆30 · Updated last year
- ☆34 · Updated 2 years ago
- SLTrain: a sparse plus low-rank approach for parameter and memory efficient pretraining (NeurIPS 2024) · ☆34 · Updated 10 months ago
- [NeurIPS 2023] Make Your Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning · ☆32 · Updated 2 years ago
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… · ☆54 · Updated 2 years ago
- ☆16 · Updated 11 months ago
- ☆85 · Updated last year
- Official implementation of the ICML 2024 paper RoSA (Robust Adaptation) · ☆44 · Updated last year
- ☆30 · Updated last year
- Official repository of "LiNeS: Post-training Layer Scaling Prevents Forgetting and Enhances Model Merging" · ☆30 · Updated 10 months ago
- Repository for "Model Merging by Uncertainty-Based Gradient Matching" (ICLR 2024) · ☆28 · Updated last year
- [ICLR 2025] Monet: Mixture of Monosemantic Experts for Transformers · ☆70 · Updated 2 months ago
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) · ☆81 · Updated 2 years ago
- ☆38 · Updated last year
- ☆20 · Updated last year
- ☆28 · Updated 7 months ago
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models · ☆55 · Updated 7 months ago
- Source code for a LoRA-based continual relation extraction method · ☆12 · Updated last year
- [ICLR 2024] Repository for the paper "DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning" · ☆96 · Updated last year
- ☆19 · Updated 7 months ago
- HGRN2: Gated Linear RNNs with State Expansion · ☆54 · Updated last year
- Code accompanying the paper "Massive Activations in Large Language Models" · ☆179 · Updated last year
- ☆74 · Updated 3 years ago
- Code for Merging Large Language Models · ☆33 · Updated last year
- [NeurIPS 2024 Spotlight] Code and data for the paper "Finding Transformer Circuits with Edge Pruning" · ☆59 · Updated last month
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" · ☆83 · Updated 10 months ago
- Unofficial implementation of the Selective Attention Transformer · ☆17 · Updated 10 months ago
- ☆190 · Updated last year
- AdaSplash: Adaptive Sparse Flash Attention (aka Flash Entmax Attention) · ☆20 · Updated 2 months ago
- [ICML 2024 Spotlight] Fine-Tuning Pre-trained Large Language Models Sparsely · ☆23 · Updated last year