zzp1012 / LLFC
[NeurIPS 2023] Code release for "Going Beyond Linear Mode Connectivity: The Layerwise Linear Feature Connectivity"
☆16 · Updated last year
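For context: the layerwise linear feature connectivity (LLFC) property studied in the paper says that when two trained networks are interpolated linearly in weight space, each layer's activations of the interpolated network approximately match a (positively scaled) linear interpolation of the two endpoints' activations. Below is a minimal, hypothetical PyTorch sketch of how one might probe this property; `model`, `get_features`, and the state dicts are placeholders and not part of this repository's API.

```python
# Hypothetical sketch of probing layerwise linear feature connectivity (LLFC).
# `model`, `get_features`, `sd_a`, `sd_b`, and `x` are placeholders -- they are
# NOT part of the LLFC repository's API.
import torch
import torch.nn.functional as F


def interpolate_state_dicts(sd_a, sd_b, alpha):
    """Weight-space interpolation: (1 - alpha) * A + alpha * B for every tensor."""
    return {k: (1 - alpha) * sd_a[k].float() + alpha * sd_b[k].float() for k in sd_a}


@torch.no_grad()
def llfc_similarity(model, sd_a, sd_b, get_features, x, alpha=0.5):
    """Per-layer cosine similarity between the interpolated model's features and
    the linear interpolation of the two endpoint models' features."""
    feats = {}
    for tag, sd in (("a", sd_a), ("b", sd_b),
                    ("mix", interpolate_state_dicts(sd_a, sd_b, alpha))):
        model.load_state_dict(sd)            # note: buffers (e.g. BatchNorm stats) may need extra care
        model.eval()
        feats[tag] = get_features(model, x)  # expected: list of (batch, ...) activation tensors
    sims = []
    for f_a, f_b, f_mix in zip(feats["a"], feats["b"], feats["mix"]):
        target = (1 - alpha) * f_a + alpha * f_b
        sims.append(F.cosine_similarity(f_mix.flatten(1), target.flatten(1)).mean().item())
    return sims  # values near 1.0 at every layer are consistent with LLFC
```

Cosine similarity is used here because LLFC only requires the features to agree up to a positive scaling, not to be exactly equal.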
Related projects
Alternatives and complementary repositories for LLFC
- Metrics for "Beyond neural scaling laws: beating power law scaling via data pruning" (NeurIPS 2022 Outstanding Paper Award) ☆53 · Updated last year
- This repository is the official implementation of Dataset Condensation with Contrastive Signals (DCC), accepted at ICML 2022. ☆20 · Updated 2 years ago
- AdaMerging: Adaptive Model Merging for Multi-Task Learning. ICLR, 2024. ☆49 · Updated last week
- Sharpness-Aware Minimization Leads to Low-Rank Features [NeurIPS 2023] ☆25 · Updated last year
- [NeurIPS 2024] "Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?" ☆10 · Updated last week
- PyTorch implementation of "From Sparse to Soft Mixtures of Experts" ☆44 · Updated last year
- Implementation of "Beyond Neural Scaling beating power laws" for deep models and prototype-based models ☆32 · Updated 2 months ago
- Code for the paper "Efficient Dataset Distillation using Random Feature Approximation" ☆36 · Updated last year
- Code for the paper "Parameter Efficient Multi-task Model Fusion with Partial Linearization" ☆13 · Updated last month
- Official repo for PAC-Bayes Information Bottleneck. ICLR 2022. ☆46 · Updated 2 years ago
- [NeurIPS 2022] A novel 1-Lipschitz network that can be efficiently trained to achieve certified L-infinity robustness for free! ☆30 · Updated 2 years ago
- Official repository of "Localizing Task Information for Improved Model Merging and Compression" [ICML 2024] ☆34 · Updated 2 weeks ago
- A curated list of Model Merging methods. ☆82 · Updated last month
- Official code for "pi-Tuning: Transferring Multimodal Foundation Models with Optimal Multi-task Interpolation", ICML 2023. ☆32 · Updated last year
- Representation Surgery for Multi-Task Model Merging. ICML, 2024. ☆27 · Updated last month
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ☆29 · Updated 2 weeks ago
- PyTorch implementation of the paper "Discovering and Explaining the Representation Bottleneck of DNNs" (ICLR 2022) ☆37 · Updated last week
- SLTrain: a sparse plus low-rank approach for parameter and memory efficient pretraining (NeurIPS 2024) ☆24 · Updated last week
- [ICLR 2024] Improving Convergence and Generalization Using Parameter Symmetries ☆28 · Updated 5 months ago
- [TPAMI 2023] Low Dimensional Landscape Hypothesis is True: DNNs can be Trained in Tiny Subspaces ☆39 · Updated 2 years ago
- Official Code for Dataset Distillation using Neural Feature Regression (NeurIPS 2022) ☆46 · Updated last year
- Code for the paper: "What Data Benefits My Classifier?" Enhancing Model Performance and Interpretability through Influence-Based Data Selecti… ☆22 · Updated 5 months ago
- The code of the paper "Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation" (CVPR 2023) ☆40 · Updated last year
- [ICML 2024] Junk DNA Hypothesis: A Task-Centric Angle of LLM Pre-trained Weights through Sparsity; Lu Yin*, Ajay Jaiswal*, Shiwei Liu, So… ☆15 · Updated 5 months ago
- With respect to the input tensor instead of the parameters of the NN ☆15 · Updated 2 years ago