calgaryml / condensed-sparsity
[ICLR 2024] Dynamic Sparse Training with Structured Sparsity
☆17 · Updated last year
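The technique named in the title is dynamic sparse training (DST): the sparsity pattern is updated during training by periodically pruning weak units and regrowing promising ones, with "structured" meaning the pattern is applied at neuron/row granularity rather than per weight. Below is a minimal illustrative sketch of one such prune-and-regrow update (RigL-style magnitude prune, gradient regrow) at row granularity; `dst_step` and all other names are hypothetical, and this is not the condensed-sparsity repository's actual API.

```python
# Illustrative sketch only: a generic dynamic sparse training step with
# structured (row/neuron-level) sparsity, in the spirit of prune-and-regrow
# methods such as RigL. NOT the condensed-sparsity repository's API.
import torch

def dst_step(weight: torch.Tensor, grad: torch.Tensor, n_swap: int) -> torch.Tensor:
    """Return a row mask after dropping the weakest active rows and
    regrowing the inactive rows with the largest gradient signal."""
    row_norm = weight.norm(dim=1)          # importance of each output neuron
    active = row_norm > 0                  # rows currently kept
    # Prune: deactivate the n_swap active rows with the smallest norms.
    prune_scores = row_norm.masked_fill(~active, float("inf"))
    drop = prune_scores.topk(n_swap, largest=False).indices
    # Regrow: activate the n_swap inactive rows with the largest gradient norms.
    grow_scores = grad.norm(dim=1).masked_fill(active, float("-inf"))
    grow = grow_scores.topk(n_swap).indices
    mask = active.clone()
    mask[drop] = False
    mask[grow] = True
    return mask

# Toy usage: a 16x8 layer with half its output neurons active.
w = torch.randn(16, 8)
w[8:] = 0.0                                # start with rows 8..15 inactive
g = torch.randn(16, 8)                     # stand-in gradient
mask = dst_step(w, g, n_swap=2)
w *= mask.unsqueeze(1).float()             # apply the updated structured mask
print(f"{int(mask.sum())} of {mask.numel()} rows active")
```

Because pruning and regrowth swap equal numbers of rows, the overall sparsity level stays fixed while the pattern adapts to training signal, which is the core idea distinguishing DST from one-shot pruning.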
Alternatives and similar repositories for condensed-sparsity:
Users interested in condensed-sparsity are comparing it to the libraries listed below.
- Official implementation for Sparse MetA-Tuning (SMAT) ☆16 · Updated 9 months ago
- Recycling diverse models ☆44 · Updated 2 years ago
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆80 · Updated last year
- Sharpness-Aware Minimization Leads to Low-Rank Features [NeurIPS 2023] ☆28 · Updated last year
- Code for "Accelerating Training with Neuron Interaction and Nowcasting Networks" [to appear at ICLR 2025] ☆18 · Updated last month
- ☆51 · Updated 10 months ago
- [Oral, NeurIPS OPT 2024] μLO: Compute-Efficient Meta-Generalization of Learned Optimizers ☆12 · Updated last month
- [ICLR 2023] "Sparsity May Cry: Let Us Fail (Current) Sparse Neural Networks Together!" Shiwei Liu, Tianlong Chen, Zhenyu Zhang, Xuxi Chen… ☆28 · Updated last year
- Official code and data for NeurIPS 2023 paper "ImageNet-Hard: The Hardest Images Remaining from a Study of the Power of Zoom and Spatial … ☆38 · Updated last year
- ☆21 · Updated 2 years ago
- ☆29 · Updated 10 months ago
- Replicating and dissecting the git-re-basin project in one-click-replication Colabs ☆36 · Updated 2 years ago
- Code for the paper "Efficient Dataset Distillation using Random Feature Approximation" ☆37 · Updated 2 years ago
- ☆18 · Updated last year
- HGRN2: Gated Linear RNNs with State Expansion ☆54 · Updated 8 months ago
- Why Do We Need Weight Decay in Modern Deep Learning? [NeurIPS 2024] ☆65 · Updated 6 months ago
- Repository for the PopulAtion Parameter Averaging (PAPA) paper ☆26 · Updated last year
- PyTorch implementation of the paper "ViP: A Differentially Private Foundation Model for Computer Vision" ☆36 · Updated last year
- Code for "Can We Scale Transformers to Predict Parameters of Diverse ImageNet Models?" [ICML 2023]☆32Updated 7 months ago
- PyTorch implementation of Soft MoE by Google Brain in "From Sparse to Soft Mixtures of Experts" (https://arxiv.org/pdf/2308.00951.pdf)☆71Updated last year
- Official code for the paper "Attention as a Hypernetwork"☆28Updated 10 months ago
- Latest Weight Averaging (NeurIPS HITY 2022)☆30Updated last year
- Code for "Merging Text Transformers from Different Initializations"☆20Updated 2 months ago
- Code for visualizing the loss landscape of neural nets☆10Updated 4 years ago
- Code for "NOLA: Compressing LoRA using Linear Combination of Random Basis" ☆53 · Updated 7 months ago
- Code for T-MARS data filtering ☆35 · Updated last year
- Patching open-vocabulary models by interpolating weights ☆91 · Updated last year
- Code release for REPAIR: REnormalizing Permuted Activations for Interpolation Repair ☆47 · Updated last year
- ☆17 · Updated 2 years ago
- ☆9 · Updated last month