wjxts / RegularizedBN
☆21 · Updated 2 years ago
Alternatives and similar repositories for RegularizedBN
Users interested in RegularizedBN are comparing it to the repositories listed below.
- ☆106 · Updated last year
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆56 · Updated 2 years ago
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆81 · Updated 2 years ago
- ☆79 · Updated 3 years ago
- [NeurIPS 2023] Make Your Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning ☆33 · Updated 2 years ago
- [EVA ICLR'23; LARA ICML'22] Efficient attention mechanisms via control variates, random features, and importance sampling ☆87 · Updated 2 years ago
- [NeurIPS 2024] VeLoRA: Memory Efficient Training using Rank-1 Sub-Token Projections ☆21 · Updated last year
- ☆29 · Updated 3 years ago
- [ICML 2024] Junk DNA Hypothesis: A Task-Centric Angle of LLM Pre-trained Weights through Sparsity; Lu Yin*, Ajay Jaiswal*, Shiwei Liu, So… ☆16 · Updated 7 months ago
- Official implementation of the paper "A deeper look at depth pruning of LLMs" ☆15 · Updated last year
- Code accompanying the paper "Massive Activations in Large Language Models" ☆187 · Updated last year
- PyTorch implementation of "From Sparse to Soft Mixtures of Experts" ☆66 · Updated 2 years ago
- Code for T-MARS data filtering ☆35 · Updated 2 years ago
- Patching open-vocabulary models by interpolating weights ☆91 · Updated 2 years ago
- Metrics for "Beyond neural scaling laws: beating power law scaling via data pruning" (NeurIPS 2022 Outstanding Paper Award) ☆57 · Updated 2 years ago
- ☆33 · Updated 4 years ago
- ☆30 · Updated 2 years ago
- ☆32 · Updated last year
- ☆21 · Updated 2 years ago
- ☆35 · Updated last year
- Why Do We Need Weight Decay in Modern Deep Learning? [NeurIPS 2024] ☆69 · Updated last year
- Experiments for "A Closer Look at In-Context Learning under Distribution Shifts" ☆19 · Updated 2 years ago
- ☆52 · Updated last year
- ☆51 · Updated last year
- [EMNLP 2022] Official implementation of Transnormer from the paper "The Devil in Linear Transformer" ☆64 · Updated 2 years ago
- Parameter Efficient Transfer Learning with Diff Pruning ☆74 · Updated 4 years ago
- [CVPR 2022] Automated Progressive Learning for Efficient Training of Vision Transformers ☆25 · Updated 9 months ago
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆143 · Updated 5 months ago
- ☆58 · Updated 2 years ago
- Repository of the paper "Accelerating Transformer Inference for Translation via Parallel Decoding" ☆121 · Updated last year