wjxts / RegularizedBN
☆21 · Updated 2 years ago
Alternatives and similar repositories for RegularizedBN:
Users interested in RegularizedBN are comparing it to the repositories listed below.
- The official repository for our paper "The Dual Form of Neural Networks Revisited: Connecting Test Time Predictions to Training Patterns …" ☆16 · Updated last year
- [NeurIPS 2023] Make Your Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning ☆31 · Updated last year
- Mixture of Attention Heads ☆44 · Updated 2 years ago
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆49 · Updated 2 years ago
- PyTorch implementation of "From Sparse to Soft Mixtures of Experts" ☆53 · Updated last year
- HGRN2: Gated Linear RNNs with State Expansion ☆54 · Updated 7 months ago
- [ICLR 2021] "Long Live the Lottery: The Existence of Winning Tickets in Lifelong Learning" by Tianlong Chen*, Zhenyu Zhang*, Sijia Liu, S… ☆25 · Updated 3 years ago
- Official code for the paper "Attention as a Hypernetwork" ☆27 · Updated 9 months ago
- Code for T-MARS data filtering ☆35 · Updated last year
- Structured Pruning Adapters in PyTorch ☆17 · Updated last year
- Why Do We Need Weight Decay in Modern Deep Learning? [NeurIPS 2024] ☆65 · Updated 6 months ago
- Implementation of Beyond Neural Scaling beating power laws for deep models and prototype-based models ☆33 · Updated 2 weeks ago
- [NeurIPS 2022] Your Transformer May Not be as Powerful as You Expect (official implementation) ☆34 · Updated last year
- ☆20 · Updated last year
- ☆17 · Updated 3 months ago
- ☆57 · Updated 2 years ago
- ☆32 · Updated last year
- ☆18 · Updated 9 months ago
- ☆102 · Updated last year
- Implementation of the paper "Training-Free Pretrained Model Merging" (CVPR 2024) ☆29 · Updated last year
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆26 · Updated 11 months ago
- (CVPR 2022) Automated Progressive Learning for Efficient Training of Vision Transformers ☆25 · Updated last month
- DropIT: Dropping Intermediate Tensors for Memory-Efficient DNN Training (ICLR 2023) ☆30 · Updated 2 years ago
- On the Effectiveness of Parameter-Efficient Fine-Tuning ☆38 · Updated last year
- [ICML 2023] Instant Soup: Cheap Pruning Ensembles in A Single Pass Can Draw Lottery Tickets from Large Models. Ajay Jaiswal, Shiwei Liu, Ti… ☆11 · Updated last year
- Official code for "pi-Tuning: Transferring Multimodal Foundation Models with Optimal Multi-task Interpolation", ICML 2023.☆32Updated last year
- ☆51Updated 10 months ago
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023)☆80Updated last year
- [Preprint] Why is the State of Neural Network Pruning so Confusing? On the Fairness, Comparison Setup, and Trainability in Network Prunin…☆40Updated 2 years ago
- ☆33Updated 4 years ago