amazon-science / adaptive-feature-transfer
Official implementation of Adaptive Feature Transfer (AFT)
☆23 · Updated last year
Alternatives and similar repositories for adaptive-feature-transfer
Users interested in adaptive-feature-transfer are comparing it to the repositories listed below
- Code and benchmark for the paper "A Practitioner's Guide to Continual Multimodal Pretraining" [NeurIPS'24] ☆61 · Updated last year
- HGRN2: Gated Linear RNNs with State Expansion ☆56 · Updated last year
- Official code for the ICLR 2024 paper "Non-negative Contrastive Learning" ☆46 · Updated last year
- Official code for the paper "Attention as a Hypernetwork" ☆47 · Updated last year
- Official PyTorch implementation of "Vision-Language Models Create Cross-Modal Task Representations" (ICML 2025) ☆31 · Updated 9 months ago
- Official PyTorch implementation of CLIPPR ☆30 · Updated 2 years ago
- PyTorch implementation of the paper "ViP: A Differentially Private Foundation Model for Computer Vision" ☆36 · Updated 2 years ago
- Unofficial implementation of Selective Attention Transformer ☆20 · Updated last year
- Why Do We Need Weight Decay in Modern Deep Learning? [NeurIPS 2024] ☆70 · Updated last year
- [NeurIPS'24] Multilinear Mixture of Experts: Scalable Expert Specialization through Factorization ☆38 · Updated last year
- Code for T-MARS data filtering ☆35 · Updated 2 years ago
- Code for "Are "Hierarchical" Visual Representations Hierarchical?" in the NeurIPS Workshop for Symmetry and Geometry in Neural Representation… ☆22 · Updated 2 years ago
- Latest Weight Averaging (NeurIPS HITY 2022) ☆32 · Updated 2 years ago
- Official repository of "LiNeS: Post-training Layer Scaling Prevents Forgetting and Enhances Model Merging" ☆31 · Updated last year
- Official implementation of RMoE (Layerwise Recurrent Router for Mixture-of-Experts) ☆29 · Updated last year
- [ICML 2025] Code for "R2-T2: Re-Routing in Test-Time for Multimodal Mixture-of-Experts" ☆17 · Updated 10 months ago
- Distributed optimization infrastructure for learning CLIP models ☆27 · Updated last year
- Code for Principal Masked Autoencoders ☆30 · Updated last week
- [NeurIPS 2024] VeLoRA: Memory-Efficient Training using Rank-1 Sub-Token Projections ☆21 · Updated last year
- Measuring the Signal-to-Noise Ratio in Language Model Evaluation ☆28 · Updated 5 months ago
- [ICML 2023] Instant Soup: Cheap Pruning Ensembles in a Single Pass Can Draw Lottery Tickets from Large Models. Ajay Jaiswal, Shiwei Liu, Ti… ☆11 · Updated 2 years ago
- [ICLR 2025] Drop-Upcycling: Training Sparse Mixture of Experts with Partial Re-initialization ☆23 · Updated 3 months ago
- User-friendly implementation of the Mixture-of-Sparse-Attention (MoSA). MoSA selects distinct tokens for each head with expert choice rou… ☆28 · Updated 8 months ago
- Do Vision and Language Models Share Concepts? A Vector Space Alignment Study ☆16 · Updated last year
- Model Merging with SVD to Tie the KnOTS [ICLR 2025] ☆85 · Updated 9 months ago
- Official repository for the paper "The Dual Form of Neural Networks Revisited: Connecting Test Time Predictions to Training Patterns … ☆16 · Updated 7 months ago
- ☆20 · Updated 2 months ago
- ☆34 · Updated 11 months ago
- Code for the NeurIPS 2024 Spotlight "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆88 · Updated last year
- [ICLR 2025] Official PyTorch implementation of "Mix-LN: Unleashing the Power of Deeper Layers by Combining Pre-LN and Post-LN" by Pengxia… ☆29 · Updated 6 months ago