mlfoundations / model-soups
Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time
☆449 · Updated 7 months ago
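The headline technique, a "uniform soup," averages the weights of several models fine-tuned from the same initialization. A minimal sketch of that averaging step is below; it uses plain Python dicts of floats in place of real tensor checkpoints, so it is illustrative only and not the repository's actual implementation (which operates on PyTorch state dicts).

```python
def uniform_soup(state_dicts):
    """Return a new state dict whose entries are the element-wise mean
    of the corresponding entries across all input state dicts.

    Assumes every state dict shares identical keys and shapes, which
    holds when all models were fine-tuned from the same initialization.
    """
    if not state_dicts:
        raise ValueError("need at least one state dict")
    keys = state_dicts[0].keys()
    return {k: sum(sd[k] for sd in state_dicts) / len(state_dicts) for k in keys}

# Toy example: two "checkpoints" with scalar parameters.
ckpt_a = {"layer.weight": 1.0, "layer.bias": 0.0}
ckpt_b = {"layer.weight": 3.0, "layer.bias": 2.0}
soup = uniform_soup([ckpt_a, ckpt_b])
# soup == {"layer.weight": 2.0, "layer.bias": 1.0}
```

With real PyTorch models the same recipe applies per-tensor over `model.state_dict()` entries, after which the averaged dict is loaded back with `load_state_dict`; the paper's "greedy soup" variant instead adds checkpoints one at a time, keeping each only if held-out accuracy improves.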
Alternatives and similar repositories for model-soups:
Users interested in model-soups are comparing it to the repositories listed below
- Robust fine-tuning of zero-shot models ☆673 · Updated 2 years ago
- A framework for merging models solving different tasks with different initializations into one multi-task model without any additional tr… ☆294 · Updated last year
- This repository contains the implementation for the paper "EMP-SSL: Towards Self-Supervised Learning in One Training Epoch." ☆224 · Updated last year
- ☆608 · Updated last month
- Code release for "Dropout Reduces Underfitting" ☆312 · Updated last year
- Learning from synthetic data - code and models ☆310 · Updated last year
- PyTorch code for hierarchical k-means -- a data curation method for self-supervised learning ☆146 · Updated 8 months ago
- [NeurIPS 2023] This repository includes the official implementation of our paper "An Inverse Scaling Law for CLIP Training" ☆309 · Updated 9 months ago
- Masked Siamese Networks for Label-Efficient Learning (https://arxiv.org/abs/2204.07141) ☆455 · Updated 2 years ago
- A method to increase the speed and lower the memory footprint of existing vision transformers. ☆1,017 · Updated 8 months ago
- Implementation of Soft MoE, proposed by Brain's Vision team, in PyTorch ☆264 · Updated 10 months ago
- FFCV-SSL: Fast Forward Computer Vision for Self-Supervised Learning ☆205 · Updated last year
- ☆199 · Updated last year
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in PyTorch ☆314 · Updated 8 months ago
- Collection of tools and papers related to Adapters / Parameter-Efficient Transfer Learning / Fine-Tuning ☆186 · Updated 9 months ago
- A collection of parameter-efficient transfer learning papers focusing on computer vision and multimodal domains ☆397 · Updated 5 months ago
- EsViT: Efficient self-supervised Vision Transformers ☆411 · Updated last year
- Official code for our CVPR'22 paper "Vision Transformer Slimming: Multi-Dimension Searching in Continuous Optimization Space" ☆248 · Updated last year
- Reproducible scaling laws for contrastive language-image learning (https://arxiv.org/abs/2212.07143) ☆157 · Updated last year
- [NeurIPS 2023] Text data, code, and pre-trained models for the paper "Improving CLIP Training with Language Rewrites" ☆267 · Updated last year
- Official code for "TOAST: Transfer Learning via Attention Steering" ☆188 · Updated last year
- Official open-source code for "Scaling Language-Image Pre-training via Masking" ☆414 · Updated last year
- iBOT: Image BERT Pre-Training with Online Tokenizer (ICLR 2022) ☆711 · Updated 2 years ago
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm ☆647 · Updated 2 years ago
- MultiMAE: Multi-modal Multi-task Masked Autoencoders (ECCV 2022) ☆566 · Updated 2 years ago
- (ICLR 2022 Spotlight) Official PyTorch implementation of "How Do Vision Transformers Work?" ☆809 · Updated 2 years ago
- Editing Models with Task Arithmetic ☆453 · Updated last year
- CLIP-like model evaluation ☆665 · Updated last week
- When do we not need larger vision models? ☆369 · Updated 3 weeks ago
- A PyTorch implementation of Sparsely-Gated Mixture of Experts, for massively increasing the parameter count of language models ☆698 · Updated last year