mlfoundations / model-soups
Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time
☆503 · Updated last year
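The core recipe is simple enough to sketch: a "uniform soup" element-wise averages the parameters of several models fine-tuned from the same initialization, and the averaged weights are served as a single model. The snippet below is a minimal illustration of that idea, not the repository's own code; `uniform_soup` and the checkpoint paths are hypothetical names, and each file is assumed to hold a plain PyTorch `state_dict`.

```python
# Minimal sketch of a uniform model soup (illustrative, not the repo's API).
import torch

def uniform_soup(checkpoint_paths):
    """Element-wise average of state dicts sharing one architecture/initialization."""
    state_dicts = [torch.load(p, map_location="cpu") for p in checkpoint_paths]
    keys = state_dicts[0].keys()
    # Average every parameter/buffer across the fine-tuned checkpoints.
    return {k: sum(sd[k].float() for sd in state_dicts) / len(state_dicts) for k in keys}

# Usage (hypothetical checkpoint files): load the averaged weights into one model,
# so inference cost stays that of a single fine-tuned model.
# model.load_state_dict(uniform_soup(["ft_lr1e-5.pt", "ft_lr3e-5.pt"]))
```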
Alternatives and similar repositories for model-soups
Users interested in model-soups are comparing it to the libraries listed below.
- A framework for merging models solving different tasks with different initializations into one multi-task model without any additional tr… ☆308 · Updated last year
- Robust fine-tuning of zero-shot models ☆756 · Updated 3 years ago
- ☆693 · Updated 3 weeks ago
- Code release for "Dropout Reduces Underfitting" ☆317 · Updated 2 years ago
- PyTorch code for hierarchical k-means -- a data curation method for self-supervised learning ☆227 · Updated last year
- Implementation of Soft MoE, proposed by Brain's Vision team, in Pytorch ☆339 · Updated 8 months ago
- This repository contains the implementation for the paper "EMP-SSL: Towards Self-Supervised Learning in One Training Epoch." ☆229 · Updated 2 years ago
- ☆210 · Updated 2 years ago
- Reproducible scaling laws for contrastive language-image learning (https://arxiv.org/abs/2212.07143) ☆185 · Updated 6 months ago
- Masked Siamese Networks for Label-Efficient Learning (https://arxiv.org/abs/2204.07141) ☆463 · Updated 3 years ago
- Editing Models with Task Arithmetic ☆523 · Updated last year
- Compare neural networks by their feature similarity ☆377 · Updated 2 years ago
- CLIP-like model evaluation ☆795 · Updated 3 weeks ago
- Official code for "TOAST: Transfer Learning via Attention Steering" ☆188 · Updated 2 years ago
- Official code for our CVPR'22 paper “Vision Transformer Slimming: Multi-Dimension Searching in Continuous Optimization Space” ☆251 · Updated 4 months ago
- Learning from synthetic data - code and models ☆326 · Updated last year
- [NeurIPS 2023] This repository includes the official implementation of our paper "An Inverse Scaling Law for CLIP Training" ☆320 · Updated last year
- Collection of Tools and Papers related to Adapters / Parameter-Efficient Transfer Learning / Fine-Tuning ☆201 · Updated last year
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm ☆672 · Updated 3 years ago
- Fine-tuning Vision Transformers on various classification datasets ☆112 · Updated last year
- DataComp: In search of the next generation of multimodal datasets ☆762 · Updated 8 months ago
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in Pytorch ☆374 · Updated last year
- ☆263 · Updated 4 years ago
- [CVPR 2022] Official code for "Unified Contrastive Learning in Image-Text-Label Space" ☆405 · Updated 2 years ago
- A collection of parameter-efficient transfer learning papers focusing on computer vision and multimodal domains. ☆409 · Updated last year
- FFCV-SSL: Fast Forward Computer Vision for Self-Supervised Learning ☆210 · Updated 2 years ago
- EsViT: Efficient self-supervised Vision Transformers ☆411 · Updated 2 years ago
- When do we not need larger vision models? ☆413 · Updated 10 months ago
- [ICCV 2021 Oral] Official PyTorch implementation for Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decode… ☆886 · Updated 2 years ago
- Official PyTorch implementation of "ML-Decoder: Scalable and Versatile Classification Head" (2021) ☆350 · Updated 2 years ago