krafton-ai / mini-batch-cl
☆11 · Updated last year
Alternatives and similar repositories for mini-batch-cl:
Users interested in mini-batch-cl are comparing it to the repositories listed below.
- Self-Contrastive Learning: Single-viewed Supervised Contrastive Framework using Sub-network (AAAI 2023) ☆20 · Updated last year
- Code for LISA (ICML 2022) ☆47 · Updated last year
- Code for NeurIPS'23 paper "A Bayesian Approach To Analysing Training Data Attribution In Deep Learning" ☆15 · Updated last year
- Code for "Surgical Fine-Tuning Improves Adaptation to Distribution Shifts" published at ICLR 2023 ☆29 · Updated last year
- ☆15 · Updated last year
- Model Stock: All we need is just a few fine-tuned models ☆100 · Updated 3 months ago
- ☆34 · Updated 5 months ago
- This repository contains the code for our paper "Probabilistic Contrastive Learning Recovers the Correct Aleatoric Uncertainty of Ambiguous Inputs" ☆39 · Updated last year
- Domain Adaptation and Adapters ☆16 · Updated last year
- ☆20 · Updated last year
- ☆127 · Updated 2 years ago
- ☆26 · Updated 7 months ago
- BenchBench is a Python package to evaluate multi-task benchmarks ☆13 · Updated 6 months ago
- A PyTorch implementation of Luna: Linear Unified Nested Attention ☆41 · Updated 3 years ago
- MambaFormer in-context learning experiments and implementation for https://arxiv.org/abs/2402.04248 ☆35 · Updated 7 months ago
- ☆29 · Updated 2 years ago
- Data for "Datamodels: Predicting Predictions with Training Data" ☆94 · Updated last year
- ☆43 · Updated 2 years ago
- Official implementation for NeurIPS'23 paper "Geodesic Multi-Modal Mixup for Robust Fine-Tuning" ☆29 · Updated 3 months ago
- ☆27 · Updated 6 months ago
- A PyTorch implementation of the paper "ViP: A Differentially Private Foundation Model for Computer Vision" ☆36 · Updated last year
- ☆107 · Updated last year
- Code for "SAM as an Optimal Relaxation of Bayes", ICLR 2023 ☆24 · Updated last year
- A modern look at the relationship between sharpness and generalization [ICML 2023] ☆43 · Updated last year
- ☆63 · Updated 2 years ago
- Repository for research works and resources related to model reprogramming <https://arxiv.org/abs/2202.10629> ☆59 · Updated 10 months ago
- ☆29 · Updated 2 years ago
- Why Do We Need Weight Decay in Modern Deep Learning? [NeurIPS 2024] ☆58 · Updated 3 months ago
- Implementation of "Beyond Neural Scaling Laws: Beating Power Law Scaling" for deep models and prototype-based models ☆33 · Updated last month
- [NeurIPS 2023] Official repository for "Distilling Out-of-Distribution Robustness from Vision-Language Foundation Models" ☆12 · Updated 7 months ago