rajesh-lab / symile
Symile is a flexible, architecture-agnostic contrastive loss that enables training modality-specific representations for any number of modalities.
☆16 · Updated last week
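At its core, the idea is to replace the pairwise dot-product similarity of CLIP-style objectives with a similarity scored over an entire tuple of modalities. Below is a minimal PyTorch sketch of one such construction, using the multilinear inner product (the elementwise product of all embeddings, summed over dimensions) as the tuple score; the function names, temperature value, and anchor-versus-negatives scheme are illustrative assumptions, not the repo's actual API.

```python
# Minimal sketch (not the repo's API) of a three-modality contrastive loss
# scored by the multilinear inner product: MIP(a, b, c) = sum_d a_d * b_d * c_d.
import torch
import torch.nn.functional as F

def mip_contrastive_loss(anchor, x, y, temperature=0.07):
    """anchor, x, y: [N, D] L2-normalized embeddings of N aligned triples.

    The positive for anchor_i is its own triple (anchor_i, x_i, y_i); the
    negatives are the mismatched triples (anchor_i, x_j, y_j) for j != i.
    This negative-sampling scheme is an assumption made for illustration.
    """
    xy = x * y                                  # [N, D] Hadamard product of the two context modalities
    logits = anchor @ xy.T / temperature        # logits[i, j] = MIP(anchor_i, x_j, y_j)
    targets = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, targets)     # standard InfoNCE over the tuple scores

def symile_style_loss(a, b, c):
    # Symmetrize by letting each modality take a turn as the anchor.
    return (mip_contrastive_loss(a, b, c)
            + mip_contrastive_loss(b, a, c)
            + mip_contrastive_loss(c, a, b)) / 3.0
```

With only two modalities, `xy` reduces to `x` and the anchored loss collapses to one direction of the usual CLIP objective, which is what makes this construction a natural generalization to any number of modalities.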
Related projects
Alternatives and complementary repositories for symile
- ViLLA: Fine-grained vision-language representation learning from real-world data ☆40 · Updated last year
- Code and datasets for "What’s “up” with vision-language models? Investigating their struggle with spatial reasoning" ☆34 · Updated 8 months ago
- Holistic evaluation of multimodal foundation models ☆41 · Updated 3 months ago
- [NeurIPS 2023] Factorized Contrastive Learning: Going Beyond Multi-view Redundancy ☆61 · Updated last year
- More dimensions = More fun ☆21 · Updated 3 months ago
- "Worse than Random? An Embarrassingly Simple Probing Evaluation of Large Multimodal Models in Medical VQA" ☆15 · Updated 5 months ago
- Code for experiments for "ConvNet vs Transformer, Supervised vs CLIP: Beyond ImageNet Accuracy" ☆97 · Updated 2 months ago
- Code for the paper Self-Supervised Learning of Split Invariant Equivariant Representations ☆26 · Updated last year
- [ICCV23] Official implementation of eP-ALM: Efficient Perceptual Augmentation of Language Models. ☆27 · Updated last year
- [NeurIPS 2023] Make Your Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning ☆29 · Updated last year
- Code for NOLA, an implementation of "NOLA: Compressing LoRA using Linear Combination of Random Basis" ☆49 · Updated 2 months ago
- Official repo of Progressive Data Expansion: data, code, and evaluation ☆27 · Updated last year
- Official Code Release for "Diagnosing and Rectifying Vision Models using Language" (ICLR 2023)