SMILE-data / SMILE
SMILE: A Multimodal Dataset for Understanding Laughter
☆13 · Updated 2 years ago
Alternatives and similar repositories for SMILE
Users interested in SMILE are comparing it to the libraries listed below.
- VPEval Codebase from Visual Programming for Text-to-Image Generation and Evaluation (NeurIPS 2023) ☆45 · Updated 2 years ago
- [ICLR 2025] CREMA: Generalizable and Efficient Video-Language Reasoning via Multimodal Modular Fusion ☆55 · Updated 7 months ago
- ViLMA: A Zero-Shot Benchmark for Linguistic and Temporal Grounding in Video-Language Models (ICLR 2024, Official Implementation) ☆16 · Updated 2 years ago
- Visual Programming for Text-to-Image Generation and Evaluation (NeurIPS 2023) ☆57 · Updated 2 years ago
- ☆58 · Updated last year
- Official implementation and dataset for the NAACL 2024 paper "ComCLIP: Training-Free Compositional Image and Text Matching" ☆37 · Updated last year
- ☆53 · Updated 2 years ago
- Implementation of CounterCurate, a data curation pipeline for both physical and semantic counterfactual image-caption pairs ☆19 · Updated last year
- Code and datasets for "Text encoders are performance bottlenecks in contrastive vision-language models". Coming soon! ☆11 · Updated 2 years ago
- Code for "Are “Hierarchical” Visual Representations Hierarchical?" in the NeurIPS Workshop on Symmetry and Geometry in Neural Representation… ☆22 · Updated 2 years ago
- https://arxiv.org/abs/2209.15162 ☆53 · Updated 3 years ago
- Official repo for the TMLR paper "Discffusion: Discriminative Diffusion Models as Few-shot Vision and Language Learners" ☆30 · Updated last year
- Distributed optimization infrastructure for learning CLIP models ☆27 · Updated last year
- Rare-to-Frequent (R2F), ICLR 2025 Spotlight ☆53 · Updated 9 months ago
- [ECCV 2024] Official repository for "BEAF: Observing Before-AFter Changes to Evaluate Hallucination in Vision-Language Models" ☆21 · Updated 10 months ago
- Code and data for the paper "Learning Action and Reasoning-Centric Image Editing from Videos and Simulation" ☆33 · Updated 7 months ago
- ☆11 · Updated last year
- Code for the ICLR 2024 paper "PerceptionCLIP: Visual Classification by Inferring and Conditioning on Contexts" ☆79 · Updated last year
- Evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… ☆159 · Updated 4 months ago
- A benchmark dataset and simple code examples for measuring the perception and reasoning of multi-sensor vision-language models ☆19 · Updated last year
- ☆56 · Updated 5 months ago
- Data-Efficient Multimodal Fusion on a Single GPU ☆68 · Updated last year
- [ICLR 2025] Source code for the paper "A Spark of Vision-Language Intelligence: 2-Dimensional Autoregressive Transformer for Efficient Finegr…" ☆79 · Updated last year
- ☆30 · Updated 2 years ago
- Official implementation of "MyVLM: Personalizing VLMs for User-Specific Queries" (ECCV 2024) ☆185 · Updated last year
- [ACL 2024 Findings & ICLR 2024 WS] An evaluator VLM that is open-source, offers reproducible evaluation, and is inexpensive to use. Specific… ☆79 · Updated last year
- SIEVE: Multimodal Dataset Pruning using Image-Captioning Models (CVPR 2024) ☆18 · Updated last year
- Code for the paper "Hyperbolic Image-Text Representations", Desai et al., ICML 2023 ☆196 · Updated 2 years ago
- [NAACL 2024] Vision-language model that reduces hallucinations through self-feedback guided revision. Visualizes attentions on image feat… ☆47 · Updated last year
- [CVPR 2023] HierVL: Learning Hierarchical Video-Language Embeddings ☆46 · Updated 2 years ago