mbzuai-oryx / TimeTravel
[ACL 2025 🔥] TimeTravel: A Comprehensive Benchmark to Evaluate LMMs on Historical and Cultural Artifacts
☆18 · Updated 4 months ago
Alternatives and similar repositories for TimeTravel
Users interested in TimeTravel are comparing it to the libraries listed below.
- Official code repository of paper titled "Test-Time Low Rank Adaptation via Confidence Maximization for Zero-Shot Generalization of Visio…" ☆28 · Updated 5 months ago
- [CVPRW 2025] Official repository of paper titled "Towards Evaluating the Robustness of Visual State Space Models" ☆25 · Updated 4 months ago
- [ACCV 2024] ObjectCompose: Evaluating Resilience of Vision-Based Models on Object-to-Background Compositional Changes ☆37 · Updated 8 months ago
- [CVPR 2025] Mitigating Object Hallucinations in Large Vision-Language Models with Assembly of Global and Local Attention ☆47 · Updated last year
- [CVPRW-25 MMFM] Official repository of paper titled "How Good is my Video LMM? Complex Video Reasoning and Robustness Evaluation Suite fo…" ☆50 · Updated last year
- [BMVC 2025] Official implementation of the paper "PerSense: Personalized Instance Segmentation in Dense Images" ☆27 · Updated 3 weeks ago
- [CVPR 2025 🔥] A Large Multimodal Model for Pixel-Level Visual Grounding in Videos ☆86 · Updated 5 months ago
- [MICCAI 2023, Early Accept] Official code repository of paper titled "Cross-modulated Few-shot Image Generation for Colorectal Tissue Cla…" ☆47 · Updated 2 years ago
- Code for "Enhancing In-context Learning via Linear Probe Calibration" ☆36 · Updated last year
- Do Vision and Language Models Share Concepts? A Vector Space Alignment Study ☆16 · Updated 10 months ago
- [ECCVW 2024 Oral] Official repository of paper titled "Makeup-Guided Facial Privacy Protection via Untrained Neural Network Priors" ☆12 · Updated last year
- [NAACL'25] Code and documentation for our VANE-Bench paper ☆15 · Updated last month
- [ECCV 2024] Improving Zero-shot Generalization of Learned Prompts via Unsupervised Knowledge Distillation ☆63 · Updated 3 months ago
- VideoMathQA: a benchmark designed to evaluate mathematical reasoning in real-world educational videos ☆16 · Updated 3 months ago
- [ICML'25] Kernel-based Unsupervised Embedding Alignment for Enhanced Visual Representation in Vision-Language Models ☆17 · Updated last month
- [NeurIPS 2024] Official PyTorch implementation of "Improving Compositional Reasoning of CLIP via Synthetic Vision-Language Negatives" ☆44 · Updated 10 months ago
- [CVPR 2024 Highlight] Official implementation of Transferable Visual Prompting, from the paper "Exploring the Transferability of Visual Prompt…" ☆44 · Updated 9 months ago
- A code ☆18 · Updated 8 months ago
- Reasoning to Attend: Try to Understand How <SEG> Token Works (CVPR 2025), by Rui Qian, Xin Yin, and Dejing Dou ☆44 · Updated last week
- [ICLR 2025] Cross the Gap: Exposing the Intra-modal Misalignment in CLIP via Modality Inversion ☆53 · Updated 5 months ago
- Learnable Weight Initialization for Volumetric Medical Image Segmentation [Elsevier AIM 2024] ☆22 · Updated 11 months ago
- [ICLR 2025] See What You Are Told: Visual Attention Sink in Large Multimodal Models ☆51 · Updated 7 months ago
- [ICCVW 2025 Oral] Robust-LLaVA: On the Effectiveness of Large-Scale Robust Image Encoders for Multi-modal Large Language Models ☆25 · Updated last month
- [BMVC 2022 Oral] Official repository for "Adversarial Pixel Restoration as a Pretext Task for Transferable Perturbations" ☆34 · Updated 2 years ago
- [CVPR 2024] Improving language-visual pretraining efficiency by performing cluster-based masking on images ☆29 · Updated last year
- [NeurIPS 2024] Official PyTorch implementation of LoTLIP: Improving Language-Image Pre-training for Long Text Understanding ☆45 · Updated 8 months ago
- Official implementation of CODE ☆15 · Updated last year
- PyTorch implementation for the CVPR 2024 paper: Learn to Rectify the Bias of CLIP for Unsupervised Semantic Segmentation ☆53 · Updated last month
- PyTorch code for "Contrastive Region Guidance: Improving Grounding in Vision-Language Models without Training" ☆36 · Updated last year
- CLIP-MoE: Mixture of Experts for CLIP ☆47 · Updated last year