top-yun / SPARK
A benchmark dataset and simple code examples for measuring the perception and reasoning of multi-sensor vision-language models.
☆18 · Updated 5 months ago
Alternatives and similar repositories for SPARK
Users interested in SPARK are comparing it to the repositories listed below.
- Implementation of MC-ViT from the paper "Memory Consolidation Enables Long-Context Video Understanding" ☆20 · Updated 2 months ago
- Code and data for the paper "SELMA: Learning and Merging Skill-Specific Text-to-Image Experts with Auto-Generated Data" ☆34 · Updated last year
- Implementation of CounterCurate, a data-curation pipeline for both physical and semantic counterfactual image-caption pairs ☆18 · Updated 11 months ago
- Official repository of Personalized Visual Instruct Tuning ☆29 · Updated 3 months ago
- Official implementation and dataset for the NAACL 2024 paper "ComCLIP: Training-Free Compositional Image and Text Matching" ☆35 · Updated 10 months ago
- ☆42 · Updated 7 months ago
- [NeurIPS 2024] Official implementation of "Image Copy Detection for Diffusion Models" ☆16 · Updated 8 months ago
- Official PyTorch implementation of Self-emerging Token Labeling ☆34 · Updated last year
- [EMNLP 2024] Official code for "Beyond Embeddings: The Promise of Visual Table in Multi-Modal Models" ☆18 · Updated 8 months ago
- Official code of "PixelWorld: Towards Perceiving Everything as Pixels" ☆14 · Updated 4 months ago
- ☆13 · Updated 9 months ago
- Code for "Are “Hierarchical” Visual Representations Hierarchical?" from the NeurIPS Workshop for Symmetry and Geometry in Neural Representation… ☆21 · Updated last year
- Benchmarking Video-LLMs on Video Spatio-Temporal Reasoning ☆23 · Updated last month
- Code for the paper "Unified Text-to-Image Generation and Retrieval" ☆15 · Updated 11 months ago
- ☆10 · Updated 3 months ago
- [CVPR 2024] Official implementation of the paper "Synthesize, Diagnose, and Optimize: Towards Fine-Grained Vision-Language Understanding" ☆43 · Updated last week
- [ECCV 2024] Official repository for "BEAF: Observing Before-AFter Changes to Evaluate Hallucination in Vision-Language Models" ☆20 · Updated 2 months ago
- PyTorch code for "ADEM-VL: Adaptive and Embedded Fusion for Efficient Vision-Language Tuning" ☆20 · Updated 7 months ago
- [Under Review] Official PyTorch implementation of the technical part of Phantom of Latent representing equipped with enla… ☆60 · Updated 8 months ago
- PyTorch implementation of "HyperLLaVA: Dynamic Visual and Language Expert Tuning for Multimodal Large Language Models" ☆28 · Updated last year
- Official repository of continuous speculative decoding ☆27 · Updated 2 months ago
- SMILE: A Multimodal Dataset for Understanding Laughter ☆13 · Updated 2 years ago
- [ICLR 2025] CREMA: Generalizable and Efficient Video-Language Reasoning via Multimodal Modular Fusion ☆46 · Updated 5 months ago
- Official repository of InfiniBench: A Benchmark for Large Multi-Modal Models in Long-Form Movies and TV Shows ☆14 · Updated 2 weeks ago
- [EMNLP 2024] "Preserving Multi-Modal Capabilities of Pre-trained VLMs for Improving Vision-Linguistic Compositionality" ☆16 · Updated 8 months ago
- VPEval codebase from "Visual Programming for Text-to-Image Generation and Evaluation" (NeurIPS 2023) ☆45 · Updated last year
- [ACL 2023] "PuMer: Pruning and Merging Tokens for Efficient Vision Language Models" ☆30 · Updated 8 months ago
- "Autoregressive Semantic Visual Reconstruction Helps VLMs Understand Better" ☆29 · Updated last week
- PyTorch implementation of "Sample- and Parameter-Efficient Auto-Regressive Image Models" (CVPR 2025) ☆11 · Updated 3 months ago
- [TMLR 2024] Official implementation of "Sight Beyond Text: Multi-Modal Training Enhances LLMs in Truthfulness and Ethics" ☆19 · Updated last year