zhaoyucs / VSD
Code for "Visual Spatial Description: Controlled Spatial-Oriented Image-to-Text Generation"
Related projects
Alternatives and complementary repositories for VSD
- [ICCV2023] Official code for "VL-PET: Vision-and-Language Parameter-Efficient Tuning via Granularity Control"
- Official implementation for CoVLM: Composing Visual Entities and Relationships in Large Language Models Via Communicative Decoding
- The official implementation of JM3D.
- 👾 E.T. Bench: Towards Open-Ended Event-Level Video-Language Understanding (NeurIPS 2024)
- [CVPR 2023] The official dataset for "Advancing Visual Grounding with Scene Knowledge: Benchmark and Method"
- [CVPR 2024] Visual Programming for Zero-shot Open-Vocabulary 3D Visual Grounding
- Code for the ICML 2024 paper "Video-of-Thought: Step-by-Step Video Reasoning from Perception to Cognition"
- Repository of the paper "Position-Enhanced Visual Instruction Tuning for Multimodal Large Language Models"
- Learning Situation Hyper-Graphs for Video Question Answering
- Code and models for COSA: Concatenated Sample Pretrained Vision-Language Foundation Model
- The official repository for the upcoming work ByteVideoLLM
- FreeVA: Offline MLLM as Training-Free Video Assistant
- Official code for "What Makes for Good Visual Tokenizers for Large Language Models?"
- (ICLR 2024, CVPR 2024) SparseFormer
- Emerging Pixel Grounding in Large Multimodal Models Without Grounding Supervision
- LAVIS - A One-stop Library for Language-Vision Intelligence
- Action Scene Graphs for Long-Form Understanding of Egocentric Videos (CVPR 2024)
- [CVPR'24 Highlight] The official code and data for the paper "EgoThink: Evaluating First-Person Perspective Thinking Capability of Vision-Lan…"
- VideoHallucer: the first comprehensive benchmark for hallucination detection in large video-language models (LVLMs)
- The official GitHub page for "What Makes for Good Visual Instructions? Synthesizing Complex Visual Reasoning Instructions for Visual Ins…"
- ACM Multimedia 2023 (Oral) - RTQ: Rethinking Video-language Understanding Based on Image-text Model