YeeZ93 / Awesome-Object-Centric-Learning
A curated list of research on object-centric learning
☆11Updated last year
Alternatives and similar repositories for Awesome-Object-Centric-Learning
Users interested in Awesome-Object-Centric-Learning are comparing it to the repositories listed below
- A paper list of Awesome Latent Space.☆305Updated last week
- ✨A curated list of papers on the uncertainty in multi-modal large language model (MLLM).☆57Updated 9 months ago
- ☆155Updated 11 months ago
- Official codebase for the paper Latent Visual Reasoning☆98Updated 3 months ago
- [NeurIPS 2023] Generalized Logit Adjustment☆39Updated last year
- A paper list on LLMs and multimodal LLMs☆55Updated 2 weeks ago
- [Awesome-Spatial-VLMs] This repository is the official, community-maintained resource for the survey paper: Spatial Intelligence in Visio…☆58Updated last week
- Collections of Papers and Projects for Multimodal Reasoning.☆107Updated 9 months ago
- [CVPR 2025 (Oral)] Mitigating Hallucinations in Large Vision-Language Models via DPO: On-Policy Data Hold the Key☆102Updated 3 weeks ago
- [NeurIPS'24 Spotlight] Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought …☆421Updated last year
- [ICCV25 Oral] Token Activation Map to Visually Explain Multimodal LLMs☆162Updated last month
- Official implementation of "Why are Visually-Grounded Language Models Bad at Image Classification?" (NeurIPS 2024)☆95Updated last year
- Official implementation of "Interpreting CLIP's Image Representation via Text-Based Decomposition"☆233Updated 7 months ago
- Uni-OVSeg is a weakly supervised open-vocabulary segmentation framework that leverages unpaired mask-text pairs.☆53Updated last year
- [NeurIPS'24] SpatialEval: a benchmark to evaluate spatial reasoning abilities of MLLMs and LLMs☆58Updated last year
- [AAAI 24] Official Codebase for BridgeQA: Bridging the Gap between 2D and 3D Visual Question Answering: A Fusion Approach for 3D VQA☆26Updated last year
- ☆114Updated 6 months ago
- Awesome list of Mixture-of-Experts (MoE)☆26Updated last year
- Code for our ICML'24 on multimodal dataset distillation☆43Updated last year
- [CVPR 2025 Highlight] Your Large Vision-Language Model Only Needs A Few Attention Heads For Visual Grounding☆58Updated 5 months ago
- Test-time Prompt Tuning (TPT) for zero-shot generalization in vision-language models (NeurIPS 2022)☆204Updated 3 years ago
- Awesome paper for multi-modal llm with grounding ability☆19Updated 3 months ago
- Diffusion-TTA improves pre-trained discriminative models such as image classifiers or segmentors using pre-trained generative models.☆80Updated last year
- Awesome list of VLM-CL. Continual Learning for VLMs: A Survey and Taxonomy Beyond Forgetting☆145Updated last week
- Adapter-X: A Novel General Parameter-Efficient Fine-Tuning Framework for Vision☆11Updated last year
- ☆38Updated 6 months ago
- 🔥CVPR 2025 Multimodal Large Language Models Paper List☆154Updated 10 months ago
- [ICLR 2025] Official Implementation of Local-Prompt: Extensible Local Prompts for Few-Shot Out-of-Distribution Detection☆51Updated 6 months ago
- 🔎Official code for our paper: "VL-Uncertainty: Detecting Hallucination in Large Vision-Language Model via Uncertainty Estimation".☆47Updated 10 months ago
- Official implementation of ECCV 2024 paper: Take A Step Back: Rethinking the Two Stages in Visual Reasoning☆16Updated 7 months ago