tinglyfeng / figure_for_data_analysis
☆10 · Updated 2 years ago
Alternatives and similar repositories for figure_for_data_analysis
Users that are interested in figure_for_data_analysis are comparing it to the libraries listed below
- The official implementation of the paper "DIP: Dual Incongruity Perceiving Network for Sarcasm Detection" ☆34 · Updated 9 months ago
- ☆71 · Updated 5 months ago
- [CVPR 2024] This is the official implementation of "MART: Masked Affective RepresenTation Learning via Masked Temporal Distribution Disti…" ☆18 · Updated 3 months ago
- This is the official implementation of the CVPR 2024 paper "EmoGen: Emotional Image Content Generation with Text-to-Image Diffusion Models". ☆87 · Updated 8 months ago
- MME-Unify: A Comprehensive Benchmark for Unified Multimodal Understanding and Generation Models ☆41 · Updated 5 months ago
- ☆81 · Updated 10 months ago
- [ICML 2025 Spotlight] MODA: MOdular Duplex Attention for Multimodal Perception, Cognition, and Emotion Understanding ☆57 · Updated 2 months ago
- ☆21 · Updated this week
- Official implementation of ResCLIP: Residual Attention for Training-free Dense Vision-language Inference ☆43 · Updated 6 months ago
- [ICCV 2025] Official PyTorch Code for "Advancing Textual Prompt Learning with Anchored Attributes" ☆98 · Updated 2 weeks ago
- [CVPR 2025] RAP: Retrieval-Augmented Personalization ☆70 · Updated last month
- Newcomer tasks for the vision lab ☆156 · Updated last year
- [CVPRW 2025] UniToken is an auto-regressive generation model that combines discrete and continuous representations to process visual inpu… ☆92 · Updated 5 months ago
- [EMNLP 2024 Main] MaPPER: Multimodal Prior-guided Parameter Efficient Tuning for Referring Expression Comprehension ☆15 · Updated 8 months ago
- [NeurIPS 2025 Spotlight] Think or Not Think: A Study of Explicit Thinking in Rule-Based Visual Reinforcement Fine-Tuning ☆66 · Updated last week
- [NeurIPS 2024] Repo for the paper "ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models" ☆192 · Updated 2 months ago
- Official implementation for MoPE: Parameter-Efficient and Scalable Multimodal Fusion via Mixture of Prompt Experts ☆22 · Updated 2 months ago
- This repository is the official code for the paper "AUCSeg: AUC-oriented Pixel-level Long-tail Semantic Segmentation" (NeurIPS 2024). ☆13 · Updated last week
- Use 2 lines to empower absolute time awareness for Qwen2.5VL's MRoPE ☆23 · Updated last week
- ☆137 · Updated last year
- A collection of awesome works on reasoning models like O1/R1 in the visual domain ☆41 · Updated 2 months ago
- Reason-before-Retrieve: One-Stage Reflective Chain-of-Thoughts for Training-Free Zero-Shot Composed Image Retrieval [CVPR 2025 Highlight] ☆59 · Updated 2 months ago
- Unified the Anonymous and Camera Ready Version, hope everyone can get an ACCEPT ☆256 · Updated 2 months ago
- This is for the ACL 2025 Findings paper: From Specific-MLLMs to Omni-MLLMs: A Survey on MLLMs Aligned with Multi-modalities ☆60 · Updated 2 weeks ago
- Some study notes on the official LLaVA code ☆29 · Updated 11 months ago
- MME-CoT: Benchmarking Chain-of-Thought in LMMs for Reasoning Quality, Robustness, and Efficiency ☆129 · Updated last month
- CLIP-MoE: Mixture of Experts for CLIP ☆46 · Updated 11 months ago
- [ICML'25] Kernel-based Unsupervised Embedding Alignment for Enhanced Visual Representation in Vision-language Models ☆16 · Updated 2 weeks ago
- 🔥 CVPR 2025 Multimodal Large Language Models Paper List ☆153 · Updated 6 months ago
- Easy wrapper for inserting LoRA layers in CLIP. ☆38 · Updated last year