DCDmllm / HyperLLaVA
PyTorch implementation of HyperLLaVA: Dynamic Visual and Language Expert Tuning for Multimodal Large Language Models
☆28 · Updated last year
Alternatives and similar repositories for HyperLLaVA
Users interested in HyperLLaVA are comparing it to the repositories listed below
- [ACL 2025 Findings] Benchmarking Multihop Multimodal Internet Agents ☆46 · Updated 5 months ago
- Official implementation of "Gemini in Reasoning: Unveiling Commonsense in Multimodal Large Language Models" ☆36 · Updated last year
- INF-LLaVA: Dual-perspective Perception for High-Resolution Multimodal Large Language Model ☆42 · Updated 11 months ago
- Official implementation of "PyVision: Agentic Vision with Dynamic Tooling." ☆103 · Updated last week
- X-Reasoner: Towards Generalizable Reasoning Across Modalities and Domains ☆47 · Updated 2 months ago
- Resa: Transparent Reasoning Models via SAEs ☆41 · Updated last month
- A benchmark dataset and simple code examples for measuring the perception and reasoning of multi-sensor Vision Language models ☆18 · Updated 7 months ago
- Research work on multimodal cognitive AI ☆64 · Updated last month
- Official PyTorch implementation for "Vision-Language Models Create Cross-Modal Task Representations" (ICML 2025) ☆29 · Updated 3 months ago
- ☆24 · Updated 2 years ago
- Official PyTorch implementation of Self-emerging Token Labeling ☆35 · Updated last year
- Video-LLaVA fine-tune for CinePile evaluation ☆51 · Updated 11 months ago
- ☆68 · Updated 2 weeks ago
- Implementation of CounterCurate, a data curation pipeline for both physical and semantic counterfactual image-caption pairs ☆18 · Updated last year
- Official implementation of the ECCV 2024 paper POA ☆24 · Updated 11 months ago
- ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration ☆47 · Updated 7 months ago
- Code for the paper "Harnessing Webpage UIs for Text-Rich Visual Understanding" ☆52 · Updated 7 months ago
- Official implementation and dataset for the NAACL 2024 paper "ComCLIP: Training-Free Compositional Image and Text Matching" ☆36 · Updated 11 months ago
- Evaluation and dataset construction code for the CVPR 2025 paper "Vision-Language Models Do Not Understand Negation" ☆27 · Updated 3 months ago
- ☆73 · Updated last year
- This repo contains the code for "MEGA-Bench: Scaling Multimodal Evaluation to over 500 Real-World Tasks" [ICLR 2025] ☆73 · Updated last month
- A Framework for Decoupling and Assessing the Capabilities of VLMs ☆44 · Updated last year
- OLA-VLM: Elevating Visual Perception in Multimodal LLMs with Auxiliary Embedding Distillation, arXiv 2024 ☆60 · Updated 5 months ago
- ☆24 · Updated last week
- Interface for GenAI-Arena ☆14 · Updated last year
- [ICML 2025] Code for "R2-T2: Re-Routing in Test-Time for Multimodal Mixture-of-Experts" ☆15 · Updated 4 months ago
- On The Planning Abilities of OpenAI's o1 Models: Feasibility, Optimality, and Generalizability ☆39 · Updated 3 weeks ago
- ☆22 · Updated last month
- MM-Instruct: Generated Visual Instructions for Large Multimodal Model Alignment ☆35 · Updated last year
- Official Code Repository for EnvGen: Generating and Adapting Environments via LLMs for Training Embodied Agents (COLM 2024) ☆34 · Updated last year