lzw-lzw / UnifiedMLLM
UnifiedMLLM: Enabling Unified Representation for Multi-modal Multi-tasks With Large Language Model
☆22 · Updated last year
Alternatives and similar repositories for UnifiedMLLM
Users interested in UnifiedMLLM are comparing it to the libraries listed below.
- This repo contains evaluation code for the paper "AV-Odyssey: Can Your Multimodal LLMs Really Understand Audio-Visual Information?" ☆31 · Updated 11 months ago
- ☆37 · Updated 3 months ago
- AliTok: Towards Sequence Modeling Alignment between Tokenizer and Autoregressive Model ☆50 · Updated last month
- [NeurIPS 2024] Stabilize the Latent Space for Image Autoregressive Modeling: A Unified Perspective ☆74 · Updated last year
- [ICLR 2025] CREMA: Generalizable and Efficient Video-Language Reasoning via Multimodal Modular Fusion ☆54 · Updated 5 months ago
- [NeurIPS 2025] HermesFlow: Seamlessly Closing the Gap in Multimodal Understanding and Generation ☆73 · Updated 2 months ago
- WorldSense: Evaluating Real-world Omnimodal Understanding for Multimodal LLMs ☆34 · Updated 3 weeks ago
- Explore how to get a VQ-VAE model efficiently! ☆63 · Updated 4 months ago
- A unified framework for controllable caption generation across images, videos, and audio. Supports multi-modal inputs and customizable ca… ☆52 · Updated 4 months ago
- An LMM that addresses catastrophic forgetting (AAAI 2025) ☆44 · Updated 7 months ago
- On Path to Multimodal Generalist: General-Level and General-Bench ☆19 · Updated 4 months ago
- ☆139 · Updated last year
- A project for tri-modal LLM benchmarking and instruction tuning. ☆52 · Updated 8 months ago
- ☆33 · Updated 7 months ago
- [ICLR 2024] The official implementation of the paper "UniAdapter: Unified Parameter-Efficient Transfer Learning for Cross-modal Modeling", by … ☆77 · Updated last year
- Distributed Optimization Infra for learning CLIP models ☆27 · Updated last year
- (ICCV 2025) Official repository of the paper "ViSpeak: Visual Instruction Feedback in Streaming Videos" ☆41 · Updated 5 months ago
- Code for the paper "Vamba: Understanding Hour-Long Videos with Hybrid Mamba-Transformers" [ICCV 2025] ☆95 · Updated 4 months ago
- [ICCV 2025] Dynamic-VLM ☆26 · Updated 11 months ago
- [ECCV'24 Oral] PiTe: Pixel-Temporal Alignment for Large Video-Language Model ☆17 · Updated 9 months ago
- The official implementation of the paper "MMFuser: Multimodal Multi-Layer Feature Fuser for Fine-Grained Vision-Language Understanding". … ☆60 · Updated last year
- [ICLR 2025] γ-MOD: Mixture-of-Depth Adaptation for Multimodal Large Language Models ☆40 · Updated last month
- LAVIS - A One-stop Library for Language-Vision Intelligence ☆48 · Updated last year
- MIO: A Foundation Model on Multimodal Tokens ☆32 · Updated 11 months ago
- Official implementation of Next Block Prediction: Video Generation via Semi-Autoregressive Modeling ☆40 · Updated 9 months ago
- ☆19 · Updated last year
- A big_vision-inspired repo that implements a generic Auto-Encoder class capable of representation learning and generative modeling. ☆34 · Updated last year
- Official repository of "Interactive Text-to-Image Retrieval with Large Language Models: A Plug-and-Play Approach" (ACL 2024 Oral) ☆33 · Updated 8 months ago
- The code for "VISTA: Enhancing Long-Duration and High-Resolution Video Understanding by VIdeo SpatioTemporal Augmentation" [CVPR 2025] ☆20 · Updated 9 months ago
- Implementation of Qformer from BLIP2 in Zeta Lego blocks. ☆46 · Updated last year