kyegomez / qformer
Implementation of Qformer from BLIP2 in Zeta Lego blocks.
☆39 · Updated 7 months ago
Alternatives and similar repositories for qformer
Users interested in qformer are comparing it to the libraries listed below.
- PyTorch implementation of the model from "MIRASOL3B: A MULTIMODAL AUTOREGRESSIVE MODEL FOR TIME-ALIGNED AND CONTEXTUAL MODALITIES" ☆26 · Updated 5 months ago
- An LMM addressing catastrophic forgetting, AAAI 2025 ☆43 · Updated 2 months ago
- LLaVA combined with the MAGVIT image tokenizer, training an MLLM without a vision encoder; unifies image understanding and generation ☆37 · Updated last year
- Keras implementation of Finite Scalar Quantization ☆74 · Updated last year
- ☆32 · Updated 2 months ago
- ☆50 · Updated last year
- [ICLR2024] The official implementation of the paper "UniAdapter: Unified Parameter-Efficient Transfer Learning for Cross-modal Modeling", by … ☆74 · Updated last year
- LAVIS - A One-stop Library for Language-Vision Intelligence ☆48 · Updated 10 months ago
- Narrative movie understanding benchmark ☆72 · Updated 2 weeks ago
- UnifiedMLLM: Enabling Unified Representation for Multi-modal Multi-tasks With Large Language Model ☆22 · Updated 10 months ago
- The official implementation of MAGVLT: Masked Generative Vision-and-Language Transformer (CVPR'23) ☆26 · Updated last year
- WorldSense: Evaluating Real-world Omnimodal Understanding for Multimodal LLMs ☆25 · Updated 2 months ago
- [MM2024, oral] "Self-Supervised Visual Preference Alignment" https://arxiv.org/abs/2404.10501 ☆55 · Updated 11 months ago
- The official repo for "VisualWebInstruct: Scaling up Multimodal Instruction Data through Web Search" ☆25 · Updated last month
- [ICCV'25] Explore the Limits of Omni-modal Pretraining at Scale ☆103 · Updated 9 months ago
- [NeurIPS'24] Official PyTorch implementation of Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment ☆56 · Updated 9 months ago
- Data-Efficient Multimodal Fusion on a Single GPU ☆64 · Updated last year
- MLLM-Bench: Evaluating Multimodal LLMs with Per-sample Criteria ☆70 · Updated 8 months ago
- [ICLR 2025] CREMA: Generalizable and Efficient Video-Language Reasoning via Multimodal Modular Fusion ☆46 · Updated 5 months ago
- ☆32 · Updated 3 weeks ago
- ☆42 · Updated 7 months ago
- ChatBridge, an approach to learning a unified multimodal model to interpret, correlate, and reason about various modalities without rely… ☆51 · Updated last year
- The official repo for ByteVideoLLM/Dynamic-VLM ☆20 · Updated 6 months ago
- [ACL 2023] PuMer: Pruning and Merging Tokens for Efficient Vision Language Models ☆30 · Updated 8 months ago
- ☆43 · Updated last month
- Official code for "What Makes for Good Visual Tokenizers for Large Language Models?" ☆58 · Updated 2 years ago
- Recent Advances on MLLM's Reasoning Ability ☆24 · Updated 2 months ago
- [SCIS 2024] The official implementation of the paper "MMInstruct: A High-Quality Multi-Modal Instruction Tuning Dataset with Extensive Di… ☆55 · Updated 7 months ago
- [EMNLP 2023] TESTA: Temporal-Spatial Token Aggregation for Long-form Video-Language Understanding ☆51 · Updated last year
- [ECCV 2024] FlexAttention for Efficient High-Resolution Vision-Language Models ☆40 · Updated 5 months ago