kyegomez / qformer
Implementation of Qformer from BLIP2 in Zeta Lego blocks.
☆39 · Updated 5 months ago
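The repo above implements the Q-Former from BLIP-2: a small set of learnable query tokens that cross-attend over frozen vision-encoder patch features to produce a fixed-length visual summary for a language model. As a rough illustration of that core idea (not the repo's actual Zeta-based API), here is a minimal NumPy sketch of one cross-attention step; the function name `cross_attend` and the shapes (32 queries, 196 patches, dimension 64) are illustrative assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(queries, image_feats):
    """One cross-attention step: learned queries attend over image patch features.

    queries:     (num_queries, d) -- learnable query embeddings
    image_feats: (num_patches, d) -- frozen vision-encoder outputs
    """
    d = queries.shape[-1]
    scores = queries @ image_feats.T / np.sqrt(d)  # (num_queries, num_patches)
    attn = softmax(scores, axis=-1)                # each query's weights sum to 1
    return attn @ image_feats                      # (num_queries, d)

rng = np.random.default_rng(0)
learned_queries = rng.normal(size=(32, 64))   # 32 query tokens, as in BLIP-2
patch_feats = rng.normal(size=(196, 64))      # e.g. a 14x14 ViT patch grid
out = cross_attend(learned_queries, patch_feats)
print(out.shape)  # (32, 64): fixed-length visual summary, regardless of patch count
```

The key property is that the output length depends only on the number of query tokens, so the language model always receives the same number of visual tokens no matter the image resolution.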
Alternatives and similar repositories for qformer:
Users interested in qformer are comparing it to the repositories listed below.
- PyTorch implementation of the model from "Mirasol3B: A Multimodal Autoregressive Model for Time-Aligned and Contextual Modalities" ☆26 · Updated 3 months ago
- LMM that addresses catastrophic forgetting (AAAI 2025) ☆41 · Updated 3 weeks ago
- Keras implementation of Finite Scalar Quantization ☆71 · Updated last year
- UnifiedMLLM: Enabling Unified Representation for Multi-modal Multi-tasks With Large Language Model ☆22 · Updated 9 months ago
- LLaVA combined with the MAGVIT image tokenizer, training an MLLM without a vision encoder, unifying image understanding and generation. ☆38 · Updated 10 months ago
- [ACL 2023] PuMer: Pruning and Merging Tokens for Efficient Vision Language Models ☆29 · Updated 7 months ago
- LAVIS - A One-stop Library for Language-Vision Intelligence ☆47 · Updated 9 months ago
- PyTorch implementation of the paper "Learning to (Learn at Test Time): RNNs with Expressive Hidden States" ☆24 · Updated 2 weeks ago
- ☆30 · Updated 3 weeks ago
- [EMNLP 2023] TESTA: Temporal-Spatial Token Aggregation for Long-form Video-Language Understanding ☆50 · Updated last year
- This repo contains evaluation code for the paper "AV-Odyssey: Can Your Multimodal LLMs Really Understand Audio-Visual Information?" ☆24 · Updated 4 months ago
- ☆26 · Updated last month
- Data-Efficient Multimodal Fusion on a Single GPU ☆58 · Updated 11 months ago
- Official implementation of our paper "Finetuned Multimodal Language Models are High-Quality Image-Text Data Filters". ☆55 · Updated 3 weeks ago
- Explore the Limits of Omni-modal Pretraining at Scale ☆97 · Updated 8 months ago
- Video dataset dedicated to portrait-mode video recognition. ☆48 · Updated 4 months ago
- The official implementation of MAGVLT: Masked Generative Vision-and-Language Transformer (CVPR'23) ☆26 · Updated last year
- A Framework for Decoupling and Assessing the Capabilities of VLMs ☆42 · Updated 10 months ago
- MLLM-Bench: Evaluating Multimodal LLMs with Per-sample Criteria ☆69 · Updated 6 months ago
- [NeurIPS'24] Official PyTorch Implementation of Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment ☆56 · Updated 7 months ago
- The official code for paper "EasyGen: Easing Multimodal Generation with a Bidirectional Conditional Diffusion Model and LLMs" ☆73 · Updated 5 months ago
- PyTorch implementation of StableMask (ICML'24) ☆12 · Updated 10 months ago
- OpenOmni: Official implementation of Advancing Open-Source Omnimodal Large Language Models with Progressive Multimodal Alignment and Rea… ☆44 · Updated this week
- [ACL 2024] Multi-modal preference alignment remedies regression of visual instruction tuning on language model ☆44 · Updated 5 months ago
- [NeurIPS 2024] Dense Connector for MLLMs ☆160 · Updated 6 months ago
- ChatBridge, an approach to learning a unified multimodal model to interpret, correlate, and reason about various modalities without rely… ☆50 · Updated last year
- ☆51 · Updated last year
- A repository for DenseSSMs ☆87 · Updated last year
- [ACL 2024 Findings] "TempCompass: Do Video LLMs Really Understand Videos?", Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, … ☆111 · Updated last month
- Code for paper "Patch-Level Training for Large Language Models" ☆84 · Updated 5 months ago