kyegomez / qformer
Implementation of Qformer from BLIP2 in Zeta Lego blocks.
☆35 · Updated 2 months ago
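To make the header concrete: BLIP-2's Q-Former is, at its core, a small transformer in which a fixed set of learnable query embeddings cross-attends to frozen image features and distills them into a compact set of tokens. Below is a minimal, hedged PyTorch sketch of one such block; the class name, dimensions, and layer layout are illustrative assumptions, not the qformer repo's actual API.

```python
# Minimal sketch of a Q-Former-style block (illustrative only; not the
# qformer repo's API). Learnable queries cross-attend to image features,
# then self-attend and pass through a feed-forward layer.
import torch
import torch.nn as nn


class QFormerBlock(nn.Module):
    def __init__(self, dim=64, num_queries=8, num_heads=4):
        super().__init__()
        # A fixed set of learnable query embeddings, shared across the batch.
        self.queries = nn.Parameter(torch.randn(1, num_queries, dim))
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(dim, dim * 4), nn.GELU(), nn.Linear(dim * 4, dim)
        )
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.norm3 = nn.LayerNorm(dim)

    def forward(self, image_feats):
        # Expand queries to the batch size, cross-attend into image features.
        q = self.queries.expand(image_feats.size(0), -1, -1)
        q = q + self.cross_attn(self.norm1(q), image_feats, image_feats)[0]
        qn = self.norm2(q)
        q = q + self.self_attn(qn, qn, qn)[0]
        return q + self.ffn(self.norm3(q))


# 2 images, 196 patch tokens each, feature dim 64 -> 8 query tokens per image.
feats = torch.randn(2, 196, 64)
out = QFormerBlock()(feats)
print(out.shape)  # torch.Size([2, 8, 64])
```

The key design point is that the output length is set by the number of queries (8 here), not by the number of image patches, which is what lets BLIP-2 feed a short, fixed-size visual prefix into an LLM.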
Alternatives and similar repositories for qformer:
Users interested in qformer are comparing it to the libraries listed below.
- PyTorch implementation of the model from "Mirasol3B: A Multimodal Autoregressive Model for Time-Aligned and Contextual Modalities" ☆26 · Updated this week
- A large multimodal model (LMM) built as a strict superset of an embedded LLM ☆37 · Updated 2 months ago
- MLLM-Bench: Evaluating Multimodal LLMs with Per-sample Criteria ☆61 · Updated 3 months ago
- The official implementation of "DAPE: Data-Adaptive Positional Encoding for Length Extrapolation" ☆33 · Updated 3 months ago
- UnifiedMLLM: Enabling Unified Representation for Multi-modal Multi-tasks With Large Language Model ☆20 · Updated 5 months ago
- Data-Efficient Multimodal Fusion on a Single GPU ☆52 · Updated 8 months ago
- ☆47 · Updated last year
- [ICLR2024] The official implementation of paper "UniAdapter: Unified Parameter-Efficient Transfer Learning for Cross-modal Modeling", by … ☆72 · Updated last year
- ChatBridge, an approach to learning a unified multimodal model to interpret, correlate, and reason about various modalities without rely… ☆49 · Updated last year
- [ACL 2023] PuMer: Pruning and Merging Tokens for Efficient Vision Language Models ☆29 · Updated 3 months ago
- Language Quantized AutoEncoders ☆97 · Updated last year
- ☆59 · Updated 11 months ago
- [MM2024, oral] "Self-Supervised Visual Preference Alignment" https://arxiv.org/abs/2404.10501 ☆48 · Updated 6 months ago
- [NeurIPS 2024] Dense Connector for MLLMs ☆154 · Updated 3 months ago
- Narrative movie understanding benchmark ☆63 · Updated 8 months ago
- 🦩 Visual Instruction Tuning with Polite Flamingo - training multi-modal LLMs to be both clever and polite! (AAAI-24 Oral) ☆63 · Updated last year
- [NeurIPS'24] Official PyTorch Implementation of Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment ☆56 · Updated 4 months ago
- LLaVA combined with the MAGVIT image tokenizer, training an MLLM without a vision encoder. Unifies image understanding and generation. ☆35 · Updated 7 months ago
- The open source implementation of the cross-attention mechanism from the paper: "Jointly Training Large Autoregressive Multimodal Models" ☆25 · Updated 10 months ago
- PyTorch implementation of the paper: "Learning to (Learn at Test Time): RNNs with Expressive Hidden States" ☆24 · Updated this week
- A project for tri-modal LLM benchmarking and instruction tuning. ☆19 · Updated 2 months ago
- Explore the Limits of Omni-modal Pretraining at Scale ☆96 · Updated 4 months ago
- The official code for paper "EasyGen: Easing Multimodal Generation with a Bidirectional Conditional Diffusion Model and LLMs" ☆73 · Updated 2 months ago
- A Framework for Decoupling and Assessing the Capabilities of VLMs ☆40 · Updated 7 months ago
- Official implementation for the paper "A Cheaper and Better Diffusion Language Model with Soft-Masked Noise" ☆54 · Updated last year
- Keras implementation of Finite Scalar Quantization ☆69 · Updated last year
- LAVIS - A One-stop Library for Language-Vision Intelligence ☆47 · Updated 5 months ago
- [NeurIPS 2023] Make Your Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning ☆29 · Updated last year
- [ACL 2024 Findings] "TempCompass: Do Video LLMs Really Understand Videos?", Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, … ☆97 · Updated 2 months ago
- Official code for paper "UniIR: Training and Benchmarking Universal Multimodal Information Retrievers" (ECCV 2024) ☆122 · Updated 3 months ago