microsoft / BridgeTower
Open source code for AAAI 2023 Paper "BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning"
☆161 · Updated last year
Alternatives and similar repositories for BridgeTower:
Users interested in BridgeTower are comparing it to the libraries listed below.
- Coarse-to-Fine Vision-Language Pre-training with Fusion in the Backbone ☆128 · Updated last year
- MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning ☆136 · Updated last year
- All-In-One VLM: Image + Video + Transfer to Other Languages / Domains (TPAMI 2023) ☆154 · Updated 6 months ago
- Code used for the creation of OBELICS, an open, massive and curated collection of interleaved image-text web documents, containing 141M d… ☆197 · Updated 6 months ago
- Filtering, Distillation, and Hard Negatives for Vision-Language Pre-Training ☆135 · Updated 2 years ago
- Touchstone: Evaluating Vision-Language Models by Language Models ☆82 · Updated last year
- [ACL 2023] Official PyTorch code for the Singularity model in "Revealing Single Frame Bias for Video-and-Language Learning" ☆132 · Updated last year
- UniTAB: Unifying Text and Box Outputs for Grounded VL Modeling, ECCV 2022 (Oral Presentation) ☆85 · Updated last year
- Official implementation for the paper "Prompt Pre-Training with Over Twenty-Thousand Classes for Open-Vocabulary Visual Recognition" ☆256 · Updated 10 months ago
- Code for the paper "CiT: Curation in Training for Effective Vision-Language Data" ☆78 · Updated 2 years ago
- Official code for the paper "UniIR: Training and Benchmarking Universal Multimodal Information Retrievers" (ECCV 2024) ☆133 · Updated 5 months ago
- MLLM-Bench: Evaluating Multimodal LLMs with Per-sample Criteria ☆64 · Updated 4 months ago
- Implementation of the DeepMind Flamingo vision-language model, based on Hugging Face language models and ready for training ☆166 · Updated last year
- Sparkles: Unlocking Chats Across Multiple Images for Multimodal Instruction-Following Models ☆43 · Updated 8 months ago
- SVIT: Scaling up Visual Instruction Tuning ☆164 · Updated 8 months ago
- ☆60 · Updated last year
- [TACL'23] VSR: A probing benchmark for spatial understanding of vision-language models ☆115 · Updated last year
- This repo contains code and instructions for baselines in the VLUE benchmark ☆41 · Updated 2 years ago
- Cross-View Language Modeling: Towards Unified Cross-Lingual Cross-Modal Pre-training (ACL 2023) ☆90 · Updated last year
- ☆133 · Updated last year
- MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities (ICML 2024) ☆287 · Updated last month
- ☆49 · Updated last year
- Align and Prompt: Video-and-Language Pre-training with Entity Prompts ☆186 · Updated 2 years ago
- ☆91 · Updated last year
- 🦩 Visual Instruction Tuning with Polite Flamingo - training multi-modal LLMs to be both clever and polite! (AAAI-24 Oral) ☆64 · Updated last year
- The official site of the paper "MMDialog: A Large-scale Multi-turn Dialogue Dataset Towards Multi-modal Open-domain Conversation" ☆192 · Updated last year
- Official repository for the A-OKVQA dataset ☆77 · Updated 10 months ago
- E5-V: Universal Embeddings with Multimodal Large Language Models ☆234 · Updated 2 months ago
- (CVPR 2024) A benchmark for evaluating Multimodal LLMs using multiple-choice questions ☆332 · Updated last month
- InstructionGPT-4 ☆39 · Updated last year