ChenDelong1999 / polite-flamingo
🦩 Visual Instruction Tuning with Polite Flamingo - training multi-modal LLMs to be both clever and polite! (AAAI-24 Oral)
☆64 · Updated last year
Alternatives and similar repositories for polite-flamingo
Users who are interested in polite-flamingo are comparing it to the repositories listed below.
- MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning (☆135 · Updated 2 years ago)
- (ACL'2023) MultiCapCLIP: Auto-Encoding Prompts for Zero-Shot Multilingual Visual Captioning (☆35 · Updated 10 months ago)
- Repository of the paper "Position-Enhanced Visual Instruction Tuning for Multimodal Large Language Models" (☆37 · Updated last year)
- Official code for "What Makes for Good Visual Tokenizers for Large Language Models?" (☆58 · Updated last year)
- UniTAB: Unifying Text and Box Outputs for Grounded VL Modeling, ECCV 2022 (Oral Presentation) (☆87 · Updated 2 years ago)
- [CVPR-2023] The official dataset of "Advancing Visual Grounding with Scene Knowledge: Benchmark and Method" (☆31 · Updated last year)
- ICCV 2023 (Oral): Open-domain Visual Entity Recognition: Towards Recognizing Millions of Wikipedia Entities (☆40 · Updated 2 weeks ago)
- A Unified Framework for Video-Language Understanding (☆57 · Updated 2 years ago)
- Sparkles: Unlocking Chats Across Multiple Images for Multimodal Instruction-Following Models (☆44 · Updated last year)
- SVIT: Scaling up Visual Instruction Tuning (☆163 · Updated last year)
- The official GitHub page for "What Makes for Good Visual Instructions? Synthesizing Complex Visual Reasoning Instructions for Visual Instruction Tuning" (☆19 · Updated last year)
- Colorful Prompt Tuning for Pre-trained Vision-Language Models (☆49 · Updated 2 years ago)
- A benchmark for evaluating the capabilities of large vision-language models (LVLMs) (☆46 · Updated last year)
- This repo contains code and instructions for baselines in the VLUE benchmark (☆41 · Updated 2 years ago)
- [ICLR2024] The official implementation of the paper "UniAdapter: Unified Parameter-Efficient Transfer Learning for Cross-modal Modeling", by … (☆74 · Updated last year)
- [ICLR2024] Code and models for "COSA: Concatenated Sample Pretrained Vision-Language Foundation Model" (☆43 · Updated 5 months ago)
- [ACL 2023] Official PyTorch code for the Singularity model in "Revealing Single Frame Bias for Video-and-Language Learning" (☆134 · Updated 2 years ago)
- Source code for the EMNLP 2022 paper "PEVL: Position-enhanced Pre-training and Prompt Tuning for Vision-language Models" (☆48 · Updated 2 years ago)
- ChatBridge, an approach to learning a unified multimodal model to interpret, correlate, and reason about various modalities without relying on … (☆51 · Updated last year)
- A survey on video and language understanding (☆50 · Updated 2 years ago)
- Research code for "Training Vision-Language Transformers from Captions Alone" (☆34 · Updated 2 years ago)
- A collection of visual instruction tuning datasets (☆76 · Updated last year)