ChenDelong1999 / polite-flamingo
🦩 Visual Instruction Tuning with Polite Flamingo - training multi-modal LLMs to be both clever and polite! (AAAI-24 Oral)
⭐63 · Updated last year
Alternatives and similar repositories for polite-flamingo
Users interested in polite-flamingo are comparing it to the libraries listed below.
- MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning ⭐134 · Updated last year
- ⭐91 · Updated last year
- Repository of paper: Position-Enhanced Visual Instruction Tuning for Multimodal Large Language Models ⭐37 · Updated last year
- ⭐133 · Updated last year
- Sparkles: Unlocking Chats Across Multiple Images for Multimodal Instruction-Following Models ⭐44 · Updated 11 months ago
- Source code for EMNLP 2022 paper "PEVL: Position-enhanced Pre-training and Prompt Tuning for Vision-language Models" ⭐48 · Updated 2 years ago
- SVIT: Scaling up Visual Instruction Tuning ⭐162 · Updated 11 months ago
- This repo contains codes and instructions for baselines in the VLUE benchmark. ⭐41 · Updated 2 years ago
- VideoNIAH: A Flexible Synthetic Method for Benchmarking Video MLLMs ⭐47 · Updated 2 months ago
- ⭐38 · Updated last year
- [2024-ACL] TextBind: Multi-turn Interleaved Multimodal Instruction-following in the Wild ⭐46 · Updated last year
- A collection of visual instruction tuning datasets. ⭐76 · Updated last year
- Official code for "What Makes for Good Visual Tokenizers for Large Language Models?". ⭐58 · Updated last year
- ⭐99 · Updated last year
- The official GitHub page for "What Makes for Good Visual Instructions? Synthesizing Complex Visual Reasoning Instructions for Visual Ins…" ⭐19 · Updated last year
- ⭐28 · Updated 2 months ago
- Beyond Hallucinations: Enhancing LVLMs through Hallucination-Aware Direct Preference Optimization ⭐88 · Updated last year
- [NeurIPS'24] Official PyTorch Implementation of Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment ⭐56 · Updated 8 months ago
- Touchstone: Evaluating Vision-Language Models by Language Models ⭐83 · Updated last year
- [ICLR2024] The official implementation of paper "UniAdapter: Unified Parameter-Efficient Transfer Learning for Cross-modal Modeling", by … ⭐74 · Updated last year
- A huge dataset for Document Visual Question Answering ⭐18 · Updated 10 months ago
- (ACL'2023) MultiCapCLIP: Auto-Encoding Prompts for Zero-Shot Multilingual Visual Captioning ⭐35 · Updated 9 months ago
- ⭐147 · Updated 7 months ago
- Official repository of the MMDU dataset ⭐91 · Updated 8 months ago
- Official Code of IdealGPT ⭐35 · Updated last year
- NegCLIP. ⭐32 · Updated 2 years ago
- UniTAB: Unifying Text and Box Outputs for Grounded VL Modeling, ECCV 2022 (Oral Presentation) ⭐87 · Updated last year
- [ICLR 23] Contrastive Alignment of Vision to Language Through Parameter-Efficient Transfer Learning ⭐39 · Updated last year
- ⭐47 · Updated 8 months ago