YunxinLi / LingCloud
Attaching human-like eyes to the large language model. Code for the IEEE TMM paper "LMEye: An Interactive Perception Network for Large Language Model".
☆48 · Updated 6 months ago
Alternatives and similar repositories for LingCloud:
Users interested in LingCloud are comparing it to the repositories listed below
- [2024-ACL] TextBind: Multi-turn Interleaved Multimodal Instruction-following in the Wild ☆47 · Updated last year
- Source code for the paper "Prefix Language Models are Unified Modal Learners" ☆43 · Updated last year
- ☆31 · Updated last year
- ☆60 · Updated 8 months ago
- A curated list of resources about long context in large language models and video understanding. ☆29 · Updated last year
- Visual and Embodied Concepts evaluation benchmark ☆21 · Updated last year
- This repo contains code and instructions for the baselines in the VLUE benchmark. ☆41 · Updated 2 years ago
- [EMNLP 2023] InfoSeek: A New VQA Benchmark Focusing on Visual Info-Seeking Questions ☆18 · Updated 8 months ago
- Released code for our ICLR23 paper. ☆63 · Updated last year
- ☆59 · Updated last year
- MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning ☆134 · Updated last year
- Official code for paper "UniIR: Training and Benchmarking Universal Multimodal Information Retrievers" (ECCV 2024) ☆126 · Updated 4 months ago
- ☆29 · Updated last month
- ☆16 · Updated last year
- ☆39 · Updated last year
- Sparkles: Unlocking Chats Across Multiple Images for Multimodal Instruction-Following Models ☆43 · Updated 8 months ago
- DSTC10 Track 1 - MOD: Internet Meme Incorporated Open-domain Dialog ☆50 · Updated 2 years ago
- ☆95 · Updated last year
- Repo for the ACL 2023 outstanding paper "Do PLMs Know and Understand Ontological Knowledge?" ☆30 · Updated last year
- CVPR 2021 official PyTorch code for UC2: Universal Cross-lingual Cross-modal Vision-and-Language Pre-training ☆34 · Updated 3 years ago
- ☆22 · Updated 3 months ago
- ☆47 · Updated last year
- [NAACL 2024] A Synthetic, Scalable and Systematic Evaluation Suite for Large Language Models ☆32 · Updated 8 months ago
- Research code for "KAT: A Knowledge Augmented Transformer for Vision-and-Language" ☆63 · Updated 2 years ago
- A benchmark for evaluating the capabilities of large vision-language models (LVLMs) ☆43 · Updated last year
- The official code for the paper "EasyGen: Easing Multimodal Generation with a Bidirectional Conditional Diffusion Model and LLMs" ☆73 · Updated 2 months ago
- mPLUG-HalOwl: Multimodal Hallucination Evaluation and Mitigation ☆89 · Updated last year
- ☆16 · Updated last year
- My commonly used tools ☆49 · Updated last month
- [EMNLP 2024] mDPO: Conditional Preference Optimization for Multimodal Large Language Models ☆62 · Updated 3 months ago