YunxinLi / LingCloud
Attaching human-like eyes to large language models. Code for the IEEE TMM paper "LMEye: An Interactive Perception Network for Large Language Model"
☆48 · updated 3 months ago
Related projects
Alternatives and complementary repositories for LingCloud
- ACL 2024 (Findings): TextBind: Multi-turn Interleaved Multimodal Instruction-following in the Wild ☆48 · updated last year
- ☆84 · updated 10 months ago
- Official code for the paper "UniIR: Training and Benchmarking Universal Multimodal Information Retrievers" (ECCV 2024) ☆105 · updated last month
- ☆27 · updated last year
- Code and instructions for the baselines in the VLUE benchmark ☆41 · updated 2 years ago
- Research code for "KAT: A Knowledge Augmented Transformer for Vision-and-Language" ☆60 · updated 2 years ago
- ☆37 · updated 5 months ago
- ☆43 · updated last month
- mPLUG-HalOwl: Multimodal Hallucination Evaluation and Mitigating ☆79 · updated 9 months ago
- MLLM-Bench: Evaluating Multimodal LLMs with Per-sample Criteria ☆54 · updated 3 weeks ago
- MoCLE (first MLLM with MoE for instruction customization and generalization; https://arxiv.org/abs/2312.12379) ☆29 · updated 7 months ago
- MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning ☆133 · updated last year
- Evaluation code for the paper "MileBench: Benchmarking MLLMs in Long Context" ☆26 · updated 3 months ago
- Official code for the paper "Model Composition for Multimodal Large Language Models" ☆17 · updated 6 months ago
- Source code for the paper "Prefix Language Models are Unified Modal Learners" ☆42 · updated last year
- EMNLP 2023: InfoSeek, a new VQA benchmark focused on visual info-seeking questions ☆16 · updated 5 months ago
- Sparkles: Unlocking Chats Across Multiple Images for Multimodal Instruction-Following Models ☆41 · updated 4 months ago
- Official code for the paper "EasyGen: Easing Multimodal Generation with a Bidirectional Conditional Diffusion Model and LLMs" ☆72 · updated 8 months ago
- ☆15 · updated last year
- An easy-to-use hallucination detection framework for LLMs ☆49 · updated 6 months ago
- My commonly used tools ☆47 · updated 3 months ago
- Visual and Embodied Concepts evaluation benchmark ☆21 · updated last year
- 🦩 Visual Instruction Tuning with Polite Flamingo: training multi-modal LLMs to be both clever and polite! (AAAI-24 Oral) ☆63 · updated 11 months ago
- A benchmark for evaluating the capabilities of large vision-language models (LVLMs) ☆33 · updated 11 months ago
- Data for evaluating GPT-4V ☆11 · updated last year
- ☆100 · updated 2 years ago
- Code for the EMNLP 2022 paper "Distilled Dual-Encoder Model for Vision-Language Understanding" ☆29 · updated last year
- ☆32 · updated last year
- "A Good Prompt Is Worth Millions of Parameters: Low-resource Prompt-based Learning for Vision-Language Models" (ACL 2022) ☆40 · updated 2 years ago
- VaLM: Visually-augmented Language Modeling (ICLR 2023) ☆56 · updated last year