YunxinLi / LingCloud
Attaching human-like eyes to the large language model. The code of the IEEE TMM paper "LMEye: An Interactive Perception Network for Large Language Model"
☆48 Updated 11 months ago
Alternatives and similar repositories for LingCloud
Users interested in LingCloud are comparing it to the repositories listed below
- [2024-ACL]: TextBind: Multi-turn Interleaved Multimodal Instruction-following in the Wild ☆47 Updated last year
- The official code for the paper "EasyGen: Easing Multimodal Generation with a Bidirectional Conditional Diffusion Model and LLMs" ☆74 Updated 7 months ago
- ☆100 Updated last year
- Source code for the paper "Prefix Language Models are Unified Modal Learners" ☆43 Updated 2 years ago
- MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning ☆135 Updated 2 years ago
- ☆38 Updated last year
- Visual and Embodied Concepts evaluation benchmark ☆21 Updated last year
- Sparkles: Unlocking Chats Across Multiple Images for Multimodal Instruction-Following Models ☆44 Updated last year
- ☆47 Updated 9 months ago
- ☆58 Updated this week
- [ACL 2024 Findings] CriticBench: Benchmarking LLMs for Critique-Correct Reasoning ☆26 Updated last year
- ☆64 Updated last year
- ☆17 Updated 2 years ago
- ☆75 Updated last year
- EMNLP 2023 - InfoSeek: A New VQA Benchmark focused on Visual Info-Seeking Questions ☆23 Updated last year
- This repo contains code and instructions for baselines in the VLUE benchmark. ☆41 Updated 2 years ago
- This is the repo for our paper "Mr-Ben: A Comprehensive Meta-Reasoning Benchmark for Large Language Models" ☆50 Updated 7 months ago
- ☆150 Updated 7 months ago
- [ArXiv] V2PE: Improving Multimodal Long-Context Capability of Vision-Language Models with Variable Visual Position Encoding ☆50 Updated 6 months ago
- Preference Learning for LLaVA ☆46 Updated 7 months ago
- Research code for "KAT: A Knowledge Augmented Transformer for Vision-and-Language" ☆64 Updated 2 years ago
- ☆29 Updated 3 months ago
- ☆50 Updated last year
- CVPR 2021 official PyTorch code for UC2: Universal Cross-lingual Cross-modal Vision-and-Language Pre-training ☆34 Updated 3 years ago
- [EMNLP 2024] mDPO: Conditional Preference Optimization for Multimodal Large Language Models. ☆75 Updated 7 months ago
- Official code for the paper "UniIR: Training and Benchmarking Universal Multimodal Information Retrievers" (ECCV 2024) ☆153 Updated 8 months ago
- An Easy-to-use Hallucination Detection Framework for LLMs. ☆59 Updated last year
- 🦩 Visual Instruction Tuning with Polite Flamingo - training multi-modal LLMs to be both clever and polite! (AAAI-24 Oral) ☆64 Updated last year
- ☆10 Updated 2 months ago
- ☆18 Updated 11 months ago