YunxinLi / LingCloud
Attaching human-like eyes to the large language model. The code of the IEEE TMM paper "LMEye: An Interactive Perception Network for Large Language Models".
☆48 · Updated last year
Alternatives and similar repositories for LingCloud
Users interested in LingCloud are comparing it to the repositories listed below.
- [2024-ACL] TextBind: Multi-turn Interleaved Multimodal Instruction-following in the Wild ☆47 · Updated 2 years ago
- Vision Large Language Models trained on M3IT instruction tuning dataset ☆17 · Updated 2 years ago
- Sparkles: Unlocking Chats Across Multiple Images for Multimodal Instruction-Following Models ☆44 · Updated last year
- ☆48 · Updated last year
- Visual and Embodied Concepts evaluation benchmark ☆21 · Updated 2 years ago
- VaLM: Visually-augmented Language Modeling. ICLR 2023. ☆56 · Updated 2 years ago
- The official code for the paper "EasyGen: Easing Multimodal Generation with a Bidirectional Conditional Diffusion Model and LLMs" ☆73 · Updated last year
- Touchstone: Evaluating Vision-Language Models by Language Models ☆83 · Updated 2 years ago
- This is the repo for our paper "Mr-Ben: A Comprehensive Meta-Reasoning Benchmark for Large Language Models" ☆51 · Updated last year
- The released data for the paper "Measuring and Improving Chain-of-Thought Reasoning in Vision-Language Models". ☆34 · Updated 2 years ago
- This repo contains code and instructions for baselines in the VLUE benchmark. ☆41 · Updated 3 years ago
- MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning ☆134 · Updated 2 years ago
- ☆101 · Updated 2 years ago
- [NAACL 2024] A Synthetic, Scalable and Systematic Evaluation Suite for Large Language Models ☆33 · Updated last year
- Official code for the paper "UniIR: Training and Benchmarking Universal Multimodal Information Retrievers" (ECCV 2024) ☆177 · Updated last year
- A curated list of papers, repositories, tutorials, and anything else related to large language models for tools ☆68 · Updated 2 years ago
- ☆42 · Updated 2 years ago
- The official site of the paper MMDialog: A Large-scale Multi-turn Dialogue Dataset Towards Multi-modal Open-domain Conversation ☆203 · Updated 2 years ago
- Recent advancements propelled by large language models (LLMs), encompassing an array of domains including Vision, Audio, Agent, Robotics,… ☆124 · Updated 8 months ago
- ☆50 · Updated 2 years ago
- Research code for "KAT: A Knowledge Augmented Transformer for Vision-and-Language" ☆69 · Updated 3 years ago
- ☆66 · Updated last year
- 🦩 Official repository of the paper "Visual Instruction Tuning with Polite Flamingo" (AAAI-24 Oral) ☆65 · Updated 2 years ago
- A benchmark for evaluating the capabilities of large vision-language models (LVLMs) ☆46 · Updated 2 years ago
- Source code for the paper "Prefix Language Models are Unified Modal Learners" ☆44 · Updated 2 years ago
- [NeurIPS'24] Weak-to-Strong Search: Align Large Language Models via Searching over Small Language Models ☆65 · Updated last year
- Code for the ACL 2023 paper: Pre-Training to Learn in Context ☆106 · Updated last year
- Code for "Small Models are Valuable Plug-ins for Large Language Models" ☆132 · Updated 2 years ago
- CVPR 2021 official PyTorch code for UC2: Universal Cross-lingual Cross-modal Vision-and-Language Pre-training ☆34 · Updated 4 years ago
- Improving Language Understanding from Screenshots. Paper: https://arxiv.org/abs/2402.14073 ☆31 · Updated last year