liyongqi67 / GRACE
☆24 · Updated last year
Alternatives and similar repositories for GRACE
Users interested in GRACE are comparing it to the libraries listed below
- Official Code of our AAAI-24 Paper: "Generative Multi-modal Knowledge Retrieval with Large Language Models". ☆27 · Updated 8 months ago
- ☆55 · Updated last year
- [ICLR 2023] This is the code repo for our ICLR'23 paper "Universal Vision-Language Dense Retrieval: Learning A Unified Representation Spa… ☆52 · Updated last year
- Code and model for AAAI 2024: UMIE: Unified Multimodal Information Extraction with Instruction Tuning ☆38 · Updated last year
- [ACL 2024] This is the code repo for our ACL'24 paper "MARVEL: Unlocking the Multi-Modal Capability of Dense Retrieval via Visual Module … ☆36 · Updated last year
- ☆64 · Updated 2 months ago
- Official implementation of our LREC-COLING 2024 paper "Generative Multimodal Entity Linking". ☆34 · Updated 6 months ago
- A personal collection of multimodal dialogue system papers the author has read (with some notes) ☆23 · Updated 2 years ago
- The code and data of DPA-RAG, accepted by the WWW 2025 main conference. ☆62 · Updated 7 months ago
- MoCLE (First MLLM with MoE for instruction customization and generalization!) (https://arxiv.org/abs/2312.12379) ☆43 · Updated 2 months ago
- ☆34 · Updated 4 months ago
- Official code for the paper "UniIR: Training and Benchmarking Universal Multimodal Information Retrievers" (ECCV 2024) ☆163 · Updated 11 months ago
- ☆17 · Updated last year
- ☆96 · Updated last month
- ☆81 · Updated last year
- A multimodal retrieval dataset ☆24 · Updated 2 years ago
- EMNLP 2023 - InfoSeek: A New VQA Benchmark focusing on Visual Info-Seeking Questions ☆25 · Updated last year
- This is the official repository for Retrieval Augmented Visual Question Answering ☆236 · Updated 8 months ago
- ☆60 · Updated last year
- The demo, code and data of FollowRAG ☆74 · Updated 2 months ago
- This is the official repository for the generative information retrieval survey. [TOIS 2025] ☆183 · Updated 4 months ago
- [ACL 2024 (Oral)] A Prospector of Long-Dependency Data for Large Language Models ☆56 · Updated last year
- Repository for Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning ☆163 · Updated last year
- Source code and data used in the papers ViQuAE (Lerner et al., SIGIR'22), Multimodal ICT (Lerner et al., ECIR'23) and Cross-modal Retriev… ☆38 · Updated 8 months ago
- Code for the paper: Metacognitive Retrieval-Augmented Large Language Models ☆34 · Updated last year
- Code for our EMNLP-2022 paper: "Language Prior Is Not the Only Shortcut: A Benchmark for Shortcut Learning in VQA" ☆40 · Updated 2 years ago
- [ICML'2024] Can AI Assistants Know What They Don't Know? ☆83 · Updated last year
- [Findings of ACL'2023] Improving Contrastive Learning of Sentence Embeddings from AI Feedback ☆40 · Updated 2 years ago
- Instruct Once, Chat Consistently in Multiple Rounds: An Efficient Tuning Framework for Dialogue (ACL 2024) ☆23 · Updated last year
- ☆27 · Updated 2 years ago