umd-huang-lab / Mementos
☆31 · Updated last year
Alternatives and similar repositories for Mementos
Users interested in Mementos are comparing it to the libraries listed below.
- The released data for the paper "Measuring and Improving Chain-of-Thought Reasoning in Vision-Language Models". ☆33 · Updated last year
- This repository contains the code and data for the paper "VisOnlyQA: Large Vision Language Models Still Struggle with Visual Perception o… ☆23 · Updated last week
- Improving Language Understanding from Screenshots. Paper: https://arxiv.org/abs/2402.14073 ☆29 · Updated last year
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆51 · Updated last month
- [NAACL 2025] Source code for MMEvalPro, a more trustworthy and efficient benchmark for evaluating LMMs ☆24 · Updated 9 months ago
- Evaluation framework for the paper "VisualWebBench: How Far Have Multimodal LLMs Evolved in Web Page Understanding and Grounding?" ☆57 · Updated 8 months ago
- ☆50 · Updated last year
- ☆55 · Updated last year
- Sparkles: Unlocking Chats Across Multiple Images for Multimodal Instruction-Following Models ☆44 · Updated last year
- The official repo for "VisualWebInstruct: Scaling up Multimodal Instruction Data through Web Search" ☆25 · Updated 2 months ago
- Data and code for the NeurIPS 2021 paper "IconQA: A New Benchmark for Abstract Diagram Understanding and Visual Language Reasoning". ☆52 · Updated last year
- Vision Large Language Models trained on the M3IT instruction-tuning dataset ☆17 · Updated last year
- Research code for "KAT: A Knowledge Augmented Transformer for Vision-and-Language" ☆65 · Updated 3 years ago
- Preference Learning for LLaVA ☆47 · Updated 8 months ago
- Code for the paper "Unraveling Cross-Modality Knowledge Conflicts in Large Vision-Language Models". ☆42 · Updated 8 months ago
- Code for "Merging Text Transformers from Different Initializations" ☆20 · Updated 5 months ago
- Source code for the paper "Prefix Language Models are Unified Modal Learners" ☆43 · Updated 2 years ago
- Code repo for "Read Anywhere Pointed: Layout-aware GUI Screen Reading with Tree-of-Lens Grounding" ☆28 · Updated 11 months ago
- The codebase for our EMNLP 2024 paper "Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Mo… ☆79 · Updated 5 months ago
- Code and data for the ACL 2024 paper "Cross-Modal Projection in Multimodal LLMs Doesn't Really Project Visual Attributes to Textual Space" ☆15 · Updated 11 months ago
- [Under Review] Official PyTorch implementation code for realizing the technical part of Phantom of Latent representing equipped with enla… ☆60 · Updated 9 months ago
- A curated list of papers, repositories, tutorials, and anything related to large language models for tools ☆67 · Updated last year
- SLED: Self Logits Evolution Decoding for Improving Factuality in Large Language Models. https://arxiv.org/pdf/2411.02433 ☆26 · Updated 7 months ago
- ☆16 · Updated 11 months ago
- Official code of IdealGPT ☆35 · Updated last year
- [TMLR 2024] Official implementation of "Sight Beyond Text: Multi-Modal Training Enhances LLMs in Truthfulness and Ethics" ☆20 · Updated last year
- Code for "Let LLMs Break Free from Overthinking via Self-Braking Tuning". https://arxiv.org/abs/2505.14604 ☆43 · Updated last month
- Think or Not? Selective Reasoning via Reinforcement Learning for Vision-Language Models ☆40 · Updated last month
- ☆26 · Updated last year
- Code and data for the ACL 2024 Findings paper "Do LVLMs Understand Charts? Analyzing and Correcting Factual Errors in Chart Captioning" ☆26 · Updated last year