yichengchen24 / MIG
Official code for MIG: Automatic Data Selection for Instruction Tuning by Maximizing Information Gain in Semantic Space
☆19 · Updated 2 weeks ago
Alternatives and similar repositories for MIG
Users interested in MIG are comparing it to the repositories listed below.
- MTVQA: Benchmarking Multilingual Text-Centric Visual Question Answering. A comprehensive evaluation of multimodal large model multilingua… ☆59 · Updated 3 weeks ago
- Official repository of MMLONGBENCH-DOC: Benchmarking Long-context Document Understanding with Visualizations ☆82 · Updated 10 months ago
- [ACL 2024] TextBind: Multi-turn Interleaved Multimodal Instruction-following in the Wild ☆46 · Updated last year
- TouchStone: Evaluating Vision-Language Models by Language Models ☆83 · Updated last year
- Attaching human-like eyes to the large language model. Code for the IEEE TMM paper "LMEye: An Interactive Perception Network for Large La… ☆48 · Updated 10 months ago
- Official repository of the MMDU dataset ☆91 · Updated 8 months ago
- [ACL 2025] Synthetic data generation pipelines for text-rich images ☆73 · Updated 3 months ago
- Exploring Efficient Fine-Grained Perception of Multimodal Large Language Models ☆60 · Updated 7 months ago
- Implementations of the online merging optimizers proposed in "Online Merging Optimizers for Boosting Rewards and Mitigating Tax in Alignment" ☆75 · Updated 11 months ago
- Code for Math-LLaVA: Bootstrapping Mathematical Reasoning for Multimodal Large Language Models ☆84 · Updated 11 months ago
- A multimodal finance benchmark for expert-level understanding and reasoning ☆32 · Updated this week
- Official code for the paper "UniIR: Training and Benchmarking Universal Multimodal Information Retrievers" (ECCV 2024) ☆151 · Updated 8 months ago
- [ICLR 2025, Spotlight] 🧬 RegMix: Data Mixture as Regression for Language Model Pre-training ☆138 · Updated 3 months ago
- Source code for the paper "Process vs. Outcome Reward: Which is Better for Agentic RAG Reinforcement Learning" ☆22 · Updated this week
- Source code for the EMNLP 2022 long paper "Parameter-Efficient Tuning Makes a Good Classification Head" ☆14 · Updated 2 years ago
- [arXiv] V2PE: Improving Multimodal Long-Context Capability of Vision-Language Models with Variable Visual Position Encoding ☆47 · Updated 5 months ago
- Official implementation of the paper "Finetuned Multimodal Language Models are High-Quality Image-Text Data Filters" ☆57 · Updated last month
- Codebase for the EMNLP 2024 paper "Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Mo… ☆78 · Updated 4 months ago