Kwai-YuanQi / TaskGalaxy
Scaling Multi-modal Instruction Fine-tuning with Tens of Thousands Vision Task Types
☆33 · Jul 16, 2025 · Updated 6 months ago
Alternatives and similar repositories for TaskGalaxy
Users interested in TaskGalaxy are comparing it to the libraries listed below.
- Advances in recent large vision language models (LVLMs) ☆15 · Sep 23, 2024 · Updated last year
- Collaborative retina modelling across datasets and species. ☆16 · Updated this week
- This repository is associated with the research paper titled ImageChain: Advancing Sequential Image-to-Text Reasoning in Multimodal Large… ☆15 · Jun 4, 2025 · Updated 8 months ago
- This is an implementation of the paper "Are We Done with Object-Centric Learning?" ☆12 · Sep 11, 2025 · Updated 5 months ago
- [COLM 2025] Official code for "When To Solve, When To Verify: Compute-Optimal Problem Solving and Generative Verification for LLM Reasoni… ☆15 · Oct 31, 2025 · Updated 3 months ago
- Generalizing from SIMPLE to HARD Visual Reasoning: Can We Mitigate Modality Imbalance in VLMs? ☆15 · Jun 3, 2025 · Updated 8 months ago
- ☆13 · Jan 22, 2025 · Updated last year
- ☆13 · May 12, 2025 · Updated 9 months ago
- Code and datasets for "Text encoders are performance bottlenecks in contrastive vision-language models". Coming soon! ☆11 · May 24, 2023 · Updated 2 years ago
- DeepTrace: A lightweight, scalable real-time diagnostic and analysis tool for distributed training tasks. ☆18 · Nov 4, 2025 · Updated 3 months ago
- ☆360 · Jan 27, 2024 · Updated 2 years ago
- This repository contains the code of our paper 'Skip \n: A simple method to reduce hallucination in Large Vision-Language Models'. ☆15 · Feb 12, 2024 · Updated 2 years ago
- [CVPR 2024] Official implementation of "ViTamin: Designing Scalable Vision Models in the Vision-language Era" ☆211 · Jun 9, 2024 · Updated last year
- Code for "CLIP Behaves like a Bag-of-Words Model Cross-modally but not Uni-modally" ☆19 · Feb 14, 2025 · Updated last year
- If CLIP Could Talk: Understanding Vision-Language Model Representations Through Their Preferred Concept Descriptions ☆17 · Apr 4, 2024 · Updated last year
- [NAACL 2025] Representing Rule-based Chatbots with Transformers ☆23 · Feb 9, 2025 · Updated last year
- [TIP] Exploring Effective Factors for Improving Visual In-Context Learning ☆20 · Jul 2, 2025 · Updated 7 months ago
- [NeurIPS 2023] Official PyTorch code for LOVM: Language-Only Vision Model Selection ☆21 · Feb 3, 2024 · Updated 2 years ago
- Landing repository for the paper "Predicting the Order of Upcoming Tokens Improves Language Modeling" ☆41 · Sep 12, 2025 · Updated 5 months ago
- CIFAR-10-Warehouse: Towards Broad and More Realistic Testbeds in Model Generalization Analysis ☆18 · Jul 15, 2024 · Updated last year
- A powerful multimodal large language model (MLLM) that supports text, image, video, and other multimodal inputs, with strong understanding, reasoning, and generation capabilities. ☆23 · Mar 19, 2025 · Updated 10 months ago
- ☆23 · Apr 24, 2025 · Updated 9 months ago
- This repository includes various baseline techniques for the label-free model evaluation task of the VDU2023 competition. ☆19 · Mar 8, 2023 · Updated 2 years ago
- [ECCV 2024] Official PyTorch implementation of DreamLIP: Language-Image Pre-training with Long Captions ☆138 · May 8, 2025 · Updated 9 months ago
- Patching open-vocabulary models by interpolating weights ☆91 · Sep 28, 2023 · Updated 2 years ago
- Code for "Strengthening Multimodal Large Language Model with Bootstrapped Preference Optimization" ☆60 · Aug 23, 2024 · Updated last year
- A Benchmark for Efficient and Compositional Visual Reasoning ☆25 · Aug 2, 2023 · Updated 2 years ago
- Code for "CAFe: Unifying Representation and Generation with Contrastive-Autoregressive Finetuning" ☆32 · Mar 26, 2025 · Updated 10 months ago
- ☆26 · Apr 26, 2025 · Updated 9 months ago
- [AAAI 2026 Oral] The official code of "UniME-V2: MLLM-as-a-Judge for Universal Multimodal Embedding Learning" ☆62 · Dec 8, 2025 · Updated 2 months ago
- ☆27 · Mar 21, 2024 · Updated last year
- 🚀 [NeurIPS24] Make Vision Matter in Visual-Question-Answering (VQA)! Introducing NaturalBench, a vision-centric VQA benchmark (NeurIPS'2… ☆89 · Jun 24, 2025 · Updated 7 months ago
- OpenVLThinker: An Early Exploration to Vision-Language Reasoning via Iterative Self-Improvement ☆129 · Jul 24, 2025 · Updated 6 months ago
- ☆27 · Jul 6, 2024 · Updated last year
- ☆32 · Dec 23, 2025 · Updated last month
- SophiaVL-R1: Reinforcing MLLMs Reasoning with Thinking Reward ☆91 · Aug 8, 2025 · Updated 6 months ago
- COLA: Evaluate how well your vision-language model can Compose Objects Localized with Attributes! ☆25 · Nov 23, 2024 · Updated last year
- GitHub repository for "Bring Reason to Vision: Understanding Perception and Reasoning through Model Merging" (ICML 2025) ☆88 · Sep 23, 2025 · Updated 4 months ago
- ☆32 · Feb 8, 2024 · Updated 2 years ago