InternLM / AlchemistCoder
☆35 · Updated last year
Alternatives and similar repositories for AlchemistCoder
Users interested in AlchemistCoder are comparing it to the libraries listed below.
- A Framework for Decoupling and Assessing the Capabilities of VLMs ☆43 · Updated last year
- ☆75 · Updated last year
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆109 · Updated 6 months ago
- Web2Code: A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs ☆97 · Updated last year
- ☆29 · Updated last year
- ✨✨Beyond LLaVA-HD: Diving into High-Resolution Large Multimodal Models ☆163 · Updated 11 months ago
- Code for the paper: Harnessing Webpage UIs for Text-Rich Visual Understanding ☆53 · Updated 11 months ago
- Exploring Efficient Fine-Grained Perception of Multimodal Large Language Models ☆64 · Updated last year
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of… ☆117 · Updated last year
- [ACL 2025 Findings] Benchmarking Multihop Multimodal Internet Agents ☆47 · Updated 9 months ago
- MM-Instruct: Generated Visual Instructions for Large Multimodal Model Alignment ☆35 · Updated last year
- The SAIL-VL2 series model developed by the BytedanceDouyinContent Group ☆75 · Updated 2 months ago
- The official repo for "VisualWebInstruct: Scaling up Multimodal Instruction Data through Web Search" [EMNLP 2025] ☆35 · Updated 3 months ago
- ☆90 · Updated last year
- [EMNLP 2025] Distill Visual Chart Reasoning Ability from LLMs to MLLMs ☆57 · Updated 3 months ago
- Image Textualization: An Automatic Framework for Generating Rich and Detailed Image Descriptions (NeurIPS 2024) ☆169 · Updated last year
- The huggingface implementation of Fine-grained Late-interaction Multi-modal Retriever. ☆103 · Updated 6 months ago
- Official repo for StableLLAVA ☆95 · Updated last year
- A benchmark for evaluating the capabilities of large vision-language models (LVLMs) ☆46 · Updated 2 years ago
- The code and data of We-Math, accepted to the ACL 2025 main conference. ☆133 · Updated last month
- ZeroGUI: Automating Online GUI Learning at Zero Human Cost ☆102 · Updated 4 months ago
- ☆50 · Updated 2 years ago
- [arXiv] V2PE: Improving Multimodal Long-Context Capability of Vision-Language Models with Variable Visual Position Encoding ☆58 · Updated 11 months ago
- Code for Math-LLaVA: Bootstrapping Mathematical Reasoning for Multimodal Large Language Models ☆92 · Updated last year
- MTVQA: Benchmarking Multilingual Text-Centric Visual Question Answering. A comprehensive evaluation of multimodal large model multilingua… ☆64 · Updated 6 months ago
- This repo contains code and data for the ICLR 2025 paper MIA-Bench: Towards Better Instruction Following Evaluation of Multimodal LLMs ☆34 · Updated 8 months ago
- Official repo for the paper VCR: Visual Caption Restoration. See arxiv.org/pdf/2406.06462 for details. ☆31 · Updated 9 months ago
- MLLM-DataEngine: An Iterative Refinement Approach for MLLM ☆48 · Updated last year
- [EMNLP 2024] RWKV-CLIP: A Robust Vision-Language Representation Learner ☆143 · Updated 6 months ago
- FuseAI Project ☆87 · Updated 10 months ago