patrick-tssn / Awesome-Colorful-LLM
Recent advancements propelled by large language models (LLMs), encompassing an array of domains including Vision, Audio, Agent, Robotics, Fundamental Sciences such as Mathematics, and Omics.
☆123 · Updated 2 months ago
Alternatives and similar repositories for Awesome-Colorful-LLM
Users that are interested in Awesome-Colorful-LLM are comparing it to the libraries listed below
- [ACL 2024] PCA-Bench: Evaluating Multimodal Large Language Models in Perception-Cognition-Action Chain ☆104 · Updated last year
- A benchmark for evaluating the capabilities of large vision-language models (LVLMs) ☆46 · Updated last year
- Touchstone: Evaluating Vision-Language Models by Language Models ☆83 · Updated last year
- ☆66 · Updated last year
- A curated list of papers, repositories, tutorials, and anything related to large language models for tools ☆68 · Updated 2 years ago
- [ICCV 2025 Highlight] The official repository for "2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining" ☆167 · Updated 5 months ago
- Official code for the paper "Mantis: Multi-Image Instruction Tuning" [TMLR 2024] ☆225 · Updated 5 months ago
- ☆73 · Updated last year
- ☆50 · Updated last year
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of… ☆115 · Updated 9 months ago
- [NeurIPS 2024] CharXiv: Charting Gaps in Realistic Chart Understanding in Multimodal LLMs ☆124 · Updated 4 months ago
- Evaluation framework for paper "VisualWebBench: How Far Have Multimodal LLMs Evolved in Web Page Understanding and Grounding?" ☆58 · Updated 10 months ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆105 · Updated 2 months ago
- Official GitHub repo of G-LLaVA ☆146 · Updated 6 months ago
- [CVPR'24] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback ☆287 · Updated 11 months ago
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model ☆269 · Updated last year
- A Self-Training Framework for Vision-Language Reasoning ☆82 · Updated 7 months ago
- The codebase for our EMNLP24 paper: Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Mo… ☆83 · Updated 6 months ago
- ☆100 · Updated last year
- MM-Instruct: Generated Visual Instructions for Large Multimodal Model Alignment ☆35 · Updated last year
- Official code for paper "UniIR: Training and Benchmarking Universal Multimodal Information Retrievers" (ECCV 2024) ☆159 · Updated 10 months ago
- Code for Math-LLaVA: Bootstrapping Mathematical Reasoning for Multimodal Large Language Models ☆90 · Updated last year
- A curated list of resources about long context in large language models and video understanding. ☆31 · Updated 2 years ago
- A Framework for Decoupling and Assessing the Capabilities of VLMs ☆43 · Updated last year
- Official repository of the MMDU dataset ☆93 · Updated 10 months ago
- This repo contains the code for "MEGA-Bench: Scaling Multimodal Evaluation to over 500 Real-World Tasks" [ICLR 2025] ☆74 · Updated last month
- An RLHF Infrastructure for Vision-Language Models ☆181 · Updated 9 months ago
- Reading list for Multimodal Large Language Models ☆68 · Updated 2 years ago
- An Easy-to-use Hallucination Detection Framework for LLMs. ☆60 · Updated last year
- MMICL, a state-of-the-art VLM with in-context learning (ICL) ability, from PKU ☆352 · Updated last year