Mixture-AI / meta-llama-explain
Explanation of the llama2 repo.
☆10 · Updated last year
Alternatives and similar repositories for meta-llama-explain
Users interested in meta-llama-explain are comparing it to the repositories listed below.
- Modified LLaVA framework for MOSS2, making MOSS2 a multimodal model. ☆13 · Updated last year
- Repository for the ACL 2025 Findings paper "From Specific-MLLMs to Omni-MLLMs: A Survey on MLLMs Aligned with Multi-modalities" ☆66 · Updated last week
- ☆24 · Updated 3 months ago
- Data and Code for CVPR 2025 paper "MMVU: Measuring Expert-Level Multi-Discipline Video Understanding" ☆75 · Updated 8 months ago
- WorldSense: Evaluating Real-world Omnimodal Understanding for Multimodal LLMs ☆33 · Updated last month
- Official repository of "ScaleCap: Inference-Time Scalable Image Captioning via Dual-Modality Debiasing" ☆57 · Updated 4 months ago
- Reinforcing Spatial Reasoning in Vision-Language Models with Interwoven Thinking and Visual Drawing ☆78 · Updated 3 months ago
- ☆132 · Updated 5 months ago
- [ICCV 2025] The official code of the paper "Deciphering Cross-Modal Alignment in Large Vision-Language Models with Modality Integration R…" ☆106 · Updated 4 months ago
- [Survey] Next Token Prediction Towards Multimodal Intelligence: A Comprehensive Survey ☆456 · Updated 10 months ago
- Official repository of the MMDU dataset ☆97 · Updated last year
- MLLM-Tool: A Multimodal Large Language Model For Tool Agent Learning ☆134 · Updated last month
- ☆116 · Updated this week
- MME-CoT: Benchmarking Chain-of-Thought in LMMs for Reasoning Quality, Robustness, and Efficiency ☆133 · Updated 3 months ago
- (ICLR'25) A Comprehensive Framework for Developing and Evaluating Multimodal Role-Playing Agents ☆89 · Updated 9 months ago
- [Preprint] On the Generalization of SFT: A Reinforcement Learning Perspective with Reward Rectification ☆498 · Updated 2 weeks ago
- ☆61 · Updated 2 months ago
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆75 · Updated last year
- VCR-Bench: A Comprehensive Evaluation Framework for Video Chain-of-Thought Reasoning ☆32 · Updated 4 months ago
- [NeurIPS 2025] MINT-CoT: Enabling Interleaved Visual Tokens in Mathematical Chain-of-Thought Reasoning ☆87 · Updated 2 months ago
- Video Chain of Thought: code for the ICML 2024 paper "Video-of-Thought: Step-by-Step Video Reasoning from Perception to Cognition" ☆169 · Updated 8 months ago
- Interleaving Reasoning: Next-Generation Reasoning Systems for AGI ☆204 · Updated last month
- MM-PRM: Enhancing Multimodal Mathematical Reasoning with Scalable Step-Level Supervision ☆25 · Updated 5 months ago
- [CVPR 2025] OmniMMI: A Comprehensive Multi-modal Interaction Benchmark in Streaming Video Contexts ☆19 · Updated 7 months ago
- [NeurIPS 2023 Datasets and Benchmarks Track] LAMM: Multi-Modal Large Language Models and Applications as AI Agents ☆318 · Updated last year
- VLM2-Bench [ACL 2025 Main]: A Closer Look at How Well VLMs Implicitly Link Explicit Matching Visual Cues ☆42 · Updated 6 months ago
- The official repository for the paper "ThinkMorph: Emergent Properties in Multimodal Interleaved Chain-of-Thought Reasoning" ☆96 · Updated this week
- An LMM that solves catastrophic forgetting, AAAI 2025 ☆44 · Updated 7 months ago
- ☆149 · Updated 9 months ago
- The Next Step Forward in Multimodal LLM Alignment ☆186 · Updated 6 months ago