dannyXSC / Fudan_FreshmanTest
Entrance-education quiz for Fudan University graduate students
☆22 · Updated 3 weeks ago
Alternatives and similar repositories for Fudan_FreshmanTest
Users who are interested in Fudan_FreshmanTest are comparing it to the repositories listed below.
- Official implementation of MC-LLaVA. ☆140 · Updated last month
- Collections of Papers and Projects for Multimodal Reasoning. ☆105 · Updated 5 months ago
- A paper list for spatial reasoning ☆139 · Updated 3 months ago
- A Python script for downloading Hugging Face datasets and models. ☆20 · Updated 5 months ago
- Use Clash on Linux without sudo privileges ☆145 · Updated 10 months ago
- [ICLR 2025] Official code implementation of Video-UTR: Unhackable Temporal Rewarding for Scalable Video MLLMs ☆59 · Updated 7 months ago
- A Vue-based project page template for academic papers. (in development) https://junyaohu.github.io/academic-project-page-template-vue ☆290 · Updated 2 months ago
- 🔥 CVPR 2025 Multimodal Large Language Models Paper List ☆153 · Updated 6 months ago
- A tiny paper rating web ☆39 · Updated 6 months ago
- Lecture Notes for Scientific Machine Learning 2025 ☆27 · Updated this week
- Latest Advances on (RL-based) Multimodal Reasoning and Generation in Multimodal Large Language Models ☆38 · Updated last week
- [NeurIPS 2025] ⭐️ Reason-RFT: Reinforcement Fine-Tuning for Visual Reasoning. ☆206 · Updated last week
- An example reproduction checklist for AAAI-26 submissions. ☆106 · Updated last month
- Harbin Institute of Technology, Spring 2023 Compilers course: labs, exercises, slides, and final-exam review materials ☆11 · Updated 2 years ago
- A framework for unified personalized model, achieving mutual enhancement between personalized understanding and generation. Demonstrating… ☆121 · Updated last month
- 📖 This is a repository for organizing papers, codes, and other resources related to unified multimodal models. ☆297 · Updated this week
- Official repo and evaluation implementation of VSI-Bench ☆599 · Updated last month
- [NeurIPS 2025] Official Repo of Omni-R1: Reinforcement Learning for Omnimodal Reasoning via Two-System Collaboration ☆82 · Updated 3 months ago
- [ICLR'25] Official code for the paper 'MLLMs Know Where to Look: Training-free Perception of Small Visual Details with Multimodal LLMs' ☆265 · Updated 5 months ago
- Embodied-Reasoner: Synergizing Visual Search, Reasoning, and Action for Embodied Interactive Tasks ☆168 · Updated 3 months ago
- ViewSpatial-Bench: Evaluating Multi-perspective Spatial Localization in Vision-Language Models ☆59 · Updated 3 months ago
- [LLaVA-Video-R1] ✨ First Adaptation of R1 to LLaVA-Video (2025-03-18) ☆32 · Updated 4 months ago
- ✨ First Open-Source R1-like Video-LLM [2025/02/18] ☆365 · Updated 7 months ago
- Papers and codes collection for customized, personalized and editable generative models ☆27 · Updated 11 months ago
- [ICCV 2025] CombatVLA: An Efficient Vision-Language-Action Model for Combat Tasks in 3D Action Role-Playing Games ☆19 · Updated 2 months ago
- Survey: https://arxiv.org/pdf/2507.20198 ☆145 · Updated 2 weeks ago
- (no description) ☆140 · Updated 7 months ago
- R1-like Video-LLM for Temporal Grounding ☆115 · Updated 3 months ago
- [arXiv 2025] MMSI-Bench: A Benchmark for Multi-Image Spatial Intelligence ☆52 · Updated last month
- Official repo of VLABench, a large-scale benchmark designed for fairly evaluating VLA, Embodied Agent, and VLMs. ☆293 · Updated last month