dannyXSC / Fudan_FreshmanTest
Fudan University graduate new-student orientation education test
☆23 · Updated 4 months ago
Alternatives and similar repositories for Fudan_FreshmanTest
Users interested in Fudan_FreshmanTest are comparing it to the repositories listed below
- Official implementation of MC-LLaVA. ☆140 · Updated 2 months ago
- Collections of Papers and Projects for Multimodal Reasoning. ☆106 · Updated 8 months ago
- A Python script for downloading Hugging Face datasets and models. ☆20 · Updated 9 months ago
- Harbin Institute of Technology, Spring 2023 compiler systems course: labs, exercises, slides, and final-exam review materials ☆11 · Updated 2 years ago
- Use Clash on Linux without sudo privileges ☆165 · Updated last year
- R1-like Video-LLM for Temporal Grounding ☆130 · Updated 6 months ago
- [ICLR2025] Official code implementation of Video-UTR: Unhackable Temporal Rewarding for Scalable Video MLLMs ☆61 · Updated 10 months ago
- A tiny paper rating web ☆38 · Updated 9 months ago
- A paper list of Awesome Latent Space. ☆276 · Updated last week
- This is a collection of recent papers on reasoning in video generation models. ☆91 · Updated last week
- A framework for unified personalized model, achieving mutual enhancement between personalized understanding and generation. Demonstrating… ☆129 · Updated 2 weeks ago
- [arXiv 2025] MMSI-Bench: A Benchmark for Multi-Image Spatial Intelligence ☆68 · Updated last week
- SpaceR: The first MLLM empowered by SG-RLVR for video spatial reasoning ☆102 · Updated 6 months ago
- 🔥🔥🔥 Latest Papers, Codes and Datasets on Video-LMM Post-Training ☆223 · Updated last month
- MM-ACT: Learn from Multimodal Parallel Generation to Act ☆89 · Updated last week
- [NeurIPS 2025] 𝓡𝓣𝓥-𝓑𝓮𝓷𝓬𝓱: Benchmarking MLLM Continuous Perception, Understanding and Reasoning through Real-Time Video. ☆29 · Updated last week
- [ACM MM 2025] TimeChat-online: 80% Visual Tokens are Naturally Redundant in Streaming Videos ☆107 · Updated 3 weeks ago
- [NeurIPS 2025] ⭐️ Reason-RFT: Reinforcement Fine-Tuning for Visual Reasoning. ☆252 · Updated 3 months ago
- 🔥 CVPR 2025 Multimodal Large Language Models Paper List ☆154 · Updated 9 months ago
- Latest Advances on (RL based) Multimodal Reasoning and Generation in Multimodal Large Language Models ☆43 · Updated 2 months ago
- 📖 This is a repository for organizing papers, codes and other resources related to Visual Reinforcement Learning. ☆375 · Updated this week
- [CVPR 2024] Narrative Action Evaluation with Prompt-Guided Multimodal Interaction ☆40 · Updated last year
- Survey: https://arxiv.org/pdf/2507.20198 ☆269 · Updated 2 weeks ago
- [ICLR'25] Official code for the paper 'MLLMs Know Where to Look: Training-free Perception of Small Visual Details with Multimodal LLMs' ☆311 · Updated 8 months ago
- [NeurIPS 2025] MINT-CoT: Enabling Interleaved Visual Tokens in Mathematical Chain-of-Thought Reasoning ☆95 · Updated 3 months ago
- ☆154 · Updated 10 months ago
- ✨ First Open-Source R1-like Video-LLM [2025/02/18] ☆380 · Updated 10 months ago
- [NeurIPS 2025] Official Repo of Omni-R1: Reinforcement Learning for Omnimodal Reasoning via Two-System Collaboration ☆105 · Updated last month
- [ICCV 2025] CombatVLA: An Efficient Vision-Language-Action Model for Combat Tasks in 3D Action Role-Playing Games ☆31 · Updated last month
- [NeurIPS 2025] Official implementation of "RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics" ☆218 · Updated 3 weeks ago