dannyXSC / Fudan_FreshmanTest
Fudan University graduate new-student orientation test
☆23 · Updated last month
Alternatives and similar repositories for Fudan_FreshmanTest
Users interested in Fudan_FreshmanTest are comparing it to the repositories listed below.
Sorting:
- A paper list for spatial reasoning ☆143 · Updated 4 months ago
- Collections of Papers and Projects for Multimodal Reasoning. ☆105 · Updated 5 months ago
- A python script for downloading huggingface datasets and models. ☆20 · Updated 6 months ago
- Official implementation of MC-LLaVA. ☆140 · Updated last month
- [ICLR2025] Official code implementation of Video-UTR: Unhackable Temporal Rewarding for Scalable Video MLLMs ☆60 · Updated 7 months ago
- Latest Advances on (RL based) Multimodal Reasoning and Generation in Multimodal Large Language Models ☆39 · Updated last week
- [LLaVA-Video-R1] ✨First Adaptation of R1 to LLaVA-Video (2025-03-18) ☆32 · Updated 5 months ago
- [ACM MM 2025] TimeChat-online: 80% Visual Tokens are Naturally Redundant in Streaming Videos ☆83 · Updated last month
- Survey: https://arxiv.org/pdf/2507.20198 ☆172 · Updated this week
- 🔥CVPR 2025 Multimodal Large Language Models Paper List ☆155 · Updated 7 months ago
- ✨First Open-Source R1-like Video-LLM [2025/02/18] ☆368 · Updated 7 months ago
- Use Clash on Linux without sudo privileges ☆149 · Updated 11 months ago
- Labs, exercises, slides, and final-exam review materials for the Spring 2023 Compilers course at Harbin Institute of Technology ☆11 · Updated 2 years ago
- SpaceR: The first MLLM empowered by SG-RLVR for video spatial reasoning ☆83 · Updated 3 months ago
- FNIN: A Fourier Neural Operator-based Numerical Integration Network for Surface-form-gradients ☆11 · Updated 8 months ago
- Official repo and evaluation implementation of VSI-Bench ☆603 · Updated 2 months ago
- A framework for unified personalized model, achieving mutual enhancement between personalized understanding and generation. Demonstrating… ☆121 · Updated 2 weeks ago
- [NeurIPS 2025] ⭐️ Reason-RFT: Reinforcement Fine-Tuning for Visual Reasoning. ☆218 · Updated 2 weeks ago
- R1-like Video-LLM for Temporal Grounding ☆120 · Updated 3 months ago
- A tiny paper rating web ☆39 · Updated 7 months ago
- ☆59 · Updated last year
- 📖 This is a repository for organizing papers, codes, and other resources related to unified multimodal models. ☆313 · Updated last week
- [arXiv 2025] MMSI-Bench: A Benchmark for Multi-Image Spatial Intelligence ☆54 · Updated 2 months ago
- Accepted by CVPR 2024 ☆39 · Updated last year
- starVLA: A Lego-like Codebase for Vision-Language-Action Model Developing ☆79 · Updated this week
- [ICLR'25] Official code for the paper 'MLLMs Know Where to Look: Training-free Perception of Small Visual Details with Multimodal LLMs' ☆274 · Updated 5 months ago
- ☆59 · Updated 6 months ago
- An example reproduction checklist for AAAI-26 submissions. ☆105 · Updated 2 months ago
- Official repository for VisionZip (CVPR 2025) ☆358 · Updated 2 months ago
- ☆58 · Updated 7 months ago