hhnqqq / py_hfd
A Python script for downloading Hugging Face datasets and models.
☆20 · Updated 7 months ago
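The script's source isn't shown on this page, so as a minimal sketch of the task it performs (assuming the common approach of fetching files via Hugging Face's public `resolve` endpoint; the helper name `hf_file_url` is illustrative and not part of py_hfd):

```python
# Illustrative only: py_hfd's actual implementation is not shown on this page.
# This builds the public Hugging Face "resolve" download URL that a downloader
# script would typically fetch; hf_file_url is a hypothetical helper name.
def hf_file_url(repo_id: str, filename: str,
                repo_type: str = "model", revision: str = "main") -> str:
    """Return the direct download URL for one file in a model or dataset repo."""
    # Dataset repos live under the "datasets/" namespace; model repos do not.
    prefix = "datasets/" if repo_type == "dataset" else ""
    return f"https://huggingface.co/{prefix}{repo_id}/resolve/{revision}/{filename}"

# Usage: fetch with any HTTP client, e.g.
#   urllib.request.urlretrieve(hf_file_url("gpt2", "config.json"), "config.json")
```

For whole-repository downloads, the official `huggingface_hub` package's `snapshot_download` covers the same ground with resume and caching support.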
Alternatives and similar repositories for py_hfd
Users interested in py_hfd are comparing it to the repositories listed below
- A tiny paper rating web app ☆38 · Updated 8 months ago
- Collections of Papers and Projects for Multimodal Reasoning. ☆106 · Updated 7 months ago
- [ACM MM 2025] TimeChat-Online: 80% Visual Tokens are Naturally Redundant in Streaming Videos ☆94 · Updated 2 months ago
- 📖 This is a repository for organizing papers, codes, and other resources related to unified multimodal models. ☆334 · Updated last month
- Interleaving Reasoning: Next-Generation Reasoning Systems for AGI ☆213 · Updated last month
- This repository is the official implementation of "Look-Back: Implicit Visual Re-focusing in MLLM Reasoning". ☆72 · Updated 4 months ago
- 🔥 CVPR 2025 Multimodal Large Language Models Paper List ☆154 · Updated 8 months ago
- Official implementation of MC-LLaVA. ☆139 · Updated 3 weeks ago
- SpaceR: The first MLLM empowered by SG-RLVR for video spatial reasoning ☆98 · Updated 4 months ago
- Official codebase for the paper Latent Visual Reasoning ☆44 · Updated last month
- Survey: https://arxiv.org/pdf/2507.20198 ☆228 · Updated last month
- MetaSpatial leverages reinforcement learning to enhance 3D spatial reasoning in vision-language models (VLMs), enabling more structured, … ☆194 · Updated 7 months ago
- ☆151 · Updated 9 months ago
- ☆110 · Updated 2 months ago
- [NeurIPS 2025] RTV-Bench: Benchmarking MLLM Continuous Perception, Understanding and Reasoning through Real-Time Video. ☆26 · Updated 2 months ago
- R1-like Video-LLM for Temporal Grounding ☆125 · Updated 5 months ago
- ☆293 · Updated last month
- Official repository for VisionZip (CVPR 2025) ☆381 · Updated 4 months ago
- Machine Mental Imagery: Empower Multimodal Reasoning with Latent Visual Tokens (arXiv 2025) ☆197 · Updated 4 months ago
- [LLaVA-Video-R1] ✨ First Adaptation of R1 to LLaVA-Video (2025-03-18) ☆35 · Updated 6 months ago
- A Collection of Papers on Diffusion Language Models ☆148 · Updated 2 months ago
- [CVPR 2024] Narrative Action Evaluation with Prompt-Guided Multimodal Interaction ☆40 · Updated last year
- [ICLR 2025] Official code implementation of Video-UTR: Unhackable Temporal Rewarding for Scalable Video MLLMs ☆61 · Updated 9 months ago
- This is a collection of recent papers on reasoning in video generation models. ☆66 · Updated last week
- Imagine While Reasoning in Space: Multimodal Visualization-of-Thought (ICML 2025) ☆58 · Updated 7 months ago
- (CVPR 2025) PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction ☆134 · Updated 9 months ago
- A collection of awesome think-with-videos papers. ☆68 · Updated last week
- ViewSpatial-Bench: Evaluating Multi-perspective Spatial Localization in Vision-Language Models ☆66 · Updated 6 months ago
- ☆130 · Updated 8 months ago
- ☆107 · Updated 4 months ago