KaijingOfficial / Vscode-Debug-Python-on-Slurm
An easy way to debug Python for Slurm HPC users.
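A sketch of the workflow such tools typically wrap, assuming the common debugpy attach pattern — the port, node name, hostname, and script name below are illustrative placeholders, not taken from the repo:

```shell
# 1. On a Slurm compute node, start the script under debugpy and have it
#    wait for the IDE to attach (port 5678 is an arbitrary choice):
srun --pty python -m debugpy --listen 0.0.0.0:5678 --wait-for-client train.py

# 2. From your local machine, forward the port to the compute node
#    (replace node123 with the node Slurm allocated):
ssh -N -L 5678:node123:5678 user@cluster.example.edu
```

In VS Code, a "Python Debugger: Remote Attach" launch configuration pointing at `localhost:5678` then completes the loop; breakpoints set locally fire in the job running on the compute node.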
☆26 · Updated 6 months ago
Alternatives and similar repositories for Vscode-Debug-Python-on-Slurm
Users interested in Vscode-Debug-Python-on-Slurm are comparing it to the repositories listed below.
- [ICLR 2025] Official implementation and benchmark evaluation repository of PhysBench: Benchmarking and Enhancing Vision-Language Models … ☆72 · Updated 4 months ago
- An ML research template with good documentation by Boyuan Chen, an MIT PhD student ☆87 · Updated 7 months ago
- Source code for the paper "MindJourney: Test-Time Scaling with World Models for Spatial Reasoning" ☆85 · Updated 2 months ago
- A paper list for spatial reasoning ☆143 · Updated 4 months ago
- [NeurIPS 2025] InternScenes: A Large-scale Interactive Indoor Scene Dataset with Realistic Layouts. ☆181 · Updated this week
- Physical laws underpin all existence, and harnessing them for generative modeling opens boundless possibilities for advancing science and… ☆225 · Updated 5 months ago
- Code release for the paper "Test-Time Training Done Right" ☆295 · Updated last month
- Thinking with Videos from Open-Source Priors. We reproduce chain-of-frames visual reasoning by fine-tuning open-source video models. Give… ☆131 · Updated last week
- MetaSpatial leverages reinforcement learning to enhance 3D spatial reasoning in vision-language models (VLMs), enabling more structured, … ☆190 · Updated 5 months ago
- ☆143 · Updated 9 months ago
- ☆16 · Updated last year
- [arXiv ’25] Learning Video Generation for Robotic Manipulation with Collaborative Trajectory Control ☆81 · Updated 3 months ago
- Generative World Explorer ☆157 · Updated 4 months ago
- [CVPR 2024] Situational Awareness Matters in 3D Vision Language Reasoning ☆42 · Updated 10 months ago
- Official repository of "EgoMono4D: Self-Supervised Monocular 4D Scene Reconstruction for Egocentric Videos" ☆35 · Updated 3 weeks ago
- ☆27 · Updated 3 months ago
- Use a messaging app/bot to notify you when running time-consuming tasks. Bake your experiments! ☆79 · Updated 3 months ago
- VLA-RFT: Vision-Language-Action Models with Reinforcement Fine-Tuning ☆55 · Updated last week
- [ICLR 2025 Spotlight] Grounding Video Models to Actions through Goal Conditioned Exploration ☆56 · Updated 5 months ago
- ☆90 · Updated 2 weeks ago
- [NeurIPS 2025 Spotlight] SimWorld: An Open-ended Realistic Simulator for Autonomous Agents in Physical and Social Worlds ☆65 · Updated last week
- ☆176 · Updated 2 weeks ago
- A comprehensive list of papers investigating physical cognition in video generation, including papers, code, and related websites. ☆183 · Updated last week
- A collection of the latest spatial, 3D, and video/temporal reasoning papers ☆22 · Updated 2 weeks ago
- A list of works on video generation towards world models ☆167 · Updated 2 months ago
- The official repo for the paper "VQ-VLA: Improving Vision-Language-Action Models via Scaling Vector-Quantized Action Tokenizers" (ICCV 2025) ☆83 · Updated 2 months ago
- Awesome paper list and repos of the paper "A Comprehensive Survey of Embodied World Models" ☆30 · Updated 3 weeks ago
- [NeurIPS 2025] EgoVid-5M: A Large-Scale Video-Action Dataset for Egocentric Video Generation ☆119 · Updated 2 months ago
- ☆21 · Updated 3 years ago
- [NeurIPS ’24] Multi-Object 3D Grounding with Dynamic Modules and Language Informed Spatial Attention ☆27 · Updated 4 months ago