DelinQu / pj-probe
A Visualization Tool for GPU Occupancy on S Cluster.
☆13 · Updated 3 years ago
Alternatives and similar repositories for pj-probe
Users interested in pj-probe are comparing it to the libraries listed below.
- [ICCV 2025 Oral] Latent Motion Token as the Bridging Language for Learning Robot Manipulation from Videos ☆162 · Updated 4 months ago
- [ICLR 2026] MMSI-Bench: A Benchmark for Multi-Image Spatial Intelligence ☆76 · Updated last week
- [ICRA 2026] VITRA: Scalable Vision-Language-Action Model Pretraining for Robotic Manipulation with Real-Life Human Activity Videos ☆297 · Updated this week
- [ICML 2025] The PyTorch implementation of the paper "AdaWorld: Learning Adaptable World Models with Latent Actions" ☆197 · Updated 7 months ago
- [NeurIPS 2024] The implementation and dataset of LiveScene: Language Embedding Interactive Radiance Fields for Physical Scene Rendering and… ☆60 · Updated 10 months ago
- Official repository for "iVideoGPT: Interactive VideoGPTs are Scalable World Models" (NeurIPS 2024), https://arxiv.org/abs/2405.15223 ☆164 · Updated 4 months ago
- Official repository of "Learning to Act from Actionless Videos through Dense Correspondences" ☆247 · Updated last year
- ☆142 · Updated 7 months ago
- ☆146 · Updated 2 weeks ago
- [ICLR 2025 Spotlight] Grounding Video Models to Actions through Goal Conditioned Exploration ☆60 · Updated 9 months ago
- Official code for "Embodied-R1: Reinforced Embodied Reasoning for General Robotic Manipulation" ☆122 · Updated 5 months ago
- Official PyTorch implementation of Unified Video Action Model (RSS 2025) ☆331 · Updated 6 months ago
- Efficiently apply modification functions to RLDS/TFDS datasets ☆29 · Updated last year
- Official repository of LIBERO-plus, a generalized benchmark for in-depth robustness analysis of vision-language-action models ☆212 · Updated 3 weeks ago
- Ctrl-World: A Controllable Generative World Model for Robot Manipulation ☆262 · Updated 2 months ago
- [NeurIPS 2025] OST-Bench: Evaluating the Capabilities of MLLMs in Online Spatio-temporal Scene Understanding ☆71 · Updated 4 months ago
- ☆184 · Updated last week
- InternVLA-M1: A Spatially Guided Vision-Language-Action Framework for Generalist Robot Policy ☆363 · Updated last month
- [NeurIPS 2025 Spotlight] SoFar: Language-Grounded Orientation Bridges Spatial Reasoning and Object Manipulation ☆225 · Updated 7 months ago
- Thinking in 360°: Humanoid Visual Search in the Wild ☆115 · Updated 2 weeks ago
- [NeurIPS 2025] Official implementation of "RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics" ☆228 · Updated last month
- [NeurIPS 2025] EgoVid-5M: A Large-Scale Video-Action Dataset for Egocentric Video Generation ☆127 · Updated 6 months ago
- [ECCV 2024] ManiGaussian: Dynamic Gaussian Splatting for Multi-task Robotic Manipulation ☆265 · Updated 10 months ago
- Official implementation of "Spatial-Forcing: Implicit Spatial Representation Alignment for Vision-Language-Action Models" ☆180 · Updated last month
- Code to load DreamZero model checkpoints and run evaluation on DROID-sim and Genie Sim 3.0 ☆664 · Updated this week
- List of papers on video-centric robot learning ☆22 · Updated last year
- Official implementation of the paper "Task Reconstruction and Extrapolation for $\pi_0$ using Text Latent" (https://arxiv.org/pdf/2505.035…) ☆102 · Updated 6 months ago
- [NeurIPS 2024] MSR3D: Advanced Situated Reasoning in 3D Scenes ☆70 · Updated 2 months ago
- Implementation of VLM4VLA ☆115 · Updated last week
- VLA-RFT: Vision-Language-Action Models with Reinforcement Fine-Tuning ☆126 · Updated 4 months ago