elenacliu / pytorch_cuda_driver_compatibilities
Quick check of compatible versions of PyTorch, Python, CUDA, cuDNN, and the NVIDIA driver!
☆35 · Updated last year
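At its core, a compatibility quick-check like this repo reduces to a lookup table from CUDA toolkit version to the minimum NVIDIA driver that supports it. A minimal sketch of that idea in Python, where the minimum-driver numbers are illustrative assumptions (consult NVIDIA's CUDA release notes for the authoritative table):

```python
# Illustrative table: minimum Linux driver version per CUDA toolkit.
# These numbers are assumptions for the sketch; verify against
# NVIDIA's official CUDA Toolkit release notes before relying on them.
MIN_LINUX_DRIVER = {
    "11.8": (520, 61, 5),
    "12.1": (530, 30, 2),
    "12.4": (550, 54, 14),
}

def parse_driver(version: str) -> tuple:
    """Turn a driver string like '535.104.05' into a comparable int tuple."""
    return tuple(int(part) for part in version.split("."))

def driver_ok(cuda: str, driver: str) -> bool:
    """Return True if the installed driver meets the minimum for this CUDA version."""
    return parse_driver(driver) >= MIN_LINUX_DRIVER[cuda]

print(driver_ok("12.1", "535.104.05"))  # a 535-series driver covers CUDA 12.1
```

Tuple comparison makes the version check a single `>=`, which is why the driver string is parsed into integers rather than compared lexically (where "9" would sort after "535").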
Alternatives and similar repositories for pytorch_cuda_driver_compatibilities
Users interested in pytorch_cuda_driver_compatibilities are comparing it to the libraries listed below.
- EO: Open-source Unified Embodied Foundation Model Series ☆29 · Updated last week
- OpenEQA: Embodied Question Answering in the Era of Foundation Models ☆312 · Updated 11 months ago
- Visual Embodied Brain: Let Multimodal Large Language Models See, Think, and Control in Spaces ☆80 · Updated 2 months ago
- Official implementation for BitVLA: 1-bit Vision-Language-Action Models for Robotics Manipulation ☆73 · Updated last month
- A collection of research papers on World Models. ☆38 · Updated last year
- ☆74 · Updated 3 weeks ago
- LoTa-Bench: Benchmarking Language-oriented Task Planners for Embodied Agents (ICLR 2024) ☆78 · Updated 2 months ago
- A thin wrapper of ChatGPT for improving paper writing. ☆254 · Updated 2 years ago
- 🦾 A Dual-System VLA with System2 Thinking ☆99 · Updated 2 weeks ago
- RLinf is a flexible and scalable open-source infrastructure designed for post-training foundation models (LLMs, VLMs, VLAs) via reinforce… ☆49 · Updated last week
- ☆29 · Updated 2 years ago
- The official repo for "SpatialBot: Precise Spatial Understanding with Vision Language Models." ☆301 · Updated 3 months ago
- Efficiently apply modification functions to RLDS/TFDS datasets. ☆32 · Updated last year
- Official repository for "RLVR-World: Training World Models with Reinforcement Learning", https://arxiv.org/abs/2505.13934 ☆80 · Updated 2 months ago
- Latest Advances on Embodied Multimodal LLMs (or Vision-Language-Action Models). ☆120 · Updated last year
- ☆80 · Updated last month
- Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning ☆74 · Updated 3 months ago
- [ICML 2025 Oral] Official repo of EmbodiedBench, a comprehensive benchmark designed to evaluate MLLMs as embodied agents. ☆179 · Updated last month
- A personalized, customizable arXiv template for effectively tracking relevant content, authors, and academic conferences in a specific field. ☆320 · Updated this week
- This repository compiles a list of papers related to the application of video technology in the field of robotics! Star⭐ the repo and fol… ☆165 · Updated 7 months ago
- OpenReview Submission Visualization (ICLR 2024/2025) ☆151 · Updated 10 months ago
- [ICML 2024] A Touch, Vision, and Language Dataset for Multimodal Alignment ☆83 · Updated 3 months ago
- ☆13 · Updated last month
- [ICLR 2023] SQA3D for embodied scene understanding and reasoning ☆143 · Updated last year
- [CVPR 2024] Official repository for "Tactile-Augmented Radiance Fields". ☆63 · Updated 6 months ago
- Latest Advances on Vision-Language-Action Models. ☆99 · Updated 6 months ago
- Paper collections of the continuous effort starting from World Models. ☆182 · Updated last year
- ☆258 · Updated last year
- NVIDIA GEAR Lab's initiative to solve the robotics data problem using world models ☆289 · Updated last week
- Official code of the paper "DeeR-VLA: Dynamic Inference of Multimodal Large Language Models for Efficient Robot Execution" ☆104 · Updated 6 months ago