PKU-Alignment / safe-sora
SafeSora is a human preference dataset designed to support safety alignment research in the text-to-video generation field, aiming to enhance the helpfulness and harmlessness of Large Vision Models (LVMs).
☆34 · Updated last year
Alternatives and similar repositories for safe-sora
Users interested in safe-sora are comparing it to the libraries listed below.
- [ICML 2025] The code and data of the paper: Towards World Simulator: Crafting Physical Commonsense-Based Benchmark for Video Generation ☆148 · Updated last year
- ☆311 · Updated last month
- WISE: A World Knowledge-Informed Semantic Evaluation for Text-to-Image Generation ☆179 · Updated 2 months ago
- [ICLR 2026] Uni-CoT: Towards Unified Chain-of-Thought Reasoning Across Text and Vision ☆205 · Updated this week
- Code for MetaMorph: Multimodal Understanding and Generation via Instruction Tuning ☆232 · Updated last week
- ☆96 · Updated 7 months ago
- Empowering Unified MLLM with Multi-granular Visual Generation ☆129 · Updated last year
- Official repository of "GoT: Unleashing Reasoning Capability of Multimodal Large Language Model for Visual Generation and Editing" ☆305 · Updated 4 months ago
- Doodling our way to AGI ✏️ 🖼️ 🧠 ☆120 · Updated 8 months ago
- [ICLR 2025] AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark ☆138 · Updated 7 months ago
- Source code for "A Dense Reward View on Aligning Text-to-Image Diffusion with Preference" (ICML'24). ☆40 · Updated last year
- [NeurIPS 2025] The official repository for our paper, "Open Vision Reasoner: Transferring Linguistic Cognitive Behavior for Visual Reason…" ☆152 · Updated 4 months ago
- ☆80 · Updated 7 months ago
- ☆59 · Updated 5 months ago
- [ICLR'25] Reconstructive Visual Instruction Tuning ☆133 · Updated 9 months ago
- The code repository of UniRL ☆51 · Updated 8 months ago
- Official repo of "MMBench-GUI: Hierarchical Multi-Platform Evaluation Framework for GUI Agents". It can be used to evaluate a GUI agent w… ☆97 · Updated 4 months ago
- The official code of "VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning" [NeurIPS 2025] ☆179 · Updated 7 months ago
- [ICLR 2025] VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation ☆417 · Updated 9 months ago
- ☆46 · Updated last year
- [CVPR 2025] 🔥 Official impl. of "TokenFlow: Unified Image Tokenizer for Multimodal Understanding and Generation". ☆428 · Updated 5 months ago
- [NeurIPS 2025 D&B Oral] Official repository of the paper: Envisioning Beyond the Pixels: Benchmarking Reasoning-Informed Visual Editing ☆139 · Updated last month
- MLLM-Tool: A Multimodal Large Language Model For Tool Agent Learning ☆138 · Updated 3 months ago
- ☆41 · Updated 3 months ago
- ☆155 · Updated last year
- Video-Holmes: Can MLLM Think Like Holmes for Complex Video Reasoning? ☆86 · Updated 6 months ago
- Multimodal RewardBench ☆60 · Updated 11 months ago
- [NeurIPS 2024] This repo contains evaluation code for the paper "Are We on the Right Way for Evaluating Large Vision-Language Models" ☆203 · Updated last year
- G1: Bootstrapping Perception and Reasoning Abilities of Vision-Language Model via Reinforcement Learning ☆95 · Updated 8 months ago
- Selftok: Discrete Visual Tokens of Autoregression, by Diffusion, and for Reasoning ☆236 · Updated 8 months ago