EmPACTLab / Awesome-Neuroscience-Agent-Reasoning
Neuroscience Inspired Agent Reasoning Framework
☆29, updated 8 months ago
Alternatives and similar repositories for Awesome-Neuroscience-Agent-Reasoning
Users who are interested in Awesome-Neuroscience-Agent-Reasoning are comparing it to the libraries listed below.
- This is a repository for organizing papers, code, and other resources related to Visual Reinforcement Learning. (☆412, updated this week)
- Machine Mental Imagery: Empower Multimodal Reasoning with Latent Visual Tokens (arXiv 2025) (☆240, updated 6 months ago)
- Pixel-Level Reasoning Model trained with RL [NeurIPS 2025] (☆273, updated 3 months ago)
- [ICLR'26] Official PyTorch implementation of "Time Is a Feature: Exploiting Temporal Dynamics in Diffusion Language Models". (☆59, updated this week)
- [ICLR 2026] Official repo of the paper "Reconstruction Alignment Improves Unified Multimodal Models". Unlocking the Massive Zero-shot Potenti… (☆361, updated this week)
- [NeurIPS 2025] Official repo of Omni-R1: Reinforcement Learning for Omnimodal Reasoning via Two-System Collaboration (☆113, updated 2 months ago)
- Holistic Evaluation of Multimodal LLMs on Spatial Intelligence (☆79, updated this week)
- PyTorch implementation of NEPA (☆308, updated 2 weeks ago)
- [ICML 2025] The code and data of the paper "Towards World Simulator: Crafting Physical Commonsense-Based Benchmark for Video Generation" (☆149, updated last year)
- A collection of recent papers on reasoning in video generation models. (☆95, updated last month)
- A paper list for spatial reasoning (☆638, updated 3 weeks ago)
- ☆117, updated 6 months ago
- LongVT: Incentivizing "Thinking with Long Videos" via Native Tool Calling (☆187, updated 2 weeks ago)
- [ICCV 2025] Code release for "Harmonizing Visual Representations for Unified Multimodal Understanding and Generation" (☆186, updated 8 months ago)
- Official repo for UAE (☆164, updated last month)
- MetaSpatial leverages reinforcement learning to enhance 3D spatial reasoning in vision-language models (VLMs), enabling more structured, … (☆203, updated 9 months ago)
- Code for "MetaMorph: Multimodal Understanding and Generation via Instruction Tuning" (☆234, updated 2 weeks ago)
- Official code for MotionBench (CVPR 2025) (☆63, updated 11 months ago)
- TokLIP: Marry Visual Tokens to CLIP for Multimodal Comprehension and Generation (☆236, updated 5 months ago)
- [ICLR 2026] Uni-CoT: Towards Unified Chain-of-Thought Reasoning Across Text and Vision (☆207, updated 2 weeks ago)
- Thinking with Videos from Open-Source Priors. We reproduce chain-of-frames visual reasoning by fine-tuning open-source video models. Give… (☆207, updated 3 months ago)
- https://huggingface.co/datasets/multimodal-reasoning-lab/Zebra-CoT (☆117, updated last week)
- MMSI-Video-Bench: A Holistic Benchmark for Video-Based Spatial Intelligence (☆54, updated last month)
- Official repository of "GoT: Unleashing Reasoning Capability of Multimodal Large Language Model for Visual Generation and Editing" (☆307, updated 4 months ago)
- Cambrian-S: Towards Spatial Supersensing in Video (☆488, updated last month)
- [NeurIPS 2025] VideoREPA: Learning Physics for Video Generation through Relational Alignment with Foundation Models (☆157, updated last month)
- We introduce BabyVision, a benchmark revealing the infancy of AI vision. (☆173, updated 3 weeks ago)
- Official release of "Spatial-SSRL: Enhancing Spatial Understanding via Self-Supervised Reinforcement Learning" (☆109, updated last month)
- [ICCV 2025 Oral] Official implementation of "Learning Streaming Video Representation via Multitask Training". (☆80, updated last month)
- IMG: Calibrating Diffusion Models via Implicit Multimodal Guidance (ICCV 2025) (☆30, updated 4 months ago)