PKU-Alignment / align-anything
Align Anything: Training All-modality Models with Feedback
☆4,570 · Updated 2 months ago
Alternatives and similar repositories for align-anything
Users interested in align-anything are comparing it to the libraries listed below.
- One-for-All Multimodal Evaluation Toolkit Across Text, Image, Video, and Audio Tasks ☆3,233 · Updated last week
- Monkey (LMM): Image Resolution and Text Label Are Important Things for Large Multi-modal Models (CVPR 2024 Highlight) ☆1,925 · Updated last week
- Minimal-cost training of a 0.5B R1-Zero model ☆779 · Updated 5 months ago
- Skywork-R1V is an advanced multimodal AI model series developed by Skywork AI (Kunlun Inc.), specializing in vision-language reasoning. ☆2,943 · Updated 2 months ago
- Train your Agent model via our easy and efficient framework ☆1,587 · Updated this week
- Uni-MoE: Lychee's Large Multimodal Model Family. ☆793 · Updated last week
- Build multimodal language agents for fast prototyping and production ☆2,568 · Updated 7 months ago
- Adds Sequence Parallelism to LLaMA-Factory ☆582 · Updated 2 weeks ago
- An MBTI Exploration of Large Language Models ☆505 · Updated last year
- [COLM’25] DeepRetrieval: 🔥 The First Search Agent Trained by On-Policy Reinforcement Learning ☆661 · Updated 2 weeks ago
- "VideoRAG: Chat with Your Videos"☆1,234Updated last week
- [Up-to-date] Large Language Model Agent: A Survey on Methodology, Applications and Challenges☆1,969Updated 2 weeks ago
- Ola: Pushing the Frontiers of Omni-Modal Language Model☆372Updated 4 months ago
- MM-EUREKA: Exploring the Frontiers of Multimodal Reasoning with Rule-based Reinforcement Learning☆753Updated last month
- [NeurIPS 2024] An official implementation of "ShareGPT4Video: Improving Video Understanding and Generation with Better Captions"☆1,077Updated last year
- [NeurIPS 2025 Spotlight] Mulberry, an o1-like Reasoning and Reflection MLLM Implemented via Collective MCTS ☆1,221 · Updated last month
- [ICLR & NeurIPS 2025] Repository for Show-o series, One Single Transformer to Unify Multimodal Understanding and Generation. ☆1,751 · Updated last week
- Personal Project: MPP-Qwen14B & MPP-Qwen-Next (Multimodal Pipeline Parallel based on Qwen-LM). Support [video/image/multi-image] {sft/conv… ☆477 · Updated 7 months ago
- A fork to add multimodal model training to open-r1 ☆1,412 · Updated 8 months ago
- ✨✨VITA-Audio: Fast Interleaved Cross-Modal Token Generation for Efficient Large Speech-Language Model ☆642 · Updated 5 months ago
- [CVPR 2025] The First Investigation of CoT Reasoning (RL, TTS, Reflection) in Image Generation ☆824 · Updated 5 months ago
- 🚀 EvoAgentX: Building a Self-Evolving Ecosystem of AI Agents ☆2,084 · Updated last week
- A tutorial based on MetaGPT to quickly help you understand the concepts of agents and multi-agent systems and get started with coding development. 基… ☆1,308 · Updated last year
- Towards Open-source GPT-4o with Vision, Speech and Duplex Capabilities. ☆1,828 · Updated 9 months ago
- ✨✨R1-Reward: Training Multimodal Reward Model Through Stable Reinforcement Learning ☆265 · Updated 5 months ago
- GLM-4.5V and GLM-4.1V-Thinking: Towards Versatile Multimodal Reasoning with Scalable Reinforcement Learning ☆1,713 · Updated 2 weeks ago
- [CVPR 2024] Aligning and Prompting Everything All at Once for Universal Visual Perception ☆594 · Updated last year
- R1-onevision, a visual language model capable of deep CoT reasoning. ☆569 · Updated 6 months ago
- ☆320 · Updated 2 months ago
- EasyR1: An Efficient, Scalable, Multi-Modality RL Training Framework based on veRL ☆3,895 · Updated this week