PKU-Alignment / align-anything
Align Anything: Training All-modality Models with Feedback
☆4,620 · Updated last month
Alternatives and similar repositories for align-anything
Users who are interested in align-anything are comparing it to the libraries listed below.
- One-for-All Multimodal Evaluation Toolkit Across Text, Image, Video, and Audio Tasks ☆3,566 · Updated this week
- Monkey (LMM): Image Resolution and Text Label Are Important Things for Large Multi-modal Models (CVPR 2024 Highlight) ☆1,941 · Updated 2 months ago
- Minimal-cost training of a 0.5B R1-Zero model ☆803 · Updated 8 months ago
- Skywork-R1V is an advanced multimodal AI model series developed by Skywork AI, specializing in vision-language reasoning. ☆3,143 · Updated last month
- Train your Agent model via our easy and efficient framework ☆1,687 · Updated last month
- Uni-MoE: Lychee's Large Multimodal Model Family. ☆1,070 · Updated 3 weeks ago
- [EMNLP 2024] Build multimodal language agents for fast prototyping and production ☆2,622 · Updated 10 months ago
- Adds Sequence Parallelism to LLaMA-Factory ☆600 · Updated 3 months ago
- An MBTI Exploration of Large Language Models ☆522 · Updated last year
- [Up-to-date] Large Language Model Agent: A Survey on Methodology, Applications and Challenges ☆2,361 · Updated 2 months ago
- MM-EUREKA: Exploring the Frontiers of Multimodal Reasoning with Rule-based Reinforcement Learning ☆766 · Updated 4 months ago
- Ola: Pushing the Frontiers of Omni-Modal Language Model ☆384 · Updated 7 months ago
- [COLM’25] DeepRetrieval: 🔥 Training Search Agent by RLVR with Retrieval Outcome ☆693 · Updated 3 months ago
- [ICLR & NeurIPS 2025] Repository for the Show-o series: One Single Transformer to Unify Multimodal Understanding and Generation ☆1,853 · Updated last week
- A collection of multimodal reasoning papers, code, datasets, benchmarks and resources ☆553 · Updated last month
- [NeurIPS 2024] An official implementation of "ShareGPT4Video: Improving Video Understanding and Generation with Better Captions" ☆1,084 · Updated last year
- Personal Project: MPP-Qwen14B & MPP-Qwen-Next (Multimodal Pipeline Parallel based on Qwen-LM). Supports [video/image/multi-image] {sft/conv… ☆569 · Updated 10 months ago
- [NIPS'25 Spotlight] Mulberry, an o1-like Reasoning and Reflection MLLM Implemented via Collective MCTS ☆1,235 · Updated this week
- ☆332 · Updated 4 months ago
- Towards Open-source GPT-4o with Vision, Speech and Duplex Capabilities ☆1,856 · Updated last year
- A fork to add multimodal model training to open-r1 ☆1,438 · Updated 11 months ago
- Deep Research Agent CognitiveKernel-Pro from Tencent AI Lab. Paper: https://arxiv.org/pdf/2508.00414 ☆485 · Updated 3 months ago
- EasyR1: An Efficient, Scalable, Multi-Modality RL Training Framework based on veRL ☆4,444 · Updated 2 weeks ago
- An official implementation of DanceGRPO: Unleashing GRPO on Visual Generation ☆1,466 · Updated 3 months ago
- ✨✨[NeurIPS 2025] VITA-Audio: Fast Interleaved Cross-Modal Token Generation for Efficient Large Speech-Language Model ☆669 · Updated 7 months ago
- Recipes to train reward models for RLHF. ☆1,499 · Updated 8 months ago
- [CVPR 2024] Aligning and Prompting Everything All at Once for Universal Visual Perception ☆605 · Updated last year
- ☆1,069 · Updated this week
- [CVPR 2025] The First Investigation of CoT Reasoning (RL, TTS, Reflection) in Image Generation ☆850 · Updated 7 months ago
- ✨✨R1-Reward: Training Multimodal Reward Model Through Stable Reinforcement Learning ☆277 · Updated 8 months ago