PKU-Alignment / align-anything
Align Anything: Training All-modality Models with Feedback
☆4,631 · Updated 2 months ago
Alternatives and similar repositories for align-anything
Users interested in align-anything are comparing it to the libraries listed below:
- One-for-All Multimodal Evaluation Toolkit Across Text, Image, Video, and Audio Tasks ☆3,608 · Updated this week
- Monkey (LMM): Image Resolution and Text Label Are Important Things for Large Multi-modal Models (CVPR 2024 Highlight) ☆1,947 · Updated last week
- Uni-MoE: Lychee's Large Multimodal Model Family ☆1,076 · Updated last month
- Minimal-cost training of a 0.5B R1-Zero model ☆806 · Updated 8 months ago
- Train your Agent model via our easy and efficient framework ☆1,697 · Updated 2 months ago
- [EMNLP 2024] Build multimodal language agents for fast prototyping and production ☆2,624 · Updated 10 months ago
- MM-EUREKA: Exploring the Frontiers of Multimodal Reasoning with Rule-based Reinforcement Learning ☆767 · Updated 4 months ago
- Adds Sequence Parallelism to LLaMA-Factory ☆603 · Updated 3 months ago
- Skywork-R1V is an advanced multimodal AI model series developed by Skywork AI, specializing in vision-language reasoning ☆3,149 · Updated last month
- An MBTI Exploration of Large Language Models ☆524 · Updated 2 years ago
- Ola: Pushing the Frontiers of Omni-Modal Language Model ☆386 · Updated 7 months ago
- [ICLR & NeurIPS 2025] Repository for the Show-o series: One Single Transformer to Unify Multimodal Understanding and Generation ☆1,865 · Updated 3 weeks ago
- [Up-to-date] Large Language Model Agent: A Survey on Methodology, Applications and Challenges ☆2,402 · Updated 2 months ago
- [NeurIPS 2024] An official implementation of "ShareGPT4Video: Improving Video Understanding and Generation with Better Captions" ☆1,084 · Updated last year
- [CVPR 2025] The First Investigation of CoT Reasoning (RL, TTS, Reflection) in Image Generation ☆856 · Updated 8 months ago
- Personal Project: MPP-Qwen14B & MPP-Qwen-Next (Multimodal Pipeline Parallel based on Qwen-LM). Supports [video/image/multi-image] {sft/conv… ☆587 · Updated 10 months ago
- A fork to add multimodal model training to open-r1 ☆1,443 · Updated 11 months ago
- A collection of multimodal reasoning papers, codes, datasets, benchmarks and resources ☆562 · Updated last month
- Towards Open-source GPT-4o with Vision, Speech and Duplex Capabilities ☆1,866 · Updated last year
- [COLM’25] DeepRetrieval: 🔥 Training a Search Agent via RLVR with Retrieval Outcomes ☆695 · Updated 3 months ago
- ✨✨ [NeurIPS 2025] VITA-1.5: Towards GPT-4o Level Real-Time Vision and Speech Interaction ☆2,487 · Updated 10 months ago
- ✨✨ [NeurIPS 2025] VITA-Audio: Fast Interleaved Cross-Modal Token Generation for Efficient Large Speech-Language Model ☆673 · Updated 8 months ago
- An official implementation of DanceGRPO: Unleashing GRPO on Visual Generation ☆1,499 · Updated 3 months ago
- [NeurIPS 2025 Spotlight] Mulberry, an o1-like Reasoning and Reflection MLLM Implemented via Collective MCTS ☆1,238 · Updated 3 weeks ago
- [CVPR 2024] Aligning and Prompting Everything All at Once for Universal Visual Perception ☆607 · Updated last year
- [ICLR 2026] This is the first paper to explore how to effectively use R1-like RL for MLLMs and introduce Vision-R1, a reasoning MLLM that… ☆756 · Updated last week
- Extends OpenRLHF to support LMM RL training, reproducing DeepSeek-R1 on multimodal tasks ☆839 · Updated 8 months ago
- R1-onevision, a visual language model capable of deep CoT reasoning ☆575 · Updated 9 months ago
- ✨✨ [ICLR 2026] R1-Reward: Training Multimodal Reward Model Through Stable Reinforcement Learning ☆281 · Updated 8 months ago