PKU-Alignment / align-anything
Align Anything: Training All-modality Model with Feedback
☆4,533 · Updated this week
Alternatives and similar repositories for align-anything
Users interested in align-anything are comparing it to the repositories listed below.
- One-for-All Multimodal Evaluation Toolkit Across Text, Image, Video, and Audio Tasks ☆3,021 · Updated this week
- Monkey (LMM): Image Resolution and Text Label Are Important Things for Large Multi-modal Models (CVPR 2024 Highlight) ☆1,910 · Updated last month
- Minimal-cost training of a 0.5B R1-Zero model ☆765 · Updated 3 months ago
- Build multimodal language agents for fast prototyping and production ☆2,546 · Updated 5 months ago
- Train your Agent model via our easy and efficient framework ☆1,363 · Updated this week
- Code for "Uni-MoE: Scaling Unified Multimodal Models with Mixture of Experts" ☆755 · Updated 3 weeks ago
- Skywork-R1V is an advanced multimodal AI model series developed by Skywork AI (Kunlun Inc.), specializing in vision-language reasoning. ☆2,949 · Updated 3 weeks ago
- Adds sequence parallelism to LLaMA-Factory ☆551 · Updated last week
- MM-EUREKA: Exploring the Frontiers of Multimodal Reasoning with Rule-based Reinforcement Learning ☆729 · Updated last month
- Ola: Pushing the Frontiers of Omni-Modal Language Model ☆361 · Updated 2 months ago
- An MBTI Exploration of Large Language Models ☆498 · Updated last year
- [COLM'25] DeepRetrieval - 🔥 Training Search Agent with Retrieval Outcomes via Reinforcement Learning ☆621 · Updated 2 months ago
- [Up-to-date] Large Language Model Agent: A Survey on Methodology, Applications and Challenges ☆1,526 · Updated 2 weeks ago
- "Vimo: Chat with Your Videos" ☆1,067 · Updated this week
- [NeurIPS 2024] An official implementation of "ShareGPT4Video: Improving Video Understanding and Generation with Better Captions" ☆1,074 · Updated 10 months ago
- Towards Open-source GPT-4o with Vision, Speech and Duplex Capabilities ☆1,791 · Updated 7 months ago
- Mirix is a multi-agent personal assistant designed to track on-screen activities and answer user questions intelligently. By capturing re… ☆1,264 · Updated this week
- Mulberry, an o1-like Reasoning and Reflection MLLM Implemented via Collective MCTS ☆1,209 · Updated 5 months ago
- ✨✨VITA-Audio: Fast Interleaved Cross-Modal Token Generation for Efficient Large Speech-Language Model ☆633 · Updated 3 months ago
- EasyR1: An Efficient, Scalable, Multi-Modality RL Training Framework based on veRL ☆3,461 · Updated this week
- A fork to add multimodal model training to open-r1 ☆1,378 · Updated 6 months ago
- R1-onevision, a visual language model capable of deep CoT reasoning. ☆561 · Updated 4 months ago
- ✨✨R1-Reward: Training Multimodal Reward Model Through Stable Reinforcement Learning ☆250 · Updated 3 months ago
- [ICLR 2025] Repository for Show-o series, One Single Transformer to Unify Multimodal Understanding and Generation. ☆1,652 · Updated this week
- [CVPR 2025] The First Investigation of CoT Reasoning (RL, TTS, Reflection) in Image Generation ☆790 · Updated 3 months ago
- Real-time and accurate open-vocabulary end-to-end object detection ☆1,335 · Updated 8 months ago
- Personal Project: MPP-Qwen14B & MPP-Qwen-Next (Multimodal Pipeline Parallel based on Qwen-LM). Support [video/image/multi-image] {sft/conv… ☆467 · Updated 5 months ago
- Extend OpenRLHF to support LMM RL training for reproduction of DeepSeek-R1 on multimodal tasks. ☆815 · Updated 3 months ago
- The first paper to explore how to effectively use R1-like RL for MLLMs; it introduces Vision-R1, a reasoning MLLM that leverages … ☆681 · Updated last month
- Collection of AWESOME vision-language models for vision tasks ☆2,900 · Updated 3 months ago