OPPO-Mente-Lab / DaMoLinks
The official implementation of the paper "DaMo: Data Mixing Optimizer in Fine-tuning Multimodal LLMs for Mobile Phone Agents"
☆28 · Updated 2 months ago
Alternatives and similar repositories for DaMo
Users interested in DaMo are comparing it to the repositories listed below.
- ☆14 · Updated last year
- Instruction-following benchmark for large reasoning models ☆44 · Updated 5 months ago
- From Accuracy to Robustness: A Study of Rule- and Model-based Verifiers in Mathematical Reasoning ☆24 · Updated 3 months ago
- RAG-RewardBench: Benchmarking Reward Models in Retrieval Augmented Generation for Preference Alignment ☆16 · Updated last year
- [ACL 2024 (Oral)] A Prospector of Long-Dependency Data for Large Language Models ☆58 · Updated last year
- [ACL 2024 Findings] Light-PEFT: Lightening Parameter-Efficient Fine-Tuning via Early Pruning ☆13 · Updated last year
- Extending context length of visual language models ☆12 · Updated last year
- [ACL 2024] Making Long-Context Language Models Better Multi-Hop Reasoners ☆18 · Updated last year
- Laser: Learn to Reason Efficiently with Adaptive Length-based Reward Shaping ☆62 · Updated 7 months ago
- The Good, The Bad, and The Greedy: Evaluation of LLMs Should Not Ignore Non-Determinism ☆30 · Updated last year
- Evaluation code for the paper "MileBench: Benchmarking MLLMs in Long Context" ☆35 · Updated last year
- Evaluating the faithfulness of long-context language models ☆30 · Updated last year
- Official repo for SvS: A Self-play with Variational Problem Synthesis strategy for RLVR training ☆50 · Updated 3 weeks ago
- [ICLR 2025] Bridging and Modeling Correlations in Pairwise Data for Direct Preference Optimization ☆12 · Updated 11 months ago
- ☆21 · Updated last year
- [EMNLP 2025] Verification Engineering for RL in Instruction Following ☆46 · Updated this week
- ☆31 · Updated 4 months ago
- ☆21 · Updated 8 months ago
- Repository for the paper "Mr-Ben: A Comprehensive Meta-Reasoning Benchmark for Large Language Models" ☆51 · Updated last year
- [ACL 2025] Official code repository for PRMBench: A Fine-grained and Challenging Benchmark for Process-Level Reward Models ☆85 · Updated 10 months ago
- [ICML 2025] M-STAR (Multimodal Self-Evolving TrAining for Reasoning): Diving into Self-Evolving Training for Multimodal Reasoning ☆69 · Updated 5 months ago
- 🍼 Official implementation of Dynamic Data Mixing Maximizes Instruction Tuning for Mixture-of-Experts ☆41 · Updated last year
- Source code for the paper "ARIA: Training Language Agents with Intention-Driven Reward Aggregation" ☆25 · Updated 5 months ago
- [arXiv:2505.02156] Adaptive Thinking via Mode Policy Optimization for Social Language Agents ☆46 · Updated 6 months ago
- [NAACL 2025] Source code for MMEvalPro, a more trustworthy and efficient benchmark for evaluating LMMs ☆24 · Updated last year
- [NAACL 2024] A Synthetic, Scalable and Systematic Evaluation Suite for Large Language Models ☆33 · Updated last year
- ☆58 · Updated last year
- ☆14 · Updated 11 months ago
- [EMNLP 2025] LightThinker: Thinking Step-by-Step Compression ☆127 · Updated 8 months ago
- A comprehensive benchmark for evaluating deep research agents on academic survey tasks ☆46 · Updated 4 months ago