alphadl / R1Links
Enhanced GRPO with more verifiable rewards and real-time evaluators
☆35 · Updated 2 weeks ago
Alternatives and similar repositories for R1
Users interested in R1 are comparing it to the repositories listed below.
- A Survey on the Honesty of Large Language Models ☆57 · Updated 6 months ago
- Paper list and datasets for the paper: A Survey on Data Selection for LLM Instruction Tuning ☆44 · Updated last year
- MoCLE (First MLLM with MoE for instruction customization and generalization!) (https://arxiv.org/abs/2312.12379) ☆40 · Updated last year
- Code & Data for our paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆65 · Updated last year
- One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning ☆39 · Updated last year
- The source code of "Merging Experts into One: Improving Computational Efficiency of Mixture of Experts" (EMNLP 2023) ☆39 · Updated last year
- [ACL 2024 (Oral)] A Prospector of Long-Dependency Data for Large Language Models ☆55 · Updated 11 months ago
- Repository for "Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning" ☆164 · Updated last year
- ☆34 · Updated 8 months ago
- [ICML 2024] Selecting High-Quality Data for Training Language Models ☆176 · Updated last year
- ☆46 · Updated 2 months ago
- mPLUG-HalOwl: Multimodal Hallucination Evaluation and Mitigating ☆95 · Updated last year
- [EMNLP 2024] mDPO: Conditional Preference Optimization for Multimodal Large Language Models ☆75 · Updated 7 months ago
- [NeurIPS 2024] Official code of $\beta$-DPO: Direct Preference Optimization with Dynamic $\beta$ ☆45 · Updated 8 months ago
- Evaluating the Ripple Effects of Knowledge Editing in Language Models ☆55 · Updated last year
- my commonly-used tools ☆56 · Updated 5 months ago
- ☆24 · Updated 2 years ago
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆112 · Updated 9 months ago
- BeHonest: Benchmarking Honesty in Large Language Models ☆34 · Updated 10 months ago
- [ACL-25] We introduce ScaleQuest, a scalable, novel and cost-effective data synthesis method to unleash the reasoning capability of LLMs ☆63 · Updated 7 months ago
- Code for ACL 2024 accepted paper titled "SAPT: A Shared Attention Framework for Parameter-Efficient Continual Learning of Large Language …" ☆35 · Updated 5 months ago
- Model merging is a highly efficient approach for long-to-short reasoning ☆65 · Updated 3 weeks ago
- ☆101 · Updated 8 months ago
- This is the repo for our paper "Mr-Ben: A Comprehensive Meta-Reasoning Benchmark for Large Language Models" ☆50 · Updated 7 months ago
- ☆100 · Updated last year
- Reproducing R1 for Code with Reliable Rewards ☆10 · Updated 2 months ago
- A method of ensemble learning for heterogeneous large language models ☆58 · Updated 10 months ago
- A unified suite for generating elite reasoning problems and training high-performance LLMs, including pioneering attention-free architect… ☆58 · Updated 3 weeks ago
- The codebase for our EMNLP24 paper: Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Mo… ☆79 · Updated 4 months ago
- [NeurIPS 2023] GitHub repository for "Composing Parameter-Efficient Modules with Arithmetic Operations" ☆61 · Updated last year