openreasoner / openr
OpenR: An Open Source Framework for Advanced Reasoning with Large Language Models
☆1,493 · Updated last week
Alternatives and similar repositories for openr:
Users interested in openr are comparing it to the repositories listed below.
- O1 Replication Journey ☆1,909 · Updated 2 weeks ago
- Large Reasoning Models ☆801 · Updated last month
- ☆1,150 · Updated 2 months ago
- ☆867 · Updated this week
- ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search (NeurIPS 2024) ☆548 · Updated last week
- An Easy-to-use, Scalable and High-performance RLHF Framework (70B+ PPO Full Tuning & Iterative DPO & LoRA & RingAttention & RFT) ☆4,109 · Updated this week
- An Open Large Reasoning Model for Real-World Solutions ☆1,410 · Updated 2 months ago
- ☆447 · Updated 3 weeks ago
- veRL: Volcano Engine Reinforcement Learning for LLM ☆1,135 · Updated this week
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆803 · Updated 2 months ago
- Scalable RL solution for advanced reasoning of language models ☆974 · Updated this week
- AN O1 REPLICATION FOR CODING ☆311 · Updated last month
- 📰 Must-read papers and blogs on LLM based Long Context Modeling 🔥 ☆1,185 · Updated last week
- A library for advanced large language model reasoning ☆1,684 · Updated last week
- A series of technical reports on Slow Thinking with LLM ☆348 · Updated this week
- DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models ☆1,126 · Updated last year
- ☆997 · Updated last month
- ☆905 · Updated 7 months ago
- Code for Quiet-STaR ☆706 · Updated 5 months ago
- Official repository for ICLR 2025 paper "Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing". Your efficient an… ☆584 · Updated this week
- OLMoE: Open Mixture-of-Experts Language Models ☆536 · Updated last month
- Recipes to train reward models for RLHF ☆1,119 · Updated last week
- LongBench v2 and LongBench (ACL 2024) ☆762 · Updated 2 weeks ago
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware ☆694 · Updated 4 months ago
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) ☆913 · Updated last month
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs) ☆791 · Updated this week
- Open-source evaluation toolkit for large vision-language models (LVLMs), supporting 160+ VLMs and 50+ benchmarks ☆1,735 · Updated this week
- ☆2,341 · Updated this week
- Recipes to scale inference-time compute of open models ☆971 · Updated last week