OpenMOSS / Lorsa
☆27 · Updated 3 months ago
Alternatives and similar repositories for Lorsa
Users who are interested in Lorsa are comparing it to the repositories listed below.
- ☆22 · Updated 2 months ago
- Official implementation of Regularized Policy Gradient (RPG) (https://arxiv.org/abs/2505.17508) ☆40 · Updated this week
- [EMNLP 2025] The official implementation for the paper "Agentic-R1: Distilled Dual-Strategy Reasoning" ☆100 · Updated last month
- Official repo of the paper LM2 ☆44 · Updated 7 months ago
- ☆19 · Updated 7 months ago
- [ACL 2024] Do Large Language Models Latently Perform Multi-Hop Reasoning? ☆77 · Updated 6 months ago
- Resa: Transparent Reasoning Models via SAEs ☆41 · Updated last week
- ☆23 · Updated last year
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling ☆34 · Updated last month
- [ACL 2025] How Do LLMs Acquire New Knowledge? A Knowledge Circuits Perspective on Continual Pre-Training ☆43 · Updated 2 months ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆59 · Updated last year
- ☆40 · Updated 4 months ago
- A Recipe for Building LLM Reasoners to Solve Complex Instructions ☆24 · Updated 2 months ago
- The original Shared Recurrent Memory Transformer implementation ☆31 · Updated 2 months ago
- ☆34 · Updated last month
- ☆85 · Updated last year
- [ACL 2025] Agentic Reward Modeling: Integrating Human Preferences with Verifiable Correctness Signals for Reliable Reward Systems ☆106 · Updated 3 months ago
- ☆35 · Updated 4 months ago
- Code for the paper "Self-Training Elicits Concise Reasoning in Large Language Models" ☆42 · Updated 5 months ago
- Repository for the Q-Filters method (https://arxiv.org/pdf/2503.02812) ☆35 · Updated 6 months ago
- MegaScience: Pushing the Frontiers of Post-Training Datasets for Science Reasoning ☆101 · Updated 2 months ago
- Official repo for Learning to Reason for Long-Form Story Generation ☆72 · Updated 5 months ago
- A repository for research on medium-sized language models ☆78 · Updated last year
- Esoteric Language Models ☆99 · Updated 2 months ago
- ☆77 · Updated 2 weeks ago
- ☆48 · Updated 7 months ago
- ☆67 · Updated 6 months ago
- ☆54 · Updated 10 months ago
- [EMNLP 2025] The official repo for "Unleashing the Reasoning Potential of Pre-trained LLMs by Critique Fine-Tuning on One Problem" ☆32 · Updated last month
- From GaLore to WeLore: How Low-Rank Weights Non-uniformly Emerge from Low-Rank Gradients (Ajay Jaiswal, Lu Yin, Zhenyu Zhang, Shiwei Liu, …) ☆48 · Updated 5 months ago