OpenMOSS / Lorsa — Links
☆25 · Updated 2 months ago
Alternatives and similar repositories for Lorsa
Users interested in Lorsa are comparing it to the repositories listed below.
- The official implementation of Regularized Policy Gradient (RPG) (https://arxiv.org/abs/2505.17508) · ☆35 · Updated 2 weeks ago
- ☆19 · Updated 5 months ago
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling · ☆33 · Updated 3 weeks ago
- ☆85 · Updated last year
- Repository for the Q-Filters method (https://arxiv.org/pdf/2503.02812) · ☆34 · Updated 5 months ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment · ☆60 · Updated 11 months ago
- ☆21 · Updated last month
- The official implementation for the paper "Agentic-R1: Distilled Dual-Strategy Reasoning" · ☆92 · Updated last month
- [ACL 2025] Agentic Reward Modeling: Integrating Human Preferences with Verifiable Correctness Signals for Reliable Reward Systems · ☆101 · Updated 2 months ago
- Resa: Transparent Reasoning Models via SAEs · ☆41 · Updated last week
- Official repo of the paper LM2 · ☆41 · Updated 6 months ago
- [ACL 2024] Do Large Language Models Latently Perform Multi-Hop Reasoning? · ☆73 · Updated 5 months ago
- ☆35 · Updated 3 months ago
- A repository for research on medium-sized language models · ☆78 · Updated last year
- A Recipe for Building LLM Reasoners to Solve Complex Instructions · ☆20 · Updated 3 weeks ago
- Code implementation, evaluations, documentation, links, and resources for the Min P paper · ☆39 · Updated last week
- Esoteric Language Models · ☆94 · Updated 3 weeks ago
- ☆120 · Updated 6 months ago
- ☆24 · Updated 11 months ago
- The original Shared Recurrent Memory Transformer implementation · ☆30 · Updated last month
- AgentSynth: Scalable Task Generation for Generalist Computer-Use Agents · ☆29 · Updated 2 months ago
- From GaLore to WeLore: How Low-Rank Weights Non-uniformly Emerge from Low-Rank Gradients. Ajay Jaiswal, Lu Yin, Zhenyu Zhang, Shiwei Liu, … · ☆47 · Updated 4 months ago
- ☆54 · Updated 9 months ago
- Official repo for Learning to Reason for Long-Form Story Generation · ☆68 · Updated 4 months ago
- ☆66 · Updated 4 months ago
- The official repo for "Unleashing the Reasoning Potential of Pre-trained LLMs by Critique Fine-Tuning on One Problem" · ☆28 · Updated 2 months ago
- Improving AI Systems with Self-Defense Mechanisms · ☆19 · Updated 5 months ago
- [ACL 2025] How Do LLMs Acquire New Knowledge? A Knowledge Circuits Perspective on Continual Pre-Training · ☆40 · Updated last month
- EvaByte: Efficient Byte-level Language Models at Scale · ☆107 · Updated 4 months ago
- Accompanying material for the sleep-time compute paper · ☆104 · Updated 3 months ago