long-horizon-execution / measuring-execution
☆46 · Updated 2 months ago
Alternatives and similar repositories for measuring-execution
Users interested in measuring-execution are comparing it to the repositories listed below
- Official repo of paper LM2 · ☆46 · Updated 9 months ago
- ☆29 · Updated 2 weeks ago
- ☆317 · Updated 2 weeks ago
- Esoteric Language Models · ☆106 · Updated last month
- The official GitHub repo for "Diffusion Language Models are Super Data Learners" · ☆200 · Updated 2 weeks ago
- [Preprint] RLVE: Scaling Up Reinforcement Learning for Language Models with Adaptive Verifiable Environments · ☆134 · Updated last week
- [EMNLP 2025] The official implementation for the paper "Agentic-R1: Distilled Dual-Strategy Reasoning" · ☆100 · Updated 2 months ago
- Source code for the paper "Evolution Strategies at Scale: LLM Fine-Tuning Beyond Reinforcement Learning" · ☆255 · Updated last week
- [EMNLP'25 Industry] Repo for "Z1: Efficient Test-time Scaling with Code" · ☆66 · Updated 7 months ago
- The code repository for the CURLoRA research paper: stable LLM continual fine-tuning and catastrophic forgetting mitigation · ☆52 · Updated last year
- ☆130 · Updated last month
- ☆40 · Updated 5 months ago
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling · ☆40 · Updated last month
- Resa: Transparent Reasoning Models via SAEs · ☆44 · Updated 2 months ago
- ☆52 · Updated 4 months ago
- Official implementation of Regularized Policy Gradient (RPG) (https://arxiv.org/abs/2505.17508) · ☆54 · Updated last month
- ☆33 · Updated 10 months ago
- Mask-Enhanced Autoregressive Prediction: Pay Less Attention to Learn More · ☆33 · Updated 6 months ago
- [ACL 2024] Do Large Language Models Latently Perform Multi-Hop Reasoning? · ☆82 · Updated 8 months ago
- Code for the paper "Self-Training Elicits Concise Reasoning in Large Language Models" · ☆42 · Updated 7 months ago
- All information and news about the Falcon-H1 series · ☆93 · Updated last month
- Chain of Experts (CoE) enables communication between experts within Mixture-of-Experts (MoE) models · ☆223 · Updated 2 weeks ago
- The official repo for "Parallel-R1: Towards Parallel Thinking via Reinforcement Learning" · ☆233 · Updated last week
- [ACL 2025] How Do LLMs Acquire New Knowledge? A Knowledge Circuits Perspective on Continual Pre-Training · ☆44 · Updated 4 months ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment · ☆60 · Updated last year
- ☆21 · Updated 3 months ago
- ☆19 · Updated 8 months ago
- Mixture of Cognitive Reasoners: Modular Reasoning with Brain-Like Specialization · ☆35 · Updated last month
- [ACL 2025] A Generalizable and Purely Unsupervised Self-Training Framework · ☆70 · Updated 5 months ago
- SLED: Self Logits Evolution Decoding for Improving Factuality in Large Language Models (https://arxiv.org/pdf/2411.02433) · ☆110 · Updated 11 months ago