google-deepmind / latent-multi-hop-reasoning
[ACL 2024] Do Large Language Models Latently Perform Multi-Hop Reasoning?
☆82 · Updated 7 months ago
Alternatives and similar repositories for latent-multi-hop-reasoning
Users interested in latent-multi-hop-reasoning are comparing it to the repositories listed below.
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆173 · Updated 10 months ago
- ☆135 · Updated 7 months ago
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆189 · Updated 8 months ago
- [EMNLP 2025] The official implementation for the paper "Agentic-R1: Distilled Dual-Strategy Reasoning" ☆101 · Updated 2 months ago
- ☆60 · Updated 4 months ago
- Training an LLM to use a calculator with multi-turn reinforcement learning, achieving a **62% absolute increase in evaluation accuracy**. ☆58 · Updated 6 months ago
- [ACL 2025] Agentic Reward Modeling: Integrating Human Preferences with Verifiable Correctness Signals for Reliable Reward Systems ☆109 · Updated 5 months ago
- ☆68 · Updated 5 months ago
- ☆124 · Updated 8 months ago
- ☆221 · Updated 8 months ago
- Accompanying material for the sleep-time compute paper ☆117 · Updated 6 months ago
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆58 · Updated 3 weeks ago
- Leveraging Base Language Models for Few-Shot Synthetic Data Generation ☆37 · Updated 3 weeks ago
- Systematic evaluation framework that automatically rates overthinking behavior in large language models. ☆94 · Updated 6 months ago
- ☆35 · Updated 6 months ago
- Train your own SOTA deductive reasoning model ☆108 · Updated 8 months ago
- Official repo for Learning to Reason for Long-Form Story Generation ☆72 · Updated 6 months ago
- Open source interpretability artefacts for R1. ☆163 · Updated 6 months ago
- Source code for the collaborative reasoner research project at Meta FAIR. ☆105 · Updated 6 months ago
- Official PyTorch implementation for Hogwild! Inference: Parallel LLM Generation with a Concurrent Attention Cache ☆129 · Updated 3 months ago
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models. ☆97 · Updated 3 months ago
- ☆29 · Updated last week
- Simple & Scalable Pretraining for Neural Architecture Research ☆299 · Updated 2 weeks ago
- ☆81 · Updated this week
- QAlign is a new test-time alignment approach that improves language model performance by using Markov chain Monte Carlo methods. ☆24 · Updated 2 weeks ago
- Storing long contexts in tiny caches with self-study ☆213 · Updated 3 weeks ago
- RL from zero pretrain, can it be done? Yes. ☆280 · Updated last month
- Training teachers with reinforcement learning that teach LLMs how to reason for test-time scaling. ☆348 · Updated 4 months ago
- Official implementation of Regularized Policy Gradient (RPG) (https://arxiv.org/abs/2505.17508) ☆54 · Updated 3 weeks ago
- EvaByte: Efficient Byte-level Language Models at Scale ☆110 · Updated 6 months ago