OSU-NLP-Group / GrokkedTransformer
Code for the NeurIPS'24 paper 'Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization'
☆228 · Updated last month
Alternatives and similar repositories for GrokkedTransformer
Users interested in GrokkedTransformer are comparing it to the repositories listed below.
- Repository for the paper Stream of Search: Learning to Search in Language ☆150 · Updated 6 months ago
- ☆187 · Updated 4 months ago
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆173 · Updated 7 months ago
- A Collection of Competitive Text-Based Games for Language Model Evaluation and Reinforcement Learning ☆245 · Updated last week
- ☆120 · Updated 6 months ago
- A simple unified framework for evaluating LLMs ☆240 · Updated 4 months ago
- Public repository for "The Surprising Effectiveness of Test-Time Training for Abstract Reasoning" ☆324 · Updated 9 months ago
- Open source interpretability artefacts for R1. ☆157 · Updated 4 months ago
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆177 · Updated 5 months ago
- ☆67 · Updated last year
- Benchmarking LLMs with Challenging Tasks from Real Users ☆237 · Updated 9 months ago
- ☆100 · Updated last year
- ☆135 · Updated 9 months ago
- Official repository for "Scaling Retrieval-Based Language Models with a Trillion-Token Datastore" ☆213 · Updated 3 weeks ago
- Implementation of 🥥 Coconut, Chain of Continuous Thought, in PyTorch ☆179 · Updated 2 months ago
- Replicating o1 inference-time scaling laws ☆89 · Updated 8 months ago
- ☆101 · Updated 10 months ago
- ☆213 · Updated 5 months ago
- EvaByte: Efficient Byte-level Language Models at Scale ☆107 · Updated 4 months ago
- ☆126 · Updated 10 months ago
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples ☆104 · Updated 3 weeks ago
- Reproducible, flexible LLM evaluations ☆237 · Updated last month
- Can Language Models Solve Olympiad Programming? ☆118 · Updated 7 months ago
- Implementation of the Quiet-STaR paper (https://arxiv.org/pdf/2403.09629.pdf) ☆54 · Updated last year
- The official evaluation suite and dynamic data release for MixEval. ☆244 · Updated 9 months ago
- Functional Benchmarks and the Reasoning Gap ☆88 · Updated 10 months ago
- BABILong is a benchmark for LLM evaluation using the needle-in-a-haystack approach. ☆210 · Updated 3 months ago
- Work done by the Oxen.ai Community to reproduce the Self-Rewarding Language Model paper from Meta AI. ☆130 · Updated 9 months ago
- Code for the paper "Learning to Reason without External Rewards" ☆345 · Updated last month
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ☆191 · Updated last year