arcprize / hierarchical-reasoning-model-analysis
☆94 · Updated last month
Alternatives and similar repositories for hierarchical-reasoning-model-analysis
Users interested in hierarchical-reasoning-model-analysis are comparing it to the repositories listed below.
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆160 · Updated 2 months ago
- ☆104 · Updated 11 months ago
- Code for NeurIPS'24 paper 'Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization' ☆229 · Updated last month
- Normalized Transformer (nGPT) ☆188 · Updated 9 months ago
- ☆186 · Updated last month
- EvaByte: Efficient Byte-level Language Models at Scale ☆109 · Updated 4 months ago
- ☆36 · Updated 6 months ago
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆172 · Updated 8 months ago
- A MAD laboratory to improve AI architecture designs 🧪 ☆129 · Updated 8 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆129 · Updated 9 months ago
- Open source interpretability artefacts for R1. ☆158 · Updated 4 months ago
- ☆102 · Updated last month
- ☆122 · Updated 6 months ago
- Repository for the paper Stream of Search: Learning to Search in Language ☆150 · Updated 7 months ago
- [ICLR 2025] Code for the paper "Beyond Autoregression: Discrete Diffusion for Complex Reasoning and Planning" ☆77 · Updated 7 months ago
- Official repo for Learning to Reason for Long-Form Story Generation ☆69 · Updated 4 months ago
- Understand and test language model architectures on synthetic tasks. ☆225 · Updated 2 months ago
- ☆85 · Updated last year
- Public repository for "The Surprising Effectiveness of Test-Time Training for Abstract Reasoning" ☆328 · Updated 9 months ago
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ☆192 · Updated last year
- 📄 Small Batch Size Training for Language Models ☆60 · Updated 2 weeks ago
- PyTorch library for Active Fine-Tuning ☆91 · Updated last week
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆82 · Updated 10 months ago
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆182 · Updated 6 months ago
- Mixture of A Million Experts ☆47 · Updated last year
- Simple repository for training small reasoning models ☆40 · Updated 7 months ago
- Bootstrapping ARC ☆143 · Updated 9 months ago
- ☆98 · Updated 4 months ago
- Official repo of paper LM2 ☆42 · Updated 7 months ago
- ☆53 · Updated last year