katiekang1998 / reasoning_generalization
☆33 · Updated last year
Alternatives and similar repositories for reasoning_generalization
Users interested in reasoning_generalization are comparing it to the repositories listed below.
- Code for "Reasoning to Learn from Latent Thoughts" · ☆124 · Updated 10 months ago
- ☆91 · Updated last year
- Reinforcing General Reasoning without Verifiers · ☆96 · Updated 7 months ago
- Official PyTorch implementation and models for the paper "Diffusion Beats Autoregressive in Data-Constrained Settings". We find diffusion mod… · ☆120 · Updated last month
- Gemstones: A Model Suite for Multi-Faceted Scaling Laws (NeurIPS 2025) · ☆32 · Updated 4 months ago
- ☆74 · Updated last year
- ☆19 · Updated 6 months ago
- [ICLR 2026] RPG: KL-Regularized Policy Gradient (https://arxiv.org/abs/2505.17508) · ☆65 · Updated 2 weeks ago
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling · ☆42 · Updated last month
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs · ☆94 · Updated last year
- ☆51 · Updated 2 years ago
- Exploration of automated dataset selection approaches at large scales. · ☆52 · Updated 11 months ago
- ☆20 · Updated 3 months ago
- Repository containing code for Adaptive Data Optimization · ☆32 · Updated last year
- Code for the NeurIPS 2024 Spotlight "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" · ☆89 · Updated last year
- [ICLR 2025] Cheating Automatic LLM Benchmarks: Null Models Achieve High Win Rates (Oral) · ☆84 · Updated last year
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" · ☆48 · Updated 2 years ago
- ☆99 · Updated last year
- ☆52 · Updated last year
- Universal Neurons in GPT2 Language Models · ☆30 · Updated last year
- FROM $f(x)$ AND $g(x)$ TO $f(g(x))$: LLMs Learn New Skills in RL by Composing Old Ones · ☆60 · Updated 2 weeks ago
- Language models scale reliably with over-training and on downstream tasks · ☆99 · Updated last year
- ☆19 · Updated 6 months ago
- Replicating O1 inference-time scaling laws · ☆93 · Updated last year
- Code for reproducing the paper "Low Rank Adapting Models for Sparse Autoencoder Features" · ☆17 · Updated 10 months ago
- ☆24 · Updated last year
- Code for reproducing the paper "Not All Language Model Features Are Linear" · ☆83 · Updated last year
- PaCE: Parsimonious Concept Engineering for Large Language Models (NeurIPS 2024) · ☆42 · Updated 3 weeks ago
- ☆108 · Updated last year
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs · ☆63 · Updated last year