lee-ny / teaching_arithmetic
☆80 · Updated last year
Alternatives and similar repositories for teaching_arithmetic
Users interested in teaching_arithmetic are comparing it to the repositories listed below.
- ☆177 · Updated last year
- ☆52 · Updated 11 months ago
- Language models scale reliably with over-training and on downstream tasks ☆97 · Updated last year
- ☆85 · Updated last year
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers", NeurIPS 2023 ☆135 · Updated last year
- ☆94 · Updated last year
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" ☆105 · Updated last year
- ☆97 · Updated 10 months ago
- ☆33 · Updated last year
- ☆45 · Updated last year
- Unofficial re-implementation of "Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets" ☆78 · Updated 2 years ago
- Code for the paper "VinePPO: Unlocking RL Potential For LLM Reasoning Through Refined Credit Assignment"☆155Updated 6 months ago
- Can Language Models Solve Olympiad Programming?☆116Updated 4 months ago
- nanoGPT-like codebase for LLM training☆94Updated last month
- Replicating O1 inference-time scaling laws☆85Updated 5 months ago
- A library for efficient patching and automatic circuit discovery.☆64Updated 3 weeks ago
- Code and Data Repo for the CoNLL Paper -- Future Lens: Anticipating Subsequent Tokens from a Single Hidden State☆18Updated last year
- ☆82Updated 9 months ago
- [NeurIPS 2023] Learning Transformer Programs☆161Updated 11 months ago
- A framework for few-shot evaluation of autoregressive language models.☆24Updated last year
- [NeurIPS'24 Spotlight] Observational Scaling Laws☆54Updated 7 months ago
- ☆83Updated 3 months ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision☆120Updated 8 months ago
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs☆54Updated last year
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al., NeurIPS 2024 ☆189 · Updated 11 months ago
- ☆46 · Updated this week
- Understand and test language model architectures on synthetic tasks. ☆195 · Updated 2 months ago
- Simple and efficient pytorch-native transformer training and inference (batched) ☆75 · Updated last year
- A MAD laboratory to improve AI architecture designs 🧪 ☆115 · Updated 5 months ago
- A fusion of a linear layer and a cross entropy loss, written for pytorch in triton. ☆67 · Updated 9 months ago