lee-ny / teaching_arithmetic
☆84 · Updated 2 years ago
Alternatives and similar repositories for teaching_arithmetic
Users interested in teaching_arithmetic are comparing it to the repositories listed below.
- ☆185 · Updated last year
- ☆39 · Updated last year
- [NeurIPS 2023] Learning Transformer Programs · ☆162 · Updated last year
- ☆53 · Updated last year
- ☆85 · Updated 11 months ago
- ☆107 · Updated last year
- The accompanying code for "Transformer Feed-Forward Layers Are Key-Value Memories". Mor Geva, Roei Schuster, Jonathan Berant, and Omer Le… · ☆99 · Updated 4 years ago
- ☆103 · Updated 2 years ago
- Language models scale reliably with over-training and on downstream tasks · ☆100 · Updated last year
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision · ☆125 · Updated last year
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers", NeurIPS 2023 · ☆138 · Updated last year
- Can Language Models Solve Olympiad Programming? · ☆124 · Updated 11 months ago
- Function Vectors in Large Language Models (ICLR 2024) · ☆189 · Updated 8 months ago
- Code for the paper "VinePPO: Unlocking RL Potential For LLM Reasoning Through Refined Credit Assignment" · ☆183 · Updated 7 months ago
- [ICLR 2025] Code for the paper "Beyond Autoregression: Discrete Diffusion for Complex Reasoning and Planning" · ☆87 · Updated 10 months ago
- ☆241 · Updated last year
- A library for efficient patching and automatic circuit discovery. · ☆84 · Updated 5 months ago
- Simple and efficient pytorch-native transformer training and inference (batched) · ☆79 · Updated last year
- ☆45 · Updated 2 years ago
- ☆112 · Updated 10 months ago
- ☆134 · Updated last year
- Official repository for our paper, Transformers Learn Higher-Order Optimization Methods for In-Context Learning: A Study with Linear Mode… · ☆20 · Updated last year
- Understand and test language model architectures on synthetic tasks. · ☆247 · Updated 3 months ago
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al (NeurIPS 2024) · ☆198 · Updated last year
- [NeurIPS'24 Spotlight] Observational Scaling Laws · ☆59 · Updated last year
- ☆119 · Updated last year
- LLM-Merging: Building LLMs Efficiently through Merging · ☆208 · Updated last year
- Replicating O1 inference-time scaling laws · ☆91 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized GPTs. · ☆181 · Updated 6 months ago
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" · ☆123 · Updated last year