mcleish7 / arithmetic
Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024)
☆186 · Updated 9 months ago
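The paper's key idea is Abacus embeddings: each digit token is embedded by its position within its own number, so same-significance digits across the operands and the answer share an embedding, and a random offset is added at training time to help length generalization. Here is a minimal PyTorch sketch of that idea; the class name, the mask-based run indexing, and the offset handling are illustrative assumptions, not the repository's actual code.

```python
import torch
import torch.nn as nn

# Illustrative sketch of Abacus-style digit-position embeddings (an
# assumption about the mechanism, not the repo's real implementation).
# Each digit gets an embedding indexed by its 1-based position inside
# its number, so same-significance digits share an embedding.
class AbacusEmbedding(nn.Module):
    def __init__(self, num_positions: int, dim: int, max_random_offset: int = 0):
        super().__init__()
        # num_positions must cover the longest number plus max_random_offset.
        self.emb = nn.Embedding(num_positions, dim)
        self.max_random_offset = max_random_offset

    def forward(self, digit_mask: torch.Tensor) -> torch.Tensor:
        # digit_mask: (batch, seq) bool, True on digit tokens.
        counts = digit_mask.long().cumsum(dim=1)        # digits seen so far
        prev = torch.zeros_like(digit_mask)
        prev[:, 1:] = digit_mask[:, :-1]
        starts = digit_mask & ~prev                     # first digit of each number
        # Digits seen *before* the current number, carried forward along the run.
        base = torch.where(starts, counts - 1, torch.zeros_like(counts))
        base = torch.cummax(base, dim=1).values
        pos = torch.where(digit_mask, counts - base, torch.zeros_like(counts))
        if self.training and self.max_random_offset > 0:
            # Shared random offset per sequence, so the model sees larger
            # position indices than the training numbers alone would require.
            offset = torch.randint(0, self.max_random_offset + 1,
                                   (pos.size(0), 1), device=pos.device)
            pos = torch.where(digit_mask, pos + offset, pos)
        return self.emb(pos)  # (batch, seq, dim); non-digit tokens share index 0

# Example: tokens "1 2 + 3 4 5" give positions [1, 2, 0, 1, 2, 3].
```

In practice the result would be added to (or combined with) the token embeddings before the transformer stack.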
Alternatives and similar repositories for arithmetic:
Users interested in arithmetic are comparing it to the repositories listed below.
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆100 · Updated 4 months ago
- Language models scale reliably with over-training and on downstream tasks ☆96 · Updated 11 months ago
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆161 · Updated this week
- Understand and test language model architectures on synthetic tasks. ☆185 · Updated 2 weeks ago
- A MAD laboratory to improve AI architecture designs 🧪 ☆108 · Updated 3 months ago
- Minimal (400 LOC) implementation, maximum (multi-node, FSDP) GPT training ☆122 · Updated 11 months ago
- ☆124 · Updated this week
- Extract full next-token probabilities via language model APIs ☆233 · Updated last year
- ☆165 · Updated last year
- ☆156 · Updated 2 weeks ago
- Repository for the paper "Stream of Search: Learning to Search in Language" ☆142 · Updated last month
- Code for NeurIPS'24 paper "Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization" ☆186 · Updated 3 months ago
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆223 · Updated last month
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆189 · Updated 3 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆123 · Updated 3 months ago
- ☆78 · Updated last year
- A curated reading list of research in Adaptive Computation, Inference-Time Computation & Mixture of Experts (MoE). ☆139 · Updated 2 months ago
- ☆65 · Updated last month
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆168 · Updated 2 months ago
- Replicating o1 inference-time scaling laws ☆83 · Updated 3 months ago
- ☆111 · Updated last month
- Experiments toward training a new and improved T5 ☆77 · Updated 11 months ago
- ☆172 · Updated last year
- Erasing concepts from neural representations with provable guarantees ☆226 · Updated last month
- ☆73 · Updated 7 months ago
- ☆61 · Updated 4 months ago
- ☆76 · Updated 8 months ago
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆71 · Updated 4 months ago
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆121 · Updated 7 months ago (a minimal sketch follows below)
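The last entry's PEER block (Parameter Efficient Expert Retrieval) selects a handful of single-neuron experts out of up to a million via product-key retrieval: the query is split in half and each half is scored against n sub-keys, so only 2n scores are computed for an n×n grid of experts. The following is a minimal single-head PyTorch sketch under stated assumptions (GELU experts, no query batch-norm, one retrieval head); names and details are illustrative, not the linked repository's code.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative single-head PEER sketch; the paper and repo use multiple heads.
class PEERLayer(nn.Module):
    def __init__(self, dim: int, n: int = 128, topk: int = 16):
        super().__init__()
        self.n, self.topk = n, topk
        self.query = nn.Linear(dim, dim)
        h1, h2 = dim // 2, dim - dim // 2
        self.sub_keys1 = nn.Parameter(torch.randn(n, h1) / math.sqrt(h1))
        self.sub_keys2 = nn.Parameter(torch.randn(n, h2) / math.sqrt(h2))
        # Each of the n*n experts is one neuron: a down vector and an up vector.
        self.w_down = nn.Embedding(n * n, dim)
        self.w_up = nn.Embedding(n * n, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, dim)
        q = self.query(x)
        q1, q2 = q.split([self.sub_keys1.size(1), self.sub_keys2.size(1)], dim=-1)
        s1 = q1 @ self.sub_keys1.t()                     # (batch, n) half-scores
        s2 = q2 @ self.sub_keys2.t()                     # (batch, n)
        v1, i1 = s1.topk(self.topk, dim=-1)              # per-half top-k
        v2, i2 = s2.topk(self.topk, dim=-1)
        cand = v1.unsqueeze(-1) + v2.unsqueeze(-2)       # (batch, k, k) sums
        scores, flat = cand.flatten(1).topk(self.topk, dim=-1)
        r = torch.div(flat, self.topk, rounding_mode="floor")
        c = flat % self.topk
        idx = i1.gather(1, r) * self.n + i2.gather(1, c)  # (batch, k) expert ids
        g = F.softmax(scores, dim=-1)                    # router weights
        down, up = self.w_down(idx), self.w_up(idx)      # (batch, k, dim) each
        h = F.gelu(torch.einsum("bd,bkd->bk", x, down))  # scalar per expert
        return torch.einsum("bk,bk,bkd->bd", g, h, up)   # weighted sum of ups
```

With product keys this two-stage top-k is exact (up to ties): any pair in the global top-k must have each half-score in its half's top-k, which is why retrieval over n² experts only costs 2n sub-key scores.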