mcleish7 / arithmetic
Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024)
☆193 · Updated last year
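For context, the paper's headline technique is an "Abacus"-style positional embedding that tags each digit token with its position inside its own number, so digits of equal significance line up across operands. The sketch below is a minimal, hypothetical illustration of that idea; the class name, the digit-token convention, and the omission of the paper's random training offset are assumptions for illustration, not the repo's actual API:

```python
import torch
import torch.nn as nn

class AbacusStyleEmbedding(nn.Module):
    """Hypothetical sketch: give every digit token an embedding indexed
    by its position within its own number, resetting at non-digit tokens,
    so digits of the same significance share a positional code."""

    def __init__(self, max_digits: int, dim: int, digit_token_ids: set):
        super().__init__()
        self.emb = nn.Embedding(max_digits + 1, dim)  # index 0 = non-digit token
        self.digit_token_ids = digit_token_ids        # token ids for digits 0-9

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq_len). Returns positional embeddings meant
        # to be added to the usual token embeddings.
        positions = torch.zeros_like(token_ids)
        for b in range(token_ids.shape[0]):
            run = 0
            for t in range(token_ids.shape[1]):
                if token_ids[b, t].item() in self.digit_token_ids:
                    run += 1   # count digits within the current number
                else:
                    run = 0    # reset between numbers
                positions[b, t] = min(run, self.emb.num_embeddings - 1)
        return self.emb(positions)
```

As described in the paper, numbers are written least-significant digit first during training, so this index tracks digit significance directly; the paper also samples a random starting offset during training to encourage length generalization, which this sketch omits.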
Alternatives and similar repositories for arithmetic
Users interested in arithmetic are comparing it to the repositories listed below.
- Understand and test language model architectures on synthetic tasks. ☆237 · Updated last month
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆171 · Updated 4 months ago
- ☆143 · Updated 2 months ago
- Minimal (400 LOC) implementation, maximum (multi-node, FSDP) GPT training ☆132 · Updated last year
- A MAD laboratory to improve AI architecture designs 🧪 ☆132 · Updated 10 months ago
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆241 · Updated 5 months ago
- Repository for the paper "Stream of Search: Learning to Search in Language" ☆151 · Updated 9 months ago
- nanoGPT-like codebase for LLM training ☆110 · Updated this week
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆173 · Updated 9 months ago
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆222 · Updated this week
- ☆103 · Updated 3 months ago
- ☆204 · Updated 3 weeks ago
- Code for the NeurIPS'24 paper "Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization" ☆232 · Updated 3 months ago
- ☆185 · Updated last year
- Implementation of the Llama architecture with RLHF + Q-learning ☆167 · Updated 9 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆130 · Updated 11 months ago
- EvaByte: Efficient Byte-level Language Models at Scale ☆110 · Updated 6 months ago
- $100K or 100 Days: Trade-offs when Pre-Training with Academic Resources ☆147 · Updated last month
- ☆75 · Updated last year
- ☆124 · Updated 8 months ago
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆83 · Updated 11 months ago
- Token Omission Via Attention ☆127 · Updated last year
- ☆87 · Updated last year
- ☆81 · Updated last year
- ☆56 · Updated last year
- A curated reading list of research in Adaptive Computation, Inference-Time Computation & Mixture of Experts (MoE). ☆158 · Updated 10 months ago
- Open source interpretability artefacts for R1. ☆163 · Updated 6 months ago
- Replicating O1 inference-time scaling laws ☆90 · Updated 11 months ago
- ☆197 · Updated 6 months ago
- Extract full next-token probabilities via language model APIs ☆247 · Updated last year