mcleish7 / arithmetic
Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024)
☆190 · Updated last year
Alternatives and similar repositories for arithmetic
Users interested in arithmetic are comparing it to the libraries listed below.
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆141 · Updated 2 weeks ago
- Understand and test language model architectures on synthetic tasks. ☆219 · Updated last month
- ☆134 · Updated 3 months ago
- Code for NeurIPS'24 paper 'Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization' ☆220 · Updated 7 months ago
- A MAD laboratory to improve AI architecture designs 🧪 ☆123 · Updated 6 months ago
- Repository for the paper Stream of Search: Learning to Search in Language ☆149 · Updated 5 months ago
- Minimal (400 LOC) implementation, Maximum (multi-node, FSDP) GPT training ☆129 · Updated last year
- nanoGPT-like codebase for LLM training ☆99 · Updated last month
- Functional Benchmarks and the Reasoning Gap ☆88 · Updated 9 months ago
- A curated reading list of research in Adaptive Computation, Inference-Time Computation & Mixture of Experts (MoE). ☆148 · Updated 6 months ago
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆237 · Updated last month
- ☆191 · Updated this week
- ☆66 · Updated last year
- Token Omission Via Attention ☆128 · Updated 8 months ago
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆77 · Updated 7 months ago
- Language models scale reliably with over-training and on downstream tasks ☆97 · Updated last year
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward experts ☆222 · Updated last year
- EvaByte: Efficient Byte-level Language Models at Scale ☆103 · Updated 2 months ago
- Extract full next-token probabilities via language model APIs ☆247 · Updated last year
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆173 · Updated 5 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆127 · Updated 7 months ago
- $100K or 100 Days: Trade-offs when Pre-Training with Academic Resources ☆140 · Updated last month
- RuLES: a benchmark for evaluating rule-following in language models ☆227 · Updated 4 months ago
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆203 · Updated 6 months ago
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆192 · Updated this week
- Normalized Transformer (nGPT) ☆184 · Updated 7 months ago
- ☆183 · Updated last year
- ☆117 · Updated 4 months ago
- Open source interpretability artefacts for R1. ☆153 · Updated 2 months ago
- Implementation of 🥥 Coconut, Chain of Continuous Thought, in Pytorch ☆177 · Updated 3 weeks ago