mcleish7 / arithmetic
Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024)
☆193 · Updated last year
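For context, the paper's central idea is a digit-aware positional scheme (Abacus-style embeddings): each digit token receives a learned embedding indexed by its significance within the number, with a random offset applied during training to encourage length generalization. The sketch below is a minimal illustration of that idea, not the repository's actual implementation; the class name, defaults, and input convention are assumptions for the example.

```python
# Hypothetical sketch of an Abacus-style digit-position embedding.
# Assumptions: digits are tokenized individually and `digit_positions` gives each
# digit's place within its number (1 = least significant digit, 0 = non-digit token).
import torch
import torch.nn as nn

class AbacusEmbedding(nn.Module):
    def __init__(self, max_digits: int = 100, dim: int = 64, max_offset: int = 50):
        super().__init__()
        # one learned vector per digit significance, plus headroom for the training offset
        self.emb = nn.Embedding(max_digits + max_offset + 1, dim)
        self.max_offset = max_offset

    def forward(self, digit_positions: torch.Tensor, training: bool = True) -> torch.Tensor:
        # A random shared offset at training time means the model sees the same
        # relative digit alignment at many absolute indices, which is what lets it
        # extrapolate to longer numbers than those seen in training.
        offset = torch.randint(0, self.max_offset + 1, (1,)).item() if training else 0
        shifted = torch.where(digit_positions > 0, digit_positions + offset, digit_positions)
        return self.emb(shifted)
```

For example, the number 123 written least-significant-digit first ("3 2 1") would get positions 1, 2, 3, so digits of equal significance in the two operands of an addition share an embedding index.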
Alternatives and similar repositories for arithmetic
Users interested in arithmetic are comparing it to the libraries listed below
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆164 · Updated 3 months ago
- Understand and test language model architectures on synthetic tasks. ☆229 · Updated last week
- ☆142 · Updated 3 weeks ago
- Code for NeurIPS'24 paper 'Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization' ☆230 · Updated 2 months ago
- A MAD laboratory to improve AI architecture designs 🧪 ☆129 · Updated 9 months ago
- Minimal (400 LOC) implementation of maximal (multi-node, FSDP) GPT training ☆132 · Updated last year
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆80 · Updated 10 months ago
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆240 · Updated 3 months ago
- nanoGPT-like codebase for LLM training ☆107 · Updated 4 months ago
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆213 · Updated last week
- Functional Benchmarks and the Reasoning Gap ☆89 · Updated last year
- PyTorch library for Active Fine-Tuning ☆92 · Updated last week
- Repository for the paper "Stream of Search: Learning to Search in Language" ☆151 · Updated 8 months ago
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆172 · Updated 8 months ago
- Language models scale reliably with over-training and on downstream tasks ☆100 · Updated last year
- RuLES: a benchmark for evaluating rule-following in language models ☆234 · Updated 7 months ago
- ☆72 · Updated last year
- Sparse Autoencoder Training Library ☆54 · Updated 5 months ago
- Extract full next-token probabilities via language model APIs ☆248 · Updated last year
- Token Omission Via Attention ☆128 · Updated 11 months ago
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆221 · Updated 9 months ago
- ☆196 · Updated last month
- A curated reading list of research in Adaptive Computation, Inference-Time Computation & Mixture of Experts (MoE). ☆155 · Updated 9 months ago
- $100K or 100 Days: Trade-offs when Pre-Training with Academic Resources ☆146 · Updated this week
- Universal Neurons in GPT2 Language Models ☆30 · Updated last year
- Implementation of the Llama architecture with RLHF + Q-learning ☆168 · Updated 8 months ago
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward exp… ☆224 · Updated 2 weeks ago
- Implementation of CALM from the paper "LLM Augmented LLMs: Expanding Capabilities through Composition", out of Google DeepMind ☆178 · Updated last year
- ☆55 · Updated last year
- Open source interpretability artefacts for R1. ☆160 · Updated 5 months ago