JackCai1206 / arithmetic-self-improve
⭐ 37 · Updated 9 months ago
Alternatives and similar repositories for arithmetic-self-improve
Users interested in arithmetic-self-improve are comparing it to the libraries listed below.
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ⭐ 174 · Updated 5 months ago
- A MAD laboratory to improve AI architecture designs 🧪 ⭐ 135 · Updated 11 months ago
- ⭐ 53 · Updated last year
- ⭐ 144 · Updated 3 months ago
- Understand and test language model architectures on synthetic tasks. ⭐ 243 · Updated 2 months ago
- Code for NeurIPS'24 paper 'Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization' ⭐ 234 · Updated 4 months ago
- nanoGPT-like codebase for LLM training ⭐ 112 · Updated last month
- Open source replication of Anthropic's Crosscoders for Model Diffing ⭐ 63 · Updated last year
- Applying SAEs for fine-grained control ⭐ 24 · Updated 11 months ago
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ⭐ 195 · Updated last year
- ⭐ 29 · Updated last year
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ⭐ 86 · Updated last year
- ⭐ 58 · Updated last year
- Open source interpretability artefacts for R1. ⭐ 164 · Updated 7 months ago
- ⭐ 75 · Updated last year
- Memory Mosaics are networks of associative memories working in concert to achieve a prediction task. ⭐ 55 · Updated 10 months ago
- Code for reproducing our paper "Not All Language Model Features Are Linear" ⭐ 84 · Updated last year
- Universal Neurons in GPT2 Language Models ⭐ 31 · Updated last year
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ⭐ 231 · Updated last week
- EvaByte: Efficient Byte-level Language Models at Scale ⭐ 111 · Updated 7 months ago
- Attribution-based Parameter Decomposition ⭐ 33 · Updated 6 months ago
- Sparse Autoencoder Training Library ⭐ 55 · Updated 7 months ago
- Language models scale reliably with over-training and on downstream tasks ⭐ 100 · Updated last year
- ⭐ 107 · Updated last week
- ⭐ 84 · Updated 2 years ago
- ⭐ 125 · Updated 9 months ago
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper ⭐ 130 · Updated 3 years ago
- ⭐ 89 · Updated last year
- Experiments for efforts to train a new and improved t5 ⭐ 76 · Updated last year
- [ICLR 2025] Monet: Mixture of Monosemantic Experts for Transformers ⭐ 74 · Updated 5 months ago