JackCai1206 / arithmetic-self-improve
☆38 · Updated 11 months ago
Alternatives and similar repositories for arithmetic-self-improve
Users interested in arithmetic-self-improve are comparing it to the libraries listed below.
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆187 · Updated 3 weeks ago
- Understand and test language model architectures on synthetic tasks. ☆252 · Updated last month
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ☆198 · Updated last year
- ☆53 · Updated last year
- A MAD laboratory to improve AI architecture designs 🧪 ☆138 · Updated last year
- Open source replication of Anthropic's Crosscoders for Model Diffing ☆63 · Updated last year
- ☆74 · Updated last year
- nanoGPT-like codebase for LLM training ☆113 · Updated 3 months ago
- ☆153 · Updated 5 months ago
- ☆84 · Updated 2 years ago
- A library for efficient patching and automatic circuit discovery. ☆88 · Updated last month
- Sparse Autoencoder Training Library ☆56 · Updated 9 months ago
- ☆91 · Updated last year
- Universal Neurons in GPT2 Language Models ☆30 · Updated last year
- ☆29 · Updated last year
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆83 · Updated last year
- Code for NeurIPS'24 paper "Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization" ☆235 · Updated 6 months ago
- Replicating O1 inference-time scaling laws ☆93 · Updated last year
- Open source interpretability artefacts for R1. ☆170 · Updated 9 months ago
- Repository for the paper "Stream of Search: Learning to Search in Language" ☆153 · Updated last year
- PyTorch library for Active Fine-Tuning ☆96 · Updated 4 months ago
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆241 · Updated 2 weeks ago
- ☆123 · Updated 11 months ago
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆89 · Updated last year
- Language models scale reliably with over-training and on downstream tasks ☆99 · Updated last year
- Official repo for Learning to Reason for Long-Form Story Generation ☆74 · Updated 9 months ago
- [ICLR 2025] Monet: Mixture of Monosemantic Experts for Transformers ☆75 · Updated 7 months ago
- Memory Mosaics are networks of associative memories working in concert to achieve a prediction task. ☆57 · Updated last year
- ☆185 · Updated 2 years ago
- ☆58 · Updated last year