EleutherAI / features-across-time
Understanding how features learned by neural networks evolve throughout training
☆41 · Updated last year
Alternatives and similar repositories for features-across-time
Users interested in features-across-time are comparing it to the repositories listed below.
- Sparse and discrete interpretability tool for neural networks · ☆64 · Updated last year
- ☆68 · Updated last year
- Experiments for efforts to train a new and improved t5 · ☆76 · Updated last year
- Minimum Description Length probing for neural network representations · ☆20 · Updated last year
- Code and files for the paper "Are Emergent Abilities in Large Language Models just In-Context Learning?" · ☆33 · Updated last year
- ☆53 · Updated 2 years ago
- Code Release for "Broken Neural Scaling Laws" (BNSL) paper · ☆59 · Updated 2 years ago
- Google Research · ☆46 · Updated 3 years ago
- Evaluation of neuro-symbolic engines · ☆41 · Updated last year
- Implementation of Influence Function approximations for differently sized ML models, using PyTorch · ☆16 · Updated 2 years ago
- Official implementation of "BERTs are Generative In-Context Learners" · ☆32 · Updated 10 months ago
- ☆18 · Updated last year
- Synthetic data generation and benchmark implementation for "Episodic Memories Generation and Evaluation Benchmark for Large Language Mode… · ☆63 · Updated 4 months ago
- ☆112 · Updated 11 months ago
- Engineering the state of RNN language models (Mamba, RWKV, etc.) · ☆32 · Updated last year
- Gemstones: A Model Suite for Multi-Faceted Scaling Laws (NeurIPS 2025) · ☆32 · Updated 4 months ago
- ☆57 · Updated 2 years ago
- ☆36 · Updated 3 years ago
- Codebase for Context-aware Meta-learned Loss Scaling (CaMeLS). https://arxiv.org/abs/2305.15076 · ☆25 · Updated 2 years ago
- Latent Diffusion Language Models · ☆70 · Updated 2 years ago
- ☆59 · Updated 2 months ago
- Universal Neurons in GPT2 Language Models · ☆30 · Updated last year
- This repository includes code to reproduce the tables in "Loss Landscapes are All You Need: Neural Network Generalization Can Be Explaine… · ☆40 · Updated 2 years ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment · ☆61 · Updated last year
- LLM training in simple, raw C/CUDA · ☆15 · Updated last year
- Sparse Autoencoder Training Library · ☆56 · Updated 9 months ago
- Bayesian scaling laws for in-context learning · ☆15 · Updated 10 months ago
- A centralized place for deep thinking code and experiments · ☆90 · Updated 2 years ago
- The repository contains code for Adaptive Data Optimization · ☆32 · Updated last year
- ☆44 · Updated last year