EleutherAI / features-across-time
Understanding how features learned by neural networks evolve throughout training
☆31 · Updated 2 weeks ago
Related projects
Alternatives and complementary repositories for features-across-time
- Open source replication of Anthropic's Crosscoders for Model Diffing · ☆13 · Updated last week
- Sparse and discrete interpretability tool for neural networks · ☆53 · Updated 8 months ago
- Experiments for efforts to train a new and improved T5 · ☆76 · Updated 6 months ago
- Minimum Description Length probing for neural network representations · ☆16 · Updated last week
- Evaluation of neuro-symbolic engines · ☆33 · Updated 3 months ago
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… · ☆34 · Updated last year
- LLM training in simple, raw C/CUDA · ☆12 · Updated last month
- gzip Predicts Data-dependent Scaling Laws · ☆32 · Updated 5 months ago
- Latent Diffusion Language Models · ☆67 · Updated last year
- Embedding Recycling for Language Models · ☆38 · Updated last year
- Engineering the state of RNN language models (Mamba, RWKV, etc.) · ☆32 · Updated 5 months ago
- Code release for the "Broken Neural Scaling Laws" (BNSL) paper · ☆57 · Updated last year
- See https://github.com/cuda-mode/triton-index/ instead! · ☆11 · Updated 6 months ago
- Code and files for the paper "Are Emergent Abilities in Large Language Models just In-Context Learning?" · ☆34 · Updated 7 months ago
- Efficient Dictionary Learning with Switch Sparse Autoencoders (SAEs) · ☆13 · Updated 3 weeks ago
- Code for reproducing the paper "Not All Language Model Features Are Linear" · ☆60 · Updated last month
- Google Research · ☆45 · Updated 2 years ago
- Official implementation of "BERTs are Generative In-Context Learners" · ☆19 · Updated 4 months ago
- Demonstration that fine-tuning a RoPE model on sequences longer than those used in pre-training extends its context limit · ☆63 · Updated last year