EleutherAI / features-across-time
Understanding how features learned by neural networks evolve throughout training
☆36 · Updated 9 months ago
Alternatives and similar repositories for features-across-time
Users interested in features-across-time are comparing it to the repositories listed below
- Sparse and discrete interpretability tool for neural networks ☆63 · Updated last year
- ☆69 · Updated 11 months ago
- Google Research ☆46 · Updated 2 years ago
- Minimum Description Length probing for neural network representations ☆18 · Updated 6 months ago
- Evaluation of neuro-symbolic engines ☆38 · Updated last year
- A centralized place for deep thinking code and experiments ☆85 · Updated last year
- Experiments for efforts to train a new and improved t5 ☆76 · Updated last year
- ☆104 · Updated 5 months ago
- Engineering the state of RNN language models (Mamba, RWKV, etc.) ☆32 · Updated last year
- PyTorch library for Active Fine-Tuning ☆87 · Updated 5 months ago
- Supercharge huggingface transformers with model parallelism. ☆77 · Updated 2 weeks ago
- Official Repository of Pretraining Without Attention (BiGS), BiGS is the first model to achieve BERT-level transfer learning on the GLUE … ☆114 · Updated last year
- Codes and files for the paper Are Emergent Abilities in Large Language Models just In-Context Learning ☆33 · Updated 6 months ago
- ☆54 · Updated 2 years ago
- Codebase for Context-aware Meta-learned Loss Scaling (CaMeLS). https://arxiv.org/abs/2305.15076. ☆25 · Updated last year
- Official implementation of "BERTs are Generative In-Context Learners" ☆31 · Updated 4 months ago
- ☆27 · Updated 5 months ago
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆44 · Updated last year
- Synthetic data generation and benchmark implementation for "Episodic Memories Generation and Evaluation Benchmark for Large Language Mode… ☆49 · Updated 3 months ago
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs ☆91 · Updated 8 months ago
- Code Release for "Broken Neural Scaling Laws" (BNSL) paper ☆59 · Updated last year
- gzip Predicts Data-dependent Scaling Laws ☆35 · Updated last year
- One Initialization to Rule them All: Fine-tuning via Explained Variance Adaptation ☆41 · Updated 9 months ago
- ☆35 · Updated 2 years ago
- Aioli: A unified optimization framework for language model data mixing ☆27 · Updated 6 months ago
- ☆51 · Updated last year
- ☆23 · Updated 7 months ago
- ☆81 · Updated last year
- Embedding Recycling for Language models ☆39 · Updated 2 years ago
- ☆34 · Updated 6 months ago