gladia-research-group / explanatory-learning
This is the official repository for "Explanatory Learning: Beyond Empiricism in Neural Networks".
☆14 · Updated 3 years ago
Alternatives and similar repositories for explanatory-learning
Users interested in explanatory-learning are comparing it to the repositories listed below.
- A Python package for analyzing and transforming neural latent spaces. ☆49 · Updated 2 months ago
- Mechanistic Interpretability for Transformer Models ☆51 · Updated 3 years ago
- How to Turn Your Knowledge Graph Embeddings into Generative Models ☆53 · Updated last year
- Neural Networks and the Chomsky Hierarchy ☆209 · Updated last year
- Code release for the "Broken Neural Scaling Laws" (BNSL) paper ☆59 · Updated last year
- The Energy Transformer block, in JAX ☆59 · Updated last year
- Erasing concepts from neural representations with provable guarantees ☆233 · Updated 7 months ago
- The Happy Faces Benchmark ☆15 · Updated 2 years ago
- 🧠 Starter templates for doing interpretability research ☆73 · Updated 2 years ago
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper ☆128 · Updated 3 years ago
- ☆27 · Updated 2 years ago
- Probabilistic programming with large language models ☆135 · Updated last month
- ☆107 · Updated 7 months ago
- Language-annotated Abstraction and Reasoning Corpus ☆93 · Updated 2 years ago
- See the issue board for the current status of active and prospective projects! ☆65 · Updated 3 years ago
- PyTorch Explain: Interpretable Deep Learning in Python. ☆161 · Updated last year
- Unofficial re-implementation of "Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets" ☆79 · Updated 3 years ago
- Attribution-based Parameter Decomposition ☆30 · Updated 3 months ago
- ☆68 · Updated 2 years ago
- Stochastic Automatic Differentiation library for PyTorch ☆206 · Updated last year
- Library containing implementations of machine learning components in hyperbolic space ☆141 · Updated last year
- PyTorch and NNsight implementation of AtP* (Kramar et al. 2024, DeepMind) ☆19 · Updated 8 months ago
- Sparse and discrete interpretability tool for neural networks ☆63 · Updated last year
- Sparse Autoencoder Training Library ☆54 · Updated 4 months ago
- ☆312 · Updated 6 months ago
- ☆54 · Updated 2 years ago
- Brain-Inspired Modular Training (BIMT), a method for making neural networks more modular and interpretable ☆173 · Updated 2 years ago
- ☆68 · Updated 2 weeks ago
- A centralized place for deep thinking code and experiments ☆86 · Updated 2 years ago
- Modalities, a PyTorch-native framework for distributed and reproducible foundation model training ☆84 · Updated last week