google-deepmind / neural_networks_chomsky_hierarchy
Neural Networks and the Chomsky Hierarchy
☆209 · Updated last year
Alternatives and similar repositories for neural_networks_chomsky_hierarchy
Users who are interested in neural_networks_chomsky_hierarchy are comparing it to the libraries listed below.
- ☆256 · Updated 4 months ago
- An interpreter for RASP as described in the ICML 2021 paper "Thinking Like Transformers" ☆321 · Updated last year
- Language-annotated Abstraction and Reasoning Corpus ☆93 · Updated 2 years ago
- Train very large language models in Jax. ☆209 · Updated last year
- Code Release for "Broken Neural Scaling Laws" (BNSL) paper ☆59 · Updated last year
- ☆363 · Updated last year
- Python library which enables complex compositions of language models such as scratchpads, chain of thought, tool use, selection-inference… ☆211 · Updated 4 months ago
- [NeurIPS 2023] Learning Transformer Programs ☆162 · Updated last year
- Unofficial re-implementation of "Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets" ☆79 · Updated 3 years ago
- ☆547 · Updated last year
- Transformer Grammars: Augmenting Transformer Language Models with Syntactic Inductive Biases at Scale, TACL (2022) ☆130 · Updated 3 months ago
- Emergent world representations: Exploring a sequence model trained on a synthetic task ☆191 · Updated 2 years ago
- Code accompanying our paper "Feature Learning in Infinite-Width Neural Networks" (https://arxiv.org/abs/2011.14522) ☆63 · Updated 4 years ago
- LoRA for arbitrary JAX models and functions ☆142 · Updated last year
- Inference code for LLaMA models in JAX ☆120 · Updated last year
- A domain-specific probabilistic programming language for modeling and inference with language models ☆136 · Updated 5 months ago
- ☆166 · Updated 2 years ago
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper ☆129 · Updated 3 years ago
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways - in Jax (Equinox framework) ☆189 · Updated 3 years ago
- An interactive exploration of Transformer programming. ☆269 · Updated last year
- Mechanistic Interpretability for Transformer Models ☆53 · Updated 3 years ago
- Learning Universal Predictors ☆79 · Updated last year
- JAX Synergistic Memory Inspector ☆180 · Updated last year
- ☆69 · Updated 2 years ago
- Implementation of https://srush.github.io/annotated-s4 ☆501 · Updated 3 months ago
- Materials for ConceptARC paper ☆103 · Updated 11 months ago
- Resources from the EleutherAI Math Reading Group ☆54 · Updated 7 months ago
- Brain-Inspired Modular Training (BIMT), a method for making neural networks more modular and interpretable. ☆173 · Updated 2 years ago
- Named Tensors for Legible Deep Learning in JAX ☆207 · Updated this week
- Code for 1st place solution to Kaggle's Abstraction and Reasoning Challenge ☆160 · Updated 2 months ago