irregular-rhomboid / EAI-Math-Reading-Group
Resources from the EleutherAI Math Reading Group
☆54 · Updated 9 months ago
Alternatives and similar repositories for EAI-Math-Reading-Group
Users interested in EAI-Math-Reading-Group are comparing it to the libraries listed below.
- A puzzle to learn about prompting (☆135, updated 2 years ago)
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper (☆131, updated 3 years ago)
- (☆285, updated last year)
- Erasing concepts from neural representations with provable guarantees (☆240, updated 10 months ago)
- Unofficial re-implementation of "Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets" (☆81, updated 3 years ago)
- (☆167, updated 2 years ago)
- Neural Networks and the Chomsky Hierarchy (☆212, updated last year)
- 🧠 Starter templates for doing interpretability research (☆74, updated 2 years ago)
- An interactive exploration of Transformer programming (☆270, updated 2 years ago)
- Extract full next-token probabilities via language model APIs (☆248, updated last year)
- Large-scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* (☆87, updated 2 years ago)
- git extension for {collaborative, communal, continual} model development (☆217, updated last year)
- JAX implementation of the Llama 2 model (☆216, updated last year)
- 🧱 Modula software package (☆316, updated 4 months ago)
- Understand and test language model architectures on synthetic tasks (☆246, updated 2 months ago)
- The simplest, fastest repository for training/finetuning medium-sized GPTs (☆180, updated 5 months ago)
- (☆283, updated last year)
- LoRA for arbitrary JAX models and functions (☆143, updated last year)
- Code associated with papers on superposition in ML interpretability (☆35, updated 3 years ago)
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) (☆198, updated last year)
- (☆460, updated last year)
- Puzzles for exploring transformers (☆380, updated 2 years ago)
- Functional local implementations of main model-parallelism approaches (☆95, updated 2 years ago)
- (☆91, updated last year)
- Train very large language models in JAX (☆210, updated 2 years ago)
- Keeping language models honest by directly eliciting knowledge encoded in their activations (☆216, updated this week)
- we got you bro (☆36, updated last year)
- (☆144, updated 3 months ago)
- (☆132, updated 2 years ago)
- A set of Python scripts that makes your experience on TPU better (☆54, updated 3 months ago)