srush / raspy
An interactive exploration of Transformer programming.
☆269 · Updated last year
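To make "Transformer programming" concrete: raspy explores RASP, where programs are written with attention-like primitives. Below is a minimal sketch of two such primitives in plain Python, in the spirit of the "Thinking Like Transformers" paper. The names `select` and `aggregate` follow the paper's terminology; this is an illustration under those assumptions, not the actual raspy API.

```python
# RASP-style primitives, sketched in plain Python (not the raspy API).

def select(keys, queries, predicate):
    """Attention-like selector: sel[q][k] is True when query
    position q attends to key position k."""
    return [[predicate(k, q) for k in keys] for q in queries]

def aggregate(sel, values):
    """For each query position, average the values at the selected
    key positions (0.0 if nothing is selected)."""
    out = []
    for row in sel:
        picked = [v for v, keep in zip(values, row) if keep]
        out.append(sum(picked) / len(picked) if picked else 0.0)
    return out

# Example program: reverse a sequence. Each position q attends only
# to the mirrored position n - 1 - q, so the "average" at q is just
# that single position's value.
tokens = list("hello")
n = len(tokens)
indices = list(range(n))
flip = select(indices, indices, lambda k, q: k == n - 1 - q)
codes = aggregate(flip, [ord(t) for t in tokens])
reversed_tokens = "".join(chr(int(c)) for c in codes)
print(reversed_tokens)  # "olleh"
```

The point of the exercise is that operations expressible this way (a selector plus an averaging aggregation) correspond to what a single attention head can compute.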
Alternatives and similar repositories for raspy
Users interested in raspy are comparing it to the libraries listed below.
- An interpreter for RASP as described in the ICML 2021 paper "Thinking Like Transformers" ☆319 · Updated 11 months ago
- git extension for {collaborative, communal, continual} model development ☆217 · Updated 9 months ago
- Puzzles for exploring transformers ☆366 · Updated 2 years ago
- A puzzle to learn about prompting ☆132 · Updated 2 years ago
- Extract full next-token probabilities via language model APIs ☆247 · Updated last year
- ☆542 · Updated last year
- Resources from the EleutherAI Math Reading Group ☆53 · Updated 5 months ago
- ☆444 · Updated 10 months ago
- ☆275 · Updated last year
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper ☆129 · Updated 2 years ago
- Erasing concepts from neural representations with provable guarantees ☆232 · Updated 6 months ago
- Functional local implementations of main model parallelism approaches ☆96 · Updated 2 years ago
- A pure-functional implementation of a machine learning transformer model in Python/JAX ☆178 · Updated 3 months ago
- Neural Networks and the Chomsky Hierarchy ☆208 · Updated last year
- Python library which enables complex compositions of language models such as scratchpads, chain of thought, tool use, selection-inference… ☆208 · Updated 2 months ago
- ☆166 · Updated 2 years ago
- A library for bridging Python and HTML/Javascript (via Svelte) for creating interactive visualizations ☆195 · Updated 3 years ago
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax ☆643 · Updated this week
- An implementation of the RASP transformer programming language (https://arxiv.org/pdf/2106.06981.pdf) ☆58 · Updated 3 years ago
- Automatic gradient descent ☆208 · Updated 2 years ago
- Understand and test language model architectures on synthetic tasks ☆221 · Updated last month
- seqax = sequence modeling + JAX ☆166 · Updated last month
- 🧱 Modula software package ☆222 · Updated 3 weeks ago
- Minimalistic, extremely fast, and hackable researcher's toolbench for GPT models in 307 lines of code. Reaches <3.8 validation loss on wi… ☆349 · Updated last year
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day ☆256 · Updated last year
- 🧠 Starter templates for doing interpretability research ☆73 · Updated 2 years ago
- Simple Transformer in Jax ☆139 · Updated last year
- A Jax-based library for building transformers; includes implementations of GPT, Gemma, LlaMa, Mixtral, Whisper, SWin, ViT and more ☆290 · Updated 11 months ago
- Resources for skilling up in AI alignment research engineering. Covers basics of deep learning, mechanistic interpretability, and RL ☆221 · Updated last week
- ☆138 · Updated 4 months ago