dan-gittik / infosec17
☆1 Updated 7 years ago
Alternatives and similar repositories for infosec17:
Users interested in infosec17 are comparing it to the repositories listed below.
- A complete Hebrew-language book on machine learning and deep learning ☆955 Updated 11 months ago
- A scheduling helper web application for Technion students ☆52 Updated this week
- Sparse Autoencoder for Mechanistic Interpretability ☆216 Updated 6 months ago
- ☆71 Updated 3 years ago
- Using sparse coding to find distributed representations used by neural networks. ☆213 Updated last year
- Hebrew Machine and Deep Learning Tutorials ☆234 Updated last year
- Tools for studying developmental interpretability in neural networks. ☆84 Updated 3 weeks ago
- ☆203 Updated 4 months ago
- Mechanistic Interpretability Visualizations using React ☆231 Updated 2 months ago
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ☆184 Updated 8 months ago
- ☆239 Updated last week
- Adds בס"ד ("with Heaven's help") and other blessings to your code; maybe it will help ☆35 Updated 4 years ago
- (Model-written) LLM evals library ☆18 Updated 2 months ago
- Sparse Autoencoder Training Library ☆41 Updated 3 months ago
- ☆27 Updated 3 years ago
- ☆116 Updated last year
- ☆29 Updated 9 months ago
- ☆86 Updated last week
- Winners of the International Obfuscated C Code Contest ☆565 Updated this week
- A library for bridging Python and HTML/JavaScript (via Svelte) for creating interactive visualizations ☆14 Updated 10 months ago
- A website that shows Coronavirus cases in Israel ☆34 Updated 3 years ago
- ViT Prisma is a mechanistic interpretability library for Vision Transformers (ViTs). ☆199 Updated last week
- Rashi Plus Plus - the new Hebrew scripting language ☆91 Updated 3 years ago
- Resources for skilling up in AI alignment research engineering. Covers basics of deep learning, mechanistic interpretability, and RL. ☆208 Updated last year
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆182 Updated 2 months ago
- The nnsight package enables interpreting and manipulating the internals of deep learning models. ☆489 Updated this week
- The Happy Faces Benchmark ☆14 Updated last year
- ☆150 Updated this week
- Erasing concepts from neural representations with provable guarantees ☆222 Updated 3 weeks ago
- Applying SAEs for fine-grained control ☆14 Updated 2 months ago