snu-mllab / Targeted-Cause-Discovery
[TMLR'25] Official implementation for "Large-Scale Targeted Cause Discovery via Learning from Simulated Data"
☆25 · Updated last month
Alternatives and similar repositories for Targeted-Cause-Discovery
Users interested in Targeted-Cause-Discovery are comparing it to the libraries listed below.
- [ICLR 2025] Monet: Mixture of Monosemantic Experts for Transformers · ☆73 · Updated 4 months ago
- ☆81 · Updated last year
- PyTorch library for Active Fine-Tuning · ☆93 · Updated last month
- ☆11 · Updated last year
- Model Stock: All we need is just a few fine-tuned models · ☆125 · Updated 2 months ago
- Simple implementation of muP, based on Spectral Condition for Feature Learning. The implementation is SGD only; don't use it for Adam. · ☆85 · Updated last year
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind · ☆129 · Updated last year
- Code repository for Black Mamba · ☆258 · Updated last year
- ☆166 · Updated 2 years ago
- Mixture of A Million Experts · ☆48 · Updated last year
- ☆110 · Updated 8 months ago
- ☆108 · Updated 8 months ago
- Sparse and discrete interpretability tool for neural networks · ☆64 · Updated last year
- nanoGPT-like codebase for LLM training · ☆110 · Updated this week
- Official implementation of FIND (NeurIPS '23): Function Interpretation Benchmark and Automated Interpretability Agents · ☆51 · Updated last year
- ☆33 · Updated last year
- Decomposing and Editing Predictions by Modeling Model Computation · ☆138 · Updated last year
- ☆102 · Updated 3 months ago
- Official implementation of MAIA, A Multimodal Automated Interpretability Agent · ☆92 · Updated last week
- LLM-Merging: Building LLMs Efficiently through Merging · ☆204 · Updated last year
- Official code for the ICML 2024 paper "The Entropy Enigma: Success and Failure of Entropy Minimization" · ☆53 · Updated last year
- Unofficial but Efficient Implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX · ☆88 · Updated last year
- ☆56 · Updated last year
- ☆58 · Updated last year
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) · ☆193 · Updated last year
- ☆33 · Updated last year
- [NeurIPS 2024] Official implementation of the paper "MambaLRP: Explaining Selective State Space Sequence Models" 🐍 · ☆45 · Updated 11 months ago
- ☆18 · Updated 3 months ago
- Official PyTorch implementation of "Neural Relation Graph: A Unified Framework for Identifying Label Noise and Outlier Data" (NeurIPS'23) · ☆15 · Updated last year
- A State-Space Model with Rational Transfer Function Representation · ☆82 · Updated last year