r-three / realistic_evaluation_of_model_merging_for_compositional_generalization
☆12 · Updated last year
Alternatives and similar repositories for realistic_evaluation_of_model_merging_for_compositional_generalization
Users interested in realistic_evaluation_of_model_merging_for_compositional_generalization are comparing it to the repositories listed below.
- ☆20 · Updated 2 weeks ago
- Gemstones: A Model Suite for Multi-Faceted Scaling Laws (NeurIPS 2025) · ☆29 · Updated last month
- ☆51 · Updated last year
- Codebase for Context-aware Meta-learned Loss Scaling (CaMeLS): https://arxiv.org/abs/2305.15076 · ☆25 · Updated last year
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models · ☆55 · Updated 9 months ago
- ☆104 · Updated last year
- Code for Adaptive Data Optimization · ☆28 · Updated 11 months ago
- [NeurIPS 2024 Spotlight] Code and data for the paper "Finding Transformer Circuits with Edge Pruning" · ☆61 · Updated 3 months ago
- Mamba support for TransformerLens · ☆18 · Updated last year
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" · ☆85 · Updated last year
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs · ☆60 · Updated last year
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards · ☆44 · Updated 7 months ago
- ☆16 · Updated last year
- ☆15 · Updated last year
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) · ☆80 · Updated 2 years ago
- Exploration of automated dataset selection approaches at large scales · ☆48 · Updated 8 months ago
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling · ☆40 · Updated last month
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… · ☆55 · Updated 2 years ago
- [ICML 2024] Junk DNA Hypothesis: A Task-Centric Angle of LLM Pre-trained Weights through Sparsity; Lu Yin*, Ajay Jaiswal*, Shiwei Liu, So… · ☆16 · Updated 7 months ago
- PaCE: Parsimonious Concept Engineering for Large Language Models (NeurIPS 2024) · ☆40 · Updated last year
- Long Context Extension and Generalization in LLMs · ☆62 · Updated last year
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods · ☆141 · Updated 4 months ago
- Official code repo for the paper "Great Memory, Shallow Reasoning: Limits of kNN-LMs" · ☆24 · Updated 6 months ago
- ☆33 · Updated 10 months ago
- Efficient scaling laws and collaborative pretraining · ☆18 · Updated 2 months ago
- An official implementation of "Catastrophic Failure of LLM Unlearning via Quantization" (ICLR 2025) · ☆33 · Updated 9 months ago
- A Kernel-Based View of Language Model Fine-Tuning: https://arxiv.org/abs/2210.05643 · ☆78 · Updated 2 years ago
- Latest Weight Averaging (NeurIPS HITY 2022) · ☆31 · Updated 2 years ago
- ☆45 · Updated 2 years ago
- Language models scale reliably with over-training and on downstream tasks · ☆100 · Updated last year