daeveraert / gradient-information-optimization
Implementation of Gradient Information Optimization (GIO) for effective and scalable training data selection
☆13 · Updated last year
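For orientation, GIO-style data selection can be pictured as greedily growing a subset of a candidate pool so that its distribution stays close, in KL divergence, to a small target set. The sketch below is a minimal illustration of that idea under simplifying assumptions: it uses Gaussian fits rather than the paper's estimators, and every function name here is hypothetical, not the repository's actual API.

```python
# Minimal sketch of distribution-matching data selection in the spirit of GIO.
# All names and the Gaussian/greedy simplifications are illustrative
# assumptions, not the repository's actual algorithm or API.
import numpy as np

def gaussian_kl(mu0, cov0, mu1, cov1):
    """Closed-form KL( N(mu0, cov0) || N(mu1, cov1) )."""
    d = mu0.shape[0]
    inv1 = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(inv1 @ cov0) + diff @ inv1 @ diff - d
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

def greedy_select(candidates, target, k, n_init=5):
    """Greedily grow a subset of `candidates` (n x d embeddings) whose fitted
    Gaussian stays close in KL to a Gaussian fitted on `target`."""
    rng = np.random.default_rng(0)
    eye = 1e-6 * np.eye(candidates.shape[1])   # regularizer for tiny subsets
    mu_t, cov_t = target.mean(0), np.cov(target.T) + eye
    chosen = list(rng.choice(len(candidates), size=n_init, replace=False))
    pool = [i for i in range(len(candidates)) if i not in chosen]
    while len(chosen) < k:
        def kl_if_added(i):
            s = candidates[chosen + [i]]
            return gaussian_kl(s.mean(0), np.cov(s.T) + eye, mu_t, cov_t)
        best = min(pool, key=kl_if_added)      # addition yielding lowest KL
        chosen.append(best)
        pool.remove(best)
    return chosen

# Toy usage: pick 20 of 500 random 8-d "embeddings" to match a shifted target.
cands = np.random.default_rng(1).normal(size=(500, 8))
target = np.random.default_rng(2).normal(loc=0.5, size=(200, 8))
print(greedy_select(cands, target, k=20))
```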
Alternatives and similar repositories for gradient-information-optimization:
Users who are interested in gradient-information-optimization are comparing it to the libraries listed below.
- ☆33 · Updated last year
- A Kernel-Based View of Language Model Fine-Tuning (https://arxiv.org/abs/2210.05643) ☆75 · Updated last year
- ☆66 · Updated 3 years ago
- Bayesian low-rank adaptation for large language models ☆23 · Updated 11 months ago
- Long Is More for Alignment: A Simple but Tough-to-Beat Baseline for Instruction Fine-Tuning [ICML 2024] ☆17 · Updated 11 months ago
- ☆11 · Updated 2 years ago
- Repository for "Model Merging by Uncertainty-Based Gradient Matching" (ICLR 2024) ☆27 · Updated 10 months ago
- ☆17 · Updated 3 weeks ago
- [NeurIPS'23] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors ☆74 · Updated 3 months ago
- ☆38 · Updated last year
- ☆13 · Updated 7 months ago
- [ICML 2024] Junk DNA Hypothesis: A Task-Centric Angle of LLM Pre-trained Weights through Sparsity; Lu Yin*, Ajay Jaiswal*, Shiwei Liu, So… ☆16 · Updated 10 months ago
- Skill-It! A Data-Driven Skills Framework for Understanding and Training Language Models ☆45 · Updated last year
- Uncertainty quantification for in-context learning of large language models ☆16 · Updated last year
- Learning adapter weights from task descriptions ☆16 · Updated last year
- Codebase for ICML submission "DOGE: Domain Reweighting with Generalization Estimation" ☆16 · Updated last year
- ☆28 · Updated 8 months ago
- Fairer Preferences Elicit Improved Human-Aligned Large Language Model Judgments (Zhou et al., EMNLP 2024) ☆13 · Updated 6 months ago
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆48 · Updated 2 years ago
- On the Effectiveness of Parameter-Efficient Fine-Tuning ☆38 · Updated last year
- This PyTorch package implements PLATON: Pruning Large Transformer Models with Upper Confidence Bound of Weight Importance (ICML 2022) ☆43 · Updated 2 years ago
- ☆28 · Updated last year
- Official code for "Decoding-Time Language Model Alignment with Multiple Objectives" ☆19 · Updated 5 months ago
- The accompanying code for "Transformer Feed-Forward Layers Are Key-Value Memories". Mor Geva, Roei Schuster, Jonathan Berant, and Omer Le… ☆90 · Updated 3 years ago
- Code for the paper “What Data Benefits My Classifier?” Enhancing Model Performance and Interpretability through Influence-Based Data Selecti… ☆22 · Updated 10 months ago
- Augmenting Statistical Models with Natural Language Parameters ☆24 · Updated 6 months ago
- Code for "Tracing Knowledge in Language Models Back to the Training Data" ☆37 · Updated 2 years ago
- Provides the answer to "How to do patching on all available SAEs on GPT-2?". The official repository of the implementation of the p… ☆11 · Updated 2 months ago
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆79 · Updated last year
- ☆50 · Updated last year