Liang137 / DPZero
[ICML 2024] DPZero: Private Fine-Tuning of Language Models without Backpropagation
☆13 · Updated 7 months ago
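DPZero fine-tunes without backpropagation by estimating a directional derivative from two forward passes along a shared random direction, then clipping and noising that scalar before the update. The sketch below is a minimal illustration of this idea under assumed names: `dpzero_step` and the `loss(params, example)` callable are hypothetical and are not the repository's actual API.

```python
import numpy as np

def dpzero_step(params, batch, loss, lr=1e-4, mu=1e-3,
                clip=1.0, noise_mult=1.0, rng=None):
    """One illustrative DPZero-style update (hypothetical helper, not the repo's API)."""
    rng = rng if rng is not None else np.random.default_rng()
    z = rng.standard_normal(params.shape)  # shared random perturbation direction
    scalars = []
    for example in batch:
        # Two-point zeroth-order estimate of the directional derivative,
        # computed from forward passes only (no backpropagation).
        g = (loss(params + mu * z, example) - loss(params - mu * z, example)) / (2 * mu)
        scalars.append(np.clip(g, -clip, clip))  # per-example clipping bounds sensitivity
    # Gaussian noise on the averaged scalar (noise calibration simplified for illustration).
    noised = float(np.mean(scalars)) + noise_mult * clip / len(batch) * rng.standard_normal()
    return params - lr * noised * z
```

Because only this scalar is clipped and noised, rather than a full gradient vector, the added privacy noise does not scale with the number of model parameters, which is the dimension-independence property the paper highlights.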
Alternatives and similar repositories for DPZero:
Users interested in DPZero are comparing it to the repositories listed below.
- Official implementation of Privacy Implications of Retrieval-Based Language Models (EMNLP 2023). https://arxiv.org/abs/2305.14888 ☆35 · Updated 10 months ago
- ☆42 · Updated 2 months ago
- Source code for the paper "Riemannian Preconditioned LoRA for Fine-Tuning Foundation Models" ☆24 · Updated 9 months ago
- ☆13 · Updated 8 months ago
- Representation Surgery for Multi-Task Model Merging. ICML, 2024. ☆44 · Updated 6 months ago
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆80 · Updated last year
- ☆26 · Updated last year
- Repository for "Model Merging by Uncertainty-Based Gradient Matching" (ICLR 2024) ☆27 · Updated 11 months ago
- ☆11 · Updated 2 years ago
- [ICML 2024 Spotlight] Fine-Tuning Pre-trained Large Language Models Sparsely ☆23 · Updated 9 months ago
- PaCE: Parsimonious Concept Engineering for Large Language Models (NeurIPS 2024) ☆35 · Updated 5 months ago
- ☆27 · Updated 2 months ago
- Implementation of Gradient Information Optimization (GIO) for effective and scalable training data selection ☆13 · Updated last year
- Codebase for "Decoding Compressed Trust" ☆23 · Updated 11 months ago
- Repo for the ACL 2023 Findings paper "Emergent Modularity in Pre-trained Transformers" ☆23 · Updated last year
- ☆18 · Updated 3 weeks ago
- Code for the paper "Merging Multi-Task Models via Weight-Ensembling Mixture of Experts" ☆24 · Updated 10 months ago
- ☆13 · Updated last year
- [ACL 2024] Code and data for "Machine Unlearning of Pre-trained Large Language Models" ☆57 · Updated 6 months ago
- Code for the NeurIPS'23 paper "A Bayesian Approach To Analysing Training Data Attribution In Deep Learning" ☆17 · Updated last year
- [EMNLP 2023 Main] Sparse Low-rank Adaptation of Pre-trained Language Models ☆74 · Updated last year
- Official repository of "Localizing Task Information for Improved Model Merging and Compression" [ICML 2024] ☆42 · Updated 5 months ago
- ☆35 · Updated last year
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆64 · Updated 6 months ago
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆50 · Updated 2 years ago
- Official implementation of the GOAT model (ICML 2023) ☆37 · Updated last year
- A curated list of Model Merging methods. ☆91 · Updated 7 months ago
- ☆31 · Updated last year
- SLTrain: a sparse plus low-rank approach for parameter and memory efficient pretraining (NeurIPS 2024) ☆30 · Updated 5 months ago
- This PyTorch package implements PLATON: Pruning Large Transformer Models with Upper Confidence Bound of Weight Importance (ICML 2022). ☆44 · Updated 2 years ago