Twilight92z / Quantize-Watermark
☆19 · Updated 2 years ago
Alternatives and similar repositories for Quantize-Watermark
Users interested in Quantize-Watermark are comparing it to the libraries listed below.
- Codebase for decoding compressed trust. ☆25 · Updated last year
- ☆16 · Updated last year
- Official PyTorch implementation of our ICLR 2024 paper, Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLM… ☆50 · Updated last year
- ☆23 · Updated last year
- [COLM 2025] SEAL: Steerable Reasoning Calibration of Large Language Models for Free ☆52 · Updated 9 months ago
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆98 · Updated last year
- A block pruning framework for LLMs. ☆27 · Updated 8 months ago
- ☆18 · Updated last year
- The official implementation of "LightTransfer: Your Long-Context LLM is Secretly a Hybrid Model with Effortless Adaptation" ☆22 · Updated 9 months ago
- [ICML 2024 Oral] This project is the official implementation of our Accurate LoRA-Finetuning Quantization of LLMs via Information Retenti… ☆67 · Updated last year
- [ICLR'25 Spotlight] Min-K%++: Improved baseline for detecting pre-training data of LLMs ☆52 · Updated 8 months ago
- Implementation of "Decoding-time Realignment of Language Models", ICML 2024. ☆21 · Updated last year
- Representation Surgery for Multi-Task Model Merging. ICML 2024. ☆47 · Updated last year
- [ICML 2024] Junk DNA Hypothesis: A Task-Centric Angle of LLM Pre-trained Weights through Sparsity; Lu Yin*, Ajay Jaiswal*, Shiwei Liu, So… ☆16 · Updated 9 months ago
- Code for Merging Large Language Models ☆35 · Updated last year
- Our research proposes a novel MoGU framework that improves LLMs' safety while preserving their usability. ☆18 · Updated last year
- Activation-aware Singular Value Decomposition for Compressing Large Language Models ☆84 · Updated last year
- The official repository of 'Unnatural Languages Are Not Bugs but Features for LLMs' ☆24 · Updated 8 months ago
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆58 · Updated 11 months ago
- ☆63 · Updated 2 years ago
- Official code for our paper, "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆143 · Updated 9 months ago
- [ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference ☆46 · Updated last year
- The repository of the paper "REEF: Representation Encoding Fingerprints for Large Language Models" aims to protect the IP of open-source… ☆73 · Updated last year
- Analyzing and Reducing Catastrophic Forgetting in Parameter Efficient Tuning ☆36 · Updated last year
- EMNLP 2024: Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue ☆38 · Updated 8 months ago
- Official implementation of the paper: "A deeper look at depth pruning of LLMs" ☆15 · Updated last year
- ☆43 · Updated 2 years ago
- Code for "Everybody Prune Now: Structured Pruning of LLMs with only Forward Passes" ☆30 · Updated last year
- [ACL 2024] Code and data for "Machine Unlearning of Pre-trained Large Language Models" ☆66 · Updated last year
- The loss landscape of Large Language Models resembles a basin! ☆36 · Updated 6 months ago