zhang-wei-chao / DC-PDD
This repository contains the original implementation of *Pretraining Data Detection for Large Language Models: A Divergence-based Calibration Method* by Weichao Zhang, Ruqing Zhang, Jiafeng Guo, Maarten de Rijke, Yixing Fan, and Xueqi Cheng.
☆21 · Updated 6 months ago
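For orientation: pretraining data detection methods such as DC-PDD score a candidate text by its per-token probabilities under the target model. A minimal, self-contained sketch of the classic Min-K% Prob scoring rule (a common baseline in this line of work; the Min-K%++ repository below extends it) is shown here. This is not DC-PDD's calibrated score; the function name and toy numbers are illustrative, and real use requires an LLM to supply the per-token log-probabilities.

```python
def min_k_percent_score(token_logprobs, k=0.2):
    """Average the lowest k-fraction of per-token log-probabilities.

    Higher (less negative) scores suggest the text was more likely seen
    during pretraining; a decision threshold is tuned on held-out data.
    """
    if not token_logprobs:
        raise ValueError("need at least one token log-prob")
    n = max(1, int(len(token_logprobs) * k))  # how many low-prob tokens to keep
    lowest = sorted(token_logprobs)[:n]       # the k% least likely tokens
    return sum(lowest) / n

# Toy example: a "memorized" text has uniformly high token log-probs,
# so even its worst tokens score better than an unseen text's outliers.
seen = [-0.1, -0.2, -0.15, -0.3, -0.25]
unseen = [-0.1, -2.5, -3.1, -0.4, -2.8]
assert min_k_percent_score(seen) > min_k_percent_score(unseen)
```

DC-PDD's contribution is to calibrate such token-probability scores against a token-frequency distribution from a reference corpus; see the repository itself for the actual method.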
Alternatives and similar repositories for DC-PDD
Users interested in DC-PDD are comparing it to the libraries listed below.
- [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning". ☆137 · Updated last year
- A method of ensemble learning for heterogeneous large language models. ☆63 · Updated last year
- [ACL'24] A Knowledge-grounded Interactive Evaluation Framework for Large Language Models ☆38 · Updated last year
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆62 · Updated last year
- Code for the ACL 2024 paper "SAPT: A Shared Attention Framework for Parameter-Efficient Continual Learning of Large Language …" ☆36 · Updated 10 months ago
- ☆56 · Updated 4 months ago
- Implementation code for the ACL 2024 paper "Advancing Parameter Efficiency in Fine-tuning via Representation Editing" ☆14 · Updated last year
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆123 · Updated last year
- [ICLR 2025] Code and data repo for the paper "Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation" ☆86 · Updated 11 months ago
- [ICLR 2025] Released code for the paper "Spurious Forgetting in Continual Learning of Language Models" ☆55 · Updated 6 months ago
- An implementation of online data mixing for the Pile dataset, based on the GPT-NeoX library. ☆13 · Updated last year
- ☆30 · Updated 9 months ago
- Official code implementation for the ACL 2025 paper "CoT-based Synthesizer: Enhancing LLM Performance through Answer Synthesis" ☆31 · Updated 6 months ago
- Model merging is a highly efficient approach for long-to-short reasoning. ☆91 · Updated last month
- A versatile toolkit for applying Logit Lens to modern large language models (LLMs). Currently supports Llama-3.1-8B and Qwen-2.5-7B, enab… ☆130 · Updated 3 months ago
- RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models (NeurIPS 2024) ☆86 · Updated last year
- [ACL 2024] Shifting Attention to Relevance: Towards the Predictive Uncertainty Quantification of Free-Form Large Language Models ☆59 · Updated last year
- [EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey" ☆148 · Updated last year
- Repository for "Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning" ☆166 · Updated last year
- LLM Unlearning ☆177 · Updated 2 years ago
- ☆69 · Updated 5 months ago
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆98 · Updated last year
- ☆21 · Updated 8 months ago
- 【ACL 2024】 SALAD benchmark & MD-Judge ☆166 · Updated 8 months ago
- Code for the EMNLP 2024 paper "Neuron-Level Knowledge Attribution in Large Language Models" ☆47 · Updated last year
- [ICLR 2025] Language Imbalance Driven Rewarding for Multilingual Self-improving ☆24 · Updated 3 months ago
- [ICLR'25 Spotlight] Min-K%++: Improved baseline for detecting pre-training data of LLMs ☆50 · Updated 6 months ago
- [NeurIPS 2024] The official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" ☆132 · Updated 8 months ago
- Official code for "Guiding Language Model Math Reasoning with Planning Tokens" ☆18 · Updated last year
- Our research proposes a novel MoGU framework that improves LLMs' safety while preserving their usability. ☆18 · Updated 10 months ago