October2001 / ProLong
[ACL 2024 (Oral)] A Prospector of Long-Dependency Data for Large Language Models
★53 · Updated 3 months ago
Related projects
Alternatives and complementary repositories for ProLong
- Official implementation of Dynamic Data Mixing Maximizes Instruction Tuning for Mixture-of-Experts ★34 · Updated last month
- [ICML'2024] Can AI Assistants Know What They Don't Know? ★70 · Updated 9 months ago
- Code & Data for our paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ★59 · Updated 8 months ago
- BeHonest: Benchmarking Honesty in Large Language Models ★29 · Updated 2 months ago
- Towards Systematic Measurement for Long Text Quality ★28 · Updated 2 months ago
- L-CiteEval: Do Long-Context Models Truly Leverage Context for Responding? ★17 · Updated 2 weeks ago
- Resources for our ACL 2023 paper: Distilling Script Knowledge from Large Language Models for Constrained Language Planning ★35 · Updated last year
- [ICLR'24 Spotlight] "Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts" ★59 · Updated 6 months ago
- Official repository for the paper "Weak-to-Strong Extrapolation Expedites Alignment" ★67 · Updated 5 months ago
- [ICML 2024] Selecting High-Quality Data for Training Language Models ★141 · Updated 4 months ago
- Evaluating the Ripple Effects of Knowledge Editing in Language Models ★50 · Updated 6 months ago
- [EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey" ★81 · Updated last month
- [EMNLP 2024 (Oral)] Leave No Document Behind: Benchmarking Long-Context LLMs with Extended Multi-Doc QA ★89 · Updated last month
- One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning ★38 · Updated last year
- The official implementation of "ICDPO: Effectively Borrowing Alignment Capability of Others via In-context Direct Preference Optimization… ★13 · Updated 8 months ago
- We introduce ScaleQuest, a scalable, novel and cost-effective data synthesis method to unleash the reasoning capability of LLMs. ★43 · Updated last week
- Repo for the EMNLP'24 paper "Dual-Space Knowledge Distillation for Large Language Models". ★36 · Updated this week
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper entitled "R-Tuning: Instructing Large Language Models to Say 'I Don'… ★84 · Updated 3 months ago
- Code and data for "ConflictBank: A Benchmark for Evaluating the Influence of Knowledge Conflicts in LLM" (NeurIPS 2024 Track Datasets and… ★27 · Updated 2 weeks ago
- Code and data for "Instruct Once, Chat Consistently in Multiple Rounds: An Efficient Tuning Framework for Dialogue" (ACL 2024) ★21 · Updated 3 months ago