swj0419 / in-context-pretraining
☆48 · Updated 11 months ago
Alternatives and similar repositories for in-context-pretraining:
Users interested in in-context-pretraining are comparing it to the libraries listed below.
- ☆85 · Updated 2 years ago
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆58 · Updated last year
- [ACL'24 Oral] Analysing The Impact of Sequence Composition on Language Model Pre-Training ☆19 · Updated 7 months ago
- Implementation of the ICML 2023 paper "Specializing Smaller Language Models towards Multi-Step Reasoning" ☆130 · Updated last year
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems ☆55 · Updated 8 months ago
- ☆31 · Updated last year
- Code and data for the paper "Context-faithful Prompting for Large Language Models" ☆39 · Updated 2 years ago
- Code for our EMNLP 2023 paper "Active Instruction Tuning: Improving Cross-Task Generalization by Training on Prompt Sensitive Tasks" ☆24 · Updated last year
- Repo for the paper "Large Language Models Struggle to Learn Long-Tail Knowledge" ☆76 · Updated last year
- [ICML 2024] Selecting High-Quality Data for Training Language Models ☆158 · Updated 9 months ago
- ☆98 · Updated 5 months ago
- Code for M4LE: A Multi-Ability Multi-Range Multi-Task Multi-Domain Long-Context Evaluation Benchmark for Large Language Models ☆22 · Updated 7 months ago
- ☆30 · Updated 10 months ago
- Towards Systematic Measurement for Long Text Quality ☆33 · Updated 6 months ago
- Official repository for the paper "Weak-to-Strong Extrapolation Expedites Alignment" ☆72 · Updated 9 months ago
- [NAACL 2024 Outstanding Paper] Source code for "R-Tuning: Instructing Large Language Models to Say 'I Don't Know'" ☆108 · Updated 8 months ago
- [ICLR'24 Spotlight] "Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts" ☆67 · Updated 11 months ago
- Lightweight tool to identify data contamination in LLM evaluation ☆49 · Updated last year
- Repository for "Propagating Knowledge Updates to LMs Through Distillation" (NeurIPS 2023).