swj0419 / in-context-pretraining
☆49 · Updated last year
Alternatives and similar repositories for in-context-pretraining:
Users interested in in-context-pretraining are comparing it to the repositories listed below.
- ☆85 · Updated 2 years ago
- [ACL'24 Oral] Analysing The Impact of Sequence Composition on Language Model Pre-Training ☆20 · Updated 8 months ago
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems ☆59 · Updated 9 months ago
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆58 · Updated last year
- Code for M4LE: A Multi-Ability Multi-Range Multi-Task Multi-Domain Long-Context Evaluation Benchmark for Large Language Models ☆23 · Updated 8 months ago
- Code and data for the paper "Context-faithful Prompting for Large Language Models" ☆39 · Updated 2 years ago
- Repo for the paper "Large Language Models Struggle to Learn Long-Tail Knowledge" ☆76 · Updated 2 years ago
- The official repository for the paper "From Zero to Hero: Examining the Power of Symbolic Tasks in Instruction Tuning" ☆64 · Updated 2 years ago
- Towards Systematic Measurement for Long Text Quality ☆34 · Updated 7 months ago
- Methods and evaluation for aligning language models temporally ☆29 · Updated last year
- Repository for "Propagating Knowledge Updates to LMs Through Distillation" (NeurIPS 2023) ☆25 · Updated 8 months ago
- Implementation of the ICML 2023 paper "Specializing Smaller Language Models towards Multi-Step Reasoning" ☆130 · Updated last year
- ☆95 · Updated last year
- ☆61 · Updated 2 years ago
- Retrieval as Attention ☆83 · Updated 2 years ago
- [ICML 2024] Selecting High-Quality Data for Training Language Models ☆168 · Updated 10 months ago
- ☆31 · Updated last year
- ☆41 · Updated last year
- [ICML 2023] Code for the paper "Compositional Exemplars for In-context Learning" ☆99 · Updated 2 years ago
- ☆28 · Updated last year
- ☆98 · Updated 6 months ago
- TBC ☆26 · Updated 2 years ago
- ☆65 · Updated last year
- ☆44 · Updated 7 months ago
- Code for the EMNLP 2023 paper "Active Instruction Tuning: Improving Cross-Task Generalization by Training on Prompt Sensitive Tasks" ☆24 · Updated last year
- [ICLR'24 Spotlight] "Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts" ☆67 · Updated last year
- Evaluating the Ripple Effects of Knowledge Editing in Language Models ☆55 · Updated last year
- One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning ☆39 · Updated last year
- Official repository for MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models [NeurIPS 2024] ☆64 · Updated 5 months ago
- Official repository for the paper "Weak-to-Strong Extrapolation Expedites Alignment" ☆74 · Updated 10 months ago