swj0419 / in-context-pretraining
☆51 · Updated last year
Alternatives and similar repositories for in-context-pretraining
Users interested in in-context-pretraining are comparing it to the repositories listed below.
- Implementation of the ICML 2023 paper: Specializing Smaller Language Models towards Multi-Step Reasoning. ☆131 · Updated 2 years ago
- [ACL'24 Oral] Analysing The Impact of Sequence Composition on Language Model Pre-Training ☆22 · Updated 10 months ago
- ☆86 · Updated 2 years ago
- Repo for the paper "Large Language Models Struggle to Learn Long-Tail Knowledge" ☆77 · Updated 2 years ago
- [ICLR'24 Spotlight] "Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts" ☆69 · Updated last year
- The official repository for the paper "From Zero to Hero: Examining the Power of Symbolic Tasks in Instruction Tuning". ☆65 · Updated 2 years ago
- ☆31 · Updated last year
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆59 · Updated last year
- Repository for "Propagating Knowledge Updates to LMs Through Distillation" (NeurIPS 2023). ☆25 · Updated 10 months ago
- Code for M4LE: A Multi-Ability Multi-Range Multi-Task Multi-Domain Long-Context Evaluation Benchmark for Large Language Models ☆23 · Updated 10 months ago
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems. ☆62 · Updated 11 months ago
- Official repository for the ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆73 · Updated last month
- Retrieval as Attention ☆82 · Updated 2 years ago
- The Grade-School Math with Irrelevant Context (GSM-IC) benchmark is an arithmetic reasoning dataset built upon GSM8K by adding irrelevant sentences to the problems. ☆60 · Updated 2 years ago
- ☆41 · Updated last year
- Towards Systematic Measurement for Long Text Quality ☆35 · Updated 9 months ago
- Official repository for MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models [NeurIPS 2024] ☆71 · Updated 7 months ago
- [NeurIPS 2023] Repetition In Repetition Out: Towards Understanding Neural Text Degeneration from the Data Perspective ☆33 · Updated last year
- [ICML 2024] Selecting High-Quality Data for Training Language Models ☆176 · Updated last year
- The official code of TACL 2021, "Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning Strategies". ☆74 · Updated 2 years ago
- ☆24 · Updated 2 years ago
- Code and data for the paper "Context-faithful Prompting for Large Language Models". ☆40 · Updated 2 years ago
- ☆87 · Updated 2 years ago
- Revisiting Mid-training in the Era of RL Scaling ☆62 · Updated 2 months ago
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆112 · Updated 9 months ago
- "FiD-ICL: A Fusion-in-Decoder Approach for Efficient In-Context Learning" (ACL 2023) ☆14 · Updated last year
- Official repository for the ICLR 2024 Spotlight paper "Large Language Models Are Not Robust Multiple Choice Selectors" ☆39 · Updated last month
- ACL'23: Unified Demonstration Retriever for In-Context Learning ☆37 · Updated last year
- Prompting Large Language Models to Generate Dense and Sparse Representations for Zero-Shot Document Retrieval ☆50 · Updated last week
- ☆98 · Updated last year