princeton-nlp / CEPE
[ACL 2024] Long-Context Language Modeling with Parallel Encodings
★168 · Updated last year
Alternatives and similar repositories for CEPE
Users interested in CEPE are comparing it to the repositories listed below:
- [ICML 2024] Selecting High-Quality Data for Training Language Models ★200 · Updated last month
- [ICLR 2025] 🧬 RegMix: Data Mixture as Regression for Language Model Pre-training (Spotlight) ★185 · Updated 11 months ago
- ★109 · Updated 6 months ago
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning ★186 · Updated 7 months ago
- [ACL 2024] LooGLE: Long Context Evaluation for Long-Context Language Models ★194 · Updated last year
- Counting-Stars (★) ★83 · Updated 2 months ago
- [ACL 2025, Main Conference, Oral] Intuitive Fine-Tuning: Towards Simplifying Alignment into a Single Process ★30 · Updated last year
- Homepage for ProLong (Princeton long-context language models) and paper "How to Train Long-Context Language Models (Effectively)" ★245 · Updated 4 months ago
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ★78 · Updated last year
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ★260 · Updated last year
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing" ★83 · Updated last year
- [ACL 2025] We introduce ScaleQuest, a scalable, novel and cost-effective data synthesis method to unleash the reasoning capability of LLM… ★68 · Updated last year
- Towards Systematic Measurement for Long Text Quality ★37 · Updated last year
- Code implementation of synthetic continued pretraining ★146 · Updated last year
- [ACL 2024] MT-Bench-101: A Fine-Grained Benchmark for Evaluating Large Language Models in Multi-Turn Dialogues ★140 · Updated last year
- A Comprehensive Survey on Long Context Language Modeling ★222 · Updated 2 months ago
- ★125 · Updated last year
- A prototype repo for hybrid training of pipeline parallel and distributed data parallel with comments on core code snippets. Feel free to… ★57 · Updated 2 years ago
- Code and Data for "Long-context LLMs Struggle with Long In-context Learning" [TMLR 2025] ★110 · Updated 11 months ago
- ★72 · Updated last year
- [ICLR'25] Data and code for our paper "Why Does the Effective Context Length of LLMs Fall Short?" ★78 · Updated last year
- [ACL 2024 (Oral)] A Prospector of Long-Dependency Data for Large Language Models ★58 · Updated last year
- [EMNLP 2024 (Oral)] Leave No Document Behind: Benchmarking Long-Context LLMs with Extended Multi-Doc QA ★146 · Updated last month
- ★55 · Updated last year
- Model merging is a highly efficient approach for long-to-short reasoning. ★98 · Updated 3 months ago
- Official implementation of the paper "From Complex to Simple: Enhancing Multi-Constraint Complex Instruction Following Ability of Large L… ★53 · Updated last year
- Benchmarking Complex Instruction-Following with Multiple Constraints Composition (NeurIPS 2024 Datasets and Benchmarks Track) ★101 · Updated 11 months ago
- Codes and Data for Scaling Relationship on Learning Mathematical Reasoning with Large Language Models ★269 · Updated last year
- [ACL 2024] FollowBench: A Multi-level Fine-grained Constraints Following Benchmark for Large Language Models ★119 · Updated 7 months ago
- ★18 · Updated last year