princeton-nlp / CEPE
[ACL 2024] Long-Context Language Modeling with Parallel Encodings
☆157 · Updated last year
Alternatives and similar repositories for CEPE
Users interested in CEPE are comparing it to the repositories listed below.
- [ICLR 2025] 🧬 RegMix: Data Mixture as Regression for Language Model Pre-training (Spotlight) ☆168 · Updated 6 months ago
- [ICML 2024] Selecting High-Quality Data for Training Language Models ☆185 · Updated last year
- ☆105 · Updated 2 months ago
- [ACL 2025, Main Conference, Oral] Intuitive Fine-Tuning: Towards Simplifying Alignment into a Single Process ☆30 · Updated last year
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning ☆174 · Updated 2 months ago
- ACL 2024 | LooGLE: Long Context Evaluation for Long-Context Language Models ☆185 · Updated 11 months ago
- Homepage for ProLong (Princeton long-context language models) and paper "How to Train Long-Context Language Models (Effectively)" ☆221 · Updated 6 months ago
- [ACL 2024] MT-Bench-101: A Fine-Grained Benchmark for Evaluating Large Language Models in Multi-Turn Dialogues ☆112 · Updated last year
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing". ☆81 · Updated 8 months ago
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆78 · Updated last year
- ☆18 · Updated 9 months ago
- Official repository for ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" ☆170 · Updated 3 months ago
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ☆254 · Updated 8 months ago
- A Comprehensive Survey on Long Context Language Modeling ☆182 · Updated 2 months ago
- Counting-Stars (★) ☆83 · Updated 3 months ago
- Implementation of NAACL 2024 Outstanding Paper "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆149 · Updated 6 months ago
- Model merging is a highly efficient approach for long-to-short reasoning. ☆80 · Updated 3 months ago
- [ACL-25] We introduce ScaleQuest, a scalable, novel, and cost-effective data synthesis method to unleash the reasoning capability of LLMs. ☆64 · Updated 10 months ago
- Towards Systematic Measurement for Long Text Quality ☆37 · Updated last year
- ☆114 · Updated last year
- The repo for In-context Autoencoder ☆138 · Updated last year
- Code implementation of synthetic continued pretraining ☆127 · Updated 8 months ago
- Code and Data for "Long-context LLMs Struggle with Long In-context Learning" [TMLR 2025] ☆107 · Updated 6 months ago
- Benchmarking Complex Instruction-Following with Multiple Constraints Composition (NeurIPS 2024 Datasets and Benchmarks Track) ☆92 · Updated 6 months ago
- Repository of LV-Eval Benchmark ☆70 · Updated last year
- [ICLR'25] Data and code for our paper "Why Does the Effective Context Length of LLMs Fall Short?" ☆77 · Updated 9 months ago
- Repository for Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning ☆164 · Updated last year
- [ACL 2024 (Oral)] A Prospector of Long-Dependency Data for Large Language Models ☆56 · Updated last year
- [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning". ☆129 · Updated 10 months ago
- ☆51 · Updated 3 months ago