GAIR-NLP / ProX
Official Repo for "Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale"
☆242 · Updated 2 weeks ago
Alternatives and similar repositories for ProX:
Users interested in ProX are comparing it to the repositories listed below.
- ☆276 · Updated 9 months ago
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ☆249 · Updated 4 months ago
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning ☆149 · Updated 7 months ago
- A Comprehensive Survey on Long Context Language Modeling ☆138 · Updated last month
- ACL 2024 | LooGLE: Long Context Evaluation for Long-Context Language Models ☆182 · Updated 6 months ago
- ☆314 · Updated 7 months ago
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens": https://arxiv.org/abs/2402.13718 ☆323 · Updated 7 months ago
- Reformatted Alignment ☆115 · Updated 7 months ago
- ☆149 · Updated this week
- ☆287 · Updated last month
- A highly capable 2.4B lightweight LLM using only 1T pre-training data with all details. ☆176 · Updated 3 weeks ago
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆459 · Updated last year
- Implementation of the paper "LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens" ☆136 · Updated 9 months ago
- ☆143 · Updated 10 months ago
- [EMNLP 2024 (Oral)] Leave No Document Behind: Benchmarking Long-Context LLMs with Extended Multi-Doc QA ☆123 · Updated 5 months ago
- InsTag: A Tool for Data Analysis in LLM Supervised Fine-tuning ☆256 · Updated last year
- ☆94 · Updated 4 months ago
- ☆162 · Updated last month
- ☆150 · Updated 4 months ago
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆139 · Updated 6 months ago
- ☆192 · Updated 2 months ago
- ☆115 · Updated last week
- Code implementation of synthetic continued pretraining ☆107 · Updated 4 months ago
- [ACL 2024] LLM2LLM: Boosting LLMs with Novel Iterative Data Enhancement ☆181 · Updated last year
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings ☆154 · Updated 10 months ago
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" ☆177 · Updated last month
- Fantastic Data Engineering for Large Language Models ☆87 · Updated 4 months ago
- ☆46 · Updated 10 months ago
- An Open Math Pre-training Dataset with 370B Tokens ☆78 · Updated last month
- Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning ☆175 · Updated last month