GAIR-NLP / ProX
Official Repo for "Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale"
☆212 · Updated this week
Alternatives and similar repositories for ProX:
Users interested in ProX are comparing it to the repositories listed below.
- ☆255 · Updated 6 months ago
- Reformatted Alignment ☆114 · Updated 4 months ago
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ☆241 · Updated 2 months ago
- ☆89 · Updated 2 months ago
- ☆303 · Updated 4 months ago
- ☆139 · Updated 7 months ago
- [EMNLP 2024 (Oral)] Leave No Document Behind: Benchmarking Long-Context LLMs with Extended Multi-Doc QA ☆111 · Updated 3 months ago
- ☆123 · Updated 2 weeks ago
- ACL 2024 | LooGLE: Long Context Evaluation for Long-Context Language Models ☆175 · Updated 4 months ago
- [ACL 2024] AUTOACT: Automatic Agent Learning from Scratch for QA via Self-Planning ☆206 · Updated last month
- A series of technical reports on Slow Thinking with LLM ☆393 · Updated this week
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆451 · Updated 10 months ago
- [ACL 2024] LLM2LLM: Boosting LLMs with Novel Iterative Data Enhancement ☆173 · Updated 10 months ago
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning ☆139 · Updated 5 months ago
- ☆129 · Updated last month
- ☆120 · Updated 8 months ago
- Implementation of the paper "LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens" ☆125 · Updated 6 months ago
- Code implementation of synthetic continued pretraining ☆87 · Updated last month
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens": https://arxiv.org/abs/2402.13718 ☆306 · Updated 4 months ago
- ☆48 · Updated 11 months ago
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆135 · Updated 3 months ago
- Code and data for "Scaling Relationship on Learning Mathematical Reasoning with Large Language Models" ☆244 · Updated 5 months ago
- ☆89 · Updated 2 months ago
- ☆98 · Updated 2 months ago
- ☆81 · Updated 9 months ago
- A simple toolkit for benchmarking LLMs on mathematical reasoning tasks 🧮✨ ☆168 · Updated 9 months ago
- [NeurIPS 2024] Source code for xRAG: Extreme Context Compression for Retrieval-augmented Generation with One Token ☆114 · Updated 7 months ago
- We aim to provide the best references to search, select, and synthesize high-quality and large-quantity data for post-training your LLMs. ☆49 · Updated 4 months ago