GAIR-NLP / ProX
Official repo for "Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale"
☆226 · Updated last month
Alternatives and similar repositories for ProX:
Users interested in ProX are comparing it to the repositories listed below.
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ☆241 · Updated 3 months ago
- Reformatted Alignment ☆114 · Updated 5 months ago
- [ACL 2024] LooGLE: Long Context Evaluation for Long-Context Language Models ☆179 · Updated 5 months ago
- [EMNLP 2024 (Oral)] Leave No Document Behind: Benchmarking Long-Context LLMs with Extended Multi-Doc QA ☆114 · Updated 4 months ago
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning ☆146 · Updated 6 months ago
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens": https://arxiv.org/abs/2402.13718 ☆313 · Updated 5 months ago
- Code implementation of synthetic continued pretraining ☆94 · Updated 2 months ago
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆453 · Updated last year
- Generative Judge for Evaluating Alignment ☆230 · Updated last year
- [ACL 2024] LLM2LLM: Boosting LLMs with Novel Iterative Data Enhancement ☆177 · Updated 11 months ago
- Implementation of the paper "LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens" ☆128 · Updated 7 months ago
- [NeurIPS 2024] Source code for xRAG: Extreme Context Compression for Retrieval-augmented Generation with One Token ☆126 · Updated 8 months ago
- A simple toolkit for benchmarking LLMs on mathematical reasoning tasks. 🧮✨ ☆184 · Updated 10 months ago
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆136 · Updated 4 months ago
- InsTag: A Tool for Data Analysis in LLM Supervised Fine-tuning ☆244 · Updated last year
- Code and data for "Scaling Relationship on Learning Mathematical Reasoning with Large Language Models" ☆248 · Updated 6 months ago
- A highly capable, lightweight 2.4B LLM trained on only 1T tokens of pre-training data, with all training details released. ☆161 · Updated last week