DAMO-NLP-SG / CLEX
[ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models
★ 72, updated 7 months ago
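CLEX extends a model's usable context beyond its training length by scaling the position embeddings continuously at inference, so released checkpoints behave like ordinary causal LMs. Below is a minimal loading sketch, not an official recipe: the checkpoint id `DAMO-NLP-SG/CLEX-7B-16K` and the need for `trust_remote_code=True` are assumptions; check the repo README for the actual names.

```python
# Minimal sketch of running a CLEX checkpoint with Hugging Face transformers.
# Assumptions (not confirmed by this listing): the checkpoint id below and the
# need for trust_remote_code=True to pull in the repo's custom modeling code.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DAMO-NLP-SG/CLEX-7B-16K"  # hypothetical id; see the repo README

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,  # loads the repo's custom position-scaling code
    device_map="auto",
)

# Long inputs are the point: CLEX scales position embeddings continuously at
# inference rather than clamping them at the training window.
prompt = "Summarize the following report:\n" + open("long_report.txt").read()  # hypothetical file
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```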
Related projects
Alternatives and complementary repositories for CLEX
- 🧬 RegMix: Data Mixture as Regression for Language Model Pre-training (★ 87, updated last month)
- We introduce ScaleQuest, a novel, scalable, and cost-effective data synthesis method to unleash the reasoning capability of LLMs (★ 43, updated last week)
- [ICML 2024] Selecting High-Quality Data for Training Language Models (★ 141, updated 4 months ago)
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings (★ 142, updated 4 months ago)
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning (★ 123, updated 2 months ago)
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" (★ 111, updated last week)
- Intuitive Fine-Tuning: Towards Simplifying Alignment into a Single Process (★ 22, updated 3 months ago)
- Official repository for the paper "Weak-to-Strong Extrapolation Expedites Alignment" (★ 67, updated 5 months ago)
- Repo for the EMNLP'24 paper "Dual-Space Knowledge Distillation for Large Language Models" (★ 36, updated this week)
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" (★ 56, updated 8 months ago)
- Code for the paper "Patch-Level Training for Large Language Models" (★ 67, updated 3 months ago)
- [ACL 2024] MT-Bench-101: A Fine-Grained Benchmark for Evaluating Large Language Models in Multi-Turn Dialogues (★ 46, updated 3 months ago)
- Fantastic Data Engineering for Large Language Models (★ 49, updated 3 months ago)
- Towards Systematic Measurement for Long Text Quality (★ 28, updated 2 months ago)
- Unofficial implementation of AlpaGasus (★ 84, updated last year)
- Implementation of the online merging optimizers proposed in "Online Merging Optimizers for Boosting Rewards and Mitigating Tax in Alignment" (★ 66, updated 4 months ago)
- Data and code for our paper "Why Does the Effective Context Length of LLMs Fall Short?" (★ 59, updated this week)
- A prototype repo for hybrid training combining pipeline parallelism and distributed data parallelism, with comments on core code snippets. Feel free to… (★ 48, updated last year)
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper "R-Tuning: Instructing Large Language Models to Say 'I Don't… (★ 84, updated 3 months ago)
- One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning (★ 38, updated last year)
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling (★ 36, updated 8 months ago)
- [NeurIPS 2024] Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies (https://arxiv.org/abs/2407.13623) (★ 67, updated last month)
- [EMNLP 2024 (Oral)] Leave No Document Behind: Benchmarking Long-Context LLMs with Extended Multi-Doc QA (★ 89, updated last month)
- The official repository of the Omni-MATH benchmark (★ 45, updated last week)
- Code implementation of synthetic continued pretraining (★ 54, updated last month)
- ToolEyes: Fine-Grained Evaluation for Tool Learning Capabilities of Large Language Models in Real-world Scenarios (★ 62, updated 6 months ago)