SparkJiao / llama-pipeline-parallel
A prototype repo for hybrid training with pipeline parallelism and distributed data parallelism, with comments on the core code snippets. Feel free to copy the code and open discussions about any problems you encounter.
☆55 · Updated last year
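The hybrid layout the repo describes, pipeline parallelism combined with distributed data parallelism, amounts to arranging ranks on a 2-D grid: consecutive ranks form one pipeline, and ranks holding the same stage across pipelines form a data-parallel group whose gradients are all-reduced. A minimal sketch of that grid decomposition (illustrative only, not the repo's actual code; `build_parallel_groups` is a hypothetical helper):

```python
def build_parallel_groups(world_size: int, pp_size: int):
    """Partition ranks into pipeline-parallel and data-parallel groups.

    Ranks are laid out on a (dp_size x pp_size) grid: each row is one
    pipeline of pp_size stages; each column groups the replicas of the
    same stage, which would share one DDP all-reduce group.
    """
    assert world_size % pp_size == 0, "world size must be divisible by pipeline depth"
    dp_size = world_size // pp_size
    # Each row of the grid is one pipeline of pp_size consecutive ranks.
    pipeline_groups = [list(range(r * pp_size, (r + 1) * pp_size))
                       for r in range(dp_size)]
    # Each column collects the same stage across all pipelines.
    data_parallel_groups = [list(range(s, world_size, pp_size))
                            for s in range(pp_size)]
    return pipeline_groups, data_parallel_groups
```

With 8 GPUs and 4 pipeline stages this yields two pipelines `[0,1,2,3]` and `[4,5,6,7]`, and four data-parallel groups `[0,4]`, `[1,5]`, `[2,6]`, `[3,7]`; in a real run each group would be registered via `torch.distributed.new_group`.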
Alternatives and similar repositories for llama-pipeline-parallel
Users interested in llama-pipeline-parallel are comparing it to the libraries listed below.
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆77 · Updated last year
- [ICLR 2025] 🧬 RegMix: Data Mixture as Regression for Language Model Pre-training (Spotlight) ☆138 · Updated 3 months ago
- ☆100 · Updated 8 months ago
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings ☆153 · Updated 11 months ago
- Train LLMs (bloom, llama, baichuan2-7b, chatglm3-6b) with DeepSpeed pipeline mode. Faster than zero/zero++/fsdp. ☆95 · Updated last year
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning ☆157 · Updated 8 months ago
- [ACL'25] We introduce ScaleQuest, a scalable, novel and cost-effective data synthesis method to unleash the reasoning capability of LLMs. ☆63 · Updated 7 months ago
- Repository of LV-Eval Benchmark ☆65 · Updated 9 months ago
- [ICML'24] The official implementation of "Rethinking Optimization and Architecture for Tiny Language Models" ☆121 · Updated 4 months ago
- [ACL 2025, Main Conference] Intuitive Fine-Tuning: Towards Simplifying Alignment into a Single Process ☆28 · Updated 10 months ago
- Official repository for ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆73 · Updated 2 weeks ago
- Code for Scaling Laws of RoPE-based Extrapolation ☆73 · Updated last year
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ☆249 · Updated 5 months ago
- Homepage for ProLong (Princeton long-context language models) and paper "How to Train Long-Context Language Models (Effectively)" ☆184 · Updated 2 months ago
- [NeurIPS 2024] Fast Best-of-N Decoding via Speculative Rejection ☆45 · Updated 7 months ago
- CLongEval: A Chinese Benchmark for Evaluating Long-Context Large Language Models ☆40 · Updated last year
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing" ☆78 · Updated 4 months ago
- Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement fine-tuning (RFT) of large language models (…) ☆100 · Updated last week
- LongRecipe: Recipe for Efficient Long Context Generalization in Large Language Models ☆75 · Updated 7 months ago
- Repo for the EMNLP'24 Paper "Dual-Space Knowledge Distillation for Large Language Models". A general white-box KD framework for both same… ☆52 · Updated 6 months ago
- ☆63 · Updated 6 months ago
- [ICML 2024] Selecting High-Quality Data for Training Language Models ☆174 · Updated 11 months ago
- [ACL 2024] LooGLE: Long Context Evaluation for Long-Context Language Models ☆184 · Updated 7 months ago
- ☆47 · Updated 11 months ago
- ☆18 · Updated 6 months ago
- [ICML 2024] Can AI Assistants Know What They Don't Know? ☆80 · Updated last year
- Code for paper "Patch-Level Training for Large Language Models" ☆86 · Updated 6 months ago
- An Experiment on Dynamic NTK Scaling RoPE ☆64 · Updated last year
- [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning" ☆120 · Updated 7 months ago
- ☆79 · Updated 4 months ago