SparkJiao / llama-pipeline-parallel
A prototype repo for hybrid training with pipeline parallelism and distributed data parallelism, with comments on core code snippets. Feel free to copy code and open discussions about any problems you encounter.
☆57 · Updated 2 years ago
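The repo's core idea, combining pipeline parallelism with distributed data parallelism, follows the usual DeepSpeed pattern of wrapping the model as a flat stack of layers in a `PipelineModule` and letting replicas of the whole pipeline form the data-parallel groups. Below is a minimal sketch of that general setup, not the repo's actual code; the `Block` layer, dimensions, stage count, and `ds_config.json` path are illustrative assumptions.

```python
import torch.nn as nn
import deepspeed
from deepspeed.pipe import PipelineModule, LayerSpec

class Block(nn.Module):
    """Stand-in transformer-style block (hypothetical, for illustration only)."""
    def __init__(self, dim: int = 1024):
        super().__init__()
        self.ff = nn.Sequential(
            nn.Linear(dim, 4 * dim),
            nn.GELU(),
            nn.Linear(4 * dim, dim),
        )

    def forward(self, x):
        return x + self.ff(x)

def build_engine():
    # Express the model as a flat list of LayerSpecs so DeepSpeed can cut it
    # into `num_stages` pipeline stages; full-pipeline replicas then form the
    # data-parallel groups, giving hybrid pipeline + data parallelism.
    layers = [LayerSpec(Block, 1024) for _ in range(24)]
    model = PipelineModule(
        layers=layers,
        num_stages=4,                    # pipeline-parallel degree
        partition_method="parameters",   # balance stages by parameter count
        loss_fn=nn.CrossEntropyLoss(),   # computed on the last stage
    )
    # Micro-batch size and gradient accumulation (and hence data-parallel
    # behaviour) are read from the DeepSpeed JSON config (path assumed here).
    engine, _, _, _ = deepspeed.initialize(model=model, config="ds_config.json")
    return engine

# Run under the deepspeed launcher so torch.distributed is initialized:
#   engine = build_engine()
#   loss = engine.train_batch(data_iter=iter(train_loader))  # fused fwd/bwd/step
```

With this layout, DeepSpeed derives the data-parallel degree as the world size divided by `num_stages`, and gradients are all-reduced only across corresponding stages of the pipeline replicas.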
Alternatives and similar repositories for llama-pipeline-parallel
Users interested in llama-pipeline-parallel are comparing it to the libraries listed below.
- ☆105 · Updated 2 months ago
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆78 · Updated last year
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings ☆163 · Updated last year
- [ICLR 2025] 🧬 RegMix: Data Mixture as Regression for Language Model Pre-training (Spotlight) ☆171 · Updated 7 months ago
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ☆256 · Updated 9 months ago
- [ACL 2025, Main Conference, Oral] Intuitive Fine-Tuning: Towards Simplifying Alignment into a Single Process ☆30 · Updated last year
- Train llm (bloom, llama, baichuan2-7b, chatglm3-6b) with deepspeed pipeline mode. Faster than zero/zero++/fsdp. ☆98 · Updated last year
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning ☆176 · Updated 3 months ago
- [ICML 2024] Selecting High-Quality Data for Training Language Models ☆188 · Updated last year
- ☆115 · Updated last year
- Counting-Stars (★) ☆83 · Updated 4 months ago
- [ACL-25] We introduce ScaleQuest, a scalable, novel and cost-effective data synthesis method to unleash the reasoning capability of LLMs. ☆68 · Updated 11 months ago
- [ICML'24] The official implementation of “Rethinking Optimization and Architecture for Tiny Language Models” ☆123 · Updated 8 months ago
- Homepage for ProLong (Princeton long-context language models) and paper "How to Train Long-Context Language Models (Effectively)" ☆229 · Updated 3 weeks ago
- Implementation of NAACL 2024 Outstanding Paper "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆149 · Updated 6 months ago
- ACL 2024 | LooGLE: Long Context Evaluation for Long-Context Language Models ☆184 · Updated last year
- Research without Re-search: Maximal Update Parametrization Yields Accurate Loss Prediction across Scales ☆32 · Updated 2 years ago
- [ICML'2024] Can AI Assistants Know What They Don't Know? ☆83 · Updated last year
- ☆105 · Updated 2 years ago
- Code for paper "Patch-Level Training for Large Language Models" ☆88 · Updated 10 months ago
- Code for Scaling Laws of RoPE-based Extrapolation ☆73 · Updated last year
- [ACL 2024] MT-Bench-101: A Fine-Grained Benchmark for Evaluating Large Language Models in Multi-Turn Dialogues ☆119 · Updated last year
- Implementations of the online merging optimizers proposed in "Online Merging Optimizers for Boosting Rewards and Mitigating Tax in Alignment" ☆77 · Updated last year
- Official repository for ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆75 · Updated 4 months ago
- [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning". ☆131 · Updated 11 months ago
- ☆18 · Updated 10 months ago
- Codes and Data for Scaling Relationship on Learning Mathematical Reasoning with Large Language Models ☆266 · Updated last year
- Rectified Rotary Position Embeddings ☆381 · Updated last year
- Official implementation of “Training on the Benchmark Is Not All You Need”. ☆36 · Updated 9 months ago
- ☆96 · Updated 2 years ago