☆84 · Sep 9, 2023 · Updated 2 years ago
Alternatives and similar repositories for Megatron-DeepSpeed-Llama
Users that are interested in Megatron-DeepSpeed-Llama are comparing it to the libraries listed below
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆19 · Jul 20, 2023 · Updated 2 years ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆69 · Jul 20, 2023 · Updated 2 years ago
- A LLaMA1/LLaMA2 Megatron implementation ☆28 · Dec 13, 2023 · Updated 2 years ago
- NTK-scaled version of ALiBi position encoding in Transformer ☆69 · Aug 16, 2023 · Updated 2 years ago
- Train LLaMA on a single A100 80G node using 🤗 Transformers and 🚀 DeepSpeed pipeline parallelism ☆224 · Nov 21, 2023 · Updated 2 years ago
- Best practices for training LLaMA models in Megatron-LM ☆663 · Jan 2, 2024 · Updated 2 years ago
- Train LLMs (BLOOM, LLaMA, Baichuan2-7B, ChatGLM3-6B) with DeepSpeed pipeline mode. Faster than ZeRO/ZeRO++/FSDP. ☆97 · Feb 5, 2024 · Updated 2 years ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,437 · Mar 20, 2024 · Updated last year
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆2,233 · Aug 14, 2025 · Updated 6 months ago
- The official repo of Pai-Megatron-Patch for LLM & VLM large-scale training, developed by Alibaba Cloud ☆1,534 · Dec 15, 2025 · Updated 2 months ago
- Distributed trainer for LLMs ☆590 · May 20, 2024 · Updated last year
- Apply the Circular to the Pretraining Model ☆38 · Apr 25, 2022 · Updated 3 years ago
- Finetuning LLaMA with DeepSpeed ☆10 · Apr 14, 2023 · Updated 2 years ago
- ☆43 · Dec 15, 2023 · Updated 2 years ago
- Code implementation of Baichuan Dynamic NTK-ALiBi: longer-context inference without fine-tuning ☆49 · Aug 27, 2023 · Updated 2 years ago
- Built upon Megatron-DeepSpeed and the HuggingFace Trainer, EasyLLM reorganizes the code logic with a focus on usability. While enhancing … ☆49 · Sep 18, 2024 · Updated last year
- ☆12 · Nov 10, 2023 · Updated 2 years ago
- ☆15 · Aug 4, 2021 · Updated 4 years ago
- Code for "Improving Translation Faithfulness of Large Language Models via Augmenting Instructions" ☆12 · Aug 26, 2023 · Updated 2 years ago
- This project uses BERT and other pretrained models for multiple-choice machine reading comprehension (Multiple Choice MRC) ☆16 · Jun 20, 2021 · Updated 4 years ago
- Implementation of Chinese ChatGPT ☆289 · Nov 20, 2023 · Updated 2 years ago
- [AAAI 2026] SIFThinker: Spatially-Aware Image Focus for Visual Reasoning ☆23 · Dec 2, 2025 · Updated 3 months ago
- LLaMA inference for TencentPretrain ☆99 · Jun 8, 2023 · Updated 2 years ago
- Collaborative Training of Large Language Models in an Efficient Way ☆419 · Aug 28, 2024 · Updated last year
- Tencent Pre-training framework in PyTorch & Pre-trained Model Zoo ☆1,090 · Aug 4, 2024 · Updated last year
- ☆45 · Jan 21, 2025 · Updated last year
- ☆16 · Mar 30, 2024 · Updated last year
- ☆19 · May 11, 2024 · Updated last year
- Code for "Scaling Laws of RoPE-based Extrapolation" ☆73 · Oct 16, 2023 · Updated 2 years ago
- ☆21 · Oct 13, 2021 · Updated 4 years ago
- Transformer-related optimization, including BERT, GPT ☆17 · Jul 29, 2023 · Updated 2 years ago
- ☆21 · Sep 12, 2023 · Updated 2 years ago
- ACPBench: Reasoning about Action, Change, and Planning. A benchmark designed to evaluate the fundamental reasoning abilities in the dom… ☆32 · Feb 11, 2026 · Updated 3 weeks ago
- Code for EMNLP 2021 paper "CLIFF: Contrastive Learning for Improving Faithfulness and Factuality in Abstractive Summarization" ☆46 · Jan 17, 2022 · Updated 4 years ago
- Chinese-LLaMA 1&2 and Chinese-Falcon base models; ChatFlow Chinese dialogue model; Chinese OpenLLaMA model; NLP pretraining/instruction fine-tuning datasets ☆3,055 · Apr 14, 2024 · Updated last year
- Central place for the engineering/scaling WG: documentation, SLURM scripts and logs, compute environment and data ☆1,009 · Jul 29, 2024 · Updated last year
- PULSE: Pretrained and Unified Language Service Engine ☆494 · Dec 26, 2023 · Updated 2 years ago
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long-Context Transformer Model Training and Inference ☆644 · Jan 15, 2026 · Updated last month
- A pure C++ cross-platform LLM acceleration library, callable from Python; supports baichuan, glm, llama, and moss base models; runs ChatGLM-6B-class models smoothly on mobile, reaching 10,000+ tokens/s on a single GPU ☆42 · Aug 16, 2023 · Updated 2 years ago