☆84 · Sep 9, 2023 · Updated 2 years ago
Alternatives and similar repositories for Megatron-DeepSpeed-Llama
Users interested in Megatron-DeepSpeed-Llama are comparing it to the libraries listed below.
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆19 · Jul 20, 2023 · Updated 2 years ago
- A LLaMA1/LLaMA2 Megatron implementation. ☆28 · Dec 13, 2023 · Updated 2 years ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆69 · Jul 20, 2023 · Updated 2 years ago
- everyone_can_pretrain_language_model ☆25 · Jan 13, 2021 · Updated 5 years ago
- FL-Tuning ☆12 · Jul 11, 2022 · Updated 3 years ago
- NTK-scaled version of ALiBi position encoding in Transformer. ☆69 · Aug 16, 2023 · Updated 2 years ago
- Best practices for training LLaMA models in Megatron-LM ☆663 · Jan 2, 2024 · Updated 2 years ago
- Train LLaMA on a single A100 80G node using 🤗 Transformers and 🚀 DeepSpeed pipeline parallelism ☆224 · Nov 21, 2023 · Updated 2 years ago
- Train LLMs (BLOOM, LLaMA, Baichuan2-7B, ChatGLM3-6B) with DeepSpeed pipeline mode. Faster than ZeRO/ZeRO++/FSDP. ☆97 · Feb 5, 2024 · Updated 2 years ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,437 · Mar 20, 2024 · Updated 2 years ago
- Distributed trainer for LLMs ☆590 · May 20, 2024 · Updated last year
- The official repo of Pai-Megatron-Patch for LLM & VLM large-scale training, developed by Alibaba Cloud. ☆1,551 · Dec 15, 2025 · Updated 3 months ago
- Towards Systematic Measurement for Long Text Quality ☆38 · Sep 5, 2024 · Updated last year
- Apply the Circular to the Pretraining Model ☆38 · Apr 25, 2022 · Updated 3 years ago
- PULSE: Pretrained and Unified Language Service Engine ☆492 · Dec 26, 2023 · Updated 2 years ago
- ☆15 · Aug 4, 2021 · Updated 4 years ago
- A pointer-generator network based on the BART language model for Chinese grammatical error correction ☆16 · Sep 8, 2022 · Updated 3 years ago
- Implementation of Chinese ChatGPT ☆289 · Nov 20, 2023 · Updated 2 years ago
- Code implementation of Baichuan Dynamic NTK-ALiBi: inference over longer texts without fine-tuning ☆49 · Aug 27, 2023 · Updated 2 years ago
- Code for "Improving Translation Faithfulness of Large Language Models via Augmenting Instructions" ☆12 · Aug 26, 2023 · Updated 2 years ago
- Code for "Scaling Laws of RoPE-based Extrapolation" ☆73 · Oct 16, 2023 · Updated 2 years ago
- Silk Road will be the dataset zoo for Luotuo (骆驼), an open-source Chinese LLM project founded by 陈启源 @ 华中师范大学 & 李鲁鲁 @ 商汤科技 & 冷子… ☆40 · Nov 5, 2023 · Updated 2 years ago
- Collaborative Training of Large Language Models in an Efficient Way ☆420 · Aug 28, 2024 · Updated last year
- ☆13 · Feb 21, 2025 · Updated last year
- ☆21 · Sep 12, 2023 · Updated 2 years ago
- Tencent pre-training framework in PyTorch & pre-trained model zoo ☆1,089 · Aug 4, 2024 · Updated last year
- ☆12 · Nov 10, 2023 · Updated 2 years ago
- ChatGLM-6B instruction learning | instruction data | Instruct ☆653 · Apr 10, 2023 · Updated 2 years ago
- ☆19 · May 11, 2024 · Updated last year
- ☆26 · Jun 5, 2023 · Updated 2 years ago
- Chinese-LLaMA 1&2 and Chinese-Falcon base models; ChatFlow Chinese dialogue model; Chinese OpenLLaMA model; NLP pretraining/instruction fine-tuning datasets ☆3,052 · Apr 14, 2024 · Updated last year
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long Context Transformers Model Training and Inference ☆660 · Jan 15, 2026 · Updated 2 months ago
- ☆21 · Oct 13, 2021 · Updated 4 years ago
- ☆14 · May 26, 2025 · Updated 10 months ago
- Research code based on PyTorch GPT-2 for pretraining with various data-parallelism strategies. ☆11 · Dec 16, 2022 · Updated 3 years ago
- Fast LLM training codebase with dynamic strategy selection [DeepSpeed + Megatron + FlashAttention + CUDA fusion kernels + compiler] ☆40 · Jan 4, 2024 · Updated 2 years ago
- Code for the EMNLP 2021 paper "CLIFF: Contrastive Learning for Improving Faithfulness and Factuality in Abstractive Summarization" ☆47 · Jan 17, 2022 · Updated 4 years ago
- Test-time compute in information retrieval ☆54 · Jul 8, 2025 · Updated 9 months ago
- ☆38 · Sep 21, 2020 · Updated 5 years ago