bigscience-workshop / Megatron-DeepSpeed
Ongoing research training transformer language models at scale, including: BERT & GPT-2
☆1,338 · Updated 8 months ago
Related projects
Alternatives and complementary repositories for Megatron-DeepSpeed
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,893 · Updated last month
- Fast Inference Solutions for BLOOM ☆560 · Updated last month
- Central place for the engineering/scaling WG: documentation, SLURM scripts and logs, compute environment and data. ☆980 · Updated 3 months ago
- Best practice for training LLaMA models in Megatron-LM ☆628 · Updated 10 months ago
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers". ☆1,941 · Updated 7 months ago
- Tutel MoE: An Optimized Mixture-of-Experts Implementation ☆735 · Updated this week
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆1,904 · Updated this week
- Open Academic Research on Improving LLaMA to SOTA LLM ☆1,607 · Updated last year
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,149 · Updated last month
- [NeurIPS 2023] RRHF & Wombat ☆798 · Updated last year
- Crosslingual Generalization through Multitask Finetuning ☆516 · Updated last month
- Measuring Massive Multitask Language Understanding | ICLR 2021 ☆1,216 · Updated last year
- Distributed trainer for LLMs ☆545 · Updated 6 months ago
- Efficient Training (including pre-training and fine-tuning) for Big Models ☆564 · Updated 3 months ago
- LOMO: LOw-Memory Optimization ☆979 · Updated 4 months ago
- Expanding natural instructions ☆959 · Updated 11 months ago
- Code used for sourcing and cleaning the BigScience ROOTS corpus ☆306 · Updated last year
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs… ☆1,979 · Updated this week
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,353 · Updated 7 months ago
- Automatically split your PyTorch models on multiple GPUs for training & inference ☆626 · Updated 10 months ago
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" ☆1,624 · Updated last year
- Code for the ALiBi method for transformer language models (ICLR 2022); the bias computation is sketched after this list ☆507 · Updated last year
- Holistic Evaluation of Language Models (HELM), a framework to increase the transparency of language models (https://arxiv.org/abs/2211.09… ☆1,948 · Updated this week
- The official repo of Pai-Megatron-Patch for LLM & VLM large-scale training, developed by Alibaba Cloud. ☆721 · Updated this week
- A modular RL library to fine-tune language models to human preferences ☆2,213 · Updated 8 months ago
- DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models ☆1,008 · Updated 10 months ago
- Code for our EMNLP 2023 paper "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models" ☆1,079 · Updated 8 months ago
- A novel method to tune language models; code and datasets for the paper "GPT Understands, Too" ☆923 · Updated 2 years ago
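
For orientation on the ALiBi entry above: instead of positional embeddings, ALiBi adds a per-head linear penalty to the attention logits, so more distant keys score lower. A minimal sketch of that bias follows, assuming n_heads is a power of two (the paper's base slope schedule) and that a standard causal mask hides the upper triangle; alibi_slopes and alibi_bias are illustrative names, not the repo's API.

```python
import torch

def alibi_slopes(n_heads: int) -> torch.Tensor:
    # Head-specific slopes: a geometric sequence 2^(-8/n), 2^(-16/n), ...
    # This simple closed form assumes n_heads is a power of two.
    start = 2.0 ** (-8.0 / n_heads)
    return torch.tensor([start ** (i + 1) for i in range(n_heads)])

def alibi_bias(n_heads: int, seq_len: int) -> torch.Tensor:
    # distance[i, j] = j - i, which is <= 0 for the keys a causal model can
    # attend to, so the bias is 0 on the diagonal and grows more negative
    # with distance. Result shape: (n_heads, seq_len, seq_len).
    pos = torch.arange(seq_len)
    distance = pos[None, :] - pos[:, None]
    return alibi_slopes(n_heads)[:, None, None] * distance[None, :, :]

# Added to the raw attention logits before softmax (the causal mask then
# removes the positive upper-triangle entries):
#   scores = q @ k.transpose(-2, -1) / d_head**0.5 + alibi_bias(h, t)
```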