hpcaitech / ColossalAI-Examples
Examples of training models with hybrid parallelism using ColossalAI
☆339 · Updated 2 years ago
Alternatives and similar repositories for ColossalAI-Examples:
Users interested in ColossalAI-Examples are comparing it to the libraries listed below.
- Scalable PaLM implementation in PyTorch ☆191 · Updated 2 years ago
- Large-scale model inference. ☆628 · Updated last year
- Performance benchmarking with ColossalAI ☆39 · Updated 2 years ago
- Fast Inference Solutions for BLOOM ☆562 · Updated 5 months ago
- LiBai (李白): A Toolbox for Large-Scale Distributed Parallel Training ☆400 · Updated 2 weeks ago
- ☆459 · Updated 9 months ago
- A unified tokenization tool for images, Chinese, and English. ☆151 · Updated 2 years ago
- Tutel MoE: An Optimized Mixture-of-Experts Implementation ☆790 · Updated this week
- minichatgpt - To Train ChatGPT In 5 Minutes ☆167 · Updated last year
- ☆214 · Updated last year
- Efficient Training (including pre-training and fine-tuning) for Big Models ☆579 · Updated 8 months ago
- Official implementation of TransNormerLLM: A Faster and Better LLM ☆243 · Updated last year
- [NeurIPS 2023] RRHF & Wombat ☆805 · Updated last year
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,382 · Updated last year
- Microsoft Automatic Mixed Precision Library ☆587 · Updated 6 months ago
- Sky Computing: Accelerating Geo-distributed Computing in Federated Learning ☆90 · Updated 2 years ago
- Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052 ☆471 · Updated last year
- Train LLaMA on a single A100 80G node using 🤗 transformers and 🚀 DeepSpeed pipeline parallelism ☆216 · Updated last year
- Efficient Inference for Big Models ☆580 · Updated 2 years ago
- Multi-language Enhanced LLaMA ☆301 · Updated last year
- GPTQ inference Triton kernel ☆300 · Updated last year
- Official repository for LongChat and LongEval ☆515 · Updated 10 months ago
- Simple implementation of using LoRA from the peft library to fine-tune ChatGLM-6B ☆85 · Updated 2 years ago
- Efficient AI Inference & Serving ☆469 · Updated last year
- ☆543 · Updated 3 months ago
- The CUDA version of the RWKV language model (https://github.com/BlinkDL/RWKV-LM) ☆221 · Updated 3 months ago
- Running BERT without Padding ☆471 · Updated 3 years ago
- ☆411 · Updated last year
- Best practice for training LLaMA models in Megatron-LM ☆646 · Updated last year
- Code used for sourcing and cleaning the BigScience ROOTS corpus ☆309 · Updated 2 years ago