hpcaitech / ColossalAI-Pytorch-lightning
☆24 · Updated 2 years ago
Alternatives and similar repositories for ColossalAI-Pytorch-lightning
Users interested in ColossalAI-Pytorch-lightning are comparing it to the libraries listed below.
- Large Scale Distributed Model Training strategy with Colossal AI and Lightning AI ☆56 · Updated 2 years ago
- 🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX. ☆81 · Updated 3 years ago
- ☆38 · Updated last year
- DQ-BART: Efficient Sequence-to-Sequence Model via Joint Distillation and Quantization (ACL 2022) ☆50 · Updated 2 years ago
- Transformers at any scale ☆41 · Updated last year
- [TMLR'23] Contrastive Search Is What You Need For Neural Text Generation ☆121 · Updated 2 years ago
- Calculating the expected time for training an LLM. ☆38 · Updated 2 years ago
- Implementation of COCO-LM, Correcting and Contrasting Text Sequences for Language Model Pretraining, in PyTorch ☆46 · Updated 4 years ago
- Source code for the NAACL 2021 paper "TR-BERT: Dynamic Token Reduction for Accelerating BERT Inference" ☆47 · Updated 3 years ago
- This repository contains the code for the paper "Prompting ELECTRA: Few-Shot Learning with Discriminative Pre-Trained Models". ☆48 · Updated 3 years ago
- Long-context pretrained encoder-decoder models ☆96 · Updated 2 years ago
- A pre-trained model with multi-exit transformer architecture. ☆55 · Updated 2 years ago
- ☆98 · Updated 3 years ago
- Method to improve inference time for BERT. This is an implementation of the paper titled "PoWER-BERT: Accelerating BERT Inference via Pro…" ☆62 · Updated this week
- Code for the paper "BERT Loses Patience: Fast and Robust Inference with Early Exit". ☆65 · Updated 4 years ago
- Official PyTorch implementation of Length-Adaptive Transformer (ACL 2021) ☆102 · Updated 4 years ago
- KETOD: Knowledge-Enriched Task-Oriented Dialogue ☆32 · Updated 2 years ago
- Code and pre-trained models for the paper "Segatron: Segment-aware Transformer for Language Modeling and Understanding" ☆18 · Updated 2 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ☆95 · Updated 2 years ago
- Implementation of an autoregressive language model using an improved Transformer and DeepSpeed pipeline parallelism. ☆32 · Updated 3 years ago
- ☆21 · Updated 4 years ago
- Source code for the paper "Knowledge Inheritance for Pre-trained Language Models" ☆38 · Updated 3 years ago
- A minimal PyTorch Lightning OpenAI GPT with DeepSpeed training! ☆113 · Updated 2 years ago
- Code for the paper "Data-Efficient FineTuning" ☆28 · Updated 2 years ago
- ☆35 · Updated 2 years ago
- Train 🤗 transformers with DeepSpeed: ZeRO-2, ZeRO-3 ☆23 · Updated 4 years ago
- PyTorch implementation of the paper "Efficient Nearest Neighbor Language Models" (EMNLP 2021) ☆74 · Updated 3 years ago
- Code for the EMNLP 2021 paper "Improving Sequence-to-Sequence Pre-training via Sequence Span Rewriting" ☆17 · Updated 3 years ago
- An Experiment on Dynamic NTK Scaling RoPE ☆64 · Updated last year
- Official code and model checkpoints for our EMNLP 2022 paper "RankGen - Improving Text Generation with Large Ranking Models" (https://arx… ☆138 · Updated 2 years ago