hpcaitech / ColossalAI-Pytorch-lightning
☆24 · Updated 3 years ago
Alternatives and similar repositories for ColossalAI-Pytorch-lightning
Users interested in ColossalAI-Pytorch-lightning are comparing it to the libraries listed below.
- Large Scale Distributed Model Training strategy with Colossal AI and Lightning AI ☆56 · Updated 2 years ago
- 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX. ☆81 · Updated 3 years ago
- Implementation of COCO-LM, Correcting and Contrasting Text Sequences for Language Model Pretraining, in Pytorch ☆46 · Updated 4 years ago
- [TMLR'23] Contrastive Search Is What You Need For Neural Text Generation ☆121 · Updated 2 years ago
- This repository contains the code for the paper "Prompting ELECTRA: Few-Shot Learning with Discriminative Pre-Trained Models". ☆48 · Updated 3 years ago
- Transformers at any scale ☆42 · Updated last year
- ☆39 · Updated last year
- DQ-BART: Efficient Sequence-to-Sequence Model via Joint Distillation and Quantization (ACL 2022) ☆50 · Updated 2 years ago
- Calculating the expected time for training an LLM. ☆38 · Updated 2 years ago
- A Structured Span Selector (NAACL 2022). A structured span selector with a WCFG for span selection tasks (coreference resolution, semanti… ☆21 · Updated 3 years ago
- PyTorch reimplementation of REALM and ORQA ☆22 · Updated 3 years ago
- Long-context pretrained encoder-decoder models ☆96 · Updated 3 years ago
- Source code for the NAACL 2021 paper "TR-BERT: Dynamic Token Reduction for Accelerating BERT Inference" ☆48 · Updated 3 years ago
- A minimal PyTorch Lightning OpenAI GPT w/ DeepSpeed Training! ☆113 · Updated 2 years ago
- Source code for the paper "Knowledge Inheritance for Pre-trained Language Models" ☆38 · Updated 3 years ago
- This repository is the official implementation of our EMNLP 2022 paper ELMER: A Non-Autoregressive Pre-trained Language Model for Efficie… ☆26 · Updated 3 years ago
- Code for the EMNLP 2021 paper "Improving Sequence-to-Sequence Pre-training via Sequence Span Rewriting" ☆17 · Updated 4 years ago
- Code for the paper 'Data-Efficient FineTuning' ☆28 · Updated 2 years ago
- FairSeq repo with Apollo optimizer ☆114 · Updated last year
- Code associated with the "Data Augmentation using Pre-trained Transformer Models" paper ☆52 · Updated 2 years ago
- ☆35 · Updated 2 years ago
- The code of the paper "Learning to Break the Loop: Analyzing and Mitigating Repetitions for Neural Text Generation" published at NeurIPS 202… ☆48 · Updated 3 years ago
- ☆21 · Updated 4 years ago
- Official Pytorch Implementation of Length-Adaptive Transformer (ACL 2021) ☆102 · Updated 5 years ago
- An Empirical Study On Contrastive Search And Contrastive Decoding For Open-ended Text Generation ☆27 · Updated last year
- Princeton NLP's pre-training library based on fairseq with DeepSpeed kernel integration ☆114 · Updated 3 years ago
- ☆99 · Updated 3 years ago
- Code and pre-trained models for the paper "Segatron: Segment-aware Transformer for Language Modeling and Understanding" ☆18 · Updated 3 years ago
- An Experiment on Dynamic NTK Scaling RoPE ☆64 · Updated 2 years ago
- KETOD: Knowledge-Enriched Task-Oriented Dialogue ☆32 · Updated 2 years ago