hpcaitech / ColossalAI-Pytorch-lightning
☆24 · Updated 3 years ago
Alternatives and similar repositories for ColossalAI-Pytorch-lightning
Users that are interested in ColossalAI-Pytorch-lightning are comparing it to the libraries listed below
- Large Scale Distributed Model Training strategy with Colossal AI and Lightning AI ☆56 · Updated 2 years ago
- 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX. ☆81 · Updated 3 years ago
- ☆39 · Updated last year
- DQ-BART: Efficient Sequence-to-Sequence Model via Joint Distillation and Quantization (ACL 2022) ☆50 · Updated 2 years ago
- Transformers at any scale ☆42 · Updated 2 years ago
- This repository contains the code for the paper "Prompting ELECTRA: Few-Shot Learning with Discriminative Pre-Trained Models". ☆48 · Updated 3 years ago
- Implementation of COCO-LM, Correcting and Contrasting Text Sequences for Language Model Pretraining, in Pytorch ☆46 · Updated 4 years ago
- Calculating expected time for training LLMs ☆38 · Updated 2 years ago
- [TMLR'23] Contrastive Search Is What You Need For Neural Text Generation ☆123 · Updated 2 years ago
- Official Pytorch Implementation of Length-Adaptive Transformer (ACL 2021) ☆102 · Updated 5 years ago
- Source code for the paper "Knowledge Inheritance for Pre-trained Language Models" ☆38 · Updated 3 years ago
- Long-context pretrained encoder-decoder models ☆96 · Updated 3 years ago
- ☆21 · Updated 4 years ago
- [AAAI 2024] Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following ☆78 · Updated last year
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset. ☆96 · Updated 2 years ago
- KETOD: Knowledge-Enriched Task-Oriented Dialogue ☆32 · Updated 3 years ago
- A minimal PyTorch Lightning OpenAI GPT with DeepSpeed training! ☆112 · Updated 2 years ago
- An implementation of an autoregressive language model using an improved Transformer and DeepSpeed pipeline parallelism. ☆30 · Updated 3 weeks ago
- Method to improve inference time for BERT. This is an implementation of the paper titled "PoWER-BERT: Accelerating BERT Inference via Pro…" ☆62 · Updated 4 months ago
- Code and pre-trained models for the paper "Segatron: Segment-aware Transformer for Language Modeling and Understanding" ☆18 · Updated 3 years ago
- Code for the paper "Data-Efficient FineTuning" ☆28 · Updated 2 years ago
- Scalable PaLM implementation in PyTorch ☆190 · Updated 3 years ago
- [NeurIPS 2021] COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining ☆118 · Updated 2 years ago
- ☆35 · Updated 2 years ago
- Source code for the NAACL 2021 paper "TR-BERT: Dynamic Token Reduction for Accelerating BERT Inference" ☆48 · Updated 3 years ago
- FairSeq repo with the Apollo optimizer ☆114 · Updated 2 years ago
- A Structured Span Selector (NAACL 2022). A structured span selector with a WCFG for span selection tasks (coreference resolution, semanti… ☆21 · Updated 3 years ago
- This repository is the official implementation of our EMNLP 2022 paper "ELMER: A Non-Autoregressive Pre-trained Language Model for Efficie…" ☆26 · Updated 3 years ago
- Official code and model checkpoints for our EMNLP 2022 paper "RankGen: Improving Text Generation with Large Ranking Models" (https://arx… ☆138 · Updated 2 years ago
- ☆35 · Updated 2 years ago