hpcaitech / ColossalAI-Pytorch-lightning
☆ 24 · Updated 2 years ago
Alternatives and similar repositories for ColossalAI-Pytorch-lightning
Users interested in ColossalAI-Pytorch-lightning are comparing it to the libraries listed below.
- Large Scale Distributed Model Training strategy with Colossal AI and Lightning AI (see the sketch after this list) ☆56 · Updated 2 years ago
- 🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX. ☆81 · Updated 3 years ago
- DQ-BART: Efficient Sequence-to-Sequence Model via Joint Distillation and Quantization (ACL 2022) ☆50 · Updated 2 years ago
- ☆38 · Updated last year
- Calculating expected time for training an LLM. ☆38 · Updated 2 years ago
- [TMLR'23] Contrastive Search Is What You Need For Neural Text Generation ☆121 · Updated 2 years ago
- This repository contains the code for the paper "Prompting ELECTRA: Few-Shot Learning with Discriminative Pre-Trained Models". ☆48 · Updated 3 years ago
- Implementation of COCO-LM, Correcting and Contrasting Text Sequences for Language Model Pretraining, in PyTorch ☆46 · Updated 4 years ago
- Transformers at any scale ☆41 · Updated last year
- Implementation of an autoregressive language model using an improved Transformer and DeepSpeed pipeline parallelism. ☆32 · Updated 3 years ago
- Long-context pretrained encoder-decoder models ☆96 · Updated 2 years ago
- Train 🤗 transformers with DeepSpeed: ZeRO-2, ZeRO-3 ☆23 · Updated 4 years ago
- Source code for the NAACL 2021 paper "TR-BERT: Dynamic Token Reduction for Accelerating BERT Inference" ☆48 · Updated 3 years ago
- Official PyTorch implementation of Length-Adaptive Transformer (ACL 2021) ☆102 · Updated 4 years ago
- Method to improve inference time for BERT. This is an implementation of the paper titled "PoWER-BERT: Accelerating BERT Inference via Pro…" ☆62 · Updated 3 weeks ago
- Code for the paper "Data-Efficient FineTuning" ☆28 · Updated 2 years ago
- A minimal PyTorch Lightning OpenAI GPT with DeepSpeed training! ☆113 · Updated 2 years ago
- Code and pre-trained models for the paper "Segatron: Segment-aware Transformer for Language Modeling and Understanding" ☆18 · Updated 2 years ago
- A Benchmark for Robust, Multi-evidence, Multi-answer Question Answering ☆17 · Updated 2 years ago
- Code associated with the "Data Augmentation using Pre-trained Transformer Models" paper ☆52 · Updated 2 years ago
- ☆35 · Updated 2 years ago
- Scalable PaLM implementation in PyTorch ☆188 · Updated 2 years ago
- ☆21 · Updated 4 years ago
- No Parameter Left Behind: How Distillation and Model Size Affect Zero-Shot Retrieval ☆29 · Updated 3 years ago
- [NeurIPS 2021] COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining ☆117 · Updated 2 years ago
- Code for the paper "BERT Loses Patience: Fast and Robust Inference with Early Exit". ☆65 · Updated 4 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ☆95 · Updated 2 years ago
- [COLM 2024] Early Weight Averaging meets High Learning Rates for LLM Pre-training ☆17 · Updated last year
- The code of the paper "Learning to Break the Loop: Analyzing and Mitigating Repetitions for Neural Text Generation", published at NeurIPS 202… ☆47 · Updated 3 years ago
- [ICLR 2023] PyTorch code of Summarization Programs: Interpretable Abstractive Summarization with Neural Modular Trees ☆24 · Updated 2 years ago
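The headline item above pairs PyTorch Lightning with ColossalAI's ZeRO-style sharding. As a rough illustration only, here is a minimal sketch of how that strategy was typically wired up around the pytorch-lightning 1.8.x era; the `strategy="colossalai"` alias, the `HybridAdam` requirement, and the toy `TinyModule`/dataset are assumptions for this sketch, not code from any repository listed here.

```python
# Minimal sketch, NOT from the listed repos: training a LightningModule with
# the ColossalAI strategy roughly as it existed around pytorch-lightning 1.8.x.
# Assumptions: `colossalai` is installed, the strategy is registered under the
# "colossalai" alias, and it expects ColossalAI's HybridAdam optimizer.
import torch
import pytorch_lightning as pl
from torch.utils.data import DataLoader, TensorDataset
from colossalai.nn.optimizer import HybridAdam  # optimizer the strategy expects (assumed)

class TinyModule(pl.LightningModule):  # hypothetical toy model
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.cross_entropy(self.layer(x), y)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        # ColossalAI's CPU/GPU hybrid Adam; plain torch.optim.Adam was not
        # accepted by the strategy in the versions this sketch assumes.
        return HybridAdam(self.parameters(), lr=1e-3)

if __name__ == "__main__":
    data = TensorDataset(torch.randn(256, 32), torch.randint(0, 2, (256,)))
    trainer = pl.Trainer(
        accelerator="gpu",
        devices=2,               # shard parameter/optimizer state across GPUs
        precision=16,            # the strategy ran in fp16
        strategy="colossalai",   # ZeRO-style sharding via ColossalAI (assumed alias)
        max_epochs=1,
    )
    trainer.fit(TinyModule(), DataLoader(data, batch_size=16))
```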