BlinkDL / minGPT-tuned
A *tuned* minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training
☆119 · Updated 4 years ago
Alternatives and similar repositories for minGPT-tuned
Users interested in minGPT-tuned are comparing it to the repositories listed below.
- Code for the paper "Query-Key Normalization for Transformers"☆50Updated 4 years ago
- 基于Transformer的单模型、多尺度的VAE模型☆58Updated 4 years ago
- ☆67Updated last year
- Transformers at any scale☆42Updated last year
- Source code for paper: Knowledge Inheritance for Pre-trained Language Models☆38Updated 3 years ago
- ☆98Updated 2 years ago
- Large Scale Distributed Model Training strategy with Colossal AI and Lightning AI☆56Updated 2 years ago
- Implementation of COCO-LM, Correcting and Contrasting Text Sequences for Language Model Pretraining, in Pytorch☆46Updated 4 years ago
- A pytorch &keras implementation and demo of Fastformer.☆191Updated 3 years ago
- Implementation of Memory-Compressed Attention, from the paper "Generating Wikipedia By Summarizing Long Sequences"☆70Updated 2 years ago
- A variant of Transformer-XL where the memory is updated not with a queue, but with attention ☆49 · Updated 5 years ago
- RWKV-v2-RNN trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details. ☆67 · Updated 3 years ago
- An Experiment on Dynamic NTK Scaling RoPE ☆64 · Updated 2 years ago
- ☆24 · Updated 3 years ago
- FairSeq repo with Apollo optimizer ☆114 · Updated 2 years ago
- A PyTorch implementation of the paper "Synthesizer: Rethinking Self-Attention in Transformer Models" ☆73 · Updated 3 years ago
- Princeton NLP's pre-training library based on fairseq with DeepSpeed kernel integration 🚃 ☆114 · Updated 3 years ago
- Analytic solutions for logistic regression and single-layer softmax ☆12 · Updated 4 years ago
- Code for the paper "Learning to Break the Loop: Analyzing and Mitigating Repetitions for Neural Text Generation", published at NeurIPS 202… ☆48 · Updated 3 years ago
- Implementing SYNTHESIZER: Rethinking Self-Attention in Transformer Models using PyTorch ☆71 · Updated 5 years ago
- [ACL 2022] Structured Pruning Learns Compact and Accurate Models https://arxiv.org/abs/2204.00408 ☆198 · Updated 2 years ago
- A Transformer model based on the Gated Attention Unit (preview version) ☆98 · Updated 2 years ago
- Code for the paper "A Theoretical Analysis of the Repetition Problem in Text Generation" in AAAI 2021.☆57Updated 3 years ago
- Code for EMNLP 2020 paper CoDIR☆41Updated 3 years ago
- The multilingual variant of GLM, a general language model trained with autoregressive blank infilling objective☆62Updated 3 years ago
- 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.☆81Updated 3 years ago
- ☆95Updated last year
- Implementation of the retriever distillation procedure as outlined in the paper "Distilling Knowledge from Reader to Retriever"☆32Updated 5 years ago
- Source code for NAACL 2021 paper "TR-BERT: Dynamic Token Reduction for Accelerating BERT Inference"☆48Updated 3 years ago
- Code for ACL2023 paper: Pre-Training to Learn in Context☆106Updated last year