SIC98 / GPT2-python-code-generator
GPT-2 fine-tuning with 🤗 transformers
☆27 · Updated 3 years ago
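The repo's premise, fine-tuning GPT-2 for code generation with 🤗 transformers, can be sketched roughly as below. This is a minimal illustration, not this repo's actual training script: the tiny config and random byte-level batch are stand-ins so the example runs without downloading pretrained weights or a tokenizer.

```python
import torch
from transformers import GPT2Config, GPT2LMHeadModel

torch.manual_seed(0)

# Tiny GPT-2 built from scratch so the sketch is self-contained;
# a real run would use GPT2LMHeadModel.from_pretrained("gpt2") instead.
config = GPT2Config(vocab_size=256, n_positions=64, n_embd=64, n_layer=2, n_head=2)
model = GPT2LMHeadModel(config)

# Toy "code" corpus: random byte-level token ids standing in for tokenized source files.
batch = torch.randint(0, 256, (4, 32))

model.train()
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4)

first_loss = None
for _ in range(3):
    # Passing labels=input_ids makes the model compute the causal-LM loss internally
    # (it shifts the labels by one position for next-token prediction).
    out = model(input_ids=batch, labels=batch)
    if first_loss is None:
        first_loss = out.loss.item()
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

After training, `model.generate(prompt_ids, max_new_tokens=...)` would sample completions; the repo presumably does the same with Python source as training data.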
Related projects:
- A basic and simple tool for code auto completion · ☆57 · Updated last month
- Code Generator · ☆23 · Updated last year
- Implementation of an autoregressive language model using an improved Transformer and DeepSpeed pipeline parallelism. · ☆31 · Updated 2 years ago
- A Streamlit app running the GPT-2 language model for text classification, built with PyTorch, Transformers, and AWS SageMaker. · ☆38 · Updated 2 years ago
- Source code for the GPT-2 story generation models in the EMNLP 2020 paper "STORIUM: A Dataset and Evaluation Platform for Human-in-the-Lo… · ☆38 · Updated 8 months ago
- Large Scale Distributed Model Training strategy with Colossal AI and Lightning AI · ☆58 · Updated last year
- Code for the NLP4Prog workshop paper "Reading StackOverflow Encourages Cheating: Adding Question Text Improves Extractive Code Generation" · ☆21 · Updated 3 years ago
- Observe the slow deterioration of my mental sanity in the GitHub commit history · ☆13 · Updated last year
- Codebase for the Medium article on fine-tuning GPT2 for text generation · ☆68 · Updated 4 years ago
- This repository contains the code for the paper "Prompting ELECTRA: Few-Shot Learning with Discriminative Pre-Trained Models". · ☆45 · Updated 2 years ago
- Magnum-NLC2CMD is the winning solution for the NeurIPS 2020 NLC2CMD challenge. · ☆31 · Updated last year
- Official code release for the paper "Coder Reviewer Reranking for Code Generation". · ☆41 · Updated last year
- The source code of "Language Models are Few-shot Multilingual Learners" (MRL @ EMNLP 2021) · ☆52 · Updated 2 years ago
- 🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX. · ☆82 · Updated 2 years ago
- This repository is the official implementation of our paper "MVP: Multi-task Supervised Pre-training for Natural Language Generation". · ☆68 · Updated last year
- Implementation of TableFormer, Robust Transformer Modeling for Table-Text Encoding, in PyTorch · ☆35 · Updated 2 years ago
- Code associated with the paper "Data Augmentation using Pre-trained Transformer Models" · ☆50 · Updated last year
- [AAAI 2024] Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following · ☆79 · Updated last week
- All my experiments with the various transformers and various transformer frameworks available · ☆14 · Updated 3 years ago
- Prompt tuning toolkit for GPT-2 and GPT-Neo · ☆88 · Updated 2 years ago
- Implementation of N-Grammer, augmenting Transformers with latent n-grams, in PyTorch · ☆72 · Updated last year
- This project shows how to derive the total number of training tokens in a large text dataset from 🤗 datasets with Apache Beam and Data… · ☆23 · Updated last year
- The jiant toolkit for general-purpose text understanding models · ☆21 · Updated 3 years ago
- Transformers at any scale · ☆39 · Updated 8 months ago