rosewang2008 / language_modeling_via_stochastic_processes
Language modeling via stochastic processes. Oral @ ICLR 2022.
☆138 · Updated 2 years ago
Alternatives and similar repositories for language_modeling_via_stochastic_processes
Users interested in language_modeling_via_stochastic_processes are comparing it to the repositories listed below.
- ☆113 · Updated 3 years ago
- DiffusER: Discrete Diffusion via Edit-based Reconstruction (Reid, Hellendoorn & Neubig, 2022) · ☆54 · Updated 4 months ago
- Code for paper "CrossFit: A Few-shot Learning Challenge for Cross-task Generalization in NLP" (https://arxiv.org/abs/2104.08835) · ☆113 · Updated 3 years ago
- Official Code for the papers: "Controlled Text Generation as Continuous Optimization with Multiple Constraints" and "Gradient-based Const…" · ☆64 · Updated last year
- ☆70 · Updated 3 years ago
- ☆75 · Updated 2 years ago
- Semi-autoregressive Simplex-based Diffusion Language Model for Text Generation and Modular Control · ☆75 · Updated 3 years ago
- Repository for ACL 2022 paper "Mix and Match: Learning-free Controllable Text Generation using Energy Language Models" · ☆45 · Updated 3 years ago
- ☆36 · Updated last year
- Code for Editing Factual Knowledge in Language Models · ☆142 · Updated 3 years ago
- DEMix Layers for Modular Language Modeling · ☆54 · Updated 4 years ago
- Source code of LatentOps · ☆78 · Updated 2 years ago
- Code accompanying our papers on the "Generative Distributional Control" framework · ☆118 · Updated 2 years ago
- Distributional Generalization in NLP. A roadmap. · ☆88 · Updated 2 years ago
- Princeton NLP's pre-training library based on fairseq with DeepSpeed kernel integration 🚃 · ☆114 · Updated 3 years ago
- ☆98 · Updated 2 years ago
- [Paperlist] Awesome paper list of controllable text generation via latent auto-encoders. Contributions of any kind are welcome. · ☆51 · Updated 2 years ago
- [ACL 2023 Findings] What In-Context Learning “Learns” In-Context: Disentangling Task Recognition and Task Learning · ☆20 · Updated 2 years ago
- ☆177 · Updated last year
- ☆82 · Updated 2 years ago
- A library for finding knowledge neurons in pretrained transformer models. · ☆159 · Updated 3 years ago
- Knowledge Infused Decoding · ☆71 · Updated last year
- [NeurIPS'22 Spotlight] Data and code for our paper "CoNT: Contrastive Neural Text Generation" · ☆153 · Updated 2 years ago
- MEND: Fast Model Editing at Scale · ☆253 · Updated 2 years ago
- reStructured Pre-training · ☆98 · Updated 2 years ago
- An original implementation of "MetaICL: Learning to Learn In Context" by Sewon Min, Mike Lewis, Luke Zettlemoyer and Hannaneh Hajishirzi · ☆271 · Updated 2 years ago
- ☆101 · Updated 3 years ago
- The official repository for "Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts" (EMNLP 2022) · ☆104 · Updated 3 years ago
- Code for our paper: "GrIPS: Gradient-free, Edit-based Instruction Search for Prompting Large Language Models" · ☆57 · Updated 2 years ago
- [NeurIPS 2022] Generating Training Data with Language Models: Towards Zero-Shot Language Understanding · ☆69 · Updated 3 years ago