MiuLab / PE-Study
Study of Pre-Trained Positional Embeddings
☆16 · Updated 5 years ago
Alternatives and similar repositories for PE-Study
Users interested in PE-Study are comparing it to the libraries listed below.
- Automatic metrics for GEM tasks ☆67 · Updated 3 years ago
- NAACL 2022: Can Rationalization Improve Robustness? https://arxiv.org/abs/2204.11790 ☆27 · Updated 3 years ago
- Language modeling via stochastic processes. Oral @ ICLR 2022. ☆138 · Updated 2 years ago
- Code accompanying our papers on the "Generative Distributional Control" framework ☆118 · Updated 3 years ago
- Code for Editing Factual Knowledge in Language Models ☆142 · Updated 3 years ago
- ☆177 · Updated last year
- DEMix Layers for Modular Language Modeling ☆54 · Updated 4 years ago
- code associated with ACL 2021 DExperts paper ☆118 · Updated 2 years ago
- ☆113 · Updated 3 years ago
- MEND: Fast Model Editing at Scale ☆253 · Updated 2 years ago
- Code for paper "CrossFit : A Few-shot Learning Challenge for Cross-task Generalization in NLP" (https://arxiv.org/abs/2104.08835)☆113Updated 3 years ago
- [ICLR 2021] "InfoBERT: Improving Robustness of Language Models from An Information Theoretic Perspective" by Boxin Wang, Shuohang Wang, Y…☆85Updated 2 years ago
- ☆75Updated 2 years ago
- Official Code for the papers: "Controlled Text Generation as Continuous Optimization with Multiple Constraints" and "Gradient-based Const…☆64Updated last year
- Code for our paper: "GrIPS: Gradient-free, Edit-based Instruction Search for Prompting Large Language Models"☆57Updated 2 years ago
- [ICLR 2023] Code for our paper "Selective Annotation Makes Language Models Better Few-Shot Learners"☆111Updated 2 years ago
- ☆32Updated last year
- ☆101Updated 3 years ago
- Implementation of "The Power of Scale for Parameter-Efficient Prompt Tuning"☆168Updated 4 years ago
- An original implementation of "MetaICL: Learning to Learn In Context" by Sewon Min, Mike Lewis, Luke Zettlemoyer and Hannaneh Hajishirzi ☆271 · Updated 2 years ago
- A library for finding knowledge neurons in pretrained transformer models. ☆158 · Updated 3 years ago
- The official code of EMNLP 2022, "SCROLLS: Standardized CompaRison Over Long Language Sequences". ☆69 · Updated last year
- ☆50 · Updated 2 years ago
- PyTorch code for the RetoMaton paper: "Neuro-Symbolic Language Modeling with Automaton-augmented Retrieval" (ICML 2022) ☆74 · Updated 3 years ago
- [ICML 2023] Exploring the Benefits of Training Expert Language Models over Instruction Tuning ☆98 · Updated 2 years ago
- This repository contains the code for "Self-Diagnosis and Self-Debiasing: A Proposal for Reducing Corpus-Based Bias in NLP". ☆88 · Updated 4 years ago
- ☆130 · Updated 3 years ago
- ☆50 · Updated 4 years ago
- The accompanying code for "Transformer Feed-Forward Layers Are Key-Value Memories". Mor Geva, Roei Schuster, Jonathan Berant, and Omer Le… ☆99 · Updated 4 years ago
- [ICML 2023] Code for our paper "Compositional Exemplars for In-context Learning". ☆103 · Updated 2 years ago