Yuanhy1997 / HyPe
HyPe: Better Pre-trained Language Model Fine-tuning with Hidden Representation Perturbation [ACL 2023]
☆14 · Updated 2 years ago
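HyPe fine-tunes a pre-trained language model while injecting small uniform or Gaussian noise into the hidden representations passed between Transformer layers, which regularizes fine-tuning. The snippet below is a minimal sketch of that idea in PyTorch, not this repository's code; the function name, the `noise_type` options, and the `eps` scale are illustrative placeholders.

```python
# Minimal sketch of HyPe-style hidden representation perturbation (illustrative,
# not the repository's implementation). During fine-tuning, small random noise
# is added to the hidden states that flow between Transformer layers.
import torch

def perturb_hidden_states(hidden_states: torch.Tensor,
                          noise_type: str = "uniform",
                          eps: float = 1e-5) -> torch.Tensor:
    """Return hidden states with small random noise added (training-time only)."""
    if noise_type == "uniform":
        noise = (torch.rand_like(hidden_states) * 2.0 - 1.0) * eps  # U(-eps, eps)
    elif noise_type == "normal":
        noise = torch.randn_like(hidden_states) * eps               # N(0, eps^2)
    else:
        raise ValueError(f"unknown noise_type: {noise_type}")
    return hidden_states + noise
```

In this sketch the perturbation would be applied only while the model is in training mode, so inference is left unchanged.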
Alternatives and similar repositories for HyPe
Users interested in HyPe are comparing it to the libraries listed below
- ☆14 · Updated 2 years ago
- Transformers at any scale ☆42 · Updated 2 years ago
- [NeurIPS 2023] Sparse Modular Activation for Efficient Sequence Modeling ☆40 · Updated 2 years ago
- Embedding Recycling for Language models ☆38 · Updated 2 years ago
- Adding new tasks to T0 without catastrophic forgetting ☆33 · Updated 3 years ago
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆45 · Updated 4 months ago
- Fine-Tuning Pre-trained Transformers into Decaying Fast Weights ☆19 · Updated 3 years ago
- ☆20 · Updated last year
- Implementation of the model: "Reka Core, Flash, and Edge: A Series of Powerful Multimodal Language Models" in PyTorch ☆28 · Updated last week
- Starbucks: Improved Training for 2D Matryoshka Embeddings ☆22 · Updated 7 months ago
- Repo for "Zemi: Learning Zero-Shot Semi-Parametric Language Models from Multiple Tasks" ACL 2023 Findings ☆15 · Updated 2 years ago
- Repo for ICML23 "Why do Nearest Neighbor Language Models Work?" ☆59 · Updated 3 years ago
- SWIM-IR is a Synthetic Wikipedia-based Multilingual Information Retrieval training set with 28 million query-passage pairs spanning 33 la… ☆49 · Updated 2 years ago
- [ICLR 2023] PyTorch code of Summarization Programs: Interpretable Abstractive Summarization with Neural Modular Trees ☆24 · Updated 2 years ago
- This is a new metric that can be used to evaluate faithfulness of text generated by LLMs. The work behind this repository can be found he… ☆31 · Updated 2 years ago
- code for "Natural Language to Code Translation with Execution" ☆41 · Updated 3 years ago
- An Empirical Study On Contrastive Search And Contrastive Decoding For Open-ended Text Generation ☆27 · Updated last year
- InstructIR, a novel benchmark specifically designed to evaluate the instruction-following ability of information retrieval models. Our foc… ☆32 · Updated last year
- Codebase for Context-aware Meta-learned Loss Scaling (CaMeLS). https://arxiv.org/abs/2305.15076 ☆25 · Updated 2 years ago
- Minimum Description Length probing for neural network representations ☆20 · Updated last year
- Few-shot Learning with Auxiliary Data ☆31 · Updated 2 years ago
- ☆23 · Updated last year
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models"