cimeister / typical-sampling
🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
☆82 · Updated 3 years ago
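The typical-sampling repository is a fork of 🤗 Transformers that implements locally typical sampling. As a minimal sketch of how the same decoding idea can be tried out — assuming the `typical_p` argument that mainline Transformers exposes in `generate()`, not code taken from this listing — one could write:

```python
# Sketch: locally typical sampling via Hugging Face Transformers' `typical_p` knob.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The meaning of life is", return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,     # typical_p only takes effect when sampling is enabled
    typical_p=0.95,     # mass of the locally typical set to keep at each step
    max_new_tokens=40,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```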
Alternatives and similar repositories for typical-sampling
Users interested in typical-sampling are comparing it to the libraries listed below.
- Official repository with code and data accompanying the NAACL 2021 paper "Hurdles to Progress in Long-form Question Answering" (https://a… ☆46 · Updated 2 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ☆93 · Updated 2 years ago
- Repo for ICML23 "Why do Nearest Neighbor Language Models Work?" ☆57 · Updated 2 years ago
- Apps built using Inspired Cognition's Critique. ☆58 · Updated 2 years ago
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP ☆58 · Updated 2 years ago
- Official code and model checkpoints for our EMNLP 2022 paper "RankGen - Improving Text Generation with Large Ranking Models" (https://arx… ☆136 · Updated last year
- ☆21 · Updated 3 years ago
- ☆67 · Updated 2 years ago
- Implementation of Marge, Pre-training via Paraphrasing, in Pytorch ☆76 · Updated 4 years ago
- Code for paper "Do Language Models Have Beliefs? Methods for Detecting, Updating, and Visualizing Model Beliefs" ☆28 · Updated 3 years ago
- Knowledge Infused Decoding ☆71 · Updated last year
- Pre-training BART in Flax on The Pile dataset ☆21 · Updated 3 years ago
- Conditionally Adaptive Multi-Task Learning: Improving Transfer Learning in NLP Using Fewer Parameters & Less Data ☆56 · Updated 3 years ago
- Code for the paper "UnNatural Language Inference" to appear at ACL 2021 (Long Paper) ☆36 · Updated 3 years ago
- ☆72 · Updated 2 years ago
- Implementation of the GBST block from the Charformer paper, in Pytorch ☆117 · Updated 3 years ago
- This repository contains the code for the paper "Prompting ELECTRA: Few-Shot Learning with Discriminative Pre-Trained Models". ☆48 · Updated 3 years ago
- ☆38 · Updated 11 months ago
- Implementation of COCO-LM, Correcting and Contrasting Text Sequences for Language Model Pretraining, in Pytorch ☆46 · Updated 4 years ago
- [AAAI 2024] Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following ☆79 · Updated 9 months ago
- ☆29 · Updated 3 years ago
- ☆97 · Updated last year
- Implementation of the paper 'Sentence Bottleneck Autoencoders from Transformer Language Models' ☆17 · Updated 3 years ago
- ☆44 · Updated 7 months ago
- A Benchmark for Robust, Multi-evidence, Multi-answer Question Answering ☆16 · Updated 2 years ago
- My explorations into editing the knowledge and memories of an attention network ☆35 · Updated 2 years ago
- Long-context pretrained encoder-decoder models ☆95 · Updated 2 years ago
- Implementation of Mixout with PyTorch ☆75 · Updated 2 years ago
- No Parameters Left Behind: Sensitivity Guided Adaptive Learning Rate for Training Large Transformer Models (ICLR 2022) ☆30 · Updated 3 years ago
- [TMLR'23] Contrastive Search Is What You Need For Neural Text Generation ☆119 · Updated 2 years ago