cimeister / typical-sampling
🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
☆82 · Updated 3 years ago
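The repository is a fork of 🤗 Transformers that adds locally typical sampling as a decoding strategy. The same strategy is exposed in upstream transformers through the `typical_p` option of `generate`, so a minimal sketch looks like the following (the model name and parameter values are illustrative assumptions, not taken from this fork):

```python
# Minimal sketch of typical sampling via upstream 🤗 transformers.
# The `typical_p` option implements locally typical sampling; the model
# ("gpt2") and hyperparameter values here are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The meaning of life is", return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,    # typical sampling is a stochastic decoding method
    typical_p=0.95,    # keep tokens whose information content is closest
                       # to the model's expected information (entropy)
    max_new_tokens=40,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Lowering `typical_p` prunes the candidate set more aggressively toward tokens whose surprisal is close to the conditional entropy, trading diversity for typicality.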
Alternatives and similar repositories for typical-sampling
Users interested in typical-sampling are comparing it to the libraries listed below.
- Repo for ICML23 "Why do Nearest Neighbor Language Models Work?" ☆58 · Updated 2 years ago
- ☆46 · Updated 3 years ago
- Official code and model checkpoints for our EMNLP 2022 paper "RankGen - Improving Text Generation with Large Ranking Models" (https://ar…) ☆136 · Updated 2 years ago
- ☆97 · Updated 2 years ago
- ☆29 · Updated 3 years ago
- PyTorch reimplementation of REALM and ORQA ☆22 · Updated 3 years ago
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP ☆58 · Updated 3 years ago
- A Benchmark for Robust, Multi-evidence, Multi-answer Question Answering ☆16 · Updated 2 years ago
- [TMLR'23] Contrastive Search Is What You Need For Neural Text Generation ☆119 · Updated 2 years ago
- Official repository with code and data accompanying the NAACL 2021 paper "Hurdles to Progress in Long-form Question Answering" (https://a…) ☆46 · Updated 3 years ago
- Implementation of COCO-LM, Correcting and Contrasting Text Sequences for Language Model Pretraining, in Pytorch ☆46 · Updated 4 years ago
- ☆100 · Updated 2 years ago
- ☆67 · Updated 2 years ago
- Apps built using Inspired Cognition's Critique. ☆58 · Updated 2 years ago
- Implementation of Marge, Pre-training via Paraphrasing, in Pytorch ☆76 · Updated 4 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ☆93 · Updated 2 years ago
- Implementation of the paper 'Sentence Bottleneck Autoencoders from Transformer Language Models' ☆17 · Updated 3 years ago
- Conditionally Adaptive Multi-Task Learning: Improving Transfer Learning in NLP Using Fewer Parameters & Less Data ☆57 · Updated 4 years ago
- This repository contains the code for the paper "Prompting ELECTRA: Few-Shot Learning with Discriminative Pre-Trained Models". ☆48 · Updated 3 years ago
- BLOOM+1: Adapting BLOOM model to support a new unseen language ☆73 · Updated last year
- Implementation of N-Grammer, augmenting Transformers with latent n-grams, in Pytorch ☆76 · Updated 2 years ago
- Transformers at any scale ☆41 · Updated last year
- Implementation of the GBST block from the Charformer paper, in Pytorch ☆118 · Updated 4 years ago
- A library for parameter-efficient and composable transfer learning for NLP with sparse fine-tunings. ☆74 · Updated 11 months ago
- Research code for the paper "How Good is Your Tokenizer? On the Monolingual Performance of Multilingual Language Models" ☆27 · Updated 3 years ago
- EMNLP 2021 - CTC: A Unified Framework for Evaluating Natural Language Generation ☆97 · Updated 2 years ago
- Code for ACL 2022 paper "Expanding Pretrained Models to Thousands More Languages via Lexicon-based Adaptation" ☆30 · Updated 3 years ago
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… ☆34 · Updated last year
- [AAAI 2024] Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following ☆79 · Updated 10 months ago
- Helper scripts and notes that were used while porting various nlp models ☆45 · Updated 3 years ago