microsoft / POLAR
Experiments for "Automatic Calibration and Error Correction for Large Language Models via Pareto Optimal Self-Supervision"
☆14 · Updated 2 years ago
Alternatives and similar repositories for POLAR
Users interested in POLAR are comparing it to the libraries listed below
- The official repo of our research work "Interactive Editing for Text Summarization" ☆23 · Updated 2 years ago
- BotSIM - a data-efficient end-to-end Bot SIMulation toolkit for evaluation, diagnosis, and improvement of commercial chatbots ☆116 · Updated 8 months ago
- ☆31 · Updated 3 years ago
- Large Scale Distributed Model Training strategy with Colossal AI and Lightning AI ☆56 · Updated 2 years ago
- [TMLR'23] Contrastive Search Is What You Need For Neural Text Generation ☆123 · Updated 2 years ago
- Implementation of Z-BERT-A: a zero-shot pipeline for unknown intent detection ☆44 · Updated 2 years ago
- Repository for Findings of EMNLP 2020 "Context-aware Stand-alone Neural Spelling Correction" ☆18 · Updated 5 years ago
- ☆44 · Updated 4 years ago
- A unified versatile interface for dialogue datasets ☆18 · Updated 2 years ago
- [COLM 2024] Early Weight Averaging meets High Learning Rates for LLM Pre-training ☆18 · Updated last year
- This repository contains the implementation of the paper: "Span Classification with Structured Information for Disfluency Detection in Sp… ☆15 · Updated 2 years ago
- Inference script for Meta's LLaMA models using Hugging Face wrapper ☆110 · Updated 2 years ago
- ☆85 · Updated 2 years ago
- This is a new metric that can be used to evaluate faithfulness of text generated by LLMs. The work behind this repository can be found he… ☆31 · Updated 2 years ago
- Open source library for few shot NLP ☆78 · Updated 2 years ago
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP ☆58 · Updated 3 years ago
- Repo for "Smart Word Suggestions" (SWS) task and benchmark ☆20 · Updated 2 years ago
- Evaluating tool-augmented LLMs in conversation settings ☆88 · Updated last year
- ☆39 · Updated last year
- ☆97 · Updated 3 years ago
- BANG is a new pretraining model to Bridge the gap between Autoregressive (AR) and Non-autoregressive (NAR) Generation. AR and NAR generat… ☆28 · Updated 3 years ago
- [EMNLP 2023 Industry Track] A simple prompting approach that enables the LLMs to run inference in batches ☆77 · Updated last year
- Transformers at any scale ☆42 · Updated 2 years ago
- KETOD Knowledge-Enriched Task-Oriented Dialogue ☆32 · Updated 3 years ago
- 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX ☆81 · Updated 3 years ago
- Rough codebase for exploring initialization strategies for new word embeddings in pretrained LMs ☆19 · Updated 4 years ago
- ☆184 · Updated 2 years ago
- ☆46 · Updated 3 years ago
- Source code for "Sequence-Level Training for Non-Autoregressive Neural Machine Translation" ☆24 · Updated 4 years ago
- The code of the paper "Learning to Break the Loop: Analyzing and Mitigating Repetitions for Neural Text Generation" published at NeurIPS 202… ☆48 · Updated 3 years ago