asahi417 / lmppl
Calculates perplexity of a text with pre-trained language models. Supports masked LMs (e.g., DeBERTa), causal LMs (e.g., GPT-3), and encoder-decoder LMs (e.g., Flan-T5).
☆162 · Updated 2 months ago
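Several of the tools listed here (lmppl, GPTScore, BARTScore) score text by per-token likelihood. As background, perplexity is the exponential of the average negative log-probability per token; a minimal self-contained sketch of that formula (the token probabilities below are hypothetical, not produced by any model):

```python
import math

def perplexity(token_probs):
    # Perplexity = exp of the mean negative log-probability per token.
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# Hypothetical per-token probabilities a language model might assign.
probs = [0.25, 0.5, 0.125, 0.25]
print(perplexity(probs))  # ≈ 4.0
```

Lower perplexity means the model finds the text more predictable, which is why these libraries use it (or closely related log-likelihood scores) as a reference-free quality signal.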
Alternatives and similar repositories for lmppl
Users interested in lmppl are comparing it to the libraries listed below.
- ☆185 · Updated last month
- Source code of the paper "GPTScore: Evaluate as You Desire" · ☆255 · Updated 2 years ago
- Repository for the EMNLP 2022 paper "Towards a Unified Multi-Dimensional Evaluator for Text Generation" · ☆209 · Updated last year
- Multilingual Large Language Models Evaluation Benchmark · ☆129 · Updated last year
- What's In My Big Data (WIMBD) - a toolkit for analyzing large text datasets · ☆223 · Updated 9 months ago
- Token-level Reference-free Hallucination Detection · ☆96 · Updated 2 years ago
- ACL 2023 - AlignScore, a metric for factual consistency evaluation · ☆137 · Updated last year
- BARTScore: Evaluating Generated Text as Text Generation · ☆357 · Updated 3 years ago
- ☆77 · Updated last year
- MultilingualSIFT: Multilingual Supervised Instruction Fine-tuning · ☆94 · Updated 2 years ago
- A package to evaluate factuality of long-form generation. Original implementation of our EMNLP 2023 paper "FActScore: Fine-grained Atomic… · ☆373 · Updated 4 months ago
- ☆244 · Updated last year
- Implementation of "The Power of Scale for Parameter-Efficient Prompt Tuning" · ☆167 · Updated 3 years ago
- Code and model release for the paper "Task-aware Retrieval with Instructions" by Asai et al. · ☆163 · Updated last year
- A Multilingual Replicable Instruction-Following Model · ☆94 · Updated 2 years ago
- Contrastive decoding · ☆203 · Updated 2 years ago
- A Survey of Attributions for Large Language Models · ☆211 · Updated last year
- Finetune mistral-7b-instruct for sentence embeddings · ☆86 · Updated last year
- Tk-Instruct is a Transformer model that is tuned to solve many NLP tasks by following instructions. · ☆181 · Updated 2 years ago
- A framework for few-shot evaluation of autoregressive language models. · ☆105 · Updated 2 years ago
- The repository of HaluEval, a large-scale hallucination evaluation benchmark for Large Language Models. · ☆498 · Updated last year
- Scalable training for dense retrieval models. · ☆299 · Updated 2 months ago
- RARR: Researching and Revising What Language Models Say, Using Language Models · ☆48 · Updated 2 years ago
- GitHub repository for "RAGTruth: A Hallucination Corpus for Developing Trustworthy Retrieval-Augmented Language Models" · ☆196 · Updated 8 months ago
- Code, datasets, and checkpoints for the paper "Improving Passage Retrieval with Zero-Shot Question Generation" (EMNLP 2022) · ☆101 · Updated 2 years ago
- Vocabulary Trimming (VT) is a model compression technique that reduces a multilingual LM's vocabulary to a target language by deleting ir… · ☆43 · Updated 10 months ago
- [Data + code] ExpertQA: Expert-Curated Questions and Attributed Answers · ☆132 · Updated last year
- ☆138 · Updated 7 months ago
- ☆286 · Updated last year
- Codebase, data and models for the SummaC paper in TACL · ☆99 · Updated 7 months ago