HendrikStrobelt / LMdiff
A diff tool for language models
⭐42 · Updated last year
Alternatives and similar repositories for LMdiff:
Users that are interested in LMdiff are comparing it to the libraries listed below
- Google's BigBird (Jax/Flax & PyTorch) @ 🤗Transformers ⭐48 · Updated last year
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset. ⭐93 · Updated 2 years ago
- ⭐44 · Updated 3 months ago
- ⭐67 · Updated 2 years ago
- ⭐97 · Updated 2 years ago
- Code and Data for Evaluation WG ⭐41 · Updated 2 years ago
- Embedding Recycling for Language models ⭐38 · Updated last year
- Ranking of fine-tuned HF models as base models. ⭐35 · Updated last year
- M2D2: A Massively Multi-domain Language Modeling Dataset (EMNLP 2022) by Machel Reid, Victor Zhong, Suchin Gururangan, Luke Zettlemoyer ⭐55 · Updated 2 years ago
- ⭐74 · Updated 3 years ago
- Apps built using Inspired Cognition's Critique. ⭐58 · Updated 2 years ago
- Our open source implementation of MiniLMv2 (https://aclanthology.org/2021.findings-acl.188) ⭐60 · Updated last year
- ⭐19 · Updated last year
- Official code and model checkpoints for our EMNLP 2022 paper "RankGen - Improving Text Generation with Large Ranking Models" (https://arx…) ⭐136 · Updated last year
- Implementation of the paper 'Sentence Bottleneck Autoencoders from Transformer Language Models' ⭐17 · Updated 2 years ago
- The Python library with command line tools for interacting with Dynabench (https://dynabench.org/), such as uploading models. ⭐55 · Updated 2 years ago
- SWIM-IR is a Synthetic Wikipedia-based Multilingual Information Retrieval training set with 28 million query-passage pairs spanning 33 la… ⭐46 · Updated last year
- Shared code for training sentence embeddings with Flax / JAX ⭐27 · Updated 3 years ago
- Generate BERT vocabularies and pretraining examples from Wikipedias ⭐18 · Updated 4 years ago
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… ⭐34 · Updated last year
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP ⭐58 · Updated 2 years ago
- ⭐77 · Updated last year
- A highly sophisticated sequence-to-sequence model for code generation ⭐40 · Updated 3 years ago
- Open source library for few shot NLP ⭐77 · Updated last year
- Implementation of Marge, Pre-training via Paraphrasing, in Pytorch ⭐75 · Updated 4 years ago
- As good as new. How to successfully recycle English GPT-2 to make models for other languages (ACL Findings 2021) ⭐48 · Updated 3 years ago
- Minimum Bayes Risk Decoding for Hugging Face Transformers ⭐56 · Updated 9 months ago
- A case study of efficient training of large language models using commodity hardware. ⭐68 · Updated 2 years ago
- Code & data for EMNLP 2020 paper "MOCHA: A Dataset for Training and Evaluating Reading Comprehension Metrics". ⭐16 · Updated 2 years ago
- Parallel data preprocessing for NLP and ML. ⭐34 · Updated 4 months ago