chrishokamp / zero-shot-ner-fine-tuning
Zero-shot NER fine-tuning
☆13 · Updated 10 months ago
Alternatives and similar repositories for zero-shot-ner-fine-tuning
Users interested in zero-shot-ner-fine-tuning are comparing it to the libraries listed below.
- LTG-Bert ☆34 · Updated 2 years ago
- A tiny BERT for low-resource monolingual models ☆31 · Updated last month
- This repository contains the code for the paper 'PARM: Paragraph Aggregation Retrieval Model for Dense Document-to-Document Retrieval' pu… ☆41 · Updated 4 years ago
- zero-vocab or low-vocab embeddings ☆18 · Updated 3 years ago
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP ☆58 · Updated 3 years ago
- Training and evaluation code for the paper "Headless Language Models: Learning without Predicting with Contrastive Weight Tying" (https:/… ☆28 · Updated last year
- Implementation of Z-BERT-A: a zero-shot pipeline for unknown intent detection. ☆44 · Updated 2 years ago
- Code for WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models. ☆87 · Updated last year
- Experiments for XLM-V Transformers Integration ☆13 · Updated 2 years ago
- PyTorch-IE: State-of-the-art Information Extraction in PyTorch ☆77 · Updated 4 months ago
- GLADIS: A General and Large Acronym Disambiguation Benchmark (EACL 23) ☆18 · Updated last year
- Library for fast text representation and classification. ☆31 · Updated 2 years ago
- Semantically Structured Sentence Embeddings ☆71 · Updated last year
- Load What You Need: Smaller Multilingual Transformers for Pytorch and TensorFlow 2.0. ☆105 · Updated 3 years ago
- Repository with code for MaChAmp: https://aclanthology.org/2021.eacl-demos.22/ ☆90 · Updated 2 weeks ago
- Truly flash implementation of the DeBERTa disentangled attention mechanism. ☆67 · Updated last week
- Using short models to classify long texts ☆21 · Updated 2 years ago
- RaKUn 2.0 - A fast keyword detection algorithm ☆70 · Updated 5 months ago
- Bi-encoder entity linking architecture ☆51 · Updated last year
- Code for pre-training CharacterBERT models (as well as BERT models). ☆34 · Updated 4 years ago
- [EMNLP'23] Official Code for "FOCUS: Effective Embedding Initialization for Monolingual Specialization of Multilingual Models" ☆36 · Updated 7 months ago
- Pre-train Static Word Embeddings ☆94 · Updated 4 months ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ☆96 · Updated 2 years ago
- Simply, faster, sentence-transformers ☆143 · Updated last year
- ParaNames: A multilingual resource for parallel names ☆39 · Updated last year
- Official implementation of "GPT or BERT: why not both?" ☆63 · Updated 6 months ago
- No Parameter Left Behind: How Distillation and Model Size Affect Zero-Shot Retrieval ☆29 · Updated 3 years ago
- ☆52 · Updated 2 years ago
- ☆89 · Updated 9 months ago
- Code and data from the paper "BERT Got a Date: Introducing Transformers to Temporal Tagging" ☆69 · Updated 3 years ago