aiintelligentsystems / next-level-bert
☆15 · Updated last year

Alternatives and similar repositories for next-level-bert

Users interested in next-level-bert are comparing it to the repositories listed below.
- Embedding Recycling for Language Models (☆39, updated 2 years ago)
- Code for the paper "Entropy-based Attention Regularization Frees Unintended Bias Mitigation from Lists" (☆49, updated 3 years ago)
- Repo for training MLMs, CLMs, or T5-style models on the OLM pretraining data; it should work with any Hugging Face text dataset. (☆93, updated 2 years ago)
- Neighborhood Contrastive Learning for Scientific Document Representations with Citation Embeddings (EMNLP 2022) (☆71, updated 2 years ago)
- Code for the SaGe subword tokenizer (EACL 2023) (☆25, updated 7 months ago)
- LTG-BERT (☆33, updated last year)
- Simple and scalable tools for data-driven pretraining data selection (☆24, updated last month)
- Official implementation of "BERTs are Generative In-Context Learners" (☆30, updated 4 months ago)
- Starbucks: Improved Training for 2D Matryoshka Embeddings (☆21, updated 3 weeks ago)
- SWIM-IR: a synthetic Wikipedia-based multilingual information retrieval training set with 28 million query-passage pairs spanning 33 languages (☆48, updated last year)
- Few-shot Learning with Auxiliary Data (☆29, updated last year)
- Official codebase for the ACL 2022 paper "RELiC: Retrieving Evidence for Literary Claims" (https://relic.cs.umass.edu) (☆20, updated 3 years ago)
- Easy ModernBERT fine-tuning and multi-task learning (☆59, updated 2 weeks ago)
- One-stop shop for running and fine-tuning transformer-based language models for retrieval (☆57, updated last week)
- Code for the paper "Getting the most out of your tokenizer for pre-training and domain adaptation" (☆19, updated last year)
- Official implementation of "GPT or BERT: why not both?" (☆55, updated this week)
- Aspire: a scientific document similarity model based on matching fine-grained aspects of scientific papers (☆54, updated last year)
- Truly flash implementation of the DeBERTa disentangled attention mechanism (☆62, updated 2 months ago)
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages (ACL 2022) (☆19, updated 3 years ago)
- INCOME: an easy repository for training and evaluating index compression methods in dense retrieval; includes BPR and JPQ (☆24, updated last year)
- A fast implementation of T5/UL2 in PyTorch using Flash Attention (☆105, updated 4 months ago)
- 🌾 Universal, customizable, and deployable fine-grained evaluation for text generation (☆23, updated last year)
- Google's BigBird (Jax/Flax & PyTorch) @ 🤗 Transformers (☆49, updated 2 years ago)
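Several of the repositories above concern embedding models, including one on Matryoshka-style embeddings, which are trained so that a prefix of the full vector remains a usable (lower-dimensional) embedding. As background only, here is a minimal sketch of the truncate-and-renormalize step; the function name and dimensions are illustrative and not taken from any listed repo:

```python
import numpy as np

def truncate_embedding(emb: np.ndarray, dim: int) -> np.ndarray:
    """Keep the first `dim` dimensions and re-normalize to unit length,
    the way Matryoshka-style embeddings are typically consumed."""
    sub = emb[:dim]
    norm = np.linalg.norm(sub)
    return sub / norm if norm > 0 else sub

# Hypothetical usage: shrink a 768-d vector to a 64-d one for cheaper search.
full = np.random.default_rng(0).normal(size=768)
small = truncate_embedding(full, 64)
```

Cosine similarity on the truncated vectors approximates similarity on the full ones only if the model was trained with a Matryoshka-style objective; truncating an ordinary embedding this way degrades quality.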