aiintelligentsystems / next-level-bert
☆16 · Updated 10 months ago
Alternatives and similar repositories for next-level-bert:
Users interested in next-level-bert are comparing it to the repositories listed below.
- Few-shot Learning with Auxiliary Data ☆27 · Updated last year
- Official implementation of "BERTs are Generative In-Context Learners" ☆27 · Updated last month
- ☆13 · Updated this week
- Embedding Recycling for Language Models ☆38 · Updated last year
- Simple and scalable tools for data-driven pretraining data selection. ☆22 · Updated 2 months ago
- Starbucks: Improved Training for 2D Matryoshka Embeddings ☆19 · Updated 2 months ago
- ☆26 · Updated 4 months ago
- SWIM-IR is a Synthetic Wikipedia-based Multilingual Information Retrieval training set with 28 million query-passage pairs spanning 33 languages ☆48 · Updated last year
- LTG-Bert ☆32 · Updated last year
- Code associated with the paper "Entropy-based Attention Regularization Frees Unintended Bias Mitigation from Lists" ☆48 · Updated 2 years ago
- INCOME: An Easy Repository for Training and Evaluation of Index Compression Methods in Dense Retrieval. Includes BPR and JPQ. ☆24 · Updated last year
- ☆11 · Updated last year
- ☆28 · Updated last year
- Neighborhood Contrastive Learning for Scientific Document Representations with Citation Embeddings (EMNLP 2022 paper) ☆67 · Updated 2 years ago
- Official implementation of "GPT or BERT: why not both?" ☆52 · Updated last month
- ☆11 · Updated 4 months ago
- Code for the baselines from the NeurIPS 2021 paper "DUE: End-to-End Document Understanding Benchmark" ☆36 · Updated 2 years ago
- Truly flash implementation of the DeBERTa disentangled attention mechanism. ☆45 · Updated 2 weeks ago
- Efficient Language Model Training through Cross-Lingual and Progressive Transfer Learning ☆30 · Updated 2 years ago
- Dataset and evaluation suite enabling LLM instruction-following for scientific literature understanding. ☆40 · Updated last month
- Repo to hold code and track issues for the collection of permissively licensed data ☆23 · Updated 2 weeks ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset. ☆93 · Updated 2 years ago
- Adding new tasks to T0 without catastrophic forgetting ☆33 · Updated 2 years ago
- Official codebase accompanying our ACL 2022 paper "RELiC: Retrieving Evidence for Literary Claims" (https://relic.cs.umass.edu). ☆20 · Updated 2 years ago
- Repository for the Multilingual-VQA task created during the HuggingFace JAX/Flax community week. ☆34 · Updated 3 years ago
- Repo for Aspire, a scientific document similarity model based on matching fine-grained aspects of scientific papers. ☆52 · Updated last year
- No Parameter Left Behind: How Distillation and Model Size Affect Zero-Shot Retrieval ☆29 · Updated 2 years ago
- Ranking of fine-tuned HF models as base models. ☆35 · Updated last year
- In-BoXBART: Get Instructions into Biomedical Multi-task Learning ☆14 · Updated 2 years ago
- 🌾 Universal, customizable and deployable fine-grained evaluation for text generation. ☆22 · Updated last year