iliaschalkidis / flash-roberta
Hugging Face RoBERTa with Flash Attention 2
☆21 · Updated last year
Alternatives and similar repositories for flash-roberta:
Users interested in flash-roberta are comparing it to the repositories listed below.
- ☆21 · Updated 3 years ago
- Observe the slow deterioration of my mental sanity in the GitHub commit history ☆12 · Updated last year
- ☆55 · Updated 2 years ago
- Code for the paper "Getting the most out of your tokenizer for pre-training and domain adaptation" ☆15 · Updated last year
- SWIM-IR is a Synthetic Wikipedia-based Multilingual Information Retrieval training set with 28 million query-passage pairs spanning 33 la… ☆46 · Updated last year
- A framework that aims to wisely initialize unseen subword embeddings in PLMs for efficient large-scale continued pretraining ☆14 · Updated last year
- ☆17 · Updated 6 months ago
- Starbucks: Improved Training for 2D Matryoshka Embeddings ☆17 · Updated 2 weeks ago
- Dense hybrid representations for text retrieval ☆62 · Updated last year
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset ☆93 · Updated 2 years ago
- PyTorch reimplementation of the paper "SimCLS: A Simple Framework for Contrastive Learning of Abstractive Summarization" ☆16 · Updated 3 years ago
- Prompting Large Language Models to Generate Dense and Sparse Representations for Zero-Shot Document Retrieval ☆40 · Updated 3 months ago
- INCOME: An Easy Repository for Training and Evaluation of Index Compression Methods in Dense Retrieval. Includes BPR and JPQ. ☆22 · Updated last year
- Plug-and-play Search Interfaces with Pyserini and Hugging Face ☆32 · Updated last year
- No Parameter Left Behind: How Distillation and Model Size Affect Zero-Shot Retrieval ☆28 · Updated 2 years ago
- ☆46 · Updated 2 years ago
- [ACL 2023] Few-shot Reranking for Multi-hop QA via Language Model Prompting ☆27 · Updated last year
- Embedding Recycling for Language models ☆38 · Updated last year
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆42 · Updated last year
- ☆12 · Updated 4 months ago
- Code repo for "Model-Generated Pretraining Signals Improves Zero-Shot Generalization of Text-to-Text Transformers" (ACL 2023) ☆22 · Updated last year
- A package for fine-tuning pretrained NLP transformers using semi-supervised learning ☆15 · Updated 3 years ago
- ☆29 · Updated last year
- 🦮 Code and pretrained models for Findings of ACL 2022 paper "LaPraDoR: Unsupervised Pretrained Dense Retriever for Zero-Shot Text Retrie… ☆49 · Updated 2 years ago
- ☆27 · Updated 11 months ago
- ☆96 · Updated 2 years ago
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP ☆58 · Updated 2 years ago
- Repository containing the SPIN experiments on the DIBT 10k ranked prompts ☆24 · Updated 11 months ago
- ☆45 · Updated 2 years ago
- 🤗 Transformers: State-of-the-art Natural Language Processing for TensorFlow 2.0 and PyTorch ☆15 · Updated this week