The implementation of DeBERTa
☆2,202 · Sep 29, 2023 · Updated 2 years ago
Alternatives and similar repositories for DeBERTa
Users interested in DeBERTa are comparing it to the libraries listed below.
- ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators ☆2,370 · Mar 23, 2024 · Updated last year
- Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer" ☆6,494 · Jan 14, 2026 · Updated 2 months ago
- State-of-the-Art Text Embeddings ☆18,427 · Mar 12, 2026 · Updated last week
- Facebook AI Research Sequence-to-Sequence Toolkit written in Python. ☆32,190 · Sep 30, 2025 · Updated 5 months ago
- [EMNLP 2021] SimCSE: Simple Contrastive Learning of Sentence Embeddings https://arxiv.org/abs/2104.08821 ☆3,643 · Oct 16, 2024 · Updated last year
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities ☆22,046 · Jan 23, 2026 · Updated last month
- Longformer: The Long-Document Transformer ☆2,189 · Feb 8, 2023 · Updated 3 years ago
- Dense Passage Retriever: a set of tools and models for the open-domain Q&A task ☆1,863 · Apr 6, 2023 · Updated 2 years ago
- This repository contains the code for "Exploiting Cloze Questions for Few-Shot Text Classification and Natural Language Inference" ☆1,626 · Jun 12, 2023 · Updated 2 years ago
- Multi-Task Deep Neural Networks for Natural Language Understanding ☆2,257 · Mar 7, 2024 · Updated 2 years ago
- Code for using and evaluating SpanBERT. ☆906 · Jul 25, 2023 · Updated 2 years ago
- Pretrain and finetune ELECTRA with fastai and huggingface. (Results of the paper replicated!) ☆331 · Jan 10, 2024 · Updated 2 years ago
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ☆4,739 · Jan 8, 2024 · Updated 2 years ago
- XLNet: Generalized Autoregressive Pretraining for Language Understanding ☆6,176 · May 28, 2023 · Updated 2 years ago
- Repository for the paper "Optimal Subarchitecture Extraction for BERT" ☆470 · Jun 22, 2022 · Updated 3 years ago
- ☆1,297 · Dec 15, 2022 · Updated 3 years ago
- Ongoing research training transformer models at scale ☆15,744 · Updated this week
- BertViz: Visualize Attention in Transformer Models ☆7,954 · Jan 8, 2026 · Updated 2 months ago
- An open-source NLP research library, built on PyTorch. ☆11,893 · Nov 22, 2022 · Updated 3 years ago
- ☆221 · Jun 8, 2020 · Updated 5 years ago
- [NeurIPS 2021] COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining ☆118 · Jul 25, 2023 · Updated 2 years ago
- PyTorch original implementation of Cross-lingual Language Model Pretraining. ☆2,927 · Feb 14, 2023 · Updated 3 years ago
- Pretrained language model and its related optimization techniques developed by Huawei Noah's Ark Lab. ☆3,155 · Jan 22, 2024 · Updated 2 years ago
- Unsupervised text tokenizer for Neural Network-based text generation. ☆11,700 · Mar 1, 2026 · Updated 2 weeks ago
- Code associated with the Don't Stop Pretraining ACL 2020 paper ☆540 · Nov 15, 2021 · Updated 4 years ago
- ALBERT: A Lite BERT for Self-supervised Learning of Language Representations ☆3,279 · Apr 14, 2023 · Updated 2 years ago
- Data augmentation for NLP ☆4,652 · Jun 24, 2024 · Updated last year
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆20,809 · Updated this week
- Understanding the Difficulty of Training Transformers ☆332 · May 31, 2022 · Updated 3 years ago
- ☆1,560 · Updated this week
- XtremeDistil framework for distilling/compressing massive multilingual neural network models to tiny and efficient models for AI at scale ☆157 · Dec 20, 2023 · Updated 2 years ago
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ☆41,869 · Updated this week
- SentAugment is a data augmentation technique for NLP that retrieves similar sentences from a large bank of sentences. It can be used in c… ☆359 · Feb 22, 2022 · Updated 4 years ago
- [ACL 2021] LM-BFF: Better Few-shot Fine-tuning of Language Models https://arxiv.org/abs/2012.15723 ☆730 · Aug 29, 2022 · Updated 3 years ago
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ☆9,563 · Updated this week
- ☆2,952 · Mar 9, 2026 · Updated last week
- Train transformer language models with reinforcement learning. ☆17,697 · Updated this week
- A Unified Library for Parameter-Efficient and Modular Transfer Learning ☆2,804 · Mar 1, 2026 · Updated 2 weeks ago
- A Heterogeneous Benchmark for Information Retrieval. Easy to use; evaluate your models across 15+ diverse IR datasets. ☆2,107 · Oct 16, 2025 · Updated 5 months ago