chandar-lab / NeoBERT
☆97 · Updated 6 months ago
Alternatives and similar repositories for NeoBERT
Users interested in NeoBERT are comparing it to the libraries listed below.
- ☆89 · Updated 5 months ago
- Truly flash implementation of the DeBERTa disentangled attention mechanism. ☆67 · Updated 2 months ago
- Minimal PyTorch implementation of BM25 (with sparse tensors) — see the sketch after this list. ☆104 · Updated last month
- ☆53 · Updated 10 months ago
- Crispy reranking models by Mixedbread ☆42 · Updated 2 months ago
- Official implementation of "GPT or BERT: why not both?" ☆63 · Updated 4 months ago
- A toolkit implementing advanced methods to transfer models and model knowledge across tokenizers. ☆57 · Updated 5 months ago
- A fast implementation of T5/UL2 in PyTorch using Flash Attention ☆112 · Updated last month
- Truly flash T5 realization! ☆71 · Updated last year
- ☆55 · Updated 10 months ago
- A massively multilingual modern encoder language model ☆115 · Updated 2 months ago
- ☆48 · Updated 2 months ago
- Pre-train Static Word Embeddings ☆92 · Updated 3 months ago
- ☆39 · Updated last year
- Fast, Modern, and Low Precision PyTorch Optimizers ☆116 · Updated 3 months ago
- ☆48 · Updated last year
- Nearly Inference Free Embeddings: make your RAG queries 500x faster ☆66 · Updated 3 weeks ago
- Training and evaluation code for the paper "Headless Language Models: Learning without Predicting with Contrastive Weight Tying" (https:/… ☆28 · Updated last year
- Hugging Face Inference Toolkit used to serve transformers, sentence-transformers, and diffusers models. ☆88 · Updated 3 weeks ago
- ☆86 · Updated this week
- Datamodels for Hugging Face tokenizers ☆86 · Updated last week
- Code for training & evaluating Contextual Document Embedding models ☆201 · Updated 7 months ago
- Using open-source LLMs to build synthetic datasets for direct preference optimization ☆71 · Updated last year
- Model implementation for the contextual embeddings project ☆37 · Updated 6 months ago
- Fine-tune ModernBERT on a large dataset with custom tokenizer training ☆74 · Updated last month
- PyLate efficient inference engine ☆68 · Updated 3 months ago
- Supercharge huggingface transformers with model parallelism. ☆77 · Updated 4 months ago
- Efficient few-shot learning with cross-encoders. ☆60 · Updated last year
- Improving Text Embedding of Language Models Using Contrastive Fine-tuning ☆66 · Updated last year
- State-of-the-art paired encoder and decoder models (17M-1B params) ☆53 · Updated 4 months ago
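For context on the BM25-with-sparse-tensors entry above, here is a minimal, hypothetical sketch of how BM25 scoring can be expressed with PyTorch sparse tensors. The function names (`build_bm25_matrix`, `score_query`) and the toy data are illustrative assumptions, not the linked repository's API.

```python
# Minimal sketch of BM25 scoring with PyTorch sparse tensors (illustrative only;
# not the linked repository's API). Documents are rows of a term-frequency matrix.
import torch

def build_bm25_matrix(doc_term_counts: torch.Tensor, k1: float = 1.5, b: float = 0.75) -> torch.Tensor:
    """Turn a dense (n_docs, vocab) term-frequency matrix into a sparse BM25 weight matrix."""
    tf = doc_term_counts.float()
    doc_len = tf.sum(dim=1, keepdim=True)                   # document lengths |d|
    avgdl = doc_len.mean()                                   # average document length
    df = (tf > 0).float().sum(dim=0)                         # document frequency per term
    n_docs = tf.shape[0]
    idf = torch.log(1 + (n_docs - df + 0.5) / (df + 0.5))    # Lucene-style IDF (never negative)
    denom = tf + k1 * (1 - b + b * doc_len / avgdl)
    weights = idf * tf * (k1 + 1) / denom                    # (n_docs, vocab) BM25 term weights
    return weights.to_sparse()                               # store as a sparse COO tensor

def score_query(bm25_weights: torch.Tensor, query_indicator: torch.Tensor) -> torch.Tensor:
    """query_indicator: dense (vocab, 1) column with 1.0 for terms present in the query."""
    return torch.sparse.mm(bm25_weights, query_indicator).squeeze(1)  # (n_docs,) scores

# Toy usage: 3 documents over a 5-term vocabulary, query containing terms 0 and 2.
counts = torch.tensor([[2, 0, 1, 0, 0],
                       [0, 1, 0, 3, 0],
                       [1, 1, 0, 0, 2]])
query = torch.tensor([[1.0], [0.0], [1.0], [0.0], [0.0]])
print(score_query(build_bm25_matrix(counts), query))
```

With the corpus collapsed into one sparse weight matrix, scoring every document against a query is a single sparse-dense matrix multiply; the actual repository may organize indexing and batching differently.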