chandar-lab / NeoBERT
☆79 · Updated 2 months ago
Alternatives and similar repositories for NeoBERT
Users interested in NeoBERT are comparing it to the repositories listed below.
- Truly flash implementation of the DeBERTa disentangled attention mechanism. ☆62 · Updated 2 months ago
- ☆64 · Updated last month
- ☆49 · Updated 5 months ago
- Minimal PyTorch implementation of BM25 (with sparse tensors). ☆104 · Updated last year
- ☆51 · Updated 6 months ago
- Crispy reranking models by Mixedbread. ☆33 · Updated 3 weeks ago
- Official implementation of "GPT or BERT: why not both?" ☆57 · Updated last week
- A toolkit implementing advanced methods to transfer models and model knowledge across tokenizers. ☆40 · Updated last month
- Pre-train Static Word Embeddings. ☆85 · Updated 2 months ago
- Fine-tune ModernBERT on a large dataset with custom tokenizer training. ☆67 · Updated 6 months ago
- A fast implementation of T5/UL2 in PyTorch using Flash Attention. ☆107 · Updated 4 months ago
- Training and evaluation code for the paper "Headless Language Models: Learning without Predicting with Contrastive Weight Tying" (https:/… ☆27 · Updated last year
- ☆37 · Updated last year
- Efficient few-shot learning with cross-encoders. ☆56 · Updated last year
- Improving Text Embedding of Language Models Using Contrastive Fine-tuning. ☆64 · Updated last year
- ☆48 · Updated 11 months ago
- Code for training & evaluating Contextual Document Embedding models. ☆196 · Updated 2 months ago
- ☆56 · Updated 3 months ago
- The Batched API provides a flexible and efficient way to process multiple requests in a batch, with a primary focus on dynamic batching o… ☆142 · Updated 3 weeks ago
- A Python wrapper around HuggingFace's TGI (text-generation-inference) and TEI (text-embedding-inference) servers. ☆33 · Updated 2 months ago
- ☆29 · Updated last month
- Truly flash T5 implementation! ☆68 · Updated last year
- Optimus is a flexible and scalable framework built to train language models efficiently across diverse hardware configurations, including… ☆66 · Updated last month
- Code for the SaGe subword tokenizer (EACL 2023). ☆25 · Updated 8 months ago
- Official repository for "Hypencoder: Hypernetworks for Information Retrieval". ☆27 · Updated 5 months ago
- Dataset collection and preprocessing framework for NLP extreme multitask learning. ☆185 · Updated last month
- State-of-the-art paired encoder and decoder models (17M-1B params). ☆38 · Updated last week
- [EMNLP 2024] A Retrieval Benchmark for Scientific Literature Search. ☆93 · Updated 8 months ago
- Efficient encoder-decoder architecture for small language models (≤1B parameters) with cross-architecture knowledge distillation and visi… ☆29 · Updated 6 months ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset. ☆93 · Updated 2 years ago