chandar-lab / NeoBERT
☆101 · Updated 7 months ago
Alternatives and similar repositories for NeoBERT
Users interested in NeoBERT are comparing it to the libraries listed below.
- ☆90 · Updated 6 months ago
- Minimal PyTorch implementation of BM25 (with sparse tensors) ☆104 · Updated 2 months ago
- Truly flash implementation of the DeBERTa disentangled attention mechanism ☆67 · Updated 3 months ago
- Crispy reranking models by Mixedbread ☆42 · Updated 3 months ago
- ☆53 · Updated 10 months ago
- Pre-train static word embeddings ☆94 · Updated 3 months ago
- Official implementation of "GPT or BERT: why not both?" ☆63 · Updated 5 months ago
- ☆48 · Updated last year
- A massively multilingual modern encoder language model ☆117 · Updated 2 months ago
- A fast implementation of T5/UL2 in PyTorch using Flash Attention ☆112 · Updated 2 months ago
- ☆56 · Updated last week
- Datamodels for Hugging Face tokenizers ☆86 · Updated last month
- A toolkit implementing advanced methods to transfer models and model knowledge across tokenizers ☆59 · Updated 5 months ago
- Truly flash T5 implementation! ☆71 · Updated last year
- ☆59 · Updated last month
- Fast, modern, and low-precision PyTorch optimizers ☆119 · Updated last week
- ☆50 · Updated 2 months ago
- Fine-tune ModernBERT on a large dataset with custom tokenizer training ☆74 · Updated 2 months ago
- Training and evaluation code for the paper "Headless Language Models: Learning without Predicting with Contrastive Weight Tying" (https:/…) ☆28 · Updated last year
- Efficient few-shot learning with cross-encoders ☆60 · Updated last year
- ☆89 · Updated 3 weeks ago
- ☆39 · Updated last year
- Hugging Face Inference Toolkit used to serve transformers, sentence-transformers, and diffusers models ☆87 · Updated last month
- ☆138 · Updated 4 months ago
- An introduction to LLM sampling ☆79 · Updated last year
- Improving Text Embedding of Language Models Using Contrastive Fine-tuning ☆66 · Updated last year
- State-of-the-art paired encoder and decoder models (17M–1B params) ☆53 · Updated 4 months ago
- A Python wrapper around Hugging Face's TGI (text-generation-inference) and TEI (text-embedding-inference) servers ☆32 · Updated 3 months ago
- Supercharge Hugging Face Transformers with model parallelism ☆77 · Updated 5 months ago
- PyLate efficient inference engine ☆68 · Updated 3 months ago