trapoom555 / Language-Model-STS-CFT
Improving Text Embedding of Language Models Using Contrastive Fine-tuning
☆64 · Updated last year
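For context, the repository's title refers to contrastive fine-tuning of a language model to improve its sentence embeddings (as used for STS). The snippet below is only a minimal sketch of a generic in-batch-negatives InfoNCE objective over mean-pooled embeddings; the backbone model, pooling strategy, and temperature are illustrative assumptions, not the repository's actual setup.

```python
# Minimal sketch of contrastive fine-tuning for text embeddings (assumptions noted inline).
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

model_name = "bert-base-uncased"  # placeholder backbone, not the repo's actual model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

def embed(texts):
    # Mean-pool the last hidden state over non-padding tokens.
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = model(**batch).last_hidden_state              # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1).float()   # (B, T, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)    # (B, H)

def info_nce_loss(anchors, positives, temperature=0.05):
    # In-batch negatives: row i of `positives` is the positive for anchor i;
    # every other row in the batch serves as a negative.
    a = F.normalize(anchors, dim=-1)
    p = F.normalize(positives, dim=-1)
    logits = a @ p.T / temperature                          # (B, B) cosine similarities
    labels = torch.arange(a.size(0))                        # diagonal entries are the positives
    return F.cross_entropy(logits, labels)

# Toy usage: paraphrase pairs act as (anchor, positive).
anchors = embed(["a cat sits on the mat", "stocks fell sharply today"])
positives = embed(["a kitten is lying on the rug", "the market dropped this morning"])
loss = info_nce_loss(anchors, positives)
loss.backward()
```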
Alternatives and similar repositories for Language-Model-STS-CFT
Users interested in Language-Model-STS-CFT are comparing it to the repositories listed below
- Code and data for "StructLM: Towards Building Generalist Models for Structured Knowledge Grounding" (COLM 2024) ☆75 · Updated 9 months ago
- SWIM-IR is a Synthetic Wikipedia-based Multilingual Information Retrieval training set with 28 million query-passage pairs spanning 33 languages ☆49 · Updated last year
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆60 · Updated 11 months ago
- ReBase: Training Task Experts through Retrieval Based Distillation ☆29 · Updated 6 months ago
- Codebase accompanying the Summary of a Haystack paper. ☆79 · Updated 10 months ago
- Verifiers for LLM Reinforcement Learning ☆68 · Updated 3 months ago
- Aioli: A unified optimization framework for language model data mixing ☆27 · Updated 6 months ago
- ☆57 · Updated 10 months ago
- A new metric that can be used to evaluate the faithfulness of text generated by LLMs. The work behind this repository can be found here. ☆31 · Updated last year
- Supercharge huggingface transformers with model parallelism. ☆77 · Updated last week
- Simple replication of [ColBERT-v1](https://arxiv.org/abs/2004.12832). ☆80 · Updated last year
- ☆48 · Updated 11 months ago
- ☆53 · Updated 8 months ago
- ☆35 · Updated 8 months ago
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆44 · Updated last year
- Official repo for NAACL 2024 Findings paper "LeTI: Learning to Generate from Textual Interactions." ☆65 · Updated 2 years ago
- Truly flash implementation of the DeBERTa disentangled attention mechanism. ☆62 · Updated 2 months ago
- A repository for research on medium-sized language models. ☆78 · Updated last year
- ☆20 · Updated 3 months ago
- This is the official repository for Inheritune. ☆112 · Updated 5 months ago
- ☆37 · Updated last year
- [ICLR'25] "Attention in Large Language Models Yields Efficient Zero-Shot Re-Rankers" ☆28 · Updated 4 months ago
- Code for NeurIPS LLM Efficiency Challenge ☆59 · Updated last year
- Embedding Recycling for Language models ☆39 · Updated 2 years ago
- ☆49 · Updated 5 months ago
- Code for PHATGOOSE introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆86 · Updated last year
- Code for RATIONALYST: Pre-training Process-Supervision for Improving Reasoning (https://arxiv.org/pdf/2410.01044) ☆33 · Updated 10 months ago
- Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google, in PyTorch ☆56 · Updated last week
- [EMNLP 2024] A Retrieval Benchmark for Scientific Literature Search ☆92 · Updated 8 months ago
- Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, fine-tuning, evaluating, and serving LLMs in JAX/Flax ☆75 · Updated 11 months ago