nomic-ai / contrastors
Train Models Contrastively in PyTorch
☆750 · Updated 6 months ago
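Training models contrastively generally means optimizing an InfoNCE-style objective: a query embedding is pulled toward its paired positive and pushed away from in-batch negatives. As a minimal illustrative sketch (plain Python; the function name, the convention that index 0 holds the positive pair, and the default temperature are assumptions, not contrastors' actual API):

```python
import math

def info_nce_loss(sim_row, temperature=0.07):
    """InfoNCE loss for a single query.

    sim_row: similarities between the query and all candidates,
             where index 0 is assumed to be the positive pair.
    Returns -log softmax probability assigned to the positive.
    """
    logits = [s / temperature for s in sim_row]
    # log-sum-exp with max-subtraction for numerical stability
    m = max(logits)
    log_sum = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_sum - logits[0]
```

When the positive candidate has the highest similarity, the loss is close to zero; a high-similarity negative drives it up, which is what pushes paired embeddings together during training.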
Alternatives and similar repositories for contrastors
Users interested in contrastors are comparing it to the libraries listed below.
- Generative Representational Instruction Tuning ☆674 · Updated 3 months ago
- RankLLM is a Python toolkit for reproducible information retrieval research using rerankers, with a focus on listwise reranking. ☆536 · Updated 3 weeks ago
- Easily embed, cluster and semantically label text datasets ☆578 · Updated last year
- ☆541 · Updated 10 months ago
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆720 · Updated last year
- Data and tools for generating and inspecting OLMo pre-training data. ☆1,321 · Updated last week
- Evaluation suite for LLMs ☆363 · Updated 2 months ago
- A lightweight library for generating synthetic instruction tuning datasets for your data without GPT. ☆793 · Updated 2 months ago
- A large-scale information-rich web dataset, featuring millions of real clicked query-document labels ☆342 · Updated 9 months ago
- Fast lexical search implementing BM25 in Python using Numpy, Numba and Scipy ☆1,346 · Updated 3 weeks ago
- Official repository for ORPO ☆464 · Updated last year
- Code for "LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders" ☆1,596 · Updated 8 months ago
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,613 · Updated last year
- Train and Infer Powerful Sentence Embeddings with AnglE | 🔥 SOTA on STS and MTEB Leaderboard ☆555 · Updated 6 months ago
- DataDreamer: Prompt. Generate Synthetic Data. Train & Align Models. 🤖💤 ☆1,062 · Updated 8 months ago
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning ☆661 · Updated last year
- Best practices for distilling large language models. ☆577 · Updated last year
- Stanford NLP Python library for Representation Finetuning (ReFT) ☆1,514 · Updated 8 months ago
- ☆447 · Updated last year
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆912 · Updated 5 months ago
- Reaching LLaMA2 Performance with 0.1M Dollars ☆986 · Updated last year
- Bringing BERT into modernity via both architecture changes and scaling ☆1,529 · Updated 3 months ago
- Code for fine-tuning Platypus fam LLMs using LoRA ☆627 · Updated last year
- ☆570 · Updated last year
- Benchmarking long-form factuality in large language models. Original code for our paper "Long-form factuality in large language models". ☆640 · Updated 2 months ago
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆747 · Updated last year
- An Open Source Toolkit For LLM Distillation ☆732 · Updated 2 months ago
- Code repository for the paper "Matryoshka Representation Learning" ☆565 · Updated last year
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆1,973 · Updated this week
- A repository for research on medium sized language models. ☆511 · Updated 4 months ago
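Among the entries above, the fast-lexical-search library implements Okapi BM25. For orientation, the standard BM25 scoring function can be sketched in plain Python (illustrative only; the actual library vectorizes this with Numpy, Numba and Scipy, and the function name and tokenized-list representation here are assumptions):

```python
import math

def bm25_score(query_terms, doc_terms, corpus, k1=1.5, b=0.75):
    """Okapi BM25 score of one tokenized document for a tokenized query.

    corpus is the full collection of tokenized documents; it supplies the
    document frequencies and the average document length.
    """
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    score = 0.0
    for term in query_terms:
        tf = doc_terms.count(term)                      # term frequency
        df = sum(1 for d in corpus if term in d)        # document frequency
        idf = math.log((N - df + 0.5) / (df + 0.5) + 1) # smoothed IDF
        # tf saturation (k1) and length normalization (b)
        score += idf * tf * (k1 + 1) / (
            tf + k1 * (1 - b + b * len(doc_terms) / avgdl)
        )
    return score
```

Documents containing the query terms score higher, with diminishing returns for repeated terms (controlled by `k1`) and a penalty for longer documents (controlled by `b`).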