nomic-ai / contrastors
Train Models Contrastively in PyTorch
☆753 · Updated 7 months ago
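For context, contrastive training of text encoders typically optimizes a batch-wise InfoNCE objective: matched pairs are pulled together while the rest of the batch serves as negatives. Below is a minimal PyTorch sketch of that objective; the `info_nce` function, the temperature value, and the random stand-in embeddings are illustrative assumptions, not the contrastors API.

```python
# Minimal symmetric InfoNCE sketch (illustrative only, not the contrastors API).
import torch
import torch.nn.functional as F

def info_nce(query_emb: torch.Tensor, doc_emb: torch.Tensor, temperature: float = 0.05):
    """query_emb, doc_emb: (batch, dim) tensors where row i of each forms a positive pair."""
    q = F.normalize(query_emb, dim=-1)
    d = F.normalize(doc_emb, dim=-1)
    logits = q @ d.T / temperature                      # (batch, batch) cosine-similarity logits
    labels = torch.arange(q.size(0), device=q.device)   # positives sit on the diagonal
    # Symmetric loss: query->doc and doc->query cross-entropy against the diagonal.
    return (F.cross_entropy(logits, labels) + F.cross_entropy(logits.T, labels)) / 2

# Usage with random stand-in embeddings:
print(info_nce(torch.randn(8, 768), torch.randn(8, 768)).item())
```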
Alternatives and similar repositories for contrastors
Users interested in contrastors are comparing it to the libraries listed below.
- Generative Representational Instruction Tuning ☆678 · Updated 4 months ago
- ☆552 · Updated 11 months ago
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆729 · Updated last year
- Official repository for ORPO ☆463 · Updated last year
- RankLLM is a Python toolkit for reproducible information retrieval research using rerankers, with a focus on listwise reranking. ☆548 · Updated last week
- Evaluation suite for LLMs ☆365 · Updated 4 months ago
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning ☆660 · Updated last year
- Easily embed, cluster and semantically label text datasets ☆584 · Updated last year
- DataDreamer: Prompt. Generate Synthetic Data. Train & Align Models. 🤖💤 ☆1,077 · Updated 9 months ago
- Code for fine-tuning Platypus fam LLMs using LoRA ☆629 · Updated last year
- A lightweight library for generating synthetic instruction tuning datasets for your data without GPT. ☆801 · Updated 4 months ago
- A large-scale information-rich web dataset, featuring millions of real clicked query-document labels ☆345 · Updated 10 months ago
- Data and tools for generating and inspecting OLMo pre-training data. ☆1,343 · Updated last week
- Implementation of the training framework proposed in Self-Rewarding Language Model, from MetaAI ☆1,399 · Updated last year
- Evaluate your LLM's response with Prometheus and GPT4 💯 ☆1,009 · Updated 6 months ago
- A repository for research on medium sized language models. ☆518 · Updated 5 months ago
- Fast lexical search implementing BM25 in Python using Numpy, Numba and Scipy (see the BM25 sketch after this list) ☆1,384 · Updated 2 weeks ago
- ☆575 · Updated last year
- Train and Infer Powerful Sentence Embeddings with AnglE | 🔥 SOTA on STS and MTEB Leaderboard ☆559 · Updated 3 weeks ago
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆750 · Updated last year
- An Open Source Toolkit For LLM Distillation ☆779 · Updated 4 months ago
- The official implementation of Self-Play Fine-Tuning (SPIN) ☆1,213 · Updated last year
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆892 · Updated last month
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,634 · Updated last year
- Official inference library for pre-processing of Mistral models ☆812 · Updated this week
- Accelerate your Hugging Face Transformers 7.6-9x. Native to Hugging Face and PyTorch. ☆685 · Updated last year
- Training LLMs with QLoRA + FSDP ☆1,529 · Updated last year
- ☆446 · Updated last year
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" ☆560 · Updated 10 months ago
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … ☆786 · Updated 7 months ago
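For the BM25 entry above, here is a plain-NumPy sketch of Okapi BM25 scoring. It only illustrates the formula that family of libraries implements; the `bm25_scores` function, the k1/b defaults, and the toy documents are assumptions for the example, not the listed library's API.

```python
# Plain-NumPy Okapi BM25 scoring sketch (illustrative only).
import numpy as np
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.5, b=0.75):
    """Score each tokenized document in `docs` against `query_terms` with Okapi BM25."""
    N = len(docs)
    doc_lens = np.array([len(d) for d in docs], dtype=float)
    avgdl = doc_lens.mean()
    tfs = [Counter(d) for d in docs]                      # term frequencies per document
    scores = np.zeros(N)
    for term in query_terms:
        df = sum(1 for tf in tfs if term in tf)           # document frequency of the term
        if df == 0:
            continue
        idf = np.log((N - df + 0.5) / (df + 0.5) + 1.0)   # smoothed inverse document frequency
        tf = np.array([t[term] for t in tfs], dtype=float)
        scores += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * doc_lens / avgdl))
    return scores

# Usage on toy tokenized documents:
docs = [["contrastive", "training", "in", "pytorch"],
        ["bm25", "lexical", "search", "with", "numpy"]]
print(bm25_scores(["bm25", "numpy"], docs))
```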