nomic-ai / contrastors
Train Models Contrastively in PyTorch
☆716 · Updated 2 months ago
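contrastors trains text encoders with in-batch contrastive objectives. As a rough illustration only (not code from the contrastors repository), a minimal symmetric InfoNCE loss in plain PyTorch looks like the sketch below; the function name, embedding sizes, and temperature value are all assumptions chosen for the example:

```python
import torch
import torch.nn.functional as F

def info_nce_loss(query_emb, doc_emb, temperature=0.05):
    """Symmetric InfoNCE: the i-th query matches the i-th document;
    every other document in the batch serves as an in-batch negative."""
    query_emb = F.normalize(query_emb, dim=-1)
    doc_emb = F.normalize(doc_emb, dim=-1)
    logits = query_emb @ doc_emb.T / temperature  # (batch, batch) cosine-similarity matrix
    labels = torch.arange(logits.size(0), device=logits.device)
    # Average the query->doc and doc->query cross-entropy terms.
    return (F.cross_entropy(logits, labels) + F.cross_entropy(logits.T, labels)) / 2

# Toy usage with random tensors standing in for encoder outputs.
queries = torch.randn(8, 256, requires_grad=True)
documents = torch.randn(8, 256, requires_grad=True)
loss = info_nce_loss(queries, documents)
loss.backward()
print(loss.item())
```

In practice the two inputs would come from an encoder applied to paired texts (e.g. query/passage pairs), and larger batches generally give stronger in-batch negatives.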
Alternatives and similar repositories for contrastors
Users interested in contrastors are comparing it to the libraries listed below.
- ☆517 · Updated 6 months ago
- Generative Representational Instruction Tuning ☆640 · Updated 2 months ago
- Evaluation suite for LLMs ☆348 · Updated 2 months ago
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆697 · Updated last year
- RankLLM is a Python toolkit for reproducible information retrieval research using rerankers, with a focus on listwise reranking. ☆453 · Updated last week
- Code for Quiet-STaR ☆732 · Updated 9 months ago
- Official repository for ORPO ☆453 · Updated last year
- Data and tools for generating and inspecting OLMo pre-training data. ☆1,229 · Updated this week
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆1,574 · Updated last week
- The official implementation of Self-Play Fine-Tuning (SPIN) ☆1,159 · Updated last year
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆727 · Updated 8 months ago
- [ACL 2024] Progressive LLaMA with Block Expansion. ☆502 · Updated last year
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … ☆705 · Updated 2 months ago
- Implementation of the training framework proposed in Self-Rewarding Language Model, from MetaAI ☆1,384 · Updated last year
- An Open Source Toolkit For LLM Distillation ☆618 · Updated this week
- Train and Infer Powerful Sentence Embeddings with AnglE | 🔥 SOTA on STS and MTEB Leaderboard ☆543 · Updated 2 months ago
- Fast lexical search implementing BM25 in Python using Numpy, Numba and Scipy ☆1,193 · Updated this week
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆876 · Updated last month
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning ☆651 · Updated last year
- Code for fine-tuning Platypus fam LLMs using LoRA ☆628 · Updated last year
- Best practices for distilling large language models. ☆547 · Updated last year
- Easily embed, cluster and semantically label text datasets ☆542 · Updated last year
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆851 · Updated last week
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,489 · Updated last year
- Bringing BERT into modernity via both architecture changes and scaling ☆1,385 · Updated 3 weeks ago
- ☆536 · Updated 9 months ago
- Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks. ☆2,396 · Updated last week
- Evaluate your LLM's response with Prometheus and GPT4 💯 ☆950 · Updated last month
- Toolkit for attaching, training, saving and loading of new heads for transformer models ☆279 · Updated 3 months ago
- Implementation of paper Data Engineering for Scaling Language Models to 128K Context ☆462 · Updated last year