datadreamer-dev / DataDreamer
DataDreamer: Prompt. Generate Synthetic Data. Train & Align Models. (★ 1,091 · Updated last year)
Alternatives and similar repositories for DataDreamer
Users interested in DataDreamer often compare it to the libraries listed below.
- A lightweight library for generating synthetic instruction tuning datasets for your data without GPT. (★ 819 · Updated 6 months ago)
- Evaluate your LLM's response with Prometheus and GPT-4. (★ 1,043 · Updated 9 months ago)
- Easily embed, cluster and semantically label text datasets. (★ 592 · Updated last year)
- Train models contrastively in PyTorch. (★ 774 · Updated 10 months ago)
- Automatically evaluate your LLMs in Google Colab. (★ 685 · Updated last year)
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… (★ 3,074 · Updated last week)
- (★ 561 · Updated last year)
- Best practices for distilling large language models. (★ 604 · Updated 2 years ago)
- Stanford NLP Python library for Representation Finetuning (ReFT). (★ 1,555 · Updated 3 weeks ago)
- A lightweight, low-dependency, unified API to use all common reranking and cross-encoder models. (★ 1,592 · Updated last month)
- Automated evaluation of RAG systems. (★ 687 · Updated 10 months ago)
- Data and tools for generating and inspecting OLMo pre-training data. (★ 1,403 · Updated 3 months ago)
- Fast lexical search implementing BM25 in Python using NumPy, Numba and SciPy. (★ 1,477 · Updated this week)
- Awesome synthetic (text) datasets. (★ 321 · Updated last month)
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends. (★ 2,291 · Updated 2 weeks ago)
- Freeing data processing from scripting madness by providing a set of platform-agnostic, customizable pipeline processing blocks. (★ 2,877 · Updated this week)
- Generative Representational Instruction Tuning. (★ 685 · Updated 7 months ago)
- LLM analytics. (★ 705 · Updated last year)
- RankLLM is a Python toolkit for reproducible information retrieval research using rerankers, with a focus on listwise reranking. (★ 574 · Updated last week)
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning. (★ 668 · Updated last year)
- Implementation of the training framework proposed in Self-Rewarding Language Model, from Meta AI. (★ 1,407 · Updated last year)
- (★ 1,033 · Updated last year)
- Extend existing LLMs well beyond their original training length with constant memory usage and without retraining. (★ 737 · Updated last year)
- An open-source toolkit for LLM distillation. (★ 859 · Updated last month)
- HyDE: Precise Zero-Shot Dense Retrieval without Relevance Labels. (★ 568 · Updated last year)
- In-Context Learning for eXtreme Multi-Label Classification (XMC) using only a handful of examples. (★ 446 · Updated last year)
- Code for "LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders". (★ 1,644 · Updated 2 months ago)
- Official repository for ORPO. (★ 469 · Updated last year)
- Code for fine-tuning the Platypus family of LLMs using LoRA. (★ 630 · Updated 2 years ago)
- Fast multimodal semantic deduplication & filtering. (★ 882 · Updated 2 weeks ago)