MoritzLaurer / synthetic-data-blog
This is the reproduction repository for my 🤗 Hugging Face blog post on synthetic data
☆68 · Updated last year
Alternatives and similar repositories for synthetic-data-blog
Users interested in synthetic-data-blog are comparing it to the libraries listed below.
- awesome synthetic (text) datasets ☆297 · Updated 3 months ago
- Let's build better datasets, together! ☆262 · Updated 9 months ago
- ☆136 · Updated last month
- ☆119 · Updated last year
- Codebase accompanying the Summary of a Haystack paper. ☆79 · Updated last year
- RAGElo is a set of tools that helps you select the best RAG-based LLM agents using an Elo ranker ☆119 · Updated last week
- Lightweight demos for finetuning LLMs. Powered by 🤗 transformers and open-source datasets. ☆78 · Updated 11 months ago
- Official repo for the paper PHUDGE: Phi-3 as Scalable Judge. Evaluate your LLMs with or without custom rubric, reference answer, absolute… ☆49 · Updated last year
- Notebooks for training universal 0-shot classifiers on many different tasks ☆136 · Updated 9 months ago
- Manage scalable open LLM inference endpoints in Slurm clusters ☆274 · Updated last year
- ☆146 · Updated last year
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language M… ☆240 · Updated 11 months ago
- Banishing LLM Hallucinations Requires Rethinking Generalization ☆275 · Updated last year
- Model, Code & Data for the EMNLP'23 paper "Making Large Language Models Better Data Creators" ☆135 · Updated last year
- Using open source LLMs to build synthetic datasets for direct preference optimization ☆66 · Updated last year
- Attribute (or cite) statements generated by LLMs back to in-context information. ☆289 · Updated last year
- Generalist and Lightweight Model for Text Classification ☆162 · Updated 3 months ago
- ☆88 · Updated last year
- ARAGOG - Advanced RAG Output Grading. Exploring and comparing various Retrieval-Augmented Generation (RAG) techniques on AI research paper… ☆113 · Updated last year
- 🦄 Unitxt is a Python library for enterprise-grade evaluation of AI performance, offering the world's largest catalog of tools and data … ☆210 · Updated this week
- A Lightweight Library for AI Observability ☆251 · Updated 7 months ago
- Simple replication of [ColBERT-v1](https://arxiv.org/abs/2004.12832). ☆80 · Updated last year
- My personal site ☆78 · Updated last year
- Benchmark various LLM Structured Output frameworks: Instructor, Mirascope, Langchain, LlamaIndex, Fructose, Marvin, Outlines, etc on task… ☆179 · Updated last year
- Toolkit for attaching, training, saving and loading of new heads for transformer models ☆289 · Updated 7 months ago
- Set of scripts to finetune LLMs