tencent-ailab / persona-hub
Official repo for the paper "Scaling Synthetic Data Creation with 1,000,000,000 Personas"
☆1,437 · Updated 10 months ago
Alternatives and similar repositories for persona-hub
Users interested in persona-hub are comparing it to the repositories listed below.
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … ☆817 · Updated 9 months ago
- An Open Large Reasoning Model for Real-World Solutions ☆1,535 · Updated 7 months ago
- [NeurIPS 2024 Spotlight] Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models ☆674 · Updated 6 months ago
- A library for advanced large language model reasoning ☆2,319 · Updated 7 months ago
- ☆968 · Updated 11 months ago
- ☆1,344 · Updated last year
- Evaluate your LLM's response with Prometheus and GPT4 💯 ☆1,029 · Updated 8 months ago
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,933 · Updated 5 months ago
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆937 · Updated 10 months ago
- A reading list on LLM-based Synthetic Data Generation 🔥 ☆1,496 · Updated 7 months ago
- Code for Quiet-STaR ☆742 · Updated last year
- Simple retrieval from LLMs at various context lengths to measure accuracy ☆2,137 · Updated last year
- ☆1,032 · Updated last year
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆898 · Updated 3 months ago
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆3,039 · Updated 3 weeks ago
- List of language agents based on the paper "Cognitive Architectures for Language Agents" ☆1,114 · Updated 11 months ago
- [COLM 2025] LIMO: Less is More for Reasoning ☆1,061 · Updated 5 months ago
- O1 Replication Journey ☆2,003 · Updated 11 months ago
- Large Reasoning Models ☆804 · Updated last year
- Code and Data for Tau-Bench ☆1,048 · Updated 4 months ago
- LLMs can generate feedback on their work, use it to improve the output, and repeat this process iteratively. ☆768 · Updated last year
- Arena-Hard-Auto: An automatic LLM benchmark. ☆978 · Updated 6 months ago
- 🔍 Search-o1: Agentic Search-Enhanced Large Reasoning Models [EMNLP 2025] ☆1,136 · Updated last month
- FuseAI Project ☆585 · Updated 11 months ago
- This includes the original implementation of SELF-RAG: Learning to Retrieve, Generate and Critique through self-reflection by Akari Asai,… ☆2,289 · Updated last year
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆2,233 · Updated last week
- The official implementation of Self-Play Fine-Tuning (SPIN) ☆1,230 · Updated last year
- Official repository for ORPO ☆468 · Updated last year
- Recipes to scale inference-time compute of open models ☆1,123 · Updated 7 months ago
- Generative Representational Instruction Tuning ☆681 · Updated 6 months ago