tencent-ailab / persona-hub
Official repo for the paper "Scaling Synthetic Data Creation with 1,000,000,000 Personas"
☆1,303 · Updated 6 months ago
Alternatives and similar repositories for persona-hub
Users interested in persona-hub are comparing it to the repositories listed below.
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … ☆759 · Updated 5 months ago
- [NeurIPS 2024 Spotlight] Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models ☆657 · Updated 2 months ago
- An Open Large Reasoning Model for Real-World Solutions ☆1,516 · Updated 3 months ago
- A reading list on LLM-based Synthetic Data Generation 🔥 ☆1,392 · Updated 2 months ago
- ☆1,358 · Updated 9 months ago
- ☆956 · Updated 7 months ago
- ☆1,033 · Updated 8 months ago
- A library for advanced large language model reasoning ☆2,231 · Updated 2 months ago
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,840 · Updated 3 weeks ago
- Evaluate your LLM's response with Prometheus and GPT4 💯 ☆981 · Updated 4 months ago
- Doing simple retrieval from LLMs at various context lengths to measure accuracy ☆1,993 · Updated last year
- Code and Data for Tau-Bench ☆791 · Updated last month
- Large Reasoning Models ☆804 · Updated 8 months ago
- O1 Replication Journey ☆1,998 · Updated 7 months ago
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆880 · Updated last month
- AllenAI's post-training codebase ☆3,124 · Updated this week
- Code for Quiet-STaR ☆738 · Updated last year
- 🔍 Search-o1: Agentic Search-Enhanced Large Reasoning Models [EMNLP 2025] ☆1,026 · Updated last week
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆913 · Updated 6 months ago
- List of language agents based on the paper "Cognitive Architectures for Language Agents" ☆1,007 · Updated 7 months ago
- LLMs can generate feedback on their work, use it to improve the output, and repeat this process iteratively. ☆726 · Updated 10 months ago
- Arena-Hard-Auto: An automatic LLM benchmark. ☆912 · Updated 2 months ago
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆1,841 · Updated last week
- Stanford NLP Python library for Representation Finetuning (ReFT) ☆1,510 · Updated 6 months ago
- xLAM: A Family of Large Action Models to Empower AI Agent Systems ☆544 · Updated last week
- ReCall: Learning to Reason with Tool Call for LLMs via Reinforcement Learning ☆1,162 · Updated 3 months ago
- Official repository for ORPO ☆463 · Updated last year
- MLE-bench is a benchmark for measuring how well AI agents perform at machine learning engineering ☆910 · Updated this week
- RewardBench: the first evaluation tool for reward models. ☆628 · Updated 2 months ago
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆2,863 · Updated this week