tencent-ailab / persona-hub
Official repo for the paper "Scaling Synthetic Data Creation with 1,000,000,000 Personas"
☆868, updated last month
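The paper behind this repo scales synthetic data creation by conditioning generation prompts on a huge collection of personas, so that repeated sampling produces diverse outputs rather than near-duplicates. Below is a minimal, hedged sketch of that persona-driven prompting pattern; it is not persona-hub's own pipeline, and the `openai` client, the model name, and the example persona are assumptions chosen only for illustration.

```python
# Minimal sketch of persona-driven data synthesis (illustrative only; not the
# persona-hub pipeline). Assumes the `openai` Python package (v1+) is installed
# and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

# A persona conditions the prompt; swapping personas between samples is what
# drives diversity in the generated data. This persona is a made-up example.
persona = "a structural engineer who reviews bridge load calculations"

prompt = (
    f"You are {persona}.\n"
    "Write one challenging math word problem this persona might pose, "
    "followed by a worked solution."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; any chat model works here
    messages=[{"role": "user", "content": prompt}],
    temperature=0.9,  # higher temperature adds further variation per persona
)

print(response.choices[0].message.content)
```

The paper's title points at the intended scale: the same template can be rerun across up to a billion personas and varied task instructions to build a large, diverse synthetic dataset.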
Related projects
Alternatives and complementary repositories for persona-hub:
- Official repository for "Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing". Your efficient and high-quality s… (☆476, updated this week)
- Code for Quiet-STaR (☆639, updated 2 months ago)
- [NeurIPS 2024 Spotlight] Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models (☆531, updated last week)
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward (☆701, updated this week)
- Large Reasoning Models (☆457, updated this week)
- Codebase for Merging Language Models (ICML 2024) (☆765, updated 6 months ago)
- A reading list on LLM-based Synthetic Data Generation 🔥 (☆761, updated this week)
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs) (☆738, updated last week)
- Doing simple retrieval from LLMs at various context lengths to measure accuracy (☆1,551, updated 2 months ago)
- The official implementation of Self-Play Fine-Tuning (SPIN) (☆1,034, updated 6 months ago)
- Evaluate your LLM's response with Prometheus and GPT-4 💯 (☆794, updated 2 months ago)
- Generative Representational Instruction Tuning (☆562, updated this week)
- O1 Replication Journey: A Strategic Progress Report – Part I (☆1,254, updated last week)
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… (☆1,612, updated this week)
- RAGChecker: A Fine-grained Framework For Diagnosing RAG (☆528, updated last month)
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware (☆642, updated last month)
- ReFT: Representation Finetuning for Language Models (☆1,145, updated this week)
- DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models (☆1,000, updated 9 months ago)
- Official repository for ORPO (☆420, updated 5 months ago)
- OLMoE: Open Mixture-of-Experts Language Models (☆435, updated this week)
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast (☆1,511, updated 2 weeks ago)
- [ACL 2024] Progressive LLaMA with Block Expansion (☆479, updated 5 months ago)
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning (☆610, updated 5 months ago)
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition (☆586, updated 3 months ago)
- [ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning (☆332, updated 2 months ago)
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] (☆494, updated 5 months ago)