Xuekai-Zhu / key-configuration-of-llms
☆24 · Updated last year
Alternatives and similar repositories for key-configuration-of-llms:
Users interested in key-configuration-of-llms are comparing it to the libraries listed below.
- Explore what LLMs are really learning over SFT ☆28 · Updated 11 months ago
- ☆28 · Updated this week
- Domain-specific preference (DSP) data and customized RM fine-tuning. ☆25 · Updated last year
- my commonly-used tools ☆51 · Updated 2 months ago
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆72 · Updated 7 months ago
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆55 · Updated 3 months ago
- An index of algorithms for reinforcement learning from human feedback (RLHF) ☆93 · Updated 11 months ago
- Code accompanying the paper "Noise Contrastive Alignment of Language Models with Explicit Rewards" (NeurIPS 2024) ☆50 · Updated 4 months ago
- Reference implementation for Token-level Direct Preference Optimization (TDPO) ☆130 · Updated last month
- GenRM-CoT: Data release for verification rationales ☆53 · Updated 5 months ago
- ☆17 · Updated last year
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems. ☆55 · Updated 8 months ago
- ☆43 · Updated 5 months ago
- ☆65 · Updated 11 months ago
- Paper collections of methods that use language to interact with an environment, including the real world, simulated worlds, or the WWW… ☆126 · Updated last year
- A curated list of awesome resources dedicated to Scaling Laws for LLMs ☆71 · Updated last year
- Code for ACL 2024 paper - Adversarial Preference Optimization (APO) ☆52 · Updated 9 months ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆119 · Updated 6 months ago
- ☆30 · Updated last year
- [NeurIPS 2023] Large Language Models Are Semi-Parametric Reinforcement Learning Agents ☆34 · Updated 10 months ago
- ☆73 · Updated 10 months ago
- ☆48 · Updated last month
- A Survey on the Honesty of Large Language Models ☆56 · Updated 3 months ago
- [NeurIPS 2024 Oral] Aligner: Efficient Alignment by Learning to Correct ☆165 · Updated 2 months ago
- Analyzing LLM Alignment via Token Distribution Shift ☆15 · Updated last year
- The official code repository for PRMBench ☆68 · Updated last month
- [ICML'2024] Can AI Assistants Know What They Don't Know? ☆78 · Updated last year
- ☆61 · Updated 4 months ago
- This is my attempt to create a Self-Correcting LLM based on the paper Training Language Models to Self-Correct via Reinforcement Learning by g… ☆31 · Updated 2 months ago
- [NeurIPS 2024] Official code of $\beta$-DPO: Direct Preference Optimization with Dynamic $\beta$ ☆41 · Updated 5 months ago