CyberAgentAILab / filtered-dpo
Introducing Filtered Direct Preference Optimization (fDPO), which improves language model alignment with human preferences by discarding preference-data samples of lower quality than those generated by the learning model.
☆12 · Updated 4 months ago
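Going by the description above, the core of fDPO is a data-filtering step: a preference pair is kept only if its chosen response is at least as good as what the current model can already produce. Below is a minimal sketch of that idea; the `policy.generate` and `reward_model.score` helpers are hypothetical placeholders, not the repository's actual API.

```python
# Hypothetical sketch of the fDPO filtering step (not this repo's actual API):
# drop preference pairs whose "chosen" response scores below a response
# sampled from the current learning model, then run standard DPO on the rest.

def filter_preference_data(dataset, policy, reward_model):
    """Keep pairs whose chosen response outscores a policy-generated sample."""
    kept = []
    for example in dataset:
        # Sample a response from the current learning model.
        generated = policy.generate(example["prompt"])
        # Score the dataset's chosen response and the generated one.
        r_chosen = reward_model.score(example["prompt"], example["chosen"])
        r_generated = reward_model.score(example["prompt"], generated)
        # Discard the pair if the dataset response is the lower-quality one.
        if r_chosen >= r_generated:
            kept.append(example)
    return kept
```

Because each comparison is against the current learning model, the retained set naturally shrinks as the policy improves over the course of training.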
Alternatives and similar repositories for filtered-dpo:
Users interested in filtered-dpo are comparing it to the repositories listed below.
- This is the official repository for "Safer-Instruct: Aligning Language Models with Automated Preference Data" ☆17 · Updated last year
- ☆16 · Updated 9 months ago
- This repository contains the code and data for the paper "VisOnlyQA: Large Vision Language Models Still Struggle with Visual Perception o… ☆22 · Updated 3 weeks ago
- Self-Supervised Alignment with Mutual Information ☆17 · Updated 11 months ago
- Tasks for describing differences between text distributions. ☆16 · Updated 8 months ago
- Code for "Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective" ☆32 · Updated 11 months ago
- Code for the paper "Preserving Diversity in Supervised Fine-tuning of Large Language Models" ☆17 · Updated 3 weeks ago
- ☆28 · Updated last year
- Repository for Skill Set Optimization ☆12 · Updated 9 months ago
- ☆10 · Updated 5 months ago
- ☆18 · Updated 9 months ago
- Exploration of automated dataset selection approaches at large scales. ☆39 · Updated last month
- Advantage Leftover Lunch Reinforcement Learning (A-LoL RL): Improving Language Models with Advantage-based Offline Policy Gradients ☆26 · Updated 7 months ago
- ☆27 · Updated last year
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆26 · Updated last year
- Official repository of "LiNeS: Post-training Layer Scaling Prevents Forgetting and Enhances Model Merging" ☆25 · Updated 5 months ago
- Released data for the paper "Measuring and Improving Chain-of-Thought Reasoning in Vision-Language Models" ☆32 · Updated last year
- A Dynamic Visual Benchmark for Evaluating Mathematical Reasoning Robustness of Vision Language Models ☆19 · Updated 5 months ago
- Official implementation of "Bootstrapping Language Models via DPO Implicit Rewards" ☆43 · Updated last week
- In-Context Alignment: Chat with Vanilla Language Models Before Fine-Tuning ☆34 · Updated last year
- This is an official implementation of the Reward rAnked Fine-Tuning Algorithm (RAFT), also known as iterative best-of-n fine-tuning or re… ☆29 · Updated 7 months ago
- Implementation for the paper "Fictitious Synthetic Data Can Improve LLM Factuality via Prerequisite Learning" ☆9 · Updated 3 months ago
- ☆20 · Updated 5 months ago
- Directional Preference Alignment ☆57 · Updated 7 months ago
- ☆13 · Updated last year
- ☆14 · Updated last year
- [ACL 2023 Findings] What In-Context Learning “Learns” In-Context: Disentangling Task Recognition and Task Learning ☆21 · Updated last year
- Code for the paper "LASeR: Learning to Adaptively Select Reward Models with Multi-Arm Bandits" ☆13 · Updated 6 months ago
- SLED: Self Logits Evolution Decoding for Improving Factuality in Large Language Models (https://arxiv.org/pdf/2411.02433) ☆26 · Updated 4 months ago
- [TMLR 2024] Official implementation of "Sight Beyond Text: Multi-Modal Training Enhances LLMs in Truthfulness and Ethics" ☆19 · Updated last year