ctlllll / reward_collapse
☆27 · Updated last year
Alternatives and similar repositories for reward_collapse:
Users interested in reward_collapse are comparing it to the repositories listed below.
- ☆18 · Updated 10 months ago
- ☆18 · Updated 8 months ago
- ☆28 · Updated last year
- ☆31 · Updated 2 months ago
- The repository contains code for Adaptive Data Optimization ☆20 · Updated 3 months ago
- This is the official repository for "Safer-Instruct: Aligning Language Models with Automated Preference Data" ☆17 · Updated last year
- Implementation of the model: "Reka Core, Flash, and Edge: A Series of Powerful Multimodal Language Models" in PyTorch ☆30 · Updated 2 weeks ago
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆46 · Updated last year
- PyTorch code for the paper "An Empirical Study of Multimodal Model Merging" ☆38 · Updated last year
- ☆13 · Updated last year
- ☆39 · Updated 2 years ago
- ☆38 · Updated 5 months ago
- Code for the EMNLP 2024 paper "On Diversified Preferences of Large Language Model Alignment" ☆16 · Updated 7 months ago
- Exploration of automated dataset selection approaches at large scales. ☆34 · Updated last month
- Official repo of the paper "Eliminating Position Bias of Language Models: A Mechanistic Approach" ☆13 · Updated 7 months ago
- Self-Supervised Alignment with Mutual Information ☆16 · Updated 10 months ago
- Code for the paper "LASeR: Learning to Adaptively Select Reward Models with Multi-Arm Bandits" ☆13 · Updated 6 months ago
- ☆81 · Updated last year
- Is In-Context Learning Sufficient for Instruction Following in LLMs? [ICLR 2025] ☆29 · Updated 2 months ago
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆52 · Updated last year
- Code for "Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective" ☆32 · Updated 10 months ago
- ☆14 · Updated 4 months ago
- Tasks for describing differences between text distributions. ☆16 · Updated 7 months ago
- ☆16 · Updated 8 months ago
- ☆26 · Updated 8 months ago
- A Kernel-Based View of Language Model Fine-Tuning (https://arxiv.org/abs/2210.05643) ☆75 · Updated last year
- Advantage Leftover Lunch Reinforcement Learning (A-LoL RL): Improving Language Models with Advantage-based Offline Policy Gradients ☆26 · Updated 6 months ago
- Data Valuation on In-Context Examples (ACL 2023) ☆23 · Updated 2 months ago
- [ACL 2023] Training Trajectories of Language Models Across Scales (https://arxiv.org/pdf/2212.09803.pdf) ☆23 · Updated last year
- [ICML 2024] Junk DNA Hypothesis: A Task-Centric Angle of LLM Pre-trained Weights through Sparsity; Lu Yin*, Ajay Jaiswal*, Shiwei Liu, So… ☆16 · Updated 10 months ago