YuejiangLIU / csl
Co-Supervised Learning: Improving Weak-to-Strong Generalization with Hierarchical Mixture of Experts
☆16 · Updated last year
Alternatives and similar repositories for csl
Users interested in csl are comparing it to the repositories listed below.
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆123 · Updated 11 months ago
- ☆96 · Updated last year
- ☆44 · Updated last year
- Official repo for Towards Uncertainty-Aware Language Agent ☆28 · Updated last year
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity ☆76 · Updated 5 months ago
- ☆51 · Updated last year
- ☆27 · Updated 2 years ago
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆61 · Updated last year
- [EMNLP Findings 2024 & ACL 2024 NLRSE Oral] Enhancing Mathematical Reasonin… ☆52 · Updated last year
- ☆29 · Updated last year
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆44 · Updated 4 months ago
- ☆52 · Updated 4 months ago
- ☆56 · Updated 3 months ago
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆63 · Updated 9 months ago
- [ICLR 2025] Unintentional Unalignment: Likelihood Displacement in Direct Preference Optimization ☆31 · Updated 7 months ago
- Code for "Towards Revealing the Mystery behind Chain of Thought: A Theoretical Perspective" ☆20 · Updated 2 years ago
- ☆14 · Updated last month
- Repository for "Model Merging by Uncertainty-Based Gradient Matching" (ICLR 2024) ☆28 · Updated last year
- Pitfalls of Rule- and Model-based Verifiers: A Case Study on Mathematical Reasoning ☆23 · Updated 3 months ago
- Code and data for "Living in the Moment: Can Large Language Models Grasp Co-Temporal Reasoning?" (ACL 2024) ☆32 · Updated last year
- Align your LM to express calibrated verbal statements of confidence in its long-form generations ☆27 · Updated last year
- Code for the ICML 2024 paper "Rewards-in-Context: Multi-objective Alignment of Foundation Models with Dynamic Preference Adjustment" ☆72 · Updated 2 months ago
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆86 · Updated last year
- Directional Preference Alignment ☆59 · Updated 11 months ago
- A curated list of awesome resources dedicated to Scaling Laws for LLMs ☆77 · Updated 2 years ago
- AbstainQA, ACL 2024 ☆28 · Updated 10 months ago
- A Sober Look at Language Model Reasoning ☆81 · Updated 2 months ago
- Lightweight Adapting for Black-Box Large Language Models ☆23 · Updated last year
- Personalized Soups: Personalized Large Language Model Alignment via Post-hoc Parameter Merging ☆108 · Updated last year
- Function Vectors in Large Language Models (ICLR 2024) ☆177 · Updated 4 months ago