YuejiangLIU / csl
Co-Supervised Learning: Improving Weak-to-Strong Generalization with Hierarchical Mixture of Experts
☆16 · Updated last year
Alternatives and similar repositories for csl
Users interested in csl are comparing it to the libraries listed below.
- Code for "Towards Revealing the Mystery behind Chain of Thought: a Theoretical Perspective" ☆21 · Updated 2 years ago
- ☆29 · Updated last year
- [ICLR 2025] Unintentional Unalignment: Likelihood Displacement in Direct Preference Optimization ☆32 · Updated 11 months ago
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity ☆85 · Updated 9 months ago
- ☆46 · Updated last year
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆62 · Updated last year
- ☆51 · Updated last year
- Repository for "Model Merging by Uncertainty-Based Gradient Matching" (ICLR 2024) ☆29 · Updated last year
- [NeurIPS 2023] GitHub repository for "Composing Parameter-Efficient Modules with Arithmetic Operations" ☆61 · Updated 2 years ago
- Official code for the ICML 2024 paper on Persona In-Context Learning (PICLe) ☆26 · Updated last year
- ☆43 · Updated last year
- A curated list of awesome resources dedicated to Scaling Laws for LLMs ☆80 · Updated 2 years ago
- [ICLR 2025] Code and data for the paper "Super(ficial)-alignment: Strong Models May Deceive Weak Models in Weak-to-Strong Generalization" ☆13 · Updated last year
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆94 · Updated last year
- DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models (ICLR 2024) ☆79 · Updated last year
- ☆103 · Updated 2 years ago
- ☆69 · Updated last year
- Code for the ICML 2024 paper "Rewards-in-Context: Multi-objective Alignment of Foundation Models with Dynamic Preference Adjustment" ☆78 · Updated 6 months ago
- ☆15 · Updated last year
- Code for "Improving Weak-to-Strong Generalization with Scalable Oversight and Ensemble Learning" ☆17 · Updated last year
- Official code for the paper "Probing the Decision Boundaries of In-context Learning in Large Language Models": https://arxiv.org/abs/2406.11233… ☆19 · Updated 5 months ago
- ☆182 · Updated last year
- FeatureAlignment = Alignment + Mechanistic Interpretability ☆33 · Updated 9 months ago
- ☆51 · Updated 2 years ago
- Official implementation of "Bootstrapping Language Models via DPO Implicit Rewards" ☆46 · Updated 8 months ago
- Align your LM to express calibrated verbal statements of confidence in its long-form generations ☆28 · Updated last year
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆68 · Updated last year
- Source code for "Preference-grounded Token-level Guidance for Language Model Fine-tuning" (NeurIPS 2023) ☆17 · Updated 11 months ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆125 · Updated last year
- Long Is More for Alignment: A Simple but Tough-to-Beat Baseline for Instruction Fine-Tuning (ICML 2024) ☆20 · Updated last year