wenzhe-li / Self-MoA
☆17 · Updated 5 months ago
Alternatives and similar repositories for Self-MoA
Users interested in Self-MoA are comparing it to the repositories listed below.
- Self-Supervised Alignment with Mutual Information ☆20 · Updated last year
- Exploration of automated dataset selection approaches at large scales. ☆47 · Updated 4 months ago
- ☆20 · Updated last year
- ☆18 · Updated 4 months ago
- ICML 2024 - Official Repository for EXO: Towards Efficient Exact Optimization of Language Model Alignment ☆58 · Updated last year
- Official Repo of Paper "Eliminating Position Bias of Language Models: A Mechanistic Approach" ☆14 · Updated last month
- Official implementation of ICML 2025 paper "Beyond Bradley-Terry Models: A General Preference Model for Language Model Alignment" (https:…) ☆25 · Updated 2 months ago
- ☆18 · Updated 8 months ago
- Evaluate the Quality of Critique ☆36 · Updated last year
- Code for Paper (Preserving Diversity in Supervised Fine-tuning of Large Language Models) ☆33 · Updated 2 months ago
- ☆25 · Updated 10 months ago
- Code for EMNLP'24 paper - On Diversified Preferences of Large Language Model Alignment ☆16 · Updated 11 months ago
- ☆33 · Updated 6 months ago
- ☆87 · Updated last year
- Is In-Context Learning Sufficient for Instruction Following in LLMs? [ICLR 2025] ☆30 · Updated 5 months ago
- Official implementation of AAAI 2025 paper "Augmenting Math Word Problems via Iterative Question Composing" (https://arxiv.org/abs/2401.09…) ☆20 · Updated 7 months ago
- ☆29 · Updated 2 years ago
- ☆18 · Updated last year
- Official repository of paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Updated last year
- ☆22 · Updated last year
- ☆83 · Updated 2 months ago
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆48 · Updated last year
- CodeUltraFeedback: aligning large language models to coding preferences ☆71 · Updated last year
- ☆20 · Updated 2 months ago
- Code for reproducing our paper "Low Rank Adapting Models for Sparse Autoencoder Features" ☆11 · Updated 3 months ago
- ☆14 · Updated last year
- [ICML 2025] Official code of "AlphaDPO: Adaptive Reward Margin for Direct Preference Optimization" ☆19 · Updated 9 months ago
- [ACL 2024] Self-Training with Direct Preference Optimization Improves Chain-of-Thought Reasoning ☆45 · Updated 11 months ago
- ☆19 · Updated last year
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆44 · Updated 3 months ago