uclaml / SPPO
The official implementation of Self-Play Preference Optimization (SPPO)
☆498 · Updated 3 months ago
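SPPO fine-tunes a policy against its own previous iterate, pushing each sampled response's log-density ratio toward a value proportional to its estimated win rate. The sketch below is a rough, non-official illustration of that squared-error objective; the function name, tensor names, and the `eta` value are illustrative assumptions, not this repository's code.

```python
import torch

def sppo_loss(logp_new, logp_old, win_rate, eta=1e3):
    """Illustrative SPPO-style objective (not the repo's exact code).

    logp_new: log pi_theta(y|x) under the policy being trained
    logp_old: log pi_t(y|x) under the frozen previous iterate
    win_rate: estimated P(y beats pi_t | x), e.g. from a preference model
    eta:      scaling constant; the default here is a placeholder guess
    """
    # Push the log-density ratio toward eta * (win_rate - 1/2).
    target = eta * (win_rate - 0.5)
    ratio = logp_new - logp_old
    return ((ratio - target) ** 2).mean()

# Toy usage with random tensors standing in for real model outputs.
logp_new = torch.randn(8, requires_grad=True)
logp_old = torch.randn(8)
win_rate = torch.rand(8)
loss = sppo_loss(logp_new, logp_old, win_rate)
loss.backward()
```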
Related projects
Alternatives and complementary repositories for SPPO
- A recipe for online RLHF and online iterative DPO. ☆436 · Updated 2 weeks ago
- Recipes to train reward models for RLHF. ☆927 · Updated this week
- MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models ☆387 · Updated 9 months ago
- [NeurIPS 2024] BAdam: A Memory Efficient Full Parameter Optimization Method for Large Language Models ☆215 · Updated this week
- AvaTaR: Optimizing LLM Agents for Tool Usage via Contrastive Reasoning (NeurIPS 2024) ☆170 · Updated this week
- Benchmarking LLMs via Uncertainty Quantification ☆221 · Updated 9 months ago
- The official implementation of the ICML 2024 paper "MemoryLLM: Towards Self-Updatable Large Language Models" ☆91 · Updated last month
- Unified KV Cache Compression Methods for LLMs ☆767 · Updated this week
- Explore concepts like Self-Correct, Self-Refine, Self-Improve, Self-Contradict, Self-Play, and Self-Knowledge, alongside o1-like reasoning ☆162 · Updated last week
- The Official Repo of ML-Bench: Evaluating Large Language Models and Agents for Machine Learning Tasks on Repository-Level Code (https://a… ☆355 · Updated this week
- The official implementation of the pre-print paper "AutoDAN-Turbo: A Lifelong Agent for Strategy Self-Exploration to Jailbreak LLMs" ☆153 · Updated last week
- [ECCV 2024] Does Your Multi-modal LLM Truly See the Diagrams in Visual Math Problems? ☆149 · Updated 2 months ago
- The repository for the paper "Leopard: A Vision Language Model for Text-Rich Multi-Image Tasks" ☆184 · Updated 3 weeks ago
- Building Open LLM Web Agents with Self-Evolving Online Curriculum RL ☆213 · Updated last week
- A curated list of awesome leaderboard-oriented resources for foundation models ☆194 · Updated this week
- An open-source implementation for training LLaVA-NeXT. ☆395 · Updated last month
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆129 · Updated 2 months ago
- Benchmarking LLMs with Challenging Tasks from Real Users ☆198 · Updated 3 weeks ago
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆199 · Updated 6 months ago
- RewardBench: the first evaluation tool for reward models. ☆437 · Updated last month
- A simple unified framework for evaluating LLMs ☆145 · Updated 2 weeks ago
- Official repository for "Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing". Your efficient and high-quality s… ☆495 · Updated 2 weeks ago
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆716 · Updated 2 weeks ago
- (AAAI 2024) BLIVA: A Simple Multimodal LLM for Better Handling of Text-Rich Visual Questions ☆270 · Updated 7 months ago
- The official evaluation suite and dynamic data release for MixEval. ☆224 · Updated 2 weeks ago
- Code and example data for the paper "Rule Based Rewards for Language Model Safety" ☆158 · Updated 4 months ago