BunsenFeng / model_swarm
☆17 · Updated 6 months ago
Alternatives and similar repositories for model_swarm
Users interested in model_swarm are comparing it to the repositories listed below.
- ☆13 · Updated last year
- Official Implementation of "Democratizing Large Language Models via Personalized Parameter-Efficient Fine-tuning" at EMNLP 2024 Main Conf… ☆29 · Updated 5 months ago
- Principled Data Selection for Alignment: The Hidden Risks of Difficult Examples ☆37 · Updated 2 months ago
- Direct preference optimization with f-divergences. ☆13 · Updated 7 months ago
- Official code for SEAL: Steerable Reasoning Calibration of Large Language Models for Free ☆28 · Updated 2 months ago
- ☆49 · Updated last year
- A Sober Look at Language Model Reasoning ☆74 · Updated last week
- [ICLR 2025 Workshop] "Landscape of Thoughts: Visualizing the Reasoning Process of Large Language Models" ☆25 · Updated last week
- Code for "Towards Revealing the Mystery behind Chain of Thought: a Theoretical Perspective" ☆20 · Updated last year
- Code for the ICML 2024 paper "Rewards-in-Context: Multi-objective Alignment of Foundation Models with Dynamic Preference Adjustment" ☆72 · Updated 2 weeks ago
- The official code for the paper "What Constitutes a Faithful Summary? Preserving Author Perspectives in News Summarization" ☆10 · Updated last year
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity ☆72 · Updated 3 months ago
- [ICLR 2025] Unintentional Unalignment: Likelihood Displacement in Direct Preference Optimization ☆29 · Updated 5 months ago
- [Up-to-date] Awesome Agentic Deep Research Resources ☆30 · Updated this week
- ☆65 · Updated 2 months ago
- ☆139 · Updated last month
- [NeurIPS 2024] Official code of $\beta$-DPO: Direct Preference Optimization with Dynamic $\beta$ ☆45 · Updated 8 months ago
- [COLM'24] "Deductive Beam Search: Decoding Deducible Rationale for Chain-of-Thought Reasoning" ☆21 · Updated last year
- Official code for the paper "Understanding the Reasoning Ability of Language Models From the Perspective of Reasoning Paths Aggregation" ☆20 · Updated last year
- A Framework for LLM-based Multi-Agent Reinforced Training and Inference ☆136 · Updated 2 weeks ago
- [ACL'24, Outstanding Paper] Emulated Disalignment: Safety Alignment for Large Language Models May Backfire! ☆37 · Updated 10 months ago
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆79 · Updated 2 months ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆121 · Updated 9 months ago
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆112 · Updated 9 months ago
- ☆44 · Updated last year
- Awesome SAE papers ☆35 · Updated last month
- The repository for the project "Fine-tuning Large Language Models with Sequential Instructions"; the code base comes from open-instruct and LA… ☆29 · Updated 7 months ago
- GenRM-CoT: Data release for verification rationales ☆61 · Updated 8 months ago
- Awesome-Efficient-Inference-for-LRMs is a collection of state-of-the-art, novel, exciting, token-efficient methods for Large Reasoning Mo… ☆72 · Updated last week
- An effective weight-editing method for mitigating overly short reasoning in LLMs, and a mechanistic study uncovering how reasoning length… ☆12 · Updated 2 weeks ago