ruocwang / mixture-of-prompts
[ICML 2024] One Prompt is Not Enough: Automated Construction of a Mixture-of-Expert Prompts - TurningPoint AI
☆20 · Updated 6 months ago
Alternatives and similar repositories for mixture-of-prompts:
Users interested in mixture-of-prompts are comparing it to the libraries listed below.
- ☆83 · Updated 2 weeks ago
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆56 · Updated 11 months ago
- Enhancing Mathematical Reasoning… ☆48 · Updated 10 months ago
- ☆49 · Updated 7 months ago
- [NeurIPS 2024] "Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?" ☆35 · Updated 2 months ago
- ☆37 · Updated last year
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆52 · Updated 4 months ago
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆107 · Updated 11 months ago
- Lightweight Adapting for Black-Box Large Language Models ☆23 · Updated last year
- LoFiT: Localized Fine-tuning on LLM Representations ☆34 · Updated 2 months ago
- Code for paper: Aligning Large Language Models with Representation Editing: A Control Perspective ☆25 · Updated last month
- Code for Fine-grained Uncertainty Quantification for LLMs from Semantic Similarities (NeurIPS'24) ☆17 · Updated 3 months ago
- Official repository for paper: O1-Pruner: Length-Harmonizing Fine-Tuning for O1-Like Reasoning Pruning ☆64 · Updated last month
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆119 · Updated 6 months ago
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" ☆103 · Updated last year
- Interpretable Contrastive Monte Carlo Tree Search Reasoning ☆46 · Updated 4 months ago
- [ICLR 2025] Cheating Automatic LLM Benchmarks: Null Models Achieve High Win Rates (Oral) ☆75 · Updated 5 months ago
- [NeurIPS 2024] The official implementation of paper: Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs. ☆104 · Updated this week
- Code for paper "Unraveling Cross-Modality Knowledge Conflicts in Large Vision-Language Models." ☆42 · Updated 5 months ago
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆72 · Updated 7 months ago
- ☆128 · Updated last week
- Official Repository for The Paper: Safety Alignment Should Be Made More Than Just a Few Tokens Deep ☆82 · Updated 8 months ago
- The code of "Improving Weak-to-Strong Generalization with Scalable Oversight and Ensemble Learning" ☆16 · Updated last year
- ☆50 · Updated last year
- SLED: Self Logits Evolution Decoding for Improving Factuality in Large Language Model https://arxiv.org/pdf/2411.02433 ☆24 · Updated 3 months ago
- ☆61 · Updated 4 months ago
- This is a unified platform for implementing and evaluating test-time reasoning mechanisms in Large Language Models (LLMs). ☆14 · Updated 2 months ago
- Test-time-training on nearest neighbors for large language models ☆39 · Updated 11 months ago
- Official code for "Decoding-Time Language Model Alignment with Multiple Objectives". ☆19 · Updated 4 months ago
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆54 · Updated 5 months ago