nuochenpku / LLaMA_Analysis
This is the official repository for our paper: Is Bigger and Deeper Always Better? Probing LLaMA Across Scales and Layers
☆31 · Updated 2 years ago
Alternatives and similar repositories for LLaMA_Analysis
Users interested in LLaMA_Analysis are comparing it to the repositories listed below.
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆45 · Updated 3 months ago
- Codebase for Instruction Following without Instruction Tuning ☆36 · Updated last year
- Aioli: A unified optimization framework for language model data mixing ☆32 · Updated last year
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆63 · Updated last year
- Scalable Meta-Evaluation of LLMs as Evaluators ☆43 · Updated last year
- A dataset of LLM-generated chain-of-thought steps annotated with mistake location ☆85 · Updated last year
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆47 · Updated 9 months ago
- The repository contains code for Adaptive Data Optimization ☆31 · Updated last year
- ☆23 · Updated last month
- Code for "Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective" ☆33 · Updated last year
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆48 · Updated 2 years ago
- Sotopia-RL: Reward Design for Social Intelligence ☆46 · Updated 5 months ago
- ☆49 · Updated 9 months ago
- ☆75 · Updated last year
- When Reasoning Meets Its Laws ☆34 · Updated 3 weeks ago
- ☆29 · Updated 3 months ago
- Official repository for "Safer-Instruct: Aligning Language Models with Automated Preference Data" ☆17 · Updated last year
- Code for PHATGOOSE introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆91 · Updated last year
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆61 · Updated last year
- ☆15 · Updated last year
- Official repository for ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆75 · Updated 8 months ago
- ☆35 · Updated last year
- Evaluate the Quality of Critique ☆36 · Updated last year
- ☆50 · Updated 11 months ago
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated last year
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆58 · Updated 11 months ago
- [ACL 2025] Are Your LLMs Capable of Stable Reasoning? ☆32 · Updated 5 months ago
- [NeurIPS 2024] OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI ☆107 · Updated 10 months ago
- The code implementation of MAGDi: Structured Distillation of Multi-Agent Interaction Graphs Improves Reasoning in Smaller Language Models… ☆40 · Updated last year
- CodeUltraFeedback: aligning large language models to coding preferences (TOSEM 2025) ☆73 · Updated last year