LLaMafia / SFT_function_learning
Explore what LLMs are really learning during SFT
☆26 · Updated 7 months ago
Related projects
Alternatives and complementary repositories for SFT_function_learning
- Code for the ACL 2024 paper "Adversarial Preference Optimization (APO)". ☆49 · Updated 5 months ago
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing". ☆42 · Updated 2 months ago
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems. ☆47 · Updated 4 months ago
- Code and data used in the paper "Training on Incorrect Synthetic Data via RL Scales LLM Math Reasoning Eight-Fold" ☆26 · Updated 5 months ago
- A curated list of awesome resources dedicated to Scaling Laws for LLMs ☆63 · Updated last year
- ☆51 · Updated 7 months ago
- Evaluating Mathematical Reasoning Beyond Accuracy ☆37 · Updated 7 months ago
- Implementation of the ICML 2023 paper: Specializing Smaller Language Models towards Multi-Step Reasoning. ☆125 · Updated last year
- Domain-specific preference (DSP) data and customized RM fine-tuning. ☆24 · Updated 8 months ago
- ☆65 · Updated 5 months ago
- ☆39 · Updated 7 months ago
- ☆24 · Updated 6 months ago
- Collection of papers for scalable automated alignment. ☆73 · Updated 3 weeks ago
- Official repository for the paper "Weak-to-Strong Extrapolation Expedites Alignment" ☆68 · Updated 5 months ago
- Code for M4LE: A Multi-Ability Multi-Range Multi-Task Multi-Domain Long-Context Evaluation Benchmark for Large Language Models ☆22 · Updated 3 months ago
- Analyzing LLM alignment via token distribution shift ☆13 · Updated 9 months ago
- This is an official implementation of the Reward rAnked Fine-Tuning Algorithm (RAFT), also known as iterative best-of-n fine-tuning or re… ☆16 · Updated last month
- Towards Systematic Measurement for Long Text Quality ☆28 · Updated 2 months ago
- ☆89 · Updated 11 months ago
- ☆101 · Updated 5 months ago
- The official implementation of the paper "Learning From Failure: Integrating Negative Examples when Fine-tuning Large Language Models as Agen… ☆21 · Updated 8 months ago
- Source code for "Preference-grounded Token-level Guidance for Language Model Fine-tuning" (NeurIPS 2023). ☆14 · Updated last year
- ☆16 · Updated last week
- The code and data for the paper JiuZhang3.0 ☆35 · Updated 5 months ago
- Code and models for the EMNLP 2024 paper "WPO: Enhancing RLHF with Weighted Preference Optimization" ☆29 · Updated last month
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆97 · Updated 2 months ago
- [ICML 2024] Can AI Assistants Know What They Don't Know? ☆70 · Updated 9 months ago
- Feeling confused about super alignment? Here is a reading list. ☆43 · Updated 10 months ago
- ☆22 · Updated last year
- [ACL 2024] MT-Bench-101: A Fine-Grained Benchmark for Evaluating Large Language Models in Multi-Turn Dialogues ☆51 · Updated 3 months ago