DeqingFu / transformers-icl-second-order
Official repository for our paper, Transformers Learn Higher-Order Optimization Methods for In-Context Learning: A Study with Linear Models.
☆16 · Updated 3 months ago
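As a rough illustration of the kind of higher-order optimization method the paper's title refers to (and not code taken from this repository), the sketch below runs an iterative Newton (Newton-Schulz) solver for in-context linear regression: each step refines an approximate inverse of X^T X, so the error shrinks quadratically rather than linearly as with plain gradient descent. The function name `newton_iterates` and all parameter choices are illustrative assumptions.

```python
# Minimal sketch, assuming the "higher-order method" of interest is iterative
# Newton's method for least-squares regression on in-context examples (X, y).
import numpy as np

def newton_iterates(X, y, num_steps=12):
    """Return the weight estimate after each Newton-Schulz step on (X, y)."""
    S = X.T @ X                      # d x d second-moment matrix
    # Standard initialization guaranteeing ||I - M0 @ S|| < 1 for invertible S.
    M = S.T / (np.linalg.norm(S, 1) * np.linalg.norm(S, np.inf))
    iterates = []
    for _ in range(num_steps):
        M = 2 * M - M @ S @ M        # Newton step toward (X^T X)^{-1}
        iterates.append(M @ X.T @ y) # current estimate of the OLS solution
    return iterates

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, n = 8, 32
    w_star = rng.normal(size=d)
    X = rng.normal(size=(n, d))
    y = X @ w_star + 0.01 * rng.normal(size=n)
    w_ols = np.linalg.lstsq(X, y, rcond=None)[0]
    for k, w in enumerate(newton_iterates(X, y), start=1):
        print(f"step {k:2d}  ||w_k - w_OLS|| = {np.linalg.norm(w - w_ols):.2e}")
```

The printed errors decay doubly exponentially in the step count, which is the signature of a second-order method that studies like this one contrast with first-order (gradient-descent-style) in-context learning.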
Alternatives and similar repositories for transformers-icl-second-order:
Users interested in transformers-icl-second-order are comparing it to the repositories listed below.
- ☆35 · Updated last year
- Official implementation of Rewarded Soups (☆54 · Updated last year)
- A library for efficient patching and automatic circuit discovery (☆54 · Updated 2 weeks ago)
- Official code for the paper "Probing the Decision Boundaries of In-context Learning in Large Language Models", https://arxiv.org/abs/2406.11233… (☆16 · Updated 6 months ago)
- ☆21 · Updated 5 months ago
- Lightweight Adapting for Black-Box Large Language Models (☆20 · Updated last year)
- ☆45 · Updated 6 months ago
- Code for the paper "Policy Optimization in RLHF: The Impact of Out-of-preference Data" (☆26 · Updated last year)
- ☆19 · Updated this week
- ☆28 · Updated last year
- Align your LM to express calibrated verbal statements of confidence in its long-form generations (☆22 · Updated 8 months ago)
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity (☆64 · Updated 3 months ago)
- ☆12 · Updated 11 months ago
- ☆15 · Updated 10 months ago
- Official implementation of the paper "Building Math Agents with Multi-Turn Iterative Preference Learning" with multi-turn DP… (☆20 · Updated 2 months ago)
- Code for the ICML 2024 paper "Rewards-in-Context: Multi-objective Alignment of Foundation Models with Dynamic Preference Adjustment" (☆57 · Updated 2 months ago)
- ☆88 · Updated 2 weeks ago
- ICML 2024 - Official repository for EXO: Towards Efficient Exact Optimization of Language Model Alignment (☆50 · Updated 8 months ago)
- Code for the paper "Aligning Large Language Models with Representation Editing: A Control Perspective" (☆24 · Updated last month)
- ☆35 · Updated 11 months ago
- Implementation of PaCE: Parsimonious Concept Engineering for Large Language Models, NeurIPS 2024 (☆33 · Updated 3 months ago)
- Preprint: Asymmetry in Low-Rank Adapters of Foundation Models (☆35 · Updated last year)
- Bayesian low-rank adaptation for large language models (☆22 · Updated 9 months ago)
- Simple and scalable tools for data-driven pretraining data selection (☆15 · Updated 2 weeks ago)
- Source code for "Preference-grounded Token-level Guidance for Language Model Fine-tuning", NeurIPS 2023 (☆15 · Updated last month)
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" (☆101 · Updated 11 months ago)
- ☆89 · Updated last year
- Sparse Autoencoder Training Library (☆42 · Updated 4 months ago)
- Conformal Language Modeling (☆28 · Updated last year)
- ☆60 · Updated 3 years ago