sail-sg / lm-random-memory-access
☆15 · Updated last year
Alternatives and similar repositories for lm-random-memory-access
Users interested in lm-random-memory-access are comparing it to the repositories listed below.
- Long Is More for Alignment: A Simple but Tough-to-Beat Baseline for Instruction Fine-Tuning [ICML 2024] ☆18 · Updated last year
- ☆29 · Updated last year
- ☆44 · Updated last year
- Source code for "Preference-grounded Token-level Guidance for Language Model Fine-tuning" (NeurIPS 2023) ☆16 · Updated 7 months ago
- ☆51 · Updated last year
- Official code for SEAL: Steerable Reasoning Calibration of Large Language Models for Free ☆40 · Updated 4 months ago
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity ☆77 · Updated 5 months ago
- Analyzing LLM Alignment via Token Distribution Shift ☆17 · Updated last year
- ☆41 · Updated 11 months ago
- The code for "Improving Weak-to-Strong Generalization with Scalable Oversight and Ensemble Learning" ☆17 · Updated last year
- Augmenting Statistical Models with Natural Language Parameters ☆27 · Updated 11 months ago
- [NeurIPS 2023] GitHub repository for "Composing Parameter-Efficient Modules with Arithmetic Operations" ☆61 · Updated last year
- ☆96 · Updated last year
- Official PyTorch implementation of "Query-Efficient Black-Box Red Teaming via Bayesian Optimization" (ACL'23) ☆15 · Updated 2 years ago
- Test-time training on nearest neighbors for large language models ☆45 · Updated last year
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆63 · Updated 9 months ago
- Code for the paper "Preserving Diversity in Supervised Fine-tuning of Large Language Models" ☆37 · Updated 3 months ago
- [ACL 2023] Knowledge Unlearning for Mitigating Privacy Risks in Language Models ☆82 · Updated 11 months ago
- Restore safety in fine-tuned language models through task arithmetic ☆28 · Updated last year
- ☆38 · Updated last year
- Learning adapter weights from task descriptions ☆19 · Updated last year
- Official code for the paper "Understanding the Reasoning Ability of Language Models From the Perspective of Reasoning Paths Aggregation" ☆20 · Updated last year
- ☆13 · Updated last month
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆61 · Updated last year
- Official repository for the ICLR 2024 Spotlight paper "Large Language Models Are Not Robust Multiple Choice Selectors" ☆41 · Updated 3 months ago
- Provides the answer to "How to do patching on all available SAEs on GPT-2?"; official repository of the implementation of the p… ☆12 · Updated 7 months ago
- ☆44 · Updated last year
- Official repo for "Towards Uncertainty-Aware Language Agent" ☆28 · Updated last year
- Methods and evaluation for aligning language models temporally ☆29 · Updated last year
- ☆17 · Updated last year