kmeng01 / rome
Locating and editing factual associations in GPT (NeurIPS 2022)
☆641 · Updated last year
Alternatives and similar repositories for rome
Users interested in rome are comparing it to the libraries listed below.
- Mass-editing thousands of facts into a transformer memory (ICLR 2023) ☆500 · Updated last year
- Inference-Time Intervention: Eliciting Truthful Answers from a Language Model ☆530 · Updated 4 months ago
- Representation Engineering: A Top-Down Approach to AI Transparency ☆836 · Updated 10 months ago
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆498 · Updated 5 months ago
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition ☆640 · Updated 11 months ago
- ☆212 · Updated last year
- [ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning ☆456 · Updated 8 months ago
- Tools for understanding how transformer predictions are built layer-by-layer ☆500 · Updated last year
- Code repository supporting the paper "Atlas: Few-shot Learning with Retrieval Augmented Language Models" (https://arxiv.org/abs/2208.03…) ☆538 · Updated last year
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. ☆546 · Updated last year
- This is the repository of HaluEval, a large-scale hallucination evaluation benchmark for Large Language Models. ☆479 · Updated last year
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆857 · Updated 2 weeks ago
- A package to evaluate factuality of long-form generation. Original implementation of our EMNLP 2023 paper "FActScore: Fine-grained Atomic… ☆353 · Updated 2 months ago
- TruthfulQA: Measuring How Models Imitate Human Falsehoods ☆747 · Updated 5 months ago
- RewardBench: the first evaluation tool for reward models. ☆604 · Updated last week
- Code and data for "Lost in the Middle: How Language Models Use Long Contexts" ☆347 · Updated last year
- Reading list on instruction tuning, a trend that started with Natural-Instructions (ACL 2022), FLAN (ICLR 2022), and T0 (ICLR 2022). ☆768 · Updated last year
- MEND: Fast Model Editing at Scale ☆245 · Updated last year
- A curated list of Human Preference Datasets for LLM fine-tuning, RLHF, and eval. ☆367 · Updated last year
- Official repository of NEFTune: Noisy Embeddings Improve Instruction Finetuning ☆396 · Updated last year
- Code and data for "MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning" [ICLR 2024]☆375Updated 9 months ago
- OpenICL is an open-source framework to facilitate research, development, and prototyping of in-context learning.☆563Updated last year
- ☆226Updated 8 months ago
- Codebase for Merging Language Models (ICML 2024)☆832Updated last year
- Using sparse coding to find distributed representations used by neural networks.☆255Updated last year
- Original Implementation of Prompt Tuning from Lester, et al, 2021☆685Updated 3 months ago
- Sparsify transformers with SAEs and transcoders☆568Updated this week
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning☆617Updated last year
- Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them☆497Updated 11 months ago
- Contriever: Unsupervised Dense Information Retrieval with Contrastive Learning☆741Updated 2 years ago