kttian / llm_factuality_tuning
☆38 · Updated last year
Alternatives and similar repositories for llm_factuality_tuning
Users interested in llm_factuality_tuning are comparing it to the libraries listed below:
- ☆54 · Updated last year
- Implementation of ICML 23 Paper: Specializing Smaller Language Models towards Multi-Step Reasoning. ☆132 · Updated 2 years ago
- A Kernel-Based View of Language Model Fine-Tuning https://arxiv.org/abs/2210.05643 ☆78 · Updated 2 years ago
- ☆86 · Updated 2 years ago
- Grade-School Math with Irrelevant Context (GSM-IC) benchmark is an arithmetic reasoning dataset built upon GSM8K, by adding irrelevant se… ☆62 · Updated 2 years ago
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆59 · Updated last year
- Methods and evaluation for aligning language models temporally ☆30 · Updated last year
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems. ☆63 · Updated last year
- Repo for the paper "Large Language Models Struggle to Learn Long-Tail Knowledge" ☆78 · Updated 2 years ago
- ☆17 · Updated last year
- ☆41 · Updated last year
- Official repository for MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models [NeurIPS 2024] ☆74 · Updated 11 months ago
- ACL'23: Unified Demonstration Retriever for In-Context Learning ☆37 · Updated last year
- Learning adapter weights from task descriptions ☆19 · Updated last year
- [ACL 2023 Findings] What In-Context Learning "Learns" In-Context: Disentangling Task Recognition and Task Learning ☆21 · Updated 2 years ago
- ☆177 · Updated last year
- Source codes for "Preference-grounded Token-level Guidance for Language Model Fine-tuning" (NeurIPS 2023). ☆16 · Updated 9 months ago
- The accompanying code for "Transformer Feed-Forward Layers Are Key-Value Memories". Mor Geva, Roei Schuster, Jonathan Berant, and Omer Le… ☆96 · Updated 4 years ago
- The Unreliability of Explanations in Few-shot Prompting for Textual Reasoning (NeurIPS 2022) ☆16 · Updated 2 years ago
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆116 · Updated last year
- ☆49 · Updated 2 years ago
- ☆103 · Updated last year
- Code for "Tracing Knowledge in Language Models Back to the Training Data" ☆39 · Updated 2 years ago
- ☆44 · Updated last year
- ☆64 · Updated 2 years ago
- Released code and demo program for LLM self-verification ☆63 · Updated 2 years ago
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆131 · Updated last year
- [ACL'24 Oral] Analysing The Impact of Sequence Composition on Language Model Pre-Training ☆22 · Updated last year
- ☆88 · Updated 2 years ago
- A curated list of awesome resources dedicated to Scaling Laws for LLMs ☆79 · Updated 2 years ago