HITsz-TMG / awesome-llm-attributions
A Survey of Attributions for Large Language Models
☆218 · Updated last year
Alternatives and similar repositories for awesome-llm-attributions
Users interested in awesome-llm-attributions are comparing it to the repositories listed below.
- A Survey on Data Selection for Language Models ☆252 · Updated 6 months ago
- Paper list on reasoning in NLP ☆194 · Updated 7 months ago
- ☆189 · Updated 4 months ago
- Scaling Sentence Embeddings with Large Language Models ☆110 · Updated last year
- A package to evaluate factuality of long-form generation. Original implementation of our EMNLP 2023 paper "FActScore: Fine-grained Atomic… ☆400 · Updated 7 months ago
- EMNLP'23 survey: a curation of awesome papers and resources on refreshing large language models (LLMs) without expensive retraining. ☆136 · Updated last year
- ACL 2023 - AlignScore, a metric for factual consistency evaluation. ☆143 · Updated last year
- Implementation of the paper "Making Retrieval-Augmented Language Models Robust to Irrelevant Context" ☆74 · Updated last year
- The repository for the survey paper "Survey on Large Language Models Factuality: Knowledge, Retrieval and Domain-Specificity" ☆339 · Updated last year
- Do Large Language Models Know What They Don't Know? ☆101 · Updated last year
- [EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey" ☆145 · Updated last year
- https://acl2023-retrieval-lm.github.io/ ☆158 · Updated 2 years ago
- [ICLR'24 Spotlight] "Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts" ☆77 · Updated last year
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper "R-Tuning: Instructing Large Language Models to Say 'I Don't… ☆125 · Updated last year
- Source code of the paper "GPTScore: Evaluate as You Desire" ☆257 · Updated 2 years ago
- Code implementation of synthetic continued pretraining ☆138 · Updated 10 months ago
- Data and code for Program of Thoughts [TMLR 2023] ☆292 · Updated last year
- The repository of HaluEval, a large-scale hallucination evaluation benchmark for Large Language Models ☆526 · Updated last year
- [EMNLP 2023] Enabling Large Language Models to Generate Text with Citations. Paper: https://arxiv.org/abs/2305.14627 ☆499 · Updated last year
- GitHub repository for "RAGTruth: A Hallucination Corpus for Developing Trustworthy Retrieval-Augmented Language Models" ☆207 · Updated 11 months ago
- [NeurIPS 2024] Source code for xRAG: Extreme Context Compression for Retrieval-augmented Generation with One Token ☆161 · Updated last year
- [ICLR 2025] InstructRAG: Instructing Retrieval-Augmented Generation via Self-Synthesized Rationales ☆131 · Updated 9 months ago
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆118 · Updated last year
- ☆294 · Updated last year
- A Survey of Hallucination in Large Foundation Models ☆55 · Updated last year
- ☆47 · Updated last year
- Project for the paper "Instruction Tuning for Large Language Models: A Survey" ☆191 · Updated 3 months ago
- Repository for MuSiQue: Multi-hop Questions via Single-hop Question Composition, TACL 2022 ☆176 · Updated last year
- [NAACL'24] Dataset, code and models for "TableLlama: Towards Open Large Generalist Models for Tables" ☆130 · Updated last year
- [IJCAI 2024] FactCHD: Benchmarking Fact-Conflicting Hallucination Detection ☆90 · Updated last year