OAfzal / nlp-for-peer-review
☆50 · Updated last year
Alternatives and similar repositories for nlp-for-peer-review
Users who are interested in nlp-for-peer-review are comparing it to the repositories listed below.
- ☆116 · Updated last year
- ☆57 · Updated 2 years ago
- Code for "From Pretraining Data to Language Models to Downstream Tasks: Tracking the Trails of Political Biases Leading to Unfair NLP Mod… ☆40 · Updated last year
- Easy-to-use MIRAGE code for faithful answer attribution in RAG applications. Paper: https://aclanthology.org/2024.emnlp-main.347/ ☆26 · Updated 9 months ago
- ☆47 · Updated last year
- This repository contains data, code and models for contextual noncompliance. ☆24 · Updated last year
- Inspecting and Editing Knowledge Representations in Language Models ☆119 · Updated 2 years ago
- Repository for the Bias Benchmark for QA dataset. ☆133 · Updated last year
- An open-source library for contamination detection in NLP datasets and Large Language Models (LLMs). ☆58 · Updated last year
- [NeurIPS 2023 D&B Track] Code and data for paper "Revisiting Out-of-distribution Robustness in NLP: Benchmarks, Analysis, and LLMs Evalua… ☆36 · Updated 2 years ago
- ☆24 · Updated 2 years ago
- This repository contains the dataset and code for "WiCE: Real-World Entailment for Claims in Wikipedia" in EMNLP 2023. ☆42 · Updated 2 years ago
- Resources for cultural NLP research ☆113 · Updated 3 months ago
- ☆44 · Updated last year
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆68 · Updated last year
- ☆165 · Updated last year
- The data and the PyTorch implementation for the models and experiments in the paper "Exploiting Asymmetry for Synthetic Training Data Gen… ☆64 · Updated 2 years ago
- The LM Contamination Index is a manually created database of contamination evidence for LMs. ☆81 · Updated last year
- The Prism Alignment Project ☆87 · Updated last year
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆161 · Updated 6 months ago
- Synthetic question-answering dataset to formally analyze the chain-of-thought output of large language models on a reasoning task. ☆154 · Updated 3 months ago
- ACL 2022: An Empirical Survey of the Effectiveness of Debiasing Techniques for Pre-trained Language Models. ☆153 · Updated 4 months ago
- Repo accompanying our paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers". ☆80 · Updated last year
- Data and code for the preprint "In-Context Learning with Long-Context Models: An In-Depth Exploration" ☆41 · Updated last year
- Repository for the ACL 2024 conference website ☆18 · Updated 10 months ago
- Token-level Reference-free Hallucination Detection ☆97 · Updated 2 years ago
- [ACL 2025 Main] Official Repository for "Evaluating Language Models as Synthetic Data Generators" ☆40 · Updated last year
- Exploring the Limitations of Large Language Models on Multi-Hop Queries ☆29 · Updated 9 months ago
- Supporting code for the ReCEval paper ☆31 · Updated last year
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆63 · Updated 2 years ago