ai-agi / LLMs-Enhanced-Long-Text-Generation-Survey
Long-Form Natural Language Generation Based on Large Language Models
☆17 · Updated last year
Alternatives and similar repositories for LLMs-Enhanced-Long-Text-Generation-Survey
Users interested in LLMs-Enhanced-Long-Text-Generation-Survey are comparing it to the libraries listed below.
- ☆49 · Updated last year
- ☆27 · Updated 2 years ago
- Safety-J: Evaluating Safety with Critique ☆16 · Updated 11 months ago
- ☆26 · Updated 9 months ago
- BeaverTails is a collection of datasets designed to facilitate research on safety alignment in large language models (LLMs). ☆147 · Updated last year
- Recent papers on (1) Psychology of LLMs; (2) Biases in LLMs. ☆49 · Updated last year
- Code and Results of the Paper: On the Reliability of Psychological Scales on Large Language Models ☆30 · Updated 9 months ago
- FeatureAlignment = Alignment + Mechanistic Interpretability ☆28 · Updated 3 months ago
- A versatile toolkit for applying Logit Lens to modern large language models (LLMs). Currently supports Llama-3.1-8B and Qwen-2.5-7B, enab… ☆89 · Updated 4 months ago
- [EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey" ☆126 · Updated 9 months ago
- ☆75 · Updated 6 months ago
- The awesome agents in the era of large language models ☆65 · Updated last year
- [ACL 2024] SALAD benchmark & MD-Judge ☆150 · Updated 3 months ago
- Repo for paper: Examining LLMs' Uncertainty Expression Towards Questions Outside Parametric Knowledge ☆14 · Updated last year
- Constraint Back-translation Improves Complex Instruction Following of Large Language Models ☆13 · Updated last month
- ☆54 · Updated 10 months ago
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆113 · Updated 9 months ago
- ☆15 · Updated 2 weeks ago
- ☆20 · Updated last year
- [EMNLP 2024] The official GitHub repo for the paper "Course-Correction: Safety Alignment Using Synthetic Preferences" ☆19 · Updated 8 months ago
- ☆24 · Updated 2 years ago
- Code & Data for our Paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆65 · Updated last year
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆68 · Updated 2 years ago
- Multilingual safety benchmark for Large Language Models ☆50 · Updated 9 months ago
- An open-source library for contamination detection in NLP datasets and Large Language Models (LLMs). ☆57 · Updated 10 months ago
- Official repository for ICML 2024 paper "On Prompt-Driven Safeguarding for Large Language Models" ☆92 · Updated last month
- Source code of our paper MIND, ACL 2024 Long Paper ☆42 · Updated last year
- ☆33 · Updated 8 months ago
- Repository for "Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning" ☆164 · Updated last year
- Information on NLP PhD applications worldwide. ☆37 · Updated 10 months ago