EternityYW / BiasEval-LLM-MentalHealth
Unveiling and Mitigating Bias in Mental Health Analysis with Large Language Models
☆11 · Updated 11 months ago
Alternatives and similar repositories for BiasEval-LLM-MentalHealth
Users interested in BiasEval-LLM-MentalHealth are comparing it to the repositories listed below.
- [NeurIPS 2024 Datasets and Benchmark Track Oral] MedCalc-Bench: Evaluating Large Language Models for Medical Calculations ☆61 · Updated last month
- [EMNLP'24] EHRAgent: Code Empowers Large Language Models for Complex Tabular Reasoning on Electronic Health Records ☆98 · Updated 5 months ago
- ☆18 · Updated 7 months ago
- Mosaic IT: Enhancing Instruction Tuning with Data Mosaics ☆18 · Updated 3 months ago
- ☆27 · Updated 4 months ago
- [ACL 2024 Findings] This is the code for our paper "Knowledge-Infused Prompting: Assessing and Advancing Clinical Text Data Generation wi… ☆39 · Updated 11 months ago
- RuleR: Improving LLM Controllability by Rule-based Data Recycling ☆12 · Updated last month
- The official GitHub repository for the paper "R^2AG: Incorporating Retrieval Information into Retrieval Augmented Generation" (EMNLP 2024 Fin… ☆33 · Updated 6 months ago
- SLED: Self Logits Evolution Decoding for Improving Factuality in Large Language Model (https://arxiv.org/pdf/2411.02433) ☆25 · Updated 6 months ago
- [AAAI 2024] MELO: Enhancing Model Editing with Neuron-indexed Dynamic LoRA ☆25 · Updated last year
- The official GitHub page for the paper "NegativePrompt: Leveraging Psychology for Large Language Models Enhancement via Negative Emotional St… ☆22 · Updated last year
- ☆13 · Updated 5 months ago
- ☆48 · Updated 3 months ago
- [EMNLP 2024] This is the code for our paper "BMRetriever: Tuning Large Language Models as Better Biomedical Text Retrievers". ☆21 · Updated 8 months ago
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆59 · Updated last year
- An official implementation of "Catastrophic Failure of LLM Unlearning via Quantization" (ICLR 2025) ☆27 · Updated 3 months ago
- Repository of paper "How Likely Do LLMs with CoT Mimic Human Reasoning?" ☆22 · Updated 3 months ago
- Graph-R1: Incentivizing Reasoning-on-Graph Capability in LLM via Reinforcement Learning ☆18 · Updated 2 months ago
- ☆14 · Updated 10 months ago
- AbstainQA, ACL 2024 ☆25 · Updated 7 months ago
- Dataset for Checking Consistency between Unstructured Notes and Structured Tables in Electronic Health Records ☆21 · Updated 9 months ago
- ☆24 · Updated last month
- Code for "Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective" ☆32 · Updated last year
- ☆16 · Updated 10 months ago
- [NeurIPS 2024] Code and Data Repo for Paper "Embedding Trajectory for Out-of-Distribution Detection in Mathematical Reasoning" ☆26 · Updated last year
- Source code for NeurIPS'24 paper "HaloScope: Harnessing Unlabeled LLM Generations for Hallucination Detection" ☆45 · Updated last month
- Public code repo for COLING 2025 paper "Aligning LLMs with Individual Preferences via Interaction" ☆27 · Updated 2 months ago
- [EMNLP 2024] A Peek into Token Bias: Large Language Models Are Not Yet Genuine Reasoners ☆21 · Updated 5 months ago
- Code for paper "Towards Mitigating LLM Hallucination via Self Reflection" ☆24 · Updated last year
- [EMNLP 2024] Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue ☆35 · Updated last week