stellalisy / mediQ
☆35 · Updated 10 months ago
Alternatives and similar repositories for mediQ
Users interested in mediQ are comparing it to the libraries listed below.
- [ACL 2024 Findings] This is the code for our paper "Knowledge-Infused Prompting: Assessing and Advancing Clinical Text Data Generation wi…" ☆40 · Updated last year
- [EMNLP 2024] This is the code for our paper "BMRetriever: Tuning Large Language Models as Better Biomedical Text Retrievers". ☆22 · Updated last year
- [NeurIPS 2024] Uncertainty of Thoughts: Uncertainty-Aware Planning Enhances Information Seeking in Large Language Models ☆105 · Updated last year
- [NeurIPS 2024 Datasets and Benchmark Track Oral] MedCalc-Bench: Evaluating Large Language Models for Medical Calculations ☆77 · Updated 3 weeks ago
- Code for "DocLens: Multi-aspect Fine-grained Evaluation for Medical Text Generation" (ACL 2024) ☆21 · Updated last year
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆67 · Updated last year
- ☆25 · Updated 8 months ago
- Code for EMNLP 2024 paper: How do Large Language Models Learn In-Context? Query and Key Matrices of In-Context Heads are Two Towers for M… ☆13 · Updated last year
- Official repository for ICLR 2024 Spotlight paper "Large Language Models Are Not Robust Multiple Choice Selectors" ☆42 · Updated 6 months ago
- ☆29 · Updated last year
- This is the official repo for Towards Uncertainty-Aware Language Agent. ☆30 · Updated last year
- Code repo for ICLR 2024 paper "Can LLMs Express Their Uncertainty? An Empirical Evaluation of Confidence Elicitation in LLMs" ☆137 · Updated last year
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆125 · Updated last year
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆63 · Updated last year
- ☆48 · Updated 9 months ago
- [ML4H'25] m1: Unleash the Potential of Test-Time Scaling for Medical Reasoning in Large Language Models ☆47 · Updated 8 months ago
- Lightweight Adapting for Black-Box Large Language Models ☆24 · Updated last year
- AbstainQA, ACL 2024 ☆28 · Updated last year
- [ICLR 2025] Unintentional Unalignment: Likelihood Displacement in Direct Preference Optimization ☆31 · Updated 10 months ago
- ☆40 · Updated last year
- [ACL 2024] Shifting Attention to Relevance: Towards the Predictive Uncertainty Quantification of Free-Form Large Language Models ☆59 · Updated last year
- Implementation of the paper "ReDeEP: Detecting Hallucination in Retrieval-Augmented Generation via Mechanistic Interpretability" ☆52 · Updated 6 months ago
- ☆180 · Updated last year
- Source code for NeurIPS'24 paper "HaloScope: Harnessing Unlabeled LLM Generations for Hallucination Detection" ☆64 · Updated 8 months ago
- [EMNLP'24] EHRAgent: Code Empowers Large Language Models for Complex Tabular Reasoning on Electronic Health Records ☆116 · Updated 11 months ago
- ☆29 · Updated 9 months ago
- Official code for ICML 2024 paper on Persona In-Context Learning (PICLe) ☆26 · Updated last year
- Code for Fine-grained Uncertainty Quantification for LLMs from Semantic Similarities (NeurIPS'24) ☆32 · Updated 11 months ago
- Code for EMNLP 2024 paper: Neuron-Level Knowledge Attribution in Large Language Models ☆48 · Updated last year
- ☆26 · Updated 2 years ago