harshjuly12 / Enhancing-Explainability-in-Fake-News-Detection-A-SHAP-Based-Approach-for-Bidirectional-LSTM-Models

Enhancing Explainability in Fake News Detection applies SHAP to Bidirectional LSTM (BiLSTM) classifiers to improve the transparency and interpretability of fake news detection, providing insight into the model's decision-making process.
Oct 11, 2024 · Updated last year

Alternatives and similar repositories for Enhancing-Explainability-in-Fake-News-Detection-A-SHAP-Based-Approach-for-Bidirectional-LSTM-Models

Users interested in Enhancing-Explainability-in-Fake-News-Detection-A-SHAP-Based-Approach-for-Bidirectional-LSTM-Models are comparing it to the libraries listed below.

