harshjuly12 / Enhancing-Explainability-in-Fake-News-Detection-A-SHAP-Based-Approach-for-Bidirectional-LSTM-Models

Enhancing Explainability in Fake News Detection applies SHAP to Bidirectional LSTM (BiLSTM) fake news classifiers to improve their transparency and interpretability, providing insight into the model's decision-making process.
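The sketch below illustrates the general idea behind the repository: explain a Keras BiLSTM classifier's predictions with SHAP so that each token in an article receives an attribution score. It is a minimal, hedged example, not the repository's actual pipeline; the dataset is stand-in random data, and the layer sizes, tokenizer limits, and choice of `shap.DeepExplainer` are assumptions (the SHAP library documents this explainer for Keras LSTM text models, but it may need version-specific adjustments on newer TensorFlow releases).

```python
# Minimal sketch (assumed architecture and data, not the repo's exact code):
# a BiLSTM fake-news classifier over padded token-id sequences, explained with SHAP.
import numpy as np
import shap
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE, MAX_LEN = 20000, 200  # assumed tokenizer vocabulary and sequence length

# Dummy data standing in for a tokenized, padded fake-news corpus (0 = real, 1 = fake).
X_train = np.random.randint(1, VOCAB_SIZE, size=(500, MAX_LEN))
y_train = np.random.randint(0, 2, size=(500,))
X_test = np.random.randint(1, VOCAB_SIZE, size=(50, MAX_LEN))

# Bidirectional LSTM binary classifier.
model = models.Sequential([
    layers.Embedding(VOCAB_SIZE, 128, input_length=MAX_LEN),
    layers.Bidirectional(layers.LSTM(64)),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_train, y_train, epochs=1, batch_size=32, verbose=0)

# Explain predictions: a background sample anchors the expected model output,
# and shap_values gives one attribution per token position for each test article.
background = X_train[np.random.choice(len(X_train), 100, replace=False)]
explainer = shap.DeepExplainer(model, background)
shap_values = explainer.shap_values(X_test[:10])
token_importance = shap_values[0]  # shape: (10, MAX_LEN), per-token contribution scores
```

Positive scores push the prediction toward the "fake" class and negative scores toward "real"; mapping the scores back through the tokenizer's index yields the word-level explanations the project aims to surface.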

Alternatives and similar repositories for Enhancing-Explainability-in-Fake-News-Detection-A-SHAP-Based-Approach-for-Bidirectional-LSTM-Models:

Users who are interested in Enhancing-Explainability-in-Fake-News-Detection-A-SHAP-Based-Approach-for-Bidirectional-LSTM-Models are comparing it to the libraries listed below.