UPSI Digital Repository (UDRep)
Universiti Pendidikan Sultan Idris

Abstract
In recent years, the embrace of artificial intelligence (AI) in health care has dramatically changed the medical landscape. Medical centres have adopted AI applications to increase the accuracy of disease diagnosis and mitigate health risks, and these applications have reshaped the rules, policies, and work ethics of healthcare practice. However, building trustworthy and explainable AI (XAI) in healthcare systems is still in its early stages. The European Union has stated that AI must be human-centred and trustworthy, while in the healthcare sector low methodological quality and high risk of bias remain major concerns. This study offers a systematic review of the trustworthiness and explainability of AI applications in healthcare, incorporating assessments of quality, bias risk, and data fusion to supplement previous studies and provide more accurate and definitive findings. In total, 64 recent contributions on the trustworthiness of AI in healthcare were identified from multiple databases (ScienceDirect, Scopus, Web of Science, and IEEE Xplore) using a rigorous literature search method and selection criteria. The selected papers were organised into a coherent, systematic classification of seven categories: explainable robotics, prediction, decision support, blockchain, transparency, digital health, and review. We present a systematic and comprehensive analysis of earlier studies and open the door to future work by discussing the challenges, motivations, and recommendations in depth. A systematic science-mapping analysis was also performed to reorganise and summarise the results of earlier studies and to address issues of trustworthiness and objectivity.
Moreover, this work provides decisive evidence for the trustworthiness of AI in health care through eight state-of-the-art critical analyses of the most relevant research gaps. To the best of our knowledge, this study is also the first to investigate the feasibility of trustworthy and XAI applications in healthcare that incorporate data fusion techniques, connecting important pieces of information from available healthcare datasets and AI algorithms. The analysis of the reviewed contributions revealed crucial implications for academics and practitioners, after which potential methodological aspects for enhancing the trustworthiness of AI applications in the medical sector were examined. The theoretical concepts and current use of 17 XAI methods in health care were then addressed, with a focus on several types of information fusion: data, feature, image, decision, multimodal, hybrid, and temporal fusion. Finally, objectives and guidelines were provided to policymakers for establishing electronic healthcare systems that achieve key features such as legitimacy, morality, and robustness.

2023
This material may be protected under the Copyright Act, which governs the making of photocopies or reproductions of copyrighted materials. You may use the digitized material for private study, scholarship, or research.