IA et santé
Open Access
Med Sci (Paris)
Volume 42, Number 1, January 2026
Page(s): 71–77
Section: Repères
DOI: https://doi.org/10.1051/medsci/2025194
Published online: 23 January 2026
  1. Jin D, Sergeeva E, Weng WH, et al. Explainable deep learning in healthcare: a methodological survey from an attribution view. arXiv preprint 2021 ; arXiv:2105.06602. [Google Scholar]
  2. Singh A, Sengupta S, Lakshminarayanan V. Explainable deep learning models in medical image analysis. arXiv preprint 2020 ; arXiv:2003.07319. [Google Scholar]
  3. Ribeiro MT, Singh S, Guestrin C. Why should I trust you? Explaining the predictions of any classifier. Proc 22nd ACM SIGKDD Int Conf Knowl Discov Data Min 2016 ; 1135–44. [Google Scholar]
  4. Lundberg SM, Lee SI. A unified approach to interpreting model predictions. Adv Neural Inf Process Syst 2017 ; 30 : 4765–74. [Google Scholar]
  5. Rudin C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat Mach Intell 2019 ; 1 : 206–15. [Google Scholar]
  6. Murdoch WJ, Singh C, Kumbier K, et al. Interpretable machine learning: definitions, methods, and applications. Proc Natl Acad Sci USA 2019 ; 116 : 22071–80. [Google Scholar]
  7. Hao Y. Evaluating attribution methods using white-box LSTMs. arXiv preprint 2020 ; arXiv:2004.12565. [Google Scholar]
  8. De Souza LA, Mendel R, Strasser S, et al. Convolutional neural networks for the evaluation of cancer in Barrett’s esophagus: explainable AI to lighten up the black box. Comput Biol Med 2021 ; 135 : 104579. [Google Scholar]
  9. Castelvecchi D. Can we open the black box of AI? Nature 2016 ; 538 : 20–3. [Google Scholar]
  10. Van der Linden I, Haned H, Kanoulas E. Global aggregations of local explanations for black box models. arXiv preprint 2019 ; arXiv:1907.03039. [Google Scholar]
  11. Azam S, Montaha S, Fahim KU, et al. Using feature maps to unpack the CNN black box theory with two medical datasets of different modality. Intell Syst Appl 2023 ; 18 : 200233. [Google Scholar]
  12. Salahuddin Z, Woodruff HC, Chatterjee A, et al. Transparency of deep neural networks for medical image analysis: a review of interpretability methods. Comput Biol Med 2022 ; 140 : 105111. [Google Scholar]
  13. Zhang Z, Chen P, McGough, et al. Toward an expert level of lung cancer detection and classification using a deep convolutional neural network. Oncologist 2019 ; 24 : 1159–66. [Google Scholar]
  14. Zhang X, Liu B, Liu K, Wang L. The diagnosis performance of convolutional neural network in the detection of pulmonary nodules: a systematic review and meta-analysis. Acta Radiol 2023 ; 64 : 1680–90. [Google Scholar]
  15. Zafar MR, Khan NM. Deterministic local interpretable model-agnostic explanations for stable explainability. Mach Learn Knowl Extr 2021 ; 3 : 525–41. [Google Scholar]
  16. Zhang Y, Song K, Sun Y, et al. Why should you trust my explanation? Understanding uncertainty in LIME explanations. arXiv preprint 2019 ; arXiv:1904.12991. [Google Scholar]
  17. Rahnama AH, Boström H. A study of data and label shift in the LIME framework. arXiv preprint 2019 ; arXiv:1911.11371. [Google Scholar]
  18. Venkatsubramaniam B, Baruah PK. Comparative study of XAI using formal concept lattice and LIME. ICTACT J Soft Comput 2022 ; 13 : 2782–91. [Google Scholar]
  19. Humphreys P. Extending ourselves: computational science, empiricism, and scientific method. Oxford: Oxford Univ Press, 2004. [Google Scholar]
  20. Bach S, Binder A, Montavon G, et al. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS One 2015 ; 10 : e0130140. [Google Scholar]
  21. Yang H, Rudin C, Seltzer M. Scalable Bayesian rule lists. arXiv preprint 2016 ; arXiv:1602.08610. [Google Scholar]
  22. Letham B, Rudin C, McCormick TH, et al. Interpretable classifiers using rules and Bayesian analysis: building a better stroke prediction model. Ann Appl Stat 2015 ; 9 : 1350–71. [Google Scholar]
  23. Atzmueller M, Fürnkranz J, Kliegr T, et al. Explainable and interpretable machine learning and data mining. Data Min Knowl Discov 2024 ; 38 : 2571–95. [Google Scholar]
  24. Nwoke U, Farooqui M, Oleson J, et al. Bayesian modeling framework for optimizing pre-hospital stroke triage decisions. J Appl Stat 2024 ; 1–23. [Google Scholar]
  25. Six AJ, Backus BE, Kelder JC. Chest pain in the emergency room: value of the HEART score. Neth Heart J 2008 ; 16 : 191–6. [Google Scholar]
  26. Friedman JH, Popescu BE. Predictive learning via rule ensembles. Ann Appl Stat 2008 ; 2 : 916–54. [Google Scholar]
  27. Scheipl F, Kneib T, Fahrmeir L. Penalized likelihood and Bayesian function selection in regression models. AStA Adv Stat Anal 2013 ; 97 : 349–85. [Google Scholar]
  28. Cai X, McEwen JD, Pereyra M. Proximal nested sampling for high-dimensional Bayesian model selection. Stat Comput 2022 ; 32 : 87. [Google Scholar]
  29. Tomova GD, Gilthorpe MS, Arriagada Bruneau, et al. Distinguishing the transparency, explainability, and interpretability of algorithms. J Epidemiol Community Health 2022 ; 76 : A66. [Google Scholar]
  30. Silva V, Costa M, Oliveira E. On the semantic interpretability of artificial intelligence models. arXiv preprint 2019 ; arXiv:1905.10615. [Google Scholar]
  31. Charlton CE, Poon MTC, Brennan PM, et al. Interpretable machine learning classifiers for brain tumour survival prediction. arXiv preprint 2021 ; arXiv:2105.05859. [Google Scholar]
  32. Bhattacharyya A, Pal S, Mitra R, et al. Applications of Bayesian shrinkage prior models in clinical research with categorical responses. BMC Med Res Methodol 2022 ; 22 : 251. [Google Scholar]
  33. Gelman A, Simpson D, Betancourt M. The prior can generally only be understood in the context of the likelihood. Bayesian Anal 2017 ; 12 : 1–15. [Google Scholar]
  34. Polton D. Les données de santé. Med Sci (Paris) 2018 ; 34 : 449–55. [CrossRef] [EDP Sciences] [PubMed] [Google Scholar]
  35. Gehrmann S, Dernoncourt F, Li Y, et al. Comparing rule-based and deep learning models for patient phenotyping. arXiv preprint 2017 ; arXiv:1703.08705. [Google Scholar]
  36. Jean A. Une brève introduction à l’intelligence artificielle. Med Sci (Paris) 2020 ; 36 : 1059–67. [CrossRef] [EDP Sciences] [PubMed] [Google Scholar]
  37. Dumas M, Fay AF, Charpentier E, Matricon J. Le jumeau numérique en santé – État des lieux et perspectives d’application à l’hôpital. Med Sci (Paris) 2023 ; 39 : 953–95. [CrossRef] [EDP Sciences] [PubMed] [Google Scholar]
  38. Bilal A, Ebert D, Lin B. LLMs for explainable AI: a comprehensive survey. arXiv preprint 2025 ; arXiv:2504.00125. [Google Scholar]
