Open Access

Med Sci (Paris)
Volume 42, Number 1, January 2026
IA et santé

| | |
|---|---|
| Page(s) | 71–77 |
| Section | Repères |
| DOI | https://doi.org/10.1051/medsci/2025194 |
| Published online | 23 January 2026 |
- Jin D, Sergeeva E, Weng WH, et al. Explainable deep learning in healthcare: a methodological survey from an attribution view. arXiv preprint 2021; arXiv:2105.06602.
- Singh A, Sengupta S, Lakshminarayanan V. Explainable deep learning models in medical image analysis. arXiv preprint 2020; arXiv:2003.07319.
- Ribeiro MT, Singh S, Guestrin C. Why should I trust you? Explaining the predictions of any classifier. Proc 22nd ACM SIGKDD Int Conf Knowl Discov Data Min 2016; 1135–44.
- Lundberg SM, Lee SI. A unified approach to interpreting model predictions. Adv Neural Inf Process Syst 2017; 30: 4765–74.
- Rudin C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat Mach Intell 2019; 1: 206–15.
- Murdoch WJ, Singh C, Kumbier K, et al. Interpretable machine learning: definitions, methods, and applications. Proc Natl Acad Sci USA 2019; 116: 22071–80.
- Hao Y. Evaluating attribution methods using white-box LSTMs. arXiv preprint 2020; arXiv:2004.12565.
- De Souza LA, Mendel R, Strasser S, et al. Convolutional neural networks for the evaluation of cancer in Barrett’s esophagus: explainable AI to lighten up the black box. Comput Biol Med 2021; 135: 104579.
- Castelvecchi D. Can we open the black box of AI? Nature 2016; 538: 20–3.
- Van der Linden I, Haned H, Kanoulas E. Global aggregations of local explanations for black box models. arXiv preprint 2019; arXiv:1907.03039.
- Azam S, Montaha S, Fahim KU, et al. Using feature maps to unpack the CNN black box theory with two medical datasets of different modality. Intell Syst Appl 2023; 18: 200233.
- Salahuddin Z, Woodruff HC, Chatterjee A, et al. Transparency of deep neural networks for medical image analysis: a review of interpretability methods. Comput Biol Med 2022; 140: 105111.
- Zhang Z, Chen P, McGough, et al. Toward an expert level of lung cancer detection and classification using a deep convolutional neural network. Oncologist 2019; 24: 1159–66.
- Zhang X, Liu B, Liu K, Wang L. The diagnosis performance of convolutional neural network in the detection of pulmonary nodules: a systematic review and meta-analysis. Acta Radiol 2023; 64: 1680–90.
- Zafar MR, Khan NM. Deterministic local interpretable model-agnostic explanations for stable explainability. Mach Learn Knowl Extr 2021; 3: 525–41.
- Zhang Y, Song K, Sun Y, et al. Why should you trust my explanation? Understanding uncertainty in LIME explanations. arXiv preprint 2019; arXiv:1904.12991.
- Rahnama AH, Boström H. A study of data and label shift in the LIME framework. arXiv preprint 2019; arXiv:1911.11371.
- Venkatsubramaniam B, Baruah PK. Comparative study of XAI using formal concept lattice and LIME. ICTACT J Soft Comput 2022; 13: 2782–91.
- Humphreys P. Extending ourselves: computational science, empiricism, and scientific method. Oxford: Oxford Univ Press, 2004.
- Bach S, Binder A, Montavon G, et al. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS One 2015; 10: e0130140.
- Yang H, Rudin C, Seltzer M. Scalable Bayesian rule lists. arXiv preprint 2016; arXiv:1602.08610.
- Letham B, Rudin C, McCormick TH, et al. Interpretable classifiers using rules and Bayesian analysis: building a better stroke prediction model. Ann Appl Stat 2015; 9: 1350–71.
- Atzmueller M, Fürnkranz J, Kliegr T, et al. Explainable and interpretable machine learning and data mining. Data Min Knowl Discov 2024; 38: 2571–95.
- Nwoke U, Farooqui M, Oleson J, et al. Bayesian modeling framework for optimizing pre-hospital stroke triage decisions. J Appl Stat 2024; 1–23.
- Six AJ, Backus BE, Kelder JC. Chest pain in the emergency room: value of the HEART score. Neth Heart J 2008; 16: 191–6.
- Friedman JH, Popescu BE. Predictive learning via rule ensembles. Ann Appl Stat 2008; 2: 916–54.
- Scheipl F, Kneib T, Fahrmeir L. Penalized likelihood and Bayesian function selection in regression models. AStA Adv Stat Anal 2013; 97: 349–85.
- Cai X, McEwen JD, Pereyra M. Proximal nested sampling for high-dimensional Bayesian model selection. Stat Comput 2022; 32: 87.
- Tomova GD, Gilthorpe MS, Arriagada Bruneau, et al. Distinguishing the transparency, explainability, and interpretability of algorithms. J Epidemiol Community Health 2022; 76: A66.
- Silva V, Costa M, Oliveira E. On the semantic interpretability of artificial intelligence models. arXiv preprint 2019; arXiv:1905.10615.
- Charlton CE, Poon MTC, Brennan PM, et al. Interpretable machine learning classifiers for brain tumour survival prediction. arXiv preprint 2021; arXiv:2105.05859.
- Bhattacharyya A, Pal S, Mitra R, et al. Applications of Bayesian shrinkage prior models in clinical research with categorical responses. BMC Med Res Methodol 2022; 22: 251.
- Gelman A, Simpson D, Betancourt M. The prior can generally only be understood in the context of the likelihood. Bayesian Anal 2017; 12: 1–15.
- Polton D. Les données de santé. Med Sci (Paris) 2018; 34: 449–55.
- Gehrmann S, Dernoncourt F, Li Y, et al. Comparing rule-based and deep learning models for patient phenotyping. arXiv preprint 2017; arXiv:1703.08705.
- Jean A. Une brève introduction à l’intelligence artificielle. Med Sci (Paris) 2020; 36: 1059–67.
- Dumas M, Fay AF, Charpentier E, Matricon J. Le jumeau numérique en santé – État des lieux et perspectives d’application à l’hôpital. Med Sci (Paris) 2023; 39: 953–95.
- Bilal A, Ebert D, Lin B. LLMs for explainable AI: a comprehensive survey. arXiv preprint 2025; arXiv:2504.00125.
