Artificial Intelligence Architectures Used in Dentistry

Abstract

Artificial intelligence architectures represent important technological approaches that contribute to the restructuring of image-based evaluation processes in dentistry. Deep learning–based Convolutional Neural Networks enable the automatic extraction of local features from dental radiological data, while Vision Transformer–based models offer a different perspective for analyzing complex anatomical structures through their capacity to model global contextual relationships within images. Hybrid Convolutional Neural Network–Transformer architectures provide a more comprehensive evaluation framework by integrating both local and global representations. Segmentation-oriented architectures such as U-Net and its variants enable pixel-level delineation of anatomical and pathological structures, thereby supporting diagnostic interpretation and treatment planning. Object detection architectures contribute to the automatic identification of clinically relevant regions within dental images characterized by wide fields of view. Due to its reliance on high-resolution imaging data and detailed anatomical assessment, dentistry constitutes a distinctive domain for artificial intelligence applications. In this context, appropriate architectural selection, data quality, and accurate definition of clinical requirements emerge as key determinants for the reliable and sustainable integration of artificial intelligence systems into dental practice.



Published

April 14, 2026
