Computer vision is an interdisciplinary field that focuses on automating and replicating visual tasks by processing, analyzing, and understanding visual data (images or videos) in the same way that humans do. Computer vision and pattern recognition techniques have demonstrated great potential in many fields, including medicine.1-6 Computer vision has a long history, spanning decades of research aimed at enabling machines to perceive visual stimuli meaningfully. Computer vision–based technologies have gained significant acceptance in radiology,7-11 where computer-aided diagnostic systems assist medical professionals in making accurate diagnoses and predictions.
Artificial intelligence (AI), and machine learning (ML) in particular, is the most discussed topic today in medical imaging research and has the potential to permeate all fields of medicine, significantly altering the way medicine is practiced. Taking advantage of the increasingly large amount of labeled medical data, AI can augment existing computer-based tools and assist medical professionals with certain repetitive and/or specialized tasks. For example, computed tomography (CT) and magnetic resonance imaging produce hundreds of slices per examination, making manual review a time-consuming and exhausting task. In some cases, the images acquired from these systems are also degraded by distortions (eg, noise or poor quality). AI might help achieve accurate results in a short amount of time. Intelligent dental radiographic interpretation tools are being developed to assist dental health professionals and enhance oral health care. In 1978, Richard Bellman defined AI as the automation of activities associated with human thinking abilities, including learning, decision making, and problem solving.12 A dental professional combines the clinical and imaging information collected with expert knowledge to make decisions on prognosis and appropriate patient management. The impact and implications of imaging diagnosis in dentistry are accentuated by the fact that dentistry is one of the few health care fields that routinely uses imaging to screen for abnormalities across all age groups. Additionally, multiple types of imaging of the same anatomical region of the same individual are acquired over time, spanning years, along with the corresponding non–image-based clinical data. Given the added challenge of a relatively low number of oral and maxillofacial radiologists/specialists available to interpret the high volume of diagnostic imaging performed in dentistry, the field could benefit from the support of AI systems in imaging diagnosis.
Over the last decade, studies have been conducted to evaluate the efficiency of AI systems in the detection of dental caries, vertical root fractures, apical lesions, salivary gland diseases, maxillary sinusitis, maxillofacial cysts, metastasis to cervical lymph nodes, and osteoporosis, as well as in orthodontic diagnosis and treatment, to name a few. Many of these studies have focused on CT, cone-beam computed tomography, bitewing, cephalometric, and panoramic radiographic diagnoses using neural networks (convolutional, artificial, and probabilistic neural networks), and concluded that the AI systems offered accurate diagnostic capabilities.13-24 Figure 1 demonstrates a basic AI system designed to identify periodontal bone loss (PBL) in panoramic radiographs.
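To make the system of Figure 1 concrete, the following is a minimal sketch in Python using PyTorch. The layer sizes, input resolution, and two-class output are illustrative assumptions, not the architecture of any cited study:

import torch
import torch.nn as nn

# Minimal sketch of a CNN classifier for periodontal bone loss (PBL)
# detection in panoramic radiographs, in the spirit of Figure 1.
class PBLClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):  # PBL present / absent (assumed)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # grayscale input
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # tolerant of varying image sizes
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = PBLClassifier()
radiograph = torch.randn(1, 1, 256, 512)  # one panoramic image (assumed size)
logits = model(radiograph)                # unnormalized scores per class

In practice, such a backbone would be trained on labeled panoramic radiographs with a standard cross-entropy loss.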
AI in dentistry has proven potential to assist clinicians in providing appropriate patient care: it can support early diagnosis; enhance the accuracy of diagnosis, treatment planning, and the prediction and monitoring of outcomes; improve efficiency; serve as a second opinion; add value to forensic diagnosis and the measurement of clinical/treatment outcomes; and support the objective management of appropriate insurance coverage. Ideally, with ongoing advances in imaging and technology, computer systems could reach an economically viable state, enabling AI to serve as an adjunct to the clinical acumen of the radiologist.25
Despite this potential, AI solutions have not yet become the norm in routine medical practice. In dentistry specifically, imaging plays a vital role in screening and treatment. Dental conditions such as caries, apical lesions, and PBL are relatively prevalent, making it comparatively easy to build the datasets needed to train and optimize AI systems. Despite these well-suited conditions, systems based on convolutional neural networks (CNNs) have only recently been adopted in dental radiograph research, and applications based on these technologies are only now entering the clinical arena.26
Related art and challenges of AI
Lee et al27 proposed an end-to-end CNN-based deep learning system to detect 19 cephalometric landmarks in dental x-ray images. Multiple CNN-based regression systems were created to predict the coordinate variables from the images. A total of 38 regression systems with the same CNN structure were trained to compute 38 coordinate variables. Finally, 19 landmarks were extracted by pairing the regressed coordinates.
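The coordinate-regression idea can be sketched as follows in PyTorch. For brevity, a single backbone with a 38-dimensional output stands in for the 38 separate regressors described above; this is an illustrative simplification, not the exact design of Lee et al:

import torch
import torch.nn as nn

NUM_LANDMARKS = 19  # cephalometric landmarks, 2 coordinates each

# One shared backbone regressing all 38 coordinate variables at once
# (the cited work trained 38 separate CNNs with the same structure).
backbone = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(64, 2 * NUM_LANDMARKS),  # 38 coordinate variables
)

image = torch.randn(1, 1, 512, 512)            # assumed input resolution
coords = backbone(image)                       # shape (1, 38)
landmarks = coords.view(-1, NUM_LANDMARKS, 2)  # paired as (x, y) per landmark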
Song et al28 proposed a 2-step method to detect cephalometric landmarks on skeletal x-ray images. The first step involves extracting the regions of interest by cropping patches and registering the test images to the training images. The second step uses the state-of-the-art CNN-based ResNet5029 model to detect the landmarks in the extracted patches.
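A sketch of this second step using torchvision's stock ResNet50 appears below. The first step (patch cropping and registration) is stubbed out as a simple center crop, an assumption made purely for illustration:

import torch
from torchvision import models
from torchvision.transforms.functional import center_crop

def extract_roi(image: torch.Tensor, size: int = 224) -> torch.Tensor:
    # Placeholder for step one: patch extraction and registration
    # against the training images (simplified here to a center crop).
    return center_crop(image, [size, size])

model = models.resnet50(weights=None)  # newer torchvision API; load weights as available
model.fc = torch.nn.Linear(model.fc.in_features, 2 * 19)  # landmark coordinates

xray = torch.randn(1, 3, 512, 512)  # assumed skeletal x-ray tensor
patch = extract_roi(xray)
pred = model(patch)  # regressed landmark coordinates for the patch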
Bouchahma et al30 proposed an automated system to detect decay and predict the required treatment from dental x-ray images. The CNN-based system is designed to predict 3 types of treatment, namely, fluoride, filling, and root canal. The CNN architecture was trained on 200 images and tested on 35 images, achieving a total success rate of 86% for treatment prediction.
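The reported success rate corresponds to top-1 accuracy over the held-out test images. A minimal sketch of such an evaluation, assuming a trained model and a standard PyTorch test loader, might look like this:

import torch

TREATMENTS = ["fluoride", "filling", "root canal"]

@torch.no_grad()
def success_rate(model, test_loader) -> float:
    """Fraction of test images whose predicted treatment matches the label."""
    model.eval()
    correct, total = 0, 0
    for images, labels in test_loader:
        preds = model(images).argmax(dim=1)  # index of most likely treatment
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total  # 0.86 would correspond to the reported 86%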
Muresan et al31 developed a novel approach using a CNN to detect teeth and classify 14 different dental issues in panoramic x-ray images. The classes consisted of healthy tooth, missing tooth, dental restoration, implant, fixed prosthetics work, mobile prosthetics work (dentures), root canal device, fixed prosthetic work and root canal device, fixed prosthetic work and implant, fixed prosthetic work and devitalized tooth, devitalized tooth and restoration, dental inclusion, polished tooth, another problem, and background. The CNN system was trained using 1000 panoramic images and reached an accuracy of 89% in detecting the teeth. Finally, a label was generated for each tooth, identifying the problem affecting it by using a histogram-based majority voting system.
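The final voting step is simple to sketch: the class votes accumulated over a detected tooth are tallied into a histogram, and the most frequent class becomes the tooth's label. The vote format below is an assumption for illustration:

from collections import Counter

def majority_vote(votes):
    """Return the most frequent class label among the votes for one tooth."""
    histogram = Counter(votes)  # class -> vote count
    label, _count = histogram.most_common(1)[0]
    return label

# Example: per-pixel (or per-detection) class votes inside one tooth region
votes = ["healthy tooth", "dental restoration", "healthy tooth", "healthy tooth"]
print(majority_vote(votes))  # -> "healthy tooth"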
Kim et al proposed a fully automated network using CNNs to detect PBL in panoramic dental radiographs. The overall framework consists of multiple stages: the first stage was trained to extract the region of interest (the teeth region); the second stage trained a network to segment and predict the PBL lesion; the third stage used the pretrained weights of the second-stage encoder to create a classification network that predicts the existence of PBL in each tooth; and the final stage consists of a classification network that predicts the existence of PBL lesions specifically for the molar and premolar teeth. The network was trained and tested on panoramic dental radiographs.

In a separate study, Kim et al32 demonstrated the utility of a deep learning–based CNN algorithm for chronological age estimation. A total of 9435 panoramic dental x-rays were used, with ages ranging from 2 to 98 years. The authors employed a curriculum learning strategy together with the state-of-the-art DenseNet33-based CNN model to predict chronological age from these images.
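Curriculum learning orders training from easier to harder examples. A sketch of one such schedule with a DenseNet backbone follows; the precomputed difficulty scores and the three-stage split are assumptions for illustration, not the exact schedule of the cited study:

import torch
from torch.utils.data import DataLoader, Subset
from torchvision import models

model = models.densenet121(weights=None)  # DenseNet backbone
model.classifier = torch.nn.Linear(model.classifier.in_features, 1)  # age output

def curriculum_loaders(dataset, difficulty, stages=3, batch_size=16):
    """Yield loaders over progressively larger, harder subsets of the data."""
    order = sorted(range(len(dataset)), key=lambda i: difficulty[i])
    for stage in range(1, stages + 1):
        cutoff = len(order) * stage // stages  # easiest fraction first
        yield DataLoader(Subset(dataset, order[:cutoff]),
                         batch_size=batch_size, shuffle=True)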
As suggested by Schwendicke et al,34 some of the main reasons AI technologies have not been fully adopted in dentistry are that (1) dental data are not readily accessible because of data protection and privacy concerns; (2) datasets lack structure, are complex and multidimensional, and are often biased with overly sick, overly healthy, or overly affluent data points; (3) datasets are relatively small compared with other image-based datasets for AI; (4) “hard” gold standards are lacking, and labeling requires an expert; and (5) there is a lack of trust in AI, as it does not provide any feedback on how or why it arrived at a prediction. Furthermore, when trained AI models are tested on data never encountered in the training phase, they may produce irrelevant results that could lead to misdiagnosis.
Eye tracking and its applications in AI
Eye tracking has been used extensively for research in the fields of marketing, image interpretation, and psychology.35-37 In particular, its role in understanding diagnostic interpretation in medicine has been evolving and expanding.38-47 In dentistry, the interpretation of radiographs interweaves the processes of perception (visually scanning the image) and cognition (decision making and diagnostic reasoning).48 Eye tracking technology can be employed to determine precisely what an observer is focusing on within an image and to reveal patterns in the scanning process. The application of eye tracking in dentistry has provided novel opportunities to study the interpretive process and to elucidate the differences in decision making, perception, misinterpretation, and misdiagnosis between novices and experts.
There is a lack of annotated data and a need for a framework for incorporating eye tracking data into AI systems. Annotating data typically involves manually tracing out regions or objects in the images. This approach is time consuming for a large dataset and requires an expert, such as a radiologist, to perform the annotations. Moreover, most annotation work is performed outside clinical hours and by novice clinicians, possibly reducing the accuracy of the annotations. With the help of AI-based systems and eye tracking technology, the process of annotating data can be automated and performed by experts during clinical hours. Advances are being made to improve disease classification by integrating eye tracking data with deep learning techniques.42,46,49,50 Figure 2 demonstrates the automated annotation process. As seen in Figure 2, the radiographs are first presented to the expert on a screen. The eye tracking information is captured while the expert analyzes the radiograph. The captured eye tracking data are then fed into the AI system to generate the annotations. In some cases, the findings of the radiographs are transcribed and can be used within the AI system to highlight the type of abnormality. Such a system can greatly speed up the annotation process and help ensure accuracy.
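One common way to convert captured fixations into annotations is to place a Gaussian at each fixation point, weighted by dwell time, yielding a soft attention mask over the radiograph. The sketch below, in Python with NumPy, makes illustrative assumptions about the fixation format, image size, and kernel width:

import numpy as np

def gaze_heatmap(fixations, shape=(256, 512), sigma=15.0):
    """fixations: iterable of (row, col, duration_ms); returns a [0, 1] map."""
    rows = np.arange(shape[0])[:, None]
    cols = np.arange(shape[1])[None, :]
    heat = np.zeros(shape, dtype=np.float64)
    for r, c, duration in fixations:
        # Gaussian blob at the fixation, weighted by dwell time
        heat += duration * np.exp(-((rows - r) ** 2 + (cols - c) ** 2)
                                  / (2 * sigma ** 2))
    return heat / heat.max() if heat.max() > 0 else heat

mask = gaze_heatmap([(120, 300, 450), (130, 310, 220)])  # two sample fixations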
Researchers are striving to create AI models that can match or even surpass human capabilities. To meet this expectation, it is crucial to have accurate data with which to develop models that mimic the behavior of a human expert. AI has demonstrated a particularly impressive ability to recognize patterns in data through correlation, but such models are fundamentally incapable of discerning cause and effect. For example, unlike a real doctor, AI algorithms cannot explain why a particular image may suggest a disease. Furthermore, although human involvement is pivotal in medicine, current state-of-the-art systems are limited to visual imagery alone, reducing humans to passive observers. To truly mimic an expert, an AI system must consider the visual perception and cognition of the human. Figure 3 illustrates an AI system trained to perceive radiographs as humans do. In the first pass, the radiograph is fed into the AI system, which is trained to predict where the expert looked. In the second pass, the learned eye-gaze map is passed along with the radiograph to classify the type of abnormality. Fusing these different aspects to train an AI model paves the way for a new generation of AI systems that can highlight the identified features and explain the reasons for the outputs they generate. Such a trained AI system could significantly reduce errors and serve as a second opinion for the radiologist. It can further address and bridge gaps in understanding how different radiologists perceive similar radiographs and analyze the patterns or shortcuts used by an expert.
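The two-pass design of Figure 3 can be sketched as a gaze-prediction network whose output is stacked with the radiograph as an extra input channel for the abnormality classifier. Both networks below are illustrative stand-ins rather than a published architecture:

import torch
import torch.nn as nn

gaze_predictor = nn.Sequential(  # pass 1: radiograph -> predicted gaze map
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=3, padding=1), nn.Sigmoid(),
)

classifier = nn.Sequential(      # pass 2: radiograph + gaze map -> abnormality
    nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 2),            # normal vs abnormal (assumed classes)
)

radiograph = torch.randn(1, 1, 256, 512)
gaze_map = gaze_predictor(radiograph)                      # first pass
logits = classifier(torch.cat([radiograph, gaze_map], 1))  # second pass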
The trained AI system can further be used to teach and assist novice clinicians in interpreting radiographs. Instructors and clinicians can use the trained AI to demonstrate and assess gaze patterns during medical training and education, in turn accelerating the transition from novice to expert. Figure 4 illustrates an example of a training tool that evaluates the findings of novice clinicians. First, both the AI and the student are presented with a radiograph. The eye tracker records the novice's eye movements and the areas they concentrate on. The acquired gaze map is then compared with the gaze map generated by the "expert AI," and the results are presented to the novice.
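Comparing the novice's gaze map with the expert AI's map amounts to computing a saliency-similarity score, for which Pearson correlation is one standard choice. A minimal sketch, assuming both maps are same-sized 2D arrays:

import numpy as np

def gaze_similarity(novice_map, expert_map):
    """Pearson correlation between two gaze maps; 1.0 means identical focus."""
    a = novice_map.ravel().astype(np.float64)
    b = expert_map.ravel().astype(np.float64)
    a = (a - a.mean()) / (a.std() + 1e-9)  # zero mean, unit variance
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float(np.mean(a * b))

score = gaze_similarity(np.random.rand(256, 512), np.random.rand(256, 512))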
Figures in this article
Figure 1. A basic AI architecture to identify periodontal bone loss (PBL) in panoramic radiographs.
Figure 2. Illustration of the automated annotation process using eye tracking and AI.
Figure 3. Illustration of an AI system trained to perceive radiographs as humans do.
Figure 4. A training tool that evaluates the findings of novice clinicians against the “expert AI.”
Discussion
Imaging diagnosis involves visual perception of an image and the application of pattern recognition to distinguish normal from abnormal findings, followed by detailed characterization of the abnormal findings to allow for interpretation. The interpretation thus derived enables appropriate patient care and management. The application of AI to image interpretation to generate accurate and reliable diagnoses is expanding. Incorporating eye tracking capabilities into the AI used for image interpretation could facilitate troubleshooting and the analysis of misdiagnoses, because it would provide some access to the "logic" of AI-based decisions. An enhancement to education would be to allow for self-paced learning by novice clinicians, who could learn specifically from expert eye-movement patterns, mimicked by AI, in the absence of the expert. Interprofessional engagement of clinicians, data scientists, and computer engineers is key to the development of effective, efficient AI tools that are framed by ethical guidelines.
The resultant AI protocols could be employed not only for initial screening that supports clinicians by improving efficiency but also in training future clinicians, training and improving ML algorithms, and processing medical claims. Some of the perceived limitations of AI systems that need to be addressed in the near future concern medicolegal regulation for misdiagnoses; the impact of impersonal diagnostics on individual patients, their rights, and their identity; adequate planning for secure management of the data accessed; and guidelines for ethical standards. Other considerations relate to consent, storage of data, and access to and use of data now and in the future.
Gao J, Yang Y, Lin P, Park DS. Computer vision in healthcare applications. J Healthc Eng. 2018:5157020. doi:10.1155/2018/5157020
Razeghi O, Solís-Lemus JA, Lee AWC, et al. CemrgApp: an interactive medical imaging application with image processing, computer vision, and machine learning toolkits for cardiovascular research. SoftwareX. 2020;12:100570. doi:10.1016/j.softx.2020.100570
Khemasuwan D, Sorensen JS, Colt HG. Artificial intelligence in pulmonary medicine: computer vision, predictive model and COVID-19. Eur Respir Rev. 2020;29(157):200181. doi:10.1183/16000617.0181-2020
Rajendran R, Rao SP, Agaian SS, Panetta K. A versatile edge preserving image enhancement approach for medical images using guided filter. In: Proceedings of the 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC); 2016:002341-002346. doi:10.1109/SMC.2016.7844588
Agaian S, Rad P, Rajendran R, Panetta K. A novel technique to enhance low resolution CT and magnetic resonance images in cloud. In: Proceedings of the 2016 IEEE International Conference on Smart Cloud (SmartCloud); 2016:73-78. doi:10.1109/SmartCloud.2016.34
Maji P, Pal SK. Rough-Fuzzy Pattern Recognition: Applications in Bioinformatics and Medical Imaging. John Wiley and Sons; 2012.
Chawla A, Lim TC, Shikhare SN, Munk PL, Peh WCG. Computer vision syndrome: darkness under the shadow of light. Can Assoc Radiol J. 2019;70(1):5-9. doi:10.1016/j.carj.2018.10.005
Peters AA, Decasper A, Munz J, et al. Performance of an AI based CAD system in solid lung nodule detection on chest phantom radiographs compared to radiology residents and fellow radiologists. J Thorac Dis. 2021;13(5):2728-2737. doi:10.21037/jtd-20-3522
Brown M, Browning P, Wahi-Anwar MW, et al. Integration of chest CT CAD into the clinical workflow and impact on radiologist efficiency. Acad Radiol. 2019;26(5):626-631. doi:10.1016/j.acra.2018.07.006
Litjens G, Debats O, Barentsz J, Karssemeijer N, Huisman H. Computer-aided detection of prostate cancer in MRI. IEEE Trans Med Imaging. 2014;33(5):1083-1092. doi:10.1109/TMI.2014.2303821
Katzen J, Dodelzon K. A review of computer aided detection in mammography. Clin Imaging. 2018;52:305-309. doi:10.1016/j.clinimag.2018.08.014
Bellman R. Artificial Intelligence: Can Computers Think? Course Technology; 1978.
Lopes Devito K, de Souza Barbosa F, Felippe Filho WN. An artificial multilayer perceptron neural network for diagnosis of proximal dental caries. Oral Surg Oral Med Oral Pathol Oral Radiol Endod. 2008;106(6):879-884. doi:10.1016/j.tripleo.2008.03.002
Lee J-S, Adhikari S, Liu L, Jeong H-G, Kim H, Yoon S-J. Osteoporosis detection in panoramic radiographs using a deep convolutional neural network-based computer-assisted diagnosis system: a preliminary study. Dentomaxillofac Radiol. 2019;48(1):20170344. doi:10.1259/dmfr.20170344
Lee J-H, Kim D-H, Jeong S-N, Choi S-H. Detection and diagnosis of dental caries using a deep learning-based convolutional neural network algorithm. J Dent. 2018;77:106-111. doi:10.1016/j.jdent.2018.07.015
Casalegno F, Newton T, Daher R, et al. Caries detection with near-infrared transillumination using deep learning. J Dent Res. 2019;98(11):1227-1233. doi:10.1177/0022034519871884
Kise Y, Ikeda H, Fujii T, et al. Preliminary study on the application of deep learning system to diagnosis of Sjögren’s syndrome on CT images. Dentomaxillofac Radiol. 2019;48(6):20190019. doi:10.1259/dmfr.20190019
Ekert T, Krois J, Meinhold L, et al. Deep learning for the radiographic detection of apical lesions. J Endod. 2019;45(7):917-922.e5. doi:10.1016/j.joen.2019.03.016
Murata M, Ariji Y, Ohashi Y, et al. Deep-learning classification using convolutional neural network for evaluation of maxillary sinusitis on panoramic radiography. Oral Radiol. 2019;35(3):301-307. doi:10.1007/s11282-018-0363-7
Ariji Y, Fukuda M, Kise Y, et al. Contrast-enhanced computed tomography image assessment of cervical lymph node metastasis in patients with oral cancer by using a deep learning system of artificial intelligence. Oral Surg Oral Med Oral Pathol Oral Radiol. 2019;127(5):458-463. doi:10.1016/j.oooo.2018.10.002
Ariji Y, Sugita Y, Nagao T, et al. CT evaluation of extranodal extension of cervical lymph node metastases in patients with oral squamous cell carcinoma using deep learning classification. Oral Radiol. 2020;36(2):148-155. doi:10.1007/s11282-019-00391-4
Hung M, Voss MW, Rosales MN, et al. Application of machine learning for diagnostic prediction of root caries. Gerodontology. 2019;36(4):395-404. doi:10.1111/ger.12432
Kim Y, Lee KJ, Sunwoo L, et al. Deep learning in diagnosis of maxillary sinusitis using conventional radiography. Invest Radiol. 2019;54(1):7-15. doi:10.1097/RLI.0000000000000503
Lee K-S, Jung S-K, Ryu J-J, Shin S-W, Choi J. Evaluation of transfer learning with deep convolutional neural networks for screening osteoporosis in dental panoramic radiographs. J Clin Med. 2020;9(2):392. doi:10.3390/jcm9020392
Amato F, López-Rodríguez A, Peña-Méndez EM, Vaňhara P, Hampl A, Havel J. Artificial neural networks in medical diagnosis. J Appl Biomed. 2013;11(2):47-58. doi:10.2478/v10136-012-0031-x
Schwendicke F, Golla T, Dreher M, Krois J. Convolutional neural networks for dental image diagnostics: a scoping review. J Dent. 2019;91:103226. doi:10.1016/j.jdent.2019.103226
Lee H, Park M, Kim J. Cephalometric landmark detection in dental x-ray images using convolutional neural networks. In: Medical Imaging 2017: Computer-Aided Diagnosis. SPIE Medical Imaging; 2017:10134.
Song Y, Qiao X, Iwamoto Y, Chen Y-W. Automatic cephalometric landmark detection on x-ray images using a deep-learning method. Applied Sciences. 2020;10(7):2547. doi:10.3390/app10072547
He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2016:770-778. doi:10.1109/CVPR.2016.90
Bouchahma M, Hammouda SB, Kouki S, Alshemaili M, Samara K. An automatic dental decay treatment prediction using a deep convolutional neural network on x-ray images. In: Proceedings of the 2019 IEEE/ACS 16th International Conference on Computer Systems and Applications (AICCSA); 2019:1-4. doi:10.1109/AICCSA47632.2019.9035278
Muresan MP, Barbura AR, Nedevschi S. Teeth detection and dental problem classification in panoramic x-ray images using deep learning and image processing techniques. In: Proceedings of the 2020 IEEE 16th International Conference on Intelligent Computer Communication and Processing (ICCP); 2020:457-463. doi:10.1109/ICCP51029.2020.9266244
Kim J, Bae W, Jung K-H, Song I-S. Development and validation of deep learning-based algorithms for the estimation of chronological age using panoramic dental x-ray images. 2019.
Huang G, Liu Z, Van Der Maaten L, Weinberger KQ. Densely connected convolutional networks. In: Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2017:4700-4708. doi:10.1109/CVPR.2017.243
Schwendicke F, Samek W, Krois J. Artificial intelligence in dentistry: chances and challenges. J Dent Res. 2020;99(7):769-774. doi:10.1177/0022034520915714
Panetta K, Wan Q, Rajeev S, et al. ISeeColor: method for advanced visual analytics of eye tracking data. IEEE Access. 2020;8:52278-52287. doi:10.1109/ACCESS.2020.2980901
Wan Q, Kaszowska A, Panetta K, Taylor HA, Agaian S. Enhanced head-mounted eye tracking data analysis using super-resolution. Electronic Imaging. 2019;2019(3):647-1–647-8. doi:10.2352/ISSN.2470-1173.2019.3.SDA-647
Wan Q, Kaszowska A, Panetta K, Taylor HA, Agaian S. A comprehensive head-mounted eye tracking review: software solutions, applications, and challenges. Electronic Imaging. 2019;2019(3):654-1–654-9. doi:10.2352/ISSN.2470-1173.2019.3.SDA-654
Kundel HL, Nodine CF, Carmody D. Visual scanning, pattern recognition and decision-making in pulmonary nodule detection. Invest Radiol. 1978;13(3):175-181. doi:10.1097/00004424-197805000-00001
Kundel HL, Nodine CF, Krupinski EA. Searching for lung nodules. Visual dwell indicates locations of false-positive and false-negative decisions. Invest Radiol. 1989;24(6):472-478.
Nodine CF, Kundel HL, Lauver SC, Toto LC. Nature of expertise in searching mammograms for breast masses. Acad Radiol. 1996;3(12):1000-1006. doi:10.1016/s1076-6332(96)80032-8
Auffermann WF, Krupinski EA, Tridandapani S. Search pattern training for evaluation of central venous catheter positioning on chest radiographs. J Med Imaging (Bellingham). 2018;5(3):031407. doi:10.1117/1.JMI.5.3.031407
Mall S, Brennan PC, Mello-Thoms C. Modeling visual search behavior of breast radiologists using a deep convolution neural network. J Med Imaging (Bellingham). 2018;5(3):035502. doi:10.1117/1.JMI.5.3.035502
Helbren E, Halligan S, Phillips P, et al. Towards a framework for analysis of eye-tracking studies in the three dimensional environment: a study of visual search by experienced readers of endoluminal CT colonography. Br J Radiol. 2014;87(1037):20130614. doi:10.1259/bjr.20130614
Hermanson BP, Burgdorf GC, Hatton JF, Speegle DM, Woodmansey KF. Visual fixation and scan patterns of dentists viewing dental periapical radiographs: an eye tracking pilot study. J Endod. 2018;44(5):722-727. doi:10.1016/j.joen.2017.12.021
McLaughlin L, Bond R, Hughes C, McConnell J, McFadden S. Computing eye gaze metrics for the automatic assessment of radiographer performance during x-ray image interpretation. Int J Med Inform. 2017;105:11-21. doi:10.1016/j.ijmedinf.2017.03.001
Aresta G, Ferreira C, Pedrosa J, et al. Automatic lung nodule detection combined with gaze information improves radiologists’ screening performance. IEEE J Biomed Health Inform. 2020;24(10):2894-2901. doi:10.1109/JBHI.2020.2976150
Karargyris A, Kashyap S, Lourentzou I, et al. Creation and validation of a chest x-ray dataset with eye-tracking and report dictation for AI development. Sci Data. 2021;8(1):1-18. doi:10.1038/s41597-021-00863-5
O’Regan JK, Lévy-Schoen A. Eye Movements from Physiology to Cognition: Selected/Edited Proceedings of the Third European Conference on Eye Movements, Dourdan, France, September 1985. Elsevier; 2013.
Khosravan N, Celik H, Turkbey B, Jones EC, Wood B, Bagci U. A collaborative computer aided diagnosis (C-CAD) system with eye-tracking, sparse attentional model, and deep learning. Med Image Anal. 2019;51:101-115. doi:10.1016/j.media.2018.10.010
Stember JN, Celik H, Krupinski E, et al. Eye tracking for deep learning segmentation using convolutional neural networks. J Digit Imaging. 2019;32(4):597-604. doi:10.1007/s10278-019-00220-4
Biography
Karen Panetta (S’84, M’85, SM’95, F’08) received a BS in Computer Engineering from Boston University, Boston, Massachusetts, and an MS and PhD in Electrical Engineering from Northeastern University, Boston, Massachusetts. Dr Panetta is currently Dean of Graduate Engineering Education, Professor in the Department of Electrical and Computer Engineering, and Adjunct Professor of Computer Science at Tufts University, Medford, Massachusetts, and Director of Dr Panetta’s Vision and Sensing System Laboratory. She was President of the IEEE-HKN in 2019. She is the Editor-in-Chief of the IEEE Women in Engineering magazine. Dr Panetta was the IEEE-USA Vice President of Communications and Public Affairs. From 2007 to 2009, she served as the worldwide Director for IEEE Women in Engineering, overseeing the world’s largest professional organization supporting women in engineering and science. Her research focuses on developing efficient algorithms for simulation, modeling, and signal and image processing for biomedical and security applications. She is a Fellow of the IEEE. She is also the recipient of the 2012 IEEE Ethical Practices Award and the Harriet B Rigas Award for Outstanding Educator. In 2011, Dr Panetta was awarded the Presidential Award for Engineering and Science Education and Mentoring by US President Obama.
Rahul Rajendran received a BE in Electronics and Communication Engineering from Visvesvaraya Technological University, Belgaum, India, in 2014, and an MS in Electrical and Computer Engineering from the University of Texas at San Antonio, Texas, in 2016. He is currently working toward his PhD in Electrical and Computer Engineering at Tufts University, Medford, Massachusetts. His current research interests are image and video analytics, signal processing, 3D sensors, medical imaging, and security. He is a student member of the IEEE.
Aruna Ramesh received a Bachelor of Dental Surgery degree from Saveetha Dental College and Hospitals, Chennai, India, in 1992, an MS in Oral and Maxillofacial Radiology from the University of North Carolina, Chapel Hill, North Carolina, in 2000, and a Doctor of Dental Medicine (DMD) degree from Tufts University School of Dental Medicine, Boston, Massachusetts, in 2004. She is a Diplomate of the American Board of Oral and Maxillofacial Radiology and a Fellow of the American College of Dentists (FACD). Dr Ramesh is currently the Associate Dean of Academic Affairs and Professor in the Department of Diagnostic Sciences at the Tufts University School of Dental Medicine. She is a licensed dentist, specializing in oral and maxillofacial radiology, with experience spanning almost 20 years. Her experience and background cover teaching, patient care, research, and administrative roles. She has several peer-reviewed publications and has given over 40 invited lectures at local, state, national, and international venues. From 2015 to 2020, Dr Ramesh was a member of the executive council and the councilor of Scientific Affairs and Public Policy of the American Academy of Oral and Maxillofacial Radiology, the national governing body for the field. Dr Ramesh serves as a Commissioner on the Board of Commissioners of the National Commission on Recognition of Dental Specialties and Certifying Boards.