GazeRadar: A Gaze and Radiomics-Guided Disease Localization Framework


Abstract: We present GazeRadar, a novel radiomics and eye gaze-guided deep learning architecture for disease localization in chest radiographs. GazeRadar combines the representation of radiologists’ visual search patterns with corresponding radiomic signatures into an integrated radiomics-visual attention representation for downstream disease localization and classification tasks. Radiologists tend to focus on fine-grained disease features, while radiomics features provide high-level textural information. Our framework first ‘fuses’ radiomics features with visual features inside a teacher block: the visual features are learned through a teacher-focal block, while the radiomics features are learned through a teacher-global block. A novel Radiomics-Visual Attention loss is proposed to transfer knowledge from this joint radiomics-visual attention representation of the teacher network to the student network. We show that GazeRadar outperforms baseline approaches for disease localization and classification tasks on four large-scale chest radiograph datasets comprising multiple diseases. Code: https://github.com/bmi-imaginelab/gazeradar.
Bhattacharya, Moinak, Shubham Jain, and Prateek Prasanna. 2022a. “GazeRadar: A Gaze and Radiomics-Guided Disease Localization Framework.” In Medical Image Computing and Computer Assisted Intervention – MICCAI 2022, 686–96. Springer Nature Switzerland.
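The abstract's core idea, fusing a focal (visual) and a global (radiomics) representation in a teacher and aligning the student to the fused result, can be sketched schematically. The function names (`fuse_teacher_features`, `rva_distillation_loss`), the concatenate-and-project fusion, and the mean-squared alignment term are illustrative assumptions, not the paper's actual Radiomics-Visual Attention loss, which is defined in the MICCAI paper itself:

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_teacher_features(visual_feat, radiomic_feat, w_proj):
    # Hypothetical fusion: concatenate the teacher-focal (visual) and
    # teacher-global (radiomics) embeddings, then linearly project them
    # into a joint radiomics-visual representation.
    joint = np.concatenate([visual_feat, radiomic_feat], axis=-1)
    return joint @ w_proj

def rva_distillation_loss(student_feat, teacher_fused):
    # Schematic stand-in for a knowledge-transfer term: mean-squared
    # distance between the student's features and the teacher's fused
    # radiomics-visual representation (the real loss differs).
    return float(np.mean((student_feat - teacher_fused) ** 2))

# Toy dimensions: batch of 2, 8-d visual and 8-d radiomic features,
# projected into a 4-d joint space.
visual = rng.standard_normal((2, 8))
radiomic = rng.standard_normal((2, 8))
w_proj = rng.standard_normal((16, 4))

teacher_fused = fuse_teacher_features(visual, radiomic, w_proj)
student = rng.standard_normal((2, 4))
loss = rva_distillation_loss(student, teacher_fused)
```

Minimizing such a term pushes the student to reproduce the teacher's joint representation, which is the general distillation pattern the abstract describes.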