RadioTransformer: A Cascaded Global-Focal Transformer for Visual Attention–Guided Disease Classification

Abstract: In this work, we present RadioTransformer, a novel student-teacher transformer framework that leverages radiologists' gaze patterns and models their visuo-cognitive behavior for disease diagnosis on chest radiographs. Domain experts such as radiologists rely on visual information to interpret medical images. Deep neural networks, meanwhile, have demonstrated significant promise in similar tasks, even where visual interpretation is challenging. Eye-gaze tracking has been used to capture the viewing behavior of domain experts, lending insight into the complexity of visual search. However, deep learning frameworks, even those that rely on attention mechanisms, do not leverage this rich domain information for diagnostic purposes. RadioTransformer fills this critical gap by learning from radiologists' visual search patterns, encoded as 'human visual attention regions', in a cascaded global-focal transformer framework. The proposed global and focal modules capture the overall 'global' image characteristics and the more detailed 'local' features, respectively. We experimentally validate the efficacy of RadioTransformer on 8 datasets covering different disease classification tasks, where eye-gaze data is not available during the inference phase. Code: https://github.com/bmi-imaginelab/radiotransformer
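The student-teacher idea in the abstract can be sketched minimally: a teacher network's attention is supervised by gaze-derived 'human visual attention regions', and the student is trained to align its attention with the teacher's, so no gaze data is needed at inference. The function names and the simple alignment loss below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def normalize(attn):
    """Turn a non-negative attention map into a probability distribution."""
    attn = np.asarray(attn, dtype=float)
    return attn / attn.sum()

def attention_alignment_loss(student_attn, teacher_attn):
    """Hypothetical alignment loss: MSE between normalized attention maps.

    The teacher's map would, in training, be driven toward radiologists'
    gaze-derived attention regions; the student learns to mimic it.
    """
    s, t = normalize(student_attn), normalize(teacher_attn)
    return float(np.mean((s - t) ** 2))

# Toy 4x4 maps: the teacher peaks where simulated gaze fixations fell;
# an untrained student attends uniformly, giving a nonzero loss.
teacher = np.zeros((4, 4))
teacher[1, 2] = 3.0
teacher[2, 2] = 1.0
student = np.ones((4, 4))

loss_untrained = attention_alignment_loss(student, teacher)   # > 0
loss_aligned = attention_alignment_loss(teacher, teacher)     # == 0
```

At inference time only the student branch would run, which is consistent with the abstract's note that eye-gaze data is unavailable during that phase.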
2022. “RadioTransformer: A Cascaded Global-Focal Transformer for Visual Attention–Guided Disease Classification.” In Computer Vision – ECCV 2022, 679–98. Springer Nature Switzerland.