Breast cancer can be detected at early stages by radiologists through periodic screening mammography. However, from the mammogram alone they cannot discern the subtype of the cancer (Luminal A, Luminal B, Her-2+ and Basal-like), which is crucial information for the oncologist when deciding the appropriate therapy. Consequently, a painful biopsy must be carried out to determine the tumor subtype through cytological and histological analysis of the extracted tissue. In this paper, we aim to design a computer-aided diagnosis (CAD) system able to classify the four tumor subtypes directly from the image pixels of digital mammography. The proposed strategy is to use a VGGNet-based deep learning convolutional neural network (CNN) that can be trained to learn the underlying micro-texture pattern of the image pixels, expected to be characteristic of each subtype. We have collected 716 image patches of 100×100 pixels, manually extracted from real tumor areas that had been labeled in the digital mammography by a radiologist, together with the corresponding oncologist's diagnosis based on histological indicators. Using this ground truth, we have been able to train and test the proposed CNN, which achieves an accuracy of 78% when discerning only the Luminal A and Luminal B subtypes, and an accuracy of 67% when all four tumor subtypes are considered.
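The pipeline described above (100×100 tumor patches fed through stacked small convolutions and pooling, ending in a softmax over the four subtypes) can be sketched in plain NumPy. This is only an illustrative forward pass with assumed layer sizes and random weights, not the paper's exact VGGNet architecture or trained model:

```python
import numpy as np

# Minimal sketch of a VGG-style forward pass for one 100x100 mammography
# patch, ending in a softmax over the four tumor subtypes.
# Channel counts and layer depth are illustrative assumptions.

rng = np.random.default_rng(0)

def conv3x3(x, kernels):
    """Valid 3x3 convolution + ReLU; x: (H, W, C_in), kernels: (3, 3, C_in, C_out)."""
    H, W, _ = x.shape
    out = np.zeros((H - 2, W - 2, kernels.shape[-1]))
    for i in range(H - 2):
        for j in range(W - 2):
            window = x[i:i + 3, j:j + 3, :]
            out[i, j, :] = np.tensordot(window, kernels, axes=([0, 1, 2], [0, 1, 2]))
    return np.maximum(out, 0.0)

def maxpool2x2(x):
    """2x2 max-pooling with stride 2 (odd borders trimmed)."""
    H, W, C = x.shape
    H2, W2 = H // 2, W // 2
    return x[:H2 * 2, :W2 * 2, :].reshape(H2, 2, W2, 2, C).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# One grayscale 100x100 patch (random stand-in for a real tumor region).
patch = rng.random((100, 100, 1))

# Two conv+pool stages, VGG-style (3x3 kernels, doubling channels).
k1 = rng.standard_normal((3, 3, 1, 8)) * 0.1
k2 = rng.standard_normal((3, 3, 8, 16)) * 0.1
x = maxpool2x2(conv3x3(patch, k1))   # -> (49, 49, 8)
x = maxpool2x2(conv3x3(x, k2))       # -> (23, 23, 16)

# Flatten and classify over the four subtypes.
w = rng.standard_normal((x.size, 4)) * 0.01
probs = softmax(x.reshape(-1) @ w)
subtypes = ["Luminal A", "Luminal B", "Her-2+", "Basal-like"]
prediction = subtypes[int(np.argmax(probs))]
```

In practice such a network would be trained end-to-end (e.g. by backpropagation with cross-entropy loss) on the labeled patches rather than using random weights as here.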