Breast density is a crucial factor for following up the relapse of breast cancer in mammograms and for assessing the risk of local recurrence after conservative surgery and/or radiotherapy. Accurate breast density estimation by visual assessment remains a challenge due to faint contrast and significant variations in the background fatty tissues of mammograms. The key to breast density estimation is to properly detect the dense tissues in a mammographic image. Thus, this paper presents an automatic deep breast density segmentation method based on conditional Generative Adversarial Networks (cGAN), which consist of two successive deep networks: a generator and a discriminator. The generator network learns the mapping from the input mammogram to the output binary mask that delineates the area of the dense tissues. In turn, the discriminator learns a loss function to train this mapping by comparing the ground-truth mask and the predicted mask while observing the input mammogram as a condition. The performance of the proposed model was evaluated on the public INbreast mammographic dataset. The proposed model segments the dense regions with an overall recall, precision and F-score of about 95%, 92% and 93%, respectively, outperforming state-of-the-art breast density segmentation methods. It can segment more than 40 images of size 512×512 per second on a recent GPU.
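The abstract does not give implementation details, so the following is only a minimal sketch of how a conditional GAN for dense-tissue mask prediction might be wired up in PyTorch. The layer sizes, the patch-wise discriminator, the L1 term and its weight `lam` are illustrative assumptions, not the authors' actual architecture or loss.

```python
# Minimal cGAN sketch (assumed design, not the paper's exact model):
# a generator maps a mammogram to a binary dense-tissue mask, and a
# discriminator scores (mammogram, mask) pairs, i.e. the input image
# acts as the condition.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Tiny encoder-decoder: 1-channel mammogram -> 1-channel mask in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Scores the pair (mammogram, mask); the mammogram is the condition."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=1, padding=1),  # patch-wise real/fake logits
        )

    def forward(self, image, mask):
        return self.net(torch.cat([image, mask], dim=1))

def train_step(G, D, opt_g, opt_d, image, gt_mask, bce, l1, lam=100.0):
    # Discriminator: distinguish the real pair from the generated pair.
    fake_mask = G(image).detach()
    d_real, d_fake = D(image, gt_mask), D(image, fake_mask)
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool the discriminator while staying close to the ground truth.
    fake_mask = G(image)
    d_fake = D(image, fake_mask)
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + lam * l1(fake_mask, gt_mask)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

if __name__ == "__main__":
    G, D = Generator(), Discriminator()
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
    image = torch.rand(1, 1, 512, 512)                      # stand-in mammogram
    gt_mask = (torch.rand(1, 1, 512, 512) > 0.5).float()    # stand-in dense-tissue mask
    print(train_step(G, D, opt_g, opt_d, image, gt_mask, bce, l1))
```

Here the discriminator never receives the mask alone; concatenating the mammogram with the mask is what makes the adversarial loss conditional, matching the abstract's description of the discriminator "observing the input mammogram as a condition."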