To achieve accurate classification of road crack images, this paper proposes a road crack classification method based on an improved VGG-16 (Visual Geometry Group) convolutional neural network. First, a road crack dataset was constructed containing four types of images: single crack, transverse crack, patched crack, and no crack. A total of 8,400 images were collected, laying the foundation for building and training the subsequent models. Then, starting from the VGG-16 network, the proposed model adjusts the number of fully connected layers and replaces the SoftMax classifier of the original VGG-16 network with a 4-label SoftMax classifier. These changes optimize the model structure and parameters; the self-built dataset was then used to train the model via transfer learning. The final test accuracy of the model is 95%. In terms of average recognition accuracy, both the proposed model and VGG-16 outperform AlexNet and GoogLeNet. The test results show that the proposed model is slightly better than the original VGG-16 model: it has better classification performance on the road crack categories and can accurately distinguish single cracks, transverse cracks, patched cracks, and no-crack images. Automatic identification and classification of road cracks plays an important role in saving labor costs, implementing road maintenance effectively, and ensuring driving safety.
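The key architectural change described above is replacing the original classifier head with a 4-label SoftMax over the crack categories. The sketch below illustrates that final classification step only; it is a minimal standard-library illustration, not the authors' implementation, and the class names and example logit values are assumptions for demonstration.

```python
import math

# Assumed label ordering for the four categories named in the abstract.
CLASSES = ["single_crack", "transverse_crack", "patched_crack", "no_crack"]

def softmax(logits):
    """Numerically stable softmax: subtract the max before exponentiating."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits):
    """Map the 4 raw network outputs to a class label and its probabilities."""
    probs = softmax(logits)
    return CLASSES[probs.index(max(probs))], probs

# Hypothetical logits as they might come from the network's final layer.
label, probs = classify([2.0, 0.4, 0.1, -1.2])
print(label)  # prints "single_crack" (index of the largest logit)
```

In a full model, these logits would be produced by the reduced fully connected layers on top of the frozen VGG-16 convolutional features; during training, frameworks typically fold the softmax into the cross-entropy loss rather than applying it explicitly.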