Article

Three-dimensional texture measurement using deep learning and multi-view pavement images

Journal

MEASUREMENT
Volume 172

Publisher

ELSEVIER SCI LTD
DOI: 10.1016/j.measurement.2020.108828

Keywords

Pavement macro-texture; Three-dimensional reconstruction; Deep learning; Image processing; Mean texture depth; Dynamic friction coefficient

Funding

  1. Research and Development Center of Transport Industry of Technologies, Materials and Equipment of Highway Construction and Maintenance [2019-004]
  2. Science and Technology Research Project of Jiangxi Provincial Department of Education [GJJ190361]
  3. Natural Science Foundation of Jiangxi Province, China [20202BABL214046]


This study proposes a deep-learning-based method for measuring and reconstructing pavement macro-texture from any number of views, and demonstrates its effectiveness and stability through experiments. The reconstructed 3D model supports the assessment of pavement performance, with average errors of 7.62% for mean texture depth and 6.32% for the dynamic friction coefficient.
Surface macro-texture measurement plays an essential role in assessing pavement performance and quality. This paper presents a framework for macro-texture measurement and reconstruction from any number of pavement views. In the framework, an encoder based on a deep convolutional neural network (CNN) extracts features from single-view pavement images. A feature mapping unit and a decoder then convert these features into a voxel model. A multi-view combination module aggregates the voxel models from any number of views to build a 3D model of the pavement macro-texture, which can be used to assess pavement performance. Experiments on a database of 16 asphalt pavements demonstrate the effectiveness of the framework: the proposed method reconstructs 3D models with satisfactory precision and stability, and the resulting models estimate mean texture depth and the dynamic friction coefficient with average errors of 7.62% and 6.32%, respectively.
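The downstream stages of the pipeline (fusing per-view voxel models into one 3D model, then reading a texture metric off it) can be sketched in a minimal form. The abstract does not specify the aggregation rule, voxel size, or the metric formula, so the element-wise max fusion, the 0.1 mm voxel size, and the peak-plane approximation of mean texture depth (MTD) below are all illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def aggregate_views(voxel_grids):
    """Fuse per-view voxel occupancy grids, each of shape (D, H, W), into
    one 3D model. Element-wise max pooling is one simple choice that works
    for any number of views (assumption; the paper's multi-view
    combination module is a learned component)."""
    return np.max(np.stack(voxel_grids), axis=0)

def height_map(voxels, voxel_size_mm=0.1, threshold=0.5):
    """Collapse a (D, H, W) occupancy grid into a surface height map in mm:
    the topmost occupied voxel in each vertical column (empty columns
    fall back to height 0)."""
    occupied = voxels >= threshold
    d = voxels.shape[0]
    # argmax over the depth-reversed grid finds the first occupied voxel
    # from the top; d - 1 - that index converts it back to a bottom-up index
    top = np.where(occupied.any(axis=0),
                   d - 1 - np.argmax(occupied[::-1], axis=0),
                   0)
    return top * voxel_size_mm

def mean_texture_depth(heights):
    """Approximate MTD as the average depth below the peak elevation
    (a simplification; standardized MTD comes from the sand patch test)."""
    return float(np.mean(heights.max() - heights))
```

As a usage sketch, fusing two toy single-view grids and scoring the result:

```python
a = np.zeros((4, 2, 2)); a[0] = 1.0        # view 1 sees the bottom layer
b = np.zeros((4, 2, 2)); b[:2, 0, 0] = 1.0 # view 2 sees one taller column
fused = aggregate_views([a, b])
mtd = mean_texture_depth(height_map(fused, voxel_size_mm=1.0))
```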
