. Point cloud generation

Starting from the derived camera poses and multi-stereo correlation results, the depth maps are converted into metric 3D point clouds. The conversion projects each pixel of the master image into object space according to the image orientation and the associated depth value. The produced 3D points are then colored with the RGB attributes of the corresponding image pixels.
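The back-projection described above can be sketched as follows. This is a minimal illustration, not the pipeline's actual implementation: it assumes a pinhole camera with intrinsic matrix K and a world-to-camera pose (R, t), and that the depth map stores metric depth along the camera Z axis.

```python
import numpy as np

def depth_to_point_cloud(depth, rgb, K, R, t):
    """Back-project each pixel of a master image into object space.

    depth : (H, W) metric depth per pixel
    rgb   : (H, W, 3) colors of the master image
    K     : (3, 3) camera intrinsic matrix
    R, t  : world-to-camera rotation (3, 3) and translation (3,)
    Returns an (N, 6) array of XYZRGB points.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    rays = pix @ np.linalg.inv(K).T          # viewing rays in the camera frame
    pts_cam = rays * depth.reshape(-1, 1)    # scale each ray by its depth
    pts_world = (pts_cam - t) @ R            # invert X_cam = R @ X_world + t
    return np.hstack([pts_world, rgb.reshape(-1, 3)])
```

Each 3D point keeps the color of the pixel it originates from, which is what later allows the orthoimage to be produced directly from the cloud.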

Workflow

. Image triangulation

For the orientation of a set of terrestrial images, the method relies on the open-source APERO software. As APERO targets a wide range of images and applications, it requires many input parameters in order to give the user fine control over all the initialization and minimization steps of the orientation. APERO consists of different modules for tie point extraction, initial solution computation, and bundle adjustment for relative and absolute orientation. If available, external information such as GNSS/INS observations of the camera perspective centers, GCP coordinates, known distances, and planes can be imported and included in the adjustment. APERO can also be used for camera self-calibration, employing either the classical Brown parameters or a fish-eye lens camera model. Indeed, although it is strongly recommended to use previously calibrated cameras, non-expert users may not have accurate interior parameters, which can therefore be determined in the field.
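The quantity minimized by the bundle adjustment is the reprojection error of the tie points. The sketch below shows that cost for a single pinhole camera; it is an illustration of the principle, not APERO's internal formulation, and ignores distortion terms such as the Brown parameters mentioned above.

```python
import numpy as np

def reprojection_residuals(K, R, t, points_3d, observations):
    """Residuals minimized in bundle adjustment: the difference between
    the observed tie-point positions and the projections of the 3D points.

    K            : (3, 3) camera intrinsic matrix
    R, t         : world-to-camera rotation (3, 3) and translation (3,)
    points_3d    : (N, 3) tie points in object space
    observations : (N, 2) measured image coordinates of the tie points
    """
    pts_cam = points_3d @ R.T + t        # world -> camera frame
    proj = pts_cam @ K.T                 # apply intrinsics
    proj = proj[:, :2] / proj[:, 2:3]    # perspective division
    return (proj - observations).ravel() # stacked (du, dv) residuals
```

A least-squares solver then adjusts the poses (and, in self-calibration, the interior parameters) of all cameras simultaneously to drive these residuals toward zero.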

The pipeline consists of automated tie point extraction, bundle adjustment for the derivation of the camera parameters, dense image matching for surface reconstruction, and orthoimage generation. The individual steps of the 3D reconstruction pipeline have been investigated in several studies, producing accurate metric results in terms of automated markerless image orientation and dense image matching.

. Acquisition and camera calibration protocol

The pipeline is primarily focused on terrestrial applications, i.e. on the acquisition and processing of convergent terrestrial images of architectural scenes and heritage artifacts. The employed digital camera should preferably be calibrated in advance, following the basic photogrammetric rules, in order to achieve precise and reliable interior parameters. The acquisition protocols are reported in section «Protocols».

. Surface measurement with automated multi-image matching

Once the camera poses are estimated, a dense point cloud is extracted using the open-source MicMac software. MicMac was initially developed for matching aerial images and was later adapted to convergent terrestrial images. The matching follows a multi-scale, multi-resolution, pyramidal approach and derives a dense point cloud using an energy minimization function. The pyramidal approach speeds up the processing and ensures that the points matched at each level are consistent. The user selects a subset of master images for the correlation procedure. Then, for each hypothetical 3D point, a patch in the master image is projected into all the neighboring images and a global similarity score is derived. Finally, an energy minimization approach is applied to enforce surface regularity and avoid undesirable jumps.
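The global similarity step can be sketched as follows. This is a simplified illustration, assuming the neighboring patches have already been reprojected and resampled onto the master-patch grid, and using zero-mean normalized cross-correlation as the similarity measure; MicMac's actual correlator and aggregation scheme are more elaborate.

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def global_similarity(master_patch, neighbor_patches):
    """Aggregate similarity of a hypothetical 3D point: the average NCC
    between the master patch and its projections in the neighbor images."""
    return float(np.mean([ncc(master_patch, p) for p in neighbor_patches]))
```

For each candidate depth of a pixel, the hypothesis with the best aggregate score is preferred, subject to the regularization term of the energy function.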

. Orthoimage generation

Due to the high density of the produced point clouds, the orthoimage generation is simply based on an orthographic projection of the results. The final image resolution is computed according to the 3D point cloud density (close to the initial image footprint). Several point clouds (related to different master images) are seamlessly assembled in order to produce a complete orthoimage of the surveyed scene.
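A minimal sketch of this orthographic projection, under the assumption that the projection plane is XY and that the pixel size (ground sampling distance) has been chosen from the cloud density:

```python
import numpy as np

def orthoimage(points, colors, gsd, bounds):
    """Orthographic projection of a colored point cloud onto the XY plane.

    points : (N, 3) XYZ coordinates
    colors : (N, 3) RGB values per point
    gsd    : pixel size in metric units (chosen from the cloud density)
    bounds : (xmin, ymin, xmax, ymax) extent of the orthoimage
    Keeps the highest point per cell (closest to the orthographic viewpoint).
    """
    xmin, ymin, xmax, ymax = bounds
    w = int(np.ceil((xmax - xmin) / gsd))
    h = int(np.ceil((ymax - ymin) / gsd))
    img = np.zeros((h, w, 3), dtype=colors.dtype)
    zbuf = np.full((h, w), -np.inf)
    cols = ((points[:, 0] - xmin) / gsd).astype(int).clip(0, w - 1)
    rows = ((ymax - points[:, 1]) / gsd).astype(int).clip(0, h - 1)
    for r, c, z, rgb in zip(rows, cols, points[:, 2], colors):
        if z > zbuf[r, c]:            # simple per-cell z-buffer
            zbuf[r, c] = z
            img[r, c] = rgb
    return img
```

Running this per master-image cloud and mosaicking the results on the same grid corresponds to the seamless assembly described above.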

. Accuracy and performance evaluation

The results achieved with the 3D reconstruction pipeline described above were compared with ground-truth data to check the metric accuracy of the produced 3D data. A first set of 8 images depicts a Maya relief (roughly 3 × 2 m), acquired with a calibrated Kodak DCS Pro SLR/n (4500 × 3000 px) mounting a 35 mm lens. The ground-truth data were acquired with a Leica ScanStation 2 with a sampling step of 5 mm. The generated image-based point cloud was compared with the range-based one, delivering a standard deviation of the differences between the two datasets of ca 5 mm.
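A cloud-to-cloud comparison of this kind can be sketched as nearest-neighbor distances from the image-based cloud to the range-based reference. The brute-force search below is only for illustration; real clouds of millions of points require a spatial index such as a k-d tree, and the reported figures depend on the registration between the two datasets.

```python
import numpy as np

def cloud_deviation(image_cloud, reference_cloud):
    """Distance from each image-based point to its nearest ground-truth
    point, with summary statistics (brute force, for small clouds only).

    image_cloud, reference_cloud : (N, 3) and (M, 3) XYZ arrays
    Returns (distances, mean, std).
    """
    diffs = image_cloud[:, None, :] - reference_cloud[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1)).min(axis=1)
    return dists, float(dists.mean()), float(dists.std())
```

The standard deviation of these distances is the figure reported above (ca 5 mm for the Maya relief).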

Fig. 6: Examples of the geometric comparison with ground-truth data. Original scene (left), derived 3D point cloud (center) and deviation map for a Maya bas-relief (std = ca 5 mm).

Fig. 1: An example of photographic acquisition (Fontains church).

Fig. 2: An aerotriangulation computed by APERO.

Fig. 3: Example of the pyramidal approach for surface reconstruction during the multi-scale matching.

Fig. 4: The multi-stereo image matching method: the master image (left), the matching results in the last pyramidal step (center) and the generated point cloud (right).

Fig. 5: An orthoimage generated by orthographic projection of a dense point cloud.