
Automatic (Hydraulic) Roughness Mapping

Introduction

After two research campaigns at the gravel bar near Ladenburg it was possible to create roughness maps of selected spots (areas of interest, AOI) at the research site.

Roughness values allow more detailed hydraulic models to be calculated, since current velocity and erosion rate depend on them. These models are essential for predicting changes within the river bed or flow (e.g. flooding), which in turn modify the whole landscape. Roughness, as a value describing the surface condition, is a multi-scale concept: its values always depend on the chosen scale. The description of lower tree layers and of fine-scale soil characteristics are both common roughness measurement scales (Hollaus et al. 2011). The values are generated from the height differences (z-values) of neighbouring points; a higher aggregated height difference yields a higher roughness value.
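The underlying idea, roughness as the aggregated height difference of neighbouring points, can be sketched in a few lines of Python (the sample z-values are invented):

```python
import numpy as np

def local_roughness(z_values):
    """Roughness of a small neighbourhood as the standard deviation of
    its z-values, i.e. the aggregated height difference from the mean."""
    z = np.asarray(z_values, dtype=float)
    return float(np.std(z - z.mean()))

# A rougher patch shows larger aggregated height differences
smooth = [0.10, 0.11, 0.10, 0.12, 0.11]
rough = [0.10, 0.25, 0.05, 0.30, 0.02]
print(local_roughness(smooth) < local_roughness(rough))  # True
```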

In addition to generating those values, we compare different methods of data acquisition and processing, namely terrestrial laser scanning (TLS), Structure from Motion (SfM) and Kinect®-generated data, in order to identify the most suitable set of methods for different circumstances and to characterize the differences between the methods.

To make the data sets comparable we chose three spots (1 m² each) on the sand bank with different grain sizes and surface features. These spots were measured with all methods. After marking them to prevent man-made impacts on the AOI, we registered the wooden poles dug in around each AOI to serve as tiepoints later on.

Methods and Data Acquisition

Fig. 4.1 shows the steps taken, from data acquisition through data manipulation to the final comparison.

Research Workflow

TLS Data with Riegl VZ-400

Using the Riegl VZ-400® it is possible to generate a very detailed scan with a high point density (over 100,000 points/m² are possible). Additionally, the resulting point cloud is already georeferenced. We can therefore use this data to transform the other acquired point clouds into the correct coordinate system via tiepoints. The coordinates of the previously placed wooden poles can be used as a reference for the regional import with OPALS, which uses coordinates to select a certain AOI from the point clouds.

I. Data acquisition

To prevent measurement errors, several scans from different positions were taken and combined into one point cloud. This operation is described in Chapter 1.

II. Data Manipulation
II.1 RiSCAN PRO/OPALS

After generating the point clouds with the Riegl VZ-400® and combining them into one file, the AOI can be exported with RiSCAN PRO by saving it as a new data set. First all scans are opened. Using the selection mode followed by the polyline selection in the toolbar, the AOI can be pinpointed and exported as polydata. This step worked well, apart from some inaccuracy when placing the polygon.

An alternative is to use the OPALS module OPALS Import to select a regional section of the point cloud.

OPALSImport -inf "Scan_1.txt" "Scan_2.txt" -outf 2013TLSGrob.odm -iFormat xyz -tilePointCount 50000 -nbThread 1 -filter "Region[469286.4 5480554.4 469287.4 5480553.6 469288.3 5480554.6 469287.3 5480555.3]"

The option -nbThread 1 causes the data to be processed in parts, which makes working with large files possible. -iFormat defines the data structure of the input files.
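The Region filter above keeps exactly those points whose x/y coordinates fall inside the given polygon. A self-contained Python sketch of that selection (OPALS does this internally; the test points below are invented):

```python
def point_in_polygon(x, y, poly):
    """Ray-casting test: does (x, y) lie inside the polygon
    given as a list of (x, y) vertices?"""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge crosses the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# AOI polygon from the OPALSImport call above
aoi = [(469286.4, 5480554.4), (469287.4, 5480553.6),
       (469288.3, 5480554.6), (469287.3, 5480555.3)]
points = [(469287.3, 5480554.5, 101.2), (469290.0, 5480560.0, 101.5)]
selected = [p for p in points if point_in_polygon(p[0], p[1], aoi)]
print(len(selected))  # 1
```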

II.2 Roughness measurement with CloudCompare

Once the AOI has been extracted, a first method of roughness measurement can be applied. The point cloud was cut with CloudCompare and afterwards exported as .asc or .txt.

After opening the file in CloudCompare (File → Open: 2013TLSGrob.txt; the first 30 lines have to be skipped), we can simply use the roughness tool (Tools → Other → Roughness).
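CloudCompare computes per-point roughness as the distance between a point and the least-squares plane fitted to its neighbours inside a spherical kernel. A minimal Python sketch of that idea (the 5 cm kernel radius and the synthetic grid are assumptions for illustration):

```python
import numpy as np

def cc_style_roughness(points, idx, radius=0.05):
    """Distance between point `idx` and the least-squares plane fitted
    to all points within `radius` of it (CloudCompare-style roughness)."""
    pts = np.asarray(points, dtype=float)
    p = pts[idx]
    nb = pts[np.linalg.norm(pts - p, axis=1) < radius]
    if len(nb) < 4:                      # not enough neighbours for a plane
        return float("nan")
    centroid = nb.mean(axis=0)
    # plane normal = singular vector of the smallest singular value
    _, _, vt = np.linalg.svd(nb - centroid)
    normal = vt[-1]
    return float(abs(np.dot(p - centroid, normal)))

# Synthetic 5 x 5 cm patch: flat grid with one 2 cm bump in the middle
pts = [(x * 0.01, y * 0.01, 0.0) for x in range(5) for y in range(5)]
pts[12] = (0.02, 0.02, 0.02)
print(cc_style_roughness(pts, 12) > cc_style_roughness(pts, 0))  # True
```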

CloudCompare is a free, open-source program to visualize and manipulate point clouds (source: http://www.danielgm.net/cc/). More complicated calculations should not be done with this software, and the data size should not exceed 1 GB; otherwise the program becomes very slow and tends to crash.

The results can be checked with the “distribution fitting” button, or via Tools → Statistics → Compute stat. params. These results allow a first estimation of the roughness values. Another problem is that ghost points are not excluded: though they do not distort the results of individual points, they still exist and are a source of distraction.

II.3 Roughness measurement with OPALS

After seeing first results with CloudCompare, the more precise but more complicated calculation with OPALS was applied.

a) Generating Gridfile

The already existing 2013TLSGrob.odm was transformed into a grid model by the module OPALS Cell.

OPALSCell -inf 2013TLS1.odm -outf 2013TLS1cell001.tif -feature min -feature median -cellSize 0.01 -nbThread 2

The features “min” and “median” will be needed for further calculations: they generate the minimum and median value of each grid cell. To compare the results of different cell sizes, we chose 0.02 m, 0.01 m and 0.005 m.
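The gridding step can be pictured as binning the points into square cells and reducing each cell's z-values to a feature. A small Python sketch with plain dictionaries instead of OPALS's data manager (the sample coordinates are made up):

```python
import numpy as np

def grid_features(points, cell_size=0.01):
    """Bin 3D points into square cells and return, per cell, the
    minimum and median z-value (like `-feature min -feature median`)."""
    pts = np.asarray(points, dtype=float)
    ij = np.floor(pts[:, :2] / cell_size).astype(int)
    cells = {}
    for key, z in zip(map(tuple, ij), pts[:, 2]):
        cells.setdefault(key, []).append(z)
    return {c: (min(zs), float(np.median(zs))) for c, zs in cells.items()}

# Two points land in cell (0, 0), one in cell (1, 0)
feats = grid_features([(0.001, 0.002, 1.0), (0.004, 0.006, 1.4),
                       (0.012, 0.003, 2.0)], cell_size=0.01)
```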

b) Eliminating ghost points

This step is only necessary if OPALS is used to cut the AOI out of the whole scan; the manual cut with CloudCompare offers the chance to eliminate the ghost points during the cutting process. Ghost points are points within the point cloud that represent scanning errors or objects we did not want to survey, so we had to remove them from our point cloud. The attribute distinguishing ghost points from points describing the surface of the AOI is their deviation from the average z-value of the AOI. Taking a distance of more than 5 cm as the cut-off, we could separate relevant from irrelevant points using the module OPALS Algebra.

OPALSAlgebra -inf 2013TLSGrobcell001min.tif
-outf 2013TLSGrobcell001clean.tif
-formula "return z[0] if (z[0] > mean - 0.05 and z[0] < mean + 0.05) else None"

If some cell values lie beyond these limits, we have no value for the cell at all. We therefore had to fill those gaps with other values in order to preserve a closed surface within the AOI. Again the module OPALS Algebra helped us solve this problem: the empty cell values of 2013TLSGrobcell001min.tif are replaced with the values of 2013TLSGrobcell001mean.tif.

OPALSAlgebra -inf 2013TLSGrobcell001min.tif
-inf 2013TLSGrobcell001mean.tif -outf 2013TLSGrobcell001fill.tif -formula "return z[1] if(z[0] is None) else z[0]"

At this point 2013TLSGrobcell001fill.tif is prepared to be used for the calculation of its roughness.
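Both OPALS Algebra steps amount to simple raster arithmetic. A numpy sketch of the two calls combined (the 2 x 3 sample rasters and the use of the mean raster's average as the reference height are assumptions for illustration):

```python
import numpy as np

def clean_and_fill(z_min, z_mean, tol=0.05):
    """Drop cells whose min-z deviates more than `tol` (5 cm) from the
    AOI's average height, then fill the gaps from the mean raster.
    NaN marks empty cells, like OPALS's no-data value."""
    avg = np.nanmean(z_mean)                      # reference height of the AOI
    clean = np.where(np.abs(z_min - avg) < tol, z_min, np.nan)
    return np.where(np.isnan(clean), z_mean, clean)

# Tiny stand-ins for 2013TLSGrobcell001min.tif / 2013TLSGrobcell001mean.tif
z_min = np.array([[1.00, 1.02, 9.50],             # 9.50 is a ghost point
                  [1.01, np.nan, 1.03]])          # NaN is an empty cell
z_mean = np.array([[1.00, 1.02, 1.02],
                   [1.01, 1.02, 1.03]])
filled = clean_and_fill(z_min, z_mean)
```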

c) Roughness calculation

The roughness value of each cell was acquired with the OPALS module OPALS Cell and the feature root mean square (RMS) of the cells.

OPALSCell -inf 2013TLSGrob.odm -outf TLSGrobrough001.tif -feature rms -cellSize 0.01 -nbThread 2
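The RMS feature can be read as the spread of the heights inside each cell. A Python sketch of the per-cell calculation (assuming, as the roughness definition above suggests, that the RMS is taken on the heights after subtracting the cell mean):

```python
import numpy as np

def cell_rms_roughness(points, cell_size=0.01):
    """Per-cell roughness as the RMS of z about the cell mean,
    mirroring `-feature rms` on a detrended cell (an assumption)."""
    pts = np.asarray(points, dtype=float)
    cells = {}
    for key, z in zip(map(tuple, np.floor(pts[:, :2] / cell_size).astype(int)),
                      pts[:, 2]):
        cells.setdefault(key, []).append(z)
    return {c: float(np.sqrt(np.mean((np.asarray(zs) - np.mean(zs)) ** 2)))
            for c, zs in cells.items()}

# Two points in one 1 cm cell, 0.2 m apart in height -> RMS of 0.1 m
rough = cell_rms_roughness([(0.001, 0.001, 1.0), (0.004, 0.002, 1.2)])
```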

Microsoft XBOX Kinect ® Data

Data acquisition with a Kinect® is another method of generating point clouds for roughness modeling. Compared to the Riegl VZ-400® it is cheap (price: 89 Euro, 05.07.2013). However, the range of the Kinect® cannot be compared to that of the Riegl VZ-400®: it works within a distance of 3-5 meters and requires a minimum distance of about 50 cm between sensor and object. Being designed for indoor use, a Kinect® does not work properly in direct sunlight. To protect the Kinect® from too much IR radiation it is wise to choose an AOI in the shade; otherwise a device for creating shade is advisable.

I Data Acquisition
I.1 Kinfu

The Kinect® is linked to a laptop. The laptop runs on its own battery, whereas the Kinect® is powered by an external battery.

The laptop runs the program KinFu (possible source: http://codewelt.com/kinect3dscan), which enables us to use the Kinect® for scanning 3D geometry in real time and saving it in various formats. The best way to survey the AOI is to move the Kinect® at moderate speed over it. After dividing the AOI into rows and columns, you can first scan each row and then each column to reduce errors due to shadowing effects as far as possible. We took several scans in order to be able to choose the best one for further use; the quality of the scans depended on the coverage of the AOI. This selection was done by importing all scans, available in ASC format, into CloudCompare.

I.2 CloudCompare

Using CloudCompare we were able to select the best scans, i.e. those with an AOI as closed and complete as possible and with as many poles as possible as points of recognition. Since a DTM developed from several Riegl VZ-400® scans was available, we did not depend on those poles for georeferencing.

Kinect Point Cloud raw in CC: Own screenshot

The quality of the scans made it impossible to find enough corresponding point pairs to use the 3d-trafo_affine tool (explained in 4.2.3 II.2) or the registration via CloudCompare.

II Data Manipulation
II.1 Cloud Compare

Although the Kinect® scans could not be georeferenced properly, we still wanted to import them into OPALS to calculate roughness models. We again cut them into shape with CloudCompare and additionally adjusted them manually in the z-direction. Without this adjustment the roughness calculation with CloudCompare would be quite useless.

II.2 Roughness measurement with CloudCompare
See above 4.2.1 II.1
II.3 Roughness measurement with OPALS
See above 4.2.1 II.2

Structure from Motion (SfM) Data

Another method of generating surface models for roughness calculation is the use of pictures or movies of the AOI. Pictures taken from different angles can be combined into a 3D model of the AOI. It is also possible to cut single frames out of movies and use them as pictures. All the equipment needed is a digital camera and a program to merge the pictures, making this the cheapest method, assuming a digital camera is freely available.

To ensure good results, the pictures should not be too bright. There should also be enough pictures to cover the AOI; otherwise the program is not able to find the necessary number of matches between the single pictures.

I Data Acquisition

Leaving the step of taking pictures and movies with a digital camera as self-explanatory, we move directly to the generation of the multi-view pictures. First the preliminary steps for pictures originating from movies (I.1) are explained; then both picture kinds (from movies and taken manually) are merged (I.2) and refined (I.3).

I.1 Free Video to JPG Converter

The program Free Video to JPG Converter (source: http://www.dvdvideosoft.com/products/dvd/Free-Video-to-JPG-Converter.htm) is able to extract single frames from a video. It is possible to select, for example, every hundredth or tenth frame, or a frame every ten seconds or every minute.

After importing the video into Free Video to JPG Converter, the extraction settings have to be chosen. We decided to extract every tenth frame: this setting produced the best balance between accuracy and data volume. The time effort rises sharply with the number of frames, so we chose settings whose output did not exceed 3 GB.

The program delivers files with blanks in the filenames, which cannot be used by SURE. It is therefore advisable to apply a batch rename within Windows PowerShell (recommended: Dir | Rename-Item -NewName { $_.Name -replace " " }).

I.2 VisualSFM

The program VisualSFM (source: http://homes.cs.washington.edu/~ccwu/vsfm/) is a GUI application for 3D reconstruction using structure from motion. This process estimates 3D objects from 2D image sequences, which may be coupled with motion signals. It can be compared to our brain, which also combines the 2D images generated by each eye into a three-dimensional view. Fig. 4.2 demonstrates how every single picture is identified and positioned; the workflow we used is shown in fig. 4.3.

Frame Merging and 3D Calculation. Own screenshot
Work Flow VisualSFM (Source: VisualSFM web site, see footnote 4)

Step 4 (Dense Reconstruction) (fig. 4.3) should not be performed here: the SURE program (I.3) is better suited for this task and works faster than VisualSFM.

Additionally, the program creates .nvm files. These files are not suitable for further calculation with the programs we want to use (OPALS, Python, CloudCompare etc.).

I.3 SURE

The program SURE (source: http://www.ifp.uni-stuttgart.de/publications/software/sure/index.en.html) is able to transform the .nvm files into .las files. During this transformation the program fills the vacant spaces between the points of the .nvm files derived from the SfM process. As a result the point cloud is denser than before and better suited for the calculation of roughness values. The application of SURE is simple: the selected .nvm files are opened in SURE and processed automatically.

After comparing the results of both picture kinds we clearly see advantages in using the video-generated frames: in addition to providing more frames, the flowing movement of the video covers the AOI better than the stationary manual shots.

II. Data Manipulation
II.1 Cloud Compare

At this point we again need to cut our data into shape, undertaking the same steps as before, in order to proceed to the roughness calculation.

See above 4.2.2 II.1
II.2 Python

The point cloud that covers the area of interest best was then co-registered to the TLS reference data by means of a 3D affine transformation, for which a Python script (3d-trafo_affine) was available. The script is run from the command-line window (Fig. 4.4) and adjusts our SfM clouds to the correctly georeferenced TLS point cloud using certain tiepoints. As already explained, we had installed wooden poles, which made the search for and localization of the tiepoints easier. Unfortunately, the first results of 3d-trafo_affine were distorted or deformed beyond recognition. We assume that the small variation in the z-values was responsible for the unsatisfying results, because the AOI with differing pole heights got the best results; more variation within the z-values decreases the bias of measurement errors. Additionally, the input point clouds have different scales (mm vs. m), which could also have caused trouble for this normally reliable tool. We therefore added point pairs located directly on the surface to improve the results; regrettably, this caused even bigger deformations.

Command Window with python order, own screenshot

In addition to invoking the 3d-trafo_affine tool itself, we created a text file (.txt) containing the corresponding point pairs of the TLS scan and the SfM clouds. This is followed by the cloud that should be transformed and the name of the output file. An instruction appears after entering 3d-trafo_affine on its own.

The chosen point cloud then aligns itself to the points selected within the TLS scan.
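The underlying operation of 3d-trafo_affine, estimating a 3D affine transformation from corresponding point pairs by least squares and applying it to a cloud, can be sketched as follows (the tiepoint coordinates are invented; at least four non-coplanar pairs are needed):

```python
import numpy as np

def fit_affine_3d(src, dst):
    """Least-squares 3D affine transform (A, t) mapping tiepoints
    `src` onto `dst`: dst ~ src @ A.T + t."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    # Solve [src | 1] @ M = dst for the 4x3 matrix M = [A^T; t]
    design = np.hstack([src, np.ones((len(src), 1))])
    M, *_ = np.linalg.lstsq(design, dst, rcond=None)
    return M[:3].T, M[3]

def apply_affine(points, A, t):
    return np.asarray(points, dtype=float) @ A.T + t

# Hypothetical tiepoints: SfM cloud in a local frame, TLS in UTM
src = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)]
A_true = np.diag([2.0, 2.0, 2.0])            # e.g. a scale difference
t_true = np.array([469286.0, 5480554.0, 100.0])
dst = apply_affine(src, A_true, t_true)
A, t = fit_affine_3d(src, dst)
print(np.allclose(apply_affine(src, A, t), dst))  # True
```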

II.3 Georeferencing & Roughness measurement with CloudCompare

After the failure of the Python script we switched to CloudCompare to georeference our SfM data with the TLS scans. Using the tool “Align (point pairs picking)” we followed the same calculation principle as the Python script; the two differed only in their success.

For the roughness calculation see above 4.2.1 II.1
II.4 Roughness measurement with OPALS
See above 4.2.1 II.2

Results, Comparison and possible Improvements

As a first step we analysed the different point counts of the different methods (Tab. 1).

Point Count
        TLS       Kinect®    SfM
Fine    178,353   10,255     84,822,641 (vid)
Medium  39,976    127,854    19,995,362
Rough   111,951   136,691    30,322,185

The number of 3D points is directly linked to the point density. Nevertheless, the different AOI sizes resulting from the manual cutting have to be kept in mind. A ten-times-higher point density also does not guarantee a ten-times-better roughness map: especially for roughness calculation, a complete recording of the whole surface is important, too. This problem grew with the roughness of the terrain and the correspondingly increasing shadowing effect of larger grain sizes.

In addition to these general problems, several individual ones occurred for each method. They are discussed below, together with the advantages and disadvantages of each method.

TLS Data

We were able to generate properly georeferenced roughness maps from the scans obtained with the Riegl VZ-400®. This also marks the important difference from the other methods: the possibility to locate the generated scans within the global coordinate system without large extra cost is very valuable.

The deployment of TLS is most effective for medium-sized AOI of several square metres, especially because it can generate the point clouds of these areas very quickly. Roughness mapping at our scale depends on very accurate data; greater distance or bumps in the topography can therefore influence the quality of the scans, which can be seen in the smaller point count of the medium AOI. It was the most peripheral AOI and also lay in the shadow of a small bump (fig. 4.6).

Location of AOI, own screenshot
Microsoft XBOX Kinect®

Data acquisition with the Kinect® was by far the most time- and labor-intensive method considering the achieved results. Especially the need for a shading device cost effort, and the short range of the Kinect® also caused physical strain during the scanning process. This strain could be reduced by mounting the Kinect® on a stick, which would also allow a non-invasive data acquisition.

The data quality itself is not satisfactory. Because of problems with the scanning itself, caused by the limited amount of memory, we were not able to produce complete scans of the whole AOI. The overall point count and the resolution were poor, especially in comparison to the other methods. The effort of mastering the several necessary programs is out of proportion to the outcome.

Structure from Motion

The SfM data proved easy to acquire, and after considerable effort with multiple programs the point clouds were very useful for our research. The huge point count is an advantage, but the calculation times and the handling of the point clouds were troublesome, and certain programs were hardly able to cope with the cloud size.

Nevertheless, the high resolution is very useful for the detection of individual rocks or the exact location of the poles. This method definitely suits small AOI; with larger ones the editing of the point clouds could grind to a halt due to the large data sizes.

Results and Comparison

Different Methods

The roughness maps resulting from the different methods differ in quality and completeness. The Kinect® result (left) is incomplete and therefore not usable for georeferencing. The TLS result (middle) is complete and usable; the six larger white spots, representing extreme values, are caused by the poles. The best results are derived from the SfM data (right): single stones and the poles can be detected properly, and no-value areas are limited to places in the direct shadow of rocks.

Roughness maps, AOI Rough. Left: Kinect®; middle: TLS; right: SfM (own screenshots)
Different Methods for Different AOI

The methods delivered different qualities in terms of completeness and accuracy for the different grain sizes. Those variations can partly be explained by unsteady data acquisition parameters (manual scanning with the Kinect®, light intensity etc.).

The Kinect® performed best at the rougher AOI; smaller grain sizes made it harder to obtain reliable results. At first sight the TLS scans generated the best results for AOI Fine, but the remarkable difference in point count compared to AOI Rough and AOI Medium can only be explained by potential shadowing effects (Tab. 2). As fig. 4.6 visualizes, AOI Fine is located directly between the two most relevant scan positions; this factor should not be underestimated. The SfM data worked very well for AOI Fine; the others also bore good results, but no-value areas due to the shadowing effect could not be prevented. Using videos instead of single pictures as the data source reduces this effect further.

Different Cell Sizes

The final roughness maps are directly linked to the cell size chosen for the calculations. A smaller cell size offers more detailed roughness information but at the same time increases the risk of holes within the grid structure caused by too few points per cell. The trade-off between maximum completeness and highest resolution has to be found for every scan.

The Kinect® scans were best processed with a cell size of 0.01 m (rough & medium) and 0.02 m (fine). The TLS scans bore the best results with cell sizes of 0.02 m (rough & medium) and 0.01 m (fine). The SfM data could be treated with the smallest cell sizes: 0.015 m (rough), 0.01 m (medium) and 0.01 m (fine).

Comparison of final values

The final roughness values are listed in Tab. 4.2. Several regularities can be detected within the results. The mean and maximum roughness values normally rise with an increasing number of points or decreasing cell size, because more irregularities of the surface can be detected. However, if the cell size becomes too small, the increasing number of no-data cells can bias the result in the form of missing cell values and therefore missing roughness values. Additionally, it has to be considered that the Kinect® scans did not include all six poles, so the mean values of the Kinect® scans tend to be lower than the others. The poles were needed to work with the AOI; without them, orientation within the point cloud would not have been possible. The highest mean roughness was calculated from the SfM data. The video-based data exceeds the picture-based data amount by far. The comparison between the cell sizes of 0.015 m and 0.02 m demonstrates that more data and smaller cell sizes do not necessarily generate better results.

Roughness values derived from OPALS Histo

              Cell size [m]   Mean    Max     Median   RMS     Skewness   Data used
Kinect®
Fine          0.02            0.004   0.05    0.002    0.006   2.869          6,012
Medium        0.01            0.002   0.061   0.001    0.003   7.812         23,997
Rough         0.01            0.004   0.127   0.002    0.008   5.94          18,914
TLS
Fine          0.01            0.004   0.073   0.003    0.007   5.086         22,783
Medium        0.02            0.005   0.048   0.004    0.007   2.957          4,988
Rough         0.02            0.009   0.125   0.007    0.013   5.384          7,044
SfM
Fine (vid)    0.01            0.062   1.703   0.049    0.085   7.676      7,125,412
Medium (pic)  0.01            0.012   0.35    0.008    0.019   8.690        597,013
Rough (pic)   0.015           0.058   2.837   0.026    0.123   7.796      4,304,182
Rough (pic)   0.02            0.061   2.846   0.029    0.127   8.116      2,786,599

Acknowledgements

This research was supported by the generous help, knowledge and moral support of Martin Hämmerle, and by the preparatory Kinect® training delivered by Johannes Fuchs.