
Multimedia Presentation of Project Results

Introduction

The multimedia documentation of our project consists of two parts: a web site and a video. Both are meant to present the motivation, methods and results of the project in such a way that they can be understood by people with no previous knowledge of laser scanning, or of physical geography in general. These requirements were defined by the course leader at the beginning of the project, and all of our subsequent concept and production work is based on them.

The Documentary Video

Concept

We decided to create a video that consists of two distinct parts. The first part introduces the study area, the laser scanning technology and the most important aspects of the data acquisition strategy. This information is presented in the form of a story-like text, read by a narrator, while a sequence of scenes from the laser scanning work is shown. The second part consists of a number of short interviews with one member of each group, which give the audience a quick summary of each team's goals.

Production

On the day of the laser scanning campaign, we captured about one hour of footage of the scanning work at the gravel bar, using the video mode of a Canon PowerShot G12 digital photo camera and a simple tripod. The G12 records video at 24 frames per second with a maximum resolution of 1280x720 ("720p"). We filmed a couple of scenes from every part of the scan procedure: calibration, scanner set-up at multiple positions on the gravel bar, reflector placement, and the actual scanning. Additionally, we performed and recorded short interviews with one member of each sub-team of the seminar class.

The G12 is designed for taking still images and has very limited video functionality. With the official firmware, it is not even possible to zoom during filming. However, this posed no problem, since we limited any zooming, panning or moving of the camera to an absolute minimum to avoid shaky recordings. With the tripod and no camera movement, we were able to capture very clear scenes with good image quality.

It is not possible to connect an external microphone to the PowerShot G12, so we were forced to use the camera's built-in microphone for audio recording. Since this microphone is not optimized for voice recording, it captured a lot of background noise: wind, cars on the nearby road, and so on. This rendered the audio track of our recordings essentially unusable. For the final film, we muted the original audio channel and replaced it with new voice recordings produced in a quieter environment. To achieve professional-sounding voice audio, we commissioned semi-professional voice actors, whom we contacted via www.hoer-talk.de, an online community of people interested and experienced in sound recording and narration.

The composition and arrangement of the individual clips into the final film was done with Adobe Premiere Pro, a professional video cutting and editing application. We matched video and audio clips as closely as possible and overlaid the narration on our recorded field audio, lowering the latter's volume to near zero so that it remains faintly audible but does not interfere with the narrator's voice. We treated the statements of the interviewed persons the same way, except that they are first heard at their original volume and then faded down; muting them entirely and simply replacing their voices with the voice actor's would have seemed unnatural. Such volume envelopes are easily created in Premiere Pro with keyframes. For the intro and outro of the video, GEMA-free music was chosen from www.klangarchiv.com.

Additionally, we used an animation of a spinning Earth for the intro, which was made with Adobe After Effects. A high-resolution image composite of the whole Earth (NASA Blue Marble) serves as input; After Effects maps the image onto a sphere, making it look like Earth. The lens flare effects are added by an After Effects plug-in, and the rising sun and the rotation of the Earth are simple keyframe animations. For the Earth zoom shown after the intro, we used NASA World Wind, an open-source equivalent to Google Earth: we simply recorded the zoom-in with a desktop video capture program and added the clip in Premiere Pro.

Scene from the documentary video (own screenshot)

The Web Site

Concept

The purpose of the web site is to present the documentary video, the project report and the point cloud data (via an interactive 3D point cloud viewer) to the public. We decided to implement a simple two-column layout with text on the left and multimedia elements like images and videos on the right side of the page. A fundamental feature of this concept - which we dubbed the "media panel" - is that the multimedia elements are loaded and displayed on demand (i.e. when the user clicks a link or button) and always stay in a fixed position in the viewport, so that they do not move out of view when the page is scrolled. This allows the user to scroll through and read arbitrarily long sections of text while keeping the accompanying multimedia elements on the screen the whole time.
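The on-demand loading boils down to a few lines of JavaScript. The following is a minimal sketch of the idea; the element id and function name are our illustration here, not the site's actual code:

    // Replace the media panel's content with the requested element
    // and make sure the panel is visible.
    function showInMediaPanel(contentHtml) {
        var panel = document.getElementById("mediapanel"); // hypothetical id
        panel.innerHTML = contentHtml;
        panel.style.display = "block";
    }

    // Example call, e.g. from a link's click handler:
    // showInMediaPanel('<img src="gravelbar.jpg" alt="Gravel bar">');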

Another fixed-position element is the menu bar at the top of the viewport. It contains links to all sections/chapters of the web site, and it also features a submenu with entries to open Google Maps, the 3D point cloud viewer and some other multimedia elements in the media panel. The widths of the text and media panel columns are defined as percentages of the viewport width, so that they change dynamically with the browser window size. This way, the web site makes optimal use of the available screen space and remains attractive and usable across a wide range of browser window sizes and display resolutions, from Full HD all the way down to 800x600 (or even lower, though with unpredictable layout effects and limitations).
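In CSS, this combination of fixed-position elements and percentage-based column widths requires only a few rules. A minimal sketch with hypothetical ids and example values (the actual style sheet is more elaborate):

    /* Menu bar pinned to the top of the viewport */
    #menubar {
        position: fixed;
        top: 0;
        left: 0;
        width: 100%;
    }

    /* Scrollable text column, sized relative to the window */
    #textcolumn {
        width: 55%;
        margin-top: 3em;   /* keep the text below the menu bar */
    }

    /* Media panel pinned to the right, unaffected by scrolling */
    #mediapanel {
        position: fixed;
        top: 3em;
        right: 0;
        width: 40%;
    }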

Production

The web site is implemented using modern web standards. The documents are written in HTML5, with a little server-side preprocessing in PHP. Videos are embedded using the HTML5 <video> element, eliminating the need for additional browser plug-ins like Adobe Flash. Layout and visuals are defined in CSS3, including the use of Google Web Fonts to provide fresh, modern-looking typography.
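Embedding a video this way takes a single element; the file names below are placeholders. Offering two encodings lets each browser pick a format it can decode:

    <video controls width="100%">
        <source src="documentary.webm" type="video/webm">
        <source src="documentary.mp4" type="video/mp4">
        <!-- Shown only by browsers without HTML5 video support -->
        Your browser does not support the HTML5 video element.
    </video>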

Interactivity for viewing the multimedia elements is added via JavaScript - most notably the Google Maps plug-in and the WebGL point cloud viewer. The latter is based on Markus Schütz's PoTree project (source: http://www.potree.org, accessed 2013/10/11). For better integration into the web site, some minor modifications were made to the PoTree code. The original version reacts to mouse events (clicking and moving) in the entire document/viewport, so that all mouse interactions with the document (e.g. selecting text by clicking and dragging) are interpreted as commands for PoTree - usually to move the camera. Since this is irritating, we limited the mouse event-sensitive area to the PoTree canvas, which results in a more logical and expected behaviour. We also implemented functions to get and set the 3D camera's transformation matrix; these are used to provide links that point the camera at interesting features. Finally, some configuration variables like background color and point drawing size were modified in order to optimize the visual appearance of the point cloud.
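The gist of the two modifications, sketched in JavaScript. PoTree is built on three.js, so the camera part assumes a three.js camera object; names like "potreeCanvas" are our placeholders, and the actual PoTree code differs in detail:

    // 1. Register the mouse handlers on the viewer canvas only, instead
    //    of on the whole document, so that clicking and dragging
    //    elsewhere (e.g. to select text) no longer moves the camera.
    var canvas = document.getElementById("potreeCanvas");
    canvas.addEventListener("mousedown", onMouseDown, false);
    canvas.addEventListener("mousemove", onMouseMove, false);
    canvas.addEventListener("mouseup", onMouseUp, false);

    // 2. Expose the camera pose so that ordinary links on the page can
    //    jump to predefined viewpoints.
    function getCameraMatrix() {
        return camera.matrix.clone();
    }

    function setCameraMatrix(matrix) {
        camera.matrix.copy(matrix);
        // Keep position, rotation and scale in sync with the new matrix.
        camera.matrix.decompose(camera.position, camera.quaternion, camera.scale);
    }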

In order to load a point cloud into PoTree, it needs to be converted into PoTree's own data file structure. The original laser scanning data acquired during the project is stored in the RiSCAN PRO project format. This data was converted to the PoTree format (a detailed description of the required steps can be found in appendix 7.4.2). After conversion, the generated files need to be copied to PoTree's resources folder, and a few lines must be added to configuration files to make PoTree load and display the point cloud.

The web site with point cloud viewer (own screenshot)

Conclusions

We showed that it is possible to produce a good-looking documentary video with a Canon PowerShot G12 digital photo camera and a low-cost tripod, as long as the mentioned limitations regarding audio recording and camera movement are handled correctly. In order to obtain usable audio recordings in the field, it is necessary to find a solution for reducing environmental noise. Adobe Premiere Pro was successfully used to create the final arrangement of video sequences, music and voice recordings.

We also showed that HTML5 and related modern web standards like WebGL can be used to produce a user-friendly web site that provides rich interactive functionality to present the project's results in an informative and engaging way. The PoTree WebGL point cloud viewer proved to be a viable solution for the interactive presentation of 3D point cloud data on the web without the need for additional browser plug-ins like Java or Adobe Flash.

Appendix

Video Speakers Text

Narrator:

"The lower Neckar river near Ilvesheim, a small town in the southern German state of Baden-Württemberg. The settlement is enclosed by an oxbow which is no longer used by ships, and the water is allowed to shape its bed mostly free of anthropogenic influences. This resulted in the formation of a gravel bar.

Since 2011, the gravel bar has been investigated by geography students from nearby Heidelberg University. Every summer, they perform a topographical survey in order to learn about the erosion and deposition processes that continuously change the gravel bar's shape and volume. Their most important tool: an expensive piece of 21st century technology called a laser scanner. In modern geography, laser scanners are widely used to produce highly detailed 3D models of practically anything, from the size of a small boulder up to entire landscapes. As the name suggests, laser scanners work by sending out pulses of infrared laser light. Whenever the light hits a surface, a portion is reflected back to the scanner, and the scanner measures the time it took for the light to travel to the obstacle and back.

Knowing the speed of light, it can then compute the distance to the reflective object with very high precision. By repeating this process for all directions in many very narrow angular steps, the laser scanner can measure the exact shapes and positions of all surrounding objects. The result is a so-called point cloud - a dataset which contains millions of 3D coordinates that represent the shapes of the detected surfaces.

The laser scanner detects not only the geometry of the surrounding objects but also their colors, using a digital camera that is mounted on its top.

In order to acquire a complete model of the region of interest - without missing parts and in good resolution everywhere - it is not sufficient to perform only a single scan. Just like the human eye, the scanner's optical sensor cannot see through objects. To capture a part of the scenery, it needs to be in line of sight from the scanner's position. Each obstacle in the field of view will produce a so-called "scan shadow" in the resulting data set - an area void of any recorded points.

To fill the scan shadows, multiple scans from different positions are made, so that each part of the scenery is visible from at least one position.

The data acquired during each scan is transferred to a laptop computer connected to the scanner. The computer is used both to control the scanner and to perform post-processing and analysis of the acquired data. This includes the merging of data from different scanning positions to produce a shadowless point cloud of the entire region. The computer can do this mostly automatically, but it needs some hints. These are provided in the form of special reflectors which are placed in the surrounding area prior to scanning.

When a laser pulse hits a reflector, an especially large portion of the light is returned to the scanner and a very strong reflection is recorded for the reflector's location. The scanner's software is able to recognize the same reflector in different overlapping pieces of the data set and uses this information to put the pieces together. The registered dataset is used as the basis to answer a number of scientific questions."

DTM Group Member:

"The fist step is to take the point cloud and derive a digital elevation model. To verify the correctness of our model, we use alternative techniques to take additional measurements at a number of reference points, spread over the entire gravel bar and especially here at the river bank.

The elevation model is derived from the point cloud by applying an erosion filter algorithm. It removes things like these shrubs and other vegetation, and we get a shape that represents the ground surface."

Roughness Group Member:

"We test and compare different methods to measure the roughness of the ground surface, which is required for hydraulic modeling. Our first method to acquire surface roughness values is to extract them from the laser scan point cloud."

"The second method is called photogrammetry or structure-from-motion, where a computer program reconstructs the 3d shapes of objects using multiple photos from different perspectives. Additionally, we experiment with another 3d scanning technology in this little tent behind us. Inside, we try to capture the geometry of a small patch of ground using a Microsoft Kinect stereo camera sensor. The tent is required because the Kinect sensor doesn't work correctly when exposed to direct sunlight.

Finally, we will also estimate the surface roughness in a very traditional way, that is, by manually measuring the size of a few randomly sampled pebbles."

Multitemporal Analysis Group Member:

"We are conducting a multitemporal analysis with laser scan data from 2011 and 2012. We are going to use the 2013 scan as well. In addition we are measuring several of these metal rods by hand. They were driven in to the level of the gravel bar last year and this one is now protruding by 56 cm."

Hydraulic Modeling Group Member:

"Our group will use the digital terrain model to perform a flood simulation. There was a flood on the 3rd of June 2013 upto the level of the path on the other river bank. We are going to compare our model with different digital terrain models that vary in resolution."

Conversion of Point Cloud Data from RiSCAN PRO to PoTree Format

  1. Use RiSCAN PRO to export the point cloud data of each separate scan to the "xyzrgb" ASCII format (right-click the scan item in the project tree and select "Export"). Since we have six separate scans, the result is six text files ("scan1.txt", ... "scan6.txt").
  2. Concatenate these files into one single file. We used this Linux command:

         cat scan*.txt > all.txt

  3. Feed this file to the PoTree converter. The converter is a Java program and is called as follows:

         java -Xms2g -Xmx2g -jar PotreeConverter.jar all.txt output

     where "output" is the name of the folder where the generated files are written.

"-Xms2g -Xmx2g" are optional arguments for the Java virtual machine that increase the available working memory for Java to 2 gigabytes. Without these options, the converter runs out of memory during the conversion process and crashes. The amount of required memory for successful conversion depends on the number of points to convert, and our point cloud turned out to be too large to be converted successfully with Java's default memory settings.