Rapid Response to Natural Disasters
Motivation:
Experience at KeckCAVES has shown that immersive 3D visualization is a powerful tool for scientific research. There is a great opportunity in applying the software and methods developed, and the lessons learned, to time-sensitive problems directly affecting society at large. Specifically, this sub-goal of the overall CI-TEAM project aims to apply immersive visualization software to the timely response to natural disasters such as earthquakes, landslides, forest fires, and dam failures.
The common element of most of these types of disasters is that they affect people on the surface of Earth, as opposed to the crust or mantle below it or the atmosphere above it. As a result, the data most often encountered in, and most useful to, disaster response are 2.5D datasets, for example high-resolution scans of an environment undergoing rapid, disastrous change such as uplift or shaking due to an earthquake.
Major activities:
Development in this project area focused on improving immersive visualization software for the types of data commonly encountered in disaster response, namely 3D geometries on or closely above Earth’s surface, and on training users specifically for rapid-response scenarios. Students and post-docs supported by this project traveled to areas impacted by natural disasters, such as Baja California after the 2010 El Mayor-Cucapah earthquake and Napa Valley after the 2014 South Napa earthquake, and collected large amounts of 3D data using high-resolution terrestrial laser scanners and a newer technique called “structure from motion,” in which overlapping digital photographs of a scene are used to derive a 3D scan (illustrated in the sketch below). The students then created detailed maps of displacement on and away from the involved fault lines using KeckCAVES software, including Crusta and LiDAR Viewer.
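The field workflow relied on dedicated photogrammetry software rather than custom code; the following Python sketch, using OpenCV, only illustrates the core two-view principle behind structure from motion (feature matching, relative pose recovery, triangulation). The image filenames and camera focal length are assumptions for illustration, not the project's actual data or pipeline.

import cv2
import numpy as np

# Load two overlapping photographs of the same scene (hypothetical filenames).
img1 = cv2.imread("site_photo_001.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("site_photo_002.jpg", cv2.IMREAD_GRAYSCALE)

# Detect and describe local features in both images.
orb = cv2.ORB_create(nfeatures=5000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match features between the two views.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Approximate camera intrinsics (assumed; a real workflow calibrates the camera).
f, cx, cy = 3000.0, img1.shape[1] / 2, img1.shape[0] / 2
K = np.array([[f, 0, cx], [0, f, cy], [0, 0, 1]])

# Estimate the relative camera pose from the essential matrix.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# Triangulate the matched points into a sparse 3D point cloud.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
points3d = (pts4d[:3] / pts4d[3]).T
print(f"Reconstructed {len(points3d)} sparse 3D points from two views")

A full structure-from-motion pipeline repeats this matching and triangulation across many overlapping photographs and refines all camera poses and points jointly (bundle adjustment) before densifying the reconstruction.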
To support this analysis work, a new point-based computing framework was developed on top of LiDAR Viewer’s hierarchical, multi-resolution, out-of-core point cloud storage format, enabling users who are not computer scientists to develop their own filtering and statistical analysis tools in the popular Python programming language (an illustrative example follows below). The virtual globe application Crusta was extended with a module to interactively deform high-resolution 3D topography datasets, and to undo deformation caused by earthquakes, based on models of 3D sub-surface fault geometry.
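The actual programming interface of the point-based computing framework is not reproduced here. The following minimal sketch, with entirely hypothetical function and variable names, only illustrates the style of analysis such a framework enables: a user expresses a filter and a running statistic in a few lines of Python while point blocks are streamed one at a time, so the full out-of-core dataset never has to fit in memory.

import numpy as np

def classify_and_summarize(chunks):
    """Keep points above a height threshold and track their mean elevation.

    `chunks` is assumed to be an iterable of (N, 3) NumPy arrays of x, y, z
    coordinates, yielded one multi-resolution block at a time by the storage
    layer, so only one block is ever resident in memory.
    """
    total, count = 0.0, 0
    kept_blocks = []
    for xyz in chunks:
        mask = xyz[:, 2] > 2.0      # simple elevation filter (assumed units: meters)
        kept = xyz[mask]
        kept_blocks.append(kept)
        total += kept[:, 2].sum()
        count += len(kept)
    mean_z = total / count if count else float("nan")
    return kept_blocks, mean_z

# Usage with synthetic data standing in for streamed point-cloud blocks:
rng = np.random.default_rng(0)
fake_chunks = [rng.normal(size=(1000, 3)) * [10.0, 10.0, 3.0] for _ in range(4)]
blocks, mean_elevation = classify_and_summarize(fake_chunks)
print(f"kept {sum(len(b) for b in blocks)} points, mean elevation {mean_elevation:.2f} m")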
Specific Objectives:
The objectives were to engage in VR-based rapid scientific response (RSR) efforts, to establish the cyberinfrastructure needed for RSR through the development of new software tools, and to train students in the workflow of RSR using CI tools.
Significant Results:
The software development performed in this project area led to new releases of the KeckCAVES software packages LiDAR Viewer, including its new point-based computing programming interface, and Crusta. The team also developed new structure-from-motion methods for deriving 3D structure from overlapping photographs.
The primary case study was the magnitude 6.0 2014 South Napa earthquake, which occurred only an hour’s drive from Davis and exhibited surface rupture, making it a prime target for the CI-TEAM group’s RSR tools. A response team was assembled that sent students into the field for data collection within hours of the earthquake and continued working for several months, resulting in collected datasets, new technique development, and student-led, peer-reviewed publications using KeckCAVES technology. The work was done both in the open, with observations posted on Twitter immediately after the event, and using enterprise collaboration tools such as Evernote and Slack to organize the workflow. The team also developed a KMZ notebook with site-specific photos for use in Google Earth (a sketch of the general approach follows below). A student-led publication is in preparation to provide a manual for using structure-from-motion techniques for RSR.
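The KMZ notebook itself is not reproduced here; the sketch below only illustrates the general mechanism, assuming hypothetical site names, coordinates, and photo filenames. A KMZ file is simply a ZIP archive whose root contains a doc.kml document, optionally alongside referenced resources such as photos, so a notebook like this can be assembled with the Python standard library alone.

import os
import zipfile

# Illustrative field sites; names, coordinates, and photo paths are placeholders.
sites = [
    {"name": "Site 1", "lon": -122.31, "lat": 38.29, "photo": "site1.jpg"},
    {"name": "Site 2", "lon": -122.30, "lat": 38.28, "photo": "site2.jpg"},
]

# Build one KML Placemark per site, embedding the site photo in its description.
placemarks = []
for s in sites:
    placemarks.append(f"""
    <Placemark>
      <name>{s['name']}</name>
      <description><![CDATA[<img src="files/{s['photo']}" width="400"/>]]></description>
      <Point><coordinates>{s['lon']},{s['lat']},0</coordinates></Point>
    </Placemark>""")

kml = f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Document>
    <name>Field notebook</name>{''.join(placemarks)}
  </Document>
</kml>"""

# A KMZ is a ZIP archive with doc.kml at its root plus any referenced files.
with zipfile.ZipFile("field_notebook.kmz", "w", zipfile.ZIP_DEFLATED) as kmz:
    kmz.writestr("doc.kml", kml)
    for s in sites:
        if os.path.exists(s["photo"]):  # photos are placeholders in this sketch
            kmz.write(s["photo"], arcname=f"files/{s['photo']}")

Opening the resulting field_notebook.kmz in Google Earth displays one clickable placemark per site, with the associated photo shown in its pop-up balloon.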
Key outcomes or Other achievements:
In addition to the RSR event response, the team developed tools for use in the geology classroom: see Immersive Visualization in the Classroom for more information.